Quick, do this math problem: 1,010 x 15,580. Most of us would run for a calculator—or at least scrap paper and a pencil. Would you believe one answer is 6? In 15,580 different situations, the bacterium E. coli and its 1,010 genes rely on just 6 basic ways to survive. Why is this important? Although human metabolism is more complicated than a microbe's, the two share many basic elements, says bioengineer Bernhard Palsson of the University of California, San Diego. Developing a model to understand the metabolic behavior of bacteria could help scientists predict the behavior of other complex biological systems, like our bodies. To get started, Palsson and his team created a list of "lab broths" that could meet the basic needs of E. coli. The scientists wrote mathematical formulas to describe how E. coli metabolizes the "soup" nutrients in a variety of environmental conditions. Then, they pieced together 50 years of data on E. coli metabolic reactions. Computers did a lot of the work! In addition to discovering only six basic behaviors, the researchers learned that E. coli makes its metabolic "decisions" based upon just a few key ingredients. The findings suggest that despite having thousands of working parts, complex systems like E. coli metabolism can still be understood—and that asking questions of very complex systems can result in a "simple answer." Palsson and others plan to use the same modeling approach to study healthy and diseased human cells.
At T-0, the shuttle's twin solid rocket boosters ignite, and the spacecraft lifts off. About 10 minutes later, the astronauts are in a low orbit, traveling at 25 times the speed of sound and lapping the planet once every 90 minutes. The astronauts' destination -- the space station -- is circling some 240 miles (386 km) above Earth, but the shuttle is behind the station and must catch up. To close the distance, the astronauts conduct periodic firings of the shuttle's on-board thrusters. The astronauts reach the space station three days after launching from KSC. The final approach, from behind and below the station, is slow and methodical. Constant air-to-ground communications are maintained as the shuttle makes a giant loop around to the top of the station and slowly attaches to the docking port. Once the shuttle and station are fully mated, the two crews can mingle. There's a brief celebration, but the astronauts have a very full schedule and get to work almost immediately. Because of zero gravity (or microgravity, to be more precise), working in space is quite different from working on Earth. Astronauts must get used to being weightless, which causes bone and muscle deterioration and requires that everything loose -- including sleeping astronauts -- be tied down. Eating, drinking and using the bathroom are especially challenging activities for astronauts in orbit. Over the years NASA has designed ingenious solutions that make living in space as comfortable as possible. While in orbit, astronauts spend most of their time in the relatively safe confines of the shuttle or space station. Many missions, however, require a spacewalk, perhaps to deploy a satellite or make repairs. During a spacewalk, an astronaut must wear a space suit -- what NASA calls an extravehicular mobility unit (EMU) -- to protect and sustain him or her in the vacuum of outer space. Each EMU has a hard upper torso, a lower torso assembly and legs.
A portable life support system, or PLSS, integrates fully with the suit and is worn like a backpack. The weight of the EMU-PLSS assembly is considerable. The suit itself weighs about 110 pounds (50 kg), the PLSS about 310 pounds (141 kg). For this reason, NASA designed EMUs for work in weightless conditions only, where the weight of the suit itself is unimportant. The Apollo suit, by comparison, was much different. Including the life support backpack, the Apollo suit weighed about 180 pounds (82 kg). Most shuttle missions last two to three weeks. Usually, one shuttle astronaut will trade places with one of the astronauts on the space station at the end of the mission. Those returning to Earth board the shuttle and prepare for departure. Before undocking, the shuttle commander will generally bid farewell to the station commander. Then the shuttle will spring loose from the docking port and back gently away from the station. A final lap allows shuttle crew members to snap pictures of the station. Then it's back to Earth. Re-entry can be quite dangerous. On the next page, you'll read about the challenges astronauts face as they return home.
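The 90-minute orbit quoted at the start of this passage follows directly from Kepler's third law. A quick sketch (the roughly 240-mile station altitude is taken from the article; the gravitational constants are standard values):

```python
import math

# Circular-orbit period from Kepler's third law: T = 2*pi*sqrt(a^3 / mu)
MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000         # mean Earth radius, m

def orbital_period_minutes(altitude_m):
    a = R_EARTH + altitude_m    # semi-major axis of a circular orbit
    return 2 * math.pi * math.sqrt(a**3 / MU_EARTH) / 60

# A station altitude of about 240 miles (~386 km) gives roughly a
# 92-minute orbit, consistent with "lapping the planet once every 90 minutes"
print(round(orbital_period_minutes(386_000), 1))
```

The same function also shows why the shuttle, trailing the station in a slightly lower orbit, catches up: a lower altitude means a shorter period.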
This is Part Four of a six-part series telling the story of humankind's efforts to understand the origins of life and the potential for life on other worlds by studying organisms that survive deep below Earth's oceans around hydrothermal vents. When the Voyager and Galileo spacecraft visited Jupiter's moons Io and Europa, scientists were faced with the exciting possibility that these strange worlds might host exotic forms of life. Scientists observed that Europa was rich with water in its icy outer layer. If there was tidal heating on Europa, could it be enough to melt the bottom of the moon's ice crust, perhaps enough to sustain a liquid water ocean? Europa does experience strong tidal forces. The moon orbits close to Jupiter, and its orbit is ever-so-slightly non-circular, a tiny eccentricity, but one with enormous implications. Because Europa's distance from Jupiter varies slightly over each orbit, the giant planet's gravity alternately stretches and relaxes the 2,000-mile-wide moon by more than 100 feet, and this continual flexing heats the interior. These huge tidal forces are the crucial factor suggesting that Europa's ocean is able to remain in its liquid water phase instead of freezing completely into the layer of ice above it. According to planetary scientists, everything interesting about Europa follows from this subtle eccentricity. If a liquid ocean exists on Europa, could the seafloor be similar to our own oceans here on Earth? In the absence of sunlight, could hydrothermal vents provide the necessary energy to support ecosystems on the icy moon? Last Updated: 26 February 2013
Electric field strength is a quantitative expression of the intensity of an electric field at a particular location. The standard unit is the volt per meter (V/m or V·m⁻¹). A field strength of 1 V/m represents a potential difference of one volt between points separated by one meter. Any electrically charged object produces an electric field. This field has an effect on other charged objects in the vicinity. The field strength at a particular distance from an object is directly proportional to the electric charge, in coulombs, on that object. For an isolated point charge, the field strength diminishes with distance according to the inverse-square law: doubling the distance from the charge reduces the field strength to one quarter of its previous value. An alternative expression for the intensity of an electric field is electric flux density. This refers to the number of lines of electric flux passing orthogonally (at right angles) through a given surface area, usually one meter squared (1 m²). Electric flux density, like electric field strength, is directly proportional to the charge on the object, and it likewise diminishes with distance according to the inverse-square law; in a given medium the two quantities are proportional to each other. Sometimes the strength of an electromagnetic field (EM field) is specified in terms of the intensity of its electric-field component. This is done by engineers and scientists when talking about the radio-frequency field strength at a certain location arising from sources such as distant transmitters, celestial objects, high-tension utility lines, computer displays, or microwave ovens.
In this context, electric field strength is usually specified in microvolts per meter (µV/m or µV·m⁻¹), nanovolts per meter (nV/m or nV·m⁻¹), or picovolts per meter (pV/m or pV·m⁻¹). The relationship among these units is shown in the table.

| Unit | To convert to V/m, multiply by | To convert from V/m, multiply by |
| µV/m | 10⁻⁶ | 10⁶ |
| nV/m | 10⁻⁹ | 10⁹ |
| pV/m | 10⁻¹² | 10¹² |
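The table's conversions can be applied mechanically; here is a minimal sketch in Python, where the dictionary of factors simply restates the table:

```python
import math

# Conversion factors to volts per meter, restating the table above
TO_V_PER_M = {"V/m": 1.0, "µV/m": 1e-6, "nV/m": 1e-9, "pV/m": 1e-12}

def to_v_per_m(value, unit):
    """Convert a field-strength reading in the given unit to V/m."""
    return value * TO_V_PER_M[unit]

def from_v_per_m(value, unit):
    """Convert a field strength in V/m to the given unit."""
    return value / TO_V_PER_M[unit]

# 3 µV/m is 3 x 10^-6 V/m; the same field expressed in nV/m is 3000 nV/m
assert math.isclose(to_v_per_m(3, "µV/m"), 3e-6)
assert math.isclose(from_v_per_m(3e-6, "nV/m"), 3000.0)
```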
Who among us hasn’t spent some time gazing at the clouds? Perhaps we have lain in a grassy field or lawn and looked for shapes in the puffy white blobs that floated lazily across the blue expanse above. Or we watched the sky catch fire at the setting (or rising) of the day. For some, maybe the only relevance of clouds is whether they will produce rain (or hail, or snow, or a tornado). Regardless of the specific nature of our relationships with clouds, we have them. For me, I am most fascinated by the shapes and colors clouds can assume. The absolute best cloud formation I’ve seen was here in the Adirondacks. I was driving back from Ray Brook and there in the sky was a herd of banthas* – must’ve been a hundred of them. Each cloud was the same shape, and as they slowly changed, they changed in unison. It was pretty amazing. Clouds, at least here on Earth, are made from condensed water vapor.** It doesn’t sound very exciting, does it? Warm air absorbs water vapor (this is why winter air is dry), and warm air rises. As the warm, moist air rises, it cools. As it cools, the water condenses into droplets, or ice crystals. If enough of these droplets are close enough together, they form a visible mass we call a cloud. Why are clouds white? And why are they not always white? This has to do with how light bounces on, around, and off water particles. Take your average cloud – it’s large, it’s deep, and it is highly reflective of all wavelengths of light within the visible spectrum. In other words, it reflects all light we can see, and thus it looks white (the color white is made up of all the colors). As the sunlight penetrates further into the cloud, it is scattered more and more, leaving less to be reflected. This is why the bottoms of clouds are often darker, even grey. Think rain clouds. These are very dense – lots of condensed water vapor. We’ve all seen clouds that are red, orange and pink – glorious shades that show up when the sun is low on the horizon.
These colors, however, are not IN the clouds. These colors appear as reflections from the sun. A great explanation I found for this is that it is the same as if you shone a red flashlight onto a sheet – the sheet reflects the red light, it doesn’t turn red itself. But some clouds look bluish, or greenish, or even yellowish. These colors are structural effects. For example, the blueish-grey clouds are caused by light scattering within the cloud. Blues and greens are short wavelength colors and are very easily scattered by the water droplets (reds and oranges are long wavelengths, and they are reflected, see paragraph above). If you see a green cloud, it is that color because the sunlight is being scattered by ice instead of water droplets. This can be a clue to weather prognosticators as to what kind of weather we can expect (hail, snow, tornadoes). Yellow clouds are apparently quite rare, and their color tends to come from pollutants in the atmosphere, like smoke. Then there are iridescent clouds. These are very uncommon. Iridescent clouds usually sport pastel colors, looking much like mother-of-pearl. Sometimes, however, their colors can be quite intense. Iridescent clouds are formed when the light shines through thin clouds (often the edges of clouds) made from nearly uniform droplets. Each ray of light strikes one droplet and all the droplets participate in cumulative diffraction, the end result of which is a cloud that shimmers with all the visible colors.*** I’ve only seen this once, and that was because I was wearing polarized sunglasses at the time – dark glasses can help make these events visible. It was amazing. Cloud gazing isn’t something that should be left to children or the idle. Everyone should take the time to watch the clouds. Not only can it be a relaxing activity (can an activity be relaxing?), but it can also be informative.
Just think, our ancestors knew their clouds and had a weather sense that most of us have lost today, traded in for the ease of technology. Sometimes I think our ancestors had the better plan. * For those who don’t get this reference, banthas are the creatures from “Star Wars” that the Sand People and Tusken Raiders rode. They are imaginary, obviously, but even so, that’s exactly what the clouds looked like. ** Clouds can form on any moon or planet that has an atmosphere, but this doesn’t mean they are made from water vapor. Venus’s clouds are made of sulfuric acid. On Mars, they are made of ice. If you go to Jupiter and Saturn, be prepared for ammonia clouds, and if you travel to Uranus or Neptune, you’ll find the clouds are made from methane gas. Even outer space has clouds made of gas and dust – these are often called nebulae.
Full name: light day
Plural form: light days
Alternate spelling: lightday
Category type: length
Scale factor: 25902068371200

The SI base unit for length is the metre. 1 metre is equal to 3.86069554627E-14 light day.

A light day (also written light-day) is a unit of length. It is defined as the distance light travels in an absolute vacuum in one day (of 86,400 seconds), or 25,902,068,371,200 metres (~26 Tm). Note that this value is exact, since the metre is itself defined in terms of the speed of light. The light day is not frequently used, since few astronomical objects or distances are of that magnitude; the Oort cloud, for example, is thought to extend between 290 and 580 light-days out from the Sun.
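Because the metre is defined by fixing the speed of light, the scale factor above can be reproduced exactly; a short check:

```python
SPEED_OF_LIGHT = 299_792_458   # m/s, exact by definition of the metre
SECONDS_PER_DAY = 86_400

METRES_PER_LIGHT_DAY = SPEED_OF_LIGHT * SECONDS_PER_DAY
assert METRES_PER_LIGHT_DAY == 25_902_068_371_200  # the scale factor above

def metres_to_light_days(metres):
    return metres / METRES_PER_LIGHT_DAY

# One metre is about 3.86e-14 light days, as stated above
print(metres_to_light_days(1))
```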
Titan is remarkable because it is the only known moon in the solar system that has a substantial atmosphere—largely nitrogen with a minor amount of methane and a rich variety of other hydrocarbons. Its surface is completely hidden from view (except at infrared and radio wavelengths) by a dense, hazy atmosphere. The diameter of Titan is 3,200 mi (5,150 km), and it is the second-largest satellite in the solar system after Jupiter's Ganymede. Titan is larger than the planet Mercury. Titan's surface temperature is about –280°F (–173°C), and its surface pressure is about 50% greater than the surface pressure of Earth. In 1990, radio telescope data showed that Titan reflects and scatters radio waves, suggesting that the satellite has a solid surface, possibly with small hydrocarbon lakes or ponds. Infrared images of Titan taken in late 1999 by the W. M. Keck II telescope in Hawaii also revealed features that could be frozen land masses separated by frigid hydrocarbon seas and lakes. Other features might be highlands, and one dark area appeared to be a large impact crater or basin. Information Please® Database, © 2007 Pearson Education, Inc. All rights reserved.
Astronomy Packet: 8
Assignment # 5
Final Project: Choose one of the following

1. Through a pair of binoculars, observe the moons of Jupiter 15 nights in a row. Make precise drawings each night of the locations of each of the moons.
2. Choose an astronomy topic which we covered in this class. Study this topic in much more detail, learning as much as you can. Write a five-page, double-spaced report about your topic.
3. Prepare an oral presentation about astronomy. Visit an elementary school class or daycare center and give your presentation. You must videotape this presentation so it can be graded.
4. Build a scale model of the Solar System. Include planets, asteroids and moons.

Your final project should be of the highest quality. Give it your best effort. When you have completed the final project, give it to a trusted adult, such as a parent or teacher, to grade.
If you are not a condensed matter physicist, vanadium dioxide (VO2) may be the coolest material you’ve never heard of. It’s a metal. It’s an insulator. It’s a window coating and an optical switch. And thanks to a new study by physicists at Rice University, scientists have a new way to reversibly alter VO2’s electronic properties by treating it with one of the simplest substances — hydrogen. So what is VO2? It’s an oxidized form of the metal vanadium, an ingredient in hardened steel. When oxygen reacts with vanadium to form VO2, the atoms form crystals that look like long rectangular boxes. The vanadium atoms line up along the four edges of the box in regularly spaced rows. A single crystal of VO2 can have many of these boxes lined up side by side, and the crystals conduct electricity like wire as long as they are kept warm. “The weird thing about this material is that if you cool it, when you get to 67 degrees Celsius, it goes through a phase transition that is both electronic and structural,” said Rice’s Douglas Natelson, lead co-author of the study in this week’s Nature Nanotechnology. “Structurally, the vanadium atoms pair up and each pair is slightly canted, so you no longer have these long chains. When the phase changes, and these pairings take place, the material changes from being an electrical conductor to an electrical insulator.” While other materials exhibit a similar electronic about-face, VO2 is unique in that the change occurs at a relatively modest temperature — around 153 degrees Fahrenheit — and sometimes at incredible speed — less than a trillionth of a second. In recent years, scientists have put these quirky properties to work. In 2004, a group in London used VO2 to design a temperature-sensitive window coating that could absorb sunlight on cold days and turn reflective on hot days. And electronics researchers are also working to create optical switches from VO2.
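The two transition temperatures quoted above, 67 degrees Celsius and "around 153 degrees Fahrenheit", are the same value on different scales; a one-line conversion confirms it:

```python
def celsius_to_fahrenheit(celsius):
    return celsius * 9 / 5 + 32

# 67 C is 152.6 F, i.e. "around 153 degrees Fahrenheit"
assert round(celsius_to_fahrenheit(67)) == 153
```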
“As an experimental physicist, VO2 is intriguing because the detailed physics of the material are still not well understood, and theoretical models alone cannot give us the answers,” said Natelson, professor of physics and astronomy and of electrical and computer engineering at Rice. “Experiments are key to understanding this.” In 2010, Natelson and postdoctoral research associate Jiang Wei began to systematically study the phase changes in VO2. Wei and graduate student Heng Ji began by using a process called vapor deposition to grow VO2 wires that were about 1,000 times smaller than a human hair. One set of experiments on wires that had been baked in the presence of hydrogen gas returned particularly odd readings. Wei, Ji and Natelson determined that the hydrogen was apparently modifying the VO2 nanowires, but only those in contact with metal electrodes. “The gold electrodes we were using to supply current to the experiment were acting as a catalyst that split the hydrogen gas molecules into atomic hydrogen, which could then diffuse into channels in the VO2,” Natelson said. “It appears that the hydrogen is taken up into the VO2 crystals, and this changes their electronic properties. If a little hydrogen is added, the phase transition happens at a slightly lower temperature, and the insulating phase becomes more conductive. If enough hydrogen is added, the transition to the insulating phase disappears altogether.” To gain insight into just how the hydrogen is able to alter the transition, the experimenters consulted with theoretical physicist Andriy Nevidomskyy, assistant professor of physics and astronomy at Rice. Nevidomskyy’s calculations showed that the hydrogen changes the amount of charge in the VO2 material and also forces the crystal to expand slightly. Both of these effects favor the metallic state. 
This is not the first time physicists have lowered the transition temperature of VO2 by adding other materials — a technique known as “doping.” But Natelson said Rice’s hydrogen doping is unique in that it is completely reversible: To remove the hydrogen, the material simply has to be baked in an oven at moderate temperature. “On the applied side, there may be a number of applications for this, like ultrasensitive hydrogen sensors,” Natelson said. “But the more immediate payoff will likely be in helping us to better understand the physics involved in the VO2 phase transition. If we can find out exactly how much hydrogen is required to shut down the transition, then we will have a knob that we can turn to systematically raise or lower the temperature in future experiments.” This story is reprinted from material from Rice University, with editorial changes made by Materials Today. The views expressed in this article do not necessarily represent those of Elsevier.
Identification and distribution of simple and acylated betacyanins in the Amaranthaceae. - PubMed: 11308355 Red-colored plants in the family Amaranthaceae are recognized as a rich source of diverse and unique betacyanins. The distribution of betacyanins in 37 species of 8 genera in the Amaranthaceae was investigated. A total of 16 kinds of betacyanins were isolated and characterized by HPLC, spectral analyses, and MS. They consisted of 6 simple (nonacylated) betacyanins and 10 acylated betacyanins; the 16 pigments comprised 8 amaranthine-type pigments, 6 gomphrenin-type pigments, and 2 betanin-type pigments. Acylated betacyanins were identified as betanidin 5-O-beta-glucuronosylglucoside or betanidin 6-O-beta-glucoside acylated with ferulic, p-coumaric, or 3-hydroxy-3-methylglutaric acids. Total betacyanin content in the 37 species ranged from 0.08 to 1.36 mg/g of fresh weight. Simple betacyanins (such as amaranthine, which averaged 91.5% of total peak area) were widespread among all species of 8 genera. Acylated betacyanins were distributed among 11 species of 6 genera, with the highest proportion occurring in Iresine herbstii (79.6%) and Gomphrena globosa (68.4%). Some cultivated species contained many more acylated betacyanins than wild species, representing a potential new source of these pigments as natural colorants.
Agarose gel electrophoresis

Agarose gel electrophoresis is used to separate DNA or RNA molecules by size. Nucleic acids are negatively charged and are moved through an agarose matrix by an electric field (electrophoresis). Shorter molecules move faster and migrate further.

- Cast a gel
- Place it in the gel box in running buffer
- Load samples
- Run the gel
- Image the gel

Running buffers:
- TAE - better resolution of fragments >4 kb
- TBE - better resolution of 0.1-3 kb fragments; TBE is better suited for high-voltage (>150 V) electrophoresis than TAE because of its higher buffering capacity and lower conductivity

Loading-dye comigration (the dye runs alongside fragments of roughly this size):

| dye | 0.5-1.5% agarose | 2.0-3.0% agarose* |
| xylene cyanol | 4000-5000 bp | 750 bp |
| bromophenol blue | 400-500 bp | 100 bp |

for recipes see: Agarose gel loading dye

- If you are getting unexpected bands on your gel you may want to look at the common issues in agarose gel electrophoresis page.
- If you have no experience with gel electrophoresis or are explaining it to someone new, here is a cute java demo of what happens.
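The size-versus-distance behavior described above (shorter molecules migrate further) is often approximated as log-linear over a gel's resolving range. The sketch below is a toy model with invented coefficients, for illustration only, not a calibration for any real gel:

```python
import math

def migration_distance_cm(fragment_bp, a=10.0, b=2.5):
    """Toy log-linear model: distance = a - b * log10(size in bp).
    The coefficients a and b are invented for illustration; a real gel
    would be calibrated against a ladder of known fragment sizes."""
    return a - b * math.log10(fragment_bp)

# Shorter fragments travel further down the gel
assert migration_distance_cm(100) > migration_distance_cm(1000)
assert migration_distance_cm(1000) > migration_distance_cm(4000)
```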
What is SQL? SQL, which stands for Structured Query Language, is a special-purpose language used to define, access, and manipulate data. SQL is nonprocedural, meaning that it describes the necessary components and desired results without dictating exactly how results should be computed.
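To make the "nonprocedural" point concrete, here is a minimal sketch using Python's built-in sqlite3 module; the table and rows are invented for the example. The query states which rows are wanted, not how to scan for them:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, age INTEGER)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("Ada", 36), ("Grace", 45), ("Linus", 28)])

# Declarative: describe the result set, let the engine decide how to compute it
rows = conn.execute(
    "SELECT name FROM users WHERE age > 30 ORDER BY name"
).fetchall()
print(rows)   # [('Ada',), ('Grace',)]
```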
THL Toolbox > Developers' Zone > Web Development > Xml Markup in THL > Titles of Articles and Monographs

There are several types of document title that can be encoded in the metadata of an XML essay. Titles are found listed within the teiHeader -> fileDesc -> titleStmt, along with the author's name. The titles are differentiated by their "type" attribute. For example, the full title of an essay would be marked up in the following way:

<TEI.2> <teiHeader lang="eng"> <fileDesc> <titleStmt> <title lang="eng" level="a" type="full">The Three provinces of Mṅa’-ris: Traditional Accounts of Ancient Western Tibet</title> <title lang="eng" level="a" type="brief">The Three provinces of Mṅa’-ris</title> … </titleStmt> … </fileDesc> … </teiHeader> … </TEI.2>

The following values can be used for the title’s "type" attribute:
- full: This is the full title of the essay, including the subtitle after the colon, if there is one. It is used at the beginning of an essay, in the header of the first page.
- brief: This is the abbreviated title used in the headers of pages following the first.
- citation: This is a version of the full title to be displayed in HTML display for the citation of the article. In cases where there are diacritics, internal markup, or other things that do not display correctly in the display of the citation, this version of the title may be added. It contains the HTML version of the full title, escaped by the CDATA wrapper in the following way: <title type="citation"><![CDATA[Review of <i>Thundering Falcon: An Inquiry into the History and Cult of Khra ’brug, Tibet’s First Buddhist Temple</i>, by Per K. Sørensen et al.]]></title>
- browser: This is the version of the title to be used to set the browser window’s title, if such is desired.
An example is: <title lang="eng" level="a" type="browser">Monks, by José Cabezón</title> Note: In the JIATS essays, the markup of the scholarly and popular versions of the title have been separated into two different title elements without a type attribute but with differing rend attributes. <title rend="s"> refers to the scholarly (i.e. Wylie) version of the title. <title rend="p"> refers to the popular (i.e. phonetic) version of the title. This form of the markup is still supported for JIATS essays only.
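When consuming these files, the type attribute is what you select on. A minimal sketch with Python's standard-library ElementTree (the title text here is invented; real THL files may also involve namespaces not shown in the examples above):

```python
import xml.etree.ElementTree as ET

# A pared-down titleStmt in the shape shown above
titlestmt = ET.fromstring("""
<titleStmt>
  <title lang="eng" level="a" type="full">Sample Essay: A Subtitle</title>
  <title lang="eng" level="a" type="brief">Sample Essay</title>
</titleStmt>
""")

# Map each title's type attribute to its text
titles = {t.get("type"): t.text for t in titlestmt.findall("title")}
assert titles["full"] == "Sample Essay: A Subtitle"
assert titles["brief"] == "Sample Essay"
```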
Water temperatures in the Trinity and Lower Klamath River are monitored to understand how well the dam releases met expected water temperature criteria. Major findings from 2009 include:
- Spring water temperatures in the Trinity River generally fell within the “Optimal” or “Marginal” thermal regime for smolts, except for two brief periods, May 18 and May 30, when water temperatures entered the “Unsuitable” criterion for smolts.
- The North Coast Region Basin Plan temperature objective of 15.6 °C at Douglas City to protect adult salmon was exceeded on 33 days between July 1 and September 15 (Figure). The maximum temperature experienced during this time period was 16.5 °C, a 0.9 °C exceedance.
- The temperature objective of 13.3 °C at the North Fork Trinity River was not exceeded. This objective was met in part by using the deeper auxiliary outlet at Trinity Dam to release cooler water.
- Although the objective was not always met, the prescribed flows increased the temperature difference between the Trinity River and the Klamath River, indicating that these flows moderated water temperatures during a time when water temperatures of the Klamath River were increasing.

The most pronounced influence of Lewiston Dam releases on downstream temperatures occurred from August 25 to August 31, when a pulse flow, peaking at 2,767 cfs, was released to support the ceremonial needs of the Hoopa Valley Tribe. During this time, the water was on average 2.9 °C colder than the Klamath River at the confluence of the Trinity River. A peak difference of 4.4 °C was recorded on August 26, a time when flow from Lewiston Dam represented a dominant flow source to the Klamath River. The large differential resulted in a temperature reduction of 2.0 °C below the confluence and between 1.0 and 1.5 °C difference at the Terwer Gage of the Lower Klamath River.
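The exceedance bookkeeping reported above (days over the 15.6 °C objective, plus the peak exceedance) reduces to a single pass over a daily temperature series. The sketch below uses an invented five-day series for illustration, not the 2009 monitoring data:

```python
OBJECTIVE_C = 15.6   # Basin Plan objective at Douglas City, deg C

# Invented daily maximum temperatures (deg C), for illustration only
daily_max_c = [15.2, 15.9, 16.5, 15.4, 16.1]

exceedances = [t - OBJECTIVE_C for t in daily_max_c if t > OBJECTIVE_C]
print(len(exceedances))              # number of exceedance days: 3
print(round(max(exceedances), 1))    # peak exceedance: 0.9 deg C
```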
Suggested further reading:
- Scheiff, T. and Zedonis, P. (2010) The influence of Lewiston Dam releases on water temperatures of the Trinity and Klamath Rivers, CA. April to October, 2009.
- Faux, R. (2010) Application of airborne thermal infrared (TIR) imagery to the understanding of spatial temperature patterns in the Trinity River. Oral presentation provided at the 2010 Trinity River Science Symposium.
- Wittler, R.; Yaworsky, R.; and Manza, P. (2010) Temperature management of the CVP northern system. Oral presentation provided at the 2010 Trinity River Science Symposium.
- Zedonis, P. (2009) The influence of Lewiston Dam releases on water temperatures of the Trinity and Klamath Rivers, CA. April to October, 2008.
- Zedonis, P. (2008) The influence of Lewiston Dam releases on water temperatures of the Trinity and Klamath Rivers, CA. April to October, 2007.
- Watercourse Engineering (2007) Trinity River flow and temperature modeling project.
- Stutsman, M. R. (2005) Mid-Klamath river salmonid health and abundance in response to a proactive flow release from Lewiston Dam on the Trinity River, California, 2004.
- Zedonis, P. and Newcomb, T. (1997) An evaluation of flow and water temperatures during the spring for protection of salmon and steelhead smolts in the Trinity River, California.
- Vermeyen, T. (1997) Use of temperature control curtains to control reservoir release water temperatures.
The movement of moisture or water vapor in the atmosphere is important in determining the weather. In the swirling motion of atmospheric water vapor, hurricanes can be seen. During the summer of 1995, a series of hurricanes formed off the coast of Africa -- Allison, Erin, Felix, Humberto, Iris, Luis, Marilyn, Noel, Opal, Roxanne, and Tanya -- then moved west across the Atlantic to the Caribbean and East Coast of the United States. The white and light blue areas contain the most water vapor, while the dark areas are the driest. The small bright areas are storms and hurricanes. The big spirals show large-scale circulation patterns in the Earth's atmosphere.
Research project: Systematics of freshwater isopods (Phreatoicidea)
- Start date:
- Australian Biological Resources Study (ABRS)

Isopods in the suborder Phreatoicidea are found in surface streams, burrowing in the soil in moist areas, in yabbie burrows, in ground waters, and other hypogean waters. They have a fossil record extending 325 million years into the Paleozoic, and their biogeography and fossil record reveal that they have been living on the Australian continent longer than the marsupial mammals, yet they show surprisingly little morphological differentiation or change from their fossil ancestors. This research on the Phreatoicidea improves our knowledge of their taxonomy and distribution, and provides the basic data for understanding their biogeography.

Dr George D. F. (Buz) Wilson, Principal Research Scientist
<urn:uuid:d5ce9374-693d-4dc2-9b21-acee4385fd14>
3.078125
184
Academic Writing
Science & Tech.
21.298168
Leonhard Euler's Solution to the Königsberg Bridge Problem
Euler's Proof, Part II
In Paragraph 6, Euler continues explaining the details of his method. He tells the reader that if there is more than one bridge that can be crossed when going from one landmass to the other, it does not matter which bridge is used. For example, even though there are two bridges, a and b, that can take a traveler from A to B, it does not matter with Euler's notation which bridge is taken. In this paragraph, Euler also discusses the specific problem he is dealing with. He explains, using his original figure, that the Königsberg problem needs exactly eight letters, where the pairs (A,B) and (A,C) must appear next to each other exactly twice, no matter which letter appears first. In addition, the pairs (A,D), (B,C), and (C,D) must occur together exactly once for a path that crosses each bridge once and only once to exist.
In Paragraph 7, Euler informs the reader that either he needs to find an eight-letter sequence that satisfies the problem, or he needs to prove that no such sequence exists. Before he does this for the Königsberg bridge problem, he decides to find a rule to discover whether a path exists for a more general problem. He does this in Paragraph 8 by looking at a much simpler example of landmasses and bridges. Euler draws Figure 2, and he begins to assess the situations where region A is traveled through. Euler states that if bridge a is traveled once, A was either where the journey began or ended, and therefore was used only once. If bridges a, b, and c are all traveled once, A is used exactly twice, no matter whether it is the starting or ending place. Similarly, if five bridges lead to A, the landmass A would occur exactly three times in the journey.
Euler states that, “In general, if the number of bridges is any odd number, and if it is increased by one, then the number of occurrences of A is half of the result.” In other words, if there is an odd number of bridges connecting A to other landmasses, add one to the number of bridges, and divide the result by two, to find out how many total times A must be used in the path, where each bridge is used once and only once (i.e., total occurrences of A, where A has an odd number of bridges, = (number of bridges + 1) / 2). Using this fact, Euler solves the Königsberg bridge problem in Paragraph 9. In that case, since there are five bridges that lead to A, it must occur three times. (See Figure 1, reproduced here for easy access.) Similarly, B, C, and D must appear twice since they all have three bridges that lead to them. Therefore 3 (for A) + 2 (for B) + 2 (for C) + 2 (for D) = 9, but Euler already stated that there must only be eight occurrences for the seven bridges. This is a contradiction! Therefore, it is impossible to travel the bridges in the city of Königsberg once and only once. The end, or is it? While the people of Königsberg may be happy with this solution, the great mathematician Leonhard Euler was not satisfied. Euler further continues his proof to deal with more general situations.
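Euler's counting argument can be checked mechanically. The sketch below (Python) tallies how many bridges touch each landmass and applies his rule for odd-degree landmasses, occurrences = (bridges + 1) / 2; the bridge labels follow the standard reading of Euler's figure.

```python
from collections import Counter

def euler_occurrences(bridges):
    """Apply Euler's rule: a landmass touched by an odd number d of
    bridges must appear (d + 1) / 2 times in a walk that crosses
    every bridge exactly once."""
    degree = Counter()
    for a, b in bridges:
        degree[a] += 1
        degree[b] += 1
    # All four Konigsberg landmasses have odd degree, so the rule applies.
    return {land: (d + 1) // 2 for land, d in degree.items()}

# The seven bridges: a, b join A-B; c, d join A-C; e joins A-D;
# f joins B-D; g joins C-D.
bridges = [("A", "B"), ("A", "B"), ("A", "C"), ("A", "C"),
           ("A", "D"), ("B", "D"), ("C", "D")]
occ = euler_occurrences(bridges)
total = sum(occ.values())    # 3 + 2 + 2 + 2 = 9
required = len(bridges) + 1  # a walk over 7 bridges is a sequence of 8 letters
print(occ, total, required)  # 9 > 8: the walk is impossible
```

Since the required nine occurrences exceed the eight letters available, the program reaches the same contradiction Euler does.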
<urn:uuid:97410c97-5ebc-48d3-87d2-42d696d66a83>
3.46875
769
Academic Writing
Science & Tech.
60.369795
Man searching for instruments to help him to calculate
Man has always searched for tools to help him count. As economic life developed, these tools became more and more necessary. The early humans knew only three words to describe a counting result: 1, 2, and 'a lot', the latter meaning 'three' as well as any other number. To indicate a quantity more precisely, they began to use twigs and pebbles. For each object or animal they exchanged, they threw a certain number of these pebbles or twigs together, and afterwards they counted them. So, if somebody wanted to exchange five cows for chickens, and let's suppose that each cow is worth three chickens, they threw three pebbles together for each cow, then they took a chicken for each pebble. Later they used tally sticks: long sticks in which they notched little incisions. (Each incision represented one unit.)
THE PLUMMET BY THE EGYPTIANS
Because the Egyptians always made big constructions, they needed a way to construct an angle of 90°. For this purpose, they made use of a plummet, because it always makes a perfect angle of 90° with the ground. (They were also the ones who found out that the area of an isosceles right-angled triangle can be calculated by the formula 'L×B/2', discovering that they just had to split the area of a square in two. Later, they realised that this formula could be used for the area of any triangle.)
CALCULATION WITH THE FINGERS
Although men counted with their fingers ages before the Romans, most of the Romans and even men in the Middle Ages still counted this way. They even invented a way to multiply. The fingers above the two that are lying against each other were counted on each hand apart, and multiplied with each other (2 × 2); the touching fingers and those below them were counted together as tens (6 × 10). At the end, they added the two numbers: 60 + 4 = 64 = 8 × 8!
In South America, the Incas (12th to 16th century) developed a way to calculate with ropes which were all tangled up. They called them 'quipus'.
A 'quipu' consisted of a lot of ropes, in different colours, which were all tied (partly parallel, partly from one common point) onto a bigger one. They expressed numbers as well as sorts. The colours represented these sorts; the ropes were the numbers.
The knots on the end of a rope were the units, the knots above them the tens. And they had also already invented something for '0': they left more space than normal between two knots! To read such a quipu, they started with the highest numbers. There were also knots made of different ropes. Scientists today still aren't sure of their use, but they were probably representing a multiplication. These quipus could only be read by certain people, mainly because the colours represented something different in each quipu. In a quipu of the harvest, yellow could stand for wheat; but in a quipu of the treasury, it stands for gold.
The Incas made an inventory of everything they had, and for each sort there were special Incas who knew how to make a quipu for it. They had to teach this to their sons, so that there would always be someone who knew how to make that kind of quipu. We can say that the Inca kingdom was ruled by the quipus!
The abacus is an ancient calculating device made up of a frame of parallel wires on which beads are strung. The method of calculating with a handful of stones on a 'flat surface' (Latin abacus) was familiar to the Greeks and Romans, and used by earlier civilisations, possibly even in ancient Babylon.
But the most famous abacus is the 'SuanPan', which the Chinese used from 1200 A.D. It has two decks, an upper deck and a lower deck. The Chinese abacus is made of bamboo sticks and ivory beads. Each rod on the upper deck has two beads; each rod on the lower deck has five beads. Such an abacus is called a 2/5 abacus.
Each bead in the upper deck has a value of FIVE; each bead in the lower deck has a value of ONE. The beads are considered counted when moved towards the beam that separates the two decks. The extreme-right column represents the units; the next one to the left represents the tens. After 5 beads are counted in the lower deck, the result is 'carried' to the upper deck; after both beads in the upper deck are counted, the result (10) is then carried to the left adjacent column. You can add, subtract, divide and multiply with an abacus. The 2/5 model survived until about 1850 A.D., at which time it evolved into the 1/5 abacus, used until around 1930 A.D.
The 1/4 abacus is the model actually preferred and manufactured in Japan. At the moment, the 1/5 models are very rare and the 2/5 models are difficult to find outside of China. The abacus is still used by shopkeepers in Asia and 'Chinatowns' in North America. Its use is still taught in certain schools in the Far East. In 1946 a contest between a Japanese abacist (Kiyoshu Matzukai) and an electronic computer was held for 2 days, resulting in an unmistakable victory for the abacist.
THE SLIDE RULE
The slide rule was invented in 1622 by the English mathematician William Oughtred. The French army officer Amédée Mannheim (1831-1906) devised a later version. Until the calculator was invented, engineers and scientists always used the slide rule. It is a mechanical instrument that is used to compute mathematical functions such as multiplication and division, involution and extraction of roots; some later models even compute exponential and trigonometric functions.
Calculations are performed by moving two graduated scales over each other and reading the result with the aid of a travelling cursor. Slide rule operation is based on logarithmic scales, which convert multiplication into addition.
The most common slide rule consists of a horizontal sliding bar held between two fixed bars and a cursor which travels across the entire structure. Each bar contains scales representing different mathematical functions, such as squaring a number or taking a logarithm.
With two linear scales you can add 4 and 5: put the zero of the second bar under the first number of your addition (4) on the first bar, then look on the second bar for your second number (5). The answer is the number above that number: 9! If you'd like to subtract 3 from 8, you'll have to put the 3 of your second bar under the 8 of the first one, and then just look at the number above the zero of your second bar: 5! If you want to multiply and divide, you need logarithms. A logarithm is the exponent to which a specified base must be raised to produce a given number.
EXAMPLES : 2log 8 = 3 and 3log 27 = 3
The first log tables (to base e) were published by the Scottish mathematician John Napier in 1614. 'e' is an irrational number approximately equal to 2.7183. Base-e logarithms are called the natural or Napierian logarithms. Base-ten logarithms were introduced by the Englishman Henry Briggs (1561-1631) and the Dutch mathematician Adriaen Vlacq (1600-1667). They are used frequently, and you can also find them on your calculator.
John Napier didn't only discover logarithms; he also invented a kind of calculator, in its earliest stage. With his bars, it was possible to move the columns of the multiplication table and to bring them next to each other in accordance with the digits of the multiplied number. Moreover, each section was diagonally divided to distinguish the units from the tens. John Napier described the working of it in his Rhabdologia (1617). But it appeared that the Arabs had already invented such an instrument before John Napier did. Later this instrument became more manageable through the use of a rotating cylinder. Presumably it was Kaspar Schott S.J. who made this improvement.
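The slide rule's principle, converting multiplication into the addition of logarithms, can be sketched in a few lines of Python (a toy illustration, not a model of any particular instrument):

```python
import math

def slide_rule_multiply(a, b):
    """Multiply by adding logarithms, the principle behind the slide
    rule's logarithmic scales: log10(a * b) = log10(a) + log10(b)."""
    return 10 ** (math.log10(a) + math.log10(b))

# The base-2 and base-3 logarithm examples given in the text:
assert math.log2(8) == 3                 # 2log 8 = 3
assert math.isclose(math.log(27, 3), 3)  # 3log 27 = 3

product = slide_rule_multiply(4, 5)
print(product)  # close to 20, up to floating-point rounding
```

On a physical rule the two additions are performed by sliding one logarithmic scale along the other, so the multiplication is read off directly.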
<urn:uuid:e7403e06-bae8-4de8-a1c9-6b6a56513a11>
3.59375
1,828
Knowledge Article
Science & Tech.
57.830921
I just have a quick question about time dilation/proper time because my physics book makes it a little confusing. Let's say we have an observer on Earth, and then an observer on a space ship. The space ship leaves Earth, flies to the Moon, and then returns to Earth. Who is the person measuring the proper time and why? I know that a clock "runs slower" when it is in motion because it is in frame S' which is the rest frame of the clock, but doesn't the observer on Earth also have a clock that is in it's rest frame? Each observer has their own proper time measured by the clock in their rest frame. However, one man's proper time is not another man's proper time. Time dilation means that each observer will see the other observer's clock running slower (compared to their own proper time measuring clock). But everything is perfectly symmetric from either observer's point of view as long as the relative motion is uniform. You measure your clock ticking at the "normal rate" (your proper time) and you see the other person's clock ticking at a slower rate. Similarly the other person measures their clock ticking at the "normal rate" (their proper time) and they see your clock ticking at a slower rate. This is all well and nice, but it gets interesting when the two compare their clocks after one of them does a round trip. This means that one of them necessarily had to accelerate and decelerate and was not in uniform motion (technically, was not on a geodesic). Now you have an opportunity to actually compare those two clocks and you'll always find that the person in uniform motion (in this case, the observer at rest on Earth) was the one whose clock has ticked the most, and hence aged the most. The best way to understand this is to realize that the length of paths in spacetime is measured by the total proper time along that path (measured by that path-traveller's clock in their rest frame). 
One can show that the paths of uniform motion (geodesics) have that length maximized, so any path that deviates from a geodesic (because of accelerations), will necessarily measure a shorter total proper time after a round trip. EDIT AFTER FIRST COMMENT: Time dilation isn't the appropriate effect to consider in this particular problem -- length contraction is. In Nick's frame, a length contracted ship passes by at speed $v$. In Molly's frame, a point-object (heh) Nick passes by an uncontracted ship at speed $v$. Clearly, this should happen quicker in Nick's frame because of the length contraction. Thinking in terms of time dilation simply doesn't help here. Think from the point of view of each observer and it will be quickly obvious which effect to use.
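The round-trip asymmetry can be made concrete with a toy calculation. The numbers below (a 10-year Earth-frame trip at 0.6c) are hypothetical, and the brief acceleration and turnaround phases are ignored:

```python
import math

def proper_time(coordinate_time, v_over_c):
    """Proper time elapsed on a clock moving at a constant fraction
    v_over_c of light speed, over a given coordinate time in the
    Earth frame: tau = t * sqrt(1 - (v/c)**2)."""
    return coordinate_time * math.sqrt(1 - v_over_c ** 2)

# Hypothetical round trip: 10 years of Earth time at v = 0.6c.
earth_time = 10.0
traveler_time = proper_time(earth_time, 0.6)
print(earth_time, traveler_time)  # the traveler's clock reads 8.0 years
```

The Earth observer, who stayed on (very nearly) a geodesic, accumulates the most proper time; the traveler returns having aged less.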
<urn:uuid:0f1bb56d-e341-4542-9acc-4b691f2a5212>
2.9375
582
Q&A Forum
Science & Tech.
56.684866
Making Text Adventure Games in Ruby
Making a Text Adventure Game in Ruby
Text adventure games were a very popular game genre on minicomputers and microcomputers from the 1970s through the 1990s. This article series will take you through each step of making a text adventure game in Ruby.
One Tree to Rule Them All
Some problems seem difficult: there are just too many things to consider, and it's going to become a mess. However, if you analyze it, there's often a single data structure that will make the whole thing easier.
We've been organizing our game data in a tree, but a tree is kind of hard to visualize. Single objects and arrays you can simply puts to the terminal, but trees are a bit more difficult. Two methods are presented: one that prints the tree to the terminal and another that generates an image.
Finding and Moving Nodes
While text adventure games have many interactions, almost everything aside from the scripting involves finding a node and moving it somewhere else.
Now that we have most of the support code for the game finished, it's time to implement the player. The player is where everything happens; it's the API for this whole thing, if you will.
A text adventure game without scripting, using only the built-in game mechanics, is a game that gets boring quickly. This is especially true since most challenges in text adventure games are puzzle-based.
Saving, Loading and Cleaning Up
Saving and loading a game state is often a difficult process. You must save the entire state of the game and load it exactly as it was, or something will go amiss. However, this is trivial for our text adventure game.
<urn:uuid:bf59b49f-affa-4fc2-9218-3d81b006a5e4>
2.96875
344
Tutorial
Software Dev.
46.725443
Revealing the True Solar Corona
Imaging may have inadvertently led astrophysicists astray in understanding the Sun
Sometimes seeing shouldn't be believing. For 150 years, the solar corona has been studied by the use of images, including visual, photographic and processed. The assumption was that the observed variations in brightness were indicative of the processes that produce the solar wind and that the wind therefore emanated from polar coronal holes. The author, however, studies the plasma density of the solar wind, the only parameter that has been extensively studied in both the corona and the solar wind. From these data, he concludes that the solar wind emanates radially from the entire Sun.
<urn:uuid:7a801a3d-7dfb-43b5-b655-96c1f5713edf>
3.03125
145
Truncated
Science & Tech.
26.060435
Posted by Kate on Monday, May 5, 2008 at 12:31pm.
You must have a table in your text or in your lab manual that gives the solubility of some of the salts at various temperatures. I assume this is an experiment you are conducting. KCl is rather soluble.
In the question above this one it says to refer to a solubility chart, but my professor never taught us how to answer these questions using a solubility chart, so I have no idea what to do. I've tried going to his office hours for help, but he's only there when I have a class. It's the standard solubility chart (if you looked it up on Google or something), but I have no idea how to answer the question.
You can't draw a graph on these boards, but if it's a table, can you either describe it or list the columns and temperatures?
You can't type information on this board in columns, but you can do it this way, using periods for spaces:
T........mass KCl/100 mL
So, for KCl, using the graph...
T.....mass KCl/100 mL
20....32 or 33
90....52 or 53
Is the solubility of KCl then 46 grams/100 mL at 70 degrees C? (I don't know if that is the unit for you OR if you were just copying the unit I gave as an example.)
If 46 g/100 mL, then g in 50 mL = 46 x (50 mL/100 mL) = 23 g.
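The two steps in the thread, estimating the solubility between chart points and scaling it to the 50 mL sample, can be sketched in Python. The chart readings (about 33 g at 20 C, 53 g at 90 C, and 46 g at 70 C) are the approximate values quoted in the thread, not exact data:

```python
def mass_in_sample(solubility_per_100ml, volume_ml):
    """Scale a solubility quoted in g per 100 mL of water to another volume."""
    return solubility_per_100ml * volume_ml / 100.0

def interpolate(t, t1, s1, t2, s2):
    """Linear interpolation between two (temperature, solubility) chart points."""
    return s1 + (s2 - s1) * (t - t1) / (t2 - t1)

# A straight-line estimate between the two chart readings, near the
# ~46 g/100 mL read off the graph at 70 C:
estimate_70 = interpolate(70, 20, 33, 90, 53)
print(estimate_70)

# Scaling the 46 g/100 mL reading down to the 50 mL sample:
print(mass_in_sample(46, 50))  # 23.0 g of KCl
```

Linear interpolation is only an approximation here, since real solubility curves are not perfectly straight.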
<urn:uuid:476ba663-0161-43f0-bd8f-5c4ea764bc91>
3.3125
551
Q&A Forum
Science & Tech.
92.689599
Posted by navroz on Tuesday, January 3, 2012 at 2:12pm.
4+5+6+7 = 22
So, there would be 22 cookies, but why is the denominator 4!5!6!?
If all the cookies were different, you'd have just 22! different ways. But if you just consider the 4 chocolate chip cookies, they are indistinguishable from each other. If they were different, then there would be 4! different ways to eat the chocolate chip cookies. But you don't care which of them you are eating, so you divide by the 4! ways which are the same.
Consider the case of 4 cookies, 3 of which are A, plus a single B. If you write down all the ways to eat them, numbering the A's, you get
A1 A2 A3 B    A1 A2 B A3    A1 A3 A2 B    A1 A3 B A2
A1 B A2 A3    A1 B A3 A2    A2 A1 A3 B    A2 A1 B A3
A2 A3 A1 B    A2 A3 B A1    A2 B A1 A3    A2 B A3 A1
A3 A1 A2 B    A3 A1 B A2    A3 A2 A1 B    A3 A2 B A1
A3 B A1 A2    A3 B A2 A1    B A1 A2 A3    B A1 A3 A2
B A2 A1 A3    B A2 A3 A1    B A3 A1 A2    B A3 A2 A1
As you can see, there are 4! = 24 ways to eat the cookies. But if all the A1 A2 A3 are replaced by just A, we have only
A A A B    A A B A    A B A A    B A A A
which is 4!/3! = 4.
Expand that to your problem, and you see why we divide by n! if there are n identical objects.
But what happened to the 7? Shouldn't it be 4!5!6!7!?
There are 7 assorted other cookies. Thus, they are different, and can be eaten in distinguishable orders.
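The counting rule in the thread is the multinomial coefficient: divide the factorial of the total by the factorial of each group of identical items. A small Python sketch:

```python
from math import factorial

def multinomial(*counts):
    """Number of distinct orderings of a multiset with the given group
    sizes: (n1 + n2 + ...)! / (n1! * n2! * ...)."""
    total = factorial(sum(counts))
    for c in counts:
        total //= factorial(c)
    return total

# The small example from the thread: 3 identical A's plus a single B.
print(multinomial(3, 1))        # 4, matching the listing above

# Four chocolate chip, five of a second kind, six of a third,
# plus 7 assorted (distinct) cookies: each distinct cookie is its
# own group of size 1, so no 7! appears in the denominator.
print(multinomial(4, 5, 6, 1, 1, 1, 1, 1, 1, 1))  # = 22! / (4! 5! 6!)
```

Writing the 7 assorted cookies as seven groups of size 1 makes the tutor's point explicit: 1! = 1, so only the indistinguishable groups contribute to the denominator.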
<urn:uuid:ec51b523-46a1-4670-ac25-d97f7ced2e8f>
3.109375
693
Comment Section
Science & Tech.
99.23128
GPR operates in the electrical conduction wavelength region of the electromagnetic spectrum. Whereas seismic response is a function of acoustic properties, GPR response is a function of the electromagnetic properties: dielectric permittivity, magnetic permeability, and electrical conductivity. Dielectric permittivity is a complex function having real and imaginary components. The real portion of dielectric permittivity is usually expressed as the dielectric constant, which is the ratio of the electric-field storage capacity of a material to that of free space. The imaginary portion of dielectric permittivity is usually expressed as dielectric loss, which represents attenuation and dispersion. Dielectric loss is negligible if the conductivity of a material is low, less than 10 milliSiemens/meter (mS/m), as it is for many geologic materials. Thus, the dielectric constant is typically the primary component of dielectric permittivity. Magnetic permeability, the magnetic flux density divided by the magnetic field strength, is the product of the permeability of free space and the relative magnetic permeability. The effect of magnetic permeability on GPR response is negligible for materials with a relative magnetic permeability value of 1, which is the value for most sedimentary materials. Dielectric permittivity, magnetic permeability, and electrical conductivity are frequency dependent and behave differently over various frequency ranges (Powers, 1997). The dielectric constant generally decreases with increasing frequency, while conductivity and dielectric loss increase with increasing frequency. However, their behavior is relatively consistent over the typical GPR antenna frequency range of 25-1,500 MHz. The dielectric constant is a critical GPR parameter because it controls the propagation velocity of electromagnetic waves through a material and the reflection coefficients at interfaces, as well as affecting the vertical and horizontal imaging resolution.
Therefore, knowing dielectric-constant values of materials helps in planning GPR surveys and in better understanding and interpreting GPR images. Measured dielectric-constant values for various rocks and minerals may be found in the literature (e.g., Davis and Annan, 1989; Daniels, 1996; Olhoeft, 1989; Schon, 1996; Ulaby et al., 1990). Reported bulk dielectric-constant values of common earth materials are presented in table 1, and reported dielectric-constant values of common minerals and fluids are presented in table 2. These data are broadly useful; however, bulk dielectric constants of rocks and sediments actually reflect complex mixtures of materials and architectures that vary from one rock lithology to the next. In rocks and sediments, dielectric properties are primarily a function of mineralogy, porosity, water saturation, frequency, and, depending on the rock lithology, component geometries and electrochemical interactions (Knight and Endres, 1990; Knoll, 1996). Variations in each of these parameters can significantly change bulk dielectric constants. Dielectric mixing modeling is a forward-modeling technique that provides a basis for predicting expected bulk dielectric-constant values based on specific input parameters. Numerous dielectric-constant mixing models have been proposed, and all fall within four broad categories: effective medium, empirical and semi-empirical, phenomenological, and volumetric (Knoll, 1996) (table 3).
Table 1. Bulk dielectric constants (measured at 100 MHz) of common earth materials.
| Material | from Davis and Annan, 1989 | from Daniels, 1996 |
| Fresh water ice | 3-4 | 4 |
| Sea water ice | 4-8 | |
| Soil, sandy dry | 4-6 | |
| Soil, sandy wet | 15-30 | |
| Soil, loamy dry | 4-6 | |
| Soil, loamy wet | 10-20 | |
| Soil, clayey dry | 4-6 | |
| Soil, clayey wet | 10-15 | |
Table 2. Dielectric constants of common minerals and fluids. Note: These values are for specific minerals and fluids from specific study sites. Minerals and fluids taken from other sites may have slightly different dielectric-constant values or may exhibit dielectric anisotropy.
| Material | Dielectric constant | Frequency (MHz) | Source |
| Acetone | 20.9 | 1 | Lucius et al., 1989 |
| Air | 1.0 | 1 | Lucius et al., 1989 |
| Benzene | 2.3 | 1 | Lucius et al., 1989 |
| Carbon tetrachloride | 2.2 | 1 | Lucius et al., 1989 |
| Chloroform | 4.8 | 1 | Lucius et al., 1989 |
| Cyclohexane | 2.0 | 1 | Lucius et al., 1989 |
| Ethylene glycol | 38.7 | 1 | Lucius et al., 1989 |
| Gypsum | 6.5 | 750 | Martinez and Byrnes, 1999 |
| Methanol | 33.6 | 1 | Lucius et al., 1989 |
| Mica | 6.4 | 750 | Martinez and Byrnes, 1999 |
| Tetrachloroethene | 2.3 | 1 | Lucius et al., 1989 |
| Trichloroethene | 3.4 | 1 | Lucius et al., 1989 |
| Water | 80 | 1 | Lucius et al., 1989 |
Table 3. Summary of dielectric mixing model categories (adapted from Knoll, 1996).
| Category | Approach | Examples | Advantages | Disadvantages | References |
| Effective medium | Compute dielectric properties by successive substitutions | Bruggeman- | Accurate for known geometries | Cumbersome to implement; need to choose number of components, initial material, and order and shape of replacement material | Sen et al., 1981; Ulaby et al., 1986 |
| Empirical and semi-empirical | Mathematical functional relationship between dielectric and other measurable properties | Logarithmic; Polynomial | Easy to develop quantitative relationships; able to handle complex materials in models | There may be no physical justification for the relationship; valid only for the specific data used to develop the relationship and may not be applicable to other data sets | Dobson et al., 1985; Olhoeft and Strangway, 1975; Topp et al., 1980; Wang and Schmugge, 1980 |
| Phenomenological | Relate frequency-dependent behavior to characteristic relaxation times | Cole-Cole; Debye | Do not need component properties or geometrical relationships | Dependent on frequency-specific parameters | Powers, 1997; Ulaby et al., 1986; Wang, 1980 |
| Volumetric | Relate bulk dielectric properties of a mixture to the dielectric properties of its constituents | Complex Refractive Index (CRIM); Arithmetic average; Harmonic average; Lichtenecker-Rother; Time-Propagation (TP) | Volumetric data relatively easy to obtain | Do not account for micro-geometry of components; do not account for electrochemical interaction between components | Alharthi and Lange, 1987; Birchak et al., 1974; Knoll, 1996; Lange, 1983; Lichtenecker and Rother, 1931; Roth et al., 1990; Wharton et al., 1980 |
This paper provides a brief discussion of dielectric-constant mixing models, a general review of the important equations governing GPR response, and presents an application of Time-Propagation (TP) dielectric mixing modeling to predict reflection coefficients, reflection travel times, and imaging resolution. Three examples illustrate TP modeling of sandstones and carbonates, and the relationship between dielectric constant and porosity, mineralogy (Xm), water saturation (Sw), fluid-rock electrochemical interaction, and hydraulic permeability (k). A downloadable Excel 97 workbook containing interactive worksheets involving TP modeling and reflection-coefficient and two-way travel-time modeling is included as appendix A.
Kansas Geological Survey
Web version December 3, 2001
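As an illustration of the volumetric category in Table 3, the sketch below implements the common CRIM form (square-root mixing) together with the velocity and normal-incidence reflection-coefficient relations the text mentions. The sand/water example values are assumptions for illustration: water (80) and air (1.0) come from Table 2, while the quartz matrix value of 4.6 is an assumed figure, not from the tables.

```python
import math

C = 3.0e8  # speed of light in vacuum, m/s

def crim_bulk_epsilon(fractions_eps):
    """Volumetric CRIM-style mixing rule:
    sqrt(eps_bulk) = sum(v_i * sqrt(eps_i)), volume fractions summing to 1."""
    assert abs(sum(v for v, _ in fractions_eps) - 1.0) < 1e-9
    return sum(v * math.sqrt(e) for v, e in fractions_eps) ** 2

def velocity(eps):
    """EM propagation velocity in a low-loss material: v = c / sqrt(eps)."""
    return C / math.sqrt(eps)

def reflection_coefficient(eps1, eps2):
    """Normal-incidence reflection coefficient at an interface:
    R = (sqrt(eps1) - sqrt(eps2)) / (sqrt(eps1) + sqrt(eps2))."""
    return (math.sqrt(eps1) - math.sqrt(eps2)) / (math.sqrt(eps1) + math.sqrt(eps2))

# Hypothetical sand with 30% porosity: pores filled with water or with air,
# over a quartz matrix (eps ~ 4.6, assumed).
eps_wet = crim_bulk_epsilon([(0.30, 80.0), (0.70, 4.6)])
eps_dry = crim_bulk_epsilon([(0.30, 1.0), (0.70, 4.6)])
print(eps_wet, eps_dry)                          # wet sand has a far higher bulk value
print(reflection_coefficient(eps_dry, eps_wet))  # strong reflection at a dry/wet contact
print(velocity(eps_wet), velocity(eps_dry))      # the wave slows in the wet sand
```

The example shows why water saturation dominates bulk dielectric constant: even a modest water-filled porosity raises the bulk value several-fold and produces a strong reflector at the water table.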
<urn:uuid:0927bd7c-55c4-488f-a904-f241738f293f>
3.671875
1,726
Academic Writing
Science & Tech.
30.196233
Voltage-gated ion channels open and close in response to changes in the electric environment of the membrane. This is achieved through a voltage sensor that detects voltage by use of key charged elements or "gating charges". Changes in membrane potential cause motion of the gating charges, thus inducing conformational changes in the whole protein and resulting in opening or closure of the channel. The opening event consists of positive charges moving outwardly, while they move inwardly to close the channels during repolarizations. The movement of these charges is detectable in voltage clamp as a small current that precedes the ionic currents and is known as the "gating current". Their movement can also be detected using optical methods, where a fluorescent dye can be coupled to the outside of the channel and changes in fluorescence can be measured as the local environment changes due to charge movement. For many voltage-gated ion channels the charges are conserved positively charged amino acids, and their identity has been studied extensively using mutagenesis and heterologous expression. Taken together, these studies indicate that most of the gating charges reside within the S4 segment of the channels. For a great review see: The Voltage Sensor in Voltage-Dependent Ion Channels, Francisco Bezanilla. Physiological Reviews, Vol. 80, No. 2, April 2000, pp. 555-592.
Experiments To Determine Gating Charges
The steep dependence of channel opening on membrane voltage allows voltage-dependent K+ channels to turn on almost like a switch. Opening is driven by the movement of gating charges that originate from arginine residues on helical S4 segments of the protein. To determine which sections of the protein sequence are responsible for this voltage "switch-sensor," Aggarwal and MacKinnon (Neuron, 1996) created charge-neutralizing mutations on the first four positive charges from the N-terminus and the C-terminus.
The gating charge response of C-terminus mutants was almost identical to that of wild-type channels; however, mutations induced on the N-terminus positive arginines resulted in channels that failed to open when the appropriate voltages were applied using the patch clamp method. Hence, their experiment shows that the movement of the NH2-terminal half but not the COOH-terminal half of the S4 segment underlies gating charge.
Movement of the Gating Charge
The end result of the gating charge is to open up the channel at a specified voltage. To do that, the sensing element of the channel has to move in response to a membrane potential, and it has to transfer its movement to the pore gate, causing it to open. To understand the nature of the movement of the S4 segment--the part of the channel that seems most likely to contain the gating charges--groups sequentially mutated its residues to cysteines. By using thiol-reacting agents in the extracellular space, these mutated cysteines allow for a determination of the location of each residue in different electrical states of the membrane. In addition, groups also attached fluorophores to the sequential cysteines to study the distances moved by the various residues of the S4 segment using FRET. Although these indirect studies are not definitive, they all indicate that the S4 helix undergoes a rotation upon depolarization. Some of the studies indicate that there might be additional movements, such as a coupled translocation on top of the rotation, or even, perhaps, movement by the surrounding helices. Once the S4 segment moves, it has to, somehow, convey its change to the tail ends of the S6 segment--the gate. Evidence is much scarcer as to how that happens. By changing the length of the S4-S5 linker, groups have shown that this part of the channel plays a role, suggesting that the linker segment connects to the S6 tail.
However, there is still a possibility that the movement of the S4 segment causes more of a gross change within the protein, forcing the rest of the surrounding helices to adapt to its change, which then leads to the opening of the gate.

Useful reviews:
Horn, R. "A new twist in the saga of charge movement in voltage-dependent ion channels." Neuron. 2000 Mar;25(3):511-4
Horn, R. "Coupled movements in voltage-gated ion channels." J Gen Physiol. 2002 Oct;120(4):449-53
Aggarwal SK, MacKinnon R. "Contribution of the S4 segment to gating charge in the Shaker K+ channel." Neuron. 1996;16(6):1169-77
Welcome to part 3 of my 10 part series on PHP. In the first two parts I introduced you to the language and to what software you needed to run it. In this episode we will look at some simple PHP syntax, and we'll write a couple of small scripts to get our feet wet and get a feel for the language.

What does a PHP script look like?

PHP traditionally is embedded into HTML code within a web page, as this was its initial intent. However it's becoming increasingly more popular for web application authors to write and generate the HTML using PHP from a page that is made entirely of PHP code. The following two examples of the world-renowned hello world program should help to show the difference:

Embedding in HTML:

    <html>
    <head>
    <title><?php print "My First Script"; ?></title>
    </head>
    <body>
    <?php print "<h1>Hello World</h1>"; ?>
    </body>
    </html>

Full PHP:

    <?php
    print "<title>My First Script</title>";
    print "<h1>Hello World</h1>";
    ?>

My personal preference is embedding the PHP code within the HTML code; however there are plenty of alternative theories out there, and I would encourage you to explore and develop your own style of writing scripts. As an example, if you ever decide to work with the Typo3 framework (http://www.typo3.org/) then you'll be highly likely to use the full-PHP approach. For the remainder of this series we'll be using the embedded HTML approach.

The first script line by line

As you can see, most PHP statements are enclosed in special PHP tags, which for the most part are no different to regular HTML tags. Because of this, we can freely use them interchangeably anywhere you would use any other HTML tags. Looking at the title line in the first example above, you can see that we generate the title using PHP code and a static string. You could use a variable, but since we're not covering them until next time I'll skip over that for now.

So, what can I put in these PHP tags?
Well, anything that's valid PHP can go in there: print statements, function calls, variable assignments and so on. As an example, we can use the print statement to write the output of the date function directly into a set of <h1> tags; date returns a string ready to be printed. Some functions however return different types such as arrays and resources; the good news is that in most cases PHP will try to format the output and display something useful. For arrays, however (we'll cover arrays in more detail in a future article in the series), you'll likely want to use the print_r routine, for example:

    <?php print_r($_POST); ?>

The small snippet above will display the contents of the POST array, something which can be very useful when testing form data. Simply create a PHP page with just the above in, and set it as the action in your HTML form tag, and hey presto - print_r will list the array items in sequence, and if wrapped up in <pre> tags will also lay the output out neatly line after line, as though it was being output to a terminal or command line. One last useful tip that often aids in debugging is to output the phpinfo call, e.g.:

    <?php phpinfo(); ?>

This provides a wealth of information about the environment PHP is running in, and can greatly help when first setting up your server, or when you need a quick reference of what extras you have installed.

For now we'll leave it there, but I would encourage you to read the online PHP Manual. Have a look at each of the function calls and see what they each return. Customize the script above to output different pieces of information. In this episode we looked at our first script and examined the ways in which it could be written. In the next episode we'll be looking at variables, the life blood of any program, not least PHP. And remember, don't be afraid to experiment; for the most part, unless you're playing carelessly with file functions, there's no damage you can do. One of the best places to learn is the PHP web site and the user code
Electric Potential Difference

Electric Field and the Movement of Charge

Perhaps one of the most useful yet taken-for-granted accomplishments of recent centuries is the development of electric circuits. The flow of charge through wires allows us to cook our food, light our homes, air-condition our work and living space, entertain us with movies and music and even allows us to drive to work or school safely. In this unit of The Physics Classroom, we will explore the reasons why charge flows through the wires of electric circuits and the variables that affect the rate at which it flows. The means by which moving charge delivers electrical energy to appliances in order to operate them will be discussed in detail.

One of the fundamental principles that must be understood in order to grasp electric circuits pertains to the concept of how an electric field can influence charge within a circuit as it moves from one location to another. The concept of electric field was first introduced in the unit on Static Electricity. In that unit, electric force was described as a non-contact force. A charged balloon can have an attractive effect upon an oppositely charged balloon even when they are not in contact.
The electric force acts over the distance separating the two objects. Electric force is an action-at-a-distance force. Action-at-a-distance forces are sometimes referred to as field forces. The concept of a field force is utilized by scientists to explain this rather unusual force phenomenon that occurs in the absence of physical contact. The space surrounding a charged object is affected by the presence of the charge; an electric field is established in that space. A charged object creates an electric field - an alteration of the space or field in the region that surrounds it. Other charges in that field would feel the unusual alteration of the space. Whether a charged object enters that space or not, the electric field exists. Space is altered by the presence of a charged object; other objects in that space experience the strange and mysterious qualities of the space. As another charged object enters the space and moves deeper and deeper into the field, the effect of the field becomes more and more noticeable. Electric field is a vector quantity whose direction is defined as the direction that a positive test charge would be pushed when placed in the field. Thus, the electric field direction about a positive source charge is always directed away from the positive source. And the electric field direction about a negative source charge is always directed toward the negative source. Electric fields are similar to gravitational fields - both involve action-at-a-distance forces. In the case of gravitational fields, the source of the field is a massive object and the action-at-a-distance forces are exerted upon other masses. When the concept of the force of gravity and energy was discussed in Unit 5 of The Physics Classroom, it was mentioned that the force of gravity is an internal or conservative force. When gravity does work upon an object to move it from a high location to a lower location, the object's total amount of mechanical energy is conserved.
However, during the course of the falling motion, there was a loss of potential energy (and a gain of kinetic energy). When gravity does work upon an object to move it in the direction of the gravitational field, then the object loses potential energy. The potential energy originally stored within the object as a result of its vertical position is lost as the object moves under the influence of the gravitational field. On the other hand, energy would be required to move a massive object against its gravitational field. A stationary object would not naturally move against the field and gain potential energy. Energy in the form of work would have to be imparted to the object by an external force in order for it to gain this height and the corresponding potential energy. The important point to be made by this gravitational analogy is that work must be done by an external force to move an object against nature - from low potential energy to high potential energy. On the other hand, objects naturally move from high potential energy to low potential energy under the influence of the field force. It is simply natural for objects to move from high energy to low energy; but work is required to move an object from low energy to high energy. In a similar manner, to move a charge in an electric field against its natural direction of motion would require work. The exertion of work by an external force would in turn add potential energy to the object. The natural direction of motion of an object is from high energy to low energy; but work must be done to move the object against nature. On the other hand, work would not be required to move an object from a high potential energy location to a low potential energy location. When this principle is logically extended to the movement of charge within an electric field, the relationship between work, energy and the direction that a charge moves becomes more obvious. 
Consider the diagram above in which a positive source charge is creating an electric field and a positive test charge being moved against and with the field. In Diagram A, the positive test charge is being moved against the field from location A to location B. Moving the charge in this direction would be like going against nature. Thus, work would be required to move the object from location A to location B and the positive test charge would be gaining potential energy in the process. This would be analogous to moving a mass in the uphill direction; work would be required to cause such an increase in gravitational potential energy. In Diagram B, the positive test charge is being moved with the field from location B to location A. This motion would be natural and not require work from an external force. The positive test charge would be losing energy in moving from location B to location A. This would be analogous to a mass falling downward; it would occur naturally and be accompanied by a loss of gravitational potential energy. One can conclude from this discussion that the high energy location for a positive test charge is a location nearest the positive source charge; and the low energy location is furthest away. The above discussion pertained to moving a positive test charge within the electric field created by a positive source charge. Now we will consider the motion of the same positive test charge within the electric field created by a negative source charge. The same principle regarding work and potential energy will be used to identify the locations of high and low energy. In Diagram C, the positive test charge is moving from location A to location B in the direction of the electric field. This movement would be natural - like a mass falling towards Earth. Work would not be required to cause such a motion and it would be accompanied by a loss of potential energy. 
In Diagram D, the positive test charge is moving from location B to location A against the electric field. Work would be required to cause this motion; it would be analogous to raising a mass within Earth's gravitational field. Since energy is imparted to the test charge in the form of work, the positive test charge would be gaining potential energy as the result of the motion. One can conclude from this discussion that the low energy location for a positive test charge is a location nearest a negative source charge and the high energy location is a location furthest away from a negative source charge. As we begin to discuss circuits, we will apply these principles regarding work and potential energy to the movement of charge about a circuit. Just as we reasoned here, moving a positive test charge against the electric field will require work and result in a gain in potential energy. On the other hand, a positive test charge will naturally move in the direction of the field without the need for work being done on it; this movement will result in the loss of potential energy. Before making this application to electric circuits, we need to first explore the meaning of the concept electric potential.
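The work-energy reasoning above can be made numeric. As a sketch (all numbers below are illustrative assumptions, not values from the text): in a uniform field of strength E, moving a charge q a distance d against the field takes work W = qEd, just as raising a mass m a height h against gravity takes W = mgh.

```python
# Sketch of the gravity/electric-field analogy in the text.
# Moving a mass against gravity and moving a positive charge against
# a uniform electric field both require external work, which is stored
# as potential energy. All numeric values here are illustrative.

g = 9.8      # gravitational field strength (N/kg)
E = 2.0e4    # uniform electric field strength (N/C), assumed value

def delta_pe_gravity(m, h):
    """Work done against gravity raising mass m (kg) by height h (m)."""
    return m * g * h

def delta_pe_electric(q, d):
    """Work done moving charge q (C) a distance d (m) against a uniform field."""
    return q * E * d

# Lifting a 2 kg mass by 3 m stores 58.8 J of gravitational PE.
print(delta_pe_gravity(2.0, 3.0))      # 58.8

# Moving a +5 microcoulomb test charge 0.1 m against the field stores
# 0.01 J of electric PE; released, the charge "falls" back with the
# field, losing that potential energy again.
print(delta_pe_electric(5.0e-6, 0.1))  # 0.01
```

A uniform field is assumed purely for simplicity; the field around the point source charges in the diagrams is not uniform, but the sign of the energy change works the same way.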
Charge distribution over a solid conductor

[Jan 3, 2013, 09:33 AM] #1

I was watching a video on the internet about charge distributions over solid conductors. The solid conductor was heart shaped and positively charged. The lecturer in the video said that when you touched this conductor, the charge would distribute itself non-uniformly over the surface of the conductor. I understand why non-uniformly; however, why would the distribution be on the surface? I know why there is no charge inside (explained via Gauss), but when I touch it, because humans are conductors, wouldn't the charge simply flow through my body and down to earth? Why is this not the case? When you are discharging an object, you just touch it with a conductor to transfer the charge.

EDIT: Would it help if I provided the link to the video?

[Jan 3, 2013, 12:39 PM] #2

You are making the assumption that you are grounded. Pretty much the only way you can have the charge flow through your body to the earth is if your body was in contact with a grounded conductor.

[Jan 3, 2013, 03:08 PM] #3

I see. So if I was standing on some conducting object then charge would flow to earth. If I am not, then the charge would distribute itself on the surface if we are concerned with conductors.
The Banded Coral Shrimp is a common reef creature. We have seen Banded Coral Shrimp near several Caribbean islands, including Bimini and Bonaire. The Banded Coral Shrimp is a cleaner. That is, the Banded Coral Shrimp spends a lot of its time cleaning fish. This shrimp will find a place to sit on the coral reef and wait for a fish to swim near and stop. Then the shrimp will crawl all over the fish to remove parasites, dead skin and scales, and whatever else is on the fish that shouldn't be there. The shrimp's bright colors make it easy to see. Its red and white bands serve as an advertisement of its cleaning services. Fish are attracted to the shrimp for cleaning, and fish will wait patiently while being cleaned by the shrimp. The Banded Coral Shrimp is a crustacean (say, "crust-AY-shun"). Crustaceans are animals such as shrimps, crabs, and lobsters. Crustaceans have armored bodies. Banded Coral Shrimps have 5 pairs of legs. Their front legs have claws like tweezers that they use to clean fish and pick up food. Their long antennae help them identify food and explore the reef, and they wave their antennae to get attention and attract fish to come nearby for cleaning. We saw this Banded Coral Shrimp at a reef called The Strip near Bimini in The Bahamas. This shrimp was on the sand at the bottom of the coral head 40 feet underwater. Also notice the green plants called Small-Leaf Hanging Vine Alga covering the coral head. The brown spotted lump on the sand next to the shrimp at the left edge of the picture is a Smooth Star Coral. Learn more about the Coral Reefs of Bimini on the 2001 ReefNews CD-ROM Bimini: Jewel of the Gulf Stream. Dr. Jonathan Dowell took this picture using a Nikonos V with 28mm lens and SB105 strobe. This photo was taken during the ReefNews research expedition to Bimini, June 2001.
Asteroid Belts Could Be Key to Finding Alien Life

By Ian Steadman, Wired UK

If we want to find intelligent life elsewhere in the universe, it might be wise to look for stars with asteroid belts similar to the one in our own Solar System. According to the theory of punctuated equilibrium, evolution goes faster and further when life has to make rapid changes to survive new environments — and few things have as dramatic an effect on the environment as an asteroid impact. If humans evolved thanks to asteroid impacts, intelligent life might need an asteroid belt like our own to provide just the right number of periodic hits to spur evolution on. Only a fraction of current exoplanet systems have these characteristics, meaning places like our own Solar System — and intelligent aliens — might be less common than we previously thought.

Rebecca Martin of the University of Colorado in Boulder and Mario Livio of the Space Telescope Institute in Baltimore have hypothesised that the location of the Solar System's asteroid belt — between Jupiter and Mars — is not an accident, and is actually necessary for life. As the Solar System formed, the gravitational forces between Jupiter and the Sun would have pulled and stretched clumps of dust and planetoids in the inner Solar System. The asteroid belt lies on the so-called "snow line" — fragile materials like ice will stay frozen further out, but closer in they will melt and fall apart. During the formation of the Solar System, cold rock and ice coalesced into the planets as we know them. However, as Jupiter formed, it shifted in its orbit closer to the Sun just a little bit before stopping. The tidal forces at work between Jupiter and the Sun would have torn apart the material on the snow line, preventing a planet forming and leaving behind an asteroid belt — which today has a total mass only one percent of that which would have been there originally.
Those asteroids would have bombarded the inner Solar System — including Earth — and, in theory, provided the raw materials needed for life (like water) and also giving evolution a kickstart by drastically changing the early Earth’s climate and environment. To check that this wasn’t just something restricted to our Solar System, Martin and Livio looked at data from Nasa’s Spitzer telescope, which has so far found infrared signals around 90 different stars which can indicate the presence of an asteroid belt. In every case, the belts were located exactly where Martin and Livio had predicted the snow line should be relative to each star’s mass, supporting their snow line theory of asteroid belt formation. If these are the circumstances which allow intelligent life to evolve somewhere, then it will make the task of finding aliens we can chat with a lot harder — few stars with exoplanets that we’ve found so far have the right setup of a dusty asteroid belt on the snow line with a gas giant parked just outside it. If the gas giant has formed but not shifted in slightly, as Jupiter did, then the belt will become so full of large objects that the inner planets will be bombarded too frequently for life to fully take hold; if the gas giant continues to move inwards as it orbits, it won’t just stop the belt turning into a planet — it’ll suck everything of any serious size up and leave behind only minor fragments of space rock and dust, including any planets life could evolve on. Martin and Livio then looked at 520 gas giants found orbiting other stars — in only 19 cases were they outside of where that star’s snow line would be expected to be. That means fewer than four percent of exoplanet systems will have the right setup to support the evolution of advanced, intelligent life in accordance with the punctuated equilibrium theory. 
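The headline statistic follows directly from the two numbers given: 19 suitably placed gas giants out of 520 surveyed is indeed under four percent.

```python
# Quick check of the fraction quoted above: of 520 gas giants surveyed,
# only 19 orbited outside their star's expected snow line.
suitable = 19
surveyed = 520

fraction = suitable / surveyed
print(f"{fraction:.1%}")  # 3.7%

# "fewer than four percent of exoplanet systems"
assert fraction < 0.04
```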
Martin, the lead author of the research, published in Monthly Notices of the Royal Astronomical Society, writes: “Our study shows that only a tiny fraction of planetary systems observed to date seem to have giant planets in the right location to produce an asteroid belt of the appropriate size, offering the potential for life on a nearby rocky planet. Our study suggests that our Solar System may be rather special.” Source: Wired UK
Solar Eclipses and Lunar Eclipses are rare and very dramatic events. Even if these eclipses are partial, they are still worth a look. A Solar Eclipse occurs when the Moon comes between Earth and the Sun; a Lunar Eclipse occurs when the Earth comes between the Sun and the Moon, casting its shadow on the Moon. Because the Moon is much smaller than the Earth, Solar Eclipses are rarer for a specific area on Earth, because the shadow produced by the Moon is small compared to the shadow made by the Earth during a Lunar Eclipse.

A Total Solar Eclipse like this one is an unforgettable sight. Please visit the Solar Viewing Safety page for details on how to safely view this wonderful phenomenon. One of the effects of a Solar Eclipse is a marked decrease in overall sunlight in the region where the eclipse is occurring. To me, this is as dramatic as the eclipse itself. Going outside during an eclipse is encouraged - please enjoy this rare event!

A Lunar Eclipse with the traditional copper appearance is seen here. Only a total lunar eclipse will look this dramatic. The color is the result of red light refracted through Earth's atmosphere. It is perfectly safe to view a Lunar Eclipse - and it is certainly safe to go outside when a Lunar Eclipse is occurring. This wonderful image is care of the Grasslands Observatory.
Have any experiments been carried out involving sprouting and growing plants in a zero gravity environment? If so, what was the outcome? How did the plants sprout out of the soil without gravity? Did they grow outward or toward light sources?

There have been several experiments in growing plants in microgravity (strictly speaking, we do not achieve "zero-g" since astronauts remain in orbit about the Earth). Changes in plant growth due to the influence of a gravity field are sometimes called gravimorphogenesis. More specifically, gravitropism is a differential growth response of plant organs to gravity. For example, roots grow downwards (positive gravitropism) and shoots grow upwards (negative gravitropism) on Earth. Studies (e.g. 1) suggest that in micro-g there is no preferred direction; roots may grow "up" and shoots "down". It is thought that this growth response is due to the relative distribution of auxin in the plant. On Earth, auxin will preferentially move down into the root-tips due to the location of amyloplasts in the root-cap cells. In micro-g, amyloplasts do not settle at the "bottom" of the plant, therefore there is a more generalized distribution of auxin, and therefore there will be no preferred growth direction. In addition, changes in plant gene expression as a response to micro-g environments are also being investigated (2) and suggest that auxin transport inhibitors may block the activation of the auxin responsive promoters in Nicotiana spp. (tobacco).

(1) Mechanisms in the Early Phases of Plant Gravitropism. CRC Crit Rev Plant Sci. 2000;19(6):551-73. PMID: 11806421
(2) Transcription Profiling of the Early Gravitropic Response in Arabidopsis Using High-Density Oligonucleotide Probe Microarrays. Plant Physiol. 2002 October; 130(2): 720–728. doi: 10.1104/pp.009688

I find over 100 articles returned by a PubMed query for "Arabidopsis microgravity."
Arabidopsis was taken aboard at least one if not more Space Shuttle missions to answer this and other similar questions. A couple recent papers from this search are: Spaceflight transcriptomes: unique responses to a novel environment. Paul AL, Zupanska AK, Ostrow DT, Zhang Y, Sun Y, Li JL, Shanker S, Farmerie WG, Amalfitano CE, Ferl RJ. Astrobiology. 2012 Jan;12(1):40-56. PMID: 22221117 An endogenous growth pattern of roots is revealed in seedlings grown in microgravity. Millar KD, Johnson CM, Edelmann RE, Kiss JZ. Astrobiology. 2011 Oct;11(8):787-97. doi: 10.1089/ast.2011.0699. PMID: 21970704 Parabolic flight induces changes in gene expression patterns in Arabidopsis thaliana. Paul AL, Manak MS, Mayfield JD, Reyes MF, Gurley WB, Ferl RJ. Astrobiology. 2011 Oct;11(8):743-58. doi: 10.1089/ast.2011.0659. PMID: 21970703 Gene expression changes in Arabidopsis seedlings during short- to long-term exposure to 3-D clinorotation. Soh H, Auh C, Soh WY, Han K, Kim D, Lee S, Rhee Y. Planta. 2011 Aug;234(2):255-70. PMID: 21416242 A novel phototropic response to red light is revealed in microgravity. Millar KD, Kumar P, Correll MJ, Mullen JL, Hangarter RP, Edelmann RE, Kiss JZ. New Phytol. 2010 May;186(3):648-56. PMID: 20298479
Location: in the constellation Aquila
Mass: 14 times the mass of the Sun
Diameter: roughly 50 miles (85 km), equal to the size of a large city

Measuring the motions of a companion star

Because a black hole is both massive and compact, it exerts a strong gravitational pull on the material around it. Astronomers can deduce the presence of a stellar-mass black hole (one that is a few times as massive as the Sun) by measuring the velocity of a companion star in a binary system. In a system that contains a black hole and another type of star (one that produces visible light or other forms of energy), the orbital speeds of the two component stars are much greater than in a system with two "normal" stars (stars that are similar to the Sun). Measuring the orbital speeds of the two components in a binary system, along with the distance between the stars, reveals the system's total mass. Using other techniques, astronomers can determine the mass of the luminous companion. By subtracting that from the system's total mass, they can determine the mass of the dark companion, which reveals whether it is a black hole or a less-dense object like a neutron star. This technique is like the one that astronomers use to deduce the masses of planets in solar systems other than our own.

This document was last modified: June 28, 2011.
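The mass bookkeeping described above follows from Kepler's third law: a binary's total mass is M = 4π²a³/(G P²), where a is the separation and P the orbital period; subtracting the luminous star's mass leaves the dark companion's. A sketch with illustrative numbers (the separation, period, and companion mass below are assumptions, not values from the text):

```python
import math

G = 6.674e-11      # gravitational constant (m^3 kg^-1 s^-2)
M_SUN = 1.989e30   # solar mass (kg)

def total_mass(a_m, period_s):
    """Total binary mass from Kepler's third law: M = 4*pi^2*a^3 / (G*P^2)."""
    return 4 * math.pi**2 * a_m**3 / (G * period_s**2)

# Sanity check with the Earth-Sun system: a = 1 AU, P = 1 year
# should recover roughly one solar mass.
au = 1.496e11
year = 3.156e7
print(total_mass(au, year) / M_SUN)   # ~1.0

# Hypothetical black-hole binary (illustrative numbers only):
# separation 0.2 AU, period 10 days, luminous companion of 3 solar masses.
a = 0.2 * au
P = 10 * 24 * 3600
m_companion = 3 * M_SUN
m_dark = total_mass(a, P) - m_companion
print(m_dark / M_SUN)   # ~7.7 solar masses with these assumed numbers
```

With the assumed orbit, the dark companion comes out well above the neutron-star mass range, which is the kind of comparison the technique rests on.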
What does symmetry have to do with physics? As my ultimate goal is talk about physics, it's time for a paragraph on that subject, just so we don't lose sight of where we're going. So imagine a game of pool. Pool is a game of physics, the outcome of any play depends on factors such as friction, collision angles, the coefficient of restitution of the balls (ie. their bounciness) and a whole host of other factors. But despite the complexity of the factors involved, we know that they are invariant under rotation. What I mean is that if we were to rotate the table and everything on it by some random angle (eg. from a North-South orientation to a NW-SE orientation) it makes no difference to the game. (Well, maybe the floor has a slight tilt, so let's rotate the room as well if that's the case.) If everything is rotated together then you end up with a situation that's essentially identical. One of the fundamental principles of physics is that the laws of physics don't care about the absolute orientation of anything. We called the set of rotations in a plane SO(2) so we can summarise this by saying that pool has SO(2) symmetry. But does it have SO(3) symmetry? Obviously rotating the table so that it tilts will change the way the game plays out. But that's because of an 'accident' of history, we just happen to be on the surface of the Earth and there's a gravitational field directed downwards. But from the point of view of fundamental physics there's nothing special about 'down'. If we were to tilt the pool table and then tilt the entire Earth underneath it, we should still expect to see no change to the physics of pool. In fact, we get to try this experiment out every day because the entire pool/Earth system rotates through (approximately) 360° every day and we don't notice any change in the way pool tables work. So pool actually has SO(3) symmetry, but because of stuff going on in our neighbourhood (ie. 
having a planet under our feet), it looks like it only has SO(2) symmetry. SO(3) is a fundamental symmetry of physics, but the fact that on Earth gravity messes this up and leaves us with SO(2) symmetry is a phenomenon known as symmetry breaking. More of this later; back to the mathematics.

Lie Group Representations

I've talked about how groups can act on spaces. For example SO(3) acts on 3D space and SO(2) acts on 2D spaces, ie. planes. But there's some flexibility here and we can decouple the group from the space it acts on. The way SO(2) acts on a plane is given by a certain rule. We can make up new rules and study those. For example, we could allow SO(2) to act on 3D space by this rule: rotate your x and y coordinates using the same rule as in the plane, and leave z unchanged. So if a certain 2D rotation (ie. a 90 degree rotation) maps the point (x,y) to (y,-x), then according to this new rule it maps (x,y,z) to (y,-x,z). We could make up other rules. For example we could apply SO(2) to y and z, and leave x untouched. In fact, we can make infinitely many rules like this because there are infinitely many axes (by axis I just mean direction) we could choose to leave untouched. We can define a representation to be a rule that takes an element of a group and turns it into a transformation on a space. We're interested mainly in what are called linear representations (I'll give some justification for this in part 3). These are representations where the transformations map the origin to the origin, map straight lines to straight lines, and map the midpoints between pairs of points to the midpoints between pairs of points. It's not hard to see that rotations do all three of these things. In fact, as I will only talk about linear representations, I'll drop the word linear from now on. I just showed how SO(2) has multiple representations because of the different ways it can be applied to 3D space.
But we can also define alternative rules for how to apply SO(2) to the same space, eg. the plane. Here's a really simple one called the trivial representation: we simply say that elements of SO(2) do nothing. It's not very interesting, but it is a perfectly valid representation. Here's another: apply elements of SO(2) backwards. So if we have an element of SO(2) that says "rotate by 10° clockwise" the backwards rule says "apply a 10 degree rotation anticlockwise". It's a different representation. But if you look at the plane from underneath, it actually looks just like the original representation. So even though these are different representations, there's a sense in which it's equivalent to the original one. But here's a representation of SO(2) that really is different to the original one: If the element of SO(2) says "rotate by x°" we rotate by 2x° instead. We can think of it as a double-speed rule. We apply the rotations in SO(2) twice. If we run through the elements of SO(2) starting at 0° and working our way up to 360°, then the representation rotates our space twice as fast and ends up rotating the space twice. What about a half-speed rule? Sounds like it might work. But it fails for a simple reason. A 360 degree rotation is the same as a zero degree rotation. But the half-speed rule says that the former should rotate by 180° and the latter should rotate by 0°. These are distinct rotations, so our rule doesn't make any sense. As a result we're restricted to rules that are n-speed rules, where n is an integer. It's not hard to see that if we choose n to be a negative integer then by looking at the plane from underneath we get the same rule as using -n, looking from above. So we can discard the ones corresponding to negative integers. And we're left with one rule for each non-negative integer. In fact, these are all the representations that are possible on a 2D plane. [Optional "advanced section": Now go back to 3D again. 
When we apply SO(2) to 3D space we find that SO(2) always leaves some axis fixed. So given a 3D representation of SO(2), our 3D space always splits into a 1D space that's left unchanged, and a 2D space. You can think of SO(2) as using the trivial rule on the 1D space, and using any of the rules in the previous paragraph on the 2D space. Something similar happens in any dimension of space. The representation of SO(2) will split up into pieces with some axes left untouched, and others, always grouped in pairs, that transform like in the previous paragraph. In other words, the representations of SO(2) can be broken down into fundamental building blocks which we call irreducible representations. In the previous paragraph I actually classified all of the irreducible representations of SO(2). We can use the properties of SO(2) to understand other Lie groups. The idea is that not only is SO(2) frequently a subgroup of Lie groups, it's often a subgroup in multiple ways, ie. there are multiple ways to find a copy of SO(2) in SO(3). We've already seen that SO(2) can be embedded in SO(3) by interpreting rotations around any fixed axis as elements of a sub-SO(2). But notice how if we find two different embeddings of SO(2), they are forced to "interfere" with each other. For example, let's choose an SO(2) that acts on the x-y plane as described above. Suppose we now pick another SO(2) subgroup. No matter how we choose it, it must act on either x or y. There simply isn't room to find two SO(2)'s that don't at some point overlap with each other. But imagine the group of 4-dimensional rotations, called SO(4). (It's not that scary, it's just like SO(3) except that we can make rotations that "mix up" any pair of directions, or combinations of such rotations.) We could pick one SO(2) that acts on the x-y plane and another that acts on the z-t plane (assuming the fourth axis is called 't'). But we won't be able to pick more than two for the same reason as before.
Suppose we pick as many SO(2)'s as are possible. Then what we have is what's known as a maximal torus. Here's a feeble attempt at drawing all of this. One of the SO(2)'s inside SO(4) rotates one pair of axes into each other, the other one rotates the other pair: Now think about representations of a Lie group. A representation is a rule that tells you how each element in the group transforms our space. As the elements of our SO(2) subgroups obviously live in the group, the rule must apply to these also. So a representation of a group also gives a representation of all of the SO(2)'s in it. So each SO(2) making up our maximal torus must basically act like we described above: by rotating some pairs of axes around at some "speed" and leaving other axes untouched. So if we pick a maximal torus of a Lie group, any representation splits up into a bunch of pairs of axes and each pair has an integer "speed" associated to each SO(2) in the maximal torus. The tuple of "speeds" for each pair of axes is known as a weight. By interpreting the weights as the coordinates of points, the collection of weights that arise from any particular representation can be drawn in a diagram. The dimension of the diagram is the number of non-interfering SO(2)'s in the maximal torus. And if you want to see what these diagrams look like, Garrett Lisi has some nice ones in his paper. Note that that paper also contains some "root diagrams". I don't have time to talk about those except to say that (1) they are weight diagrams for one particular special representation and (2) they tell you a lot about the geometry of all possible sets of weights for a particular Lie group. One last thing for the "advanced" section: a similar analysis can be carried out for Lie algebras as opposed to Lie groups. When physicists draw diagrams of weights they are often talking about the weights of Lie algebras, but these things are intimately related.
There's probably one important message to take from this section: there are a lot of constraints on representations; you can't just make up any old rule. So just knowing that a given Lie group acts on some space, you already know a lot of information, even if you don't know what the exact representation is. BTW One of the biggest applications of representation theory is in chemistry, where you can read off information about the number of electrons allowed in atomic orbitals directly from representations of SO(3).] That was pretty tough. Next time I'll talk about physics and it should get a bit easier. I'll explain why much of modern physics is the study of Lie group representations and I'll explain the 'exceptional' and 'simple' in the title of Garrett's paper. (And I apologise for all of my sins of omission. For example, the analysis above only works for certain types of Lie group, and really everything should be done using complex numbers, not real numbers. But I'm trying to compress a few hundred pages of mathematics into a single posting.)
Population growth rates of reef sharks with and without fishing on the Great Barrier Reef: robust estimation with multiple models

Hisano, Mizue, Connolly, Sean R., and Robbins, William D. (2011) Population growth rates of reef sharks with and without fishing on the Great Barrier Reef: robust estimation with multiple models. PLoS ONE, 6 (9). pp. 1-10.

Overfishing of sharks is a global concern, with increasing numbers of species threatened by overfishing. For many sharks, both catch rates and underwater visual surveys have been criticized as indices of abundance. In this context, estimation of population trends using individual demographic rates provides an important alternative means of assessing population status. However, such estimates involve uncertainties that must be appropriately characterized to credibly and effectively inform conservation efforts and management. Incorporating uncertainties into population assessment is especially important when key demographic rates are obtained via indirect methods, as is often the case for mortality rates of marine organisms subject to fishing. Here, focusing on two reef shark species on the Great Barrier Reef, Australia, we estimated natural and total mortality rates using several indirect methods, and determined the population growth rates resulting from each. We used bootstrapping to quantify the uncertainty associated with each estimate, and to evaluate the extent of agreement between estimates. Multiple models produced highly concordant natural and total mortality rates, and associated population growth rates, once the uncertainties associated with the individual estimates were taken into account.
Consensus estimates of natural and total population growth across multiple models support the hypothesis that these species are declining rapidly due to fishing, in contrast to conclusions previously drawn from catch rate trends. Moreover, quantitative projections of abundance differences on fished versus unfished reefs, based on the population growth rate estimates, are comparable to those found in previous studies using underwater visual surveys. These findings appear to justify management actions to substantially reduce the fishing mortality of reef sharks. They also highlight the potential utility of rigorously characterizing uncertainty, and applying multiple assessment methods, to obtain robust estimates of population trends in species threatened by overfishing.

© 2011 Hisano et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
I don’t know much about physics, but I’m an electrical enthusiast. In simple terms, ignoring the maths, what exactly is plasma? Is a corona a type of plasma involved with electromagnetic radiation? As you can probably see I’m clueless. A simple explanation of the basics would be appreciated. Thanks.

Plasma has been called a fourth state of matter because it’s what you get when a gas is heated enough that it ionizes: some electrons are no longer bound to a specific atom. Because of the free electrons, it’s highly conductive. This is used in fluorescent tubes, but the plasma is so conductive that a ballast coil is run in series to limit the current. You can also see plasma in a neon sign, a welding arc, and decorative plasma globes. Corona is a breakdown of air in an electric field, and that’s also a plasma. But you’d only have electromagnetic radiation with alternating current, and plasma can also be created with a direct current. If you see corona leakage in connection with a picture tube circuit, it’s DC. With a Tesla coil, it’s radio-frequency AC. In a fluorescent tube or neon sign, it’s 60-cycle AC. If you look on YouTube, there are videos of the candle-in-the-microwave-oven experiment. Yes, a flame is also a plasma, and being conductive it will absorb the radiation in a microwave oven. That’s how they get the blobs of plasma to float around: they’re being fed by the microwave energy.
Photo: Chad Horwedel (flickr) Maybe you remember those old westerns where some outlaw is being chased by a posse, and in order to escape, he lies down in shallow water and breathes through a hollow reed. The posse, befuddled, rides on by. Unfortunately, this trick doesn’t actually work. A little physics keeps getting in the way. Taking a breath is such a common occurrence that we rarely even notice we are doing it, but every time we inhale, we are redistributing the air in our immediate vicinity. Our chests are expanding, both filling up our insides as well as pushing aside the air that surrounds us. That’s easy enough to do with air, but water is a much denser medium. Pushing it aside takes some doing, as you quickly find when trying to wave your hand underwater. Also, unlike breathing above water, our cowboy is not just redistributing the medium. He is actually displacing the medium, water, in order to pull a pocket of air down into it. As our cowboy begins to inhale, he will discover that the external pressure on his lungs exerted by the surrounding water is much greater than his muscular ability to inflate them. In essence, with every breath he has to push aside most of the water in the pond. A swimmer with a snorkel can still breathe, but notice that snorkelers float right on the top. Were they to try sinking underneath, even a moderate amount, the same thing would happen. The other problem is that a hollow reed is about as thin as a straw. Try breathing through a straw even above water and you’ll soon be gasping.
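A quick back-of-the-envelope calculation shows why the reed trick fails. The sketch below is my own illustration; the few-kilopascal figure for the suction the breathing muscles can comfortably sustain is an assumed round number, not a value from the article:

```python
# Hydrostatic pressure squeezing a submerged chest: p = rho * g * h.
RHO_WATER = 1000.0     # density of fresh water, kg/m^3
G = 9.81               # gravitational acceleration, m/s^2
SUSTAINABLE_KPA = 3.0  # assumed comfortable breathing effort, ~3 kPa

def hydrostatic_pressure_kpa(depth_m):
    """Extra pressure (kPa) pressing on the lungs at a given depth."""
    return RHO_WATER * G * depth_m / 1000.0

for depth in (0.1, 0.5, 1.0):
    p = hydrostatic_pressure_kpa(depth)
    verdict = "breathable" if p < SUSTAINABLE_KPA else "too deep to inhale"
    print(f"{depth:.1f} m: {p:.2f} kPa -> {verdict}")
```

Even half a metre down, the water squeezes the chest harder than the breathing muscles can comfortably pull against, which is why snorkelers float right at the surface.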
Any name that appears in a function's argument list, or any name that is set to a value anywhere in the function, is said to be local to the function. If a local name is the same as a name from outside the function (a so-called global name), references to that name inside the function will refer to the local name, and the global name will be unaffected. Here is an example: >>> x = 'lobster' >>> y = 'Thermidor' >>> def f(x): ... y = 'crevettes' ... print x, y ... >>> f('spam') spam crevettes >>> print x, y lobster Thermidor Keyword parameters have a special characteristic: their names are local to the function, but they are also used to match keyword arguments when the function is called.
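The behaviour of keyword parameters can be seen in a short sketch (written in modern Python 3 syntax; the function and names are made up for illustration):

```python
def greet(name, greeting="Hello"):
    # 'greeting' is a local name inside the function body...
    greeting = greeting + ","
    return greeting + " " + name

# ...but callers also use the name 'greeting' to pass the argument by keyword.
print(greet("spam"))                      # uses the default value
print(greet("spam", greeting="Bonjour"))  # matched by keyword name

greeting = "Thermidor"        # a global with the same name
greet("eggs", greeting="Hi")
print(greeting)               # the global is unaffected by the call
```

Rebinding `greeting` inside the function touches only the local name, exactly as in the `x`/`y` example above.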
Referred to as typhoons in the Pacific, hurricanes in the Atlantic, and tropical cyclones by meteorologists, these intensely powerful storms claim more lives each year than any other kind of storm. They can uproot trees, tear houses off their foundations, and throw cars around as if they were toys. It is not unusual for winds to reach 250 mph (360 kph) in hurricanes. And their "storm surge" can drive a wall of water 25 feet high that completely deluges coastal areas. But what role does the Sun play in the development of hurricanes? During the summer and fall months, the Sun's radiation beats down continuously on the ocean waters in the tropics. As a result, warm air rises and drifts upward, and cooler air from the sky takes the place of the rising, warm air. This is known as a convection cycle. The convection cycle is what it takes to start a storm. The cooler air begins spinning counterclockwise around the developing storm. As the warm air continues to rise in the convection cycle, the atmospheric pressure falls, making the winds blow stronger. The cycle continues on and on until it either "blows out" or the conditions are right for it to turn into a hurricane. Before a hurricane can develop, the ocean waters must have a surface temperature of at least 80 degrees F, the air near the ocean surface must hold a great deal of moisture, and the winds must be converging, meaning coming together from different directions. As a storm system develops, moisture continues to evaporate from the ocean surface. This moisture condenses as it rises. Soon, clouds and rain become caught up in the circular motion of the storm. As long as the tropical storm remains over the warm water of the open ocean, it can grow stronger and larger. The system is called a tropical depression if the winds of the developing storm remain less than 35 mph. The storm is classified as a tropical storm when wind speeds reach 35 to 74 mph.
Now the storm is given a name so that it can be identified and tracked, and so weather forecasts can be made. When the storm's winds reach 75 mph or more, a hurricane is said to be "born". Hurricanes can be as large as 600 miles in diameter and can reach heights of 50,000 feet into the sky. ©Copyright 1998 Elizabeth Beckett, Holly Bernitt, and Vishwa Chandra.
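The wind-speed thresholds described above can be captured in a few lines. This is my own toy sketch of the classification rules stated in the text, not an official meteorological algorithm:

```python
def classify_storm(wind_mph):
    """Classify a tropical system by sustained wind speed (mph),
    following the thresholds given in the text:
    under 35 -> depression, 35-74 -> tropical storm, 75+ -> hurricane."""
    if wind_mph < 35:
        return "tropical depression"
    elif wind_mph <= 74:
        return "tropical storm"
    else:
        return "hurricane"

for wind in (20, 50, 80):
    print(wind, "mph ->", classify_storm(wind))
```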
Does managed coastal realignment create saltmarshes with ‘equivalent biological characteristics’ to natural reference sites?

Mossman, H. L., Davy, A. J., Grant, A. (2012), Does managed coastal realignment create saltmarshes with ‘equivalent biological characteristics’ to natural reference sites?. Journal of Applied Ecology, 49: 1446–1456. doi: 10.1111/j.1365-2664.2012.02198.x

Article first published online: 19 SEP 2012. © 2012 The Authors. Journal of Applied Ecology © 2012 British Ecological Society.

Funding: Natural Environment Research Council

Keywords: Atriplex portulacoides; dyke breach; habitat creation; managed retreat; redox potential; salt-marsh restoration

- Coastal saltmarshes provide distinctive biodiversity and important ecosystem services, including coastal defence, supporting fisheries and nutrient cycling. However, c. 50% of the world's coastal marshes are degraded or have been lost, with losses continuing. In both Europe and North America, there is a legal requirement to create habitats to substitute for losses. How well do created habitats replicate natural salt marshes?
- We compared plant communities and environmental characteristics of 18 deliberately realigned (managed realignment, MR - between 1 and 14 years old) and 17 accidentally realigned (AR, 25–131 years old) sites with those on 34 natural reference saltmarshes in the UK.
- Halophytic species colonized individual realignment sites rapidly, attaining species richness similar to nearby reference marshes after 1 year. Nevertheless, the community composition of MR sites was significantly different from reference sites, with early-successional species remaining dominant, even on the high marsh.
- The dominance of pioneer species on the low and mid-marsh may be because, at the same elevation, sediments were less oxygenated than on reference sites. Sediments were well oxygenated on the high marsh, but were often drier than on natural marshes. - Overall community composition of AR marshes was not significantly different to reference marshes, but the characteristic perennials Limonium vulgare, Triglochin maritima, Plantago maritima and Armeria maritima remained relatively rare. In contrast, the shrub Atriplex portulacoides was more abundant, and its growth form may inhibit or delay colonization by other species. - Synthesis and applications. Marshes created by managed realignment do not satisfy the requirements of the EU Habitats Directive. Adherence to the Directive might be improved by additional management interventions, such as manipulation of topographic heterogeneity or planting of mid- and upper-marsh species. However, given the inherent variation in natural saltmarshes and projected environmental change, policies that require exact equivalence at individual sites may be unachievable. More realistic goals might require minimum levels of a range of ecosystem functions on a broader scale, across catchments or regions.
APS/S. Benjamin and J. Smith Lower right: Structurally, the NV center consists of a pair of adjacent defects in the diamond lattice, namely, a nitrogen atom substituting for a carbon, together with a vacancy (a missing atom). Main figure: In abstract terms, each NV center is composed of at least one nuclear spin, and an electron complex with total spin 1. Bermudez et al. have determined that under a suitable driving field, the electron spins will constitute a channel by which the nuclei in neighboring NV centers can interact, thus achieving entanglement.
Please read my blog on ‘TIME-a Non Linear Theory’ to know of the full implications. Stephen Hawking has said: “We should look for evidence of a collision with another universe in our distant past.” Some experts believe that what we call the universe may only be one of many. Is there any conceivable way that we could ever detect and study other universes if they exist? Is the idea even falsifiable? Working with Vahe Gurzadyan of the Yerevan Physics Institute in Armenia, Penrose made the sensational claim that he had glimpsed a signal originating from before the Big Bang. Penrose came to this conclusion after analyzing maps from the Wilkinson Microwave Anisotropy Probe (WMAP). These maps reveal the cosmic microwave background, believed to have been created just 300,000 years after the Big Bang and offering clues to the conditions at that time. Penrose’s finding runs directly counter to the widely accepted inflationary model of cosmology, which states that the universe started from a point of infinite density known as the Big Bang about 13.7 billion years ago, expanded extremely rapidly for a fraction of a second and has continued to expand much more slowly ever since, during which time stars, planets and ultimately humans have emerged. That expansion is now believed to be accelerating due to a scientific X factor called dark energy and is expected to result in a cold, uniform, featureless universe. Penrose, however, reports Physics World, takes issue with the inflationary picture “and in particular believes it cannot account for the very low entropy state in which the universe was believed to have been born – an extremely high degree of order that made complex matter possible.
He does not believe that space and time came into existence at the moment of the Big Bang but that the Big Bang was in fact just one in a series of many, with each big bang marking the start of a new “aeon” in the history of the universe.” The core concept in Penrose’s theory is the idea that in the very distant future the universe will in one sense become very similar to how it was at the Big Bang. Penrose says that “at these points the shape, or geometry, of the universe was and will be very smooth, in contrast to its current very jagged form. This continuity of shape, he maintains, will allow a transition from the end of the current aeon, when the universe will have expanded to become infinitely large, to the start of the next, when it once again becomes infinitesimally small and explodes outwards from the next big bang. Crucially, he says, the entropy at this transition stage will be extremely low, because black holes, which destroy all information that they suck in, evaporate as the universe expands and in so doing remove entropy from the universe.” The foundation for Penrose’s theory is found in the cosmic microwave background, the all-pervasive microwave radiation that was believed to have been created when the universe was just 300,000 years old and which tells us what conditions were like at that time. The evidence was obtained by Vahe Gurzadyan of the Yerevan Physics Institute in Armenia, who analysed seven years’ worth of microwave data from WMAP, as well as data from the BOOMERanG balloon experiment in Antarctica. Penrose and Gurzadyan say they have clearly identified concentric circles within the data – regions in the microwave sky in which the range of the radiation’s temperature is markedly smaller than elsewhere. The Cosmic Microwave Background (CMB) radiation is the remnant heat from the Big Bang. This radiation pervades the universe and, if we could see in microwaves, it would appear as a nearly uniform glow across the entire sky. 
However, when we measure this radiation very carefully we can discern extremely faint variations in the brightness from point to point across the sky, called “anisotropy”. These variations encode a great deal of information about the properties of our universe, such as its age and content. The “Wilkinson Microwave Anisotropy Probe” (WMAP) mission has measured these variations and found that the universe is 13.7 billion years old, and it consists of 4.6% atoms, 23% dark matter, and 72% dark energy. According to Penrose and Gurzadyan, as described in arXiv: 1011.3706, these circles allow us to “see through” the Big Bang into the aeon that would have existed beforehand. They are the visible signature left in our aeon by the spherical ripples of gravitational waves that were generated when black holes collided in the previous aeon. The “Penrose circles” pose a huge challenge to inflationary theory because this theory says that the distribution of temperature variations across the sky should be Gaussian, or random, rather than having discernable structures within it. Julian Barbour, a visiting professor of physics at the University of Oxford in an interview with Physics World, says that these circles would be “remarkable if real and sensational if they confirm Penrose’s theory”. They would “overthrow the standard inflationary picture”, which, he adds, has become widely accepted as scientific fact by many cosmologists. But he believes that the result will be “very controversial” and that other researchers will look at the data very critically. He says there are many disputable aspects to the theory, including the abrupt shift of scale between aeons and the assumption, central to the theory, that all particles will become massless in the very distant future. He points out, for example, that there is no evidence that electrons decay. 
Penrose and colleague Gurzadyan have answered the numerous critics who say that the circles do not contradict the standard model of cosmology in a follow-up paper, published on arXiv. In the short article, they agree that the presence of circles in the CMB does not contradict the standard model of cosmology. However, the existence of “concentric families” of circles, they argue, cannot be explained as a purely random effect given the pure Gaussian nature of their original analysis. “It is, however a clear prediction of conformal cyclic cosmology,” reports Physics World.
Power series supply us with a new way to describe functions: we can specify the coefficients of each power in the series. This raises the following questions. Given a function defined previously, what does its power series look like? What do the terms of the power series tell us about the function? A standard power series looks like

$f_0 + f_1 x + f_2 x^2 + \dots + f_n x^n + \dots$

We can also look at power series for which we subtract some value, call it $z$, from the variable $x$. These look like

$f_0 + f_1 (x - z) + f_2 (x - z)^2 + \dots + f_n (x - z)^n + \dots$

A standard notation for such things is

$\sum_{n=0}^{\infty} f_n (x - z)^n$

We can answer our second question by differentiating the function represented by the series $n$ times and then setting $x = z$. Differentiating $n$ times kills off all terms which have degree strictly less than $n$, and setting $x = z$ kills all terms which have degree strictly greater than $n$. We are left with the effect of these operations on the $n$th term alone. We then can deduce:

$f^{(n)}(z) = n! \, f_n$

This tells us that the $n$th term, $f_n$, in this series is the $n$th derivative of $f(x)$ evaluated at $x = z$, divided by $n!$. This gives us an answer to our first question as well. If we apply this statement to each term in the series, we find:

$f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(z)}{n!} (x - z)^n$

Let us apply this result to some functions we know. First, the exponential function is its own derivative, hence its own second derivative, and so on. Evaluating all of these at $x = z$ gives the same answer, namely $\exp(z)$. This tells us:

$\exp(x) = \sum_{n=0}^{\infty} \frac{\exp(z)}{n!} (x - z)^n$

If we keep differentiating the sine function, we first get cosine, then minus sine, then minus cosine, then sine again, and the derivatives repeat this pattern in blocks of 4. This gives us:

$\sin(x) = \sin(z) + \cos(z)(x - z) - \frac{\sin(z)}{2!}(x - z)^2 - \frac{\cos(z)}{3!}(x - z)^3 + \frac{\sin(z)}{4!}(x - z)^4 + \dots$

Exercise 15.4 Do the same thing for cos(x), and do the same for both sine and cosine for $z = 0$. Deduce the "addition theorems" for sines and cosines from all these results.

Radius of Convergence of Power Series

If you change the magnitude of the series variable, you change the ratio of successive terms.
Since series converge when this ratio is any fixed factor less than 1, power series typically converge up to some maximum magnitude of the expansion variable, which is the limiting value of the ratio $|f_n / f_{n+1}|$. This value is called the radius of convergence of the series. If we define these functions in the complex plane, so that our variables can be complex numbers, this ratio has a geometric meaning. It turns out that it is the distance from the expansion point, here $z$, to the nearest singularity of the function $f$. For example, for the geometric series, there is a singularity of the function at $x = 1$. This means that the radius of convergence, expanding around 0, is 1. Exercise 15.5 What is the radius of convergence of the functions sine and exp? If you expand the function $(1-x)^{-1}$ about $z = -4$, what will the radius of convergence be?
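These ideas are easy to check numerically. The sketch below (my own illustration) sums the Taylor series of exp about $z = 0$, where every coefficient is $f_n = 1/n!$, and looks at the coefficient ratio for the geometric series, whose radius of convergence is 1:

```python
import math

def exp_partial_sum(x, terms):
    """Partial sum of the Taylor series of exp about z = 0 (f_n = 1/n!)."""
    return sum(x ** n / math.factorial(n) for n in range(terms))

# The partial sums converge rapidly toward e at x = 1.
print(exp_partial_sum(1.0, 15), "vs", math.e)

# For the geometric series 1/(1 - x) about 0, every coefficient f_n is 1,
# so the ratio |f_n / f_{n+1}| is 1 for all n: the radius of convergence is 1.
coeffs = [1.0] * 10
ratios = [abs(coeffs[n] / coeffs[n + 1]) for n in range(len(coeffs) - 1)]
print(ratios)
```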
Number of Chains (Asymmetric Unit) Search for structures based on the number of chains in the asymmetric unit. The asymmetric unit is the smallest portion of a crystal structure to which symmetry operations can be applied in order to generate the complete unit cell (the smallest repeating unit of a crystal). This concept applies to structures determined by X-ray crystallography and is not relevant to structures determined by NMR. Chains are the individual polymers (macromolecules) contained in a PDB structure. A structure may contain multiple identical chains. Hence, this search option queries for the total number of polymers in the asymmetric unit, regardless of whether that includes multiple identical molecules. 4HHB contains 4 polymer chains, 2 copies of the hemoglobin alpha chain (chains A and C), and two copies of the hemoglobin beta chain (chains B and D). Enter "Between 4 and 4" and press "Submit Query" to return all structures that contain exactly 4 chains in the asymmetric unit. The results include 4HHB. Enter 30 into the left box, and leave the right box empty. This will query for all structures in the PDB that contain at least 30 chains in the asymmetric unit.
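As a toy illustration of this kind of range query (my own sketch, with made-up entries apart from 4HHB, whose chain count of 4 is stated above):

```python
# Hypothetical mini-database mapping PDB IDs to chain counts in the
# asymmetric unit. Only 4HHB's count comes from the text; the rest are
# invented for illustration.
entries = {"4HHB": 4, "1AAA": 2, "2BBB": 30, "3CCC": 4}

def query_chain_count(db, low, high=None):
    """Return IDs whose chain count lies in [low, high] (inclusive);
    omit 'high' to mean 'at least low', as in the second example."""
    return sorted(pdb_id for pdb_id, n in db.items()
                  if n >= low and (high is None or n <= high))

print(query_chain_count(entries, 4, 4))  # exactly 4 chains: includes 4HHB
print(query_chain_count(entries, 30))    # at least 30 chains
```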
Roger Pielke Sr. (Colorado State) has a blog (Climate Science) that gives his personal perspective on climate change issues. In it, he has made clear that he feels that apart from greenhouse gases, other climate forcings (the changes that affect the energy balance of the planet) are being neglected in the scientific discussion. Specifically, he feels that many of these other forcings have sufficient ‘first-order’ effects to prevent a clear attribution of recent climate change to greenhouse gases. In general, I heartily agree – other forcings are important, even essential, for understanding observed climate variability and, as a community, we are only just starting to get to grips with some of the more complicated effects. Obviously, though, not all forcings are of the same magnitude (either globally or regionally) and so it is useful to separate the ‘first-order’ forcings from those that are relatively minor. But what exactly is ‘first-order’ and what is not? It is helpful to distinguish forcings that are important in the global mean, from those which might be important locally but not have much impact for ‘global warming’. A good metric for the importance of a global forcing is the radiative forcing (i.e. the global mean radiation impact at the tropopause for an instantaneous change – the so-called “instantaneous forcing”). There are other definitions (e.g. the “adjusted forcing” where stratospheric temperatures are allowed to adjust), but for the purpose of this article these differences are not that important (for those who are interested there is a good discussion of the different forcing definitions in Hansen et al (2005, JGR)). While the definition of a forcing may appear a little arbitrary, the reason why radiative forcing is used is because it (conveniently) gives quite good predictions of what happens in models to the global mean temperature once the climate system has fully responded to the change. 
Thus the forcing can be used as a shorthand for the climate response without having to do the experiment. Of course, the global mean temperature isn’t the only useful metric of climate change – regional temperatures and precipitation are arguably much more societally relevant – but it does have a good signal-to-noise ratio. That is, changes to the system are more clearly discerned in the global mean temperature than at a regional level, mainly because the noisy ‘weather’ component increases as you go to smaller scales.

Figure 1: Two breakdowns of the global mean forcings since the pre-industrial. Factors causing warming are red, cooling factors, blue. The top panel shows the direct effects of the individual components, while the second panel attributes various indirect factors (associated with atmospheric chemistry, aerosol cloud interactions and albedo effects) and includes a model estimate of the ‘efficacy’ of the forcing that depends on its spatial distribution. (From Hansen et al (2005, JGR)). Subjective uncertainties associated with the well-mixed GHGs are around ~0.15 W/m2, for ozone ~0.07 W/m2, tropospheric aerosols around ~1 W/m2 and solar ~0.3 W/m2.

The figure here gives one estimate for how much those forcings have changed over the industrial period (1750-2000). This assessment is from my own lab and so I may be a little biased, but although there are significant uncertainties (particularly for the aerosol indirect effects), it probably gives a reasonable idea of the current thinking. The forcings illustrated here are from the well-mixed greenhouse gases (GHGs) (CO2, CH4, N2O, CFCs), tropospheric and stratospheric O3, direct aerosol effects (from sulphates, nitrates, organic and black carbon), land use change, solar irradiance, volcanic aerosols, and various indirect effects (on clouds, stratospheric water vapour, snow albedo etc.).
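As a rough sketch of how those error bars combine (my own illustration; treating the quoted uncertainties as independent and adding them in quadrature is an assumption on my part, not a method stated in the post):

```python
import math

# 1-sigma subjective uncertainties from the figure caption, in W/m^2.
uncertainties = {
    "well-mixed GHGs": 0.15,
    "ozone": 0.07,
    "tropospheric aerosols": 1.0,
    "solar": 0.3,
}

# Quadrature sum: square root of the sum of squares of independent errors.
combined = math.sqrt(sum(u ** 2 for u in uncertainties.values()))
print(f"combined uncertainty ~ {combined:.2f} W/m^2")
```

The aerosol term dominates the total, which is consistent with the aerosol indirect effect being singled out as the largest source of uncertainty.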
Reasonable estimates of these 16 different effects (and counting…) were included in the GISS simulations for the upcoming IPCC assessment. The land use change used in the figure is related to the deforestation dataset of Ramankutty and Foley (1999) and includes the effects of albedo and vegetation type change, but not the impacts of increased irrigation or the ‘greening’ of the high latitudes (due to climate changes and possible CO2 fertilisation effects). These latter two effects are expected to lead to slight warming, but the overall impact of land use changes is expected to be negative (i.e. a cooling) (Myhre and Myhre, 2003), although the uncertainty is still significant (maybe 0.5 W/m2 either way). To my mind, the ‘first-order’ forcings would be the ones you can’t really do without in assessing global climate change. I would therefore argue that for the global mean the well-mixed GHGs and the counterbalancing reflective aerosol effects are ‘first-order’ – without GHGs there is no appreciable warming signal, and without the aerosols, the warming from GHGs is excessive and important changes in the diurnal cycle and cloudiness are not captured. Everything else (apart from volcanoes, which are a special case) is in the noise. If we were to break it down even further, I would argue that CO2, CH4 and sulphates (the main non-soot aerosol) were the only ‘first order’ forcings. It is curious to note that this is the combination of forcings that were predominantly used in the simulations discussed in IPCC (1995) where the conclusion was made that the ‘balance of evidence’ supported the notion of ongoing human-caused climate change. Before the emails come streaming in, let me make it clear that this isn’t to say that ‘second order’ forcings are unimportant.
On the contrary, many of these effects have very specific signatures in the climate system (in the stratosphere, in the Arctic and in the tropics) that need to be understood much better – however they are unlikely to have a big impact on the global mean temperature. Thus when it comes to global warming, neither land use change/vegetation type nor, for instance, the biogeochemical effect of increased CO2 is ‘first order’. The first example is clearly important locally (impacts of deforestation, urban development etc.) while the second effect is as yet inadequately quantified, but there doesn’t appear to be any a priori reason to think it is globally important. It doesn’t therefore make much sense to claim that some of the smaller forcings are ‘first-order’ despite their importance, and conceivably dominance, at smaller scales. To be sure, some of these effects (such as the impact of irrigation on surface water vapour, or land use changes on evapotranspiration) are not easily dealt with in terms of the tropospheric radiative forcing – a point that was well made in the National Academies report on radiative forcing (on which Dr. Pielke was an author). However, the dominance of well-mixed greenhouse gases in the anthropogenic forcing over the last few decades is robust to almost any estimate of the uncertainty in the other forcings. This is clearly a different opinion to that held by Dr. Pielke. However, this is probably due to our different perspectives on what we feel are the important questions (local vs. global), rather than a disagreement over fundamentals.
Hansen, J., Mki. Sato, R. Ruedy, L. Nazarenko, A. Lacis, G.A. Schmidt, G. Russell, I. Aleinov, M. Bauer, S. Bauer, N. Bell, B. Cairns, V. Canuto, M. Chandler, Y. Cheng, A. Del Genio, G. Faluvegi, E. Fleming, A. Friend, T. Hall, C. Jackman, M. Kelley, N. Kiang, D. Koch, J. Lean, J. Lerner, K. Lo, S. Menon, R. Miller, P. Minnis, T. Novakov, V. Oinas, Ja. Perlwitz, Ju. Perlwitz, D. Rind, A. Romanou, D. Shindell, P.
Stone, S. Sun, N. Tausnev, D. Thresher, B. Wielicki, T. Wong, M. Yao, and S. Zhang (2005). Efficacy of climate forcings. J. Geophys. Res., in press.
Myhre, G., and A. Myhre (2003). Uncertainties in radiative forcing due to surface albedo changes caused by land use changes.
Ramankutty, N., and J.A. Foley (1999). Estimating historical changes in global land cover: croplands from 1700 to 1992.
<urn:uuid:8ba9bd25-df20-4b69-91a9-3fe4fe1e16f6>
2.921875
1,904
Comment Section
Science & Tech.
49.901898
What Is a Theory Anyway? Scientists use the word theory differently than nonscientists do. "It’s just a theory," you hear people say. When speaking casually, people often use the word "theory" to mean a "guess" or a "feeling". In science, a theory means much more. In science, theories are explanations of aspects of nature that are based on facts, laws, inferences, and tested hypotheses. Examples of important scientific theories that are supported by evidence include cell theory, gravitational theory, evolutionary theory, and particle theory. These theories are so well supported by evidence that the broad ideas are no longer questioned, although details are examined by testing hypotheses. Many different scientists challenge and test theories by developing hypotheses based on the theory and testing them with observations and predictions to see if they are in agreement with the theory. Sometimes those tests lead the scientists to modify the theory, or in some cases even disprove a theory entirely. For example, several thousand years ago, Mesopotamians and then Greeks established the flat-Earth theory, which held that the Earth was flat. Aristotle (384-322 BC) disproved the theory by collecting evidence showing that the Earth is roughly spherical. Wouldn’t it be boring if scientists already knew everything? Fortunately, we are always learning more about our planet, our solar system, and the universe. Scientists will never know everything there is to know about the Earth, much less the universe. There will always be more to explore and new technologies to help us explore even further! Even though scientists are always making more discoveries, valid and useful theories based on current evidence and further scientific testing of the theories can be made. Scientific theories are put together based on evidence. Just like detectives at a crime scene, scientists collect evidence and then generate a logical explanation based on that evidence.
Often scientists will formulate several possible explanations and then test all to identify which are valid and which are not. Valid (or accepted) scientific theories must be verified based on evidence, not beliefs or faith. Even well established scientific theories are open to further study. The theory of evolution is the explanation of how life on Earth developed and changed over time (and continues to develop and change!). The theory is based on evidence from modern genetics, ancient fossils, and observations of evolution happening today. Although scientists continue to explore many details of the theory, the general theory is well tested and supported by many sources of data. Just like every theory, it will continue to be modified as even more data are collected and more testing is done.
<urn:uuid:23fef12a-7d05-4073-a381-d22a37ee1880>
3.75
523
Knowledge Article
Science & Tech.
32.435412
Hi! Thanks for such an interesting question. This is cited by some as an example of the Coriolis force, BUT according to most experts THIS DOES NOT APPLY to the rotation of water draining in your sink. “Don’t believe what you hear about Coriolis making the water in a sink or toilet rotate one way as it drains in one hemisphere, the other way in the other hemisphere. The Coriolis force is noticeable only for large-scale motions such as winds.” USA Today: “Understanding the Coriolis force”. “Is it possible to detect the Earth’s rotation in a draining sink?” “Yes, but it is very difficult. Because the Coriolis force is so small, one must go to extraordinary lengths to detect it. But, it has been done. You cannot use an ordinary sink for it lacks the requisite circular symmetry: its oval shape and off-center drain render any results suspect. Those who have succeeded used a smooth pan of about one meter in diameter with a very small hole in the center. A stopper (which could be removed from below so as to not introduce any spurious motion) blocked the hole while the pan was being filled with water. The water was then allowed to sit undisturbed for perhaps a week to let all of the motion die out which was introduced during filling. Then, the stopper was removed (from below). Because the hole was very small, the pan drained slowly indeed. This was necessary, because it takes hours before the tiny Coriolis force could develop sufficient deviation in the draining water for it to produce a circular flow. With these procedures, it was found that the rotation was always counterclockwise.” “Explanations are funny things. Indeed, what do you mean by suggesting that the difference in behavior is a result of the difference in the (underlying) velocity.
Let’s start by ignoring the fact that you thought (incorrectly) that the issue was just the easterly component of the velocity and just explore the difference between these two situations: the large scale with a big velocity difference and the small scale with a small velocity difference. The implication is that if one were to match the velocity differences on the two scales, then the Coriolis force, or maybe the displacement, would suddenly be the same in each case. Alas, it would not be so (unless other things were also the same).” “The traditional response to what you have suggested is that you have confused two things, a spatial scale and a temporal scale. As usually presented, the Coriolis force (being a force) produces a result (such as displacement) over time. The spatial scale (how far it is across something, or how far an object travels) does not even appear in the equations. There is much to commend this approach. Within this context, it is not the spatial scale that produces the effect (for a given force), but how long the event lasts. A sink drains quickly (not much time for a small force to produce a significant displacement); a missile or the air in a hurricane takes much longer to traverse its territory (a much longer time for a force to produce a significant displacement).” (“Bad Coriolis FAQ”) Ascher Shapiro of MIT demonstrated this in 1962 with the following experiment. “All this was demonstrated way back in 1962 by one Ascher Shapiro, a researcher at the Massachusetts Institute of Technology. Shapiro filled a circular tank six feet in diameter and six inches high in such a way that the water swirled in a clockwise direction. (Remember, now, that Coriolis forces in the Northern Hemisphere act in a counterclockwise direction.) Shapiro then covered the tank with a plastic sheet, kept the temperature constant, and sat down to read comic books or whatever scientists do while they wait for their experiments to percolate.
When he pulled the plug after an hour or two, the water went down the drain clockwise, presumably because it still retained some clockwise motion from filling. On the other hand, if Shapiro pulled the plug after waiting a full 24 hours, the draining water spiraled counterclockwise, indicating that the motion from filling had subsided enough for the Coriolis effect to take over. When the plug was pulled after four to five hours, the water started draining clockwise, then gradually slowed down and finally started swirling in the opposite direction.” (“Do bathtubs drain counterclockwise in the Northern Hemisphere?”) A more technical explanation can be found here. “In a kitchen sink, of course, speeds and time scales are much smaller than hours and miles. Water rushing down a drain flows at speeds on the order of a meter per second in most sinks, which are themselves less than a meter wide. Qualitatively, there doesn't seem to be much chance for deflection. Quantitatively, putting these numbers into Equation 1 results in an estimated change in rotation of only a fraction of a degree per second, and a very small fraction at that...less than an arc-second (1/3600th of a degree) per second over the course of the entire draining of the sink, ignoring additional effects caused by conservation of angular momentum and the like. Under extremely controlled conditions, this can cause water to flow out of a container counter-clockwise in the northern hemisphere and clockwise in the southern hemisphere, but your kitchen sink is not so controlled. Things like leftover spin from filling the sink (even when the water looks still, it's rotating slowly for a long time after it seems to stop), irregularities in the construction of the basin, convection currents if the water is warmer or colder than the basin, and so forth, can affect the direction water goes down the sink.
Any one of these factors is usually more than enough to overwhelm the small contribution of the Coriolis effect in your kitchen sink or bathtub. Research in the 1960s showed that if you do carefully eliminate these factors, the Coriolis effect can be observed.” (“Getting Around The Coriolis Force”) Search terms used: water drains clockwise rotation coriolis Shapiro. I hope these links help you in your research. Before rating this answer, please ask for clarification if you have a question or need further information. Thanks for visiting us. Google Answers Researcher
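The order-of-magnitude argument quoted above is easy to reproduce yourself. This sketch uses the standard Coriolis acceleration, a = 2Ωv·sin(latitude); the sink speed and size below are assumed round numbers consistent with the answer, not values taken from it.

```python
import math

OMEGA = 7.292e-5              # Earth's rotation rate, rad/s
lat = math.radians(45)        # assumed mid-latitude
v = 1.0                       # assumed water speed in a draining sink, m/s
distance = 0.5                # assumed distance to the drain, m

a = 2 * OMEGA * v * math.sin(lat)   # Coriolis acceleration, m/s^2
t = distance / v                    # transit time to the drain, s
deflection = 0.5 * a * t ** 2       # sideways drift while draining, m

print(f"Coriolis acceleration: {a:.1e} m/s^2")
print(f"deflection across the sink: {deflection:.1e} m")  # tens of microns
```

A deflection of a few tens of microns is why leftover spin, basin shape and convection easily swamp the effect in a real sink.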
<urn:uuid:37f04562-c1ca-480d-901e-d714afe0ca18>
3.40625
1,382
Q&A Forum
Science & Tech.
50.001063
What is the “windows.h” header file? December 2, 2012 When I look back at my initial days of programming I can feel myself smiling at the little, unintentional but silly mistakes that I used to make, and an even bigger smile comes when I think about the misconceptions I had in those days. No doubt they are all part of the learning experience. C++ was the first programming language I learned. I was using Windows XP at that time, so it was quite natural to use “windows.h” in my programs. You must be thinking “OK, everything seems fine, so what is all this fuss about?” Well my friends, the story has just begun. Here goes a short question-and-answer session from my story. Q: Why did I use windows.h? Ans: I used it for the following two statements: Q: What did I think about windows.h? Ans: I considered it a Standard C++ Library header file, just like iostream, fstream and iomanip :) Q: What did I know about windows.h? Alright, so the story ends here. Those of you who know the details must be smiling and may even be saying “It happens, dude!”, whereas the rest of you will certainly smile after reading this post in full. The sole purpose of this post is to tell beginners about “windows.h” and a little bit of detail, so that they do not have to go through all the trouble that I had to face. WINDOWS.H is not a Standard C++ Library header file, which means that it does not come with the C++ package (remember, C++ is a language used to develop apps for various operating systems, not only Windows); instead, it is a Windows-specific header file for the C/C++ programming languages which contains declarations for the functions in the Windows API, the common macros used by Windows programmers, and the data types used by the various functions and subsystems. It defines a very large number of Windows-specific functions that can be used in C/C++.
Don’t worry if you couldn’t understand some or most of what you read in the last paragraph, because I’ll be explaining things shortly. First of all, remember that Windows itself is a piece of software, what we call an operating system. For those who don’t know the details, it’s the job of an operating system to manage all the activity performed by the hardware in your computer. So whenever we run an application, it is actually running on the operating system, telling the operating system what to do and how to do it in order to get specific results. In order for things to work smoothly, Microsoft has collected a bunch of small programs together into various files. Each of these files contains basic commands to perform a task; individually none of them would make a complete program, but collectively they are amazing, so these pieces of code are called the Win API (Application Programming Interface), where ‘Win’ obviously specifies that this interface is for Windows applications only. So by now you should be able to understand that in order to use specific features of Windows we will have to use some predefined code (the API) in a standard way. Remember, the purpose of this post is not to teach you Windows programming but to prepare you mentally for some basics of Windows programming. I will soon be posting about some of the contents of the “windows.h” file and a general guideline, along with some good tutorials for programmers, so stay in touch. Till then, happy programming!
<urn:uuid:f714003e-cd4b-49b5-8812-a8e6bf9a118d>
2.921875
742
Personal Blog
Software Dev.
56.514301
Condensation is the change of the physical state of matter from the gaseous phase into the liquid phase, and is the reverse of vaporization. When the transition happens directly from the gaseous phase into the solid phase, the change is called deposition. Condensation is initiated by the formation of atomic/molecular clusters of that species within its gaseous volume—like raindrop or snowflake formation within clouds—or at the contact between such a gaseous phase and a (solvent) liquid or solid surface. A few distinct reversibility scenarios emerge here with respect to the nature of the surface.
- absorption into the surface of a liquid (either of the same substance or one of its solvents)—reversible as evaporation.
- adsorption (as dew droplets) onto a solid surface at pressures and temperatures higher than the species' triple point—also reversible as evaporation.
- adsorption onto a solid surface (as supplemental layers of solid) at pressures and temperatures lower than the species' triple point—reversible as sublimation.
Most common scenarios
Condensation commonly occurs when a vapor is cooled and/or compressed to its saturation limit, when the molecular density in the gas phase reaches its maximal threshold. Vapor-cooling and compressing equipment that collects condensed liquids is called a "condenser".
How condensation is measured
Psychrometry measures the rates of condensation from, and evaporation into, the air moisture at various atmospheric pressures and temperatures. Liquid water is the product of water-vapor condensation—condensation is the process of that phase conversion.
Applications of condensation
Condensation is a crucial component of distillation, an important laboratory and industrial chemistry application. Because condensation is a naturally occurring phenomenon, it can often be used to generate water in large quantities for human use.
Many structures are made solely for the purpose of collecting water from condensation, such as air wells and fog fences. Such systems can often be used to retain soil moisture in areas where active desertification is occurring—so much so that some organizations educate people living in affected areas about water condensers to help them deal effectively with the situation. It is also a crucial process in forming particle tracks in a cloud chamber. In this case, ions produced by an incident particle act as nucleation centers for the condensation of the vapor producing the visible "cloud" trails. Numerous living beings use water made accessible by condensation. A few examples of these are the Australian Thorny Devil, the darkling beetles of the Namibian coast, and the Coast Redwoods of the West Coast of the United States. Condensation in building construction Condensation in building construction is an unwanted phenomenon as it may cause dampness, mold health issues, wood rot, corrosion and energy loss due to increased heat transfer. To alleviate these issues the air ventilation in the building needs to be improved. This can be done in a number of ways; opening windows, turning on extractor fans, drying clothes outside and covering pots and pans whilst cooking to name a few. Air ventilation systems can be installed that help move air throughout a building. - Air well (condenser) - Bose–Einstein condensate - Condenser (heat transfer) - DNA condensation - Kelvin equation - Phase diagram - Phase transition - Retrograde condensation - Surface condenser - Groasis Waterboxx
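The building-construction problem above can be made quantitative with a dew-point estimate: condensation forms on any surface colder than the dew point of the indoor air. This sketch uses the standard Magnus approximation; the formula and its coefficients are conventional values, not taken from this article.

```python
import math

def dew_point_c(temp_c, rel_humidity_pct):
    """Dew point in deg C via the Magnus approximation.

    Surfaces colder than this temperature will collect condensation."""
    a, b = 17.27, 237.7  # conventional Magnus coefficients
    gamma = math.log(rel_humidity_pct / 100.0) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)

# A 25 degC room at 60% relative humidity:
print(round(dew_point_c(25.0, 60.0), 1))  # -> 16.7
```

So in that example room, a window pane colder than about 16.7 °C will mist up, which is why ventilation (lowering the indoor humidity) prevents dampness.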
<urn:uuid:2159b2e4-c55d-425c-a5dc-e0205e238673>
3.953125
723
Knowledge Article
Science & Tech.
20.853914
(Submitted November 07, 1996) Is it possible that the Sun could just burn out? What manages to keep the Sun so hot all of the time? The Sun is basically a thermonuclear bomb with a built-in thermostat. Just as in a hydrogen bomb, hydrogen atoms are fusing together to make helium atoms, and this nuclear reaction produces heat (along with the light that we see). If the reactions go on too fast, the Sun expands slightly (just like a balloon expands when you heat up the air in it). This slows down the reactions, and then the Sun cools and contracts. If it contracts too much, the nuclear reactions speed up, and then the Sun heats up and expands again. So the Sun stays at the same temperature, burning its nuclear fuel at a steady rate. At the rate it is going, we have about 4 billion years left until the Sun burns out. Andy Ptak and the Ask an Astrophysicist team Questions on this topic are no longer responded to by the "Ask an Astrophysicist" service. See http://imagine.gsfc.nasa.gov/docs/ask_astro/ask_an_astronomer.html for help on other astronomy Q&A services.
<urn:uuid:af064280-db0f-4af0-b19a-b99b36822bad>
3.75
271
Q&A Forum
Science & Tech.
69.919038
5.8. Vacuum Fluctuations and Perturbations Recall that the structures - clusters and superclusters of galaxies - we see on the largest scales in the universe today, and hence the observed fluctuations in the CMB, form from the gravitational instability of initial perturbations in the matter density. The origin of these initial fluctuations is an important question of modern cosmology. Inflation provides us with a fascinating solution to this problem - in a nutshell, quantum fluctuations in the inflaton field during the inflationary epoch are stretched by inflation and ultimately become classical fluctuations. Let's sketch how this works. Since inflation dilutes away all matter fields, soon after its onset the universe is in a pure vacuum state. If we simplify to the case of exponential inflation, this vacuum state is described by the Gibbons-Hawking temperature

T_GH = H/(2π) ~ V^{1/2}/m_Pl,

where we have used the Friedmann equation. Because of this temperature, the inflaton experiences fluctuations that are the same for each wavelength, |δφ_k| = T_GH. Now, these fluctuations can be related to those in the density by

δρ = V'(φ) δφ.

Inflation therefore produces density perturbations on every scale. The amplitude of the perturbations is nearly equal at each wavenumber, but there will be slight deviations due to the gradual change in V as the inflaton rolls. We can characterize the fluctuations in terms of their spectrum A_S(k), related to the potential via

A_S(k) ~ [V^3 / ((V')^2 m_Pl^6)]^{1/2} |_{k=aH},

where k = aH indicates that the quantity V^3/(V')^2 is to be evaluated at the moment when the physical scale of the perturbation λ = a/k is equal to the Hubble radius H^{-1}. Note that the actual normalization of (218) is convention-dependent, and should drop out of any physical answer. The spectrum is given the subscript "S" because it describes scalar fluctuations in the metric. These are tied to the energy-momentum distribution, and the density fluctuations produced by inflation are adiabatic - fluctuations in the density of all species are correlated.
The fluctuations are also Gaussian, in the sense that the phases of the Fourier modes describing fluctuations at different scales are uncorrelated. These aspects of inflationary perturbations - a nearly scale-free spectrum of adiabatic density fluctuations with a Gaussian distribution - are all consistent with current observations of the CMB and large-scale structure, and have been confirmed to new precision by WMAP and other CMB measurements. It is not only the nearly-massless inflaton that is excited during inflation, but any nearly-massless particle. The other important example is the graviton, which corresponds to tensor perturbations in the metric (propagating excitations of the gravitational field). Tensor fluctuations have a spectrum

A_T(k) ~ (V/m_Pl^4)^{1/2} |_{k=aH}.

The existence of tensor perturbations is a crucial prediction of inflation which may in principle be verifiable through observations of the polarization of the CMB. Although CMB polarization has already been detected, this is only the E-mode polarization induced by density perturbations; the B-mode polarization induced by gravitational waves is expected to be at a much lower level, and represents a significant observational challenge for the years to come. For purposes of understanding observations, it is useful to parameterize the perturbation spectra in terms of observable quantities. We therefore write

A_S(k) ∝ k^{(n_S - 1)/2},   A_T(k) ∝ k^{n_T/2},

where n_S and n_T are the "spectral indices". They are related to the slow-roll parameters of the potential by

n_S = 1 - 6ε + 2η,   n_T = -2ε.

Since the spectral indices are in principle observable, we can hope through relations such as these to glean some information about the inflaton potential itself. Our current knowledge of the amplitude of the perturbations already gives us important information about the energy scale of inflation. Note that the tensor perturbations depend on V alone (not its derivatives), so observations of tensor modes yield direct knowledge of the energy scale.
If large-scale CMB anisotropies have an appreciable tensor component (possible, although unlikely), we can instantly derive V_inflation ~ (10^16 GeV)^4. (Here, the value of V being constrained is that which was responsible for creating the observed fluctuations; namely, 60 e-folds before the end of inflation.) This is remarkably reminiscent of the grand unification scale, which is very encouraging. Even in the more likely case that the perturbations observed in the CMB are scalar in nature, we can still write

V_inflation^{1/4} ~ ε^{1/4} × 10^16 GeV,

where ε is the slow-roll parameter defined in (196). Although we expect ε to be small, the 1/4 in the exponent means that the dependence on ε is quite weak; unless this parameter is extraordinarily tiny, it is very likely that V_inflation^{1/4} ~ 10^15 - 10^16 GeV. We should note that this discussion has been phrased in terms of the simplest models of inflation, featuring a single canonical, slowly-rolling scalar field. A number of more complex models have been suggested, allowing for departures from the relations between the slow-roll parameters and observable quantities; some of these include hybrid inflation [167, 168, 169], inflation with novel kinetic terms, the curvaton model [171, 172, 173], low-scale models [174, 175], brane inflation [176, 177, 178, 179, 180, 181, 182] and models where perturbations arise from modulated coupling constants [183, 184, 185, 186, 187]. This list is necessarily incomplete, and continued exploration of the varieties of inflationary cosmology will be a major theme of theoretical cosmology into the foreseeable future.
<urn:uuid:70f4cd9a-d14d-4270-9a04-d6590baac7ed>
3.359375
1,130
Academic Writing
Science & Tech.
25.567577
Two different methods of geochronology allow geologists to date small quantities of minerals: The first uses the radioactive decay of natural potassium-40. Reactor neutrons are used to transform K39 into Ar39, and the age is then obtained from the measured Ar40/Ar39 ratio. The second is suited to minerals that contain uranium. The mineral is irradiated, and the fissions induced in U235 are counted and compared with the number of spontaneous fissions of U238; this ratio is representative of the age.
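Both methods rest on the same decay law, t = (1/λ)·ln(1 + D/P), where D/P is the daughter-to-parent ratio. A minimal sketch follows; it is deliberately simplified, since real K-Ar/Ar-Ar work must also correct for the branched decay of K40 (to both Ar40 and Ca40) and for the irradiation efficiency.

```python
import math

def radiometric_age(daughter_parent_ratio, half_life_yr):
    """Age in years from t = (1/lambda) * ln(1 + D/P)."""
    lam = math.log(2) / half_life_yr  # decay constant, 1/yr
    return math.log(1.0 + daughter_parent_ratio) / lam

K40_HALF_LIFE = 1.25e9  # years

# Equal amounts of daughter and parent => exactly one half-life has elapsed:
print(radiometric_age(1.0, K40_HALF_LIFE))  # -> 1.25e9 years
```

The same function applies to any parent-daughter pair once the appropriate half-life is substituted.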
<urn:uuid:1cabd77f-914c-474e-8fb9-1b75efe3c3b5>
3.21875
127
Knowledge Article
Science & Tech.
28.229084
Nov 01 2010 Intro to Environmental Systems and Ecosystem Ecology Most of our prior studies have been focused on the components that make up our world, but, in reality, the world isn’t made up of individual components, but rather of one complex whole. While categories are useful to understand subject matter, the planet operates as a complex network of interlinked systems. This is the reason why solving environmental problems can be so difficult, as one must consider all system factors and behaviors. A system is defined by its inputs and outputs (the factors) and by its feedback loops (the behaviors). Feedback loops can be positive (bad) or negative (good) [confusing right?]. Negative feedback loops are good because they enhance stability in a system, they self-regulate, and the outputs limit the inputs. Positive feedback loops are bad because they destabilize systems, self-accelerate, and their outputs can drive inputs (or become the inputs themselves). [Diagram: a positive feedback loop] [Diagram: a negative feedback loop] As far as an introduction to this material goes, I think that this is a good start. Until I draw the short straw again, P.S. That was supposed to be a joke… don’t hurt me.
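The two loop types are easy to see in a toy simulation (the numbers here are made up purely for illustration): a negative loop's output limits its input and settles toward a set point, while a positive loop's output feeds its input and runs away.

```python
def simulate(x0, steps, update):
    """Iterate a one-variable system and return its final state."""
    x = x0
    for _ in range(steps):
        x = update(x)
    return x

SETPOINT = 10.0
negative_loop = lambda x: x - 0.5 * (x - SETPOINT)  # output limits the input
positive_loop = lambda x: x + 0.5 * x               # output drives the input

print(simulate(20.0, 30, negative_loop))  # settles near the setpoint, 10.0
print(simulate(1.0, 30, positive_loop))   # grows without bound
```

The same qualitative behavior holds for any gains: as long as the correction opposes the deviation the system stabilizes, and as long as the output reinforces the input it accelerates.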
<urn:uuid:2a9f5817-d1ad-43f2-8bea-91dee61746ad>
3.203125
254
Personal Blog
Science & Tech.
45.020814
Suzaku (formerly Astro-E2) Launch Date: July 10, 2005 Mission Project Home Page - http://heasarc.gsfc.nasa.gov/docs/astroe/astroegof.html Suzaku, formerly known as Astro-E2, is Japan's fifth X-ray astronomy mission, and was developed at the Institute of Space and Astronautical Science of the Japan Aerospace Exploration Agency (ISAS/JAXA) in collaboration with U.S. (NASA/GSFC, MIT) and Japanese institutions. Suzaku covers the energy range 0.2 - 600 keV with two instruments: the X-ray CCDs (X-ray Imaging Spectrometer; XIS) and the hard X-ray detector (HXD). Suzaku also carries a third instrument, an X-ray micro-calorimeter (X-ray Spectrometer; XRS), but the XRS lost all its cryogen before routine scientific observations could begin. The U.S. Suzaku Guest Observer Facility (GOF) is located at NASA's Goddard Space Flight Center in Greenbelt, Maryland. The GOF is part of the Office of General Investigator Programs (OGIP) in the Astrophysics Science Division (ASD). The primary responsibility of the U.S. Suzaku GOF is to enable U.S. astronomers to make the best use of the Suzaku mission. To fulfill this responsibility, the Suzaku GOF staff performs such activities as supporting the U.S. side of the Suzaku proposal selection process, distributing usable data to U.S. Guest Observers, helping Guest Observers to analyze their data, and creating the mission archive. In addition to the tasks listed above, the U.S. Suzaku GOF activities include the development of software, the compilation and production of documentation for that software, and the provision of expert help. All of the U.S. Suzaku GOF's activities involve close collaboration with the Japanese Suzaku team. The original Astro-E was launched February 10, 2000, but there was a problem with the first stage of the Japanese rocket, and the satellite was declared unusable. - Helped astronomers study the elemental composition of a star from supernova debris.
- Provided an estimate that several hundred million "Type II" supernova explosions have occurred in the Milky Way galaxy since its birth. - Measured the rate of a black hole's spin and found evidence for how a black hole bends light. Last Updated: June 7, 2012 - JAXA Suzaku Website - http://www.isas.jaxa.jp/e/enterp/missions/suzaku/index.shtml - More about Suzaku - http://www.nasa.gov/mission_pages/astro-e2/main/index.html
<urn:uuid:8676c043-8b9e-433b-a8ab-805e529bb539>
2.84375
600
Knowledge Article
Science & Tech.
48.883027
A European Space Agency project has its sights set high – taking materials science into space. Experiments carried out on board space-bound rockets are helping scientists develop the materials of the future. These two new types of material are intermetallics – compounds of metals with some extraordinary properties. This 'Texus sounding rocket' is loaded with experimental kit and launched from the Arctic Circle. Titanium aluminide is incredibly strong, even at extreme temperatures, so is ideal for making jet-engine parts which have to withstand great heat and huge forces. This intriguing material has extraordinary properties which promise to cut the weight of jet-engine parts – an application with enticing environmental benefits. This is an ingot of Titanium aluminide. This extremely strong material maintains its strength even when the temperature soars to above 700 °C. Strength is a crucial property in turbine blades. Inside a Rolls-Royce jet engine, the force on a turbine blade is about the same as a weight of 10 tonnes – equivalent to hanging a double-decker bus from it. In this diagram of a jet engine, the titanium aluminide turbine blades would be on the right. Titanium aluminide turbine blades are half as dense as the nickel alloy blades currently used in jet engines. Reducing the density and so the weight of an aircraft engine has huge environmental benefits. Lighter engines mean lighter planes that require less fuel. Lower fuel consumption by aircraft will result in lower production of polluting gases such as carbon dioxide. Jet-engine emissions contain greenhouse gases which affect Earth’s climate. Reducing them will reduce the impact of aviation on climate change. Resembling an alloy, titanium aluminide is a compound of metals, but has a much more ordered structure. This ordered structure gives this material its unique properties.
Until recently, it had been very difficult to cast titanium aluminide into moulds, so scientists were unable to take advantage of its superior properties. But thanks to the space-based experiments, scientists have new data describing the properties of titanium aluminide, and they now understand better how gravity affects the casting process. Armed with this knowledge, materials scientists are developing new casting techniques, leading to the successful production of prototype titanium aluminide blades. Two turbine blades, one made from titanium aluminide, the other a traditional nickel alloy. The titanium aluminide turbine blade is built to withstand the intense heat and stresses generated inside a jet engine. Titanium aluminide's low density means that a turbine blade made from it has about half the weight of a blade made from a traditional nickel alloy. The strength of the blade is a result of both the properties of titanium aluminide and the sophisticated techniques used to cast it.
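The "about half the weight" claim follows directly from density. A minimal back-of-the-envelope sketch in Python; the density values are typical handbook figures I am assuming for illustration, not numbers from the article:

```python
# Illustrative comparison of blade mass for equal-volume blades.
# Assumed densities (not from the article): gamma-TiAl is roughly
# 3.9-4.2 g/cm^3, nickel superalloys roughly 8.2-8.9 g/cm^3.
TIAL_DENSITY = 4.0    # g/cm^3 (assumed)
NICKEL_DENSITY = 8.5  # g/cm^3 (assumed)

def blade_mass(volume_cm3, density_g_cm3):
    """Mass in grams of a blade of the given volume and density."""
    return volume_cm3 * density_g_cm3

volume = 100.0  # cm^3, an arbitrary blade volume used for both materials
ratio = blade_mass(volume, TIAL_DENSITY) / blade_mass(volume, NICKEL_DENSITY)
print(f"TiAl blade weighs {ratio:.0%} of an equivalent nickel-alloy blade")
```

With these assumed densities the ratio comes out near one half, consistent with the article's claim.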
<urn:uuid:674a2189-6d41-4c2a-9539-7f6c3ad31a0a>
4.09375
587
Knowledge Article
Science & Tech.
29.365161
The hash_hmac function takes binary input, i.e. strings made of raw bytes, so character encoding should not come into play at all. For example, if you were to use SHA256 to compute the HMAC key from a password, you'd do something like this:

// third parameter selects raw bytes or ASCII hex for output (false = hex, true = raw)
$hmac_key = hash("sha256", $password, true);

// last parameter is, again, whether or not to output raw bytes
echo hash_hmac("sha256", $data, $hmac_key, false);

Assuming the password is "polynomial", the $hmac_key variable should be:

Note that this is a hex representation of the contents of the variable - in reality it'll contain raw bytes. Now let's assume that our message is "Hello there!". The result is as follows:

(the above was generated by QuickHash)

HMAC schemes require the key length to match the block size of the hash. Some implementations use a hash algorithm to produce a key of the correct size if the input is larger or smaller than the block size. Usually the hash algorithm used is the non-HMAC version of whatever you're using. I'm unsure as to whether PHP does this, and (if it does) what method it uses.
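For comparison, the same two-step scheme (hash the password to derive the key, then HMAC the message with it) can be sketched with Python's standard library. This is an illustrative translation of the PHP above, not part of the original answer; the password and message values simply mirror the example:

```python
import hashlib
import hmac

password = b"polynomial"
data = b"Hello there!"

# Step 1: derive the HMAC key by hashing the password, mirroring
# PHP's hash("sha256", $password, true) with raw (binary) output.
hmac_key = hashlib.sha256(password).digest()

# Step 2: compute HMAC-SHA256 over the message; hexdigest() mirrors
# hash_hmac(..., false), which returns lowercase hex.
mac = hmac.new(hmac_key, data, hashlib.sha256).hexdigest()
print(mac)
```

The printed value is a 64-character lowercase hex string, directly comparable to PHP's hex output for the same inputs.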
<urn:uuid:d926e335-ae02-48e5-8035-0e21809f6ef9>
2.84375
285
Q&A Forum
Software Dev.
56.404455
The Search for QCD Exotics Particles predicted by the theory of quantum chromodynamics help explain why the fundamental building blocks of matter are impossible to isolate Protons in the nuclei of atoms would fly apart were it not for the strong nuclear force. This force is carried by particles (called "gluons") just as the electromagnetic force is carried by particles (photons). Photons have no charge and cannot associate together; hence there are no atoms of light. But gluons carry a type of charge (so-called "color" charge) and so can clump together. The result of such an amalgamation is a glueball, a particle made up of nothing more than the force that holds nuclei together. Physicists have long sought experimental evidence for glueballs and for exotic combinations of glueballs and ordinary particles. The authors describe some of these past attempts and explain their own strategy for uncovering various exotic particles predicted by the theory of quantum chromodynamics (QCD).
<urn:uuid:3cb2f8df-8935-4cb6-b5f4-dcf451770582>
3.703125
206
Truncated
Science & Tech.
30.579441
Trees have extreme longevity: bristlecone pine
Bristlecone pines can survive for thousands of years in harsh environments by shutting down non-essential processes.
"Bristlecone pines receive very little water and food throughout the year…and the trees stand on dolomite, a form of limestone that contains few nutrients.
"To survive on this ascetic diet, Pinus longaeva invests very little energy in growth…
"'It shuts down all its non-essential processes,' says Sussman. 'This looks half dead most of the time, perhaps with just one branch that appears to be alive.'" (NewScientist 2010)
Pinus longaeva D. K. Bailey
IUCN Red List Status: Vulnerable
Some organism data provided by: Conifer Database
Organism/taxonomy data provided by: Species 2000 & ITIS Catalogue of Life: 2008 Annual Checklist
Application Ideas: Energy-saving methods for industries and homes. Business metaphor for long-term survival.
Industrial Sector(s) interested in this strategy: Business
Adelia L. Barber
The University of Wyoming
<urn:uuid:36d5764f-1fa0-4280-86ae-c0109a5ebbe4>
3.09375
252
Knowledge Article
Science & Tech.
28.3832
The Sahara Forest Project combines two proven technologies in a new way to create multiple benefits: producing large amounts of renewable energy, food and water, as well as reversing desertification. A major element of the proposal is the Seawater Greenhouse, a brilliant invention that creates a cool growing environment in hot parts of the world and is a net producer of distilled water from seawater. The Sahara is used here as a metaphor for any desert that formerly supported vegetation and could do so again, given sufficient water. The second technology, Concentrated Solar Power (CSP), involves concentrating the sun's heat to create steam that drives conventional turbines, producing zero-carbon electricity twice as efficiently as photovoltaics. The two technologies have very promising synergies that make the economic case even more attractive. Since the 1980s, rainfall has increased in several regions, while drying has been observed in the Sahel, the Mediterranean, southern Africa, Australia and parts of Asia. In his report for the Fourth World Conference on the Future of Science "Food and Water for Life," held in Venice last September, Charlie Paton put it this way: The Sahara Forest Project aims to provide a new source of fresh water, food and renewable energy in hot, arid regions, as well as providing conditions that enable re-vegetating areas of desert. This ambitious proposal combines two established technologies – the Seawater Greenhouse and Concentrated Solar Power (CSP) – to achieve highly efficient synergies. Both processes work optimally in sunny, arid conditions. Demonstration plants are already running in Tenerife, Oman and the United Arab Emirates. The team estimates that building 20 hectares of greenhouses combined with a 10MW CSP plant would cost around $104 million (€80m, £65m). 
How It Works
To begin, seawater is drawn into each greenhouse complex and dripped over evaporators to be turned into vapor, creating a warm, humid environment suited to growing plants. More water suspended in the air reduces the amount of fresh water needed for direct irrigation. When the air is cycled through the greenhouse to bring more carbon dioxide to the plants, the humid air is released back into the atmosphere and adds moisture to the local environment. The design team proposes that with enough acreage, it may contribute enough added moisture to induce local rainfall. The evaporators draw their power from Concentrated Solar Power (CSP) arrays stretched out across the landscape. Using mirrors to focus sunlight and heat liquid for steam production, CSP is viewed by many as the most viable source of renewable energy in the near term, and it can be twice as efficient as photovoltaic panels at converting sunlight into power. The system also produces a great deal of waste heat. By themselves, these two systems are impressive technologies with a great deal of potential, but linked and integrated together, their possibilities multiply. The excess heat of the CSP facilities can be captured through cogeneration and used for the desalination of more saltwater. The project team estimates that onsite power can desalinate 40 million cubic meters of water per terawatt-hour of harvested solar power, which is over 10.5 billion gallons. Strips of greenhouses can be arranged to shield the CSP mirror arrays and reduce the dust and sand collection that lowers their efficiency. Three new export streams can emerge from each project location, all of which are in extreme demand around the globe: clean power, fresh water, fresh food. As with any good system built on ecological underpinnings, its function begets its own continued success. 
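The desalination figure quoted above is easy to sanity-check. A quick conversion in Python; the cubic-meter-to-US-gallon factor is a standard constant, not a number from the article:

```python
# Sanity check: 40 million cubic meters of desalinated water per
# terawatt-hour, expressed in US gallons (1 m^3 = 264.172 US gallons).
M3_TO_US_GALLONS = 264.172

cubic_meters = 40_000_000
gallons = cubic_meters * M3_TO_US_GALLONS
print(f"{gallons / 1e9:.1f} billion gallons")  # prints: 10.6 billion gallons
```

The result is just over 10.5 billion gallons, matching the figure in the text.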
Theoretically, as the installations grow in size and number, more sand is replaced with greenhouses or planted fields. Moisture content in the air will continue to rise while the ground temperature of more acres continues to fall. The expansion of deserts could be reversed, eventually re-vegetating some of the world's harshest climates and turning them into net producers of vital resources. While the project is an impressive map for a regenerative, progressive model, I think that the possibilities go even further.
- Plant waste from greenhouses is rich in nutrients and can be composted to produce a base for naturally fertilizing future crops, or spread over the surrounding area to instigate new native plant growth.
- Another possibility is taking a page from the city of Kalundborg's playbook and using the wealth of heated salt water for fish farming. This could produce yet another food crop and another organic waste stream that can be used to create organic fertilizers.
- So much desalination will also produce a great deal of salt, which draws us back to CSP. One of the reasons CSP seems so promising is the opportunity for power storage, with heated salt solutions being one of the frontrunners. Eventually, excess power could be sold day and night to surrounding townships.
So what's the catch? The cost of building seawater greenhouses and CSP arrays, and the labor to manage them all, has to be factored in, along with a realistic time frame for expansion. There is also the fact that the Sahara is the world's largest desert (3.3 million square miles) and constitutes nearly a quarter of Africa. Such a statistic begs the question of how many facilities would have to be created before the stated goal of local climate alteration was actually achieved. The number could be staggering.
<urn:uuid:ba9d2157-2f14-4e65-8647-5741944360cd>
3.71875
1,144
Personal Blog
Science & Tech.
35.090095
Unit tests are supplementary source code created to automatically test the functionality and correctness of code modules. They are a way of formalizing assumptions about the code's behavior. With unit tests, one can validate code and code changes at any time, and know exactly when a new change breaks existing code. This page discusses unit testing of EmacsLisp scripts and programs.

A unit test is a piece of code that checks the correctness of some other code. In Emacs Lisp, meaningful "units" to test are functions, commands and macros. Various testing frameworks simplify the task of writing and running tests. The following terms are commonly encountered:

- assertion: a check that must hold, e.g. (assert (= (+ 2 2) 4)) verifies that "2 + 2 = 4"; if it wasn't (the comparison returned nil), assert would signal an error.
- mock or stub: a stand-in for a real function; a stubbed "download-webpage" may return an appropriate value without actually connecting to the Web.

The precise meanings of these terms may vary, though there is some commonality among xUnit-like frameworks (like SUnit for Smalltalk, JUnit for Java, and lisp-unit for Common Lisp).

The following unit test frameworks exist for Emacs Lisp:

- ErtTestLibrary – Elisp Regression Testing (ERT) by Christian M. Ohler is now included in Emacs. The framework provides facilities for defining and running test suites, reporting the results, and debugging test failures interactively.
- ElUnit is an experimental framework by PhilHagelberg. It is inspired by regress.el, Ruby's Test::Unit framework and xUnit, and provides test suites and fixtures. It is deprecated.
- test-simple.el is intended to be simple. I use it in my debugger front-end https://github.com/rocky/emacs-dbgr.
- el-expectations.el by rubikitch is a small framework focusing on simplicity and readability. It is modeled after Ruby expectations (for example, (expect 4 (+ 2 2)) verifies "2 + 2 = 4") and works with el-mock.el, a DSL-based mock/stub framework.
- ecukes is a cucumber-like framework for writing tests. It allows you to write human-readable feature tests. 
- El4r (Emacs Lisp for Ruby) leverages Ruby's Test::Unit framework by using the EmacsRuby extension language. You write unit tests in EmacsRuby, while the code is in EmacsLisp. In other words, you can treat the EmacsLisp code as a black box and feed it inputs and check the outputs using EmacsRuby.
- unit-test.el by MarkTriggs reports the pass/fail status of your unit tests (in any language). You need to define a function that runs your unit tests and returns non-nil if they pass. Depending on the result, it displays a green or red "light" icon (xpm) on the Emacs mode line.

To sprinkle your code with tests, wrap them in:

    (eval-when-compile
      ;; unit test code ...
      )

Then, when you byte-compile the file, the test code is executed; when you load the file, it is not. A more flexible approach is to define a run-my-tests variable and set it to t during compilation:

    (defvar run-my-tests nil)
    (eval-when-compile (setq run-my-tests t))
    (when run-my-tests
      ;; unit test code ...
      )

That compiles the test code too, and the run-my-tests check is still evaluated when loading the file. But since all the tests are skipped when loading, the compiled .elc files are still much faster than the uncompiled .el files.

Assertions can be made with the assert macro from cl.el:

    (assert (= (+ 2 2) 4))
    (assert (/= (* 2 3) 5))

Remember to load cl.el first. Assertions can be grouped into tests with defun:

    (defun my-test-arithmetic ()
      (assert (= (+ 2 2) 4))
      (assert (/= (* 2 3) 5)))

Tests can be grouped into test suites with hooks:

    (defvar my-test-suite)
    (add-hook 'my-test-suite 'my-test-arithmetic)

To run the suite:

    (when run-my-tests
      (run-hooks 'my-test-suite))

Functions can be temporarily rebound with flet from cl.el:

    (defun my-test-division ()
      (flet ((/ (dd ds) 0))
        (assert (= (/ 1 0) 0))))

The above will suppress an arith-error when dividing by zero. For more complex scenarios, mocker.el is a layer on top of flet that supports programmable mock functions. 
While fixtures are a great syntactic simplification in other languages, they are not very useful in Lisp, where higher-order functions and unwind-protect are available. One approach is:

    (unwind-protect
        (progn
          ;; Set up ...
          (my-test-arithmetic))
      ;; ... tear down
      )

The documentation of ert.el sketches out a more general solution.

That there is so much interest in this looks great, but could someone please start making a comparison of the different approaches? What do they support? How easy are they to use? Could more simple packages be built upon more general packages? Etc.

Indeed. Given that ERT is now part of GNU Emacs, it would be particularly helpful if people who know and like other test frameworks summarised the differences, what they perceive as missing from ERT etc., so other people can make up their minds as to whether to stick with it or not, or push for improvements to ERT being accepted by the maintainers.

One question: do any of these packages support TAP (the Test Anything Protocol)? I don't see a single reference to it.

In short order I added optional TAP output to test-simple.el. However, after doing that I realized you are not going to be able to use prove unless one creates a custom Test::Harness. Open an issue on the github page if this is worth pursuing.

Does anybody know any mock or stub frameworks in Emacs Lisp? Because many Emacs Lisp functions have side-effects, a mock/stub framework is essential for unit testing in Emacs Lisp. – rubikitch

I intended to create mock.el, but I didn't yet … Matsuyama
<urn:uuid:d08a73e2-52d5-4dfa-9fd6-3f272c5d52b0>
3.21875
1,392
Documentation
Software Dev.
57.957333
Category: Science in Action
Subject(s): Astronomy/Space Science
Keywords: hubble space telescope, origins of the universe, cosmology, nebula, galaxy, stars, big bang, nasa, space shuttle, space telescope science institute, goddard space flight center, universe, astronomers, kathryn gaulke, goddard space flight center clean room, kevin carmac

00:53:51 | Wrap Up of Hubble Telescope SM4 (Webcast): On May 11, 2009, the space shuttle Atlantis was launched from the Kennedy Space Center and dock ...
00:32:23 | Hubble Space Telescope Servicing Mission 4 (Webcast): On May 11, 2009, the space shuttle Atlantis was launched from the Kennedy Space Center and ...
0:40:48 | The Very Latest from Hubble (Webcast): On the occasion of Hubble's 15th birthday we unveil two spectacular mosaic images from the teles ...
00:30:43 | A New View of the Universe (Webcast): Join us as NASA releases the first images from the Hubble Telescope's new camera, NICMOS (the Near ...
00:32:12 | Summing It Up (Webcast): Exploratorium staff Ron Hipschman and Robyn Higdon sum up the last five days of spacewalks, and s ...
<urn:uuid:d9f1ec67-a697-4a28-84de-29b84305a6fc>
3.15625
288
Content Listing
Science & Tech.
59.593593
Actinopterygii (ray-finned fishes) > Tetraodontiformes (Puffers and filefishes) > Tetraodontidae
Etymology: Takifugu: A Japanese word with several meanings; taki = waterfall + fugu = fish; it could also be understood as taki = to be cooked in liquid + fugu = a venomous fish.
Environment / Climate / Range: Marine; demersal. Temperate; 44°N - 35°N
Length at first maturity / Size / Weight / Age: Maturity: Lm ?, range 33 - ? cm. Max length: 52.0 cm TL male/unsexed (Ref. 7038)
Northwest Pacific: southern Hokkaido, Japan to the East China Sea. Occurs in the sublittoral zone (Ref. 11230). The skin, gonads, liver, intestines and even the blood contain deadly toxin, but the fish is still utilized fresh for human consumption, especially in Japan (Ref. 9988).
Masuda, H., K. Amaoka, C. Araga, T. Uyeno and T. Yoshino, 1984. The fishes of the Japanese Archipelago. Vol. 1. Tokai University Press, Tokyo, Japan. 437 p. (text).
IUCN Red List Status (Ref. 90363)
Threat to humans: Poisonous to eat (Ref. 559)
Estimates of some properties based on empirical models:
Phylogenetic diversity index (Ref. 82805): 0.5000 [Uniqueness, from 0.5 = low to 2.0 = high].
Bayesian length-weight: a=0.01273 (-0.15020 - 0.17567), b=2.97 (2.88 - 3.07), based on LWR estimates for species & family-BS (Ref. 93245).
Trophic Level (Ref. 69278): 3.4 ±0.4 se; based on size and trophs of closest relatives.
Resilience (Ref. 69278): Medium, minimum population doubling time 1.4 - 4.4 years (assuming tm=3).
Vulnerability (Ref. 59153): Moderate to high vulnerability (47 of 100).
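The Bayesian length-weight parameters above plug into the standard length-weight relationship W = a·L^b (W in grams, L in cm). The sketch below is my own illustrative calculation using the entry's point estimates, not FishBase output:

```python
# Length-weight relationship W = a * L^b with the entry's estimates.
# a and b are the point estimates from the record above; the confidence
# intervals given there are ignored in this simple sketch.
a, b = 0.01273, 2.97

def estimated_weight_g(length_cm):
    """Estimated body weight in grams for a fish of the given total length."""
    return a * length_cm ** b

# Weight predicted at the recorded maximum length of 52 cm:
w_max = estimated_weight_g(52.0)
print(f"predicted weight at 52 cm: {w_max / 1000:.1f} kg")
```

For a 52 cm puffer this predicts a weight on the order of 1.5 kg, a plausible magnitude for the species.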
<urn:uuid:28d3bd48-28e8-4319-82a8-7f617f4ef5d0>
3.046875
548
Knowledge Article
Science & Tech.
52.467004
Life-forms We Can’t Live Without
Plants are one of two major Kingdoms of life forms. There are about 300,000 plant species on Earth. Plants are the only life forms that can produce their own food using energy from sunlight. Plants produce almost all of the oxygen in the air that humans and other animals breathe. Plants are also an important source of food, building materials, and other resources that make life possible for Earth’s animals. The plant kingdom consists of a wide range of species, and new plant species are being discovered every day. All fit into one of several major groups.
Mosses & Liverworts: Mosses and liverworts are green plants that are usually small. Their leaves are often just one cell thick. Neither mosses nor liverworts have any woody tissue, so they never grow very large. There are about 14,000 species.
Ferns: Ferns are a very ancient family of plants. Early fern fossils show that ferns are older than land animals and far older than the dinosaurs. They were thriving on Earth for 200 million years before flowering plants evolved. Ferns live in sheltered areas under the forest canopy, along creeks and streams, and in other wet places. They cannot grow well in dry areas. There are about 12,000 species.
Cone Plants: Most cone plants, or conifers, are trees. They represent some of the oldest and largest living species on Earth. Conifers are often called evergreen trees, because their leaves (thin needles) usually remain on the trees all year. They have no flowers or fruits. Instead, seeds appear on cones and are scattered by the wind or by animals. There are about 650 species.
Flowering Plants: Flowering plants include many of the most familiar plants. The distinctive feature of this plant group is the flower, a cluster of specialized leaves that help in reproduction. Not all flowers are as bright and obvious as the sunflower blossoms in the picture. Oaks, ivy, and grasses also produce flowers. Their flowers are not as showy, so people don’t always notice them. 
All flowering plants produce seeds from which new plants grow. There are about 270,000 species.
Shoots, Roots, and Other Important Plant Parts
The basic plant structure includes two organ systems: the shoot system and the root system. The shoot system consists of the parts of the plant that are above ground, such as leaves, buds, and stems. In flowering plants, flowers and fruits are also part of the shoot system. The root system is made up of those parts of the plant below ground, such as roots and tubers.
Ready, Set, Grow!
Most plants grow from seeds, bulbs, or spores. Since plants cannot move, they use animals, birds, and the wind to scatter, or disperse, their seeds. Seeds come in all sizes. Some flower seeds are as small as grains of salt. Others, such as coconuts, are quite large. After it germinates, the first thing a growing seed does is send out a root. The root anchors the plant and absorbs nutrients and water from the soil. Next, a sprout with the first leaves grows. The leaves reach toward the sunlight.
Food from the Sun
Plants are the only organisms that have a green pigment called chlorophyll in many of their cells. Chlorophyll is found mainly in the leaves. It allows plants to make food (types of starch) from sunlight, water, and a gas called carbon dioxide. This special plant process is called photosynthesis. Plants release the gas oxygen during photosynthesis.
Minerals from the Soil
Minerals from the soil help build the solid material in plant roots, stems, and leaves. Carbon, hydrogen, and oxygen from the air and water make up over 90% of most plants.
Plants Make Life Possible
One of the most important things plants do is create oxygen. This makes life on Earth possible for animals. Large areas of plants such as forests and grasslands are needed for creating oxygen. 
Scientists and conservationists worry that if large areas of uncut forest are not protected, the whole planet’s survival system could be harmed.
Food for Life
A cluster of fruit ripens on this grapevine. Each grape contains seeds for a new plant. Many animals, including people, love to eat the fruit of many different plant species. In addition to making food for themselves, plants make food for animals. Animals eat many different plant parts. For example, cows, horses, and antelope eat the leaves of grass. Primates, such as monkeys, eat fruits and leaves. People eat almost all parts of a plant, including underground roots and tubers (potatoes, carrots, and radishes). We also eat leaves (lettuce, spinach), fruits (oranges, apples, bananas), and seeds (rice, wheat, and corn). Even the bark can be good! The cinnamon on a breakfast roll comes from the bark of a cinnamon tree. However, not all plants are good to eat. Some plants are poisonous.
Plants are the largest and oldest organisms on Earth. The tallest plant is a coast redwood tree in California in the United States. It stands 112 m tall. The oldest organism on Earth is thought to be the creosote bush. This plant lives in California’s Mojave Desert. One of these small circular bushes was found to be nearly 12,000 years old!
Plants and People
People are able to live all over the Earth because plants make seeds that can be stored and carried to other places. This has helped various species of plant spread to many parts of the world. Think of wheat, rice, corn, and beans. These plants are grown all over the world. Without these important foods, people’s lives would be very different. The seeds of these plants are good to eat, full of nutrients, and can be made into many different foods.
The Source of Many Things
For centuries, plants have been one of the most useful natural resources in the world. 
Even today, plants are one of the most important materials people use for building houses, making clothes, cooking, and heating. If you take a moment to think about all the things that you use each day, you’ll find that plants are the source of many of them. Here are some examples:
- Breakfast cereal (rice, corn, wheat, soy)
- The cardboard box the cereal came in (wood fiber from trees)
- The chair and table you sat at for breakfast (wood from trees)
- The books and paper you use at school (wood fiber from trees)
- The air you are breathing now (oxygen from plants)
- The clothes you are wearing (cotton, linen, and hemp from plants)
More than Just a Pretty Face
Many drinks we enjoy are made from plants and their seeds – coffee, cocoa, cola, and fruit juices such as orange juice and apple juice. People around the world give beautiful flowers as gifts for birthdays and weddings. Not a day goes by in which our lives are not affected by flowering plants. Nearly all of our food comes from flowering plants. Useful products such as rope and burlap are also made from the fibers of flowering plants. A large number of widely used drugs, including medicines such as aspirin, come from flowering plants. Many commercial dyes are extracted from flowering plants.
Plants and the Environment
It is impossible to think about an environment without plants. Even environments like the hot desert or freezing polar regions have plants. These plants have adaptations that help them survive the harsh conditions. Plants create the base for most environments. All of the different types of plants in an environment are commonly referred to by scientists as “plant communities.” Over the past few decades, people have begun to think more about the important relationship plants have to people, animals, and the overall health of the environment. Here are a few of the important ways plants help the environment. Can you think of other ways? 
- Trees and other plants hold the soil in place so that wind and rain don’t create severe erosion.
- Fallen leaves and rotting wood help enrich the soil that other plants need to grow.
- Shade from trees and large bushes keeps us cool. Shade also provides places for wildlife to live and hide.
- Trees create homes. A large old tree 40 m tall may be the home of over 1,000 species!
Plants in Peril
Rosy periwinkle is an important plant from the tropical forests of Madagascar. The plant produces chemicals that help fight the disease cancer. Plants are disappearing. Every year over 11.5 million hectares of tropical forest are cut and then burned to clear land for farming and cattle grazing. This kind of change is called deforestation. With fewer trees and other plants to convert carbon dioxide to oxygen, too much carbon dioxide builds up. Deforestation is one cause of global warming. The conservation of plants and forests is now something more people and governments are beginning to discuss very seriously as one way to protect the environment.
<urn:uuid:96d99963-841a-4149-b920-e7ea615abaf0>
3.890625
2,016
Knowledge Article
Science & Tech.
61.760808
Weather Almanac for May 2007
THE SMELLS OF WEATHER
Ahhh, May. With luck, the April showers have brought an outbreak of May flowers. My favourite Spring aroma is the smell of lilacs in bloom. The airs, however, are not always filled with flowery fragrance. I'm not just talking about that skunky air that makes up air pollution, or those foul emissions from natural sources such as swamps. I'm not even talking here about those wonderful smells coming from new-mown hay, the sea or flowers. I am talking about weather conditions that have a characteristic odour.
"Well," you say, "I know I can see the weather, I can feel it, I can hear it, but can I really smell weather?" My answer is a qualified "Yes." Qualified in that often the underlying source of the smell of weather comes from the soil, new-mown hay, flowers, or the ocean. But the reason we smell these better at times is due to changes in the atmosphere that bring these smells to our noses, and that is a function of the weather. Certain weather conditions promote the accumulation of odorous compounds in the air, and others inhibit them.
One condition that is most likely to have a definite smell to it, whether good or bad, occurs when winds become light and the air stagnant. Such conditions in an urban area are the bane of air quality regulators because the local air will accumulate any pollutants emitted into it. At best the buildup of pollutants can be a smelly nuisance; at worst it can be deadly. Such conditions most often occur late at night or in the early morning, when nighttime radiational cooling builds up a temperature inversion which traps odours close to the ground. Great if you are surrounded by lilacs, bad if surrounded by industrial sources or even a busy restaurant. 
Particular smells associated with a change in wind direction may indicate coming weather changes, though this is often a local effect; that is, it depends on the location of the odour source relative to you. For example, if an east wind brings rain and a swamp lies to your east, the smell of the swamp may be a good indication of rain in the near future. Of course, you can, in some circumstances, work this backward: a particular smell might give you an indication of a specific wind direction blowing in your vicinity.
There are other smell-weather correlations that have validity in sensing the coming weather. For ages, people have based weather lore on their sense of smell. Perhaps you have heard these sayings:
"Flowers smell best just before a rain."
"When ditches and ponds offend the nose, Look for rain and stormy blows."
Many weather sayings are downright wrong, but others have a ring of truth. These smelly adages fall in the latter category for several reasons. First, warm and humid air enhances our sense of smell, the humidity carrying many odorous molecules to our noses. In most instances, warmer conditions beget more odours because the molecules that cause the odour sensation are more readily vaporized when it is warmer. And humidity is important because many odorous compounds must be connected to water molecules to be smelled. Our noses work better in moist conditions than in dry.
From a meteorological perspective, it is well established that humidity increases as the threat of rain increases. Lowering barometric pressure and rising air currents are also indicators of coming precipitation, and they both enhance the transmission of natural smells far afield. When the wind blows from a given direction with an approaching weather system, we may smell the gases given off by distant wet plants and soil, or some other source, long before the rains move over us. We know that there are several species of bacteria that live in the soil that emit their spores when it rains. 
The impact of raindrops around these bacteria kicks the spores into the air, where they are wafted along for a great distance. Such spores have the musty odour that we often associate with coming rainfall.
There are a few weather smells that are totally atmospheric. The most common is the smell of ozone and other compounds during a severe lightning storm. In this case, the lightning's electrical energy builds new compounds in the air from those already present. Ozone, for example, may form when the lightning discharge splits an oxygen molecule into two oxygen atoms, which in turn can combine with another oxygen molecule to form tri-atomic oxygen: ozone. Sunlight can do the same wizardry to trace molecules in the air. Airs filled with photochemical smog are the result of this solar cooking of the chemical soup surrounding human settlements. Compounds emitted by vegetation can also be cooked in the atmospheric kettle to form a variety of new compounds. (You might argue that these sun-produced odours have human or natural origins, which is true, but I think of them as the precursors to the smell, not the cause of the smell itself.)
The absence or muting of strong odours can also be a weather indicator. Colder and drier conditions inhibit the release of smelly compounds and their ability to be detected by our noses, resulting in the air having a "clean" smell. Rising pressure may also inhibit odours from being released from the ground or surface water in great enough concentration to be detected by our noses. These atmospheric properties are often indicative of fair weather, so a clean smell to the outdoor air can give an indication of a coming clearing of wet weather. A heavy or prolonged rainfall may also wash out many offending molecules from the air, leaving it clean-smelling like freshly washed clothes. The ultimate in cold and dry can be found during the winter, when ice and snow dominate the weather. 
The winter is characterized by dormancy in plants and often frozen surfaces that further hide away potential odorous sources. (Of course, there are always potential odours from human sources: industry, restaurants, traffic, etc.) But these inhibitions do not prevent some people from "smelling" a snowstorm's approach. These olfactory-enhanced individuals are likely able to pick up on clues similar to those present with approaching rain, though the smells are much subtler, and likely missed by the rest of the population. So the next time you hear someone say "It smells like rain," you might keep that umbrella handy.
Counterintuitive Examples and Discrepant Events - Diluting a weak electrolyte (HC2H3O2) with water increases the electrical conductivity. (See Suggested Laboratory Activity, Strong vs. - Carbon dioxide bubbled through limewater causes a precipitate to form. Continued bubbling causes the precipitate to disappear. (See Underground sculpture, ChemMatters (1984), 1(2), pp. 10-11. - Some active metals react with either acids or bases to produce hydrogen gas. (For example, aluminum will react with either hydrochloric acid or sodium hydroxide, releasing hydrogen gas. See Suggestions for Other Demonstrations, Making hydrogen gas from an acid and a base.) - The same amount of hydrogen gas will be produced when a sample of an active metal is added to equal volumes of concentrated acid solution and dilute acid solution (if the metal is the limiting reactant). - Bicarbonate salts can be used to neutralize either an acid or a base. This can be illustrated by the following equations: HCO3- + H+ -> H2O + CO2 (neutralizing an acid) and HCO3- + OH- -> CO3(2-) + H2O (neutralizing a base). Metaphors and Analogies - [H+] vs. pH: a see-saw relationship (see Pictures in the Mind below); one goes up (increases) while the other goes down (decreases). - A proton shifting from an acid to a base can be likened to a baseball being thrown from a pitcher (the acid) to a catcher (the base). - Universal indicator color changes follow the colors in the rainbow as the pH moves from 2 to 10. The name ROY G BIV helps keep the colors straight: Red, Orange, Yellow, Green, Blue, Indigo, Violet. A pH of 7 produces a yellow-green hue. Pictures in the Mind 1. Ionization. Graphical pictorial representation of the behavior of acids of different strengths in aqueous solution.
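The [H+] vs. pH see-saw follows directly from the definition pH = -log10[H+]. A minimal sketch (the function name and the concentrations are illustrative, not tied to any specific demonstration above; autoionization of water, which matters for very dilute acids, is ignored):

```python
import math

def pH(h_conc):
    """pH from hydrogen-ion concentration in mol/L: pH = -log10[H+]."""
    return -math.log10(h_conc)

# The see-saw: a tenfold drop in [H+] raises pH by exactly one unit.
for conc in (1e-2, 1e-3, 1e-4):
    print(f"[H+] = {conc:.0e} M  ->  pH = {pH(conc):.1f}")
```

As [H+] goes down, pH goes up, one unit per decade of dilution.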
Most VB6 procedures—subs and functions—take a fixed number of arguments. The arguments and their types are specified in the procedure definition and, when the procedure is called, the arguments that are passed must precisely match the definition. In some situations, you may want to define one or more optional arguments that can be included or omitted when a procedure is called. You can do so with the Optional keyword. A procedure can have one or more optional arguments. You must place the optional arguments at the end of the argument list after any non-optional arguments. For example: Sub foo(A As Integer, B As String, Optional C As Single, Optional D As Date) You can also specify a default value for an optional argument that will be used if the argument is not passed, like this: Sub goo(Optional rate As Single = .05) If the argument rate is passed to the procedure, VB will use the passed value; if it's omitted, VB will use the value .05. If an optional argument is omitted and does not have a default value specified, the VB value for declared but uninitialized variables for that type is used, such as 0 for numeric variables. Note: You cannot use optional arguments in procedures that use ParamArray arguments.
This first image shows CONTOUR breezing past a comet nucleus (artist's rendition). The second image is a picture of all of the orbits involved. It shows how CONTOUR will be able to intersect the paths of the three comets. Images Courtesy of NASA The COmet Nucleus TOUR (CONTOUR) will be launched in 2002. The spacecraft will spend six years studying three different comets. It will take pictures of the comets' nuclei and also collect comet dust. CONTOUR is similar to Giotto, the mission with which scientists studied Comet Halley. Scientists say we need to study comets because they will tell us more about the Earth and how it formed.
Optical science and engineering affect almost every aspect of our lives. Millions of miles of optical fiber carry voice and data signals around the world. Lasers are used in surgery of the retina, kidneys, and heart. New high-efficiency light sources promise dramatic reductions in electricity consumption. Night-vision equipment and satellite surveillance are changing how wars are fought. Industry uses optical methods in everything from the production of computer chips to the construction of tunnels. Harnessing Light surveys this multitude of applications, as well as the status of the optics industry and of research and education in optics, and identifies actions that could enhance the field's contributions to society and facilitate its continued technical development.
The Collatz Conjecture is one of the unsolved problems in mathematics, especially in number theory. The Collatz Conjecture is also termed the 3n+1 conjecture, Ulam Conjecture, Kakutani's Problem, Thwaites Conjecture, Hasse's Algorithm, or the Syracuse Problem. If you keep repeating this procedure, you shall reach the number 1 at last. » Starting with 1 — we get 1 in the first step. » Starting with 2 (even) — we get 1 in the second step, in one operation. » Starting with 3 (odd) — we get 1 in the 8th step. Similarly, you can check this conjecture for every positive integer; you should get 1 at last according to this conjecture. Let n be a positive integer. Then it is either even or odd. A. If n is even: Divide n by 2 and get n/2. Is it 1? Then the conjecture holds for that positive integer. If not, and the result is even — redo the same work. If it is odd, then — see next step! B. If n is odd: Multiply n by 3 and then add 1 to find 3n+1. Is it 1? Then the conjecture holds for that positive integer. If the result is even — redo the same work you did in A. If it is odd, then — redo the work of B! Problem in this Conjecture This conjecture has been tried on various kinds of numbers, and those numbers have satisfied the Collatz Conjecture. But the question is: is this conjecture applicable to every positive integer? Mathematicians have found no good use for the Collatz Conjecture in mathematics, so it is sometimes considered a useless conjecture. But overall, it is unsolved — and we can't leave any unknown or unsolved problems & principles in math. - Unsolved Problems in Mathematics (innoworld.wordpress.com) - Understanding Poincaré Conjecture (wpgaurav.wordpress.com) - The Depth Of The Möbius Function (rjlipton.wordpress.com) - The AC0 Prime Number Conjecture (gilkalai.wordpress.com)
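The procedure above is easy to mechanize. A short sketch in Python (the function name is my own) that generates the full sequence for any starting integer:

```python
def collatz_sequence(n):
    """Return the Collatz sequence starting at n, ending at the first 1.

    Even n -> n/2; odd n -> 3n+1; repeat until 1 is reached.
    """
    if n < 1:
        raise ValueError("n must be a positive integer")
    seq = [n]
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        seq.append(n)
    return seq

print(collatz_sequence(3))  # [3, 10, 5, 16, 8, 4, 2, 1] -- 1 arrives as the 8th term
```

This matches the worked case in the text: starting with 3, the value 1 appears at the 8th step.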
This experiment involves the synthesis and characterization of a simple porphyrin: 5,10,15,20-tetraphenylporphyrin. This product is a structural analog of important biomolecules, such as the heme portion of the oxygen-transport molecule hemoglobin and chlorophyll of plants. The carbon skeleton of the porphyrin ring is prepared by eight electrophilic aromatic substitution reactions between four equivalents of pyrrole and benzaldehyde. Traditionally, porphyrin syntheses have been carried out in corrosive, high-boiling solvents. Toxic reagents have been used to oxidize the intermediate porphyrinogen ring to porphyrin. In this experiment, porphyrinogen is prepared by the solvent-less reaction between pyrrole and benzaldehyde in the gas phase, and then oxidized to porphyrin by air oxygen. The product is identified by thin-layer chromatography, purified by column chromatography, and further characterized by ultraviolet-visible spectroscopy. This gas-phase reaction eliminates the need for hazardous solvents, avoids the use of corrosive reagents, and uses air as a mild and safe oxidant for converting porphyrinogen to porphyrin. The link to the laboratory procedure includes pre- and post-lab questions. Summary prepared July 2005 by Don T. Fujito, Chemistry Department, La Roche College. Doxsee, K. M.; Hutchison, J. E. Green Organic Chemistry - Strategies, Tools, and Laboratory Experiments, Print 2004; pp 152-158.
The two chimpanzee flights in Project Mercury were to reveal significant medical data. The suborbital flight of Ham was without complications, but it was considerably less complex than Enos' orbital flight. In the Mercury-Atlas 5 (MA-5) orbital flight, Enos performed a complex multiple operant task as he twice orbited the earth. The 42-pound subject, whose age was estimated to be 63 months, had been exposed to simulated launch accelerations on the centrifuge at the University of California. He had also served as a subject for a laboratory model of a 14-day flight. Over a 16-month period he had received a total of approximately 1,263 hours of training, of which 343 hours were accomplished under restraint conditions in a model of the actual couch used in flight.6 According to Henry, the results of the two animal flights (Ham and Enos) showed that: (2) Blood pressures, in both the systemic arterial tree and the low-pressure system, were not significantly changed from preflight values during 3 hours of the weightless state. (3) Performance of a series of tasks involving continuous and discrete avoidance, fixed ratio responses for food reward, delayed response for a fluid reward, and solution of a simple oddity problem, was unaffected by the weightless state. (4) Animals trained in the laboratory to perform during the simulated acceleration, noise, and vibration of launch and reentry were able to maintain performance throughout an actual flight. (2) A 7-minute (MR-2) and a 3-hour (MA-5) exposure to the weightless state were experienced by the subjects in the context of an experimental design which left visual and tactile references unimpaired. There was no significant change in the animal's physiological state or performance as measured during a series of tasks of graded motivation and difficulty.
(3) The results met program objectives by answering questions concerning the physical and mental demands that the astronauts would encounter during space flight and by showing that these demands would not be excessive. (4) An incidental gain from the program was the demonstration that the young chimpanzee can be trained to be a highly reliable subject for space-flight 6. Frederick H. Rohles, Jr., Marvin E. Grunzke, and Herbert H. Reynolds, "Performance Aspects of the MA-5 Flight," ch. 9 in Results of the Project Mercury Ballistic and Orbital Chimpanzee Flights, NASA SP-39, 1963. 7. James P. Henry, "Synopsis of the Results of the MR-2 and MA-5 Flights," ch. 1 in Results of the Project Mercury Ballistic and Orbital Chimpanzee Flights, NASA SP-39, 1963.
The closer we come to inventing a viable tractor beam - a ray of light that can move objects - the more obvious it is that real tractor beams will not look anything like the glowing blue rays from science fiction. So how will beams of light actually move matter? Let's find out by looking at three ways that scientists can already move solid objects using nothing more than light. Take the laser thruster engine, which is activated by a laser beam, but moved by traditional propulsion. The concept is based on an experimental motor, where lasers fire pulses into solid propellant, shoving the propellant out of the craft in one direction to thrust the spacecraft the opposite way. With multiple propellants, the craft can be steered in different directions-and if the engine is modified to allow an external laser pulse to trigger the propellant expulsion, the engine can be steered by an outside source. With some modifications, these thrusters could be attached to space junk that needs to be pushed out of orbit, or even to an astronaut's suit so an adrift and unconscious astronaut could be steered to safety by shipmates. Moving tiny objects with optical tweezers But in a laser thruster engine, the laser itself is not pushing the object, merely triggering an engine that does all the work. Optical tweezers, on the other hand, actually move objects with the power of light alone. Although the photons that make up a focused beam of light have no mass, they do carry momentum, and when they hit an object and are forced to bend around it, their direction, and therefore their momentum, changes - which means that the object feels a minuscule force as part of conservation of momentum. In an "optical trap," a focused laser beam, more intense at the center, is pointed at a minuscule particle ranging from 10 to 100 nanometers long. As the particle deflects photons and they scatter around it, they also are holding it in place in an optical trap.
By moving the laser beam, researchers can actually move the particle as well. These "optical tweezers" have been used to confine cells, track the motion of bacteria, apply small forces, modify cell membranes, and study molecular motors. Moving larger objects using laser beams with hollow centers Despite the many applications of optical tweezers, manipulating objects with lasers is a lot less cool when those objects are so small. Another technique improves tractor beams' abilities, moving glass beads a hundred times the size of the nanoscale objects that optical tweezers can push around. Rather than relying on the momentum of photons, this method uses their heat. A laser beam with a hollow center - two counter-propagating beams of light are overlapped so that they cancel out in the middle - is focused on a tiny glass bead so that the air around the bead remains cool while the air molecules farther away are heated by the laser and bounce around much more quickly. When the bead drifts into the hot air, it's like moving into a mosh pit of hot air molecules: the frenetic motion quickly pushes the bead back into the still, cool air in the center of the laser beam. The bead can also be pushed along the length of the composite hollow laser beam by changing the intensity of the contributing lasers, heating the air on one side of the particle and pushing it along the beam with different velocities. This successful technique is estimated to be able to move small objects up to 10 meters in air - but because of its reliance on air molecules, it would not work in the vacuum of space. Pulling objects with Bessel beams Clearly, lasers can successfully push small objects along, but pulling them is another story. A couple methods deal with this hurdle by manipulating specific kinds of objects. For example, if the object can be induced to carry an electrical charge, the laser can drag it a short distance. 
But the most promising technology for pulling an object with laser light is that of Bessel beams. While the cross-section of an ordinary laser looks like a filled in circle, the cross-section of a Bessel beam resembles a target, a set of concentric circles. The innermost circle of a Bessel beam also remains focused for longer than a typical laser would. Bessel beams can even travel through objects, rather than being stopped by them. All of these traits make Bessel beams ideal for tractor beam applications. For example, the increased focus of a Bessel beam makes it better suited for pushing on just part of an object, rather than the whole thing, increasing control of the object's movement. In addition, because a Bessel beam moves "through" objects, reconstructing itself on the opposite side, the light waves that make up the beam cannot only push the object, but they can also superimpose with a separate light source in order to build up more energy on the far side of the object than the near side, pulling the object toward the laser source. This nifty trick lets a Bessel beam both push and pull an object. Although tractor beam technology continues to advance, it remains primarily limited to very tiny objects. So by all means, dream of tractor beams - just don't dream too big.
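The photon-momentum argument behind these techniques can be made quantitative: a light beam of power P delivers momentum P/c per second, so a target that absorbs the beam feels a force of P/c, and one that reflects it straight back feels up to 2P/c. A rough back-of-the-envelope sketch (the function name and the 1 W example are mine, purely illustrative):

```python
C = 299_792_458.0  # speed of light, m/s

def radiation_force(power_watts, reflectivity=1.0):
    """Force on a flat target hit head-on by a beam of the given power.

    Pure absorption gives P/c; perfect retroreflection doubles the
    momentum transfer to 2P/c. Partial reflectivity interpolates.
    """
    return (1.0 + reflectivity) * power_watts / C

# Even a 1 W laser pushes a perfect mirror with only a few nanonewtons:
print(f"{radiation_force(1.0):.2e} N")
```

The tiny result (on the order of nanonewtons per watt) is why light alone can steer nanoparticles and glass beads but not, so far, anything you could hold in your hand.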
Section 4: Gravitational and Inertial Mass A subtlety arises when we compare the law of universal gravitation with Newton's second law of motion. The mass m1 that appears in the law of universal gravitation is the property of the particle that creates the gravitational force acting on the other particle; for if we double m1, we double the force on m2. Similarly, the mass m2 in the law of universal gravitation is the property of the particle that responds to the gravitational force created by the other particle. The law of universal gravitation provides a definition of gravitational mass as the property of matter that creates and responds to gravitational forces. Newton's second law of motion, F = ma, describes how any force, gravitational or not, changes the motion of an object. For a given force, a large mass responds with a small acceleration and vice versa. The second law provides a definition of inertial mass as the property of matter that resists changes in motion or, equivalently, as an object's inertia. Figure 8: Equality of gravitational and inertial mass. Source: © Blayne Heckel. Is the inertial mass of an object necessarily the same as its gravitational mass? This question troubled Newton and many others since his time. Experiments are consistent with the premise that inertial and gravitational mass are the same. We can measure the weight of an object by suspending it from a spring balance. Earth's gravity pulls the object down with a force (weight) of W = m_g g, where g is the local gravitational acceleration and m_g the gravitational mass of the object. Gravity's pull on the object is balanced by the upward force provided by the stretched spring. We say that two masses that stretch identical springs by identical amounts have the same gravitational mass, even if they possess different sizes, shapes, or compositions. But will they have the same inertial mass? We can answer this question by cutting the springs, letting the masses fall, and measuring the accelerations.
The second law says the net force acting on the mass is the product of the inertial mass, m_i, and acceleration, a, giving us: m_i a = m_g g, or a = (m_g / m_i) g. But g is a property of the Earth alone and does not depend upon which object is placed at its surface, while experiments find the acceleration, a, to be the same for all objects falling from the same point in the absence of air friction. Therefore, m_g / m_i is the same for all objects, and thus so is a. We define the universal gravitational constant, G, to make m_g / m_i = 1. The principle of the universality of free fall is the statement that all materials fall at the same rate in a uniform gravitational field. This principle is equivalent to the statement that m_g / m_i is the same for all materials. Physicists have found the principle to be valid within the limits of their experiments' precision, allowing them to use the same mass in both the law of universal gravitation and Newton's second law. Measurements of planets' orbits about the Sun provide a value for the product GM_S, where M_S is the mass of the Sun. Similarly, earthbound satellites and the Moon's orbit provide a value for GM_E, where M_E is the mass of the Earth. To determine a value for G alone requires an a priori knowledge of both masses involved in the gravitational attraction. Physicists have made the most precise laboratory measurements of G using an instrument called a "torsion balance," or torsion pendulum. This consists of a mass distribution suspended by a long thin fiber. Unbalanced forces that act on the suspended mass distribution can rotate the mass distribution; the reflection of a light beam from a mirror attached to the pendulum measures the twist angle. Because a very weak force can twist a long thin fiber, even the tiny torques created by gravitational forces lead to measurable twist angles. Figure 9: Schematic of a torsion balance to measure the gravitational constant, G. Source: © Blayne Heckel.
To measure G, physicists use a dumbbell-shaped mass distribution (or more recently a rectangular plate) suspended by the fiber, all enclosed within a vacuum vessel. Precisely weighed and positioned massive spheres are placed on a turntable that surrounds the vacuum vessel. Rotating the turntable with the outer spheres about the fiber axis modulates the gravitational torque that the spheres exert on the pendulum and changes the fiber's twist angle. This type of experiment accounts in large part for the currently accepted value of (6.67428 ± 0.00067) × 10^-11 N·m^2/kg^2 for the universal gravitational constant. It is the least precisely known of the fundamental constants because the weakness of gravity requires the use of relatively large masses, whose homogeneities and positioning are challenging to determine with high precision. Dividing GM_E found from satellite and lunar orbits by the laboratory value for G allows us to deduce the mass of the Earth: 5.98 × 10^24 kilograms.
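The closing calculation is easy to reproduce: surface gravity and the Earth's radius give the product GM_E = g R_E^2, and dividing by the laboratory value of G yields the Earth's mass. A quick check with round-number inputs (g and R_E below are standard textbook values, not taken from the text):

```python
G = 6.67428e-11   # N m^2 / kg^2, the laboratory value quoted above
g = 9.81          # m/s^2, gravitational acceleration at Earth's surface
R_E = 6.371e6     # m, mean radius of the Earth

GM_E = g * R_E**2  # the product that orbits and surface gravity determine
M_E = GM_E / G     # mass of the Earth

print(f"M_E = {M_E:.3e} kg")  # close to the 5.98e24 kg quoted in the text
```

The result lands within about half a percent of the quoted 5.98 × 10^24 kg; the small discrepancy comes from using a mean radius for a slightly oblate planet.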
Cosmologists can now proclaim with confidence (but with some surprise too) that, in round numbers, our universe consists of 5% baryons, 25% dark matter, and 70% dark energy. It is indeed embarrassing that 95% of the universe is unaccounted for: even the dark matter is of quite uncertain nature, and the dark energy is a complete mystery. The network of key arguments is summarised in Figure 1. Historically, the supernova evidence came first. But had the order of events been different, one could have predicted an acceleration on the basis of CDM evidence alone; the supernovae would then have offered gratifying corroboration (despite the unease about possible poorly-understood evolutionary effects). Figure 1. The network of arguments that point towards a flat Universe dominated by 'dark energy'. Our universe is flat, but with a strange mix of ingredients. Why should these all give comparable contributions (within a modest factor) when they could have differed by a hundred powers of ten? In the coming decade, we can expect advances on several fronts. Physicists may well develop clearer ideas on what determined the favouritism for matter over antimatter in the early universe, and on the particles that make up the dark matter. Understanding the dark energy, and indeed the big bang itself, is perhaps a remoter goal, but ten years from now theorists may well have replaced the boisterous variety of ideas on the ultra-early universe by a firmer best buy. They will do this by discovering internal inconsistencies in some contending theories, and thereby narrowing down the field. Better still, maybe one theory will earn credibility by explaining things we can observe, so that we can apply it confidently even to things we cannot directly observe. In consequence, we may have a better insight into the origin of the fluctuations, the dark energy, and perhaps the big bang itself. 
Inflation models have two generic expectations: that the universe should be flat, and that the fluctuations should be Gaussian and adiabatic (the latter because baryogenesis would occur at a later stage than inflation). But other features of the fluctuations are in principle measurable and would be a diagnostic of the specific physics. One, the ratio of the tensor and scalar amplitudes of the fluctuations, will have to await the next generation of CMB experiments, able to probe the polarization on small angular scales. Another discriminant among different theories is the extent to which the fluctuations deviate from a Harrison-Zeldovich scale-independent format (n = 1 in the usual notation); they could follow a different power law (i.e. be tilted), or have a "rollover" so that the spectral slope is itself a function of scale. Such effects are already being constrained by WMAP data, in combination with evidence on smaller scales from present-day clustering, from the statistics of the Lyman alpha absorption-line "forest" in quasar spectra, and from indirect evidence on when the first minihalos collapsed, signalling the formation of the first Population III stars that ended the cosmic dark age. In parallel, there will be progress in "environmental cosmology". The new generation of 10-metre class ground based telescopes will give more data on the universe at earlier cosmic epochs, as well as better information on gravitational lensing by dark matter. And there will be progress by theorists too. The behaviour of the dark matter, if influenced solely by gravity, can already be simulated with sufficient accuracy. Gas dynamics, including shocks and radiative cooling, can be included too (though of course the resolution isn't adequate to model turbulence, nor the viscosity in shear layers). Spectacular recent simulations have been able to follow the formation of the first stars.
But the later stages of galactic evolution, where feedback is important, cannot be modelled without parametrising such processes in a fashion guided by physical intuition and observations. Fortunately, we can expect rapid improvements, from observations in all wavebands, in our knowledge of galaxies, and the high-redshift universe. Via a combination of improved observations, and ever more refined simulations, we can hope to elucidate how our elaborately structured cosmos emerged from a near-homogeneous early universe.
A few more facts about group actions There's another thing I should have mentioned before. When a group G acts on a set S, there is a bijection between the orbit of a point x and the set of cosets of the isotropy group G_x in G. In fact, g·x = h·x if and only if h⁻¹g·x = x, if and only if h⁻¹g is in G_x, if and only if gG_x = hG_x. This is the bijection we need. This has a few immediate corollaries. Yesterday, I mentioned the normalizer of a subgroup H. When G acts on itself by conjugation we call the isotropy group of an element a of G the "centralizer" of a in G. This gives us the following special cases of the above theorem: - The number of elements in the conjugacy class of a in G is the number of cosets of the centralizer of a in G. - The number of subgroups conjugate to H in G is the number of cosets of the normalizer of H in G. In fact, since we're starting to use this "the number of cosets" phrase a lot it's time to introduce a bit more notation. When H is a subgroup of a group G, the number of cosets of H in G is written [G:H]. Note that this doesn't have to be a finite number, but when G (and thus H) is finite, it is equal to the number of elements in G divided by the number in H. Also notice that if H is normal, there are [G:H] elements in G/H. This is why we could calculate the number of permutations with a given cycle type the way we did: we picked a representative of the conjugacy class and calculated the number of cosets of its centralizer. One last application: We call a group action "free" if every element other than the identity has no fixed points. In this case, G_x is always the trivial group, so the number of points in the orbit of x is [G:G_x], which is the number of elements of G. We saw such a free action of Rubik's Group, which is why every orbit of the group in the set of states of the cube has the same size.
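The orbit/coset bijection can be checked by brute force on a tiny example — the symmetric group S3 acting on {0, 1, 2} (my own toy setup, not from the original post). The bijection implies the counting identity: the orbit size equals the index of the isotropy group, so |orbit(x)| · |G_x| = |G|:

```python
from itertools import permutations

# S3 as all permutations of (0, 1, 2); the permutation g sends the point x to g[x].
G = list(permutations(range(3)))

def orbit(x):
    """All points reachable from x under the action of G."""
    return {g[x] for g in G}

def stabilizer(x):
    """The isotropy group G_x: elements of G that fix x."""
    return [g for g in G if g[x] == x]

x = 0
print(len(orbit(x)), len(stabilizer(x)), len(G))  # 3 2 6, and 3 * 2 == 6
```

Here the action is transitive but not free: the two permutations fixing 0 form the stabilizer, and 6 / 2 = 3 cosets match the 3 points of the orbit.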
Ivell's sea anemone (Edwardsia ivelli) Ivell's sea anemone description A simple animal, the sea anemone is made up of a column with a mouth, used to take in food and expel waste, and several tentacles. In Ivell's sea anemone, there are twelve transparent tentacles: nine in an outer ring lying flat on the substrate, and three in an inner ring, held vertically or over the mouth. Each tentacle has a few stripes of cream colour across it (2). Ivell's sea anemone biology Sea anemones are largely sedentary, moving occasionally by creeping extremely slowly or by inflating slightly and allowing currents to move them. They feed by holding out their tentacles to catch passing food particles and transferring them to the mouth. Little is known of the habits of this species, other than that it is a passive predator that captures its prey in its tentacles, lives in a burrow and is very wary. The only way one might see this elusive animal is by scooping up some sediment in a bucket, leaving it to stand for some time, and then carefully peering over the rim to catch the anemone unawares (4). Ivell's sea anemone was first discovered by Dick Manuel in 1975 when he and his colleague Professor Richard Ivell were examining Widewater for anemones.
Manuel named the anemone after Prof Ivell, who has since returned to look for the anemone and to encourage the protection of the Widewater Lagoon (5). Ivell's sea anemone range Found only in the Widewater Lagoon in West Sussex, Ivell's sea anemone may no longer be extant, having not been found since 1983 despite detailed searches (2). Ivell's sea anemone habitat Ivell's sea anemone was found in an isolated saline lagoon, where it sheltered on the bottom in long burrows in deep, soft mud (2). Ivell's sea anemone threats The habitat of Ivell's sea anemone is threatened by habitat degradation as a result of reduced seawater inflow from adjacent marshes. Pollution from nearby gardens following the run-off of pesticides and fertilisers has also caused reduced water quality (3). Ivell's sea anemone conservation Inclusion in the UK's Biodiversity Action Plan scheme has resulted in the drafting of a management plan for Widewater Lagoon. The site has now been proposed as a priority Special Area of Conservation under the EC Habitats Directive. Plans to restore the site include the improvement of the water quality and quantity, and searches will continue for Ivell's sea anemone, with plans for translocation if it is ever rediscovered (3). - IUCN Red List (February, 2005) - Encyclopedia of Marine Life of Britain and Ireland (February, 2005) - UK BAP (February, 2005) - Curson, J. (2002) Pers. comm. - Lancing Village (February, 2005)
Rovers on wheels
NASA’s four martian rovers all took engineering inspiration from the Soviet Union’s 1970s Moon missions.
June 25, 2012
The engineering origins of NASA’s Mars rovers lie in the Soviet Union’s Moon exploration program, part of its competition with the United States in the 1960s and 1970s.
The wildfires that make the news are vilified for good reason. These out-of-control burns pose threats to people’s lives and property. Often, they are also dangerous to natural ecosystems. Though some of these fires are caused by arson or by careless people, others occur because of the inevitable build-up of plants in areas that have not been allowed to burn for one reason or another. If these areas were allowed to burn naturally, the resulting fires would actually benefit the surrounding environment. However, if allowed to build up, they can cause uncontrollable conflagrations. We would often be better off letting nature take its course. Let’s examine some benefits of natural wildfires. As mentioned before, if we allow wildfires to happen naturally, they become a part of the natural order of things. These fires can burn large areas, but they promote biodiversity by consuming species that have overstepped their bounds. This process is similar to natural selection. For example, if an animal is transported into an ecosystem where it has no natural enemies, it may grow unchecked and wreak havoc on the environment. If fires don’t burn certain plants, they can proliferate and overwhelm other plants. Fires also are useful because they break down vegetation that will eventually enrich the soil with its minerals. This allows the soil to support a more diverse array of plant life, which in turn supports more animal species. Again, the downside of allowing an environment to stagnate is that one species or another may become too dominant and crowd out other species. This is not, in itself, such a bad thing. However, by doing this, the successful species can unintentionally get rid of the food chain below it, thereby depriving itself of a food supply in the future. If we don’t allow fires to happen, we work against nature.
Common Lisp the Language, 2nd Edition A string is simply a vector of characters. More precisely, a string is a specialized vector whose elements are of type string-char. X3J13 voted in March 1989 (CHARACTER-PROPOSAL) to eliminate the type string-char and to redefine the type string to be the union of one or more specialized vector types, the types of whose elements are subtypes of the type character. Subtypes of string include simple-string, base-string, and simple-base-string. base-string == (vector base-character) simple-base-string == (simple-array base-character (*)) An implementation may support other string subtypes as well. All Common Lisp functions that operate on strings treat all strings uniformly; note, however, that it is an error to attempt to insert an extended character into a base string. The type string is therefore a subtype of the type vector. A string can be written as the sequence of characters contained in the string, preceded and followed by a " (double quote) character. Any " or \ character in the sequence must additionally have a \ character before it. "Foo" ;A string with three characters in it "" ;An empty string "\"APL\\360?\" he cried." ;A string with twenty characters "|x| = |-x|" ;A ten-character string Notice that any vertical bar | in a string need not be preceded by a \. Similarly, any double quote in the name of a symbol written using vertical-bar notation need not be preceded by a \. The double-quote and vertical-bar notations are similar but distinct: double quotes indicate a character string containing the sequence of characters, whereas vertical bars indicate a symbol whose name is the contained sequence of characters. The characters contained by the double quotes, taken from left to right, occupy locations within the string with increasing indices. The leftmost character is string element number 0, the next one is element number 1, the next one is element number 2, and so on. 
Note that the function prin1 will print any character vector (not just a simple one) using this syntax, but the function read will always construct a simple string when it reads this syntax.
The Echinoderms lack a head and have five-point radial symmetry. These fascinating animals live only in marine environments. They have an endoskeleton made out of calcareous plates, which is often protected by spines. The plates that make up the endoskeleton often support the spines and enclose the coelom, an anatomical feature used for movement, respiration, collecting food, and as a sensory mechanism. The coelom also houses the reproductive organs and alimentary canal. Echinoderms can be found in all oceans in all zones with approximately 6,000 described species. The two main subphyla in phylum Echinodermata are Eleutherozoa and Pelmatozoa. Subphylum Eleutherozoa contains the superclasses Asterozoa and Cryptosyringida. Superclass Asterozoa contains the sea stars/starfishes in Class Asteroidea and the extinct Class Somasteroidea. Superclass Cryptosyringida contains Class Echinoidea (heart urchins, sand dollars, and sea urchins), Class Holothuroidea (sea cucumbers), and Class Ophiuroidea (basket stars, brittlestars, and snake stars). Subphylum Pelmatozoa contains the Class Crinoidea (feather stars and sea lilies). Mature echinoderms have five points that face outward from the center of the body with a mouth underneath and the anus on top. There are exceptions to this plan however; some echinoderms lack an anus and others, like the crinoids, have both the mouth and the anus on the same side of the body. Scientists refer to the side of the body with the mouth as the oral side and the side with the anus as the aboral side. Crinoids, ophiuroids, and holothuroids have tube feet to help collect food particles floating towards their body. Other types of echinoderms like asteroids are carnivorous and will surround or throw their stomach over their prey. Some echinoids even have teeth used to chew and dismantle plants and small animals. Most echinoderms reproduce sexually, producing larvae that feed on phytoplankton until they reach maturity.
Some species of echinoderms develop their offspring in embryonic sacs located on the outside of their bodies. Echinoderms have fascinating water-vascular systems that likely originated from some sort of respiratory system that evolved to include food gathering and movement. They accomplish these tasks through the use of their numerous hollow tube feet that resemble tentacles. There are two rows of tube feet on the outside of the body that fill with seawater so that when the animal expands or contracts, water is drawn into the feet. Once filled, the feet extend outward allowing the animal to walk. Suckers located at the tips of the tube feet are often used to grab prey or to hold onto solid objects when the echinoderm wants to remain attached to something. The most familiar echinoderm known to humans is probably the sea star, categorized in the superclasses Asterozoa and Cryptosyringida. There are two classes of sea stars which include Asteroidea and Ophiuroidea. True sea stars and sun stars in are in Class Asteroidea while brittle stars and basket stars are in Class Ophiuroidea. Echinoderms in the class Asteroidea have arms that are smoothly connected to the body; echinoderms in Ophiuroidea have arms that shoot out from a disk-like center. Both are able to regenerate their limbs when one is broken off. In some cases, a lost limb can generate a whole new sea star. The small bumps on top of the sea star are referred to as dermal branchiae and are used to absorb oxygen from the water for respiration. Pedicellaria are small appendages used to keep foreign bodies off of the sea star. The madreporite is a hard opening on the aboral side of the sea star used to regulate and filter sea water. Sea stars also have an eye-like structure at the end of each arm, called the eyespot, used to detect light. Hemichordates are a relatively small phylum. These creatures are extremely important to the study of the evolution of vertebrates. 
They are characterized by a body divided into three main areas: the preoral lobe, the collar, and the trunk. Hemichordates are partial chordates and are closely related to the first chordates. According to DNA analysis, hemichordates are closely related to echinoderms, which is also apparent during observations of hemichordate and echinoderm larval stages. Hemichordates have gill slits, a structure that resembles a notochord but is called the stomochord, a dorsal nerve cord, and a reduced ventral nerve cord. There are three classes of hemichordates, which include Enteropneusta, Pterobranchia, and Graptolithina. The most well-known class is the Enteropneusta or “acorn worms”. Acorn worms have gill slits, burrow into the sediment, and likely feed on dirt and detritus. They can reach up to 2.5 m or 8 ft in length but most are actually quite small. In the Pterobranchia class, there are only a few species notably different from the acorn worms. Pterobranchs live in colonies connected by stem-like stolons. Each tiny individual is referred to as a zooid and has one gill slit. The Graptolithina are most well-known in the fossil record, showing up in the Ordovician and Silurian times.
Gray Reef Sharks, Carcharhinus amblyrhynchos

Taxonomy: Animalia > Chordata > Elasmobranchii > Carcharhiniformes > Carcharhinidae > Carcharhinus amblyrhynchos

Description & Behavior
Gray reef sharks, Carcharhinus amblyrhynchos (Bleeker, 1856), aka grey reef sharks, blacktail reef sharks, bronze whalers, shortnose blacktail sharks, and whaler sharks, are dark gray or bronze-gray above (dorsal side) and white below (ventral side); their caudal (tail) fins have a conspicuous wide black posterior (rear) margin; the undersides of their pectoral and pelvic fins have black tips and posterior margins, but their fins otherwise are not conspicuously black or white-tipped except for a pale-tipped first dorsal in some individuals in the Indian Ocean. They are a medium-sized to large shark with broadly rounded snouts. The origin of their first dorsal fin is usually over or just in front of the rear tips of their pectoral fins. They have no interdorsal ridges. Maximum size is reported to reach up to 2.55 m; however the largest measured on record was 1.72 m (female). Most are <1.5 m in length. Their maximum weight is 34 kg and their maximum reported age is 25 years.

World Range & Habitat
Gray reef sharks occur on continental and insular (island) shelves and the oceanic waters adjacent to them. They are common on coral reefs, often in deeper areas near drop-offs, in atoll passes, and in shallow lagoons adjacent to areas of strong currents. They form schools during daylight hours in aggregations of up to 100 individuals. Although they are active during the day, they are more active at night when individuals spread out over large areas of the reef, often entering shallow lagoons. They spend their time during the day cruising along the shallow forereef and in reef channels, especially in areas with strong current; individuals often move into reef passes with ebb tides. The average home range is 4.2 km2.
They are found throughout the Indo-Pacific: Madagascar and the Mauritius-Seychelles area, possibly India; also in the Red Sea to South Africa where Carcharhinus wheeleri, the blacktail reef shark, is found. In the Pacific, the gray reef shark ranges from southern China to northern Australia and the Tuamotu Archipelago to Hawaii. This is one of the three most common reef sharks in the Indo-Pacific; the other two are blacktip reef sharks, Carcharhinus melanopterus, and whitetip reef sharks, Triaenodon obesus. They are found in depths between 0-800 m. A gray reef shark was photographed at 800 m by a submersible off Hawaii.

Feeding Behavior (Ecology)
Gray reef sharks feed on reef fishes, squids, cephalopods, crabs, lobsters, and shrimps. They have been observed herding fishes against the reef face before attacking. They tend to be aggressive under baited conditions and readily enter into an excited mob feeding pattern (true feeding frenzies are extremely rare), at which time they may become quite dangerous to humans. This species is viviparous with a yolk-sac placenta; they give birth to 1-6 pups in a litter following a gestation period of about 12 months. Birth size is between 45-60 cm. Males and females mature at about 7 to 7.5 years; males mature at 1.3-1.45 m, females at about 1.2-1.35 m. The expected life span is at least 25 years.

Conservation Status & Comments
Gray reef sharks are a curious and aggressive species repeatedly implicated in human attacks. They should not be touched, cornered, or approached by divers. They perform a dramatic threat display, featuring a raised snout, stiffly lowered pectoral fins, an arched back, and exaggerated swimming movements. They are fished commercially for human consumption, fishmeal and other shark products. This species is important in dive eco-tourism in French Polynesia, Palau, and the Maldives. This widespread social species was formerly common in clear tropical coastal waters and oceanic atolls.
Its restricted habitat choice, site fidelity, inshore distribution, small litter size, relatively late age at maturity and increasing mismanaged fishing pressure suggest that this species is under threat. Although caught in tropical multi-species fisheries, it has considerably greater value if protected for dive tourism. This species is quickly being depleted, particularly in the Maldives. The gray reef shark, Carcharhinus amblyrhynchos, is listed as Near Threatened (NT) on the IUCN Red List of Threatened Species: NEAR THREATENED (NT) - A taxon is Near Threatened when it has been evaluated against the criteria but does not qualify for Critically Endangered, Endangered or Vulnerable now, but is close to qualifying for or is likely to qualify for a threatened category in the near future.
Fred Pearce, consultant The future might be bright if solar power takes off around the world (Image: Donkeysoho/Plainpicture) Project Sunshine by Steve McKevitt and Tony Ryan shows the great promise of solar power, but time is running out for its advocates to make it shine WE ARE stardust. And ultimately the energy that powers our world comes from sunlight. For two centuries, we have been tapping into "fossilised sunlight", burning solar energy trapped over millions of years in coal, oil and natural gas. Now we have to get back to catching real-time sunbeams. That is the key premise of this book about a multimillion pound (largely public cash) scheme known as Project Sunshine which is based at the University of Sheffield, UK. Penned by project leader Tony Ryan and writer Steve McKevitt, it is lucid, optimistic - and plans to save the world. To stave off climate change, the authors calculate we must find up to 20 terawatts of low-carbon capacity by 2050. The sun delivers almost 10,000 times more, of which some 600 TW might be harvested. It is the only renewable source able to deliver the energy generating capacity we need. One day, solar will be the "source of all the energy we consume", they say. In this future, nuclear, wind, geothermal and hydropower are simply stopgap technologies while we get solar up and running. Electricity grids will be powered by spray-on organic photovoltaics that turn sunlight into electricity. We can redesign photosynthesis to make the liquid fuels, such as methanol, that will replace oil. Photosynthesis is nature's way of storing solar energy, but it is, they say, "lousily inefficient". So we can forget regular biofuels: the holy grail is super-productive bioengineered photosynthesis - artificial leaves, if you will. Sunshine is the only thing we won't run out of. We may suffer peak oil, peak soil and peak metals, but the sun will keep shining. This is stirring stuff, and well told. 
But the authors have trouble placing their solar dream in the wider world. They are stuck in simple-minded environmental homilies and hand-me-down futures full of techno-optimism. Take their second, subsidiary theme: feeding the world. They have to address this carefully, because all those solar panels and artificial leaves will take up lots of land that would otherwise be used for growing food. No problem: food production is on the cusp of a revolution as powerful as their own solar transformation - genetic modification. They are emphatic about the urgent need for GM foods, and resort to crude Malthusianism in the cause. "Without GM, people will starve in their tens, if not hundreds, of millions," they say. Phooey. GM technology will be very useful for many things, and we should stop fearing it. But when up to half of the food we grow is wasted, and when tens of millions of hungry African farmers could triple yields with existing crops if only they could afford a few bags of fertiliser, GM crops look like the solution to the wrong problem. The authors' naivety leads them to mischaracterise a range of problems as supply issues when they are about something else. On energy and food, they pay lip service to the need for much greater efficiency in production, distribution and use - but rarely get further. That's a shame. If "it's going to be much easier to use less energy in the future than it will be to generate more power", as they argue, why not discuss how? Given the importance they attach to the rapid adoption of new technologies, it is also odd that the authors' analysis of human progress, which occupies much of the first half of the book, spends little time on why even the best ideas often fail to be adopted. Thus we learn that the Roman Empire "had no culture of innovation at all... across its whole 800-year history". 
It faltered on the verge of inventing both the printing press and the steam engine, postponing the industrial revolution by almost two millennia. Eventually the British made it happen, turning an "inconsequential European backwater" into "the world's foremost industrial power". Since the authors call for a similar transformation to decarbonise our energy economy, why not first try to understand why the Romans failed while the Brits succeeded so spectacularly? Time is short for the solar revolution. As well as fast-tracking the technologies, we have to find out how to break the logjam created by the old technologies. If we continue to act like Romans rather than 18th-century Brits, then stardust is all we will be. This article appeared in print under the headline "A place in the sun?" Project Sunshine: How science can use the sun to fuel and feed the world by Steve McKevitt and Tony Ryan
Carbon Storage Studies
The quantity of carbon in the earth's atmosphere, currently 780 billion tons, has been rising by roughly 3.3 billion tons per year over the past 10 years. Three approaches are being considered in an attempt to reduce the buildup of carbon dioxide in the atmosphere because of concerns that it may be contributing to global warming and potentially devastating climate change. One approach is to trim emissions of carbon dioxide by using energy more efficiently. A second is to burn fuels (e.g., hydrogen) or produce energy in systems (e.g., hydropower, solar power, or nuclear power plants) that emit little or no net carbon dioxide. This approach would include burning woody biomass and producing liquid fuels from renewable resources, such as ethanol from corn. A third approach is both new and not yet well understood. This approach entails capturing carbon dioxide from the atmosphere and from stack emissions of fossil-fuel combustion facilities, converting some of it into useful products, and transferring most of it to above-ground and below-ground terrestrial ecosystems such as forests and underground coal seams and to the ocean. The process of long-term storage of captured carbon is called carbon sequestration. As part of its climate change technology initiative, the U.S. Department of Energy's Office of Science in 1999 formed two centers to study carbon sequestration: one focusing on terrestrial ecosystems and the other on oceans. The centers will conduct research and help focus and coordinate research across a wide range of disciplines. The goal is to find environmentally acceptable ways of keeping atmospheric carbon dioxide from reaching concentrations that could cause unacceptable climatic changes.
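The growth figures quoted above lend themselves to a quick back-of-the-envelope projection. The sketch below is illustrative only: it assumes the 780-billion-ton stock and the 3.3-billion-ton annual increase stated in the article stay constant, which real emissions would not.

```python
STOCK_NOW = 780.0   # billion tons of carbon in the atmosphere (from the article)
GROWTH = 3.3        # billion tons added per year, 10-year average (from the article)

def projected_stock(years_ahead):
    """Linear extrapolation of atmospheric carbon, in billion tons."""
    return STOCK_NOW + GROWTH * years_ahead

# Fifty years of unchanged growth would add 165 billion tons:
print(projected_stock(50))   # 945.0
```

Under this crude linear model, atmospheric carbon grows by about 20% in half a century, which is the scale of buildup the two DOE centers aim to offset.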
The DOE Center for Research on Enhancing Carbon Sequestration in Terrestrial Ecosystems (CSITE) is led by a consortium comprising DOE's Oak Ridge, Pacific Northwest, and Argonne national laboratories. The center's co-leaders are Gary Jacobs of ORNL and Blaine Metting of Pacific Northwest National Laboratory (PNNL).

[Image: Structure of a leaf that is less efficient than some grasses at fixing carbon.]

This center will receive $6 million over three years. Collaborating in center studies will be researchers from Colorado State University, North Carolina State University, Ohio State University, the Rodale Institute in Pennsylvania, Texas A&M University, the University of Washington, the Joanneum Research Institute in Austria, and the US Department of Agriculture. From the viewpoint of terrestrial ecosystems, carbon sequestration is the removal of carbon dioxide from the atmosphere by enhancing natural absorption processes and storing the carbon for a long time in vegetation and soils. Carbon sequestration may be accomplished by fixing more carbon in plants by photosynthesis, increasing plant biomass per unit land area, reducing decomposition of soil organic matter, and increasing the area of land covered by ecosystems that store carbon. Research to date has shown that one way to increase carbon sequestration is through better land management. If modest changes in farming and forestry practices are made, plants and soils may more efficiently remove carbon dioxide from the atmosphere and store it in long-lived "pools" such as forest reserves, wood products, or soil organic matter. The longer that carbon is sequestered, the slower the rate of increases in atmospheric carbon dioxide.

[Image: Mac Post (left) and Don Todd take soil samples on the Oak Ridge Reservation for later analysis to determine their carbon content.]
Field research will be conducted at several sites, including DOE's national environmental research parks at ORNL and the Fermi National Accelerator Laboratory, as well as US Department of Agriculture sites in Alabama and South Carolina, the Rodale Institute Research Center in Pennsylvania, and forestry industry research sites in the Pacific Northwest and the Southeast. The second center, which is focusing on ocean carbon sequestration, is led by a consortium of DOE's Lawrence Berkeley and Lawrence Livermore national laboratories. The DOE Center for Research on Ocean Carbon Sequestration (DOCS) will receive a total of $3 million over three years. DOCS will have collaborators from the Massachusetts Institute of Technology, Moss Landing Marine Labs, the Pacific International Center for High Technology Research, Rutgers University, and the Scripps Institution of Oceanography. Center co-leaders are Jim Bishop (Lawrence Berkeley) and Ken Caldeira (Lawrence Livermore). DOCS will study the feasibility, effectiveness, and environmental acceptability of injecting carbon dioxide into the ocean and fertilizing marine organisms on the ocean's surface.
Planetary and Satellite Motion

Circular Motion Principles for Satellites
A satellite is any object that is orbiting the earth, sun or other massive body. Satellites can be categorized as natural satellites or man-made satellites. The moon, the planets and comets are examples of natural satellites. Accompanying the orbit of natural satellites are a host of satellites launched from earth for purposes of communication, scientific research, weather forecasting, intelligence, etc.
Whether a moon, a planet, or some man-made satellite, every satellite's motion is governed by the same physics principles and described by the same mathematical equations. The fundamental principle to be understood concerning satellites is that a satellite is a projectile. That is to say, a satellite is an object upon which the only force is gravity. Once launched into orbit, the only force governing the motion of a satellite is the force of gravity. Newton was the first to theorize that a projectile launched with sufficient speed would actually orbit the earth. Consider a projectile launched horizontally from the top of the legendary Newton's Mountain - at a location high above the influence of air drag. As the projectile moves horizontally in a direction tangent to the earth, the force of gravity would pull it downward. And as mentioned in Lesson 3, if the launch speed was too small, it would eventually fall to earth. The diagram at the right resembles that found in Newton's original writings. Paths A and B illustrate the path of a projectile with insufficient launch speed for orbital motion. But if launched with sufficient speed, the projectile would fall towards the earth at the same rate that the earth curves. This would cause the projectile to stay the same height above the earth and to orbit in a circular path (such as path C). And at even greater launch speeds, a cannonball would once more orbit the earth, but now in an elliptical path (as in path D). At every point along its trajectory, a satellite is falling toward the earth. Yet because the earth curves, it never reaches the earth. So what launch speed does a satellite need in order to orbit the earth? The answer emerges from a basic fact about the curvature of the earth. For every 8000 meters measured along the horizon of the earth, the earth's surface curves downward by approximately 5 meters. 
So if you were to look out horizontally along the horizon of the Earth for 8000 meters, you would observe that the Earth curves downward below this straight-line path by a distance of 5 meters. For a projectile to orbit the earth, it must travel horizontally a distance of 8000 meters for every 5 meters of vertical fall. It so happens that the vertical distance a horizontally launched projectile falls in its first second is approximately 5 meters (0.5·g·t², with g ≈ 9.8 m/s² and t = 1 s). For this reason, a projectile launched horizontally with a speed of about 8000 m/s will be capable of orbiting the earth in a circular path. This assumes that it is launched above the surface of the earth and encounters negligible atmospheric drag. As the projectile travels tangentially a distance of 8000 meters in 1 second, it will drop approximately 5 meters towards the earth. Yet the projectile will remain the same distance above the earth, because the earth curves at the same rate that the projectile falls. If shot with a speed greater than 8000 m/s, it would orbit the earth in an elliptical path.

The motion of an orbiting satellite can be described by the same motion characteristics as any object in circular motion. The velocity of the satellite would be directed tangent to the circle at every point along its path. The acceleration of the satellite would be directed towards the center of the circle - towards the central body that it is orbiting. And this acceleration is caused by a net force that is directed inwards in the same direction as the acceleration. This centripetal force is supplied by gravity - the force that universally acts at a distance between any two objects that have mass. Were it not for this force, the satellite in motion would continue in motion at the same speed and in the same direction. It would follow its inertial, straight-line path.
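The two numbers above can be checked with a quick back-of-the-envelope calculation. The sketch below uses standard textbook constants (g ≈ 9.8 m/s², earth radius ≈ 6.37 × 10⁶ m - assumed values, not figures given in this lesson) to compute the first-second drop and to cross-check the ~8000 m/s figure from circular-motion dynamics:

```python
import math

g = 9.8   # free-fall acceleration near earth's surface, m/s^2 (assumed value)
t = 1.0   # time interval, s

# Vertical drop of a horizontally launched projectile in its first second
drop = 0.5 * g * t**2
print(f"drop in first second: {drop:.1f} m")  # roughly 5 m

# Cross-check from dynamics: for a surface-skimming circular orbit,
# gravity supplies the centripetal acceleration, so g = v^2 / R.
R = 6.37e6            # earth's radius, m (assumed value)
v = math.sqrt(g * R)  # ~7.9 km/s, consistent with the ~8000 m/s above
print(f"surface orbital speed: {v:.0f} m/s")
```

Both routes - the geometric 8000 m per 5 m argument and the centripetal-force equation - land on the same launch speed, which is the point of the paragraph above.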
As with any projectile, a satellite's trajectory is influenced by gravity alone, so it always falls below its straight-line, inertial path. This is depicted in the diagram below. Observe that the inward net force pushes (or pulls) the satellite (denoted by the blue circle) inwards relative to its straight-line path tangent to the circle. As a result, after the first interval of time, the satellite is positioned at position 1 rather than position 1'. In the next interval of time, the same satellite would travel tangent to the circle in the absence of gravity and be at position 2'; but because of the inward force the satellite has moved to position 2 instead. In the next interval of time, the same satellite has moved inward to position 3 instead of tangentially to position 3'. This same reasoning can be repeated to explain how the inward force causes the satellite to fall towards the earth without actually falling into it.

Occasionally satellites will orbit in paths that can be described as ellipses. In such cases, the central body is located at one of the foci of the ellipse. Similar motion characteristics apply for satellites moving in elliptical paths. The velocity of the satellite is directed tangent to the ellipse. The acceleration of the satellite is directed towards the focus of the ellipse. And in accord with Newton's second law of motion, the net force acting upon the satellite is directed in the same direction as the acceleration - towards the focus of the ellipse. Once more, this net force is supplied by the force of gravitational attraction between the central body and the orbiting satellite. In the case of elliptical paths, there is a component of force in the same direction as (or opposite to) the motion of the object. As discussed in Lesson 1, such a component of force can cause the satellite to either speed up or slow down in addition to changing directions.
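The position-1-versus-1' reasoning is easy to reproduce numerically. The sketch below (assumed constants; semi-implicit Euler integration is my choice for stability, not something taken from this lesson) steps a satellite forward under gravity alone and confirms that its distance from the earth's center stays essentially constant - it keeps "falling around" the earth without falling into it:

```python
import math

GM = 3.986e14          # earth's gravitational parameter, m^3/s^2 (assumed value)
r = [6.771e6, 0.0]     # start ~400 km above the surface, m
v = [0.0, math.sqrt(GM / r[0])]  # tangential speed for a circular orbit
dt = 1.0               # time step, s

radius0 = math.hypot(*r)
max_dev = 0.0
for _ in range(3000):
    # The inward gravitational acceleration pulls the satellite off its
    # tangent line (to position 1 instead of 1'), bending the path
    # into a near-circle.
    d = math.hypot(*r)
    a = [-GM * r[0] / d**3, -GM * r[1] / d**3]
    v = [v[0] + a[0] * dt, v[1] + a[1] * dt]   # semi-implicit Euler step
    r = [r[0] + v[0] * dt, r[1] + v[1] * dt]
    max_dev = max(max_dev, abs(math.hypot(*r) - radius0))

print(f"max radius deviation over 3000 s: {max_dev:.0f} m "
      f"({100 * max_dev / radius0:.4f}% of the orbital radius)")
```

Despite "falling" at every step, the simulated satellite's altitude wanders by well under one percent of its orbital radius over 3000 seconds.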
So unlike uniform circular motion, the elliptical motion of satellites is not characterized by a constant speed. In summary, satellites are projectiles that orbit around a central massive body instead of falling into it. Being projectiles, they are acted upon by the force of gravity - a universal force that acts over even large distances between any two masses. The motion of satellites, like any projectile, is governed by Newton's laws of motion. For this reason, the mathematics of these satellites emerges from an application of Newton's universal law of gravitation to the mathematics of circular motion. The mathematical equations governing the motion of satellites will be discussed in the next part of Lesson 4. 1. The fact that satellites can maintain their motion and their distance above the Earth is fascinating to many. How can it be? What keeps a satellite up? 2. If there is an inward force acting upon an earth orbiting satellite, then why doesn't the satellite collide into the Earth?
How to convert watts to joules

You can calculate joules from watts and seconds, but you can't directly convert watts to joules, since the watt and the joule represent different quantities (power and energy).

Watts to joules calculation formula

The energy E in joules (J) is equal to the power P in watts (W) times the time period t in seconds (s):

E(J) = P(W) × t(s)

joules = watts × seconds

J = W × s

Example: What is the energy consumption of an electrical circuit with a power consumption of 30 watts over a time period of 3 seconds?

E(J) = 30W × 3s = 90J
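The formula translates directly into code. A minimal sketch (the function name is mine, not part of the tutorial):

```python
def energy_joules(power_watts, time_seconds):
    """E(J) = P(W) x t(s): energy is power times elapsed time."""
    return power_watts * time_seconds

# The worked example from above: 30 W for 3 s
print(energy_joules(30, 3))  # 90 (joules)
```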
[Image: the rock called "Souffle". Image from: NASA/JPL]

Martian Surface Winds

The surface pressure of Mars is about 1/150th of the surface pressure of the Earth, which means that there are far fewer molecules in the atmosphere. The atmosphere near the surface of Mars therefore has much less inertia than the atmosphere near the surface of the Earth: putting the atmosphere of the Earth in motion must be like putting molasses in motion compared to putting the atmosphere of Mars in motion. This means that Martian surface winds can be accelerated to higher speeds than winds on Earth. The general circulation pattern of winds is also very different from the terrestrial circulation pattern. These winds can be whipped to an extreme during the frequent Martian global dust storms. Because of Mars' lower gravity, the winds can more easily lift and carry sand particles. Sand particles driven by winds contribute to sand erosion of the surface, and features found by the Mars Pathfinder lander provided plenty of evidence for such wind-driven erosion. But the lower atmospheric pressure of Mars makes it harder for the winds to impart momentum to sand particles lying on the ground. Thus the number of sand grains lifted from the surface and accelerated to high speeds may differ from what would be expected on Earth, making the erosion of Martian rock a little different than on Earth. The first weather measurements made from the surface of Mars were performed by the Mars Pathfinder mission. These measurements provided some "ground truth" for the strength of Martian winds.
Articles tagged Physics

From envying comic book characters to pondering extra dimensions while staring at fish, Dr. Michio Kaku recounts the experiences that made him one of the world's most colorful scientists. Propelling a spaceship with photons would be like trying to energize a spaceship with a flashlight. In the 1999 film "The Matrix," characters could simply learn a new set of skills by uploading a program into their brains. When (if ever) will we be able to do that in real life? Einstein believed that free will was just an illusion, and that awareness of this lack kept him from taking himself and others too seriously. But Einstein was plain wrong, says Dr. Kaku. Einstein also scoffed at the idea of quantum entanglement, calling it "spooky action at a distance." And while it has in fact been proven to exist, this entanglement can't be used to transmit any usable information. The physicist sees two major trends in the world today: the first is toward a multicultural, scientific, tolerant society; the other, as evidenced by terrorism, is fundamentalist and monocultural. Whichever one wins out will determine the fate of man.

Speakers

Richard Feynman: Physicist

One of the best known and most renowned scientists in history, Richard Feynman pioneered quantum mechanics. His knack for accessible explanations made him a popularizer of physics of equal distinction to laypeople. Why should you listen to him? Richard Feynman began his career at a crossroads in history, assisting the Manhattan Project with the development of the atomic bomb.
Soon he was producing breakthrough understandings of particle physics and quantum mechanics, for which he won the Nobel Prize in 1965. His pictorial representations of the actions of subatomic particles are still widely used today (they're now called Feynman diagrams). Feynman acted as an adviser on the commission investigating the space shuttle Challenger disaster. Books based on his lectures and conversations became best-sellers, and cemented him in the public mind as an explainer of science. He was a legendary prankster, a charismatic free-thinker and an avid bongoist. "At twenty-three ... there was no physicist on earth who could match his exuberant command over the native materials of theoretical science. [...] Feynman seemed to possess a frightening ease with the substance behind the equations, like Albert Einstein at the same age, like the Soviet physicist Lev Landau -- but few others." What's it like to be pals with a genius? Onstage at TEDxCaltech, physicist Leonard Susskind spins a few stories about his friendship with the legendary Richard Feynman, discussing his unconventional approach to problems both serious and ... less so. About Leonard Susskind Leonard Susskind works on string theory, quantum field theory, quantum statistical mechanics and quantum cosmology at Stanford
The word "environment" is most commonly used to describe the "natural" environment, meaning the sum of all living and non-living things that surround an organism or group of organisms. The environment includes all the elements, factors, and conditions that have some impact on the growth and development of a given organism, encompassing both biotic and abiotic factors: abiotic factors such as light, temperature, water, and atmospheric gases combine with biotic factors (all surrounding living species). Environments often change over time, and many organisms therefore have the ability to adapt to these changes. However, the tolerance range is not the same for all species, and exposure to environmental conditions at the limit of an organism's tolerance range represents environmental stress.

Environmentalism is an important political and social movement whose goal is to protect the natural environment by emphasizing the importance of nature's role, combined with various actions and policies oriented toward nature preservation. Environmentalism is a movement connected with environmental scientists and many of their goals. Some of these goals include:

1. to reduce world consumption of fossil fuels
2. to reduce and clean up all sorts of pollution (air, sea, river...) with the future goal of zero pollution
3. an emphasis on clean, alternative energy sources that have low carbon emissions
4. sustainable use of water, land, and other scarce resources
5. preservation of existing endangered species
6. protection of biodiversity

The first goal, reducing world consumption of fossil fuels, is very important to the fight against climate change and global warming. Fossil (non-renewable) fuels are mainly responsible for global warming, because carbon dioxide (one of the greenhouse gases) is released into the atmosphere during their combustion. In fact, reducing the emission of carbon dioxide is the most important thing we can do to successfully fight global warming. Reducing and cleaning up pollution is also a very important task. Every day we hear news about the tremendous pollution of our air, seas, and rivers. Pollution creates an unhealthy environment and often causes many health problems and diseases. The third goal is obvious: the world needs a lot of energy, so if we want to reduce the use of fossil fuels, we need other, alternative energy sources to satisfy the world's energy needs. These alternative energy sources, such as wind energy, solar power, and hydropower, all have great potential and are ecologically acceptable. However, their use is still negligible on a global scale, and fossil fuels remain the dominant energy sources. Water is a precious but scarce resource that needs to be preserved for future generations. Sustainable use of water, land, and other resources is therefore vital to the future life of our planet. The number of endangered species has lately been increasing rapidly, and many species have become extinct in the last 50 years or so. Preservation of endangered species is important to save a number of ecosystems and to protect the biodiversity of our planet. Biodiversity is very important in enabling life on earth, since all species are connected in a perfectly balanced circle, each with its very own role. Humans are not the owners of this circle but only one small part of it, a part that needs even the smallest pieces of this circle for its proper functioning. However, we seem to forget this more often than not.
Here's how I prefer to think about it. Consider the implementation of a variable containing a 32 bit integer. When treated as a value type, the entire value fits into 32 bits of storage. That's what a value type is: the storage contains just the bits that make up the value, nothing more, nothing less. Now consider the implementation of a variable containing an object reference. The variable contains a "reference", which could be implemented in any number of ways. It could be a handle into a garbage collector structure, or it could be an address on the managed heap, or whatever. But it's something which allows you to find an object. That's what a reference type is: the storage associated with a variable of reference type contains some bits that allow you to reference an object. Clearly those two things are completely different. Now suppose you have a variable of type object, and you wish to copy the contents of a variable of type int into it. How do you do it? The 32 bits that make up an integer aren't one of these "reference" things, it's just a bucket that contains 32 bits. References could be 64 bit pointers into the managed heap, or 32 bit handles into a garbage collector data structure, or any other implementation you can think of, but a 32 bit integer can only be a 32 bit integer. So what you do in that scenario is you box the integer: you make a new object that contains storage for an integer, and then you store a reference to the new object. Boxing is only necessary if you want to (1) have a unified type system, and (2) ensure that a 32 bit integer consumes 32 bits of memory. If you're willing to reject either of those then you don't need boxing; we are not willing to reject those, and so boxing is what we're forced to live with.
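The passage is about .NET, but the trade-off is easy to see from the other direction in Python (my choice of illustration, not the language under discussion): CPython rejects requirement (2) entirely - every integer is always a full heap object - so no boxing step ever occurs, at the cost of tens of bytes of overhead on every value.

```python
import sys

# Storage a bare 32-bit integer needs: just its bits.
raw_bytes = 32 // 8  # 4 bytes

# In CPython every int is already "boxed": a heap object carrying a type
# pointer and reference count, so assigning it to a variable of any type
# never requires a conversion. (Exact sizes are implementation details.)
boxed_bytes = sys.getsizeof(123456)

print(raw_bytes, boxed_bytes)  # the object costs several times the raw value
```

This is exactly the "unified type system without 32-bit integers costing 32 bits" corner of the design space the passage says C# was unwilling to accept.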
The latest in science and technology news |By MARK OLLIG| Where can you learn about gold nanorods that are 1,000 times smaller than a human hair? What website tells you about highly sensitive optical sensors, super lenses, and “invisible objects” for use in the military? Do you want to be on the “cutting edge” and “in the know” when it comes to the latest scientific breakthroughs and discoveries? One fascinating website worth checking out is located at http://physorg.com. This website is a “smorgasbord,” if you will, of tantalizing science and futuristic-technology articles and more. Those of you who love to know about the latest discoveries that scientists and engineers are working on can “feast” all day on what you will learn here. Physorg.com has in-depth articles about physics, space, earth science, nanotechnology, robotics, computing and much more. You can also vote on the stories you like. These votes are tabulated and shown next to each article along with the time the article was posted. New information is constantly being made available. Physorg.com recently posted an article about treating robots with a little more respect. An ethical code for the treatment of robots? Yes, says the South Korean government. A government task force is drawing up a “code of ethics” to stop humans from misusing robots or vice versa. The five-member task force, made up of experts, futurists and a science fiction writer, began work on a “Robot Ethics Charter” this past November. “The government plans to set ethical guidelines concerning the roles and functions of robots, as robots are expected to develop strong intelligence in the near future,” the South Korean Ministry of Commerce, Industry and Energy said in a statement. By 2013, the Korean Institute of Science and Technology wants to have developed robotic “caregivers” that could assist with household tasks and monitor the health of the elderly.
Does anyone remember “Rosie” the robot maid from the TV cartoon “The Jetsons”? I can see the new car bumper stickers proudly proclaiming: “I love my robot.” Just one more: “My robot is smarter than your honor student.” The famous science fiction writer Isaac Asimov wrote in one of his short stories about the “Three Laws of Robotics.” The first law says a robot may not injure a human being or allow one to come to harm. The second law says that a robot must obey orders given to it by human beings unless doing so would harm them. The third law states that a robot must “protect its own existence.” Remember when “The Robot” from the “Lost in Space” TV show would find itself in danger? It would emit “high voltage lightning charges” towards the “threat” from those robotic claws flailing wildly in the air. I suppose The Robot was just following Asimov’s third law. If you are into the latest happenings in outer space, you might be interested in knowing what instruments on NASA’s Cassini spacecraft have found: evidence for oceans or seas most likely filled with liquid methane or ethane on Saturn’s moon Titan. One NASA photo shows that one “lake” on Titan is slightly larger than Earth’s Lake Superior. What’s a Bits & Bytes column without mentioning something new about computing? I can report that the world’s first consumer one-terabyte (1TB) hard disk drive (HDD) is now available from Hitachi Global Storage Technologies. We all know that 1TB is equal to 1 trillion bytes, or 1000 gigabytes (1000GB). The “1TB Hitachi Deskstar 7K1000” internal and external hard disk drive for consumers’ personal computers will cost around $400.00. Computer disk drive manufacturer Seagate Technologies is also scheduled to release a consumer 1TB HDD storage device this year. Based on two hours per movie, 1TB could store 500 movies in Standard Definition (SD) format. Using High Definition (HD) format, 1TB could store 125 HD movies.
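The movie counts imply per-movie sizes of roughly 2 GB for SD and 8 GB for HD (my inference; the column doesn't state them). The arithmetic checks out, using the decimal definition of a terabyte the drive makers use:

```python
TB = 10**12            # 1 terabyte as drive makers count it: 1 trillion bytes
sd_movie = 2 * 10**9   # assumed ~2 GB per two-hour SD movie
hd_movie = 8 * 10**9   # assumed ~8 GB per two-hour HD movie

print(TB // sd_movie)  # 500 SD movies
print(TB // hd_movie)  # 125 HD movies
```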
With a 1TB HDD, you can store “. . . 300,000 digital photographs at the highest quality, one million e-books, or 250,000 songs as an MP3 player.” This is according to Doug Pickford, director of product and market strategy for enterprise products with Hitachi Global Storage Technologies; I heard his comments on the podtech.net website. Pickford went on to say he believes “. . . using present day physics, we could see a 50 terabyte hard disk drive.” I downloaded the specifications datasheet on the 1TB Hitachi Deskstar 7K1000 from http://www.hitachigst.com. According to this datasheet, the warranty on the Hitachi 1TB hard disk drive is good for only three years, so do your back-ups early and often. In an interesting side-story, Dell Computer is introducing its “Video Time Capsule,” which will allow contributors to share their digital videos for generations to come. Dell set up a website for us to submit/upload our video messages at http://www.studiodell.com. All of the video content submitted to studiodell.com for the remainder of 2007 will be copied onto the 1TB Hitachi Deskstar 7K1000 hard disk drive, which will be stored for 50 years at the Dell campus in Round Rock, Texas. What will happen 50 years from now when they attempt to play those archived videos? One thing I do know: the 1TB Hitachi Deskstar 7K1000 hard disk drive will be “out-of-warranty” by 47 years.
Scientific Investigations Report 2012–5179 As part of the U.S. Department of the Interior sustainable water strategy, WaterSMART, the U.S. Geological Survey documented hydrologic and water-quality conditions in the lower Apalachicola–Chattahoochee–Flint and western and central Aucilla–Suwanee–Ochlockonee River basins in Alabama, Florida, and Georgia during low-flow conditions in July 2011. Moderate-drought conditions prevailed in this area during early 2011 and worsened to exceptional by June, with cumulative rainfall departures from the 1981–2010 climate normals registering deficits ranging from 17 to 27 inches. As a result, groundwater levels and stream discharges measured below median daily levels throughout most of 2011. Water-quality field properties including temperature, dissolved oxygen, specific conductance, and pH were measured at selected surface-water sites. Record-low groundwater levels measured in 12 of 43 surficial aquifer wells and 128 of 312 Upper Floridan aquifer wells during July 2011 underscored the severity of drought conditions in the study area. Most wells recorded groundwater levels below the median daily statistic, and 7 surficial aquifer wells were dry. Groundwater-level measurements taken in July 2011 were used to determine the potentiometric surface of the Upper Floridan aquifer. Groundwater generally flows to the south and toward streams except in reaches where streams discharge to the aquifer. The degree of connection between the Upper Floridan aquifer and streams decreases east of the Flint River where thick overburden hydraulically separates the aquifer from stream interaction. Hydraulic separation of the Upper Floridan aquifer from streams located east of the Flint River is shown by stream-stage altitudes that differ from groundwater levels measured in close proximity to streams. 
Most streams in the study area exhibited below-normal flows (streamflows less than the 25th percentile) during 2011, substantiating the severity of drought conditions that year. Streamflow and springflow measured at 202 sites along 2,122 stream miles during July 20–24, 2011, identified about 286 miles of losing streams, about 1,230 miles of gaining streams, and about 606 miles of streams with no flow. Water-quality field properties measured at 123 stream and 5 spring sites during July 2011 yielded water temperatures ranging from 20.6 to 31.6 degrees Celsius, dissolved oxygen ranging from 0.47 to 9.98 milligrams per liter, specific conductance ranging from 13 to 834 microsiemens per centimeter at 25 degrees Celsius, and pH ranging from 3.6 to 8.03.

First posted September 10, 2012

Suggested citation: Gordon, D.W., Peck, M.F., and Painter, J.A., 2012, Hydrologic and water-quality conditions in the lower Apalachicola–Chattahoochee–Flint and parts of the Aucilla–Suwanee–Ochlockonee River basins in Georgia and adjacent parts of Florida and Alabama during drought conditions, July 2011: U.S. Geological Survey Scientific Investigations Report 2012–5179, 69 p., 1 sheet, available online at http://pubs.usgs.gov/sir/2012/5179/.

Contents:
- Purpose and Scope
- Description of the Study Area
- Stream and Lake Characteristics
- Station-Numbering Systems for Wells and Surface Water
- Hydrologic Conditions and Stream-Water Quality during July 2011
- Water Quality of Streams and Springs
- Appendix. Map showing location of all measurement sites used in this study, lower Apalachicola–Chattahoochee–Flint River basin and in western and central parts of the Aucilla–Suwannee–Ochlockonee River basin, Georgia and Florida (20x24-inch sheet)
Last week, the European Commission announced the investment of 1bn euros (1,000,000,000€) in graphene research and its development as a new material. It was great news for science, and I wanted to know more about this new material that was all over the media. Graphene is a one-atom-thick sheet of carbon atoms arranged in a honeycomb (hexagonal) lattice, and it is one million times thinner than a human hair. If graphene layers are stacked one on top of another, they form graphite (the mineral). Graphene was isolated in 2004 by researchers at the University of Manchester. It has very advantageous properties, and all of them are found in the same material, which is the reason for its popularity. Graphene is a great electricity conductor: electrons can travel through graphene more easily than through copper, and they travel very fast. It conducts heat (thermal conductivity) better than other materials, which is essential to dissipate heat and keep devices cool. It is harder than diamond and 300 times harder than steel for a one-atom-thick layer. It can be stretched up to 20% of its initial length and comes back to its original shape. It transmits 97% of light, which makes it almost transparent to the naked eye; however, it can be modified with certain chemicals to reflect light at a certain wavelength or colour. Graphene is also quite sticky, which means that it can easily adhere to and release atoms and other molecules. These properties make it a great material with numerous applications: it can be used to create flexible touch-screen displays for mobile devices and LCD screens; it could produce very light and hard composite materials to be used in aircraft and car manufacture; it could be used to produce high-speed electronics for computers and communication technologies; it can be used to produce batteries, fuel cells and photovoltaic cells; and it could make chemical sensors. These are only some examples. The main challenge of graphene technology is its production method.

There are several ways to produce graphene, but I would like to explain the exfoliation-of-graphite method, because it was the one used by University of Manchester researchers when they first isolated graphene, and I think it is a bit rudimentary for such a cutting-edge technology. It involves the use of Scotch tape to peel off layers of graphite. This has to be repeated several times until you get super-thin sheets of graphene that cannot be seen with the naked eye (you can watch a video about this in the Resources section below). All the production methods have in common that producing graphene is very expensive, yields very small quantities, and is difficult to scale up. This makes graphene one of the most expensive materials on earth. However, despite the limitations of the production methods, graphene is a very promising material with lots of great properties and innumerable applications. The EU investment in graphene research will definitely improve the production methods and boost research to find new and exciting applications for a technology that is often referred to as the ‘21st century material’.

Resources:
Introducing graphene (University of Cambridge): http://www.youtube.com/watch?v=dTSnnlITsVg
How to make graphene (PhysicsWorldTV): http://www.youtube.com/watch?v=ehvksWx3AJQ
University of Manchester graphene research: http://www.graphene.manchester.ac.uk/
Park, Sungjin, and Rodney S. Ruoff. Chemical methods for the production of graphenes. Nature Nanotechnology 4.4 (2009): 217-224.
This summer, the Arctic lost an area of sea ice equivalent to the state of Maine every day for a month. When the meltback was over in September, the Arctic had shed an area of ice the size of Canada and Texas combined, a 40 percent decline from the historical average. And just last month, scientists reported that the pace of ice loss in Greenland is five times greater than it was in the 1990s, a development they called “extraordinary.” Some predict ice-free summers in the Arctic as soon as 2016. Yet these changes have gotten only modest coverage in the press. Even as scientists documented the “astonishing” melt in the Arctic this summer, television news outlets covered Vice Presidential Candidate Paul Ryan’s workout routine three times more often than record sea ice loss. Why aren’t people paying attention? One reason is that it’s difficult to imagine the scope of the problem. For those with only a casual understanding or interest in global warming, the changes listed above might read like another laundry list of environmental impacts that aren’t relevant to daily life. That’s where James Balog, star of the new film Chasing Ice, comes in. As a long-time photographer, Balog has tried to illustrate the interaction between humans and nature throughout his career. In 2007, after personally witnessing the melting of glaciers on an assignment for National Geographic, he started a groundbreaking project to document the demise of the world’s ice. Called the Extreme Ice Survey, Balog and his team put 27 cameras in place around the world and have taken pictures of glaciers every hour of daylight since. Chasing Ice documents the enormously challenging process of getting the project off the ground, as well as the jaw-dropping final product showing geologic changes taking place in just a few years. Suddenly, the melting of the Arctic becomes real, immediate, and terrifying.
More importantly, through the time-lapse photos and the film's narration, Balog and director Jeff Orlowski successfully humanize the glaciers and explain why their changes are so important. This is one of the most important outcomes of the film. And judging from the response of both viewers and film critics, their approach is moving people in a big way. Watch Chasing Ice. Bring your family, bring your friends, watch it on the big screen if you can. It will fill you with awe for the beauty of ice, admiration for the tenacity of Balog and his crew, and terror at the scale of changes we're creating on Earth. I spoke to Chasing Ice star James Balog about the film and his philosophy behind communicating the reality of climate change:

Stephen Lacey: I wanted to ask about your initial thoughts on climate change. You talk in the film about being a skeptic back in the 80's when people like James Hansen were really first starting to raise alarms in the policy sphere. So as a nature photographer, at what point did you look around and realize that you could see some of these changes firsthand, and how did that change your perspective?

James Balog: Well, I have to confess that my initial resistance to this was connected with my work on some other big environmental issues back in the late 80's and early 90's on the extinction of animals and deforestation. There was a finite well of worry that I was willing to climb over and there were only so many things I wanted to occupy my brain with. So part of it was like, "oh my God, here's another issue." I've also been a little bit of a skeptic over the years about how activists like to paint things in very black and white terms; heroes and villains in order to motivate their bases and make issues really simple so that they can get people to pay attention. So there was that. But an even bigger thing was that I thought that the science was simply based on computer models, which at the time were not at all bomb proof.
Now of course they are quite good – they’re not perfect but they are extremely good. And I took the time to learn in the late 90′s that the science was not about computer models, it was about actual tangible physical evidence that was preserved in the ice cores of Greenland and Antarctica. That was really the smoking gun showing how far outside normal, natural variation the world has become. And that’s when I started to really get the message that this was something consequential and serious and needed to be dealt with. SL: So in order to document these changes, the Extreme Ice Survey was born in the mid-2000s. You set up 27 cameras in Alaska, Iceland, Greenland and Montana and took pictures every hour of daylight for a few years. Describe what you saw when you got the images back and started looking through them and creating these sequences.
<urn:uuid:aad7e891-f7a5-4596-955a-3c039a7299d6>
2.875
1,002
Audio Transcript
Science & Tech.
49.77241
(part of ZopeInAnger) Forget those explanations about patterns. And forget about Python classes. Interfaces are a Zope 3 invention mainly to:
- Allow the exposing of an API (every component has an external interface)
- Allow the Zope 3 machinery to query the interface

Interfaces can be queried by Zope 3 so that, when they are registered, factories can find them. Interfaces can be queried by Zope 3 so they can be used to automatically add documentation through the ZMI. Interfaces can be queried by Zope 3 so that so-called schemas - basically lists of fields - can be queried and web forms automatically generated. Interfaces can set constraints - so when you access a value they can throw an error, allowing for strict(er) type checking. Interfaces can set constraints - so, for example, containers know what other interfaces can be contained. Interfaces need to be declared as implemented within your class definitions - thereby allowing Zope 3 to query for an instance of an object that provides that interface. Key to understanding interfaces is that they can be queried and constraints can be added. Quite simple, really.

Python is a dynamically typed language and class definitions are not protected. By declaring that it implements a given interface, a class promises to actually implement that interface. Python being a dynamic language, it is possible to implement only part or even none of an interface and still declare that an interface is provided by that class. Sometimes this is done as a convenience in the prototyping phase of development, but feature-complete code should always implement the full interface as declared. Any Zope 3 developer who is providing a baseline guarantee of quality for their code will always implement the full interface that they are declaring, as well as provide a level of assurance that the implementation is correct through automated testing suites.
In fact the interface solution is quite elegant. Because - by virtue of a shortcoming in Python, one could argue - Zope 3 now has a standard convention for components to expose their interfaces. Other computer languages have the option of stricter class definitions - but many frameworks would actually benefit from such an interface specification. I hope this helps you understand Zope 3 interfaces. To study how to implement them see ZopeGuideInterfaces. If you feel this page can be improved please edit it. Note: I make it a point not to get too technical - that is left to the (online) books. The Interfaces wiki contains the history of zope.interface development, as well as some more recent discussion. The zope.interface.README.txt file provides a complete discussion and set of examples for working with interfaces. The information below is from the doc string of zope.interface.interfaces.IInterface and provides a good, succinct description of what a Zope 3 Interface object is.

Interface objects

Interface objects describe the behavior of an object by containing useful information about the object. This information includes:

o Prose documentation about the object. In Python terms, this is called the "doc string" of the interface. In this element, you describe how the object works in prose language and any other useful information about the object.

o Descriptions of attributes. Attribute descriptions include the name of the attribute and prose documentation describing the attribute's usage.

o Descriptions of methods. Method descriptions can include:

  - Prose "doc string" documentation about the method and its usage.

  - A description of the method's arguments: how many arguments are expected, optional arguments and their default values, the position of arguments in the signature, whether the method accepts arbitrary arguments and whether the method accepts arbitrary keyword arguments.

o Optional tagged data. Interface objects (and their attributes and methods) can have optional, application-specific tagged data associated with them. Example uses for this are examples, security assertions, pre/post conditions, and other possible information you may want to associate with an Interface or its attributes.

Not all of this information is mandatory. For example, you may only want the methods of your interface to have prose documentation and not describe the arguments of the method in exact detail. Interface objects are flexible and let you give or take any of these components.

Interfaces are created with the Python class statement using either Interface.Interface or another interface, as in::

    from zope.interface import Interface

    class IMyInterface(Interface):
        '''Interface documentation'''

        def meth(arg1, arg2):
            '''Documentation for meth'''
            # Note that there is no self argument

    class IMySubInterface(IMyInterface):
        '''Interface documentation'''

        def meth2():
            '''Documentation for meth2'''

You use interfaces in two ways:

o You assert that your object implements the interfaces. There are several ways that you can assert that an object implements an interface:

  1. Call zope.interface.implements in your class definition.

  2. Call zope.interface.directlyProvides on your object.

  3. Call zope.interface.classImplements to assert that instances of a class implement an interface. For example::

         from zope.interface import classImplements
         classImplements(some_class, some_interface)

     This approach is useful when it is not an option to modify the class source. Note that this doesn't affect what the class itself implements, but only what its instances implement.

o You query interface meta-data. See the IInterface methods and attributes for details.
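The declare-then-query mechanic described above can be sketched in plain Python. Note this toy is NOT the real zope.interface API - the real package provides `implementer` and `IFoo.providedBy(obj)`, among much else - it only illustrates the idea that a class declares which interfaces it implements and that a framework can later query that declaration:

```python
# Toy sketch of the "declare, then query" pattern behind zope.interface.
# The names below are invented for illustration; real code would use the
# zope.interface package (Interface, @implementer, IFoo.providedBy(obj)).

class Interface:
    """Marker base class for interface definitions."""

def implementer(*interfaces):
    """Class decorator: record the interfaces a class claims to implement."""
    def decorate(cls):
        cls.__implements__ = frozenset(interfaces)
        return cls
    return decorate

def provided_by(obj, iface):
    """Query: does obj's class declare that it implements iface?"""
    return iface in getattr(type(obj), "__implements__", frozenset())

# An interface describing desired behavior...
class IGreeter(Interface):
    def greet(name): ...

# ...and a class that declares it implements that interface.
@implementer(IGreeter)
class Greeter:
    def greet(self, name):
        return "Hello, " + name

print(provided_by(Greeter(), IGreeter))   # True: the declaration is queryable
print(provided_by(object(), IGreeter))    # False: no declaration was made
```

As in real zope.interface, nothing here checks that `Greeter` actually provides a working `greet` - the declaration is a promise, which is exactly why the text stresses implementing the full interface and backing it with tests.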
<urn:uuid:48fc963e-ad1b-4537-b9e6-85abcef81dab>
2.96875
1,138
Documentation
Software Dev.
29.735312
READ CHAPTERS 5, 6, 7
STUDY ALL VOCABULARY TERMS
- Know how to do relative humidity problems.
- Know how to do orographic uplift problems.
- Know the location of all wind and pressure belts.
- Know all measures of standard atmospheric pressure at sea level.
- Does air pressure increase or decrease with increased elevation?
- Understand the Coriolis Effect.
- Know which local winds follow a daily or seasonal pattern.
- What are prevailing winds?
- Know the characteristics of cyclones and anticyclones in each hemisphere.
- Know the characteristics of cumulus, stratus, and cirrus clouds.
- Do isobars increase or decrease in value towards the center of an anticyclone?
- Would precipitation be higher on windward or leeward slopes of major mountain ranges?
- List the three ways to make the air rise, expand and cool.
- Is fog a form of condensation or precipitation?
- What is a squall line?
- Define a hurricane by wind speed.
- What direction do mid-latitude cyclones tend to move across North America?
- What is an occluded front?
- Understand the terms saturation and dew point.
- What causes westerly winds?
- At what time of the day would you expect to find the highest relative humidity?
- Can jet streams provide heat exchange in the atmosphere?
- Can lightning occur within the same cloud?
- What is a steep pressure gradient?
- Is a radiation fog most likely to form during an inversion?
- The closer the dew point temperature is to the actual temperature, the higher or the lower the relative humidity?
- What do isobars connect?
- What determines the capacity of the air to hold moisture?
- Do tropical cyclones form out of a single air mass?
- Understand the characteristics of warm fronts and cold fronts.
- You will have a matching portion related to a cross section of a mid-latitude cyclone.
(1) Present possible explanations why precipitation is so low in the Great Basin of the U.S. (Nevada-Utah area).
(2) Present possible explanations why precipitation is so high in the Amazon Basin. 3/14/2012 1:22 PM
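The relative humidity problems above typically combine air temperature and dew point. One common approach - the Magnus approximation used here is an assumption, since the study guide does not specify a formula - is to take the ratio of saturation vapor pressure at the dew point to saturation vapor pressure at the air temperature:

```python
import math

def saturation_vapor_pressure(temp_c):
    """Saturation vapor pressure in hPa via the Magnus approximation
    (constants 6.112 / 17.67 / 243.5 are a common choice, assumed here)."""
    return 6.112 * math.exp(17.67 * temp_c / (temp_c + 243.5))

def relative_humidity(temp_c, dew_point_c):
    """Relative humidity (%) from air temperature and dew point, both in deg C."""
    return 100.0 * saturation_vapor_pressure(dew_point_c) / saturation_vapor_pressure(temp_c)

# The closer the dew point is to the air temperature, the higher the RH:
print(round(relative_humidity(30.0, 30.0)))  # 100 (saturated: dew point equals temperature)
print(round(relative_humidity(30.0, 20.0)))  # 55 (same air temperature, drier air)
```

This also answers two of the review questions directly: RH is highest when the dew point is closest to the actual temperature, and it peaks at the time of day when the air is coolest.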
<urn:uuid:ec70bfa3-527f-4a74-bc39-e6d85c2738ad>
3.921875
492
Content Listing
Science & Tech.
51.852268
http://www.nature.com/nature/journal/v4 ... 10343.html

Role of sulphuric acid, ammonia and galactic cosmic rays in atmospheric aerosol nucleation

Atmospheric aerosols exert an important influence on climate [1] through their effects on stratiform cloud albedo and lifetime [2] and the invigoration of convective storms [3]. Model calculations suggest that almost half of the global cloud condensation nuclei in the atmospheric boundary layer may originate from the nucleation of aerosols from trace condensable vapours [4], although the sensitivity of the number of cloud condensation nuclei to changes of nucleation rate may be small [5, 6]. Despite extensive research, fundamental questions remain about the nucleation rate of sulphuric acid particles and the mechanisms responsible, including the roles of galactic cosmic rays and other chemical species such as ammonia [7]. Here we present the first results from the CLOUD experiment at CERN. We find that atmospherically relevant ammonia mixing ratios of 100 parts per trillion by volume, or less, increase the nucleation rate of sulphuric acid particles more than 100–1,000-fold. Time-resolved molecular measurements reveal that nucleation proceeds by a base-stabilization mechanism involving the stepwise accretion of ammonia molecules. Ions increase the nucleation rate by an additional factor of between two and more than ten at ground-level galactic-cosmic-ray intensities, provided that the nucleation rate lies below the limiting ion-pair production rate. We find that ion-induced binary nucleation of H2SO4–H2O can occur in the mid-troposphere but is negligible in the boundary layer. However, even with the large enhancements in rate due to ammonia and ions, atmospheric concentrations of ammonia and sulphuric acid are insufficient to account for observed boundary-layer nucleation.
<urn:uuid:12a4ce5d-8da8-43f2-a8aa-198617eb61a2>
2.859375
379
Comment Section
Science & Tech.
24.871758
An old adage explains the difference between climate and weather: "Climate is what you expect, weather is what you get." Climatology is the analysis of general weather patterns over a long period of time. For example, the World Meteorological Organization (WMO) specifies the period for climate analysis to be thirty years. Weather, on the other hand, comprises the meteorological conditions that an area is currently experiencing. Weather can, and frequently does, vary markedly from climatology. Scientists at the NHC have studied the historical record and have prepared a climatological chart for each month of the hurricane season that depicts the most likely area for tropical cyclone development and the most likely path that a hurricane would follow. The NHC offers the following disclaimer: "hurricanes can originate in different locations and travel much different paths from the average." A case in point was Vince, which in mid-October of 2005 became the first known tropical cyclone to strike the Iberian Peninsula. Vince defied climatology by forming far to the east and following a most unconventional path. A number of atmospheric variables influence the development and path of tropical cyclones, and these variables change as our hemisphere transitions from summer to winter. Below is a series of charts for the months of August, September, October and November that provide the NHC's climatological analysis of origin and anticipated path, mean sea surface temperature, and mean sea level barometric pressure. A review of the time series shows how the region of formation expands both eastward and westward as August gives way to September. Also note the region of warm water that expands ever so slightly to the east and the contraction of the area enclosed by the 1020 mb isobar on the chart of sea level pressure. It is the retreat of the North Atlantic subtropical high that allows a cyclone an opportunity for a more northeasterly path.
As the season progresses into October, the region of formation contracts considerably in response to sea surface cooling in the Caribbean basin. Both the retreat of the subtropical high and surface cooling in the Caribbean basin are the result of the Sun's southward progression into the Southern Hemisphere. The continued retreat of the subtropical high combined with strengthening westerly winds diminishes the possibility for landfall in the Gulf of Mexico while increasing the chances for a Florida landfall. Finally, as the season draws to a close in November, the areas of formation, sufficiently warm water and the subtropical high have contracted considerably. Hurricane formation isn't impossible, but the environment is far less supportive. While there are tropical cyclones such as Vince that defy climatological convention, Wilma's formation and subsequent track were eerily consistent with the NHC's climatological analysis. Note how Wilma, a late October hurricane, formed exactly where expected and headed to the northwest before moving northeasterly across Florida and up the East Coast. © 2005-2006 Mark A. Thornton
<urn:uuid:2ea36eae-83f5-4d8d-bc71-c7390042862e>
3.640625
630
Knowledge Article
Science & Tech.
24.851862
Look up monthly U.S., Statewide, Divisional, and Regional Temperature, Precipitation, Degree Days, and Palmer (Drought) rankings for 1-12, 18, 24, 36, 48, 60-month, and Year-to-Date time periods. Data and statistics are as of January 1895. Please note, Degree Days are not available for Agricultural Belts Contiguous U.S. Temperature Rankings, February 1943 More information on Climatological Rankings (out of 119 years) |94th Coldest||1936||Coldest since: 1942| |26th Warmest||1954||Warmest since: 1932|
<urn:uuid:a06f7d40-5cfa-4410-ab02-1c6951255e8f>
2.859375
139
Structured Data
Science & Tech.
58.063077
User:Emran M. Qassem/Balmer

In this lab we measured the Rydberg constant for Hydrogen and Deuterium using a spectrometer and compared them with each other and with the accepted value of the Rydberg constant. To do this we used a spectrometer which we first calibrated with a mercury gas tube and the known wavelengths of the mercury spectral lines. Having the calibration, we used the spectrometer to measure the wavelengths of the spectral lines of Hydrogen and Deuterium. This gave us the data to calculate the Rydberg constant.

During our procedure, we found many spectral lines in our Hydrogen that seemed like they should not have been there. We even marked down a yellow line which was not documented anywhere we looked. It was suggested by our professor Dr. Koch that maybe this source was not really hydrogen, or that maybe the spectrometer had an issue with it.

The Rydberg constants we measured:
- for Hydrogen
- for Deuterium

The accepted value is:

The accepted value is within one sigma of our measured value. It seems our measurements are good. We did change the 690 wavelength calibration correction from -9 to 2, which I am not too sure about doing. It did give us better data, and might very well have been a mistake in the calibration, but I believe the correct thing to have done in this case is to go back and remeasure instead of changing it to the value which matches the rest and works out well.

In this lab I learned that doing the experiment and writing everything down is more important than getting a good result. Even though we had a few problems with extra spectral lines and in the end we changed one of the calibration parameters, we took note of everything, so that it was documented.
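The calculation behind the analysis is the Balmer formula, 1/λ = R(1/2² − 1/n²), so each measured wavelength yields an estimate of R. Since the report's own measured values are not listed, the wavelengths below are the standard textbook hydrogen Balmer lines, used purely as stand-ins:

```python
# Estimate the Rydberg constant from Balmer-series wavelengths.
# Balmer formula: 1/lambda = R * (1/2**2 - 1/n**2), for n = 3, 4, 5, ...
# Wavelengths are textbook hydrogen values in metres (stand-ins for the
# lab's own measurements, which the report does not list).

balmer_lines = {
    3: 656.3e-9,  # H-alpha (red)
    4: 486.1e-9,  # H-beta  (blue-green)
    5: 434.0e-9,  # H-gamma (violet)
    6: 410.2e-9,  # H-delta (violet)
}

estimates = [1.0 / (wl * (0.25 - 1.0 / n**2)) for n, wl in balmer_lines.items()]
R_measured = sum(estimates) / len(estimates)

print(f"R = {R_measured:.4e} m^-1")  # the accepted value is about 1.097e7 m^-1
```

Averaging over several lines, as done here, is one way to get a single estimate; a least-squares fit of 1/λ against (1/4 − 1/n²) would also give the uncertainty needed for the one-sigma comparison the report makes.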
<urn:uuid:89cb8804-b5b8-450a-9519-3dce97bdc298>
3.046875
374
Audio Transcript
Science & Tech.
49.027466
- Almost always has something to do with strings or matching

~ As a prefix operator, coerces to subtype Str (string)
~ As an infix operator: concatenate or stitch characters together, instead of . in Perl 5.

Specification

Operators containing this character:

$~ not an operator, but a twigil for a slang
~^ ~| ~& bitwise logical operations on buffers or strings
~= string append, the post-assignment mutating form of ~
~~ and !~~ the Smart Match operator and its negated form

When used inside regexes:

~~ and !~~ inside a regular expression cause a nested submatch to be performed.
~ is a helper for matching nested subrules with a specific terminator as the goal.
<~~ inside a regular expression starts an extensible metasyntax for sub-pattern re-use (and must be closed with >)

Most hyper operators and meta operators have functional ~ forms.

Old, deprecated, or other language uses:

=~ and !~ in Perl 5 used to be for matching. Now it is ~~ (see above). =~ is always a syntax error in Perl 6.
~ as a prefix operator in Perl 5 was a bitwise invert. Now that is ~^ or +^.
<urn:uuid:f81ea435-fc32-43c4-b145-61ccd9fcdd44>
3.71875
271
Structured Data
Software Dev.
44.087143