Environmental Issues: Water All Documents in Water Tagged conservation and restoration - Re-Envisioning the Chicago River Adopting Comprehensive Regional Solutions to the Invasive Species Crisis - In response to a public health emergency more than 100 years ago, engineers reversed the Chicago River and built the Chicago Sanitary and Ship Canal to carry wastewater away from Lake Michigan, the city's source of drinking water. The canal also provides a shipping link between the Mississippi River and the Great Lakes, opening navigation not only to recreational boats and commercial barges, but also to invasive species, and it diverts massive amounts of water from Lake Michigan. The unfolding Asian carp crisis reveals more than just the challenges faced by local, state, and federal agencies in stopping invasive species from entering the Great Lakes. It also exposes critical infrastructure deficiencies in the region's wastewater, stormwater, and transportation systems. - Florida Everglades - Answers to questions including: What is the Everglades? What types of environmental threats does the region face? What's being done today to save the Everglades? - Clearing the Waters From the Chesapeake to California, NRDC is fighting to restore America's threatened waterways - The United States has made significant progress cleaning up the nation's waterways since Congress passed the Clean Water Act in 1972, but much more remains to be done. Although some of the most obvious signs of contamination have disappeared, other sources of pollution persist, and water resources are frequently overtaxed, particularly in the West.
- Fish Out of Water How Water Management in the Bay-Delta Threatens the Future of California's Salmon Fishery - This July 2008 issue paper examines the operation of water management projects in California as one of the most significant -- and reversible -- causes of fishery collapse and provides comprehensive policy recommendations for restoring and sustaining this treasured resource. For additional policy documents, see the NRDC Document Bank.
Updated: 18-Feb-2009

As previously mentioned, blocks are not normally evaluated. A do function is required to force a block to be evaluated. There are times when you may need to conditionally evaluate a block. The following section describes several ways to do this.

The if function takes two arguments. The first argument is a condition and the second argument is a block. If the condition is true, the block is evaluated; otherwise it is not evaluated.

    if now/time > 12:00 [print "past noon"]
    past noon

The condition is normally an expression that evaluates to true or false; however, other values can also be supplied. Only a false or a none! value prevents the block from being evaluated. All other values (including zero) are treated as true, and cause the block to be evaluated. This can be useful for checking the results of find, select, next, and other functions that return none!:

    string: "let's talk about REBOL"
    if find string "talk" [print "found"]
    found

The either function extends if with a second block, which is evaluated when the condition is false:

    either now/time > 12:00 [
        print "after lunch"
    ][
        print "before lunch"
    ]
    after lunch

Both the if and either functions return the result of evaluating their blocks. In the case of an if, the block value is only returned if the block is evaluated; otherwise, a none! is returned. The if function is useful for conditional initialization of variables:

    flag: if time > 13:00 ["lunch eaten"]
    print flag
    lunch eaten

Making use of the result of the either function, the previous example could be rewritten as follows:

    print either now/time > 12:00 [
        "after lunch"
    ][
        "before lunch"
    ]
    after lunch

Since both if and either are functions, their block arguments can be any expression that results in a block when evaluated. In the following examples, words are used to represent the block argument for if and either.

    notice: [print "Wake up!"]
    if now/time > 7:00 notice
    Wake up!
    notices: [
        [print "It's past sunrise!"]
        [print "It's past noon!"]
        [print "It's past sunset!"]
    ]
    if now/time > 12:00 second notices
    It's past noon!

    sleep: [print "Keep sleeping"]
    either now/time > 7:00 notice sleep
    Wake up!

The any and all functions offer a shortcut to evaluating some types of conditional expressions. These functions can be used in a number of ways: either in conjunction with if, either, and other conditional functions, or separately. Both any and all accept a block of expressions, which is evaluated one expression at a time. The any function returns on the first true expression, and the all function returns on the first false expression. Keep in mind that a false expression can also be none!, and that a true expression is any value other than false or none!. Both the any and all functions only evaluate as much as they need. For example, once any has found a true expression, none of the remaining expressions are evaluated. Here is an example of using any:

    size: 50
    if any [size < 10 size > 90] [
        print "Size is out of range."
    ]

    number: none
    print number: any [number 100]
    100

Similarly, if you have various potential values, you can use the first one that actually has a value (is not none!):

    num1: num2: none
    num3: 80
    print number: any [num1 num2 num3]
    80

    data: [123 456 789]
    print any [find data 432 999]
    999

Similarly, all can be used for conditions that require all expressions to be true:

    if all [size > 10 size < 90] [print "Size is in range"]
    Size is in range

You can verify that values have been set up before evaluating a function:

    a: "REBOL/"
    b: none
    probe all [string? a string? b append a b]
    none

    b: "Core"
    probe all [string? a string? b append a b]
    REBOL/Core

The until function repeats a block until the evaluation of the block returns true (that is, not false or none!). The evaluation block is always evaluated at least once. The until function returns the value of its block. The example below will print each word in the color block.
The block begins by printing the first word of the block. It then advances to the next color. At the end of the block, the tail? function returns true, which causes the until function to exit.

    color: [red green blue]
    until [
        print first color
        tail? color: next color
    ]
    red
    green
    blue

The while function repeats the evaluation of its two block arguments while the first block returns true. The first block is the condition block, the second block is the evaluation block. When the condition block returns false or none!, the expression block will no longer be evaluated and the loop terminates. Here is an example similar to the one shown above. The while loop will continue to print a color while there are still colors to print.

    color: [red green blue]
    while [not tail? color] [
        print first color
        color: next color
    ]
    red
    green
    blue

The condition block can contain any number of expressions, so long as the last expression returns the condition. To illustrate this, the next example adds a print to the condition block. This will print the index value of the color. It will then check for the tail of the color block, which is the condition used for the loop.

    color: [red green blue]
    while [
        print index? color
        not tail? color
    ][
        print first color
        color: next color
    ]
    1
    red
    2
    green
    3
    blue
    4

The last value of the block is returned from the while function. A break can be used to escape from the loop at any time.
The turacos, also known as plantain eaters and go-away birds, make up the bird family Musophagidae (literally "banana-eaters"). In southern Africa both turacos and go-away birds are also commonly known as louries. Traditionally this group has been placed in the cuckoo order Cuculiformes, but the Sibley-Ahlquist taxonomy raises the group to a full order, Musophagiformes. Turacos are medium-sized arboreal birds common to sub-Saharan Africa, living in forests, woodland and savanna. Their flight is weak, but they run quickly through the tree canopy. They feed mostly on fruits and to a lesser extent on leaves, buds, and flowers, occasionally taking small insects, snails, and slugs. The turacos and plantain eaters are brightly colored birds, usually blue, green or purple. The green color comes from turacoverdin, the only true green pigment in birds. (Other "greens" in bird colors result from a yellow pigment such as lipochrome combined with the prismatic blue physical structure of the feather itself.) Their wings contain the red pigment turacin. Both pigments are unique to this group. The go-away birds are mainly grey and white. The Musophagidae build large stick nests in trees, and lay 2 or 3 eggs.
Next time kids say they don't like vegetables, try out an experiment that will bring out the fun side of our leafy friends. Mom and Kiddo of the blog What Did We Do All Day? show us how to play with color in this demonstration that uses an acid, a base, and a vegetable. She suggests keeping some of the solution in the fridge for a rainy day and allowing kids to experiment on their own.

You will need:
- purple or red cabbage
- small and large glass jars
- vinegar
- baking soda
- measuring cup
- 1/4 teaspoon

What to do?
- Chop up a cabbage and simmer on the stove for 20 minutes to make a cool purple liquid (kids, please let a grown-up do this)
- After the purple brew has cooled, collect some small and large jars. Place about 1/4 tsp baking soda and 1/4 tsp water in one jar, a small amount of vinegar in another and about 1/4 cup purple brew in a third.
- Put some of the brew in a measuring cup and pour 1/4 tsp of the brew in each of the first two small jars. What happens when you mix the purple brew with the different solutions?
- In the jar filled with a 1/4 cup of purple brew, pour about 1/4 cup vinegar. What happens?
- Next, add 1/4 tsp baking soda to the same solution. What is your observation?

How does it work? Red cabbage contains a chemical called flavin, and flavin has the ability to change color based on the pH level of certain liquids. Neutral solutions (like water) are purple. Acid solutions, like the vinegar, will turn flavin red. Basic solutions, like the baking soda water, become blue.

You can check out Mom and Kiddo's full post of this experiment on her blog. Let us know what your results are when you make your own purple brew. What would happen if you tried different vegetables? What would happen if you used cream of tartar, lemon juice, salt, lemonade, or other materials from your kitchen pantry? Can you make your own litmus paper and test the pH of the solution? What would the science world be without vinegar & baking soda?
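For older kids, the color changes described above can even be turned into a tiny program. The pH thresholds below are rough illustrative assumptions (the exact colors depend on the cabbage and the concentrations), not measured values:

```python
def cabbage_indicator_color(ph: float) -> str:
    """Approximate color of red-cabbage juice at a given pH.

    Thresholds are rough illustrative guesses: acids turn the
    flavin pigment red or pink, neutral solutions stay purple,
    and bases shift the color toward blue and green.
    """
    if ph < 5:
        return "red"
    elif ph < 7:
        return "pink"
    elif ph == 7:
        return "purple"
    elif ph <= 9:
        return "blue"
    else:
        return "green"

# Vinegar is roughly pH 2-3; plain water is neutral;
# baking soda solution is around pH 9.
print(cabbage_indicator_color(2.5))
print(cabbage_indicator_color(7.0))
print(cabbage_indicator_color(9.0))
```

Kids could fill in their own observed colors and thresholds after running the experiment.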
It would be a little less exciting at the Community School of West Seattle. Michelle Taylor teaches a K-2 program there and she decided to add a little science excitement to her classroom. With a little vinegar, baking soda, a bottle and a balloon, her students were able to observe chemistry at work inflating the balloon. (Instructions for this experiment below) "You could hear the screams all through the school – it was so exciting."
Remember the headline from a few days ago about the impending supernova from "nearby" recurring nova T Pyxidis? T Pyxidis is a star which goes boom every 20 years, but hasn't had an event since 1967. This week some astronomers suggested T Pyxidis is headed for supernova, the supposed aftereffects of which would include the stripping away of the Earth's ozone layer, massively increased gamma radiation, the creation of millions of Incredible Hulk-like monsters, and the eventual destruction of life on earth. As it turns out, we'll probably be fine. Our friends the Gormogons point us to a more sober analysis from Discovery's fantastic Bad Astronomy Blog. T Pyxidis is 3,260 light years away. Even if it does become a Type 1a supernova--rather than a less-powerful Type II supernova (wouldn't it have been easier to have Division IA and Division IAA?)--earth is too far away to feel the effects. How far out of range? About a factor of ten. The mistake the astronomers seem to have made is that they were using gamma-ray strength data not from a supernova, but from a GRB (gamma ray burst). In conclusion: No supernova yet; no danger if there is a supernova; no marauding Hulks; no extinction of life on earth. There's a lesson in here somewhere.
From Ed Yong: All modern penguins wear a suit of black feathers, but prehistoric members of the group didn’t go for the dinner jacket look. A newly discovered penguin, known as Inkayacu, was dressed in grey and reddish-brown hues. It is neither the oldest nor the largest penguin fossil, it doesn’t hail from a new part of the world, and it provides few clues about the group’s evolution. However, it does have one stand-out feature that probably secured its unveiling in the pages of Science – its feathers. Read about Inkayacu‘s magnificently preserved fossil feathers and what they tell us about this prehistoric bird at Not Exactly Rocket Science. Ed also has the artists’ renderings of what this powerful penguin may have looked like. 80beats: What Color Were Feathered Dinosaurs and Prehistoric Birds? 80beats: Emperor Penguins May Be Marching to Extinction by 2100 80beats: Researchers Use Feather “Fingerprints” to Track Penguins Discoblog: The Mystery of the Macaroni Penguin and the Bad Egg Discoblog: To Track Penguins, Scientists Use High-Tech Satellite Images of…Droppings Image: Science / AAAS
Rowe, J. E., Kelly, M. and Hewitt, C. N. (1993) The occurrence of high indoor radon levels in carboniferous bedrock areas of NW England. Radiation Protection Dosimetry, 46 (3). pp. 201-205. Full text not available from this repository.

In NW England, certain types of bedrock of Carboniferous age have been found to have a relatively high radon potential, in particular limestones and some mudrocks. These result in frequencies of domestic indoor radon gas concentrations above the Action Level (200 Bq.m-3) of 5% and 3% for the Dinantian and Namurian subdivisions of the Carboniferous, respectively. More particularly, villages built on such bedrocks can have frequencies of homes 10-15% above the Action Level. Surveys based on the geological radon potential can effectively locate small areas with incidences of high indoor radon levels which qualify them for consideration as Affected Areas (>1% frequency).

Journal or Publication Title: Radiation Protection Dosimetry
Subjects: G Geography. Anthropology. Recreation > GE Environmental Sciences
Departments: Faculty of Science and Technology > Lancaster Environment Centre
Deposited On: 20 Jan 2009 10:29
Last Modified: 26 Jul 2012 16:04
Using the VSEPR theory one can easily determine what is repelling what in a molecule; however, determining the shape of the molecule can be annoying at times. In order to help out with this problem, here is a list that will help you determine the shape of a simple molecule given the number of electron pairs. (Seeing as ASCII art isn't the best way to represent 3D molecules, I'll draw them with ASCII and then describe them. Because of this, the diagrams will not be a true representation of the molecules' shapes.)

A molecule will be linear when it has one or two pairs of electrons in the valence level of the central atom and one or two bonding pairs attached to the central atom. Eg: BeF2, H2, HCl

H - Cl   or   F - Be - F

Linear molecules look like a line. They're straight.

Triangular planar molecules look like a triangle with a dot in the middle of it. To make a triangular planar molecule you need three electron pairs in the valence level of the central atom and three bonding pairs attached to the central atom. An example is BF3. The angle between each of the bonds is theoretically 120 degrees.

If you can imagine a triangular pyramid (a tetrahedron) with an atom in the middle you'll have a good idea of what a tetrahedral molecule looks like. Tetrahedral molecules have four electron pairs in the valence energy level of the central atom and four bonded pairs. The tetrahedral shape is one of carbon's favourites =). CH4 and CF4 are good examples.

   H
   |
   C
 / | \
H  H  H

If the central atom of the molecule has four pairs of valence electrons, but only three bonded pairs, then it'll be pyramidal. It looks much the same as a tetrahedral molecule, but with a lone pair in place of one of the bonded atoms, sitting at the top. Examples: NH3, PCl3

   ..
   N
 / | \
H  H  H

What are those dots above the nitrogen? They're the lone pair of electrons that are repelling the hydrogens. I put them in so that you can see why the molecule isn't triangular planar.
If we again replace a bonded pair with a lone pair we'll get a different sort of shape. The V-shaped molecule is bent because the lone pairs of electrons are repelling the bonded pairs. The angle between the bonded pairs is 104.5 degrees. Examples: H2O, F2O

I kept this linear molecule for last because it has lone pairs of electrons attached to the central atom; the other linear molecules don't. This molecule has one bonded pair of electrons and three lone pairs. Examples: F2, Cl2

:F - F:

Those are the basic molecular structures that you have to deal with at this grade level in Western Australia. I'm sure that there is a lot more to it when it comes to organic chemistry, but I don't have that knowledge. In spite of that, you can always work it all out with valence shell electron pair repulsion theory if you really need to =)

Oh, I almost forgot double bonds. If a molecule has a double or a triple bond you treat it as if it were only one negative region. Of course it'll have a stronger negative charge, but it'll act as only one region of negative charge.
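The whole scheme above boils down to counting electron pairs and bonded pairs on the central atom, so it can be sketched as a simple lookup table. The shape names and example molecules come from the text; the dictionary keys (electron pairs, bonded pairs) are just a convenient encoding I chose for this sketch:

```python
# Map (electron pairs on the central atom, bonded pairs) -> shape,
# following the VSEPR rules described above.
SHAPES = {
    (2, 2): "linear",                    # e.g. BeF2
    (3, 3): "triangular planar",         # e.g. BF3
    (4, 4): "tetrahedral",               # e.g. CH4, CF4
    (4, 3): "pyramidal",                 # e.g. NH3, PCl3
    (4, 2): "v-shaped",                  # e.g. H2O, F2O
    (4, 1): "linear (with lone pairs)",  # e.g. F2, Cl2
}

def vsepr_shape(electron_pairs: int, bonded_pairs: int) -> str:
    """Return the molecular shape for a simple molecule, or 'unknown'."""
    return SHAPES.get((electron_pairs, bonded_pairs), "unknown")

print(vsepr_shape(4, 3))  # NH3: four pairs, three bonded -> pyramidal
```

Remember the double-bond caveat from above: count a double or triple bond as a single negative region when tallying bonded pairs.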
From Global Warming Art. This figure compares the global average surface temperature record, as compiled by Jones and Moberg (2003; data set TaveGL2v with 2005 updates), to the microwave sounder (MSU) satellite data of lower atmospheric temperatures determined by Christy et al. (UAH 2003; data set tltglhmam version 5.2 with 2005 updates) and Schabel et al. (RSS 2002; data set tlt_land_and_ocean with 2005 updates). These two satellite records reflect two different ways of interpreting the same set of microwave sounder measurements and are not independent records. Each record is plotted as the monthly average, and straight lines are fit through each data set from January 1982 to December 2004. The slopes of these lines are 0.187°C/decade, 0.163°C/decade, and 0.239°C/decade for the surface, UAH, and RSS data respectively. It is important to know that version 5.2 of Christy et al.'s satellite temperature record contains a significant correction over previous versions. In summer 2005, Mears and Wentz (2005) discovered that the UAH processing algorithms were incorrectly adjusting for diurnal variations, especially at low latitude. Correcting for this problem raised the trend line by 0.035°C/decade, and in so doing brought it into much better agreement with the ground based records and with independent satellite based analyses (e.g. Fu et al. 2004). The discovery of this error also explains why their satellite based temperature trends had previously disagreed most prominently in the tropics. Within measurement error, all of these records paint a similar picture of temperature change and global warming. However, climate models predict that carbon dioxide based greenhouse warming should result in lower atmosphere warming roughly 1.3 times greater than the surface warming. This prediction is consistent with the RSS vs. surface comparison, though by contrast the UAH vs. surface comparison suggests a troposphere warming slightly less than the surface of the Earth.
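The model comparison in the last sentences can be checked directly from the quoted trend values: dividing each satellite trend by the surface trend gives the observed amplification ratio, to be compared against the roughly 1.3 predicted by models. A quick sketch using the figure's numbers:

```python
# Linear trends quoted in the text, in degrees C per decade (1982-2004 fits).
surface_trend = 0.187
uah_trend = 0.163
rss_trend = 0.239

# Model-predicted troposphere/surface warming ratio, per the text.
predicted_amplification = 1.3

rss_ratio = rss_trend / surface_trend  # roughly 1.28: close to the prediction
uah_ratio = uah_trend / surface_trend  # roughly 0.87: below the prediction

print(f"RSS / surface: {rss_ratio:.2f} (predicted ~{predicted_amplification})")
print(f"UAH / surface: {uah_ratio:.2f}")
```

This makes the qualitative claim concrete: RSS is near the predicted amplification, while UAH implies the troposphere warmed slightly less than the surface.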
Note: In the above figure, there is still a significant discrepancy between the very earliest satellite measurements and the ground based measurements at that time. For this reason only the interval 1982-2005 was used in calculating each trend. Including the earliest years leads to a wider dispersion, with trends of 0.170°C/decade, 0.116°C/decade, and 0.192°C/decade for the surface, UAH, and RSS data respectively. The origin of this discrepancy is unclear. This figure was prepared by Robert A. Rohde from publicly available data.

- Christy, J.R., R.W. Spencer, W.B. Norris, W.D. Braswell and D.E. Parker (2003). "Error estimates of version 5.0 of MSU/AMSU bulk atmospheric temperatures". J. Atmos. Oceanic Technol. 20: 613-629.
- Fu, Q., C.M. Johanson, S.G. Warren, and D.J. Seidel (2004). "Contribution of stratospheric cooling to satellite-inferred tropospheric temperature trends". Nature 429: 55-58.
- Jones, P.D. and A. Moberg (2003). "Hemispheric and large-scale surface air temperature variations: An extensive revision and an update to 2001". Journal of Climate 16: 206-223.
- Mears, Carl A. and Frank J. Wentz (2005). "The Effect of Diurnal Correction on Satellite-Derived Lower Tropospheric Temperature". Science Express: published online 11 August 2005.
- Schabel, Matthias C., Carl A. Mears, and Frank J. Wentz (2002). "Stable Long-Term Retrieval of Tropospheric Temperature Time Series from the Microwave Sounding Unit". Proceedings of the International Geophysics and Remote Sensing Symposium III: 1845-1847.
Solve for real numbers x, y, and z. Graph the 3 equations with Graphing Calculator in 3D. The image shows three intersecting tubes with elliptical cross-sections. Click here to open a GCF file. (You must have Graphing Calculator 3.5 to open this file, but the image could be graphed with other software.) Opening it in GC 3.5 allows the image to be rotated to show different perspectives. Examining the 3D image will not provide a solution, but it may suggest where to look. For example, it appears the three tubes may have a common point in the region where x, y, and z are all positive. Try fixing one of the variables -- for example, replace z with k -- and examine some 2D graphs for different k. These are contour plots for each z = k plane. One solution is very close to (x, y, z) = (1.71674, 3.05009, 5.679).
In that video, the speaker conceptualizes a clock that measures time by reflections of light. If we consider speed of light as reference to our time measurement, I guess it is natural to expect that our time measurements will be warped if we are traveling at speeds comparable to that of light. I was wondering, what if if we measure time with something else as reference? Say, a digital clock, assuming it works under such extreme speed/acceleration... So, my questions: - Would our time measurements then be still warped? - I think, even if the measured times indeed gets warped, is it not just the perceived time for the traveling twin? Logically, the twins have lived for the same amount of time (as measured from a digital clock on earth) and should have grown/aged similar. Why would the twin who stayed back on earth be aged more? I might be totally wrong, but I would appreciate if somebody helps me understand where I went wrong. Thanks for your time.
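For context, the standard special-relativity prediction is that any clock the traveler carries, whether a light clock, a digital clock, or biological aging, runs slow by the same Lorentz factor, since time dilation is a property of time itself rather than of the clock mechanism. A quick sketch of that prediction (the 0.8c speed and 10-year duration are arbitrary illustrative numbers, not taken from the question):

```python
import math

def traveler_elapsed_years(earth_years: float, speed_fraction_of_c: float) -> float:
    """Proper time elapsed for the traveling twin, per special relativity.

    Any clock the traveler carries (light clock, digital clock, or
    biological aging) is predicted to show this same elapsed time.
    """
    gamma = 1.0 / math.sqrt(1.0 - speed_fraction_of_c ** 2)
    return earth_years / gamma

# Illustrative numbers: a 10-year round trip (Earth time) at 0.8c
# gives about 6 years of elapsed time for the traveler.
print(traveler_elapsed_years(10.0, 0.8))
```

Note this sketch only states the prediction; the resolution of the twin "paradox" itself (the asymmetry from the traveler's turnaround) is a separate argument.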
|S/2004 S 1|
|discovered by||Sébastien Charnoz|
|date of discovery||2004|
|orbital data|
|orbital radius||194,000 km|
|orbital period||1.0083 days|
|natural satellite||of Saturn|
|mass||1.65×10^13 kg|
|density||1.17 g/cm^3|
|surface gravity||0.00048 m/s^2|
|atmospheric pressure||0 kPa|

Methone was discovered in the year 2004 by the astronomer Sébastien Charnoz on photographs taken by the space probe Cassini-Huygens. Charnoz is a member of the Cassini-Huygens science team and works at the University of Paris. Like the other small moons discovered so far by Cassini, it is an extremely faint object; the probe had taken 75 pairs of long-exposure photographs of the region around Saturn. Charnoz examined the photographs using software he had developed himself, and found the moons Methone and Pallene. Charnoz reports: "I had been looking for such objects for weeks from my Paris office, but it was only when I used my laptop during a vacation that I found them. That told me I should take more vacations."

Possibly the moon is the same object that was visible in a single image taken by the space probe Voyager 2 on 23 August 1981 and given the designation S/1981 S 14. Its distance from Saturn was estimated at approximately 200,000 km.

Orbital data: Methone circles Saturn at a mean distance of approximately 194,000 km in 24 hours and 12 minutes.

Structure and physical data: Methone has a diameter of approximately 3 km. Assuming a mean density of 1.17 g/cm^3 (as for the neighbouring moon Mimas), a mass of 1.65×10^13 kg results. The gravitational acceleration at its surface amounts to 0.00048 m/s^2, only about 0.05 parts per thousand of that at Earth's surface.
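The quoted mass and surface gravity follow from the quoted size and density, so they can be checked in a few lines. This assumes a spherical moon of radius 1.5 km (half the quoted ~3 km diameter):

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
radius_m = 1500.0      # half the quoted ~3 km diameter
density_kg_m3 = 1170.0 # quoted mean density (1.17 g/cm^3)

# Mass from volume of a sphere times density: ~1.65e13 kg, as quoted.
volume = (4.0 / 3.0) * math.pi * radius_m ** 3
mass = density_kg_m3 * volume

# Surface gravity g = G*M/r^2: ~0.0005 m/s^2, close to the quoted 0.00048.
surface_gravity = G * mass / radius_m ** 2

print(f"mass: {mass:.3g} kg")
print(f"surface gravity: {surface_gravity:.2g} m/s^2")
```

The tiny residual difference from the quoted 0.00048 m/s^2 comes from rounding in the quoted diameter and density.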
Weblinks

Albiorix | Atlas | Calypso | Daphnis | Dione | Enceladus | Epimetheus | Erriapo | Helene | Hyperion | Ijiraq | Janus | Japetus | Kiviuq | Methone | Mimas | Mundilfari | Narvi | Paaliaq | Pan | Pallene | Pandora | Phoebe | Polydeuces | Prometheus | Rhea | Siarnaq | Skathi | Suttungr | Tarvos | Telesto | Tethys | Thrymr | Titan | Ymir (see also: List of the natural satellites)
Beta Cephei variables are variable stars which exhibit variations in their brightness due to pulsations of the stars' surfaces. The point of maximum brightness roughly corresponds to the maximum contraction of the star. Typically, Beta Cephei variables change in brightness by 0.01 to 0.3 magnitudes with periods of 0.1 to 0.6 days. These stars are main sequence stars (the main sequence is the continuous, distinctive band of stars that appears on plots of stellar color versus brightness, known as Hertzsprung–Russell diagrams after their co-developers, Ejnar Hertzsprung and Henry Norris Russell) with masses between about 7 and 20 solar masses.

The pulsations of Beta Cephei variables are driven by the kappa (κ) mechanism, the driving mechanism behind the changes in luminosity of many types of pulsating variable stars; here the Greek letter kappa indicates the radiative opacity at any particular depth of the stellar atmosphere.

The prototype of these variable stars, Beta Cephei, is a third magnitude star in the constellation Cepheus with the traditional name Alfirk, meaning "The Flock" (this star, along with α Cep and η Cep, was called Al Kawākib al Firḳ, "the Stars of The Flock", by Ulug Beg). It shows variation in apparent magnitude (its brightness as seen by an observer on Earth, adjusted to the value it would have in the absence of the atmosphere) from +3.16 to +3.27 with a period of 4.57 hours.

These stars should not be confused with Cepheid variables, which are named after Delta Cephei, a binary star system approximately 891 light-years away in the constellation of Cepheus and the prototype of the Cepheid variable stars. A Cepheid is a member of a class of very luminous variable stars; the strong direct relationship between a Cepheid variable's luminosity and pulsation period secures for Cepheids their status as important standard candles for establishing the Galactic and extragalactic distance scales.
Science Fair Project Encyclopedia

Mercury programming language

Mercury is compiled rather than interpreted, as is traditional for logic languages. It features a sophisticated, strict type and mode system. Its authors claim these features and logic programming's abstract nature speed the writing of reliable programs. Mercury's module system enables division into self-contained units, a problem for past logic programming languages.

Hello World in Mercury (by Ralph Becket at the University of Melbourne):

:- module hello_world.
:- interface.
:- import_module io.
:- pred main(io__state, io__state).
:- mode main(di, uo) is det.
:- implementation.
main --> io__write_string("Hello, World!\n").

Mercury is developed at the University of Melbourne Computer Science department under the supervision of Dr. Zoltan Somogyi. Unfortunately, the current Mercury implementation lacks user-level documentation (only reference documentation exists). Thus it is almost unused outside the team of its creators.

The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.
Characteristics of Life

Characteristics of life include living things that are made of cells or cell products, use energy, respond to changes in the environment, maintain homeostasis and reproduce with similar offspring. These characteristics define whether something is living or not.

When people start studying what life is, my students often say "why do we need to bother having all these definitions for life, because it's kind of, duh, obvious what life is." And that seems true in the beginning, but scientists actually had to sit there and try to narrow it down, because originally when people did study life, they ascribed life to a lot of things that we now know aren't alive. For example, they used to think that fire was alive. And we know now it's not.

So what are the characteristics of life? Well, one, we've discovered that all life on this planet is made up of cells or products of cells. For example, my body is made up of gazillions of cells. Skin cells, muscle cells, nerve cells, blood cells, et cetera. But then things like my fingernails or hair, those aren't made of cells, but they are made by cells. So that's what I mean by cell products.

Living things use up energy and materials. One of the laws of the universe is that the universe hates order, and it's called entropy, this love of disorder. So in order to maintain this highly organized state we need to be constantly spending energy to maintain the highly organized cells of our bodies. So one of the things that we know about life is that it has this active metabolism.

Living things respond to changes in their environment. Scientists like to call those stimuli. So when you poke me I look at you and say "hey, stop poking me." If you poke a tree, it doesn't turn and look at you, but if you keep poking it, ultimately it'll start growing thicker bark perhaps in that area. And if it's a Venus flytrap and you are a fly and you're poking the little trigger hairs, it will catch you and eat you.
We maintain homeostasis, this kind of goes hand in hand with this idea about responding to changes in the environment. Homeostasis, this is the idea that we try to maintain an internal balance. For example, if I get too hot, I start to sweat in order to get rid of the excess heat. If I get too cold I start to shiver doing muscle contractions to help release some additional heat energy. And thatâs all to keep my body temperature same. So homeostasis is an example, or sorry, body temperature maintenance is an example of homeostasis. Living things reproduce with similar offspring. So when I reproduce I have a mechanism of inheritance. I have a way of passing on my traits to my children. In me I use DNA, so pretty much everything on this planet uses DNA. There are some examples of viruses, although some people argue against them being alive. Some viruses were also use RNA. Now, life evolves at the species level. Individuals donât evolve. We donât undergo genetic change over time. Unless of course you're a spiderman you get beaten. But species evolve over time in response to environmental changes. Whether itâs, it gets colder so you need to evolve to have better protection against the cold or itâs a new environmental change because some member of your species figured out a new genetic mechanism thatâs just better than was previously found in your species. Thatâs life.
<urn:uuid:5e03e228-9335-4f67-a651-ee3da9563ace>
3.453125
729
Knowledge Article
Science & Tech.
54.358403
This isn’t actually about brain control. Not really, anyway. If I really knew anything about brain control, I’d be out taking over the world, not writing blog posts (and mark my words, I’d be making Jake Gyllenhaal my number one minion. Yum). It’s really about a technique called Transcranial Magnetic Stimulation, how amazing it would be if speculations that it might happen in nature were true, and why I’m not 100% convinced that they are. I read this article the other week in New Scientist. It describes a (not so*) new idea suggesting that the elusive phenomenon of ball lightning may in fact be a hallucination brought on by transient powerful magnetic fields caused by (ordinary) lightning strikes. The mechanism suggested to be behind this is the same one that is exploited by Transcranial Magnetic Stimulation (TMS), which I had the joy of playing with as part of my undergraduate research project. The principles of TMS are fairly simple. Firstly, it’s important to understand that neurons (brain cells) communicate chemically, not electrically**. But the signal within individual neurons that generates this chemical communication is electrical. The process of neurotransmission is an endlessly fascinating one, but one I’m not going to go into here, for reasons of brevity and because there is a pretty good summary on the ever marvellous wikipedia. You’ll probably remember from your schooldays that if you pass an electric current through a coil of wire, a magnetic field is generated around that wire. Placing a ‘conductor’ within that magnetic field causes electric current to flow in loops perpendicular to the magnetic field, and parallel to the coil itself. If that conductor is a human head, that electric current is being induced in neurons close to the brain’s surface in the region of the coil. If this all sounds complicated, it’s probably because I haven’t explained it very well; I’m no physicist! A much nicer explanation can be found in box 1 of this article. 
Anyway, the upshot of all this is that by sending brief pulses of current through such a coil, TMS can be used to externally evoke temporary neuronal activity, and we can observe the behavioural results. So, if we do this with the coil placed over motor cortex – the part of our brain that sends movement instructions to our muscles – we can make the arms or legs twitch, for example. As someone who has been subject to this bizarre procedure, I can tell you it's both hilarious and a little unsettling to watch your hand flail about wildly seemingly of its own volition. It's a bit like watching your dad dancing to The Mavericks; amusing, but a bit unnatural and there's nothing you can do to stop it. I digress. Now, what I find super interesting is that by placing the TMS coil over the occipital cortex, which includes our visual cortex, people often report seeing visual artifacts such as flashes of light, or pale shapes like ovals or lines. Awesome stuff. So, what this article was suggesting was that something similar is going on when ball lightning is perceived. The idea is that a bolt of lightning creates fluctuating magnetic fields similar to those used in TMS. If an observer happens to be about the right distance away from that lightning, this magnetic field will be strong enough to induce electrical activity in that person's visual cortex, causing hallucinations just like those artifacts we get with TMS. This idea totally blows me away. A kind of naturally occurring TMS is just amazing – I mean, lightning being able to make you see things?! It's one of those flukey little quirks of nature that make science so interesting. I just love it. I'm certainly no expert on lightning, so I am holding my hands up and admitting that the next couple of paragraphs are at least 73.87% speculation on my part, but I'm not entirely convinced by this idea. Let's, for a start, write off all the supposed photographs of ball lightning.
I’m fairly certain you can’t photograph a hallucination, but you can do some remarkable things with photoshop, and other phenomena like St. Elmo’s Fire can be mistaken for ball lightning. Let’s also ignore all those reports coming from groups of people who all claim to have seen the same instance of ball lightning. Social pressure and the desire to conform can cause people to say all sorts of things. So far, no solid evidence that ball lightning even exists, suggesting the hallucinations idea is actually quite plausible. But there are a couple of things that bother me about it. Firstly, the remarkable consistency in descriptions of ball lightning. As far as I have been able to find out, there don’t seem to be reports of lines, or squares, or lights of different colours. Always a glowing, sometimes moving orb. This seems quite bizarre to me, as such consistent effects would surely require a really focused, specific region of effect of the magnetic field. We might expect that a generalised magnetic field could affect any number of brain regions, causing all sorts of different effects. Not only might we expect a wider variety of visual hallucinations, but we might also expect to hear unusual sounds, or experience twitching of the limbs like I described above. Secondly – though this is related to the first point – the fact that people seem to be able to choose to watch this lightning. Our visual cortex is mapped out in a similar way to our retina, such that picking a particular point on the visual cortex is like picking out a corresponding point in the visual field. If the perception of ball lightning really were a result of abnormal brain activity, we might expect that this ball lightning would stay firmly in the same part of our visual field. 
What I mean by this is that if the induced activity were in a part of the brain that corresponded to the left half of the visual field, it wouldn't matter how far left we tried to look, the image would remain to the left of our point of focus. If you've ever suffered a visual aura, you'll understand what I mean by this. It should also be perceptible with the eyes closed. I have no idea if this has been documented either way though. Finally, the authors themselves suggest that only around half of all instances of ball lightning might be explained by this type of hallucination. Which not only leaves the obvious question of what is causing the others – but also the question of the quite astounding coincidence that hallucination-ball lightning would result in the same visual experience as non-hallucination-ball lightning. The reports just seem too consistent to be able to draw a divide between those instances that are likely to be hallucinations, and those that aren't. Given that ball lightning has effectively defied explanation for centuries, far be it from me to undermine what might be a really strong theory. Of course, I only really know about the brain stuff, so my questions about this idea might be totally misguided. I don't know much about lightning or electromagnetic induction. I think this highlights why it's really important for scientists of different disciplines to collaborate and debate more; it's only then that the really interesting ideas and discussions come about. So on that note, if anyone has any more ideas about this, or has any response to the questions I've raised, I'd be really interested to hear them, and please feel free to leave comments below! * This has actually been suggested in the past by Cooray and Cooray, 2008; paper available here, though the paper doesn't appear to have been cited, and I couldn't even find the (obscure) journal's impact factor. ** Usually. There are exceptions.
<urn:uuid:ca3b1c49-72bc-4e9e-9a65-eb4583e27f0b>
2.75
1,647
Personal Blog
Science & Tech.
51.07732
Apache HTTP Server Version 2.0 This document covers stopping and restarting Apache on Unix-like systems. Windows NT, 2000 and XP users should see Running Apache as a Service, and Windows 9x and ME users should see Running Apache as a Console Application for information on how to control Apache on those platforms. In order to stop or restart Apache, you must send a signal to the running httpd processes. There are two ways to send the signals. First, you can use the unix kill command to directly send signals to the processes. You will notice many httpd executables running on your system, but you should not send signals to any of them except the parent, whose pid is in the PidFile. That is to say, you shouldn't ever need to send signals to any process except the parent. There are three signals that you can send the parent: TERM, USR1, and HUP, which will be described in a moment. To send a signal to the parent you should issue a command such as: kill -TERM `cat /usr/local/apache2/logs/httpd.pid` The second method of signaling the httpd processes is to use the -k command line options: stop, restart, and graceful, as described below. These are arguments to the httpd binary, but we recommend that you send them using the apachectl control script, which will pass them through to httpd. After you have signaled httpd, you can read about its progress by issuing: tail -f /usr/local/apache2/logs/error_log apachectl -k stop Sending the stop signal to the parent causes it to immediately attempt to kill off all of its children. It may take several seconds to complete killing off its children. Then the parent itself exits. Any requests in progress are terminated, and no further requests are served. apachectl -k graceful The graceful signal causes the parent process to advise the children to exit after their current request (or to exit immediately if they're not serving anything). The parent re-reads its configuration files and re-opens its log files. As each child dies off, the parent replaces it with a child from the new generation of the configuration, which begins serving new requests immediately.
On platforms that do not allow USR1 to be used for a graceful restart, an alternative signal may be used (such as WINCH). The command apachectl graceful will send the right signal for your platform. This code is designed to always respect the process control directive of the MPMs, so the number of processes and threads available to serve clients will be maintained at the appropriate values throughout the restart process. Furthermore, it respects StartServers in the following manner: if after one second at least StartServers new children have not been created, then it creates enough to pick up the slack. Hence the code tries to maintain both the number of children appropriate for the current load on the server and respect your wishes with the StartServers setting. Users of the status module will notice that the server statistics are not set to zero when a USR1 is sent. The code was written to both minimize the time in which the server is unable to serve new requests (they will be queued up by the operating system, so they're not lost in any event) and to respect your tuning parameters. In order to do this, it has to keep the scoreboard used to keep track of all children across generations. The status module will also use a G to indicate those children which are still serving requests started before the graceful restart was given. At present there is no way for a log rotation script using USR1 to know for certain that all children writing the pre-restart log have finished. We suggest that you use a suitable delay after sending the USR1 signal before you do anything with the old log. For example, if most of your hits take less than 10 minutes to complete for users on low bandwidth links, then you could wait 15 minutes before doing anything with the old log. Before doing a restart, you can check the syntax of the configuration files with the -t command line argument (see httpd). This still will not guarantee that the server will restart correctly. To check the semantics of the configuration files as well as the syntax, you can try starting httpd as a non-root user.
If there are no errors it will attempt to open its sockets and logs and fail because it's not root (or because the currently running httpd already has those ports bound). If it fails for any other reason then it's probably a config file error and the error should be fixed before issuing the graceful restart. apachectl -k restart Sending the restart signal to the parent causes it to kill off its children as with TERM, but the parent doesn't exit. It re-reads its configuration files, and re-opens any log files. Then it spawns a new set of children and continues serving hits. Users of the status module will notice that the server statistics are set to zero when a HUP is sent. Prior to Apache 1.2b9 there were several race conditions involving the restart and die signals (a simple description of a race condition is: a time-sensitive problem; if something happens at just the wrong time, it won't behave as expected). For those architectures that have the "right" feature set we have eliminated as many as we can. But it should be noted that race conditions still do exist on certain architectures. Architectures that use an on-disk ScoreBoardFile have the potential to corrupt their scoreboards. This can result in the "bind: Address already in use" error (after HUP) or the "long lost child came home!" error (after USR1). The former is a fatal error, while the latter just causes the server to lose a scoreboard slot. So it might be advisable to use graceful restarts, with an occasional hard restart. These problems are very difficult to work around, but fortunately most architectures do not require a scoreboard file. See the ScoreBoardFile documentation to find out whether your architecture uses it. All architectures have a small race condition in each child involving the second and subsequent requests on a persistent HTTP connection (KeepAlive). It may exit after reading the request line but before reading any of the request headers. There is a fix that was discovered too late to make it into 1.2.
In theory this isn't an issue because the KeepAlive client has to expect these events because of network latencies and server timeouts. In practice it doesn't seem to affect anything either -- in a test case the server was restarted twenty times per second and clients successfully browsed the site without getting broken images or empty documents.
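The PidFile-plus-kill pattern described above can be sketched without a running Apache at all. The following is a minimal stand-in demonstration (my own, not from the Apache docs): a background sleep process plays the role of the parent httpd, and its pid is stored in a temporary file rather than /usr/local/apache2/logs/httpd.pid.

```shell
#!/bin/sh
# Sketch of the PidFile + kill -TERM pattern, with `sleep` standing in
# for the parent httpd process (no Apache installation required).
PIDFILE=$(mktemp)

sleep 60 &                      # stand-in for the parent httpd
echo $! > "$PIDFILE"            # httpd records its parent pid in PidFile like this

# Equivalent in spirit to: kill -TERM `cat /usr/local/apache2/logs/httpd.pid`
kill -TERM "$(cat "$PIDFILE")"
wait "$(cat "$PIDFILE")" 2>/dev/null

# kill -0 probes whether the process still exists, without signaling it
if kill -0 "$(cat "$PIDFILE")" 2>/dev/null; then
    echo "still running"
else
    echo "stopped"
fi
rm -f "$PIDFILE"
```

With a real server, apachectl -k stop performs the same TERM delivery for you, and tail -f on the error log shows the shutdown progress.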
<urn:uuid:1c7310bb-4caf-4d6c-af66-6cd8672a16db>
2.953125
1,400
Documentation
Software Dev.
51.786652
See more from this Session: Symposium--Changes In Soil Carbon Due to Climate and Human Activities Wednesday, October 19, 2011: 8:05 AM Henry Gonzalez Convention Center, Room 209, Concourse Level Changes in soil organic carbon (SOC) stocks and their distribution in depth due to land use change have been reported worldwide. The objective is to establish the variation of the SOC pattern, in surface and deep soil layers, of the Pampas of Argentina as affected by agriculture during the last four decades. For the estimation of past carbon stocks, soil data from more than 2000 soil profiles were obtained from surveys (1960-1980). Soil variables were reported from the soil surface to the bottom of the profiles or to the petrocalcic horizon. Soil bulk density and organic carbon content were determined. For the carbon stock at present, a soil sampling was performed in 2008 at eighty-two farms widespread over the region, and at each farm paired treatments were sampled accounting for common vegetation types and land uses. Bulk density and SOC were determined to 1 m depth. Rainfall and temperature were obtained from climatic records. An artificial neural network model was developed that allowed the estimation of the SOC stock (R2 = 0.64) based on climate, soil properties, vegetation type and land use at county level. This regional model, linked to remote sensing information, estimated a present SOC stock of 4.11 Gt, compared to the estimated stock from soil surveys of 4.16 Gt, for an area of 48.5 Mha. Agriculture caused a 16% reduction of SOC to 50 cm depth at sampled sites. The stratification pattern of SOC in depth was not affected by the treatments, so vegetation and land use impacted the amount of SOC sequestered in soil, but not its allocation in depth. At regional scale only a small decrease of total SOC stock was produced, while at county scale soils with SOC content higher than 100 t ha-1 to 1 m depth lost carbon. Sequestration prevailed below this threshold.
<urn:uuid:b6486f0f-5dfd-4275-bd67-a4b8e53f9af2>
2.765625
407
Academic Writing
Science & Tech.
50.863721
Flat-screen TV manufacturing emits chemical with warming effect (09.29.2008) An unregulated chemical used to make flat-screen televisions and computers has 17,000 times the climate-warming effect of carbon dioxide, say UC Irvine Earth system scientists Michael Prather and Juno Hsu. Their assessment of nitrogen trifluoride, or NF3, in a recent issue of Geophysical Research Letters caught the attention of global warming experts and journalists worldwide, with news stories appearing in the Los Angeles Times, Discover, New Scientist, Chemistry World and on National Public Radio. Prather, Fred Kavli Chair and director of the UCI Environment Institute, discusses NF3 and why it is such a global warming threat: Q: What is NF3 and how does it react in the atmosphere? A: NF3 is a man-made gas that – once released into the atmosphere – circulates from the surface to the stratosphere hundreds of times before it is destroyed by solar ultraviolet radiation. The average lifetime of an NF3 molecule in the atmosphere is 550 years. NF3 is nearly chemically inert in the atmosphere, but it is very effective in absorbing the infrared radiation that the Earth emits. By trapping this infrared radiation, NF3 becomes a potent greenhouse gas. In terms of kilograms emitted, NF3 is about 17,000 times more effective as a greenhouse gas than carbon dioxide. Q: How is NF3 used to make flat-screen televisions and computers? A: When hit with microwave discharge or a plasma beam, NF3 releases fluorine atoms that are used to clean the chamber in which flat-screen liquid crystal display panels are made. It also can be used to release fluorine to etch and cut the silicon substrate for computer chips. As I understand it, NF3 is used in large volumes in the LCD screen process. Q: Why do you consider NF3 the "missing greenhouse gas"? A: NF3 is not included in the Kyoto Protocol list of greenhouse gases. This fact is perhaps an oddity. 
NF3, like many synthetic greenhouse gases, was not recognized specifically as a major industrial gas in 1995 when the United Nations’ Intergovernmental Panel on Climate Change report listed the global warming potentials of many greenhouse gases. The rapid rise in production by the chemical industry had gone almost unnoticed. Q: Is UCI measuring NF3, and if so, how? A: The labs of Eric Saltzman and Murat Aydin in Earth system science are preparing to measure the atmospheric abundance of NF3. It is an extremely slippery molecule. There are probably only three laboratories in the world with the experience of measuring gases like NF3, and UCI has one of them. Q: How should industry address the NF3 issue? A: The major NF3 producer, Air Products and Chemicals, Inc., says that 98 percent of the gas is destroyed during the manufacturing process, and thus NF3 is an environmentally safe gas. However, there should be no doubt that NF3 is a highly hazardous gas in terms of global warming. We have no reliable estimates about leakage during production, shipping and decommission. This gas is extremely volatile and difficult to measure at low abundances, so we cannot be sure that industry estimates or measurements are accurate. Q: What can the general public do about NF3? A: The public should recognize that technology has a price and that the manufacture of these screens is not greenhouse-free. NF3 is not special; it is one of many high-tech products that potentially can have a large carbon footprint. The public should demand that the carbon footprint of such products be evaluated independently (not by the chemical manufacturers) and disclosed as energy efficiency is posted on appliances. — Jennifer Fitzenberger, University Communications
<urn:uuid:78abd39b-ac9e-4133-b9aa-a89fcd81bfad>
3.3125
773
Audio Transcript
Science & Tech.
42.241567
Photo courtesy of Beyond the Rhetoric by Michael Kwan and the movie Ice Age. Scientists have reported the discovery of a "saber tooth squirrel." It seems squirrels have managed to survive a VERY long time. In the journal Nature, lead researcher Guillermo Rougier, a professor from the University of Louisville, reported that the study team believes they have found the fossilized remains of a creature known as Cronopio dentiacutus. Drawings of Cronopio dentiacutus resemble some of the animals seen in the movie "Ice Age." Photo courtesy of CNN blog "Lightyears." Previously there had been a large gap in the fossil record of mammals in South America, from about 60 million to 120 million years ago. Cronopio dentiacutus was a mouse-sized, squirrel-like animal whose teeth were very long in proportion to the rest of its body; technically speaking, it was neither a squirrel nor saber-toothed. The extinct mammal, which lived approximately 94 million years ago, was a forerunner of today's marsupials and placental animals. It has been extinct for about 60 million years. When Rougier and his colleagues examined the unique skull of this animal, they reported, "It was a lot more primitive than we are with regard to the way in which the skull was put together; the teeth were very primitive," and "The skull is about an inch long." They believe the animal was an insectivore, which is common for small mammals today. Its teeth appeared to be specialized for crushing and cutting; Cronopio dentiacutus could puncture right through small insects. Rougier said if you want to imagine what they looked like, think what you would look like if your teeth came down below your chin. (Now that's an attractive picture!) These primitive animals lived during the same time period as snakes with legs, small carnivorous dinosaurs and terrestrial crocodiles.
They lived in the flood plains of Argentina, in what today is a desert area in Patagonia. At the time they existed, most mammals were very small. It was not until later, when the big dinosaurs were extinct, that mammals grew to be the size of large cats and dogs. Rougier says, "These were the tiny little guys that would squirrel in between the toes of the dinosaurs trying not to get stepped on." I guess the next time I'm complaining about the squirrels eating the birdseed, it could be worse; they could be trying to hide between my toes!
<urn:uuid:2de1bba4-eb96-4de5-bd23-48e4aa402a4b>
3.3125
530
Personal Blog
Science & Tech.
46.089116
In general relativity, Eddington–Finkelstein coordinates are named for Arthur Stanley Eddington and David Finkelstein, even though neither ever wrote down these coordinates or the metric in these coordinates. They seem to have been given this name by Misner, Thorne, and Wheeler in their book Gravitation. They are a pair of coordinate systems for a Schwarzschild geometry which are adapted to radial null geodesics (i.e. the worldlines of photons moving directly towards or away from the central mass). The outward (inward) traveling radial light rays define the surfaces of constant "time", while the radial coordinate is the usual area coordinate, so that the surfaces of rotation symmetry have an area of 4πr². One advantage of this coordinate system is that it shows that the apparent singularity at the Schwarzschild radius is only a coordinate singularity and not a true physical singularity. (While this was recognized by Finkelstein, it was not (or at least not commented on) by Eddington, whose primary purpose was to compare and contrast the spherically symmetric solutions in Whitehead's theory of gravitation and Einstein's.) Schwarzschild Metric The Schwarzschild coordinates are (t, r, θ, φ), and the Schwarzschild metric is well known: ds^2 = -\left(1 - \frac{2GM}{r}\right)dt^2 + \left(1 - \frac{2GM}{r}\right)^{-1}dr^2 + r^2 d\Omega^2, where d\Omega^2 = d\theta^2 + \sin^2\theta\, d\phi^2 is the standard metric on a unit-radius two-sphere. Note the conventions being used here are the metric signature of (− + + +) and the natural units where c = 1 (although the gravitational constant G will be kept explicit, and M will denote the characteristic mass of the Schwarzschild geometry). Tortoise coordinate Eddington–Finkelstein coordinates are founded upon the tortoise coordinate. The tortoise coordinate is defined: r^* = r + 2GM \ln\left|\frac{r}{2GM} - 1\right|, so as to satisfy: \frac{dr^*}{dr} = \left(1 - \frac{2GM}{r}\right)^{-1}. The tortoise coordinate approaches −∞ as r approaches the Schwarzschild radius r = 2GM. When some probe (such as a light ray or an observer) approaches a black hole event horizon, its Schwarzschild time coordinate grows infinite. The outgoing null rays in this coordinate system have an infinite change in t on travelling out from the horizon.
(This is why information would never be received back from any probe that is sent sufficiently close to such an event horizon, despite the fact that the probe itself can nonetheless travel past this horizon. It is also why the metric, expressed in Schwarzschild coordinates, becomes singular at the horizon, thereby failing to fully chart the trajectory of an infalling probe.) The tortoise coordinate is intended to grow infinite at the appropriate rate so as to cancel out this singular behaviour in coordinate systems constructed from it. The ingoing Eddington–Finkelstein coordinates are obtained by replacing the coordinate t with the new coordinate v = t + r^*. The metric in these coordinates can be written ds^2 = -\left(1 - \frac{2GM}{r}\right)dv^2 + 2\,dv\,dr + r^2 d\Omega^2. Likewise, the outgoing Eddington–Finkelstein coordinates are obtained by replacing t with the null coordinate u = t - r^*. The metric is then given by ds^2 = -\left(1 - \frac{2GM}{r}\right)du^2 - 2\,du\,dr + r^2 d\Omega^2. In both these coordinate systems the metric is explicitly non-singular at the Schwarzschild radius (even though one component vanishes at this radius, the determinant of the metric is still non-vanishing and the inverse metric has no terms which diverge there). Note that for radial null rays, v = const or u = const, we have dv/dr and du/dr approach 0 and ±2 at large r, not ±1 as one might expect if one regarded u or v as "time". When plotting Eddington–Finkelstein diagrams, surfaces of constant u or v are usually drawn as cones, with u or v constant lines drawn as sloping at 45 degrees rather than as planes (see for instance Box 31.2 of MTW). Some sources instead take the coordinate t^* = v - r, corresponding to planar surfaces in such diagrams. In terms of t^* the metric becomes ds^2 = -\left(1 - \frac{2GM}{r}\right)dt^{*2} + \frac{4GM}{r}\,dt^*\,dr + \left(1 + \frac{2GM}{r}\right)dr^2 + r^2 d\Omega^2, which is Minkowskian at large r. (This was the coordinate time and metric that both Eddington and Finkelstein presented in their papers.) The Eddington–Finkelstein coordinates are still incomplete and can be extended.
For example, the outward traveling timelike geodesics (with τ the proper time) have v(τ) → −∞ as r(τ) → 2GM. That is, such a timelike geodesic has a finite proper length into the past, where it comes out of the horizon (r = 2GM) as v goes to minus infinity. The region of finite v and r < 2GM is a different region from that of finite u and r < 2GM. The horizon at r = 2GM and finite v (the black hole horizon) is a different horizon from that at r = 2GM and finite u (the white hole horizon). The metric in Kruskal–Szekeres coordinates covers all of the extended Schwarzschild spacetime in a single coordinate system. Its chief disadvantage is that in those coordinates the metric depends on both the time and space coordinates. In Eddington–Finkelstein, as in Schwarzschild coordinates, the metric is independent of the "time" (either t in Schwarzschild, or u or v in the various Eddington–Finkelstein coordinates), but none of these cover the complete spacetime. The Eddington–Finkelstein coordinates have some similarity to the Gullstrand–Painlevé coordinates in that both are time independent and penetrate (are regular across) either the future (black hole) or the past (white hole) horizons. Both are not diagonal (the hypersurfaces of constant "time" are not orthogonal to the hypersurfaces of constant r). The latter have a flat spatial metric, while the former's spatial ("time" constant) hypersurfaces are null and have the same metric as that of a null cone in Minkowski space. See also - Schwarzschild coordinates - Kruskal–Szekeres coordinates - Lemaitre coordinates - Gullstrand–Painlevé coordinates - Vaidya metric - Eddington, A. S. (Feb. 1924). Nature 113 (2832): 192. - Finkelstein, David (1958). Phys. Rev. 110: 965–967.
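As a closing consistency check, the ingoing Eddington–Finkelstein form of the metric follows from the Schwarzschild metric in two lines. Writing f(r) = 1 − 2GM/r and using v = t + r^*, so that dt = dv − f^{-1}dr:

```latex
ds^2 = -f\,dt^2 + f^{-1}dr^2 + r^2 d\Omega^2
     = -f\left(dv - f^{-1}dr\right)^2 + f^{-1}dr^2 + r^2 d\Omega^2
     = -f\,dv^2 + 2\,dv\,dr - f^{-1}dr^2 + f^{-1}dr^2 + r^2 d\Omega^2
     = -f\,dv^2 + 2\,dv\,dr + r^2 d\Omega^2 .
```

The divergent f^{-1}dr^2 terms cancel, which is precisely why this form of the metric stays regular at r = 2GM.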
<urn:uuid:60d8c57c-07e3-44f4-a951-59be79fb57b1>
3.46875
1,280
Knowledge Article
Science & Tech.
38.63101
I recently came across an interesting programming challenge, which I can summarise here. Develop a first-in/first-out (FIFO) queue in a programming language of your choice. The following constraints apply: - You must use a linked list, and can't use arrays, hashes or other sophisticated enumerations; - The queue must be able to accept and store arbitrary objects; - If the queue is empty, popping should raise an exception; - Each method/function can only be one line long (and using multi-statement separators such as ';' is cheating); - Each line can be at most 80 characters long; - You can't use external or additional libraries: core language features only. The queue should implement the following public interface: size() -> returns an integer representing the number of elements in the queue; push(object o) -> pushes an arbitrary object o onto the end of the queue; pop() -> returns the next object from the head of the queue, and raises an exception if there are no objects on the queue. While implementing a FIFO queue as a linked list is a fairly typical first-year CS undergraduate problem, the additional constraints, in particular #4, make it much more interesting. I chose to implement the solution in Ruby. Here's the test spec for the solution: As an added challenge, although it's almost dictated by the problem statement, I tried to minimise (or indeed, ideally, eliminate) the use of 'if' statements. Check out my solution by clicking through.
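Since the author's solution is only linked from the post, here is one possible sketch (my own, not the author's) of a Ruby implementation that meets the constraints: a singly linked list with a sentinel head node, every method body a single line, and no if statements. Popping an empty queue raises a NoMethodError from calling .value on nil, which satisfies the "raise an exception" requirement without a conditional.

```ruby
class LinkedQueue
  Node = Struct.new(:value, :next_node)

  def initialize
    @tail = @head = Node.new(nil, nil) and @size = 0 # sentinel head node
  end

  def size
    @size
  end

  # The sentinel makes the empty case uniform: always append after @tail.
  def push(o)
    (@size += 1).tap { @tail = (@tail.next_node = Node.new(o, nil)) }
  end

  # On an empty queue, @head.next_node is nil, so `.value` raises
  # NoMethodError before any state changes; otherwise the old first
  # node becomes the new sentinel, leaving @tail valid.
  def pop
    @head.next_node.value.tap { (@head = @head.next_node) && (@size -= 1) }
  end
end

q = LinkedQueue.new
q.push(1)
q.push(:two)
q.pop   # => 1
q.size  # => 1
```

The sentinel is the interesting design choice: because pop promotes the first real node to sentinel rather than unlinking it, the tail pointer never dangles when the queue empties, so neither push nor pop needs an emptiness branch.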
<urn:uuid:8684f30e-184f-4cd3-89d3-66ecedd94409>
2.84375
340
Personal Blog
Software Dev.
27.715248
Structures & Unions in C : C Programming Tutorial A structure contains a number of data types grouped together. These data types may or may not be of the same type. We can pass structures as arguments to functions. We can also nest one structure within another structure. Use of Typedef in C language C provides a facility called typedef for creating new data type names. With typedef you no longer have to write struct all over the place. Typedefs can make your code clearer, and they can make your code easier to modify. Similarly, typedef can be used to name a derived type (structure, union, etc.). For example: Pointers to Structures. Malloc function. Dynamic memory
<urn:uuid:acac171a-e332-4933-9948-389f30f4dd47>
3.359375
157
Documentation
Software Dev.
44.422876
OBJECTIVES: Develop and maintain a capability to provide aerial surveys of marine bird and mammal distribution and abundance for oil spill response and post-spill injury assessment. TIME PERIOD: June 1994 through October 1998 STUDY AREA: Coastal and inland marine waters of California. METHODOLOGY: ... Aerial surveys were conducted in a variety of California locales with experienced observers and trainees. The aircraft used was a Partenavia PN68 Observer provided by the Department of Air Services, CDFG, flown at an altitude of 200' (60 m) above ground level and at a typical air speed of 90 kts. Two observers (at least one experienced) occupied the middle seats and searched a corridor of 50 m on each side of the aircraft. Strip width was defined by clinometer and simple trigonometric functions. Species, numbers, behavior, and other information were described on hand-held tape recorders for later transcription and computer entry. The co-pilot position was occupied by a navigator/computer operator. This individual recorded the number of observers on watch, transect status (i.e., on-effort, off-effort, and commutes), as well as sea state, weather, and other observation conditions. Date, time, and position of the aircraft were recorded directly into the data-logging computer, with time, latitude, and longitude provided by a Global Positioning System (GPS). DATABASES PRODUCED: A single database was produced including date, time, latitude/longitude, behavior, observation conditions, and other information for each sighting of marine birds, mammals, and turtles. As stated above, some surveys were solely for the purpose of drills and training, some for systematic data collection, and others for actual oil spill response. In this study, 74 one-day surveys were flown through 1997. Through the end of 1997, a total of 670 hours were flown and, exclusive of commutes, the surveys mapped bird and mammal distribution and abundance along 31,271 km (16,886 nmi) of transects.
(Data from several surveys in 1998 are not yet fully analyzed and are therefore not included on this CD-ROM.) Credit for this study is shared with OSPR by the Minerals Management Service (MMS), Pacific OCS Region, which provided most observers for surveys flown in the Santa Barbara Channel and the Santa Maria Basin; MMS personnel also carried out tape transcriptions and computer entry for surveys in southern California waters (this portion of the OSPR surveys was conducted under matching funds between OSPR and the Coastal Marine Institute, University of California, Santa Barbara). CURRENTNESS REFERENCE: ground condition SPATIAL REFERENCE INFORMATION - GEODETIC MODEL Horizontal Datum Name: D_WGS_1984 Ellipsoid Name: WGS_1984 Semi-major Axis: 6378137.000000 Denominator of Flattening Ratio: 298.257224
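The clinometer geometry mentioned in the methodology reduces to one right-triangle relation. A sketch (function names and the worked numbers are mine, using the stated ~60 m altitude and 50 m strip): at altitude h, a depression angle theta below the horizon marks a point at horizontal distance h / tan(theta) from directly below the aircraft.

```python
import math

# Right-triangle relation behind a clinometer-defined survey strip.
# (Illustrative sketch; not from the study's own software.)

def strip_width_m(altitude_m, depression_deg):
    """Horizontal distance to the strip edge seen at a given depression
    angle below the horizon."""
    return altitude_m / math.tan(math.radians(depression_deg))

def edge_angle_deg(altitude_m, width_m):
    """Depression angle a clinometer must read to mark the strip edge."""
    return math.degrees(math.atan2(altitude_m, width_m))

angle = edge_angle_deg(60, 50)            # ~50.2 degrees below the horizon
print(angle, strip_width_m(60, angle))    # round-trips back to 50 m
```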
<urn:uuid:b8de9cc8-515c-4676-8888-bdbb364da2f5>
2.796875
616
Academic Writing
Science & Tech.
31.641814
4.3. Velocity versus Density The peculiar velocity data are compared with the distribution of galaxies in redshift space to obtain β. The comparison can be performed either at the density level (e.g., velocity-inferred mass density à la POTENT versus the real-space density of galaxies as extracted from redshift surveys), at the velocity level, or simultaneously. New developments: The methods are being improved to better take into account the random and systematic errors. The comparison is done in several different ways. Pro: Some of the comparison methods allow a direct mapping of the biasing field. Certain versions of the method are straightforward to implement. Con: It is hard to impose the same effective smoothing on the two data sets. This may cause a bias in the estimate of β, and a complication due to possible scale dependence in the biasing scheme. The estimation is contaminated by the possible complexity of the biasing scheme. Each method may actually refer to a somewhat different β. It is hard to distinguish nonlinear biasing from nonlinear gravitational effects. Current Results: For IRAS galaxies, the current best estimates vary in the range 0.5 ≲ β_I ≲ 1.2, depending on the method, the volume used, the weighting of the different data, the smoothing scale, etc. The comparisons at the density level tend to yield higher estimates than the comparisons at the velocity level. One of the velocity comparisons indicates a possible inconsistency in the data at large distances. The value of β_I seems to grow with smoothing scale, from β_I ~ 0.5 - 0.6 at Gaussian smoothing scales of 3 - 6 h^-1 Mpc, to β_I ~ 1 on scales of ~ 12 h^-1 Mpc. The estimates for optical galaxies indicate a biasing parameter that is typically larger by ~ 30%.
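For context (my addition; this is the standard definition in the velocity-field literature, where the subscript I denotes IRAS galaxies): the quantity estimated in these comparisons combines the mean mass density with the galaxy biasing parameter,

```latex
\beta \;\equiv\; \frac{\Omega_m^{0.6}}{b},
\qquad
\nabla\cdot\vec{v} \;=\; -\beta\,\delta_g
\quad \text{(linear theory, } \delta_g = b\,\delta_{\rm mass}\text{)}.
```

Since β scales inversely with b, the ~30% larger biasing parameter quoted for optical galaxies implies a correspondingly smaller β for the optical samples than for IRAS.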
<urn:uuid:68f378b7-17ee-4c30-8e08-862c1ef245a0>
2.765625
380
Academic Writing
Science & Tech.
45.856381
That cloaking device we've been dreaming of appears to be one step closer to actual cloakdom, so start pondering the mischievous possibilities. Scientists from Duke University have improved on their earlier efforts at producing an invisibility cloak, coming up with a new type of device they say is significantly more sophisticated at cloaking an object (and eventually a person?) from visible light. The device is made from a light-bending composite material that can detour electromagnetic waves around an object and reconnect them on the other side. That creates an effect similar to a distant mirage you'd see hovering above a road on a hot day. In Duke's latest experiments, a beam of microwaves aimed through the cloaking device at a "bump" on a flat mirror surface bounced off the surface at the same angle, as if the bump wasn't there. Additionally, the device prevented the formation of scattered beams that would normally be expected from such a perturbation. (The team details its findings in far more technical terms than I ever could in the latest issue of Science magazine.)
<urn:uuid:077eda4a-703d-49a1-95cc-c6b4868a21a7>
3
222
Truncated
Science & Tech.
35.413669
A podcast is an audio file published on the web. The files are usually downloaded onto computers or portable listening devices such as iPods or other players. Read more about podcasting from webcontent.gov HOST: Welcome to Diving Deeper where we interview National Ocean Service scientists on the ocean topics and information that are important to you! I’m your host Kate Nielsen. Today’s question is: How can we prepare for climate-related impacts? The NOAA Coastal Services Center is one office in NOAA that produces a variety of tools to help communities prepare for and respond to the impacts of climate variability and climate change. To help us dive a little deeper into this question, we will talk with Stephanie Fauver by phone on how best to prepare for climate impacts. Stephanie is a meteorologist with the NOAA Coastal Services Center. Hi Stephanie, welcome to our show. STEPHANIE FAUVER: Hi Kate, thank you for inviting me to talk to your listeners today. (BACKGROUND ON CLIMATE) HOST: Stephanie, first, can you explain to us what the difference is between climate and weather? STEPHANIE FAUVER: Sure Kate. There is still a lot of confusion between climate and weather. Often, you’ll hear people say “climate is what you expect, weather is what you get.” What that really means is the weather is what you see outside on any particular day. It may be 85 degrees and sunny or it could be 30 degrees and snowing. That’s the weather. Climate is the average weather for a certain period of time at a certain location. An example might be that you can expect snow in the Northeast in January and that’s their climate. Also, it’s hot and humid in the Southeast in July. That’s their climate. The climate record also includes extreme values such as record high temperatures or record amounts of rainfall. You may hear the local TV meteorologists or the National Weather Service meteorologists talk about “today we hit a record high for this day.” Those are climate records.
Climate can vary over time, and these extreme values are the climate variability part of the equation. HOST: Thanks Stephanie, I like that. Climate is what you expect and weather is what you get. That sort of sums it all up right there. You just mentioned climate variability. What is that and how is it different from climate change? STEPHANIE FAUVER: We know now that climate is average weather, but we’re not always average. Sometimes we’re higher, sometimes we’re lower than average. These shorter term changes in climate are climate variability. We could see a period of drought or a period of flooding. These are climate variability. Generally, climate variability is on the order of weeks to months to even years. This is in contrast to climate change. Climate change is a long-term trend on the order of decades to centuries. You can’t just look out your window and see sea level rise; sea level rise happens over tens to hundreds of years. Climate change is a longer term trend. (IMPACTS OF CLIMATE VARIABILITY AND CLIMATE CHANGE) HOST: OK, so when we’re talking long-term trends, that’s climate change, and then short periods of time, that’s climate variability. Stephanie, do you have any examples of where we’re seeing impacts from a changing climate already? STEPHANIE FAUVER: Kate, we are seeing impacts already. Shorter ice seasons in the Great Lakes and in the Arctic Ocean are already being seen. In North Carolina, we’re seeing damage from rising water levels causing problems to coastal lowlands. We’re also seeing changes in the ranges of tree and animal species due to climate change. For example, we have seen butterflies further north and in higher elevations than in the past, and they’re also becoming extinct in southern and warmer locations. In addition, spring now arrives an average of 10 to 14 days earlier than it did 20 years ago. HOST: What other impacts are expected from climate change and climate variability?
STEPHANIE FAUVER: We can expect to see a lot of impacts from climate change and climate variability. We may see increased flooding and heavier downpours in storms, and along with that comes increased property damage and the potential for loss of life. Heat waves are expected to become more frequent, and with those heat waves comes declining air quality and the potential for loss of life. Increased drought is a possibility, along with the potential for crop damage and increased forest fires as a result of drought. In some coastal regions, public infrastructure like roads, water and sewer treatment plants, and port facilities are found in low-lying areas. They can expect to see increased impacts from future flooding. Some economic impacts as a result of climate change and climate variability are also expected. Damage to coastal ecosystems and fisheries can result in loss of revenue for folks that rely on those resources for their livelihood. There may also be a loss of revenue from tourism dollars if the beaches are damaged from sea level rise and erosion from strong storms. It’s not all doom and gloom though. There is potential for us to see a few benefits from climate change. There will be increased opportunities for tourism in some of the colder climates if they have a longer tourism season, and also the potential for longer growing seasons in cold climates. HOST: Stephanie, are there ways that communities and individuals can prepare for these impacts of climate change that you’ve talked about so far today? STEPHANIE FAUVER: There are a lot of ways that communities and people in those communities can start to prepare. It won’t happen overnight. It is an ongoing process and it will take time, but it’s definitely worth doing. I would encourage people to take a look at the planning activities in their community and see where they might be able to consider climate variability and climate change and the impacts that we’re going to see.
If they look at comprehensive plans or development plans that many communities are working on, they can think about the future growth of their community and do it smartly. We don’t want to put ourselves in the situation where we’re building infrastructure and putting people in harm’s way. We also need to think about water resources and where we’ll get our fresh water as more people move into coastal areas. We don’t want to wait for a drought situation or a water scarcity situation. We want to make sure we’re planning ahead for those water resources. Some communities also have hazards plans where they look at how they will prepare and respond to hurricanes. These may need to be a little more robust, if we think about the potential for stronger storms, and consider how climate will impact these storms in the future. HOST: How can communities, and the people in those communities, prepare for climate-related impacts? What’s the best first step to take? STEPHANIE FAUVER: I recommend to start those conversations now. Identify people in your community that have a stake in this issue since it affects many aspects of the community. Find a champion, find someone that is onboard with climate change, is already working the issue or already talking about the issue and is ready to start to take action. Another initial step is to find the climate experts in your area – you have a state climatologist that you can call on, some folks in universities would be helpful as well, your local Sea Grant extension agents – find the people in your area that can help you understand what the impacts are going to be. Start to talk about those impacts and start to identify who and what will be impacted. Maybe it’s certain neighborhoods, maybe it’s infrastructure, critical facilities, the hospitals that are in low lying areas, so start to look at who and what will be impacted. 
In one community where we’re working, the city officials are having conversations and the mayor has been involved and they’re starting to talk about what impacts they will see. (RESOURCES TO SUPPORT COMMUNITY PREPARATION AND RESPONSE) HOST: Thanks Stephanie for the information you’ve given us today, so far, to help us understand more about climate change and ways we can start to prepare for impacts. What is the National Ocean Service’s role in helping communities prepare for climate change impacts? STEPHANIE FAUVER: Well Kate, NOAA and the National Ocean Service provide a host of resources to help communities – everything from data and research to climate modeling and tools and techniques to identify their impacts and help them develop strategies to prepare for climate change. The National Ocean Service is responsible for keeping track of water levels and calculating trends and changes in water level. We also monitor changes to the natural environment to help us identify where climate variability and climate change are having an effect on say marsh grass and other habitats, which are critical areas for our ocean and for ocean life. The National Ocean Service also works directly with state decision makers and planners to help them identify what risks and vulnerabilities exist in their community, and how they can start to take action to address these risks. The National Ocean Service also works with coastal decision makers to bring them together to talk about the issues and help them build their own capacity to address the problems. HOST: Stephanie, can you highlight a few of the products or tools that you have to help folks get started? STEPHANIE FAUVER: Sure Kate. 
The Office of Ocean and Coastal Resource Management just issued a guidebook, “Adapting to Climate Change: A Planning Guide for State Coastal Managers,” and this takes state managers through the process to identify the impacts that they can expect from climate change, they look at pulling together their team to develop a plan, identifying strategies to deal with climate change, and then implementing their plan and evaluating their progress. All of the states are at different levels in terms of planning, but they all can find helpful information in this guide, no matter what level of the process they’re at. We also have a coastal climate adaptation website. This website provides access to adaptation plans and strategies and lessons learned from the states and the communities around the country that have already started in this process. We always hear from people that they want to know what others are doing, “give me an example, give me something I can look at to see how other people are taking action.” So this site provides access to a lot of those examples. It also allows users to post questions and share lessons learned about their experiences, so they can learn from their peers. We also have a training that has been developed by the National Estuarine Research Reserve. It’s a one-day workshop and it brings folks together in their community to start to talk about the issues. They learn from their local experts about climate impacts, what they can expect, and then they get into groups and talk about the issues that are relevant to them, what’s most important, and then identify some actions they can take when they leave that room, how they can work together, and what they can do to start to address the problems. HOST: Do we have any data or first-hand experiences on the success of these resources? 
STEPHANIE FAUVER: Some of these resources are still quite new and we are still evaluating how people are using the information and the resources that we have, but we continually hear from people at the end of these workshops, that tell us how they’re really glad they got together and started talking about the issues, and they’re motivated, and they’re ready to go back to their staff members or to talk to their council members or to go back to their wastewater management folks and say, “we need to consider climate change, we need to start to think about this issue, and how it will impact what we already do.” We recently worked with a group in South Florida. They were trying to figure out what their impacts were and where they were going to see problems from sea level rise. We gathered them together and had them talk about their methods for mapping sea level rise. They decided what process they were going to use, so they were all on the same page, and they were delivering a consistent message to their residents, and now they’re starting to show those maps and use those for outreach to their communities to say what impacts they could see and where the problem areas are. One of their big issues is salt water intrusion into their freshwater resources. They’re worried about potential issues with increased development and with additional folks moving into that area. They already have stress on their water resources and they need to make sure that they can accommodate additional sea level rise. HOST: Thanks Stephanie, so sometimes it’s just about getting the conversation started. Because many of our listeners don’t live along the coast, how is what you’ve talked about today with climate variability and the resources out there to help us prepare, how is that important to them? STEPHANIE FAUVER: Kate, that’s a great question. Many of the impacts from climate change and climate variability will impact inland areas as well. 
Some of the listeners may recall the drought in Georgia in 2008. Water levels in the lakes, which is their fresh water resource, became dangerously low. They were already implementing water conservation measures and they were looking to enter into agreements with surrounding states to find alternative sources for fresh water. So this isn’t just a coastal problem – drought and flooding impact inland areas as well. With climate variability, we also see extreme heat conditions, particularly cities around the country are vulnerable to these impacts. So that’s something that is not just going to affect the coast. HOST: Stephanie, do you have any final closing words for our listeners today? STEPHANIE FAUVER: I think a resident of coastal Georgia said it best at a workshop recently. She was still kind of on the fence about climate change, but she said that in Savannah, the anticipation every year is for a hurricane. Thankfully they haven’t had one, but they still plan for a hurricane. She said that’s how this needs to be done. It has to be that they’re doing this planning in the event that sea level rises. So if your community is not quite ready to talk about climate change, you can still find a message that would work with your residents and with your decision makers. Whether it’s about human health or safety or hazards such as hurricanes or strong storms, find the approach that works and begin to have those conversations. HOST: Thanks Stephanie for joining us on Diving Deeper and talking more about climate change impacts and how best to prepare for these in our communities. To learn more, visit collaborate.csc.noaa.gov/climateadaptation.(OUTRO)
<urn:uuid:f4cb11fb-9eb9-45e1-8918-2b4e9c470cdc>
3.609375
3,142
Audio Transcript
Science & Tech.
47.844716
Sometimes it is possible to do volume and surface integrals as ordinary integrals without bothering with the general approach discussed in previous sections. This happens when both your integrand and your region of integration have sufficient symmetry that you can do all but one of the integrals by inspection in the multiple integration that our formalism leads to. Why bother with such things? There are two reasons, neither of which is very convincing, but here they are. First, using the general approach that applies to integrating over any reasonable shape and any reasonable integrand to solve a problem much of which you can solve by eye is like using a big gun to shoot a mosquito, or having a philosopher teach kindergarten. Second, in the traditional study of calculus you study single integration long before you study surface, volume, and multiple integration, so you then have an opportunity to find answers to these questions without knowing anything about those concepts. The absolutely simplest example is finding the area in the region between the x-axis (y = 0), the two lines x = a and x = b, and the curve defined by the function y = f(x). You can write this as an area integral with integrand 1, but having done so you can immediately make this a double integral and do the integration over y to get the standard formula for this area: A = ∫_a^b f(x) dx. Suppose you rotate the curve y = f(x) about the x-axis and ask for the volume of the region generated by this action between the same limits on x. Now you can argue that the volume in a small slice of this region between x and x + dx is the area of the circle with radius f(x) multiplied by dx. So you can write this as a single integral with integrand given by this area: V = ∫_a^b π f(x)^2 dx. How about its surface area? Again, we can slice our surface into sections between x and x + dx. The surface area in any one slice will be ds multiplied by the circumference of the circle of radius f(x).
You have to be a bit careful here because the surface sliced is generally tilted with respect to the x-axis, and the factor ds here is not dx but rather the length of the curve defined by y = f(x) in our slice: ds = √(1 + f'(x)^2) dx. This gives S = ∫_a^b 2π f(x) √(1 + f'(x)^2) dx as the surface area of the surface generated by rotating the curve y = f(x) about the x-axis. Similar but different formulae can be generated for regions obtained by rotating the curve defined by y = f(x) around the y-axis. Exercise 24.6 Find single integral expressions for the volume and surface of the region generated by rotating the curve defined by y = f(x) from (a, f(a)) to (b, f(b)) about the y-axis. You can also do similar things in other coordinate systems, integrating over one or more of the variables of spherical coordinates by eye, for example. A standard example for this method is the sphere. Suppose we want to determine the volume and surface area of a sphere of radius R. Notice that the sphere is a rotation of the curve defined by y = √(R^2 - x^2) about the x-axis. The limits of integration for volume or surface are x = -R and x = R, and the integrands are π(R^2 - x^2) for the volume and 2πR for the surface. These may be integrated with the standard results, V = (4/3)πR^3 and S = 4πR^2. It is mildly interesting that the surface area of a sphere in a slice of thickness dx is independent of where the slice is, as long as there are pieces of the sphere on either side of it. Formulae like those above can be obtained for cones and wedges and all sorts of other shapes.
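As a quick numerical sanity check on the disc and surface-of-revolution integrals (my own sketch; function names are mine), the curve f(x) = √(R² − x²) reproduces the familiar sphere results:

```python
import math

def volume_of_revolution(f, a, b, n=100_000):
    # V = pi * integral of f(x)^2 dx  (disc slices), midpoint rule
    h = (b - a) / n
    return math.pi * h * sum(f(a + (i + 0.5) * h) ** 2 for i in range(n))

def surface_of_revolution(f, fprime, a, b, n=100_000):
    # S = 2*pi * integral of f(x) * sqrt(1 + f'(x)^2) dx, midpoint rule
    h = (b - a) / n
    xs = [a + (i + 0.5) * h for i in range(n)]
    return 2 * math.pi * h * sum(f(x) * math.sqrt(1 + fprime(x) ** 2) for x in xs)

R = 2.0
f = lambda x: math.sqrt(R * R - x * x)
fp = lambda x: -x / math.sqrt(R * R - x * x)   # singular at x = +-R; midpoints avoid it

print(volume_of_revolution(f, -R, R))      # ~ (4/3) pi R^3
print(surface_of_revolution(f, fp, -R, R)) # ~ 4 pi R^2
```

Note that for the sphere the surface integrand f(x)·√(1 + f'(x)²) works out to the constant R, which is exactly the "slice of thickness dx has area independent of position" observation above.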
<urn:uuid:2548f6d3-9b83-42bb-92d0-d588b2a40954>
3.390625
736
Tutorial
Science & Tech.
46.251956
I haven’t had a chance to read the original paper – I’m getting ready to head out of town and probably won’t get to it until next week, but I just got a press release from U Alaska Fairbanks about a recent paper in this month’s issue of Science that suggests that we’ve got bigger methane problems than we knew about. From the UAF press release: The research results, published in the March 5 edition of the journal Science, show that the permafrost under the East Siberian Arctic Shelf, long thought to be an impermeable barrier sealing in methane, is perforated and is leaking large amounts of methane into the atmosphere. Release of even a fraction of the methane stored in the shelf could trigger abrupt climate warming. “The amount of methane currently coming out of the East Siberian Arctic Shelf is comparable to the amount coming out of the entire world’s oceans,” said Shakhova, a researcher at UAF’s International Arctic Research Center. “Subsea permafrost is losing its ability to be an impermeable cap.” They found corresponding results in the air directly above the ocean surface. Methane levels were elevated overall and the seascape was dotted with more than 100 hotspots. This, combined with winter expedition results that found methane gas trapped under and in the sea ice, showed the team that the methane was not only being dissolved in the water, it was bubbling out into the atmosphere. These findings were further confirmed when Shakhova and her colleagues sampled methane levels at higher elevations. Methane levels throughout the Arctic are usually 8 to 10 percent higher than the global baseline. When they flew over the shelf, they found methane at levels another 5 to 10 percent higher than the already elevated Arctic levels. The East Siberian Arctic Shelf, in addition to holding large stores of frozen methane, is more of a concern because it is so shallow. In deep water, methane gas oxidizes into carbon dioxide before it reaches the surface.
In the shallows of the East Siberian Arctic Shelf, methane simply doesn’t have enough time to oxidize, which means more of it escapes into the atmosphere. That, combined with the sheer amount of methane in the region, could add a previously uncalculated variable to climate models. “The release to the atmosphere of only one percent of the methane assumed to be stored in shallow hydrate deposits might alter the current atmospheric burden of methane up to 3 to 4 times,” Shakhova said. “The climatic consequences of this are hard to predict.” Shakhova, Semiletov and collaborators from 12 institutions in five countries plan to continue their studies in the region, tracking the source of the methane emissions and drilling into the seafloor in an effort to estimate how much methane is stored there. From the New York Times today: Natalia Shakhova, a scientist at the university and a leader of the study, said it was too soon to say whether the findings suggest that a dangerous release of methane looms. In a telephone news conference, she said researchers were only beginning to track the movement of this methane into the atmosphere as the undersea permafrost that traps it degrades. But climate experts familiar with the new research, reported in Friday’s issue of the journal Science, said that even though it does not suggest imminent climate catastrophe, it is important because of methane’s role as a greenhouse gas. Although carbon dioxide is far more abundant and persistent in the atmosphere, ton for ton atmospheric methane traps at least 25 times as much heat. The paper is behind a paywall for those not in the reporting business, but I will link more as more becomes available. If correct, this is not good news – the prior assumption was that increased levels of methane in the Arctic were linked primarily to methane bubbling out of freshwater areas – but there’s much more methane here to release. Here’s an NSF piece on the potential role of methane in abrupt climate change.
I should emphasize here that we have no idea whether this methane release could cause something similar to occur, but this strikes me as a compelling case for the precautionary principle – precisely because we have no idea. An abrupt release of methane, a powerful greenhouse gas, from ice sheets that extended to Earth’s low latitudes some 635 million years ago caused a dramatic shift in climate, scientists funded by the National Science Foundation (NSF) report in this week’s issue of the journal Nature. The shift triggered events that resulted in global warming and an ending of the last “snowball” ice age. The researchers believe that the methane was released gradually at first and then very quickly from clathrates – methane ice that forms and stabilizes beneath ice sheets. When the ice sheets became unstable, they collapsed, releasing pressure on the clathrates. The clathrates then began to de-gas. “Our findings document an abrupt and catastrophic global warming that led from a very cold, seemingly stable climate state to a very warm, also stable, climate state – with no pause in between,” said geologist Martin Kennedy of the University of California at Riverside (UCR), who led the research team. “What we now need to know is the sensitivity of the trigger,” he said. “How much forcing does it take to move from one stable state to the other – and are we approaching something like that today with current carbon dioxide warming?” Allow me to speak for all of humanity when I say…crap.
<urn:uuid:e3eb9ee7-4fc0-4450-9867-843355a7c276>
2.9375
1,174
Personal Blog
Science & Tech.
35.737057
With high speed photography, I can use a high voltage spark to create a flash of only 1/1,000,000th of a second in duration. The problem is that not many things move fast enough to require such a flash to stop their motion. Bullets are one such subject, requiring a very high speed flash system. Around the lab we jokingly call this “ludicrous speed”. After photographing bullets hitting just about every conceivable object, it is time to move on to other subjects. In this case a paint ball is sent into the edge of a straight razor blade. The paint ball crosses two optical detectors that measure the velocity (166 feet per second) and then trigger the flash when the paint ball has traveled about 12 inches. The momentum of the paint ball keeps the ball in motion even after being sliced in half by the razor blade. A wonderful way to illustrate Newton’s law of inertia – that is, an object in motion will stay in motion until a suitable force is applied to stop it. With many photo sessions, once the photography is done we will stand around looking at all the equipment set up and wonder what else we can do with it before the set has to be disassembled. At this point someone wondered what would happen if the paint ball were to hit an egg. The results above show that the paint ball hits at such a speed as to break the shell, then force the yolk out the other side before moving through the rest of the shell. Shots like this create a tremendous mess, and parts of the lab will have pinhead specks of pink paint ball dye and dried egg yolk for years to come. I hope this image excites the minds of a few readers. I always welcome ideas, even though it is often years before I get around to doing a certain project. This post was written by Ted Kinsman for Photo Synthesis.
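The trigger timing and the motion-stopping power of the 1 µs flash both come down to simple arithmetic. A sketch of mine (the 166 ft/s and 12 in figures are from the post; the function names are hypothetical):

```python
# Back-of-envelope numbers behind the setup described above.
FT_PER_S_TO_M_PER_S = 0.3048

def trigger_delay_ms(velocity_ft_s, travel_in):
    """Delay between the detectors firing and the flash, in milliseconds."""
    return (travel_in / 12.0) / velocity_ft_s * 1000.0

def motion_blur_um(velocity_ft_s, flash_s):
    """Distance the subject moves during the flash, in micrometres."""
    return velocity_ft_s * FT_PER_S_TO_M_PER_S * flash_s * 1e6

print(trigger_delay_ms(166, 12))   # ~6.0 ms of delay to dial in
print(motion_blur_um(166, 1e-6))   # ~51 um of blur during a 1 us flash
```

At 166 ft/s the paint ball moves only about 50 µm during the microsecond flash, which is why the slice appears frozen.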
<urn:uuid:a5b38e5e-1fdb-47fc-9604-6950d8ec95b7>
3
388
Personal Blog
Science & Tech.
63.999634
COBOL (Common Business Oriented Language) was the first widely used high-level programming language for business applications. Many payroll, accounting, and other business application programs written in COBOL over the past 35 years are still in use, and it is possible that there are more existing lines of programming code in COBOL than in any other programming language. While the language has been updated over the years, it is generally perceived as out-of-date, and COBOL programs are generally viewed as legacy applications. COBOL was an effort to make a programming language that was like natural English: easy to write, and easier to read after you'd written it. The earliest versions of the language, COBOL-60 and -61, evolved into the COBOL-85 standard sponsored by the Conference on Data Systems Languages (CODASYL). In the years immediately preceding the year 2000, many COBOL programs required changes to accommodate the new century. Programmers with COBOL skills were in demand by major corporations and contractors. A number of companies have updated COBOL and sell development tools that combine COBOL programming with relational databases and the Internet.
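The "like natural English" point is easiest to see in a fragment. This is an illustrative sketch of my own (program and data names are invented), in the COBOL-85 style described above:

```cobol
IDENTIFICATION DIVISION.
PROGRAM-ID. PAYROLL-DEMO.
DATA DIVISION.
WORKING-STORAGE SECTION.
01 BASE-PAY      PIC 9(5)V99 VALUE 1000.00.
01 OVERTIME-PAY  PIC 9(5)V99 VALUE 250.50.
01 GROSS-PAY     PIC 9(5)V99.
PROCEDURE DIVISION.
    ADD BASE-PAY TO OVERTIME-PAY GIVING GROSS-PAY.
    DISPLAY "GROSS PAY: " GROSS-PAY.
    STOP RUN.
```

Statements such as `ADD BASE-PAY TO OVERTIME-PAY GIVING GROSS-PAY` read almost as a sentence, which is exactly what made the language approachable to business programmers, and verbose by modern standards.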
<urn:uuid:b32b18a2-7a67-4246-8f7a-91c9512d8577>
3.234375
241
Knowledge Article
Software Dev.
25.219535
Yellow-throated apalis (Apalis flavigularis)

Yellow-throated apalis description
An endangered warbler, the yellow-throated apalis (Apalis flavigularis) is distinguished from other species of the Apalis genus by its vivid yellow colouration. The yellow-throated apalis is now considered to be a distinct species; however, until 1994 it was classified as a subspecies of the bar-throated apalis (Apalis thoracica). The male yellow-throated apalis has a black head and tail, and a black breast-band which divides a striking yellow throat and chest. Its back and wings are bright green, and its legs are pink. The female yellow-throated apalis is slightly smaller than the male and usually has a narrower breast-band. In comparison to the striking plumage of the adult yellow-throated apalis, the juvenile has a much duller colouration (2) (3). Consisting of a series of loud, monotonous ‘preep’ sounds, the song of the male yellow-throated apalis is by no means musical (2). Calls produced by the female bear a resemblance to those of the male, but occur at a faster rate and have a higher pitch. The alarm call of the yellow-throated apalis is a repetitive series of ‘peep’ notes (3).

Yellow-throated apalis biology
The diet of the yellow-throated apalis consists mainly of insects, which are gleaned from foliage or, occasionally, caught in flight. The yellow-throated apalis is a territorial bird and spends most of the year in solitude. However, this solitary lifestyle is abandoned in favour of monogamous pairing during the breeding season, which occurs between October and December (2). During the breeding season, nests are constructed to accommodate clutches of two to three eggs. Nests consist of a dome-shaped outer layer of moss and an inner lining of fine plant material.
Nests vary considerably in size and are found among foliage between one and three metres above the ground (2) (3).

Yellow-throated apalis habitat
The yellow-throated apalis predominantly inhabits evergreen forests, although it is also found in riparian forests and thickets close to the forest edge. It is generally found at elevations of 600 to 2,400 metres above sea level (3).

Yellow-throated apalis status
The yellow-throated apalis is classified as Endangered (EN) on the IUCN Red List (1).

Yellow-throated apalis threats
The yellow-throated apalis is considered fairly common within the areas it inhabits, but it is classified as Endangered on the IUCN Red List because of its restricted range. Demand for timber and land for agriculture, driven by a rapid increase in the human population of south-eastern Malawi, has resulted in deforestation, and therefore habitat loss, within this species’ range (3). Habitat loss is a serious threat to the yellow-throated apalis and is responsible for a continuing decrease in its population size (2).

Yellow-throated apalis conservation
All of the areas inhabited by the yellow-throated apalis are within forest reserves and are therefore legally protected. However, this level of protection has not been sufficient to prevent habitat loss. Fortunately, a number of conservation actions have been proposed to safeguard the remaining areas inhabited by the yellow-throated apalis. These include increasing public awareness and support for forest conservation, enhancing the level of protection of the remaining forest habitat, and monitoring habitat and populations on a regular basis (3).

Find out more
Learn more about the yellow-throated apalis: BirdLife International - Yellow-throated apalis
Glossary
- Endemic - A species or taxonomic group that is only found in one particular country or geographic area.
- Evergreen - A plant which retains leaves all year round. This is in contrast to deciduous plants, which completely lose their leaves for part of the year.
- Genus - A category used in taxonomy, which is below ‘family’ and above ‘species’. A genus tends to contain species that have characteristics in common. The genus forms the first part of a ‘binomial’ Latin species name; the second part is the specific name.
- Gleaning - The catching of prey by plucking from, or within, foliage.
- Monogamous - Having only one mate during a breeding season, or throughout the breeding life of a pair.
- Riparian forest - Forest that is situated along the bank of a river, stream or other body of water.
- Subspecies - A population usually restricted to a geographical area that differs from other populations of the same species, but not to the extent of being classified as a separate species.
- Territorial - Describes an animal, a pair of animals or a group that occupies and defends an area.

References
- IUCN Red List (January, 2012)
- del Hoyo, J., Elliott, A. and Sargatal, J. (2006) Handbook of the Birds of the World. Volume 11: Old World Flycatchers to Old World Warblers. Lynx Edicions, Barcelona.
- BirdLife International (November, 2011)
<urn:uuid:4ec6bed3-347d-432e-9a42-ae88abce5a10>
4.03125
1,739
Knowledge Article
Science & Tech.
30.692111
The Research of Dr. Harold Zakon

Our lab studies a number of questions using weakly electric fish as our model organism. These fish live in murky waters and are nocturnally active. They generate weak electric fields around themselves from a specialized electric organ and sense these fields with specialized sensory receptors called electroreceptors. They sense the distortions caused in their own electric fields to locate nearby objects, and the electric fields of other fish serve as communication signals.

Weakly Electric Fish
Weakly electric fish have evolved twice: one group, the mormyriformes, lives in Africa; the other group, the gymnotiformes, lives in South America. Electric organs (EOs) have evolved independently in at least six different groups of fish, including two groups of elasmobranchs (Torpedo rays and the skates) and four groups of teleosts (gymnotiforms, mormyriformes, stargazers, catfish). In some groups EOs generate strong discharges (as in the Torpedo ray) and in others weak discharges (as in the knifefish we study). In all but one case they derive from muscle (the exception is the family Apteronotidae, in which the axons of the electromotor neurons, or EMNs, form the electric organ). How the electric organ arises from muscle, both evolutionarily and developmentally, is one question that we have been pursuing in our laboratory. The morphology of the electrocytes varies greatly between species, and these variations are intimately bound up in the generation of species-specific electric organ discharge (EOD) waveforms. However, they all operate on fundamental features of excitable membranes and current flow. In essence, electric organs are composed of columns of electrocytes oriented along the same axis and ensheathed in high-resistance connective tissue. The connective tissue channels the flow of current along the axis of the organ, out into the water, and back into the other end of the electric organ.
Species such as Torpedo rays or electric eels, which have many stacks of flattened electrocytes, are capable of generating discharges of hundreds of volts. Weakly electric fish, such as the ones we study, make modest discharges of only hundreds of millivolts to a few volts. (Picture by J. Oestreich)
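The voltage difference between strongly and weakly electric species falls directly out of the column arrangement described above: electrocytes firing in series sum their individual potentials, much like batteries stacked in series. A sketch of that arithmetic, with rough illustrative numbers (per-cell potential and cell counts here are common textbook figures, not measurements from this lab):

```java
public class ElectrocyteStack {

    // Electrocytes in a column fire in series, so their potentials add,
    // like cells in a battery. Inputs are illustrative values only.
    public static double stackVoltage(int cellsInSeries, double voltsPerCell) {
        return cellsInSeries * voltsPerCell;
    }

    public static void main(String[] args) {
        // A strongly electric fish: thousands of ~0.15 V electrocytes in series
        // yields a discharge of hundreds of volts.
        System.out.println(stackVoltage(4000, 0.15));

        // A weakly electric knifefish: far fewer cells per column, giving
        // a discharge of well under a volt.
        System.out.println(stackVoltage(5, 0.15));
    }
}
```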
<urn:uuid:ab9321e9-211c-4af5-94c5-4d94b128bbf5>
3.3125
482
Academic Writing
Science & Tech.
26.186322
Lampyridae

Key Characters (adult): Fireflies, lightningbugs, glowworms. Adults soft-bodied, elongate; color generally black, with red markings on the pronotum; antennae with 8-13 (usually 11) segments, filiform, serrate or branched in some taxa; tarsi 5-segmented, next-to-last segment with pads; abdominal segments may be modified as luminescent organs (mainly Eastern species). Some adults may have reduced wings or even be larva-like.

Key Characters (larva): Larvae live in moist soils, sometimes along shorelines; all larvae luminescent to some degree; body elongate with prominent scleritized plates; head covered by pronotum; legs 5-segmented.
<urn:uuid:edc531cc-321b-4d98-bcae-8a86ab64f4e8>
2.734375
181
Knowledge Article
Science & Tech.
20.876029
Debris cover also affects glacial retreat rate

GLACIERS, reservoirs of fresh water on the earth, are regarded as major indicators of climate change. Since their meltwater supplies drinking water, agriculture and hydropower in densely populated regions of South and Central Asia, glaciers have become a major concern for these regions. Himalayan glaciers are particularly important, as they feed 10 rivers that supply water to 20 per cent of the world’s population. Scientists are curious to know whether global warming is affecting all glaciers at the same rate. Diverse scientific studies and strategic approaches need to be taken into consideration to predict the current state and future evolution of Himalayan glaciers. However, variable retreat rates and insufficient data on glacial mass-balance—the difference between the amount of snow collected during winter and the amount of snow melted during summer—make it challenging to develop a coherent picture of the impact of climate change. In a study, Dirk Scherler and his colleagues at the University of Potsdam, Germany, proposed that the response of Himalayan glaciers to climate change varies significantly with several factors, such as surface steepness and degree of rock debris cover, besides climate uncertainty. Published in Nature Geoscience on January 23, their research revealed that nearly 50 per cent of the studied glaciers in the Karakoram region of the northwestern Himalayas are either stable or advancing, whereas about 65 per cent are retreating elsewhere, such as in the Tibetan Plateau.
“So far the significance of debris cover and its impact on regional differences in the frontal dynamics of Himalayan glaciers has not been established on a mountain-belt scale,” said Scherler. The authors studied 286 mountain glaciers from 12 heavily ice-covered areas of the Greater Himalayas, by analysing remotely measured frontal changes and mean annual glacier-surface velocities. They found that the mean glacier-surface movement between 2000 and 2008 ranged from retreating at the speed of 80 metres per year to advancing at the speed of 40 metres per year. There is no uniform response of glaciers to climate change. The glaciers of central Himalayas, which have high debris cover, have low retreat rates in comparison to the subdued landscapes of the Tibetan Plateau, where retreat rates are higher. “During the advancing stage, the debris is brought down the valley by the glacier and during retreat they protect the ice below them, to some extent, from melting,” explained R K Chaujar, scientist, Wadia Institute of Himalayan Geology, Dehradun, Uttarakhand. “It is important to consider such parameters that may have been neglected so far in order to predict how fast certain glaciers are retreating and will retreat in the future to predict water availability or measure the global sea level. Just because some glaciers do not retreat (at their front) does not mean that they are in good condition,” Scherler added. “Scherler’s method is convincing. Though satellite data interpretations contribute in coming to these conclusions effectively, one should always take field measurements and observations into account,” said P K Joshi, associate professor, TERI University, New Delhi. “Validation of such studies should be carried out using strong, uniformly distributed ground station records.”
<urn:uuid:fda6e82b-589e-405d-b518-3ba4766f0346>
3.484375
727
Comment Section
Science & Tech.
32.497868
Daniele Bochicchio, Stefano Mostarda, and Marco De Sanctis are authors of ASP.NET 4.0 in Practice. With so many multicore CPUs on the market, multithreading and parallel execution are becoming more popular topics among developers. Both aim at reducing computing time and providing better performance. Multithreading is the ability to execute multiple tasks at the same time using different threads (see Figure 1). Parallel execution is the ability to spread a single task across multiple CPUs and use their combined power to complete a computing task in the fastest possible way.

Process, Threads, and Execution
When a program is executed, the operating system creates an object called a process, giving it an isolated memory space. A process contains a specific kind of item, called threads, used to execute the code; a process by itself does not have the ability to execute anything. A process contains at least one thread (the primary one). When the primary thread is terminated, the process itself is terminated and its memory is unloaded. Creating a thread is cheaper from a performance point of view than creating a process, because you are not required to allocate a new memory space. When a piece of code is executed, the thread is blocked, waiting for the response. If you have a single thread responding to all code execution needs, the problem is simple -- you'll have a waiting list for the code to be executed. This approach will not work for normal applications. Imagine if, in a productivity program like the ones in Office, you had to wait for every single operation to finish before moving on. With such an approach it would be impossible to have a background spellchecker or to start printing while editing a document. Multithreading is very important; in fact, ASP.NET does support multiple threads. Using this approach, one request does not stop the others, and multiple requests can be served at the same time.
What is really important at this point is the ability to create new threads and assign specific code to them to execute part of the work in a different thread. To be clear, I'm speaking of generating multiple threads from a single request to reduce response time. This approach is very useful in scenarios where you need to make calls to external resources, such as databases or web services. There is a strong debate about whether generating multiple threads in a web application is a best practice, because the working threads are shared by all requests. In such a situation, if you can afford better application componentization, this can be achieved by simply moving the thread generation to a different layer and using the application as a controller and display only. Anyway, the technique shown in the next example may be useful in a lot of scenarios where this componentization is not needed or possible.
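The article's examples are in C# for ASP.NET, but the pattern it describes is language-neutral: hand a slow external call to a worker thread so the requesting thread is not blocked, and collect the result when it is needed. A minimal Java sketch of that idea (the `fetchRemote` method is a stand-in for a database or web-service call, not anything from the book):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class BackgroundCall {

    // Simulated slow external resource (placeholder for a DB/web-service call).
    static String fetchRemote() {
        try {
            Thread.sleep(100); // pretend network latency
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return "response";
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();

        // Hand the slow call to a worker thread...
        Future<String> pending = pool.submit(BackgroundCall::fetchRemote);

        // ...the current thread stays free to do other work here...

        // ...and we block only when the result is actually needed.
        String result = pending.get();
        System.out.println(result); // prints "response"
        pool.shutdown();
    }
}
```

The executor plays the role of the manually created threads discussed in the article; the same caveat applies — in a web application those workers come out of a shared pool, so spawning them per request should be weighed carefully.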
<urn:uuid:2d979a1f-8cd1-4bc3-90d8-b2e799abea7b>
3.359375
580
Knowledge Article
Software Dev.
38.26252
Contact: Bill Steigerwald NASA/Goddard Space Flight Center Caption: New research from NASA's Lunar Science Institute indicates that the solar wind may be charging certain regions at the lunar poles to hundreds of volts. In this short video Dr. Bill Farrell discusses this research and what it means for future exploration of the moon's poles. Credit: NASA/Goddard Space Flight Center Usage Restrictions: None Related news release: Lunar polar craters may be electrified
<urn:uuid:907ebac3-8c73-422b-85ef-671d698b5c59>
2.984375
100
Truncated
Science & Tech.
37.076
Using this graphic and referring to it is encouraged, and please use it in presentations, web pages, newspapers, blogs and reports. For any form of publication, please include the link to this page and give the cartographer/designer credit (in this case UNEP/GRID-Arendal). Source: Houghton, J.T., et al. (editors). 2001. Climate Change 2001: The Scientific Basis. Contribution of Working Group I to the Third Assessment Report of the Intergovernmental Panel on Climate Change (IPCC). UK: Cambridge University Press. Uploaded on Tuesday 21 Feb 2012

Main greenhouse gases
A table of the main greenhouse gases and their attributes, sources and concentration levels from 1998. Naturally occurring greenhouse gases include water vapour, carbon dioxide, methane, nitrous oxide, and ozone. Greenhouse gases that are not naturally occurring include hydrofluorocarbons (HFCs), perfluorocarbons (PFCs), and sulphur hexafluoride (SF6), which are generated in a variety of industrial processes. Water vapour is the most abundant greenhouse gas. However, human activities have little direct impact on its concentration in the atmosphere. In contrast, we have a large impact on the concentrations of carbon dioxide, methane and nitrous oxide. In order to be able to compare how different gases contribute to the greenhouse effect, a method has been developed to estimate their global warming potentials (GWP). GWPs depend on the capacity of greenhouse gas molecules to absorb or trap heat and the time the molecules remain in the atmosphere before being removed or broken down. The GWP of carbon dioxide is 1 (constant for all time periods) and the GWPs of other greenhouse gases are measured relative to it.
Even though methane and nitrous oxide have much higher GWPs than carbon dioxide, because their concentration in the atmosphere is much lower, carbon dioxide remains the most important greenhouse gas, contributing about 60% to the enhancement of the greenhouse effect.
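GWPs are used exactly as described: multiply each gas's emitted mass by its GWP to express it in CO2-equivalents, which makes the gases directly comparable. A small sketch of that conversion (the 100-year GWP values for CH4 and N2O below are illustrative placeholders — the exact figures vary between IPCC assessment reports; only CO2 = 1 is fixed by definition):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class Co2Equivalent {

    // Convert a mass of gas (tonnes) to CO2-equivalent tonnes via its GWP.
    public static double toCo2e(double tonnes, double gwp) {
        return tonnes * gwp;
    }

    public static void main(String[] args) {
        // Illustrative 100-year GWPs; exact values differ between reports.
        Map<String, Double> gwp = new LinkedHashMap<>();
        gwp.put("CO2", 1.0);    // by definition
        gwp.put("CH4", 25.0);   // placeholder value
        gwp.put("N2O", 298.0);  // placeholder value

        // One tonne of each gas, expressed as CO2-equivalent tonnes.
        for (Map.Entry<String, Double> e : gwp.entrySet()) {
            System.out.println(e.getKey() + ": " + toCo2e(1.0, e.getValue()) + " t CO2e");
        }
    }
}
```

This also shows why CO2 still dominates despite its GWP of 1: the multiplication is by emitted mass, and CO2 is emitted in far larger quantities than the high-GWP gases.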
<urn:uuid:8461a624-2738-4dd7-a5ac-cb4ba8b3bb21>
3.375
471
Knowledge Article
Science & Tech.
37.267471
A Keymap lets an application bind key strokes to actions. In order to allow keymaps to be shared across multiple text components, they can use actions that extend TextAction. TextAction can determine which JTextComponent most recently has or had focus and therefore is the subject of the action (in the case that the ActionEvent sent to the action doesn't contain the target text component as its source). The input method framework lets text components interact with input methods, separate software components that preprocess events to let users enter thousands of different characters using keyboards with far fewer keys. JTextComponent is an active client of the framework, so it implements the preferred user interface for interacting with input methods. As a consequence, some key events do not reach the text component because they are handled by an input method, and some text input reaches the text component as committed text within an InputMethodEvent instead of as a key event. The complete text input is the combination of the characters in keyTyped key events and committed text in input method events. The AWT listener model lets applications attach event listeners to components in order to bind events to actions. Swing encourages the use of keymaps instead of listeners, but maintains compatibility with listeners by giving the listeners a chance to steal an event by consuming it. Keyboard events and input method events are handled in the following stages, with each stage capable of consuming the event: To maintain compatibility with applications that listen to key events but are not aware of input method events, the input method handling in stage 4 provides a compatibility mode for components that do not process input method events. For these components, the committed text is converted to keyTyped key events and processed in the key event pipeline starting at stage 3 instead of in the input method event pipeline.
By default the component will create a keymap (named DEFAULT_KEYMAP) that is shared by all JTextComponent instances as the default keymap. Typically a look-and-feel implementation will install a different keymap that resolves to the default keymap for those bindings not found in the different keymap. The model is defined by the Document interface. This is intended to provide a flexible text storage mechanism that tracks change during edits and can be extended to more sophisticated models. The model interfaces are meant to capture the capabilities of expression given by SGML, a system used to express a wide variety of content. Each modification to the document causes notification of the details of the change to be sent to all observers in the form of a DocumentEvent, which allows the views to stay up to date with the model. This event is sent to observers that have implemented the DocumentListener interface and registered interest with the model being observed. Serialized objects of this class will not be compatible with future Swing releases. The current serialization support is appropriate for short-term storage or RMI between applications running the same version of Swing. As of 1.4, support for long-term storage of all JavaBeans has been added to the java.beans package. Please see XMLEncoder.
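The keymap and TextAction mechanisms described above fit together in a few lines: create a named keymap that resolves to the shared default keymap, bind a keystroke to a TextAction, and install the keymap on a component. A minimal sketch (the keymap name, keystroke, and action are arbitrary choices for illustration):

```java
import javax.swing.Action;
import javax.swing.JTextArea;
import javax.swing.KeyStroke;
import javax.swing.text.JTextComponent;
import javax.swing.text.Keymap;
import javax.swing.text.TextAction;
import java.awt.event.ActionEvent;
import java.awt.event.InputEvent;
import java.awt.event.KeyEvent;

public class KeymapDemo {
    public static void main(String[] args) {
        JTextArea area = new JTextArea();

        // Create a named keymap that resolves to the shared default keymap
        // for any binding it does not define itself.
        Keymap parent = JTextComponent.getKeymap(JTextComponent.DEFAULT_KEYMAP);
        Keymap custom = JTextComponent.addKeymap("demoKeymap", parent);

        // TextAction locates the text component that has (or last had) focus,
        // so one action instance can be shared by many components.
        Action bang = new TextAction("insert-bang") {
            public void actionPerformed(ActionEvent e) {
                JTextComponent target = getTextComponent(e);
                if (target != null) {
                    target.replaceSelection("!");
                }
            }
        };

        // Bind Ctrl+1 (an arbitrary choice) and install the keymap.
        custom.addActionForKeyStroke(
                KeyStroke.getKeyStroke(KeyEvent.VK_1, InputEvent.CTRL_DOWN_MASK), bang);
        area.setKeymap(custom);
    }
}
```

Because `"demoKeymap"` is registered in the shared keymap registry, other components can retrieve and install the same keymap via `JTextComponent.getKeymap("demoKeymap")`.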
<urn:uuid:9301fa2e-7c6c-467d-9841-2a5321aa0bcd>
3.46875
674
Documentation
Software Dev.
31.886485
In this chapter we discuss how to program a leveleditor that can be used in any array-based game. An array-based game might be a Breakout clone, Boulderdash, PacMan or a Nibbles clone like our game Snakin, where we used this leveleditor too. Before you start reading this article you should know something about arrays in Java (just the basics), a little bit about applet parameters, and of course you should download the source code at the end of the chapter! Why do we need a leveleditor? Even though the answer to this question might be clear, I want to write a few lines about it. Programming a leveleditor and making your game work with any level built with this editor is much harder and costs you much more time than writing a game that has one or two static levels. But this is worth the price if you want to program a game with many different levels. If you programmed the game to work with a leveleditor, it costs you just a few minutes to add a new level. But if you programmed your game just for static levels, it will be hard and maybe even impossible to add a new level. So the hard work at the start of your game design was really worth the price! Well, let's start with the real problem! The basic idea Before we find a solution to the problem of where and how to define our levels, we have to think about something else. We have to find a way to represent a level in our game. Imagine the applet area with a grid over it. Next we place the elements included in our level (walls, stones, enemies...) in the fields of this grid (just one element per field). This means that the position of every level element is defined by the row and the column number of the grid field in which the level element is placed. The easiest way to represent such a two-dimensional matrix in Java is to use a two-dimensional array. So every level consists of a 2D array. This array holds the different level elements, and these elements can differ from level to level.
Every time we read in our levels, we will place different level objects at different positions in our 2D array. Every time we want to draw our level to the screen, we go through the array and paint every object in it at the position in our grid represented by the column and row number of the array. Using this pretty simple idea, we will now try to solve the problem of where and how we want to define our levels so that it is possible to fill the level array later. Where and how will we define the levels? We are programming applets, so we basically have three possibilities for where we can define the levels: - In an external file - In the source code of the game - In the HTML page of the applet (with the help of applet parameters) The first and the second solution are, for different reasons, worse than the third one. Reading an external file into an applet is possible but not that easy, so we won't do it. Defining the different levels in the source code is a good solution if the player should not be able to take a look at a level before he has reached it, for example if the goal of the game is to find a way out of a labyrinth. But you can't use this solution if the player should be able to write a level himself. So there is just the third alternative left, and we will use this idea now in our editor. So we will use applet parameters to define our levels. Every applet parameter has a value and a name (for details see below), and we can get the value of a parameter by calling the getParameter(parametername) method of the applet class. The value of the parameter with the specified name is then returned as a string. Applet parameters can be defined between the opening and the closing applet tag and always look the same, just like: <param name="name of the parameter" value="value of the parameter"> Well, now we know where we will write our levels, but we still don't know what a level will look like. OK, here comes my solution.
Every level we'll write will consist of 11 parameters: 3 information parameters that hold information about the author of the level, the level description and the level name. The other 8 parameters will represent the 8 rows of the level array that represents our level in the example. The parameter names will always look like this: "Level" + "Levelnumber" + "_" + "Id". Id can have the values "Author", "Name", "Comment" or "Line" + "Rownumber". The value strings of the information parameters can have any length; the values of the level-defining parameters have to consist of a string of length 10. Every character of the string represents one level element, in our case stones of different colors. These color/character pairs will be r = red, g = green, b = blue, y = yellow, plus another character, ":", that represents grid fields in the level where no level element shall be placed. So the names of the parameters of different levels will all look the same except for the level number. Because of this structure of the parameter names it is really easy to read in the different levels using a while or a for loop, counting from 1 to the integer defined in the "Levels_in_total" parameter (for details see the readLevels method further down this chapter).
Now you can take a look at a level which can be read by the leveleditor:

// Start of the applet tag including the normal applet information
<applet code=Main width=300 height=400>
// This line tells the editor how many levels are defined
<param name="Levels_in_total" value="1">
// These lines include the level information parameters
<param name="Level1_Author" value="FBI">
<param name="Level1_Name" value="Test Level 1">
<param name="Level1_Comment" value="My first try">
// This is the "real" level
<param name="Level1_Line0" value="rrrrrrrrrr">
<param name="Level1_Line1" value="bbbbbggggg">
<param name="Level1_Line2" value="r::rrrr::r">
<param name="Level1_Line3" value="yyyyybbbbb">
<param name="Level1_Line4" value="rrr::::rrr">
<param name="Level1_Line5" value="gggggyyyyy">
<param name="Level1_Line6" value="r::rrrr::r">
<param name="Level1_Line7" value="bgybgrybgy">
// End of the applet tag
</applet>

Class design of our editor
Now we'll start with the class design of our leveleditor, which will make it possible to read in a level.

The level-reading class: This class reads in the number of levels defined in the "Levels_in_total" parameter using the getParameter(parametername) method. For every level in the HTML file it creates an instance of the class Level (see below) and stores this created level in an array of instances of the class Level. Then the method readLevels(), which does the job of reading in all the levels, returns this array of Level instances to the calling class.

The class Level: This class saves the values of the information strings Author, Levelname and description, and holds the 2D array (stone_map) with the level elements. As I've already said, this array saves instances of the class Stone, in different colors according to the definition of the level. Each stone "knows" its color and its position in the grid. The class Level also has a method to paint the whole level to the screen.

The class Stone: This class holds the color and the position of the stone in the applet area.
The position (in pixels) is calculated in the constructor of the class, using the column and row in which the stone is placed in the stone_map array of the level instance. A stone instance also has its own paint method to paint the stone in the right color and at the right position.

The test class: This class holds an array of instances of the class Level. A level can be chosen out of this level array using the cursor keys, and then the chosen level is painted to the screen. This is just a test class and has absolutely no meaning for the leveleditor.

The constants class C_LevelEditor: To make the leveleditor more flexible without needing to change the source code in general, all constant values (number of lines in one level, number of columns in one level, grid size...) are stored in the class C_LevelEditor. If you want to use more lines in your level... you just have to change the value of the corresponding constant. So this class holds some static constants and nothing else!
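The parameter scheme above boils down to a small parsing step: take each "LevelN_LineM" string and map its characters into the 2D array. A standalone sketch of that step (operating on plain strings here rather than getParameter(), so it can run outside an applet; in the editor itself the rows would come straight from the applet parameters):

```java
public class LevelParser {

    // Map level row strings (e.g. "r::rrrr::r") into a 2D char array;
    // ':' marks an empty grid field, other characters are stone colors
    // (r = red, g = green, b = blue, y = yellow).
    public static char[][] parseLevel(String[] lines) {
        char[][] stoneMap = new char[lines.length][];
        for (int row = 0; row < lines.length; row++) {
            stoneMap[row] = lines[row].toCharArray();
        }
        return stoneMap;
    }

    public static void main(String[] args) {
        // In the applet these strings would come from
        // getParameter("Level1_Line0") ... getParameter("Level1_Line7").
        String[] level1 = {
            "rrrrrrrrrr",
            "bbbbbggggg",
            "r::rrrr::r",
        };
        char[][] map = parseLevel(level1);
        System.out.println(map[2][1]); // ':' = empty field in row 2, column 1
    }
}
```

In the real editor each character would become a Stone instance (or null for ':') instead of a plain char, but the row/column bookkeeping is exactly the same.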
The great 1997 El Niño snuck up on us. Once here, it seemed to spawn an entire industry last year: the business of El Niño forecasting. We kept track of some of the major predictions, matched them up with what actually happened, and tallied the results to get a sense of just how good the forecasters have gotten.

Date of Appearance
Prediction: Before El Niño actually showed up, many forecasts said 1997 might be an El Niño year; many said not. It wasn't until late February or March that scientists put the pieces of the puzzle together and realized El Niño was underway. As NOAA's Michael Glantz wryly put it, "once it was started, it wasn't as hard to predict."
Outcome: The ayes had it. But no one predicted it would be a record-breaker; most thought it would be weak and short-lived.

California
Prediction: In early autumn last year Ants Leetmaa, the director of NOAA's National Centers for Environmental Prediction, warned Californians of a long winter of powerful storms comparable to the devastating storms of the El Niño winter of 1982-83. Specifically, he said, "The southern part of the state can expect rainfall on the order of 200% of normal."
Outcome: El Niño deflected the two major northern jetstreams so that they carried a long train of storm systems into the state throughout the winter. Southern California got double its average winter rainfall, recording approximately 230% of normal. Flooding was widespread in several coastal areas, with regions near San Francisco suffering especially.

The Northern US and Canada
Prediction: The northern half of the US was predicted to experience a relatively mild winter, as the jetstream could be expected to park itself farther north than usual, acting as a barrier against cold Canadian air. However, especially along the east coast, intrusions of southern moisture might lead to more rain and storms than usual.
Outcome: The northern US generally enjoyed a mild winter.
One way to measure is the total expenditure on heating fuels; the average heating bill over the winter was as much as 10 percent lower than normal.

Peru
Prediction: Peru would be inundated by heavy rains throughout the peak of the El Niño occurrence, and warmer waters off the coast would mean a drop in the fish catch.
Outcome: Peru and adjacent Ecuador suffered massive flooding, with rains rarely stopping for months on end. In January and February, the land could absorb no more water, and vast new lakes—some 50 miles long—appeared in formerly dry coastal areas. Rivers ripped out entire towns in the mountains, and completely inundated agricultural areas in valleys. At sea, fish stocks were depressed, and many fishermen suffered severe economic pain.

Australia
Prediction: Australia would wither under an extended drought throughout the northern winter (which in Australia and the Southern Hemisphere is summer).
Outcome: This one's a toss-up; as NOAA's Mickey Glantz put it, "the (forecasting) operation was a success, but the patient died." Meteorologically speaking, there was a bad drought, as rainfall totals across Australia were well below normal. But agriculturally speaking, enough rain fell at just the right times to prevent catastrophic wheat crop and cattle losses. Australian newspapers gave thanks to "Billion-Dollar Rains" that appeared just as disaster seemed imminent, showing that it's not how much rain you get, but when you get it, that counts.

India and Oceania
Prediction: India and Oceania would suffer a failure of the vital monsoons (heavy seasonal rains).
Outcome: The monsoons had a late onset, showing up many weeks later than normal, but although erratic in schedule, were in no sense "failed." Indonesia in particular suffered a self-inflicted wound, as fires deliberately set to clear forest lands for slash-and-burn agriculture raged out of control when the monsoons took their time arriving.
Fears of imminent famine and misery in India disappeared with the onset of the life-giving rains, although coastal China was flooded and battered by an excess of violent storms bearing tornadoes and high winds.

Africa
Prediction: Southern Africa would suffer drought and severe food shortages, followed by increases in disease.
Outcome: There was very little noticeable effect on weather, no major drought, and no widespread outbreaks of disease or famine.

Eastern Pacific/Western Atlantic
Prediction: The Eastern Pacific would engender some very powerful hurricanes, while Atlantic hurricane production would be suppressed.
Outcome: Some of the most powerful hurricanes ever measured spun up in the Pacific, including Hurricane Linda, so powerful that weather scientists proposed a new "Category 6" (the current system only goes up to 5) to describe it. (Compare Linda, with winds of 185 mph, to Andrew, the hurricane that devastated Homestead, Florida in 1992.) Meanwhile, the Atlantic hurricane season was below normal.
"Black Hole Computers," by Seth Lloyd and Y. Jack Ng (Special Editions: Reality-Bending Black Holes; 10 pages)

What is the difference between a computer and a black hole? This question sounds like the start of a Microsoft joke, but it is one of the most profound problems in physics today. Most people think of computers as specialized gizmos: streamlined boxes sitting on a desk or fingernail-size chips embedded in high-tech coffeepots. But to a physicist, all physical systems are computers. Rocks, atom bombs and galaxies may not run Linux, but they, too, register and process information. Every electron, photon and other elementary particle stores bits of data, and every time two such particles interact, those bits are transformed. Physical existence and information content are inextricably linked. As physicist John A. Wheeler of Princeton University says, "It from bit." Black holes might seem like the exception to the rule that everything computes. Inputting information into them presents no difficulty, but according to Einstein's general theory of relativity, getting information out is impossible. Matter that enters a hole is assimilated, the details of its composition lost irretrievably. In the 1970s Stephen Hawking of the University of Cambridge showed that when quantum mechanics is taken into account, black holes do have an output: they glow like a hot coal. In Hawking's analysis, this radiation is random, however. It carries no information about what went in. If an elephant fell in, an elephant's worth of energy would come out--but the energy would be a hodgepodge that could not be used, even in principle, to re-create the animal.
Discover the cosmos! Each day a different image or photograph of our fascinating universe is featured, along with a brief explanation written by a professional astronomer.

2001 August 16

Explanation: Its core hidden from optical view by a thick lane of dust, the giant elliptical galaxy Centaurus A was among the first objects observed by the orbiting Chandra X-ray Observatory. Astronomers were not disappointed, as Centaurus A's appearance in x-rays makes its classification as an active galaxy easy to appreciate. Perhaps the most striking feature of this Chandra false-color x-ray view is the jet, 30,000 light-years long. Blasting toward the upper left corner of the picture, the jet seems to arise from the galaxy's bright central x-ray source -- suspected of harboring a black hole with a million or so times the mass of the Sun. Centaurus A is also seen to be teeming with other individual x-ray sources and a pervasive, diffuse x-ray glow. Most of these individual sources are likely to be neutron stars or solar mass black holes accreting material from their less exotic binary companion stars. The diffuse high-energy glow represents gas throughout the galaxy heated to temperatures of millions of degrees C. At 11 million light-years distant in the constellation Centaurus, Centaurus A (NGC 5128) is the closest active galaxy.

Authors & editors: Jerry Bonnell (USRA)
NASA Technical Rep.: Jay Norris. Specific rights apply.
A service of: LHEA at NASA/GSFC & Michigan Tech. U.
Overview of molluscs - Phylum Mollusca

Snails, slugs, clams, mussels, squids and octopuses are all very different-looking animals. However, they are all molluscs. Molluscs live almost everywhere - on the rocky shore, in freshwater habitats and in your garden.

Generally molluscs have:
- an unsegmented, soft body
- a muscular foot or tentacles
- a mantle that can secrete a shell

Most, but not all, molluscs have:
- an internal or external shell
- a radula (tongue with teeth)

Molluscs are one of the largest animal groups, with about 200,000 species worldwide. The number of species in Australia is about 15,000, with more than 2,000 known from Sydney. Sydney has an amazing diversity of molluscs, from the Little Blue Periwinkle the size of your little fingernail to the Giant Cuttlefish over 1 m long. Despite the many differences in external appearance, their internal structure is very similar. Mollusca means 'soft-bodied' and, although some have developed a tough shell, they are all soft on the inside. Molluscs are further classified into seven major groups, and Sydney has representatives from five of these. The main groups found in Sydney are gastropods, bivalves, cephalopods, chitons, and also a minor group, the aplacophorans or spicule worms.
Your organization might have access to this article on the publisher's site. To check, click on this link: http://dx.doi.org/10.1063/1.1146959

A new balloon-borne instrument designed by the author detects x rays produced in thunderstorms. The instrument uses a sodium iodide (NaI) scintillation detector, and flies on a small meteorological balloon. A three-bin (30 to 60, 60 to 90, and 90 to 120 keV) x-ray spectrum is acquired every 0.25 s. The deployment of these detectors with electric-field meters has resulted in several vertical profiles of x-ray intensity and electric-field strength. These data support the hypothesis that the electric field generated by the thunderstorm can produce energetic electrons, which in turn emit bremsstrahlung x rays. © 1996 American Institute of Physics.
Table 1: Global Warming Potentials for Common Greenhouse Gases

In 1978, the United States federal government banned the use of chlorofluorocarbons (CFCs) in aerosol cans. The cited reasons had to do with the extremely detrimental effect that emissions of these gases have on the ozone layer. Ozone serves as a buffer that limits the amount of UV radiation entering the lower levels of the atmosphere which we as humans occupy. Without ozone, we would have significantly higher levels of skin cancer, and many simpler species would easily die. CFCs contain chlorine atoms. Upon entering the region of the atmosphere where significant amounts of ozone exist, the chlorine atoms are freed and react with the ozone, increasing the amount of ordinary oxygen but reducing the ozone present. This reduction of ozone in the atmosphere can have significant impacts on human life, and CFC emissions can also accelerate global warming. In short, CFCs have a huge impact on the atmosphere when emitted. To quantify this impact, greenhouse gas equivalency factors can be used to equate the environmental impact of different gases. Global warming potential (GWP) is a relative measure of how much a particular gas contributes to global warming. The baseline for global warming potential is carbon dioxide; consequently, carbon dioxide has a GWP of 1. When dealing with carbon dioxide, the amount being emitted is the only quantity that needs to be measured in order to come to a GWP approximation. However, to arrive at an equivalent GWP number for a greenhouse gas other than carbon dioxide, such as a CFC, a greenhouse gas equivalency factor must be used. In the case of CFCs, this equivalency factor ranges from 400 to 15,000 depending on the time horizon.
Why is there such a large variation in CFCs' equivalency factors, and why do they differ depending on the time horizon? The global warming potential (GWP) is a time-normalized radiative forcing index. The radiative efficiency of a gas is

a = (1 / 4πR²) ∂P/∂M

where P is the radiative power transmitted through the troposphere–stratosphere boundary, R is the radius of the earth, and M is the mass of this gas added to the atmosphere, with the temperature profile of the troposphere held fixed. The GWP of chemical species x over a time horizon T is then given by

GWP_x(T) = ∫₀ᵀ a_x c_x(t) dt / ∫₀ᵀ a_r c_r(t) dt

where c(t) is the amount of 1 kg of the gas injected at t = 0 that is still in the air at time t, and r denotes a reference species, typically carbon dioxide. The forcing capacity is a function of the infrared absorbance, the path length of the gas in question, and its density. GWP then adjusts this capacity according to how long it would take for the gas to decay naturally in the atmosphere. A gas that takes longer to decay naturally is more harmful than one that can decay in a short time span. Once gases start accumulating at a faster rate than they are able to decay, global warming runs away. The GWP is heavily dependent on the time horizon being used. Table 1 assumes a 100-year time horizon, which is the standard time horizon used for citing GWP values. The next section details the importance of the time horizon. Common values used for the time horizon are 25, 50 and 100 years; US government reports often use 100 years as the assumed baseline for this metric. For example, in the table above, over a 100-year horizon the emission of one molecule of methane is equivalent to the emission of 25 molecules of carbon dioxide; the 25-to-1 rate reflects methane's extra warming integrated over that 100-year horizon as the methane gradually decays out of the atmosphere. Similarly, for a 20-year time horizon, the equivalency rate is 72 molecules of carbon dioxide for one molecule of methane.
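To make the horizon dependence concrete, here is a small arithmetic sketch in Python. The 10-tonne methane emission is a made-up figure; the GWP values of 25 (100-year) and 72 (20-year) are the ones quoted in the text, and the 12/44 mass ratio used for the carbon-equivalent conversion is explained in the next paragraph:

```python
# GWP values for methane quoted in the text:
# time horizon (years) -> CO2-equivalency factor
GWP_CH4 = {100: 25, 20: 72}

def co2_equivalent(mass, gwp):
    # CO2-equivalent mass = mass of the gas times its GWP for the chosen horizon
    return mass * gwp

def carbon_equivalent(mass_co2e):
    # Molar masses: CO2 = 44 g/mol, C = 12 g/mol, so scale by 12/44
    return mass_co2e * 12.0 / 44.0

emitted_ch4 = 10.0  # tonnes (hypothetical emission)
print(co2_equivalent(emitted_ch4, GWP_CH4[100]))  # 250.0 t CO2e over 100 years
print(co2_equivalent(emitted_ch4, GWP_CH4[20]))   # 720.0 t CO2e over 20 years
print(round(carbon_equivalent(250.0), 1))         # 68.2 t carbon-equivalent
```

The same physical emission is thus reported almost three times larger in CO2-equivalent terms under a 20-year horizon than under a 100-year one, which is why the horizon must always be stated alongside a GWP figure.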
Consequently, for a gas such as methane that decays faster than carbon dioxide, the equivalency rate goes down as the time horizon goes up, and goes up as the time horizon decreases. Given the complexities of atmospheric science, this is not necessarily a linear relationship, and detailed mathematical models are developed in order to understand the exact correlation. With the GWP metric standardized, it becomes possible to convert greenhouse gases into equivalents of carbon dioxide or carbon. The distinction between carbon dioxide and carbon is sometimes confusing. Carbon dioxide is a three-atom molecule with a molar mass of 44 grams, whereas carbon is a single atom with a molar mass of 12 grams. [4,5] In order to convert between carbon dioxide equivalents and carbon equivalents, these molar mass differences need to be accounted for through the simple application of the appropriate stoichiometric relationship. Most modern literature uses carbon dioxide equivalents, but older sources may use carbon equivalents. As stated in the beginning, CFCs were banned due to the fact that they cause significant ozone depletion when released into the atmosphere. Carbon dioxide, however, has a different mechanism for increasing the Earth's temperature: it traps heat radiated from the Earth inside the atmosphere, driving up the temperature of the planet. Given that depleting ozone could have significantly more catastrophic effects than the accumulation of carbon dioxide in the atmosphere, it sounds reasonable that CFCs have such a high GWP. Carbon dioxide and the other greenhouse gases each have a part to play with regard to climate change. Understanding which ones have the largest impact is critical to creating a method of managing the emissions that have the biggest impact and mitigating the effects of climate change. © Subhan Ali. The author grants permission to copy, distribute and display this work in unaltered form, with attribution to the author, for noncommercial purposes only.
All other rights, including commercial rights, are reserved to the author.

L. E. Manzer, "The CFC-Ozone Issue: Progress on the Development of Alternatives to CFCs," Science 249, 31 (1990).
"IPCC Fourth Assessment Report: Climate Change 2007 (AR4)," Intergovernmental Panel on Climate Change.
"IPCC Third Assessment Report: Climate Change 2001 (TAR)," Intergovernmental Panel on Climate Change.
"Metrics for Expressing Greenhouse Gas Emissions: Carbon Equivalents and Carbon Dioxide Equivalents," Environmental Protection Agency, EPA420-F-05-002, February 2005.
U. Springer, "The Market for Tradable GHG Permits Under the Kyoto Protocol: A Survey of Model Studies," Energy Economics 25, 527 (2003).
A Search for Transiting Exoplanets Over 270 planets are now known to be orbiting stars other than the Sun. Most of these planets were detected using the radial velocity method, which involves measuring (using spectroscopy) the "wobble" of the star due to the planet's gravitational pull. An alternative used by an increasing number of planet searches is the transit method (see figure below). This involves measuring the small decrease in apparent brightness (of order 1%) of the star, as the planet eclipses (or transits) it. This method is thus only sensitive to planets with edge-on orbits. However, if such a planet is detected, its size, mass and orbital characteristics can be determined (with follow-up spectroscopy), constraining models of its structure and formation. High-precision spectroscopic observations of a host star during a known transit may reveal something about the chemical composition of the planet's atmosphere. The first such observation was recently made by Charbonneau et al. (2001). Under highly favourable circumstances, it may even be feasible to search for the presence of atmospheric oxygen (Webb & Wormleaton, 2001). For the transit method to be effective, two main difficulties need to be overcome. Firstly, the probability that a given star hosts a planet in an edge-on orbit and the transit occurs while the star is being observed is low. Thus, a reasonable detection rate can only be achieved by continuous photometric monitoring of a large sample (tens of thousands) of stars. A wide-field automated telescope is ideal for this task. The second challenge is the high level of photometric precision required. Changes in brightness on timescales of a few hours need to be measured to better than 1% precision. This is difficult to achieve with CCD photometry on ground-based telescopes (the only feasible method for most transit searches). 
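To get a feel for the photon budget this precision requirement implies, here is a rough Python sketch. Photon counting gives a fractional flux error of 1/√N; the 5:1 noise margin below is an assumed illustration, not a figure from the text:

```python
import math

def poisson_mag_error(n_photons):
    """Photon-noise floor in magnitudes for N detected photons.
    A small fractional flux error e corresponds to about (2.5 / ln 10) * e mag."""
    return (2.5 / math.log(10)) / math.sqrt(n_photons)

# To measure a 1% (~0.01 mag) transit with the noise floor at 1/5 of the
# signal depth (an assumed margin), the photon count per measurement must satisfy:
target = 0.01 / 5
n_needed = ((2.5 / math.log(10)) / target) ** 2
print(f"need roughly {n_needed:.0f} photons per measurement")
```

In practice star and sky flux, detector gain and systematic errors all push this number higher, which is why the systematic effects discussed below end up dominating the error budget.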
Above the fundamental limits set by Poisson statistics (since measurement involves counting photons in the image), systematic errors become important.

Photometric Precision - The RMS magnitude variation versus approximate V magnitude over a night of observations (150 sec exposures) using the raster-scan technique. The red dashed line shows the Poisson limit due to star and sky flux. Stars brighter than V~9 are saturated.

The Automated Patrol Telescope

The Automated Patrol Telescope (APT) is a 0.5 m telescope owned and operated by the University of New South Wales, and located at Siding Spring Observatory, Australia. The previous CCD camera images a 2x3 degree field with 9.4 arcsecond pixels. The telescope is entirely computer-controlled, with the possibility of remote or fully automated observation. A new camera is currently being installed on the APT. An important drawback of using a wide-field telescope is that the images tend to be undersampled (as is the case with the APT). Because the sensitivity of a CCD is not uniform over the surface of a single pixel, the total brightness measured for a star depends on where the star falls relative to pixel boundaries. This becomes significant in undersampled images, where the light from a star is spread over only a few pixels. For the APT, this effect proved to be the limiting factor, causing photometric errors of several percent. We have developed a new observing technique specifically aimed at minimising the effects of intra-pixel variations. We systematically move the telescope during integration, in a raster-type scan covering 1x1 or 2x2 pixels. Although this does broaden the PSF slightly, it effectively eliminates the intra-pixel variation problem. The resulting PSF is fairly flat-topped with a very rapid falloff.
We therefore developed an optimised aperture photometry package to process the resulting images, with co-located apertures for each object positioned to better than 1/100th of a pixel repeatability in each frame. Using this system we are obtaining differential photometric precision of ~2 mmag down to V = 11 in 150 sec exposures. The lightcurves are further processed to find and remove periodic signals from variable stars and searched for transit signals using a variant of the Gregory-Loredo algorithm optimised for transit detection. We are seeking PhD students to work with our team. We are installing a new CCD camera and need people to work on this exciting project! If you are interested, contact Prof John Webb or Prof Michael Ashley in the People section.
There are many ways to execute external commands from Perl. The most common are:

- the system() function
- the exec() function
- the backticks (``) operator
- the open() function

All of these methods have different behaviour, so you should choose which one to use depending on your particular need. In brief, these are the recommendations:

method      use if ...
system()    you want to execute a command and don't want to capture its output
exec()      you don't want to return to the calling Perl script
backticks   you want to capture the output of the command
open()      you want to pipe the command (as input or output) to your script

More detailed explanations of each method follow:

system() executes the command specified. It doesn't capture the output of the command. system() accepts as argument either a scalar or an array. If the argument is a scalar, system() uses a shell to execute the command ("/bin/sh -c command"); if the argument is an array it executes the command directly, considering the first element of the array as the command name and the remaining array elements as arguments to the command to be executed. For that reason, it's highly recommended for efficiency and safety reasons (especially if you're running a CGI script) that you use an array to pass arguments to system().

#-- calling 'command' with arguments
system("command arg1 arg2 arg3");

#-- better way of calling the same command
system("command", "arg1", "arg2", "arg3");

The return value is set in $?; this value is the exit status of the command as returned by the 'wait' call; to get the real exit status of the command you have to shift the value of $? right by 8 bits ($? >> 8). If the value of $? is -1, then the command failed to execute; in that case you may check the value of $! for the reason of the failure.

if ( $? == -1 ) {
    print "command failed: $!\n";
}
else {
    printf "command exited with value %d", $? >> 8;
}

The exec() function executes the command specified and never returns to the calling program, except in the case of failure because the specified command does not exist AND the exec argument is an array. As with system(), it is recommended to pass the arguments of the function as an array.

With the backticks operator, the command to be executed is surrounded by backticks. The command is executed and the output of the command is returned to the calling script. In scalar context it returns a single (possibly multiline) string; in list context it returns a list of lines, or an empty list if the command failed. The exit status of the executed command is stored in $? (see system() above for details).

#-- scalar context
$result = `command arg1 arg2`;

#-- the same command in list context
@result = `command arg1 arg2`;

Notice that the only output captured is STDOUT; to collect messages sent to STDERR you should redirect STDERR to STDOUT:

#-- capture STDERR as well as STDOUT
$result = `command 2>&1`;

Use open() when you want to:

- capture the output of a command (syntax: open(FH, "command |"))
- feed an external command with data generated from the Perl script (syntax: open(FH, "| command"))

#-- list the processes running on your system
open(PS, "ps -e -o pid,stime,args |") || die "Failed: $!\n";
while ( <PS> ) {
    #-- do something here
}

#-- send an email to user@localhost
open(MAIL, "| /bin/mailx -s test user\@localhost") || die "mailx failed: $!\n";
print MAIL "This is a test message";
Techniques and Methods 6-A35 The computer program PHAST (PHREEQC And HST3D) simulates multicomponent, reactive solute transport in three-dimensional saturated groundwater flow systems. PHAST is a versatile groundwater flow and solute-transport simulator with capabilities to model a wide range of equilibrium and kinetic geochemical reactions. The flow and transport calculations are based on a modified version of HST3D that is restricted to constant fluid density and constant temperature. The geochemical reactions are simulated with the geochemical model PHREEQC, which is embedded in PHAST. Major enhancements in PHAST Version 2 allow spatial data to be defined in a combination of map and grid coordinate systems, independent of a specific model grid (without node-by-node input). At run time, aquifer properties are interpolated from the spatial data to the model grid; regridding requires only redefinition of the grid without modification of the spatial data. PHAST is applicable to the study of natural and contaminated groundwater systems at a variety of scales ranging from laboratory experiments to local and regional field scales. PHAST can be used in studies of migration of nutrients, inorganic and organic contaminants, and radionuclides; in projects such as aquifer storage and recovery or engineered remediation; and in investigations of the natural rock/water interactions in aquifers. PHAST is not appropriate for unsaturated-zone flow, multiphase flow, or density-dependent flow. A variety of boundary conditions are available in PHAST to simulate flow and transport, including specified-head, flux (specified-flux), and leaky (head-dependent) conditions, as well as the special cases of rivers, drains, and wells. 
Chemical reactions in PHAST include (1) homogeneous equilibria using an ion-association or Pitzer specific-interaction thermodynamic model; (2) heterogeneous equilibria between the aqueous solution and minerals, ion-exchange sites, surface-complexation sites, solid solutions, and gases; and (3) kinetic reactions with rates that are a function of solution composition. The aqueous model (elements, chemical reactions, and equilibrium constants), minerals, exchangers, surfaces, gases, kinetic reactants, and rate expressions may be defined or modified by the user. A number of options are available to save results of simulations to output files. The data may be saved in three formats: a format suitable for viewing with a text editor; a format suitable for exporting to spreadsheets and postprocessing programs; and Hierarchical Data Format (HDF), which is a compressed binary format. Data in the HDF file can be visualized on Windows computers with the program Model Viewer and extracted with the utility program PHASTHDF; both programs are distributed with PHAST. See the report PDF for the unabridged abstract.

First posted June 17, 2010

Parkhurst, D.L., Kipp, K.L., and Charlton, S.R., 2010, PHAST Version 2—A program for simulating groundwater flow, solute transport, and multicomponent geochemical reactions: U.S. Geological Survey Techniques and Methods 6–A35, 235 p.

Chapter 1. Introduction
Chapter 2. Running the Simulator
Chapter 3. Thermodynamic Database and Chemistry Data Files
Chapter 4. Flow and Transport Data File
Chapter 5. Output Files
Chapter 6. Examples
Chapter 7. Notation
Appendix A. Three-Dimensional Visualization of PHAST Simulation Results
Appendix B. Using PHASTHDF to Extract Data from the HDF Output File
Appendix C. Parallel-Processing Version of PHAST
Appendix D. Theory and Numerical Implementation
I hope I am not bothering you in any way, but there is a question that has been bothering me for a while now. I have asked my teacher, but he does not know the answer. I have been researching it online and in some textbooks for about a month now and have come up with nothing. I do not want this to seem like my math teacher knows nothing; he is a very intelligent man. I have taken Calculus 1 and 2 in high school and am leaving for college to study engineering in the fall. The question I have, which I would appreciate any little bit of information or direction on, is: where does the 4 come from in the parabola equation (x – h)^2 = 4p(y – k)? This has been bothering me simply because I wish to know the why and how of things. I am sorry if I am taking up your time, and I appreciate you for reading my question. Thank you.

You realize of course that the textbook could easily get rid of the factor 4, by simply introducing a new constant q = 4p, which leads to the same formula but without the puzzling factor. The fact that the "4" is included suggests that the constant "p" has some special significance, which would be lost if the equation were written otherwise. This indeed is the case. When you define a parabola by its equation (x – h)^2 = 4p(y – k), you are using coordinate geometry, invented by Descartes in the 1600s. Today we prefer such a definition, because it allows geometrical properties to be handled by means of numbers, and our age is very, very good at handling numbers. The ancient Greeks, the first to study parabolas, were not so good with numbers (maybe because they lacked the decimal system), but they were very good at pure geometry. They defined a parabola as the collection of points, each of which has the same distance -- call it "q" -- from some point F, its focus, as it has from a straight line, which they called the "directrix" of the parabola. Let us try to translate this condition into the language of coordinate geometry.
Get some paper, draw on it (x,y) axes, and add a freehand sketch of the parabola y = x^2, passing through the origin and rising above it symmetrically on both sides of the y-axis. You need not number the axes. After that please follow one by one the steps outlined below, on paper. Do not try to skip anything, and proceed past any point only after everything preceding is clear. Then when you are done, explain it all to your math teacher.

On the graph you drew above, mark a point D = (0, –q) a distance q below the origin. Place D about as far from the origin as 1/4 of the height of the y-axis you have drawn rising above it. Draw through D a straight line parallel to the x-axis (its equation is y = –q). That will be the directrix. By symmetry, you expect the focus F to be on the y-axis (only then does the Greek condition produce a curve symmetric with respect to that axis). The origin is one of the points of the parabola, at a distance q from the directrix, so by the Greek condition the focus F is a distance q on the other side, at the point (0, q).

Next, mark some point P = (x,y) on your parabola; to best illustrate the argument, choose it with y of about 2q or 3q. Also draw a perpendicular line from P to the directrix, a line parallel to the y-axis. Say it meets the directrix at point Q = (x, –q). You have drawn PQ; now add one last line, PF. By the Greek definition, they are equal in length:

PF = PQ
PQ = y + q
(PF)^2 = (y + q)^2

Expressing (PF)^2 by the theorem of Pythagoras:

(y – q)^2 + x^2 = (y + q)^2

Multiply out the squares! You find you can now subtract y^2 and also q^2 from both sides, leaving just

–2yq + x^2 = 2yq
x^2 = 4qy

But suppose we are working, not in (x,y) coordinates where the lowest point of the parabola happens to strike the origin, but in (X,Y) coordinates, in which the lowest point is at some arbitrary point (X,Y) = (h,k).
The connection between the two systems is a simple shift in x and y:

    x = (X – h)
    y = (Y – k)

The equation of the parabola now becomes

    (X – h)² = 4q(Y – k)

That is, of course, your formula. You can see that when the factor 4 is included, q (or p in your version) has a geometrical meaning: it is the equal distance which all points on the parabola maintain from a point and from a straight line.
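As a quick numerical sanity check (a sketch added here, not part of the original exchange), the Greek focus–directrix condition PF = PQ can be verified for points on x² = 4qy:

```python
import math

q = 2.0  # focal distance: focus at F = (0, q), directrix is the line y = -q

for x in [-3.0, -1.0, 0.5, 2.0, 4.0]:
    y = x * x / (4 * q)        # a point P = (x, y) on the parabola x^2 = 4qy
    pf = math.hypot(x, y - q)  # distance from P to the focus F
    pq = y + q                 # perpendicular distance from P to the directrix
    assert math.isclose(pf, pq), (x, pf, pq)
```

Every point on the curve is exactly as far from the focus as from the directrix, which is why writing the constant as 4q (or 4p) keeps that distance visible in the equation.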
A fullerene is any molecule composed entirely of carbon in the form of a hollow sphere, ellipsoid, or tube. Spherical fullerenes are also called buckyballs, and they resemble the balls used in association football. Cylindrical ones are called carbon nanotubes or buckytubes. The fullerene was discovered in 1985 by Robert Curl, Harold Kroto and Richard Smalley of the University of Sussex and Rice University, and was named after Buckminster Fuller because his famous geodesic domes are similar in shape.
In the animal kingdom, slugs and snails belong in the phylum Mollusca, a word meaning 'soft animal'. Estimates of the number of different mollusc species vary according to the source, ranging between 30,000 and 120,000 living species. Members of the group are found in nearly all available biotopes on earth. The phylum Mollusca is subdivided into 8 classes. Except for the Monoplacophora, representatives of all classes are found in the North Sea. The class Monoplacophora contains only a single genus (Neopilina), with a number of species known from abyssal zones in the Pacific, Atlantic and Indian Oceans. All other classes will be introduced in separate chapters in the 'Introduction'. On this CD-ROM, the classification of the Check List of European Marine Mollusca (CLEMAN) from the M.N.H.N. in Paris is used.
How to Watch

This is the most important question after 'when is it happening?'. Looking directly at the Sun is sheer stupidity, so you'll need some equipment.

The cheapest and least effort-requiring alternative

Solar goggles will be sold in your neighbouring areas somewhere. Get them. Make sure that they are proper by checking that you can look at the sun comfortably without your eyes hurting even a bit! Makeshift goggles made out of old X-ray plates might not be a good idea, especially for the sun in the afternoon or late morning. Venus or no Venus, your eyes are the most important things to take care of. Never forget that!

Cheap alternative, but requires a bit more handiwork

One cheap way to look at the transit is to project it. You'll need a few things, and I think you'll enjoy the thrill of making a simple tool to observe a cosmic event. You'll need two magnifying lenses – one being the eyepiece and the other the objective. Make sure that the objective is a bigger lens than the eyepiece. Make a simple cardboard roll. Place another roll inside it, so that it fits snugly but can still slide without too much of a problem. Place the lenses properly within the rolls (before sticking them permanently, of course), so that you can get a decent image. Adjust the distance after that by sliding one cardboard roll against the other and ensure proper focus. Aim it at the sun and project it onto a white sheet of paper. If properly focussed, you should be able to see Venus quite clearly!

Strict warning: Do NOT look through the telescope while it is aimed at the sun. This is more dangerous than looking directly at the Sun, since the telescope actually collects sunlight and focuses it. You'll run a serious risk of being blinded! Do NOT do it!

Not so cheap alternative

If you're a member of the local amateur astronomy club, or if they are organising something in the locality, then filters on a proper telescope are the best way to go!
Make sure the guys use proper filters – red-orange filters are the way to go. The filters should be of good make; otherwise prolonged exposure might damage the filters and, in turn, the telescope CCD. With the filters installed, you can actually look directly at the Sun! The glory of Venus making its way should be evident! The upside of a fancy telescope is that you'll also be able to see a few sunspots. The filters aren't too expensive, but they aren't dirt cheap either! So if you're the handyman and your friend has a telescope, collaborate to set up something that you can tell your grandchildren about.

Capturing on the Camera

Unfortunately, the only way to get a good image of the transit on film (real or virtual) is to actually point the camera into the sun and click. With the Sun being so bright, this is a very bad idea; your camera CCD/film will not like it! The only way around that is filters. Make sure you have red-orange filters appropriately shaped for your camera lens before you even think of aiming at the Sun and clicking. Contact your local amateur astronomy club(s) or planetaria for the exact filters needed and available. Search for ND5 filters (Neutral Density-5 filters), if possible.

What to watch for

Venus entering and/or exiting the solar disc will be a treat. It will look like an oil drop falling into a liquid. Venus has a dense atmosphere, making the edges of the planet blur out against the bright disc of the Sun. Towards the end of the transit, a thin sliver of the sun's disc will be visible between the edge of Venus and the edge of the solar disc. As Venus proceeds further, this sliver will also be "pinched off". That'll be a sight to behold.

The Venus Cycle!

Now, onto the real nitty-gritty of the cycle of Venus. When can we see the transit, and does it really have a period? It turns out that it does – a massive period of 243 years! The transits of Venus occur in pairs, with each pair separated by a relatively long time.
The paired transits differ by 8 years. The last transit occurred in 2004 and this one is occurring this year – after 8 years, on the dot! But this pair will be separated by a long 105.5 years from the next, with the one after this year's occurring in December 2117. The cycle is pretty complicated. The one after that will occur in 2125 – after the proper 8-year gap (December again!). The one after that will occur after 121.5 years! So the complicated cycle looks like this: 8–105.5–8–121.5 years between successive transits. This entire 243-year cycle is then repeated. Look above for the diagrammatic representation.

So let's look at the takeaway points:
1. Take a look at the GMT time and figure out the time for the transit in your area.
2. Make sure that you have proper equipment for observing the event. Do NOT see it with the naked eye. Projection onto a surface is an easy way, but it will require a little preparation. Youtube videos are your friends!
3. Look for solar goggles of good make. Remember, your eyes are of foremost importance, not Venus! The best solution – and the most expensive one – is buying filters for a telescope. This will give you a magnified and sharp image.
4. If you are geographically lucky enough to see the start or the end, do keep an eye out for the "pinch effect".

Have fun! Watch the Venus Transit safely.
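The arithmetic of the cycle is easy to check in a couple of lines (a throwaway sketch, not from the article; the real dates alternate between June and December, so only whole-year gaps are shown):

```python
# The repeating cycle of gaps (in years) between successive Venus transits
gaps = [8, 105.5, 8, 121.5]

# The four intervals add up to one full 243-year cycle
assert sum(gaps) == 243

# Starting from the 2004 transit, the next one follows 8 years later, in 2012
assert 2004 + gaps[0] == 2012
```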
Science Fair Project Encyclopedia

Kuiper Airborne Observatory

The Gerard P. Kuiper Airborne Observatory (KAO) was a national facility operated by NASA to support research in infrared astronomy. The observation platform was a highly modified C-141A jet transport aircraft with a range of 6,000 nautical miles, capable of conducting research operations up to 45,000 feet (14 km). The KAO's telescope was a conventional Cassegrain reflector with a 36-inch (91.5 cm) aperture, designed primarily for observations in the 1 to 500 μm spectral range. Its flight capability allowed it to rise above almost all of the water vapor in the earth's atmosphere (allowing observations of infrared radiation, which is otherwise blocked before reaching ground-based facilities), as well as travel to almost any point on the earth's surface for an observation.

The KAO made several major discoveries, including the first sightings of the rings of Uranus in 1977 and a definitive identification of an atmosphere on Pluto in 1988. The KAO was used to study the origin and distribution of water and organic molecules in regions of star formation, and in the vast spaces between the stars. Kuiper astronomers also studied the disks surrounding certain stars that may be related to the formation of planetary systems around these stars. Peering still deeper into space, KAO astronomers studied powerful far-infrared emissions from the center of our galaxy and other galaxies. Scientists onboard the KAO tracked the formation of heavy elements like iron, nickel, and cobalt from the massive fusion reactions of Supernova 1987A.

The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.
...auratus) of eastern North America, which has more than 100 local names. This golden-winged form, which measures about 33 cm (13 inches) in length, is replaced in the West (to Alaska) by the red-shafted flicker (C. cafer), considered by many authorities to represent the same species as the yellow-shafted because the two forms hybridize frequently. The campos, or pampas, flicker...
Team Number: 068
School Name: Shiprock High School
Area of Science: Environmental Science
Project Title: Water Pollution

Our definition of water pollution is the contamination of water caused by human activities, either intentionally or accidentally. An example of intentional pollution is people knowingly dumping toxic waste into bodies of water. An example of accidental pollution is oil from leaking vehicles seeping into the ground and contaminating ground water. We plan to study the problem of water pollution and its effects on people in San Juan County. It is important to learn about water pollution so we can have a better understanding of the problem and perhaps slow the rate of the pollution. As we work through our project, we hope to gain greater knowledge of how to be environmentally friendly and find ways to conserve what natural water we have left.
Wind can be used to do work. The kinetic energy of the wind can be changed into other forms of energy, either mechanical energy or electrical energy. When a boat lifts a sail, it is using wind energy to push it through the water. This is one form of work. Farmers have been using wind energy for many years to pump water from wells using windmills like the one on the right. In Holland, windmills have been used for centuries to pump water from low-lying areas. Wind is also used to turn large grinding stones to grind wheat or corn, just like a water wheel is turned by water power.

Today, the wind is also used to make electricity. Blowing wind spins the blades on a wind turbine – just like a large toy pinwheel. This device is called a wind turbine and not a windmill: a windmill grinds or mills grain, or is used to pump water. The blades of the turbine are attached to a hub that is mounted on a turning shaft. The shaft goes through a gear transmission box where the turning speed is increased. The transmission is attached to a high-speed shaft which turns a generator that makes electricity. If the wind gets too high, the turbine has a brake that will keep the blades from turning too fast and being damaged.

You can use a single smaller wind turbine to power a home or a school. A small turbine makes enough energy for a house. In the picture on the left, the children at this Iowa school are playing beneath a wind turbine that makes enough electricity to power their entire school.

We have many windy areas in California, and wind is blowing in many places all over the earth. The only problem with wind is that it is not windy all the time. In California, it is usually windier during the summer months, when wind rushes inland from cooler areas like the ocean to replace hot rising air in California's warm central valleys and deserts. In order for a wind turbine to work efficiently, wind speeds usually must be above 12 to 14 miles per hour.
Wind has to be this speed to turn the turbines fast enough to generate electricity. The turbines usually produce about 50 to 300 kilowatts of electricity each. A kilowatt is 1,000 watts (kilo means 1,000). You can light ten 100-watt light bulbs with 1,000 watts. So, a 300-kilowatt (300,000-watt) wind turbine could light up 3,000 light bulbs that use 100 watts!

As of 1999, there were 11,368 wind turbines in California. These turbines are grouped together in what are called wind "farms," like those in Palm Springs in the picture on the right. These wind farms are located mostly in the three windiest areas of the state:
- Altamont Pass, east of San Francisco
- San Gorgonio Pass, near Palm Springs
- Tehachapi, south of Bakersfield

Together these three places in California make enough electricity to supply an entire city the size of San Francisco! About 11 percent of the entire world's wind-generated electricity is produced in California. Other countries that use a lot of wind energy are Denmark and Germany. Once electricity is made by the turbine, the electricity from the entire wind farm is collected together and sent through a transformer, where the voltage is increased so it can be sent long distances over high-power lines.
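The light-bulb arithmetic above can be checked directly (a trivial sketch, not part of the lesson):

```python
kilo = 1000                  # "kilo" means 1,000
turbine_watts = 300 * kilo   # one 300 kW wind turbine at full output
bulb_watts = 100             # a single 100 W light bulb

bulbs = turbine_watts // bulb_watts
print(bulbs)  # -> 3000, matching the figure in the text
```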
Discussion about math, puzzles, games and fun. Useful symbols: ÷ × ½ √ ∞ ≠ ≤ ≥ ≈ ⇒ ± ∈ Δ θ ∴ ∑ ∫ • π ƒ -¹ ² ³ °

Topic review (newest first)

That is always a danger and some of that will definitely happen. Everyone has slowed down with mental calculation. I am still learning though. For instance, I did not want to just type in "FourierSeries[e^x]" because that won't show me how they dealt with the problem I had in post #1, which was having an undefined term (division by 0) at n = 1. The worry I have, for example, is having my knowledge of the concept deteriorate. Now that I use a calculator quite often I do not have to do something like 357*762 in my head, so over time, I have got slower at doing mental calculations. I am worried that the same thing would happen with this, for example. I may forget how to find Fourier series because I am used to getting something to do it for me. Most of the time I use Mathematica. Today algebra is sort of like square roots of numbers. If you had to evaluate √(234.176253) you would turn to your calculator. If you needed to multiply 102536 * 776241 you would turn to your calculator. At one time people did them both with pencil and paper. Same thing now with mechanical symbolic math. When you are learning do it by hand, when you know it use a CAS! It happens automatically. It is inherent in the fit. Oh okay... but, how can we get this from the Fourier series? Least squares minimize the square of the error between the fit equation and the data. First you start with an overdetermined system. What do you mean by least squares? I have heard the term thrown around for regression lines in statistics. That I do not know for sure. The Fourier fit is also least squares or minimax, I am not sure. Oh, I see. Are they orthogonal because sine is 90° out of phase with cosine, and the Fourier series is a sum of sines and cosines? Taylor series are not orthogonal but they are osculating so they have some benefits.
I see them. So the Fourier series have an orthogonal basis? And I am guessing Taylor series do not? I understand that orthogonality is preferred since it gives you the least possible error. But I can't see how this relates to our Fourier series for e^x. Where are the orthogonal lines? When we curve fit using ordinary polynomials x, x^2, x^3, x^4, x^5,...as the basis we can see by graphing how much they are like example 3. Look at that mess around the origin. All of them on top of each other. That is why it is not recommended to curve fit a function using powers higher than say 10. The accumulated error makes them very difficult to get accurate results.
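The thread doesn't show the computation itself, but the idea of expanding a function in the orthogonal trigonometric basis can be sketched numerically. Below, the Fourier coefficients of e^x on [−π, π] are estimated by trapezoid-rule integration and compared with the known closed forms a_n = (−1)^n · 2 sinh(π)/(π(1+n²)) and b_n = (−1)^(n+1) · 2n sinh(π)/(π(1+n²)); this is a standalone sketch, not the Mathematica workflow discussed above:

```python
import math

def fourier_coeffs(f, n, samples=20000):
    """Estimate a_n, b_n of f on [-pi, pi] with the trapezoid rule."""
    h = 2 * math.pi / samples
    a = b = 0.0
    for k in range(samples + 1):
        x = -math.pi + k * h
        w = 0.5 if k in (0, samples) else 1.0  # trapezoid endpoint weights
        a += w * f(x) * math.cos(n * x)
        b += w * f(x) * math.sin(n * x)
    return a * h / math.pi, b * h / math.pi

n = 1
a1, b1 = fourier_coeffs(math.exp, n)
exact_a1 = (-1) ** n * 2 * math.sinh(math.pi) / (math.pi * (1 + n * n))
exact_b1 = (-1) ** (n + 1) * 2 * n * math.sinh(math.pi) / (math.pi * (1 + n * n))
assert abs(a1 - exact_a1) < 1e-4 and abs(b1 - exact_b1) < 1e-4
```

Because the basis functions cos(nx) and sin(nx) are mutually orthogonal on the interval, each coefficient is computed by an independent integral, with no overdetermined system to solve.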
GLACIOLOGISTS are in dispute over how a highly contentious claim about the speed at which glaciers are melting came to be included in the latest report of the Intergovernmental Panel on Climate Change. A decade ago, New Scientist reported (5 June 1999, p 18) a comment by the leading Indian glaciologist Syed Hasnain, who said in an email interview with this author that all the glaciers in the central and eastern Himalayas could disappear by 2035. Hasnain, of Jawaharlal Nehru University in Delhi, has never repeated the prediction in a peer-reviewed journal, and now says it was "speculative". The claim found its way into the IPCC's fourth assessment report published in 2007. Moreover, it was extrapolated to all glaciers in the Himalayas. This has angered many glaciologists, who regard the claim as unjustified. Vijay Raina, a leading ...
We're thinking about probability more often than we might realise. Even weather forecasts are, at heart, 'chances are' reports
- Duration: 10 mins
- Published on: Wednesday 5th January 2005
- Introductory Level
- Posted under: Statistics

Many of the things that statisticians and others investigate involve uncertainty. What will be the UK level of unemployment in a year's time? If I take this drug my doctor has prescribed, will my health improve? If I drink a glass or two of red wine every day, will I live a longer or shorter time than if I don't drink wine at all? Will it rain tomorrow? The mathematical tool that is generally used to deal with such uncertainty is called probability. Probability is a way of expressing the uncertainty of an event in terms of a number on a scale. The most common way, among statisticians at least, of expressing this uncertainty is on a scale going from 0 to 1, where impossible events are given a probability of 0 and events that will certainly happen are given a probability of 1. Other events, that might or might not happen, are given probabilities at intermediate points on the scale. So an event that is as likely to happen as not is given a probability halfway along the scale, at ½ or 0.5. An event that is pretty likely to happen, but could possibly not happen, might have a probability of 0.95. Other scales are used for probabilities. Sometimes they are expressed on a percentage scale, where impossible events have a probability of 0%, events that are certain get a probability of 100%, an event as likely as not to happen has a probability of 50% and so on. Bookmakers (and statisticians in some contexts) usually express uncertainty in terms of odds rather than probability.
If a horse-racing expert says that the odds on a particular horse winning a particular race are 1 to 2, he or she means that the chance of the horse not winning is twice as big as the chance of the horse winning. Expressing this on a probability scale going from 0 to 1, the probability that the horse will win the race is 1/3, and the chance that it doesn't win is 2/3. Probabilities obey various mathematical rules, many of which are quite simple and straightforward. For instance, tomorrow it will either rain or not rain. If the Met Office gives the probability of rain tomorrow as, let's say, 0.2 (that is, 20%), then the probability that it won't rain is 1 – 0.2, which comes to 0.8. In general, if the probability of an event is p, the probability that the event won't happen is 1 – p. Using this and many other mathematical rules, a large body of mathematical theory about probabilities has been built up over several centuries. Probability theory has been used to refine our understanding of random and chance occurrences in the world, a subject in which people's intuitions often lead them astray. If we want to use this body of theory to tell us something useful about the world, we need to have an understanding of what a probability means in terms of things in the real world. Actually, philosophers, statisticians and probability theorists have made this link between the maths and the world in several different ways. One common way, probably the most common, is as follows. Suppose I'm going to toss a coin, and I say that the probability it will come up Heads is ½. OK, this means that a Head is as likely as a Tail, but what does that mean more precisely? Imagine that, instead of tossing the coin just once, I keep on tossing it again and again. In 100 tosses, I wouldn't be surprised if the number of Heads wasn't exactly 50. It might be 48 or 55, but it wouldn't be too far from 50.
In 1000 tosses, again, I wouldn’t expect exactly 500 Heads, but the proportion of heads would be very close to half. If the probability of a Head is really ½, then as I keep tossing again and again, the proportion of Heads would tend to get closer and closer to a half. This long-run meaning of probability is all very well, but it doesn’t make much sense in contexts where things cannot be repeated. If a horseracing expert says that a particular horse’s probability of winning a particular race is ½, then it is hard to imagine the same horse running exactly the same race again and again and counting up how often it wins. The expert may well mean instead that he or she would be prepared to bet on the horse winning if the odds on offer were better than evens, but would not be prepared to bet if the odds were worse than evens. There’s no notion of repeating the race behind this sort of thinking, so it can be applied more widely than the long-run idea. But it has an inevitable subjective aspect. A different horseracing expert might give a different probability for the same horse winning the same race. Probability forecasts for weather don’t explicitly relate to betting, but some of the ideas behind them are the same. Suppose the Met Office says that the probability of rain tomorrow in your region is 20%. They aren’t really talking about long-run repetitions of tomorrow. Tomorrow’s only going to happen once. They also aren’t saying that it will rain in 20% of the land area of your region, and not rain in the other 80%. No, forecasts like this are considerably less subjective than most horseracing tips, in that they are based on extensive observations and complicated computer models of the weather, but fundamentally they are just a way of expressing that, even with all that technology, tomorrow’s weather is uncertain. 
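The long-run idea, the odds-to-probability conversion, and the complement rule can all be illustrated with a short simulation (a sketch added here, not part of the original article):

```python
import random

random.seed(1)  # fixed seed so the run is repeatable

# Long-run relative frequency: toss a fair coin many times and watch
# the proportion of Heads settle near the probability 1/2.
n = 100_000
heads = sum(random.random() < 0.5 for _ in range(n))
proportion = heads / n
assert abs(proportion - 0.5) < 0.01

# Bookmakers' odds of "1 to 2" on winning mean P(win) = 1 / (1 + 2) = 1/3
odds_for, odds_against = 1, 2
p_win = odds_for / (odds_for + odds_against)
assert abs(p_win - 1 / 3) < 1e-12

# Complement rule: if P(rain) = 0.2, then P(no rain) = 1 - 0.2 = 0.8
assert abs((1 - 0.2) - 0.8) < 1e-12
```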
One can be more precise about the actual uncertainty by saying that the chance of rain is 20%, rather than just using words and saying "it might rain" or "there's some chance of rain".

Find out more

Thinking in terms of probability can often help to make more sense of situations involving chance, where our intuition can lead us astray. An example is the notorious Monty Hall problem.
At the 2010 Olympic Winter Games in Vancouver, 20 lasers were used in a nightly light show in which people from around the world controlled the beams through public Internet access.

mirror—which was made of aluminized Mylar and spanned 90 feet across. In the main hall, visitors could see their reflections hanging upside down above their heads. To get to that hall, they walked through a dark clam-shaped room illuminated by sound-activated laser beams shining downward and tracing Lissajous figures on the floor. E.A.T. commissioned Cross and Jeffries to design the laser-and-sound display. An estimated 2 million people visited the Pepsi Pavilion during the six months of Expo '70. However, according to Cross's website, company officials were not able to maintain the technical exhibits to the artists' standards, and the building was demolished soon after the exposition ended. Cross moved to the University of Iowa and created a mixed-gas argon-krypton ion laser show—complete with symphony orchestra, soloists and electronic music—for the opening of a new campus auditorium in 1972.

Scientist Elsa Garmire pioneered the use of textured lenses to create these artistic "lumia" effects for Laserium.

Garmire explores her artistic side

Toward the end of the 1960s, Elsa Garmire, a postdoc at Caltech who had received her doctorate at MIT under OSA Honorary Member Charles H. Townes, found that her research into ultra-short laser pulses had become stalled. So, for a few years, she turned her attention to experiments that brought art and technology together. As part of Caltech's celebration of the first moon landing in 1969, Garmire designed a laser light "wall" that people could walk through. In another experiment, she and her friends hauled an argon laser all the way to the top of the campus library and staged a light show. Garmire was assisted by a graduate student who held a mirror to reflect the beam.
Although she knew how to make Lissajous figures, Garmire was more interested in creating abstract diffraction patterns through textured glass and plastic. "I wanted to be a pure artist," she said. Through her experimentation, she discovered that Duco household cement formed bubbles when it dried on glass and made the most interesting patterns when a laser shone through those bubbles.

At an E.A.T. conference at the University of Southern California in November 1970, Garmire displayed photographs of her artwork and invited the filmmaker Ivan Dryer to visit her lab. Dryer set up a 16-mm movie camera, but when he saw the brilliant patterns of laser speckles on the wall, he realized that "film would not cover the intense color and scope" of the laser effects and became convinced that live lasers would be better than any motion picture. A onetime college astronomy major, Dryer had volunteered off and on for a decade as a guide at Griffith Observatory, a public science center in downtown Los Angeles. So, using a single He-Ne laser and one of Garmire's lumia for creating interference patterns that made an "undulating, kind of organic image among the stars," he prepared a demonstration show for Griffith officials and proposed calling the show Laserium, or "house of the lasers." Undaunted by Griffith's initial rejection, in January 1973 Dryer formed a company called Laser Images Inc., with Garmire as president. The company provided some laser effects for a rock-music documentary called Medicine Ball Caravan, as well as some live concerts by Alice Cooper and a building's grand opening. Garmire eventually decided to return to scientific research. After her stint at Caltech, she spent two decades at USC—during which she served as OSA's 1993 President. She is now an OSA Fellow and engineering professor at Dartmouth College in New Hampshire. In June 1973, the Laserium team borrowed a 1-W krypton laser from Spectra-Physics and set up another demonstration in a vacant Caltech lab.
Of the 120 invitees, only two showed up—but one was the new Griffith director, William J. Kaufmann III, a young science popularizer and a “pretty hip
Keplerian orbital elements

Filed under: explaining science

In this diagram, an orbital plane (yellow) intersects a reference plane (gray). For objects in solar orbit, the reference plane is usually the plane of the ecliptic. The intersection is called the line of nodes, as it connects the center of mass with the ascending and descending nodes. The Vernal Point (♈) is the heliocentric longitude of 0°, the angular position of Earth's northern vernal equinox. Heliocentric longitudes count up in a prograde direction (counterclockwise, when viewed from the north side of the ecliptic plane). Lasunncty via Wikimedia Commons
A robotic sensor that won an R&D 100 Award in 2009 has been put to use by Woods Hole Oceanographic Institution (WHOI) in Gulf of Maine coastal waters to monitor the way red tides behave. These harmful algal blooms, which generate a potentially fatal toxin, can be a challenge to track or predict. The Environmental Sample Processors have been remotely deployed and should simplify and enhance this effort.

Among its many talents, silver is an antibiotic. Titanium dioxide is known to glom on to certain heavy metals and pollutants. Other materials do the same for salt. In recent years, environmental engineers have sought to disinfect, depollute, and desalinate contaminated water using nanoscale particles of these active materials. Engineers call them nanoscavengers.

Scientists sampling 127 shallow drinking water wells in areas overlying Fayetteville Shale gas production in north-central Arkansas found no evidence of groundwater contamination. The team of scientists at Duke University and the U.S. Geological Survey (USGS) analyzed the samples for major and trace elements and hydrocarbons, and used isotopic tracers to identify the sources of possible contaminants.

Detecting greenhouse gases in the atmosphere could soon become far easier with the help of an innovative technique developed by a team at NIST, where scientists have overcome an issue preventing the effective use of lasers to rapidly scan samples. The team says the technique also could work for other jobs that require gas detection, including the search for hidden explosives and monitoring chemical processes in industry and the environment.
Researchers studying the origin of cirrus clouds have found that these thin, wispy trails of ice crystals are formed primarily on dust particles and some unusual combinations of metal particles—both of which may be influenced by human activities. The findings are important, scientists say, because cirrus clouds cover as much as one-third of the Earth and play an important role in global climate.

Researchers have cautioned that more work is needed to understand how microorganisms respond to the disinfecting properties of silver nanoparticles, increasingly used in consumer goods and for medical and environmental applications. Although nanosilver has effective antimicrobial properties against certain pathogens, overexposure to silver nanoparticles can cause other potentially harmful organisms to rapidly adapt and flourish.

University of Manchester scientists, writing in Nature Geoscience, have shown that natural emissions and manmade pollutants can both have an unexpected cooling effect on the world's climate by making clouds brighter. Clouds are made of water droplets, condensed on to tiny particles suspended in the air. When the air is humid enough, the particles swell into cloud droplets.

The growing global demand for energy, combined with a need to reduce emissions and lessen the effects of climate change, has increased focus on cleaner energy sources. But what unintended consequences could these cleaner sources have on the changing climate? Researchers at Massachusetts Institute of Technology now have some answers to that question, using biofuels as a test case.

For the first time, researchers from institutions around the country have conducted an identical series of toxicology tests evaluating lung-related health impacts associated with widely used engineered nanomaterials (ENMs).
The study provides comparable health risk data from multiple laboratories, which should help regulators develop policies to protect workers and consumers who come into contact with ENMs. In an effort to determine if conditions were ever right on Mars to sustain life, a team of scientists has examined a meteorite that formed on the red planet more than a billion years ago. And although this team’s work is not specifically solving the mystery, it is laying the groundwork for future researchers to answer this age-old question. Long-term exposure to air pollution may be linked to heart attacks and strokes by speeding up atherosclerosis, or "hardening of the arteries," according to a University of Michigan public health researcher and colleagues from across the U.S. Nanotechnology typically describes any material, device, or technology with feature sizes smaller than 100 nanometers. This new and still largely uncharted research direction has sparked extensive development of new products and drug-delivery methods. To achieve these discoveries, scientists must rely on specialized instruments and materials to drive their experiments and analysis. The most comprehensive evaluation of temperature change on Earth’s continents over the past 1,000 to 2,000 years indicates that a long-term cooling trend—caused by factors including fluctuations in the amount and distribution of heat from the sun, and increases in volcanic activity—ended late in the 19th century. Using a new laboratory geochemical technique to analyze heavy isotopes of carbon and oxygen in fossil snail shells, scientists have gained insights into an abrupt climate shift that transformed the planet nearly 34 million years ago. At that time, the Earth switched from a warm and high-carbon dioxide "greenhouse" state to the lower-carbon dioxide, variable climate of the modern "icehouse" world. 
When superstorm Sandy turned and took aim at New York City and Long Island last October, ocean waves hitting each other and the shore rattled the seafloor and much of the United States—shaking detected by seismometers across the country, University of Utah researchers have recently found. These “microseisms” generated by Sandy were detected by Earthscope, a network of 500 portable seismometers. Almost three weeks after China reported finding a new strain of bird flu in humans, experts are still stumped by how people are becoming infected when many appear to have had no recent contact with live fowl and the virus isn't supposed to pass from person to person. Scientists at Lawrence Livermore National Laboratory and the University of California, Berkeley have discovered new materials to capture methane, the second highest concentration greenhouse gas emitted into the atmosphere. The research team performed systematic computer simulation studies on the effectiveness of methane capture using two different materials—liquid solvents and nanoporous zeolites. A Purdue University-led team of researchers discovered sunlit snow to be the major source of atmospheric bromine in the Arctic, the key to unique chemical reactions that purge pollutants and destroy ozone. The team's findings suggest the rapidly changing Arctic climate—where surface temperatures are rising three times faster than the global average—could dramatically change its atmospheric chemistry. New research indicates that cutting emissions of certain pollutants can greatly slow sea level rise this century. Scientists focusing on emissions of four heat-trapping pollutants—methane, tropospheric ozone, hydrofluorocarbons, and black carbon—found that reducing these pollutants, which cycle comparatively quickly through the atmosphere, could temporarily slow the rate of sea level rise by roughly 25 to 50%. 
The Food and Drug Administration says it has uncovered potential safety problems at 30 specialty pharmacies that were inspected in the wake of a recent outbreak of meningitis caused by contaminated drugs. The agency said its inspectors targeted 31 compounding pharmacies that produce sterile drugs, which must be prepared under highly sanitary conditions. Researchers have successfully measured reaction rates of a second Criegee intermediate, CH3CHOO, and proven that the reactivity of the atmospheric chemical depends strongly on which way the molecule is twisted. The measurements will provide further insight into hydrocarbon combustion and atmospheric chemistry. For decades, no one worried much about the air quality inside people’s homes. Then scientists at Lawrence Berkeley National Laboratory made the discovery that the aggregate health consequences of poor indoor air quality are as significant as those from all traffic accidents or infectious diseases in the United States. They are now working on turning those research findings into science-based solutions. A comprehensive marine biodiversity observation network could be established with modest funding within five years, according to a recently published assessment from a team led by J. Emmett Duffy of the Virginia Institute of Marine Science. Such a network, they say, would fill major gaps in scientists' understanding of the global distribution of marine organisms. Variations in nutrient availability in the world's oceans could be a vital component of future environmental change, according to a research team. Their research reviews what we know about ocean nutrient patterns and interactions, and how they might be influenced by future climate change and other man-made factors. The authors also highlight how nutrient cycles influence climate by fuelling biological production. For decades, scientists have used sophisticated instruments and computer models to predict the nature of droughts. 
The majority of these models have steadily predicted an increasingly frequent and severe global drought cycle. But a recent study from a team of researchers in the United States and Australia suggests that one of these widely used tools—the Palmer Drought Severity Index (PDSI)—may be incorrect. As recently as 5,000 years ago, the Sahara was a verdant landscape, with sprawling vegetation and numerous lakes. The Sahara’s “green” era likely lasted from 11,000 to 5,000 years ago, and is thought to have ended abruptly. Now researchers have found that this abrupt climate change occurred nearly simultaneously across North Africa.
<urn:uuid:4d90564d-a2b9-4d25-a351-fec331e3f09e>
3.078125
1,864
Content Listing
Science & Tech.
22.756667
Here is a listing of the most commonly used terms and definitions associated with weather.

Absolute humidity: The mass of water vapor in a given volume of air (i.e., the density of water vapor in a given parcel), usually expressed in grams per cubic meter.

Actual vapor pressure: The partial pressure exerted by the water vapor present in a parcel. Water in a gaseous state (i.e., water vapor) exerts a pressure just like the atmospheric air. Vapor pressure is also measured in millibars.

Condensation: The phase change of a gas to a liquid. In the atmosphere, the change of water vapor to liquid water.

Dewpoint: The temperature to which air would have to be cooled for saturation to occur, assuming no change in air pressure or moisture content of the air.

Dry bulb temperature: The actual air temperature. See wet bulb temperature below.

Freezing: The phase change of liquid water into ice.

Evaporation: The phase change of liquid water into water vapor.

Melting: The phase change of ice into liquid water.

Mixing ratio: The mass of water vapor in a parcel divided by the mass of the dry air in the parcel (not including water vapor).

Relative humidity: The amount of water vapor actually in the air divided by the amount of water vapor the air can hold, expressed as a percentage. One way to compute it is to divide the actual vapor pressure by the saturation vapor pressure and then multiply by 100 to convert to a percent.

Saturation of air: The condition under which the amount of water vapor in the air is the maximum possible at the existing temperature and pressure. Condensation or sublimation will begin if the temperature falls or water vapor is added to the air.

Saturation vapor pressure: The maximum partial pressure that water vapor molecules would exert if the air were saturated with vapor at a given temperature. Saturation vapor pressure increases rapidly with temperature.

Specific humidity: The mass of water vapor in a parcel divided by the total mass of the air in the parcel (including water vapor).

Sublimation: In U.S. meteorology, the phase change of water vapor in the air directly into ice or the change of ice directly into water vapor. Chemists, and sometimes meteorologists, refer to the vapor-to-solid phase change as "deposition."

Wet bulb temperature: The lowest temperature that can be obtained by evaporating water into the air at constant pressure. The name comes from the technique of putting a wet cloth over the bulb of a mercury thermometer and then blowing air over the cloth until the water evaporates. Since evaporation takes up heat, the thermometer will cool to a lower temperature than a thermometer with a dry bulb at the same time and place. Wet bulb temperatures can be used along with the dry bulb temperature to calculate dew point or relative humidity.
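The relative-humidity definition above divides the actual vapor pressure by the saturation vapor pressure. A minimal sketch of that calculation, using the widely used Magnus approximation for saturation vapor pressure over water (the exact coefficients vary slightly by source), and using the fact that the actual vapor pressure equals the saturation pressure at the dewpoint:

```python
import math

def saturation_vapor_pressure_hpa(t_celsius: float) -> float:
    """Magnus approximation for saturation vapor pressure over water, in hPa."""
    return 6.112 * math.exp(17.67 * t_celsius / (t_celsius + 243.5))

def relative_humidity(t_celsius: float, dewpoint_celsius: float) -> float:
    """RH (%) = actual vapor pressure / saturation vapor pressure * 100.

    The actual vapor pressure is taken as the saturation pressure at the
    dewpoint, since air cooled to its dewpoint is exactly saturated.
    """
    e = saturation_vapor_pressure_hpa(dewpoint_celsius)
    e_s = saturation_vapor_pressure_hpa(t_celsius)
    return 100.0 * e / e_s

print(round(relative_humidity(30.0, 20.0)))  # ≈ 55 (warm, fairly humid air)
```

Note how steeply the saturation pressure rises: at the same dewpoint, warming the air by 10 °C drops the relative humidity by nearly half.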
<urn:uuid:88ae5dd6-6198-4f7d-b393-4fb0cfa21388>
4.03125
636
Structured Data
Science & Tech.
36.427746
This is an image obtained with the 0.91-meter Spacewatch Telescope of the University of Arizona Observatories on Kitt Peak in Arizona by Jim Scotti, on 1993 March 30 at a mid-time of 07:26:13 UT. The total integration time was 440 seconds. North is to the right and East is at the top. The image scale is 1.076 arcseconds per pixel and the field of view is 9.2 arcminutes, approximately square. Note the various structures visible in this image, namely the train of individual nuclei aligned along a position angle of 77 to 257 degrees. Approximately 11 nuclei are visible in this image of the train, spread out over 51 arcseconds (or about 170,000 kilometers). A tail extends from each of the nuclei approximately 1 arcminute towards p.a. 285 degrees, with a brighter component extending from the brightest nucleus out to about 1.2 arcminutes. Dust trails extend off the edges of the frame to more than 10 arcminutes in p.a. 260 degrees and 6 arcminutes in p.a. 75 degrees. Notice also that the southern margin of the dust trails is relatively sharp while the northern margin is more diffuse. This may be due to smaller particles being blown off by the solar wind.
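As a quick consistency check of the caption's quoted figures, the small-angle approximation relates the train's angular extent (51 arcseconds) to its physical extent (about 170,000 km). The geocentric distance is not stated in the caption; the sketch below simply solves for the distance those two numbers imply:

```python
import math

ARCSEC_TO_RAD = math.pi / (180 * 3600)
KM_PER_AU = 1.495979e8

# Figures quoted in the caption above.
train_arcsec = 51.0
train_km = 170_000.0

# Small-angle approximation: size = distance * angle (in radians),
# so the implied distance is size / angle.
distance_km = train_km / (train_arcsec * ARCSEC_TO_RAD)
distance_au = distance_km / KM_PER_AU
print(f"implied geocentric distance ≈ {distance_au:.1f} AU")  # ≈ 4.6 AU
```

The result, roughly 4.6 AU, is an inference from the caption's own numbers, not a measurement reported in the text.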
<urn:uuid:fc49f1f0-cbdd-458b-80f7-2e1e1b5dd23b>
3.203125
290
Knowledge Article
Science & Tech.
66.748854
Racomitrium pruinosum is a widespread moss in the southern hemisphere. It is fairly robust and can form extensive carpets on exposed rocks or soil, especially in mountainous areas. It is pale in colour and quite hoary when dry. The photos below show, on the left, part of an extensive colony of this species and, on the right, a closer view of the colony. These photographs were taken near Rotorua in the North Island of New Zealand. The hoary look is caused by each leaf finishing in a colourless hairpoint that is from one to two millimetres long. The colourless area extends down both sides of the leaf and only the lower third of the leaf contains chlorophyll. The following drawings, kindly provided by Judith Curnow, show a Racomitrium pruinosum leaf. The specimen on which these drawings are based was collected at an altitude of 1190 metres near the southern end of the South Island of New Zealand. The upper drawing shows a full leaf. In the coloured, schematic drawing, green indicates the chlorophyllous area of the leaf, grey marks the colourless parts of the leaf and the black strip indicates the nerve. There are also some short, crease lines near the base of the leaf. The leaf finishes in a long hairpoint and the nerve extends into the region of colourless cells, but finishes well before the hairpoint's apex. You can also see that the leaf is toothed in its upper half. The third drawing shows a much enlarged view of the upper part of the hairpoint. As well as teeth, the hairpoint has numerous smaller projections, or papillae.
<urn:uuid:72fcafbc-dee8-4e52-a588-28ec504126f5>
3.109375
361
Knowledge Article
Science & Tech.
56.213044
Dramatic change in the chemistry of the atmosphere has occurred over the last century due to human activity. 1. From the GISP2 ice core it was determined that levels of sulfate and nitrate (the primary components of acid rain) in the North Atlantic and much of the Northern Hemisphere have greatly exceeded their natural levels as a consequence of human activity. 2. From GISP2 and South Pole it was determined that the Chernobyl nuclear accident released radioactive fallout that spread throughout the Arctic and high latitudes of the Northern Hemisphere and was even transported to the high latitude south polar region through the upper atmosphere. The past 1000 years of sulfate and nitrate from the GISP2 record. Note the dramatic increase in both of these major components of acid rain during the 20th century relative to levels of the past 1000 years. Changes in sulfate are closely tied to industrial activity in North America and Europe: (1) beginning of the industrial revolution, (2) the Great Depression, (3) World War II, (4) period of most intense burning of sulfur rich, "dirty" coal, and (5) beginning of the Clean Air Act. The Clean Air Act did not have a dramatic effect on nitrate levels. Most of the short-term (up to one to two years) increases in sulfate are the product of volcanic activity such as the Tambora eruption of 1815 and the Laki eruption of 1783. Data from Mayewski et al. (1986, 1990). Total beta radioactivity (measured in counts per hour per kilogram, cph/kg) from snow pits in central Greenland (GISP2 site) and 25 miles (40 km) from the South Pole. Both snow pits were hand excavated to a depth of approximately 20 feet (6 meters) and then sampled for radioactivity. The site near the South Pole contains snow dating back to 1952 and the central Greenland snow pit only back to 1976, demonstrating the greater amount of annual snowfall in central Greenland. 
Snow pits containing snow dating back to the 1950s (like the site near the South Pole) contain evidence of former atmospheric testing of nuclear bombs in the form of total beta radioactivity. These "bomb layers" provide a means for calibrating the exact age of the snow. After the mid-1960s, atmospheric testing of nuclear bombs was banned and radioactivity levels in the atmosphere and snow dropped to natural background levels. The 1986 nuclear accident at the Chernobyl reactor in the former Soviet Union released sufficient radioactivity to contaminate much of the high latitudes of the Northern Hemisphere (see highlighted yellow levels in the central Greenland snow pit), but it was not expected that this radioactivity would extend into the Southern Hemisphere. However, some radioactivity did rise high enough into the atmosphere to reach regions where air can travel easily from high latitudes of the Northern Hemisphere to high latitudes of the Southern Hemisphere. The input timing for Chernobyl radioactive debris at the South Pole is close to 19-20 months from the time of its injection into the atmosphere, as indicated by the presence of high total beta radioactivity in January 1988 snow near the South Pole (note highlighted yellow section in snow pit). This dreadful accident provides a marker for the time of the accident close to its source and a fingerprint for tracing the time it took this air to reach the South Pole. The transport pathway is similar to that taken by ozone-destroying chemicals that are produced by humans in the Northern Hemisphere. Although most of their production is in the Northern Hemisphere, the first ozone-destroying consequences occur over Antarctica.
Dibb, J., Mayewski, P.A., Buck, C.F. and Drummey, S.M., 1990, Beta radiation from snow, Nature, 344, 25.
Mayewski, P.A., Lyons, W.B., Spencer, M.J., Twickler, M.S., Koci, B., Dansgaard, W., Davidson, C. and Honrath, R., 1986, Sulfate and nitrate concentrations from a South Greenland ice core, Science, 232, 975-977.
Mayewski, P.A., Lyons, W.B., Spencer, M.J., Twickler, M.S., Buck, C.F. and Whitlow, S., 1990, An ice core record of atmospheric response to anthropogenic sulphate and nitrate, Nature, 346, 554-556.
Mayewski, P.A., Holdsworth, G., Spencer, M.J., Whitlow, S., Twickler, M.S., Morrison, M.C., Ferland, K.F., and Meeker, L.D., 1993, Ice core sulfate from three northern hemisphere sites: Source and temperature forcing implications, Atmosph. Environ., 27A, 2915-2919.
<urn:uuid:730c1409-84e8-459d-92e2-225fc5bf56d4>
3.59375
1,018
Academic Writing
Science & Tech.
61.270029
A Brief History of Antarctic Drilling Russian scientists have just reported that they have successfully drilled into Lake Vostok, a vast body of liquid water that rests under kilometers of ice beneath Antarctica’s glacial surface. Most news reports make mention of the long-duration drilling effort that it took to make it down to Lake Vostok, but I’ve yet to see an account of the on-again, off-again relationship between scientists and these mysterious subglacial lakes. A few months ago, I had the pleasure of interviewing Martin Siegert about his recent book, Antarctic Subglacial Aquatic Environments, for the American Geophysical Union‘s members-only newspaper, Eos. Martin is the head of the UK-led mission to drill into another Antarctic subglacial lake, Lake Ellsworth, later this year. The full interview, I feel, is well worth reading (though unfortunately it is behind a paywall), but at one point the interview turned to a discussion of the convoluted history of the scientific endeavour to reach beneath the ice. Eos: Lake Sovetskaya and the larger Lake Vostok were first detected in 1968 and 1970, respectively, but the field of Antarctic subglacial aquatic research did not begin in earnest until the mid-1990s. What was the reason for this delay, and what changed to make scientists take notice? Siegert: That’s a really good question. When we first knew about subglacial lakes, no one—not even glaciologists—seemed to care. The lakes are now a curiosity, but back then no one seemed curious about them! The geophysical data defining both Lake Sovetskaya and Lake Vostok were published in the late 1960s and mid-1970s, but then they were sort of lost to the literature—people’s research just didn’t follow them up. The first inventory of subglacial lakes, published in 1973, showed there to be 17 lakes, but it still didn’t get wider scientific traction and interest. The paper published in 1996 on Lake Vostok showed that the water was about 500 meters deep. 
Now, this is only my opinion, but what I think happened is that between the 1970s and the 1990s there was a great deal of development in our understanding of life in extreme environments. I don’t think that idea was mature enough in the 1970s for microbiologists to take an interest in subglacial lakes. But in the 1990s, when the new information on the depth of Lake Vostok was announced, microbiologists began to take notice, believing that trapped within these ice-covered lakes were bacteria that hadn’t been exposed to air for millions of years, adapted to withstand the extreme conditions. So glaciologists presented information on subglacial lakes in the 1970s, and glaciologists still presented information on subglacial lakes in the 1990s. It’s just that there was a different audience available: In the 1990s the audience suddenly became not just glaciologists but microbiologists too. Siegert shows that the assumed linear path of scientific progress, of one discovery leading to the next, is not necessarily the way science works. Sometimes a shift in interest, or an unrelated advance, takes a previous curiosity and transforms it overnight into the next frontier.
<urn:uuid:e63e959b-6833-4abf-861b-f0974783bfcd>
3.40625
689
Personal Blog
Science & Tech.
38.165664
Grab your warm jackets and binoculars and help dedicated volunteers record a snapshot of winter bird distribution in North America. The first survey, conducted on Christmas Day in 1900, was organized by Frank Chapman of the fledgling Audubon Society. Audubon chapters across the continent have continued this tradition yearly. This one-day early winter survey provides data that scientists use to identify areas and habitats that are important to birds in the winter. Audubon scientists also used data from Christmas Bird Counts to identify 177 bird species, including the American Robin, that are wintering farther north in response to warming temperatures. This year the Christmas Bird Count will be conducted between December 14 and January 5.
<urn:uuid:3ff11386-354b-4beb-b89b-7d60c89786ca>
3.765625
139
Knowledge Article
Science & Tech.
34.365181
According to astronomers studying background radiation data gathered by the Planck Space probe, the universe is 80-million years older than previously thought. So now when somebody asks you how old the universe is, you can confidently tell them, "80-million years older than previously thought" because you never knew the original figure in the first place. WTF are they teaching in school these days? The Planck space probe looked back at the afterglow of the Big Bang, and those results have now added about 80 million years to the universe's age, putting it at 13.81 billion years old. The findings released Thursday bolster a key theory called inflation, which says the universe burst from subatomic size to its now-observable expanse in a fraction of a second. The probe, named for the German physicist Max Planck, the originator of quantum physics, also found that the cosmos is expanding a bit slower than originally thought, has a little less of that mysterious dark energy than astronomers had figured and has a tad more normal matter. But scientists say those are small changes in calculations about the universe, whose numbers are so massive. Not gonna lie, trying to wrap my head around the scale of the universe and how it was formed and are there infinite universes -- that kind of thinking makes my head hurt. I'm a simple man, you know? Some might argue too simple. Others would probably argue mentally deficient. And you know what I call those people? Friends and family. "Don't forget us." And Geekologie readers. Thanks to Pyrblaze, who, like me, can't even fathom 13.81-billion years and starts spazzing out whenever the Burger King drive-thru line takes too long. LIKE IT WAS TODAY.
<urn:uuid:722fc390-09f2-4d79-b838-a578351e5911>
2.796875
363
Personal Blog
Science & Tech.
56.289349
See also the Dr. Math FAQ: Browse High School Logic Stars indicate particularly interesting answers or good places to begin browsing. Selected answers to common questions: - Logic and Conditional Sentences [10/04/2005] I am having a hard time understanding why two false statements in a conditional sentence makes it true. - Logic: Bayes and Popper [06/24/2003] Is p -> q totally equivalent to ~q -> ~p in practice? - The Logic behind Conditional Statements [11/28/2007] I understand the subset explanation of why the conditional logic statement 'If false then true/false' is always considered true. But what is the logic behind it? - Logic, Groups, and Identities [02/25/1999] Is it possible for more than one answer to exist when proving things? What is a group? Can you give an example of an identity? - Logic Laws [03/04/2003] I do not understand the laws of inference, simplification, disjunctive inference, and disjunctive addition. - Logic - Liars & Truthtellers (What Question Does She Ask?) [3/12/1995] A logician vacationing in the South Seas finds herself on an island inhabited by the two proverbial tribes of liars and truth-tellers. - Logic Statement False Implies True [02/06/2008] The logic statement A->B is considered true if A is false and B is true. How can a false imply a true? What's the thinking behind that statement and can you give a good example of how it works? - Match Couples and Parties [03/28/2001] Read the clues given, and match everything up. - Mathematical Induction [09/07/1998] What is mathematical induction? Can you give an example of the ideas of - Mathematical Induction [07/01/1998] Proof by induction does not prove anything, because in the inductive step, one makes the assumption that P(k) is true... - Mathematical Logic [02/09/2001] Assumptions, rules, contradictions, and a derivation. - Mathematics, Logic, and Intuition [05/27/2003] How is math related to logic and intuition? 
- Math Logic [6/5/1996] Sally, Ron, Jim, and Meghan are President, VP, Treasurer, and Captain of the cheerleading squad, but not necessarily in that order. Who is what? - Math Logic - Determining Truth [04/13/1999] A number divisible by 2 is divisible by 4. Find a hypothesis, a conclusion, and a converse statement, and determine whether the converse statement is true. - Math Symbol for IFF [06/18/2003] Is there a mathematical symbol for the term 'if and only if'? - Math Symbols [04/07/1997] What do the common math symbols (backward E, upside-down A, etc.) mean? - The Meaning of 'Or' in Logic Statements [12/19/2003] If a logic statement says, 'James is taking fencing or algebra,' does that mean he is taking one class or the other, or could he be taking both of them? - Minimal Weighings of Ten Coins to Identify the Two Counterfeits [04/18/2010] At least how many balance scale weighings of ten coins do you need to determine the two fakes? By applying combinatorics and keeping track of lower bounds, Doctor Jacques provides a methodical approach. - Modus Ponens [07/10/2001] A man born in 1806 is x years old at the year x squared. Solve for x. - Monty Hall Logic [03/09/2001] Are there in fact four options? Aren't there three choice points, not - Monty Hall Strikes Again [11/2/1994] There are three cups, one of which is covering a coin. I know the whereabouts of the coin, but you don't. You pick a cup, and I take one of the remaining cups, one which DOESN'T contain a coin. Both you and I know the cup I pick doesn't contain a coin. You then have the option to swap your cup with the third, remaining cup, or keep your first choice. What is the probability of the coin being in the cup if you keep your first choice, or if you decide to swap them? - Necessary and/or Sufficient [05/26/2002] What does it mean to say that a condition is necessary, sufficient, or necessary and sufficient? 
- Necessary and/or Sufficient Conditions with Modular Math [12/01/2006] I'm working on a question in modular math that asks me to identify whether given conditions are "necessary", "sufficient", or "necessary and sufficient". I'm not sure what those terms mean. - Negating a Quantifier [03/20/2010] What is the negation of "at least two"? Is it "none" or "at most two"? Doctor Peterson responds by analyzing one case at a time as well as by representing the proposition as an inequality. - Negating Statements [10/27/1998] What is negation? What is a statement? How do you negate a statement? - Negation in Logic [8/3/1996] What is the negation of "In every village, there is a person who knows everybody else in that village"? - One-to-One Correspondence of Infinite Sets [03/26/2001] How can I prove that any two infinite subsets of the natural numbers can be put in a 1-1 correspondence? - Open Sentence, Statement [09/18/2001] What is an open sentence? - Order of Quantifiers [12/19/2002] Can you help me understand the order of quantifiers? - Orders of Infinity [12/05/2001] I recently read a book about infinity which set forth several arguments for why there are different sizes or orders of infinity. None of them seem convincing to me... - Paradox [05/07/2001] What is a paradox? - Paradox and Fallacy [01/25/2001] What is the difference between paradox and fallacy in mathematics? - Paradox of the Unexpected Exam [03/26/1998] A teacher announces that a test will be given next week on one of the five weekdays. Why won't the test ever be given? - Parts of a Biconditional Statement [06/03/1999] Does the "necessity" condition correspond to "only if" and "sufficient" correspond to "if," or is it the other way around? - Party Guests and Perfect Squares [07/02/2001] Who was dancing with whom? - Philosophy of the Truths of Mathematics [02/28/2001] Do the truths of math hold in any conceivable world? 
- The Prisoners' Dilemma [12/8/1995] I'm looking for a paper - or some material - about "the prisoners' - Probability of Two Male Children [7/5/1996] If a family has two children, and the older child is a boy, there is a 50 percent chance the family will have two boys. However... - Projects on Puzzles or Mazes [11/13/2002] I would like to do a project that involves applying mathematics to areas like puzzles or mazes. - Proof by Contradiction [04/29/2003] Is there any specific mathematical theory that states that Proof by Contradiction is a valid proof?
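Several entries in the listing above concern the Monty Hall puzzle (including the three-cups variant). A minimal Monte Carlo sketch, with hypothetical helper names, illustrating the standard result that switching wins roughly two-thirds of the time while staying wins one-third:

```python
import random

def monty_hall_trial(switch: bool, rng: random.Random) -> bool:
    """One round: prize behind a random door; player picks door 0;
    host opens a non-prize, non-chosen door; player optionally switches."""
    prize = rng.randrange(3)
    choice = 0
    # The host opens a door that is neither the player's choice nor the prize.
    opened = next(d for d in range(3) if d != choice and d != prize)
    if switch:
        # Move to the one remaining unopened door.
        choice = next(d for d in range(3) if d != choice and d != opened)
    return choice == prize

def win_rate(switch: bool, trials: int = 100_000, seed: int = 1) -> float:
    rng = random.Random(seed)
    return sum(monty_hall_trial(switch, rng) for _ in range(trials)) / trials

print(win_rate(switch=True))   # close to 2/3
print(win_rate(switch=False))  # close to 1/3
```

The intuition the simulation confirms: the switching player wins exactly when the initial pick was wrong, which happens with probability 2/3.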
<urn:uuid:85d5e82c-26e3-4820-a028-031768c8657f>
3.421875
1,706
Q&A Forum
Science & Tech.
66.603339
Via Via Meadia comes this news: The Economist also brings us big news on the “settled science” of climate change. A new study has found soot to be twice as bad for climate as was previously thought, making it the second most damaging greenhouse agent after CO2. This is actually good news for two reasons. First, soot is easier to control than CO2, and targeting that kind of pollution provides lots of benefits that have nothing to do with climate change: it’s a dangerous pollutant and a health threat on its own. Second, controlling soot will seriously slow the speed of climate change. One of the study’s authors told the Economist that fully addressing the soot problem would strip half a degree from potential warming, buying politicians and scientists more time to make informed decisions.
<urn:uuid:a63565b5-7b70-447a-b64b-8a4249ed3cc5>
2.875
169
Personal Blog
Science & Tech.
50.892743
For those who were disenchanted with the results of the most recent United Nations Conference on Climate Change, a recent development gives at least one reason to be optimistic. The formation of the Climate and Clean Air Coalition to Reduce Short-Lived Climate Pollutants was announced on Thursday, February 16th by United States Secretary of State Hillary Clinton. It is a partnership between certain developed and developing nations with the aim of reducing the concentration of short-lived greenhouse gases (GHGs) in the atmosphere, thereby mitigating climate warming in the short-term. It is the first effort to focus on short-lived GHGs collectively and it is intended to augment current efforts to reduce carbon dioxide emissions globally. The participating countries include Canada, Sweden, the United States, Mexico, Ghana, and Bangladesh. Three GHGs are the focus of this initiative: methane, black carbon (soot), and hydrofluorocarbons (HFCs). Each is a contributor to climate change and is also short-lived in the atmosphere, persisting from a matter of days to approximately 15 years. This can be contrasted with carbon dioxide, the most well-known GHG, which has an average atmospheric lifetime of longer than a century. By reducing the atmospheric concentration of these short-lived GHGs, it should be possible to see strong and relatively quick climate change mitigation. A recent NASA study estimated that 0.5 °C of global warming could be avoided by reducing the atmospheric concentrations of key short-lived GHGs like methane and black carbon. This is an important finding since the Intergovernmental Panel on Climate Change has determined the maximum allowable global temperature increase to avoid catastrophic climate change is 2 °C. Furthermore, the study indicates that these emissions reductions could boost international crop yields and prevent hundreds of thousands of premature deaths related to these atmospheric pollutants. 
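The lifetime contrast drawn above can be made concrete with a back-of-envelope sketch. This treats each pollutant as decaying exponentially with a single atmospheric lifetime, which is a simplification (CO2 removal in particular is not a single exponential); the lifetime values are illustrative numbers consistent with the text, not figures from the coalition announcement:

```python
import math

# Illustrative single-lifetime assumption: ~12 yr for methane,
# a nominal 120 yr stand-in for CO2's "longer than a century".
LIFETIMES_YR = {"methane": 12.0, "co2": 120.0}

def remaining_fraction(gas: str, years: float) -> float:
    """Fraction of an emitted pulse still airborne after `years`,
    under the one-box exponential-decay simplification."""
    return math.exp(-years / LIFETIMES_YR[gas])

# After 30 years, most of a methane pulse is gone while most CO2 remains,
# which is why cutting short-lived pollutants pays off quickly.
print(remaining_fraction("methane", 30))  # ≈ 0.08
print(remaining_fraction("co2", 30))      # ≈ 0.78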
The Climate and Clean Air Coalition to Reduce Short-Lived Climate Pollutants pledges to help reduce the atmospheric concentration of short-lived pollutants via a multi-faceted plan. It will work with already existing groups like the Arctic Council and Global Methane Initiative, create national policy priorities, mobilize funds, raise awareness, and support further scientific research into the atmospheric effects of these pollutants. Tackling the problem presented by climate change is easily one of the most difficult and important tasks set before humankind. Any viable long-term plan will need to deal with all the issues—most importantly, our dependence on fossil fuels as an energy resource. However, with global action on climate change mitigation stalling, this seems to be a reasonable, albeit small, step forward. (Engineering Physics, MASc, Year 2 at McMaster University)
<urn:uuid:9d228362-5460-4a57-b5ba-24e3d711d93e>
3.671875
537
Personal Blog
Science & Tech.
20.986609
If one looks at planetary systems from the "modern" point of view provided by the HARPS survey and the results from Kepler's recent data release, our own solar system looks pretty strange. In the Sun's case, the frequently planetiferous orbital zones inside of P=50 days are completely, mysteriously barren. The orbital region inside P<3000 days is also almost entirely bereft, with just a few iron-silicate dregs totaling less than two Earth masses. Out in the boondocks, however, the Sun harbors a giant planet that managed to accumulate lots of gas, yet paradoxically didn't manage to migrate a really significant distance. It will take more time to determine whether the solar system is really all that weird, but with each passing month's accumulation of fresh exoplanets, our eight-planet set-up manages to seem slightly less ordinary. Jupiter, for example, induces a 12 m/s velocity half-amplitude, and the high-precision radial velocity surveys have been operating for long enough so that if true-Jupiter analogs were the rule, then we'd perhaps be hearing of more of them being detected. The Kepler multi-transiting candidates correspond to systems that are completely alien when compared to MVEMJSUN, but they are much more familiar when compared to the regular giant planet satellites — the moon systems of Jupiter, Saturn and Uranus. In each of these cases (and despite a factor-of-twenty difference in mass between Jupiter and Uranus) the characteristic orbital period is of order a week, and the characteristic secondary-to-primary mass ratios are of order a few parts in 100,000. For example, Ariel, Umbriel, Titania and Oberon have mass ratios of 1.6e-5, 1.4e-5, 4.0e-5, and 3.5e-5 relative to Uranus, and their orbital periods are 2.52, 4.14, 8.71, and 13.46 days. In the Jovian system, the satellite/Jupiter ratios for Io, Europa, Ganymede and Callisto are 4.7e-5, 2.5e-5, 7.9e-5, and 5.8e-5, with corresponding orbital periods of 1.76, 3.55, 7.15, and 16.68 days.
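As a sanity check on the 12 m/s figure quoted above, the stellar reflex half-amplitude for a circular orbit follows directly from Kepler's third law. The sketch below is mine, not the post author's; the constants are rounded textbook values and the function name is an illustrative assumption.

```python
import math

# Standard RV half-amplitude formula for a circular orbit; rounded constants.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
M_JUP = 1.898e27     # Jupiter mass, kg
YEAR = 3.156e7       # one year in seconds

def rv_half_amplitude(m_planet, m_star, period_s, incl=math.pi / 2):
    """Stellar reflex velocity half-amplitude K (m/s) for a circular orbit."""
    return ((2 * math.pi * G / period_s) ** (1 / 3)
            * m_planet * math.sin(incl)
            / (m_star + m_planet) ** (2 / 3))

# A true Jupiter analog: 11.86-year orbit around a solar-mass star
K = rv_half_amplitude(M_JUP, M_SUN, 11.86 * YEAR)
print(f"K = {K:.1f} m/s")   # close to the ~12 m/s quoted in the text
```

With a decade-plus of survey baseline, a signal of this size sits comfortably above the few-m/s precision of HARPS-class instruments, which is the point the text is making.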
In the plot below, I’ve taken the 45 three-transit systems from Kepler’s list, and plotted the orbital periods of their constituent planet candidates along the x-axis. The colors of the points are given a linear gray-scale, with black corresponding to a planet-to-star mass ratio of zero, and white corresponding to a planet-to-star mass ratio of 1.0e-4 or larger. I’ve converted radius to mass by assuming M=R^2 when mass and radius are expressed in Earth masses and Earth radii. It’s interesting to speculate whether the commonality between the regular satellite systems, and the teeming population of Super-Earth/Sub-Neptune class systems might be more than just a coincidence…
<urn:uuid:f5c2b744-8d0b-46f6-b322-0c09526117ba>
2.984375
671
Personal Blog
Science & Tech.
57.635514
Connecting the tropical Pacific with the Indian Ocean through the South China Sea
Geophysical Research Letters, Volume 32, Issue 24, December 2005. Copyright 2005 by the American Geophysical Union.
How to cite: (2005), Connecting the tropical Pacific with Indian Ocean through South China Sea, Geophys. Res. Lett., 32, L24609, doi:10.1029/2005GL024698.
- Issue published online: 21 DEC 2005
- Article first published online: 21 DEC 2005
- Manuscript Accepted: 4 NOV 2005
- Manuscript Revised: 31 OCT 2005
- Manuscript Received: 19 SEP 2005
Analysis of wind data over the past 40 years and results from a high-resolution general circulation model have revealed the existence of a previously undescribed circulation that connects the tropical Pacific with the Indian Ocean. As a direct response to the Pacific wind, water of Pacific origin enters the South China Sea through Luzon Strait, and from there part of the water continues southward into the Java Sea and returns to the Pacific through Makassar Strait. This circulation contains a strong El Niño–Southern Oscillation signal and appears to have a notable impact on the Indonesian Throughflow heat transport.
<urn:uuid:36abfb9c-fd38-4e54-8be5-cca5be8705d3>
2.796875
264
Academic Writing
Science & Tech.
44.967619
The Italian experimenter Carlo Matteucci (1811–1868) showed, in experiments in the mid-1840s, that the effect worked over larger distances without the presence of iron. He devised a pair of identical flat coils, with wire wound in a spiral pattern on the surface of glass disks about 30 cm in diameter. A Leiden jar was discharged through one of the coils, and an experimenter holding on to wires connected to the other coil felt a shock. The magnitude of the shock increased as the distance between the two coils decreased. Matteucci is also known for his work on the electrical conductivity of the earth in 1844. By demonstrating that the earth has an appreciable conductivity, he showed that it was possible to use the earth as a return conductor for telegraph signals, making it possible to use one metallic conductor instead of two.
<urn:uuid:f7f015e2-ff3d-4d1d-883d-5b87bd72f6c3>
3.828125
190
Knowledge Article
Science & Tech.
41.909269
I am a physics novice. Google tells me that electron microscopes work much like their optical counterparts -- but the analogy falls apart for me when I think about what I'm "viewing." Obviously, you can see light through the lenses, but what is the "image" analog for electron microscopes? Is it at all like spraying an invisible shape with bullets and examining where collisions took place? Like if you shot at an invisible car with a tommy gun and were able to make out bullet holes -- so that the more bullets you shoot, the better your image? And, just for completeness, I suspect this implies that the best resolution you can get is the bullet size, or in this case the size of the electron. How do you map "objects" (or whatever they are considered on that scale) if they are smaller than an electron? Is our perception of how small we can see limited by this cap?
<urn:uuid:4e9e32fd-8f58-4f91-a06c-26c66a5a63ff>
2.6875
188
Q&A Forum
Science & Tech.
54.53432
Between robo-animals, animal robots, and crittercams lie insects electronically modified to suit our purposes: Mosquitoes (1st image) In June, Network World ran an article about the development, over the last several years, of micro air vehicles (MAVs) - insect-sized devices disguised as dragonflies or mosquitoes and operated remotely as spy drones. While the image above is a fabrication, and robotic insects capable of landing on a person's skin and using their needles to take a DNA sample or inject a tracking device are (apparently) things of science fiction, Snopes does not dispute the development of MAVs by the U.S. Government. Cockroaches (2nd image) North Carolina State University engineer Alper Bozkurt and colleagues are surgically implanting electrodes in the antennae and rear sensors of roaches, and attaching tiny backpacks that contain a wireless control system, a locator beacon, and a tiny microphone. The miniature equipment turns the bugs into "biobots" (biological robots) and allows the scientists to control them. By sending them into hard-to-access areas, steering them remotely, and monitoring the results, the scientists hope roaches will one day help locate earthquake survivors. Bozkurt explains why the bugs are superior to mechanical robots: "They come with a self-powered locomotion system. And they have biological autonomy to help them survive—they will run away when they sense danger, which makes them hard to trap or squash. That's really useful in uncertain, dynamic environments." Honeybees (3rd image) San Francisco State University entomologist John Hafernik and colleagues are gluing tiny radio-frequency identification tags onto about 500 infected honeybees. The bees have been attacked by parasitic maggots and consequently desert their hives at night, fly around outdoor lights, and then circle erratically on the ground before dying. The researchers have fitted the entry/exit of the hive with laser scanners to monitor the bees' comings and goings.
They hope the data will reveal whether the maggots are mind-controlling the honeybees and whether this has anything to do with the mass die-offs of bee populations, although Hafernik says, "We think it's a long shot that these parasites are the main cause of colony collapse disorder."
<urn:uuid:f8038298-f9d7-4ece-84dd-8413c1af2f34>
2.9375
481
Personal Blog
Science & Tech.
24.312164
Difference between Atomic Bomb and Hydrogen Bomb
For most people, the atomic bomb and the hydrogen bomb are pretty much the same thing, with the only difference being their relative strengths. While this is partly true–the hydrogen bomb is considerably more powerful–both types of bombs actually differ in a number of ways, most having to do with their construction and means of detonation. Let's take a look at their other differences and similarities.
How Do They Work?
The process at the heart of every atomic bomb explosion is called nuclear fission, which uses a quantity of uranium-235 or plutonium-239. Nuclear fission essentially involves splitting the atoms of either of those two radioactive elements, releasing energy with each split. A hydrogen bomb, for its part, relies on nuclear fusion: the joining of light hydrogen nuclei, triggered by a fission explosion of uranium or plutonium. In some ways, hydrogen bombs can be seen as "upgraded" versions of atomic bombs.
Atomic bombs are typically set off by an explosion from a TNT-equipped device. This packs the radioactive material tightly together into a supercritical mass, in which neutrons released by splitting atoms go on to split more atoms. This sets off a chain reaction, with more and more atoms breaking apart and releasing energy, resulting in a nuclear explosion.
The hydrogen bomb, on the other hand, is set off not by a conventional explosive charge but by an actual atomic bomb. The hydrogen isotopes deuterium and tritium are compressed and heated by the fission explosion until their nuclei fuse, releasing a far greater amount of energy. This produces a considerably stronger explosion. The entire process happens in a split second, though the results can be devastating.
Anyone who has seen pictures and film footage of the bombing of Hiroshima and Nagasaki in Japan during the last days of World War II is well aware of the atomic bomb's destructive force. More powerful still is the hydrogen bomb, which is estimated to produce the explosive force of several million tons of TNT. These types of bombs have also been designed to expel more radioactive material into the air above the drop site. The results, as you can imagine, can be pretty destructive.
Atomic bomb:
- Main energy source is either radioactive uranium or plutonium
- Often triggered by a TNT-equipped explosive device
- There is a limit to how powerful a pure atomic bomb can be
Hydrogen bomb:
- Uses radioactive uranium or plutonium as a trigger, with hydrogen isotopes (deuterium and tritium) as the fusion fuel
- Usually triggered by a small atomic bomb instead of a conventional explosive device
- Releases a lot more energy than a typical atomic bomb
- There is no limit to how powerful it can be made
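To put the fission energy scale in numbers: each uranium-235 fission releases roughly 200 MeV, so the yield from completely fissioning a given mass can be estimated in a few lines. This back-of-the-envelope sketch and its rounded constants are my own illustration, not part of the original article, and real weapons fission only a fraction of their fuel.

```python
# Rough yield estimate for complete fission of U-235; constants are rounded
# textbook values, so treat the result as order-of-magnitude only.
AVOGADRO = 6.022e23          # atoms per mole
U235_MOLAR_MASS = 235.04     # g/mol
MEV_PER_FISSION = 200.0      # approximate energy release per fission
JOULES_PER_MEV = 1.602e-13
JOULES_PER_TON_TNT = 4.184e9

def fission_yield_kilotons(mass_kg):
    """Energy from fully fissioning mass_kg of U-235, in kilotons of TNT."""
    atoms = mass_kg * 1000.0 / U235_MOLAR_MASS * AVOGADRO
    joules = atoms * MEV_PER_FISSION * JOULES_PER_MEV
    return joules / JOULES_PER_TON_TNT / 1000.0

# One kilogram of fully fissioned U-235 comes out near 20 kilotons of TNT,
# which is why fission weapons dwarf any conventional explosive.
print(fission_yield_kilotons(1.0))
```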
<urn:uuid:e556c48e-c270-4b03-bcea-9609b0d490f0>
3.5625
547
Knowledge Article
Science & Tech.
34.310923
Hybrid, a plant or animal whose parents belong to two different breeds, varieties, or species. In some cases the parents may even belong to different genera (groups of related species). Plant hybrids are constantly being produced in nature. Hybrids are also produced experimentally in the interests of scientific research, and by plant and animal breeders for economic purposes. Some kinds of organisms, however, will not produce offspring when crossbred with another species. Some hybrids cannot produce offspring. The mule, a cross between a male ass and a mare, is an example. Hybrid varieties of plants and animals are of economic value because the hybrid is nearly always more vigorous, larger, and more fertile than either of its parents. Also, double-cross hybrids (offspring of two hybrids) have a far greater number of traits from which plant or livestock breeders can select those they wish to reproduce. Since the offspring of double-cross hybrids do not always resemble their parents, plant breeders use vegetative propagation to maintain desirable traits. Most garden perennial flowers and shrubs, as well as fruit trees, are hybrids, and are propagated in this way.
<urn:uuid:73389194-0294-486b-b927-2afee8d52005>
3.640625
235
Knowledge Article
Science & Tech.
37.571444
techfun89 writes "Mars has returned to our evening skies as it does every two years. This time it is getting even more attention and buzz than it normally would. Amateur astronomer Wayne Jaeschke of West Chester, Pennsylvania noticed an unusual protrusion in the planet's southern hemisphere, preceding the sunrise terminator. Several things may have contributed to this strange 'cloud formation.' One possibility is a meteoric impact event, in which dust was spewed up into the atmosphere. Another could be one of the major dust storms that are typical on Mars. Of course, it could be something more mundane: these observations may have been a mere optical illusion, a glint produced by just the right combination of lighting and atmospheric conditions. Some suggest volcanic activity, though this is unlikely given that it has been 20 to 200 million years since lava flowed on Mars."
<urn:uuid:aba800b3-5045-4d96-9b25-04752e2d3a3b>
2.953125
174
Comment Section
Science & Tech.
36.57777
David Charbonneau
Harvard-Smithsonian Center for Astrophysics
"The Era of Comparative Exoplanetology"
When extrasolar planets are observed to transit their parent stars, we are granted unprecedented access to their physical properties. It is only for these systems that we are permitted direct estimates of the planetary masses and radii, which in turn provide fundamental constraints on models of their physical structure. Furthermore, such planets afford the opportunity to study their atmospheres without the need to spatially isolate the light of the planet from that of the star. Recently, astronomers have taken a first glimpse into the atmospheric chemistry and dynamics of these puzzling worlds. I will review the most recent results, and then describe a new observatory that we are constructing that will survey 2000 nearby M-dwarfs with a sensitivity to detect rocky planets orbiting within their stellar habitable zones.
Angela Speck
University of Missouri - Columbia
"The Nature of Stardust: Astromineralogy and Circumstellar Dust Around Evolved Stars"
Intermediate-mass stars (0.8 - 8.0 solar masses) are major contributors of new elements to interstellar space. These stars eventually evolve into asymptotic giant branch (AGB) stars. During the AGB phase, these stars suffer intensive mass loss leading to the formation of circumstellar shells of dust and neutral gas, including the new elements formed during the star's life. Eventually the star runs out of material to lose, and the central core collapses and heats up. Meanwhile the material around the star (the circumstellar shell) drifts away from the star. Once the central star is hot enough to have significant ultraviolet (UV) emission it will begin to ionize the surrounding medium, and it becomes a planetary nebula. The newly-formed elements then become part of the interstellar medium, from which new stars and their planets form. Using a combination of observing techniques (e.g.
infrared (IR) spectroscopy; visible, IR, and sub-mm imaging) and laboratory IR studies, together with theoretical considerations (e.g. kinetics and thermodynamics of the dust-forming region; nucleosynthesis models and changing stellar chemistries) and meteoritic evidence, we investigate the structure and evolution of the circumstellar dust and its environment, and how the evolution of AGB stars (in terms of chemistry, mass-loss rates, dust shell dispersion, and the change from benign AGB star to ionized planetary nebula) leads to changes in the dust composition and distribution.
Siang Peng Oh
University of California, Santa Barbara (UCSB)
"New Views of the High-Redshift Universe"
I discuss theoretical perspectives on present observations of the high-redshift universe in Lyman alpha emitters and QSO transmission spectra, emphasizing that our constraints on the state of the intergalactic medium at z > 6 are still very uncertain. I discuss Ly-alpha radiative transfer effects which complicate the interpretation of high-redshift Ly-alpha emitters. I then turn to prospects for detecting the IGM in 21cm emission with upcoming instruments, and focus on the importance of developing novel statistical techniques for mining the data. In particular, I discuss prospects for detecting HII regions both statistically and in imaging data.
Eugene Chiang
University of California, Berkeley
"Problems and Prospects in Planet Formation"
Planets form in disks. Planetary properties result from myriad processes within disks, some of which are chaotic. We pose and offer solutions to a variety of problems associated with protoplanetary disks: (1) How do T Tauri disks dissipate? (2) As dust grains settle toward disk midplanes, what are the maximum densities attainable? Are these densities large enough for gravitational instability? (3) How do densely packed systems of protoplanets ("oligarchies") relax into solar system-like (and extrasolar system-like) configurations?
(4) Does Brownian motion of planets within planetesimal disks interfere with resonance capture? (5) What governs the distinct surface brightness profiles of debris disks? Application will be made to transitional disk systems, including TW Hya and GM Aur; the circumbinary ring of KH 15D; Neptune and resonant Kuiper belt objects; and the debris disk encircling AU Microscopii.
Yun Wang
University of Oklahoma
"Dark Side of the Universe"
The cause for the observed acceleration in the expansion of the universe is unknown, and dubbed "dark energy" for convenience. Dark energy could be an unknown energy component, or a modification of Einstein's general relativity. I will examine the most promising methods for probing dark energy, and discuss recent results and future prospects.
Stephen E. Strom
National Optical Astronomy Observatory (NOAO)
"Transition Disks: A Possible Key to Understanding Planet Formation"
The unusual properties of transition objects (young stars with an optically thin inner disk surrounded by an optically thick outer disk) suggest that significant disk evolution has occurred in these systems. To explore the physical cause(s) of the transition disk phenomenon, we examine their demographics, specifically their stellar accretion rates and disk masses, and compare these parameters with those of accreting T Tauri stars of comparable age. We find that transition objects of ages approximately 1 Myr occupy a restricted region of the [mass accretion rate, disk mass] plane. Compared to accreting T Tauri stars, transition disks have stellar accretion rates that are typically about 10 times lower at the same disk mass, and disk masses about 4 times larger than the median disk mass. These properties are anticipated by several proposed planet formation theories and suggest that the formation of Jovian-mass planets may play a significant role in explaining the origin of many transition objects.
We suggest observational strategies that have the potential to determine (a) whether transition disks indeed indicate the onset of planet formation; and (b) if so, how the physical characteristics of the transition disks may be linked to the properties of the resulting planetary systems.
Special Event: Antoinette de Vaucouleurs Memorial Lecture (Standard Location: RLM 15.216B, Time 3:30 p.m.)
John C. Mather (Nobel Prize in Physics, 2006)
NASA/Goddard Space Flight Center
"Finding our Origins with the James Webb Space Telescope"
How did we get here? Where are we headed? Dr. John Mather will tell the history of the universe in a nutshell, and describe what our future holds within the realm of discovery. Dr. Mather is Project Scientist for the James Webb Space Telescope (JWST), which is planned for launch in 2013. As a successor to the Hubble Space Telescope, the Webb telescope will look even farther back in time and examine the first stars and galaxies that were created after the big bang. JWST will carry the largest telescope mirror ever placed in space, and positioned 1.5 million miles away from Earth, it will be able to unravel some of the biggest mysteries of the universe.
Special Event: Antoinette de Vaucouleurs Public Lecture (Special Location: ACE 2.302 - Avaya Auditorium, Time 4:00 p.m.)
John C. Mather (Nobel Prize in Physics, 2006)
NASA/Goddard Space Flight Center
"From the Farm to the Nobel Prize: Deciphering the Big Bang"
The history of the universe in a nutshell, from the Big Bang to now, and on to the future: John Mather will tell the story of how we got here, how the Universe began with a Big Bang, how it could have produced an Earth where sentient beings can live, and how those beings are discovering their history. Dr. Mather grew up on the Dairy Research Station in Sussex County, New Jersey, where he developed his strong interest in science.
At NASA, he was Project Scientist for the Cosmic Background Explorer (COBE) satellite, which measured the spectrum (the color) of the heat radiation from the Big Bang, discovered hot and cold spots in that radiation, and hunted for the first objects that formed after the great explosion. He will explain Einstein's biggest mistake, show how Edwin Hubble discovered the expansion of the universe, how the COBE mission was built, and how the COBE data support the Big Bang theory. He will also show NASA's plans for the next great telescope in space, the James Webb Space Telescope. It will look even farther back in time than the Hubble Space Telescope, and will look inside the dusty cocoons where stars and planets are being born today. Planned for launch in 2013, it may lead to another Nobel Prize for some lucky observer.
Oct. 14 - 16: Frank N. Bash Symposium 2007, New Horizons in Astronomy
The Second Biennial Symposium on the Topic of New Horizons in Astronomy! (Scientific Organizing Committee: Kurtis Williams and Justyn Maund [co-chairs], Kyungjin Ahn, Eiichiro Komatsu, Mike Montgomery) The Astronomy Program at the University of Texas at Austin is hosting its second biennial symposium on the topic of New Horizons in Astronomy. This symposium brings together truly excellent young researchers who are working on frontier topics in astronomy and astrophysics, to exchange research ideas, experiences, and their visions for the future. The symposium will focus on invited review talks given by postdoctoral fellows, followed by open panel discussions, and a select number of poster papers from postdocs and graduate students will be presented.
John E. Chambers
Carnegie Institution of Washington / Department of Terrestrial Magnetism
"How Does Orbital Migration Affect the Oligarchic Growth of Planets?"
Many of the main characteristics of a planetary system are shaped during the oligarchic growth stage of planetary formation.
This begins when solid bodies in a protoplanetary disk have grown to the size of large asteroids. In the Solar System, the end products of oligarchic growth were Moon-to-Mars sized bodies in the terrestrial planet region. In the outer Solar System, bodies grew substantially larger and were destined to become the cores of giant planets. Current analytic theories and numerical simulations indicate that bodies formed during oligarchic growth underwent rapid inward orbital migration caused by tidal interactions with gas in the disk. In this talk, I will examine how planets might survive migration during oligarchic growth, and look at the variety of planetary systems that can be produced as a result of these processes.
Alicia M. Soderberg
Princeton University
"A Radio View of the GRB-SN Connection"
Over the past few years, long duration gamma-ray bursts (GRBs), including the subclass of X-ray flashes (XRFs), have been revealed to be a rare variety of Type Ibc supernova (SN Ibc). While all these events result from the death of massive stars, the electromagnetic luminosities of GRBs and XRFs exceed those of ordinary Type Ibc SNe by many orders of magnitude. The observed diversity of stellar death corresponds to large variations in the energy, velocity, and geometry of the explosion ejecta. Using multi-wavelength (radio, optical, X-ray) observations of the nearest GRBs, XRFs, and SNe Ibc, I show that while GRBs and XRFs couple at least ~10^48 erg to relativistic material, SNe Ibc typically couple less than 10^48 erg to their fastest (albeit non-relativistic) outflows. Specifically, I find that less than 3% of local SNe Ibc show any evidence for relativistic ejecta which may be attributed to an associated GRB or XRF. Recently, a new class of GRBs and XRFs has been revealed which are under-luminous in comparison with the statistical sample of GRBs.
Owing to their faint high-energy emission, these sub-energetic bursts are only detectable nearby (z <~ 0.1) and are likely 10 times more common than cosmological GRBs. In comparison with local SNe Ibc and typical GRBs/XRFs, these explosions are intermediate in terms of both volumetric rate and energetics. Yet the essential physical process that causes a dying star to produce a GRB, XRF, or sub-energetic burst, and not just a SN, remains a crucial open question. Progress requires a detailed understanding of ordinary SNe Ibc, which will be facilitated by the launch of wide-field optical surveys in the near future.
Barbara Ercolano
Harvard-Smithsonian Center for Astrophysics
"The Temperature Structure of HII Regions and Star Forming Galaxies Ionized by Multiple Spatially Distributed Sources: 3D Photoionization Models with MOCASSIN"
Spatially resolved studies of star-forming regions show that the assumption of spherical symmetry is not realistic in most cases; further complication is added by the gas being ionized by multiple non-centrally located stars or star clusters. Geometrical effects, including the spatial configuration of ionising sources, affect the temperature and ionization structure of these regions. I will present the results of our on-going theoretical investigation of these effects, which made use of 3D photoionisation models computed with the 3D Monte Carlo photoionisation code MOCASSIN for various spatial configurations and ionisation sources. In particular I will illustrate the behaviour of temperature fluctuations within the ionised region, as well as that of metallicity indicators based on strong-line methods, which are often our only means to determine the metallicity of extragalactic H II regions.
Karen M. Leighly
University of Oklahoma
"The Influence of the AGN Spectral Energy Distribution on Broad-Line Region Emission and Kinematics"
Quasars are the most luminous persistently emitting objects in the Universe.
Powered by accretion onto a supermassive black hole, many of their observable properties should be determined by the fundamental parameters of accretion: the black hole mass and accretion rate. The black hole mass and accretion rate should, in turn, determine the shape of the broad-band continuum emission arising from the accretion disk in the central engine. That broad-band continuum is responsible for powering the strong, broad emission lines that are prominent in optical and UV spectra, and are an identifying feature of AGN. Thus, the line emission may be used as a probe of the fundamental intrinsic properties of AGN, if we can break the code. I will present recent work investigating the role of the spectral energy distribution in determining AGN broad-line properties.
Neal J. Evans
University of Texas at Austin
"Star Formation: From Cores to Disks"
Observations with the Spitzer Space Telescope and complementary data at other wavelengths have provided more complete samples of star-forming regions. These provide constraints on theoretical models of the origin of the initial mass function and evolutionary stages. The early stages of star formation include the separation of dense cores from the background molecular cloud, the evolution before point source formation, the infall onto the central source, and the formation of the disk. These events are usually associated with changes in the SED associated with the Class system. The large sample available from the Cores to Disks (c2d) program provides good statistics on the numbers of objects in various stages, and these can be used to estimate timescales. The evolution of the disk to planetary systems is probed by studies of more evolved systems. We show that several paths are possible in this evolution. Finally, the evolution of chemical state from molecular cloud to planet-forming disk is revealed by infrared spectroscopy.
Nathan Smith
University of California, Berkeley
"Precursors to Supernova Explosions and Extraordinary Deaths of Very Massive Stars"
I will discuss some observations of a few recent and extraordinary supernova explosions. Among these is SN 2006gy, which radiated more luminous energy than any other supernova. It may be our first observed case of a so-called "pair instability supernova", thought to mark the deaths of the first stars in the very early Universe (although SN 2006gy was relatively nearby), and it appears to have suffered a violent precursor mass ejection just 5-10 years before the SN. It was probably the most massive star ever seen to explode, and I will mention connections to one of the most massive stars in our own Milky Way galaxy, called Eta Carinae. SN 2006gy is one of a class of supernovae that are plowing into very dense circumstellar matter ejected by the star in the decade or so preceding the final SN. I will discuss how this episodic pre-supernova mass loss, combined with some other recent clues, is forcing us to revise some of our fundamental paradigms of massive star evolution.
<urn:uuid:beb51e01-974d-4920-8f80-b2a3f38e6d08>
3.15625
3,661
Content Listing
Science & Tech.
29.957023
Authors: J. Bar-Sagi
The electromagnetic wave quantum energy depends only on its frequency, not on the emitting system's radiation power. The proportionality constant between the frequency and the quantum energy of the electromagnetic wave, Planck's constant, is at the essence of quantum mechanics. This constant is known experimentally, but until now there was no clue for calculating its value on a theoretical basis. In the present work a methodology for calculating a lower bound for Planck's constant is presented, based on simple principles. In order to get a reasonably good lower bound it is necessary to have a model of a relativistic oscillator whose period is independent of its energy and which efficiently radiates electromagnetic energy. It is highly desirable that the mathematics involved be simple enough to enable good insight into the results. Such a model can also be used for other investigations, and therefore, in this work a potential that conserves the vibration period of symmetric oscillators at relativistic velocities is found and analyzed. The electrically charged system of constant period is used to calculate a lower bound Hm of Planck's constant h. The value of Hm is smaller than h by a factor very close to √3. The explanation of this factor also explains the value of Planck's constant. From this value the fine structure constant is calculated and a new interpretation of this constant obtained.
Comments: 15 pages [v1] 14 Jul 2010
What's that black dot moving across the Sun? Possibly the clearest view of Venus crossing in front of the Sun last week was from Earth orbit. The Solar Dynamics Observatory obtained an uninterrupted vista, recording the transit not only in optical light but also in bands of ultraviolet light. Pictured above is a composite movie of the crossing set to music. Although the event might prove successful scientifically for better determining components of Venus' atmosphere, the event surely proved successful culturally by involving people throughout the world in observing a rare astronomical phenomenon. Many spectacular images of this Venus transit from around (and above) the globe are being proudly displayed.
Forcefield technology is one of the most massively useful technological advances ever made. Initially prophesied by science fiction in the early 20th century, this is one of those technologies which has developed along lines almost identical to those forecast by those early "imagineers". The first forcefield technologies were developed during the mid 21st century by the team working under Doctor Cochrane as part of the warp flight project. Since these early pioneering days, forcefield technology has diversified to the point where there are literally thousands of different types of field, each with properties carefully designed to fulfil a specific range of functions. Below is a listing of some of the more common types of forcefield currently in use. The Inertial Damping Field is one of several types of forcefield which make space flight practical. Essentially, a modern inertial damping system is a network of variable symmetry forcefields which serve to absorb the inertial forces involved in space flight; even interplanetary craft routinely accelerate at hundreds of gees, and without this protection a person within such a ship would experience an apparent weight equivalent to many tons. Most damping systems operate under the direct control of the ship's main computer systems, which allows them to anticipate the forces which will result from use of the engines. The degree of fine control which this allows is such that it is virtually impossible to tell from within that a vessel is accelerating at all, let alone to feel any discomfort. However, when the forces on a vessel are generated by an external source - such as weapons fire, for example - it is a slightly different story. In this case the system can only react rather than anticipate, and this leads to a small lag between action and reaction. This is manifested by a certain leakage through the IDF field, resulting in a noticeable effect on the passengers.
Ensuring that this effect remains within safe limits is one of the primary concerns of all starship designers.1 The structural integrity field (SIF) is another of the basic requirements for any modern spacecraft. This field is projected through the structure of a vessel, essentially turning the material into a cross between matter and forcefield. This increases the strength and rigidity by orders of magnitude, allowing the materials to withstand the stresses associated with both normal and combat operations. The structural integrity field of Starfleet vessels can also serve as a secondary backup to the ship's main shielding system if required; when run at above normal capacity the system is capable of protecting a vessel from even multiple direct hits by heavy weapons. This makes the SIF a key component in the protection of a starship.2 The shield system provides the modern starship with its principal protection against both violent natural phenomena and enemy weapons fire. Most shield systems are composed of highly focused spatial distortions which contain an energetic graviton field. The shield itself is projected by a set of transmission networks located on the hull of the ship; when matter or energy strikes the shield, field energy is concentrated at that point to create an intense localized spatial distortion.3 The shape of the field can be varied at the discretion of the tactical officer - the most common configuration is a set of curved fields which interlock to form a large bubble over the vessel4, although some users prefer to make the shields closely match the ship's hull.5 In the former case shield burn-throughs are more likely, as the shield must enclose a somewhat greater volume. However, in the latter case those burn-throughs which do occur are much more damaging, as they are directly adjacent to the hull.
Most of the information on this subject is highly classified, but since even individual vessels are known to utilize both configurations, it appears that bubble shields are preferred under certain tactical situations and conformal shields under others.6 Shields are carefully tuned to create windows which allow matter and energy to pass through under certain specific circumstances - for example, visible light is allowed to pass through unhindered. This allows the crew of a vessel to see out whilst the shields are up - or more importantly, to use visible light sensor systems. This window renders the shields invisible to the naked eye under normal circumstances. Other windows exist to allow sensors and weapons to operate through the shields.6 Impacts on the shield cause Cerenkov radiation to be released, often perceived as a flash of colour which "lights up" the shield, rendering it briefly visible. To an observer it appears that the intruding object bounces off the shields - in fact the spatial distortion becomes so great that the path of the object is radically altered, and to a zero-dimensional observer on the incoming object it appears that it is the starship which has suddenly changed location while his/her course is unchanged.3 For over a century after the invention of the shield it was impossible to use transporters to beam to or from a shielded location7, but to an extent this limitation has now been circumvented.8 In general sensor and weapon windows are insufficient to allow beaming; whilst technically there is nothing to prevent a ship opening a window in its own shields of sufficient size to allow transport, in practice such windows are almost always large enough to be detected and exploited by enemy vessels, and it is far simpler just to drop the shields briefly altogether. The more modern Starfleet shield designs have now reached a point at which transporters can be operated via a large wide-frequency window which is briefly opened over the hull emitters.
This gives greater flexibility in using the transporter during high threat situations, but it remains a somewhat risky proposition - should an enemy score even a near miss on such a window, the effects on the ship would be considerable.9 Beaming through an opponent's shields is an altogether more difficult proposition, but this can be accomplished successfully if the transporter operator has a detailed knowledge of the shield configuration s/he is attempting to beam through. A notable example of this is the occasion when the USS Enterprise managed to beam a crew member on board the USS Phoenix whilst that vessel was engaged in unlawful operations within Cardassian space10, or the Defiant's use of the transporter to board the Constitution class USS Enterprise whilst that ship was modulating its shields for sensor operation.11 Such operations remain the exception rather than the rule, however - and against the unknown shield configuration of an enemy vessel, beam-through remains impossible. The most recent advance in shielding systems is the regenerative shield. This system was employed by the Dominion in the planetary defence network around Chintaka.12 The regenerative shield allows a portion of the enemy fire to be diverted through the shield generator to reinforce the shield layer - the amount of damage that a weapon impact does is thus greatly reduced. The effectiveness of the reinforcement depends on the shield generator design, but typically the effectiveness of a shield will be increased severalfold by the addition of regenerative capacity. The Containment Field has become the standard method of confining objects and isolating them from their surroundings for a wide variety of purposes. Some of the main applications common on board the modern starship are listed below: Many medical applications of containment fields exist.
Typically these are designed to contain samples such as viruses which cannot - usually - attempt to physically force their way out of a container.13 Engineering applications include the storage of material samples collected via transporter. This generally requires higher strengths, since the samples collected can include the likes of high temperature plasmas or highly radioactive materials.14 A step up from these levels of field are those used in the shuttle or cargo bays of a starship in order to contain the atmosphere whilst allowing vehicles to pass through relatively unhindered.15 The atmospheric containment field of even a small cargo bay must hold against a force of over half a million Newtons, whilst the field used on the main hangar bay of a Galaxy class starship must withstand some two hundred and fifty times this.16 Probably the biggest use of the containment field on board a starship is in the field of security. These are generally used to block corridors17, or keep prisoners contained within the brig whilst allowing visual checks on their condition to be made.18 Starships by their very nature must employ ultra strong fields in a few selected locations. Whilst these fields can be many times stronger than even the ships main shielding system, this is usually gained by generating the field over a very restricted volume and projecting it directly within the generator network itself. Such fields are used to contain the matter-antimatter reaction within the warp core and power transfer conduits which permeate a starship. A type of forcefield used in medical applications. Quarantine fields are designed to contain biological hazards to prevent their spreading to the rest of the ship.19
- Implementing A* on a GPU is silly: it needs random access to memory, which is slow on a GPU. - A* is for finding paths... - An algorithm to find paths the 'matrix' way? Perhaps this: 1) Build a matrix A, where A(i,j) = 0 if we can reach node(j) from node(i) in one step 2) For the matrix multiplications, replace + by max, and * by + 3) Compute A^n : A(i,j) will give you the distance you need to reach node(j) from node(i), if this distance is < n - A^n is cheap to compute. Example: A^7 = (A^4) * (A^3), A^4 = (A^2) * (A^2), A^3 = (A^2) * A, A^2 = A * A; 4 multiplications instead of 7, log2(n) instead of n ;) - It could help for pathfinding, though I'm not sure it's very interesting (complexity, efficiency). Anyway, with a special matrix (paths on a square grid, for example), I'm nearly sure there are some tricks to reduce the computations. AI Algorithms on GPU Posted 11 August 2005 - 10:52 PM Original post by Sneftel Mark my words: within five years, we'll see multi-core processing on Intel and AMD processors. Get ready for that. Hmm, you do know the Athlon X2 and Pentium 4D are already available? ;-) I've had SMP dual and quad machines for years.
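For what it's worth, the textbook version of this matrix trick is the "tropical" (min, +) semiring over edge weights, rather than (max, +) over zeros: the matrix "product" takes the minimum over intermediate nodes of summed path costs, and repeated squaring gives all-pairs shortest paths in log2(n) products. A minimal CPU sketch in plain Python (function names are mine, and a real GPU version would of course map the inner loops to the hardware):

```python
import math

INF = math.inf  # "no edge" marker

def min_plus_square(A):
    # Tropical matrix product of A with itself:
    # C[i][j] = min over k of (A[i][k] + A[k][j])
    n = len(A)
    return [[min(A[i][k] + A[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def all_pairs_shortest(adj):
    # adj[i][j] = edge weight, INF if no edge, and adj[i][i] = 0.
    # Zero-cost self-loops mean each squaring doubles the path length
    # covered, so ceil(log2(n-1)) squarings reach every simple path.
    n = len(adj)
    A = [row[:] for row in adj]
    steps = 1
    while steps < n - 1:
        A = min_plus_square(A)
        steps *= 2
    return A
```

On a three-node chain 0 -> 1 -> 2 with unit edges plus a direct 0 -> 2 edge of weight 5, one squaring already finds the cheaper two-hop route of cost 2.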
Trajectory is the path a moving object follows through space. In physics class we'd create our own trajectory tests using a massive homemade slingshot – consisting of a bicycle helmet, stretchy surgical tubing and football field uprights. With a little bit of manpower and your obligatory water balloon we'd fire our "objects" down the football field. Using the measurements on the field, each student was able to figure out the trajectory of their own water balloon. Now, I'm not going to break down the formula for the trajectory of a projectile because there are too many variables that I will inevitably screw up. Plus, if you're an engineer you should have an idea of some of the formula's variations. But if you want to work on figuring out trajectory in a vacuum, without having to do much pen-to-paper problem solving, try out the Discovery Channel's NLOS Cannon Challenge. Part violent, part mathematical, it'll occupy you for a little while at least.
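For the curious, the vacuum case really is only a few lines. A minimal sketch of the standard textbook formulas (the function name and return layout are mine, not from any particular course):

```python
import math

def trajectory(v0, angle_deg, g=9.81):
    """Range, peak height, and flight time of a projectile in a vacuum."""
    theta = math.radians(angle_deg)
    vx = v0 * math.cos(theta)   # horizontal velocity component
    vy = v0 * math.sin(theta)   # vertical velocity component
    t_flight = 2 * vy / g       # time up equals time back down
    return {
        "range": vx * t_flight,       # horizontal distance traveled
        "peak": vy ** 2 / (2 * g),    # maximum height reached
        "time": t_flight,
    }
```

Launching at 45 degrees gives the maximum range, v0 squared over g, which is one quick sanity check on the water-balloon numbers.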
Liquids Conducting Current Name: Nathan U. I am reviewing some materials for the electricity unit in my physics classes. I have come across similar passages in different texts as follows: "A conductor in both the solid state and liquid state can carry a current to complete an electrical circuit. However, a solid conductor involves the movement of electrons through the atomic lattice, whereas a liquid conductor transports ionic charges through the solution." (Northwestern University Materials World Modules, 2002). I am puzzled by this generalization of liquids. How does liquid mercury or sodium (nuclear reactor design) conduct electricity? I am not aware of the requirement of ions. It is too bad; those passages seem to have over-generalized, quite detrimentally to the reader's understanding. Every metal, whether liquid or solid, has easily-moving electrons in an "electron sea". Virtually everything that is silver-gray colored and electrically conductive is in this category. So you were correct to notice that mercury or molten sodium are metals, and conduct electronically rather than ionically. Metallic (electron) conductors range in color from bright silvery to dark silvery gray to black (such as graphite). Only a few have non-neutral colors (examples: copper, gold, titanium nitride). Salt water is the prototypical example of an ionically conducting liquid. There are ionic conductors that are solid. Examples are zirconium oxides and many lithium-containing mixed oxides. They get used in fuel cells, in some batteries, and in every car's smog-sensor. They tend to be roughly white and ceramic-brittle, not silvery and not metallic-looking. Admittedly they do not conduct electricity very well compared with metals or the best ionic liquids. It takes many square meters or high temperatures to get an ampere of useful current through them. But ionically conductive solids and liquids share this correlation: ionic conductors are not metal-colored. They tend to be clear or white or off-white, usually not including black.
They might easily have non-neutral colors such as red, orange, yellow, green, or blue tint. True, when one imagines electrically conductive liquids, water-related ionic liquids are the most commonly experienced. It is equally predominant in common experience that conductive solids are metals. However, some word of qualification, such as "typically" or "usually" or "commonly", is missing from the passage you quoted. This makes it sound like there is a scientific understanding which requires solids to conduct electronically and liquids ionically. This is the core misrepresentation in the passage. It might be more scientifically illuminating to the reader to start from "electronic" and "ionic" conduction, and work towards solid and liquid. A conductor must have mobile carriers of charge. Possible charged particles include electrons, protons, and heavier positive and negative ions. If the mobile charges are tiny, slippery electrons, the remaining bulk of the mass is likely to be able to sustain the structure of a solid. But such a lattice can easily melt, and if it does the free-electron conductivity will not be greatly impaired. If the charges are large ions, the bulk will likely need to be a liquid to let a useful amount of charge move. However, some ions are small and some solids have large passages, allowing some solids to conduct ionically. At higher temperatures, say 1500 °C, there are many melted metals and melted salts, and there will be about as many metallic conductive liquids as ionic ones. Above 2500 °C, there are few insulating solids. The few solids remaining are all either electronically or ionically conducting. Again, similar in number. At cryogenic temperatures, atom-sized particles with strong charges or polar groups will always stick together. So there will be no ionic conductors, liquid or solid. Only electronic conductivity can work there. And insulators. Update: June 2012
Figure 2. Distribution of mapped sand and gravel aquifers in Maine, shown in green. Northwestern Maine is currently unmapped for sand and gravel aquifers but undoubtedly has some. Last updated on October 6, 2005
Application development is not an easy task, and design is an essential part of it. As its name suggests, a user interface is something users face every time they use your program. That is why it is necessary to create applications that meet users' expectations of how the user interface should look. A good user interface is, above all, easy to use. However, many developers don't burden themselves with user interface design work, thinking that the only things needed are clever code and an interesting color scheme. User interface design work consists of creating an interface that allows users who understand the problem domain to work with your application without reading manuals or getting training. The most important goal in user interface design is to make the interface truly intuitive, because the more intuitive an interface is, the easier it is to use. In addition, you can cut training and support costs by offering a self-explanatory interface in your application. Now, let's get to know user interface design work step by step. There is a collection of principles for improving the quality of your user interface design. The collection goes as follows: The first is the structure principle: your design should organize the user interface purposefully, in useful, meaningful, apparent, and recognizable models. The structure principle should be applied to your overall user interface architecture. The second is the simplicity principle, by which your design should enable simple tasks to be performed easily. This can be achieved by using meaningful shortcuts and buttons that relate to similar procedures. The next is the visibility principle, which consists in keeping all the options and materials needed for a given task in view without burdening the interface with obviously redundant information.
We recommend that you not overwhelm the user with many alternative ways to do the same task or operation. One of the most important principles is the feedback principle: your design should keep users informed about actions or interpretations that concern them, in language familiar to the user. The tolerance principle holds that your design should be tolerant and flexible, decreasing the cost of human mistakes. It is achieved by providing undo and redo buttons to restore information that may be lost due to inaccurate user actions. These rules ensure good user interface design work that will bring many clients to you. They can also be used to design mobile communication devices in order to increase their usefulness. Remember that what functionality your software provides and how technically superior your software is doesn't really matter if your users don't like the user interface. That's why you should pay close attention to user interface design work.
by TIM VASQUEZ / www.weathergraphics.com This article is a courtesy copy placed on the author's website for educational purposes as permitted by written agreement with Taylor & Francis. It may not be distributed or reproduced without express written permission of Taylor & Francis. More recent installments of this article may be found at the link which follows. Publisher's Notice: This is a preprint of an article submitted for consideration in Weatherwise © 2007 Copyright Taylor & Francis. Weatherwise magazine is available online at: http://www.informaworld.com/openurl?genre=article&issn=0043-1672&volume=60&issue=4&spage=86. PART ONE: The Puzzle Some of the most violent weather on the planet descends on the Great Plains during the spring months. The Rocky Mountains, which stretch north to south, help bring cold Canadian air southward and very moist Gulf air northward, resulting in the oversimplified scenario known as the "battle of the air masses". In this puzzle we'll take a close look at a major severe weather situation that occurred in spring 2007. In addition to fronts it will be important to check for wind shift lines and troughs. Advanced readers are encouraged to find the dryline, a boundary which separates warm air with high dewpoints from warm dry air. This weather map is for the afternoon hours in May. Draw isobars every two millibars (1008, 1006, 1004, etc.) using the plot model example at the lower right as a guide. As the plot model indicates, the actual millibar value for plotted pressure (xxx) is 10xx.x mb when the number shown is below 500, and 9xx.x when it is more than 500. For instance, 027 represents 1002.7 mb and 892 represents 989.2 mb. Therefore, when one station reports 074 and a nearby one shows 086, the 1008 mb isobar will be found halfway between the stations. Then try to find the locations of fronts, highs, and lows. 
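The pressure-decoding convention described above can be sketched as a small helper function (plain Python, with a name of my choosing; it is not part of the original puzzle):

```python
def decode_pressure(code):
    """Decode a 3-digit plotted sea-level pressure (tenths of a millibar,
    with the leading '10' or '9' dropped) back to millibars.
    Per the plot model: codes below 500 mean 10xx.x mb, others 9xx.x mb."""
    value = code / 10.0
    return 1000.0 + value if code < 500 else 900.0 + value
```

So a station plotting 027 decodes to 1002.7 mb, 892 decodes to 989.2 mb, and the 074/086 pair from the example decode to 1007.4 and 1008.6 mb, putting the 1008 mb isobar between them.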
Click to enlarge * * * * * Scroll down for the solution * * * * * PART TWO: The Solution May 4, 2007 brought one of several significant tornado outbreaks recorded in 2007 (touchdowns are highlighted with red inverted triangles). The map in this issue shows the weather for 4 p.m. Central Time that afternoon, a few hours before a series of tornadic supercell storms developed in southwest Kansas. These storms produced at least 11 tornadoes. The tornado which hit Greensburg, Kansas had the rare distinction of reaching the highest category on the Fujita damage scale. It destroyed nearly all of the town and resulted in ten deaths. A key feature on the charts was the dryline, which is found between warm, moist air and warm dry air and often serves as a breeding ground for thunderstorms. On the afternoon of May 4 it separated 40s and 50s dewpoints from 60s dewpoint readings to the east. The tornadic storms were intensified by the tongue of 70-degree dewpoints stretching from central Oklahoma into south central Kansas. Since instability is highly proportional to the moisture in a parcel, the higher dewpoint temperatures in this zone allowed for much stronger updrafts and much more intense storm activity. The chart also shows backed winds along a trough in central Kansas. In the lower levels of the atmosphere, backing, a counterclockwise variation in the wind, augments the type of shear which favors tornadic storms. The southeasterly winds shown in central Kansas provide a richer environment for tornadoes compared to the southerly winds in Oklahoma and Texas. Another prime weather pattern for severe weather is the warm front shown in Nebraska extending into northwest Kansas. The warm front is often associated with backed winds and serves as a source of convergence which allows storms to develop. On this day, however, the warm front was not a major severe weather producer and some storm chasers found themselves too far north.
The storms preferred to develop in the richer moisture to the south near the dryline and in proximity to the weaker features in southern Kansas. It illustrates the fact that forecasters can't just look at fronts. They have to look at the subtlest boundaries and features using as much data as possible including radar, satellite, and mesonet data. Failure to identify a boundary can lead to a missed forecast. In many respects, severe storms forecasters are the keepers of meteorology's fine-toothed comb. Click to enlarge ©2007 Taylor & Francis All rights reserved
According to quantum mechanics, electrons bound to atoms occur in specific electronic configurations. The highest energy configuration (or energy band) that is normally occupied by electrons for a given material is known as the valence band, and the degree to which it is filled largely determines the material's electrical conductivity. In a typical conductor (metal), the valence band is about half filled with electrons, which readily move from atom to atom, carrying a current. In a good insulator, such as glass or rubber, the valence band is filled, and these valence electrons have very little mobility. Like insulators, semiconductors generally have their valence bands filled, but, unlike insulators, very little energy is required to excite an electron from the valence band to the next allowed energy band, known as the conduction band because any electron excited to this higher energy level is relatively free. For example, the bandgap for silicon is 1.12 eV (electron volts), and that of gallium arsenide is 1.42 eV. This is in the range of energy carried by photons of infrared and visible light, which can therefore raise electrons in semiconductors to the conduction band. (For comparison, an ordinary flashlight battery imparts 1.5 eV to each electron that passes through it. Much more energetic radiation is required to overcome the bandgap in insulators.) Depending on how the semiconducting material is configured, this radiation may enhance its electrical conductivity by adding to an electric current already induced by an applied voltage (see photoconductivity), or it may generate a voltage independently of any external voltage sources (see photovoltaic effect). Photoconductivity arises from the electrons freed by the light and from a flow of positive charge as well. Electrons raised to the conduction band correspond to missing negative charges in the valence band, called holes. Both electrons and holes increase current flow when the semiconductor is illuminated.
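The bandgap figures above can be checked with the Planck relation E = hc/λ. A minimal sketch using standard constants (the function name is illustrative, not from the source):

```python
H = 6.626e-34    # Planck's constant, J*s
C = 2.998e8      # speed of light, m/s
EV = 1.602e-19   # joules per electron volt

def photon_energy_ev(wavelength_nm):
    """Photon energy E = h*c/wavelength, converted to electron volts."""
    return H * C / (wavelength_nm * 1e-9) / EV
```

A green photon near 550 nm carries roughly 2.25 eV, comfortably above silicon's 1.12 eV bandgap, which is why visible light can promote electrons to the conduction band.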
In the photovoltaic effect, a voltage is generated when the electrons freed by the incident light are separated from the holes that are generated, producing a difference in electrical potential. This is typically done by using a p-n junction rather than a pure semiconductor. A p-n junction occurs at the juncture between p-type (positive) and n-type (negative) semiconductors. These opposite regions are created by the addition of different impurities to produce excess electrons (n-type) or excess holes (p-type). Illumination frees electrons and holes on opposite sides of the junction to produce a voltage across the junction that can propel current, thereby converting light into electrical power. Other photoelectric effects are caused by radiation at higher frequencies, such as X rays and gamma rays. These higher-energy photons can even release electrons near the atomic nucleus, where they are tightly bound. When such an inner electron is ejected, a higher-energy outer electron quickly drops down to fill the vacancy. The excess energy results in the emission of one or more additional electrons from the atom, which is called the Auger effect. Also seen at high photon energies is the Compton effect, which arises when an X-ray or gamma-ray photon collides with an electron. The effect can be analyzed by the same principles that govern the collision between any two bodies, including conservation of momentum. The photon loses energy to the electron, a decrease that corresponds to an increased photon wavelength according to Einstein's relation E = hc/λ. When the collision is such that the electron and the photon part at right angles to each other, the photon's wavelength increases by a characteristic amount called the Compton wavelength, 2.43 × 10⁻¹² metre.
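The quoted Compton wavelength follows from λ_C = h/(m_e c); a quick numerical check with standard constants:

```python
H = 6.626e-34     # Planck's constant, J*s
M_E = 9.109e-31   # electron rest mass, kg
C = 2.998e8       # speed of light, m/s

# Wavelength increase when the photon and electron part at right angles:
# the Compton wavelength, lambda_C = h / (m_e * c)
compton_wavelength = H / (M_E * C)   # ~2.43e-12 m
```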
Time Dilation and Quasars In May of 2001 Hawkins published a paper called "Time Dilation and Quasar Variability". Part of the abstract reads as follows. "We find that the timescale of quasar variation does not increase with redshift as required by time dilation. Possible explanations of this result all conflict with widely held consensus in the scientific community." http://xxx.lanl.gov/abs/astro-ph/0105073 The conflict arises since this indicates that space-time is not expanding. This is contrary to the evidence of type 1a supernovas that confirms the time dilation effect due to the expansion of space. Initially this topic was posted by Dunash on this BB on January 10, 2002, but there was no follow-up discussion of his posting. http://www.badastronomy.com/phpBB/viewtopic.php? I am appreciative of dgruss23 bringing up the paper in the course of a poll discussion called "Is the expansion of space-time accelerating or decelerating?". http://www.badastronomy.com/phpBB/vi...2&start=50 (Page 3) I believe reference to this paper may also have been made in a discussion about the red shift, but I could not find it; I think I remember reading it there. Hopefully someone will provide additional links to preserve the reference value of this BB. I thought that a more thorough discussion of this topic is in order on its own, since it provides evidence that something is wrong with current cosmological models. I will attempt a "layman's" description of the report. Hopefully someone with more expertise will provide a more explicit description. Time dilation generally refers to an increase in the observed time a physical process takes. There are at least two possible physical interpretations or descriptions for time dilation. The most common is the application of special relativity. Time progresses comparatively slower for a moving object, so an object observed in the past with a high velocity (indicated by red shift) will have physical processes occur at a slower rate.
The decay of a muon entering the earth's atmosphere is a classic example of how a physical process is slowed when an object is moving at near light speed velocities. The time scale of rapidly moving objects can be described by how long a physical process takes to occur, as predicted by special relativity. Specifically, the time scale Ts can be described in terms of the red shift z as follows: Ts = Tm/Tl = 1 + z, where Tm is the time interval measured for the moving object, Tl is the time interval measured locally ("at rest"), and z is the fractional shift in wavelength. The other physical interpretation is that the expansion of space-time itself results in a time dilation. Let's say that we are at a bowling alley and we roll two balls down the alley separated by 1 second of time. The distance between the two balls remains essentially constant while traveling down the alley (ignoring friction effects). The two balls will arrive at the end of the alley one second apart. Now let's throw the two balls again with a 1 second separation, but this time the bowling alley is "stretched" while the balls roll down it. This will physically increase the distance between the two balls. For example, instead of the balls being 2 meters apart, they can end up being 4 meters apart. When the balls reach the end of the alley, in this example, the separation in time between their arrivals will now be 2 seconds. (I am ignoring the effect of the expansion on the velocity and energy of the balls, at least for this posting, since the possible variance in the speed of light and the loss of energy of a photon (instead of a bowling ball) with the expansion of space-time is a whole other issue.) I prefer this explanation of the cosmological red shift since it keeps galaxies "at rest" locally, allowing them to be carried by the expansion of space-time. Regardless of the model, the basic general effect of time dilation will be the same. The time dilation will be Td = 1+z.
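The Td = 1 + z rule amounts to a one-line calculation; a minimal sketch (the function name is mine, not from the Hawkins paper):

```python
def observed_duration(rest_days, z):
    """Duration of an event as seen by us, stretched by cosmological
    time dilation: the rest-frame duration times (1 + z)."""
    return rest_days * (1 + z)
```

So a quasar cycle lasting 1000 days in its own frame should appear to last 2000 days at z = 1 and 4000 days at z = 3; it is exactly this (1 + z) stretching that the Hawkins result fails to find.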
A process that took 1 second to occur in a “rest” frame will take 2 seconds to occur as measured by an observer if the object producing the effect has a cosmological red shift of z = 1. I am sure some will provide a better explanation of time dilation, and different interpretations, but I hope this gives the reader a general idea. (In the application of my uniform expansion hypothesis (www.uniformexpansion.com) both special relativity and expansion result in time dilation, but one of the effects is unobservable due to the specific geometric rate at which the expansion occurs. This would alter the assumed distance of 1asn’s and the assumed “acceleration” (really deceleration) indicated by such. It also addresses the issue of no observed time dilation effects in the variation of energy output of quasars. This is merely an aside for now; it is hoped that the postings of others will provide additional explanations and perspectives.)

The time variance of quasars

The time variance of quasars, while not described in the Hawkins paper, is based upon observed variation in the energy output from quasars. It is the extreme variance of the energy output of quasars over short periods of time that has helped determine the size of quasars. Quasars put out about 1,000 times the energy of an entire galaxy, from a region of space 100,000 times smaller. Of course this is based upon the assumption that the cosmological red shift correlates not only to a velocity measure describing the expansion of space but also to a distance measure (v = Ho x D, and v causes the red shift). (Some, the “tired light proponents”, will take issue with this assumption, arguing that quasars are much closer.) I regret not being able to find a link with a graph illustrating the time variance of the energy output of quasars, so I will try to describe verbally a graph of quasar 3C 279, which is in one of my texts.
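As a quick numeric illustration of the Td = 1+z stretching described above (my own sketch, not from the Hawkins paper):

```python
# Observed duration of a process stretched by cosmological time dilation,
# following the Td = 1 + z rule: a clock at redshift z appears to run
# (1 + z) times slower to us.
def observed_duration(rest_duration_days, z):
    return rest_duration_days * (1 + z)

# A 20-day rest-frame event seen at various redshifts:
print(observed_duration(20, 0.5))  # 30.0 days
print(observed_duration(20, 1.0))  # 40.0 days
print(observed_duration(20, 3.0))  # 80.0 days
```

This is exactly the scaling that was expected, but not found, in the quasar variability data.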
One of the most dramatic peak cycles of energy output shows the luminosity varying by 7 magnitudes over a period of about 1200 days (rise, peak, and fall). There are a number of smaller cycles (rise, peak, and fall) with a variation of 2 magnitudes over about 800 days. Amongst this variation there are additional variations of about 1 magnitude, or perhaps a bit more, over the passage of just 50 or so days. There is also some variation of 1 magnitude over periods of only a few days. A very “noisy” graph. While there is great variation in the cycles of energy output from quasars, there is a discernible pattern: large energy peaks last longer than short energy peaks. Large peaks tend to last a thousand days, etc. Mathematically, it is possible to extract frequency relationships using a Fourier-based transformation with what is called a power spectrum analysis. This allows a statistical treatment of cyclic processes with a “noise” component. It works best if whole numbers of cycles are in the mix, but if there are sufficient numbers of cycles within the analysis, this constraint is not that critical. Categorizing cycle events helps in the statistical evaluation: “large” energy output events last over 1000 days, etc.

The anticipated result

It was anticipated that the further away a quasar was observed, as indicated by its red shift, the greater the time dilation of the cycles observed in its energy output. The increase in the period of the cycles should correspond to increasing red shift; specifically, it was anticipated that the cycle length should vary by 1+z. For example, the period of “averaged” cycles for a quasar with z = 3 should be two times greater than for a quasar with z = 1, since (1+3)/(1+1) = 2. No such effect was observed. This is opposite to the results found with type 1a supernovae.
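To make the power spectrum idea concrete, here is a minimal sketch of my own (the 100-day period and the noise level are made up, not taken from any quasar data): we bury a periodic cycle in noise, then recover its period from the peak of the Fourier power spectrum.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fake light curve: one 100-day cycle buried in noise, sampled daily
# for 2000 days (so many whole cycles are in the mix).
t = np.arange(2000)                      # days
signal = np.sin(2 * np.pi * t / 100.0)   # 100-day cycle
flux = signal + 0.5 * rng.standard_normal(t.size)

# Power spectrum: squared magnitude of the FFT at each frequency.
power = np.abs(np.fft.rfft(flux)) ** 2
freqs = np.fft.rfftfreq(t.size, d=1.0)   # cycles per day

# The strongest nonzero frequency sits near 1/100 cycles per day,
# i.e. the 100-day cycle, despite the noise.
peak = freqs[1:][np.argmax(power[1:])]
```

This is the sense in which a power spectrum lets one assign a characteristic timescale to a “noisy” quasar light curve; comparing those timescales across redshifts is then the time dilation test.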
It is assumed that Type 1a supernovae always result from the explosion of a white dwarf star of about 1.44 solar masses (barring the variation induced by rotational effects of the two stars involved and by the companion star losing mass to the white dwarf). (This also assumes that high red shift 1asn’s are the same as “local” ones, an assumption I have issues with.) Since the mass involved in the supernova is assumed to be the same, the duration of the event should be generally the same. Time dilation should increase the observed duration of the 1asn by a factor of (1+z). This time dilation is observed in the light curves of high red shift supernovae: the “explosion” takes longer to occur the greater the red shift (generally). How can one process, associated with supernovae, indicate time dilation associated with red shift, while another process, associated with quasars, indicates no time dilation associated with red shift?
Roots of the Derivatives of a Certain Real Polynomial in the Complex Plane With and real, is a real polynomial with three roots: and . This Demonstration shows the roots of , , …, . Each of these derivatives has at most two nonreal roots (depending on the value of the real root ). As varies over the reals, these nonreal roots trace out ellipses in the complex plane. The nonreal roots of the first derivative lie on a circle, while those of the higher derivatives lie on successively narrower ellipses. Set the value of , and slide along the real line to see the roots of the derivatives move in real time. This problem came out of an undergraduate research project conducted at Randolph-Macon College in 2006. The ellipse associated with the derivative has semimajor axis and semiminor axis . Further investigation showed that these results could be generalized to a broader class of polynomials. These results have been submitted for publication in the Pi Mu Epsilon Journal. The interested reader might also enjoy the article "Roots of polynomials and their derivatives," by Bruce Torrence. It appeared in Mathematica in Education and Research (10) 2, 2005, pp. 71-80.
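The Demonstration's formulas did not survive extraction above, so the following is a hypothetical reconstruction of my own: a cubic consistent with the description (three fixed roots, one real parameter, and first-derivative nonreal roots on a circle) is p(x) = (x² + 1)(x − a) with a real, i.e. roots i, −i, and a. The sketch sweeps a over the reals and collects the nonreal roots of p′ numerically.

```python
import numpy as np

# Hypothetical example polynomial: p(x) = (x^2 + 1)(x - a), a real,
# expanded as x^3 - a x^2 + x - a. This choice is my assumption; the
# Demonstration's actual polynomial was lost in extraction.
def nonreal_derivative_roots(a, order=1):
    coeffs = np.array([1.0, -a, 1.0, -a])
    deriv = np.polyder(coeffs, m=order)      # coefficients of p^(order)
    roots = np.roots(deriv)
    return roots[np.abs(roots.imag) > 1e-9]  # keep only nonreal roots

# For |a| < sqrt(3), the first derivative 3x^2 - 2ax + 1 has a conjugate
# pair of nonreal roots; sweeping a traces them around a circle of
# radius 1/sqrt(3) centered at the origin.
circle_points = [r for a in np.linspace(-1.5, 1.5, 61)
                 for r in nonreal_derivative_roots(a)]
```

For this particular cubic one can check by hand that the nonreal critical points are (a ± i√(3 − a²))/3, whose modulus is always 1/√3, matching the "circle" behavior the Demonstration describes for the first derivative.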
In What is cloning? we learned what it means to clone an individual organism. Given its high profile in the popular media, the topic of cloning brings up some common, and often confusing, misconceptions. Misconception #1: Instant Clones! Let's say you really wanted a clone to do your homework. After reviewing What is Cloning? and Click and Clone, you've figured out, generally, how this would be done. Knowing what you know, do you think this approach would really help you finish your homework...this decade? A common misconception is that a clone, if created, would magically appear at the same age as the original. This simply isn't true. You remember that cloning is an alternative way to create an embryo, not a full-grown individual. Therefore, that embryo, once created, must develop exactly the same way as would an embryo created by fertilizing an egg cell with a sperm cell. This will require a surrogate mother and ample time for the cloned embryo to grow and fully develop into an individual. Misconception #2: Carbon Copies! Your beloved cat Frank has been a loyal companion for years. Recently, though, Frank is showing signs of old age, and you realize that your friend's days are numbered. You can't bear the thought of living without her, so you contact a biotechnology company that advertises pet cloning services. For a fee, this company will clone Frank using DNA from a sample of her somatic cells. You're thrilled: you'll soon have a carbon copy of Frank - we'll call her Frank #2 - and you'll never have to live without your pal! Right? Not exactly. Are you familiar with the phrase "nature versus nurture?" Basically, this means that while genetics can help determine traits, environmental influences have a considerable impact on shaping an individual's physical appearance and personality. For example, do you know any identical twins? They are genetically the same, but do they really look and act exactly alike? 
So, even though Frank #2 is genetically identical to the original Frank, she will grow and develop in a completely different environment than the original Frank, will have a different mother, and will be exposed to different experiences throughout her development and life. Therefore, there is only a slim chance that Frank #2 will closely resemble the Frank you know and love. Supported by a Science Education Partnership Award (SEPA) [No. 1 R25 RR16291-01] from the National Center for Research Resources, a component of the National Institutes of Health, Department of Health and Human Services. The contents provided here are solely the responsibility of the authors and do not necessarily represent the official views of NCRR or NIH. Here, kitty, kitty On December 22, 2001, a kitten named CC made history as the first cat - and the first domestic pet - ever to be cloned. CC and Rainbow, the donor of CC's genetic material, are pictured below. But do you notice something odd about this picture? If CC is a clone - an exact genetic copy - of Rainbow, then why don't they look exactly alike? The answer lies on the X chromosome. In cats, a gene that helps determine coat color resides on this chromosome. Both CC and Rainbow, being females, have two X chromosomes. (Males have one X and one Y chromosome.) Since the two cats have the exact same X chromosomes, they have the same two coat color genes, one specifying black and the other specifying orange. So why do they look different? Very early in her development, each of Rainbow's cells "turned off" one entire X chromosome - and therefore, turned off either the black color gene or the orange one. This process, called X-inactivation, happens normally in females, in order to prevent them from having twice as much X-chromosome activity as males. It also happens randomly, meaning that not every cell turns off the same X chromosome.
As a result, Rainbow developed as a mosaic of cells that had one or the other coat color gene inactivated - some patches of cells specified black, other patches specified orange, and still others specified white, due to more complex genetic events. This is how all calico cats, like Rainbow, get their markings. CC looks different because the somatic cell that Rainbow donated to create her contained an activated black gene and an inactivated orange gene. What's interesting is that, as CC developed, her cells did not change that inactivation pattern. Therefore, unlike Rainbow, CC developed without any cells that specified orange coat color. The result is CC's black and white tiger-tabby coat. Rainbow and CC are living proof that a clone will not look exactly like the donor of its genetic material. For more information about CC, Rainbow, coat color and X-inactivation, see Additional Resources.
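The random-versus-fixed inactivation described above can be sketched as a toy simulation (my own illustration, not from the article): in normal development each embryonic cell independently and permanently silences one X, producing a mosaic, while a clone grown from a single donor cell inherits that one cell's already-fixed choice.

```python
import random

# Toy model of X-inactivation: each early embryonic cell randomly and
# permanently keeps one X chromosome active, so a female heterozygous
# for black/orange coat genes develops as a mosaic of color patches.
def x_inactivation(n_cells, seed=None):
    rng = random.Random(seed)
    return [rng.choice(["black", "orange"]) for _ in range(n_cells)]

# Rainbow: random inactivation -> a mix of black and orange patches.
rainbow_coat = x_inactivation(1000, seed=1)

# CC: cloned from a single donor cell whose orange gene was already
# inactivated, so the pattern is fixed -- every cell expresses black.
cc_coat = ["black"] * 1000
```

The point of the sketch is simply that the clone's uniformity comes from starting with one cell's inactivation pattern, not from any difference in the genes themselves.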
In the limit of clean surfaces, friction has its origins in the microscopic, chemical interactions at the interface between the two objects in question. One of the more amazing (to me, anyway) consequences of this is the extremely important role played by commensurability between the surfaces. Let me explain with an example. Consider a gold crystal terminated at the (111) surface, and another gold crystal also terminated at the (111) surface. Now, if those two surfaces are brought into contact, with the right orientation so that they match up as if they were two adjacent layers of atoms inside a larger gold crystal, what will happen? The answer is, in the absence of adsorbed contaminants, the surfaces will stick. This is called "cold welding". In contrast, if you bring together two ultraclean surfaces that are incommensurate, they can slide past each other with essentially no friction. This is called "superlubricity". Here are two great examples (pdf of first one; pdf of second one) of this. In this new paper, Liu et al. are able to do some very cute experiments in this regard, looking at the motion of exfoliated thin graphite flakes sliding on graphite pedestals. It's clear from the observations that graphite flakes shifted relative to the underlying graphite substrate can slide essentially frictionlessly over micron scales. Very neat and elegant, and surprising since there is no rotation at work here to break commensurability. This is a very firm reminder that our macroscale physical intuition about materials and their interactions can fail badly at the nanoscale.
Name: Mike T.

How do emitted photons instantaneously travel at the speed of light since they were not accelerated? At one instant there is no photon, and at the next instant, it miraculously is already travelling at the speed of light.

Hi, Mike! A photon - as you know - is an electromagnetic (EM) wave. Its energy comes in quanta described by E = hν (Planck's constant times frequency). All EM waves travel in the vacuum at the speed of light. A photon is a form of energy and results from transformations of other forms of energy. When the photon appears it behaves like all EM waves; it doesn't need to be accelerated. All bodies emit EM waves, even a piece of ice. The question concerns more the frequency: visible light covers a range of wavelengths between 400 and 700 nanometers. When you mention in your question that "...emitted photons instantaneously travel at speed of light and were not accelerated...", you are thinking of particle behaviour, which needs acceleration, right? Then we find ourselves facing the problem of the dual particle-wave behaviour of matter. You are thinking of particles and asking a question about waves. Waves do not need to be accelerated. Particles do, but we are talking about wave-photons, aren't we? And - besides that - photons are generally regarded as particles with zero mass and no electric charge... (again, the dual behaviour of matter).

This is not an easy question and is not treated in introductory spectroscopic texts or quantum mechanical texts, at least the ones I could find. It involves time dependent quantum mechanics. The best treatment I have been able to find so far is on the web site:

Newton's laws of physics do not apply to objects moving near the speed of light, or to objects smaller than an atom. Small scale requires quantum physics. Large speed requires relativity. Light fits both criteria. Also, light photons have zero mass. This is another fact that makes them unusual.
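As a side note to the E = hν point above, the photon energies for the visible range mentioned (400-700 nm) are easy to compute; this quick sketch is my own addition, not part of the original answers:

```python
# Photon energy E = h * f = h * c / wavelength.
h = 6.626e-34   # Planck's constant, J*s
c = 2.998e8     # speed of light in vacuum, m/s

def photon_energy_joules(wavelength_m):
    return h * c / wavelength_m

# Visible light spans roughly 400 nm (violet) to 700 nm (red):
e_violet = photon_energy_joules(400e-9)  # ~5e-19 J
e_red = photon_energy_joules(700e-9)     # ~2.8e-19 J
```

Shorter wavelength means higher frequency and therefore a more energetic quantum, which is why the question is really about frequency rather than acceleration.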
At the level of quantum physics, things change form at almost random times. You cannot predict where something will be; you can only say it has a probability of being somewhere. Objects under the right conditions can change into other objects without being forced to. A photon flying through space can change into an electron and anti-electron, or vice-versa. Quantities such as total momentum, total energy, and total electric charge must be conserved, but many things can change. We do not know how or why it happens. We can deal with a bundle of millions of particles, working with average effects (a baseball is millions of protons, neutrons, and electrons; a beam of light is millions of photons). We do not know how an individual particle works. We do not know what actually happens during the process of changing, only before and after. We don't know whether it requires any time at all. It is almost like a change of reality, switching between a universe with a photon and a universe with an electron and anti-electron.

Dr. Ken Mellendorf
Illinois Central College

Update: June 2012
ASTR 160 - Lecture 3 - Our Solar System and the Pluto Problem

Class begins with a review of the first problem set. Newton's Third Law is applied in explaining how exoplanets are found. An overview of the Solar System is given; each planet is presented individually and its special features are highlighted. Astronomy is discussed as an observational science, and the subject of how to categorize objects in the Solar System is addressed. The Pluto controversy is given special attention, and both sides of the argument regarding its status are considered.

The Pluto Controversy (links to various sites related to the demotion of Pluto to the status of "dwarf planet")
Ever since the wheel was invented more than 5,000 years ago, people have been inventing new ways to travel faster from one point to another. The chariot, bicycle, automobile, airplane and rocket were all invented to decrease the amount of time we spend getting to our desired destinations. Yet each of these forms of transportation shares the same flaw: it requires us to cross a physical distance, which can take anywhere from minutes to many hours depending on the starting and ending points. But what if there were a way to get you from your home to the supermarket without having to use your car, or from your backyard to the International Space Station without having to board a spacecraft? There are scientists working right now on such a method of travel, combining properties of telecommunications and transportation to achieve a system called teleportation. In this article, you will learn about experiments that have actually achieved teleportation with photons, and how we might be able to use teleportation to travel anywhere, at any time. Teleportation involves dematerializing an object at one point and sending the details of that object's precise atomic configuration to another location, where it will be reconstructed. What this means is that time and space could be eliminated from travel -- we could be transported to any location instantly, without actually crossing a physical distance. Many of us were introduced to the idea of teleportation, and other futuristic technologies, by the short-lived Star Trek television series (1966-69) based on tales written by Gene Roddenberry. Viewers watched in amazement as Captain Kirk, Spock, Dr. McCoy and others beamed down to the planets they encountered on their journeys through the universe. In 1993, the idea of teleportation moved out of the realm of science fiction and into the world of theoretical possibility.
It was then that physicist Charles Bennett and a team of researchers at IBM confirmed that quantum teleportation was possible, but only if the original object being teleported was destroyed. This revelation, first announced by Bennett at an annual meeting of the American Physical Society in March 1993, was followed by a report on his findings in the March 29, 1993 issue of Physical Review Letters. Since that time, experiments using photons have proven that quantum teleportation is in fact possible.
Coalescence with purifying selection

How positive selection affects genealogies is demonstrated by the script genealogies_with_selection.py and discussed on the page Genealogies 1. Here, we extend this discussion to purifying selection. Instead of an infinite sites model, we now use a finite sites model. Recurrent mutations are injected at rate U. The population will settle into a state where the influx of deleterious mutations is balanced by rare back-mutations; hence this approximates a state where the majority of mutations are deleterious. Mutations have effect size s. The entire script can be viewed at genealogies_with_selection.html or downloaded as genealogies_with_selection.py. Everything is more or less the same as in the example on positive selection, only that the parameters are set as

N = 10000  # population size
s = -1e-2  # single site effect
U = 0.1    # genome wide mutation rate
r = 0.0    # outcrossing rate

No sex, frequent deleterious mutations

The following shows three trees sampled from a large asexual population suffering from many deleterious mutations. The genealogies are strongly distorted and show long terminal branches and uneven branching.

Outcrossing reduces interference

To demonstrate how outcrossing reduces interference, the following shows the genealogy in an obligate outcrossing population with a map length of 10. The population size was reduced to N = 1000 to speed up the simulation. Coalescence takes much longer than in the asexual example, even though the population size is smaller by a factor of 10. Open the script in your favorite text editor, change parameters, and rerun to explore.