text
large_stringlengths
148
17k
id
large_stringlengths
47
47
score
float64
2.69
5.31
tokens
int64
36
7.79k
format
large_stringclasses
13 values
topic
large_stringclasses
2 values
fr_ease
float64
20
157
This is an image of the surface of Io. Click on image for full size Can there be Life in the Environment of Io? In spite of the fact that Io has an atmosphere, the environment of Io seems pretty unfriendly toward life as we know it Io is very small, so there is not much gravity. For this reason, the atmosphere of Io is constantly drifting away. If it were not for the volcanic eruptions, there would be no atmosphere at all. Io is inside the enormously powerful magnetosphere of Jupiter. With very little atmosphere, there is little protection from the radiation of the charged particles of the magnetosphere. With little atmosphere, there is hardly a buffer between the surface and space itself. This means that the temperature above the surface is very cold. On the other hand, with all the volcanos, the surface of Io is frequently molten, therefore the surface itself is sometimes super-hot! All together these things add up to a very hostile environment for life as we know it. Shop Windows to the Universe Science Store! The Spring 2010 issue of The Earth Scientist , focuses on the ocean, including articles on polar research, coral reefs, ocean acidification, and climate. Includes a gorgeous full color poster! You might also be interested in: Jupiter's magnetosphere is very special. It is the biggest thing in the entire solar system. Not only is it big enough to hold all of Jupiter's moons, but the sun itself could fit inside. It goes all...more Jupiter's atmospheric environment is one of strong gravity, high pressure, strong winds, from 225 miles per hour to 1000 miles per hour, and cold temperatures of -270 degrees to +32 degrees (freezing temperature)....more In July, 1996, it was announced that Dr. David McKay, along with a team of scientists at Johnson Space Center (a division of NASA), had discovered possible fossils of bacteria in a meteorite named ALH84...more Saturn's atmospheric environment is one of strong gravity, high pressure, strong winds, from 225 miles per hour to 1000 miles per hour, and cold temperatures of -270 degrees to +80 degrees. With winds...more Titan's atmosphere is a lot like the Earth's, except that it is very cold, from -330 degrees to -290 degrees! Like the Earth, there is a lot of Nitrogen and other complex molecules. There also may be an...more Autotrophs are organisms that can "make their own food" from an inorganic source of carbon (carbon dioxide) given a source of energy. Most autotrophs use sunlight in the process of photosynthesis to make...more In the warm primordial ocean, aggregates of amino acids, proteins, and other hydrocarbons came together into a form called *coacervates*. Amino acids will spontaneously form coacervates in the same way...more
<urn:uuid:02e394b4-f0c3-4110-8b8a-89d2995b8116>
3.34375
604
Content Listing
Science & Tech.
54.294398
The strfmon() function formats the specified amounts according to the format specification format and places the result in the character array s of size max. Ordinary characters in format are copied to s without conversion. Conversion specifiers are introduced by a `%' character. Immediately following it there can be zero or more of the following flags: The single-byte character f is used as the numeric fill character (to be used with a left precision, see below). When not specified, the space character is used. Do not use any grouping characters that might be defined for the current locale. By default, grouping is enabled. ( or + The ( flag indicates that negative amounts should be enclosed between parentheses. The + flag indicates that signs should be handled in the default way, that is, amounts are preceded by the locale's sign indication, e.g., nothing for positive, Omit the currency symbol. Left justify all fields. The default is right justification. Next, there may be a field width: a decimal digit string specifying a minimum field width in bytes. The default is 0. A result smaller than this width is padded with spaces (on the left, unless the left-justify flag was given). Next, there may be a left precision of the form Next, there may be a right precision of the form frac_digits and int_frac_digits'' items of the current locale. If the right precision is 0, no radix character is printed. (The radix character here is determined by LC_MONETARY, and may differ from that specified by LC_NUMERIC.) Finally, the conversion specification must be ended with a conversion character. The three conversion characters are (In this case the entire specification must be exactly One argument of type double is converted using the locale's international currency format. in the Dutch locale (with fl for
<urn:uuid:9cfc027a-d81b-4a36-ad59-bb7f969a168e>
3.375
391
Documentation
Software Dev.
46.384671
By Matin Durrani, Editor, Physics World The Royal Society — perhaps the world’s oldest and most prestigious scientific society — is celebrating its 350th anniversary this year. It was founded in 1650 by a group of 12 natural philosophers, including Robert Boyle, best known for his law describing how the pressure of a gas rises as it is compressed at constant temperature. Over the years, the society has had plenty of links with physics — past presidents include Isaac Newton, J J Thomson, Lord Kelvin and Ernest Rutherford and the current president is the Cambridge University astrophysicist and cosmologist Martin Rees. Speaking in an exclusive video interview with physicsworld.com, Rees explains why he thinks the Royal Society still has an essential role to play in the modern world. After all, if scientists can communicate quickly and easily via online discussion groups, Facebook and Twitter, a society with a limited and admittedly elite membership might not be totally in tune with today’s world. For Rees, however, the society’s strengths lie in its ability to promote and disseminate science — and in the increasing amount of scientific advice it offers to politicians on topics like energy and climate change. In a wide-ranging discussion, Rees also welcomes President Obama’s decision not to return astronauts to the Moon. “Given the financial constraints, if I were an American taxpayer I would entirely support it,” he says. “I think it is very important we pursue science in space [but] the case for sending people into space is getting weaker all the time with every advance in robotics and miniaturization. I still believe in the long run that there is a role for people in space, but that’s just for an adventure – not for any practical purpose.” As for what are the mostexciting developments in astronomy, Rees cites the search for Earth-like extrasolar planets, the study of the cosmic microwave background by the Planck satellite and the ability of the Herschel infrared telsecope to understand how the earliest galaxies formed. The interview with Rees took place at the Royal Society’s “presidential flat” — a kind of up-market crash-pad at the society’s headquarters at Carlton House Terrace in central London. The flat has great views out onto the London Eye, Big Ben and the Houses of Parliament, where Rees — as a member of the House of Lords — had spent the morning giving evidence to a scientific committee. That was followed by a radio interview and then us. Add in his duties as master of Trinity College Cambridge, and it’s not surprising that Rees only has time for research at weekends. But, as he explains, he is “in a style of life that is fascinating”.
<urn:uuid:1ef53030-21a4-4043-9156-8b31d1e92578>
2.703125
579
Truncated
Science & Tech.
36.802979
Glowing Remnant from a Star-Shattering Explosion This "true color" Chandra image of N132D shows the beautiful, complex remnant of an explosion of a massive star in the Large Magellanic Cloud, a nearby galaxy about 160,000 light years from Earth. The colors represent different ranges of X-rays, with red, green, and blue representing, low, medium, and higher X-ray energies respectively. Supernova remnants comprise debris of a stellar explosion and any matter in the vicinity that is affected by the expanding debris. In the case of N132D, the horseshoe shape of the remnant is thought to be due to shock waves from the collision of the supernova ejecta with cool giant gas clouds. As the shock waves move through the gas they heat it to millions of degrees, producing the glowing X-ray shell.
<urn:uuid:d1a7dd89-5a2a-478f-9557-8a030b6ebc77>
3.28125
174
Knowledge Article
Science & Tech.
47.350529
We mentioned a recent paper that examined silicon/carbon fibers for use in batteries. Shown above is a TEM image from the paper. From it you can tell the fibers are essentially hollow carbon fibers with a layer of silicon around the outside edge. Transmission electron microscopes (TEMs) work by shooting a beam of electrons through a sample. The sample has to be quite thin to let the electrons through, but since these fibers are around 100 nm thick that is fine. The important thing is that the image is made with electrons. This is different than the sort of images you’re used to, such as those made with cameras, regular microscopes, or even your eyes: all of those images are made by light. Electrons have a smaller wavelength than light, and so electron microscopes work at extremely high resolution: in some cases you can make out individual atoms with them. Why would anyone care about silicon-coated nano-fibers? Because lithium ions can be stored inside silicon, and that makes these fibers well-suited to storing lithium for lithium ion batteries. Some researchers think we may be able to extend the life of batteries using these.
<urn:uuid:6d86496c-53c1-4080-8a94-89e00c16c845>
3.640625
240
Personal Blog
Science & Tech.
44.574091
This section describes how to use mysqldump to produce dump files, and how to reload dump files. A dump file can be used in several ways: As a backup to enable data recovery in case of data loss. As a source of data for setting up replication slaves. As a source of data for experimentation: To make a copy of a database that you can use without changing the original data. To test potential upgrade incompatibilities. mysqldump writes SQL statements to the standard output. This output consists of CREATE statements to create dumped objects (databases, tables, and so forth), and INSERT statements to load data into tables. The output can be saved in a file and reloaded later using mysql to recreate the dumped objects. Options are available to modify the format of the SQL mysqldump produces two output files for each dumped table. The server writes one file as tab-delimited text, one line per table row. This file is named in the output directory. The server also sends a CREATE TABLE statement for the table to mysqldump, which writes it as a in the output directory.
<urn:uuid:182834a8-1fd0-4088-9d77-9703ea2a3ee7>
3.390625
252
Documentation
Software Dev.
45.133333
See also the Dr. Math FAQ: See also the Browse High School Trigonometry Stars indicate particularly interesting answers or good places to begin browsing. Selected answers to common questions: Volume of a tank. - Averaging Two Angles [06/08/1999] How do you take the average of two or more angles? - Basics of Trigonometry [12/12/2001] Why is trigonometry important? How is it used in the real world? - Calculating Height in the Air [11/14/2001] I need to know the formula for calculating the height of an object, like how high in the air an object is. - Carrying a Ladder around a Corner [02/28/2003] A ladder of length L is carried horizontally around a corner from a hall 3 feet wide into a hall 4 feet wide. What is the length of the - Changing a Trigonometric Graph [08/06/1998] How do you graph: -cos 2x / 2? - Curious Property of a Regular Heptagon [04/06/2001] How can I prove that in a regular heptagon ABCDEFG, (1/AB)=(1/AC)+(1/ - Defining the Six Trigonometric Functions on the Unit Circle [09/11/2003] I can find sin, cos, and tan on the unit circle, but I don't know how to find csc, cot, and sec. - Deriving Sines of 30, 45, 60 and 90 Degrees [06/02/1999] Where do the values for sin 30, sin 60, etc. come from? - Deriving the Dot Product [09/17/1998] Can you explain how to derive the formula for the dot product? - Explanation of Sine and Cosine [7/30/1996] I am having trouble with the topics of sine and cosine. - Find Angle Given 3 Sides, NOT Using Law of Cosines [12/18/2001] Why doesn't it work to use proportions? - Flying a Kite [5/23/1996] Find the angle of elevation of a kite... - Geometry vs. Trigonometry [07/14/1997] What is the difference between trigonometry and geometry? - History and Uses of Trigonometry [9/10/1995] What is trigonometry? - Latin Origins of Trig Functions [11/20/1998] What are the etymologies of the six trigonometric functions: sine, cosine, tangent, cosecant, secant, and cotangent? - Proving Identities Rigorously [01/23/2002] I've been taught that the proper way to work on identities is to work with one side at a time only. - Pythagorean Theorem and non-Right Triangles [03/09/2002] Why doesn't the Pythagorean theorem work for triangles other than right - Sine and Cosine Without a Calculator [9/2/1996] How do I find sines and cosines without using a calculator? - Sine, Co-sine, and Tangent: SOHCAHTOA [03/28/1999] I am having trouble figuring out what to use when solving a triangle - Techniques of Integration - Change of Variables [02/17/1999] Solve: (integral sign) sin 2x/ sq rt (9-cos^4 x) dx . - Trig Identities [11/5/1994] I am a junior in High School and currently taking a pre-calc. course. We started the year with trig. identities. What confuses me is how to find the exact value (i.e, not a decimal answer) of an angle (in radians or degrees) QUICKLY. For several of our problems we need to know almost instantaneously the value of say, sec 15, sin 45, cos 60, etc. Instead of drawing a triangle on a cartesian plane and using the 30x30x60 or 45x45x90 rules, is there an easier way to derive the answers to these - Trigonometric Functions and the Unit Circle [05/22/2000] Why don't sine and cosine graphs have values greater than 1 or less than -1? Why does the tangent graph have asymptotes? How do the trig functions relate to the unit circle? - Trigonometric Identities [11/13/1997] Do you have any suggestions for finding an easy way to remember them that I could apply to all identities? - Trigonometry in a Nutshell [04/11/2001] What is trigonometry? Can you give me some problems and examples? What does trigonometry have to do with circles? with waves? - Trigonometry: Positive vs. Negative [5/31/1996] The problem: tan x = 5/12 and sec x = -13/12 - The Unit Circle [4/2/1996] Could you describe how to "read" the unit circle? - Volume of a Cylindrical Tank [2/3/1995] I have to keep an inventory of how much is kept in a farm of tanks outside my school. The tanks are cylindrical, which would be no problem if they were standing on end... - What is Arctan? [8/28/1996] For the function f(x) = sin (arctan x), is sin the same as arctan X? - Why Sine, Cosine, and Tangent? [11/29/2001] Why does sine equal opp/hyp, cos equal adj/hyp, and tan equal adj/opp? - 30-60-90 and 45-45-90 Triangles [03/15/1999] If I have a triangle that is 30-60-90 or 45-45-90, how do I find all the sides when given only one side? Where does trigonometry come in? - Algebra and Trig Equation [4/20/1996] How can I solve this equation: 1 = sin(3x) - cos(6x) ? - Algebraic Expression for cos(arctan(x)/3) [8/15/1996] How do I find an algebraic expression for cos(arctan(x)/3) so that I can get rid of the trigonometric operands? - Altitude of a Model Rocket [10/26/1999] I need an algebraic formula that determines the altitude of a rocket based upon inclination observations of three ground observers. - The Ambiguous Case [04/01/2003] How many triangles can be constructed if, for example, a=4, A=30, and c=12? Or a=9, b=12, and A=35? - Ambiguous Cases - Laws of Cosines and Sines [04/26/2000] When I try to use the law of cosines and the law of sines on triangle ABC, with sides of length a = 3.2, b = 4.3 and c = 5.1, I get two - Amplitude of Function with Sine and Cosine [01/11/2004] How do you predict the amplitude of a function involving both sine and cosine? For instance, how is it possible to find the amplitude of f(x) = a*sin(x) + b*cos(x) without using an automatic grapher? - Angle Between Two Sides of a Pyramid [10/29/1999] How can I compute the angle formed by two sides of a frustrum of a - Angle of Elevation [01/22/1997] A tree 66 meters high casts a 44-meter shadow. Find the angle of elevation of the sun. - Angle of Sun's Rays [05/03/1999] How can you determine the angle at which the sun's light hits the earth at any given point? - Angle, Side Length of a Triangle [9/4/1996] What is the relation between the angles and side lengths of a triangle?
<urn:uuid:bd822fd3-b498-428e-b51b-5393a8c8e832>
3.3125
1,815
Q&A Forum
Science & Tech.
77.8179
Well, security should be no different. To build secure mobile apps, there are certain things that we're just going to have to do, and darn near every single time we write code. So, what security goodies are in your bag of tricks? Here's some food for thought on some things you might find useful, in no particular order: - Protecting secrets at rest -- Inevitably, we need to protect some data locally on the mobile device. Of course, the principles of sound design should guide us to minimize the data we store locally on the device. Some argue that we shouldn't really store anything of value locally, but our users don't always share that view. So we need to protect data locally: usernames (a la remember me button), passwords (best avoided, but for some consumer grade apps, it's acceptable), session tokens, customer names, and on and on. We need a reliable set of tools that help us protect things locally. Both iOS and Android give us some ability to do that, but for times when we cannot rely on the OS, we need more. SQLcipher is one such example. It's an open source extension of SQLite that does AES-256 using the venerable OpenSSL library, and it works on Android and iOS. - Protecting secrets in transit -- Of course, any modern OS and app can do SSL encryption, but things aren't always that simple. Sometimes we want to more strongly verify the SSL certificates on both ends of the connection. Sometimes we want to encrypt data that doesn't play nicely with TCP connections, like Voice over IP data that is best suited for UDP. - Server connections -- We often need to connect to different types of back-end services, and of course those connections need to be established securely. At a network layer, we can use SSL, like I've described above, but at a data layer, we also need to ensure our connection is strong. For example, if we're connecting to an SQL database of some sort, we need to ensure our SQL API is immutable, and not subject to SQL injection and such. - Authentication -- We need strong mutual authentication among all of our application components, of course, but we also need to authenticate users and any other entities our app interacts with. We can use x.509 certificates in some cases. Other times we need to use simple username/password combinations. Either way, though, the authentication needs to be mutual and worthy of trust. We have to avoid mistakes like hard coding credentials into our code. - Authorization -- Once a user or entity is identified and authenticated, we then need to ensure that it is able to get to all the data and resources it needs, of course. But we also need to ensure that it is not able to get to data and resources that it doesn't need -- that it's not authorized to access. That means we need to weave access control throughout our system, and it needs to be consistently applied across our architecture. - Input validation -- All data entering our application, through whatever input source possible, needs to be validated. For example, if we're expecting a credit card number, then our code should validate that a credit card number has indeed been input, and nothing other than a credit card number. That's called input validation, and it's vital we get it right. Input validation problems lead to cross site scripting and a myriad of other security problems, after all. But there are many decisions to make, like where do we perform input validation? Our design time choices can have vast impacts on our application's ability to perform its given tasks securely. - Output escaping --
<urn:uuid:590363e6-f889-495a-b13b-ce314fc182f5>
2.84375
751
Personal Blog
Software Dev.
48.645908
The XmlLite library allows developers to build high-performance XML-based applications that provide a high degree of interoperability with other applications that adhere to the XML 1.0 standard. The primary goals of XmlLite are ease of use, performance, and standards compliance. XmlLite works with any Windows language that can use dynamic link libraries (DLLs), but Microsoft recommends C++. XmlLite comes with all necessary support files for use with C++, but if you want to use it with other languages, some additional work may be required. XmlLite works with various versions of the Microsoft C++ compiler, but the samples in the documentation have been validated only with the latest version of Visual Studio. This topic provides an overview of XmlLite and some guidelines for which XML parser to use in various scenarios. For installation information, see Installing XmlLite. XML can be used as a format for storing documents, such as Microsoft Office Word documents. It can also be used to encode data for marshalling method calls across machine boundaries (SOAP). Businesses can use XML for sending and receiving purchase orders and invoices. Web technologies can use XML to send data between the Web server and the client's Web browser. Database servers can return the data from queries in XML for further processing by other applications. Because it is such a flexible format, XML can be used in a vast variety of scenarios. Usage scenarios can be generally divided into two categories: Some scenarios work with XML documents that come from external sources, and it is not known whether the XML documents are valid. In these scenarios, verification of validity is important. Typically, developers use XSD schemas or Document Type Definitions (DTDs) to verify validity. Performance may be a concern, but the overriding concern is that the application reading the XML receives a valid document. Saving and loading documents from and to a variety of applications is a usage scenario that falls in this category. Some software systems use XML as a data store or a means for communication. In these scenarios, the developer knows that the XML document is valid, perhaps because another part of the system (which is under the control of the same developer or organization) generated the XML. The question of document validity is not an overriding concern. One example of this approach is where the software system runs on a server farm, and XML is used to communicate between various servers and processes. Another example might be one where a relatively complicated application has to store and retrieve a large amount of information. The developer completely controls the format of the XML document. The focus of XmlLite is on performance. Therefore, XmlLite is most appropriate in the second of the two scenarios. XmlLite enables developers to write efficient (fast) code to read and write XML documents. In most scenarios, XmlLite parses faster than either the DOM in MSXML or SAX2 in MSXML. XmlLite vs. System.XML XmlLite is most appropriate for use with C++. If you are using C#, Visual Basic .NET, or other languages that use the common language runtime (CLR), it is more appropriate to use one of the parsers in System.XML. Some developers want a deployment scenario where it is not required that the Microsoft .NET Framework be installed on deployment computers. XmlLite does not require the .NET framework to be installed, and may be appropriate for this situation. No XSD or DTD Validation Because XmlLite is oriented towards optimum performance, it does not provide for document validation. Validation via XSD schemas or DTDs is not supported. If you require validation, it is recommended that you use either MSXML or System.XML. If you read a document that refers to an external XSD schema, the XmlLite reader ignores the external schema. Even if the document is invalid per the schema, the XmlLite reader will report no errors. If you read a document that contains an inline schema, the XmlLite reader returns all of the elements and attributes of the inline schema, just as if they were parts of the XML document. No Scripting Language Support XmlLite does not support scripting languages. If you need to use XML from JScript or Visual Basic Scripting Edition (VBScript), it is more appropriate to use the Document Object Model (DOM) in MSXML. Limited DTD Support Document Type Definitions (DTDs) are supported, but only for entity expansion and defaults for attributes, not for document validation. If you require DTD validation, it is recommended that you use either MSXML or System.XML. If you enable DTDs, note the following: If you use XmlLite to read a document that refers to a DTD and the document is not valid per that DTD, no error will be thrown. Both the XmlLite runtime and development files are required to run the examples in this documentation. For more information, see Installing XmlLite.
<urn:uuid:afc1de8a-e57a-4e74-bc10-8eb3635d863f>
2.734375
1,067
Documentation
Software Dev.
44.147531
I’m tickled to see that my work on stonefly flight has come up in this discussion. One thing that is worth adding here is that a gills-to-wings transition would require a simultaneous change in gas exchange, since a sophisticated wing is unlikely to also be an effective gill, and the physics and physiology of gas exchange are very different in an aquatic versus a terrestrial environment. My research shows that modern stoneflies may have retained intermediate forms of flight that date back to an evolutionary transition from gills to wings, and therefore perhaps they have retained other traits related to a transition in gas exchange physiology. This line of thinking led me to suggest to Thorsten Burmester, an expert on arthropod gas exchange proteins, that he should check to see if stoneflies have hemocyanin in their blood. This was a pretty far out idea, since blood-based gas exchange is what other arthropods use (including aquatic ones) but was previously thought to be completely absent in insects, which deliver air directly to their tissues via tracheae. Burmester found that stoneflies do indeed have hemocyanin in their blood (Proceedings of the National Academy of Sciences 101: 871-874) that reversibly binds oxygen, and it appears that no other pterygote insects possess this trait. In summary, the developmental evidence that you have presented for a gills-to-wings transition is supported by both a set of mechanically intermediate forms of winged locomotion in stoneflies and molecular evidence that a simultaneous transition occurred in gas exchange physiology. Jim Marden’s homepage at Penn State University with quick time movies from their studies Hagner-Holler, S., A. Schoen, W. Erker, J.H. Marden, R. Rupprecht, H. Decker, and T. Burmester. 2004. A respiratory hemocyanin from an insect. Proceedings of the National Academy of Sciences 101, 871-874. ( Abstract ),( full text ) Marden, J.H. and M.A. Thomas. 2003. Rowing locomotion by a stonefly that possesses the ancestral pterygote condition of co-occurring wings and abdominal gills. Biological Journal of the Linnean Society 79, 341-349 ( Full text ) Thomas, M.A., K.A. Walsh, M.R. Wolf, B.A. McPheron, and J.H. Marden. 2000. Molecular phylogenetic analysis of evolutionary trends in stonefly wing structure and locomotor behavior. Proceedings of the National Academy of Science 97:13178-13183 ( full text ) Marden, J.H., B.C. O’Donnell, M.A. Thomas, and J.Y. Bye. 2000. Surface-skimming stoneflies and mayflies: the taxonomic and mechanical diversity of two-dimensional aerodynamic locomotion.Physiological Zoology 73, 751-764. ( abstract ) ( full text ) Marden, J.H. and M.G. Kramer. 1994. Surface-skimming stoneflies: a possible intermediate stage in insect flight evolution. Science 266, 427-430. (abstract)
<urn:uuid:eb4046fd-5a95-4d7b-918b-bed39cfc7831>
2.703125
676
Personal Blog
Science & Tech.
55.837475
The Mg/Ca in foraminifera shells is commonly used as a proxy to estimate ocean temperatures in Earth’s past. However, studies have shown that both dissolution and salinity influence the Mg/Ca in shells of tropical foraminifera, which can cause paleotemperature estimates to be inaccurate. We measured Mg/Ca in shells of Globigernoides ruber and Globigerinoides sacculifer from core tops in the eastern equatorial Pacific. We compared our results with global core top data, which paleotemperature equations have been calibrated and published. We find that Mg/Ca values range greatly at the same surface ocean temperature. We also find that salinity and dissolution do not affect the relationship between Mg/Ca and temperature. In analyzing the carbonate ion concentration of the water at 30m, we find that this might be affecting the relationship between Mg/Ca and temperature, which could be affecting the accuracy of the Mg/Ca proxy. Foraminifera, Paleotemperature, Mg/Ca Clark, Sarah and Mekik, Figen, "Does the Mg/Ca in Foraminifera Tests Provide a Reliable Temperature Proxy?" (2009). Student Summer Scholars. Paper 41.
<urn:uuid:4c233e43-b19a-4654-80b1-468c074b7642>
2.84375
260
Academic Writing
Science & Tech.
27.516287
SWAN Lyman-alpha whole sky map in ecliptic coordinates. Two areas were not covered for safety reasons, around the Sun and around the anti-solar direction. The color is coding the intensity, in counts per second per pixel (one square degree), which corresponds to 1.3 Rayleigh. A number of UV hot stars can be identified, tracing the galactic plane. The rest of the ubiquitous emission is due to solar UV Lyman alpha photons, backscattered by Hydrogen atoms in the solar system. These H atoms are coming from interstellar space, and are approaching the Sun down to about 2 AU, in the direction of the incoming flow (ecliptic coordinates, longitude 254 deg, latitude 7 deg). A maximum of Lyman-alpha intensity surrounds this upwind direction. In the opposite direction, the emission is weaker by a factor of 3.5, because most atoms have been destroyed by charge-exchange with solar wind protons, creating a cavity void of Hydrogen atoms in the downwind direction. A detailed comparison of such Lyman-alpha maps will allow to determine the solar wind mass flux at all ecliptic latitudes.
<urn:uuid:b956fdde-dd35-4ddc-b697-985dfd032fc2>
3.078125
240
Knowledge Article
Science & Tech.
43.645949
Advance in science comes by laying brick upon brick, not by sudden erection of fairy palaces. - J. S. Huxley Astronomers refer to Venus as Earth's sister planet. Both are similar in size, mass, density and volume. Both formed about the same time and condensed out of the same nebula. However, during the last few years scientists have found that the kinship ends here. Venus is very different from the Earth. It has no oceans and is surrounded by a heavy atmosphere composed mainly of carbon dioxide with virtually no water vapor. Its clouds are composed of sulfuric acid droplets. At the surface, the atmospheric pressure is 92 times that of the Earth's at sea-level. Venus is scorched with a surface temperature of about 482° C (900° F). This high temperature is primarily due to a runaway greenhouse effect caused by the heavy atmosphere of carbon dioxide. Sunlight passes through the atmosphere to heat the surface of the planet. Heat is radiated out, but is trapped by the dense atmosphere and not allowed to escape into space. This makes Venus hotter than Mercury. A Venusian day is 243 Earth days and is longer than its year of 225 days. Oddly, Venus rotates from east to west. To an observer on Venus, the Sun would rise in the west and set in the east. Until just recently, Venus' dense cloud cover has prevented scientists from uncovering the geological nature of the surface. Developments in radar telescopes and radar imaging systems orbiting the planet have made it possible to see through the cloud deck to the surface below. Four of the most successful missions in revealing the Venusian surface are NASA's Pioneer Venus mission (1978), the Soviet Union's Venera 15 and 16 missions (1983-1984), and NASA's Magellan radar mapping mission (1990-1994). As these spacecraft began mapping the planet a new picture of Venus emerged. Venus' surface is relatively young geologically speaking. It appears to have been completely resurfaced 300 to 500 million years ago. Scientists debate how and why this occurred. The Venusian topography consists of vast plains covered by lava flows and mountain or highland regions deformed by geological activity. Maxwell Montes in Ishtar Terra is the highest peak on Venus. The Aphrodite Terra highlands extend almost half way around the equator. Magellan images of highland regions above 2.5 kilometers (1.5 miles) are unusually bright, characteristic of moist soil. However, liquid water does not exist on the surface and cannot account for the bright highlands. One theory suggests that the bright material might be composed of metallic compounds. Studies have shown the material might be iron pyrite (also know as "fools gold"). It is unstable on the plains but would be stable in the highlands. The material could also be some type of exotic material which would give the same results but at lower concentrations. Venus is scarred by numerous impact craters distrubuted randomly over its surface. Small craters less that 2 kilometers (1.2 miles) are almost non-existent due to the heavy Venusian atmosphere. The exception occurs when large meteorites shatter just before impact, creating crater clusters. Volcanoes and volcanic features are even more numerous. At least 85% of the Venusian surface is covered with volcanic rock. Hugh lava flows, extending for hundreds of kilometers, have flooded the lowlands creating vast plains. More than 100,000 small shield volcanoes dot the surface along with hundreds of large volcanos. Flows from volcanos have produced long sinuous channels extending for hundreds of kilometers, with one extending nearly 7,000 kilometers (4,300 miles). Giant calderas more than 100 kilometers (62 miles) in diameter are found on Venus. Terrestrial calderas are usually only several kilometers in diameter. Several features unique to Venus include coronae and arachnoids. Coronae are large circular to oval features, encircled with cliffs and are hundreds of kilometers across. They are thought to be the surface expression of mantle upwelling. Archnoids are circular to elongated features similar to coronae. They may have been caused by molten rock seeping into surface fractures and producing systems of radiating dikes and fractures. |Mass (Earth = 1)||.81476| |Equatorial radius (km)||6,051.8| |Equatorial radius (Earth = 1)||.94886| |Mean density (gm/cm^3)||5.25| |Mean distance from the Sun (km)||108,200,000| |Mean distance from the Sun (Earth = 1)||0.7233| |Rotational period (days)||-243.0187| |Orbital period (days)||224.701| |Mean orbital velocity (km/sec)||35.02| |Tilt of axis (degrees)||177.36| |Orbital inclination (degrees)||3.394| |Equatorial surface gravity (m/sec^2)||8.87| |Equatorial escape velocity (km/sec)||10.36| |Visual geometric albedo||0.65| |Mean surface temperature||482°C| |Atmospheric pressure (bars)||92| Trace amounts of: Sulfur dioxide, water vapor, - Rotating Venus Movie. - Venus Topography Animation. - Artist's view of Venus. - Earth/Venus Rotation Movie. - Magellan - Mapping the planet Venus. - Flight over Western Atla Regio. - Flight over Artemis. - Flight over Alpha Regio. - Flight over Western Eistla Regio. - A dramatic view the the moon with Venus in the distance. Venus with Visible and Radar Illumination This picture shows two different perspectives of Venus. On the left is a mosaic of images acquired by the Mariner 10 spacecraft on February 5, 1974. The image shows the thick cloud coverage that prevents optical observation of the planet's surface. The surface of Venus remained hidden until 1978 when the Pioneer Venus 1 spacecraft arrived and went into orbit about the planet on December 4th. The spacecraft used radar to map planet's surface, revealing a new Venus. Later in August of 1990 the Magellan spacecraft arrived at Venus and began its extensive planetary mapping mission. This mission produced radar images up to 300 meters per pixel in resolution. The right image show a rendering of Venus from the Pioneer Venus and Magellan radar images. (Copyright Calvin J. Hamilton) The Interior of Venus This picture shows a cutaway view of the possible internal structure of Venus. The image was created from Mariner 10 images used for the outer atmospheric layer. The surface was taken from Magellan radar images. The interior characteristics of Venus are inferred from gravity field and magnetic field measurements by Magellan and prior spacecraft. The crust is shown as adark red, the mantle as a lighter orange-red, and the core yellow. More ... (Copyright Calvin J. Hamilton) Mariner 10 Image of Venus This beautiful image of Venus is a mosaic of three images acquired by the Mariner 10 spacecraft on February 5, 1974. It shows the thick cloud coverage that prevents optical observation of the surface of Venus. Only through radar mapping is the surface revealed. (Copyright Calvin J. Hamilton) Galileo Image of Venus On February 10, 1990 the Galileo spacecraft acquired this image of Venus. Only thick cloud cover can be seen. (Copyright Calvin J. Hamilton) Hubble Image of Venus This is a Hubble Space Telescope ultraviolet-light image of the planet Venus, taken on January 24, 1995, when Venus was at a distance of 113.6 million kilometers from Earth. At ultraviolet wavelengths cloud patterns become distinctive. In particular, a horizontal "Y" shaped cloud feature is visible near the equator. The polar regions are bright, possibly showing a haze of small particles overlying the main clouds. The dark regions show the location of enhanced sulfur dioxide near the cloud tops. From previous missions, astronomers know that such features travel east to west along with the Venus' prevailing winds, to make a complete circuit around the planet in four days. (Credit: L. Esposito, University of Colorado, Boulder, and NASA) Hemispheric View of Venus This hemispheric view of Venus, as revealed by more than a decade of radar investigations culminating in the 1990-1994 Magellan mission, is centered at 0 degrees east longitude. The effective resolution of this image is about 3 kilometers. It was processed to improve contrast and to emphasize small features, and was color-coded to represent elevation. (Courtesy NASA/USGS) Additional Hemispheric Views of Venus - View centered at 90°E longitude. - View centered at 180°E longitude. - View centered at 90°W longitude. - View centered at the north pole. - View centered at the south pole. This image is a Mercator projection of Venusian topography. Many of the different regions have been labeled. The map extends from -66.5 to 66.5 degrees in latitude and starts at 240 degrees longitude. (Copyright Calvin J. Hamilton) Venusian Topography Map This is another Mercator projection of Venusian topography. The map extends from -66.5 to 66.5 degrees in latitude and starts at 240 degrees longitude. A Black & White version of this image is also available. (Courtesy A.Tayfun Oner) Gula Mons and Crater Cunitz A portion of Western Eistla Regio is displayed in this three dimensional perspective view of the surface of Venus. The viewpoint is located 1,310 kilometers (812 miles) southwest of Gula Mons at an elevation of 0.78 kilometers (0.48 mile). The view is to the northeast with Gula Mons appearing on the horizon. Gula Mons, a 3 kilometer (1.86 mile) high volcano, is located at approximately 22 degrees north latitude, 359 degrees east longitude. The impact crater Cunitz, named for the astronomer and mathematician Maria Cunitz, is visible in the center of the image. The crater is 48.5 kilometers (30 miles) in diameter and is 215 kilometers (133 miles) from the viewer's position. (Courtesy NASA/JPL) Eistla Regio - Rift Valley A portion of Western Eistla Regio is displayed in this three dimensional perspective view of the surface of Venus. The viewpoint is located 725 kilometers (450 miles) southeast of Gula Mons. A rift valley, shown in the foreground, extends to the base of Gula Mons, a 3 kilometer (1.86 miles) high volcano. This view is facing the northwest with Gula Mons appearing at the right on the horizon. Sif Mons, a volcano with a diameter of 300 kilometers (180 miles) and a height of 2 kilometers (1.2 miles), appears to the left of Gula Mons in the background. (Courtesy NASA/JPL) A portion of Western Eistla Regio is displayed in this three dimensional perspective view of the surface of Venus. The viewpoint is located 1,100 kilometers (682 miles) northeast of Gula Mons at an elevation of 7.5 kilometers (4.6 miles). Lava flows extend for hundreds of kilometers across the fractured plains shown in the foreground, to the base of Gula Mons. This view faces the southwest with Gula Mons appearing at the left just below the horizon. Sif Mons appears to the right of Gula Mons. The distance between Sif Mons and Gula Mons is approximately 730 kilometers (453 miles). (Courtesy NASA/JPL) The southern scarp and basin province of western Ishtar Terra are portrayed in this three dimensional perspective view. Western Ishtar Terra is about the size of Australia and is a major focus of Magellan investigations. The highland terrain is centered on a 2.5 km to 4 km high (1.5 mi to 2.5 mi high) plateau called Lakshmi Planum which can be seen in the distance at the right. Here the surface of the plateau drops precipitously into the bounding lowlands, with steep slopes that exceed 5% over 50 km (30 mi). (Courtesy NASA/JPL) Three-Dimensional Perspective View of Alpha Regio A portion of Alpha Regio is displayed in this three-dimensional perspective view of the surface of Venus. Alpha Regio, a topographic upland approximately 1300 kilometers across, is centered on 25 degrees south latitude, 4 degrees east longitude. In 1963, Alpha Regio was the first feature on Venus to be identified from earth-based radar. The radar-bright area of Alpha Regio is characterized by multiple sets of intersecting trends of structural features such as ridges, troughs, and flat-floored fault valleys that, together, form a polygonal outline. Directly south of the complex ridged terrain is a large ovoid-shaped feature named Eve. The radar-bright spot located centrally within Eve marks the location of the prime meridian of Venus. (Courtesy NASA/JPL) Arachnoids are one of the more remarkable features found on Venus. They are seen on radar-dark plains in this Magellan image mosaic of the Fortuna region. As the name suggests, arachnoids are circular to ovoid features with concentric rings and a complex network of fractures extending outward. The arachnoids range in size from approximately 50 kilometers (29.9 miles) to 230 kilometers (137.7 miles) in diameter. Arachnoids are similar in form but generally smaller than coronae (circular volcanic structures surrounded by a set of ridges and grooves as well as radial lines). One theory concerning their origin is that they are a precursor to coronae formation. The radar-bright lines extending for many kilometers might have resulted from an upwelling of magma from the interior of the planet which pushed up the surface to form "cracks." Radar-bright lava flows are present in the 1st and 3rd image, also indicative of volcanic activity in this area. Some of the fractures cut across these flows, indicating that the flows occurred before the fractures appeared. Such relations between different structures provide good evidence for relative age dating of events. (Courtesy NASA/JPL) Two groups of parallel features that intersect almost at right angles are visible. The regularity of this terrain caused scientists to nickname it graph paper terrain. The fainter lineations are spaced at intervals of about 1 kilometer (.6 miles) and extend beyond the boundaries of the image. The brighter, more dominant lineations are less regular and often appear to begin and end where they intersect the fainter lineations. It is not yet clear whether the two sets of lineations represent faults or fractures, but in areas outside the image, the bright lineations are associated with pit craters and other volcanic features. (Courtesy Calvin J. Hamilton) Surface Photographs from Venera 9 and 10 The Soviet Venera 9 and 10 spacecraft were launched on 8 and 14 June 1975, respectively, to do the unprecedented: place landers on the surface of Venus and return images. The Venera 9 Lander (top) touched down on the surface of Venus on October 22, 1975 at 5:13 UT, about 32° S, 291° E with the sun near zenith. It operated for 53 minutes, allowing return of a single image. Venera 9 landed on a slope inclined by about 30 degrees to the horizontal. The white object at the bottom of the image is part of the lander. The distortion is caused by the Venera imaging system. Angular and partly weathered rocks, about 30 to 40 cm across, dominate the landscape, many partly buried in soil. The horizon is visible in the upper left and right corners. The Venera 10 Lander (bottom) touched down on the surface of Venus on October 25, 1975 at 5:17 UT, about 16° N, 291° E. The Lander was inclined about 8 degrees. It returned this image during the 65 minutes of operation on the surface. The sun was near zenith during this time, and the lighting was similar to that on Earth on an overcast summer day. The objects at the bottom of the image are parts of the spacecraft. The image shows flat slabs of rock, partly covered by fine-grained material, not unlike a volcanic area on Earth. The large slab in the foreground extends over 2 meters across. Color Surface Photographs from Venera 13 On March 1, 1982 the Venera 13 lander touched down on the Venusian surface at 7.5° S, 303° E, east of Phoebe Regio. It was the first Venera mission to include a color TV camera. Venera 13 survived on the surface for 2 hours, 7 minutes, long enough to obtain 14 images. This color panorama was produced using dark blue, green and red filters and has a resolution of 4 to 5 min. Part of the spacecraft is seen at the bottom of the image. Flat rock slabs and soil are visible. The true color is difficult to judge because the Venerian atmosphere filters out blue light. The surface composition is similar to terrestrial basalt. On the ground in the foreground is a camera lens cover. This image is the left half of the Venera 13 photo. Ford, John P. et al. Guide to Magellan Image Interpretation. JPL Publication 93-24, 1993. Robinson, Cordula. "Magellan Reveals Venus." Astronomy, 32-41, February 1995.
<urn:uuid:bcaa0314-ca1b-470f-ac88-d2841847fb8c>
4.28125
3,719
Knowledge Article
Science & Tech.
52.345557
Unitary transformations are like orthogonal transformations, except we’re working with a complex inner product space. We’ll focus on just the transformations that are unitary with respect to the inner product itself. That is, we ask that and so we must have just as we wrote for orthogonal transformations, but we have to use the adjoint that’s appropriate to our complex inner product. In this way, unitary and orthogonal transformations are related in a way similar to that in which Hermitian and symmetric forms are related. Now, we’ve got this running analogy between endomorphisms on an inner product space and complex numbers. Taking the adjoint is like complex conjugation, so Hermitian transformations are like real numbers because they’re equal to their own adjoints. But here, we’re looking at transformations whose inverses are equal to their adjoints. What does this look like in terms of our analogy? Well, we’ve noted that a transformation composed with its own adjoint is a sure way to get a positive-definite transformation . This is analogous to the way that a complex number times its own conjugate is always nonnegative: . In fact, we use this to interpret as the squared-length of a the complex number . So what’s the analogue of the unitarity condition ? That’s like asking for , and so must be a unit-length complex number. Unitary transformations are like the complex numbers on the unit circle.
<urn:uuid:e781c5a3-8505-4486-8c3c-7276e1377f94>
3
322
Personal Blog
Science & Tech.
42.795
2012 SURF: Evolution of Fruit Fly Wings Selection to Change the Wings in Drosophila The goal of this project is to carry out controlled selective breeding on wings in fruit flies (Drosophila melanogaster) so that we can shed light on the genetics of evolution. Fly wings are slightly different in closely related species, and very different in distantly related species. This experiment is showing that gradual changes are occurring every generation through selective breeding, so that the capacity for long term change can be quantified. This experiment applies Darwinian selection to the creation of modifications in the wing. Can wings be changed gradually by selection on small variations, or only through big and uncommon mutational steps? Can major change be achieved by small additive steps, simply by consistently selecting those flies that vary slightly in the same direction? Finally, why are some features easy to change while others are difficult to change?
<urn:uuid:aa1876c2-122f-485f-ba0b-2c02959661e7>
3.046875
184
Academic Writing
Science & Tech.
28.952895
The breeding population of chinstrap penguins has declined significantly as temperatures have rapidly warmed on the Antarctic Peninsula, according to researchers funded in part by the National Science Foundation (NSF). The study indicates that changing climatic conditions, rather than the impact of tourism, have had the greatest effect on the chinstrap population. Ron Naveen, founder of a nonprofit science and conservation organization, Oceanites, Inc., of Chevy Chase, Md., documented the decline in a paper published in the journal Polar Biology. Naveen and coauthor Heather Lynch, of Stony Brook University, are researchers with the Antarctic Site Inventory (ASI). The paper's findings are based on an analysis of data collected during fieldwork conducted in December 2011 at Deception Island, one of Antarctica's busiest tourist locations. "We now know that two of the three predominant penguin species in the peninsula--chinstrap and Adlie--are declining significantly in a region where, in the last 60 years, it's warmed by 3 degrees Celsius (5 degrees Fahrenheit) annually and by 5 degrees Celsius (9 degrees Fahrenheit) in winter," said Naveen. "By contrast, Gentoo penguins are expanding both in numbers and in range. These divergent responses are an ongoing focus of our Inventory work effort." The ASI has been collecting and analyzing Antarctic Peninsula-wide penguin population data since 1994, and these new findings have important implications both for the advancement of Antarctic science and the management of Antarctica by the Antarctic Treaty nations. The United States is a signatory to the Treaty. The Inventory is supported in part by NSF's Office of Polar Programs and also by public contributions. The project's fieldwork at Deception Island was assisted by a grant from The Tinker Foundation. Through Polar Programs, NSF carries out its presidential mandate to manage the U.S. Antarctic Program, |Contact: Peter West| National Science Foundation
<urn:uuid:2afa46c6-6c9a-46b2-9aa6-09ee5e55a4e5>
3.0625
394
Knowledge Article
Science & Tech.
31.323284
What is meant by first contact? Basically it's the start of the eclipse. The point where the eclipsing body first touches the primary star's image and starts a decrease in the primary star's brightness as seen from Earth. But it's not so simple. With epsilon Aurigae, first contact appears to be wavelength dependent. This means that the longer (red and visual bands) wavelengths will start to show an eclipse before the shorter (blue and ultraviolet bands) wavelengths. In addition, because epsilon Aurigae is known to vary a fair amount out-of-eclipse, knowing exactly when the eclipse is starting (first contact) is not easy. One approach, and the one used for the Campaign predictions, is to determine the average magnitude for a given band out-of-eclipse. Next, after the eclipse is well underway there will be an ingress curve or line showing the system getting fainter with time. This should be a fairly straight line. The slope of that line is important as it can be used along with an intersection with the average out-of-eclipse value to determine the first contact point, also for second contact.. Unlike many eclipsing binary star systems where there are dozen, hundreds or even thousands of eclipse observations, because of epsilon Aurigae's long period of 27.1 years there are only a handful of eclipse observations and most of those were prior to many significant developments in equipment and techniques. Another variable for predicting first contact is that the precise time between first contacts appears to be changing for each eclipse. The 2009/2011 eclipse will add another layer of data to these predictions. For this current eclipse the V band prediction for first contact is 30 July 2009. RI and JH bands are most likely to be days or weeks earlier. We have no data from previous eclipses so no predictions for those bands. The B band is predicted for 11 August 2009 and the U band 21 August 2009. Other contact points and mid-eclipse timing have similar problems. It will be interesting to see how these times compare with current predictions and with past eclipses. Much will be learned from the new data. Perhaps some surprises and enlightenment.
<urn:uuid:e4110162-b091-43d5-80da-0140bf3f775d>
3.5625
450
Knowledge Article
Science & Tech.
54.391179
On our second afternoon at Woleai, I had the opportunity to deploy my drifter. I went out to the reef in one of the small boats that we use to move from the big ship, anchored in deep water, to the shallow reef area, near the shore. To begin, I looked around to observe the water movement, and I jumped in the water to feel which way the current was going. Once I knew which end was upstream, I went to the upstream end of the reef and dropped the drifter in the water. At this point, I sampled the water and I left the drifter to float along with the water across the reef. The drifter tracked where the water was going. After the water (and drifter) crossed the reef, I sampled it again. While the water was passing over the reef the water was interacting with the coral community. As the coral were growing, they were extracting the chemical building blocks for their skeletons directly out of the water. When I measure the chemistry before and after the water interacts with the coral reef, I can determine how much they grew during that time. This means that I can calculate how fast the community is growing, collectively. The community is comprised of corals that generate carbonate (limestone) skeletons, coralline algae that also deposit carbonate mineral, sand which is dissolving carbonate back into the water, and a multitude of organisms with carbonate shells. Measuring the net growth of this community is one way to evaluate the health of a community and to assess its sensitivity to changes in the environment.
<urn:uuid:789e4c67-d683-4277-a157-b2e80aefe6e7>
2.828125
322
Personal Blog
Science & Tech.
48.006043
COLLEGE STATION, Texas, Oct. 5 (UPI) — The possible impact of human activity on the world’s environment and climate may not be known for 40 years or more, U.S. researchers say. A Texas A&M study shows that although it is evident the world is experiencing one of the fastest warming rates since the beginning of climate record keeping, it will take a long time before a statistically significant difference can be seen between possible human impacts and those caused by natural climate variability, a university release reported Tuesday. The study analyzed 150 years of climate data to determine past trends and annual temperature fluctuations and then used the data to simulate possible temperature scenarios for the rest of this century, the release said. The effect of humanity’s carbon footprint on the environment may not be measurable for decades, if at all, the study concludes. Their study has broad implications for international policy making and protocols, including initiatives like cap-and-trade, programs that provide financial incentives to companies that pollute less than others, the study authors said. “In the end,” lead author Doug Sherman at A&M’s College of Geosciences said, “we found that even with an aggressive international effort to reduce the amount of greenhouse gases, it may be decades before we can see definitive results.” “There is something here for both sides of the ‘war against global warming,’” Sherman said. “Do we charge ahead with international agreements and policies, or do we do nothing? Do we save money for our grandchildren’s future or do we try to save the climate, not knowing if our efforts will have any effect? “Unlike a true war,” he said, “we cannot anticipate victory. We have, at best, a stalemate.” Copyright 2010 United Press International, Inc. (UPI). Any reproduction, republication, redistribution and/or modification of any UPI content is expressly prohibited without UPI’s prior written consent.
<urn:uuid:fd3164e1-7c01-4fde-a0e0-012a9cb02308>
3.3125
426
Truncated
Science & Tech.
32.644623
Itokawa’s Muses Sea (the smooth area near the center), where HAYABUSA collected its sample Optical microscope photo of Itokawa dust particles supported by 5-µm-diameter carbon fibers (courtesy: Osaka University/JAXA) For me, it’s the amazing amount of information such tiny samples hold about the heating history of the asteroid. The first analysis shows that the sample of Itokawa heated to 800 degrees Celsius. The asteroid formed from smaller objects; each of these impacts made the asteroid larger and larger and caused heat to accumulate inside the asteroid, changing kinetic energy to thermal energy. So as the asteroid was growing, it was heating up. And the minerals in the samples returned by HAYABUSA record every stage of this heating process, and also the cooling, down to the present temperature today. All those steps are recorded in different minerals. And by careful analysis of these tiny mineral grains in the HAYABUSA samples, you can map out the heating history and then the cooling history of the asteroid very, very accurately. Although we have tens of thousands of meteorites from asteroids, they tell us almost nothing about what’s happening at the very surface of the asteroids. And for the past 4.6 billion years, that’s where all the activity has been on these asteroids. The interactions with other kinds of asteroids, impacts, interactions with the sun and radiation, comets striking the asteroid – all that information is really found only on the very surface of the asteroid. That’s one of the most valuable aspects of the Itokawa samples. In addition, it’s possible that the Itokawa dust may contain particles that fell from other small objects, so we may be able to learn about asteroids that are of a different type from Itokawa. We’ve learned a lot from the meteorites we already have. But we’re learning a lot more from the dust. This is why the Itokawa sample has so much vital information in it. It’s only been a year since we began the Itokawa analyses, but we’ve already learned a lot, and there’s still much, much more we can learn. And that’s the value of having your samples on Earth carefully curated and maintained as they are here, by JAXA, in a very clean, carefully controlled environment. It’s worth doing that because some of our children and our grandchildren will be scientists, and will want to study these samples. And if we’re careful with the samples now, they’ll be safe, and future scientists who are asking new questions with new equipment that doesn’t even exist now will have these samples as a resource to use. So it’s an international treasure that’s being cared for here by JAXA. Work using JAXA’s curation equipment First of all, I want to say that it’s a great honor to be even just a tiny part of this mission. People in Japan should be very proud of the scientists and engineers that are working here. They really pulled off a miracle in this mission. I was very fortunate to be part of the HAYABUSA capsule recovery team that was in Australia in June, 2010. It was truly one of the most exciting things I’ve ever done. Watching the spacecraft come into the atmosphere at night, it was actually red – bright red, glowing. It was fantastic. In fact, I was also on the recovery team for NASA’S Stardust Mission. And this was actually, I think, a better recovery operation than we had for Stardust. It’s just amazing. It was great to see how carefully people here had prepared for recovery day. They were prepared for anything that could happen. And nothing went wrong. Everything happened just right because people planned this very, very carefully. And then, of course, it took a long time to remove the samples from the sample catcher inside the spacecraft because they were so tiny and hard to see. And again, people here had spent years preparing to do that. But even so, they had to adjust and adapt techniques to the actual samples. It took quite a bit of time to do that – you had to be very careful not to lose any of these tiny, precious samples. They devised entirely new techniques for handling these particles. And then they had to move them around without losing them. And then to learn the most you could from these precious samples. That was really thrilling. So to be just a small part of that was an honor for me. Actually we were at JAXA’s curation facility for a month to learn how to handle the particles. I advised the HAYABUSA team based on my experience with the Stardust mission, but I could not handle the sample, because I was afraid to do that, I was afraid I might lose the sample. And now we can apply techniques developed here for HAYABUSA and bring them back to America and apply them to other samples – to asteroid and comet grains collected in Earth’s stratosphere and to the Stardust samples from Comet Wild 2. Asteroid Explorer OSIRIS-REx (courtesy: NASA/Goddard/University of Arizona) Right now, the United States has a plan to send a new spacecraft, called OSIRIS-REx, to a primitive, perhaps organic-rich asteroid called 1999 RQ36. OSIRIS-REx will be launched in 2016, and we hope to get some organics from the earliest stage of the solar system. The hope is that those samples will tell us about the precursors to life on Earth, because all those organic molecules and water were raining down on the early Earth. All the things that are in our bodies came from those asteroids, and from comets as well. The challenge will be to clean up the spacecraft so you don’t contaminate the asteroid. And then, how do you clean up the collection part of the spacecraft so the samples are maintained without any organics being introduced from the air. It also means that when you bring it back to Earth, the laboratory has to be organically clean, which is a very, very difficult thing to do. Because we humans are lumps of organic matter. Japan’s HAYABUSA 2, which will launch in 2014, also plans to go to a more primitive and, we think, more organic-rich asteroid. One of the main goals of OSIRIS-REx and HAYABUSA 2 is the sample return of organic matter. And so because these missions have similar goals, of course people are discussing how to clean up the spacecraft and maintain them cleanly, how to function at the asteroid, how to do preliminary analyses of the samples without contaminating them, etc. That’s very, very important. And also mistakes. If you make a mistake in your mission, you tell the other mission, “Oh, we did this thing wrong, please avoid this mistake.” That’s really an important thing that often doesn’t get done. People don’t talk to each other. So it’s important that it happen this time. I’d certainly like the relationships I’ve built with Japanese researchers through the HAYABUSA mission to also be of help in future missions. I might say that I like studying water in every solar system. In 1998 I found a meteorite that fell in Texas with halite (table salt) crystals inside, which contained droplets of liquid water inside. And my hope is that we return more of that kind of material in a future mission. Or maybe even ice. People are thinking about how you would collect a sample of ice on an icy asteroid or a comet, and then keep it frozen all the way back to Earth. It would be very hard to do, but you could do it. So my dream is that in my lifetime, sometime, we’ll be able to have the chance to do a mission where we bring back actual ice from a comet or an asteroid. Maybe it could still happen during my career.
<urn:uuid:a4d0c0d4-6fcd-4a73-bc62-534ac7727dcb>
3.453125
1,705
Audio Transcript
Science & Tech.
53.997704
Name: Steve L. Date: July 2003 If centrifugal forces are "Imaginary" or "virtual", then how does a centrifuge manage to separate particles in immersion according to their mass? I hope I am understanding your question correctly. I do not know what you mean by "virtual" or "imaginary". The force is a REAL one. As you spin a tennis ball attached by a string above your head, you will FEEL the force of that ball trying to escape (centrifugal force). This is a real force that is generated by the law that objects in motion (on a straight vector or course) tend to stay in motion (on that straight vector or course). Just like if you were in your car and tried to turn RIGHT at a corner at a speed of 50 MPH versus 10 MPH. You would find your body plastered up against the drivers side door. This is exactly how CENTRIFUGATION works. In a way this force can be considered to be a sort of ARTIFICIAL GRAVITY, right? If you have seen the movie 2001: A Space Odyssey, you'd know that the orbiting space station used a rotation hub. In my opinion, that was supposed to be where they got their artificial gravity. Now that that is cleared up, lets talk about MASS versus DENSITY. A centrifuge separates by DENSITY differences just like gravity on the Earth does the same thing to ""AIR"" (note the double quotes). If you have two, perhaps, orange with pulp, it is conceivable that the pulp has the higher density (by a very small amount) If you were to centrifugally separate them in a test tube or what not, you shall surely find a nice thick blob of pulp in the test tube and pure OJ on top. Of course, it should be noted that if you do have access to a centrifuge always use a counter weight (with similar liquid media) on the opposite "site" of your test tube (if you are only separating one tube). Otherwise, you may get the "WALKING WASHING MACHINE" phenomenon. Centrifugal forces (centripetal for purists) are neither "imaginary" or "virtual". They are very real, just ask anyone who has been on any rotating amusement park rides. High speed centrifuges have been "standard" lab equipment dating back to the 1930's when they were used to separate U(235) from U(238) in the form of UF6. The quantitative details can become algebraically messy, but for an ideal solution of component "i" having a molecular weight "Mi", molar volume "Vi", in a solvent having density "d", spinning with an angular velocity "w" radians/sec, at an absolute temperature "T" kelvins, at two radii "r``" and "r`" from the axis of rotation, the mole fraction of "i" at radii "r``" and "r`", the mole fraction of "i", at the two radii, Xi`` and Xi`, is : ln(Xi``/Xi`) = (Mi - d*Vi)*w^2/ 2RT * [r``^2 - r`^2] where R is the universal gas constant in proper mechanical units, and "ln" is the "natural log". (Mi - d*Vi) is the balance between the buoyancy and the centripetal force. Should that term be "adjusted' so that it is zero no separation will occur. Click here to return to the Engineering Archives Update: June 2012
<urn:uuid:57daa09d-a06d-41e1-8214-150073192a00>
3.21875
785
Q&A Forum
Science & Tech.
53.54861
This class can be used to encrypt data (messages, files) and hide that data in images using steganography techniques. Steganography is the art and science of writing hidden messages in such a way that no one apart from the intended recipient knows of the existence of the message; this is in contrast to cryptography, where the existence of the message itself is not disguised, but the content is obscured. It reads a given image in GIF, JPEG or PNG formats, encrypts data supplied by the user (a message, a file, a collection of both) and hides the encrypted data in the image by making subtle colour changes to certain pixels to store the data in the image symbolically. The resulting image is generated in PNG format (because PNG is lossless). It can also perform the reverse operation by extracting the original data previously stored in an image using this package. The advantage of steganography over cryptography alone is that messages do not attract attention to themselves, to messengers, or to recipients. An unhidden encrypted message, no matter how unbreakable it is, will arouse suspicion and may in itself be incriminating, as in some countries encryption is illegal.
<urn:uuid:6cb51bbe-4c54-449d-bf2e-e30f053d1522>
3.078125
241
Knowledge Article
Software Dev.
23.241574
July 8, 2009–Good solar potential, relative proximity to existing or potential energy transmission corridors, and the perception of the fewest conflicts with existing land uses and the natural environment were factors in site selection. May 27, 2009–Researchers from California and Hawaii have analyzed 25 factors and developed a map that reflects the relative cumulative magnitude of their effects on the waters extending for about 250-350 miles off the shores of Washington, Oregon, California, and the Baja Peninsula. February 18, 2009–The Western Governors' Association and the US DOE begin Phase 1 of a 4-phase process, with initial designation of suitable zones for renewable power generation in the western US, Canada, and Mexico. October 31, 2007–California is trying to eradicate the brown apple moth along the state's central coast by aerial spraying of a pheromone-based pesticide intended to interfere with the insect's natural reproductive cycle. Some residents want to know what's being sprayed on them.
<urn:uuid:e3661328-c220-4ec8-b875-16007b80c388>
2.90625
196
Content Listing
Science & Tech.
20.116552
To help clarify: the term ASP is an acronym for Active Server Pages, which was coined some years ago. ASP.NET makes use the core .NET framework to accomplish it's goals, and is available in several languages (E.G. C#, VB). that kind of exactly same mostly i mean to say .net or asp.net both are similar. When you add .net you imply you are using the .net framework. The .net framework behaves similar to Java in which it compiles down to an IL and then gets interpreted by the framework. ASP.net is the web piece of the .net framework. That is a brief overview. There is a lot more that go on with it. I hope that helps out more. .Net Framework is the platform, where the application runs. The applications are of 3 types - Windows Forms, Web and Console applications. Normally, the web application part of the .Net Framework is known as asp.net . Just look at the figure and see where the asp.net comes in .Net Framework. The older technology ASP is combined with .Net framework to make ASP.NET. Dot Net is a technology which can run a lot of different programming languages so basically its a platform for different programming languages.
<urn:uuid:0aa1084b-c599-4c0c-8222-9ca0061f785a>
3.5
261
Comment Section
Software Dev.
69.850195
LA Plans a Massive Water Conservation Plan A $2 billion proposal to conserve water for the City of Los Angeles California has reached city officials who will take a serious look at the future of water. The plan proposes a massive water conservation plan to conserve about 32 billion gallons of water each year! Part of the plan includes reclaiming or recycling water from sewage back into the drinking water supply. It also includes building systems to capture and treat rainwater and runoff. The proposal also requires restrictions for homes watering their lawns and people washing their cars to certain days of the week. According to the LA Times: Financial incentives and building code changes would be used to incorporate high-tech conservation equipment in homes and businesses. Builders would be pushed to install waterless urinals, weather-sensitive sprinkler systems and porous parking lot paving that allows rain to percolate into groundwater supplies. So I guess it’s time for everyone in the LA area to start doing their part. LA needs to to do this in order to support an increase of 15% in demand for water by 2030. If nothing is done, water restrictions could end up as serious as it is in Georgia where extreme drought has caused the state to take drastic measures. Easy things can be done to conserve water: Believe it or not, doing these things helps you conserve water which is great for the environment, but it also can save you money on your water bill. We really need to be aware of issues like water conservation so there is plenty for everyone and for the generations to come. Thoughts, Comments, Questions… Reasons to JOIN US include: - It's absolutely FREE! - Get Green Tips You MUST know about. - How to's on going green, saving money, and having fun. - Keep up-to-date on our posts in cased you missed them.
<urn:uuid:5d661cd6-e7b9-483a-8f6c-accd01456e8b>
2.703125
385
Personal Blog
Science & Tech.
49.386403
Data reported by the weather station: 865823 (SULS) Latitude: -34.85 | Longitude: -55.1 | Altitude: 30 Weather Maldonado - Capitan Corbeta |Main||Year 1997 climate||Select a month| To calculate annual averages, we analyzed data of 359 days (98.36% of year). If in the average or annual total of some data is missing information of 10 or more days, this is not displayed. The total rainfall value 0 (zero) may indicate that there has been no such measurement and / or the weather station does not broadcast. |Annual average temperature:||18.3°C||359| |Annual average maximum temperature:||21.2°C||359| |Annual average minimum temperature:||14.4°C||359| |Annual average humidity:||72.3%||358| |Annual total precipitation:||-||-| |Annual average visibility:||10.3 Km||359| |Annual average wind speed:||18.6 km/h||359| Number of days with extraordinary phenomena. |Total days with rain:||90| |Total days with snow:||0| |Total days with thunderstorm:||26| |Total days with fog:||26| |Total days with tornado or funnel cloud:||0| |Total days with hail:||0| Days of extreme historical values in 1997 The highest temperature recorded was 35°C on February 1. The lowest temperature recorded was 1°C on July 5. The maximum wind speed recorded was 83.2 km/h on December 4.
<urn:uuid:46da6788-c720-4571-abea-d96e2f8d65a7>
2.703125
364
Structured Data
Science & Tech.
65.827506
3. Color - color diagram From data in Tables 3 and 4, after the IRAS fluxes are transformed into magnitudes without color corrections according to Cheeseman et al. (1989), different infrared color-color diagrams can be plotted: () - () in Fig. 1; () - () in Fig. 2; () - () in Fig. 3 and () - () in Fig. 4. In those figures, the open and filled circles indicate S stars with and without Tc, respectively, and the straight line corresponds to the blackbody distribution. It is obvious from Fig. 1 that although samples with Tc distribute in a rather wider area than the ones without Tc, both species are basically located in almost the same region. These distributions imply that, statistically, both categories of S stars have almost the same colors and temperatures in the near infrared, and they can not be clearly distinguished on the basis of their near infrared color. It is shown from () - () diagram in Fig. 2 that a rather clear segregation between Tc-rich and Tc-deficient S stars is visible by different but not by except for a peculiar Tc-deficient star DY Gem (it will be discussed in more detail later). This implies that the color is a good probe for distinguishing these two kinds of S stars, and Tc-rich S stars really exhibit on infrared excess in the 12 µm and 25 µm bands. They have cooler temperature and hence are probably more evolved. On the other hand, Tc-deficient stars have no or little excess in the 12 µm and 25 µm bands which indicates a photospheric origin of the mid-infrared flux. It is also noted that a few Tc-rich S stars including NQ Pup, Ori, Cyg and HR Peg populate the same region as Tc-deficient S stars as Jorissen et al. (1993) already pointed out. In addition, it is seen from Table 4 or Fig. 2 that Tc-deficient S stars with good quality 60 µm flux are not numerous (only 7, out of 20 in Table 4). Therefore, other colors not involving 60 µm flux must be used to show their infrared property. In order to include more samples than that in Fig. 2, the () - () diagram is plotted in Fig. 3 from which it is obviously seen that except for DY Gem and V Cnc (they will be discussed later), Tc-deficient S stars are concentrated on a very small region of and , and very close to the blackbody line, whereas almost all Tc-rich S stars are out of this region, far away from the blackbody line, and spread over a much wider area that corresponds to a much larger infrared excess either in or in colors, hence with much lower temperature and much higher mass loss. Only four stars: Ori, NQ Pup, HD 170970 and V679 Oph are located in the Tc-deficient sample region. This suggests that both and are good probes to distinguish Tc-deficient S stars from Tc-rich ones. The () - () diagram is also plotted for the two kinds of S stars in Fig. 4. Again except for DY Gem and V Cnc, most Tc-deficient S stars can be separated from Tc-rich ones with color, but not with color. It should be noted that the () color has been used in Figs. 3 and 4, although the observational results in K and 25 µm were not obtained at the same epoch. From the previous observations in K (Gezari et al. 1993) it is found that, on the average, the difference between the previous data and the new ones is 0.22. This has no serious influence on the result shown in Fig. 3. However, the similar comparison is not possible for samples in Fig. 4 for lack of earlier data. The conclusion that can be drawn out from the analysis of infrared color-color diagrams mentioned above is that the more sensitive colors for segregating Tc-rich and Tc-deficient S stars are and , hence the more appropriate color-color diagram for distinguishing them is the () - () diagram. Chen et al. (1995) gave more than 700 IRAS associations of S stars in their Table 1, but more than 2/3 of the samples have no good 60 µm flux. If one wants to extract candidates of Tc-deficient S stars according to their infrared colors, only color is not enough. Fortunately, if K magnitudes have been measured, by using the () - () diagram the candidates of Tc-deficient S stars can be well selected despite the absence of 60 µm flux for almost all samples. © European Southern Observatory (ESO) 1998 Online publication: April 20, 1998
<urn:uuid:627c94ba-49ab-4079-a3b0-1df27a69ab0c>
3.4375
999
Academic Writing
Science & Tech.
67.166366
``It's been a long strange trip for this universe we call home ...." Interest in cosmology undoubtedly predates recorded human history. The question of the origin, composition, structure and fate of the universe has challenged lay persons, scientists, philosophers and theologians across racial and cultural barriers. It is one of the most profound questions we can contemplate. Cosmology is also an area of explosive scientific progress: the last 2 decades have seen a revolution in using observation, analysis and insight to build a sound scientific theory of the cosmos. of the National Academy of Sciences stated recently: We are the first generation of human beings to glimpse the full sweep of cosmic history, from the universe's fiery origin in the Big Bang to the silent, stately flight of galaxies through the intergalactic night. Humankind continues its own journey into the future with a new depth of understanding and appreciation for the forces that shape our destiny. Currently astrophysical research is uncovering new information about the universe in which we live -- about its Due to progress in observations and in computational power, our knowledge in this area is growing quickly. This course will describe what has been learned so far, and the `big picture' which has been emerging. Questions driving current research will be introduced, and new results brought into context as they occur. Last modified May 19, 1998
<urn:uuid:9a1856dd-f0fa-4931-a960-1cfeeb9a657b>
3.578125
275
Knowledge Article
Science & Tech.
37.488136
Canadian gun-launched orbital launch vehicle. In 1962-1967 Canada's Gerard Bull led development of the Martlet system for gun-launched access to space. The program was cancelled before the objective of gun launch to orbit was attained. Even after the rocket established its primacy as a method of accessing space, Gerald Bull of the Canadian Armament and Research Development Establishment began a life-long struggle to use guns for cheap access to space. In the 1950's Bull pioneered the use of gun-fired models as an economical approach to study supersonic aerodynamics. The model was fitted with a wooden shell, or sabot, that matched the diameter of the gun barrel. After leaving the barrel the sabot would fall away and the model would continue, with high-speed cameras recording its behaviour in flight. By 1961 Bull had expanded his concept and obtained a $10 million joint contract from the US and Canadian Defence Departments for a High Altitude Research Program (HARP). This was to prove the feasibility of using large guns for launch of scientific and military payloads on sub-orbital and orbital trajectories. For long range shots a range was established at Barbados, where the payloads could be sent eastward over the Atlantic. A surplus 125 tonne US Navy 16 inch gun was used as the launcher. The standard 20 m barrel was extended to 36 m, and converted to a smooth-bore. In 1962 - 1967 Bull launched over 200 atmospheric probes to altitudes of up to 180 km. By this time relations between Canada and the United States were strained because of the Viet Nam war. Canada terminated the project. Success Rate: 100.00%. Launch data is: incomplete. Status: Retired 1966. More... - Chronology... First Launch: 1963.01.01. Last Launch: 1966.11.20. Number: 37 . Gun-launched Artillery dominated military ballistics from the earliest use of gunpowder. In 1865 Jules Verne could only realistically consider a cannon for a moon launch in his influential novel. Even after the rocket established its primacy as a method of accessing space, Canadian Gerald Bull began a life-long struggle to use guns for cheap access to space. His successes could not generate funding to continue. Others since then have pursued the technology, convinced it was the only way for low-cost delivery of payloads to orbit. More... Martlet In 1962-1967 Canada's Gerard Bull led development of the Martlet system for gun-launched access to space. The program was cancelled before the objective of gun launch to orbit was attained. More... Associated Manufacturers and Agencies Bull Canadian manufacturer of rockets, spacecraft, and rocket engines. Bull, Canada. More... McDowell, Jonathan, Jonathan's Space Home Page (launch records), Harvard University, 1997-present. Web Address when accessed: here. Goebel, Greg, Space Guns, Web Address when accessed: here. 1967 June 30 - . Launch Vehicle - HARP project closed down - . Nation: Canada. The cancellation came only a few months before an orbital 2G-1 could be flown. Martlet 2's were used to conduct extensive research at altitudes of up to 180 km with some 200 flights being conducted between 1963 and 1967. The very low cost per flight, about $3,000, made it ideal for a wide variety of applications.. Typical mission payloads included chemical ejection to produce an observable atmospheric trail and assorted sensors with multi-channel telemetry. 1968 October 11 - 15:17 GMT - . : Wallops Island . Launch Complex : Wallops Island LA2 . Launch Vehicle . LV Configuration : EXAMETNET W 104. - Chute - . Nation: USA. Agency: NASA. Apogee: 68 km (42 mi). Home - Browse - Contact © / Conditions for Use
<urn:uuid:e82b973d-df77-4a75-8aa2-c3c35ba12955>
3.59375
810
Knowledge Article
Science & Tech.
57.774986
One Of The Strange Mysteries Of Our Sky Galaxies are mysteries of the sky, all unique in their own way. Come explore with me about galaxies, our Milky Way, and more. Now, you might be wondering how galaxies are made. Just, how did they appear in our universe? Well, I’m here to answer your questions. Galaxies are made up of stars, dust, and crumpled up pieces of rock from space. Now, there are three different types of galaxies. The first type is called resular galaxies. They don’t always have the same shape. In fact, no one can tell what shape they are because they are always different! They contain very young stars and lots of dust and rock. The second kind of galaxies are called spiral galaxies. As you might have guessed, are shaped like spirals. Our Milky Way is a spiral galaxies. They contain middle aged stars with a medium amount of dust and gas. The third type of galaxies are called Elipticed galaxies. They don’t always have the same shape, but they usually are in the shape of a circle or an oval. Now that you’ve learned the basics of galaxies, here are some facts for you that you can go home today and say to your parents, “Hey mom! Guess what I learned today!” and have her not know the answer! Or, quiz your older brother right before his science! Did you know that there is almost always a black hole in the middle of a galaxy? Our Milky Way has one, too. So don’t go walking into the Milky Way any time soon! Or, gravity is always pushing galaxies either away or closer to one another? (But they are usually getting pushed away.)
<urn:uuid:67db2c91-fb06-418d-861b-836b0f9e970c>
3.671875
361
Personal Blog
Science & Tech.
65.348783
The map shows temperature changes for the last decade--January 2000 to December 2009--relative to the 1951-1980 mean. Warmer areas are in red, cooler areas in blue. The largest temperature increases occurred in the Arctic and a portion of Antarctica. (Credit: NASA) From Science Daily: ScienceDaily (Jan. 22, 2010) — A new analysis of global surface temperatures by NASA scientists finds the past year was tied for the second warmest since 1880. In the Southern Hemisphere, 2009 was the warmest year on record. Although 2008 was the coolest year of the decade because of a strong La Nina that cooled the tropical Pacific Ocean, 2009 saw a return to a near-record global temperatures as the La Nina diminished, according to the new analysis by NASA's Goddard Institute for Space Studies (GISS) in New York. The past year was a small fraction of a degree cooler than 2005, the warmest on record, putting 2009 in a virtual tie with a cluster of other years --1998, 2002, 2003, 2006, and 2007 -- for the second warmest on record. Read more ....
<urn:uuid:99243111-497c-4567-b42f-3fc307c6b68c>
3.234375
226
Truncated
Science & Tech.
63.77
Investigate the molecular masses in this sequence of molecules and deduce which molecule has been analysed in the mass spectrometer. Explore the distribution of molecular masses for various hydrocarbons Which dilutions can you make using only 10ml pipettes? Get some practice using big and small numbers in chemistry. Many physical constants are only known to a certain accuracy. Explore the numerical error bounds in the mass of water and its constituents. How much energy has gone into warming the planet? Work out the numerical values for these physical quantities. Work with numbers big and small to estimate and calculate various quantities in physical contexts. PhysNRICH is the area of the StemNRICH site devoted to the mathematics underlying the study of physics Making a scale model of the solar system Can you suggest a curve to fit some experimental data? Can you work out where the data might have come from? chemNRICH is the area of the stemNRICH site devoted to the mathematics underlying the study of chemistry, designed to help develop the mathematics required to get the most from your study. . . . When you change the units, do the numbers get bigger or smaller? Use your skill and knowledge to place various scientific lengths in order of size. Can you judge the length of objects with sizes ranging from 1 Angstrom to 1 million km with no wrong attempts? Make an accurate diagram of the solar system and explore the concept of a grand conjunction. Explore the relationship between resistance and temperature Estimate these curious quantities sufficiently accurately that you can rank them in order of size Which units would you choose best to fit these situations? To investigate the relationship between the distance the ruler drops and the time taken, we need to do some mathematical modelling... Use trigonometry to determine whether solar eclipses on earth can be perfect. In which Olympic event does a human travel fastest? Decide which events to include in your Alternative Record Book. An observer is on top of a lighthouse. How far from the foot of the lighthouse is the horizon that the observer can see? Is it really greener to go on the bus, or to buy local? Have you ever wondered what it would be like to race against Usain Bolt? Design and test a paper helicopter. What is the best design?
<urn:uuid:548c2d67-bc41-41d2-a5c5-5fdab7c560d4>
3.875
477
Content Listing
Science & Tech.
46.34103
By David L. Brown One of the things that helps slow the rise of carbon dioxide in our atmosphere—and the greenhouse warming that results from higher CO2—is the fact that more than a third of the carbon released into the air since the beginning of the Industrial Revolution has been absorbed in our oceans and sequestered in sediments. That’s a good thing, right? Well, it turns out that’s not necessarily the case. It is true that the oceans can absorb large quantities of carbon … but only over long periods of time. Recent studies of a 3.2 kilometer core drilled from the Antacrtic ice cap, as reported today by BBC News (read it here), provide a record of temperature and atmospheric CO2 going back 800,000 years. The core reveals a direct cause-and-effect relationship between atmospheric levels of CO2 and global temperatures. When CO2 goes up, temperatures also rise. In that entire 800,000 years, the fastest rate of change in atmospheric carbon that was found was an increase of 30 parts per million per 1000 years. We have seen that much carbon added to our atmosphere just since 1989, a mere 17 years ago—and the rate of change is becoming ever faster as humans proliferate and continue to burn more and more oil, gas and coal. The oceans cannot respond fast enough to sequester this unprecedented onslaught of carbon in sediments, and instead more and more of the CO2 reacts with the water to form carbonic acid. According to the BBC story, …[m]ore CO2 absorbed by the oceans will raise their acidity, and a number of recent studies have concluded that this will eventually disrupt the ability of marine micro-organisms to use the calcium carbonate in the water to produce their hard parts. What is at stake is not only the entire foundation of the global food chain, which starts with the phytoplankton and other micro-organisms that live in the sea, but also a major factor in the delicate balance of oxygen and CO2 in our atmosphere. Much of the planet’s fresh oxygen is produced through he process of photosynthesis by algae and bacteria that share the oceans with those tiny animals that capture carbon and eventually sequester it on the sea bottom in their skeletons. The subject of ocean acidity was recently explored in a cover story in New Scientist magazine (“The Other CO2 Problem,” 5 August, 2006, pgs. 28-33). According to that article (read it here, subscription required), The potential seriousness of the effect was underlined in 2005 by the work of James Zachos of the University of California at Santa Cruz and his colleagues, who studied [one of several rare catastrophic events over the past 300 million years]. They showed that the mass extinction of huge numbers of deep-sea creatures around 55 million years ago was caused by ocean acidification after the release of around 4500 gigatonnes of carbon (New Scientist, 18 June 2005, p 19). It took over 100,000 years for the oceans to return to their normal alkalinity. Something similar to that catastrophic event of 55 million years ago seems to be taking place today, thanks to human-induced CO2 emissions from the burning of fossil fuels. In effect, we are digging up vast quantities of carbon—quantities that it has taken Nature hundreds of millions of years to remove from the air and sequester in the earth—and releasing it again over a period of a few lifetimes, a mere hiccup in geological time. We are only now becoming fully aware how human activity is changing our air, and now it seems that we may be destroying the oceans as well. Already there are signs of damage and destruction of tropical coral reefs due to rising temperatures and acidity. Speculation that a warmer ocean might offset the higher acid content have proved unfounded, because when tropical corals are surrounded by water just 1 degree C. warmer than normal, they reject the algae on which they depend in a symbiotic relationship, becoming “bleached” and eventually dying. But the effects will be even more pronounced in cooler regions. According to the New Scientist article: If the outlook for tropical corals is bleak, the consequences of acidification for organisms in more southerly and northerly waters causes even more concern. “Tropical surface waters will be affected by ocean acidification last,” says Ulf Riebesell of the Leibniz Institute of Marine Sciences in Kiel, Germany. “In higher latitudes the waters could tip much sooner into being corrosive.” Early studies suggested that high-latitude surface waters would become undersaturated with respect to aragonite [the soluble form of calcium carbonate that forms the skeletons of coral, molluscs and other marine organisms] only if atmospheric CO2 reached four times pre-industrial levels. But in September 2005, Orr, Fabry and colleagues published evidence suggesting that some polar and sub-polar surface waters will become undersaturated at just twice pre-industrial levels – conditions that are likely to occur within the next 50 years. In shipboard experiments, they found that the shells of pteropods ["sea butterflies," tiny shelled plankton] started dissolving after just two days in water at the pH predicted for 2050. This is worrying because pteropods are an important part of the ecosystems in the Southern Ocean and Arctic and sub-Arctic waters, where animals such as cod, salmon and whales eat them. “In standard ocean surveys their abundance is used as an indicator of ecosystem health,” says Orr. Could the pteropods simply move to warmer waters that are not approaching the saturation horizon so fast? “We think it unlikely, as they would have to outcompete organisms already living there,” says Orr. The fate of all the creatures that feed on pteropods will depend on whether species less vulnerable to acidification take their place in the food chain. Such an event of new species arising like a deus ex machina to take the place of pteropods and other threatened creatures in any reasonable time frame is extremely unlikely in my opinion. Evolutionary adaptation does not work in the short term, but through millennia and thousands of generations. What the Earth is experiencing is no less than a Great Extinction, only the sixth in the entire history of the Earth, and the geological record shows that the effects of such events last for literally millions of years. According to the New Scientist article, there is also great concern for cold water corals, lesser known than their colorful tropical relatives and found deep in the waters. Only in recent years has their existence been documented and their importance realized. According to New Scientist, “…one system stretches from Norway down to the coast of Africa. At around 4500 kilometres, it is roughly two-and-a-half times as long as Australia’s Great Barrier Reef. The richness of these reefs is also astonishing. In terms of biomass production and even biodiversity, cold-water corals may be as important as warm-water corals.” Increasing CO2 in the oceans could affect a broad range of other life forms, including fish and mammals which might not be able to adapt to more acidic conditions. In searching for a silver lining in this otherwise gloomy scenario, New Scientist noted speculation that the increased CO2 might cause microbes to produce larger quantities of volatile organic compounds such as dimethyl sulphide, which could enter the atmosphere and induce cloud seeding, which in turn might shield the planet from some of the Sun’s rays and thus reduce the impact of global warming. Another effect might be to speed up the process of sequstering carbon on the sea bottom by stimulating growth of some siimple organisms in the ocean. However, this is in question and needs more study. In fact, the whole subject involves many factors about which we know little, and which have the potential to create even more problems for our endangered planet. As a measure of how recently scientists have become aware of this new threat, it was only in 2003 that Ken Caldeira of the Carnegie Institution and Michael Wickett of the Lawrence Livermore National Laboratory calculated that the absorption of fossil CO2 “could make the oceans more acid over the next few centuries than they have been for 300 million years, with the possible exception of rare catastrophic events.” It was in their Nature paper that the phrase “ocean acidification” appeared in the scientific literature for the first time. Just three years ago. How little we know; how much is yet to be learned. And perhaps most pertinent of all, how much time do we have left? Meantime, the human circus continues to play and in the center ring the antic clowns in the capitals of the world command the audience’s full attention.
<urn:uuid:a4c236d3-0ffd-4fb6-9bbf-b591923c00fe>
3.859375
1,831
Personal Blog
Science & Tech.
38.556122
Branched chain surfactants were some of the first used and manufactured. They were chosen over their linear counterparts due to increased solubility. However, the very slow biodegradation of branched chain surfactants lead to their ultimate replacement with the linear counterparts. The branched species are still used in some Latin American countries however, due to its low cost. While human toxicity of branched dodecylbenzene sulfonate (BCDS) is not significant, its environmental buildup does pose a serious problem. The mechanism for degradation of BCDS is not well documented or investigated. The degradation was studied using a Pseudomonas aeruginosa strain and found to begin with the desulfonation of the surfactant. The process is then believed to undergo side-chain oxidation and leads eventually to the product 3,4-dihydroxybenzoic acid (Campos-Garcia et al., 1999). The steps surrounding the side-chain oxidation have not yet been elucidated. The following is a text-format branched-chain dodecylbenzene sulfonate pathway map. An organism which can initiate the pathway is given, but other organisms may also carry out later steps. Follow the links for more information on compounds or reactions. This map is also available in graphic (9kb) format. Branched-Chain Dodecylbenzene Sulfonate Pseudomonas aeruginosa W51D | | | BCDS monooxygenase | | v +-----Branched Dodecylphenol-----+ | | | | v A v B | | | | v v 3-(4-Hydroxyphenyl)- 2-(4-Hydroxyphenyl)- propionate propionate | | | | v C v D | | | | +------->4-Hydroxybenzoate<------+ | | | | | v to the Vanillin Pathway Page Author(s): Tyler D. Hall September 13, 2011 Contact Us © 2013, University of Minnesota. All rights reserved. The UM-BBD is licensed to EAWAG for hosting, maintaining and updating. http://umbbd.ethz.ch/bcds/bcds_map.html
<urn:uuid:1bfe7b27-bb6f-4fe8-ad54-2eab47e93e5f>
2.734375
477
Knowledge Article
Science & Tech.
40.572697
Section illustrating the structure of the ocean's floor. Brief Introduction of the Model of Ocean Expanding Floor The model mainly demonstrates the processes of the formation and distinction of the ocean floor. Due to the intrusion of the new substances from the asthemosphere underneath the edges of the lithosphere, these substances from the mantle push the earth crust to the sides, causing the renewing the the ocean floor. Because the floor is being pushed to the sides, the marine trench and island arc are close to each other and both dive into the mantle when reach to the land. Thus, the formation and distinction of the floor become balanced. The ocean floor gets renewed every 0.2~0.3 billion years. Main geographical objects that this model demonstrates: formation of ocean ridge: rising and cooling of the mantle substances. marine trench: ocean floor diving into the mantle. island arc (island arc chain), ocean mountain: formed when the land crust and ocean crust crash. earth crust, mantle, Mohorovicic interface, silicon aluminum layer, silicon magnesium layer. lithosphere, asthenosphere, lava. features of the age of ocean crust. features of the structures of land crust and ocean crust.
<urn:uuid:48ac7759-b1d4-4400-8232-0cf44cce8148>
4.03125
239
Knowledge Article
Science & Tech.
50.904615
Two related quantities x and y are called proportional (or directly proportional) if there exists a constant non-zero number k such that - y = kx In this case, k is called the proportionality constant of the relation. If y are proportional, we often write - y ~ x. For example, if you travel at a constant speed, then the distance you cover and the time you spend are proportional, the proportionality constant being the speed. Similarly, the amount of force acting on a certain object from the gravity of the Earth at sea level is proportional to the object's mass To test whether x and y are proportional, one performs several measurements and plots the resulting points in a Cartesian coordinate system. If the points lie on (or close to) a straight line passing through the origin (0,0), then the two variables are proportional, with the proportionality constant given by the line's slope. The two quantities x and y are inversely proportional if there exists a non-zero constant k such that - y = k/x For instance, the number of people you hire to shovel sand is (approximately) inversely proportional to the time needed to get the job done. See also: proportional font, proportional representation
<urn:uuid:ef5b19f2-9538-4917-a0ed-23f95137d390>
4.5625
263
Knowledge Article
Science & Tech.
30.71288
1.)When do action and reaction forces within a system not cancel each other? a.)When the action force is double the resistance of friction =b.)When the reaction force is contained in separate events c.)Never, they always cancel each other d.)Only on February 29th during a full... Though in the first one you may not even need the comma! :) :) :) The first one looks correct. I'm still evaluating the other two! Since I've been waiting to get my answers checked for my submission #2, I moved on to my submission #3 and I need some questions checked for my submission #3 please. 1.)If a blue car drives 10km/h across the deck of an ocean liner that is traveling at 90km/h, what is the b... These are questions I'm unsure about out of 20 for my submission #2. Someone please check soon. My answers have an =. 1.)A ball is rolling across the top of a billiard table and slowly rolls to a stop. How would Galileo explain the motion of the ball as it stopped? a)The b... This is confusing me. I have to questions and I picked 2 answers but can someone tell me if I'm wrong and why? 1.)The distance an object travels per unit of time =a)Is its speed b)Is its velocity c)Is its acceleration 2.)The distance and direction an object travels per uni... A man leaves his house and begins walking in a straight line. After 2 hours he is ten miles from home. Calculate the man's average speed in terms of feet per second. (There are 3600 seconds in an hour and 5280 feet in a mile.) so i used Average speed=total distance travele... A ball is rolling across the top of a billiard table and slowly rolls to a stop. How would Galileo explain the motion of the ball as it stopped? a)The ball's natural state is at rest =b)The ball stopped because it ceased to be pushed c)The ball stopped due to external forc... For Further Reading
<urn:uuid:0d2302d7-b8f2-4a50-928a-322a379cdbd6>
3.484375
432
Q&A Forum
Science & Tech.
83.485806
Death in the Deep Freeze Preserve your Body for a Better Future Cryogenics: The branch of physics dealing with the production and effects of very low temperatures. The one certainty in life is death! But, a small group of people are attempting to defy the inevitable. Their ticket to a second life is a controversial experiment with a radical goal - Cryonics. In the United States of America, there are, currently, two organisations that offer the chance for a future second life: The Cryonics Institute in Clinton Township, Michigan and Alcor in Scottsdale, Arizona. After they die, patient's bodies are preserved in chemicals designed to. theoretically, protect cellular structure, before being lowered into steel tubes of liquid nitrogen, called dewars. Here they will face an indefinite wait at -196°C in the hope that medical science will discover a way to bring them back to life. There are currently 147 people in cryogenic suspension, with another 1,000 members signed up for the deep freeze. Cryogenic Storage Tubes Cryonics members have two options, they can choose to have the entire body stored, or they can have a neural procedure where only the severed head is frozen. The thinking behind the latter is that an elderly patient will not wish to come back in an old body. What are the costs for these procedures. Alcor currently charge the equivalent of £80,000 for the full body option and £42,000 for the head only. Cryonics is an unproven theory. There are scientific obstacles that, some would say, are insurmountable. The current technique of full-body preservation with cryoprotectant chemicals causes extensive molecular damage to the body. To successfully bring a patient back to life, cryonics would not only need to reverse this damage, but would also have to cure the original illness the patient died from. Science has already discovered ways to suspend and revive biological life forms. Today, relatively simple living structures such as red blood, stem cells, sperm and embryos are routinely preserved using cryobiology technology. At 21st Century Medicines in Rancho Cucamonga, California Chief Scientific Officer Gregory Fahy Ph.D and his team of researchers are at the cutting-edge of cryobiology technology. Their mission is t extend the shelf-life of donor human organs which currently only remain viable for transplant for a few hours. Gregory and his team have achieved a world first. They have cryopreserved a rabbit kidney, reversed the procedure and successfully re-implanted it without losing the ability to sustain the life of the recipient. Gregory Fahy said of the achievement: "We have finally accomplished this goal that I've been pursuing since 1972 of being able to vitrify a kidney, warm it back up again, and transplant it and have the animal maintain clinical normalcy indefinitely". The Initial Stabilisation Procedure As soon as a patient dies, the aim is to stop cellular decomposition caused by oxygen deprivation. Crucially, brain cells are the first to die. The first step is to cool the body. For every 10°C drop in temperature there is a 50% reduction in metabolic demand which means it takes twice as long for damage to occur. The aim is to cool the body to just above freezing. Next, a mechanical chest compressor is used to temporarily restore circulation before injecting a cocktail of medications to stop the blood clotting. Then, the patient's blood is washed out and replaced with a temporary protective fluid. Once the body has been transported to the operating theatre, the main preservation process can begin. The process begins by opening the chest cavity to allow plastic cannulation tubes to be sewn into the heart to provide netry and exit points for the cryoprotectant fluid. These tubes connect to a heart-bypass machine that will pump the cryopreservation fluid around the body. Current cryonic techniques rely on the success of a process called vitrification. This means replacing over 60% of the water in the body with, potentially toxic, preservation chemicals. When exposed to cryogenic temperatures of below -120°C they react by turning tissue to a glass-like solid. Throughout the procedure the body is kept packed in ice inside a perspex covering. Liquid nitrogen vapour is regularly pumped around the body to keep the temperature at -3°C. Rapid Cooling Container After the surgery, the body is transferred to an insulated holding chamber for the rapid cool down stage. Liquid nitrogen vapour is pumped inside and probes will monitor the body's core temperature. The temperature will be dropped rapidly to just above the glass transition point. The body is then placed in a sleeping-bag and put into a pod which is the permanent storage container where it will be cooled very slowly to liquid nitrogen temperature. |Forever for All: Moral Philosophy of Cryonics - R. Michael Oerry| |Life in the Frozen State - Barry J. Fuller, Nick Lane, Erica E. Benson|
<urn:uuid:f4a79f5f-932d-4d68-beaf-0e6e6ed1c5a4>
2.84375
1,032
Knowledge Article
Science & Tech.
39.458596
India's dwindling forests could be revived with the help of an artificial topsoil made from waste. Initial studies suggest that the mixture of sewage sludge, fly ash (the residue from burning coal) and weed compost can increase plant growth by between 50 and 200 per cent on poor eroded soils. Researchers from the Department of Earth Sciences at the University of Western Ontario, Canada began testing the artificial soil at two plantations in the state of Orissa in eastern India. This week they will start two new trials in Orissa and larger scale use of the soil substitute is planned for next year. Michael Powell, who heads the project, says the idea is to create a soil with similar properties to normal, organic soil, and add to it soils that have lost essential minerals. 'The results to date have been quite amazing.' So far, Powell and his colleagues have tested ... To continue reading this article, subscribe to receive access to all of newscientist.com, including 20 years of archive content.
<urn:uuid:cc3689bf-6eac-4dff-bfd4-a11697d89118>
3.578125
203
Truncated
Science & Tech.
54.508995
Date: Spring 2012 I am teaching a high school Earth and Space Science class to a group of graduating students. I was curious if you know of some approach or way I could include kite flying into a lesson? This way they could get outside and soak up some fresh air and relax a little in the last couple weeks of school. I am just stumped on a way to include it. Just a some quick ideas that popped into my head when I read your question: 1. Weather phenomena such as wind speed 2. Compass use, triangulation, etc to measure height, distances, direction 3. Mass of kite vs Height (tricky as you are unlikely to have constant wind speed 4. Size of kite vs. Height 5. Shape of kite vs. Height 6. If you cannot count on wind, try paper airplanes. You can do speed, distance, height, etc according to airplane type. 7. Let the students come up with experiments, variables, measurements, etc. for either one. Have fun and enjoy the weather. Attach a digital camera to it with a remote trigger and use it as a vehicle to get air photos of the local area. Challenge students to rig the kite so the camera will be pointing down, can be fired from the ground AND is light enough for the kite to lift. Then use the photos to evaluate the geology/geomorphology of the area. You could select geologically significant areas to do this in. But rather than that, why not go over to Mineral Wells Fossil Park, about 3 hours west of you, and look for the abundant fossils there? Information is available on the web. R. W. "Mr. A." Avakian Have you discussed Bernoulli's Principle yet? I normally just use strips of paper, holding it by one edge and blowing to make the strip straighten out - the revelation is that you can straighten out the paper by blowing below it (obvious) and above it (surprise). Then you can extend this to kite flying. And ultimately to the design of kites that best take advantage of this principle. For example, why should a box kite still fly? What design elements can you have that best takes advantage of the principle and what happens if you remove these design elements or improve on it. Greg (Roberto Gregorius) Easy enough OK so far. Now make the interesting modifications. Attach anywhere from 1 to 4-5 or 6 aluminum “party” balloons in various configurations to various “points” on the kite. Now you have a “free floating” air device. There are a lot of configurations and “experiments” that you could spin off from this basic set of experimental conditions. Follow up for me the results of your experimental conditions. Click here to return to the General Topics Archives Update: June 2012
<urn:uuid:6f03f43c-d446-4366-8d42-6ed5db1627d3>
3.1875
607
Comment Section
Science & Tech.
64.121592
Though pretty, purple loosestrife has the ability to overrun native plants Nonnative species are also referred to as introduced, exotic, or alien species. The National Park Service defines nonnative as species that occur in a given place as a result of direct, indirect, deliberate, or accidental actions by humans. Plant species that are brought into an area as food, fiber, or ornamental landscape plantings can "jump the fence" and become established in the wild. Likewise, nonnative animal species can be introduced into an area deliberately, for agricultural use or fish stocking; or by "hitching a ride" on objects like boat hulls and outboard motors. Many species find their way to new locations in crop seed, soil or nursery stock. Although many introduced species have had a negative impact on our society, primarily in agriculture, these species would not have evolved with the native species and therefore are not a natural component of the ecological system. In extreme cases, invasive nonnative species can displace native species, thereby degrading the integrity and diversity of native communities. Alien species can also become pests, such as Asian lady beetles and zebra mussels. Nonnative Plant Species There are currently about 8 nonnative plant species being targeted for action within the MNRR that are of high management concern. These include: - Purple loosestrife - although beautiful, is a noxious weed, well known for its capacity rapidly to invade wetlands, replace native vegetation and dominate those habitats at the expense of turtles, birds and other animals. - Salt cedar - also known as Tamarisk, it is a deciduous shrub or small tree. It can absorb 200 gallons of water per day, giving heavy infestations the ability to dry up creeks and small lakes. - Russian olive - Canada thistle - Leafy spurge - a noxious weed that has driven out and taken over from native species. Click here for a bulletin on Invasive Plants
<urn:uuid:04492fea-4dfa-47f5-9c5a-65914819d1d9>
3.703125
406
Knowledge Article
Science & Tech.
20.353891
Quantitating DNA - How they did it back then? (Feb/15/2009 ) We now know that DNA with a concentration of 50ug/mL has an OD of 1.0 at 260nm and this value helps us determine our sample's DNA concentration. Can anyone shed light or provide links as to how scientist arrive at that relationship? If it's by a standard curve, how did they prepare their DNA concentrations to plot the graph? I'm willing to tack the challenge. But why dos it matter? Why don't you take it for granted? Do you check every math axiom before using it? Maybe they measured the weight of DNA. The DNA pellet should have mass right? Then using that, they dissolve in fixed volume and look at Ab260 . Work it out in reverse: take a range of DNA samples with different ODs, then determine how much DNA (weight) is in each. or you can run an agarose gel electroforesis, with serial dilutions to to compare the concentration with another method
<urn:uuid:c07a34ef-efcc-465e-9dd9-f5d28d48e1ec>
2.953125
219
Comment Section
Science & Tech.
72.768057
Narrator: This is Science Today. Planets discovered outside our solar system were found to have an elliptical orbit, rather than the circular orbit of our own planets. Geoff Marcy, a professor of astronomy at the University of California, Berkeley, says this makes astronomers wonder if circular orbits are necessary for the evolution of advanced life. Marcy: If instead, the Earth were residing in an elliptical orbit, then of course the Earth would get carried close to the sun and far from the sun and close to sun, alternately heating up the water to steam and the other half of the time, freezing the water on the Earth into ice. And of course, that would not bode well for the quiescent evolution of microbiology and life in general. Narrator: Marcy says their results are a bit scary for researchers looking for extra-terrestrial Marcy: But I try to assure them they should not be scared - only five percent of the stars we've looked at have Jupiters. And although those Jupiters are in elliptical motion, the other ninety-five percent of our stars could very well have Earth-like planets in circular motion. Narrator: For Science Today, I'm Larissa Branin.
<urn:uuid:3b2d401e-f3ae-4d66-97ab-204e6e7c73e9>
3.5625
280
Audio Transcript
Science & Tech.
35.948
There are nearly 450 nuclear reactors in the world, with hundreds more either under construction or in the planning stages. There are 104 of these reactors in the USA and 195 in Europe. Imagine what havoc it would wreak on our civilization and the planet’s ecosystems if we were to suddenly witness not just one or two nuclear melt-downs but 400 or more! How likely is it that our world might experience an event that could ultimately cause hundreds of reactors to fail and melt down at approximately the same time? I venture to say that, unless we take significant protective measures, this apocalyptic scenario is not only possible but probable. Consider the ongoing problems caused by three reactor core meltdowns, explosions, and breached containment vessels at Japan’s Fukushima Daiichi facility, and the subsequent health and environmental issues. Consider the millions of innocent victims that have already died or continue to suffer from horrific radiation-related health problems (“Chernobyl AIDS”, epidemic cancers, chronic fatigue, etc) resulting from the Chernobyl reactor explosions, fires, and fallout. If just two serious nuclear disasters, spaced 25 years apart, could cause such horrendous environmental catastrophes, it is hard to imagine how we could ever hope to recover from hundreds of similar nuclear incidents occurring simultaneously across the planet. Since more than one third of all Americans live within 50 miles of a nuclear power plant, this is a serious issue that should be given top priority! Figure 1. Coronal Mass Ejection (CME), SOHO image, June 9, 2002. In the past 152 years, Earth has been struck roughly 100 solar storms causing significant geomagnetic disturbances (GMD), two of which were powerful enough to rank as “extreme GMDs”. If an extreme GMD of such magnitude were to occur today, in all likelihood it would initiate a chain of events leading to catastrophic failures at the vast majority of our world’s nuclear reactors, quite similar to the disasters at both Chernobyl and Fukushima, but multiplied over 100 times. When massive solar flares launch a huge mass of highly charged plasma (a coronal mass ejection, or CME) directly towards Earth, colliding with our planet’s outer atmosphere and magnetosphere, the result is a significant geomagnetic disturbance.
<urn:uuid:5e6e859b-4968-4965-8143-522ca3cca03c>
3.484375
464
Personal Blog
Science & Tech.
30.120249
This is an image showing the clouds of Titan. Can there be Life in the Environment of Titan? Titan's atmosphere is a lot like the Earth's, except that it is very cold, from -330 degrees to -290 degrees! Like the Earth, there is a lot of Nitrogen and other complex molecules. There also may be an ocean of methane, or perhaps a liquid water layer inside the moon. Except for the cold, these signs would be favorable for some sort of life. Some creatures on Earth are known to live in an environment of very cold water. In the atmosphere there are layers of clouds composed of complex molecules such as methane. Moreover there is energy from ultraviolet light, and the charged particles of the magnetosphere. This type of environment, aside from the cold, is the kind of environment in which scientists think life began. Overall, the environment sounds unfriendly to life as we know it on earth, because of the cold. Since not much is known about the moon Titan, up close exploration of this moon, with a probe, as shown in this drawing, would help scientists better understand if life could survive there. Shop Windows to the Universe Science Store! The Spring 2010 issue of The Earth Scientist , focuses on the ocean, including articles on polar research, coral reefs, ocean acidification, and climate. Includes a gorgeous full color poster! You might also be interested in: There are many types of living things that are able to live in difficult environments on Earth. The picture to the left shows an example of some of these creatures. These are tubeworms that live at the...more Jupiter's atmospheric environment is one of strong gravity, high pressure, strong winds, from 225 miles per hour to 1000 miles per hour, and cold temperatures of -270 degrees to +32 degrees (freezing temperature)....more In July, 1996, it was announced that Dr. David McKay, along with a team of scientists at Johnson Space Center (a division of NASA), had discovered possible fossils of bacteria in a meteorite named ALH84...more Saturn's atmospheric environment is one of strong gravity, high pressure, strong winds, from 225 miles per hour to 1000 miles per hour, and cold temperatures of -270 degrees to +80 degrees. With winds...more Titan's atmosphere is a lot like the Earth's, except that it is very cold, from -330 degrees to -290 degrees! Like the Earth, there is a lot of Nitrogen and other complex molecules. There also may be an...more Autotrophs are organisms that can "make their own food" from an inorganic source of carbon (carbon dioxide) given a source of energy. Most autotrophs use sunlight in the process of photosynthesis to make...more In the warm primordial ocean, aggregates of amino acids, proteins, and other hydrocarbons came together into a form called *coacervates*. Amino acids will spontaneously form coacervates in the same way...more
<urn:uuid:5dfc084e-a65b-48d5-a236-1559c0c2e5d5>
3.140625
626
Content Listing
Science & Tech.
56.016376
News story originally written on March 15, 1999 Right now you can see all 110 Messier objects in one night. This only happens for a little while each year. There's a new moon out so the night sky is dark. That makes it easier to see the objects. During special times like this, people try to see how many Messier objects they can find in one night. They call it a Messier Marathon. Shop Windows to the Universe Science Store! Our online store on science education, classroom activities in The Earth Scientist specimens, and educational games You might also be interested in: It was another exciting and frustrating year for the space science program. It seemed that every step forward led to one backwards. Either way, NASA led the way to a great century of discovery. Unfortunately,...more The Space Shuttle Discovery lifted off from Kennedy Space Center on October 29th at 2:19 p.m. EST. The sky was clear and the weather was great. This was the America's 123rd manned space mission. A huge...more Scientists found a satellite orbiting the asteroid, Eugenia. This is the second one ever! A special telescope allows scientists to look through Earth's atmosphere. The first satellite found was Dactyl....more The United States wants Russia to put the service module in orbit! The module is part of the International Space Station. It was supposed to be in space over 2 years ago. Russia just sent supplies to the...more A coronal mass ejection (CME) happened on the Sun last month. The material that was thrown out from this explosion passed the ACE spacecraft. ACE measured some exciting things as the CME material passed...more Trees and plants are a very important part of this Earth. Trees and plants are nature's air conditioning because they help keep our Earth cool. On a summer day, walking bare-foot on the sidewalk burns,...more There is something special happening in the night sky. Through mid-May, you will be able to see five planets at the same time! This doesn't happen very often, so you won't want to miss this. Use the links...more
<urn:uuid:a3bae78a-472b-4ef9-8e94-dd7cfee61ee7>
3.265625
450
Content Listing
Science & Tech.
68.237555
Sunday, August 19, 2012 THE WORLD'S GREATEST MIMIC Discovered in 1998 off the coast of Indonesia, the mimic octopus is the first known species to take on the characteristics of multiple species. The octopus grows to about 2 feet in length and primarily lives in the seas of Southeast Asia. It impersonates at least 15 different species (including sea snakes, lionfish, flatfish, brittle stars, giant crabs, sea shells, stingrays, jellyfish, sea anemones, and mantis shrimp) as means of evading or becoming a seeming predator of its own predators. As an example, in order to evade a damselfish, it mimics a banded sea snake, the damsel’s known predator.
<urn:uuid:a4743c52-e2d3-4b29-94e1-e12ef5e06f20>
3.296875
156
Personal Blog
Science & Tech.
43.350562
At the corner of the cube circular arcs are drawn and the area enclosed shaded. What fraction of the surface area of the cube is shaded? Try working out the answer without recourse to pencil and paper. At the beginning of the night three poker players; Alan, Bernie and Craig had money in the ratios 7 : 6 : 5. At the end of the night the ratio was 6 : 5 : 4. One of them won $1 200. What were the assets of the players at the beginning of the evening? According to Plutarch, the Greeks found all the rectangles with integer sides, whose areas are equal to their perimeters. Can you find them? What rectangular boxes, with integer sides, have their surface areas equal to their volumes? Take an A4 piece of paper and halve it by drawing a line across the middle, parallel to the shorter side. Each half is called an A5 Halve the bottom half by drawing a vertical line down the middle. This creates two A6 rectangles. Halve the right hand one by drawing a horizontal line across its middle. This creates two A7 rectangles. Halve the bottom one by drawing a vertical line down the middle. This creates two A8 rectangles. You should have something like this: Halve the right hand A8 shape by drawing a horizontal line across its middle. This creates two A9 rectangles. And so on ... Keep going until the rectangles are too small to be Now draw the diagonal of the A4 piece of paper from the top left corner to the bottom right corner. This creates a sequence of triangles. The first two are numbered in the diagram below but you will have many more drawn on your sheet: What is the total area of the first two triangles as a fraction of the original A4 rectangle? What is the total area of the first three triangles as a fraction of the original A4 rectangle? If you could you go on adding all the triangles' areas, what do you think the total would be as a fraction of the original A4 Many thanks to Professor Michael Sewell for providing us with this idea. It has a connection with Zeno's paradox. Can you find out about What is the connection between the method used to find all the triangles' areas and the method used to explain Zeno's paradox?
<urn:uuid:0d5126f5-4c2c-434c-b052-25edb0736e42>
3.3125
508
Tutorial
Science & Tech.
67.067814
Wooller, Luke; van Wyk de Vries, Benjamin; Cecchi, Emmanuelle and Rymer, Hazel Due to copyright restrictions, this file is not available for public download Click here to request a copy from the OU Author. |DOI (Digital Object Identifier) Link:||http://dx.doi.org/doi:10.1007/s00445-009-0289-3| |Google Scholar:||Look up in Google Scholar| Long-term fault movement under volcanoes can control the edifice structure and can generate collapse events. To study faulting effects, we explore a wide range of fault geometries and motions, from normal, through vertical to reverse and dip-slip to strike-slip, using simple analogue models. We explore the effect of cumulative sub-volcanic fault motions and find that there is a strong influence on the structural evolution and potential instability of volcanoes. The variety of fault types and geometries are tested with realistically scaled displacements, demonstrating a general tendency to produce regions of instability parallel to fault strike, whatever the fault motion. Where there is oblique-slip faulting, the instability is always on the downthrown side and usually in the volcano flank sector facing the strike-slip sense of motion. Different positions of the fault beneath the volcano change the location, type and magnitude of the instability produced. For example, the further the fault is from the central axis, the larger the destabilised sector. Also, with greater fault offset from the central axis larger unstable volumes are generated. Such failures are normal to fault strike. Using simple geometric dimensionless numbers, such as the fault dip, degree of oblique motion (angle of obliquity), and the fault position, we graphically display the geometry of structures produced. The models are applied to volcanoes with known underlying faults, and we demonstrate the importance of these faults in determining volcanic structures and slope instability. Using the knowledge of fault patterns gained from these experiments, geological mapping on volcanoes can locate fault influence and unstable zones, and hence monitoring of unstable flanks could be carried out to determine the actual response to faulting in specific cases. |Item Type:||Journal Article| |Copyright Holders:||2009 Springer-Verlag| |Funders:||Natural Environment Research Council Studentship [grant number NER/S/A/2000/03505], INSU PNRN (France), Open University Research Development Fund Fellowship| |Keywords:||volcano-tectonics; fault; analogue modelling; lateral collapse; debris avalanche; deformation| |Interdisciplinary Research Centre:||Centre for Earth, Planetary, Space and Astronomical Research (CEPSAR)| |Depositing User:||Hazel Rymer| |Date Deposited:||01 Oct 2009 15:47| |Last Modified:||14 Nov 2012 20:06| Actions (login may be required) |Public: Report issue / request change|
<urn:uuid:7422b6cd-a7e5-440c-98c6-473ef289fe7a>
3.125
626
Academic Writing
Science & Tech.
20.659821
Comet C/2009 R1 has been discovered in September 2009 by Robert H. McNaught in the course of Australia’s Siding Spring Survey. For more information about the discovery of this comet, please see our previous post: The comet is now around magnitude 7.5 and it will be a nice binocular object. Throughout this apparition it will be low in the east or northeast when dawn begins to brighten. In our images, taken on May 26, is clearly visible a nice disconnection event (DE) in the plasma tail of the comet C/2009 R1. Occasionally due to comet-solar wind interaction, the entire plasma tail or part of it separates from the comet and drift away (antisunward), followed by simultaneous renewal of the plasma tail. This phenomena is called a disconnection event. Here you can see bigger versions of this image: Wide-field animation of comet C/2009 R1 (May 26, 2010), showing the DE event: or click here: Wide-Field Image (May 26, 2010): click here for a bigger version: by Ernesto Guido & Giovanni Sostero
<urn:uuid:11785ac3-876d-4340-bf1a-dafdaedf6f93>
3.4375
240
Personal Blog
Science & Tech.
53.893379
View the U.S. Severe Weather Map. Hail is a form of frozen precipitation that's created by strong thunderstorms with fast updrafts—air being pulled upward into a thunderstorm. It can cause serious damage, especially to cars, aircraft, glass-roofed structures, and most notably, farmers' crops. Hail causes approximately $1 billion in property and crop damage each year. The costliest hailstorm happened in April 2001, from eastern Kansas to southwest Illinois, including the St. Louis area. Property damage in this storm exceeded $2.4 billion in 2010 dollars. Deaths from hail are rarethe last known death caused by hail in the U.S. was in the year 2000, when a man was killed by softball size hail in Fort Worth, Texas. Hail is formed when very strong thunderstorm updrafts meet supercooled water droplets. Supercooled water droplets are liquid water drops that are surrounded by air that is below freezing, and they're a common occurrence in thunderstorms. There are two methods of hail stone formation and growth that give hail stones their "layered" look. A tiny ice crystal will be the nucleus of the hail stone. In wet growth, supercooled water droplets collide with and spreads across the ice nucleus. Since this process is relatively slow (slower than the dry growth process) it results in a layer of clear ice. Unlike wet growth, the supercooled water meets the ice nucleus and immediately freezes. Because this process is so fast, everything within the supercooled droplet, including small air bubbles, freezes into the layer, which gives it a cloudy look. Rain and hail is what creates the "bounded" portion of the bounded weak echo region (BWER) on radar. The weak echo region is created by a strong updraft, which also helps the hail to grow. 1. Hail forms and is carried upward through the storm by the updraft and held above the freezing line 2. The hail stone collides with supercooled droplets and grows in size 3. When the hail becomes too large and heavy to be supported by the updraft, it falls to the ground The largest officially recognized hailstone on record to have been "captured" in the U.S. was that which fell near Vivian, South Dakota in 2010. It measured 8.0 inches in diameter, 18.5 inches in circumference, and weighed in at 1.9375 pounds. Mr. Lee Scott, who collected the monster stone, originally planned to make daiquiris out of the hailstone but fortunately thought better and placed it in a freezer before turning it over to the National Weather Service for certification. More on record-setting hail from our Weather Historian, Christopher C. Burt. The following video was shot in an incredible hail storm in Oklahoma in 2010. It's not often that you get to see softball-size hail fall into a swimming pool from 30,000 feet. The video really gets good around 1 minute in.
<urn:uuid:f95ae24c-a4d1-4ed8-be26-ad84cf01f1cd>
3.875
629
Knowledge Article
Science & Tech.
64.613658
Ever seen little points of light buzzing around outside on summer nights? Those lights - fireflies – are beetles that create light through a chemical reaction. By controlling the reaction, fireflies can turn on and off their lights. They flash light to communicate and find a mate. Fireflies may be disappearing from some areas where they have been found in the past, so researchers are looking to citizen scientists for help understanding more about what is affecting fireflies. Changes in the way we use land might be taking a toll on fireflies. For example, as natural landscapes are turned into lawns, fertilizers, pesticides and mowers may jeopardize fireflies, which spend daytime hours on the ground. Fireflies might also be affected by outdoor lights such as streetlights and the amount of water in the environment. The Firefly Watch project gets the public involved collecting data about where fireflies are found. If you live east of the Rocky Mountains in the United States and have ten minutes a week to look for fireflies in the evening, consider signing up as a volunteer. The project started three years ago at Boston’s Museum of Science by Don Salvatore, project coordinator, in collaboration with local scientists. Don had been teaching insect programs at the museum and was getting questions from visitors about fireflies. Some people mentioned that they used to see more fireflies years ago and wondered whether fireflies were becoming less common. Don contacted Adam South, a graduate student at Tufts University studying fireflies, and the two of them started Firefly Watch with other researchers in 2008. In the project’s first summer, Don hoped to get 100 people in the local area to participate. There was much more interest than he expected. Seven hundred people from 34 states watched fireflies with the project during the summer of 2008. Now, three years later, 5000 people have become involved. Volunteers watch for fireflies at least once per week throughout the warm season. In New England that season is roughly from the end of May until the beginning of August. The season for finding fireflies is much longer further south. For example, in Florida you might be able to find fireflies from March to November. While Firefly Watch suggests that people look for fireflies in their backyards, mainly because that is convenient for many people, you can choose any spot – urban, rural, or suburban. Because the project is interested in the impact of fertilizers and pesticides on fireflies, choose a spot where you know whether these products are used. Before you head outside to look for fireflies, register on the Web site and then check out the Virtual Habitat to learn how to recognize different types of fireflies from their flashing lights and the types of flashing patterns that you may see outside. If you are not able to watch fireflies where you live, you can still take a look at the data online. There are maps showing where Firefly Watch volunteers have spotted fireflies. You can also download all the data if you’d like to interpret it yourself. The project invites you to explore and analyze the data yourself and then report back what you found. Who knows what interesting trend you may discover in the data!
<urn:uuid:13134907-12b6-44e1-8505-dbd2b17e684a>
3.65625
648
Personal Blog
Science & Tech.
47.86154
To determine how elevated CO2 may reduce water use by crops, plant physiologist James Bunce measures water vapor conductance of barley leaves grown at twice the current atmospheric CO2 concentration. Understanding climate change on a global scale means getting up close and personal with a single plant--or even with a single cell in a plant. "Nature has a way of rewarding those who take the time to look closely at basic processes," says Steven J. Britz, an ARS plant physiologist at Agency scientists around the country are examining how elevated atmospheric CO2 and other greenhouse gases affect three essential biological processes: respiration, or the exchange of oxygen for CO2; the use of light in photosynthesis to remove CO2 from the air for plant growth and reproduction; and water use. Research to date both confirms some long-held beliefs about plant response to elevated CO2 and adds to what we already know. For example, elevated levels affect a plant's respiration. James A. Bunce, an ARS plant physiologist at Beltsville, grew soybean plants in CO2 chambers at nearly double the current atmospheric level. Surprisingly, while higher levels of CO2 increased plant growth, they lowered plant respiration. "We expected the plants to have a higher rate of respiration," says Bunce. "It's still a mystery how the rate of respiration can be reduced without a negative impact on the plant." Photosynthesis in wheat plants can be measured in field chambers like this one being adjusted by plant physiologist Richard Other studies show that changes in the atmosphere affect how plants use water. Like scientists at other ARS laboratories, Bunce and colleagues found that plant water use changes dramatically when the plants grow in higher atmospheric CO2. By studying the plant stomata--the pores on the leaf surface that regulate water loss from the leaf--they found that at higher CO2 levels, plants use less water to produce the same amount of growth. This response is commonly seen in the growth chamber and greenhouse, but the overall reduction in water use for crops grown in the field seems to be less than 5 percent, for reasons that are not yet understood. ARS soil scientist Bruce A. Kimball and colleagues at Phoenix, Arizona, confirmed that plant photosynthesis is immediately stimulated when you double the atmospheric CO2. He also showed it doesn't necessarily slow down over time in crops such as wheat and cotton or fruit trees like oranges. In experiments with sour orange trees, Citrus aurantium, physicist Sherwood B. Idso observed sustained explosive growth over a 9-year period when the trees grew outdoors under experimentally elevated CO2. Physicist Sherwood Idso and soil scientist Bruce Kimball assess fruit production on an orange tree growing in an open-top chamber with enriched Scientists speculate that this level of response to increased CO2 concentrations will lead to an overall net increase in productivity in many ecosystems. Other greenhouse gases can add to this effect. For example, ARS plant physiologist Joseph E. Miller and co-workers at Raleigh, North Carolina, found that the atmospheric concentration of ozone near ground level affected the degree to which elevated atmospheric CO2 stimulated photosynthesis in soybean leaves. Under today's CO2 concentrations, ozone can suppress photosynthesis, but Miller's experiments showed that photosynthesis and yield were increased more by elevated CO2 if plants were stressed by ozone. "This is one example of the complexities involved in understanding how plants will respond to global environmental change," Miller says. "Clearly, we have a lot to learn about how the different contributors to climate change interact--and how those interactions will affect plant The Free Air CO2 Enrichment project (FACE) in Arizona is helping scientists from around the world to understand how plants respond to actual field conditions representing those anticipated in the next 50 to 75 years. Large amounts of CO2 are vented through upright pipes that maintain a constant CO2 concentration of 550 parts per million in the atmosphere around the plants. "Our FACE project, begun in 1989, is the longest running of five now providing researchers with information needed to assess impacts of global change," says Kimball. "We have studied cotton and wheat, while the other experiments concentrate on forage grasses, loblolly pine, chaparral, and desert plants." In general, Kimball's work has shown that crop yields increase as CO2 rises.
<urn:uuid:2e62d225-0ab9-4795-b451-3fcb7fa718b2>
3.59375
975
Knowledge Article
Science & Tech.
33.211943
One of the most fascinating questions that occurs when contemplating the universe is whether there other life exists, equally or more intelligent than we. Are there alien eyes looking at our star or our galaxy and do these creatures ask the same cosmological questions we ask? Nobody knows, although the straightforward application of the Copernican Principle suggests that we cannot be unique in the universe. Is there other life in the universe? How can we begin to answer that question, in the absence of direct evidence to answer the question in the affirmative? One way is through something known as the Drake equation, named after the astronomer Frank Drake. It is not really an equation to be solved, so much as it is a way of systematizing the unknowns. Here is how it works. Let us say we wish to estimate the quantity N, the number of technological civilizations in the galaxy. Of the n stars in the galaxy, only some fraction   fp  of them will have planets. Only some average number of planets per star, (H), will be potentially habitable. Of the habitable planets, there is a fraction   fl  that will develop life. Now of the planets that develop life, how many will develop intelligent life? Use   fi  for that fraction. Only some fraction of intelligent species   ft  will develop technology. So given all these things, we can write N = n ×   fp  ×   H  ×   fl  ×   fi  ×   ft Some of these factors are easier to estimate than others. There are about 100 billion stars in the Milky Way so we will use that for n. There now seems to be some direct evidence for planets around other stars, but as yet we still don't know what fraction of stars would have planets. If we are optimistic, then we would take a fraction near one, essentially saying that all stars have planets. What number of planets per star would be habitable? The planets would have to be located at a distance from their star that is neither too hot nor too cold. In our solar system there are three that are potentially habitable, Venus, Mars, and the Earth. Some stars would support fewer, or possibly no, habitable planets. Let's say that, on average, only one in 10 stars with planets has one planet that could support life. What have we got so far? N = 100,000,000,000 × 1 × 0.1 ×   fl  ×   fi  ×   ft This still leaves a lot of potentially life-bearing planets! The next three fractions are the especially tricky ones. If life can develop, does it? Opinions differ widely on this topic. This is where the recent Life on Mars issue has some application. If this development holds up, then life developed on both Mars and the Earth and it becomes much more problematic to say that life is incredibly difficult to get started on any given planet. If you believe life is inevitable, given habitable conditions, then make   fl  =1. Now, if life forms, does it become intelligent? A difficult question. Life has been around on Earth for billions of years and we (modern humans) came on the scene only in the last 100,000 or so years. And any life on Mars that may have once existed (if it did) died out completely. For purposes of an estimate, let's take the ratio of 100,000 years of humans to 1 billion years of life, giving us 1 in 10,000 planets with life that develop intelligence. Does intelligent life inevitably develop technology? Good arguments can be made either way. There doesn't seem to be anything particularly inevitable about humanity's rise to technological prowess. Although it happened rapidly once it got going, did it have to happen? Could an intelligent creature stay as a hunter/gatherer or simple tool-user for the entire length of its existence? Who knows? Let's adopt the attitude that intelligence necessarily leads to technology and say that   ft  = 1. So we have N = 100,000,000,000 × 1 × 0.1 × 1 × 0.0001 × 1 = 1,000,000 One million planets with technologies! OK, we stacked the deck by choosing all the optimistic numbers. Go back and put in some numbers of your own. You only have to insert one pessimistic number to drop the number of planets in the Milky Way down to around 1, which would be the Earth. For example, humanity has been technological for only 100 out of its 100,000 years of existence. If you find the thought of a low number of life-bearing planets depressing, that we might be alone in the Milky Way, bear in mind that there are more galaxies in the visible universe, than there are stars in our galaxy. So if there were only one life bearing planet in each galaxy there would still be trillions of life bearing planets. But we will never communicate with or visit other galaxies. And how likely are life-bearing planets that can lead to intelligent life? Do they require a large moon, such as the Earth has? Such double planets may well be rare, particularly if the Moon formed as the result of a huge impact early in the history of the solar system. Does intelligent life require dry land as well as oceans? What are the odds that the Earth would end up with both oceans and dry land (as opposed to all oceans or all dry land)? We don't really know, but once you start thinking about it, things become rather tricky quite rapidly. So who knows? But it does tend to make you want treat our planet and its unique inhabitants with some respect.
<urn:uuid:a7681a55-3804-4087-a7d8-8ac96c82f10c>
3.75
1,147
Academic Writing
Science & Tech.
59.664198
Random number generation is tricky business. Good random number generation algorithms are tricky to invent. Code implementing the algorithms is tricky to test. And code using random number generators is tricky to test. This article will describe SimpleRNG, a very simple random number generator. The generator uses a well-tested algorithm and is quite efficient. Because it is so simple, it is easy to drop into projects and easy to debug into. SimpleRNG can be used to generate random unsigned integers and double values with several statistical distributions: - Chi square - Inverse gamma - Laplace (double exponential) - Student t Why Not Just Use the .NET Random Number Generator? For many applications, it hardly matters what random number generator you use, and the one included in the .NET runtime would be the most convenient. However, sometimes it helps to have your own random number generator. Here are some examples. - When debugging, it's convenient to have full access to the random number generator. You may want to examine the internal state of the generator, and it helps if that state is small. Also, it may be helpful to change the generator temporarily, making the output predictable to help debug code that uses the generator. - Sometimes it is necessary to compare the output of programs written in different languages. For example, at my work we often take prototype code that was written in R and rewrite it in C++ to make it more efficient. If both programs use their own library's random number generator, the outputs are not directly comparable. But if both programs use the same algorithm, such as the one used here, the results might be directly comparable. (The results still might not match due to other differences.) - The statistical quality of the built-in generator might not be adequate for some tasks. Also, the attributes of the generator could change without notice when you apply a service pack. George Marsaglia is one of the leading experts in random number generation. He's come up with some simple algorithms that nevertheless produce high quality output. The generator presented here, SimpleRNG, uses Marsaglia's MWC (multiply with carry) algorithm. The algorithm is mysterious but very succinct. The algorithm passes Marsaglia's DIEHARD battery of tests, the acid test suite for random number generators. The heart of SimpleRNG is three lines of code. Here is the method that generates uniformly distributed unsigned integers. private static uint GetUint() m_z = 36969 * (m_z & 65535) + (m_z >> 16); m_w = 18000 * (m_w & 65535) + (m_w >> 16); return (m_z << 16) + m_w; m_z are unsigned integers, the only member variables of the class. It's not at all obvious why this code should produce quality random numbers, but it does. The unsigned integer is then turned into a double in the open interval (0, 1). ("Open" means that the end points are not included; the method will not return 0 or 1, only numbers in between.) public static double GetUniform() uint u = GetUint(); return (u + 1.0) * 2.328306435454494e-10; Using the Code SimpleRNG class has two seeds. These have default values, or they can be specified by calling SetSeed() with one or two arguments. These arguments must be non-zero; if an argument is zero, it is replaced by the default value. Some may prefer to throw an exception in this case rather than silently fix the problem. There is also an option to set the seed values from the system clock using SetSeedFromSystemTime(). Once the class is initialized, there is only one public method to call, Points of Interest The code to test SimpleRNG is more complicated than SimpleRNG itself. The test code included as a demo uses a statistical test, the Kolmogorov-Smirnov test, to confirm that the output of the generator has the expected statistical properties. If this test were applied repeatedly with ideal random input, the test would fail on average once in every thousand applications. This is highly unusual in software testing: the test should fail occasionally! That's statistics for you. Don't be alarmed if the test fails. Try again with another seed and it will most likely pass. The test is good enough to catch most coding errors since a bug would likely result in the test failing far more often. The test code also uses RunningStat, a class for accurately computing sample mean and variance as values accumulate. For more information on random number generation, particularly on subtle things that can go wrong, see the CodeProject article Pitfalls in Random Number Generation. If you are using C++, see Random number generation using C++ TR1. - 11th April, 2008: Initial post - 13th April, 2008: Revised article to explain why this generator might be preferable to the built-in generator - 30th September, 2008: Added further reading section - 4th October, 2008: Fixed two bugs based on reader feedback. Now seeds cannot be 0, and GetUniform cannot return 0. - 22nd October, 2008: Added methods for generating normal (Gaussian) and exponential random samples - 19th February, 2010: Fixed incompatibility with Marsaglia's MWC algorithm - 30th April, 2010: Added methods for new distributions, extended the test code - 27th July, 2010: Updated article - 6th January, 2011: Updated article and download files - 16th March, 2011: Updated article and download files per Craig McQueen‘s comment regarding the lower bits of the core generator
<urn:uuid:9139d9a1-0c4f-49eb-8543-59305e79394a>
2.6875
1,209
Documentation
Software Dev.
46.138573
FIGURE A2.1. (a) Daily amplitudes of the Arctic Oscillation (AO) the North Atlantic Oscillation (NAO), and the Pacific-North American (PNA) pattern. The pattern amplitudes for the AO, (NAO, PNA) are calculated by projecting the daily 1000-hPa (500-hPa) height anomaly field onto the leading EOF obtained from standardized time- series of daily 1000-hPa (500-hPa) height for all months of the year. The base period is 1979–2000. (b-d) Northern Hemisphere mean and anomalous 500-hPa geopotential height (CDAS/Reanalysis) for selected periods during the month are shown in the remaining 3 panels. Mean heights are denoted by solid contours drawn at an interval of 8 dam. Dark (light) shading corresponds to anomalies greater than 50 m (less than -50 m). Anomalies are calculated as departures from the 1979–95 base period daily means.
<urn:uuid:1161c33b-fa3c-4cb7-91e8-3bb40ac222f6>
3.046875
226
Documentation
Science & Tech.
59.694141
New analysis of an experiment performed by the Viking landers suggests that evidence of microbial life in the Martian soil may have been detected 36 years ago. As one of the authors of this new paper puts it: "on the basis of what we've done so far, I'd say I'm 99 percent sure there's life there." Whoa. Remember when everyone was freaking out about the mass deaths of bees back in 2006? Well, while the general populace may have decided to go back to eating its honey completely care-free, scientists have been hard at work trying to discover the cause.The newest suspect? High-fructose corn syrup. Hermit crab housing has just taken on an interesting new turn. Harry, the local resident at the rock pool in Legoland in the U.K. has crawled into a specially crafted shell made of the local building material — Legos. So, adorable hermit crabs enjoy plastic blocks as much as they love 3D-printed enclosures. Scientists have created a sensor so accurate it can now measure the weight of a single proton. The super scale uses the smallest unit of mass as a measurement, a single yoctogram, whereas previous sensors could only get within 100 yoctograms — a large margin at that scale. London's Wellcome Collection has an awesome exhibit starting today through June 17: It's all about brains! The image above is a corrosion cast of blood vessels in the brain from the 1980s.... Three billion years ago was way, way before humans. It was before mammals. It was before dinosaurs and insects and even plants. It was before Earth had any forms of life more complex than microbes. But it still rained back then, and paleoclimatologists have used fossilized raindrops to figure out what kind of atmosphere our planet used to have. Launched by Microsoft Research, Moscow State University and UC Berkeley, ChronoZoom has, oh, just about 14 billion years for you to learn about as you travel through time. You didn't have anything else planned today, did you? Titanic Director James Cameron has just joined one of the most exclusive clubs on Earth, becoming just the third person to reach the deepest part of the ocean, and return safely to the surface. Generating a non-destructive 100 Tesla magnetic field has been a project of the Los Alamos National Lab for about a decade and a half, and just yesterday, they managed to pull it off. A huge nested magnet hooked up to an even huger generator kicked out a pulse 2 million times stronger than the Earth's magnetic field, and it screamed in the process. From Blastr: On Sept. 13, 1999, a tragedy befell all mankind. An accident of unknown origin at the nuclear waste dumps on the far side of the moon caused a massive explosion that hurled the moon out of Earth orbit.
<urn:uuid:571d4da7-62fd-48a5-8181-549ca2480e4b>
3.1875
587
Content Listing
Science & Tech.
58.046521
- Reader's Guide - Summary for Decision-makers - Key Questions in the Millennium Ecosystem Assessment - How have ecosystems changed? - How have ecosystem services and their uses changed? - How have ecosystem changes affected human well-being and poverty alleviation? - What are the most critical factors causing ecosystem changes? - How might ecosystems and their services change in the future under various plausible scenarios? - What can be learned about the consequences of ecosystem change for human well-being at sub-global scales? - What is known about time scales, inertia, and the risk of nonlinear changes in ecosystems? - What options exist to manage ecosystems sustainably? - What are the most important uncertainties hindering decision-making concerning ecosystems? - Appendix A. Ecosystem Service Reports - Appendix B. Effectiveness of Assessed Responses - Appendix C. Authors, Coordinators, and Review Editors - Appendix D. Abbreviations, Acronyms, and Figure Sources - Appendix E. Assessment Report Tables of Contents Disclaimer: This chapter is taken wholly from, or contains information that was originally written for the Millennium Ecosystem Assessment as published by the World Resources Institute. The content has not been modified by the Encyclopedia of Earth.
<urn:uuid:4560790f-34af-4e30-afec-c1a1b6c35eeb>
3.296875
260
Truncated
Science & Tech.
22.45625
Found 30 - 40 results of 76 programs matching keyword "feautures of mars" Engineers at the Jet Propulsion Laboratory (JPL) explain how they simulate martian conditions and conduct tests with model rovers to prepare the Curiosity rover for its journey to Mars and its work on the red planet. Join the Exploratorium crew on our trip to NASA’s Jet Propulsion Lab (JPL) in Pasadena, California, to learn more about the Mars Science Laboratory mission and the Curiosity rover. Fernando Abilleira, Ingeniero Español especializado en navegación y trayectorias quien trabaja para la NASA, describe como el astromóvil "Curiosidad" aterrizará sobre la superficie del planeta Marte en el cráter llamado "Gale", un lugar de alto inte A glimpse of the full-scale model of the Mars rover, Curiosity. On display at the Exploratorium from August 1st to September 16, 2012. This model is on loan from JPL, NASA's Jet Propulsion Laboratory, and there are only two on loan in the United States! In this video from NASA's Jet Propulsion Laboratory, we examine the notion of curiosity. Curiosity is a big part of what it means to be human. It's also the name of NASA's next Mars rover. This 60-second video shows how one type of curiosity can inspire another. In this video from NASA's Jet Propulsion Laboratory, we look at landing on Mars. Landing a spacecraft on Mars is one of the trickiest things we do. This 60-second video explains how it’s done, and the three landing systems we use at the Red Planet. In this video from NASA's Jet Propulsion Laboratory, an animation shows the major mission events of the Curiosity rover's landing on Mars. It's time for a new mission to Mars! Join Exploratorium science educators as we celebrate the launch of the newest rover, Curiosity, as it begins it's 8 1/2 month journey to the planet Mars. We will look at the launch itself, talk a little bit about MSL(Mars Science Laboratory) and Curiosity, summarize the history of Mars exploration, and look forward to what is next! Dr. Laura Peticolas is a physicist at UC Berkeley's Space Physics Research group. She studies the Aurora to learn more about the Earth and the workings of our Solar System. She's currently working with NASA's Mars data to understand why the Martian aurora looks the way it does. In this podcast she discusses her research, her inspiration and how and why scientists sonify data. The Mars Phoenix Lander will have been collecting data and sending it back to earth for a month! Exploratorium Senior Scientist Paul Doherty will examine the data and tell us what new information we've gained about Mars. We'll also get an update on our old friends, the Mars rovers Spirit and Opportunity!
<urn:uuid:5ef7f527-5f03-455c-8ae7-c9d035f92f89>
3.015625
617
Content Listing
Science & Tech.
50.004403
While Vilhelm Bjerknes and his team were developing their synoptic models in Bergen, a radically different approach to forecasting was being pursued by Lewis Fry Richardson. Richardson's starting point was the system of fundamental physical principles governing atmospheric motion. He assembled the set of mathematical equations which represent these principles and formulated an approximate algebraic method of calculating their solution. Starting from the state of the atmosphere at a given time - the initial conditions - the method could be used to work out its future evolution. Using the most complete set of observations available to him, Richardson applied his numerical method and calculated the changes in the pressure and winds at two points in central Europe. The results were something of a calamity: Richardson calculated a change in surface pressure over a six-hour period of 145 hPa, a totally unrealistic value. As Sir Napier Shaw remarked, the wildest guess would not have been wider of the mark! Despite the ``glaring errors'' in his forecast, Richardson was bold enough to publish his method and results in his remarkable Weather Prediction by Numerical Process (LFR; Richardson, 1922). This profound, and occasionally whimsical, book is a treasure-store of original and thought-provoking ideas and amply repays the effort required to read it. The application of Richardson's forecasting method involved an enormous amount of numerical computation. Even the limited results he obtained cost him some two years of arduous calculation (Lynch, 1993). This work was carried out in the Champagne district of France where Richardson served as an ambulance driver during the Great War (Ashford, 1985). His dedication and tenacity in the dreadful conditions of the war are an inspiration to those of us who work in more genial conditions. In this paper the results obtained by Richardson will be examined and the causes of the errors in his forecast will be explained. It will be shown how a realistic forecast can be obtained by modifying the initial data. The study is based on the original observations for 20 May, 1910, originally compiled by Hugo Hergessel and analysed by Vilhelm Bjerknes. These are used to extend the table of values published by Richardson, to cover most of Europe. A numerical model is then constructed, keeping as close as possible to the method of Richardson, except for omission of minor physical processes. When the model is run with the extended data, the results are virtually identical to those of Richardson. In particular, an initial pressure tendency of 145 hPa in 6 hours is obtained at the central point, in agreement with Richardson. The tendency values are unrealistic, being generally about two orders of magnitude too large. The reasons for the spurious tendencies will be discussed. They are essentially due to an imbalance between the pressure and wind fields resulting in large amplitude high frequency gravity wave oscillations. The `cure' is to modify the analysis so as to restore balance; this process is called initialization. An initialization method based on a digital filter will be outlined, and its application to Richardson's problem described. The forecast tendency from the modified data yields reasonable results. In particular, the tendency at the central point is reduced to 3 hPa per 6 hours - a realistic value! The chapter will conclude with some speculations about what-might-have-been had Richardson been able to initialize his data. Reference: Lynch, Peter, 1999: Richardson's Marvellous Forecast. Pp 61-73 in The Life Cycles of Extratropical Cyclones, M A Shapiro and S Grønås, Eds., Amer. Met. Soc., Boston, 355pp.
<urn:uuid:ff5c9465-5361-45ca-9e2d-e41191c545ff>
2.90625
727
Academic Writing
Science & Tech.
35.077697
on the importance of water vapor and CO2 Robert Wagner (OD optometry) correctly recognizes that water vapor (what Robert Essenhigh refers to as “water gas”) is a key Greenhouse gas. Quoting the IPCC Fourth Assessment report: “Water vapor is the most important gaseous source of infrared opacity in the atmosphere, accounting for about 60% of the natural greenhouse effect for clear skies (Kiehl and Trenberth, 1997), and provides the largest positive feedback in model projections of climate change (Held and Soden, 2000).” If Wagner read the published peer reviewed science that IPCC summarizes, he’d know that direct observations from balloon soundings show that The average atmospheric water vapor content has increased since at least the 1980s over land and ocean as well as in the upper troposphere. The increase is broadly consistent with the extra water vapor that warmer air can hold. See IPCC 2007 Chapter 3 Section 3.4. Meanwhile, CO2 concentrations are also increasing. Elevated CO2 concentrations have an associated net warming affect on climate. Of all the “well mixed” greenhouse gasses, CO2 has by far the largest warming effect. See figure SPM.2 in the IPCC 2007 Summary For Policy Makers. Climate change deniers seek holes in the science instead of seeking the truth. The science by definition aims for truth. There is no conspiracy. Climate change deniers waste the time of climate scientists and block progress. We should instead be united to protect future generations from our trashing of the environment. United we stand, divided: the rest of the world sells us technologies America should sell them.
<urn:uuid:9f970800-605e-408b-9a10-22b0bc6cf351>
3.03125
342
Personal Blog
Science & Tech.
47.501219
Search our database of handpicked sites Looking for a great physics site? We've tracked down the very best and checked them for accuracy. Just fill out the fields below and we'll do the rest. You searched for We found 10 results on physics.org and 102 results in our database of sites 102 are Websites, 0 are Videos, and 0 are Experiments) Search results on physics.org Search results from our links database Description of nuclear fission with details of uranium, other isotopes and the history, good links to other sites. Nuclear fission explained simply. A brief description of how Nuclear fission power works from Marshall Brain's HowStuffWorks.com. Concentrating on the fission of Uranium 135. Part of a nuclear reactor tour, this site gives a simplified explanation of nuclear energy. A quick introduction to nuclear energy. A brief description of how Nuclear Bombs work from Marshall Brain's HowStuffWorks.com. The pages cover the principles of fission and fusion bombs. Description of nuclear binding energy with energy curve and discussion of yields from fission and fusion. An introduction to nuclear medicine, nuclear reactors, nuclear physics, nuclear power, nuclear waste and war. Includes movie clips of the effects of nuclear detonation. This is a good short introduction to the topic and includes a link to listen to Einstein talking about his famous equation E=MC2 A well presented and comprehensive introduction to nuclear science. Nuclear structure, Antimatter, decay, Cosmic rays, etc. Excellent graphics Showing 1 - 10 of 102
<urn:uuid:7dda5fb4-82f8-4f20-8b60-63bced0c1d35>
3.21875
330
Content Listing
Science & Tech.
54.89436
Just when we were beginning to think the media had finally learned to tell a hawk from a handsaw when covering global warming (at least when the wind blows southerly), along comes this article ‘In Ancient Fossils, Seeds of a New Debate on Warming’ by the New York Times’ William Broad. This article is far from the standard of excellence in reporting we have come to expect from the Times. We sincerely hope it’s an aberration, and not indicative of the best Mr. Broad has to offer. Broad’s article deals with the implications of research on climate change over the broad sweep of the Phanerozoic — the past half billion years of Earth history during which fossil animals and plants are found. The past two million years (the Pleistocene and Holocene) are a subdivision of the Phanerozoic, but the focus of the article is on the earlier part of the era. Evidently, what prompts this article is the amount of attention being given to paleoclimate data in the forthcoming AR4 report of the IPCC. The article manages to give the impression that the implications of deep-time paleoclimate haven’t previously been taken into account in thinking about the mechanisms of climate change, whereas in fact this has been a central preoccupation of the field for decades. It’s not even true that this is the first time the IPCC report has made use of paleoclimate data; references to past climates can be found many places in the Third Assessment Report. What is new is that paleoclimate finally gets a chapter of its own (but one that, understandably, concentrates more on the well-documented Pleistocene than on deep time). The worst fault of the article, though, is that it leaves the reader with the impression that there is something in the deep time Phanerozoic climate record that fundamentally challenges the physics linking planetary temperature to CO2. This is utterly false, and deeply misleading. The Phanerozoic does pose puzzles, and there’s something going on there we plainly don’t understand. However, the shortcomings of understanding are not of a nature as to seriously challenge the CO2.-climate connection as it plays out at present and in the next few centuries. The use of the more recent Pleistocene and Holocene record to directly test climate sensitivity presents severe enough difficulties (discussed, for example, here), but the difficulties of using deep-time Phanerozoic reconstructions for this purpose make the Pleistocene look like childs’ play. The chief difficulty is that our knowledge of what the CO2. levels actually were in the distant past is exceedingly poor. This situation contrasts with the past million years or so, during which we have accurate CO2. reconstructions from ancient air trapped in the Antarctic ice. Obviously, if you don’t know much about how CO2. is changing, you are poorly placed to infer its influence on climate, even if you know the climate perfectly and nothing else is going on besides variation of CO2.. But, neither of these latter two conditions are true either. Our knowledge of climates of the distant past is sketchy at best. Even for the comparatively well-characterized climates of the past 60 million years, there have been substantial recent revisions to the estimates of both tropical (Pearson et al., Nature 2001) and polar (Sluijs et al, Nature 2006) climates. Most importantly, one must recognize that while CO2. and other greenhouse gases are a major determinant of climate, they are far from the only determinant, and the farther back in time one goes, the more one must contend with confounding influences which muddy the picture of causality. For example, over time scales of hundreds of millions of years, continental drift radically affects climate by altering the amount of polar land on which ice sheets can form, and by altering the configuration of ocean basins and the corresponding ocean circulation patterns. This affects the deep-time climate and can obscure the CO2-climate connection (see Donnadieu, Pierrehumbert, Jacob and Fluteau, EPSL 2006), but continental drift plays no role whatsoever in determining climate changes over the next few centuries. Let’s take a closer look at the question of CO2 variations over deep time. In contrast to the situation for the late Pleistocene, there is no one method for reconstructing CO2 at earlier times which is fully satisfactory. Methods range from looking at carbon isotopes in microfossils to looking at the density of pores on fossil leaves, with many other exotic geochemical tracers (e.g Boron) coming in in recent times. There is also some data for the very early Earth associated with the CO2 conditions under which certain exotic minerals (uraninites and siderites) form. None of the methods is unambiguous, and none provide information about other greenhouse gases that might be playing a role (though there may be some hope to do something abou methane). As an example of the difficulty faced by the field, take a look at the compilation of various estimates of CO2 since the Permian presented in the following figure (From Donnadieu et al, G3, in press; the red squares come from an attempted geochemical model fit to the data. The data set comes from Royer et al. (2004) and is available here). By the time one gets back to the Permian, the error bars are huge. At earlier times, the estimates are even more problematic. Broad’s article does make reference to a very interesting paper by MIT’s Dan Rothman, writing in PNAS. This paper attempts a peek at the CO2 over the past 500 million years, using a clever and novel reconstruction technique. It is innovative, but far from the last word on the subject. Broad inappropriately cherry-picks Rothman’s statement that there appears to be no clear connection between warm climates and CO2 (except in "recent" times, about which more anon). However, Broad’s article neglects all the caveats in the paper, which clearly point to the real problem being that the reconstructions of CO2 and climate over such time scales are so uncertain that it’s not clear that the data is up to the task of teasing out ssuch a connection. Even in Rothman’s reconstruction, during the past 50 milllion years — when the data is best and continents are most like the present — the long term cooling trend leading into the Pleistocene is clearly associated with a long term CO2 decline. This is not our main reason to infer that increasing CO2 will warm the climate in the future, but insofar as the data supports CO2 decline as a main culprit in the long slide from the Cretaceous hothouse climates of 60 million years ago to the cold Pleistocene climate, it also lends weight to the notion that as industrial activity busily restores CO2 to levels approaching those of the Cretaceous, climate is likely to turn the climate clock back 60 million years as well. From Broad’s flip dismissal of the CO2-climate connection in the "recent" part of the record, the reader would never guess at the length and particular significance of this period. And then, too, the tired old beast of Galactic Cosmic Rays (GCR) raises its hoary head in Broad’s article. The GCR issue has been extensively discussed elsewhere on RealClimate (e.g. here and here) On one level the GCR idea is another instance of the problem that Phanerozoic climate variations may have had many causes, giving rise to a false appearance of decorrelation between climate and CO2. Whatever role GCR may have played in deep time climate, the climate of the past century and its attribution to CO2 is a wholly different kettle of fish, since in modern times we have direct observations of GCR and they are not doing anything of a sort that would cause the observed warming — to say nothing of the fact that one would still have to argue away the basic radiative physics which makes CO2 affect the planet’s radiation budget. We repeat: There has been no recent trend in cosmic rays that could conceivably account for the recent warming, even if the GCR proponents were right about the physical mechanism underpinning their theory. This is made abundantly clear in this recently published article. Further, whatever was going on in the past, the present observations do not support the supposed cloud-GCR connection that is supposed to mediate the climate effect. That’s not the end of the story, for there are also severe methodological difficulties in the way the GCR proponents have attributed Phanerozoic change to GCR rather than CO2, and also severe conceptual difficulties in the supposed physical link between clouds and GCR.. Some of these difficulties may ultimately be resolved and allow a more fair test of the possibility that GCR influences played some role in the past. Surely, the play given to Veizer and Shaviv in the context of Broad’s article is an instance of false balance of the worst sort. The possibility that the GCR theory may play some role in deep-time Phanerozoic climate is eminently worthy of further consideration, but the way its major proponents have used the theory in attempts to undermine forecasts of near-term warming is unjustified. Besides the broad-brush errors discussed above, Mr. Broad commits a number of lesser climatological faux pas, in areas where he really ought to know better. He refers to CO2 as "blocking sunlight" (whereas it’s actually thermal infrared which CO2 affects). He says that CO2 traps heat "in theory." This is a lot like saying that a bowling ball dropped from an airplane will fall to the ground "in theory." There is indeed a theory involved in both cases, but the use of the phrase gives a completely wrong picture of the certainty of the phenomenon. There is no more doubt about the heat-trapping effect of CO2 than there is about the physics that causes a bowling ball to fall. Broad also says that the greenhouse effect of CO2 "plateaus" at high levels. This is a botched attempt to describe the well-known logarithmic radiative forcing of CO2, incorporated in every climate model since the time of Arrhenius. There is no "plateau" where CO2 stops being important. Every time you double CO2, you get another 4 Watts per square meter of radiative forcing, so that the anticipated climate change between present CO2 and doubled CO2 is comparable to that between doubled CO2 and quadrupled CO2. In fact, as one goes to very high CO2 levels (comparable to the Early Earth), the radiative forcing starts to become more, rather than less, sensitive to each further doubling (something that can be inferred from the radiative forcing fits in Caldeira and Kasting’s 1992 paper in Nature). Let’s not lose sight, however, of the essential conundrum posed by Phanerozoic climate, particularly by the warm climates of the Cretaceous and Eocene. Current climate models do not reproduce the weak pole to equator gradients believed to characterize these climates, and have trouble warming up the polar climates enough to melt ice and eliminate continental winter without frying the tropics more than data seems to permit. Maybe there’s something wrong with the data, or maybe there are currently unknown amplification mechanisms that make the switch from a moderate Holocene type climate to a hothouse more catastrophically sensitive to CO2. This truly must give us pause as we contemplate the experiment of doubling CO2 in the next century. It’s certainly an experiment that would help to resolve some of the mysteries of Phanerozoic climate, but we’d on the whole prefer to see the mysteries resolved by improved studies of past climate instead. Update: See Tom Yulsman’s commentary on this post and the broader issues.
<urn:uuid:c63c8328-5a5b-4ac6-a1f6-ab6a96ab70d9>
2.890625
2,489
Comment Section
Science & Tech.
38.826521
March 3, 2008 A NASA spacecraft in orbit around Mars has taken the first ever image of active avalanches near the Red Planet's north pole. Oct. 20, 2008 A curiously short-lived type of gamma-ray burst has astronomers puzzled. Leading experts discuss the clues at today's Gamma-ray Burst Symposium in Huntsville, Alabama. May 14, 2008 At long last, astronomers have found one of the Milky Way's mysteriously missing supernovas. July 18, 2008 NASA scientists are using an infra-red sounder in space to improve short-term weather forecasting. April 10, 2008 Unlike Earth, the firmament of The Moonis directly exposed to charged particles from the sun. What happens to moondust under the onslaught of solar wind? Researchers in a NASA-supported lab are finding some surprising answers. Jan. 15, 2008 Images from NASA telescopes are jewels of the space program, marvelous to behold. But how do you behold them when you can't see? The answer lies between the covers of a new NASA-funded book written in Braille, Touch the Invisible Sky. Nov. 7, 2008 A surge of new-cycle sunspots in October may signal the beginning of the end of the ongoing solar minimum. July 1, 2008 Look beyond the fireworks on 4th of July weekend. A trio of worlds is converging for a pretty sunset sky show. June 11, 2008 NASA's Gamma-ray Large Area Space Telescope left Earth today onboard a Delta II rocket. "The entire GLAST Team is elated," reports program manager Kevin Grady of NASA's Goddard Space Flight Center. "The observatory is now on-orbit and all systems continue to operate as planned." March 6, 2008 Imagine living on a planet where Northern Lights fill the heavens at all hours of the day. Around the clock, even in broad daylight, luminous curtains shimmer and ripple across the sky. News flash: Astronomers have discovered such a planet. Its name is Earth.
<urn:uuid:86045ceb-cf3e-4fe2-a342-3883a1dd90c6>
3.09375
419
Content Listing
Science & Tech.
62.476959
The Weekly Newsmagazine of Science |Recently on MathTrek:| March 6, 1999 For a truly cool experience, there's nothing like transforming a 12-foot, 20-ton block of manufactured snow into a giant sculpture. That's the premise underlying the Breckenridge snow sculpture championships in Colorado, held annually in January. This year, the international competition featured 15 four-member teams representing 10 countries. It also had a striking mathematical component. Noted mathematical sculptor Helaman Ferguson, joined by math professors Dan Schwalbe and Stan Wagon (see Riding on Square Wheels, July 11, 1998) and student Tamas Nemeth of Macalester College in St. Paul, Minn., labored intensively for 65 hours to carve a spectacular version of a minimal geometric structure known as the Costa surface. Wolfram Research sponsored the team, and Ferguson's wife, Claire, photographed the event. The equations for this minimal surface were discovered in 1984 by the Brazilian mathematician Celso J. Costa. The figure's curvature resembles that of a potato chip, which typically starts out as a flat, thin slice of moist potato. As it dries out during frying, the chip shrinks. Minimizing its area, it curls into a saddle shape. Every little section of the Costa surface has this saddle configuration. Indeed, one can imagine the surface as the sum of an infinite number of saddles. From certain angles, the Costa surface has the splendid elegance of a gracefully spinning dancer flinging out her full skirt so that it whirls parallel to the ground. Gentle waves undulate along the skirt's hem. Two holes pierce the skirt's lower surface and join to form a tunnel that sweeps upward. Another pair of holes, set a right angles to the first pair, lead from the top of the skirt downward into a second tunnel. Several years ago, Ferguson created a number of marble and bronze versions of the Costa surface. Carving one in snow, however, presented a host of new challenges. In fact, when Wagon first approached Ferguson with the idea of submitting a proposal to the by-invitation-only competition, Ferguson was initially somewhat reluctant to get involved. "I do granite; I don't do snow," he replied. Ferguson's interest, however, increased as he and Wagon began to discuss which of his many pieces would look best in snow. It forced Ferguson to think about what material properties snow and stone might have in common. Stone, for example, can carry weight. It has compressive strength. But it can't be stretched very much, so it has significantly less tensile strength. You can make an arch out of stone. Snow has similar characteristics. An igloo is really a system of arches. And a minimal surface can also be thought of in terms of arches. In effect, every point seems to be the keystone of a cluster of arches. What about sculpting snow into the Costa surface? Marble can be carved fairly thin. Could the same be done with compacted snow? Ferguson wanted to test the feasibility of carving snow into the required shape, but there was a dearth of snow in the month of May in Maryland. He ended up retrieving several cubic feet of high-consistency snow (shaved ice) dumped by a Zamboni ice-smoothing machine outside a local ice rink. On a warm afternoon, using a giant kitchen spoon and spatula, Ferguson carved a minimal-surface form with lots of holes. As it melted in the late afternoon sun, he watched its walls get thinner and thinner. The sculpture, however, maintained its basic structure. "It seemed that a Costa form could be carved in snow, and without any special equipment," Ferguson says. The Wagon-Ferguson proposal to create a Costa form, titled "Invisible Handshake," was accepted. The team was the only one in the competition with no prior snow-sculpting experience. To work out the technical details of carving snow into the Costa surface, Ferguson got permission from the manager of the ice arena where the Washington Capitals hockey team practices to get additional Zamboni snow at various times between May and the competition date in January. Those details included determining the types of tools to use, what sort of clothing to wear for the anticipated long days of snow carving, and how best to coordinate the actions of team members, who were mathematicians, not sculptors. Remarkably, it all came together in four and a half days of intense labor. By the end of the competition's third day, the rough Costa shape was visible, just in time for the arrival of a class of kindergarten children, who crawled and slid along the surface's intriguing tunnels. The team continued to shave the outside walls down until they were only 4 inches thick. The final day of the competition was warm, bringing with it the threat of melting. The Costa surface, however, held up nicely. A week after the event, the other sculptures had all lost detail, and one had even imploded. The only significant change in the "Invisible Handshake" was that its walls had become thinner still. "It was a real blast," Wagon says. "We will be back next year." Comments are welcome. Please send messages to Ivars Peterson at email@example.com. Ivars Peterson is the mathematics/computers writer and online editor at Science News. He is the author of *The Mathematical Tourist, Islands of Truth, Newton's Clock, Fatal Defect, and The Jungles of Randomness. His current work in progress is Fragments of Infinity: A Kaleidoscope of Mathematics and Art (to be published in 1999 by Wiley). MATHEMUSEMENTS: Look for math-related articles by Ivars Peterson every month in the children's general-interest magazine Muse (http://www.musemag.com) from the publishers of Cricket and Smithsonian magazine. Back to Top Copyright © 1999 Science Service
<urn:uuid:ed2179ff-5a0e-4f3c-913e-b98eb285e994>
2.859375
1,243
Nonfiction Writing
Science & Tech.
48.981113
Global warming increasing by 400,000 atomic bombs every day Hawaii isn’t a place where giant hail forms. In fact, only eight times has hail the size of a penny or quarter been recorded for the islands. There were no records for hail larger than an inch until a freak supercell thunderstorm formed there on March 9, 2012. This pumped out a hail storm full of 2 to 3 inch diameter hail with at least one that was the width of a grapefruit -- 4.5 inches. The two largest hailstones in US record keeping were both formed in 2010 super storms. An 8-inch hunk of ice hit South Dakota in July and a 7.75 inch hunk landed in Kansas in September. If you want to see an amazing video of a recent hail storm in Oklahoma, click here. Top meteorologist Dr. Jeff Masters sums up our new climate: The stunning extremes we witnessed gives me concern that our climate is showing the early signs of instability ... We’ve bequeathed to our children a future with a radically changed climate that will regularly bring unprecedented weather events–many of them extremely destructive–to every corner of the globe. This year’s wild ride was just the beginning. Writing about March’s freak heat wave he said simply: "This is not the atmosphere I grew up with." It was hard for me to grasp “400,000 atomic bombs worth of energy” beyond a sense of 'really, really big'. So I decided to break it down into smaller chunks to see how it relates to my everyday world. One for every 3 MtCO2 of past emissions Humanity has dumped 1.2 trillion tonnes of CO2 into our atmosphere so far. That means our climate is gaining the energy of one A-bomb worth of energy each day for every 3 million tonnes of past CO2 emissions. Based on that rough rule of thumb: - Alberta tar sands: Past tar sands carbon is accelerating our climate forward at the rate of 1,300 A-bombs worth of energy each day. Industry’s and Alberta government’s goal is to double tar sands carbon extraction by the end of this decade and triple by 2035. - BC CO2 emissions: Past fossil fuel burning in BC is accelerating our climate forward at the rate of 1,100 A-bombs worth of energy each day. The Clark government is halting carbon reduction policies while pushing huge increases in BC carbon extraction. Note also that the Alberta tar sands have now produced more carbon than all fossil fuels ever burned in BC history. - BC Coal: Past BC coal carbon is accelerating our climate forward at the rate of 600 A-bombs worth of energy each day. Industry and our current provincial government plan to significantly increase extraction of BC crown-owned coal. One every 10 seconds in Canada Canadians have dumped 2.2 per cent of the global CO2, making our national share from past emissions equal to 8,680 a-bombs worth of energy each day. That is 360 per hour. Six per minute. One every 10 seconds. Both our past and our on-going emissions are among the highest in the world per capita. One for every 3,900 Canadians Broken down by population it works out to one atomic bomb worth of extra energy each day for every 3,900 Canadians (three tonnes of TNT per person). Based on population, here are the contributions to our accelerating weather bomb from various BC towns and cities: - 100 Mile House – 1 A-bomb worth of extra energy every other day - Golden – 1 a-bomb worth every day - Whistler – 2 per day - Nelson – 3 - Terrace – 4 - Campbell River – 9 - Penticton – 10 - Nanaimo – 23 - Victoria – 83 a-bombs worth each day - Metro Vancouver – 560 A-bombs worth of extra energy every day. One every 2 minutes. - City of Vancouver – 163 A-bombs worth a day - West End neighbourhood – 12 a day - Kitsilano neighbourhood – 8 a day - Mt. Pleasant neighbourhood -- 6 a day The energy from just one atomic bomb shocked the world when it exploded. For good reason as this footage from a Nevada test shows. It shows a house a mile away from the epicentre of a single atomic bomb blast like the one dropped on Hiroshima. The fossil fuels we have already burned in BC are increasing global warming by this much new energy every 75 seconds. In case you were curious, we literally can’t afford to pull our past CO2 out of the atmosphere. A study by the American Physical Society shows it costs $2,400 to reduce atmospheric CO2 by a single tonne. We have dumped over a trillion tonnes of CO2 into the atmosphere since Enola Gay dropped its bomb in 1945. It would cost $3 to remove the CO2 emitted from a single litre of gasoline. Our current BC Carbon Tax is 6 cents per litre. As the International Energy Agency recently said, the path humanity is on with fossil fuels is leading to "catastrophe." They said the costs to act now are many times less than the costs we face if we delay acting. BC halts climate policies while accelerating carbon extraction In BC, our climate policies have proven to be inadequate to meet even our own undersized goals. We added a BC Carbon Tax with the plan to raise the price of carbon fuels and thereby discourage their use and so their CO2 emissions. Yet years later both gasoline and natural gas are cheaper now than before we added our carbon tax. In addition, BC said clearly that coal was too dirty to burn. Yet the extraction of our crown-owned BC coal is increasing. Almost all our too-dirty-to-burn coal also is allowed to evade our carbon pricing and our carbon accounting. Giant tar sands pipelines, each of which will pump out many times more carbon than all of BC's economy burns, are being fast-tracked through by our federal government. BC is becoming the doormat for a huge increase in tar sands carbon extraction. Extraction of carbon in the form of BC natural gas is also increasing with a surge of fracking. Big new pipelines, liquefying export terminals and super tankers are being shoved into the Great Bear Rainforest to get this carbon away from our carbon taxes and into our atmosphere as fast as possible. In the face of all this carbon bingeing, our supposedly climate-concerned BC Liberal government has: - talked of halting or eliminating the BC Carbon Tax - refunded some past carbon taxes paid by industry - eliminated taxes on a billion litres of jet fuel a year - proposed backing out of BC's 100 per cent clean electricity mandate - shelved plans to join carbon cap-and-trade - supported huge increases in crown-owned coal extraction and export - supported huge increases in natural gas extraction and export - allowed carbon exports to evade carbon taxes - sat silent while BC is being forced to be a pipeline and supertanker doormat for tar sands carbon expansion If we want to stop our metastasizing climate bomb from getting totally out of hand the science is clear that most of the carbon still in the ground needs to stay there. To do that, our carbon laws and leadership need to change course pronto. Nothing we are doing is close to adequate to the challenge. We are carbon binging in BC. We are pouring our economic future into a growing carbon bubble. As Nobel Prize Laureate and SFU professor Dr. Mark Jaccard said when he was arrested last week for blocking a BC coal train: "We are heading for a real crisis in which we’ll have to start ripping infrastructure apart."
<urn:uuid:866438a7-6e68-4916-a4b8-6a4b128c1cc1>
3.28125
1,613
Personal Blog
Science & Tech.
57.354323
Recent Solar Wind Observations The solar wind plays a vital role in transfering the effects of solar activity to the Earth's space environment. The density, speed, pressure, and temperature of the solar wind upstream of the Earth as well as the magnitude and direction of the interplanetary magnetic field all influence the impact of solar disturbances on the Earth's magnetosphere, ionosphere The clock dial plots below, provided by Rice University, show the values of solar wind parameters upstream of the Earth at L1. Last Measurement (at Sun-Earth L1 Point): Interplanetary Magnetic Field The source of this material is Windows to the Universe, at http://windows2universe.org/ from the National Earth Science Teachers Association (NESTA). The Website was developed in part with the support of UCAR and NCAR, where it resided from 2000 - 2010. © 2010 National Earth Science Teachers Association. Windows to the Universe® is a registered trademark of NESTA. All Rights Reserved. Site policies and disclaimer.
<urn:uuid:99cb3e39-fd86-432c-ac8f-b7de6a36a929>
3.21875
215
Knowledge Article
Science & Tech.
37.793899
Previous abstract Next abstract Radio astronomy observations in the HF (1-30 MHz) portion of the electromagnetic spectrum could result in new insights into astrophysical processes. However, this particular part of the spectrum is mostly inaccessible from the ground due to the effects of the Earth's ionosphere. One solution is to observe from Earth orbit, thereby avoiding most of the absorption and phase distortions from the ionosphere. However, in the 1-30 MHz band of interest, the ionosphere is neither a perfect reflector nor is it a perfect transmission medium. Terrestrial signals leak through and increase the background radio noise or introduce spurious signals into the measurements, making the detection of faint sources difficult. All terrestrial HF communications signals, especially over-the-horizon radar, are potential interferers to low frequency radio astronomy. Ideally, radio telescopes on the moon's far side would provide a perfectly shielded environment, but at much greater cost and difficulty than a similar system in Earth orbit. We are investigating methods of predicting signal strengths at the top of the ionosphere with respect to time, frequency, and solar behavior. Existing ionospheric models provide a description of the general, global state of the ionosphere. This information is used as an input to our ray-tracing software to predict the likelihood of leakage through the ionosphere. Sources are distributed in frequency, ray launch angles, and geographic location. Because of the ionosphere, rays can be focussed (increasing the interference intensity) or defocussed (decreasing the interference intensity). Plots of the ray paths from potential interferers, showing the focussing or defocussing effects will be presented. The collective effect of a number of widely separated interferers is to potentially increase the noise at the satellite location. The ultimate goal of this research is to determine if there exist temporal and frequency windows, where the radio leakage is on the order of the cosmic background noise, that permit high-resolution low-sensitivity radio astronomical measurements from Earth orbit. The final model should to be able to predict those times using solar and geophysical parameters. Tuesday program listing
<urn:uuid:db01f428-ce90-4158-a5b8-031b3d2553fd>
3.28125
425
Academic Writing
Science & Tech.
21.143845
Discover the cosmos! Each day a different image or photograph of our fascinating universe is featured, along with a brief explanation written by a professional astronomer. January 25, 1998 Explanation: Almost unknown to casual observers in the northern hemisphere, the southern sky contains two diffuse wonders known as the Magellanic Clouds. The Magellanic Clouds are small irregular galaxies orbiting our own larger Milky Way spiral galaxy. The Small Magellanic Cloud (SMC), pictured here, is about 250,000 light years away and contains a preponderance of young, hot, blue stars indicating it has undergone a recent period of star formation. There is evidence that the SMC is not gravitationally bound to the LMC. Authors & editors: NASA Technical Rep.: Jay Norris. Specific rights apply. A service of: LHEA at NASA/ GSFC &: Michigan Tech. U.
<urn:uuid:7c851639-92ff-4ba2-8ca8-1c592da751d1>
3.515625
184
Knowledge Article
Science & Tech.
42.759545
Many technologies and techniques have been developed to confront the issue of global warming with many focusing on reducing the amount of greenhouse gases in the atmosphere, and others focusing on increasing the reflectivity of the Earth's atmosphere. Carbon sequestration, synthetic trees and stratospheric sulfur injection are the three of these global warming mitigation technologies that seem to be the most viable and are analyzed in this paper. Brooks, Gregory Taylor, "Global Warming Mitigation Technologies" (2010). Undergraduate Research Awards. Paper 7.
<urn:uuid:459ef30a-04db-4a07-9096-c0ab7be69289>
2.734375
103
Academic Writing
Science & Tech.
22.020707
So by entangling two photons, for instance, physicists have demonstrated the ability to transmit quantum information from one place to another by encoding it in these quantum states—influence one of the pair and a change can be measured in the other without any information actually passing between the two. Researchers have done this before, between photons, between ions, and even between a macroscopic object and a microscopic object. But now Chinese researchers have, for the first time, achieved quantum teleportation between two macroscopic objects across nearly 500 feet using entangled photons… The two bundles of rubidium atoms that served as sender and receiver are more or less analogs for what we hope will someday be our “quantum Internet”—a system of routers like the ones we have now that, instead of beaming information around a vast network of fiber optic wires, will send and receive information through entangled photons. So in a way, this is like a first proof of concept, evidence that the idea works at least in the lab. Now all we have to do is figure out is how to build several of these in series so they can actually pass information from one to the other. To do that, we only have to somehow force these quantum states to exist for longer than the hundred microseconds or so that they last now before degrading. Sounds easy enough.
<urn:uuid:fd79585f-2abb-480e-a8c6-c117393d7cc0>
3.46875
272
Personal Blog
Science & Tech.
27.697586
Scientists in an increasing number of fields are doing science in new ways, exploiting powerful new data-collection technologies with the aid of computational methods and a little humility. Tradition demands that science always be hypothesis-driven: First, try to guess the truth, and only afterward collect experimental data to test whether the guess predicts the results. Indeed, this has been termed “The Scientific Method”. The new data-driven approach suggests that we collect data first, then see what it tells us. This becomes practical when experimental methods can amass enormous amounts of data, enough data to test more hypotheses than any mortal scientist could conceivably imagine. The adoption of data-driven approaches has been surprisingly controversial: In a “The Human Genome Project: Lessons from Large-Scale Biology”, a viewpoint article in Science magazine, Collins, Morgan, and Patrinos observe that Some of the most significant lessons date to the HGP’s formative days in the mid-1980s, when a handful of visionaries dared to break ranks with the prevailing view that biological research must always be conducted as a hypothesis-driven enterprise. The basic idea is that if we can collect enough data to form a large, rich picture — as in modern genomics, but not in old-style gene-by-gene investigation — then we are likely to learn something by looking at it. This can be seen as a hypothesis, but a very humble one. There is no pretense here that every possibility can be guessed beforehand. But what does it mean to “look at it”? For these methods to work, we must know enough about patterns (repetition, correlation, difference, functional correspondence…) that we can recognize some of them and separate the real patterns from the statistical illusions. This too is a hypothesis, but there is no pretense of vast insight. Stepping back for a broader view of science makes it obvious that the “new” approach is, in some fields, very old. Astronomers and microscopists, for example, did data-driven science centuries ago. They gathered optical data (images on retinas or photographic film), then made discoveries by applying the powerful pattern-recognizers in the human visual system. Whether literally or metaphorically, scientists have used data-driven approaches in many fields, including biology. Data-driven methodologies in biology were controversial, but necessary to make genomics possible. As the engines of data collection and automated pattern recognition grow more powerful, more fields of biology are following that lead.
<urn:uuid:8187fd45-93d3-4378-9c6e-013366ec4da7>
3.46875
525
Personal Blog
Science & Tech.
28.285446
Specifies the execution states of a Thread. This enumeration has a FlagsAttribute attribute that allows a bitwise combination of its member values. Assembly: mscorlib (in mscorlib.dll) |The thread has been started, it is not blocked, and there is no pending ThreadAbortException.| |The thread is being requested to stop. This is for internal use only.| |The thread is being requested to suspend.| |The thread is being executed as a background thread, as opposed to a foreground thread. This state is controlled by setting the Thread::IsBackground property.| |The Thread::Start method has not been invoked on the thread.| |The thread has stopped.| |The thread is blocked. This could be the result of calling Thread::Sleep or Thread::Join, of requesting a lock — for example, by calling Monitor::Enter or Monitor::Wait — or of waiting on a thread synchronization object such as ManualResetEvent.| |The thread has been suspended.| |The Thread::Abort method has been invoked on the thread, but the thread has not yet received the pending System.Threading::ThreadAbortException that will attempt to terminate it.| |The thread state includes and the thread is now dead, but its state has not yet changed to .| defines a set of all possible execution states for threads. Once a thread is created, it is in at least one of the states until it terminates. Threads created within the common language runtime are initially in the Unstarted state, while external threads that come into the runtime are already in the Running state. An Unstarted thread is transitioned into the Running state by calling Start. Not all combinations of ThreadState values are valid; for example, a thread cannot be in both the Aborted and Unstarted states. There are two thread state enumerations, and System.Diagnostics::ThreadState. The thread state enumerations are only of interest in a few debugging scenarios. Your code should never use thread state to synchronize the activities of threads. The following table shows the actions that cause a change of state. A thread is created within the common language runtime. A thread calls Start The thread starts running. The thread calls Sleep The thread calls Wait on another object. The thread calls Join on another thread. Another thread calls Interrupt Another thread calls Suspend The thread responds to a Suspend request. Another thread calls Resume Another thread calls Abort The thread responds to a Abort request. A thread is terminated. In addition to the states noted above, there is also the Background state, which indicates whether the thread is running in the background or foreground. A thread can be in more than one state at a given time. For example, if a thread is blocked on a call to Wait, and another thread calls Abort on the blocked thread, the blocked thread will be in both the WaitSleepJoin and the AbortRequested states at the same time. In this case, as soon as the thread returns from the call to Wait or is interrupted, it will receive the ThreadAbortException to begin aborting. The Thread::ThreadState property of a thread provides the current state of a thread. Applications must use a bitmask to determine whether a thread is running. Since the value for Running is zero (0), test whether a thread is running by using C# code such as (myThread.ThreadState & (ThreadState.Stopped | ThreadState.Unstarted)) == 0 or Visual Basic code such as (myThread.ThreadState And (ThreadState.Stopped Or ThreadState.Unstarted)) = 0. Windows 8, Windows Server 2012, Windows 7, Windows Vista SP2, Windows Server 2008 (Server Core Role not supported), Windows Server 2008 R2 (Server Core Role supported with SP1 or later; Itanium not supported) The .NET Framework does not support all versions of every platform. For a list of the supported versions, see .NET Framework System Requirements.
<urn:uuid:4e463226-c495-499d-99ed-55cbc486619a>
2.828125
853
Documentation
Software Dev.
56.402926
Mission Plan - Windows to the Deep Exploration of the Blake Ridge Dr. Carolyn Ruppel, Co-Chief Scientist Georgia Institute of Technology The cold seeps on the Blake Ridge and Carolina Rise are located several hundred kilometers offshore of the Southeastern U.S. These seeps act as vents for methane that originates in the underlying gas hydrate deposits. Gas hydrates are essentially methane icea solid form of methane and water that is stable at the pressure and temperature conditions common in the marine sediments of continental margins. Because these deposits concentrate large amounts of methane into a small volume, the U.S. and other countries consider them to be a possible future energy resource. Gas hydrates have also been implicated in the formation of submarine slides that not only destabilize large areas of the sea floor, but also produce tsunamis. As gas hydrates break down in response to pressure (sea level variations) or temperature disturbances caused by global climate change events, they may also release methane to the ocean and possibly even to the atmosphere. This could intensify global warming. The amount of methane trapped in sediments within the area that we will explore is estimated to be between 800,000 and 2,500,000 cubic meters per square kilometer. In most of the study area, the top of the gas hydrate probably lies more than 100 meters below the sea floor. Under the geologic, thermal, and other conditions that prevail near cold seeps, seafloor chemosynthetic communities mark windows that provide unusual access to the top of the gas hydrate zone and methane venting system. At three such windows, this expedition will compile marine inventories, develop high resolution seafloor and shallow subseafloor maps, describe seep habitats, and contribute to a better understanding of gas hydrates. As we sail on the R/V Atlantis, we will conduct seven dives using the Alvin submersible. We will also use tools such as the Atlantiss swath bathymetric system and a Chirp subbottom profiling system. The three unique dive sites chosen for Windows to the Deep encompass the range of near-seafloor features considered critical for studying methane emission from the sea floor. At two of the dive sites, ancient, buoyant salt deposits (called diapirs) have risen through and pushed aside the surrounding sediments. Above these salt deposits, gas hydrate occurs very close to the sea floor. Fluid circulation in the shallow sediments carries methane, hydrogen sulfide, and other chemicals out of the sea floor. Near these, organisms such as clams, mussels, and tubeworms have established communities that rely on these flows. At the third site, sediments have been deposited over a large area in a pattern that resembles a field of sand dunes. Previously collected seismic imaging has revealed a complex distribution of methane in the sediments beneath this area. Windows to the Deep will be the first mission to explore this type of feature for evidence of chemosynthetic (derives energy from chemical reactions) organisms or seafloor methane seeps. During the mission, the scientists will conduct daily Alvin dives to: observe seafloor methane emission; retrieve clams, tubeworms, mussels, shrimp, and other organisms living at seep communities; sample bacterial mats that provide important information about the chemistry of fluids being emitted from the seafloor; and retrieve short sediment push cores using the Alvin manipulator arm. Each night, the scientists will map the sea floor using a special system that measures water depth. We will use an acoustic pulse that penetrates the uppermost meters of the sediment column to look into them. By combining the biological and chemical information obtained from the Alvin samples and the geophysical data collected during the nighttime operations, scientists will develop a more complete picture of both the surface and subsurface manifestation of methane and other fluid seeps.
<urn:uuid:d6303909-5098-4795-b6f2-5ce33635290f>
3.625
813
Knowledge Article
Science & Tech.
28.293371
Secchi Disk Depth One of the most commonly used measurements for water clarity is the Secchi disk depth (pronounced she SEHK-ki) . The Secchi disk is a simple and inexpensive device used by both citizens and scientists for measuring water clarity. The device generally consists of a 20-centimeter (8-inch) disk made of wood or plastic. Some are painted with alternating black and white quadrants, or they can be solid white. A non-stretching piece of rope or light chain is attached through the center; the rope is marked in increments of feet or meters. A small weight is attached beneath the disk so that it will sink quickly and the line will remain taught while measurements are being made. The disk is lowered below the surface until it just disappears from view; that depth is referred to as the Secchi disk depth.
<urn:uuid:cfe021db-1725-4711-bdf6-abb6086e2367>
3.359375
175
Knowledge Article
Science & Tech.
52.005
Family: LYCAENIDAE (Coppers, Hairstreaks and Blues) A relatively small butterfly, the Melissa Blue is about an inch in wingspan. The sexes are dimorphic, meaning each sex has its own color. Males are a beautiful blue while females are brown. A distinguishing characteristic is the red-orange submarginal row of scales in both sexes. Range and Habitat The Melissa Blue is one of the most widespread blues of the western states, occurring throughout the west to northern Baja California, northern Mexico and west Texas. It has been extirpated south of the San Francisco Bay area. There are also remnant populations in the Great Lakes area. In our region it occurs from the coast to the desert, preferring open spaces such as meadows, agricultural fields, and grasslands. Caterpillars are green, and tended by ants. This species overwinters as an egg and has three flights of adults. Host plants include legumes such as alfalfa, clovers, locoweed, and lupines. Related or Similar Species
<urn:uuid:824fee9f-e357-4e1e-a58c-58a3d62f7224>
3.40625
226
Knowledge Article
Science & Tech.
43.182504
Human-forced climate change has already many effects that are visible today. An article that appears in today’s Science introduces another candidate: trees in the Western U.S. are living only half as long as they did 50 years ago. In the climate most of us grew up in, western forests acted as carbon sinks. Their growth “scrubbed” carbon from the atmosphere. Climate change has introduced conditions that are drier than normal; severe drought has ensued across the region. As a result, trees are growing less and dying earlier than they used to. That could result in less carbon being removed from the atmosphere, creating yet another positive feedback loop in the climate system. Researchers focused on what’s called “background” mortality – trees dying from events that do not include infestations of insects like the mountain pine beetle currently afflicting the West, which is identified as “abrupt” mortality. They studied 76 plots where trees were at least 200 years old. They were undisturbed by logging (harder to do every year), bark beetle epidemic or wildfire. Trees being studied in Colorado were largely wiped out by the mountain pine beetle epidemic currently moving across the state’s forests, itself a trend linked to climate change. Temperature data in the research plots (across the entire West) showed an increase in every season of the year. The warming and drought conditions we’ve experienced in Colorado has also made its presence felt across a much larger region. Our forests are suffering from multiple coincident effects of a warming planet and regionalized drying. Direct human pressures such as increased population in the inter-mountain West and hundreds of years of logging aren’t helping matters. Efforts need to be made today to decrease our forcing on the climate system. Carbon emissions need to be drastically reduced so that concentrations in the atmosphere can be reduced later this century. If forests are unable to play their historic role of a carbon sink, those efforts become all the more critical. Unfortunately, it will likely increase their cost, something environmentalists cited regularly during the past eight years’ of climate inaction. The Denver Post has, surpringly to me, a pretty good article on this. Joseph Romm (Center for American Progress Senior Fellow & Climate Progress blogger) has a more scientifically-rigorous discussion of the article and its implications. I recommend reading Romm’s analysis if you don’t want to read the article itself. Cross-posted at SquareState. [Update]: While reading the article again, something important popped out at me. The authors note the following: From the 1970s to 2006 (the period including the bulk of our data; table S1), the mean annual temperature of the western United States increased at a rate of 0.3° to 0.4°C decade -1, even approaching 0.5°C decade -1 at the higher elevations typically occupied by forests. So between 1.2°C and 2°C warming has already occurred since the 1970s. That means the forests of the future are in for bad times. If we could somehow magically stop emitting greenhouse gases today, the climate system would still get warmer for the next 100+ years due to the forcing “in the pipeline”, as climatologists refer to it. The climate system hasn’t fully responded to the gases emitted in the last 5, 10, or 50 years. Of course, no such magic is going to occur. Emissions will have to stop increasing (stabilize) then start decreasing. Which means there is plenty of additional forcing (warming) that will occur. The 2007 IPCC assessment relied on models that didn’t demonstrate the warming that has already occurred very well. Policy decisions based on that report would therefore be poorly suited for the task we face. [2nd Update]: NPR’s Science Friday discussed this paper with one of the researchers today. The segment can be found here.
<urn:uuid:5665b7d7-99fd-43ff-87e6-1b3b60f484e9>
3.5625
817
Personal Blog
Science & Tech.
48.739123
|This fire-breathing Dragon can fly. Pictured above yesterday, SpaceX Corporation's Falcon 9 rocket capped with a Dragon spacecraft lifted off from Cape Canaveral, Florida, USA. The successful launch was significant not only because it demonstrated that a private company has the ability to re-supply the International Space Station (ISS), but also that spaceflight has taken a significant step away from being an endeavor that only big governments can do with public money. If all continues as planned, the robotic Dragon will dock with the ISS this weekend. Over the next two weeks, the ISSExpedition 31 crew will then unload Dragon and refill it with used scientific equipment. In about three weeks, the ISS's robotic arm will then undock Dragon and move it to where it can fire its rockets. Soon thereafter the Dragon capsule is expected to reenter the Earth's atmosphere, deploy its parachutes, splash down in the Pacific Ocean off the coast of California, and be recovered.
<urn:uuid:a229673b-38bd-4ad1-b612-d63ce9a49ff1>
2.875
196
Truncated
Science & Tech.
38.912883
Comet color and you March 2013: Take this month's observing opportunity to test out your eyes’ color receptors. January 28, 2013 |Red sensitive? Blue sensitive? How perceptive are your eyes to the colors of astronomical objects? One way to find out is to observe newly discovered Comet C/2011 L4 (PANSTARRS) with friends under a dark sky this month. (For more about this comet, including a finder chart, see “Get ready for Comet PANSTARRS” on p. 60). All you need to do is sketch the comet head and its tail(s) and rate their intensity and their color. It’s an exercise that’s both informative and fun.| Astronomy magazine subscribers can read the full column for free. Just make sure you're registered with the website. You are currently not logged in. This article is only available to Astronomy magazine subscribers. Already a subscriber to Astronomy magazine? If you are already a subscriber to Astronomy magazine you must log into your account to view this article. If you do not have an account you will need to regsiter for one. Registration is FREE and only takes a couple minutes. Non-subscribers, Subscribe TODAY and save! Get instant access to subscriber content on Astronomy.com! - Access our interactive Atlas of the Stars - Get full access to StarDome PLUS - Columnist articles - Search and view our equipment review archive - Receive full access to our Ask Astro answers - BONUS web extras not included in the magazine - Much more!
<urn:uuid:714a68cd-1048-474c-aee4-97729bcf616d>
3.109375
340
Truncated
Science & Tech.
52.913293
What is Copyleft? Copyleft is a general method for making a program (or other work) free, and requiring all modified and extended versions of the program to be free as well. The simplest way to make a program free software is to put it in the public domain, uncopyrighted. This allows people to share the program and their improvements, if they are so minded. But it also allows uncooperative people to convert the program into proprietary software. They can make changes, many or few, and distribute the result as a proprietary product. People who receive the program in that modified form do not have the freedom that the original author gave them; the middleman has stripped it away. In the GNU project, our aim is to give all users the freedom to redistribute and change GNU software. If middlemen could strip off the freedom, we might have many users, but those users would not have freedom. So instead of putting GNU software in the public domain, we “copyleft” it. Copyleft says that anyone who redistributes the software, with or without changes, must pass along the freedom to further copy and change it. Copyleft guarantees that every user has freedom. Copyleft also provides an incentive for other programmers to add to free software. Important free programs such as the GNU C++ compiler exist only because of this. Copyleft also helps programmers who want to contribute improvements to free software get permission to do so. These programmers often work for companies or universities that would do almost anything to get more money. A programmer may want to contribute her changes to the community, but her employer may want to turn the changes into a proprietary software product. When we explain to the employer that it is illegal to distribute the improved version except as free software, the employer usually decides to release it as free software rather than throw it away. To copyleft a program, we first state that it is copyrighted; then we add distribution terms, which are a legal instrument that gives everyone the rights to use, modify, and redistribute the program's code, or any program derived from it, but only if the distribution terms are unchanged. Thus, the code and the freedoms become legally inseparable. Proprietary software developers use copyright to take away the users' freedom; we use copyright to guarantee their freedom. That's why we reverse the name, changing “copyright” into “copyleft.” Copyleft is a way of using of the copyright on the program. It doesn't mean abandoning the copyright; in fact, doing so would make copyleft impossible. The “left” in “copyleft” is not a reference to the verb “to leave”—only to the direction which is the inverse of “right”. Copyleft is a general concept, and you can't use a general concept directly; you can only use a specific implementation of the concept. In the GNU Project, the specific distribution terms that we use for most software are contained in the GNU General Public License (available in HTML, text, and Texinfo format). The GNU General Public License is often called the GNU GPL for short. There is also a Frequently Asked Questions page about the GNU GPL. You can also read about why the FSF gets copyright assignments from contributors. An alternate form of copyleft, the GNU Lesser General Public License (LGPL) (available in HTML, text, and Texinfo format), applies to a few (but not all) GNU libraries. To learn more about properly using the LGPL, please read the article Why you shouldn't use the Lesser GPL for your next library. The GNU Free Documentation License (FDL) (available in HTML, text and Texinfo) is a form of copyleft intended for use on a manual, textbook or other document to assure everyone the effective freedom to copy and redistribute it, with or without modifications, either commercially or noncommercially. The appropriate license is included in many manuals and in each GNU source code distribution. All these licenses are designed so that you can easily apply them to your own works, assuming you are the copyright holder. You don't have to modify the license to do this, just include a copy of the license in the work, and add notices in the source files that refer properly to the license. Using the same distribution terms for many different programs makes it easy to copy code between various different programs. When they all have the same distribution terms, there is no problem. The Lesser GPL, version 2, includes a provision that lets you alter the distribution terms to the ordinary GPL, so that you can copy code into another program covered by the GPL. Version 3 of the Lesser GPL is built as an exception added to GPL version 3, making the compatibility automatic. If you would like to copyleft your program with the GNU GPL or the GNU LGPL, please see the license instructions page for advice. Please note that you must use the entire text of the license you choose. Each is an integral whole, and partial copies are not permitted.
<urn:uuid:9b4aa105-c516-4a9e-b057-304fdc530ae4>
3.546875
1,063
Knowledge Article
Software Dev.
43.058114
By Helen Taylor/Nature Conservancy Tiny mussels that wash up on beaches and attach to boats, piers and underwater pipes. Mats of vegetation that blanket lakes and take the fun out of boating, fishing and water-skiing. Tall grasses that invade shorelines, blocking lake views. In the Great Lakes states, we’ve read about aquatic invasive species such as zebra mussels, Eurasian water milfoil and Phragmites (common reed) and oftentimes experienced them first hand. We know what a nuisance they are. Less clear, however, is the impact these invasive plants and animals are having on our economy. Quantifying those costs is difficult and, as a result, many different numbers have surfaced in recent years. In an effort to better understand the true costs of aquatic invasive species in the Great Lakes basin, The Nature Conservancy recently commissioned the Anderson Economic Group to perform the research and analysis needed to sort the hype from the reality. Their conclusion? While they can’t provide a single number for total cost because there isn’t enough solid research on all aspects of the issue, they can report with confidence that the direct cost of aquatic invasive species to the Great Lakes basin is more than $100 million annually, and likely significantly more. Six main industries bear the brunt of these direct, out-of-pocket costs: sport and commercial fishing, power generation, industrial facilities, shipping-related businesses, tourism and recreation and public water supply facilities. Some of the cost estimates the Anderson Group uncovered are startling: *Great Lakes businesses suffer $50 million every year in losses and reduced demand due to mollusks and sea lamprey. * The Great Lakes Fishery Commission is spending approximately $34 million annually on research and control of aquatic invasive species. * A paper plant along Lake Michigan spent $1.97 million to remove 400 cubic yards of zebra mussels from its facility. * In 2009-10, the eight Great Lakes states spent nearly $31 million to manage and prevent the spread of aquatic invasive species. These costs trickle down to all of us. The 40 million people who get their drinking water from the Great Lakes pay higher water bills. Costs incurred by power plants to remove zebra mussels from intake pipes will be passed on to consumers. And Great Lakes fish are either more expensive or less available, or both, due to the impacts of invasive species. Today, most of the money spent on aquatic invasives goes to management. But if we want to win the war against invasive species, prevention is the first line of defense and the most cost-effective strategy. For example, investing in the development of screening and risk assessment tools that allow agencies to prohibit or restrict from trade those species that are likely to be problematic could stop the most ecologically and economically damaging new species from entering Great Lakes waters. If they do get in, tools like environmental DNA, which can detect DNA shed into water by Asian carp and other fish, can help detect them quickly, giving us a better chance of controlling emerging populations before they become established and are much more costly to control. We can’t afford to ignore aquatic invasive species. Their economic, social and environmental cost to the Great Lakes basin and the people who live and work here is too high. A coordinated, region-wide plan to prevent new introductions of those invasive species most likely to cause harm, detect and respond rapidly to new invaders and implement early control is a wise investment in our economy, our lives and one of the world’s greatest freshwater systems. Editor's note: Taylor is a member of the Bridge Board of Advisers.
<urn:uuid:65458bdf-307b-4101-b4eb-70959ff7b791>
3.0625
757
Nonfiction Writing
Science & Tech.
36.596393
Solar Plasma Swirls, Erupts A prominence composed of solar plasma just behind the edge of the Sun, rose up and swirled around for many hours, then burst away into space over about a one-day period (Nov. 18-19, 2012). Unseen magnetic fields, mostly above this area, are the driving forces behind the event. This action was captured in extreme ultraviolet light by Solar Dynamics Observatory. Events like this are fairly common, but when viewed in profile, it is easier to see the dynamics of the plasma more clearly. › View associated video
<urn:uuid:fa6c682c-6ac7-4670-bf1a-82fde87b32ce>
2.78125
118
Truncated
Science & Tech.
43.787261
Date: Around 1993 I have heard that supposed "space plane prototype" that flew over California and Denmark, and a few other countries. I read an article which said that it used an engine which ran on liquid methane. Is this the same as methanol in cars? Also, is this plane's engine a Scramjet (what is this engine about?), or what? Methane is a gas from the same "family" that includes butane, propane, octane, etc. Methanol is an alcohol and a liquid at room temperature. Liquid methane is cold methane. Click here to return to the Engineering Archives Update: June 2012
<urn:uuid:0d12f3eb-9a70-4e61-956c-25149f2b6611>
2.734375
142
Knowledge Article
Science & Tech.
50.786962
Search our database of handpicked sites Looking for a great physics site? We've tracked down the very best and checked them for accuracy. Just fill out the fields below and we'll do the rest. You searched for We found 6 results in our database of sites 6 are Websites, 0 are Videos, and 0 are Experiments) Search results from our links database Equations, graph and simulation of a cyclotron. Uses of radioactive beams generated by the cyclotron are given, as is a point and click diagram of the NSCL facility which gives further information. A graph showing Isotopes made is provided. The cyclotron was one of the earliest types of particle accelerators, and is still used as the first stage of some large multi-stage particle accelerators. It makes use of the magnetic force on a ... A site describing the work of Ernest Lawrence and the setting up of the cyclotron at Berkeley. Leo Szilard (1898 - 1964) ideas included the linear accelerator, cyclotron, electron microscope, and nuclear chain reaction. Equally important was his insistence that scientists accept moral ... Ernest Orlando Lawrence (1901 - 1958) invented the cyclotron, a device for accelerating nuclear particles to very high velocities without the use of high voltages.
<urn:uuid:20e5e0c7-407e-47f0-b881-a6ae9ea3b47e>
3.203125
279
Content Listing
Science & Tech.
46.258456
What is the meaning of the decimal point? Here is an illustration of the most common answer in my class (a math content course for prospective elementary and special ed teachers): Thinking of the decimal point as the border between whole numbers and decimals limits our ability to think about relationships between these two domains. In particular, look at that picture again. The essence of whole-number place value is right-to-left; the essence of fraction (decimal) place value is left-to-right if we’re thinking about the decimal point as the border. But the same relationships hold on either side: If we move one place to the left, the value is 10 times greater and if we move one space to the right, the value is 1/10 as great. No, the decimal point isn’t really the border between two different settlements. And it’s not a the marker of symmetry either. Thinking about the decimal point this way leads us to expect that there should be a oneths place to the right of the decimal point. No, the decimal point marks the most important location in a place value system: the ones place. Once we know where the ones place is, we know the value of every other place. It is just an unfortunate accident of history that the decimal point lies to the right of the ones place, when really it should sit underneath it.
<urn:uuid:560400e3-6e83-42b5-b3d2-373e861ff13e>
3.6875
292
Personal Blog
Science & Tech.
53.309306
Non-native to Chesapeake Bay; invasive - Family - Hydrocharitaceae - Distribution - Hydrilla is an exotic species introduced from southeast Asia which first appeared in United States during 1960's. Today it is found in most of the southeastern United States, westward to California. Hydrilla was first observed in 1982 in the Potomac River near Washington D.C. and by 1992 grew to cover 3000 acres in the Potomac. Hydrilla is also found in the Susquehanna Flats and a few upper bay tributaries. It is predominantly a freshwater species, but has been found in waters of 6-9 ppt salinity. Hydrilla grows on silty to muddy substrates and tolerates lower light than other bay - Recognition - Stems are freely branching with whorls of 3-5 leaves linear to lanceolate leaves. Leaves have strongly toothed or serrated margins and a spinous midrib. Roots are adventitious, forming along nodes of rhizomes that grow horizontally atop or just below sediment surface. Tubers are also commonly found at the end of runners that branch from the buried rhizome. - Ecological Significance - Hydrilla is an introduced species first identified in the Chesapeake Bay region in 1982. It is often considered a nuisance plant because of its habit of forming dense impenetrable beds that impede recreational uses of waterways. Because of the substantial populations of hydrilla found in the Potomac River, a mechanical harvesting program had to be instituted to keep the many marinas along the river open to boat traffic. Hydrilla is an excellent food source for waterfowl, and the large populations found in the Potomac River have increased waterfowl numbers. In spring, young hydrilla beds that are just starting to reach for the surface are a great place to catch a large mouth bass. - Similar Species - (Elodea canadensis) has a similar appearance, however, leaves of waterweed are in whorls of 3 and are not as markedly toothed as those of hydrilla. Common waterweed also lack the tubers that hydrilla forms in late summer or early fall. - Reproduction - Hydrilla reproduces sexually and asexually. The strain found in Chesapeake Bay is monoecious, with male and female flowers occurring together near the growing stem tips. Small, white female flowers are born on a hypanthium at the water surface. Male flowers detach from stem tips and float to the water surface. The pollen they release must settle directly on the female flower for pollination to occur. Seed set has a success rate of about 50% and is not as effective as asexual reproduction. Asexual reproduction occurs through fragmentation, production of new stems from rhizomes, turions (resting plant buds that develop in leaf axils or tips of branching stems) which break off then sink to the substrate and form a new plant, and tubers (another type of resting plant bud) that develop at ends of buried runners that branch off from rhizomes. Tubers and turions can over-winter and are the major form of reproduction during the late summer (August) die-off of dense hydrilla beds. Print out a copy for the field (Adobe Acrobat file 69KB) - To get the free Acrobat Reader go to Adobe.com. If you don't think your plant is represented above try going back a level and choosing again. DNR Home > Bays & Streams > Bay Grass Home > Bay Grass Identification
<urn:uuid:e33140fa-ddc0-45a4-aabf-b8cc87cc06ba>
3.625
796
Knowledge Article
Science & Tech.
41.061922
- (US) IPA: /ləˈbɛːɡ ˈɪntəɡrəl/, /ləˈbɛːɡ ˈɪntəɡrl̩/ Lebesgue integral (plural Lebesgue integrals) - (analysis, singular only (the Lebesgue integral) and countable) An integral which has more general application than that of the Riemann integral, because it allows the region of integration to be partitioned into not just intervals but any measurable sets for which the function to be integrated has a sufficiently narrow range. (Formal definitions can be found at PlanetMath). - The Lebesgue integral is learned in a first-year real-analysis course. - Compute the Lebesgue integral of f over E. Last modified on 19 June 2013, at 18:42
<urn:uuid:75e60f75-f0c1-4e83-8160-9660583ef38f>
2.828125
194
Knowledge Article
Science & Tech.
32.622602
Restoration of Fish Habitat in Relocated Streams FLOW IN NATURAL STREAMS Rivers and streams are nature's channels for the disposal of surface runoff. Stream flows vary continuously from day to day, month to month, and year to year, and the variations between the greatest floods and the lowest flows can be very large. A ratio of 100 to 1 is not uncommon. A stream forms its channel by erosion, practically all of which takes place during a few days or weeks in the year when the stream is in flood. The main channel-forming flood for most streams is the discharge that has a 50 percent chance of being equaled or exceeded in any given year. The larger floods which occur less frequently may considerably alter the work of the 50 percent chance floods, but after they are over, the stream will resume the normal channel forming process. It is this process that is most significant for creating and maintaining fish habitat. During floods of any size, a stream may carry large amounts of sediment suspended in the flow. Most of this material is clay, silt and fine sand washed into the stream with the surface runoff, but a small amount is derived from the stream's own bed and banks. It is this bed and bank material that is most important for creating and modifying fish habitat. Action of a Stream on Its Bed and Banks Gravity, acting on the water and its suspended sediment, is the propelling force of stream flow. This force is opposed by the friction exerted on the flow by the bed and banks of the stream. As the volume flowing in a stream increases, the stream's erosive power also increases. The resulting erosion varies with the resistance of the materials composing the bed and banks, from practically nil for bedrock and large boulders to very extensive for sand and silt. During high flows, lighter materials, such as sand or fine gravel are swept up by the turbulence and carried along in suspension in the flow. Heavier particles such as coarse gravel may roll and slide along the bottom. When the flood recedes, the heavier particles stop rolling and lighter ones drop out as the velocity decreases. The bed of the stream is changed after each flood. Pools and gravel bars may remain in substantially the same places, but the bed materials have changed and the stream is in fact slightly different. High flows sweep away sand and small gravel on the stream bed, leaving the large gravel, cobbles and boulders that are too heavy for the current to move.* A pavement or armor is formed which protects the undisturbed materials below from scour. As the flood recedes, this armor is covered by finer materials, but it remains to protect the bed from the next flood. If the armor is removed, as for example by gravel mining, a new cycle of erosion can begin, and the streambed will "degrade" until a new armor surface develops. Lack of armor explains why new relocated channels can cut themselves deeper than the original excavated depth although the new channel may have the same or a flatter gradient. *Sediment particles are commonly classified according to their diameters as: ||10 inches to 80 inches (250 mm to 2000 mm) ||2.5 inches to 10 inches (64 mm to 250 mm) ||0.08 inches to 2.5 inches (2 mm to 64 mm) ||0.06 mm to 2 mm ||0.004 mm to 0.06 mm ||less than 0.004 mm After a stream has armored its channel, its bed may be immune to erosion from all but the highest flows. However, the stream may still scour the unarmored deposits in the banks, providing continuing supplies of sediment in the channel forming process. With time, the steep upper portions of a stream tend to degrade their beds while the lower portions build up wide flood plains of sediment. Eventually, the stream reaches an equilibrium where the gradient at any given place, the flood flows and the sediment load are in balance. When this occurs the stream is barely able to carry away the sediment that comes to it from upstream, and it is neither cutting or building up its bed. This balanced condition can be upset, by natural or manmade changes in the stream's geometry, sediment load or discharge. If, for example, the channel is shortened by relocation, the local gradient will be steepened, which may cause the stream to begin cutting its bed above the relocation. The sediment thus created will be deposited downstream, building up the bed. Any significant change of the natural channel has the potential to start a new cycle of erosion, and must be considered in the hydraulic design of every stream relocation. Obstructions to Flow Cause Changes in Stream Channels An immovable obstruction, such as a large boulder, restricts the area of flow in a stream channel. Water piles up against the upstream edge of the obstruction, causing an increase in velocity around the sides, accompanied by the development of vortexes which scour the bed. Scour holes may develop, and the scoured-out material may be deposited downstream as a gravel bar. If the obstruction is overtopped during high water an additional erosive force is introduced as water plunges over the downstream face, impinging on the bed like a jet. This force can greatly enlarge the scour hole below the obstacle. Scour holes change somewhat with each flood, but as long as the obstacle remains the scour hole will persist unless filled by an excess of sediment in the stream. An obstruction which extends completely across the channel, such as a low dam, has the effect of flattening the gradient upstream. This reduces the stream's velocity, and also its capacity to transport sediment. The stream may drop some of its sediment in the pool above a new dam and this pool may eventually become filled with deposits. Downstream from the dam the opposite effect occurs. The water falls almost vertically from the lip of the dam, impinging on the bed very much as a jet from a hose. The strength of this jet increases with the volume of water passing over the dam, and the height of fall. If the bed materials are of such size that they can be moved by the jet a scour hole develops immediately below the dam. The smaller materials are scoured out first, followed by larger particles until only those remain which are too large to be moved by the water. The scour hole is then "paved" or "armored" and will not deepen itself any more until a greater flood occurs. If the bed material below the dam consists of very coarse particles the scour hole may be very small or may not develop at all. In such cases, a hole or basin should be excavated in the bed when the dam is installed to provide the desired plunge pool for fish habitat. Material removed from a natural scour hole is carried downstream where, as the velocity decreases it forms a gravel bar. Such bars are often excellent spawning beds for fish. Scour holes have a tendency to enlarge themselves in all directions, so the foundation of a dam must either be built below the probable final scour level, or the scour must be controlled by placing coarse rock along the toe of the structure. An obstruction at the edge of a stream constricts the flow and changes its direction toward the opposite bank. This can cause increased far bank erosion, and at the same time cause a scour hole to develop below the near bank tip of the obstruction. These "current deflectors," whether occurring accidentally in nature or introduced by man, are important molders of fish habitat in Obstructions which cause an appreciable constriction of the flood flow of a stream may cause sediment bars and riffles to form upstream. This tendency is intensified if the channel is unusually wide above the constriction. In nature, clusters of boulders, fallen trees or debris lodged in the channel may cause constrictions. Bridges and bank riprapping may also constrict the flow if not properly designed. Channel Formation in Straight Streams Natural streams are never absolutely straight, even where they have steep gradients.* They tend to develop bends as in Figure l(b). Erosion is greater in the outsides of these bends, and the eroded material is carried downstream, where it is deposited as bars in the insides of the bends. These bars force the stream against the opposite bank, causing more erosion and deepening of the channel. Eventually, the bends become pools and the gravel bars form "riffles," and the stream assumes an undulating profile as shown here. The relationship between pools and riffles varies with stream discharge. During low flows, a stream is more sinuous, and riffles, pools and slow shallow water tend to increase. During high water the stream tends to straighten; riffles and pools tend to decrease, and deep fast runs tend to increase. In straight streams having beds composed of several sizes of coarse materials, the riffle-pool sequence commonly occurs at intervals of 5 to 7 stream widths. On the average, pools tend to be longer than riffles, and the bed materials in riffles are noticeably coarser than in pools. For a stream that is in equilibrium, the gravel bars tend to remain in the same location for years, although the materials composing them may change from year to year. *Geologists define a straight stream as one in which the length measured along the line of greatest channel depth is less than 1112 times the length of the valley in which the stream flows. Riffle-pool sequences will not develop in streams with sand or silt beds. Straight streams usually have hard, erosion resistant banks of gravels, boulders or even bedrock, which restrain the stream's natural tendency to develop a sinuous course. The gradient may change abruptly with bed conditions, producing short sections of swift water alternating with relatively flat reaches. Meandering Streams and Braided Streams Streams with comparatively flat gradients and erodible banks are more sinuous than straight streams. The bends tend to extend themselves laterally to form loops or "meanders" as in Figure l(a). If the bed materials are coarser than coarse sand or fine gravel, the riffle-pool sequence will develop in meandering streams, with pools in the outsides of the bends and riffles in the crossovers from one bend to the next. Frequently, pools develop overhanging banks providing excellent cover for fish. Natural or man-made obstructions in a stream may cause or reinforce meandering in unwanted places. This possibility must be considered when designing and placing habitat improvement structures. Riprap bank protection may be needed at vulnerable places to prevent extension of Streams carrying large quantities of sediment may develop a braided pattern as in Figure l(c). Such streams usually have steep gradients and carry more sediment than the stream can effectively transport. They tend to have wide, shallow beds when in flood, and to cut their channels even wider, producing more sediment to add to the excess already existing. These streams lack the riffle-pool sequence, and the channel may spread out in many shallow rills incapable of supporting fish. In its course through the country, the same stream may be straight, meandering or braided, depending on the local topography and geology.
<urn:uuid:387694f3-83d6-48cf-a1f4-98d77fe7644f>
4.21875
2,423
Knowledge Article
Science & Tech.
52.026867
A circle has a unit circle (radius, r = 1) removed from its centre to produce an annulus. If the area remaining is the same as the unit circle removed, find the width of the annulus. Let the radius of the original circle be, r. If the area remaining is equal to the area removed (a unit circle), the area of the original circle will be twice a unit circle. πr2 = 2π r2 = 2. Hence, the radius of the original circle, r = 2 and so the width of the annulus is 2 1.
<urn:uuid:75e88997-89b1-4f9c-af0e-c65bb577bcf3>
4.1875
124
Tutorial
Science & Tech.
69.889058
Some data referring to the number of survey fields, and the number of galaxies examined, are summarized in Table 2. In the survey areas there are altogether 1273 galaxies with diameters larger than the above limits, and in the comparison areas 2084. The entire material thus comprises 3357 galaxies distributed over 518 fields. |Class A||Class B||Class C||All| |Number of survey areas||62||64||48||174| |(number of galaxies)||(436)||(441)||(396)||(1273)| |Number of comp. areas||122||126||96||344| |(number of galaxies)||(760)||(702)||(622)||(2084)| Fig. 2 gives the statistical distribution of the 174 survey areas, the areas being grouped according to the number of galaxies (diameter 1.0 kpc). The number of companions, physical and optical, of the central spiral galaxies ranges from 0 to 13. The smooth curve represents the Poisson distribution corresponding to the same total number and the same arithmetical mean. As expected, the observed frequencies deviate from a random distribution. In order to reduce possible disturbances by distant clusters that may fall within the boundaries of the survey areas, it seemed advisable to exclude the 14 most populous areas-those with more than eight objects, or double the average number. The following analyses will be based on the 160 survey areas that have a maximum of eight galaxies. For the same reason, and as a compensation, the 28 most populous comparison areas have also been omitted. In the second part of the survey, down to a diameter of 0.61 kpc, we have excluded two additional survey areas with more than 15 galaxies, and the four most populous comparison areas. Figure 2. Statistical distribution of 174 survey areas containing different number of galaxies. The smooth curve is the corresponding Poisson distribution. The number of physical companions to the central spiral system in a survey area is, in a statistical sense, represented by the difference between the number of galaxies in this area and half the total number in the two comparison areas. However, the central systems, with a mean major diameter of 27 kpc, screen off parts of the survey areas, about 3% (class A) and 5% (class B). The comparison number is accordingly multiplied by 0.97 and 0.95, respectively (since for spirals of class A the physical companions are confined to position angles of 30° -90°, the factor will in reality be 0.65). For class C, comprising spiral systems of different inclinations which sometimes have large companions within the survey areas, the average factor is 0.95. It has been found that the accuracy is improved if the individual comparison number is replaced by the mean number derived from all the comparison areas as a function of the distance, the accidental fluctuations being reduced. The results listed in Table 7, as regards the number of physical companions (diam. 1.0 kpc), are based on these smoothed-out comparison numbers. The values of Nphys range from -2 to +7, the estimated mean error of an individual number being about 1.2. It should be remarked here that the statistical distribution derived later for position angles, separations, and absolute diameters are based on the total number of optical companions, corresponding to all the comparison areas. There does not seem to be any systematic dependence of the tabulated Nphys on the galactic latitude, which may indicate that the surface-brightness gradients in the outer parts of the satellites are so steep that the measured diameters are not seriously affected by galactic absorption. On the other hand, the results obtained in the extended survey (diam. = 0.6 - 0.9 kpc) indicate a certain latitude effect, probably explained by the somewhat lower surface magnitude of these small satellites. For this reason, spiral systems with galactic latitudes below 29° (NGC 891, 925, 1023, 1560, 2835, 7640) will not be included when the results of the extended survey are used in the determination of the distribution of absolute diameters (sect. 10). It was stated above that survey areas with more than eight galaxies (diam. 1.0 kpc) have been excluded, in order to reduce possible disturbances by background clusters. It is possible to make a check on the remaining part of the material by means of the charts of the distribution of distant clusters given in the catalogues by Zwicky et al. (1961, 1963, 1965, 1966, 1968). From the charts we estimate the fraction, f, of each survey area that is projected on a background cluster. If only clusters of a limited size are included, the very extended clouds being omitted, we find that f = 0 for about half the number of survey areas; on an average, f amounts to 0.22. It is not possible to establish a statistically significant relation between Nphys and f, the coefficient of correlation being as small as +0.10 ± 0.10 (m.e.). For a reliable determination of the distribution of absolute diameters for galaxies in the physical groups it is important that the apparent diameter measures form a homogeneous system, that is, that the diameters are free from systematic errors that are a function of the diameter or of the distance modulus. All the diameters have been measured by the writer using the same eyepiece and taking care that the observing conditions were the same, for instance, as regards the illumination of the Atlas prints. A great part of the work has been repeated, with no indication of any serious changes in the diameter measures. The homogeneity is supported by the fact that Nphys, as listed in Table 7, shows no systematic dependence on the adopted distance modulus. Furthermore, an examination of the statistical distribution of the diameters of the galaxies in all the comparison areas shows that this distribution is compatible with the assumption of a constant space density of galaxies (cf. sect. 14). There is only a slight decrease in the number of galaxies with diameters less than 0'.35, which may be explained as a redshift effect in the diameters, since most of these small galaxies are at estimated distances of the order of 100-200 Mpc. It should be noted here that the diameters of the small physical satellites of the spiral systems do not suffer from any redshift effect; whereas most of the background objects are giant galaxies at very large distances, the satellites are dwarf systems with an average distance modulus of about 30.0.
<urn:uuid:5be430e4-37ac-4778-aae3-7e894c9a23b4>
3.03125
1,351
Academic Writing
Science & Tech.
49.93416
Apr 27, 2005 A little known fact: Popular ideas about the Sun have not fared well under the tests of a scientific theory. The formulators of the standard Sun model worked with gravity, gas laws, and nuclear fusion. But closer observation of the Sun has shown that electrical and magnetic properties dominate solar behavior. For centuries, the nature of the Sun’s radiance remained a mystery to astronomers. The Sun is the only object in the solar system that produces its own visible light. All others reflect the light of the Sun. What unique trait of the Sun enables it to shine upon the other objects in the solar system? Today, astronomers assure us that the most fundamental question is answered. The Sun is a thermonuclear furnace. The ball of gas is so large that astronomers envision pressures and densities within its core sufficient to generate temperatures of about 16 million K—producing a continuous “controlled” nuclear reaction. Most astronomers and astrophysicists investigating the Sun are so convinced of the fusion model that only the rarest among them will countenance challenges to the underlying idea. Standard textbooks and institutional research, complemented by a chorus of scientific and popular media, “ratify” the fusion model of the Sun year after year by ignoring evidence to the contrary. A growing group of independent researchers, however, insists that the popular idea is incorrect. These researchers say that the Sun is electric. It is a glow discharge fed by galactic currents. And they emphasize that the fusion model anticipated none of the milestone discoveries about the Sun, while the electric model predicts and explains the very observations that posed the greatest quandaries for solar investigation. More than 60 years ago, Dr. Charles E. R. Bruce, of the Electrical Research Association in England, offered a new perspective on the Sun. An electrical researcher, astronomer, and expert on the effects of lightning, Bruce proposed in 1944 that the Sun’s "photosphere has the appearance, the temperature and the spectrum of an electric arc; it has arc characteristics because it is an electric arc, or a large number of arcs in parallel." This discharge characteristic, he claimed, "accounts for the observed granulation of the solar surface." Bruce’s model, however, was based on a conventional understanding of atmospheric lightning, allowing him to envision the “electric” Sun without reference to external electric fields. Years later, a brilliant engineer, Ralph Juergens, inspired by Bruce’s work, added a revolutionary possibility. In a series of articles beginning in 1972, Juergens suggested that the Sun is not an electrically isolated body in space, but the most positively charged object in the solar system, the center of a radial electric field. This field, he said, lies within a larger galactic field. With this hypothesis, Juergens became the first to make the theoretical leap to an external power source of the Sun. Juergens proposed that the Sun is the focus of a "coronal glow discharge" fed by galactic currents. To avoid misunderstanding of this concept, it is essential that we distinguish the complex, electrodynamic glow discharge model of the Sun from a simple electrostatic model that can be easily dismissed. Throughout most of the volume of a glow discharge the plasma is nearly neutral, with almost equal numbers of protons and electrons. In this view, the charge differential at the Earth’s distance from the Sun is smaller than our present ability to measure—perhaps one or two electrons per cubic meter. But the charge density is far higher closer to the Sun, and at the solar corona and surface the electric field is of sufficient strength to generate all of the energetic phenomena we observe. Today, the electrical theorists Wallace Thornhill and Donald Scott urge a critical comparison of the fusion model and the electrical model. Given what we now know about the Sun, which model meets the tests of unity, coherence, simplicity, and predictability? Why did so many discoveries surprise investigators and even contradict the expectations of the fusion model? Is there any fundamental feature of the Sun that contradicts the glow discharge hypothesis? Our closer looks at the Sun have revealed the pervasive influence of magnetic fields, which are the effect of electric currents. Sunspots, prominences, coronal mass ejections, and a host of other features require ever more complicated guesswork on behalf of the fusion model. But this is the way an anode in a coronal glow discharge behaves! In the electrical model, the Sun is the “anode” or positively charged body in the electrical exchange, while the "cathode" or negatively charged contributor is not a discrete object, but the invisible “virtual cathode” at the limit of the Sun’s coronal discharge. (Coronal discharges can sometimes be seen as a glow surrounding high-voltage transmission wires, where the wire discharges into the surrounding air). This virtual cathode lies far beyond the planets. In the lexicon of astronomy, this is the “heliopause.” In electrical terms, it is the cellular sheath or “double layer” separating the plasma cell that surrounds the Sun ("heliosphere”) from the enveloping galactic plasma. In an electric universe, such cellular forms are expected between regions of dissimilar plasma properties. According to the glow discharge model of the Sun, almost the entire voltage difference between the Sun and its galactic environment occurs across the thin boundary sheath of the heliopause. Inside the heliopause there is a weak but constant radial electrical field centered on the Sun. A weak electric field, immeasurable locally with today's instruments but cumulative across the vast volume of space within the heliosphere, is sufficient to power the solar discharge. The visible component of a coronal glow discharge occurs above the anode, often in layers. The Sun’s red chromosphere is part of this discharge. (Unconsciously, it seems, the correct electrical engineering term was applied to the Sun’s corona.) Correspondingly, the highest particle energies are not at the photosphere but above it. The electrical theorists see the Sun as a perfect example of this characteristic of glow discharges—a radical contrast to the expected dissipation of energy from the core outward in the fusion model of the Sun. At about 500 kilometers (310 miles) above the photosphere or visible surface, we find the coldest measurable temperature, about 4400 degrees K. Moving upward, the temperature then rises steadily to about 20,000 degrees K at the top of the chromosphere, some 2200 kilometers (1200 miles) above the Sun's surface. Here it abruptly jumps hundreds of thousands of degrees, then continues slowly rising, eventually reaching 2 million degrees in the corona. Even at a distance of one or two solar diameters, ionized oxygen atoms reach 200 million degrees! In other words the “reverse temperature gradient,” while meeting the tests of the glow discharge model, contradicts every original expectation of the fusion model. But this is only the first of many enigmas and contradictions facing the fusion hypothesis. As astronomer Fred Hoyle pointed out years ago, with the strong gravity and the mere 5,800-degree temperature at the surface, the Sun’s atmosphere should be only a few thousand kilometers thick, according to the “gas laws” astrophysicists typically apply to such bodies. Instead, the atmosphere balloons out to 100,000 kilometers, where it heats up to a million degrees or more. From there, particles accelerate out among the planets in defiance of gravity. Thus the planets, Earth included, could be said to orbit inside the Sun's diffuse atmosphere. The discovery that blasts of particles escape the Sun at an estimated 400- to 700-kilometers per second came as an uncomfortable surprise for advocates of the nuclear powered model. Certainly, the “pressure” of sunlight cannot explain the acceleration of the solar “wind”. In an electrically neutral, gravity-driven universe, particles were not hot enough to escape such massive bodies, which (in the theory) are attractors only. And yet, the particles of the solar wind continue to accelerate past Venus, Earth, and Mars. Since these particles are not miniature “rocket ships,” this acceleration is the last thing one should expect! According to the electric theorists, a weak electric field, focused on the Sun, better explains the acceleration of the charged particles of the solar wind. Electric fields accelerate charged particles. And just as magnetic fields are undeniable witnesses to the presence of electric currents, particle acceleration is a good measure of the strength of an electric field. A common mistake made by critics of the electric model is to assume that the radial electric field of the Sun should be not only measurable but also strong enough to accelerate electrons toward the Sun at “relativistic” speeds (up to 300,000 kilometers per second). By this argument, we should find electrons not only zipping past our instruments but also creating dramatic displays in Earth’s night sky. But as noted above, in the plasma glow-discharge model the interplanetary electric field will be extremely weak. No instrument placed in space could measure the radial voltage differential across a few tens of meters, any more than it could measure the solar wind acceleration over a few tens of meters. But we can observe the solar wind acceleration over tens of millions of kilometers, confirming that the electric field of the Sun, though imperceptible in terms of volts per meter, is sufficient to sustain a powerful drift current across interplanetary space. Given the massive volume of this space, the implied current is quite sufficient to power the Sun. Look for more details on the drift current, solar magnetic fields, nuclear reactions, and many other features of the Sun in upcoming Pictures of the Day. See also these Pictures of the Day—TPOD Oct 06, 2004: The Iron Sun TPOD Oct 15, 2004: Solar Tornadoes TPOD Nov 03, 2004: Kepler Supernova Remnant More about electric stars can be found here: Copyright 2005: thunderbolts.info
<urn:uuid:9e5e6f80-1bb6-49c4-9ef1-22edcf01aeed>
3.9375
2,081
Personal Blog
Science & Tech.
33.054237
Dust Plume Across Northern Patagonia A checkerboard of growing (dull green) and fallow (tan) fields in northern Patagonia appears beneath a rippling plume of beige dust in this natural-color image from the Moderate Resolution Imaging Spectroradiometer (MODIS) on NASA’s Aqua satellite on September 11, 2009. The image shows parts of Argentina’s Río Negro province (left) and the southern tip of Buenos Aires province (right). High temperatures across Argentina in the first weeks of September 2009 depleted soil moisture, according to crop reports from the USDA Foreign Agricultural service. In addition to its impact on crops, the drying also makes it easier for the wind to sweep the soil off the ground. This image originally appeared on the Earth Observatory. Click here to view the full, original record.
<urn:uuid:a3ddeb0c-8d80-4e9c-95f9-00797003ee99>
3.578125
175
Truncated
Science & Tech.
34.515641
Regular expressions are a well recognized way for describing string patterns. The following regular expression defines a floating point number with a (possibly empty) integer part, a non empty fractional part and an optional exponent: [0-9]* \.[0-9]+ ([Ee](\+|-)?[0-9]+)? The rules for interpreting and constructing such regular expressions are explained below. A regular expression parser takes a regular expression and a source string as arguments and returns the source position of the first match. Regular expression parsers either interpret the search pattern at runtime or they compile the regular expression into an efficient internal form (known as deterministic finite automaton). The regular expression parser described here belongs to the second category. Besides being quite fast, it also supports dictionaries of regular expressions. With the definitions $Frac= \.[0-9]+ and $Exp= ([Ee](\+|-)?[0-9]+), the above regular expression for a floating point number can be abbreviated to $Int* $Frac $Exp?. I separated algorithmic from interface issues. The files RexAlgorithm.h and RexAlgorithm.cpp implement the regular expression parser using only standard C++ (relying on STL), whereas the file RexInterface.h and RexInterface.cpp contain the interfaces for the end user. Currently there is only one interface, implemented in the class REXI_Search. Interfaces for replace functionality and for programming language scanners are planned for future releases. class REXI_Search : public REXI_Base AddRegDef (string strName,string strRegExp); SetRegexp (string strRegExp); bool MatchHere (const char*& rpcszSrc, int& nMatchLen,bool& bEos); bool Find (const char*& rpcszSrc, int& nMatchLen,bool& bEos); int main(int argc, char* argv) const char szTestSrc= "3.1415 is the same as 31415e-4"; const int ncOk= REXI_DefErr::eNoErr; err= rexs.AddRegDef("$Int","[0-9]+"); assert(err.eErrCode==ncOk); err= rexs.AddRegDef("$Frac","\\.[0-9]+"); assert(err.eErrCode==ncOk); err= rexs.SetRegexp("($Int? $Frac $Exp?|$Int \\. $Exp?|$Int $Exp)[fFlL]?"); const char* pCur= szTestSrc; bool bEosFound= false; cout << "Source text is: \"" << szTestSrc << "\"" << endl; cout << "Floating point number found at position " << " having length " << nMatchLen << endl; cin >> i; A call to the member function REXI_Search::SetRegexp(strRegExp)involves quite a lot of computing. The regular expression strRegExp is analyzed and after several steps transformed into a compiled form. Because of this preprocessing work, which is not needed in the case of an interpreting regular expression parser, this regular expression parser shows its efficiency only when you apply it to large input strings or if you are searching again and again for the same regular expression. A typical application which profits from the preprocessing needed by this parser is a utility which searches all files in a directory. Currently Unicode is not supported. There is no fundamental reason for this limitation and I think that a later release will correct this. I just did not yet find an efficient representation of a compiled regular expression which supports Unicode. Constructing regular expressions Regular expressions can be built from characters and special symbols. There are some similarities between regular expressions and arithmetic expressions. The most basic elements of arithmetic expressions are numbers and expressions enclosed in parens ( ). The most basic elements of regular expressions are characters, regular expressions enclosed in parens ( ) and character sets. On the next higher level, arithmetic expressions have '*' and '/' operators, whereas regular expressions have operators indicating the multiplicity of the preceding element. Most basic elements of regular expressions - Individual characters. e.g. "h" is a regular expression. In the string "this home" it matches the beginning of 'home'. For non printable characters, one has to use either the notation \xhh where h means a hexadecimal digit or one of the escape sequences \n \r \t \v known from "C". Because the characters * + ? . | [ ] ( ) - $ ^ have a special meaning in regular expressions, escape sequences must also be used to specify these characters literally: \* \+ \? \. \| \[ \] \( \) \- \$ \^ . Furthermore, use '\ ' to indicate a space, because this implementation skips spaces in order to support a more readable style. - Character sets enclosed in square brackets [ ]. e.g. "[A-Za-z_$]" matches any alphabetic character, the underscore and the dollar sign (the dash (-) indicates a range), e.g. [A-Za-z$_] matches "B", "b", "_", "$" and so on. A ^ immediately following the [ of a character set means 'form the inverse character set'. e.g. "[^0-9A-Za-z]" matches non-alphanumeric characters. - Expressions enclosed in round parens ( ). Any regular expression can be used on the lowest level by enclosing it in round brackets. - the dot . It means 'match any character'. - an identifier prefixed by a $. It refers to an already defined regular expression. e.g. "$Ident" stands for a user defined regular expression previously defined. Think of it as a regular expression enclosed in round parens, which has a name. Operators indicating the multiplicity of the preceding element Any of the above five basic regular expressions can be followed by one of the special characters * + ? /i - * meaning repetition (possibly zero times); e.g. "[0-9]*" not only matches "8" but also "87576" and even the empty string "". - + meaning at least one occurrence; e.g. "[0-9]+" matches "8", "9185278", but not the empty string. - ? meaning at most one occurrence; e.g. "[$_A-Z]?" matches "_", "U", "$", .. and "" - \i meaning ignore case Catenation of regular expressions The regular expressions described above can be catenated to form longer regular expressions. E.g. "[_A-Za-z][_A-Za-z0-9]*" is a regular expression which matches any identifier of the programming language "C", namely the first character must be alphabetic or an underscore and the following characters must be alphanumeric or an underscore. "[0-9]*\.[0-9]+" describes a floating point number with an arbitrary number of digits before the decimal point and at least one digit following the decimal point. (The decimal point must be preceded by a backslash, otherwise the dot would mean 'accept any character at this place'). "(Hallo (,how are you\?)?)\i" matches "Hallo" as well as "Hallo, how are you?" in a case insensitive way. Alternative regular expressions Finally - on the top level - regular expressions can be separated by the | character. The two regular expressions on the left and right side of the | are alternatives, meaning that either the left expression or the right expression should match the source text. E.g. "[0-9]+ | [A-Za-z_][A-Za-z_0-9]*" matches either an integer or a "C"-identifier. A complex example The programming language "C" defines a floating point constant in the following way: A floating point constant has the following parts: An integer part, a decimal point, a fraction, an exponential part beginning with e or E followed by an optional sign and digits and an optional type suffix formed by one the characters f, F, l, L. Either the integer part or the fractional part can be absent (but not both). Either the decimal point or the exponential part can be absent (but not both). The corresponding regular expression is quite complex, but it can be simplified by using the following definitions: $Int = "[0-9]+." $Exp = "([Ee](\+|-)?[0-9]+)". So we get the following expression for a floating point constant: ($Int? $Frac $Exp?|$Int \. $Exp?|$Int $Exp)[fFlL]?
<urn:uuid:036e46cf-3866-407c-9a12-00e69319ebae>
3.390625
1,935
Documentation
Software Dev.
51.360653
Editor's Note: This article was originally presented at ESC Boston 2011. No software engineering process can guarantee secure code, but following the right coding guidelines can dramatically increase the security and reliability of your code. Many embedded systems live in a world where a security breach can be catastrophic. Embedded systems control much of the world’s critical infrastructure, such as dams, traffic signals, and air traffic control. These systems are increasingly communicating together using COTS networking and in many cases using the Internet itself. Keeping yourself out of the courtroom, if not common decency, demands that all such systems should be developed to be secure. There are many factors that determine the security of an embedded system. A well-conceived design is crucial to the success of a project. Also, a team needs to pay attention to its development process. There are many different models of how software development ought to be done, and it is prudent to choose one that makes sense. Finally, the choice of operating system can mean the difference between a project that works well in the lab and one that works reliably for years in the real world. Even the most well thought-out design is vulnerable to flaws when the implementation falls short of the design. This paper focuses on how one can use a set of coding guidelines, called MISRA C and MISRA C++, to help root out bugs introduced during the coding stage.MISRA C and C++ MISRA stands for Motor Industry Software Reliability Association. It originally published Guidelines For the Use of the C Language In Critical Systems , known informally as MISRA C, in 1998. A second edition of MISRA C was introduced in 2004, and then MISRA C++ was released in 2008. More information on MISRA and the standards themselves can be obtained from the MISRA website The purpose of MISRA C and MISRA C++ guidelines are not to promote the use of C or C++ in critical systems. Rather, the guidelines accept that these languages are being used for an increasing number of projects. The guidelines discuss general problems in software engineering and note that C and C++ do not have as much error checking as other languages do. Thus the guidelines hope to make C and C++ safer to use, although they do not endorse MISRA C or MISRA C++ over other languages. MISRA C is a subset of the C language. In particular, it is based on the ISO/IEC 9899:1990 C standard, which is identical to the ANSI X3.159-1989 standard, often called C ’89. Thus every MISRA C program is a valid C program. The MISRA C subset is defined by 141 rules that constrain the C language. Correspondingly, MISRA C++ is a subset of the ISO/IEC 14882:2003 C++ standard. MISRA C++ is based on 228 rules, many of which are refinements of the MISRA C rules to deal with the additional realities of C++. For notational convenience, we will use the terms “MISRA”, “MISRA C” or “MISRA C++” loosely in the remainder of the document to refer to either the defining documents or the language subsets.
<urn:uuid:4e890a66-39a7-46bf-848e-0d45600bf949>
3.546875
668
Truncated
Software Dev.
48.938182
For more than twenty years, the Homestake Solar Neutrino Experiment in the Homestake Gold Mine in South Dakota has been attempting to measure neutrino fluxes from space; in particular, this experiment has been gathering information on solar neutrino fluxes. The results of this experiment have been checked against predictions made by standard solar models and it has been discovered that only one-third of the expected solar neutrino flux has been detected. This "Where are the missing neutrinos?" question is known as the Solar Neutrino Problem. And it is not just the Homestake experiment that is detecting a shortage of neutrinos. Several other experiments, including Kamiokande II, GALLEX, and SAGE, have noticed a definite neutrino shortfall. Neutrinos are subatomic particles produced during nuclear fission and fusion processes. Like electrons (and muons and tauons), neutrinos are classified as leptons. There are three "flavours" of neutrinos: electron neutrinos, muon neutrinos, and tauon neutrinos. At this time it is unknown whether neutrinos have either mass or magnetic moments but recent observations of Supernova 1987A have set an upper limit on any neutrino magnetic moments at less than about 10^(-13) Bohr magnetons. If neutrinos do have a magnetic moment, then they will either be "left-handed" or "right-handed" in orientation. The Sun produces energy by fusing hydrogen to helium. This may be accomplished in a number of ways but in the Sun, a process known as the proton-proton chain is thought to be primarily responsible for energy generation. H + H --> D + positron + neutrino H + H + electron --> D + neutrino D + H --> He3 + gamma ray He3 + He3 --> H + H + He4 He3 + He4 --> Be7 + gamma ray Be7 + positron --> Li7 + neutrino Li7 + H --> He4 + He4 Be7 + H --> B8 + gamma ray B8 --> Be8* + positron + neutrino Be8* --> He4 + He4 H is hydrogen, D is deuterium (heavy hydrogen), He is helium, Li is lithium, Be is beryllium, and B is boron. Numbers indicate different isotopes. The Homestake experiment detects only the highest energy neutrinos produced by the Sun, the neutrinos produced by the beryllium/boron reactions. Solutions to the solar neutrino problem are usually classified in one of two categories, astrophysical or physical. Solutions that require a change in the way we think about the Sun are termed astrophysical solutions while solutions that require a change in the way we think about neutrinos are called physical solutions. The Homestake experiment has been running for over two solar activity cycles (1 activity cycle = 11 years approximately) and it has been noticed that the neutrino fluxes are not constant. Many researchers have tried to link solar surface activity with neutrino fluxes and, depending upon whether you believe their statistical arguments, have succeeded. It has been claimed that the neutrino flux is correlated to solar radius and solar wind mass flux; and anti-correlated to line-of-sight magnetic flux, p-mode frequencies, and (you guessed it) sunspots. (If two quantities are correlated, then they increase and decrease together. If two quantities are anti-correlated, then when one increases, the other decreases, and vice versa.) Many of these parameters are (anti-) correlated with each other and are internally consistent. The solar activity cycle is usually defined by sunspot numbers but sunspots are related to magnetic activity in the Sun. Many of these other parameters are also directly affected by magnetism. If these correlations really exist, then it would seem that neutrinos are reacting with the magnetic fields in the heliosphere and magnetosphere. Thus, from this evidence, the solution to the solar neutrino problem is a physical one. Another possibility, rarely discussed, is that the solar neutrino flux is actually constant and it is the cosmic ray background that is varying. Cosmic rays are more likely to get through to the Earth during periods of low solar activity. Therefore, neutrinos generated in the Earth's atmosphere by cosmic rays will increase in number during these times. If this cosmic background flux is not correctly subtracted from the total detections, then it will appear that the solar flux is indeed varying with the solar cycle.
<urn:uuid:93f82ccd-7846-46a4-ad89-f06139df4f4c>
3.75
970
Knowledge Article
Science & Tech.
31.30125
Joined: 16 Mar 2004 |Posted: Tue Apr 14, 2009 1:33 pm Post subject: Carbon nanotubes not toxic for mice |A new pilot study has found that single-walled carbon nanotubes are not toxic to mice, even after a period of four months. The result means that SWNTs might not be toxic to humans either and could one day be used for applications such as imaging and therapy. However, more detailed work needs to be done before this can be confirmed with any certainty. SWNTs could be used in a host of biomedical applications, including drug delivery, imaging and destroying tumours. But scientists still do not know whether the nanomaterials are toxic and to what extent. A new study led by Sanjiv Gambhir and colleagues at Stanford University in California has shown that SWNTs injected into the bloodstream of mice are not toxic to the animals. The researchers obtained their results by comparing two groups of five mice, one group that had been administered with between 50 and 150mg of SWNTs, and one that had not. They monitored the mice' body weight, blood pressure, blood cell counts and the amounts of electrolytes present in the bloodstream. At the end of four months, the mice were then killed and their tissues analyzed. While there were some minor differences, the scientists found no major evidence of toxicity. The researchers did find SWNT particles in the liver Kupfer cells – but these cells are part of the reticulo-endothelial system and "gobble" up large particles in the body, so this was not surprising. Gambhir stresses that these results must be interpreted with caution since the study was performed on a limited number of mice and for specific doses of SWNTs. "However, it does set the groundwork for future studies to see if nanotubes will indeed be safe for eventual human use," he told nanotechweb.org. "This is important because nanotubes have a lot of potential as imaging and drug-delivery agents." The team now plans to study a larger number of mice, and other animals. "People are sometimes scared by nanotechnology," continued Gambhir "especially using it on humans. We must continue with the type of study performed here to prove the safety of these strategies."
<urn:uuid:dafdcfb2-f54e-45bb-8ff7-ea83e91453cc>
2.859375
476
Comment Section
Science & Tech.
44.762701
This image shows a rimless, irregular depression that appears brighter than the surrounding material. Such features are thought to be volcanic vents . The shadowed, curved feature leading into the vent is a fault scarp, also called rupes . This image was taken using the Narrow Angle Camera (NAC). Date acquired: April 07, 2012 Image Credit: NASA/Johns Hopkins University Applied Physics Laboratory/Carnegie Institution of Washington
<urn:uuid:4c5a1666-e4d0-4091-a13c-214f1cc8c484>
3.40625
89
Knowledge Article
Science & Tech.
26.347404
(1) An abstract class may contain complete or incomplete methods. Interfaces can contain only the signature of a method but no body. Thus an abstract class can implement methods but an interface cannot implement methods. (2) An abstract class can contain fields, constructors, or destructors and implement properties. An interface cannot contain fields, constructors, or destructors and it has only the property's signature but no implementation. (3) An abstract class cannot support multiple inheritances, but an interface can support multiple inheritances. Thus a class may inherit several interfaces but only one abstract class. (4) A class implementing an interface has to implement all the methods of the interface, but the same is not required in the case of an abstract Class. (5) Various access modifiers such as abstract, protected, internal, public, virtual, etc. are useful in abstract Classes but not in interfaces. (6) Abstract classes are are faster than interfaces.
<urn:uuid:1a172ac3-b411-4e7b-86df-d530346ccb18>
3.34375
197
Q&A Forum
Software Dev.
38.82897
Like people who approach geopolitics with the attitude of "If people would just talk to each other, we would all along", there are a lot of naïve assumptions about just dumping gasoline. We know it causes emissions, and emissions are bad, we know a lot of the money paid for oil goes to fund Middle Eastern terrorism, and that is bad - those things should cause both the left and the right in America to want gasoline gone. And yet it is not gone. The reason is simple: gasoline is a lot more efficient than alternative energy proponents want to believe. Most methane comes from natural gas - natural gas used to be loved but once it got popular it got lumped in with mean old fossil fuels so the search is on to find a new, green approach to methane, using microbes that can convert renewable electricity into carbon-neutral methane. Researchers are raising colonies of microorganisms, called methanogens, which have the ability to turn electrical energy into pure methane, the key ingredient in natural gas. The scientists' goal is to create large microbial factories that will transform clean electricity from solar, wind or nuclear power into renewable methane fuel. No one is asking the Department of Energy to play venture capitalist with taxpayer money again , but basic research in dye-sensitized solar cells may bring the cost of solar down enough to allow for mainstream acceptance - primarily because dye-sensitized solar cells (also known as DSCs) are less fragile than panels that use crystalline silicon, also a benefit of thin-film panels, and don't require a clean room. Microbes have been evolving for millions of years to efficiently digest organic material. Now researchers are tapping these natural processes to maximize energy output from the breakdown and use it to power farms and even waste facilities. One process, developed by researchers at Michigan State University, mimics the natural mechanism of waste digestion and generates 20 times more energy than existing processes by creating ethanol and hydrogen for fuel cells. No matter how much spin you hear and read from highly-paid lobbyists and clueless advocates, green energy is not viable. It will be, though science would get there faster if the Department of Energy would stop throwing money at solar panel companies and instead throw it at basic research, like battery technology. After cost and efficiency, storage is the biggest obstacle preventing widespread use of renewable energy sources like wind and solar power. The ability to store energy when it is produced is an essential waypoint on the road to turning alternative energy into regular energy. The current U.S. energy grid system is used predominantly for distributing energy and allows little flexibility for storage of excess or a rapid dispersal on short notice. Cyanobacteria are small organisms with huge importance. Ancient cyanobacteria created the oxygen atmosphere, and modern cyanobacteria produce a significant amount of the air we breathe. Now, these tiny organisms are helping us again by providing clues to improving biofuel production. Because of their prolific photosynthesizing, cyanobacteria have great potential for solar-powered biofuel production. To tap into that potential, researchers from Queen Mary’s School of Biological and Chemical Sciences recently became the first to visualize and control the “biological electrical switch” that dictates how electrons flow through the bacterium. Materials scientists at Harvard have demonstrated a solid-oxide fuel cell (SOFC) that converts hydrogen into electricity but can also store electrochemical energy like a battery. This fuel cell can continue to produce power for a short time after its fuel has run out. The finding in Nano Letters will be most important for small-scale, portable energy applications, where a very compact and lightweight power supply is essential and the fuel supply may be interrupted. Sometimes declaring bankruptcy is a good thing. In the case of Abound Solar Inc., a U.S. solar manufacturer that had American taxpayers on the hook for $400 million, the good thing is they closed the doors after only losing us $70 million in Department of Energy funds. Chump change, I know, since we have committed $72 billion on alternative energy in the last few years, but $330 million here and $330 million there, and pretty soon we are talking about real money. A new toilet system can turn human waste into electricity and fertilizers and even reduce the amount of water needed for flushing by up to 90 percent The inventors in Singapore call it the No-Mix Vacuum Toilet and it has two chambers that separate the liquid and solid wastes. Using vacuum suction technology, like you find in airplane lavatories, flushing liquids requires only 0.2 liters of water while flushing solids require just one liter. The existing conventional commonly used in Singapore need 4 to 6 liters of water per flush so a single public toilet, that may be flushed 100 times a day, could save about 160,000 liters of water in a year – enough to fill a small pool. Tapping ocean energy sources like tides and offshore wind sound fine to people who understand nothing about science (the Anything But Oil contingent) but in reality it requires pile driving, the practice of pounding long, hollow steel pipes called piles into the ocean floor to support energy turbines and other structures. Pile driving creates loud, underwater booms that can harm fish and other marine animals so if you're thinking CO2 is better for the world, you are right.
<urn:uuid:fa054ec6-08f8-46e8-a77f-1b36851b3205>
3.109375
1,097
Personal Blog
Science & Tech.
31.641054
UML 2 in a Nutshell: Use Case Diagrams Source: O'Reilly Media Use cases are a way to capture system functionality and requirements in UML. Use case diagrams consist of named pieces of functionality (use cases), the persons or things invoking the functionality (actors), and possibly the elements responsible for implementing the use cases (subjects). Use cases represent distinct pieces of functionality for a system, a component, or even a class. Each use case must have a name that is typically a few words describing the required functionality, such as View Error Log. UML provides two ways to draw a use case.
<urn:uuid:284610c1-f89b-4007-acc8-069ea70140b4>
3.84375
129
Knowledge Article
Software Dev.
30.199588