In the set of real numbers the square root of a negative number is a problem because it does not exist. However, √-1 can be used to generate algebras with higher-dimensional numbers, or, to put it the other way round, algebras with higher-dimensional numbers may have one or more solutions to the square root of a negative number. By 'higher-dimensional numbers' we mean that each element of the algebra, each number, may contain multiple real values, similar to a vector. There are many possible algebras using multidimensional elements, but when we talk about real normed algebras we define multiplication so that: - Inverse multiplication (division) always exists, i.e. we can always find the solution to a/b - Norms are preserved by multiplication: |a * b| = |a| |b|, where |a| is the distance from the origin. It turns out that these requirements can only be met by algebras whose elements have 1, 2, 4 or 8 dimensions. These four algebras, which satisfy this condition, have the following properties:

| |dimension|√-1 exists|multiplication commutative|multiplication associative|
|Real Numbers R|1|no|yes|yes|
|Complex Numbers C|2|yes|yes|yes|
|Quaternions H|4|yes|no|yes|
|Octonions O|8|yes|no|no|

Complex numbers contain a copy of the real numbers plus a copy of the real numbers multiplied by the square root of -1. Quaternions contain a copy of the complex numbers plus another copy of the complex numbers multiplied by another square root of -1 (going via a different dimension at 90 degrees to the first). Octonions contain a copy of the quaternions and double up again.
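As an illustration of these properties (a sketch, not part of the original text; the sample values are arbitrary), a minimal quaternion implementation in Python shows multiplication preserving the norm while failing to commute:

```python
import math

def qmul(a, b):
    """Hamilton product of two quaternions given as (w, x, y, z) tuples."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def norm(q):
    """Distance from the origin in 4-dimensional space."""
    return math.sqrt(sum(c * c for c in q))

a = (1.0, 2.0, -1.0, 0.5)
b = (0.3, -4.0, 2.0, 1.0)

# The norm is preserved: |a * b| = |a| |b| (a normed algebra)
assert math.isclose(norm(qmul(a, b)), norm(a) * norm(b))
# ...but multiplication is no longer commutative in 4 dimensions
assert qmul(a, b) != qmul(b, a)
```

The same check fails for octonions only at associativity; for quaternions, associativity still holds.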
A car tire has diameter 64 cm. Determine its angular velocity, in radians per second, when the car is traveling at 100 km/h.

So I want to find the number of rotations the tire makes in one second, and to do that I divide the velocity of the car by the circumference of the tire. C ≈ 0.2 (I think) is the circumference of the circle in meters. v/C ≈ 27.7/0.2 ≈ 138.5 is the number of rotations per second. So the circle makes approximately 138.5 rotations per second. I know that an entire circle is 2π radians, so I just multiply 138.5 by 2π, and that works out to be 870.22 radians/s. The answer is supposed to be 86.83 radians/s and I don't know what I'm doing wrong. Could someone please explain?

I have no idea how to start this one. Help! Write an expression for the angular velocity, in radians per second, for a car tire with diameter d centimeters when the car is traveling at x kilometers per hour.
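For reference, a sketch of the standard approach (ω = v/r, converting everything to SI units first; the helper name is mine) reproduces the expected answer:

```python
import math

def angular_velocity(diameter_cm, speed_kmh):
    """Angular velocity omega = v / r in rad/s."""
    r = (diameter_cm / 100) / 2        # radius in metres (0.32 m here)
    v = speed_kmh * 1000 / 3600        # speed in m/s (about 27.78 m/s here)
    return v / r                       # rad/s

omega = angular_velocity(64, 100)      # approximately 86.8 rad/s
```

Note that dividing speed by the circumference gives rotations per second; multiplying that by 2π is equivalent to v/r, so the discrepancy in the question is a units slip in the circumference, not in the method.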
One hundred lightning bolts strike Earth's surface every second, each carrying up to one billion volts of electricity. Cloud-to-ground lightning is caused by an electrical imbalance: precipitation collects at the base of storm clouds and creates a negative charge, while objects on the ground become positively charged, and nature remedies the imbalance by passing an enormous electrical current between them. Lightning actually moves in steps that work their way down to Earth with incredible speed, creating a fractal pattern; the lowermost step is then met by a surge of positive electricity from the target below, and electricity is channelled through as lightning. This surge can climb through a tree, a building, or even a person, so it's unsurprising that approximately 2,000 people are killed by lightning each year. The lucky ones who survive being struck are sometimes left with remarkable fractal scarring on their skin, called Lichtenberg figures or "lightning trees". Lichtenberg figures were discovered by German physicist Georg Christoph Lichtenberg, who found that when an electrical discharge strikes insulating material, its pattern is reproduced on the surface or interior of the material. It looks like trapped lightning because it quite literally is, showing the fascinating pattern of the branching steps. It's not the heat that causes the scarring, however; it's hypothesized that the shockwave of the lightning current ruptures capillaries under the skin. Since they're not burns, Lichtenberg figures generally last only from a few hours to a few days before fading from the skin.
What Italy lacks in size, it certainly makes up for in solar power. In fact, according to recent calculations, it's on the verge of surpassing the U.S. in total installed solar PV capacity. In 2009, Italy was dubbed the world's second-largest solar power market, just ahead of the U.S. It installed 250 MW every two months, more than the state of California added in an entire year. The U.S. as a whole installed close to half that amount, and considering the large difference in land mass, Italy's carbon footprint is nowhere near that of America. So far in 2010, Italy has installed 1,500 MW of solar power, mainly on rooftops. In comparison, the U.S. has installed 480 MW, with 250 MW in California alone. With overall size, population and economy similar to those of California, the two should, in essence, be neck and neck. By the end of 2010, Italy will have installed more than 2,500 MW of solar power: 1.5 times the amount in America. It has already far surpassed its 2007 goal of reaching 1,200 MW. Looks like size is deceptive when it comes to going green. The score thus far: Italy 1, United States, still draggin' its heels. Photo Credit: GreenDiary
Imagine a coin-tossing game. On each turn, players toss a fair coin 500 times. As they do so, they record all runs of heads, so that if they toss H T T H H H T H T T H H H H T T T, they will record: 1, 3, 1, 4, representing the number of heads in each run. At the end of each round, each player computes the product of their runs-of-heads. The person with the highest product wins. In addition, there is a House jackpot. Any person whose product exceeds 10^60 wins the House jackpot. There are 2^500 possible runs of coin-tosses. However, I'm not sure exactly how many of that vast number of possible series would give a product exceeding 10^60. However, if some bright mathematician can work it out for me, we can work out whether a series whose product exceeds 10^60 has CSI. My ballpark estimate says it has. That means, clearly, that if we randomly generate many series of 500 coin-tosses, it is exceedingly unlikely, in the history of the universe, that we will get a product that exceeds 10^60. However, starting with a randomly generated population of, say 100 series, I propose to subject them to random point mutations and natural selection, whereby I will cull the 50 series with the lowest products, and produce "offspring", with random point mutations from each of the survivors, and repeat this over many generations. I've already reliably got to products exceeding 10^58, but it's possible that I may have got stuck in a local maximum. However, before I go further: would an ID proponent like to tell me whether, if I succeed in hitting the jackpot, I have satisfactorily refuted Dembski's case? And would a mathematician like to check the jackpot? I've done it in MatLab, and will post the script below. Sorry I don't speak anything more geek-friendly than MatLab (well, a little Java, but MatLab is way easier for this).
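The selection scheme described above can be sketched in Python (a rough analogue of the MatLab script, not the original; the generation count and seed are assumed for illustration):

```python
import random

N, POP, GENS = 500, 100, 500

def runs_product(bits):
    """Product of the lengths of all runs of heads (1s) in a toss series."""
    prod, run = 1, 0
    for b in bits + [0]:             # sentinel tail flushes the final run
        if b:
            run += 1
        elif run:
            prod *= run
            run = 0
    return prod

random.seed(1)
pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(POP)]
scored = sorted(((runs_product(s), s) for s in pop), reverse=True)
best0 = scored[0][0]                 # best product in the random population

for gen in range(GENS):
    survivors = scored[:POP // 2]    # cull the 50 lowest-scoring series
    children = []
    for _, s in survivors:           # one offspring per survivor,
        c = s[:]                     # differing by a single point mutation
        c[random.randrange(N)] ^= 1
        children.append((runs_product(c), c))
    scored = sorted(survivors + children, reverse=True)

best = scored[0][0]                  # best never decreases, since survivors are kept
```

Because survivors are carried over unchanged each generation, the best product is monotonically non-decreasing, which is what makes this a hill climb rather than a blind search.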
There are four Unicode normalization forms defined in UAX #15, corresponding to two types of character equivalence. The first, and most important, is canonical equivalence. This is equivalence between sequences of codepoints which represent the same abstract character. This includes:
- pre-composed characters and their separate base+combining forms
- pre-composed Hangul and their Jamo sequences
- unified singleton characters (e.g. mapping the Ohm sign to the Greek capital Omega by which it is represented)
- canonical ordering for combining marks when multiple combining marks are used

The second character equivalence is compatibility equivalence. Compatibility equivalence is where two characters represent the same abstract character but differ in appearance or behavior. Differences include:
- Font variants - cursive, bold, mathematical
- Breaking differences - different hyphen and space types
- Circled characters
- Width, size, rotated - variations common in Japanese scripts
- Superscripts and subscripts - 2⁵ becomes "2"+"5"
- Squared characters - Japanese words written in a single character square
- Fractions - representing the single ½ character as "1"+"/"+"2"

These two equivalences can be combined to provide four normalization forms:
- NFD - canonical decomposition
- NFC - canonical decomposition, followed by canonical composition
- NFKD - compatibility decomposition
- NFKC - compatibility decomposition, followed by canonical composition

Order matters, since the canonical and compatibility normalizations are not commutative. Normalizing by performing canonical composition followed by compatibility decomposition is not a defined form in the standard. Note that neither of these equivalences ignores control characters, nor do they attempt to unify characters which look alike, such as Latin and Cyrillic "o", so spoofing is a security issue regardless of Unicode normalization.
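The four forms can be explored directly with Python's standard `unicodedata` module; a small demonstration (the specific codepoints are chosen for illustration):

```python
import unicodedata

composed = "\u00e9"      # é as a single precomposed codepoint
decomposed = "e\u0301"   # e followed by a combining acute accent

# Canonically equivalent, but unequal as codepoint sequences
assert composed != decomposed
assert unicodedata.normalize("NFC", decomposed) == composed
assert unicodedata.normalize("NFD", composed) == decomposed

# Compatibility equivalence: the 'fi' ligature decomposes under NFKD...
assert unicodedata.normalize("NFKD", "\ufb01") == "fi"
# ...but canonical normalization leaves it untouched
assert unicodedata.normalize("NFD", "\ufb01") == "\ufb01"

# Superscript five maps to a plain digit under the compatibility forms
assert unicodedata.normalize("NFKC", "2\u2075") == "25"
```

This makes the non-commutativity point concrete: NFKC is not "NFC plus something reversible", since compatibility decomposition discards the superscript/ligature distinction entirely.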
Problems with Normalization

The problem inherent in a character set as complex as Unicode is that the "same" (by any of the definitions above) character can have multiple representations. Thus the same conceptual word may be represented with different sets of codepoints, and so fail to match as a search term or database key or filename, etc. For example, Googling the same word in its NFC and NFD forms will return a different set of results, which makes no sense. Since web search is inherently an approximate art to begin with, this is perhaps not directly perceived as a "bug" by users, though certainly their experience would be improved if searching gave the same results. Googling for unicode normalization bug, on the other hand, turns up a large number of hits for interoperability problems with filenames, and with systems sharing data between multiple users such as wikis. A large part of the problem is that whereas Windows and Linux tend to default to NFC normalization, and Mac OS X input tends to produce NFC, the Mac HFS filesystem automatically normalizes to NFD (as well as doing full Unicode case-folding). But even if this were not the case, problems would arise, just less predictably.

Approaches to Normalization

R6RS provided four separate procedures to convert explicitly between the four Unicode normalization forms. This is the obvious choice, and is what most other languages do. Whenever normalization matters (yes, every programmer working with any kind of text needs to understand normalization and when it is necessary), you need to convert all input to a chosen normalization form. Or, more likely, ignore it until you get a rare but inevitable normalization bug, then go back to your design and figure out where all the relevant inputs are. But it's what everyone else does, so at least we won't get laughed at for this choice. Another option is to leave all strings in their original form and just compare them in a normalization-insensitive manner, e.g.
with `string-ni=?'. If you want to hash with these you'll also need `string-hash-ni', and if you want to sort strings or use them in search trees you'll need `string-ni<?'. Searching will require `string-contains-ni', though this could (very inefficiently) be built using `string-ni=?'. In general, anything that needs to be normalization-independent needs to be built specially, and you get a lot of duplicated work. And the programmer still needs to remember to actually _use_ this API where appropriate instead of `string=?' and the like. How can we make things easier for the programmer? One approach is to globally represent all strings in the same normalization form in memory. This is mostly a matter of normalizing on port input, but also involves some simple checks on endpoints when concatenating strings. String mutation is more complicated as well, though I think strings should be immutable anyway. The advantage is that this is all done once, at the implementation level. The programmer never needs to worry about normalization, and can just compare with `string=?' and `string<?' as before (so existing code works too). Normalization then becomes an encoding issue, only of concern when working with an external system or format that expects a normalization form different from your own. Automated normalization is not a complete silver bullet, though. The fundamental problem is that we're working with codepoints when conceptually most of the time we want to be working with graphemes. The problem can be seen when searching for a string ending in a base character in a document which has that same string followed by a combining character (in any of the normal forms this is possible). Even when both are in the same normal form the search will return success, but the string doesn't actually match. If we were comparing grapheme by grapheme, on the other hand, it would correctly ignore the partial match.
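The normalization-insensitive comparison API sketched above (`string-ni=?' and friends) can be modeled in any language by normalizing both operands before comparing; for instance, in Python (function names are my rough equivalents of the Scheme ones, not a real API):

```python
import unicodedata

def string_ni_eq(a: str, b: str) -> bool:
    """Compare two strings, ignoring normalization differences."""
    return unicodedata.normalize("NFD", a) == unicodedata.normalize("NFD", b)

def string_ni_lt(a: str, b: str) -> bool:
    """Ordering consistent with string_ni_eq, for sorting and search trees."""
    return unicodedata.normalize("NFD", a) < unicodedata.normalize("NFD", b)

def string_ni_hash(s: str) -> int:
    """Hash consistent with string_ni_eq, for hash tables."""
    return hash(unicodedata.normalize("NFD", s))

# Precomposed and decomposed spellings of the same word compare equal
assert string_ni_eq("caf\u00e9", "cafe\u0301")
assert string_ni_hash("caf\u00e9") == string_ni_hash("cafe\u0301")
```

The duplicated-work problem is visible even in this tiny sketch: every operation that touches string identity needs its own `-ni' variant.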
And assuming graphemes are all internally normalized, this would provide the same benefits as automatically normalized strings. So using graphemes instead of codepoints (or layered over codepoints) as our basic unit of strings looks like a promising way to simplify the programmer's life. Of course, with the exception of the first option, none of these has been specified formally, much less implemented, much less tested and used in real applications. But they deserve consideration. Unfortunately, the first option, by exposing direct control over codepoint sequences, makes it impossible to implement either of the last two options, so it seems premature to standardize on this in WG1. A practical compromise would be to provide a single `string-normalize' procedure, which converts a string to a system-specific normalization form. In an ASCII-only implementation, or an auto-normalizing implementation, this could be the identity function. Other implementations would simply choose a single preferred form. Then programmers writing portable code could code just as they do in the first option, with the caveat that the normalization form has been chosen for them. This procedure would be sufficient to implement the API described in the second option, though more optimized versions could be provided by the implementation. Control of explicit normalization forms would be a matter for I/O and/or byte-vectors, probably in a separate WG2 module. And APIs that make the programmer's life easier would at least be possible.
Light Pollution and the Palomar Observatory

Milky Way over Palomar, photo by Wally Pacholka of AstroPics.com

What is light pollution? Light pollution is any adverse effect of light caused by society. Light pollution is an increasing problem for observatories everywhere. One of the reasons Palomar Mountain was selected as the site for the 200-inch telescope was its dark skies, which would allow observation of the faintest galaxies without the interference of city lights. Since 1934, rapid urbanization of southern California has resulted in a significant increase in the amount of sky glow. If such light pollution continues to increase, it will seriously reduce the effectiveness of the Palomar Observatory for many types of research. Caltech and the Palomar Observatory have worked with, and will continue to work with, city, county, and tribal governments to diminish the effects of local light pollution.

This simulation (3.6 MB) shows how the increase of light pollution over time makes the night sky harder and harder to see, not only for astronomers working at Palomar, but for everyone all over the Southern California region.

This partial panorama looks southwest (left), through north (center, towards the 200" dome), to northeast (right). It reveals the sky glow caused by lights in San Diego County (left), Riverside County (center) and Palm Springs (right). Photographed February 4, 2005.

Sky Preservation at Palomar Observatory (published in 1991, but still holds true today)
Lighting information for local home owners (Adobe Acrobat file)

What can you do to help control light pollution in your area?
- Riverside County's Light Pollution Ordinance (No. 655)
- San Diego County's Light Pollution Ordinance - choose "Frames" or "No Frames" and then put "light pollution" into the search window
- San Diego County's Dark Skies and Glare Guidelines for Determining Significance (draft version)
- San Diego County light zone map

City ordinances within San Diego County:
- City of Chula Vista - updated link
- City of Escondido - Article 35 Outdoor Lighting Ordinance
- Reducing Outdoor Retail Lighting - a brochure from Escondido
- City of Imperial Beach Lighting Regulations
- City of Oceanside Light Pollution Regulations - Chapter 39
- City of Poway 17.08.220 Section L
- City of San Diego Lighting Code
- City of San Marcos (search for lighting in the document)
- City of Vista (pages 10 - 13)

City ordinances within Riverside County:
- City of Murrieta's Mount Palomar Lighting Standards (16.18.110): http://www.amlegal.com/library/ca/murrieta.shtml, choose frames or no frames and enter 'Palomar' in the search window
- The City of Temecula adheres to Riverside County's Light Pollution Ordinance (No. 655)

Learn more about light pollution:
- Visibility, Environmental, and Astronomical Issues Associated with Blue-Rich White Outdoor Lighting
- Seeing Blue - a less technical article on the effects of blue-rich white outdoor light

Star parties & observing the night sky:
- Also see their evening tours of Palomar Observatory

Palomar sky brightness data collected by the National Park Service Night Sky Team on March 24, 2006.
Quaoar is about 4 billion miles away from Earth, well over a billion miles farther away than Pluto. Earth as seen by the departing Voyager spacecraft as it left the solar system: a tiny, pale blue dot. Six years ago, then NASA Associate Administrator Wesley Huntress, Jr., stated, "Wherever liquid water and chemical energy are found, there is life. There is no exception." Few years have presented astrobiology with as many remarkable vistas and fresh perspectives on this fundamental triad of water, chemical energy and life as 2004. Consider this year's accomplishments of those dedicated to searching for life in the universe. Landing on Mars not once, but twice. Then finding evidence for water on opposite sides of the red planet. Picking up what appear to be methane signals in the martian atmosphere, one of the residues that might one day prove to be the product of underground biology. Scientists began to discuss seriously what colonization strategies make sense. Setting off to explore the even richer atmosphere of the Earth-like moon, Titan. Spiraling into orbital capture around Saturn and photographing its majestic rings. Flying through the tail of a comet and heading home after collecting the first extraterrestrial samples from such dusty iceballs. Launching the Deep Impact probe to smash into a comet and watch how the dust and ice get kicked up. Filling the astronomy catalogs with well over a hundred new planets, including what may prove to be the first visible exoplanet. Finding some nearby candidates that might occupy temperate locations or safely orbit Sun-like stars. Witnessing the once-per-century passage of our neighboring Venus across the face of the Sun. The MESSENGER probe took off on its decade-long tour of the inner solar system to orbit Mercury. Discovering the largest planetoids beyond Pluto among those outer nurseries where only comets visit.
The editors of Astrobiology Magazine revisit the highlights of the year and, where possible, point to one of the strongest lineups ever for beginning a new turn of the calendar. Between the marathon still being run by the twin Mars rovers and the expected descent to Saturn's moon, Titan, next year promises no letdowns.

The artist's rendition shows Quaoar in relation to other bodies in the solar system, including Earth and its Moon; Pluto; and Sedna, the largest known planetoid beyond Pluto. Image Credit: NASA/JPL-Caltech

Number six on the countdown of 2004 highlights was the detection of planetoids beyond Pluto. In December, David Jewitt (University of Hawaii) and Jane Luu (MIT Lincoln Lab) presented the first high-quality spectrum of a bright Kuiper Belt Object beyond Pluto, (50000) Quaoar. What they found was the signature of potential volcanic heating, since the ice spectrum showed signs of a crystallizing, not amorphous, process at work on the icy planetoid. The surface temperature of Quaoar is only 50 K (-220 C) and, at these low temperatures, the thermodynamically preferred form of ice is amorphous (meaning "structureless": the water molecules freeze where they stick, in a jumbled pattern). The data show that the ice on Quaoar has at some time been raised in temperature above 110 K, the critical temperature for transformation from amorphous to crystalline. Two ways to heat the ice are: 1) to form it at temperatures above 110 K, presumably beneath the frigid surface, and then somehow expose it to view from Earth (warm ice could be excavated by impact from deeper layers, or blown onto the surface by low-level cryovolcanic outgassing through vents); or 2) to heat ice on the surface above 110 K by micrometeorite impact. The timescale for this "back-conversion" of crystalline to amorphous ice is uncertain but probably on the order of 10 Myr for the surface ice.
10 Myr is effectively "yesterday" compared to the 4500 Myr age of the solar system. This means that whatever process emplaces the crystalline ice (basically either impact gardening or cryovolcanic outgassing) has been active in the immediate past and, indeed, is probably still active. While the interpretation remains speculative, the good news is that the researchers are, for the first time, able to take useful spectra that reveal unexpected and intriguing properties of the surface of distant Quaoar. Quaoar's "icy dwarf" cousin, Pluto, was discovered in 1930 in the course of a 15-year search for trans-Neptunian planets. It wasn't realized until much later that Pluto actually was the largest of the known Kuiper belt objects. The Kuiper belt wasn't theorized until 1950, after comet orbits provided telltale evidence of a vast nesting ground for comets just beyond Neptune. The first recognized Kuiper belt objects were not discovered until the early 1990s. This hard-to-pronounce planetoid was named after a creation god of the Tongva Native American tribe, the original inhabitants of the Los Angeles basin. According to legend, Quaoar "came down from heaven; and, after reducing chaos to order, laid out the world on the back of seven giants. He then created the lower animals, and then mankind."
- Mars Reconnaissance Orbiter (MRO) launch, Mars orbiter to collect high-resolution, 1-meter, stereo images of Mars
- European Venus Express, Venus orbiter with a two-year nominal mapping life [486 days, two Venus years]
- New Horizons, Pluto and moon Charon flyby, mapping the outer solar system cometary fields and Kuiper Belt
- Dawn, asteroid Ceres and Vesta rendezvous and orbiter, including investigations of asteroid water and influence on meteors
- Kepler, extrasolar terrestrial planet detection mission, designed to look for transiting Earth-size planets that eclipse their parent stars [surveying 100,000 stars]
- Europa Orbiter, planned orbiter of Jupiter's ice-covered moon, Europa; uses a radar sounder to bounce radio waves through the ice
- Japanese SELENE lunar orbiter and lander, to probe the origin and evolution of the Moon
- Japanese Planet-C Venus orbiter, to study the Venusian atmosphere, lightning, and volcanoes
- Mars Scout mission, final selections August 2003 from four Scouts: SCIM, ARES, MARVEL and Phoenix
- French Mars remote sensing orbiter and four small Netlanders, linked by an Italian communications orbiter
- BepiColombo, European Mercury orbiters and lander, including Japanese collaborators; lander to operate for one week on the surface
- Mars 2009, proposed long-range rover to demonstrate hazard avoidance and accurate landing dynamics

Related Web Pages:
2003: Year in Review
Solar System Exploration Survey
Mars Opportunity Rover
Mars Spirit Rover
Planet Ten: Beyond Pluto?
...was corecipient, with the Swedish astrophysicist Hannes Alfvén, of the Nobel Prize for Physics in 1970 for his pioneering studies of the magnetic properties of solids. His contributions to solid-state physics have found numerous useful applications, particularly in the development of improved computer memory units. German physicist whose research in solid-state physics and electronics yielded many devices that now bear his name. ...used in virtually every kind of electronic device—computers, radios, transmitters, components of high-fidelity sound systems, and so on. After World War II the transistor was perfected, and solid-state devices (based on semiconductors) came to be used in all applications at low power and low frequency. The common conception at first was that solid-state technology would rapidly render... study of rocks ...engineers examine the nature and behaviour of the materials on, in, or of which such structures as buildings, dams, tunnels, bridges, and underground storage vaults are to be constructed; solid-state physicists study the magnetic, electrical, and mechanical properties of materials for electronic devices, computer components, or high-performance ceramics; and petroleum reservoir...
KML is a standard XML file, where the schema defines GE objects. You've seen several important node types that define placemarks, points, lines, and polygons. Collectively, you can call them "GeoObject" nodes. The KML schema defines some other nodes that help organize and display the GeoObject nodes, such as the <Folder> node and style-related nodes. At the root of a KML document, you can define a folder, a GeoObject, or a style object. <Folder> nodes can contain other <Folder> nodes, GeoObject nodes, and so on, letting you create a hierarchy. When you create a KML file, you'll use folders to arrange GeoObjects. You define folders with a set of child nodes that control how GE will render the folder. Here's an example:

<Folder>
  <description>A folder is a container that can hold multiple other objects</description>
  <open>1</open>
  <Placemark>
    <name>Folder object 1 (Point)</name>
  </Placemark>
  <Placemark>
    <name>Folder object 2 (Polygon)</name>
  </Placemark>
  <Placemark>
    <name>Folder object 3 (Line)</name>
  </Placemark>
</Folder>

The simple <Folder> node above has a name and a description that will be visible to the user in the GE object tree—the sidebar panel that appears on the left side of the GE interface. The <open> node defines the default status of the folder when the KML is loaded; folders can initially appear either open (1) or closed (0). This folder contains several placemarks, but remember, folders can contain other folders as well. You can create any folder hierarchy you need. For example:

<Folder>
  <description>This is a main folder and can contain points, lines, polygons or other folders</description>
  <Folder>
    <open>0</open>
    <description>This is a subfolder that is closed by default, and can contain points, lines, polygons or other folders</description>
  </Folder>
</Folder>

Creating KML Files Programmatically

The simplest way to create an XML file with .NET is to use the XmlWriter object. XmlWriter provides methods that help you write well-formed XML documents easily and efficiently. You can output the created XML to a standard file or stream it on the web. This next example uses a MemoryStream to write XML to memory (see Listing 1). ASP.NET can direct the XML stream to an OutputStream.
This sample will create a valid KML file that will contain a point placemark. Coordinates are received as input parameters, and XML is returned as a string output parameter. Most of the code in Listing 1 is quite easy to understand, but here are the important points: a NumberFormatInfo object controls how the ToString() function formats the latitude, longitude, and altitude values. The code ensures that the decimal separator is a period and that the NumberGroupSeparator is a null string, conforming to KML specifications. After creating the KML, the code rewinds the MemoryStream (memStream.Seek(0, IO.SeekOrigin.Begin)), sets an encoding, and then reads the entire stream as the return value—in other words, the function returns a text string containing the created KML document. You could also return the MemoryStream itself or work with an ASP.NET OutputStream to write the KML directly to a browser. The attached code contains a simple working program that uses the GetKML function in Listing 1 to create a KML document and display it to users. By extending the methods shown in this sample you can create any KML file you need, adding code to insert Folders, Lines, or other valid node types into the stream. As you've seen, it's easy to create KML files programmatically, but you can improve on that model by creating a class that "maps" a KML file in an object-oriented manner. That way you can work with typed data and with a reusable resource library, without worrying about the details of the KML format and without having to rewrite similar code to produce other KML documents. A KML wrapper class would expose methods and properties such as:

Public Function ToKml() As String
Public Property Title() As String
Public Property Folders() As FoldersCollection

You could also define classes that map the basic KML structures, such as Point, Line, Shape, and Coordinate classes, as well as Style and Folder classes.
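The article's Listing 1 uses .NET's XmlWriter; as a rough cross-language sketch of the same idea (the `get_kml` helper name is mine, not the article's code), Python's standard `xml.etree.ElementTree` can build an equivalent point placemark:

```python
import xml.etree.ElementTree as ET

def get_kml(name, lon, lat, alt=0.0):
    """Build a minimal KML document containing a single point placemark."""
    kml = ET.Element("kml", xmlns="http://www.opengis.net/kml/2.2")
    doc = ET.SubElement(kml, "Document")
    pm = ET.SubElement(doc, "Placemark")
    ET.SubElement(pm, "name").text = name
    point = ET.SubElement(pm, "Point")
    # KML coordinates are "lon,lat[,alt]" with '.' as the decimal separator;
    # Python float formatting always uses '.', regardless of locale
    ET.SubElement(point, "coordinates").text = f"{lon},{lat},{alt}"
    return ET.tostring(kml, encoding="unicode")

kml_text = get_kml("Test point", -116.86, 33.36)
```

As in the .NET version, the result is a plain string, so it can equally be written to a file or streamed as an HTTP response.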
GE and KML Files

GE can open KML files from your hard disk or download them from the web, so you can publish your data on your web site or from your web application. To publish simple static data that doesn't require user interaction, all you need to do is create the KML files and make them available. People can download the files and load them into GE through the File → Open menu. If you output a content-type of "application/vnd.google-earth.kml+xml", the user's browser will open the file with GE, without any user interaction.

Author's Note: from a web browser's point of view, GE is a standard external plug-in, meaning that if GE hasn't been installed on that machine, the browser will not show any content for downloaded KML documents, because it won't know which helper application handles the KML format.
What is the optimum stroke angle for a duck's foot when paddling? Although a duck may already know the answer intuitively, the question has now been clarified for us humans as part of a recent research project undertaken at the prestigious California Institute of Technology (Caltech) in the US. Research scientist and postdoctoral scholar Dr. Daegyoum Kim performed the study along with Professor Morteza Gharib. The team employed a range of experimental setups with mechanical flappers, clappers and paddlers in tanks of fluid seeded with silver-coated glass microspheres and illuminated by an Nd:YAG laser.
<urn:uuid:2ec76ebe-5ca5-4a5e-a10e-ba6f36a47c5d>
2.90625
140
Content Listing
Science & Tech.
41.869
Often the Document Object Model is thought of as an abstract concept that is very difficult for beginners to deal with. With this series of articles, I will try to simplify and demystify the use and application of XML and the DOM. First, we will look at what XML is, and then move on to what functions are available to manipulate and use XML.

What is XML?

XML is an acronym for eXtensible Markup Language; it was designed to make it easy to represent data in a structured way, almost like a database. Though it is not quite a database, it does allow for formatting and storing structured data persistently. Although the word "language" appears in the name, XML is not actually a language itself; it is better understood as a specification that enables you to create your own markup language(s). It is a subset of Standard Generalized Markup Language (SGML), the mother of all markup languages; incidentally, SGML is also the parent of the more popular Hyper Text Markup Language, or HTML.

XML, as already stated, makes it easy to exchange information between different applications. For example, a program written in PHP will be able to process information created and stored in an XML document by another programming language such as Perl or ColdFusion.

For those of you who are familiar with HTML, using XML documents should not be a problem at all, because it is similar to how documents are formatted in HTML. The main difference between the two is that the HTML specification has a fixed set of elements and attributes, so you have to learn to use those if you want to write any sensible HTML document. That in itself is an advantage, because it makes it easy for developers across the world to read and write HTML documents. For example, if you want to make text bold in HTML you use the <b></b> or <strong></strong> tags. Any developer who is familiar with HTML will know what they are. XML, on the other hand, does not have any fixed elements or attributes.
It allows you to create your own elements and attributes, which of course gives you the power to define your own language or to use someone else's definition. The flexibility that XML offers is what makes it so powerful, not to mention useful. This is also why programs created by different applications can read from and write to an XML document.

Structure of an XML Document

Creating an XML document is very easy; viewing one is not. Although an XML document is similar to an HTML document, in the sense that it follows a specification and has elements and attributes defined as tags, it cannot be directly rendered by many browsers unless you specify formatting information in some way. We will get to this in a moment, but look at how easy it is to create an XML document:

Version - Describes the version of XML that is used by this document.
Root Element - In this case, the root element is called articles. There can only be one root element per XML document.
Child element - The third line defines a child element of the root element; in this case it is called article and it has an attribute called title, which is set to "Change in Politics".

From the structure of the document above you can easily work out that the document is about articles and that the first article is titled "Change in Politics". Articles have other attributes as well, such as the date each was written and the author's name. Therefore, if we want a fuller description for our document, we can modify it in the following manner:

That is all it takes to create your own document. To summarize, an XML document must have the following parts:

All XML documents must contain an xml version line. It may also contain a character encoding declaration. For example:

To make your XML document more portable and more readable by other applications, a valid XML document will contain a DTD or an XML Schema, or at least a reference to one of these.
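The article's inline XML listings were lost in conversion. Based on the description above, the example document would look something like the reconstruction below, parsed here with Python's standard library to show the element-and-attribute structure (the author and date attribute values are invented placeholders, not the article's):

```python
import xml.etree.ElementTree as ET

# Reconstruction of the document described in the article: an
# <articles> root with one <article> child. The title comes from the
# text; the author and date values are invented placeholders.
doc = """<?xml version="1.0"?>
<articles>
  <article title="Change in Politics" author="Jane Doe" date="2004-06-01">
    Article body text would go here.
  </article>
</articles>"""

root = ET.fromstring(doc)
print(root.tag)  # the single root element
for article in root:
    print(article.tag, article.attrib["title"])
```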
For example:

XML documents contain one or more elements; these in turn can contain more than one attribute. Elements can contain other elements, or information between their starting and ending tags. For example:

XML functions in PHP

PHP, one of the most popular server-side scripting languages, has long been able to store, retrieve, and manipulate information stored in databases. Because XML has gained so much popularity among developers, PHP added functions that enable developers to manipulate XML data. Not only do these functions enable us to manipulate data contained within XML documents, they also enable us to create XML documents. Most of the functions are designed to parse XML documents. As you can imagine, to work with an XML document, these functions must be able to get at and work with the names and values of elements and attributes and any other type of XML document component. In the next couple of sections and articles, we will look at the various functions provided by PHP. Here are some of the XML functions that are available in PHP:

xml_parser_create: This is the basic function for creating an XML parser, which can then be used with the other XML functions for reading and writing data, getting errors, and a variety of other useful tasks. Use xml_parser_free() to free up the resource when done. An example use: $xmlParser = xml_parser_create();

xml_parse_into_struct: Parses XML data into an array structure. You can use this function to take the contents of a well-formed XML file, turn it into a PHP array, and then work with the contents of the array. Below is an example use of this function:

When you run the above example, you get the following result:
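The PHP listing for xml_parse_into_struct is not reproduced in this excerpt. PHP's xml_* functions wrap the expat parser, which Python also exposes, so the parse-into-structure idea can be sketched as follows (a rough analogue of the PHP function, not the article's code):

```python
import xml.parsers.expat

def parse_into_struct(data):
    """Rough analogue of PHP's xml_parse_into_struct(): return a flat
    list of (event, name_or_text) records in document order."""
    records = []
    p = xml.parsers.expat.ParserCreate()
    # Record element opens, character data, and element closes as they
    # stream past, just as the PHP function builds its index/values arrays.
    p.StartElementHandler = lambda name, attrs: records.append(("open", name))
    p.CharacterDataHandler = lambda text: records.append(("cdata", text))
    p.EndElementHandler = lambda name: records.append(("close", name))
    p.Parse(data, True)
    return records

print(parse_into_struct("<note><to>Tove</to></note>"))
```

Each record marks where an element opens, where its text content sits, and where it closes, which is essentially the structure the PHP array exposes.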
<urn:uuid:34558224-f73d-42f6-bdb2-5d267bd3ff97>
3.734375
1,178
Documentation
Software Dev.
46.377165
Temperature in the Troposphere

[Image: The temperature gets colder as you go upward in the troposphere. Original artwork by Windows to the Universe staff (Randy Russell).]

The temperature gets colder as you go upward in the troposphere. Light from the Sun heats the ground. The warm ground gives off the heat as infrared "light". The IR energy heats the troposphere. The lowest part of the troposphere is the warmest because it is closest to the ground, where the heat is coming from.
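To put the cooling trend in rough numbers (the figures below come from the standard-atmosphere model, not from this article): in that model, tropospheric temperature falls by about 6.5 °C for every kilometre you climb.

```python
# Sketch using the International Standard Atmosphere values (about
# 6.5 degC of cooling per km, 15 degC at sea level). These are model
# numbers for illustration, not measurements from the article.
LAPSE_RATE_C_PER_KM = 6.5
SEA_LEVEL_TEMP_C = 15.0

def troposphere_temp_c(altitude_km):
    """Approximate air temperature at a given altitude in the troposphere."""
    return SEA_LEVEL_TEMP_C - LAPSE_RATE_C_PER_KM * altitude_km

for km in (0, 5, 11):
    print(km, "km:", troposphere_temp_c(km), "degC")
```

By the top of the troposphere (around 11 km in this model) the air has cooled to well below -50 °C, even though the warmest air sits right at the ground.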
<urn:uuid:943b498a-1421-4762-bf18-4bad5ea4112f>
3.84375
459
Content Listing
Science & Tech.
54.634651
Heath Stewart (2005) and Josh Keane (2002) studied diffusion of impurities in silver chloride. Heath's impurity was cadmium and Josh's was calcium. They obtained a radioactive isotope of the impurity dissolved in an acidic solution. They put several drops of the solution on the top surface of a cylindrical sample of silver chloride. Then they placed the crystal in a hot furnace for several hours. They used sandpaper to grind down the crystal to obtain a graph of activity as a function of depth from the surface. After multiple trials at different temperatures, they were able to create an Arrhenius plot, which gave them a value for the activation energy for the particular impurity.
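The Arrhenius analysis described above can be sketched numerically. The diffusion data below are synthetic (the students' measurements are not given), generated with a known activation energy so that the straight-line fit of ln D against 1/T can be checked:

```python
import math

# Arrhenius relation: D = D0 * exp(-Ea / (k_B * T)).
# A straight-line fit of ln D against 1/T therefore has slope -Ea/k_B.
K_B = 8.617e-5  # Boltzmann constant, eV/K

def activation_energy_ev(temps_k, diffusivities):
    """Least-squares slope of ln D vs 1/T, converted to Ea in eV."""
    xs = [1.0 / t for t in temps_k]
    ys = [math.log(d) for d in diffusivities]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return -slope * K_B

# Synthetic data generated with Ea = 1.0 eV, D0 = 1e-4 (illustrative only).
temps = [550.0, 600.0, 650.0, 700.0]
ds = [1e-4 * math.exp(-1.0 / (K_B * t)) for t in temps]
print(activation_energy_ev(temps, ds))
```

With real, noisy activity-versus-depth data the fit would scatter around the true slope, but the extraction of Ea works the same way.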
<urn:uuid:39d2a81e-4bb1-4852-8179-b69ca5dca2d9>
3.09375
161
Academic Writing
Science & Tech.
40.916154
3.6. Rings in Bulge-dominated Systems and Hoag-like Galaxies A recent study of an early-type system that is almost certainly a collisional ring system is the study of AM 1724-622 by Wallin and Struck-Marcell (1994). This pair of galaxies was given the name "The Sacred Mushroom" by Arp and Madore (1987), since the ring forms the cap of the mushroom shape, and the bridge to the highly elongated companion appears as the mushroom's "stalk". In the AM catalog there are a number of such "mushroom-shape" ring galaxies, and northern examples may include Arp 284. Perhaps Arp 148 is also an early stage of the same phenomenon. AM 1724 has a smooth morphology, which led Wallin and Struck-Marcell to speculate that it may be an example of an early-type ring system. This indeed proved to be the case, based on the photometry of the ring. Unfortunately only small parts of the ring could be measured to determine the color of the ring material because of the large number of foreground stars contaminating the galaxy, which lies at a galactic latitude of -15 degrees. Nevertheless, tentative colors for the ring confirmed that its colors are red (U - B = 0.67, B - V = 0.87) compared with the ring sample of Appleton and Marston (1995). Profiles of ring sections suggested sharp-edged caustics, predicted in a purely stellar ring. Ironically, this southern galaxy may be one of the few truly stellar ring systems studied so far which conforms to the models of Lynds and Toomre (1976), being free of the complicating effects of gas. Of possible relevance to collisional ring galaxies in early type galaxies are galaxies like Hoag's object (Hoag 1950; Schweizer et al. 1987). These galaxies are characterized by having a central nuclear bulge surrounded by a smooth, thick and extremely regular ring, often almost perfectly circular. Optical spectra of Hoag's object taken by O'Connell et al. 
(1974) showed the galaxy to be dominated by late-type stars in the core, and this is confirmed by the red colors of the galaxy (Brosch 1985). However, more recent observations by Schweizer et al. (1987) show emission lines of H and [OIII] 5007 in the ring and the detection of significant amounts of HI emission from the galaxy. These observations suggest that the ring contains a young population of stars, despite the overall red color of the system. Ever since its discovery, the origin of the almost perfect ring has been controversial. Hoag was unsure of its origin, but suggested that it might be a gravitational lens, although this would require an exceptionally large mass of the central bulge. Brosch (1985) suggested that the ring was produced at a resonance by a weak bar, similar to the ringed galaxies of Buta (1994). However, deep imaging by Schweizer et al. failed to show any evidence for a bar, apparently ruling out the formation of the ring by this mechanism. On the other hand, it is possible that any ring-making bar may have recently dissolved. The authors discuss the possibility that the galaxy is a collisional ring, but do not favor this interpretation because the central bulge has the same radial systemic velocity as the ring. If the central bulge was a small elliptical galaxy caught in the act of penetrating the disk and generating a ring, the authors argue that it should have a high radial velocity compared with the ring. The origin of these galaxies would still appear to be a mystery. We believe that, although the case for a collisional origin for Hoag's object is not strong, it cannot yet be completely ruled out. 
For example, it is possible that the argument about the small velocity of the companion relative to the ring can be circumvented if the companion has been strongly decelerated by dynamical friction with a massive halo in the target galaxy and is about to fall back onto the target or has already merged (see Section 3.4 for more discussion of this possibility). This would be consistent with the heavy halo around the galaxy postulated by Schweizer et al. based on the kinematics of the ring. We note that, if the ring in Hoag's object was formed collisionally, its relatively red colors and low ring star formation rates argue for an early-type target disk. Early-type galaxies may naturally contain a more dominant dark-matter component and bulge capable of significantly slowing and eventually absorbing a small intruder galaxy. However, such a scenario, whilst explaining some of the properties of Hoag's object, does require a combination of remarkable circumstances. For example, we would have to be viewing the collision from a special position (exactly along the symmetry-axis of the disk and target trajectory) and at a special time (either just at the moment when it has reached its furthest distance from the target and has zero relative velocity to the ring, or just after it has merged). Since such circumstances would be rare, we suggest that spectroscopic studies of the other Hoag-like galaxies would be worthwhile to increase the statistics on the velocity of the central objects relative to the ring.
<urn:uuid:8cd5a0cd-1526-4783-b3bb-8649e2fc43bc>
2.75
1,077
Academic Writing
Science & Tech.
44.862132
Despite millions of dollars spent on research in Hood Canal, the precise causes of low-oxygen problems in Southern Hood Canal are still not fully understood, according to a report released this week by the U.S. Environmental Protection Agency and the Washington Department of Ecology. News articles about the report have created some confusion, and I’ll get to that in a moment. As I reported in Tuesday’s Kitsap Sun, research has not proven that nitrogen from human sources is responsible for a decline in oxygen levels greater than 0.2 milligrams per liter anywhere in Hood Canal. That number is important, because it is the regulatory threshold for action under the Clean Water Act. Mindy Roberts, one of the authors of the report, told me that scientists who have worked on the low-oxygen problem have gained an appreciation for Hood Canal’s exceedingly complex physical and biological systems. So far, they have not come to consensus about how much human inputs of nitrogen contribute to the low-oxygen problems in Lower Hood Canal. The report, which examined the complexity and scientific uncertainty about these systems, seems to have generated some confusion, even among news reporters. I think it is important to understand two fundamental issues: 1. The deep main channel of Hood Canal is almost like a separate body of water from Lower Hood Canal (also called Lynch Cove in some reports). This area is generally defined as the waters between Sisters Point and Belfair. Because Lower Hood Canal does not flush well, low-oxygen conditions there are an ongoing and very serious problem. 2. Fish kills around Hoodsport cannot be equated or even closely correlated with the low-oxygen conditions in Lower Hood Canal. The cause of these fish kills was not well understood a decade ago, but now researchers generally agree that heavy seawater coming in from the ocean pushes up a layer of low-oxygen water. 
When winds from the south blow away the surface waters, the low-oxygen water rises to the surface, leaving fish no place to go. I’m not aware that researchers were blaming nitrogen from septic systems for the massive episodic fish kills, as Craig Welch reports in the Seattle Times. At least in recent years, most researchers have understood that this was largely a natural phenomenon and that human sources of nitrogen played a small role, if any, during a fish kill. The question still being debated is how much (or how little) humans contribute to the low-oxygen level in the water that is pushed to the surface during a fish kill and whether there is a significant flow of low-oxygen water out of Lower Hood Canal, where oxygen conditions are often deadly at the bottom. The new report, which was reviewed by experts from across the country, concludes that fish kills can be explained fully without considering any human sources of nitrogen. Evidence that low-oxygen water flows out of Lower Hood Canal in the fall is weak, the report says, though it remains a subject of some debate. “We have not demonstrated that mechanism to their satisfaction,” Jan Newton of the Hood Canal Dissolved Oxygen Program told me in an interview. “We never said it caused the fish kill, only that it can reduce the oxygen level below what it was. In some years, it wouldn’t matter, but in some years it would make it worse.” A cover letter (PDF 83 kb) to the EPA/Ecology reports includes this: “While the draft report concludes that although human-caused pollution does not cause or contribute to the fish kills near Hoodsport, our agencies strongly support additional protections to ensure that nitrogen and bacteria loadings from human development are minimized. “Water quality concerns extend beyond low dissolved oxygen and include bacteria and other pathogens that limit shellfish health. Overall, human impacts to Hood Canal water quality vary from place to place and at different times of year. 
Hood Canal is a very sensitive water body and people living in the watershed should continue their efforts to minimize human sources of pollution.”

One of the most confounding factors is the large amount of nitrogen borne by ocean water that flows along the bottom of Hood Canal. An unresolved but critical question is: how much of that nitrogen reaches the surface layer, where it can trigger plankton growth in the presence of sunlight? Plankton growth is a major factor in the decline of oxygen levels, because plankton eventually die and decay, consuming oxygen in the process. Human sources of nitrogen often enter Hood Canal at the surface, but researchers disagree on how much of the low-oxygen problem can be attributed to heavy seawater that reaches the sunny euphotic zone near the surface. Here are the principal findings in the EPA/Ecology report, “Review and Synthesis of Available Information to Estimate Human Impacts to Dissolved Oxygen in Hood Canal” (PDF 3.8 mb).
<urn:uuid:299231c5-f1cb-44a7-a2bd-2191ad89ee74>
3.171875
1,009
Personal Blog
Science & Tech.
38.971882
Intermolecular bonding is a form of bonding that exists between molecules. The strength of the intermolecular bonds determines how easily the molecules will separate, and hence the melting and boiling points. There are different strengths of intermolecular bond, and we shall look at them in order, starting with the weakest.

Van der Waals forces

Also known as induced dipoles, or London forces. This is a weak force caused by the attraction of temporary dipoles; the diagram below shows how they form. Electrons are constantly moving about in a random pattern in their energy level. This means that at any instant different parts of the molecule carry a very slightly negative charge, known as δδ- (delta delta minus) since it is so small. This movement induces a dipole in neighbouring molecules. Because electrons are always moving around very quickly, the charges switch around all the time. It is also important to note that the more electrons in a molecule or atom, the stronger these van der Waals (London) forces are. This is seen in the increasing boiling points of the noble gases as you go down the group.

Permanent dipole-dipole interactions

In bonding you will have learned about polarisation and how a permanent dipole is produced. In this type of intermolecular bonding, the opposite delta charges attract one another. This type of bonding is stronger than the above. Below is an example of permanent dipole interactions in HCl.

Hydrogen bonding

Hydrogen bonding is much stronger than the previous two and is a special type of permanent dipole-dipole interaction. It only happens in molecules where hydrogen bonds to one of the following: oxygen (O), nitrogen (N) or fluorine (F). This is because these atoms have high electronegativities (3.5, 3.0 and 4.0 respectively). As the diagram above shows, when these atoms bond to hydrogen they are left with lone pairs of electrons that are not used in bonding. This creates electrostatic forces between molecules. Hydrogen bonds exist in our most abundant molecule on Earth: water. This gives it all sorts of properties that are essential for life.
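The noble-gas trend mentioned under van der Waals forces can be illustrated with approximate textbook boiling points (the values below are standard reference figures, not from this text):

```python
# More electrons -> stronger London (van der Waals) forces -> higher
# boiling point. Boiling points are approximate textbook values in K.
noble_gases = [
    ("He", 2, 4.2),
    ("Ne", 10, 27.1),
    ("Ar", 18, 87.3),
    ("Kr", 36, 119.7),
    ("Xe", 54, 165.0),
]

# The boiling points rise monotonically with electron count.
bps = [bp for _, _, bp in noble_gases]
assert bps == sorted(bps)

for symbol, electrons, bp in noble_gases:
    print(f"{symbol}: {electrons} electrons, boils at about {bp} K")
```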
<urn:uuid:8fd5e6c4-af2c-472d-bd4a-e3ae21751a75>
4.125
425
Knowledge Article
Science & Tech.
44.064439
C# Tutorials - Program Flow Control

The statements that enable you to control program flow in a C# application fall into three main categories: selection statements, iteration statements, and jump statements. In each of these cases, a test resulting in a Boolean value is performed and the Boolean result is used to control the application's flow of execution. In this tutorial, you'll learn how to use each of these statement types to control program flow. You use selection statements to determine what code should be executed and when it should be executed. C# features two selection statements: the switch statement, used to run code based on a value, and the if statement, which runs code based on a Boolean condition. The most commonly used of these selection statements is the if statement.

Table of content (tutorial index)
<urn:uuid:b3f0a1a0-713c-4d62-a7ab-a22212d430aa>
3.671875
166
Tutorial
Software Dev.
37.585652
School: CHAPARRAL MID-CHAPARRAL

Area of Science: Earth and Space Science

Abstract: Life is A Gas

The problem we are trying to solve is whether methane gas changes the buoyancy of salt water, and whether these changes can sink a ship. This is important to know because ships depend on buoyancy to stay afloat in the ocean. Basically, we are trying to find out if methane gas lowers the buoyancy of water and whether this affects the ability of a ship to float. Methane is frozen in the ocean floor. Sometimes it releases methane gas that bubbles to the surface of the ocean, which could change the buoyancy of the water. To test our theory we are going to create a model that will demonstrate any changes in buoyancy due to methane, and whether these changes will affect a ship's ability to stay afloat.

Sponsoring Teacher: Carl Bogardus
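The idea being tested can be sketched with a simple density model (all the numbers below are illustrative assumptions, not the students' data): a ship floats while its average density is below the water's, and methane bubbles lower the water's effective density in proportion to the gas (void) fraction.

```python
# Illustrative buoyancy sketch. Gas density is neglected, so water
# containing a volume fraction f of bubbles has density rho*(1 - f).
SEAWATER_DENSITY = 1025.0  # kg/m^3, a typical value

def effective_density(void_fraction):
    """Density of water containing a given volume fraction of gas."""
    return SEAWATER_DENSITY * (1.0 - void_fraction)

def floats(ship_density, void_fraction):
    """A ship floats while its average density is below the water's."""
    return ship_density < effective_density(void_fraction)

ship = 900.0  # kg/m^3, hypothetical loaded-ship average density
for vf in (0.0, 0.1, 0.2):
    print("void fraction", vf, "-> floats:", floats(ship, vf))
```

In this toy model, a ship of average density 900 kg/m³ still floats at a 10% void fraction but sinks at 20%, which is the mechanism the project sets out to test.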
<urn:uuid:0b2ba1d2-7457-4c73-bffd-f00205b4a978>
3.03125
179
Academic Writing
Science & Tech.
49.442146
Wilson, A.T., Hendy, C.H. and Reynolds, C.P. 1979. Short-term climate change and New Zealand temperatures during the last millennium. Nature 279: 315-317. Temperatures derived from an 18O/16O profile through a stalagmite found in a New Zealand cave (40.67°S, 172.43°E) revealed the Medieval Warm Period to have occurred between AD 1050 and 1400 and to have been 0.75°C warmer than the Current Warm Period.
<urn:uuid:7b772335-7471-4452-b3cb-f94652ddbf9c>
2.9375
109
Academic Writing
Science & Tech.
90.435231
The oxidized structure of CoQ, or Q, is given here: The various kinds of Coenzyme Q can be distinguished by the number of isoprenoid side chains they have. The most common CoQ in human mitochondria is Q10. The image above has three isoprenoid units and would be called Q3. If Coenzyme Q is reduced by one equivalent, the following structure results, a ubisemiquinone, denoted QH. Note the free radical on one of the ring oxygens. If Coenzyme Q is reduced by two equivalents, the compound becomes a ubiquinol, denoted QH2. Ubiquinol is reoxidized when it passes its electrons to cytochrome c:

CoQH2 + 2 Fe3+-cytochrome c → CoQ + 2 Fe2+-cytochrome c + 2 H+
<urn:uuid:547797e4-6338-4d11-b25b-0beab665402a>
3
174
Knowledge Article
Science & Tech.
48.839642
Cool technology of CryoSat-2 Just how are Earth’s icebound regions responding to a warming world? This question looms even larger now than it did in 2005, when ESA sought to answer it through the first CryoSat satellite. That spacecraft was lost through launch failure, but the nations of Europe recognised the effort could not be abandoned. A second satellite dedicated to measuring changes in ice thickness – the culmination of ten years of ESA endeavours – is therefore about to fly. Our collective future will be influenced by the rate that polar ice is melting, and CryoSat-2’s technology makes it best equipped to finally resolve this question. Existing satellites, whether optical or microwave-radar based, provide useful overviews of ice extent in the cryosphere, where the influence of climate change is thought to be greatest. However, their results lack a vital extra dimension: they show where the ice is, but have no way of accurately estimating its mass, and how that mass changes over time. NASA’s ICESat used active laser ranging to measure ice-sheet thickness but its effectiveness was limited by cloud cover and laser problems – and all its lasers have now failed. CryoSat-2’s primary payload is a radar altimeter, using the same ranging principle but with cloud-piercing radar instead of laser pulses. Sending thousands of radar pulses to the ground each second, it precisely measures the time it takes for their echoes to return. Providing the satellite’s position in space is known to a high accuracy, these data can be used to map the global ice surface to an accuracy of a few centimetres. The first radar altimeter sensors were developed solely for ocean monitoring – detecting ocean currents and wave height – but later instruments flown on ESA’s ERS and Envisat missions proved successful at mapping land, ocean and the polar ice caps. 
Revealing sea ice thickness But the places where the real action is in climate terms – the rugged terrain at the margins of the great ice sheets where enormous glaciers spill into the sea – remain largely terra incognita to radar altimeters. Standard design averages out thousands of radar echoes returning each second to boost the overall signal to noise ratio, delivering a surface ‘footprint’ of around 1.6 km across – fine for the open seas they were devised for, but far too broad to discriminate between floating sea-ice floes and the water around them, or the irregular margins of land ice. University College London developed a method to measure sea-ice thickness, ‘freeboard’ (the amount of ice protruding above the water) and extent from ERS altimetry data, but its usefulness was limited by the altimeter’s large footprint. It was a promising start, however, inspiring a team of scientists and engineers to imagine an instrument that could make better use of the technique, as well as possessing an improved ability to measure elevation over the ice sheet margins of Greenland and Antarctica. This was how what eventually became CryoSat-2’s SAR/Interferometric Radar Altimeter (SIRAL) would be born. Though long before any metal was bent or carbon-fibre cast, the principles behind the mission had to be developed and demonstrated. “The unique high-level processing algorithms the mission employs were developed through an ESA study,” explained Robert Cullen, overseeing SIRAL-2 and CryoSat-2’s commissioning phase for ESA. “Lead Investigator Duncan Wingham led this effort prior to submitting the original mission proposal in 1998. The ESA study also included development of the instrument concept by a team from Thales Alenia Space. 
“When the mission was given the initial go-ahead, Professor Wingham’s team at UCL already had the basis for developing a mission performance simulator, including a software model of the radar, quickly followed by the detailed on-ground science processing methods unique to CryoSat. By the time industry was brought in to build the operational ground systems in 2002, UCL had already generated key baseline processing methods, test scenarios and data to test the operational ground processing system.” The mission employs two separate techniques to sharpen the vision of CryoSat-2’s altimetry over sea ice and land ice sheet margins. One improves the altimeter’s precision in the along-track (or direction of motion) direction while the second provides a further boost to the across-track (either side of the moving satellite) accuracy when needed. The first technique is called Synthetic Aperture Radar (SAR), a method more typically used to improve the resolution of satellite imaging radar. In SIRAL’s SAR mode, returning radar echoes are sorted into separate strips across its ground track based on slight frequency shifts caused by the Doppler effect induced by the satellite hurtling along at more than 7 km/s relative to Earth’s surface. The altimeter’s along-track footprint is effectively divided into more than 60 separate beams with a resolution of around 250 m each, enough to differentiate many more ice floes from open water and often also the ‘leads’ (cracks) between them. For this method to work, the number of pulses sent per second has to increase tenfold, sending out a burst of pulses that can subsequently be processed together to provide a finer sampling of the surface compared with the traditional method used for the Envisat and ERS altimeters. This article continues... Last update: 23 February 2010
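The ranging and Doppler principles described above can be put in rough numbers. Only the ~7 km/s platform speed is quoted in the article; the Ku-band wavelength and viewing angle below are illustrative assumptions, not CryoSat-2 specifications.

```python
import math

# (1) Radar ranging converts echo delay to distance, so centimetre
#     height accuracy implies sub-nanosecond timing.
# (2) SAR mode sorts echoes into along-track beams by their Doppler
#     shift, which depends on the platform speed and look angle.
C = 299_792_458.0  # speed of light, m/s

def range_from_delay(delay_s):
    """Two-way radar ranging: distance = c * t / 2."""
    return C * delay_s / 2.0

# A 1 cm change in surface height changes the round-trip delay by only
# about 67 picoseconds:
delta_t = 2 * 0.01 / C

def doppler_shift(v_along, wavelength_m, angle_rad):
    """Doppler shift of an echo from a point at a small along-track
    angle off nadir, for a platform moving at v_along m/s."""
    return 2.0 * v_along * math.sin(angle_rad) / wavelength_m

print(delta_t)
# Illustrative: ~7 km/s platform speed, ~2.2 cm wavelength, a point
# half a degree ahead of nadir.
print(doppler_shift(7000.0, 0.022, math.radians(0.5)))
```

Points ahead of the satellite have positive shifts and points behind have negative ones, which is what lets the processor divide the footprint into the ~60 along-track beams described above.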
<urn:uuid:dbd2e4f5-4025-4276-9e41-36d9a513e75e>
3.9375
1,156
Knowledge Article
Science & Tech.
29.201048
How do eggs split to form conjoined twins?

Here is the simple explanation: For more detail on twin development, please see this excellent web page from the Univ. of Minn.:

Thanks for the good question,
Jeff Buzby, Ph.D.

I think your question is somewhat incorrect... Conjoined twins often develop from a single egg which has incompletely separated; if it had separated, it would have produced a set of identical twins. Two eggs ovulated at the same time and fertilized at the same period of a woman's cycle would give non-identical, fraternal twins.

Peter Faletra, Ph.D.
Office of Science
Department of Energy

Update: June 2012
<urn:uuid:046010fa-84e5-48f3-b118-8250ed99b51e>
3.578125
164
Q&A Forum
Science & Tech.
54.492596
Finding ozone holes

Name: Russ A Morton

How do we know where the holes are in the ozone?

We can measure the concentration of ozone by flying very high up, with U-2 planes, stratospheric balloons, sounding rockets, and satellites. The ozone-measuring instruments carried on these flights show that the concentration is lowest over the South Pole, hence the nickname "ozone hole."

Update: June 2012
<urn:uuid:5c70c3b5-e6ed-4ba8-b517-4b456904dffd>
3.640625
101
Knowledge Article
Science & Tech.
32.564
The Apollo Program: Apollo 15

Feature Summary - From Physics Research

You're looking at the vicinity of NASA's Apollo 15 landing site, located almost in the center of the image, on the lava surface at the eastern edge of Mare Imbrium (click for a lunar map to find it). Naturally, a smooth impact basin would be the best place for the lunar lander to put down. You can also see part of the Apennine mountain range in the image above. The Apollo Program's mission was to explore and map the moon. To learn more about Apollo 15, see The Apollo Program: Apollo 15.

Do you see the squiggly line running up and down in the middle of the image? It's actually a trench called the Hadley Rille. Here is a video of Apollo 15 landing, with the Hadley Rille in the background. On a lava-filled basin, and with a mountain range and a rille so close, the astronauts could explore plenty of lunar geology.

Image credit: NASA

August 1, 2012 - August 16, 2012
<urn:uuid:2ad9dad2-f96a-47ac-838e-36c7ce85cb67>
3.625
241
Knowledge Article
Science & Tech.
52.882778
* Venus: It's responsible for the greatest number of sightings--including one by Jimmy Carter in 1969. The planet usually appears just above the horizon around dusk or dawn; it trails only the moon in luminosity; and depending on the air mass in front of the viewer, the planet can be magnified into a ball or seem to skip around. World War II pilots mistaking it for a Japanese plane once shot at it. * Plasma: Some 60 miles up, electrons that shower the atmosphere interact with neutral atoms and release bursts of photons. In 2006, the United Kingdom's Defence Intelligence Staff concluded that the resulting aurora generates a number of the country's annual sightings. * Meteors: About 25 million pass through the atmosphere daily--some resulting in showy midair displays that have been mistaken for UFOs since as early as 1965, when residents of Kecksburg, Pa., were bewildered by a meteor. * Lenticular clouds: Stratified and saucer-shaped, they tend to form near mountains. In July 2008, the North Wales Pioneer ran a resident's "strange images"--of a lenticular cloud. The most provocative imagery to emerge from the Stephenville sightings came from resident David Caron, who videotaped squiggling, multicolored lights on Jan. 19, 2008, that he says must have been a UFO (above, left). "You could see it much better through the camera than just with the naked eye," he told the Stephenville Empire-Tribune. That might have been the problem. "Video cameras record at about 30 frames per second," says George Reis of Imaging Forensics. "This recording refreshes twice per second, so very long exposures are being used." Caron was likely filming a distant, stationary light. "A handheld exposure of a bright light results in light streaks from camera movement that would look like these images," Reis says. The color shifts, he adds, may be from color mosaic filtration on the camera's CCD chip. 
The History Channel's UFO Hunters reproduced a similar effect with a video camera set to night mode zoomed in on a cluster of colored lights across the lab (above, right). UFOlogists on Internet forums did the same by shooting the star Sirius, which is known for its luminescence and vivid coloring.
<urn:uuid:1ac5beee-3422-4bc9-879a-112c64905e05>
2.921875
476
Listicle
Science & Tech.
47.796055
On March 13, 2012, the sun erupted with an M7.9-class flare that peaked at 1:41 p.m. EDT. This flare was from the same active region, No. 1429, that has been producing flares and coronal mass ejections all week. That region has been moving across the face of the sun since March 2, and will soon rotate out of Earth view. This is the sunspot region AR 1429 that has generated several major solar storms recently. The video covers nine days (Mar. 4 – 12, 2012). Notice how the spot continually changes as its magnetic fields realign themselves. The images are white light images called intensitygrams captured by NASA’s Solar Dynamics Observatory (SDO).
<urn:uuid:2ebc65e9-1e91-4fb5-848b-c50879bc6ee9>
3.234375
152
Truncated
Science & Tech.
73.040575
Dramatically decreasing water levels in the Great Lakes are raising many concerns for Environment Canada, and for the states surrounding the Great Lakes in the United States of America as well. For the past couple of years the water level has been decreasing by a great margin, and is predicted to keep decreasing unless there is a change of climate resulting in some massive rainfall or snowfall. The entire Great Lakes Basin is above sea level, with Lake Superior the highest at 180 meters above sea level, then Lakes Michigan and Huron at 176 m, Lake Erie at 174 m, and Lake Ontario lowest at 74 m above sea level. According to Environment Canada, the levels of Lakes Michigan, Huron and Superior were already running below the 100-year average last year, and now all the Great Lakes are running below that average. As you can see in Figure 1, the levels have been decreasing dramatically starting from this year, and the trend continues to descend. According to Environment Canada, Erie is down 22 centimeters from the roughly 100-year average for this time of year and 34 centimeters below last year's levels for the same time; Lake Ontario is 23 cm lower than average and 24 cm below last year; Lakes Michigan-Huron are 63 centimeters below average for this time of year and 24 cm below last year; and Superior is 34 centimeters below average. The decrease in water level has made sailing a boat extremely hard. "Boaters are running into rocks a mile off shore," Biddle told the Star. I believe that the decrease in the Great Lakes water levels has mainly to do with the recent dry weather that we are having. In this research essay you will read about the causes of the decreasing water level, the impacts, and mitigation. Climate / Natural Factors There are many reasons for the low water levels in the Great Lakes, but the main cause should have to... [continues]
<urn:uuid:3843fe59-0b21-4a08-b394-caf5044cabb2>
3.265625
541
Academic Writing
Science & Tech.
63.617149
Quasar's belch solves longstanding mystery Gemini Observatory observations expose a broad outflow extending in all directions around the galaxy Markarian 231’s core, removing gas from the nucleus at more than twice the rate of star formation. February 23, 2011 When two galaxies merge to form a giant, the central supermassive black hole in the new galaxy develops an insatiable appetite. However, this ferocious appetite is unsustainable. Artist’s conceptualization of the environment around the supermassive black hole at the center of Mrk 231. The broad outflow seen in the Gemini data is shown as the fan-shaped wedge at the top of the accretion disk around the black hole. Gemini Observatory/AURA, artwork by Lynette Cook For the first time, observations with the Gemini Observatory clearly reveal an extreme, large-scale galactic outflow that brings the cosmic dinner to a halt. The outflow is effectively blowing the galaxy apart in a negative feedback loop, depriving the galaxy's monstrous black hole of the gas and dust it needs to sustain its frenetic growth. It also limits the material available for the galaxy to make new generations of stars. The groundbreaking work is a collaboration between David Rupke from Rhodes College in Tennessee and Sylvain Veilleux from the University of Maryland. Markarian 231 (Mrk 231), the galaxy observed with Gemini, is an ideal laboratory for studying outflows caused by feedback from supermassive black holes. "This object is arguably the closest and best example that we know of a big galaxy in the final stages of a violent merger and in the process of shedding its cocoon and revealing a very energetic central quasar,” Veilleux said. “This is really a last gasp of this galaxy; the black hole is belching its next meals into oblivion. When we look deep into space and back in time, quasars like this one are seen in large numbers, and all of them may have gone through shedding events like the one we are witnessing in Mrk 231." 
Although Mrk 231 is extremely well-studied and known for its collimated jets, the Gemini observations exposed a broad outflow extending in all directions for at least 8,000 light-years around the galaxy's core. The resulting data reveal gas — characterized by sodium, which absorbs yellow light — streaming away from the galaxy center at speeds of over 620 miles (1,000 kilometers) per second. At this speed, the gas could go from New York to Los Angeles in about 4 seconds. This outflow is removing gas from the nucleus at a prodigious rate — more than 2.5 times the star formation rate. The speeds observed eliminate stars as the possible "engine" fueling the outflow. This leaves the black hole itself as the most likely culprit, and it can easily account for the tremendous energy required. The energy involved is sufficient to sweep away matter from the galaxy. However, "when we say the galaxy is being blown apart, we are only referring to the gas and dust in the galaxy," said Rupke. "The galaxy is mostly stars at this stage in its life, and the outflow has no effect on them. The crucial thing is that the fireworks of new star formation and black hole feeding are coming to an end, most likely as a result of this outflow." The environment around such a black hole is commonly known as an active galactic nucleus (AGN), and the extreme influx of material into these black holes is the power source for quasistellar objects, or quasars. Merging galaxies help feed the central black hole and also shroud it in gas. Mrk 231 is in transition, now clearing its surroundings. Eventually running out of fuel, the AGN will become extinct. Without gas to form new stars, the host galaxy also starves to death, turning into a collection of old aging stars with few young stars to regenerate the stellar population. Ultimately, these old stars will make the galaxy appear redder, giving these galaxies the moniker "red and dead." 
Many physical processes unique to rapidly growing black holes are likely to play a role in propelling the winds observed by Gemini. "At its peak, the quasar shines with such intensity that the light itself is trapped by a cocoon of gas and dust pushing on material with a force that can easily overcome the gravitational pull of the black hole," said Philip Hopkins from the University of California, Berkeley. The bath of X-rays and gamma rays quasars generate could also heat up the gas in the galaxy's center until it reaches a temperature where it "boils over" and causes a bomb-like explosion. "But until now, we haven't been able to catch a system in the act." Part of the problem, according to Hopkins, has been that the most visible outflows are those collimated jets already known in Mrk 231. These jets are trapped, probably by magnetic fields, in an extremely narrow beam, whereas material is falling into the black hole from all directions. The previously known jets therefore only cause very localized damage — drilling a tiny hole in the cocoon, rather than sweeping it away more broadly as seen in these new, all-encompassing outflows. The observations for this study were obtained with the Gemini Multi-Object Spectrograph (GMOS) on Gemini North, on Mauna Kea, Hawaii. The study used a powerful technique known as integral field spectroscopy. The integral field unit (IFU) in GMOS obtains a spectrum at several hundred points around the galaxy's core. Each spectrum is then, in turn, used to determine the velocity of the gas at that point and represents the third dimension in what is called a data cube. Markarian 231 is located about 600 million light-years away in the direction of the constellation Ursa Major. 
Although its mass is uncertain, some estimates indicate that Mrk 231 has a mass in stars about 3 times that of our Milky Way galaxy, and its central black hole is estimated to have a mass of at least 10 million solar masses, or about 3 times that of the supermassive black hole in the Milky Way.
<urn:uuid:ffe8ba0b-d0bc-4796-b5cd-bc3a8806637d>
3.296875
1,273
Truncated
Science & Tech.
43.848919
TCP is the protocol that gets the most attention, and that makes some sense as it’s the one that carries your email and web pages, the two most popular applications on the net. There are times, though, when TCP is not appropriate, like when you want to multicast data to several machines at once. Multicasting takes advantage of the fact that LANs, such as Ethernet, are broadcast media, in which all hosts can, if they choose, see all the packets as they go by. In order to use multicast you have to use a datagram protocol, such as UDP, for reasons that I will not get into in this posting. One of the more important qualities to test for when building a multicast network is latency. Many people know about testing for bandwidth, i.e. how many bytes/bits per second can we shove down this pipe, but latency tests are rarer. Since such tests are rare I have written one which is now included in FreeBSD. It can be found in the src/tools/tools/mctest directory of CURRENT (8.0). An example is shown below:

mctest -i em0 -s 1024 -n 100 -t 1
mctest -i em0 -s 1024 -n 100 -r

The first command sends 100 packets of 1024 bytes, with an inter-packet gap of 1 nanosecond. The mctest program includes both the source and the sink code, and the -r command line argument is what tells the program to be a sink (i.e. to receive packets). The way that mctest tests for latency is that the source sends out a multicast packet and then the sink(s) send the packet right back. The sinks report statistics like the size of the gap between packets, which shows network jitter, and whether any packets were lost. The source reports whether packets were lost as well as the round trip latency of the packets. Results from the test look like this on the source:

Results from client #0
sec: 0 usecs: 73
sec: 0 usecs: 44
sec: 0 usecs: 48
sec: 0 usecs: 39

which shows partial results; the results for all clients are actually output. There is also a convenient shell script, mctest_run.sh, which can be used to start sinks on multiple hosts.
All of this is documented in the manual page as well.
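The echo-and-timestamp mechanism mctest uses can be sketched in a few lines of Python. This toy version is an illustration, not the actual mctest code: it uses plain UDP over loopback instead of a real multicast group (joining a group with IP_ADD_MEMBERSHIP works the same way at the socket level), and all names here are my own.

```python
import socket
import threading
import time

def sink(sock, count):
    # The "-r" role: echo every datagram straight back to its sender.
    for _ in range(count):
        data, addr = sock.recvfrom(2048)
        sock.sendto(data, addr)

# Plain UDP over loopback stands in for the multicast group mctest uses.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))
recv_sock.settimeout(5)
port = recv_sock.getsockname()[1]

t = threading.Thread(target=sink, args=(recv_sock, 3))
t.start()

send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.settimeout(5)
rtts = []
for seq in range(3):
    payload = seq.to_bytes(4, "big") + bytes(1020)  # 1024-byte packets
    start = time.monotonic()
    send_sock.sendto(payload, ("127.0.0.1", port))
    echoed, _ = send_sock.recvfrom(2048)            # wait for the echo
    rtts.append(time.monotonic() - start)           # round-trip latency
t.join()
```

The real tool also computes inter-packet gaps on the sink side (jitter) and tracks lost packets, which a sketch like this omits.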
<urn:uuid:e76ff653-bf2a-4911-9c76-f7198ce697a6>
3.09375
514
Documentation
Software Dev.
74.297543
An alternative method of dealing with collisions which entirely does away with the need for links and chaining is called open addressing. The basic idea is to define a probe sequence for every key which, when followed, always leads to the key in question. The probe sequence is essentially a sequence of functions h_0(x), h_1(x), ..., h_{M-1}(x), where each h_i is a hash function mapping keys into integers in the range from zero to M-1. To insert item x into the scatter table, we examine array locations h_0(x), h_1(x), ..., until we find an empty cell. Similarly, to find item x in the scatter table we examine the same sequence of locations in the same order. The most common probe sequences are of the form h_i(x) = (h(x) + c(i)) mod M, where 0 <= i < M. The function h(x) is the same hash function that we have seen before. I.e., the function h maps keys into integers in the range from zero to M-1. The function c(i) represents the collision resolution strategy. It is required to have the following two properties: c(0) = 0, and the set of values {c(0) mod M, c(1) mod M, ..., c(M-1) mod M} must contain every integer between 0 and M-1. This second property ensures that the probe sequence eventually probes every possible array position.
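A minimal sketch of open-addressed insertion and lookup, with the probe sequence h_i(x) = (h(x) + c(i)) mod M supplied as a pair of functions. The function and variable names here are illustrative, not from any particular library:

```python
def make_table(M):
    # a scatter table of M empty cells
    return [None] * M

def probe_insert(table, x, h, c):
    """Insert x by examining locations (h(x) + c(i)) mod M until an
    empty cell is found; returns the slot used."""
    M = len(table)
    for i in range(M):
        slot = (h(x) + c(i)) % M
        if table[slot] is None:
            table[slot] = x
            return slot
    raise RuntimeError("scatter table is full")

def probe_find(table, x, h, c):
    """Search the same sequence of locations in the same order."""
    M = len(table)
    for i in range(M):
        slot = (h(x) + c(i)) % M
        if table[slot] is None:
            return None          # an empty cell ends the search
        if table[slot] == x:
            return slot
    return None
```

With c(i) = i this is linear probing; quadratic probing and double hashing just supply a different c.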
<urn:uuid:ea262786-70a2-4e2b-81b8-07e7a8a1ab73>
2.96875
214
Documentation
Software Dev.
54.384698
While the quadratic formula for solving quadratic equations is familiar to everyone who has taken high school math, it seems that the formula for solving cubic equations is not as familiar, but it is occasionally very handy. This formula was first discovered by Scipione del Ferro and Niccolo Fontana Tartaglia, and first published by Girolamo Cardano in his Ars Magna in 1545; it is considered a major result in Renaissance mathematics. The general cubic equation can be expressed as follows:
x3 + bx2 + cx + d = 0
We may then calculate the following two values (somewhat analogous to the discriminant of the quadratic formula):
q = (b2 - 3c)/9
r = (2b3 - 9bc + 27d)/54
If the coefficients of the cubic are all real, then the quantity p = r2 - q3 determines what sort of roots the cubic polynomial might possess. If p is greater than 0, then it possesses one real root and two complex conjugate roots. If p is equal to zero, all roots are real, and we have roots of at least multiplicity 2. If p is less than zero, then all three roots are real and distinct. In the latter case, we can proceed by computing:
θ = cos-1(r/sqrt(q3))
The three real roots may then be readily computed as:
x1 = -2 sqrt(q) cos(θ/3) - b/3
x2 = -2 sqrt(q) cos((θ + 2π)/3) - b/3
x3 = -2 sqrt(q) cos((θ - 2π)/3) - b/3
If the coefficients are complex, or if either of the other two cases applies, we then compute these two values (taking real cube roots when the radicand is real):
s1 = (-r + sqrt(r2 - q3))1/3
s2 = (-r - sqrt(r2 - q3))1/3
The roots of the cubic are then:
x1 = s1 + s2 - b/3
x2 = -(s1 + s2)/2 - b/3 + (i sqrt(3)/2)(s1 - s2)
x3 = -(s1 + s2)/2 - b/3 - (i sqrt(3)/2)(s1 - s2)
In practice, it's usually easier to just use one of these formulas to compute one root (the first one usually), and then remove that root using synthetic division and solve the resultant quadratic equation. If one is doing these kinds of calculations by hand, a simple check is to use the symmetric functions of the roots of the cubic, namely:
x1 + x2 + x3 = -b
x1x2 + x1x3 + x2x3 = c
x1x2x3 = -d
and see if these formulas on the roots match the coefficients of the original cubic. In the computer age, however, this cubic formula is only seldom used to numerically compute the roots of cubic polynomials, as iterative numerical methods like Laguerre's method or the Jenkins-Traub method can generally do the job much more efficiently for equations of this degree and higher.
William H. Press, et al., Numerical Recipes in FORTRAN
Milton Abramowitz and Irene A. Stegun, Handbook of Mathematical Functions
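The formulas translate directly into code. The following is an illustrative Python version (the function names are my own), following the Numerical Recipes conventions for q and r with discriminant r² − q³: the trigonometric branch yields the three real roots, and the Cardano branch yields the single real root otherwise.

```python
import math

def cbrt(v):
    # real cube root, valid for negative arguments
    return math.copysign(abs(v) ** (1.0 / 3.0), v)

def solve_cubic(b, c, d):
    """Real roots of x^3 + b*x^2 + c*x + d = 0, real coefficients."""
    q = (b * b - 3.0 * c) / 9.0
    r = (2.0 * b ** 3 - 9.0 * b * c + 27.0 * d) / 54.0
    p = r * r - q ** 3
    if p < 0:
        # three distinct real roots: trigonometric branch
        theta = math.acos(r / math.sqrt(q ** 3))
        return sorted(
            -2.0 * math.sqrt(q) * math.cos((theta + 2.0 * math.pi * k) / 3.0) - b / 3.0
            for k in (0, 1, 2)
        )
    # one real root (repeated roots when p == 0): Cardano branch
    s1 = cbrt(-r + math.sqrt(p))
    s2 = cbrt(-r - math.sqrt(p))
    return [s1 + s2 - b / 3.0]
```

For example, x³ − 6x² + 11x − 6 = (x − 1)(x − 2)(x − 3) falls in the trigonometric branch and yields the roots 1, 2, 3.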
<urn:uuid:e14266d7-77de-4e9d-a3e0-1c9741e4a0b2>
3.90625
752
Knowledge Article
Science & Tech.
57.796853
An interesting report from the BBC last week: Concentrations of the natural pigment chlorophyll in coastal waters have been shown to rise prior to earthquakes. These chlorophyll increases are due to blooms of plankton, which use the pigment to convert solar energy to chemical energy via photosynthesis. This is based on an article by Singh et al. [doi] in Advances in Space Research, the current issue of which seems to be devoted to the use of satellite remote sensing for studying and predicting natural hazards such as earthquakes. The authors claim that you can detect a rise in sea surface temperature just before large coastal earthquakes. The blooms observed in this study, they say, result from an increased flow of heat energy from the ocean to the atmosphere, enhancing the upwelling of cold, nutrient-rich water and fuelling a boom in the growth of photosynthetic algae. This is all very interesting, but what is unclear in the paper is the reason so much heat energy is being released prior to the earthquake, which is itself releasing accumulated strain energy. And, looking at the paper, I'm not really sure the relationship is quite as clear-cut as is claimed. The figure below is from their paper, which shows the amount of upwelling measured at three localities in the month in which an earthquake was recorded. The peaks can be linked to increased concentrations of chlorophyll, as measured by satellite. The dark vertical lines mark the date of the earthquake. As you can see, there are a number of peaks in each of these plots, not just the one associated with an earthquake. In all but perhaps the last case the peak preceding the earthquake doesn't seem to have a significantly different magnitude to all the others, and the timing of the earthquake seems quite variable - in one case it coincides with the peak, in another it is ten days afterwards. You may beg to differ, but I'm rather unconvinced by this.
It seems on a par with other proposed earthquake precursors like low frequency EM fluctuations (on which there are also some papers in this issue of Advances in Space Research) - not only is it not clear how they work, but the supposed precursor signals are not obviously distinguishable from background. Sadly, I don't think we'll be using satellite data to reliably forecast earthquakes just yet.
<urn:uuid:8cec4202-a326-42e7-a957-984da5c1ddc4>
3.640625
475
Personal Blog
Science & Tech.
42.293
(1) If A and B are equiprobable outcomes and |B|=100, explain using a Venn Diagram why P(A) <= P(B). (2) Three indistinguishable (fair) dice are thrown simultaneously at random. Find the probability that the sum of the three dice is less than six and that no two dice show the same face (the same number.) For (2), I get 0. Is this correct? Not sure what to do for (1). Any help?
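A quick brute-force enumeration (not part of the original question) confirms that the answer to part (2) is indeed 0: the minimum possible sum of three distinct faces is 1 + 2 + 3 = 6, which is not less than six.

```python
from itertools import product

# Enumerate all 6^3 equally likely (ordered) outcomes of three fair dice.
outcomes = list(product(range(1, 7), repeat=3))

# Favourable: sum below six AND all three faces distinct.
favourable = [o for o in outcomes if sum(o) < 6 and len(set(o)) == 3]
prob = len(favourable) / len(outcomes)
```

The indistinguishability of the dice doesn't change the answer here, since no outcome qualifies either way.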
<urn:uuid:7831f1a2-dd3b-43d0-aeec-f66c78e4a187>
3.671875
106
Q&A Forum
Science & Tech.
84.067162
Coded Mask Imaging on ASTROSAT Electromagnetic radiation in the X-ray band – nanometer and shorter wavelengths – cannot be focused as easily as optical, radio and other lower energy bands. It is possible to use the grazing angle incidence technique to focus these higher energy photons, as in the case of the Telescope on ASTROSAT, but this focussing technique can be used only for fields narrower than ~ 1°. Coded mask imaging is one possible way of performing wide field imaging with photons of energy greater than a few keV. It consists of using the shadows of a multiple-pinhole mask plate cast on the detector, with the shift in the shadows encoding the location of the source in the sky. For the literature on coded mask concepts, one can refer to the page Coded Aperture Imaging in High-Energy Astronomy by Jean in 't Zand on the NASA GSFC site. Two of the four X-ray instruments aboard ASTROSAT are based on the coded mask imaging concept: the Scanning Sky Monitor (SSM) and the Cadmium Zinc Telluride Imager (CZTI). The SSM uses a one-dimensional imaging system with a position sensitive proportional counter as the detector. The CZTI comprises a two-dimensional mask plate mounted on top of a pixellated CZT detector. The coded mask plates for both the SSM and the CZTI have been designed based on pseudo-noise Hadamard set Uniformly Redundant Arrays. This page was last updated on 31st Jan 2008.
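The idea that the shift of the shadow encodes the source position can be illustrated with a toy one-dimensional example. Here a random binary mask stands in for the pseudo-noise URA patterns actually used, and decoding is done by cross-correlating the shadowgram against shifted copies of the mask; this is a sketch of the principle, not any instrument's pipeline.

```python
import random

random.seed(0)
M = 64
# Random binary mask: a stand-in for a real pseudo-noise URA pattern.
mask = [random.randint(0, 1) for _ in range(M)]

def shift(pattern, s):
    # circular shift to the right by s positions
    s %= len(pattern)
    return pattern[-s:] + pattern[:-s] if s else list(pattern)

# An off-axis source casts the mask's shadow shifted by an amount
# determined by the source's position on the sky.
true_shift = 17
shadow = shift(mask, true_shift)

# Decode: the correlation peaks at the shift that produced the shadow.
corr = [sum(a * b for a, b in zip(shadow, shift(mask, s))) for s in range(M)]
recovered = corr.index(max(corr))
```

Real URA patterns are chosen so that the off-peak correlation is perfectly flat, which keeps the reconstructed image free of the sidelobe noise a random mask would produce.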
<urn:uuid:1b2c0bb8-59c6-4bf5-9fd6-859ee1727967>
3.234375
349
Knowledge Article
Science & Tech.
26.4525
The movie shows people freezing to death because of a super-storm bringing freezing air from the "upper troposphere." Could that really happen? NASA: No. Firstly, the upper troposphere is not as cold as portrayed (minima near -80 C (-112 F) at about 15 to 20 km), and this cooling is mainly due to the pressure decreasing with height. As air rises it cools due to this effect alone; similarly, as it is brought down, it warms up by 6-10 degrees Celsius per kilometer. NSIDC: No, this part of the movie is somewhat far-fetched. On the same general subject, changes are expected in atmospheric temperature as well as on the surface; a summary is shown below. Because GHGs absorb heat in the atmosphere, they change the temperature within its various layers. Finding the pattern of warming and cooling illustrated in the diagram below would be definitive proof that CO2 and CH4 were the cause of surface warming. But this is not as easy as it may sound; there are not many weather stations in the upper troposphere (aircraft do not do the job well because they affect the air around them). Balloon and satellite data do appear to show the expected trends. The topic is the subject of active scientific debate.
<urn:uuid:2104868e-eb6f-4b6e-a5c0-842af7d7f6a4>
3.609375
264
Q&A Forum
Science & Tech.
58.73701
Use this calculator to compute the variance from a set of numerical values. This calculator computes the variance from a data set: To calculate the variance from a set of values, specify whether the data is for an entire population or from a sample. Enter the observed values in the box above. Values must be numeric and may be separated by commas, spaces or new-line. You may also copy and paste data into the text box. Press the "Submit Data" button to perform the computation. To clear the calculator and enter a new data set, press "Reset". The variance is one of the measures of dispersion, that is, a measure of by how much the values in the data set are likely to differ from the mean of the values. It is the average of the squares of the deviations from the mean. Squaring the deviations ensures that negative and positive deviations do not cancel each other out. This calculator uses the following formulas for calculating the variance: The formula for the variance of a sample is: s² = Σ(xᵢ − x̄)² / (n − 1), where n is the sample size and x̄ is the sample mean. The formula for the variance of an entire population is: σ² = Σ(xᵢ − μ)² / N, where N is the population size and μ is the population mean.
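The two formulas can be implemented directly. This is an illustrative sketch (not the calculator's actual code), dividing by n − 1 for a sample and by N for a full population:

```python
def variance(data, sample=True):
    """Average squared deviation from the mean; divides by n - 1 for a
    sample (Bessel's correction) or by N for an entire population."""
    n = len(data)
    mean = sum(data) / n
    squared_devs = sum((x - mean) ** 2 for x in data)
    return squared_devs / (n - 1) if sample else squared_devs / n
```

For the data set 2, 4, 4, 4, 5, 5, 7, 9 the mean is 5, the squared deviations sum to 32, and the population variance is therefore 32/8 = 4.0, while the sample variance is 32/7.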
<urn:uuid:ba27a0d4-d7b4-48ce-a7d3-85db0a021bf8>
3.78125
251
Tutorial
Science & Tech.
48.215933
The Russians don’t do countdowns. For the final few seconds before launch those of us watching just hold our breath and stand well back. I find several thousand kilometres back at the European Space Agency’s mission control in Germany to be safest. When ignition comes, the launcher is engulfed in clouds of toxic orange smoke before it rises through the inferno and accelerates into the clouds. Many of these Russian rockets, such as the Cosmos and Rockot launchers, are converted from missiles designed to deliver nuclear warheads. Given that their launch would originally have signalled the end of the world, I don’t suppose the toxicity of the smoke was a major design consideration. Rockets are dangerous, complicated and relatively unreliable. No-one has yet built a launcher that is guaranteed to work every time. A misaligned switch, loose bolt or programming error can lead to disaster or, with a human crew, a potential tragedy. Rockets are also incredibly expensive - even the cheapest launch will set you back some $12 million, meaning the cost of any cargo costs a staggering $16,700 per kilogram. Although the funky new space planes being developed, such as Britain’s Skylon or Virgin’s SpaceShipTwo, will slash the costs of getting into space, they are still based on rocket technology – using sheer brute force to escape the clutches of gravity. But there is a radical alternative. Science fiction fans have long been familiar with space elevators. Popularised by Arthur C Clarke, the concept of an elevator from the Earth to orbit has been around for more than a century. In the space operas of Iain M Banks or Alastair Reynolds, space elevators are pretty much taken for granted – they’re what advanced civilisations use to leave their planets. These futuristic engineering feats consist of a cable – also known as a ribbon or tether - of material stretching from the Earth’s surface into orbit. 
An anchor and Earth’s gravity at the lower end, and a counterweight and centrifugal force at the top end keep the elevator’s “cable” taut and stationary over the ground station. Robotic ‘climbers’ would then pull themselves up the ribbon from the surface, through the stratosphere and out into space, potentially powered by lasers. The climbers could carry satellites up and bring minerals from the moon, or asteroids, back. They could take tourists into orbit or convey astronauts on the first part of their journey to the stars. No longer would space exploration be held back by gravity or rely on smelly, dangerous and expensive rockets. “You could take a ride for the cost of a first class airline ticket,” exclaims David Horn, the Conferences Chair of the International Space Elevator Consortium (ISEC). Estimates suggest that the cost of sending cargo into space could plummet to around $100 per kilogram. “A primary school could have a bake sale to cover the costs of sending a class science experiment into space.” Or, by selling enough cakes, even the entire class. ISEC has been organising space elevator conferences for the past ten years – the latest will be held in Seattle later this month. They are attended by scientists, engineers and students from around the world, including those from various national space agencies like Nasa. There are also annual conferences in Europe and Japan, and technical papers on various aspects of space elevators are published every year. “There’s global interest,” says Horn. “Reducing the cost to access space will change the global economy.” Which would be wonderful, but how much of this interest is just wishful thinking?
<urn:uuid:977ac17f-7afc-460d-846f-5bf7fe3bc4e2>
2.9375
757
Nonfiction Writing
Science & Tech.
42.677518
It's an old technique to keep the value of a variable between requests. Let's think of an example. You enter your name in a text field, and then press the submit button. One way to store the value is to put it in a hidden input field in the next page, inside a form. <input type="hidden" name="username" value="Jothi"/> When you submit again to another page, your name is sent back again. Your name will be available as long as it is stored in a hidden field. You can imagine that it is a pain to maintain. Hi Vyas, welcome to the ranch. You may not be aware of the Javaranch Naming Policy yet. Could you please read it, and change your name accordingly? Thank you. http://www.javaranch.com/name.jsp Regardless, I could not understand the pain. With hidden variables, you would have to keep track of your variables through all pages, which is challenging when you have a lot of input elements. This was mainly used in the old days, when sessions were not available.
<urn:uuid:6aa347d1-b7d7-4d97-ad3e-504d36161f81>
2.921875
230
Q&A Forum
Software Dev.
71.481508
Maybe he's expecting the result to a specific number of decimal places? In which case look at sprintf and the format options... But agreed, it would be handy to know what the expected output for a given input is.

float si(int p, int r, int t)

std::cout << "p = " << p << "\nr = " << r << "\nt = " << t << std::endl;

If the values there are incorrect, there may be a problem with how you read them in. You are not separating the p, r and t with commas when you input them into the program, are you? It will only work using spaces, unless you do:
<urn:uuid:59b7ef11-563f-4700-b254-899430992a4c>
2.9375
145
Comment Section
Software Dev.
85.94466
The Biology Laboratory conducts Algal Growth Potential and Limiting Nutrient algal assays. Algal Growth Potential (AGP) tests and Limiting-Nutrient assays are the most direct and effective ways to determine the amount of nutrients available to organisms in surface waters, and the eutrophication potential of an aquatic system. These methods provide information on the biologically-available nutrients in the water. Few other laboratories in Florida have the capacity or the staff able to perform these highly useful tests. Algal Growth Potential (AGP) assays determine the maximum amount of algal growth that the nutrients in a water sample can support. This test provides a better indication of the potential for algal blooms than can be determined by chemical measurement of nutrient concentrations, because not all nutrients in the water are in a form that can be used by algae. Additionally, there may be substances present in the water that inhibit algal growth. In nature, factors other than nutrient availability can limit the growth of algae, such as the amount of light available, water temperature, or the amount of algae being consumed by algae-eating organisms. However, the AGP assay alerts officials to the potential for algal blooms. Limiting Nutrient (LN) assays determine what nutrient is preventing even more algal growth than is presently possible. All plants, including algae, require various nutrients in different amounts. Whichever of the required nutrients is used up first by the algae prevents further growth, regardless of how much of the other nutrients is available. The LN assay helps officials to understand what nutrient is limiting greater algal growth and to regulate discharges of that nutrient to help prevent the occurrence of algal blooms. Used with taxonomic identification and chlorophyll measurements, these procedures produce a complete picture of the trophic status of a system.
These algal assays require specialized equipment and expertise not found in most laboratories. The Standard Operating Procedures used in preparation for and conducting Algal Assay and Limiting Nutrient tests can be viewed or downloaded from the Biology Section SOPs.
File:Antarctic Climate Change.jpg
From Global Warming Art
This figure shows the temperature trends since 1970 at 25 sites in Antarctica where temperature has been recorded in at least 25 of the last 37 years. During this time, the only two continuously manned stations in the interior of Antarctica were Amundsen-Scott Station at the South Pole and Vostok Station near the Southern Pole of Inaccessibility. The extremely harsh conditions during the six months of polar winter have prevented any other temperature stations (manned or unmanned) from operating over the long term in the interior of the continent; hence all other long-term records come from coastal margins. This has severely limited the amount of information available about climate change in Antarctica and made it the most poorly instrumented of the continents. As indicated in the figure, the Antarctic Peninsula is strongly warming, well in excess of the global average warming during this period. Other regions show a mix of both warming and cooling, including conflicting trends at the two interior stations. As the greenhouse gas carbon dioxide is expected to warm polar regions more rapidly than other areas, the evidence of cooling in some regions can be seen as unexpected. Others have argued that circulation changes produced by the formation of the ozone hole over Antarctica may explain many of the unexpected changes in Antarctic climate. Data for the sites shown in this figure were taken from the GISTEMP collection. This image was produced by Robert A. Rohde from public data.
References:
- Peter T. Doran, John C. Priscu, W. Berry Lyons, John E. Walsh, Andrew G. Fountain, Diane M. McKnight, Daryl L. Moorhead, Ross A. Virginia, Diana H. Wall, Gary D. Clow, Christian H. Fritsen, Christopher P. McKay and Andrew N. Parsons (2002). "Antarctic climate cooling and terrestrial ecosystem response". Nature 415: 517-520.
- Thompson, David W. J. and Susan Solomon (2002). "Interpretation of Recent Southern Hemisphere Climate Change". Science 296 (5569): 895-899.
- Shindell, Drew T. and Gavin A. Schmidt (2004). "Southern Hemisphere climate response to ozone changes and greenhouse gas increases". Geophysical Research Letters 31.
A vector can be read and modified from C with functions such as scm_c_vector_ref and scm_c_vector_set_x. In addition to these functions, there are two more ways to access vectors from C that might be more efficient in certain situations: you can restrict yourself to simple vectors and then use the very fast simple vector macros; or you can use the very general framework for accessing all kinds of arrays (see Accessing Arrays from C), which is more verbose, but can deal efficiently with all kinds of vectors (and arrays). For vectors, you can use the functions scm_vector_elements and scm_vector_writable_elements as shortcuts.

int scm_is_simple_vector (SCM obj)
Return non-zero if obj is a simple vector, else return zero. A simple vector is a vector that can be used with the SCM_SIMPLE_VECTOR macros below. A number of functions, such as scm_make_vector, are guaranteed to return simple vectors.

size_t SCM_SIMPLE_VECTOR_LENGTH (SCM vec)
Evaluates to the length of the simple vector vec. No type checking is done.

SCM SCM_SIMPLE_VECTOR_REF (SCM vec, size_t idx)
Evaluates to the element at position idx in the simple vector vec. No type or range checking is done.

void SCM_SIMPLE_VECTOR_SET (SCM vec, size_t idx, SCM val)
Sets the element at position idx in the simple vector vec to val. No type or range checking is done.

const SCM * scm_vector_elements (SCM vec, scm_t_array_handle *handle, size_t *lenp, ssize_t *incp)
Acquire a handle for the vector vec and return a pointer to the elements of it. This pointer can only be used to read the elements of vec. When vec is not a vector, an error is signaled. The handle must eventually be released with scm_array_handle_release. The variables pointed to by lenp and incp are filled with the number of elements of the vector and the increment (number of elements) between successive elements, respectively. Successive elements of vec need not be contiguous in their underlying "root vector"; hence the increment is not necessarily equal to 1 and may well be negative too (see Shared Arrays). The following example shows the typical way to use this function. It creates a list of all elements of vec (in reverse order).
  scm_t_array_handle handle;
  size_t i, len;
  ssize_t inc;
  const SCM *elt;
  SCM list;

  elt = scm_vector_elements (vec, &handle, &len, &inc);
  list = SCM_EOL;
  for (i = 0; i < len; i++, elt += inc)
    list = scm_cons (*elt, list);
  scm_array_handle_release (&handle);

SCM * scm_vector_writable_elements (SCM vec, scm_t_array_handle *handle, size_t *lenp, ssize_t *incp)
Like scm_vector_elements, but the pointer can be used to modify the elements of vec. The following example shows the typical way to use this function. It fills a vector with SCM_BOOL_T:

  scm_t_array_handle handle;
  size_t i, len;
  ssize_t inc;
  SCM *elt;

  elt = scm_vector_writable_elements (vec, &handle, &len, &inc);
  for (i = 0; i < len; i++, elt += inc)
    *elt = SCM_BOOL_T;
  scm_array_handle_release (&handle);
The nature of the Baltic Sea The Baltic Sea is a small sea on a global scale, but as one of the world's largest bodies of brackish water it is ecologically unique. Due to its special geographical, climatological, and oceanographic characteristics, the Baltic Sea is highly sensitive to the environmental impacts of human activities in its sea area or in its catchment area, which is home to over 85 million people. What makes the Baltic so sensitive? An almost enclosed sea The Baltic Sea is only connected to the world’s oceans by the narrow and shallow waters of the Sound and the Belt Sea. This limits the exchange of water with the North Sea, and means that the same water remains in the Baltic for up to 30 years – along with all the organic and inorganic matter it contains. The Baltic Sea consists of a series of sub-basins, which are mostly separated by shallow sills. These basins each have their own water exchange characteristics. Runoff enters the shallow Baltic Sea from a large catchment area At an average depth of just 53 metres, the Baltic Sea is much shallower than most of the world’s seas. It contains 21,547 km³ of water and every year rivers bring about 2% of this volume of water into the sea as runoff. The Baltic Sea’s catchment area is almost four times larger than the sea itself. The brackish water of the Baltic Sea is a mixture of sea water from the North Sea and fresh water from rivers and rainfall. The salinity of its surface waters varies from around 20 psu (≈parts per thousand) in the Kattegat to 1–2 psu in the northernmost Bothnian Bay and the easternmost Gulf of Finland, compared to 35 psu in the open oceans. A stratified sea Salinity levels also vary with depth, increasing from the surface down to the seafloor. Saltier water flowing in through the Sound and the Belt Sea does not mix easily with the less dense water already in the Baltic, and tends to sink down into deeper basins. 
At the same time, the less saline surface water flows out of the Baltic. The boundary between these two water masses, known as the halocline, consists of a layer of water where salinity levels change rapidly. In the Baltic Proper and Gulf of Finland, for instance, the halocline lies at a depth of around 60–80 m. Like a lid, the halocline limits the vertical mixing of water. This means that the oxygen content of the deep basins may decline due to biological and chemical oxygen consumption. The Baltic Proper is replenished by oxygen-rich saltwater flowing in from the North Sea along the sea floor. The Gulf of Bothnia, separated by a shallow sill from the Baltic proper, has low bottom water salinity and, hence, a very weak or absent halocline. In summer a thermocline – a distinct layer of water where the temperature changes rapidly – divides surface waters into two layers: a wind-mixed surface layer down to a depth of 10–25 m, and a deeper, denser and colder layer extending down to the sea-bed or the halocline. Such temperature stratification ends as surface waters cool in the autumn. Compared to other aquatic ecosystems, only relatively few animal and plant species live in the brackish ecosystems of the Baltic Sea – although this limited biodiversity does include a unique mix of marine and freshwater species adapted to the brackish conditions, as well as a few true brackishwater species. Where salinity levels are low, in the Baltic’s northern and eastern waters, fewer marine species can thrive, and habitats are dominated by freshwater species, especially in estuaries and coastal waters. Figure 1. Specific features and processes which make the Baltic Sea sensitive (green - natural characteristics, white - human impacts, yellow - harmful effects)
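The figures above allow a rough back-of-the-envelope check of the water-exchange claim. This is a simplified sketch: it ignores the saltwater inflows through the Danish straits and the precipitation/evaporation balance, which is why it overstates the roughly 30-year residence time quoted above.

```python
volume_km3 = 21_547                        # total volume of the Baltic Sea
annual_river_inflow = 0.02 * volume_km3    # rivers bring in "about 2%" per year
print(round(annual_river_inflow))          # ~431 km^3 of runoff per year

# Naive flushing time if river runoff were the only exchange:
print(round(volume_km3 / annual_river_inflow))   # 50 years
```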
Research has been conducted over a ten-year period, with samples and thorough laboratory testing carried out by the biophysicist Dr. Levengood, Nancy Talbott, and John Burks (the BLT Research Team) and a small army of volunteers worldwide. Their findings come from samples taken within the crop formations, contrasted with control samples taken from outside the circles. Levengood studied biophysics at the University of Michigan in the late 1960s and holds several patents on his inventions to increase seed growth and vigor. He has studied biochemical and biophysical changes in crop formation seeds and plants for over a decade. He has found that the plants from more than 95% of the sampled events revealed either single or multiple anomalous and readily apparent structural alterations at the macroscopic level. In general, these consisted of significant enlargements of the cell walls, expulsion cavities in the nodes of the plant stalks, significantly extended node lengths, and changes to the soil composition (i.e. a vastly higher concentration of magnetite). Significant changes in seed germination and development were also found. Affected plants also have characteristics suggesting the involvement of transient high temperatures. Not one of these clearly anomalous plant alterations had been mentioned, much less explained, by the proponents of the croppies theory, nor can they be accounted for by the methods that the circle makers claim to have employed to create crop formations. Levengood thinks what is creating the crop circle is a complex energy system that he describes as a "spinning plasma vortex" of ions with microwave frequencies that rapidly heat water in plants, causing them to collapse to the ground with stems not cracked or broken. Further, the microwave frequencies can heat up water in the plants' growth nodes, which bursts out, creating small holes that have often been found in extraordinary crop patterns.
Those complex energies can also affect the seeds. The affected plants have components which suggest the involvement of rapid air movement, ionization, electric fields and transient high temperatures combined with an oxidizing atmosphere. One naturally occurring and organized force incorporating each of these features is an ion plasma vortex, one very high energy example being a lightning discharge. They demonstrated their technique for the cameras with a 1.2-metre board attached to a rope they hung around their necks. One held one end of a string in the centre to determine the radius while the other held the other end and stomped down the plants with the board. Newspapers and TV stations around the world trumpeted the solution to the crop circle mystery. Two phenomena appear to be pushing the evolving art. That could probably explain the numerous reports of electronic equipment failing in crop circles and compasses spinning out of control in and over the crop circles (even when flying over in aircraft).
- Macroscale: external area effects; heat and overpressure
- Microscale: internal plant stem effects; pressure changes and cellular damage.
The Whirlwind theory
The official government explanation is that whirlwinds, created by heat thermals, are the true cause of the crop circle anomaly. But whirlwinds, or mini-tornadoes, are not static; they travel around, and it is very unlikely that they would create such intricate and symmetrical patterns. According to Stephen J. Smith, a paranormal investigator and amateur composer, these diatonic ratios are not the result of chance "because the numbers have to be very precise in order to be a diatonic ratio. This is why music sounds like music instead of noise, because it is built on precise ratios." To derive music from the crop circles, Smith used a fractal music-generating computer program.
He entered photographs of the formations into the computer, which 'read' the photographs and generated music from them, using the crop circle scales to play it back. Curiously, not all crop circles embody diatonic ratios in their formations. Hence, some do not have musical qualities. Possibly, Smith says, the real circles have diatonic ratios, and the faked ones do not. Further, diatonic ratios may be only a part of the overall geometry of the formations.
Learning From a Volcano, 25 Years After Mount St. Helens Exploded Download MP3 (Right-click or option-click the link.) This is SCIENCE IN THE NEWS, in VOA Special English. I'm Barbara Klein. And I'm Doug Johnson. Twenty-five years ago this month, a volcano exploded in the American state of Washington. On our program today, we tell about the explosion at Mount Saint Helens and how scientists have improved their knowledge of volcanic activity. May eighteenth, nineteen eighty, was a beautiful Sunday morning in the small town of Ellensburg, Washington. Fifteen-year-old Scott Johnson was reading a book near his home. His twelve-year-old sister Leslie was playing with a basketball. As Scott read, he looked up to see a huge, black cloud far away to the west. It might rain, he thought. Soon, he heard what sounded like a big gun. The sound seemed to grow louder. He looked up again. This time, he saw a huge cloud moving quickly across the sky. The two children watched as the sky grew darker. The cloud began to block light from the sun. Scott again looked at his book. He noticed something unusual on the book. It looked like very fine dust. How strange, he thought. It is raining dust! Scott and Leslie ran into the house and told their parents about what they saw. They turned on the television. They saw the first reports about the explosion of Mount Saint Helens. The cloud beginning to cover the sky was ash from the volcano. It had quickly reached Ellensburg from the volcano more than three hundred kilometers away. The cloud had now almost covered the sky. Scott watched the last small part of blue sky slowly disappear. Within moments, it was as black as night. A strong chemical smell was in the air. Ash fell very quickly and in huge amounts. Scott, Leslie and their parents continued to watch television reports. Experts said they did not know what would happen. Scott looked outside the house again. The ash now covered the ground. It was a frightening experience. 
He wondered, "Will the ash bury us?" The ash that fell on Scott and Leslie Johnson in Ellensburg began flying through the air at eight thirty-two in the morning, local time. Washington State's beautiful Mount Saint Helens had exploded. The explosion was about three hundred fifty times more powerful than the explosions of the first nuclear bombs. Fire, rock and volcanic gas flew out of the volcano with a force of four hundred eighty kilometers an hour. A cloud of ash went straight up more than twenty kilometers into the air in less than fifteen minutes. Within fifteen days, ash from the volcano traveled around the Earth in the upper atmosphere. The explosion caused a landslide on the side of the mountain that became one of the largest such events in recorded history. More than four hundred meters of the top of the mountain disappeared. People near the volcano died immediately. Thousands of animals, birds and fish also were killed. In just a short period, thirty-five thousand hectares of forest timber was destroyed. The heat was so fierce it killed every living thing in the immediate area, even bacteria. The Native American Indians in Washington State still call Mount Saint Helens by its Indian name: Loowit. It means "Lady of Fire." On the morning of May eighteenth, nineteen eighty, the mountain again became a "Lady of Fire." The volcano had been giving warnings for three months. These warnings were in the form of many small earthquakes. On March twenty-seventh, a small explosion blew away the ice and snow at the very top of the mountain. Steam burst from the top of the volcano. By May seventeenth, more than ten thousand earthquakes had been measured. These earthquakes had caused the north face of the mountain to push out more than one hundred forty meters. Volcano experts say this was strong evidence that hot liquid rock had risen high into the volcano. It was the day before the major explosion. Several weeks earlier, government officials had declared an emergency.
They barred people from entering the Mount Saint Helens area. A special permit was needed to travel near the mountain. Officials also forced people who lived near the mountain to leave their homes. Many were angry, and demanded permission to return. Some people violated government rules and visited the Mount Saint Helens area. They did not think the volcano represented a real danger. Workers who planted trees near the mountain were given documents that permitted them to continue their work. Scientists also were at the mountain, studying the volcano. Many of these people were killed when the volcano exploded. Fifty-seven people died as a result of the explosion. The volcano exploded for more than eight hours. Then the explosions slowly began to decrease in force. But Mount Saint Helens was not finished. Five smaller explosions followed during the summer and autumn of nineteen eighty. Each explosion produced ash that rose twelve to fourteen kilometers into the sky. In the twenty-five years since then, small explosions, earthquakes and other volcanic events were reported at the mountain. The most recent began in October of last year. But none of the events is comparable to the May eighteenth explosion. Still, experts say Mount Saint Helens will explode again sometime in the future. The United States Congress created the Mount Saint Helens Monument in nineteen eighty-two. The monument covers a total of forty-four thousand five hundred hectares of the Mount Saint Helens area. It includes the mountain and much of the land around it. The United States Forest Service supervises the area. But nature controls it. Trees, animals, fish, flowers and plants were left to a natural recovery process. Humans were not permitted to help. The natural area around Mount Saint Helens that was almost completely destroyed is being rebuilt by nature. Many scientists have studied what happened in this natural laboratory. 
They found that nature is very quick to heal the wounds caused by the huge explosion. Scientists have learned much about volcanic activity since Mount Saint Helens exploded twenty-five years ago. More than twenty smaller explosions were observed at the volcano between nineteen eighty and nineteen eighty-six. Scientists also have been watching recent activity there. One thing they have learned is that a volcano can come very close to exploding without giving any warning. They also learned that volcanic activity can continue for years without any explosions taking place. The United States Geological Survey is responsible for providing warnings of possible volcanic explosions. It operates five volcano observation centers with the help of government agencies and universities. Late last month, scientists with the Geological Survey released a report on the nation's one hundred sixty-nine active volcanoes. The report rates the most dangerous volcanoes in the United States. It also discusses problems with current methods of estimating future volcanic activity. The scientists proposed a plan to improve volcano observations and provide better information about volcanic activity. They said the system could help prevent unnecessary and costly safety measures when such activity will not result in an explosion. They said it also would help warn airplanes of the possibility of dangerous ash in the atmosphere. Volcanic ash has caused millions of dollars in damage to planes and other aircraft in the past. Earlier, we told how Scott Johnson was concerned twenty-five years ago that the ash from Mount Saint Helens might cover his home. That did not happen, although the ash was deep in some parts of town. It had to be removed from streets and from tops of houses. Travel was almost impossible for several days. Today, Scott Johnson is an engineer in Seattle, Washington. Leslie Johnson is a medical doctor in Portland, Oregon. 
Both say the Mount Saint Helens explosion was an experience they never will forget. This program was written by Paul Thompson and Nancy Steinbach. Cynthia Kirk was our producer. I'm Barbara Klein. And I'm Doug Johnson. Join us again next week for another SCIENCE IN THE NEWS, in VOA Special English.
Multiplication Is Rotation

Solomon introduces the multiplication map R_q(p) = q p q* on the quaternions, showing it to be a linear map taking pure quaternions to pure quaternions, with the neat composition property R_q R_r = R_qr. Now if n = (a, b, c) is a unit vector in R^3 and nq = (0, a, b, c), then q = cos(t/2) h + sin(t/2) nq, where h is the identity quaternion (1, 0, 0, 0), is a quaternion of unit magnitude for any angle t. Then R_q becomes a rotation through angle t on R^3 with axis n. Julstrom shows the equivalence of the matrix approach to the multiplication map based on the function M. Note that for a unit quaternion, the conjugate is the inverse. These operations will be incorporated in a single rotation function.

Let us rotate the vector (1, 1, 1) 45 degrees about the x-axis, represented by (1, 0, 0). Thus the vector (1, 1, 1) when rotated about the x-axis has new position (1, 0, 1.41421). Now look at another example. Rotate (3, 4, 5) 50 degrees about the axis determined by the nonunit vector (2, -1, 1). So, (3, 4, 5) rotates to (-0.0527686, -0.0347517, 7.07079).

Next we illustrate the very important fact that any sequence of rotations about various axes through the origin is equivalent to a single rotation about a single axis intersecting the origin. In fact, if q = cos(s/2) h + sin(s/2) nq and r = cos(u/2) h + sin(u/2) mq represent two rotations, then R_q R_r = R_qr, so we can recover the composite angle and axis from the product of the two quaternions. To illustrate, take the vector (5, 7, 9) first rotated by 25 degrees about axis (1, .5, 1) and then 15 degrees about (.5, 0, 1). So the vector (5, 7, 9) terminates at (3.27799, 5.27079, 10.7923). Next do the rotations in the reverse order and see what difference that makes. Now (5, 7, 9) terminates at (3.18801, 4.76137, 11.0529). So the order matters!

We can illustrate the multiplicative property of the mapping M by looking directly at the quaternions that represent the two rotations above. Now the amazing fact is that we can easily extract the axis and the angle from a quaternion that represents a rotation.
The first component of the rotation quaternion is the cosine of half of the angle of rotation, while the remaining three components give the axis, though not necessarily as a unit vector. The latter is easily normalized. So, using the example above, we can determine the angle t as well as the unit axis. Thus the single angle that accomplishes r and s above is 38.9884 degrees about the axis (.574224, .239584, .782858). We verify this next.

Next let us do a 90 degree rotation about the x-axis followed by a 90 degree rotation about the z-axis and ask what single rotation accomplishes this sequence. So we see that the sequence of these 90 degree rotations is equivalent to a 120 degree rotation about (1, 1, 1), which has been normalized to (.57735, .57735, .57735). As a last illustration here, let us find the result of a 180 degree rotation about the x-axis followed by a 120 degree rotation about (1, 1, -1) followed by a 90 degree rotation about (0, 1, 0). The result is a 180 degree rotation about (0, 1, 1).

Finally, what if we want to rotate a vector about an axis that does not pass through the origin? The geometry of the situation is clear. Translate the axis to the origin, do the rotation, and then translate back. Let p denote a point on the axis and v a vector that is to be rotated through an angle t about that axis. Then v rotates to r(v - p) + p, where r is the rotation through angle t. Let us rotate v = (1, 1, 1) through 45 degrees about an axis parallel to the x-axis but passing through p = (0, 2, 2). Thus (1, 1, 1) has final resting position (1, 2, .585786).

For more on this fascinating topic see Kuipers.

Copyright © 2002 Wolfram Media, Inc. All rights reserved.
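The rotations worked through above are easy to reproduce outside Mathematica. The following is a hedged sketch in Python (the function names are mine, not the article's); it implements R_q(p) = q p q* with q = cos(t/2) + sin(t/2) n:

```python
import math

def qmul(a, b):
    # Hamilton product of quaternions stored as (w, x, y, z)
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def rotate(v, axis, degrees):
    # Rotate vector v about axis (need not be unit length) by the given angle.
    t = math.radians(degrees)
    norm = math.sqrt(sum(c * c for c in axis))
    u = tuple(c / norm for c in axis)
    q = (math.cos(t / 2),) + tuple(math.sin(t / 2) * c for c in u)
    qc = (q[0], -q[1], -q[2], -q[3])   # conjugate = inverse for a unit quaternion
    p = (0.0,) + tuple(v)              # embed v as a pure quaternion
    return qmul(qmul(q, p), qc)[1:]

print(rotate((1, 1, 1), (1, 0, 0), 45))    # ~ (1, 0, 1.41421), as in the text
print(rotate((3, 4, 5), (2, -1, 1), 50))   # ~ (-0.0527686, -0.0347517, 7.07079)
```

Composing two calls to rotate reproduces the order-dependence shown above, and multiplying the two quaternions first recovers the single equivalent rotation.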
Parallelism and Concentric Circles

Parallel lines. I was playing with a lock in geometry class one day, and my teacher asked me if the outside line and the inside line of the lock are parallel. I said that they were, but he told me that they could not be, and he never gave me a reasonable explanation. I think that the inside and outside lines could in fact be parallel. Could you please give me an explanation as to why they are not considered parallel? Everything I have looked up about them never says anything about the lines not being able to curve like that in an arch.

By definition, parallel lines must be straight lines in the same plane. So even though the lines do not intersect, they are not parallel.
Scott P. Smith

Lines are, according to Wikipedia, perfectly straight. This coincides with the idea that equations of lines are of the form ax + by = c or the possibly more familiar y = mx + b (slope-intercept form). So your teacher is correct. Two arcs that never meet, like those of your lock hasp, would be considered concentric; that is, they have a common center but different radii. Concentric arcs never meet either. It may seem like semantic games, but the field of mathematics is one of rigid and precise definitions within a strict logical and semantic framework.

Mathematics is often at the forefront of human thought and deals with questions that are so "far out" they may not be obviously tied to the "real world". The way mathematicians can be sure they are correct is if other mathematicians can follow each and every step of their argument and find no "holes" or logical errors. Thus they are extremely precise and tend to break things down into a series of steps, each of which has its own proofs. That being said, there is very little mathematics out there that has NOT been applied to other problems once technologists and other scientists get hold of it. So if math seems a bit over-precise, cut it some slack. We all benefit from the work mathematicians do and have done.
The answer does in fact depend on what you mean by parallel. In the strictest sense, a line does not curve. Parallel lines must be in the same direction everywhere. The curves are only parallel at the closest points. The top of one circle is not parallel to the left side of the other.

In some areas of science and engineering, the term parallel can be used in a variety of ways. One refers to two paths that are identical but offset. Two circles of the same size but with different centers would qualify. Another use refers to two paths that always maintain the same distance from each other: closest points between the paths always have the same distance. In this case, concentric circles are parallel. Objects traveling along parallel lines will always be moving in the same direction as each other. Objects traveling along concentric circles only maintain the same directions if they are both traveling at the same number of revolutions per second.
Dr. Ken Mellendorf

Update: June 2012
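For straight lines in the general form mentioned in the first answer, the parallelism test can be written down directly. This is a small illustrative sketch, not part of the original answers:

```python
# Lines a1*x + b1*y = c1 and a2*x + b2*y = c2 are parallel (or coincident)
# exactly when their normal vectors (a, b) are proportional,
# i.e. when a1*b2 - a2*b1 == 0.
def parallel(a1, b1, a2, b2):
    return a1 * b2 - a2 * b1 == 0

print(parallel(2, -1, 4, -2))   # True: both lines have slope 2
print(parallel(2, -1, 1, 1))    # False: slopes 2 and -1
```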
In a joint project between the Universities of Strathclyde and Glasgow, Imperial College London and the National Physical Laboratory, researchers have developed a portable way to produce ultracold atoms for quantum technology and quantum information processing. Researchers at Harvard University recently showcased a very interesting project where a human participant managed to control a rat. Scientists in Scotland have come up with a method of creating 3D printers which can make human stem cells. Cliff swallows are evolving so fast that they have developed shorter wings to deal with the threat of speeding cars. Big eyes may be beautiful, but they could be what did for the Neanderthals, say University of Oxford scientists. University of Georgia researchers say they've discovered important genetic clues about archaea, one of Earth's oldest life forms. Moore's Law, the much-cited theory that rates of technological improvement increase exponentially over time - is true, say MIT researchers. 'Focusing points' off certain coasts can create tsunamis much higher than previously believed possible. A species of algae that can cope with 'battery acid' conditions managed it by copying genes from bacteria. The Earth's only been this warm for about a quarter of the time over the last 11,300 years, a new reconstruction of the planet's temperature history shows. Giant camels once roamed Canada's High Arctic - much further north than previously believed - and may have evolved their flat feet and humps as a result. If you need help in snow and ice, who better to ask for help than a yeti? Relatives of the alligator made it to North America ten million years earlier than mammals, swimming there more than 19 million years ago. Tying a smoke ring in a knot sounds impossible - but University of Chicago physicists have done something similar by creating a vortex knot for the first time, in a container of fluid. 
Schrödinger's Cat could be (almost) as easy to observe as the internet's millions of LOLcats, with confirmation that there may be a way round Heisenberg's famous Uncertainty Principle after all. A South Dakota scientist has discovered a new species of dinosaur - and found that its babies were the meal of choice for a type of crocodile. In a deeply weird experiment, scientists have transplanted eyes onto the rear ends of tadpoles, and discovered that they can still see. A lucky find has allowed scientists to identify one of the earliest evolutionary examples of limbs used for feeding, along with the oldest nervous system to stretch beyond the head in fossil record. ESA’s planning to crash a spacecraft into an asteroid called Didymos, to, well, see what happens. Using CAT scans, Idaho State University researchers have made 3D virtual reconstructions of the jaws of the ancient spiral-toothed fish Helicoprion.
Demographic divergence between the sexes is a major consequence of sexual selection. Matrix-based demographic measures, including the sensitivity and elasticity of λ (population growth rate, fitness) to survival and fertility rates, are powerful indices of intersexual divergence. Many morphological, behavioral and ecological differences distinguish males and females in lekking long-tailed manakins (Chiroxiphia linearis); none is more dramatic than the demographic divergence. Only 16 of 142 (8%) banded males copulated during an eight-year period. The mean estimated age of male copulators was 10.1 years (SD = 2.2), and only 5 of 166 copulations were by males ≤ 8 years old. Females probably begin reproduction at age one or two. The reproductive value curve reached a peak of 15.0 in the twelfth year for males, versus 2.4 in the sixth year for females. The matrix-based elasticity of λ (proportional sensitivity of the growth rate, or fitness) to survival rates was greater in males (91% of total elasticity) than in females (80% of total). In a literature-based, interspecific comparison, the difference in elasticity to survival between the male and female manakins (91 - 80 = 11; ranks 2 and 9 of 16 species/sex combinations) was greater than that between the sexes in northern elephant seals (90 - 84 = 6; ranks 3 and 8), which have the highest variance of male mating success documented for mammals, red deer (88 - 87 = 1; ranks 4 and 5), Galapagos cactus finches (79 - 74 = 5; ranks 10 and 12), and acorn woodpeckers (76 - 74 = 2; ranks 11 and 13). In the face of continuing debate over appropriate measures of sexual selection, matrix-based demographic techniques facilitate quantitative comparative analyses of the life history consequences of sexual selection. Measures of intersexual demographic divergence may provide insights into heretofore puzzling instances of sexual selection in species with little dimorphism in size or ornament.
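The elasticity calculation named in the abstract can be sketched compactly. The 2x2 stage matrix below is purely illustrative (not the manakin data); the point is the standard recipe: elasticity e_ij = (a_ij / λ) · v_i w_j / <v, w>, where λ is the dominant eigenvalue of the projection matrix and w, v are its right and left eigenvectors.

```python
import math

# Illustrative 2-stage projection matrix (made-up values, not manakin data):
# row 1 holds fertility terms, row 2 holds survival/transition rates.
a11, a12, a21, a22 = 0.0, 1.5, 0.5, 0.8

tr, det = a11 + a22, a11 * a22 - a12 * a21
lam = (tr + math.sqrt(tr * tr - 4 * det)) / 2    # dominant eigenvalue = growth rate

w = (a12, lam - a11)   # right eigenvector: stable stage distribution
v = (a21, lam - a11)   # left eigenvector: reproductive values

dot = v[0] * w[0] + v[1] * w[1]
A = ((a11, a12), (a21, a22))
# e_ij = (a_ij / lam) * v_i * w_j / <v, w>
E = [[A[i][j] * v[i] * w[j] / (dot * lam) for j in (0, 1)] for i in (0, 1)]
total = sum(sum(row) for row in E)
print(round(total, 6))   # elasticities of lambda always sum to 1
```

Summing the elasticities of the survival entries versus the fertility entries is what yields figures like the 91% versus 80% of total elasticity reported above.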
<urn:uuid:c7d2d1a7-5793-41df-a6d9-a2d37fbee790>
2.765625
418
Academic Writing
Science & Tech.
32.017685
The Miller-Urey experiment helped show how it was possible to derive some of the components of life from isolated molecules. In the 1950s, biochemists Stanley Miller and Harold Urey conducted an experiment which demonstrated that some of the basic building blocks of life, including amino acids, could be formed spontaneously by simulating the conditions of Earth's early atmosphere. The presence of an ocean was important to help preserve the forming molecules in a quiet, stable environment. Their experiments lent support to the theory that the first life forms arose spontaneously through naturally occurring chemical reactions. In whatever way that life on Earth came to be, by 3.8 BYA, the middle of the Archean age (very early in the history of the Earth!), life on Earth included both early autotrophs and early heterotrophs. Organisms that are able to make their own food (in the form of sugars) by using the energy of the Sun are called autotrophs, meaning "self-feeders". Photosynthesis is the name of the process by which these autotrophs feed. Organisms which require food from sources outside themselves are called heterotrophs, meaning "other-feeders". Because autotrophic bacteria were able to feed themselves by using the energy of the Sun, they were not dependent on a limited food supply and were able to proliferate. The appearance of these organisms capable of performing photosynthesis was of monumental significance -- if it weren't for the photosynthetic activity of these early bacteria, Earth's atmosphere would still be without oxygen and the appearance of oxygen-dependent animals, including humans, would never have occurred!
<urn:uuid:ba3df9db-b8a7-4b20-b689-540928e83329>
4.09375
395
Content Listing
Science & Tech.
25.813003
Taxon Attribute Profiles Predaceous Water Beetles Dytiscidae (predaceous water beetles) is one of the largest and most commonly encountered groups of aquatic beetles. Both adults and larvae are predaceous, and will attack a wide variety of small aquatic organisms. Although most species are small to medium sized, some adults can attain a length of 35 mm. Taxonomy and Ecology Synopsis of included taxa Australia has a rich fauna of Dytiscidae, with 226 species in 42 genera (Gooderham & Tsyrlin, 2002). Watts (1978) supplied a revision of Australian Dytiscidae; and Pederzani (1995) provided keys to all dytiscid genera and subgenera for the world. Watts (2002) provided a series of easy to use keys to the genera of both adult and larval dytiscids. A checklist of Australian Dytiscidae is available on the Australian Biodiversity Information Facility website (Lawrence et al., 2002). General overviews of Australian Dytiscidae, including biological and ecological information, are provided by Lawrence et al. (1991) and Gooderham & Tsyrlin (2002). Dytiscids have a characteristic appearance, and can generally be recognized by having a hard, smooth, oval body, without any ventral spine, having the hind legs flattened and with a fringe of hairs so that it can act like a paddle, and having long, thin antennae. In the water, dytiscids swim by moving their hind legs simultaneously, like oars, while the similar appearing Hydrophilidae alternate the movement of their hind legs. Dytiscidae are all aquatic, and are common throughout the continent, with the greatest number of species found in the south-east. Adults are capable of flying to isolated habitats, which has allowed their spread to aquatic habitats throughout Australia. Dytiscids generally prefer slow moving or stagnant water, such as ponds, lakes, billabongs, dams, and pools at the edges of streams. 
They require atmospheric air, and the adult beetles go to the surface to gather air which they store in a chamber underneath their elytra (wing covers) to enable them to increase the time they can be submerged. Larvae lack this ability, but many species use a siphon in the form of long filaments at the end of the abdomen. Status in Community Both adults and larvae are predaceous. The adults are capable of eating through a normal mouth opening. Most larvae do not have a mouth opening, but have long, sickle like jaws which enable them to suck fluids out of their prey items. The larvae will attack animals much larger than themselves, and have been known to feed on other insects, crustaceans, worms, leeches, mollusks, tadpoles and small fish. Reproduction and Establishment A variety of mating behaviours occur in the Dytiscidae, and many of them reflect the fact that females are more selective in choosing a mate than males (Miller, 2003). Males have a variety of methods to achieve mating, and in some cases females have behaviours to resist mating, such as swift and erratic swimming when approached by a male. Some males have even developed sucker-shaped setae on the legs which allow them to grab females and prevent them from escaping during mating. Adult dytiscids are capable of sustained flight, and often travel some distance to disperse and find new habitats. They generally fly in the evening or at night, and they use reflected light from a water surface as a method of finding a new habitat. They can be confused by artificial reflected surfaces (e.g. glass) or lights, and are often attracted to these sources rather than water. Adults generally lay eggs in the underwater stems of plants, using their ovipositor to cut the stem to deposit the eggs. Dytiscid larvae are very aggressive predators which attack a variety of prey. Pupation takes place out of the water. The larva forms a cell in damp soil near the water, and the adults return to the water after emerging. 
Hydrology and Salinity
Gooderham & Tsyrlin (2002) and Chessman (2003) reported that Dytiscidae as a family are quite tolerant of high levels of salinity. However, Bailey et al. (2003) list several examples of specific dytiscids which have very narrow ranges of salinity tolerance and could be eliminated by minor rises in salinity levels. Alternating periods of flooding and drought could affect dytiscid populations, which need water for survival. The strong flying ability of adults will allow recolonization of aquatic habitats after periods of drought.
As with most invertebrates, there are insufficient data on Dytiscidae to discuss conservation status in any intelligent manner. Many species are known from only a few specimens, but whether this reflects insufficient collecting or declining population numbers cannot even be guessed at.
There are no records of Aboriginal use of dytiscids as food, although many insect species were eaten by Aborigines (Tindale, 1966). The large dytiscid Cybister explanatus is used as food in Mexico, where the beetles are eaten roasted with salt and in tacos (Ramos-Elorduy & Pino, 1990).
Dytiscidae share the characteristics of macroinvertebrates that could make them suitable species for inclusion in programs for monitoring water quality (Water and Rivers Commission, 1996; Chessman, 2003; Minnesota Pollution Control Agency, 2004). As yet, we lack specific examples of their use in such programs.
List of MDB Species
Table 1. Dytiscidae recorded from the Murray Darling Basin (50 species in 19 genera).
Bailey, P., Boon, P. & Morris, K. (2002) Australian Biodiversity Salt Sensitivity Database. Land & Water Australia. http://www.rivers.gov.au/research/contaminants/saltsen.htm
Chessman, B. (2003) SIGNAL 2 - A Scoring System for Macroinvertebrates ('Water Bugs') in Australian Rivers. Monitoring River Health Initiative Technical Report no. 31, Commonwealth of Australia, Canberra.
Gooderham, J. & Tsyrlin, E. (2002) The Waterbug Book: a guide to the freshwater macroinvertebrates of temperate Australia. CSIRO Publishing.
Kefford, B.J., Papas, P.J. & Nugegoda, D. (2003) Relative salinity tolerance of macroinvertebrates from the Barwon River, Victoria, Australia. Marine and Freshwater Research, 54: 755-765.
Lawrence, J.F. & Britton, E.B. (1991) Coleoptera (Beetles). Pp. 543-683, in Insects of Australia: A textbook for students and research workers. CSIRO. 2nd Edition.
Lawrence, J.F., Weir, T.A. & Pyke, J.E. (2002) Australian Faunal Directory: Checklist for Coleoptera: Adephaga: Dytiscidae. Australian Biological Resources Survey, Department of the Environment and Heritage. http://www.deh.gov.au/cgi-bin/abrs/abif-fauna/tree.pl?pstrVol=ADEPHAGA&pintMode=1
Miller, K. (2003) The phylogeny of diving beetles (Coleoptera: Dytiscidae) and the evolution of sexual conflict. Biological Journal of the Linnean Society, 79: 359-388.
Minnesota Pollution Control Agency (2004) Wetlands: Monitoring Aquatic Invertebrates. http://www.pca.state.mn.us/water/biomonitoring/bio-wetlands-invert.html
Pederzani, F. (1995) Keys to the identification of the genera and subgenera of adult Dytiscidae (sensu lato) of the world (Coleoptera Dytiscidae). Atti della Accademia Roveretana degli Agiati, Serie 7B, 4: 5-83.
Ramos-Elorduy, J. & Pino, M. (1990) Caloric content of some edible insects of Mexico. Revista de la Sociedad Quimica de Mexico, 34(2): 56-68.
Tindale, N.B. (1966) Insects as food for the Australian Aborigines. Australian Natural History, 15(6): 179-183.
Water and Rivers Commission (1996) Macroinvertebrates & Water Quality. Water Facts 2. http://www.wrc.wa.gov.au/public/waterfacts/2_macro/WF2.pdf
Watts, C.H.S. (1978) A revision of the Australian Dytiscidae (Coleoptera). Australian Journal of Zoology, Supplementary Series no. 57: 1-166.
Watts, C.H.S. (2002) Checklists & Guides to the Identification, to Genus, of Adult & Larval Australian Water Beetles of the Families Dytiscidae, Noteridae, Hygrobiidae, Haliplidae, Gyrinidae, Hydraenidae and the Superfamily Hydrophiloidea (Insecta: Coleoptera). Identification and Ecology Guide no. 43. Cooperative Research Centre for Freshwater Ecology.
<urn:uuid:2ceebfde-5e11-4e4d-a993-9a8011eb67a6>
3.546875
1,999
Knowledge Article
Science & Tech.
44.871346
On Tuesday, June 5, Venus passed in front of the Sun – an event that was visible on seven continents for those that were fortunate enough to have clear weather. These "transits" of Venus are very rare, coming in pairs separated by more than a hundred years. This June's transit, the second of a 2004-2012 pair, won't be repeated until the year 2117. Credit: NASA/SDO, AIA Credit: JAXA/NASA/Lockheed Martin The first image is a composite of images taken by the Solar Dynamics Observatory that shows the path that Venus took across the disk of the Sun. The second is a close-up image taken by Hinode – a joint JAXA/NASA mission to study the connections of the sun's surface magnetism, primarily in and around sunspots. The Goddard Astrobiology Analytical Laboratory released some exciting news about the age-old question of origin of the chemical components necessary for life. According to Dr. Michael Callahan of Goddard, "People have been discovering components of DNA in meteorites since the 1960′s, but researchers were unsure whether they were really created in space or if instead they came from contamination by terrestrial life. For the first time, we have three lines of evidence that together give us confidence these DNA building blocks actually were created in space." The findings imply that some asteroids and comets may have the chemistry necessary to make the building blocks of essential biological molecules. Happy 21st anniversary, Hubble! To celebrate this milestone, the telescope was pointed at a lovely pair of interacting galaxies called Arp 273. The shape (which reminds some of a rose) is due to the gravitational tidal pull between the pair, which is distorting the disk of the larger galaxy. It's exactly the sort of gorgeous imagery we've come to expect from the telescope. The still image follows.
This stunning new image was taken as the first six James Webb Space Telescope flight mirrors were being prepped for cryo testing at Marshall Space Flight Center. You can read more about this mirror milestone in the NASA.com feature. Sorry we were slow with posts last week – we were swamped with preparations for the government shut-down that (thankfully) never happened. We've got a bunch of things in the works, but we'll start with a link round-up. Gamma-ray Bursts (GRBs) are huge explosions in space, and scientists think they happen either when a very massive star explodes or when two very dense neutron stars collide. Either way, it's thought that a GRB signals the birth of a black hole. Very short duration GRBs are less common than another kind of burst that lasts longer, more than two seconds. Also, their shorter duration makes them harder to study. This new supercomputer simulation of short GRBs has shown that merging neutron stars could indeed power short GRBs. You can read all the details in this web feature. In honor of the 30th anniversary of the Space Shuttle Program, employees down at Kennedy Space Center came together for this impressive themed aerial portrait. I'm not sure what image we would pick at Goddard, since the research here is so diverse! Any ideas? Post them in the comments! There's a lot more awesomeness below… last week was a busy one for space stuff! Meet "Harry," a bald eagle recently spotted here at NASA Goddard! Geeked on Goddard has a few more photos, as well as some information about bald eagles in Maryland. While we're usually talking about the space exploration and research going on here, it's also worth mentioning that Goddard covers over a thousand acres of land, much of it in a natural state. We've got lots of wooded areas and a lake, plenty of space for all of the geese and deer (and, apparently, bald eagles!) that live here.
Though we’re just a handful of miles from Washington, DC, it can be pretty peaceful to walk through the woods at Goddard. Due to the one-two punch of a federal holiday and a bit of wintry weather that caused NASA Goddard to open a few hours late today, here’s a belated awesomeness round-up! Credit: NASA/Goddard Space Flight Center/Bill Hrybyk One year ago, I had just returned to Goddard from an epic adventure – over a week in Florida for back-to-back launches, stranded for a little while because the DC area was being hammered with a record-breaking snowstorm during my trip. Whee! But in honor of the 1-year anniversary of the launch of the Solar Dynamics Observatory, I wanted to post this amazing video of SDO’s Atlas V blowing away a gorgeous sundog. This was definitely the highlight of the launch! I know that I kicked off last week’s round-up with a snow picture, but look! We got another six inches! Here’s the real NASA connection, though… imagery of the storm from the MODIS instrument, aboard the Terra satellite.
<urn:uuid:de400fde-4b17-402d-8f12-fb381ea46b9e>
3.359375
1,086
Content Listing
Science & Tech.
52.115201
12.1. Random walk
Early population studies concentrated on local population dynamics. However, spatial processes are very important in the life-systems of most species. They may modify system behavior so significantly that a local model would be unable to predict population changes. Several ecological problems cannot be addressed without analysis of organism dispersal. Examples are: spread of invading species, epidemics, etc. Let's take the problem of pest insect control as an example. The first question is what area to treat. If this area is too small it will be immediately colonized by immigrants. Crop rotation is often used to prevent propagation of pests, but fields with the same crop in two consecutive years should be separated by more than the migration distance. Finally, many insect pests are sampled using traps (pheromone-baited traps or UV-traps). To determine pest density from trap catches it is important to know the dispersal abilities of the insect. The main problem: how many organisms disperse beyond a specific distance? Random walk is simulated here assuming that 50% of individuals stay at the same place, 25% move to the left, and 25% move to the right. After several time steps the distribution of organisms becomes close to the normal distribution. The normal (= Gaussian) distribution for organisms released at the origin corresponds to the equation
f(x) = 1/(σ√(2π)) · exp(−x²/(2σ²))
A random walk can also be defined in 2-dimensional space. If organisms were released at the center of coordinates (0,0), then their distribution can be described by the 2-dimensional normal distribution
f(x,y) = 1/(2πσ²) · exp(−(x² + y²)/(2σ²))
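The walk described above (50% stay, 25% step left, 25% step right) is easy to simulate directly. This small self-contained sketch (my own illustration, not from the original text) shows that after many steps the positions spread out with a variance close to steps/2, as the normal approximation predicts:

```python
import random

def walk(steps, rng):
    """One random walker: 50% stay, 25% step left, 25% step right."""
    x = 0
    for _ in range(steps):
        r = rng.random()
        if r < 0.25:
            x -= 1
        elif r < 0.50:
            x += 1
        # otherwise stay put (probability 0.5)
    return x

rng = random.Random(42)
steps, walkers = 100, 20000
final = [walk(steps, rng) for _ in range(walkers)]

mean = sum(final) / walkers
var = sum((x - mean) ** 2 for x in final) / walkers
# each step adds variance 0.25*(-1)**2 + 0.25*(+1)**2 = 0.5,
# so after 100 steps the variance should be close to 100/2 = 50
print(round(mean, 2), round(var, 1))
```

Plotting a histogram of `final` would reproduce the bell-shaped curve the text describes.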
<urn:uuid:ca3010cd-c8d2-4de7-aca3-c6d11708081b>
3.59375
314
Academic Writing
Science & Tech.
34.296425
Having the same units on both sides of an equation does not guarantee that the equation is correct, but having different units on the two sides of an equation certainly guarantees that it is wrong! So it is good practice to reconcile units in problem solving as one check on the consistency of the work. Units obey the same algebraic rules as numbers, so they can serve as one diagnostic tool to check your problem solutions. For example, in the solution for distance in constant acceleration motion, the distance is set equal to an expression involving combinations of distance, time, velocity and acceleration. But the combination of the units in each of the terms must yield just the unit of distance, since the left hand side of the equation has the dimension of distance. Combinations of units pervade all of physics, and doing some analysis of the units is common practice. For example, in the case of centripetal force, F = mv²/r, it is not immediately evident that the quantity on the right has the dimensions of force, but it must. Checking it out: kg · (m/s)² / m = kg · m/s², which is the newton. Often the use of dimensional analysis can be helpful as a reminder of what specialized units contain. In the case of the magnetic force on a moving charge, F = qvB, the magnetic field unit is a tesla. But what is a tesla? Checking out the force equation can remind you of the combination of basic units that is contained in the unit named a tesla: B = F/(qv), so 1 T = 1 N·s/(C·m) = 1 kg/(A·s²).
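The bookkeeping this passage describes can be automated by treating a unit as a dictionary of base-unit exponents. A small sketch (my own illustration, not part of the original page) checking the centripetal-force and tesla examples:

```python
def combine(*terms):
    """Multiply units together by summing base-unit exponents."""
    out = {}
    for t in terms:
        for base, exp in t.items():
            out[base] = out.get(base, 0) + exp
    return {b: e for b, e in out.items() if e != 0}

def inv(u):
    """Reciprocal of a unit: negate every exponent."""
    return {b: -e for b, e in u.items()}

kg = {'kg': 1}
m  = {'m': 1}
s  = {'s': 1}
A  = {'A': 1}   # ampere; charge has units A*s

velocity = combine(m, inv(s))              # m/s
newton   = combine(kg, m, inv(s), inv(s))  # kg*m/s^2

# centripetal force m*v^2/r has units kg*(m/s)^2/m
centripetal = combine(kg, velocity, velocity, inv(m))
print(centripetal == newton)  # → True

# tesla from F = qvB: B = F/(qv), with charge q in A*s
tesla = combine(newton, inv(combine(A, s)), inv(velocity))
print(tesla)  # → {'kg': 1, 's': -2, 'A': -1}
```

A mismatch between the two sides of an equation shows up immediately as unequal exponent dictionaries — exactly the diagnostic the passage recommends.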
<urn:uuid:f3c29376-5895-46a7-93e9-90191b3127b4>
3.953125
283
Tutorial
Science & Tech.
40.43111
Date of introduction and origin
Styela clava was probably introduced in 1952, as it was found in Plymouth, Devon, in 1953 (Carlisle 1954; Houghton & Millar 1960). This species was introduced from the north-western Pacific, where it occurs from Japan to Siberia.
Method of introduction
It was transported on the hulls of warships following the end of the Korean War in 1951.
Reasons for success
It is a hardy species, capable of withstanding salinity changes and temperature fluctuations.
Rate of spread and methods involved
Its spread has been rapid: from Plymouth in 1953 to Southampton Water in 1959 and Milford Haven in south-west Wales (Coughlan 1969), and across the Channel to France by 1968. It was first recorded in Ireland in 1972 (Minchin & Duggan 1988). Possible methods of dispersal include transport on ships' hulls or on transferred oysters. It is distributed on south and west coasts of England as far north as Cumbria. It is found in abundance in certain parts of the Solent (S. King pers. comm.), and also in certain parts of Loch Ryan and other scattered Scottish localities (S.M. Smith pers. comm.). Elsewhere in Europe it is found in France, The Netherlands, Denmark and Ireland (Minchin & Duggan 1988).
Factors likely to influence spread and distribution
It is believed only to be able to spawn in waters above
Effects on the environment
Serious competition for food between individuals and with other species can result if the population becomes big.
Effects on commercial interests
It is a fouling pest on ships' hulls and oyster beds.
Control methods used and effectiveness
Biological control through the deliberate introduction of Carcinus maenas into cages surrounding the sea squirt has proved to be an unsuccessful control agent.
Various combinations of salinity, temperature and exposure to air have proved successful in killing Styela clava without causing the host oysters any harm.
Beneficial effects
None are known, though it harbours many epibionts so may aid localised increases in biodiversity.
In Lancashire this species was first found in a man-made pool at Morecambe, from where it spread to other high-level pools, under boulders and stones and down the shore (Coughlan 1985).
Carlisle, D.B. 1954. Styela mammiculata, a new species of ascidian from the Plymouth area. Journal of the Marine Biological Association of the United Kingdom, 33:
Coughlan, J. 1969. The leathery sea squirt - a new ascidian from Milford Haven. Nature in Wales, 11:
Coughlan, J. 1985. Occurrence of the immigrant ascidian Styela clava Herdman in Heysham Harbour, Lancashire. Porcupine Newsletter, 3: 85-97.
Houghton, D.R., & Millar, R.H. 1960. Spread of Styela mammiculata Carlisle. Nature, 185:
Millar, R.H. 1960. The identity of the ascidians Styela mammiculata Carlisle and Styela clava Herdman. Journal of the Marine Biological Association of the United Kingdom, 39: 509-511.
Minchin, D., & Duggan, C.B. 1988. The distribution of the exotic ascidian, Styela clava Herdman, in Cork Harbour. Irish Naturalists' Journal, 22:
Acknowledgements (Contributions from questionnaire)
D. Jones, Lancaster University.
3.25
785
Knowledge Article
Science & Tech.
54.267962
Defining an Enum with Reflection Emit An enumeration field is defined using the EnumBuilder.DefineLiteral method, as demonstrated by the code example for that method. Before the enumeration is used, the EnumBuilder.CreateType method must be called. CreateType completes the creation of the enumeration. In the .NET Framework versions 1.0 and 1.1, it is necessary to define enumerations using TypeBuilder because EnumBuilder emits enumerations whose elements are of type Int32 instead of the enumeration type. In the .NET Framework version 2.0, EnumBuilder emits enumerations whose elements have the correct type.
<urn:uuid:c77ff92c-433f-4c99-b3a5-e4eebfdc2481>
2.75
139
Documentation
Software Dev.
21.581653
The Camarillo fold belt (CFB) in the Western Transverse Ranges poses a significant seismic hazard to nearly one million people living in Southern California, yet few published geologic or geochronological data from this fold belt exist. The CFB is composed of several actively growing folds that are developed along the western extent of the highly segmented Simi fault zone, which extends for 40 km through urbanized Ventura and Los Angeles Counties. This research includes five balanced cross sections that are used to determine the magnitude of fault and fold related deformation. In addition, eight new absolute ages on deformed sedimentary strata exposed at the surface and in three paleoseismic trenches are presented and used to quantify the local timing and rates of fault slip across the fold belt, which is critical to assessing earthquake hazard. The results presented by Duane E. DeVecchio and colleagues show that local deformed sedimentary strata are an order of magnitude younger than previously thought, fault slip rates are comparable to other studied fold belts in Southern California (0.8-1.4 mm/yr), and discrete faults within the fold belt are younger toward the west. A model of punctuated lateral fault propagation is proposed to explain westward growth of the Simi fault, which occurs in discrete pulses that are separated by intervals of fault displacement accumulation and fold amplification during constant fault length conditions. Lateral fault growth is limited in space and time by an orthogonal north-striking fault set, which juxtaposes a series of west-plunging anticlines that decrease in structural relief and age toward the west. More information: Duane E. DeVecchio et al., Earth Research Institute, University of California, Santa Barbara, CA 93106-9630, USA. Lithosphere. Posted online 28 Feb. 2012; print issue: April 2012; doi: 10.1130/L136.1
<urn:uuid:6d6b5db0-2155-4613-8b4d-fffde7b66e83>
3.40625
413
Academic Writing
Science & Tech.
34.19138
Active galactic nuclei measure the universe
Oct 3, 2011
A common type of active galactic nuclei (AGN) could be used as an accurate "standard candle" for measuring cosmic distances – according to astronomers in Denmark and Australia. AGNs are some of the brightest objects in the visible universe and the technique could allow astronomers to determine much larger distances than is possible with current techniques, the scientists say. Standard candles are distant objects with known brightness that give astronomers a very accurate measure of cosmic distances – the dimmer the candle appears to us, the farther away it must be. Studying these candles is crucial to our understanding of the age and energy density of the universe. Indeed, the use of supernovae and Cepheids as standard candles turned our understanding of the cosmos on its head through the discovery of the acceleration of the expansion of the universe and the introduction of dark energy. However, reliable measurements of distances greater than redshift of about 1.7 are beyond the current capabilities of known standard candles. Now, Darach Watson and colleagues at the University of Copenhagen and the University of Queensland have shown that a tight relationship between the luminosity of an AGN and the radius of its "broad-line region" can be used to measure cosmic distances. The radius is found using "reverberation mapping", an established technique for studying the inner structure of AGNs, to gauge their mass. However, until this latest work, the method had not been considered in the search for new standard candles. According to Copenhagen astronomer Kelly Denney, the approach works using type-1 AGNs – those with broad-line emissions in the visible spectrum. These objects have a dense area of gas and dust surrounding the black hole called the broad-line region. The region is so-called because light emitted by the gas has much broader line widths than light from most other astronomical sources.
Heart of the matter
Much closer to the black hole is the accretion disc where matter falling into the black hole collects, causing a great deal of light to be produced. As this light travels outwards, it ionizes gas in the broad-line region, causing it to emit light with the distinct broad line widths because the gas is moving at many thousands of kilometres per second due to the gravity of the black hole, and the Doppler shifts associated with this motion cause the broadening. However, the amount of light produced in the accretion disc is not constant. By carefully comparing the time at which the light is emitted from the accretion disc and the time at which the ionized light is re-emitted from the broad-line region, astronomers can measure a time lag between the light arriving from the two sources. This delay, multiplied by the speed of light, gives the radius of the broad-line region. This radius correlates tightly with the luminosity of the AGN, and the luminosity in turn yields the distance, because the flux we observe falls off as the square of the distance. The technique, however, is difficult and it wasn't until 2009 that Denney – then working with Bradley Peterson's group at Ohio State University – vastly improved the accuracy of the data from the radius-luminosity relationship such that it would allow a precise distance to be calculated. When Darach Watson came across the result, he wondered why this was not being used as a distance indicator already. "The simple answer was 'Huh, well, I don't know!' Everyone in the AGN community typically wants to know why no-one has thought of this before!" said Denney.
Candle in the wind
To confirm the technique's ability to give the distance of an AGN, Watson and colleagues looked at a sample of 38 AGNs at known distances. They found that reverberation mapping gave a reasonable estimate of the distance to the AGNs.
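The chain of reasoning in the article — time lag → broad-line radius → luminosity → distance — can be sketched in a few lines. Everything numeric here is made up for illustration: the real radius–luminosity calibration is an empirical fit, and the constant `k` and quadratic form below are hypothetical stand-ins, not the published relation.

```python
import math

C = 2.998e8  # speed of light, m/s

def broad_line_radius(lag_days):
    """Radius of the broad-line region from the reverberation time lag: R = c * tau."""
    return C * lag_days * 86400.0  # metres

def luminosity_from_radius(radius_m, k=1.0e11):
    """Hypothetical R-L relation L = k * R**2 (the measured relation has
    R roughly proportional to sqrt(L); k is a made-up calibration constant)."""
    return k * radius_m ** 2  # watts

def luminosity_distance(L_watts, flux_w_m2):
    """Inverse-square law: F = L / (4 * pi * d**2), solved for d."""
    return math.sqrt(L_watts / (4.0 * math.pi * flux_w_m2))

# toy numbers: a 20-day lag and an assumed observed flux
R = broad_line_radius(20.0)
L = luminosity_from_radius(R)
d = luminosity_distance(L, flux_w_m2=1.0e-13)
print(f"{d:.3g} m")
```

The design point is that only the lag and the flux are observed; the radius and luminosity are inferred, which is why the scatter in the R–L calibration directly limits the distance precision the article discusses.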
Denney quipped, "This almost makes the notion of AGNs as standard candles an oxymoron, since it's their variability that makes the method work!" Currently, the AGN technique is not as reliable as those based on Cepheids or supernovae. However, unlike a supernova – which lasts for a relatively short time – an AGN can be observed over long periods, reducing observational uncertainties. Also, AGNs exist at all redshifts, so astronomers can pick and choose which ones to study. In the coming months, the researchers aim to reduce the scatter in their current data and work on higher redshift reverberation mapping experiments. "One drawback of the method is that, due to time-dilation effects, the monitoring time required to measure time delays can become very long, especially for high-redshift sources. We are investigating ways to reduce this time, such as working in the UV, where the time delays are shorter," says Denney. A preprint of a paper about the work is available on arXiv.
About the author
Tushna Commissariat is a reporter for physicsworld.com
<urn:uuid:d23e270c-6bf8-4219-b4e3-98d935755554>
3.828125
1,018
Truncated
Science & Tech.
37.830796
El Niño periods increase growth of juvenile white seabass (Atractoscion nobilis) in the Southern California Bight
Studies of the impact of El Niño periods on marine species have usually focused on negative, highly visible effects, e.g., decreasing growth rates or increasing mortality due to a decline in primary productivity in typically nutrient-rich upwelling zones; but positive effects related to elevated water temperature are also known. This study examined how the growth rate of juvenile white seabass, Atractoscion nobilis, responded to changes in ocean temperature in an El Niño period (1997–1998) in the northern portion of the Southern California Bight, USA. Growth rates of juvenile white seabass during their first 4 years of life were estimated as the slopes of linear relationships between body mass and age (from otoliths) of 800 fish collected at 11 stations throughout the bight. Growth rates differed significantly among cohorts hatched in 1996–2001. Specifically, white seabass that hatched in 1996 and 1997 grew significantly faster than those that hatched in 1998, 1999, and 2001. These differences in growth rates of cohorts appeared to be driven by variation in sea-surface temperature (SST). Growth rates averaged over the first 3 or 4 years of life were significantly positively correlated to average daily SST during the first 1–4 years of life. Increased growth of juvenile white seabass during the warm El Niño period likely provided a number of benefits to this warm-temperate species. This study demonstrated that some species will benefit from these warm-water periods despite reduced system-wide primary production.
Williams, J., L. Allen, M. Steele, and D. Pondella. 2007. El Niño periods increase growth of juvenile white seabass (Atractoscion nobilis) in the Southern California Bight. Marine Biology 152:193-200.
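The abstract's growth-rate estimator — the slope of a linear fit of body mass against otolith-derived age — looks like this in practice. The data points below are synthetic stand-ins chosen for illustration, not the study's measurements:

```python
import numpy as np

# Synthetic, illustrative data only: otolith ages (years) and body
# masses (kg) for one hypothetical cohort.
age  = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0])
mass = np.array([0.6, 0.9, 1.3, 1.6, 2.0, 2.3, 2.7])

# growth rate = slope of the ordinary least-squares fit: mass ~ age
slope, intercept = np.polyfit(age, mass, 1)
print(round(slope, 3))  # → 0.7  (kg per year, for these made-up points)
```

Fitting one such slope per cohort and comparing them across hatch years is the comparison the abstract reports.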
<urn:uuid:f2237a52-ed9d-41a3-83a7-064252f31a65>
3.0625
391
Academic Writing
Science & Tech.
36.235198
I just learned (via John Lynch) about a paper on cetacean limbs that combines developmental biology and paleontology, and makes a lovely argument about the mechanisms behind the evolution of whale morphology. It is an analysis of the molecular determinants of limb formation in modern dolphins, coupled to a comparison of fossil whale limbs, and a reasonable inference about the pattern of change that was responsible for their evolution. One important point I’d like to make is that even though what we see in the morphology is a pattern of loss—whale hindlimbs show a historical progression over tens of millions of years of steady loss, followed by a near-complete disappearance—the molecular story is very different. The main players in limb formation, the genes Sonic hedgehog (Shh), the Fgfs, and the transcription factor Hand2, are all still present and fully functional in these animals. What has happened, though, is that there have been novel changes to their regulation. Even loss of structures is a consequence of changes and additions to regulatory pathways. This retention of major genetic pathways should be obvious just looking at a whale. They evolved from four-limbed tetrapods, and lost their hindlimbs as more and more locomotor function was committed to the tail and flukes, yet they still retain forelimbs. It is the same set of genes that operate in the hind- and fore-limbs, so of course you can’t just get rid of them—this is a case of selective limb loss. In addition, the genes have multiple functions making simple gene loss untenable. Shh, for instance, is a critical signaling molecule involved in the specification of midline structures in early development, and loss of the gene as a whole is lethal. What evolution did was to modify the domains of expression, selectively inactivating limb genes in the hindlimb region. 
One other curious feature of cetacean development is that they start by making perfectly respectable hindlimb buds, at about the fifth week of gestation. As is typical, they go through a period of phylotypy where their embryos resemble the embryos of other vertebrates, and they initiate the formation of the full four limbs. What happens next, though, is that the hindlimbs regress and their remnants become embedded in the body wall. This gives us a clue about the change: the molecules involved in limb initiation are still active, but the ones responsible for limb maintenance in early development have been shut down. If you’ve taken any developmental biology courses at all, you’ve already been exposed to these well-known TLAs: AER and ZPA. The AER, or apical ectodermal ridge, is one of the earliest signs of limb formation. It is a ridge of thickened ectodermal tissue that demarcates the distal margin of the forming limb, and is a signaling center, with a whole family of molecules, Fgf4, 8, 9, and 17, emanating from it and triggering the growth of the structure. The ZPA is the zone of polarizing activity. It’s another signaling center that forms on the posterior margin of the limb, and as you might guess from the name, is important in setting up the polarity of the limb, but it’s also important in maintaining the tissue. It is defined early by a domain of expression of the Hand2 transcription factor, and cells in the ZPA then turn on the Shh gene. Analysis of the development of limbs in a small number (four—dolphin embryos are not easy to come by, or casually used) of spotted dolphin (Stenella attenuata) embryos was sufficient to come up with a straightforward picture of the differences in molecular development. Cetaceans form the AER for both the fore- and hind-limb. They express Fgf8. This is the normal tetrapod pattern. Cetaceans form a ZPA for the forelimb. 
Hand2 is expressed broadly at first, and then is restricted to just the posterior part of the fore-limb; Shh is expressed in a perfectly ordinary fore-limb ZPA. Hand2 is not expressed in the hind limb region. Shh is never activated. No ZPA forms for the hind limb, and the structure arrests and ultimately regresses. Looking at the evolutionary history of whales, the authors think they can pin down when this downregulation of Hand2 occurred. Shutting off that gene causes a complete loss of the limb, so older fossils that show a gradual diminution of the hind limbs must have retained an active Hand2/Shh combination; the complete loss occurred about 34 million years ago, so that would have been the ‘moment’ when this restriction would have caused the final disappearance of the whale’s posterior limbs. A promising correlation in the fossil morphology is that there was a concurrent reorganization of the vertebral skeleton at the same time that the hind limbs were lost. The distinct identity of the sacral vertebrae was lost, and the caudal vertebrae became more homogeneous. This implies a change in the expression pattern of the Hox genes, which are responsible for anterior-posterior positional information. That suggests that we ought to look upstream of Hand2, and ask what’s going on with Hox gene expression in cetaceans—the changes that streamlined the vertebral column may have simultaneously induced the changes in Hand2. It is tempting to speculate that modulation of Hox gene expression along the craniocaudal axis underpins the altered expression of Hand2 in the hind limb and posterior flank of Stenella. This hypothesis is supported by work showing that ZPA position in the fore limb is specified by the anterior boundary of Hoxb8 and by the recent finding that modulation of Hoxd gene expression in mice can shift the boundary of Hand2 in the early limb bud. 
This finding provides a mechanistic link between hind-limb reduction and homogenization of the posterior axial skeleton in whale evolution and can be tested by studies of Hoxd expression. How do you make a whale? Clearly, you don’t just “lose” the genes required to make hind limbs. You have to revise and add to the control information for existing banks of regulatory genes involved in limb formation. Thewissen JGM, Cohn MJ, Stevens LS, Bajpai S, Heyning J, Horton WE (2006) Developmental basis for hind-limb loss in dolphins and the origin of the cetacean body plan. Proc.Nat.Acad.Sci. USA 103(22):8414-8418.
<urn:uuid:a8fc79e8-f6ba-4a50-bfdd-54a47df7b542>
2.734375
1,408
Personal Blog
Science & Tech.
42.952867
After the success of the audacious Entry Descent and Landing (EDL) in delivering the Curiosity rover to Mars, the space engineers of this world are no doubt looking for the next challenge. How about something further away than Mars? And how about landing on terrain that we’ve not explored before – say a liquid? Maybe we could sail about? Seems unlikely, but there’s a place that has all these challenges: the lakes of Saturn’s moon Titan. Titan has long been one of the most interesting planetary targets in our solar system; though a moon of Saturn, it is actually larger (at least by volume) than the planet Mercury. It puzzled us more after it was discovered that it has quite a dense hazy atmosphere. Titan’s atmosphere is pretty similar to ours on Earth; it’s dominated by nitrogen gas and generates a surface pressure about one and a half times that at Earth’s surface. If you were on the outside of our solar system looking in (like we currently are for the Alpha Centauri system) it would look a pretty intriguing possibility for life. The Cassini mission, currently touring about the Saturnian system, revealed the icy moon Titan to be a complex and unique place. Shrouded in its hazy atmosphere, we could only guess at what lay beneath this before Cassini could dispatch its Huygens lander and use the on-board radar to reveal the surface below. It was worth the wait, with Huygens making a squelchy landing into an alien terrain dominated by hydrocarbons and water. Measurements by the Cassini spacecraft itself have revealed a ‘methane cycle’ like the water cycle we have on Earth. It really is hydrological, but not as we know it! Aside from discoveries of volcanoes, weather and complex organic molecules, one of the most exciting developments in the Cassini mission was the observations of lakes across the Titan surface. Dotted all over the surface and in many shapes and sizes, some are big enough to have been named seas – or Maria. 
Being so far from the sun, the average surface temperature of Titan is a chilly -194°C. So rather than water, these lakes and seas are thought to be made up of mixtures of methane and ethane, making them a crucial part of the moon’s methane cycle. But exploring the chemistry, depth and (probably most excitingly) possibility for biology on these hydrocarbon lakes will be impossible before we land an interplanetary boat on these seas. Added to this would be the potential for this probe to paddle about, sampling the atmosphere and mapping the shores, without all the issues that the Mars rovers have had getting their wheels stuck. How soon will a ‘nautical’ mission take off? There’s nothing planned as yet. Sadly, NASA already passed once on an opportunity to send a boat to Titan. Named the Titan Mare Explorer (TiME), it was proposed as part of the latest round of Discovery missions, and lost out to the InSight mission which will head to explore the Martian interior in 2016. More recently a Spanish engineering firm revealed concept plans for another mission, the Titan Lake In-situ Sampling Propelled Explorer (TALISE). More ambitious than TiME, this design does incorporate a way of propelling itself across the seas, either with wheels or a screw. Even if a mission to send a boat to Titan gets approved tomorrow, there would still be seven years or so of travel to this frozen world. So until then, I suppose you’ll have to content yourselves with this written view from ‘The Shores of Titan’.
<urn:uuid:d0338fe1-f34e-47f8-b0ca-220abc3f2077>
3.671875
748
Nonfiction Writing
Science & Tech.
43.141745
An informal sense

Building numbers from smaller building blocks: Any counting number, other than 1, can be built by adding two or more smaller counting numbers. But only some counting numbers can be composed by multiplying two or more smaller counting numbers.

Prime and composite numbers: We can build 36 from 9 and 4 by multiplying; or we can build it from 6 and 6; or from 18 and 2; or even by multiplying 2 x 2 x 3 x 3. Numbers like 10 and 36 and 49 that can be composed as products of smaller counting numbers are called composite numbers. Some numbers can't be built from smaller pieces this way. For example, the only way to build 7 by multiplying and by using only counting numbers is 7 x 1. To "build" 7, we must use 7! So we're not really composing it from smaller building blocks; we need it to start with. Numbers like this are called prime numbers. Informally, primes are numbers that can't be made by multiplying other numbers. That captures the idea well, but is not a good enough definition, because it has too many loopholes. The number 7 can be composed as the product of other numbers: for example, it is 14 x 1/2. To capture the idea that "7 is not divisible by 2," we must make it clear that we are restricting the numbers to include only the counting numbers: 1, 2, 3....

A formal definition

A prime number is a whole number that has exactly two distinct whole-number factors: 1 and the number itself.

Clarifying two common confusions

Two common confusions:
- The number 1 is not prime.
- The number 2 is prime. (It is the only even prime.)

The number 1 is not prime. Why not?

Well, the definition rules it out. It says "two distinct whole-number factors" and the only way to write 1 as a product of whole numbers is 1 x 1, in which the factors are the same as each other, that is, not distinct. Even the informal idea rules it out: it cannot be built by multiplying other (whole) numbers. But why rule it out?! Students sometimes argue that 1 "behaves" like all the other primes: it cannot be "broken apart." 
And part of the informal notion of prime -- we cannot compose 1 except by using it, so it must be a building block -- seems to make it prime. Why not include it? Mathematics is not arbitrary. To understand why it is useful to exclude 1, consider the question "How many different ways can 12 be written as a product using only prime numbers?" Here are several ways to write 12 as a product, but they don't restrict themselves to prime numbers. Using 4, 6, and 12 clearly violates the restriction to be "using only prime numbers." But what about these?
- 3 x 4
- 4 x 3
- 1 x 12
- 1 x 1 x 12
- 2 x 6
- 1 x 1 x 1 x 2 x 6
Well, if we include 1, there are infinitely many ways to write 12 as a product of primes (for example, 1 x 2 x 3 x 2, or 2 x 2 x 3 x 1 x 1 x 1 x 1, and so on without end). In fact, if we call 1 a prime, then there are infinitely many ways to write any number as a product of primes. Including 1 trivializes the question. Excluding it leaves only these cases:
- 3 x 2 x 2
- 2 x 3 x 2
- 2 x 2 x 3
This is a much more useful result than having every number be expressible as a product of primes in an infinite number of ways, so we define prime in such a way that it excludes 1. (So, if 1 is not considered prime, what is it? See multiplicative inverse.)

The number 2 is prime. Why?

Students sometimes believe that all prime numbers are odd. If one works from "patterns" alone, this is an easy slip to make, as 2 is the only exception, the only even prime. One proof: Because 2 is a divisor of every even number, every even number larger than 2 has at least three distinct positive divisors. Another common question: "All even numbers are divisible by 2 and so they're not prime; 2 is even, so how can it be prime?" Every whole number is divisible by itself and by 1; they are all divisible by something. But if a number is divisible only by itself and by 1, then it is prime. 
So, because all the other even numbers are divisible by themselves, by 1, and by 2, they are all composite (just as all the positive multiples of 3, except 3 itself, are composite).

Unique prime factorization and factor trees

The question "How many different ways can a number be written as a product using only primes?" (see why 1 is not prime) becomes even more interesting if we ask ourselves whether 3 x 2 x 2 and 2 x 2 x 3 are different enough to consider them "different ways." If we consider only the set of numbers used -- in other words, if we ignore how those numbers are arranged -- we come up with a remarkable, and very useful, fact (provable).
- Every whole number greater than 1 can be factored into a unique set of primes. There is only one set of prime factors for any whole number.

Under construction: need a section on factor trees

Primes and rectangles

Under construction: text needs to be written here

Seven square tiles can be arranged in many ways, but only one arrangement makes a rectangle.

How many primes are there?

From 1 through 10, there are 4 primes: 2, 3, 5, and 7. From 11 through 20, there are again 4 primes: 11, 13, 17, and 19. From 21 through 30, there are only 2 primes: 23 and 29. From 31 through 40, there are again only 2 primes: 31 and 37. From 91 through 100, there is only one prime: 97. It looks like they're thinning out. That even seems to make sense; as numbers get bigger, there are more little building blocks from which they might be made. Do the primes ever stop? Suppose for a moment that they do eventually stop. In other words, suppose that there were a "greatest prime number" -- let's call it p. Well, if we were to multiply together all of the prime numbers we already know (all of them from 2 to p), and then add 1 to that product, we would get a new number -- let's call it q -- that is not divisible by any of the prime numbers we already know about. (Dividing q by any of those primes would leave a remainder of 1.) 
So, either q is prime itself (and certainly greater than p) or it is divisible by some prime we have not yet listed (which, therefore, must also be greater than p). Either way, the assumption that there is a greatest prime -- p was supposedly our greatest prime number -- leads to a contradiction! So that assumption must be wrong: there is no "greatest prime number"; the primes never stop.

Under construction: text needs to be written here

Suppose we imagine that 11 is the largest prime.
- 2 x 3 x 5 x 7 x 11 + 1 = 2311 ---- Prime!
- No prime on our list divides 2311 with zero remainder, so either 2311 is prime or it has a prime factor larger than 11 (in fact, 2311 is itself prime); either way, 11 is not the largest prime.

Suppose we imagine that 13 is the largest prime.
- 2 x 3 x 5 x 7 x 11 x 13 + 1 = 30031 ---- Not prime!
- But 59 x 509 = 30031, and both 59 and 509 are prime, and both are greater than 13, so 13 is not the largest prime.
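A quick computational aside (mine, not part of the original article): the short Python sketch below redoes the two worked examples above. The function names are my own; `is_prime` is plain trial division, which is adequate at this scale.

```python
def is_prime(n):
    """Trial-division primality test; adequate for small n."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def euclid_number(primes):
    """Multiply the given primes together, then add 1 (Euclid's construction)."""
    product = 1
    for p in primes:
        product *= p
    return product + 1

# Suppose 11 were the largest prime:
print(euclid_number([2, 3, 5, 7, 11]))      # 2311, which is itself prime
# Suppose 13 were the largest prime:
print(euclid_number([2, 3, 5, 7, 11, 13]))  # 30031 = 59 x 509, both prime, both > 13
```

Either way, the construction yields a prime bigger than the supposed "largest" one, exactly as the argument above claims.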
<urn:uuid:89ca1c65-efa4-4469-8377-7b41875b565d>
4
1,637
Knowledge Article
Science & Tech.
75.042532
The hydrothermal vent crab, Bythograea, is a top predator at vent sites in the Pacific Ocean. This crab is present in such high densities that scientists actually use it as an indicator that they are approaching an active vent field. The vent crab is typically found among dense clusters of tubeworms at an average depth of 1.7 miles and can tolerate a temperature gradient that ranges from 77°F in the tubeworm clumps to 36°F, which is the temperature of the water surrounding the vent sites. Because vent fields may be separated by hundreds of miles, scientists have many questions about how they are colonized by the crabs. At the University of Delaware College of Marine Studies, scientists including graduate student Gina Perovich are examining the crab's life stages and reproductive biology to look for clues.

Going Crabbing in the Deep Sea!

To collect a small number of adult crabs for laboratory study, scientists deploy modified minnow traps on the seafloor with the help of the deep-sea sub Alvin. Younger crabs are captured indirectly by collecting clumps of tubeworms at the vent site.
<urn:uuid:a7302feb-305d-44cc-b99a-e2cfbe93580c>
3.765625
274
Knowledge Article
Science & Tech.
44.922857
The VARIANCE function returns the variance of the expression values over all rows. Only one expression is specified as a parameter. You can get the variance without duplicates by using the DISTINCT or UNIQUE keyword in front of the expression, or the variance of all values by omitting the keyword or by using ALL. The return value may differ from the actual evaluated value because it follows the type of the expression specified as the parameter.

VARIANCE( [DISTINCT | UNIQUE | ALL] expression )

The following example returns the variance of the number of gold medals Korea won from 1988 to 2004 in the Olympics (demodb).

SELECT VARIANCE(gold), VARIANCE(CAST (gold AS FLOAT)) FROM participant WHERE nation_code = 'KOR';

=== <Result of SELECT Command in Line 1> ===
variance(gold) variance( cast(gold as float))
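As a rough cross-check on what an aggregate variance function computes, here is a minimal Python sketch. One assumption to flag: it uses the population form (dividing by N); whether this VARIANCE function uses the population or the sample (N−1) form should be confirmed against the formula in the manual. The medal counts below are made up for illustration, not the actual demodb data.

```python
def population_variance(values):
    """Population variance: the mean of squared deviations from the mean."""
    n = len(values)
    mean = sum(values) / n
    return sum((x - mean) ** 2 for x in values) / n

gold = [12, 7, 8, 9, 9]  # hypothetical per-Olympics gold-medal counts
print(population_variance(gold))               # -> 2.8

# Analogue of the DISTINCT keyword: drop duplicate values first
print(population_variance(sorted(set(gold))))  # -> 3.5
```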
<urn:uuid:59d20ba3-0a5a-426b-86c4-1b3c29fc10a9>
3.09375
215
Documentation
Software Dev.
38.5375
An Antarctica Sub

Submersible vessels have been around since the 19th century. However, none is more likely to go to a stranger place than this one. Called the Micro-Submersible Lake Exploration Device, the instrument was a small robotic sub about the size and shape of a baseball bat. Designed to expand the range of extreme environments accessible by humans while minimally disturbing the environment, the sub was equipped with hydrological chemical sensors and a high-resolution imaging system. The instruments and cameras characterize the geology, hydrology and chemical characteristics of the sub's surroundings. Behar supervised a team of students from Arizona State University, Tempe, in designing, developing, testing and operating the first-of-its-kind submarine vessel. "This is the first instrument ever to explore a subglacial lake outside of a borehole," Behar said. "It's able to take us places that are inaccessible by any other instruments in existence." In 2007 an active subglacial water system consisting of several interconnected subglacial lakes was discovered under Whillans Ice Stream using repeat-track data from the ICESat satellite (Fricker and others, 2007). One of these active lakes, subglacial Lake Whillans, was the target of the sub. The sub was deployed by the U.S. team of the international Whillans Ice Stream Subglacial Access Research Drilling (WISSARD) project. The project's objective was to access subglacial Lake Whillans, located more than 2,000 feet (610 meters) below sea level, deep within West Antarctica's Ross Ice Shelf, nearly 700 miles (about 1,125 kilometers) from the U.S. McMurdo Station. The 20-square-mile (50-square-kilometer) lake is totally devoid of sunlight and has a temperature of 31 degrees Fahrenheit (minus 0.5 degrees Celsius). It is part of a vast Antarctic subglacial aquatic system that covers an area about the size of the continental United States. The WISSARD team included researchers from eight U.S. 
universities and two collaborating international institutions. They used specialized tools to get clean samples of subglacial lake water and sediments, survey the lake floor with video and characterize the biological, chemical and physical properties of the lake and its surroundings. Their research is designed to gain insights into subglacial biology, climate history and modern ice sheet behavior. The instrument consists of a mothership connected to a deployment device that houses the submarine. The sub is designed to operate at depths of up to three-quarters of a mile (1.2 kilometers) and within a range of 0.6 miles (1 kilometer) from the bottom of the borehole that was drilled through the ice to reach the lake. It transmits real-time high-resolution imagery, salinity, temperature and depth measurements to the surface via fiber-optic cables. In a race against time and the elements to access the lake before the end of the current Antarctic field season, the WISSARD team spent three days in January drilling a 2,600-foot-deep (800-meters), 20-inch-wide (50-centimeters) borehole into the lake, which they reached on Jan. 28. The sub was then sent down the borehole, where it was initially used to guide drilling operations. When the instrument finally reached the lake, the team used its imagery to survey the lake floor. The data enabled the team to verify that the rest of the project's instruments could be safely deployed into the lake. The WISSARD team was then able to proceed with its next phase: collecting lake water samples to search for microbial life. And that search has apparently paid off. Earlier this month, the team reported that the lake water did indeed contain living bacteria, a discovery that might hold important implications for the search for life elsewhere in the universe. On February 6, 2013, scientists reported that bacteria were found living in the cold and dark in a lake buried a half-mile deep under the ice in Antarctica. 
Descent image via NASA.
<urn:uuid:f5aa292b-8463-430b-9a63-0be5503ac95c>
4
849
Knowledge Article
Science & Tech.
41.329148
PDF Version - 263KB

FAMILY: Convolvulaceae (Morning glory family)

STATUS: Threatened (Federal Register, November 2, 1987)

DESCRIPTION AND REPRODUCTION: The only morning glory vine with large, blue flowers in Florida scrub vegetation (Wunderlin et al. 1980), Florida bonamia is a perennial with sturdy prostrate stems about a meter (3 feet) long. The leathery oval or ovate leaves, up to about 4 centimeters (1.6 inches) long, are either upright or spreading. The flowers are solitary in the leaf axils. The funnel-shaped corolla is 7 to 10 centimeters (2.7 to 3.9 inches) long and 7 to 8 centimeters (2.7 to 3.1 inches) across, pale but bluish purple with a white throat, similar to the cultivated "Heavenly Blue" morning glory. The fruit is a capsule.

RANGE AND POPULATION LEVEL: Florida bonamia is endemic to the peninsula where most of its known populations exist in the Ocala National Forest, Marion County. It also occurs south of the forest in Polk, Orange, Highlands, Hillsborough, and Hardee Counties. It was relocated in Lake County south of Lakes Minnehaha and Susan (1 site) and in Manatee County. The historic range of Florida bonamia was from central Highlands County northward through northwestern Osceola, western Orange, Lake, eastern Marion, and northwestern Volusia Counties on ridges and uplands of the central peninsula. Collections of the plant were made in Sarasota, Manatee, and Volusia Counties in 1878, 1916, and 1900, respectively (Wunderlin et al. 1980). The Florida Department of Environmental Protection (1998) reported that the largest population of this species at Lake Louisa State Park in Clermont appears to be increasing in number and spreading out across the site.

HABITAT: Florida bonamia's habitat is sand pine (Pinus clausa) scrub vegetation with evergreen scrub oaks and sand pine. Sunny openings in the vegetation are occupied by reindeer moss (Cladonia), lichens, and herbs. 
In the Ocala National Forest, where most of its remaining populations exist, Florida bonamia is restricted to these bare sunny sand areas, including the margins of sand pine stands on road rights-of-way, fire lanes, and other places which are kept clear of trees and shrubs. Florida bonamia also occurs in clearcut areas in the Ocala National Forest. In scrub vegetation in Highlands and Polk counties, Florida bonamia co-exists with at least three Federally listed plants: Highlands scrub hypericum (Hypericum cumulicola); papery whitlow-wort (Paronychia chartacea); and scrub plum (Prunus geniculata). Bonamia also occurs with the endangered scrub lupine (Lupinus aridorum).

REASONS FOR CURRENT STATUS: Urban and agricultural development, especially citrus groves, have extirpated the plant from most of its former range and continue to be the main threats. In Polk, Hardee, Orange, and Highlands Counties, remnant populations are highly susceptible to obliteration of the vegetation for citrus groves and residences. Florida bonamia is also susceptible to trash dumping, invasion by exotic plants and weeds, and damage from off-road vehicles. Normal ecological succession also poses a threat to Florida bonamia unless the habitat is kept open by occasional fires or equivalent mechanical land disturbance. The state of Florida currently lists this plant as endangered, but the law does not provide for habitat protection.

MANAGEMENT AND PROTECTION: Populations of Florida bonamia in the Ocala National Forest appear to be large and quite secure. The species may be spreading from a limited original range within the Forest. The distribution (as mapped from roads) is roughly oval-shaped and does not seem to coincide with any changes in vegetation or soils, suggesting that the distribution may reflect expansion of the plant's range along roads. Current and planned management practices ensure an abundance of the plant's early successional habitat. 
Forest Service management has also limited off-road vehicle use. Florida bonamia is currently protected at 7 sites on the Lake Wales Ridge, and acquisition of additional land is ongoing (Schultz et al. 1999). Hartnett and Richardson (1989) have shown that Florida bonamia has long-lived fleshy root systems that enable the plant to recover rapidly after fires, and that the plant also maintains substantial seed banks in the soil. A study conducted at Lake Louisa State Park found that Florida bonamia did not seem to be affected by the application of a monocot-specific herbicide applied for the control of non-native pasture grasses; at least two applications of the herbicide in early spring were needed to reduce the grasses (FDEP 1998).

Florida Department of Environmental Protection. 1998. Experimental restoration procedures on a scrub site containing Florida bonamia (Bonamia grandiflora) at Lake Louisa State Park, Clermont, Florida. Final report to U.S. Fish and Wildlife Service. Florida Department of Environmental Protection, Division of Recreation and Parks, Bureau of Parks, District 3 Administration. 10 pp.

Hartnett, D.C. and D.R. Richardson. 1989. Population biology of Bonamia grandiflora (Convolvulaceae): effects of fire and seed bank dynamics. Amer. J. Botany 76:361-369.

Schultz, G.E., L.G. Chafin, and S.T. Krupenvitch. 1999. Rare plant species and high quality natural communities of twenty-six CARL sites in the Lake Wales Ridge Ecosystem. Final report of Florida Natural Areas Inventory for U.S. Fish and Wildlife Service. 202 pp.

U.S. Fish and Wildlife Service. 1987. Endangered and threatened wildlife and plants; determination of threatened status for Bonamia grandiflora (Florida bonamia). Federal Register 52(21):42068-42071.

Wunderlin, R., D. Richardson, and B. Hansen. 1980. Status report on Bonamia grandiflora. Unpublished report prepared for U.S. Fish and Wildlife Service.

Last Updated: 08/2009
<urn:uuid:77f99dae-e60d-4bec-a1cb-6126c50d5264>
3
1,327
Knowledge Article
Science & Tech.
42.511183
A buffer can have blank areas called display margins on the left and on the right. Ordinary text never appears in these areas, but you can put things into the display margins using the display property. There is currently no way to make text or images in the margin mouse-sensitive. The way to display something in the margins is to specify it in a margin display specification in the display property of some text. This is a replacing display specification, meaning that the text you put it on does not get displayed; the margin display appears, but that text does not. A margin display specification looks like ((margin left-margin) spec). Here, spec is another display specification that says what to display in the margin. Typically it is a string of text to display, or an image descriptor. To display something in the margin in association with certain buffer text, without altering or preventing the display of that text, put a before-string property on the text and put the margin display specification on the contents of the before-string. Before the display margins can display anything, you must give them a nonzero width. The usual way to do that is to set these variables:

left-margin-width
This variable specifies the width of the left margin. It is buffer-local in all buffers.

right-margin-width
This variable specifies the width of the right margin. It is buffer-local in all buffers.

Setting these variables does not immediately affect the window. These variables are checked when a new buffer is displayed in the window. Thus, you can make changes take effect by calling set-window-buffer. You can also set the margin widths immediately.

set-window-margins window left &optional right
This function specifies the margin widths for window window. The argument left controls the left margin and right controls the right margin (default
<urn:uuid:5374e3a4-d421-4072-8ed6-b4e415566090>
2.796875
370
Documentation
Software Dev.
51.871235
The Earth is Warming

2009 tied for second hottest year since records began
In the Southern Hemisphere, 2009 was the hottest year on record
Earth has been accumulating heat since the 1970s
Mean surface temperature change between the 1950s and 2000s
Warming in Greenland and Arctic has been amplified as models predict
West Antarctica has been warming
A cold December 2009? Depended on where you live. (°C)
January 2010 hottest January in satellite record
2010 starting off as hottest year on record. Temperatures at 14,000' from satellites. Each line is a different recent year. The top line ending in a small box is 2010. The heavy line is an average for the last 20 years. Make your own chart here.
2009 second hottest year on record in Australia. 2000-2009 hottest decade.
Jan. 13, 2010: Melbourne has hottest overnight temperature in 108 years.
February 2009: Australia suffers worst drought in history; massive wildfires
Western US Drought Continues, December 29, 2009
Colorado River inflow to Lake Powell lowest on record (2009=95%)
Western snow melting earlier in spring
Western wildfire frequency and spring-summer temperatures since 1970
Greenland ice mass shrinking (Science Magazine; subs. required)
Antarctica losing ice
Mountain glaciers declining worldwide
Grinnell Glacier, Glacier National Park
Glacier NP forecast ice-free by 2030
Retreat of Gangotri Glacier, one of largest in Himalayas, 1780-2001
Arctic sea ice shrinking: Arctic Ocean forecast ice-free by 2030, or sooner
Observed versus modeled Arctic sea ice decline. The IPCC was too conservative.
Arctic air temperature "hockey stick." The Medieval Warm Period and the Little Ice Age show much less change than last 100 years.
Arctic coastline eroding as sea level rises
Arctic river discharge increasing as ice melts
US plant zones shifting north
Increase in number of days between last and first frost, US Northeast, 1901-2001. 
(Red = more days without frost)
California Lakes Warming
Drought in Middle East
Sea level has been rising and continues to rise
All satellite data and 3-month average
Oceans growing more acidic (lower pH) between the 1700s and 1990s as atmospheric CO2 rises and more dissolves in the oceans

For more evidence and photographs, go to Desdemona Despair. For the best climate blogs, go to Climate Progress, RealClimate and Skeptical Science.
<urn:uuid:fac41262-5c4d-4f62-b815-8ee923a8ccba>
3.171875
514
Personal Blog
Science & Tech.
46.383334
An abstract class is simply a class that can't be instantiated. It can have anything a normal class can have (fields, non-abstract methods, etc.), as well as abstract methods. Most often this is done to provide a class which supplies some of an implementation, but not all of it. ex.: A car class could implement that you steer by turning the wheel, but how it turns would be implemented by inheriting classes. If you want a "class" with only abstract members (aka a purely abstract class), you could create an abstract class, or you can create an interface. An interface is an abstract class which has only abstract methods (static methods/fields are ok, though).
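The car example above can be sketched in Python using the standard abc module (the class and method names here are illustrative, not from the original answer):

```python
from abc import ABC, abstractmethod

class Car(ABC):
    """Abstract class: mixes a concrete method with an abstract one."""

    def steer(self, angle):
        # Concrete, shared behavior: steering delegates to turn().
        return self.turn(angle)

    @abstractmethod
    def turn(self, angle):
        """How the car actually turns is left to inheriting classes."""

class RackAndPinionCar(Car):
    def turn(self, angle):
        return f"turning {angle} degrees via rack and pinion"

# Car() raises TypeError -- an abstract class can't be instantiated.
print(RackAndPinionCar().steer(15))  # -> turning 15 degrees via rack and pinion
```

A purely abstract class in this sense (every method abstract) plays the same role an interface does in Java.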
<urn:uuid:9d293470-6e38-42cd-b19b-d00e2233054e>
3.0625
147
Q&A Forum
Software Dev.
67.53
Darwin's evidence for evolution: Variation I. Chapter 1: Variation under domestication A. What is the biological nature of "variation"? (How does it arise?) 1. There's lots of variation (even more than between related wild species). 2. Darwin suggests that the source of variation (i.e., mutation) is in "reproductive elements (i.e., germ cells) prior to conception" 3. Variation is "random" (i.e., no "inherent trends") 4. Variation is heritable 5. Variation in domestic varieties is different than in wild populations. (In what way? Why does Darwin stress this observation?) B. How did different varieties originate? Darwin uses pigeons as a model to study Variation and its origin 1. Why pigeons? 2. How did different breeds originate? 2 alternative hypotheses: a. Many origins (from many aboriginal species), or b. Single origin (1 species, the rock pigeon, Columba livia). If true, this would suggest that several divergent varieties could originate from a single ancestral variety. What is the evidence that supports this alternative? How does Darwin extrapolate from this conclusion to suggest that different species arise from a common ancestral species? (Hint: use his arguments about the kind of variation that exists in domestic varieties along with his next arguments about varieties being incipient species.) II. Chapter 2: Variation in nature A. It is very difficult to differentiate species and varieties (especially when you become an expert about a particular group) B. Metaphor: This lack of delineation suggests a series, which suggests a "passage" C. Terms "species" and "variety" are "arbitrary" D. Darwin tests the hypothesis that varieties are "incipient species" (species arise from varieties, and varieties are thus potential species) (Data is on page 55 of Darwin) 1. Prediction of cladogenetic Evolution if this hypothesis is correct: Where many species have arisen (e.g., in larger genera), there should be many "incipient species" 2. 
Prediction of Transformism: The number of varieties should be unrelated to the number of species in a genus (e.g., the number of varieties could be the same regardless of which genus a species came from) 3. Prediction of Separate Creation: No particular pattern is predicted
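Darwin's page-55 test can be illustrated with a toy comparison. The counts below are fabricated for illustration (they are not Darwin's data); the point is only the shape of the cladogenetic prediction, that larger genera should carry proportionally more varieties ("incipient species"):

```python
# Hypothetical (species, varieties) counts per genus -- illustrative only.
genera = {
    "large_genus_A": {"species": 30, "varieties": 12},
    "large_genus_B": {"species": 25, "varieties": 9},
    "small_genus_C": {"species": 4,  "varieties": 1},
    "small_genus_D": {"species": 3,  "varieties": 1},
}

def varieties_per_species(g):
    return g["varieties"] / g["species"]

def avg(xs):
    return sum(xs) / len(xs)

large = [varieties_per_species(g) for g in genera.values() if g["species"] >= 10]
small = [varieties_per_species(g) for g in genera.values() if g["species"] < 10]

# Cladogenesis predicts the larger genera have the higher ratio;
# Transformism predicts no relationship between the two averages.
print(avg(large), avg(small))
```

With these made-up numbers the large-genus average exceeds the small-genus one, which is the pattern Darwin argued his data showed.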
<urn:uuid:0b5ae20a-12ab-485f-a40f-6e3861735a59>
4.03125
515
Content Listing
Science & Tech.
45.696952
A stack is an abstract data type and data structure based on the principle of Last In, First Out (LIFO). This is a library for using a dynamic stack in your Linux programs. It is simple but very useful. You can push elements of any type onto the stack. Check out the header file for more. With these routines you are able to gather information given by the Linux kernel to the process stack before transferring control to the process. Most coders do not know about this.
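The library's actual API lives in its header file; as a language-neutral sketch of the LIFO principle it describes (push elements of any type, pop the most recently pushed one), here is a minimal Python analogue with hypothetical method names:

```python
class Stack:
    """Minimal dynamic LIFO stack; elements may be of any type."""

    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        # Last In, First Out: the most recently pushed element comes off first.
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()

    def peek(self):
        return self._items[-1]

    def __len__(self):
        return len(self._items)

s = Stack()
s.push(1)
s.push("two")
s.push(3.0)
print(s.pop())  # 3.0 -- the last element pushed is the first popped
```

The mixed `int`/`str`/`float` pushes mirror the library's claim that elements of any type can be stacked.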
<urn:uuid:5f739746-b479-47c4-a26c-e5381e8a33d6>
3.078125
105
Knowledge Article
Software Dev.
59.842966
What is in space? Many of the bodies in space give off electromagnetic radiation. Things in space which release or reflect light, like stars or planets, can be seen either with the naked eye or with telescopes. By looking at that light, scientists can either directly tell where objects in space are, or at least can make deductions about their locations. The Hubble Space Telescope, a light-receiving telescope, being deployed during the STS-31 flight. Image from the NSSDC Photo Gallery: Spacecraft. http://nssdc.gsfc.nasa.gov/photo_gallery/photogallery-spacecraft.html#HST Other bodies in space are too far away to be seen or are behind something that stops light. We can still find these objects because they give off radio waves that can be detected with radiotelescopes such as those in the Deep Space Network (DSN). Not only do stars, nebulae and other extraterrestrial objects release radio waves, we hope that there is life in the cosmos using radio waves as well. The SETI (Search for Extraterrestrial Intelligence) project is specifically designed to try to detect radio, television, or other electromagnetic communications by intelligent aliens. A radio telescope. Image from Jet Propulsion Laboratory Goldstone Radar web site, http://wireless.jpl.nasa.gov/RADAR/ There are many other things in space that are not emitting electromagnetic waves. Scientists must use deductive logic to detect them. For example, astronomers now believe that they have detected planets around other stars. Although we have never "seen" these planets, there are wobbles in the movement of their stars that could have been made by the gravity of planets pulling on them. Even though observations made by telescopes tell us a lot both directly and indirectly about the universe, there are questions that can only be answered by sending ships like Deep Space 1 into space and making direct up-close observations. Sometimes those ships make accidental discoveries.
Solar wind was detected once we started putting ships in space and found a slight push on them made by charged particles. Scientists are still discovering everything that is in space and still finding new ways of learning more about the cosmos. The list of ways we know about bodies in space will grow and grow over time. What is in space besides planets and stars? What is gravity? What is energy? What are radio waves? What is DSN? How does DS1 take pictures? What will DS1 do on its mission? What is electromagnetic radiation? What is solar wind? Why don't we receive light from all the stars in the universe? What makes EM radiation? Where does energy come from and go?
<urn:uuid:570bb148-eb0f-4d79-8023-43e2ad68e1c6>
3.59375
565
Knowledge Article
Science & Tech.
50.969635
June 29, 2010 The longest sequences of temperature and salinity data analyzed (from 1900 to the present) have confirmed the gradual warming of the waters of the western Mediterranean. The warming has accelerated since the mid-1970s. Researchers from the Spanish Institute of Oceanography (IEO), in collaboration with the Institute of Marine Sciences of Barcelona (ICM, CSIC), have demonstrated that the waters of the western Mediterranean warmed progressively throughout the twentieth century, with the warming more pronounced since the mid-1970s and during the current twenty-first century. The rate of warming is around one thousandth of a degree per year. This work, published this May in the Journal of Marine Systems, reconstructed the longest time series of temperature and salinity in the western Mediterranean, from 1900 to 2008. In addition, the study shows that the way in which the temperature of the western Mediterranean deep layer increases is well correlated with the air temperature in the Northern Hemisphere and with the heat absorbed by the Atlantic Ocean. Thus, the western Mediterranean is an excellent indicator of the changes occurring in the Earth's climate on a larger scale, and the observing systems developed by the IEO and the ICM are therefore very useful tools in the study of climate change. To perform this study the researchers used data from the IEO's current monitoring project in the western Mediterranean (Radmad), from previous projects such as Ecomálaga, Ecomurcia and Cirbal, and from the time series of the l'Estartit station (ICM).
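A warming rate like the one quoted is typically extracted from a time series as a least-squares trend. The sketch below uses synthetic data (a fabricated linear series, not the IEO/ICM measurements) purely to show how such a slope is computed:

```python
# Synthetic yearly deep-water temperatures with a built-in 0.001 deg/yr trend.
years = list(range(1900, 2009))
temps = [13.0 + 0.001 * (y - 1900) for y in years]

# Ordinary least-squares slope: cov(x, t) / var(x).
n = len(years)
mean_x = sum(years) / n
mean_t = sum(temps) / n
slope = (
    sum((x - mean_x) * (t - mean_t) for x, t in zip(years, temps))
    / sum((x - mean_x) ** 2 for x in years)
)
print(round(slope, 4))  # recovers the 0.001 degrees-per-year trend
```

On real data the fit would also carry noise, so the recovered slope would only approximate the underlying trend.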
<urn:uuid:077a8c3b-89f1-4973-ae35-fc62970d982f>
3.125
360
Truncated
Science & Tech.
26.775794
Technologies that have come out of neuroscience have raced ahead of the ethical issues they raise. (Image: Matt Collins) By the third decade of the new millennium, the power of computing will be such that we should be able to scan and download a blueprint of every axon, dendrite, presynaptic vesicle and neuronal cell body, thus creating a software-based facsimile of someone's brain. Human and machine will have become one. Or so observes Ray Kurzweil, the technologist-turned-futurist who has championed the marriage of the biologic and the cybernetic. "Our immortality will be a matter of being sufficiently careful to make frequent backups," he remarks in all earnestness. Kurzweil's vision is often cited in popular accounts about the future of machine intelligence. But, in the end, his grandiose statements serve merely as technophilic conceits. This article was originally published with the title A Vote for Neuroethics.
<urn:uuid:7ed52e83-03eb-452a-996c-b7fe81077b59>
3.015625
236
Truncated
Science & Tech.
30.239695
This summer's record-smashing heat wave in the Washington area and across the country has rekindled the debate over climate change. Although scientists caution against directly linking global warming with extreme weather events such as the Colorado wildfires and Texas drought this summer, they say climate change is clearly making such events far more frequent, likely and intense. Yet in the United States, especially during an election year, the debate over climate change remains stifled by skepticism, with many Americans discounting that the phenomenon exists despite the preponderance of scientific evidence. Around the world, however, that debate has become moot, as nations not only acknowledge the existence of climate change, but brace for the pending storm. So this month, The Washington Diplomat travels the globe to present a special series on how countries such as Bangladesh and Ecuador are trying to turn the climate tide. Bangladesh: Bracing for the Deluge As Ground Zero of Climate Change Bangladesh, called the "ground zero" of global warming, has been bracing for the effects of climate change for years. Read More Eco-Ingenuity: World Pays Ecuador to Forgo Oil Profits Former Ecuadorian Ambassador Ivonne Baki part of one of the most daring schemes in the history of the environmental movement: The international community is literally paying the Ecuadorian government not to drill on its land. It's essentially a bribe — but a noble one. Read More Small Island Nations: Warn of Climate-Triggered Extinction For small island states such as Fiji and the Bahamas, climate change is not just an academic debate. It's an encroaching threat to their very existence. Read More Central America: Weather Patterns Spell Grim Forecast Central America has generated headlines for the surge in violence recently, but thousands have also lost their lives and been displaced because of another phenomenon: the weather. 
Read More

About the Author: Larry Luxner is news editor of The Washington Diplomat. Last edited on August 2, 2012.
<urn:uuid:a89a801e-f061-42cd-bdb1-c099b0faaca0>
2.890625
408
Content Listing
Science & Tech.
23.807778
Helping the Hydrogen Economy Break Even By Eileen McCluskey In WPI's Fuel Cell Center, chemical engineering researchers Ravindra Datta, left, and Nikolaos Kazantzis are working to overcome the practical and theoretical challenges that stand in the way of the widespread deployment of hydrogen fuel cells. When Welsh scientist Sir William Robert Grove built the first fuel cell in 1839, he could scarcely have imagined how big a promise his invention would hold for the world 168 years later. Today, companies, universities, and governments worldwide, recognizing the urgent need to decrease humanity's reliance on fossil fuels and avert the catastrophe of global warming, strive to realize a hydrogen fuel cell–powered economy. WPI scientists work at the forefront of this field. From the university's Global Clean Energy Center to its Fuel Cell Center, in classrooms and in the field, faculty and students drive fuel cell science, and its integral policy and economic issues, toward greater clarity in the hope of fostering a new energy paradigm. "Fuel cells are of great interest as low-polluting, high-efficiency power sources for applications ranging from the laptop to the automobile," says Ravindra Datta, professor of chemical engineering and director of WPI's Fuel Cell Center. "This work is gratifying because it's become increasingly clear that we can't keep doing things the way we've been doing them, in terms of energy use." Fuel cells are essentially high-tech batteries, except that they don't run out of energy as long as they receive a continuous flow of fuel. The most common type uses a proton exchange membrane (PEM) as its electrolyte. In a PEM fuel cell, hydrogen is fed into the anode side, where a platinum-coated catalyst separates it into electrons and protons (hydrogen ions).
The ions pass through the membrane and, reacting on the catalyst, mix with electrons and oxygen on the cathode side to produce water, the cell's only "waste" product, which flows out of the cell. The electrons, which cannot pass through the membrane, produce an electric current as they travel to the cathode through an external wire. To optimize the performance of fuel cells, Datta and Kazantzis have developed an analytical framework that sees the cell's complex chemical systems as a set of interacting subunits. The catalytic reaction networks are represented as circuits, shown here, in which each step in the reaction is represented as a directed branch interconnected at nodes, so that all conceivable reaction paths can be traced. Datta and his team in the Fuel Cell Center work toward improving the cathode's catalytic activity, and to increase the anode's tolerance for carbon monoxide, which easily poisons today's fuel cells but is difficult to eliminate from hydrogen fuel. They also experiment with making the membrane thicker and more durable without sacrificing its ability to conduct ions under dry conditions. And they study the chemistry of platinum with an eye toward reducing its use. "Platinum is the fuel cell's workhorse," explains Datta, "but there's only so much of it around, and it's expensive. To use less, or to replace it, you need to understand how it reacts, which is a key goal for us." When Datta dove into fuel cell research 14 years ago, PEMs were made from powdered platinum. "This was very wasteful," he says. "I could see that chemical engineers, few of whom were involved in fuel cells at the time, had the right skills to develop a more efficient form of platinum for the fuel cells." Datta's latest PEM uses carbon cloth and the high-tech plastic Nafion, which is coated with supported platinum nanoparticles. Nikolaos Kazantzis, associate professor of chemical engineering, works closely with Datta. 
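The PEM chemistry described above lends itself to a quick back-of-the-envelope calculation. The constants below are standard textbook values, not figures from the article: each H2 molecule releases two electrons at the anode, the Faraday constant is 96,485 C per mole of electrons, and the ideal (reversible) cell voltage for the hydrogen-oxygen reaction is about 1.23 V:

```python
# Ideal electrical work obtainable from one mole of H2 in a PEM fuel cell.
F = 96485.0   # Faraday constant, coulombs per mole of electrons
n = 2         # electrons released per H2 molecule at the anode
E0 = 1.23     # ideal reversible cell voltage, volts

charge = n * F          # total charge per mole of H2, coulombs
work_J = charge * E0    # electrical work, joules per mole of H2
print(round(work_J / 1000))  # ~237 kJ/mol
```

The result, roughly 237 kJ per mole of hydrogen, matches the Gibbs free energy of formation of liquid water, which is why 1.23 V is the thermodynamic ceiling a real cell can only approach.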
By using an analytical framework that views the fuel cell as a set of complex systems comprising a number of interacting subunits, Kazantzis helps further illuminate ways to enhance and optimize fuel cells' performance. "It's exciting to address the fragile interface between energy production and environmental protection using a systems-based analytic approach," Kazantzis says. "We're developing much clearer understandings of the most basic science behind how membranes, catalysts, and fuel cell systems operate," Datta notes, "so that those who make the fuel cells can reap the benefit." For example, "the fuel cell's membrane and catalyst account for two-thirds of its manufacturing cost. With better membrane and catalyst design, fuel cells would become more viable economically." So when might a hydrogen economy establish itself? "The fuel is the kicker," Datta comments. "Hydrogen, unlike fossil fuels, is an energy carrier that you have to produce from something else. Producing hydrogen on site, on demand, in small quantities, in compact units for a broad range of applications, including on-board cars, is 15–20 years down the road." The research team led by Ed Ma has been working for 10 years on a reactor that uses a palladium membrane to separate hydrogen economically from natural gas or corn. Much of the funding came from divisions of Shell, which will use it in what it hopes will be the nation's first hydrogen refueling system. In 2004, Shell installed the first hydrogen pump at a retail gas station, in Washington, D.C. Photo courtesy of Shell Hydrogen. Solving the fuel problem has been the quest for the past decade of Yi Hua "Ed" Ma, Frances B. Manning Professor of Chemical Engineering, and his large, well-funded research team in WPI's Center for Inorganic Membrane Studies. Ma knows that the hydrogen economy will depend on a supply of hydrogen pure enough to power fuel cells without poisoning their catalysts and cheap enough to compete with fossil fuels. 
To meet that daunting challenge, Ma and his team have developed a novel chemical reactor built around an ultrathin membrane made from the metal palladium. The reactor uses steam reforming and catalysts to extract hydrogen from natural gas or renewable sources, such as corn. Pure hydrogen passes freely through the palladium membrane and is collected; carbon dioxide, the other major product of the reaction, is sequestered. The work has been funded by major research awards from Shell International Exploration & Production Inc. and Shell Hydrogen, which plans to use the technology to supply hydrogen for what it hopes will be the nation's first successful hydrogen refueling system for fuel cell–powered vehicles. The work has also been supported by the U.S. Department of Energy, which recently selected Ma's team as one of six research groups in the nation to share nearly $10 million for work aimed at promoting the production of hydrogen from coal at large-scale facilities. The technology, which has been turned over to Shell, offers a number of advantages over existing hydrogen production systems. For one, it combines, in a single device, the processes of generating and separating the hydrogen, which will dramatically cut both operating costs and the size of the reactor. It is also able to operate at significantly lower temperatures than conventional reactors, which means it can be made from less-expensive materials. "Now you can put it in a gas station," Ma recently told the Financial Times. In a new seminar, Isa Bar-On and faculty in the Global Clean Energy Center teach students about the science behind technologies like fuel cells, left, but also about the social and regulatory issues that surround them. While most of the buzz surrounding fuel cells has focused on technology for vehicles, mechanical engineering professor Isa Bar-On does not see fuel cell–powered automobiles as a feasible first step. 
"From a manufacturing cost perspective, the nearest-term likelihood will be the development of fuel cells that replace the standard battery in portable devices," she says, noting such change could come within two years. (In fact, Ravi Datta's Fuel Cell Center is working on direct methanol fuel cells that may be well suited for powering laptop computers and other electronic devices.) To help accelerate the pace of change and thwart global warming, Bar-On and her colleagues examine the full gamut of alternative fuels from a range of interdisciplinary perspectives, including economics, environmental impacts, and public policy issues. Toward these ends, Bar-On and 10 other WPI faculty members formalized their ongoing teamwork in mid-2006 by establishing WPI's Global Clean Energy Center. Bar-On directs the center, which includes the Alternative Fuel Economics Laboratory, and engages faculty from the departments of Chemical Engineering, Civil and Environmental Engineering, Mechanical Engineering, and Social Science and Policy Studies. Students demonstrate a lively interest in the center's approach: When eight professors from the Clean Energy Center announced a new graduate-level seminar in July 2006, 16 students immediately signed up. "That's a high number for a first-time graduate course offering," says Bar-On. To gain a macro understanding of alternative energy issues, students participating in the seminar took a global view, literally, of other countries' alternative fuel uses. They learned, for example, of several European countries' recently adopted policy to increase—to 5.75 percent by 2010—their bio-fuel portion of transportation fuels. "But we also learned that these countries buy their palm oil—bio-fuel's raw material—from Indonesia and Malaysia, which are cutting down rain forests to grow the palms," notes Bar-On. "So we see how critical it is to consider the entire production system when measuring environmental costs associated with various fuels." 
Read a related story: Energy Savings Are in the Air "We face a fascinating and interrelated set of issues in the world today regarding how we might continue to fuel our economies," agrees J. Scott Jiusto, assistant professor in the Interdisciplinary and Global Studies Division. "These issues immediately lead to all the major questions regarding environmental change, geopolitics, the very way we see the world and experience our daily lives. "To be able to make the leap from a fossil fuel–dependent lifestyle to one that uses what are today considered alternative fuels," Jiusto continues, "will require imagination and daring from many, including policy makers." Indeed, Jiusto approaches fuel cell technology from a policy perspective, having written and taught extensively about technology-related policies since 1989 and recently addressing cross-border power flows and carbon emissions accounting practices. Jiusto sees the most promising U.S. alternative energy policies springing up at the state level. "This isn't like the 1970s, when the federal Clean Air Act and Clean Energy Act provided important new tools to deal with our most pressing environmental problems," he notes. "For two decades, it's been more difficult to move alternative energy at the federal level, because there's so much entrenched power and sunk capital in the oil economy. These days, those interested in efficient and renewable energy systems find it easier to make progress at the state and local levels. Massachusetts and California, for instance, are at the forefront of policies to regulate and reduce greenhouse gas emissions. "Nationally," Jiusto says, "we need a suite of policies to encourage renewable energy. We need to experiment, try to get enough, for instance, of a hydrogen fueling infrastructure going. It takes many cycles of innovation to move things along." 
Realistically, even the most far-reaching policies are likely to bring about "a mixed energy system, with hydrogen fuel cells as part of a highly layered energy economy," he says. Policy menus at the state level could become the basis for a national program. "Meanwhile," says Jiusto, "carbon emissions continue to climb. We need to take much larger steps, and quickly. The next round of innovation and creativity could help remake the world. If we in the U.S. want to be on the cutting edge, we have to get moving. We have the resources; hopefully, we'll make the right choices."
<urn:uuid:ae8bf110-02a6-4860-a87f-01ae1124d6f7>
3.265625
2,429
Knowledge Article
Science & Tech.
35.974504
Draw the graph of the parametric equations x = t + 2 cos(2t) y = t + 3 sin(3t) for values of t between -40 and 40. Use the following LiveMath notebook to explore the graph of these equations; in addition, you can change the values of the constants in the equations and see the effect on the graph. View an animation to see how this can be
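A LiveMath notebook is not required to explore the curve; a short script in any language can tabulate points over the stated range of t. Here is a Python sketch (the step size of 0.1 is an arbitrary choice):

```python
import math

def point(t):
    """Point on the parametric curve x = t + 2cos(2t), y = t + 3sin(3t)."""
    return (t + 2 * math.cos(2 * t), t + 3 * math.sin(3 * t))

# Sample t from -40 to 40 in steps of 0.1, as in the exercise's range.
ts = [t / 10 for t in range(-400, 401)]
pts = [point(t) for t in ts]

x0, y0 = point(0.0)
print(x0, y0)  # at t = 0: x = 0 + 2cos(0) = 2, y = 0 + 3sin(0) = 0
```

The list `pts` can be fed to any plotting tool; changing the constants 2 and 3 in `point` reproduces the notebook's experiment of varying the coefficients and watching the graph change.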
<urn:uuid:5ed87f9a-bc1a-4f75-9dae-66cda56e7194>
2.84375
90
Tutorial
Science & Tech.
63.103636
Analysis of the interaction of pathogens with plant roots is often complicated by the growth of plants in a soil substrate. A soil-free plant growth system (SPS) was developed that removes the need for a substrate while supporting the growth of seedlings in a nutrient rich, oxygenated environment. The model legume Lupinus angustifolius was used to compare the growth of seedlings within soil and the SPS. Seedlings grown under both conditions were similar in morphology, anatomy and health (measured by leaf chlorophyll abundance) and importantly there was little difference in root growth and development although straighter and fuller root systems were achieved in the SPS. The ease of access to the root system proved efficient for the analysis of root and pathogen interactions with no interference from soil or adhering particulate matter. Following inoculation of L. angustifolius roots with Phytophthora cinnamomi the host/pathogen interaction was easily observed and tissues sampled undamaged. Field of Research 050299 Environmental Science and Management not elsewhere classified
<urn:uuid:5338dc95-6ad8-4d2f-a155-7c1bd5ce0005>
3.234375
217
Academic Writing
Science & Tech.
21.076951
06 May 2010: Report Under Threat in the Gulf, A Refuge Created by Roosevelt Among the natural treasures at risk from the oil spill in the Gulf of Mexico is the Breton National Wildlife Refuge, created by Theodore Roosevelt to halt a grave threat to birds in his era — the lucrative trade in plumage. Now, oil from the BP spill is starting to wash up on beaches where Roosevelt once walked. At the heart of the region now threatened by the massive oil spill in the Gulf of Mexico is a chain of islands containing tens of thousands of seabirds. Thin ribbons of sand rising no higher than 19 feet out of the gulf, these islands — part of the Breton National Wildlife Refuge — currently hold at least 2,000 nesting pairs of brown pelicans, 5,000 pairs of royal terns, 5,000 pairs of Caspian terns, and 5,000 pairs of various seagulls and shorebirds. Earlier this week, strong winds and barrier-like booms kept the oil slick from washing ashore on Breton Island, the Chandeleur Islands, and other links in the refuge. But the National Audubon Society reported May 5 that oil had reached the beaches of the Chandeleurs, putting the abundant birdlife there in peril. More than a century ago, these islands held an even richer assemblage of bird species. Breton Island alone was home to 33 species of wintering waterfowl, wading birds, secretive marsh birds, and various shorebirds. When the birds were in full plumage, Breton Island was quite a sight. Oil-boom barriers now line the shores of the Breton National Wildlife Refuge, home to tens of thousands of breeding seabirds. Because nobody lived on the barrier islands at the turn of the last century — they were isolated miles from Venice, Louisiana, with treacherous gulf waters in between — most Americans had never heard of the sandy breeding ground where pelicans and herons in the thousands populated the beach. But plume hunters in Mississippi and Louisiana had. Regularly gangs made “hits” on the islands’ nesting wading birds and seabirds. 
The birds' feathers were worth a fortune for milliners because the delicate plumage was needed to adorn ladies' hats — the fashion rage of the Gilded Age and beyond. To Roosevelt, the despoilers and plume-hunters of the Gulf South were pirates, and he wanted the feather mafias arrested. "Wreckers are no longer respectable and plume-hunters and eggers are sinking to the same level," Roosevelt wrote about Breton Island. "The illegal business of killing breeding birds, of leaving nestlings to starve wholesale, and of general ruthless extermination, more and more tends to attract men of the same moral category as those who sell whiskey to Indians and combine the running of 'blind pigs' with highway robbery and murder for hire." To stop the carnage, Roosevelt issued an executive order on October 4, 1904 creating the Breton Island Federal Bird Reservation off the southeast coast of Louisiana. The reservation was the second unit — after Pelican Island, Florida — of what would eventually become the U.S. National Wildlife Refuge System, whose stated mission was to "work with others to conserve, protect and enhance fish, wildlife, plants and their habitats for the continuing benefit of the American people." Today, the refuge system numbers 551 protected areas. The history of Theodore Roosevelt and the creation of the U.S.'s first wildlife refuges is one of the seminal stories in American conservation. For most of his adult life, Roosevelt was a staunch Auduboner. As U.S. President from 1901 to 1909, he kept a White House bird list. He regularly met with his ornithologist friends Frank M. Chapman (American Museum of Natural History) and Herbert K. Job (author of Wild Wings and Among the Water Fowl). Breton Island had been formed from remnants of the Mississippi River's Saint Bernard delta.
To some sailors the island was little more than a long sandbar of broken shells, Sargasso weed, and wind-twisted pine boles. But when the sun set in dramatic shades of day-glo red-orange-purple, the island could look more enticing than a Yucatan Peninsula beach resort. To President Roosevelt's way of thinking, he had created a bird reservation at the "mouth of the Mississippi" where his beloved brown pelicans (perhaps the bird species he enjoyed the most) could prosper. Breton Island was a prime place where herons and terns built nests, dived for fish, and hunted for fat shrimp. (Image: U.S. Library of Congress) Six years after leaving the White House, Roosevelt decided to spend a week living on America's wildlife-rich barrier islands. On June 7, 1915, ex-president Roosevelt, accompanied by his wife, Edith, arrived in New Orleans by train and then traveled to the Mississippi Gulf Coast town of Pass Christian. Instead of having professional hunters like Holt Collier or Ben Lilly as his companions, Roosevelt joined up with solid preservationist types, such as Frank M. Miller, the founder of the Louisiana Conservation Commission. Their goal was to travel by boat and inspect Breton Island, Tern Islands, Shell Keys, East Timbalier Island, and, for that matter, a few unprotected keys. Roosevelt always considered Louisiana the "home state" of John James Audubon, so it was fitting to have someone of Miller's stature for the journey to the offshore islands. Herbert K. Job had ventured down from Connecticut with camera in hand, and Roosevelt hoped Job would document federal bird reservations in Louisiana as he had done in Wild Wings for the Florida Keys. Leaving Edith behind in Pass Christian, the men sailed off on the Royal Tern, pulling a dinghy behind them. The vessel's hold was crammed with camera equipment instead of guns. The gulf waters looked darker the farther the Royal Tern ventured from shore, and whitecaps slapped against the prow.
Gigantic rays leaped from the water and a few devilfish swam along the surface. "Globular jellyfish, as big as pumpkins, with translucent bodies, pulsed through the waters," Roosevelt later wrote. The men spotted a loggerhead turtle. To Roosevelt, from a distance, his federal bird reservations looked like long lagoons on the far-off horizon. Meanwhile, sheets of white spray made the crew laugh and scores of black skimmers circled above. As they sailed deeper into the gulf waters, pelicans plunged into the sea, feeding on schools of mullet in the checkered sunshine. Before long they heard the distant murmur of birds. "All of this section is now under Government protection," Parker wrote in Forest and Stream, "and about the middle of June, either late in the evening or early in the morning, one may see the air filled with the white-winged gulls feeding their young on minnows, and even more wonderful, during the heat of the day see some of these small islands, looking at a distance like a wind sheet, since, when the birds are young, the old ones stand over them with outspread wings to protect them both from the sun and the rain." The Royal Tern anchored at Breton Island. If Roosevelt hadn't signed his executive orders, these islands might have been dead zones. Now Breton Island, in particular, gathered in all the bounty of the gulf. Marine life was abundant. William Sprinkle, the reserve's warden and the captain of the Royal Tern, told his passengers horror stories about the plumers and eggers ransacking the rookeries.
Roosevelt took off his shoes in order to tread carefully, wanting to avoid bird nests in the islet’s marshlands, beaches, and brush. Proudly he marshaled facts about the birds. Castaway raccoons, the worst pests of all, had also been removed from the offshore islands by the warden to preserve eggs from robbery. Busily, Roosevelt scribbled notes in his memorandum book about nighthawks and a small flock of Louisiana heron he had observed. Seizing the moment, Job set up a green shade, very faded, to block out the sun, and started taking magnificent photographs of migrating birds. He blended into the mangrove and gulf tamarisk scrub as if he were an indigenous creature. Miller began telling the life histories of red-winged blackbirds and long-billed marsh wrens, fulfilling his duty as an expert on local wildlife. For Roosevelt and his companions, those days in the Gulf of Mexico were never to be forgotten. None of the men even thought about stuffing a skimmer or tern — cameras were the order of the day. Roosevelt’s essay about this gulf cruise, “The Bird Refuges of Louisiana” — published by Scribner’s Magazine in March 1916 — could have been a chapter in Job’s Wild Wings. “The laughing gulls and the black skimmers were often found with their nests intermingled, and they hovered over our heads with some noisy protest against our presence,” Roosevelt wrote. “Although they often — not always — nest so close together, the nests were in no way alike. The gulls’ dark green eggs, heavily blotched with brown, two or three in number, lay on a rude platform of marsh-grass, which was usually partially sheltered by some bush or tuft of reeds, or, if on wet ground, was on a low pile of driftwood.” Establishing his credentials as an Auduboner, Roosevelt wrote on and on about the offshore breeding grounds.
But he had also, foolishly, disturbed a sea turtle nesting area so as to carefully study the eggs. While Job busied himself with the nature photographs on the islands, a New Orleans photographer, J.H. Coquille, took a dozen unforgettable shots of Roosevelt inspecting royal tern eggs, walking barefoot on the beach, sitting like a Buddha contemplating the sea, and sneaking up on pelicans whose pouches were full of sardines. One photograph taken by Coquille showed a huge sign in the background that read: “KEEP OFF: AUDUBON SOCIETY.” Many of the photos accompanied Roosevelt’s article for Scribner’s Magazine. As U.S. President, Roosevelt didn’t just save Breton Island. Determined to protect the Mississippi Gulf South as an intact ecosystem, Roosevelt also used executive orders to permanently protect Shell Keys, Tern Island, and East Timbalier Island. To Roosevelt these Gulf shore gems were American heirlooms, like Yellowstone or Yosemite. As the Gulf of Mexico oil spill approaches the Breton National Wildlife Refuge, at least 2,000 pairs of brown pelicans are nesting on islands in the refuge. Now, more than 100 years of environmental protection of bird and marine life in the Gulf of Mexico is threatened by the toxic BP spill. Crude oil may soon be washing up on the beaches where Roosevelt walked barefoot back in 1915. Since the oil boom in the gulf over the last half-century, the islands — totaling 18,000 acres, only 7,000 of which are above the mean high tide line — have endured many insults, including an oil spill several years ago that killed hundreds of brown pelicans. From 2001 to 2010, due in part to President George W. Bush’s lessening of offshore drilling restrictions, there have been numerous oil-related explosions in the Gulf of Mexico. Nature itself has taken a heavy toll on the refuge, with Hurricane Katrina destroying a lighthouse on Breton Island in 2005 and causing major beach erosion and widespread destruction of vegetation.
Now, perhaps the biggest threat ever is drifting toward Breton Island and its neighbors, endangering one of Teddy Roosevelt’s finest conservation legacies. POSTED ON 06 May 2010
It’s safe to say that DNA is an important part of our daily lives. Not only does DNA code for our characteristics as humans, but it differentiates us from other species, and is frequently used to solve crimes. Despite its biological usefulness, you might be wondering what our genetic code has to do with Shakespeare and his sonnets. The answer may very well be the future of data storage. It might sound as if it’s straight from the pages of a science fiction novel, but scientists have recently discovered a way to synthesize DNA encoded with information of all sorts–including all 154 of Shakespeare’s sonnets. Researcher Nick Goldman, of the European Bioinformatics Institute of Hinxton, England, is one of the leading experts in the field. As Goldman explains, DNA has the potential to replace hard drives and disks because of its long lifespan and durability. DNA can also store enormous amounts of data in a small space. In fact, Goldman estimates that all of the information in the world today–about one billion trillion bytes–would fit “in the back of your station wagon.” It’s a bit difficult to fathom at first, but the idea behind storing data in DNA is relatively simple. The secret lies in the double helix shape of the DNA molecule. While technology now allows us to store information in two-dimensional items like CDs and microchips, DNA is a tightly wound three-dimensional molecule, enabling the storage of more information in a compact space. The structure of DNA is also responsible for the synthetic process used to encode it with information. DNA is composed of two sets of base pairs: adenine (A) with thymine (T), and guanine (G) with cytosine (C). Each base is a nitrogen-containing compound, and when a base bonds to its pair, it helps to hold the double helix of the DNA molecule together. In using DNA as a storage device, scientists like Nick Goldman have taken advantage of the A, T, G, and C base pairings.
By representing the bases with ones and zeros, scientists have been able to use binary–the system responsible for data encoding in computers–within molecules of DNA. In order to convert between DNA code and binary, Goldman and his colleagues wrote software capable of doing so. Shakespeare’s sonnets, for example, were first represented in binary in a computer. The software then converted the binary into a series of A’s, T’s, G’s, and C’s, which could be made into a strand of DNA. Storing information in DNA doesn’t come without its problems, however. With current technology, synthesizing and encoding information is extremely expensive, and therefore isn’t practical outside of a lab. Because it is so dense, DNA is also heavy, making large quantities somewhat difficult to transport. But that’s not to say that there isn’t hope for this field of science. Given the pace at which technology is progressing, it’s probable that software and tools similar to those available to Nick Goldman will one day be accessible to the public. Though the process isn’t perfect now, Goldman speculates that in as little as 10 years, it may be “economically viable.” The decreasing cost of DNA synthesis and the success of the project thus far are good indicators that scientists like Goldman are on to something. Before we know it, we may be storing much more than our genetic code in a double helix.
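As a toy illustration of the binary-to-bases conversion described above (a simplified sketch only: the two-bits-per-base mapping here is my own assumption, while Goldman's actual scheme used a more elaborate code designed to avoid long runs of the same base):

```python
# Hypothetical 2-bits-per-base mapping -- an illustration of the idea,
# not the actual EBI encoding scheme.
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    """Turn raw bytes into a string of DNA bases."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(dna: str) -> bytes:
    """Invert encode(): bases back to the original bytes."""
    bits = "".join(BASE_TO_BITS[base] for base in dna)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

sonnet_line = b"Shall I compare thee to a summer's day?"
assert decode(encode(sonnet_line)) == sonnet_line  # round-trips losslessly
```

Under this toy mapping each byte becomes four bases, so the 39-byte line above becomes a 156-base strand; real schemes also add addressing and error-correction information on top.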
Dalton's Law of Partial Pressure: Mathematically, this can be represented as: P(total) = P1 + P2 + P3 + ... Explanation and Discussion: Dalton's Law explains that the total pressure is equal to the sum of all of the pressures of the parts. This is only absolutely true for ideal gases, but the error is small for real gases. This may at first seem a trivial law, but it can be very valuable in the chemistry lab. Let's say you want to collect hydrogen gas. To do this, you set up a system that uses a pneumatic trough, a test tube fitted with a stopper, a delivery tube that connects the stoppered test tube to the pneumatic trough, and a test tube above the delivery tube that collects the hydrogen. Warning: Do not conduct this experiment unless you are under the direction of a chemist or your chemistry teacher. It is dangerous and involves a Bunsen burner and dangerous materials. You submerge the test tube that will collect the hydrogen, and tilt it up so it only contains water. By placing zinc and acid in the stoppered test tube and heating it, hydrogen gas is given off. This gas bubbles through the water and enters the collection test tube. After the first few seconds, the gas will be pure hydrogen. Image of start of hydrogen generation. When the water level is equal in the test tube and the trough, turn off the generator. The pressure inside the test tube will be equal to the atmospheric pressure. Image of pressure equilibrium in hydrogen generator. Now you can use the ideal gas law to determine the number of hydrogen moles in the test tube, right? Not quite. You see, the water you collected the hydrogen over has vapor pressure that will distort the equation if not accounted for. Because of Dalton's Law of partial pressure, you know that the pressure in the test tube is from both the hydrogen and the water. To find just the hydrogen, you would have to subtract the vapor pressure of the water.
Vapor pressure of water is published in most chemistry books as a table in the appendix, and varies by the temperature of the water. Calculations with Dalton's Law: Let's try that last experiment with real numbers. In our lab, the atmospheric pressure is 102.4 kPa. The temperature of our water is 25°C. We used a 250 mL beaker instead of a test tube to collect the hydrogen. Let's find the pressure of the hydrogen, and then find the moles of hydrogen using the ideal gas law. Step 1: We need to know the vapor pressure of the water. A common table lists the pressure at 25°C as 23.76 torr. A torr is 1 mm of mercury at standard temperature. In kilopascals, that would be 3.17 (7.5 torr = 1 kPa). We should also convert the 250 mL to 0.250 L and 25°C to 298 K. Step 2: We can use Dalton's Law to find the hydrogen pressure. It would be: P(hydrogen) = P(total) - P(water) = 102.4 kPa - 3.17 kPa. So the pressure of hydrogen would be: 99.23 kPa or 99.2 kPa. Step 3: We use the Ideal Gas Law to get the moles. Recall that the Ideal Gas Law is: PV = nRT, where P is pressure, V is volume, n is moles, R is the Ideal Gas Constant (0.0821 L-atm/mol-K or 8.31 L-kPa/mol-K), and T is temperature. Therefore, our equation would be: (99.23 kPa)(0.250 L) = n (8.31 L-kPa/mol-K)(298 K). This can be re-arranged so: n = PV/RT = (99.23 × 0.250)/(8.31 × 298) ≈ 0.0100 mol. Another important contribution by John Dalton was his generalization that all gases expand equally on going to the same higher temperature. You can visit our Dalton's Law Bonus Page for continued study. You can also test yourself and read about John Dalton.
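The arithmetic in Steps 2 and 3 can be checked with a few lines of code (a sketch using the numbers quoted in the text):

```python
# Reproduce the worked example: hydrogen collected over water.
R = 8.31          # ideal gas constant in L-kPa/mol-K (value from the text)
p_total = 102.4   # atmospheric pressure in the lab, kPa
p_water = 3.17    # vapor pressure of water at 25 C, kPa (23.76 torr)
volume = 0.250    # collection beaker, L
temp = 298.0      # water temperature, K

# Dalton's Law: the hydrogen's partial pressure is what remains after
# subtracting the water vapor's contribution from the total.
p_hydrogen = p_total - p_water                # 99.23 kPa

# Ideal Gas Law rearranged for moles: n = PV / (RT)
n_hydrogen = p_hydrogen * volume / (R * temp)  # about 0.0100 mol
```

Swapping in your own lab's pressure, volume, and temperature readings is all that is needed to reuse the calculation.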
Convergence: Leonard Euler's Solution to the Konigsberg Bridge Problem Euler's Proof and Graph Theory When reading Euler’s original proof, one discovers a relatively simple and easily understandable work of mathematics; however, it is not the actual proof but the intermediate steps that make this problem famous. Euler’s great innovation was in viewing the Königsberg bridge problem abstractly, by using lines and letters to represent the larger situation of landmasses and bridges. He used capital letters to represent landmasses, and lowercase letters to represent bridges. This was a completely new type of thinking for the time, and in his paper, Euler accidentally sparked a new branch of mathematics called graph theory, where a graph is simply a collection of vertices and edges. Today a path in a graph, which contains each edge of the graph once and only once, is called an Eulerian path, because of this problem. From the time Euler solved this problem to today, graph theory has become an important branch of mathematics, which guides the basis of our thinking about networks. As Biggs' statement would imply, this problem is so important that it is mentioned in the first chapter of every Graph Theory book that was perused in the library. After Euler’s discovery (or invention, depending on how the reader looks at it), graph theory boomed with major contributions made by great mathematicians like Augustin Cauchy, William Hamilton, Arthur Cayley, Gustav Kirchhoff, and George Polya.
These men all contributed to uncovering “just about everything that is known about large but ordered graphs, such as the lattice formed by atoms in a crystal or the hexagonal lattice made by bees in a beehive [ScienceWeek, 2].” Other famous graph theory problems include finding a way to escape from a maze or labyrinth, or finding the order of moves with a knight on a chess board such that each square is landed on only once and the knight returns to the space on which he begun [ScienceWeek, 2]. Some other graph theory problems have gone unsolved for centuries [ScienceWeek, 2 ].
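Euler's criterion is easy to state in code. The sketch below assumes the graph is connected (as Königsberg's is) and checks the parity condition his paper made rigorous: an Eulerian path exists exactly when zero or two vertices have odd degree. The capital-letter labels follow Euler's own convention for landmasses.

```python
from collections import defaultdict

def has_eulerian_path(edges):
    """For a connected multigraph: an Eulerian path exists
    iff 0 or 2 vertices have odd degree."""
    degree = defaultdict(int)
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    odd_vertices = sum(1 for d in degree.values() if d % 2 == 1)
    return odd_vertices in (0, 2)

# The seven bridges of Konigsberg: A is the island, B and C the two banks,
# D the eastern landmass. All four vertices have odd degree, so no walk
# can cross every bridge exactly once -- Euler's famous negative answer.
koenigsberg = [("A", "B"), ("A", "B"), ("A", "C"), ("A", "C"),
               ("A", "D"), ("B", "D"), ("C", "D")]
assert has_eulerian_path(koenigsberg) is False
```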
for p1 = (3,0,3) p2 = (1,0,1) p3 = (1,2,3) not the same points given to me. how can i show that these points define the triangle specified? If the right angle occurs at P then the dot product of the vectors formed by the two sides sharing the point P will equal 0. ok so for these points: (4,0,4) (2,-1,8) (1,2,3) when i performed the dot product, none equal zero. is this correct? You have to perform the dot product on the vectors that form the sides, not the points. Originally Posted by icemanfan You have to perform the dot product on the vectors that form the sides, not the points. ahh ok i got it, the difference from each of the points to each other.
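icemanfan's correction can be checked numerically. A quick sketch (not from the thread) using the second set of points: the side vectors are differences of points, and the vertex where the two side vectors' dot product vanishes is where the right angle sits.

```python
def sub(p, q):
    """Vector pointing from point q to point p."""
    return tuple(a - b for a, b in zip(p, q))

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

p1, p2, p3 = (4, 0, 4), (2, -1, 8), (1, 2, 3)

# Dot product of the two side vectors meeting at each vertex:
at_p1 = dot(sub(p2, p1), sub(p3, p1))   # 0 -> the right angle is at p1
at_p2 = dot(sub(p1, p2), sub(p3, p2))   # nonzero
at_p3 = dot(sub(p1, p3), sub(p2, p3))   # nonzero
```

Taking dot products of the raw points instead of these differences is exactly the mistake the thread describes, and it gives nonzero values at every vertex.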
edit: I originally had some points about the inefficiency of RTGs, but after some more research prompted by @Jeremy I found that it's not really a valid point when they're used appropriately for the spacecraft's mission. The RTGs used by Galileo at Jupiter generated 300W of power, whereas the solar panels that will be used by Juno at Jupiter will generate 450W of power. Solar arrays are also much larger and heavier than RTGs and impact the delta-V budget of the spacecraft, a costly trade-off. The reasons that solar arrays are used in some spacecraft are outlined in the points I make below, so the efficiency factor doesn't really come into play. Radioisotope Thermoelectric Generators (RTGs) are used when a spacecraft will be venturing too far from the sun to get enough power from it, or when it experiences extended periods of darkness while still needing to operate. This is the case with the Pioneer missions, Voyager missions, the Cassini mission, as well as the science experiments left on the moon during Apollo, and surely more that I haven't thought of. RTGs are dangerous, especially if the spacecraft fails during launch, or an earth-flyby goes badly (this could spread radioactive material across a continent), and don't generate much power when compared to solar panels in close proximity to the sun. Solar panels are used for missions that will almost always have a clear view of the sun, where they can generate much more power than RTGs can.
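The distance penalty for solar power follows the inverse-square law; a back-of-the-envelope sketch (the 1361 W/m² solar constant and Jupiter's ~5.2 AU orbital distance are standard textbook values, not figures from the answer above):

```python
SOLAR_CONSTANT = 1361.0  # W per square metre of sunlight at 1 AU (Earth)

def solar_flux(distance_au):
    """Sunlight intensity falls off as 1 / distance^2."""
    return SOLAR_CONSTANT / distance_au ** 2

flux_at_jupiter = solar_flux(5.2)             # roughly 50 W/m^2
fraction = flux_at_jupiter / SOLAR_CONSTANT   # under 4% of Earth's sunlight
```

A panel at Jupiter collects less than 4% of what the same panel would near Earth, which is why Juno needs such enormous arrays to gather its 450 W.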
An assessment study of a Space-Time Explorer (STE) mission has recently been completed at ESA's Concurrent Design Facility (CDF). The study objective was to examine the possible architecture and implementation of a fundamental physics mission, which would set out to test Einstein's theories of special and general relativity by comparing high-precision atomic clocks in space and on ground. Space-Time Explorer (STE) Einstein's theory of general relativity is a metric theory, where gravity is the result of curves in space-time induced by the presence of mass. The theory is highly successful in describing gravitational interactions and to date has proven capable of satisfactorily explaining observations from within our Solar System out to the Universe at large. However, experiments continue to be performed with an increasingly higher accuracy to test if the theory is correct. A cornerstone of Einstein's theory of gravity and one of the pillars of modern physics is Einstein's equivalence principle (EEP), according to which the local effects of a gravitational field are the same as those introduced by being in an accelerated reference frame. The EEP comprises three elements: Science Goals of STE The main goal of the Space-Time Explorer would be to search for violations of the EEP and at the same time challenging general relativity and alternative theories of gravitation. The measurement concept of the STE mission is based on space-to-ground comparisons of high-performance microwave clocks (atomic clocks). High-performance clocks in space allow tests of EEP to accuracy levels not achievable solely on ground. STE would perform a precision measurement of gravitational red shift in the gravitational potentials of the Earth and the Sun with a relative accuracy at the level of Gravitational red shift (GRS) measurements are specifically suited for testing the Local Position Invariance. 
GRS is a gravitational effect that can be observed with clocks, which are predicted to run slower when they are deeper inside a gravitational potential. How much slower should be solely dependent on the gravitational potential at the clock's position, if EEP is valid. The GRS measurements by STE would challenge Einstein's prediction, and test metric theories of gravitation. A highly elliptical orbit (baseline: apogee ~50 000 km, perigee ~700 km) has been selected for the STE spacecraft. The large variations of the Earth's gravitational potential along the orbit are important to maximize the relative accuracy of the red shift measurement. In addition, as the Earth orbits the Sun, measurements made over the course of the STE mission lifetime would allow measuring the gravitational red shift at different points in the Sun's gravitational field. Testing the gravitational red shift and its universality would be the primary science goal of the STE mission. The secondary science goals of STE would include: The tentative baseline STE mission is for a single spacecraft carrying a caesium atomic clock. To compare the on-board atomic clock with atomic clocks on the ground, the STE spacecraft would employ a microwave link with six science ground stations. The baseline mission would have a two-way triple-frequency link, which allows for accurate determination and removal of the ionospheric effects on the signal, reaching high stabilities in the space-to-ground comparison of clocks. Alongside the microwave link, a coherent optical link is also included in the baseline, offering high link stability and redundancy. In addition the satellite would carry a "corner cube" for spacecraft ranging and for time tagging, as well as a global navigation satellite system (GNSS) receiver for accurate orbit determination. The CDF assessment study of the STE mission concept ran from 15 June to 16 July 2010. 
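The prediction being tested is the standard first-order gravitational red-shift relation of general relativity (a textbook result, not quoted from the STE study itself). For a ground clock at Newtonian potential \(U_g\) and a space clock at potential \(U_s\), the fractional frequency offset is

```latex
\frac{\Delta\nu}{\nu} \;=\; \frac{U_s - U_g}{c^{2}}
```

Since the spacecraft sits higher in the potential well, \(U_s > U_g\) and the space clock runs fast relative to the ground clock. The highly elliptical orbit makes \(U_s\) swing between perigee and apogee, and it is this modulation that gives the space-to-ground clock comparison its leverage.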
The internal final presentation of the STE assessment study, prepared by the STE/CDF team, is now available in PDF format. The document can be retrieved through the link in the right-hand menu under "related publications".
Mission Type: Flyby, Orbiter Launch Vehicle: Titan IIG (no. 23G-11) Launch Site: Vandenberg Air Force Base, USA Spacecraft Mass: 424 kg Spacecraft Instruments: 1) ultraviolet/visible camera; 2) near-infrared camera; 3) long-wave infrared camera; 4) high-resolution camera; 5) two star-tracker cameras; 6) laser altimeter; 7) bistatic radar experiment; 8) gravity experiment and 9) charged-particle telescope Spacecraft Dimensions: octagonal prism 1.88 meters high and 1.14 m across Spacecraft Power: gimbaled, single-axis, GaAs/Ge solar panels which charged a 15-amp-hour, 47-W-hr/kg nickel-hydrogen (NiH2) common pressure vessel battery Deep Space Chronicle: A Chronology of Deep Space and Planetary Probes 1958-2000, Monographs in Aerospace History No. 24, by Asif A. Siddiqi National Space Science Data Center, http://nssdc.gsfc.nasa.gov/ Clementine was the first U.S. spacecraft launched to the Moon in over 20 years (since Explorer 49 in June 1973). Also known as the Deep Space Program Science Experiment (DSPSE), the spacecraft was designed and built to demonstrate a set of lightweight technologies, such as small imaging sensors, for future low-cost missions to be flown by the Department of Defense. Clementine carried 15 advanced flight-test components and 10 science instruments. After launch, the spacecraft remained in Earth orbit until 3 February 1994, at which time a solid-propellant rocket ignited to send the vehicle to the Moon. After two subsequent Earth flybys on 5 February and 15 February, Clementine successfully entered an elliptical polar orbit on 19 February with a period of 5 days and a perilune (closest approach to the Moon) of 400 kilometers. In the following two months, it transmitted about 1.6 million digital images of the lunar surface; in the process, it provided scientists with their first look at the total lunar landscape, including polar regions.
After completing 297 lunar orbits, controllers fired Clementine's thrusters on 3 May to inject it into a rendezvous trajectory in August 1994 with the asteroid 1620 Geographos. Due to a computer problem at 14:39 UT on 7 May that caused a thruster to fire and use up all propellant, the spacecraft was put into an uncontrollable tumble at about 80 rpm with no spin control. Controllers were forced to cancel the asteroid flyby and return the vehicle to the vicinity of Earth. A power supply problem on 20 July further diminished the operating capacity of the vehicle. Eventually, lunar gravity took control of Clementine and propelled it into heliocentric orbit. The mission was terminated in June 1994 when falling power supply levels no longer allowed clear telemetry exchange. On 3 December 1996, the Department of Defense announced that Clementine data indicated that there was ice in the bottom of a permanently shadowed crater on the lunar south pole. Scientists estimated the deposit to be approximately 60,000 to 120,000 cubic meters in volume, comparable to a small lake that is 4 football fields in surface area and 5 meters deep. This estimate was very uncertain, however, due to the nature of the data.
What are the various types of satellite imagery available? There are four principal types of satellite imagery used in operational and research meteorology. Each has its advantages and disadvantages. Many examples of each type can be found at meteorology related web-sites. 1. Visible Imagery (VIS) Images obtained using reflected sunlight at visible wavelengths, in the range 0.4 to 1.1 micrometres. Visible imagery is displayed in such a way that high reflectance objects, e.g. dense cirrus from CB clusters, fresh snow, nimbostratus etc., are displayed as white, and low reflectance objects, e.g. much of the earth's surface, is dark grey or black. There are grey shades to indicate different levels of albedo (or reflectivity). Very dependent upon angle of incident sunshine, and of course, not available at night, though some military/research satellite sensors can utilise reflected moonlight to detect cloud. 2. InfraRed (IR) These images are obtained by sensing the intensity of the 'heat' emissions of the earth, and the atmosphere/atmospheric constituents, at IR wavelengths in the range 10-12 micrometres. The earth, and its components, radiate across a wide spectrum of wavelengths, but for many of these, the atmospheric gases, of which water vapour is an important constituent, absorb a significant proportion of such radiation. Thus so-called 'windows' need to be chosen to allow the satellite sensors to detect such radiation unhindered, and the 10-12 micrometre band is one such. IR imagery is so presented that warm/high intensity emissions are dark grey or even black, and low intensity/cold emissions are white. 
This convention was chosen so that the output would correspond with that from the VIS channels, but there is no need to follow this scheme - indeed in operational meteorology, colour slicing is frequently used whereby different colours are assigned to various temperature ranges, thus rendering the cooling/warming of cloud tops (and thus the development/decay) easy to appreciate: warming/darkening of the imagery with time indicates descent and decay; cooling/whitening images imply ascent and development. 3. Water Vapour (WV) This imagery is derived from emissions in the atmosphere clustered around a wavelength of 6.7 micrometres. In contrast to the IR channel, this wavelength undergoes strong absorption by WV in the atmosphere (i.e. this is not a 'window'), and so can be used to infer vertical distribution and concentration of WV - an important atmospheric constituent. WV imagery uses the radiation absorbed and re-emitted by water vapour in the troposphere. If the upper troposphere is moist, WV emissions will be dominated by radiance from these higher levels, swamping emissions from warmer/lower layers; this radiation is conventionally shown white. If the upper troposphere is dry, then the sum of the radiation is biased towards lower altitude WV bands: it is warmer/less intense radiation, and this is displayed as a shade of grey, or even black. WV imagery is very important in the study of cyclogenesis, often being displayed as a time-sequence. 4. 'Channel 3' (CH3) Imagery from a specific wavelength of 3.7 micrometres lies in the overlap region of the electro-magnetic spectrum between solar and earth-based/terrestrial radiation. It is sometimes referred to as 'near infrared' (NIR). CH3 images use a mixture of back-scattered solar radiation plus radiation emitted by the earth and atmosphere. It is used in fog/very low cloud studies. Interpretation is sometimes complex, especially in the presence of other tropospheric clouds.
Henri Becquerel's early work was concerned with the polarization of light, the phenomenon of phosphorescence and the absorption of light by crystals (his doctoral thesis). He was elected a member of the French Académie des Sciences in 1889. For his discovery of natural radioactivity in 1896, Henri Becquerel was awarded half of the Nobel Prize for Physics in 1903, the other half being awarded to Pierre and Marie Curie for their study of the Becquerel radiation. This is why the General Conference on Weights and Measures (CGPM) of 1975 (Resolution 8) decided to honour Henri Becquerel by adopting the special name of becquerel, Bq, for the SI derived unit of activity. This proposal had been made by the International Commission for Radiation Units and Measurements (ICRU) and accepted by the Consultative Committee for Units (CCU) as Recommendation U 1 (1974), the earlier non-SI unit having been named after the Curies. For more biographical information see:
Today's Pacific Northwest, known for rain every three days, was once much drier because the prevailing winds blew from a different direction. The glaciers would have created a wind and moisture barrier, which would give the land at the glaciers' boundary weather like Mount Washington's. Scientists speculate that the winds change course with significant climate change. Just goes to show how the idea of a "natural equilibrium" is flawed and how volatile our world is. (Hat tip: Very Spatial)
This week's element is technetium, a naturally radioactive metal denoted by the chemical symbol Tc and the atomic number 43. Technetium, whose longest-lived isotope has a half-life of 4.2 million years, is vanishingly rare on earth. Interestingly, based on the spectral lines from light given off by stars, we can see that a number of isotopes of technetium are common in stars. For this reason and because this element has a much shorter lifespan than do stars, we've concluded that the stars themselves are the birthplace of this element. Technetium is created on earth from radioactive decay of molybdenum-99 (half-life: 67 hours) or by humans bombarding uranium-235 with neutrons. Because of its short half-life, technetium does not and never has played any biological role, since it vanished (due to radioactive decay) long before life appeared on earth. However, technetium-99m (half-life: 6 hours), which results from molybdenum-99 decay, is important in medical imaging and diagnostics, particularly for detecting a number of rare and elusive cancers as well as damage to the heart muscle resulting from heart attack. Perhaps the most interesting aspect of this element is its elusiveness: occupying the empty box beneath manganese on the periodic table, its existence had been predicted long before anyone found it. But despite numerous intensive efforts to isolate this element, its eventual discovery by Emilio Segrè in 1937 was accidental. As the story goes, in 1937, Segrè visited Ernest Lawrence's laboratory in Berkeley to see his 37-inch cyclotron. Later that year, the molybdenum foil from the cyclotron deflector, which had been exposed to a lot of deuterium bombardment, was sent to him after he'd returned home to Sicily. Segrè found that some of the radiation emitted by this foil was produced by a previously unknown element, which was later named technetium.
Not only had Segrè discovered a new element, but because technetium does not occur in nature, it is the first element that had been artificially synthesised in a laboratory. However, that said, there is a bit more to the story of the discovery of element number 43. It actually had been discovered in 1925 by a German group comprised of Ida Tacke, Walter Noddack and Otto Berg, who deduced the presence of element 43 after bombarding the mineral columbite with electrons and analysing the emitted x-rays. They named it masurium. However, no one believed that their instruments were sensitive enough to detect the small quantities of emitted x-rays -- until their methods and results were replicated recently by several scientists around the world who used similar instruments. They concluded that the German group had discovered this element after all. Here's our favourite Professor telling us a little more about technetium: Video journalist Brady Haran is the man with the camera and the University of Nottingham is the place with the chemists.
You can follow Brady on twitter @periodicvideos and the University of Nottingham on twitter @UniNottingham You've already met these elements: Molybdenum: Mo, atomic number 42 Niobium: Nb, atomic number 41 Zirconium: Zr, atomic number 40 Yttrium: Y, atomic number 39 Strontium: Sr, atomic number 38 Rubidium: Rb, atomic number 37 Krypton: Kr, atomic number 36 Bromine: Br, atomic number 35 Selenium: Se, atomic number 34 Arsenic: As, atomic number 33 Germanium: Ge, atomic number 32 Gallium: Ga, atomic number 31 Zinc: Zn, atomic number 30 Copper: Cu, atomic number 29 Nickel: Ni, atomic number 28 Cobalt: Co, atomic number 27 Iron: Fe, atomic number 26 Manganese: Mn, atomic number 25 Chromium: Cr, atomic number 24 Vanadium: V, atomic number 23 Titanium: Ti, atomic number 22 Scandium: Sc, atomic number 21 Calcium: Ca, atomic number 20 Potassium: K, atomic number 19 Argon: Ar, atomic number 18 Chlorine: Cl, atomic number 17 Sulfur: S, atomic number 16 Phosphorus: P, atomic number 15 Silicon: Si, atomic number 14 Aluminium: Al, atomic number 13 Magnesium: Mg, atomic number 12 Sodium: Na, atomic number 11 Neon: Ne, atomic number 10 Fluorine: F, atomic number 9 Oxygen: O, atomic number 8 Nitrogen: N, atomic number 7 Carbon: C, atomic number 6 Boron: B, atomic number 5 Beryllium: Be, atomic number 4 Lithium: Li, atomic number 3 Helium: He, atomic number 2 Hydrogen: H, atomic number 1 Here's a wonderful interactive Periodic Table of the Elements that is just really really fun to play with!
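The half-lives quoted above translate directly into how fast a sample disappears; a quick sketch of the standard exponential-decay rule:

```python
def fraction_remaining(t_hours, half_life_hours):
    """Fraction of a radioactive sample left after t hours:
    it halves once per half-life."""
    return 0.5 ** (t_hours / half_life_hours)

# Tc-99m (half-life 6 hours), the medical-imaging isotope: one day after
# preparation only about 6% of the original Tc-99m remains.
after_one_day = fraction_remaining(24, 6)   # 0.0625
```

The same function with a half-life of 67 hours shows why hospitals are supplied with the longer-lived molybdenum-99 parent rather than Tc-99m itself.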
<urn:uuid:eeae89a9-e81c-4b9c-a185-8989607ca949>
4.0625
1,119
Personal Blog
Science & Tech.
50.120042
Widening and Narrowing

When it comes to data types there are two possible types of assignment - widening and narrowing. A widening assignment causes no problem because the variable on the left is "bigger" than what is being computed on the right and so can be stored without loss of precision. For example you can always do:

myInt = myByte;

A narrowing assignment is a potential problem because the variable on the left isn't always capable of storing the result on the right. Things might work ok if, say, the value stored in myInt is small enough to be stored in myByte - but equally it might not. Widening assignments are performed by the system without you getting involved - and this is a general principle that goes beyond just numbers. A narrowing assignment needs some help and it generally requires the programmer to explicitly specify how the types should be converted. So suppose we do want to assign an int to a byte, how do we do it? The answer is to use a cast and this is again a more general mechanism than it appears. You can attempt to convert any type to any type with a cast but it doesn't always work or make sense! To cast one type to another you simply use:

myByte = (byte) myInt;

This converts the value in myInt to a byte and stores it in myByte. Simple and brutal. If the int is actually in the range that a byte can represent then it works fine. If it isn't then all you get are the bottom eight bits of the 32-bit value. This often makes no sense numerically but some types of bit manipulation rely on it. Unless you are doing bit manipulation, casting a numerical value to a "smaller" type only makes sense if the value can be accurately represented by the smaller type. So for example you can safely use casting to make byte arithmetic "safe" as in:

myByte = (byte)(myByte + 1);

Now even though the arithmetic promotes the byte to an int it is converted back again to a byte by the cast. Notice that it doesn't take much to cause a cast to do something that you might not want. For example:

myByte = (byte) 254;

produces the answer -2. This makes sense if you know how the numbers are stored in binary signed format, but unless you are doing some advanced bit manipulation this isn't particularly useful. The same sorts of things occur if you cast a float or a double to int or another integer type. For example:

int myInt=(int) myDouble;

In this case the cast simply truncates the result - it throws away the fractional part to give you an integer. What if you want to perform a more gentle and accurate sort of conversion? The answer is to use the Math class and its numeric conversion methods, all of which accept a double and return a double as the result:

- ceil returns the smallest integer not less than the specified double
- floor returns the largest integer not greater than the specified double
- rint returns the integer that is closest to the specified double

There is also a round method which will accept a double or a float and return the closest integer as a long or an int respectively.

int myInt=(int) Math.floor(1.5);

stores 1 in myInt - notice you still need a cast as floor returns a double.

int myInt=(int) Math.ceil(1.5);

stores 2 in myInt. There are lots of other useful mathematical methods included in the Math class and it is worth finding out more about them.
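Although the article describes Java, the same bit-level behaviour can be sketched in Python; this is a hedged model of two's-complement narrowing and truncation toward zero, not Java itself, and the function names are invented for illustration:

```python
def to_byte(value: int) -> int:
    """Model Java's (byte) cast: keep the bottom eight bits of the
    32-bit value and reinterpret them as a signed two's-complement byte."""
    low = value & 0xFF
    return low - 256 if low >= 128 else low

def to_int_trunc(value: float) -> int:
    """Model Java's (int) cast on a double: truncate toward zero,
    discarding the fractional part."""
    return int(value)

print(to_byte(254))        # -2: only the low eight bits survive
print(to_byte(130))        # -126
print(to_int_trunc(1.9))   # 1
print(to_int_trunc(-1.9))  # -1: truncation toward zero, not floor
```

Running the sketch reproduces the surprise the text warns about: values that fit in a byte come through unchanged, while anything larger wraps around.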
<urn:uuid:cacbe552-f008-4e43-b43d-67a4f615a434>
3.65625
720
Documentation
Software Dev.
56.94911
The genome encodes the complete information needed by an organism, including that required for protein production. Viruses, which are up to a thousand times smaller than human cells, have considerably smaller genomes. Using a type of herpesvirus as a model system, the scientists of the Max Planck Institute (MPI) of Biochemistry in Martinsried near Munich and their collaboration partners at the University of California in San Francisco have shown that the genome of this virus contains much more information than previously assumed. The researchers identified several hundred novel proteins, many of which were surprisingly small. The results of the study have now been published in Science. More than 80 percent of the world’s population is infected with the herpesvirus, which can cause severe diseases in newborns and in persons with weakened immune system. Researchers had already sequenced the herpesvirus genome 20 years ago, thinking they could then predict all proteins that the virus produces (virus proteome). Now scientists from the research department of Matthias Mann, director at the MPI of Biochemistry, and their American colleagues have analyzed the information content of the genome more precisely. Small but highly complex To carry out their study, the scientists infected cells with herpesvirus and observed which proteins the virus produced inside the cell over a period of 72 hours. In order for proteins to be produced at all, the cell machinery must first make copies of the genetic material as intermediate products (RNA). While investigating the intermediate products of the herpesvirus, the American collaborators discovered many novel RNA molecules which were in large part surprisingly short. They also found that the organization of information required for protein production in the virus genome was far more complex than previously assumed. 
Annette Michalski, a scientist in the Department of Proteomics and Signal Transduction at the MPI of Biochemistry, was subsequently able to confirm directly the predicted viral proteins in the infected cell using mass spectrometry. This method enables an overview of the complete proteome of the virus-infected cell. The results of the American and German researchers provide detailed insight into the complex mechanisms used by the virus. “We showed that it’s not enough merely to know the virus genome to understand the biology of the herpesvirus,” Annette Michalski said. “What is important is to look at the products actually produced from the genome.” Even human genes may be much more complex than the genome sequence itself indicates, commented the researchers. Matthias Mann and his colleagues plan to investigate this question further in the coming years. N. Stern-Ginossar , B. Weisburd, A. Michalski, V. T. Khanh Le, M. Y. Hein, S.-X. Huang, M. Ma, B. Shen, S.-B. Qian, H. Hengel, M. Mann, N. T. Ingolia, J. S. Weissmann: Decoding Human Cytomegalovirus, Science, November 23, 2012. Prof. Dr. 
Matthias Mann
Proteomics and Signal Transduction
Max Planck Institute of Biochemistry
Am Klopferspitz 18
Phone: +49 89 8578-2824 (Research Department Proteomics and Signal Transduction)
Homepage of the Weissmann Lab at UCSF
Anja Konschak | Source: Max-Planck-Institut
Further information: www.biochem.mpg.de
<urn:uuid:c29642e5-8033-4cc9-b196-b60921f6ccdc>
3.8125
1,358
Content Listing
Science & Tech.
42.766792
Rhomboid Protease GlpG
August 2011 Molecule of the Month by David Goodsell
doi: 10.2210/rcsb_pdb/mom_2011_8 (ePub Version)

Proteases, enzymes that cut protein chains, come in many shapes and sizes. The most familiar proteases, like trypsin and pepsin, are machines of destruction used to digest proteins in our diet. However, most of the proteases in our cells are used in a more delicate task. They regulate the action of other proteins by making specific cuts in their protein targets. In some cases, these cuts can activate the proteins; in other cases, they permanently destroy them. In either case, the change is quick and permanent, turning the target protein "on" or "off".

Proteases in Membranes?

For many years, proteases were seen as small soluble enzymes, largely because digestive proteases are plentiful and easy to study. Now, however, proteases are known to come in many shapes and sizes, ranging from small, stable digestive enzymes to huge proteasomes that clean up obsolete proteins inside cells. In the past decade or so, an entirely new type of protease has been discovered: proteases that are found inside membranes, where they cut up other membrane proteins. The first intramembrane serine protease was discovered in a mutant fruit fly, named "rhomboid" because of its oddly-shaped head. The protease regulates a growth receptor that controls this shape, so it was named after the fly mutation, and is now called a rhomboid protease. Ironically, the protease is also roughly shaped like a rhombus floating in the cell membrane, at least from some angles. The protease shown here is GlpG, a bacterial rhomboid protease, from PDB entries 2ic8, 2irv and 2nrf (shown here).

Active Site Machinery

Researchers have also found other types of intramembrane proteases, which use many of the familiar catalytic mechanisms found in soluble proteases.
The rhomboid proteases use a reactive serine-histidine pair, similar to the active site in the serine proteases like trypsin and chymotrypsin. The site-2 family of proteases, like the bacterial protease shown here on the left from PDB entry 3b4r, use a zinc ion, similar to soluble metalloproteinases like carboxypeptidase. Finally, acid proteases have also been found in membranes, such as the preflagellin peptidase FlaK, shown here on the right from PDB entry 3s0x, and the huge protease complex gamma-secretase, which plays an important role in the development of Alzheimer's disease.

click on the image for an interactive Jmol

Exploring the Structure

Intramembrane proteases perform a tricky job. They need to sit comfortably inside a hydrophobic membrane, but they also need to use water and water-binding amino acids to perform their reaction. They do this by having a flexible loop that covers the active site, allowing their targets and water to enter. Three structures reveal some of this flexibility, although many controversies still remain, since study of membrane proteins requires use of artificial methods to stabilize the proteins outside of membranes. PDB entry 2ic8 (left) shows a tightly-closed form and 2nrf (center) shows a wide-open form. PDB entry 2xow (right) shows the protein closed around an inhibitor. Click on the images here to compare the structures in an interactive Jmol.

- Y. Ha (2009) Structure and mechanism of intramembrane protease. Seminars in Cell & Developmental Biology 20, 240-250.
- M. Freeman (2008) Rhomboid proteases and their biological functions. Annual Review of Genetics 42, 191-210.
- S. Urban and Y. Shi (2008) Core principles of intramembrane proteolysis: comparison of rhomboid and site-2 family proteases. Current Opinion in Structural Biology 18, 432-441.

Related PDB IDs

© 2013 David Goodsell & RCSB Protein Data Bank
<urn:uuid:68ce0fe2-d22d-4028-a750-ba7a0ec9041b>
3.328125
890
Knowledge Article
Science & Tech.
38.405682
Meltdown in the North; Endangered Earth; Exclusive Online Issues; by Matthew Sturm, Donald K. Perovich and Mark C. Serreze; 7 Page(s) The list is impressively long: The warmest air temperatures in four centuries, a shrinking sea-ice cover, a record amount of melting on the Greenland Ice Sheet, Alaskan glaciers retreating at unprecedented rates. Add to this the increasing discharge from Russian rivers, an Arctic growing season that has lengthened by several days per decade, and permafrost that has started to thaw. Taken together, these observations announce in a way no single measurement could that the Arctic is undergoing a profound transformation. Its full extent has come to light only in the past decade, after scientists in different disciplines began comparing their findings. Now many of those scientists are collaborating, trying to understand the ramifications of the changes and to predict what lies ahead for the Arctic and the rest of the globe. What they learn will have planetwide importance because the Arctic exerts an outsize degree of control on the climate. Much as a spillway in a dam controls the level of a reservoir, the polar regions control the earth's heat balance. Because more solar energy is absorbed in the tropics than at the poles, winds and ocean currents constantly transport heat poleward, where the extensive snow and ice cover influences its fate. As long as this highly reflective cover is intact and extensive, sunlight coming directly into the Arctic is mostly reflected back into space, keeping the Arctic cool and a good repository for the heat brought in from lower latitudes. But if the cover begins to melt and shrink, it will reflect less sunlight, and the Arctic will become a poorer repository, eventually warming the climate of the entire planet.
<urn:uuid:74fbfd3b-d91b-47f4-a01a-58ea488be57e>
3.90625
357
Truncated
Science & Tech.
34.764951
Chinese coal blamed for global warming er... cooling Economists ride into sulphurous cloud of aerosols The refusal of the global temperatures to rise as predicted has caused much angst among academics. "The fact is that we can't account for the lack of warming at the moment and it is a travesty that we can't," wrote one in 2009. Either the instruments were wrong, or the heat energy had gone missing somewhere. Now a team of academics, after tweaking a statistical model to include sulphur emissions, suggest that coal power stations may be to blame for a lack of global warming since 1998. The IPCC's 2007 assessment acknowledged the negative radiative forcing (aka, cooling effect) of both natural aerosols from volcanoes and manmade aerosols, but admitted the level of scientific understanding was low. Economist Robert Kaufmann A team of two geographers and two economists headed by Professor Robert Kaufmann at the Department of Geography in Boston publish their results in a new paper Reconciling anthropogenic climate change with observed temperature 1998-2008 [PDF], which includes manmade emissions of sulphur and simulates the flat temperatures since 1998. Kaufmann has a PhD in energy management policy. In this paper, he and his colleagues revisit "a simplified model" from 2006 (PDF) containing statistically estimated equations for three variables: global surface temperature, CO2 and CH4. The actual temperature differences described in the new paper are tiny – with variations from model predictions of 0.1°C. "Results indicate that net anthropogenic forcing rises slower than previous decades because the cooling effects of sulfur emissions grow in tandem with the warming effects greenhouse gas concentrations. This slow-down, along with declining solar insolation and a change from El Nino to La Nina conditions, enables the model to simulate the lack of warming after 1998," the team explains. The model estimates a 0.06W/m2 increase in cooling since 2002. 
Declining sulphur emissions between 1990 and 2002 – caused by the collapse of the Soviet Union and the switch to gas – had a warming effect of 0.19W/m2. Kaufmann et al declare that aerosol cooling is "consistent with" warming from manmade greenhouse gases. Recent studies suggest greenhouse gas emissions may be masking a long-term cooling trend as solar activity declines. Climate scientist Judith Curry, chair of the School of Earth and Atmospheric Sciences at Georgia Institute of Technology, doesn't find the economists' statistical theatrics convincing. She wonders why the short-lived regional increases in particulates should have a global effect on temperatures. She also notes that there has been no increase in aerosols, either globally or over East Asia, from 2000 to 2006; Chinese emissions only rose in the period 2004 to 2007. Kaufmann et al do acknowledge that a La Nina weather pattern cooled the planet between 1998 and 2000, while a warm El Nino increased temperatures in 2002 and again in 2010. "The political consequence of this article seems to be that the simplest solution to global warming is for the Chinese to burn more coal, which they intend to do anyway," writes Curry. Doubtless they will. First we blame them for warming the planet, but now we blame them for cooling the planet. ® "the level of scientific understanding was low." That has to be the truest statement I've seen on the subject so far. On either side. Re AC 13:17 "Just because politicians, reporters, economists, commentards, etc know sod all doesn't really tell you much about the level of scientific understanding where it counts." There is not a single person on this planet who can tell you how global temperature works. Is that clear enough for you? No one, not scientists, not engineers, not the politicians at the IPCC can tell you, because no one knows. 
It seems there is some form of admittance from those in the article that it's quite hard to explain exactly what we're witnessing and where the "missing energy" went. Probably not a great idea to cripple the economy with taxes in relation to it then, no? Anonymous as I don't want to wake up and find a 30m windmill in my back garden - mainly because I can't afford to subsidise another one.
<urn:uuid:053e112f-2882-4f0f-8383-02a6083a0a0d>
3.125
880
Comment Section
Science & Tech.
46.884079
Haskell is a pure functional ProgrammingLanguage that employs LazyEvaluation and supports generic or PolymorphicTypes. It has some cool concepts, like the dot operator. If you remember your calculus, it used the . for functional composition — and so does Haskell! This allows you to do something similar to the pipe operator used in Shell scripting.

-- sieve generates primes; main prints the number of primes between 2 and 100000
sieve :: [Int] -> [Int]
sieve [] = []
sieve (h:t) = h : sieve [x | x <- t, x `mod` h /= 0]

main = (print . length . sieve) [2..100000]

- Haskell Users' Gofer System. An interpreter that's good for playing around learning Haskell. (“Gofer” was a subset of Haskell, but HUGS now implements full Haskell.)
- Glasgow Haskell Compiler. A big, optimising compiler for Haskell.

Part of CategoryProgrammingLanguages, CategoryFunctionalProgrammingLanguages
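For comparison, the compose-then-apply pipeline can be imitated in Python; this is a sketch under the assumption that a helper `compose` stands in for Haskell's built-in `.` operator, and the smaller input range keeps the naive recursion within Python's default recursion limit:

```python
from functools import reduce

def compose(*fns):
    """Right-to-left function composition, like Haskell's (.) operator."""
    return reduce(lambda f, g: lambda x: f(g(x)), fns)

def sieve(xs):
    """Sieve of Eratosthenes over a list, mirroring the Haskell version."""
    if not xs:
        return []
    h, t = xs[0], xs[1:]
    return [h] + sieve([x for x in t if x % h != 0])

# (len . sieve) applied to [2..999]
count_primes = compose(len, sieve)
print(count_primes(list(range(2, 1000))))  # 168 primes below 1000
```

As in Haskell, the composed function reads right to left: sieve first, then len.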
<urn:uuid:8570917f-f45e-4e9b-9c0f-7886b49d0d1b>
3.078125
229
Knowledge Article
Software Dev.
50.2301
NASA’s “Phone Sat” network of Android based satellites released the first pictures of Earth taken from orbit. The network consists of three satellites which are basically Android running Nexus One smartphones. After this first success, NASA decided to extend the program. People have wondered about the possibility of life on other planets for ages, and Mars has always been a popular subject for extraterrestrial life. NASA has a rover called the Opportunity Mars Exploration Rover on Mars now snapping pictures of everything it comes across. It’s taken some very cool pictures, and thanks to an Android App called Mars Images from Powellware we can view them from our Android devices.
<urn:uuid:4a125a11-0ef0-4b15-9e6e-f6f753f1ca56>
2.78125
132
Truncated
Science & Tech.
35.30619
Boo is a statically typed language. Static typing is about the ability to type check a program for type correctness. Static typing is about being able to deliver better runtime performance. Static typing is not about putting the burden of declaring types on the programmer as most mainstream languages do. The mechanism that frees the programmer from having to babysit the compiler is called type inference. Type inference means you don't have to worry about declaring types everywhere just to make the compiler happy. Type inference means you can be productive without giving up the safety net of the type system nor sacrificing performance. Boo's type inference kicks in for newly declared variables and fields, properties, arrays, for statement variables, overridden methods, method return types and generators.

Assignments can be used to introduce new variables in the current scope. The type for the new variable will be inferred from the expression on the right.

s1 = "foo" # declare new variable s1
s2 = s1.Replace("f", "b") # s1 is a string so Replace is cool

Only the first assignment to a variable is taken into account by the type inference mechanism. The following program is illegal:

s = "I'm a string" # s is bound with type string
s = 42 # and although 42 is a really cool number s can only hold strings

class Customer:
    _name = ""

Declare the new field _name and initialize it with an empty string. The type of the field will be string.

When a property does not declare a type it is inferred from its getter.

class BigBrain:
    Answer:
        get: return 42

In this case the type of the Answer property will be inferred as int.

The type of an array is inferred as the least generic type that could safely hold all of its enclosing elements.
a = (1, 2) # a is of type (int)
b = (1L, 2) # b is of type (long)
c = ("foo", 2) # c is of type (object)

For statement variables

names = (" John ", " Eric", " Graham", "TerryG ", " TerryJ", " Michael")
for name in names: # name is typed string since we are iterating a string array
    print name.Trim() # Trim is cool, name is a string

This works even with unpacking:

a = ( (1, 2), (3, 4) )
for i, j in a:
    print i+j # + is cool since i and j are typed int

When overriding a method, it is not necessary to declare its return type since it can be safely inferred from its super method.

class Customer:
    override def ToString():
        pass

Method return types

The return type of a method will be the most generic type among the types of the expressions used in return statements.

def spam():
    return "spam!"

print spam()*3 # multiply operator is cool since spam() is inferred to return a string
# and strings can be multiplied by integer values

def ltuae(returnString as bool):
    return "42" if returnString
    return 42 # ltuae is inferred to return object

print ltuae(false)*3 # ERROR! don't know the meaning of the * operator

When a method does not declare a return type and includes no return statements it will be typed System.Void.

Generators

g = i*2 for i in range(3) # g is inferred to generate ints
for i in g:
    print i*2 # * operator is cool since i is inferred to be int

# works with arrays too
a = array(g) # a is inferred to be (int) since g delivers ints
print a + a[-1] # int sum

A Word of Caution About Interfaces

When implementing interfaces it's important to explicitly declare the signature of a method, property or event. The compiler will look only for exact matches.
In the example below the class will be considered abstract since it does not provide an implementation with the correct signature:

namespace AllThroughTheDay

interface IMeMineIMeMineIMeMine:
    def AllThroughTheNight(iMeMine, iMeMine2, iMeMine3 as int)

class EvenThoseTears(IMeMineIMeMineIMeMine):
    def AllThroughTheNight(iMeMine, iMeMine2, iMeMine3):
        pass

e = EvenThoseTears()

Ok. So where do I have to declare types then? Let's say when:

- when the compiler as it exists today can't do it for you. Ex: parameter types, recursive and mutually recursive method/property/field definitions, return types for abstract and interface methods, for untyped containers, properties with only a set defined

abstract def Method(param /* as object */, i as int) as string:
    pass

def fat([required(value >= 0)] value as int) as int:
    return 1 if value < 2
    return value*fat(value-1)

for i as int in [1, 2, 3]: # list is not typed
    print i*2

- when you don't want to express what the compiler thinks you do:

def foo() as object: # I want the return type to be object not string
    # a common scenario is interface implementation
    return "a string"

if bar:
    a = 3 # a will be typed int
else:
    a = "42" # uh, oh
<urn:uuid:1588657b-ae8a-443d-80d8-48402d0d6932>
3.53125
1,259
Documentation
Software Dev.
48.427351
Manual Section... (3) - page: ecvt

NAME
ecvt, fcvt - convert a floating-point number to a string

SYNOPSIS
char *ecvt(double number, int ndigits, int *decpt, int *sign);
char *fcvt(double number, int ndigits, int *decpt, int *sign);

Feature Test Macro Requirements for glibc (see feature_test_macros(7)):

DESCRIPTION
The ecvt() function converts number to a null-terminated string of ndigits digits (where ndigits is reduced to a system-specific limit determined by the precision of a double), and returns a pointer to the string. The high-order digit is nonzero, unless number is zero. The low-order digit is rounded. The string itself does not contain a decimal point; however, the position of the decimal point relative to the start of the string is stored in *decpt. A negative value for *decpt means that the decimal point is to the left of the start of the string. If the sign of number is negative, *sign is set to a nonzero value, otherwise it is set to 0. If number is zero, it is unspecified whether *decpt is 0 or 1.

RETURN VALUE
Both the ecvt() and fcvt() functions return a pointer to a static string containing the ASCII representation of number. The static string is overwritten by each call to ecvt() or fcvt().

CONFORMING TO
SVr2; marked as LEGACY in POSIX.1-2001. POSIX.1-2008 removes the specifications of ecvt() and fcvt(), recommending the use of sprintf(3) instead (though snprintf(3) may be preferable).

NOTES
Linux libc4 and libc5 specified the type of ndigits as size_t. Not all locales use a point as the radix character ("decimal point").

SEE ALSO
ecvt_r(3), gcvt(3), qecvt(3), setlocale(3), sprintf(3)

COLOPHON
This page is part of release 3.24 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at http://www.kernel.org/doc/man-pages/.

This document was created by man2html, using the manual pages. Time: 15:27:07 GMT, June 11, 2010
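To make the DESCRIPTION concrete, the (digits, decpt, sign) decomposition can be emulated in Python. This is an illustrative re-implementation of the documented contract, not a binding to the C function, and the helper name is invented:

```python
def ecvt_like(number: float, ndigits: int):
    """Return (digits, decpt, sign) following ecvt(3)'s description:
    digits contains no decimal point, the high-order digit is nonzero
    (unless number is zero), the low-order digit is rounded, decpt gives
    the decimal point's position relative to the start of the string,
    and sign is nonzero for negative numbers."""
    sign = 1 if number < 0 else 0
    # Scientific notation hands us rounded mantissa digits plus an exponent.
    mantissa, exp = f"{abs(number):.{ndigits - 1}e}".split("e")
    digits = mantissa.replace(".", "")
    decpt = int(exp) + 1  # the point sits after the first digit, shifted by exp
    return digits, decpt, sign

print(ecvt_like(-3.1415, 5))  # ('31415', 1, 1)
print(ecvt_like(0.00625, 3))  # ('625', -2, 0): point left of the string
```

The second call shows the negative *decpt case described above: "625" with decpt -2 represents 0.00625.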
<urn:uuid:4550f6b2-e0ca-48c2-b1c1-b9758a35c78c>
2.9375
529
Documentation
Software Dev.
68.822139