Dataset columns:
text: large_string, lengths 148 to 17k
id: large_string, length 47
score: float64, range 2.69 to 5.31
tokens: int64, range 36 to 7.79k
format: large_string, 13 classes
topic: large_string, 2 classes
fr_ease: float64, range 20 to 157
Nisquallia olympica females range in color from dark slate gray through lighter gray, mottled brown, olive and rust. Some of the color variation may be related to how recently they've molted. Image 6 shows the ventral side of a female, shot through the wall of a plastic terrarium. References: "Two New Melanoploid Genera (Orthoptera: Acrididae: Cyrtacanthacridinae) from the Western United States", James A. G. Rehn, Transactions of the American Entomological Society (1890-), Vol. 78, No. 2 (Jun., 1952), pp. 101-115 (see JSTOR link; available to read online with a free account). Jacques R. Helfer, How to Know the Grasshoppers, Crickets, Cockroaches and Their Allies, Wm. C. Brown, 1963 (republished in 1987 by Dover).
<urn:uuid:b2ebbbfa-4a02-42f6-adf4-e7f3ef59a798>
2.828125
203
Knowledge Article
Science & Tech.
62.022632
Global warming is the increase in the average temperature of Earth's near-surface air and oceans since the mid-20th century and its projected continuation. Global surface temperature increased 0.74 ± 0.18 °C (1.33 ± 0.32 °F) between the start and the end of the 20th century. The Intergovernmental Panel on Climate Change (IPCC) concludes that most of the observed temperature increase since the middle of the 20th century was very likely caused by increasing concentrations of greenhouse gases resulting from human activity such as fossil fuel burning and deforestation. The IPCC also concludes that variations in natural phenomena such as solar radiation and volcanic eruptions had a small cooling effect after 1950. These basic conclusions have been endorsed by more than 40 scientific societies and academies of science, including all of the national academies of science of the major industrialized countries. Climate model projections summarized in the latest IPCC report indicate that the global surface temperature is likely to rise a further 1.1 to 6.4 °C (2.0 to 11.5 °F) during the 21st century. The uncertainty in this estimate arises from the use of models with differing sensitivity to greenhouse gas concentrations and the use of differing estimates of future greenhouse gas emissions. Most studies focus on the period up to the year 2100. However, warming is expected to continue beyond 2100 even if emissions stop, because of the large heat capacity of the oceans and the long lifetime of carbon dioxide in the atmosphere. An increase in global temperature will cause sea levels to rise and will change the amount and pattern of precipitation, probably including expansion of subtropical deserts. Warming is expected to be strongest in the Arctic and would be associated with continuing retreat of glaciers, permafrost and sea ice. Other likely effects include changes in the frequency and intensity of extreme weather events, species extinctions, and changes in agricultural yields. Warming and related changes will vary from region to region around the globe, though the nature of these regional variations is uncertain. Political and public debate continues regarding global warming and what actions (if any) to take in response. The available options are mitigation to reduce further emissions; adaptation to reduce the damage caused by warming; and, more speculatively, geoengineering to reverse global warming. Most national governments have signed and ratified the Kyoto Protocol, aimed at reducing greenhouse gas emissions.
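A quick numeric check of the paired °C/°F figures quoted above: temperature differences (anomalies) convert with the 9/5 factor alone; the +32 offset applies only to absolute temperatures. A minimal Python sketch:

```python
# Temperature *differences* convert with the 9/5 factor only;
# the +32 offset applies to absolute temperatures, not anomalies.
def c_delta_to_f(delta_c: float) -> float:
    return delta_c * 9 / 5

print(c_delta_to_f(0.74))                     # ~1.33 F, the 20th-century rise above
print(c_delta_to_f(1.1), c_delta_to_f(6.4))   # ~2.0 and ~11.5 F, the projected range
```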
<urn:uuid:fe3f896d-e628-45d8-b52f-e6794eaeba07>
3.890625
477
Knowledge Article
Science & Tech.
32.418825
Nature of Things: Mysteries of Sight, Sound and Other Senses. From birds using celestial navigation to salmon using chemical sensors to "smell" their way home, John Weeks discusses the migration phenomenon in various species. Weeks notes that many migration patterns hold mysteries that are still unexplained. Originally aired on September 25, 1987.
<urn:uuid:c5694694-98fc-4f05-a66e-c062bfae9637>
3.0625
68
Truncated
Science & Tech.
25.887308
Sharp telescopic views of magnificent edge-on spiral galaxy NGC 3628 show a puffy galactic disk divided by dark dust lanes. The tantalizing scene puts many astronomers in mind of its popular moniker, The Hamburger Galaxy. About 100,000 light-years across and 35 million light-years away in the constellation Leo, NGC 3628 shares its neighborhood in the Universe with two other large spirals, a grouping otherwise known as the Leo Triplet. Gravitational interactions with its cosmic neighbors are likely responsible for the extended flare and warp of this spiral's disk, populated by the galaxy's star clusters and telltale pinkish star-forming regions. Also a result of past close encounters, a faint tail of material is just visible extending upward and left in this deep galaxy portrait.
<urn:uuid:830d2ba0-308c-4078-91bd-9db9d7466a1f>
3.359375
182
Knowledge Article
Science & Tech.
38.811466
The Climate statistics for Australian locations consist of information for more than 1000 sites. The tables prepared for each site provide averages and other statistics for a number of elements. Further information about each of these statistics can be found by clicking on the first column of each row in the statistics tables. Sites have been included only if a minimum of 10 years of temperature data is available for the site. Thus, statistics for the more than 15,000 rainfall-only stations are not currently available on this web site, but may be obtained by contacting the Bureau. Detailed information on the climate data available from all Bureau sites is available on-line, or you may contact the Bureau for specific information. The basic station metadata provided for each site includes the year in which the station was opened and, if applicable, the year that the station was closed. Generally, but not always, observations will commence close to the start date and cease around the time the station is closed. However, the types of observations made over that period - wind, temperature and rainfall, for example - may change according to operational needs. Thus the length of record for each element, and how complete that record is, may not be the same. Rainfall typically has the longest observation record at most sites. The data field labeled 'Years' in each row of the climate statistics table contains two sub-fields: the length of the record, and the first and last year of available data. The length of the record for an element is calculated by dividing the number of months used by 12, and does not imply calendar or complete years, except for the rainfall decile values. It gives an indication of the amount of data used between the first and last dates of occurrence of the element. No statistic is provided if there are fewer than 10 years of data for the relevant observation. The range of meteorological elements observed is not the same at all sites. For example, sunshine duration, maximum wind gust and evaporation are only recorded at some stations. In addition, some weather stations will have ceased recording a particular element or elements during their period of observations. In these cases, there may be no data (for a particular month or element) shown on the statistics report. Data quality control processes, and some of the methods used to derive climate parameters (such as the calculation of clear and cloudy days), have changed since records were first kept by the Bureau of Meteorology. This means there may be quality issues with climate statistics calculated from historical records. We will soon finish reprocessing many of the data used to calculate the climate statistics, which will improve the quality of the statistical information provided. In the meantime, the statistics are based on the same (updated) datasets as used previously. There is a delay, which varies with the type of element and sometimes the site as well, between when an observation is made and when the data have completed the quality control process. This delay is typically greatest for rainfall data. Therefore, recent data may not be included in the statistics for a particular site. The mean value, also known as the average, is one of the most common statistics used to provide an estimate of what is most likely to happen.
It is not necessarily equal to the most commonly occurring value, which is known as the mode, but for most elements it will be close. By itself, the mean does not provide any information about how the observations are scattered around it; whether they are tightly grouped or broadly scattered. Deciles are one of the statistics used to provide an indication of the spread of data in a data set (e.g., a collection of rainfall observations at a site). To calculate deciles, we divide the ranked data set into ten parts. The median is simply the value which divides the ranked data set in half. For example, 50% of Januaries will have a total rainfall at or above the January median and 50% will have a total below it. The median is also known as the 5th decile, decile 5 and the 50th percentile - they are all the same thing. Decile 9, or the 90th percentile, for January means that 90% of January totals will be at or below this figure. In other words, there is a 90% chance of a January rainfall being at or below decile 9 (the 90th percentile), a 10% chance of it being above decile 9, and a 10% probability of it being below decile 1 (the 10th percentile). To get the annual decile value, you do not sum the deciles for the 12 individual months, but calculate it separately from the set of annual rainfall totals; however, it is possible for the two values to be the same by chance. The median is usually the preferred measure of 'average' rainfall from the meteorological point of view, particularly for shorter timeframes. This is because of the high variability of rainfall - one extreme rainfall event will have less effect on the median than on the arithmetic mean. For example, at Roebourne (site number 004035) in the north-west coastal region of Australia, the wet season and the number and path of cyclones can vary significantly from year to year. Figure 2 illustrates the January rainfall at Roebourne over the period 1961 to 1990. The median rainfall of 23 mm is more indicative of the majority of months than the mean rainfall of 68.5 mm. While generally correct, extreme values should be viewed with some caution. Extremes may have been recorded using non-standard equipment, particularly in the very early years of the Australian meteorological record; for example, pre-1910 temperature extremes. In addition, an extreme value may represent an erroneous measurement that escaped detection by the normal quality control processes. Extremes can also be very location specific. A rain gauge at the centre of a concentrated, intense thunderstorm may accurately measure a record rainfall total while a location nearby observes no significant rainfall. Extreme values of minimum temperature tend to be more location specific than those of high temperature. The extremes in the tables of statistics are drawn from the Bureau's computer archive of climate data. Some sites, which could have had more extreme values in the 1800s or the early 1900s, may not have been entered in the database. In addition, very recent extremes may not have been processed at the time the tables were created. Statistics calculated over standard periods (commonly a 30 year interval) are often called climate normals, and are generally used as reference values for comparative purposes. The period is long enough to include the majority of typical year to year variations in the climate, but not so long that it is significantly influenced by longer-term changes in climate.
In Australia, the current reference climate normal is generated over the 30-year period 1 January 1961 to 31 December 1990. Climate normals can be used to assess how typical of the current climate a particular event was. For example, the difference between the average temperature for a calendar year at a site and the climate normal average temperature - the anomaly - can be used to indicate whether that year was relatively 'hot' or 'cool'. Normals are also useful in comparing changes in climate over a long period, as illustrated in Figure 3, which shows the increase in monthly mean minimum temperatures in Melbourne during the 20th century. The statistics have been calculated over the available period of record, which may differ between elements. For all elements, climate statistics have been derived only when there are at least 10 years of data available. Because of the annual variability in rainfall, a period of less than 30 years of rainfall data may not produce reliable statistics, and such information should be used with caution. As a comparison, some 5-10 years of temperature data will provide a reasonable estimate of the mean, although probably not of the extremes. Due to the effect of Daylight Saving, the 9 am and 3 pm observation times are only nominal for most Australian sites. The averages for 9 am are hence generally a combination of 8 am and 9 am (standard time) values, and those for 3 pm, of 2 pm and 3 pm values.
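As an illustration of the mean, median and decile definitions discussed above, here is a minimal Python sketch; the rainfall totals are invented examples, not Bureau data:

```python
# Minimal sketch of the mean/median/decile definitions above.
# The monthly totals are invented illustrative numbers, not Bureau data.
import statistics

january_totals_mm = [0, 2, 5, 9, 14, 23, 31, 48, 120, 433]  # ranked January totals

print(statistics.mean(january_totals_mm))    # 68.5 - pulled up by the extreme 433 mm
print(statistics.median(january_totals_mm))  # 18.5 - half the Januaries at or below this

# statistics.quantiles with n=10 returns the nine decile boundaries;
# index 8 is decile 9 (the 90th percentile).
deciles = statistics.quantiles(january_totals_mm, n=10)
print(deciles[0], deciles[8])  # decile 1 and decile 9
```

The gap between the mean and the median mirrors the Roebourne example: one extreme total moves the mean far more than it moves the median.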
<urn:uuid:7327f2bb-65a3-4e27-887a-e96f099fe4a3>
2.84375
1,671
Knowledge Article
Science & Tech.
37.499858
common chemical sense
Reaction to foreign substances: Many microorganisms are known to remain in favourable chemical environments and to disperse away from unfavourable environments. This implies that microorganisms have a chemical sense, but, because they are so small, they are unable to detect chemical gradients by simultaneous comparison of the chemical concentration at two parts of the body. Instead, microorganisms exhibit differential... Humans use a knowledge of the chemical senses to modify their own behaviour or physiology and to modify these properties in other animals.
<urn:uuid:fbcc029f-4b85-4c4f-88a9-115459b833c9>
3.46875
160
Knowledge Article
Science & Tech.
31.597273
We've raved about solar cells previously (here and here); the technology has taken several quantum leaps over the past decade. Paintable crystalline and printable solar cells seem to be the way of the future; the fight now is for real solar efficiency. Solar panels that can simply be printed have inched a step closer with the development of an energy-efficient, organic, small-molecule solar cell. The solar cell, which was developed by a team from the University of California, Santa Barbara, has an energy efficiency of 6.7 per cent, which rivals the best polymer-based solar cells; most polymer-based designs have reached the 6 to 8 per cent range. "These results provide important progress for solution-processed organic photovoltaics and demonstrate that solar cells fabricated from small donor molecules can compete with their polymeric counterparts," the authors, including Nobel Prize winner Professor Alan Heeger, wrote in Nature Materials.
<urn:uuid:0fba80e6-8bef-4f4e-99b9-4210948ff986>
3.1875
205
Truncated
Science & Tech.
27.713548
When creating a new VB project, Visual Studio will automatically put Option Strict Off. What is Option Strict and why should it be on? Having Option Strict set to Off is VERY dangerous because it allows you to implicitly convert types to other types. Take for example the following code: Dim numeric As Integer = "1" Dim otherNumeric As Integer = "Hello" The first assignment, of "1" to an Integer, will succeed. The second will fail miserably... At run time! Because Option Strict is Off, Visual Studio will not give an error on these types of conversions. Option Strict On however DOES prevent this sort of thing. The only way the above code would pass when having Option Strict On is by explicitly casting, like follows: Dim numeric As Integer = CType("1", Integer) Dim otherNumeric As Integer = CType("Hello", Integer) This will still generate an error at run time, but at least the programmer has been made very aware at design time that this case might go wrong! Having Option Strict Off is a sin punished by run time crashes during production while you are enjoying your weekend (not anymore!). If this ruined your weekend, you deserved it ;P So how do you put Option Strict On? There are two methods. First, go to your Project Options and click the Compile tab. Here you will see some ComboBoxes, one of them has the label "Option Strict:" above it. Make sure it's set to On. This will enable Option Strict for your entire Project! This option can be set to On by default by going to your Visual Studio Options (accessed from your Menu -> Tools -> Options...). In your Options screen, find the 'Projects and Solutions' options. Open this and then select 'VB Defaults'. This will show you the default settings for, among others, Option Strict. Make sure it's On. Every newly created Project will now have Option Strict On by default! Another method is typing Option Strict On at the top of your code file. This is especially handy when you have legacy software and setting Option Strict On at Project level would cause thousands of errors. At least your new code can benefit from design time errors and explicit casting! So why would you ever want to have Option Strict Off? Well, in VB6 the Strict Off behaviour was default, so it is kind of a backward compatibility thing. All sorts of typecasting are done in the background (but these do affect performance and, as mentioned, might throw unexpected Exceptions). Second, Strict Off can be handy when using late binding. Consider, for example, the following code: Dim someThirdPartyTool As Object = GetSomeObject This code is perfectly legal with Option Strict Off. In fact, you can call any 'non-existent' method on a variable of type Object (this is called late binding). So imagine that GetSomeObject dynamically loads some third party tool in your application. What's worse, this third party tool has terrible versioning, backward compatibility and multiple versions cannot be installed side by side. GetSomeObject could simply load whichever version of the third party tool just so happens to be installed on the current computer and return some Object which interacts with the third party tool. You can then call a method on the returned Object, even though it is NOT a method of Object (just make sure the method is supported by the third party tool). So the obvious upside to this is that you do not even need to have the third party tool installed to write and compile this code (a customer calls and says they're having trouble with third party tool version 1, but you currently have version 2 installed which doesn't support that method anymore)!
It will only throw Exceptions at run time when the tool is not found or the method is not supported. The obvious downside is that a minor spelling mistake will mess up your application good, and you don't have type checking or IntelliSense. When writing this kind of code, make sure you have a detailed manual which describes every object and method in the library in great detail. Even better, write a uniform Interface and create a different implementation for each version of the tool (more work, but strongly typed, IntelliSense is supported, design time errors etc.). Another use for Option Strict Off is when working with COM Objects. I will not go into this discussion here, because I have no experience with this, but here is a small blog post that's worth reading. Notice how working with COM can be achieved with Strict On too. So while I do not think Option Strict Off is the best method to deal with any of these kinds of situations, it might have its uses in some environments (the one having to maintain your code will want to kill you though, speaking from experience here :)). Some more reading: Coding Horror: Option Strict and Option Explicit in VB.NET 2005; Granular Late Binding: VB Whidbey Fun; Option Strict Off Considered Harmful. Why is this an issue in VB specifically? It's on by default in C#. :) And why am I posting this? Because I see lots of new code that does not have Option Strict On and will almost certainly result in run time errors. :(( I have been playing video games since the mid-90's, when Windows 95 came out. I was around seven years old at the time. One thing led to another and I was exposed to some (scripting) languages including VB4, PHP, HTML and CSS. It was not until summer 2010, after a Bachelor in Common Art and Cultural Sciences and a Master in Media and Journalism, that I decided to become a professional programmer. I was hired by a company and they taught me the basics of VB.NET and WinForms and using a SQL Server database. At the end of that same year I signed up on CodeProject and that is when my programming knowledge increased rapidly. Being around some of the best and most enthusiastic coders in the world certainly helps you develop your own skills. I learned various Object Oriented Principles such as SOLID and Design Patterns. I am still working in VB and WinForms using DevExpress Controls and the .NET Framework 4.0. I have experience in ADO.NET, Entity Framework 4.0, LINQ, TPL, WCF, and SQL. I have also written some articles for CodeProject in both VB and C#. My second article, What not to do: Anti-Patterns and the Solutions, became best VB article of the month April 2011, of which I am very proud. For those wondering what happened to my Bachelor in Common Art and Cultural Sciences and Master in Media and Journalism, I currently hold an MA title in Media and Journalism. I am not really doing anything with it, but I guess it helps me write those articles.
<urn:uuid:a99264b4-6bea-437d-aa9c-cf0cbe29685b>
2.796875
1,441
Personal Blog
Software Dev.
59.150319
Issue Date: Nov 30, 1992. The cold war between the superpowers not only left the entire world cold in fear, but perhaps also cooled the Earth, say Russian researchers K Y Kondratyev and G A Nikolsky. Their theory is that the atmospheric nuclear tests of the early 1960s released into the stratosphere vast amounts of nitrogen oxides which, combined with sunlight, expedited ozone depletion. The oxides triggered ozone-destroying reactions, thus using up a part of the sunlight that would otherwise cross the stratosphere and warm the surface of the Earth. Cheryl Colopy's book explores how south Asian rivers have been transformed from being considered sacred beings to sewers. How a township has set a high standard for eco-friendly living.
<urn:uuid:050e1b0f-8fec-41e0-82d0-886ac20deeb3>
3.171875
157
Content Listing
Science & Tech.
26.869809
In the grand scheme of things it doesn't matter a whole lot, but I think this is fascinating. TOKYO, Japan (AP) -- Japanese scientists have photographed for the first time in the wild a live giant squid, one of the most mysterious creatures of the deep sea. The team, led by Tsunemi Kubodera from the National Science Museum in Tokyo, tracked the 8-meter (25-foot) long Architeuthis as it attacked prey at 900 meters deep off the coast of Japan's Bonin islands. "We believe this is the first time a grown giant squid has been captured on camera in its natural habitat," said Kyoichi Mori, a marine researcher who co-authored a piece on the finding in the Royal Society Journal, a leading British biological publication. The camera was operated by remote control during research at the end of October 2004, Mori told The Associated Press on Wednesday. Mori said the squid, which was purplish red like smaller squid, attacked its quarry aggressively, calling into question the image of the animal as lethargic and slow moving. "Contrary to belief that the giant squid is relatively inactive, the squid we captured on film actively used its enormous tentacles to go after prey," Mori said. "It went after some bait that we had on the end of the camera and became stuck, and left behind a tentacle six meters long," Mori said. Kubodera, also reached by the AP, said researchers ran DNA tests on the tentacle and found it matched those of other giant squids found around Japan. "But other sightings were of smaller, or very injured, squids washed toward the shore -- or of parts of a giant squid," Kubodera said. "This is the first time a full-grown, healthy squid has been sighted in its natural environment in deep water." Giant squids have long attracted human fascination and were written about and mythologized by the ancient Greeks. Scientific interest in the animals has surged in recent years as more specimens have been caught in commercial fishing nets.
<urn:uuid:acf6ed86-8c86-451f-9901-db4dd94a6dd0>
2.765625
433
Comment Section
Science & Tech.
43.783462
Note: Background information originally by Tim Goeke. ODBC (Open Database Connectivity) is an abstract API which allows you to write applications that can interoperate with various RDBMS servers. ODBC provides a product-neutral interface between frontend applications and database servers, allowing a user or developer to write applications which are transportable between servers from different manufacturers. The ODBC API matches up on the backend to an ODBC-compatible data source. This could be anything from a text file to an Oracle or Postgres RDBMS. The backend access comes from ODBC drivers, or vendor-specific drivers, that allow data access. psqlODBC is such a driver, along with others that are available, such as the OpenLink ODBC drivers. Once you write an ODBC application, you should be able to connect to any backend database, regardless of the vendor, as long as the database schema is the same. For example, you could have MS SQL Server and Postgres servers which have exactly the same data. Using ODBC, your Windows application would make exactly the same calls and the backend data source would look the same (to the Windows app).
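To make the portability point concrete, here is a minimal Python sketch using the pyodbc bindings; the DSN names and the customers table are hypothetical placeholders, and the assumption is that both data sources expose the same schema:

```python
# Sketch: identical application code against different backends.
# "PostgresSource", "MSSQLSource" and the customers table are
# hypothetical placeholders; only the ODBC driver behind each DSN differs.
import pyodbc

def customer_count(dsn: str) -> int:
    conn = pyodbc.connect(f"DSN={dsn}")
    try:
        cursor = conn.cursor()
        cursor.execute("SELECT COUNT(*) FROM customers")
        return cursor.fetchone()[0]
    finally:
        conn.close()

print(customer_count("PostgresSource"))
print(customer_count("MSSQLSource"))
```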
<urn:uuid:c72d60ac-754f-4fee-aba7-ea9cf22f8666>
3.0625
243
Documentation
Software Dev.
40.537727
There has been an overwhelming popular demand for us to weigh in on recent reports in the Times ("Britain faces big chill as ocean current slows") and CNN ("Changes in Gulf Stream could chill Europe") (note the interesting shift in geographical perspective!). At the heart of the story was a statement at the recent EGU meeting by Peter Wadhams from Cambridge University that convection in a normally active area of the Greenland Sea was much reduced last winter. Specifically, in an area where a dozen or so convective 'chimneys' form, only two small chimneys were seen. (Unfortunately, I can't seem to find a relevant abstract of Dr. Wadhams' talk, and so I have to rely on the Times' news reports for the specifics.) Convective chimneys in the seas bounded by Greenland, Iceland and Norway occur when intense cooling of the ocean, usually associated with a low-pressure system passing through, breaks down the normally stable ocean layers and causes the now colder, denser water to convect and mix down to a relatively deep layer. This area of the world is one of only a few places where the underlying ocean column is marginally stable enough that this process can occur in the open ocean and lead to convective chimneys going down 2000 to 3000 meters. The deep water masses formed in this way are then exported out of the area in deep currents that eventually make up "North Atlantic Deep Water" (which also contains contributions from the Labrador Sea and entrainment of other water masses). This process is part of what is called the 'thermohaline' or 'overturning' circulation and is associated with a significant amount of heat transport into the North Atlantic, which indeed keeps Britain and the rest of the North Atlantic region 3 to 6 degrees C warmer than they otherwise would be. The figure gives two model estimates for the impact of this circulation (Stocker, 2002). This heat transport is often associated with the Gulf Stream in the media and among the public. However, my pedantic side obliges me to point out that the Gulf Stream is a predominantly wind-driven western boundary current that moves up from the Gulf of Mexico along the US coast to Cape Hatteras, at which point it heads off into the central Atlantic (see also this letter by Carl Wunsch). It then turns into the North Atlantic Drift, which is really the flow of water responsible for the anomalous northward heat transport in the Atlantic. There is good evidence from past climates, theoretical studies and climate models that large changes - a slowing down or even a complete collapse - in the North Atlantic Drift and the thermohaline circulation can happen. Indeed, climate models generally (though not exclusively) forecast a slowdown in this circulation by 2100. This occurs mainly as a function of increased rainfall in the region, which strengthens the ocean layering and reduces the amount of convection. It is probably futile to insist on it at this point, but a collapse of the overturning circulation is not the same as a collapse or reversal of the Gulf Stream (which, as I mentioned above, is predominantly wind-driven). Getting back to the statement by Peter Wadhams though, how does this relatively small-scale observation get translated into headlines forecasting changes in the Gulf Stream and chilly times ahead for Europe? The major problem is that the background story and the climate model results are now very well known, and any scientific result that appears to project onto this storyline therefore gets a lot of attention.
However, it is a long way from the Greenland Sea to the Gulf Stream, and some important points did not get a mention in the news stories. Firstly, we know that there is a great deal of decadal variability in how much and where deep convection takes place. Indeed, it was reported by Schlosser et al (1991), based on CFC measurements, that very little convection had occurred in the Greenland Sea over the previous 7 years. Subsequently, convection was renewed. Similarly, convection in the Labrador Sea (the other main component) has also oscillated, possibly out of phase with the convection in the Greenland Sea. Studies by Dickson et al (1999, 2002) showed that properties of the deep water overflowing the Denmark Strait (between the Greenland Sea and the rest of the Atlantic) appear to be related to patterns of variability like the North Atlantic Oscillation, and this may help explain some of the variability. To be sure, there are some long term trends that are becoming discernible. There is a freshening of the North Atlantic visible since the 1950s. Long continuous records of temperature and salinity at Ocean Weather Station M in the Norwegian Sea indicate that the deep water has also warmed noticeably. However, monitoring networks are now starting to be put in place (Osterhus et al, 2005) and better integrated data will be available in the future. It is important to bear in mind that while the changes being seen are indeed significant given the accuracy of modern oceanography, the magnitude of the changes (a few hundredths of a salinity unit) is very much smaller (maybe two orders of magnitude) than the kinds of changes inferred from the paleo data or seen in climate models. Thus, while continued monitoring of this key climatic area is clearly warranted, the imminent chilling of Europe is a ways off yet.
<urn:uuid:fc7347fd-75bf-473d-b4bc-018f65de652d>
3.375
1,092
Nonfiction Writing
Science & Tech.
36.475355
Kemp's Ridley Sea Turtle. Scenes from the video: Why is the Kemp's Ridley endangered? How long can it live? Why does it like to lay eggs when it's windy? Brenda Justice talks with TPWD biologist Robert Adami, who tells us this and more! The Kemp's Ridley sea turtle, found in coastal waters and bays of the Gulf and in the Atlantic Ocean, is the smallest, most endangered sea turtle. This reptile weighs 80 to 100 pounds and grows to 30 inches long. Little is known about its life in the open ocean. It prefers shallow waters close to shore, where it feeds on such things as crabs, snails, clams and some plants, and often is caught and drowned in shrimp nets. Pollution, both chemical and plastic, affects it. From April through August, females lay clutches of soft, white eggs in sandy beaches from Veracruz, Mexico, to Corpus Christi, but few have nested on Texas beaches in recent years. When the young hatch in 50 to 70 days, they head for the water. In some areas, this turtle and its eggs are eaten by humans. Because it is critically endangered, the Ridley is the focus of international conservation efforts.
<urn:uuid:a1bc67c9-11cd-4d57-bc3c-4a2615d93619>
3.171875
256
Audio Transcript
Science & Tech.
68.262402
Ensure that you have fonts that can represent all characters in ISO 8859-15 and Windows cp1252.
C1 code points in ISO 8859-15
Test passes if: A. The symbols describe the lines to their right. B. The document encoding indicated by the browser chrome (if available) says ISO 8859-15.
Assertion: When an ISO 8859-15 encoded document contains code points in its C1 control range that correspond to graphic characters in the Windows 1252 encoding, these are not displayed as Windows 1252 characters, and the user agent continues to otherwise handle this document as ISO 8859-15.
Encoding information is sent in the HTTP header; the success of this test rests on the assumption that the HTTP header applies the expected character encoding. The left side of the top line contains the following sequence of code points from the C1 range: 0x80 0x9A 0x9E 0x8E. These are not graphic characters in ISO 8859-15, but in Windows cp1252 they represent the sequence of characters shown in the graphic below. The characters to the right serve to distinguish ISO 8859-15 from Windows cp1252, ISO 8859-1 and Windows cp1256 (Arabic). If the XML fails to load due to a syntax error, that is also evidence that the assertion held true.
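The distinction the test relies on can be seen directly in Python, using the four byte values listed above:

```python
# The four C1 bytes from the top line of the test document.
payload = b"\x80\x9a\x9e\x8e"

# Under Windows cp1252 these bytes map to graphic characters...
print(payload.decode("cp1252"))  # euro sign, s-caron, z-caron, Z-caron

# ...but under ISO 8859-15 they decode to C1 control code points,
# which a conforming user agent must not render as cp1252 glyphs.
print([hex(ord(c)) for c in payload.decode("iso8859_15")])
# ['0x80', '0x9a', '0x9e', '0x8e']
```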
<urn:uuid:f2d68e3d-c297-49df-82fe-8d1f636ca758>
2.78125
287
Documentation
Software Dev.
65.560918
These two pictures show the speed of the solar wind. The speed of the solar wind was measured by the Ulysses spacecraft. The red and blue arrows show how fast the solar wind was "blowing". Longer arrows show higher speeds. The picture on the left shows the Sun at "solar min" when there are few sunspots. The picture on the right shows the "solar max" period of a sunspot cycle. That's when the Sun is very active and there are usually lots of sunspots. At solar min, the fast solar wind (~750 km/sec) flows outward from coronal holes near the poles. The slow solar wind (~400 km/sec) flows more slowly from the Sun's equator. At solar max the Sun's magnetic field is a scrambled mess. The speed of the solar wind is not really related to latitude at solar max. Images courtesy of the ESA.
<urn:uuid:2ab86bf2-ffbc-4eb8-8c5c-013fdca5339a>
3.84375
188
Knowledge Article
Science & Tech.
76.2715
This is an interesting question, particularly considered in the context that Cairns-Smith (1985) even suggested that clays (silicates in solution) may have had some sort of early selection acting on them due to their surface chemistries. However, there are a number of major problems with silicon. Some are chemical and some are astrophysical in nature. For example:
- Silicon has a lower electronegativity than carbon and a longer bond length. Silicon can polymerize, but many conformations (such as rings) are highly reactive or unstable.
- Silicon lacks chirality. Since biochemical reactions are very specific, this may present a fundamental problem for alien biochemistries.
- We don't see silicon macromolecules in nature. Large carbon molecules are seen in space, such as polycyclic aromatic hydrocarbon rings. The largest silicon molecule seen in space is a chain of SiC_3 (and maybe SiC_4).
- On reacting with oxygen (which it does readily), silicon likes to form solids like sand.
- Silicon is much less common than carbon in the Universe. The Solar abundance of silicon is 1/10 that of carbon, and supernova yields suggest that the silicon abundance may be as low as 1/100 that of carbon during nucleosynthesis in low/intermediate mass stars.
To form complex silicon molecules we would probably need to keep the silicon in an oxygen-free environment and somehow maintain it in solution. One possibility would be to hold it at high pressure and temperature, such as in the interiors of planets (think of the deep hot biosphere theory), but this presents another host of problems for conceivable biochemistries and is very speculative.
Apponi, A.J., McCarthy, M.C., Gottlieb, C.A., & Thaddeus, P. 1999, Journal of Chemical Physics, 111, 3911.
Cairns-Smith, A.G. 1985, Seven Clues to the Origin of Life, Cambridge University Press, New York, ISBN 0-521-27522-9.
Woosley, S.E., & Weaver, T.A. 1995, Astrophysical Journal Supplement, 101, 181.
<urn:uuid:13d249cf-347a-4497-aae2-d1727d14234b>
3.546875
452
Q&A Forum
Science & Tech.
48.407308
Temperate species of the Drosophila melanogaster species group enter reproductive diapause for overwintering in response to short daylength. During the prediapause period, they accumulate triacylglycerols (TAGs) as energy resources for winter. Under laboratory conditions, the capacity for storing TAGs differs among species, and appears to be closely correlated with diapause and cold-hardiness; cool-temperate species, such as those of the auraria species complex, which enter a deep diapause and are highly cold-hardy, accumulate a larger amount of TAGs than warm-temperate species, such as D. rufa and D. lutescens, which enter a weak diapause and are less cold-hardy. On the other hand, a subtropical species, D. takahashii, which has no diapause in nature and is not cold-hardy, is unable to store as much TAG as the temperate species. These species were tested for winter survival and TAG content under outdoor conditions in Sapporo (a cool-temperate region), northern Japan. In the strains of cool-temperate species from northern Japan, individuals which eclosed in mid autumn accumulated TAGs up to 163 μg/mg body weight and 50 to 70% of them survived until spring, while those which eclosed later in autumn accumulated less TAG and had a lower ability to overwinter. The TAG content was lower in the warm-temperate species and the subtropical strain of D. triauraria, and dropped to very low levels by mid winter, and these species and strain were unable to survive until spring. These observations suggest that TAG level plays an important role in the overwintering of these Drosophila species. In addition, differential scanning calorimetry analysis revealed that the transition temperatures of TAGs were lower in diapausing adults than in reproducing ones, and also lower in species or strains adapted to cooler climates than in those adapted to warmer climates. These phenomena were correlated with the fatty acid compositions of the TAGs. Furthermore, in the temperate species of the montium species subgroup (D. subauraria, D. biauraria, D. triauraria and D. rufa), the amount of saturated TAGs was smaller than the value expected on the assumption that fatty acids are randomly distributed in the TAGs, suggesting a non-random distribution of unsaturated fatty acids among TAGs. This may facilitate the lowering of the transition temperature of TAGs, and hence may be related to the ability of Drosophila to cope with temperate climates.
<urn:uuid:cf20209d-f468-4afd-8d55-0a9b88873e8d>
2.921875
551
Academic Writing
Science & Tech.
28.610164
The Bowen Ratio Surface Flux Observations (KSU) Data Set contains surface flux measurements made at selected sites within the FIFE area. The sites were equipped with Bowen ratio equipment that was operated by several different groups. Each surface flux station was capable of measuring the fluxes of net radiation, latent heat and sensible heat. The Bowen ratio stations measured the soil heat flux as well. The components of the energy balance were determined with the Bowen Ratio Energy Balance (BREB) method. The BREB method is a combination of the transport and energy balance equations. The surface flux and micrometeorological measurements available in this data set were collected from 23 locations with 27 site identifiers from 1987 through 1989. Thirteen of these locations were instrumented with stationary Bowen ratio systems which collected daily measurements for months. These systems were all located in the northwest quadrant of the FIFE study area within the Konza Prairie Natural Research Area. Ten locations were instrumented in 1987, for a few days at a time, with a portable Bowen ratio system. This roving system visited all but the southeast quadrant of the FIFE study area.
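As a sketch of how the BREB method combines the transport and energy balance equations (standard textbook form; the sample values are illustrative placeholders, not FIFE measurements):

```python
# Bowen Ratio Energy Balance (BREB) sketch, standard textbook form;
# sample values are illustrative placeholders, not FIFE data.
GAMMA = 0.066  # psychrometric constant, kPa/K, near sea level

def breb_fluxes(rn: float, g: float, dT: float, de: float):
    """Partition available energy (Rn - G) into sensible (H) and latent
    (LE) heat flux. beta = gamma * dT / de is the Bowen ratio, with dT
    and de the temperature (K) and vapour pressure (kPa) differences
    between the two measurement heights."""
    beta = GAMMA * dT / de
    le = (rn - g) / (1 + beta)  # latent heat flux, W/m^2
    h = beta * le               # sensible heat flux, W/m^2
    return h, le

print(breb_fluxes(rn=450.0, g=50.0, dT=1.2, de=0.35))  # midday prairie example
```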
<urn:uuid:e77aad46-53a7-4035-a1df-4ab556554b28>
3.34375
227
Structured Data
Science & Tech.
37.361071
This data set describes surface and volume properties of snow and ice to validate radar satellite measurements. Ground penetrating radar (GPR) at 500 and 1000 MHz provides information on snow layers and thickness near the surface at two sites on the McMurdo Ice Shelf, one site at Ross Island, and on the landfast and new sea ice in McMurdo Sound. A 50 MHz antenna is used to measure total ice thickness and the thickness of the ice above the saline layer underneath the ice shelf. Snow density and morphology are measured in snow pits using standard glaciological methods and an infrared camera, as well as an ice corer for depths down to about 8 m. Snow stakes were used to measure the annual accumulation on land ice over a one year period. A dust layer is used to quantify the average accumulation over a 5 year period. Stake and GPR measurements suggest a high temporal and spatial variability in snow accumulation near Ross Island. A laser ranger on the skidoo and a small unmanned aircraft are used to determine the surface roughness of snow on land ice. For sea ice, the GPR system is used on one north-south and two east-west transects to measure snow thickness on ice. Ice drilling is used to determine total sea ice thickness along the transects. A helicopter EM bird is used to measure a grid of sea ice and ice shelf thickness across McMurdo Sound. Ground measurements validate the performance of the HEM bird in the presence of platelet ice. In 2011 more in-depth research was conducted into sea ice thickness. The sub-ice platelet layer thickness under sea ice was measured at regular intervals along two north-south oriented profiles and four east-west oriented profiles. Holes were drilled at regular intervals into sea ice at measurement sites about 5 km apart. At these holes sea ice thickness and snow depth on top of the sea ice were measured. In between the sites, sea ice thickness was measured using an electromagnetic induction device, and snow on sea ice was measured using a ground penetrating radar system. Ocean temperature and salinity were measured through holes in the sea ice along the ice edge of the McMurdo Ice Shelf within 3 km of the ice shelf front. The response of the sea ice to tidal height was measured at three locations using GPS stations. At these locations samples of sea ice were taken for geophysical, oceanographic and biological analysis. Water samples were also taken. Sea ice thickness (surface elevation and draft of sea ice) and surface reflectance were measured along ten north-south oriented profiles and eight east-west oriented profiles. This was performed by the HEM bird, and the ground measurements validate these results.
<urn:uuid:c2423c59-d798-449c-b0f4-c86beff79cc8>
3.59375
546
Academic Writing
Science & Tech.
43.586003
An instance of Category (~>) declares the arrow (~>) as a category.

The category with Haskell types as objects and Haskell functions as arrows. The category with categories as objects and adjunctions as arrows. The category of all monoids, with monoid morphisms as arrows.

Instances:
Category ~> => Category (Op ~>)
Category (Discrete n) => Category (Discrete (S n))
Category (Discrete Z)
Monoid m => Category (MonoidA m) -- a monoid as a category with one object
HasTerminalObject ~> => Category (Peano ~>)
(Category c1, Category c2) => Category (c1 :**: c2) -- the product category of categories c1 and c2
(Category c, Category d) => Category (Nat c d) -- the functor category D^C: objects of D^C are functors from C to D, arrows of D^C are natural transformations
(Dom m ~ ~>, Cod m ~ ~>, Category ~>, Functor m) => Category (Kleisli ~> m)
Category (Dialg f g)
(Category (Dom t), Category (Dom s)) => Category (t :/\: s)

Whenever objects are required at value level, they are represented by their identity arrows.
<urn:uuid:09c03a43-49db-42fb-af96-9800a1e1f6ce>
2.921875
292
Documentation
Software Dev.
36.161312
NASA is now taking the exploration of Mars to a whole new level via the Mars rover named Curiosity, which weighs a full ton and is a giant, scientifically armed robot. The prime purpose of the robotic beast will obviously be to detect new life, or the possibility of life, at the red planet. Also, it will detect whether life ever existed on the planet or whether that was just wishful thinking on our part. Curiosity is going to stay up for quite a while and I guess we will have to wait for the results to know whether or not we have any organic siblings. Till then let me give you a brief collection of trivia regarding the rover. The Seventh Slice through the Martian Atmosphere Curiosity is the seventh machine to slice through the atmosphere at Mars, at a speed of 13,000 mph. Attempts have been made by other countries as well but most of them were in vain. The Seven Minutes of Terror Interesting to know that, being the seventh machine to land on Mars, it took seven horrible minutes for the Mars rover Curiosity to land on the surface and give the earthlings a chance to see a touchdown on the surface of our neighbouring planet. It was Named by a Sixth Grader The name of Mars Science Laboratory, as Curiosity is formally known, was selected by NASA from a naming competition. The name was thus an original by a sixth grader. The Expense Overload on the Rover The rover was supposed to be finished by 2009 and should have landed by 2010. However, developmental delays interrupted the initial timeline and the cost of the rover jumped a billion dollars over the initial estimate. Phew - now that is a major technological investment. The Weight of the Rover You know how they idiomatically say that something weighs a ton; well, the Curiosity rover actually, in its whole entirety, weighs a ton! Isn't that massive! It is Slow Now that the rover has landed on the planet, it is said that during its whole stay at Mars, it will only travel about 12 miles, at a speed of 0.00073 mph! That is slow! Powered by Nuclear Energy Apparently NASA did not trust the sun too much, which is why the rover is powered by nuclear energy, a stock enough to last it as long as 14 years, although its estimated stay is only 23 months! The Computers on the Rover are Less Powerful than an iPhone 4S Well, some might say that the computers on the rover are very meager in comparison to Apple's smartphones, but that is quite misleading, since the computers on board the rover, which the engineers at NASA took years to perfect, are radiation-hardened. The rover has several of these handy in case one fails. Curiosity will Search for the Ingredients of Life, not Life Itself The rover isn't capable of finding life itself on the red planet, just indications of whether life does or does not, can or cannot, exist on the planet! The Lost Jetpack Lander It is interesting to note that the jetpack lander that helped the rover land on the ground is nowhere to be found! NASA maintains that this was exactly the plan, since they did not want the jetpack lander to pollute the environment for the rover! Sneaky, I say! Author Zehra Farooq
<urn:uuid:f7896e53-13c6-4a66-8724-e5df128b1355>
3.078125
702
Listicle
Science & Tech.
52.351402
I think the last one is the type I'm looking for; could someone show me how that might work with a number like ... You have been given two equivalent expressions. The first, called the "floor" or "integer part" (for positive numbers), gives the largest integer less than or equal to x. The second, called the "decimal part", is x minus its floor. If x = 209383404, then x/1000 = 209383.404. Its "floor" is 209383. Or, if x = 209383404, x/1000 = 209383.404; its "decimal part" is .404, and then 209383.404 - .404 = 209383. Hi ... Yeah, I know that it's essentially the same, but it goes back to my question about the floor function being an operation (i.e. I'm pretty sure it is also an operation, but I'm wondering if there's any significance in it being called a function rather than an operation). Also, I asked if my proposal was okay, because I think it fails the requirement of two operations ... hebby, this is how it would work: Hi ... I actually had a response typed out before I lost electricity (and thus internet connection) ... That's kind of my point: why is it called an operation instead of a function? Why floor function instead of floor operation? Similarly, why addition operation instead of addition function? Is there any particular reason for this terminology? I know the main difference between an operation and a function is that an operation maps to only one "dimension", whilst functions can be mapped to more. In any case, this is kinda heading off-topic ... hebby, depending on how much calculus you've learnt, you should be familiar with the floor function, and I think that would be the solution. Otherwise, well ... you'll have to think of something. Um, there's something wrong with your latex, but I'm guessing you're saying that f: R^2 -> R^2 ... and yeah, that is a function, and I did say that functions can be mapped to more than one dimension. It's operations that are limited to R^n -> R. I was just wondering (since my math is limited) why the distinction between operation and function, and why in particular some functions are distinguished as operations whilst others are not (even though they are also operations) ... such as the floor function ... A function is a combination of operations arranged in order to achieve a particular goal. Take the factorial as an example: n! = n x (n-1) x ... x 2 x 1. This is a combination of multiplications, arranged in order to obtain the factorial of a number. A function is not necessarily defined as f or g or any other letter. Do you understand better now? yup ... makes more sense ... but if that is the case, going back to hebby's original question, then I don't think we could use the floor function ... For your convenience ... hebby is asking how you can "cut off" the last three digits (on the right) of some integer, in two mathematical operations ... i.e. 123456 becomes 123. I don't know if this is possible in two operations. I can do it in three, but I can't find a way to do it in two. Anyway, what is the point of this kind of question? If one can find a way to do it somehow, one will do it this way until further improvement, don't you think? He won't just be looking for something that takes two operations ... Cuz from what you gave, I can still see how it's kind of "one dimensional" (the only dimension is ...). Since you disagree with Bacterius, how would you distinguish operations from functions? Just curious as to how different people's views are.
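For reference, the two-operation truncation under discussion (one division, one floor) looks like this in Python:

```python
# "Cut off" the last three digits with two operations:
# divide by 1000, then take the floor.
import math

x = 209383404
print(math.floor(x / 1000))  # 209383

# Python's floor-division operator fuses the same two steps:
print(x // 1000)             # 209383
```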
<urn:uuid:85dcb8a7-bd59-4fb7-a62b-f6c640908a76>
2.84375
809
Comment Section
Science & Tech.
75.074058
Momentum and Kinetic Energy Why is momentum the derivative of kinetic energy? This relation follows from the definitions: kinetic energy, KE, is defined as KE = 1/2 x M x V^2, and momentum, P, is defined as P = M x V, where M is the mass and V is the velocity. So: d(KE)/dV = 1/2 x d(M x V^2)/dV, and if the mass is a constant: d(KE)/dV = 1/2 x M x d(V^2)/dV = 1/2 x M x 2 x V = M x V = P. There is one further refinement that needs to be taken into consideration, but it does not change the overall result. Velocity is a vector quantity (that is, it has both magnitude and direction), so strictly speaking V = (Vx, Vy, Vz) and V^2 is, strictly speaking, the dot product of V with itself: V*V = Vx^2 + Vy^2 + Vz^2. Momentum is also a vector, with both magnitude and direction, so strictly speaking the derivation above gives the magnitude of P: |P| = M x (V*V)^(1/2). In introductory presentations of mechanics this refinement is usually ignored. Update: June 2012
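A quick symbolic check of the derivative above, sketched with Python's sympy library:

```python
# Verify d(KE)/dV = M*V = P for constant mass, symbolically.
import sympy as sp

M, V = sp.symbols("M V", positive=True)
KE = sp.Rational(1, 2) * M * V**2

print(sp.diff(KE, V))  # prints M*V, i.e. the momentum P
```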
<urn:uuid:193c6f23-f0e9-43f0-b67d-afd9bea34c5b>
3.953125
306
Knowledge Article
Science & Tech.
70.869674
So far, we have learned that energy is a measure of the capability of an object or system to do work, and we have also learned about the basic different forms of energy. But these concepts still don't quite do justice to the full concept of energy, for energy has a number of very special additional properties we have not fully discussed yet. If you think about these carefully, and don't take them for granted, you'll realize that they don't follow from simple intuition. Rather, these properties had to be discovered or proven somehow. We'll explore briefly how these properties were proven in the next section. In this section, we'll first review them. These properties are: Energy can be transferred from one object or system to another through the interaction of forces between the objects (unlike the condition of, say, being the color red, which is intrinsic to the object in question). Energy comes in multiple forms: kinetic, potential, thermal (heat), chemical, electromagnetic, and nuclear energy (as discussed in the previous section). In principle, energy can be converted from any one of these forms into any other, and vice versa, limited in practice only by the Second Law of Thermodynamics (we discuss the Second Law, that is "entropy", in a later section). Energy is always conserved; that is, it is never created anew or destroyed - this is called the First Law of Thermodynamics. Thus, when an object does work on another object, the energy can only be converted and/or transferred, but never lost or generated anew. In a sense, energy is like perfect money - transferred but always preserved, assuming no inflation or deflation! Although most people are aware of these facts nowadays and take them for granted, these are really amazing properties if you stop and think about them. How was anyone ever able to prove such properties? These properties go far beyond the intuitive concept of energy given at the beginning of this primer. You may find this hard to see now, because we generally take these ideas for granted. But for thousands of years, people didn't have a clearly defined concept of energy, and didn't know, for example, that there is a definition of "energy" which refers to a quantity that is always conserved. Moreover, even after kinetic energy and potential energy became understood, it still took people centuries to figure out that heat is just another form of energy. Before our present understanding of physics evolved, it was still a logical possibility that the Universe might have been constructed quite differently, such that energy, in the sense of power to modify the world, would not have been conserved and/or things in even everyday life might have been controlled by some kind of supernatural beings. We can now see easily that such a world would likely look very different from our own, because the basic properties of energy are actually responsible for "constraining" many aspects of our world: everything from the branching structures of trees to the way that our bodies and the planets move is strongly constrained by the properties of energy.
<urn:uuid:1a8c8a09-fefc-44fe-b4b2-0bf5feda3ca2>
3.53125
635
Knowledge Article
Science & Tech.
34.266209
The current climate-change driven acceleration in tidewater glacier melt in Greenland is another example of why calling anything "impossible" in complex systems is usually a bad idea: The abrupt acceleration of melting in Greenland has taken climate scientists by surprise. Tidewater glaciers, which discharge ice into the oceans as they break up in the process called calving, have doubled and tripled in speed all over Greenland. Ice shelves are breaking up, and summertime "glacial earthquakes" have been detected within the ice sheet. "The general thinking until very recently was that ice sheets don't react very quickly to climate," said Martin Truffer, a glaciologist at the University of Alaska at Fairbanks. "But that thinking is changing right now, because we're seeing things that people have thought are impossible." [via NY Times]
<urn:uuid:55b31f38-020f-4410-9fdb-6e998f227389>
2.953125
188
Truncated
Science & Tech.
29.310824
There is no sensible answer to this question. You can put any amount of charge on a blob of aluminum sitting in a vacuum, or surrounded by an ideal insulator. Why not? If you put an awful lot of electrons on a blob of aluminum sitting in a vacuum, the electrons will eventually start shooting off by thermionic emission, and most of the excess charge will be gone after, let's say, 1 day. If you put even more electrons, most of them will be gone after 1 millisecond. But there is no "maximum" really, just a gradual speed-up of the discharging. Even 1 excess electron will not be stable for eternity. If you subtract electrons instead of adding them, certainly nothing will happen. Well, I guess positively-charged atomic nuclei could fly off if the charge was significant enough. Again, this process does not let you say that a certain amount of charge is "the maximum possible"; it's just a process that happens more and more frequently as the charge increases. If you add or subtract an awful lot of electrons from a blob of aluminum surrounded by insulator, the insulator will eventually break down. If you have an ideal insulator that cannot break down, then nothing will happen no matter how many electrons are there. You seem to have the idea that all electrons must come from surface atoms, so if you take away every electron from every surface atom, then it will be impossible to take away any more charge. But if you think about it, that's sort of a weird idea, when there are still all those electrons inside the metal! In fact, the idea is not correct. The "surface" where a conductor can store charge is not infinitesimal, nor necessarily exactly one atom thick. It's actually a depth equal to the so-called Debye length. If you subtract loads of electrons -- every electron from every "surface" atom -- the Debye length will just increase, allowing you to scrape electrons out of atoms residing farther and farther from the surface.
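For a sense of the scale involved, the Debye length can be estimated from the standard screening formula lambda_D = sqrt(eps0 * kB * T / (n * e^2)). A minimal Python sketch, where the free-carrier density is an assumed order-of-magnitude value rather than a measured one:

    import math

    # Minimal sketch: estimate a Debye screening length from
    # lambda_D = sqrt(eps0 * kB * T / (n * e^2)).
    # The carrier density n below is an assumed, illustrative value.
    eps0 = 8.854e-12   # vacuum permittivity, F/m
    kB = 1.381e-23     # Boltzmann constant, J/K
    e = 1.602e-19      # elementary charge, C

    T = 300.0          # temperature, K
    n = 1e28           # assumed free-carrier density, m^-3

    debye_length = math.sqrt(eps0 * kB * T / (n * e * e))
    print(f"Debye length ~ {debye_length:.1e} m")   # ~1e-11 m at these values

At metal-like carrier densities the screening depth comes out far below a nanometer, which is why the charge-bearing "surface" is normally so thin.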
<urn:uuid:2f034c55-44c7-49b5-9bec-df9edb189b2b>
2.828125
415
Q&A Forum
Science & Tech.
49.031168
As often happens with new types of quizzes, players had lots of excellent feedback (including corrections) on how the quiz was presented. Since we plan to offer at least another dozen quizzes of this type over the next year or so, we want to make sure that all future "unnecessary" quizzes are unambiguous and easy to understand. Here is our current idea for how to define the quiz: Each choice contains a combination of DDL statements and PL/SQL blocks. A choice is correct if it does not contain unnecessary code. A piece of code is unnecessary if you can remove it from the choice without changing the result of running the remaining code in that choice. Rules for this quiz:
- You cannot add anything to the choice. You can only remove text, and there are limitations to what can be removed.
- PL/SQL is composed of delimiters, identifiers and literals. Removal of part of a delimiter or literal is not allowed. You can, however, remove an entire word (text separated by a delimiter or whitespace) from an identifier. Examples: you cannot remove "PLS_" from "PLS_INTEGER"; you cannot remove the single quotes from around a literal string; you can remove "ZONE" from "TIMESTAMP WITH TIME ZONE" (the result may be invalid code, but it would be a valid removal).
- You cannot remove whitespace or comments.
- If it starts as a PL/SQL block, it must end that way. You cannot, in other words, remove BEGIN and END; and leave in place some part of the statements in the block as individual SQL statements.
- A change in the resources needed to execute the choice (CPU, memory, etc.) does not, for this quiz, constitute a change in the choice. In other words, if the removal results in a choice that is slower or consumes more memory, but otherwise accomplishes the same work (inserts a row, displays text, etc.), then the choice does contain unnecessary code.

For example, this choice contains unnecessary code:

BEGIN
   NULL;
   DBMS_OUTPUT.PUT_LINE (1);
END;

After removing the NULL; statement, the block will do exactly the same thing it did before. But the following choice should be marked correct, since if you remove the "NULL" or ";", the block will no longer be valid.

BEGIN
   NULL;
END;

I believe these rules clarify all issues raised in the Commentary for the 1 March quiz. What do you think? Are there other scenarios you have in mind that would not be addressed by these rules?
<urn:uuid:a3de639b-1303-4d74-a684-3d96646401cd>
2.984375
536
Comment Section
Software Dev.
51.304396
I have, this year, grown some crimson-flowered nasturtiums in a pot and, needless to say, the white butterflies soon found them. Before long both large white (Pieris brassicae) and small white (Pieris rapae) caterpillars appeared, the large whites (below) being particularly obvious on the leaves and flowers. This morning nearly all these caterpillars had disappeared and a couple of wasps were spotted searching the plants for more. By the evening all I could find was a solitary small white (below) that had survived, perhaps, because of its more cryptic colouring. Caterpillars of the white butterfly family, the Pieridae, store toxic chemicals derived from their foodplants within their bodies, making them distasteful to some of their predators, mainly birds. That is why the large white larvae feel safe to feed openly on the foodplants and have a black and yellow pattern of colour that acts as an "I taste nasty" warning. While it may offer some protection against birds, it does not stop the wasps and various other creatures that eat large white and other caterpillars; indeed the bright colour may even help them find their prey. Since wasps and large whites have probably been around together for thousands of years, it makes one wonder about the survival-of-the-fittest dictum. Constant removal by wasps does not seem to have resulted in better adapted caterpillars, yet somehow they evolved to their current colour and shape. The last laugh, perhaps, will go to the nasturtiums, who have survived the onslaught and have plenty of summer time left to set seed.
<urn:uuid:71fd5f71-91f8-49cf-985e-131a336ebaa2>
3.15625
342
Personal Blog
Science & Tech.
39.228077
A Tail Color Polymorphism in Acris Tadpoles in Response to Differential Predation. Previous workers have noted that cricket frog tadpoles of the genus Acris have black tail tips. My initial collections of Acris crepitans tadpoles from various localities in Kansas revealed a polymorphism in tail color associated with habitat type: ponds primarily have black-tailed tadpoles, whereas lakes and creeks have mostly plain-tailed forms. Collections of potential predators from these localities showed that the black-tailed pond populations co-occur with a high density of the aeshnid dragonfly larva, Anax junius, and led to the hypothesis that the black tail functions as a deflection mechanism to divert the attack of the Anax larva to the tail of the tadpole and away from the more vulnerable head and body. Plain-tailed tadpoles are found primarily in lakes and creeks where fish are the major predators. Tail damage data from natural populations and data from predator-prey experiments support the hypothesis. Disruptive selection is most likely the mechanism responsible for maintenance of this polymorphism. Gene exchange occurs as adult frogs migrate from one habitat type to another, but selection on tadpoles by different predator regimes is habitat specific. SREL Reprint #0817 Caldwell, J.P. 1982. Disruptive selection: a tail color polymorphism in Acris tadpoles in response to differential predation. Canadian Journal of Zoology 60:2818-2827.
<urn:uuid:f50f9223-0109-45b7-b158-a66f9ed43713>
2.8125
340
Academic Writing
Science & Tech.
26.921859
Like all insects, ants have six legs. Each leg has three joints. The legs of the ant are very strong, so they can run very quickly. If a man could run as fast for his size as an ant can, he could run as fast as a racehorse. Most insects have three parts to their body and ants are no exception. These three parts are called the head, thorax and abdomen. The abdomen of the ant contains two stomachs. The ant stores food for itself in one stomach, while the second stomach holds food which is shared with other ants. Like all insects, the outside of their body is covered with a hard armour called the exoskeleton. Ants have two eyes that are called 'compound eyes'. This means that each eye is made up of many smaller eyes (like a fly or bee). Ants have antennae which are used not only for touch but also for their sense of smell. Their heads have a pair of large, strong jaws. The jaws open and shut sideways like a pair of scissors. Adult ants cannot chew or swallow solid food. Instead they swallow the juice which they squeeze from pieces of food. They throw away the dry part that is left over. An ant brain has about 250,000 brain cells. A human brain has 10,000 million, so a colony of 40,000 ants collectively has a brain of about the same size as a human's. Ants usually lose, or never develop, their wings. Therefore, unlike their wasp ancestors, most ants travel by walking. Some tend to develop literal paths, the tiny equivalent of deer trails, or create unseen paths using chemical hints (pheromones) left for others to smell. The more cooperative species of ants sometimes form chains to bridge gaps, whether over water, underground, or through spaces in arboreal paths. Among their reproductive members, most species of ant do retain wings beyond their mating flight; most females remove their own wings when returning to the ground to lay eggs, while the males almost invariably die after that maiden flight. Some ants are even capable of leaping. A particularly notable species is Jerdon's jumping ant (Harpegnathos saltator).
<urn:uuid:f8e5a09f-1c23-42b1-8978-51259b7b00a1>
3.5625
468
Knowledge Article
Science & Tech.
65.274
Associated article: Scripting with Java & Python Tags: Web Development JVM Languages Published source code accompanying the article by Boudewijn Rempt in which he shows how you can embed a standard language such as Python into a Java application. Also see PYCONSOL.ZIP. Scripting With Java and Python by Boudewijn Rempt

Example 1:

    class Console(Object):
        def __init__(self, adapter=None, adapterName=""):
            "@sig public Console(Object adapter, java.lang.String adapterName)"

Example 2:

    > jythonc --deep --package com.tryllian.pyconsole --jar ...
<urn:uuid:d5ac0fdb-f404-45c0-8605-243810ed9125>
2.71875
137
Truncated
Software Dev.
37.112703
Protons are found to be smaller in size Posted 13 July 2010 - 02:45 AM All atoms are made up of nuclei orbited by electrons. The nuclei, in turn, are made of neutrons and protons, which are themselves made of particles called quarks. For years the accepted value for the radius of a proton has been 0.8768 femtometers, where a femtometer equals one quadrillionth of a meter. The size of a proton is an essential value in equations that make up the 60-year-old theory of quantum electrodynamics, a cornerstone of the Standard Model of particle physics. The Standard Model describes how all forces, except gravity, affect subatomic particles. But the proton's current value is accurate only to plus or minus one percent—which isn't accurate enough for quantum electrodynamics, or QED, theory to work perfectly. So physicists have been searching for ways to refine the number. Smaller Proton Size Revealed by Lasers In a ten-year experiment, a team led by Randolf Pohl of the Max-Planck Institute of Quantum Optics in Garching, Germany, used a specialized particle accelerator to alter hydrogen atoms, which are each made of a single proton orbited by an electron. For each hydrogen atom, the team replaced the atom's electron with a particle called a muon, which is 200 times more massive than an electron. "Because the muon is so much heavier, it orbits very close to the proton, so it is sensitive to the proton's size," said team member Aldo Antognini, of the Paul Scherrer Institute in Switzerland. Read the complete article here. Posted 13 July 2010 - 05:51 AM Nature is what it is, and will do what it will do. We mostly observe, and only fundamentally interact. Not even the speed of light in a vacuum is "always" consistent, yet most lay persons treat this "usual constant" as a "law". Posted 13 July 2010 - 07:16 AM Posted 13 July 2010 - 08:03 AM That's what keeps you young Posted 13 July 2010 - 08:32 AM Remember when (the proverbial) they used to tell us the brain had an unlimited capacity to learn? *NOW* they're saying it ain't so! "ISN'T THAT NICE?" Posted 13 July 2010 - 09:22 AM Posted 14 July 2010 - 09:59 PM Posted 27 November 2010 - 12:32 AM That's just my opinion though. Nothing more. Music is a moral law. It gives soul to the universe, wings to the mind, flight to the imagination, and charm and gaiety to life and to everything. ~Plato Dreams are illustrations... from the book your soul is writing about you. ~Marsha Norman Posted 27 November 2010 - 07:22 PM Posted 28 November 2010 - 10:42 AM Protons are known to transform into neutrons through the process of electron capture (also called inverse beta decay). For free protons this process does not occur spontaneously but only when energy is supplied. Humm.. Electromagnetic.... One with or without the other... Let's see where it goes..
<urn:uuid:5f79c845-2b3e-45e4-8ac0-55f34f4beb6e>
3.625
710
Comment Section
Science & Tech.
66.070653
Have modern programming languages failed? From the point of view of learnability and maintainability, yes! What would a truly maintainable and learnable programming language look like? This is the second of a six-part series exploring the future of programming languages (read The World's Most Maintainable Programming Language: Part 1, The World's Most Maintainable Programming Language: Part 3, The World's Most Maintainable Programming Language: Part 4, The World's Most Maintainable Programming Language: Part 5, and The World's Most Maintainable Programming Language: Conclusion). The most important feature of a language is that it is completely consistent. There should be no inconsistency or nuance or shade of meaning. Inconsistency is the enemy of understanding. This is true when discussing possible multiple interpretations of a construct during compilation and analysis or when reading the source code of a program. Consistency is partly a notional problem. Consider languages that support object orientation but force the programmer to name the invocant of a method explicitly — there is tremendous potential for confusion! Do all method and attribute accesses have an implicit invocant? How unmaintainable the practice that raises such a question! To solve this problem at least, while respecting the principle of learnability and avoiding false cognates, a maintainable programming language should name all invocants after the name of their classes. That is, within a class named AlienInvader, all methods will automatically have access to the invocant through the symbol named alien_invader. (Consider how confusing it would be if the language allowed unfettered creativity. Would there be instances such as That helps consistency in the small, but what about throughout an application or a problem domain? Another potential point of inconsistency is in using different symbol names for the same types of items. For example, a database handle may be handle. Various parts of the code may refer to the same type of thing with different names — inconsistency that leads to misunderstandings. This is a similar but different point. Here the problem is the existence of separate pockets of jargon. When these pocket communities overlap, their jargon conflicts. (As well, the term "handle" is vastly inappropriate for any community and will not appear in the final To solve this problem, libraries will enforce the use of one particular identifier for each separate entity in the system. Obviously the library designer knows best about how to use the entities modeled by the library, having carefully considered all of the potential use cases (and taking into account the language's design principles), so there will be no clearer names than those provided. Some coders used to other, less maintainable systems may object on grounds of "creativity" and "expression", but an excess of consistency in service of maintainability is certainly no vice, and any good coder can tell stories of irredeemably creative symbol names providing no value to a system. The compiler might even go so far as to include a part-of-speech checker to ensure that method and function names are verb clauses, variable and object and entity names are nouns, and aggregate data structures have the proper number, case, and pluralization. (Typos are a significant source of errors.) External consistency is a problem with regard to specifications and implementations as well. 
Not only must maintainable programs be consistent within themselves, but they must be consistent with other programs. Even though many programs end up interoperating, where external consistency is obviously important, allowing even subtle linguistic and semantic drift in small pockets will only lead to difficulties in understanding. Many programming languages, even those with formal specifications, fall afoul of the problem where the specification is ambiguous or an implementation does not implement the specification appropriately. To alleviate this, all implementations must implement the specification appropriately and no implementation will be complete unless it produces the same output for the same file as another implementation. Put another way, no program will be complete and correct unless multiple implementations have compiled it to the same code. This suggests that the compiler should be a front-end to two or more separate compilers, ideally running on separate platforms. This need not extend the length of the compilation stage significantly if the tools take full advantage of threading and parallelization techniques, but by reducing ambiguity in the language many of the difficulties in parsing and optimization go away and it should represent a small investment for the sake of program correctness.
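As a purely hypothetical sketch of the invocant-naming proposal above (the class and the numbers are invented; Python is used only because it happens to allow any name for a method's first parameter, so the convention can be shown in runnable code):

    # Hypothetical sketch: the invocant is always named after its class,
    # rather than a free-form name such as "self" or "me".
    class AlienInvader:
        def __init__(alien_invader, hit_points):
            alien_invader.hit_points = hit_points

        def take_damage(alien_invader, amount):
            # Every method refers to the invocant by the same predictable symbol.
            alien_invader.hit_points -= amount
            return alien_invader.hit_points

    invader = AlienInvader(hit_points=10)
    print(invader.take_damage(3))   # prints 7

Whatever one thinks of the proposal, the sketch shows that the rule is mechanical enough for a compiler to enforce.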
<urn:uuid:e7e4adb5-d15a-4967-9f53-83700e1fce7d>
2.8125
941
Personal Blog
Software Dev.
23.264171
This group includes about 200 communication satellites ("comsats") scattered in synchronous orbits above the Earth's equator. At a distance of 6.6 Earth radii (42,000 km or 26,000 miles), these satellites make one orbit per day (one sidereal day of 23 h 56 min), and therefore as the Earth turns, they always stay above the same ground station. Comsats have become essential to the relaying of television broadcasts, long distance telephone connections and computer communications: if you are receiving this from the world-wide web (especially if you are outside the US) this document might well have been routed to you through one of them. NASA maintains several communication satellites as data links to other satellites, an arrangement found more economical than the use of tracking stations on the ground. In addition, networks of low-altitude communication satellites (e.g. "Iridium", a system of 66 spacecraft, plus spares) are being deployed for use by cellular telephones. Some beneficial satellites were already listed under different classifications: weather satellites, like those of the GOES series, and those which scan the Sun and the "solar wind" for activity affecting "space weather" inside the magnetosphere. Still another application is the 24-satellite network of the "Global Positioning System" (GPS), in circular orbits at distances of about 4.1 Earth radii (26,000 km or 16,000 miles). GPS satellites continually broadcast their precise locations, and these can be read by small, relatively inexpensive portable receivers. Using a built-in computer, these receivers then derive their own precise position on the ground, within 10-50 meters. Russia operates its own system, GLONASS, and European countries are planning a third one. Originally developed by the US Department of Defense (whose users derive from them even more precise positions), the GPS satellites are widely used by the public--by ships at sea, airplanes, hikers in the wilderness, even drivers trying to navigate large cities. GPS has played an enormous role in the second Gulf War, precisely guiding missiles and bombs to specific targets in Iraq. In the weekly magazine of the aerospace industry "Space News International," issue of 31 March 2003, Jeremy Singer and Simon Saradzhyan wrote that Iraq had anticipated this and set up transmitters to jam GPS signals and interfere with them. They reported that 6 such transmitters were located and destroyed, the last of them by a GPS-guided bomb. For a detailed article about this technology, see "Satellite-Guided Munitions" by Michael Puttré, p. 66-73 in "Scientific American," issue of February 2003. Author and Curator: Dr. David P. Stern Mail to Dr. Stern: stargaze("at" symbol)phy6.org. Last updated: 8 April 2003
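The 6.6-Earth-radii figure follows directly from Kepler's third law. A minimal Python check (the physical constants are standard textbook values, assumed here rather than taken from the page):

    import math

    # Minimal sketch: radius of a circular orbit with a one-sidereal-day period,
    # from Kepler's third law  r = (G*M*T^2 / (4*pi^2))**(1/3).
    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    M = 5.972e24         # mass of the Earth, kg
    T = 86164.0          # sidereal day (23 h 56 min 4 s), in seconds
    R_earth = 6.371e6    # mean Earth radius, m

    r = (G * M * T**2 / (4 * math.pi**2)) ** (1.0 / 3.0)
    print(f"orbit radius ~ {r/1000:.0f} km ~ {r/R_earth:.1f} Earth radii")
    # ~42,160 km, about 6.6 Earth radii, matching the figures in the text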
<urn:uuid:256bf5c4-50ad-4a67-b756-759fda07a7bb>
3.359375
738
Knowledge Article
Science & Tech.
47.678121
Pell's equation is any Diophantine equation of the form x² − ny² = 1, where n is a nonsquare integer and x and y are integers. Trivially, x = 1 and y = 0 always solve this equation. Lagrange proved that for any natural number n that is not a perfect square there are x and y > 0 that satisfy Pell's equation. Moreover, infinitely many such solutions of this equation exist. These solutions yield good rational approximations of the form x/y to the square root of n. The name of this equation arose from Leonhard Euler's mistakenly attributing its study to John Pell. Euler was aware of the work of Lord Brouncker, the first European mathematician to find a general solution of the equation, but apparently confused Brouncker with Pell. This equation was first studied extensively in India, starting with Brahmagupta, who developed the chakravala method to solve Pell's equation and other quadratic indeterminate equations in his Brahma Sphuta Siddhanta in 628, about a thousand years before Pell's time. His Brahma Sphuta Siddhanta was translated into Arabic in 773 and was subsequently translated into Latin in 1126. Bhaskara II in the 12th century and Narayana in the 14th century both found general solutions to Pell's equation and other quadratic indeterminate equations. Solutions to specific examples of the Pell equation, such as the Pell numbers arising from the equation with n = 2, had been known for much longer, since the time of Pythagoras in Greece and to a similar date in India. For a more detailed discussion of much of the material here, see Lenstra (2002) and Barbeau (2003). Pell's equations were studied as early as 400 BC in India and Greece. Mathematicians in both places were mainly interested in the equation because of its connection to the square root of two. Indeed, if x and y are integers satisfying this equation, then x/y is an approximation of √2. For example, Baudhayana discovered that x = 17, y = 12 and x = 577, y = 408 are two solutions to the Pell equation, and give very close approximations to the square root of two. Later, Archimedes used a similar equation to approximate the square root of 3, and found 1351/780. Around AD 250, Diophantus created a different form of the Pell equation, a²x² + c = y². He solved this equation for a = 1, and c = −1, 1, and 12, and also solved for a = 3 and c = 9. Brahmagupta created a general way to solve Pell's equation known as the chakravala method. Alkarkhi worked on similar problems to Diophantus, and Bháscara Achárya created a way to generate new solutions to Pell equations from one known solution. E. Strachey published an English translation of Bháscara's work in 1813. Let hi/ki denote the sequence of convergents to the continued fraction for √n. Then the pair (x1, y1) solving Pell's equation and minimizing x satisfies x1 = hi and y1 = ki for some i. This pair is called the fundamental solution. Thus, the fundamental solution may be found by performing the continued fraction expansion and testing each successive convergent until a solution to Pell's equation is found. Once the fundamental solution is found, all remaining solutions may be calculated algebraically by expanding xk + yk√n = (x1 + y1√n)^k. As an example, consider the instance of Pell's equation for n = 7; that is, x² − 7y² = 1.

h / k (convergent)    h² − 7k² (Pell-type approximation)
2 / 1                 −3
3 / 1                 +2
5 / 2                 −3
8 / 3                 +1

Therefore, the fundamental solution is formed by the pair (8, 3).
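The convergent-testing procedure described above is easy to mechanize. Here is a minimal Python sketch (written for illustration; it is not part of the original article) that walks the continued-fraction expansion of √n until the fundamental solution appears:

    import math

    def pell_fundamental(n):
        # Walk the continued-fraction convergents h/k of sqrt(n) until
        # h^2 - n*k^2 == 1, giving the fundamental solution (minimal x > 1).
        a0 = math.isqrt(n)
        m, d, a = 0, 1, a0
        h_prev, h = 1, a0          # convergent numerators
        k_prev, k = 0, 1           # convergent denominators
        while h * h - n * k * k != 1:
            m = d * a - m
            d = (n - m * m) // d
            a = (a0 + m) // d
            h, h_prev = a * h + h_prev, h
            k, k_prev = a * k + k_prev, k
        return h, k

    print(pell_fundamental(7))   # (8, 3), matching the table above
    print(pell_fundamental(2))   # (3, 2)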
Applying this rule repeatedly to the fundamental solution generates the infinite sequence of solutions (xi, yi). The Chebyshev polynomials satisfy a form of Pell's equation, with x² − 1 playing the role of n: Ti(x)² − (x² − 1)·Ui−1(x)² = 1. Thus, these polynomials can be generated by the standard technique for Pell equations of taking powers of a fundamental solution. It may further be observed that, if (xi, yi) are the solutions to any integer Pell equation, then xi = Ti(x1) and yi = y1·Ui−1(x1) (Barbeau, chapter 3). If (x1, y1) is a solution, then

x1    ny1
y1    x1

is a matrix of unit determinant. Products of such matrices take exactly the same form, and thus all such products yield solutions to Pell's equation. This can be understood in part to arise from the fact that successive convergents of a continued fraction share the same property: if hn−1/kn−1 and hn/kn are two successive convergents of a continued fraction, then the matrix

hn    hn−1
kn    kn−1

has determinant (−1)^n. Størmer's theorem applies Pell equations to find pairs of consecutive smooth numbers. As part of this theory, Størmer also investigated divisibility relations among solutions to Pell's equation; in particular, he showed that each solution other than the fundamental solution has a prime factor that does not divide n. As Lenstra (2002) describes, Pell's equation can also be used to solve Archimedes' cattle problem. An analogous equation, x² − ny² = −1, known as the negative Pell equation, has also been extensively studied and can be solved via a method of using continued fractions. However, unlike the Pell equation, it is not soluble for every nonsquare n, and no simple characterization of the soluble cases is known. A paper by Cremona and Odoni has demonstrated that the proportion of square-free values of d for which the negative Pell equation is soluble is at least about 40%. References: Lenstra, H. W. (2002), "Solving the Pell equation", Notices of the AMS, http://www.ams.org/notices/200202/fea-lenstra.pdf; Barbeau (2003); ISBN 0-387-90230-9 (originally published 1977).
<urn:uuid:63577e4e-646b-4804-b8f2-c31d974d1a3e>
3.828125
1,206
Knowledge Article
Science & Tech.
49.68087
When a solar flare erupted yesterday, scattering a billion atomic bombs’ worth of energy into space, NASA’s Solar Dynamics Observatory was staring at the sun. They recorded this video, which NASA released Friday morning. The footage shows the flare in three different wavelengths of light. Teal and gold correspond to ultraviolet light, while the blue channel shows only that wavelength. The flare itself affected Earth directly for about an hour, causing problems on some radio frequencies, but the larger impacts will come from a subsequent wave of charged solar particles called a coronal mass ejection, or CME. Researchers think the CME will pummel Earth’s natural magnetic field Saturday through Sunday, exposing power grids and satellites to possible disruptions. The bright side, however, will be bright: Solar storms trigger dazzling northern lights. For this weekend’s CME, heliophysicist Alex Young of NASA Goddard Space Flight Center hopes the auroras will reach as far south as Washington, D.C., though it’s impossible to know for certain until moments before the CME reaches Earth. A biochemist’s homemade snowflake grower brings to life the fragile ambiance of an electronic song in a new music video. Called “Cascades,” by U.K. artist Ryan Teague, the video took months of planning, four days of shooting and roughly two terabytes of photos to animate the growth of hard-to-create ice crystals. “The dancing, contorting trees you see at the beginning of the video are ice structures — most no more than a fraction of a millimetre across — which were grown on the tip of an electrically charged, motorized needle,” said video director and producer Craig Ward in a press release. Now this is something you don’t see every day: actual video footage of a binary star system exploding into space. This isn’t to be confused with a supernova, which is a star that has collapsed in on itself causing it to completely self-destruct — rather, a nova is a cataclysmic nuclear explosion that happens in a binary system when one star sucks away too much hydrogen from its partner. A white dwarf can only hold so much hydrogen before it reaches critical mass and completely explodes, thereby ejecting all its excess mass into space. The Aurora Borealis is beautiful from the ground, but what does it look like from the air? A team of atmospheric physicists sent 30 balloons with HD cameras attached up to find out. “The project had three main goals,” says Ben Longmier, plasma physicist and rocket scientist at Ad Astra Rocket Company and lecturer in physics at the University of Houston. “Firstly, we were looking to answer some questions about the Aurora, to understand the physics and science around the Aurora and learn how to better predict it. Secondly, we wanted to develop new technology to enable us to answer these questions, this included developing HD imaging of the Aurora and new plasma instruments to stabilize the payloads for imaging. We also needed to develop a way to control the balloon buoyancy. “Thirdly, education outreach, specifically highlighting the connection to science, technology, engineering and maths. The goal was to motivate students of all ages and to inspire them to understand that life in science doesn’t have to be mundane or boring and it can be cutting edge and take you to very unique environments.” A few millionths of a second is all it takes for electrons to traverse the world’s most powerful X-ray laser, but this new time-lapse video (above) makes the half-mile trip in 37 seconds. 
The Linac Coherent Light Source, or LCLS, uses magnets to accelerate pulses of electrons to within 99.9999999 percent of the speed of light. A railroad of other magnets then wiggles the subatomic particles and bleeds off their potent energy as X-ray photons (see video below). Physicists can harness the resulting X-ray beam as a strobe light to make stop-motion movies of atoms and molecules in motion. The beam is also powerful enough to obliterate samples into hot, dense matter — a phase typically found inside the cores of stars and gigantic planets.
<urn:uuid:2b626079-f340-48de-a6b2-60e41f73f7de>
2.890625
883
Content Listing
Science & Tech.
43.703845
The end of the Shuttle program does not mean the end of space exploration for the United States. NASA is continuing to go places where humans can't yet travel by sending robotic missions. I'm very pro-robotic missions as they're much cheaper, more practical, and easier than human-crewed spaceflight and allow us to go places where it's extraordinarily hazardous to send humans. While it's pretty to think of human explorers walking on Mars or seeing Jupiter rise from the landscape of Ganymede, I think our notions of exploration are still too fantastical for our technology. Space exploration by humans is a daunting task. From a practical standpoint it is ridiculous to even consider leaving the planet—everything we need we must take with us: food, air, water, shelter—so it's best to let our machines (which have far fewer needs than our fragile bodies) make the trip. On the other hand, of course, all human voyages since the birth of our species have been impractical: our first travels out of Africa, the pre-Columbian migration to the Americas, the exploration of Antarctica and the moon shot all required extraordinary courage, skill, and perseverance. There are several challenges when it comes to getting to Mars, and while I don't think they're insurmountable, they do make a touchdown on the surface out of our reach for right now. A voyage to the outer planets would be even more challenging in terms of time and complexity. I do believe we'll make it one day, if not in my lifetime. We also have a great many problems here on Earth that need addressing, and robotic missions spare us the resources (would that we used them) to work out the problems humans have living and working on Earth, let alone in outer space. But I digress. Until we've overcome the difficulties of exploring space in person, we can send machines to do the work. Which is exactly what NASA will be focused on in the next few years.
Mercury — MESSENGER
Venus — Venus Express
Moon — Lunar Reconnaissance Orbiter, Chang'e 2
Mars — Opportunity rover, Mars Reconnaissance Orbiter, Mars Express, Mars Odyssey
Asteroids — Dawn (Vesta and Ceres)
Saturn — Cassini
Pluto/Kuiper Belt — New Horizons
Comets — Rosetta
Beyond — Voyagers 1 and 2
As a side note, Voyager 1 is the most distant man-made object in history, and the fastest space probe, with a velocity of 38,400 mph relative to the sun. Juno is the next mission to Jupiter. The spacecraft will carry 7 instruments to examine the formation and evolution of Jupiter. Not only will this mission give insight into the formation of our own solar system, it will also shed light on the numerous extrasolar planets that are thought to be analogous to Jupiter and the other gas giants of our own solar system. Juno will launch between August 5 and August 26, 2011 and arrive at Jupiter in 2016. Due to advances in solar panel technology, Juno will also be the first mission to the outer planets to use solar power rather than a radioisotope thermoelectric generator. Each of Juno's three solar panels is more than 30 feet long. Curiosity (also known as the Mars Science Laboratory), on the other hand, will use an RTG, like the Viking landers before it. This will allow the Mini Cooper-sized rover to operate during any season and in a variety of weather. As I mentioned above, MSL is the size of a Mini Cooper automobile. It's by far the largest rover ever to be sent to Mars. Curiosity's enormous size will allow it to carry the most robust suite of instruments and experiments ever brought to Mars. 
NASA has assigned Curiosity the tasks of determining whether life ever arose on Mars, characterizing the climate and geology (areology) of Mars, and preparing for human exploration. Curiosity launches between November 25 and December 18, with a Martian landing in August of 2012. Curiosity will weigh nearly a full short ton, and, in a scheme illustrative of the problems of a Mars landing, the Jet Propulsion Laboratory plans to have the rover lowered out of the sky by a crane. If all goes well, Curiosity is expected to explore the area around Gale crater for over two (Earth) years. These two missions highlight the future of space exploration, both human and robotic, and point towards a promising future of exploration in our solar system and beyond.
<urn:uuid:29718e0f-4cc0-4eb7-bed1-3a026b6d8c7c>
2.796875
959
Personal Blog
Science & Tech.
42.536859
A Tour of the Cryosphere Entry ID: SVS_CRYOSPHERE Abstract: The cryosphere consists of those parts of the Earth's surface where water is found in solid form, including areas of snow, sea ice, glaciers, permafrost, ice sheets, and icebergs. In these regions, surface temperatures remain below freezing for a portion of each year. Since ice and snow exist relatively close to their melting point, they frequently change from solid to liquid and back again due to fluctuations in surface temperature. Although direct measurements of the cryosphere can be difficult to obtain due to the remote locations of many of these areas, using satellite observations scientists monitor changes in the global and regional climate by observing how regions of the Earth's cryosphere shrink and expand. This animation portrays fluctuations in the cryosphere through observations collected from a variety of satellite-based sensors. The animation begins in Antarctica, showing some unique features of the Antarctic landscape found nowhere else on earth. Ice shelves, ice streams, glaciers, and the formation of massive icebergs can be seen clearly in the flyover of the Landsat Image Mosaic of Antarctica. A time series shows the movement of iceberg B15A, an iceberg 295 kilometers in length which broke off of the Ross Ice Shelf in 2000. Moving farther along the coastline, a time series of the Larsen ice shelf shows the collapse of over 3,200 square kilometers of ice since January 2002. As we depart from the Antarctic, we see the seasonal change of sea ice and how it nearly doubles the apparent area of the continent during the winter. From Antarctica, the animation travels over South America showing glacier locations on this mostly tropical continent. We then move further north to observe daily changes in snow cover over the North American continent. The clouds show winter storms moving across the United States and Canada, leaving trails of snow cover behind. In a close-up view of the western US, we compare the difference in land cover between two years: 2003, when the region received a normal amount of snow, and 2002, when little snow accumulated. The difference in the surrounding vegetation due to the lack of spring melt water from the mountain snow pack is evident. As the animation moves from the western US to the Arctic region, the areas affected by permafrost become visible. As time marches forward from March to September, the daily snow and sea ice recede and reveal the vast areas of permafrost surrounding the Arctic Ocean. The animation shows a one-year cycle of Arctic sea ice followed by the mean September minimum sea ice for each year from 1979 through 2008. The superimposed graph of the area of Arctic sea ice at this minimum clearly shows the dramatic decrease in Arctic sea ice over the last few years. While moving from the Arctic to Greenland, the animation shows the constant motion of the Arctic polar ice using daily measures of sea ice activity. Sea ice flows from the Arctic into Baffin Bay as the seasonal ice expands southward. As we draw close to the Greenland coast, the animation shows the recent changes in the Jakobshavn glacier. Although Jakobshavn receded only slightly from 1964 to 2001, the animation shows significant recession from 2001 through 2009. As the animation pulls out from Jakobshavn, the effect of the increased flow rate of Greenland coastal glaciers is shown by the thinning ice shelf regions near the Greenland coast. 
This animation shows a wealth of data collected from satellite observations of the cryosphere and the impact that recent cryospheric changes are making on our planet. [Summary provided by the NASA Scientific Visualization Studio.]
<urn:uuid:92b2d0fd-3989-4fb7-b125-a11f1c53e604>
3.59375
1,200
Knowledge Article
Science & Tech.
46.058986
Open source on the web Apache runs the internet. For that matter, it's been running the internet ever since it's been around. Apache has been termed the killer application for the Internet. The leading HTTP server on the web, Apache Web Server currently hosts about 67% of websites according to a survey carried out by http://www.netcraft.com. Apache is open source, with versions running on multiple platforms including but not limited to Linux, Windows, UNIX, NetWare and BSD. Apache doesn't make it to the top of the charts without reason. It includes essential but powerful features like authentication, scripting, proxy and logging options. Using Apache, people may host multiple sites on a single machine and have the power to password protect the pages. Apache is also highly configurable and extensible for third-party customization and modules. It is also interesting to know how the software got its name. The name actually comes from 'A Patchy Server', since the software was created in different patches at different stages. For further information, please visit http://httpd.apache.org/. If there ever was a language that ruled the web, and could go on ruling it for as long as the web exists, it would be PHP. To say that PHP is now a major web standard is an understatement. PHP is a prevalent, general-purpose scripting language that is used for web development and can be embedded into HTML. PHP was originally started in 1994 by Rasmus Lerdorf as a way to post his résumé and collect viewing statistics, and was called Personal Home Page Tools. It was rewritten by two Israeli developers, Zeev Suraski and Andi Gutmans, and renamed PHP: Hypertext Preprocessor. PHP is popular as a server-side scripting language and enables experienced developers to easily begin creating dynamic web content applications. PHP also enables easy interaction with the most common databases, such as MySQL, Oracle, PostgreSQL, DB2, and many others. See http://www.php.net/ for more information. More on Open Source in Software Industry Open Source Licenses Open source software is distributed under a license, just like regular proprietary software is. An open source license is a license with a difference: instead of keeping or expanding the software developer's rights over the product, it basically gives them away. Open Source in Web Development Apache runs the internet. For that matter, it's been running the internet ever since it's been around. Apache has been termed the killer application for the Internet. Open Source Portals The needs of the people can only be understood by the people, and only the people can provide feasible solutions. However, the solutions need to be updated from time to time, or new ones need to be thought of. Also, there should be a common forum on which all the problems and solutions can be posted. In our OSS myth-busting spree, we noticed one of the chief misconceptions people have about open source: that companies in this space are all run by volunteers and there isn't much of a cash flow there. People believe that OSS developers are just passionate hobbyists and nothing more than that. There is no doubt about the fact that open source has taken the world by storm. Open Source is spearheading the revolution to bring about a change in the world order, where the environment is free, conducive and constructive.
<urn:uuid:7e1d231d-a751-4f94-b441-78665717ede5>
3.15625
712
Knowledge Article
Software Dev.
46.978036
The Hamming distance (HD) between two strings of equal length is the number of positions at which the corresponding characters differ; for example, the HD between "gold" and "wolf" is 2. The minimum Hamming distance (MHD) among k strings of the same length selected out of N strings is the minimum of the HDs over all pairs among the k selected strings. My question is: if I randomly select k binary numbers of length n out of the N = 2^n possible numbers, what is the expected value of the minimum Hamming distance between the k selected numbers? I need to find a general formula that gives the expected value of the MHD from the values of k and N. Below is a table that shows, for N = 8 and every k, each possible MHD value and its occurrence frequency (OF) over all C(N, k) selections:

MHD    k=2   k=3   k=4   k=5   k=6   k=7   k=8
 1      12    48    68    56    28     8     1
 2      12     8     2
 3       4

Below is another table for N = 16:

MHD    k=2   k=3    k=4    k=5    k=6    k=7    k=8    k=9  ...
 1      32   352   1592   4240   7952  11424  12868  11440
 2      48   208    228    128     56     16      2
 3      32
 4       8

Appreciate any help, regards.
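Even without a closed form, the expectation is easy to estimate by simulation. A minimal Python sketch (the function names and trial count are illustrative choices, not from the post):

    import random
    from itertools import combinations

    def min_hamming_distance(nums):
        # Minimum pairwise Hamming distance: XOR each pair, count set bits.
        return min(bin(a ^ b).count("1") for a, b in combinations(nums, 2))

    def expected_mhd(n, k, trials=100_000):
        # Monte Carlo estimate of E[MHD] for k numbers drawn without
        # replacement from the N = 2**n binary strings of length n.
        total = 0
        for _ in range(trials):
            total += min_hamming_distance(random.sample(range(2 ** n), k))
        return total / trials

    print(expected_mhd(3, 2))   # should approach (12*1 + 12*2 + 4*3)/28 = 1.714...

The exact value in the last comment comes from the N = 8, k = 2 column of the first table, which makes a useful sanity check before trusting estimates at larger n and k.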
<urn:uuid:eb7941e9-ee3d-4f76-92c8-63f05349cb5b>
3.328125
291
Q&A Forum
Science & Tech.
105.124726
Copyright © University of Cambridge. All rights reserved. Take a 25-30 cm piece of string and knot it to make a loop. (Or you could use a rubber band). Hook the loop around three fingers and stretch it tight to make a triangle. Move your fingers around to change the triangle. Draw the triangles you make, then sort them into groups.
<urn:uuid:ace4c3fb-a66c-4be2-b5f3-33e8910076a0>
2.75
75
Tutorial
Science & Tech.
80.104754
29 April 2011 Posted in DNA Day Dr. Walters' research focuses on human impacts in the marine environment. She is interested in both pure ecology questions and goal-based conservation issues for a wide range of marine and estuarine habitats in the Caribbean and the southeastern US, especially the Indian River Lagoon system (IRL) and the Florida Keys. In the IRL, her program focuses on understanding interactions among organisms on intertidal oyster reefs (including invasive barnacles and mussels), as well as looking at the impacts of recreational boat wakes on the recent declines of these reefs. They are collaborating with The Nature Conservancy on community-based restoration of this critical habitat. Other on-going research in the IRL includes studies on mangroves and salt marsh plants, boat propeller scar impacts on seagrass beds, and dispersal and allelopathic impacts of invasive Brazilian pepper on native flora. Additional research in her lab on invasive species has targeted dispersal of one of the world's 100 worst invasive species, Caulerpa taxifolia, via e-commerce and retail shops. Outreach to the aquarium industry is currently underway with colleagues from CA Sea Grant. In the Florida Keys, Bahamas and Virgin Islands, she has been collaborating with many scientists to better understand how increases in abundances of certain species of macroalgae significantly reduce recruitment and survival of hard corals and how the return of the long-spined sea urchin Diadema antillarum may change this pattern.
<urn:uuid:81ec3e9a-e583-494c-9783-f9e585064306>
2.953125
310
Nonfiction Writing
Science & Tech.
22.362283
C# interview question: What is the difference between Semaphore and SemaphoreSlim? When we use lock/monitor for concurrency management, it allows just 1 thread to pass at a time. Using Semaphore and SemaphoreSlim we can allow more than 1 thread to pass through the locked area. SemaphoreSlim is more lightweight and is used for requests coming from within the process, while Semaphore can also handle requests coming from external processes. See the following video on concurrent generic collections in C#: - Click to get c# interview questions and answers Get more Most asked c# interview questions from author's blog MVC Interview questions and answers Article WCF Interview questions videos C# Interview Questions & Answers Article C# design pattern (UNIT of Work Design Pattern) C# design pattern interview questions – What is Dependency injection ? C# interview questions and answers: - What is the difference between "==" and .Equals()? .NET INTERVIEW QUESTIONS & ANSWERS ARTICLE WPF INTERVIEW QUESTIONS & ANSWERS ARTICLE
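The same idea exists in most threading libraries. Below is a minimal sketch in Python rather than C# (the concept carries over directly; the count of 3 and the sleep are illustrative choices): a semaphore initialized to 3 admits three threads into the section at once, where a lock would admit only one.

    import threading
    import time

    # Minimal sketch: a semaphore lets up to 3 threads into the section at once;
    # a plain lock would let only 1 through at a time.
    semaphore = threading.Semaphore(3)

    def worker(worker_id):
        with semaphore:                  # blocks once 3 threads are inside
            print(f"worker {worker_id} entered")
            time.sleep(0.1)              # simulate work inside the section
            print(f"worker {worker_id} leaving")

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(6)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

Note that Python's threading.Semaphore is in-process only; the cross-process behavior of the .NET Semaphore class has no direct equivalent in this sketch.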
<urn:uuid:ba60b967-4d2c-4d70-8277-125ad62944bb>
2.703125
249
Content Listing
Software Dev.
43.136419
quax writes "In the wake of the Fukushima disaster the nuclear industry again faces massive opposition. Germany even decided to abandon nuclear energy altogether, and the future of the industry is under a cloud of uncertainty in Japan. But one thing seems to be here to stay for a very, very long time: radioactive waste with half-lives measured in thousands of years. But there is a technology under development in Belgium that could change all this: a sub-critical reactor design, driven by a particle accelerator, can transmute the nuclear waste into something that goes away in about two hundred years. Could this lead to a revival of the nuclear industry and the reprocessing of spent reactor fuel?"
<urn:uuid:7cc05a84-6d00-4162-aab0-51433851c99c>
2.6875
137
Comment Section
Science & Tech.
33.015
This blog is the modern version of a field journal, a place for reports on the daily progress of scientific expeditions — adventures, misadventures, discoveries. As with the expeditions themselves, you never know what you will find. Posts published by John Vucetich As the emergence of snow fleas signals the end of winter, scientists gather up their findings on wolves and moose and prepare to leave Isle Royale. Scientists look for female wolves among the Chippewa Harbor Pack, but the absence of mating behaviors suggests that the four wolves traveling together are all male. After enjoying a relatively mild winter so far, the moose of Isle Royale National Park struggle to plow through 10 inches of new snow in search of food. Scientists follow the mating behavior of two Isle Royale wolves — a promising sign for the future of the island’s wolf population. The wolves in Isle Royale National Park are entering a period when old moose will be rare, making hunting more difficult. Scientists follow the only surviving wolf pack at Isle Royale National Park as it pursues a cow moose and her frantic calf. When a wolf comes back to an unattended moose carcass, black wings flap in every direction up into the trees and above the forest canopy. In a sign of decreased moose populations on Isle Royale, balsam fir trees are rising from their moose-induced stupor. With patience, collecting what moose and wolves leave behind can produce gratifying results. Tracking wolves can lead to surprising observations, but getting the full picture means chasing after wolf scat.
<urn:uuid:91f39371-0aa7-469a-94bf-5ad1398d7218>
3.265625
331
Content Listing
Science & Tech.
41.813922
Sci. STKE, 14 February 2006 PHOTOSYNTHESIS Red Light Signals Repair In plants and photosynthesizing cyanobacteria, light is not only essential for photosynthesis but also damages the photosystem II (PSII) reaction centers. A repair cycle replenishes the PSII with newly synthesized subunits, and this process appears to require movement of the PSII from the grana (stacks of thylakoid membranes) to the stroma lamellae (membranous connections between the grana). Sarcina et al. used the cyanobacterium Synechococcus sp PCC7942 as a model system to study the mobility of PSII in response to different wavelengths of light. Chlorophyll is naturally fluorescent, with 80% of the fluorescence from PSII and 20% from PSI, so the mobility of PSII was monitored using fluorescence recovery after photobleaching (FRAP) of the chlorophyll. Exposure of the cells to red light stimulated the movement of chlorophyll, which under no light, blue light, or green light conditions was immobile. The movement in response to red light was dose dependent, with a 3-second exposure triggering movement of 20% of the chlorophyll and a 15-second exposure triggering a maximum movement of 60% of the chlorophyll. When entire cells were exposed to red light, the fluorescence became concentrated in discrete areas. Both red light and blue light cause photodamage to PSII; however, red light appears to engage the repair mechanism more quickly than does blue light. PSII damage was monitored by measuring oxygen production in response to blue or red light in the presence or absence of lincomycin, which inhibits protein synthesis and blocks the repair process. Blue light-treated cells exhibited sustained oxygen production (no loss in production for the first 20 minutes after exposure) in the absence of lincomycin, indicating rapid initiation of the repair process. Red light-treated cells showed loss of oxygen production within the first 10 minutes, and the repair cycle was engaged after 20 minutes. Thus, red light appears to produce a specific signal that alters PSII mobility, allowing it to effectively engage the repair cycle and maintain photosynthesis under conditions that cause photodamage. Exactly how the red light is perceived and how it stimulates PSII mobility remain to be determined. M. Sarcina, N. Bouzovitis, C. W. Mullineaux, Mobilization of photosystem II induced by intense red light in the cyanobacterium Synechococcus sp PCC7942. Plant Cell 18, 457-464 (2006). [Abstract] [Full Text] Citation: Red Light Signals Repair. Sci. STKE 2006, tw57 (2006).
<urn:uuid:7d45ac41-417d-4125-8f35-5d0e3434dc98>
3.078125
611
Academic Writing
Science & Tech.
42.116842
Nonclustered Index Structures Nonclustered indexes have the same B-tree structure as clustered indexes, except for the following significant differences:
- The data rows of the underlying table are not sorted and stored in order based on their nonclustered keys.
- The leaf layer of a nonclustered index is made up of index pages instead of data pages.
Nonclustered indexes can be defined on a table or view with a clustered index or a heap. Each index row in the nonclustered index contains the nonclustered key value and a row locator. This locator points to the data row in the clustered index or heap having the key value. The row locators in nonclustered index rows are either a pointer to a row or a clustered index key for a row, as described in the following:
- If the table is a heap, which means it does not have a clustered index, the row locator is a pointer to the row. The pointer is built from the file identifier (ID), page number, and number of the row on the page. The whole pointer is known as a Row ID (RID).
- If the table has a clustered index, or the index is on an indexed view, the row locator is the clustered index key for the row. If the clustered index is not a unique index, SQL Server 2005 makes any duplicate keys unique by adding an internally generated value called a uniqueifier. This four-byte value is not visible to users. It is only added when required to make the clustered key unique for use in nonclustered indexes. SQL Server retrieves the data row by searching the clustered index using the clustered index key stored in the leaf row of the nonclustered index.
Nonclustered indexes have one row in sys.partitions with index_id > 0 for each partition used by the index. By default, a nonclustered index has a single partition. When a nonclustered index has multiple partitions, each partition has a B-tree structure that contains the index rows for that specific partition. For example, if a nonclustered index has four partitions, there are four B-tree structures, with one in each partition. Depending on the data types in the nonclustered index, each nonclustered index structure will have one or more allocation units in which to store and manage the data for a specific partition. At a minimum, each nonclustered index will have one IN_ROW_DATA allocation unit per partition that stores the index B-tree pages. The nonclustered index will also have one LOB_DATA allocation unit per partition if it contains large object (LOB) columns. Additionally, it will have one ROW_OVERFLOW_DATA allocation unit per partition if it contains variable length columns that exceed the 8,060 byte row size limit. For more information about allocation units, see Table and Index Organization. The page collections for the B-tree are anchored by root_page pointers in the sys.system_internals_allocation_units system view.
Note: The sys.system_internals_allocation_units system view is for internal use only and is subject to change. Compatibility is not guaranteed.
The following illustration shows the structure of a nonclustered index in a single partition. In SQL Server 2005, the functionality of nonclustered indexes can be extended by adding included columns, called nonkey columns, to the leaf level of the index. While the key columns are stored at all levels of the nonclustered index, nonkey columns are stored only at the leaf level. For more information, see Index with Included Columns.
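As a rough mental model only (invented for illustration; the real structures are on-disk B-trees, not dictionaries), a nonclustered index over a heap behaves like a sorted map from key values to RIDs:

    # Toy illustration: leaf entries of a nonclustered index hold the key plus
    # a row locator (RID) pointing back into the heap, not the row data itself.
    # A RID here is modeled as (file_id, page_number, slot_number).
    heap = {
        (1, 1, 0): {"id": 17, "name": "widget"},
        (1, 1, 1): {"id": 42, "name": "gadget"},
        (1, 2, 0): {"id": 9,  "name": "gizmo"},
    }

    # Nonclustered index on "name": key -> RID, no row data at the leaf level.
    name_index = {
        "gadget": (1, 1, 1),
        "gizmo":  (1, 2, 0),
        "widget": (1, 1, 0),
    }

    def lookup_by_name(name):
        rid = name_index[name]   # seek in the index leaf level
        return heap[rid]         # follow the row locator into the heap

    print(lookup_by_name("gadget"))   # {'id': 42, 'name': 'gadget'}

If the base table had a clustered index instead of being a heap, the values in name_index would be clustered key values rather than RIDs, and the second step would be a clustered-index seek.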
<urn:uuid:258426dc-a75c-4eb4-ac7e-31689881e537>
3.359375
770
Documentation
Software Dev.
38.523675
Between 1970 and 2008, the sparrow population of England declined by 71%. Dr. Julia Schroeder of Sheffield University has published an article in PLoS ONE on a study she undertook on the island of Lundy. Her conclusion is that noise pollution is responsible for the decline. She postulates that the noise prevents the parents from hearing their brood's cries and that they therefore feed them less, resulting in weak offspring. In Belgium the sparrow population, after a certain decline between 1970 and 1990, stabilized, and local ornithologists think that the decline was due to a variety of causes, noise pollution being a possible one of them. The steep and continued decline in England is shocking and surprising to me, since when I was in grade school, the sparrow was touted as the species that had most effectively adapted to the urban environment in England. We were taught that while the fish disappeared from the Thames and most birds disappeared from the London skies, even at the nadir of the London environment during the killer smogs of 1952-53, the sparrow continued to thrive.
<urn:uuid:88789ee7-93d7-4c16-ba4d-003117118428>
2.859375
219
Personal Blog
Science & Tech.
46.173936
However, what sets our protocol apart from automated or radioactive ones is the use of biotin. Biotin is a compound that we incorporate into the primer (the short DNA used to initiate elongation of the replicating strands), and the incorporated biotin can be detected through a streptavidin-alkaline-phosphatase wash. The biotin binds to the DNA fragments, while the streptavidin-alkaline-phosphatase conjugate latches onto the biotin. Using X-ray film, the DNA fragments can be captured, because the alkaline phosphatase cleaves phosphate groups from a substrate in a reaction that emits light. This particular method is called chemiluminescent detection. A hurdle we have faced with this protocol is keeping the biotin from degrading at the high temperatures necessary to denature the DNA so that biotin-labeled primers can attach to the fragments. We've slightly altered the protocol so that the biotin survives denaturation.
<urn:uuid:878d57e7-5401-4db8-8f76-ca31366e0466>
3.078125
195
Knowledge Article
Science & Tech.
29.848077
The eruption of Iceland's Eyjafjallajökull volcano in 2010 turned much of the northern hemisphere into an ash-strewn no-fly zone. But Eyjafjallajökull was just the start. Katla, an Icelandic volcano 10 times bigger, has begun to swell and grumble. Two more giants, Hekla and Laki, could erupt without warning. Iceland is a ticking time bomb: when it blows, the consequences will be global. Meet scientists trying to understand those consequences for air travel, for the global food supply, and for Earth's climate. Could we be plunged into years of cold and famine? What can we do to prepare for the coming disaster?
<urn:uuid:d96945f9-8477-4d90-b9bc-460571a799c8>
3.296875
148
Truncated
Science & Tech.
58.958839
Video: 17.7 Foot Python Breaks Size Records
CREDIT: University of Florida photo by Kristen Grace/Florida Museum of Natural History
A 17.7-foot Burmese python found in the Florida Everglades has been recorded as the biggest snake ever found in the state, scientists say. The snake weighed 164 pounds and carried 87 eggs in its oviducts, also a state record. The discovery indicates that these snakes are able to survive, and survive for a very long time, in the Everglades, where they face few natural predators. Such snakes could become a major predator in the Everglades, according to researchers.
<urn:uuid:4d939aa8-9409-4c5c-9031-9b413c345ce4>
2.84375
133
Truncated
Science & Tech.
44.876699
Clouds, Fire, and Ice in the Cascades
On September 19, 2012, the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) on NASA's Terra satellite collected these unique views of the Three Sisters and Broken Top volcanoes near Bend, Oregon. At the time, the Pole Creek fire blazed nearby in Deschutes National Forest.
ASTER combines 14 spectral bands in infrared, red, and green wavelengths of light to make false-color images. In the top image, vegetated areas appear bright red; snow and ice look white; and clouds are a wispier off-white. Exposed rock and barren land near the summits of the mountains are shades of brown. Smoke billowing from the fire appears gray.
A view of the same area created from ASTER's thermal band (bottom) shows how temperature varies throughout the scene. Warmer temperatures are shown with brighter colors, and cooler temperatures are darker. Actively burning hot spots from the Pole Creek fire are the hottest features in the image, while high-floating cirrus clouds near North Sister are the coldest. The smoke is transparent to ASTER's thermal band because smoke plumes consist of ash particles and other combustion products so fine that they are easily penetrated by the relatively long wavelengths of thermal infrared radiation. In contrast, ASTER cannot see through clouds because they tend to have larger particles that thermal infrared radiation cannot easily pass through.
About 20,000 years ago, ice likely blanketed the mountains that make up the Oregon Cascades, forming a small ice cap. Most of that ice retreated long ago, but scattered glaciers still dot the upper reaches of Oregon's tallest peaks. Today there are more than 450 perennial snow and ice features in Oregon. About 60 of these are larger than a square kilometer, and 35 are named glaciers. About half of the named glaciers are situated near the Three Sisters and Broken Top volcanoes. All of the peaks top 9,100 feet (2,700 meters). South Sister, at 10,358 feet (3,157 meters), is the third tallest mountain in the state; North Sister, at 10,085 feet (3,074 meters), is the fourth tallest.
NASA Earth Observatory image by Jesse Allen and Robert Simmon, using data from the NASA/GSFC/METI/ERSDAC/JAROS and U.S./Japan ASTER Science Team. Caption by Adam Voiland.
Terra - ASTER
<urn:uuid:ebffa0f3-8233-4380-903e-517cb541c946>
3.71875
630
Knowledge Article
Science & Tech.
54.770665
Nov21-08, 05:27 PM (#1)
Recrystallization Percent Recovery
1. The problem statement, all variables and given/known data
Calculating the percent recovery of an unknown solid and describing whether the percent recovery represents the "true" value of the recovered pure solid. How do we know that the quality of an unknown solid which was recrystallized improved as a result of the recrystallization?
2. Relevant equations
3. The attempt at a solution
For the first question, I am confused because I don't understand what the "true value" means. I got 75% as my percent recovery, but I don't understand what the question means by "true" value.
For the second question I have no idea what it's saying. After recrystallization we expect the compound to be purer. But we have to give some "evidence". So please help me there.
<urn:uuid:4ceda84a-0331-44da-a4aa-61a39491e7e6>
2.75
288
Comment Section
Science & Tech.
43.33159
This chapter discusses the rule system in PostgreSQL. Production rule systems are conceptually simple, but there are many subtle points involved in actually using them. Some other database systems define active database rules, which are usually stored procedures and triggers. In PostgreSQL, these can be implemented using functions and triggers as well. The rule system (more precisely speaking, the query rewrite rule system) is totally different from stored procedures and triggers. It modifies queries to take rules into consideration, and then passes the modified query to the query planner for planning and execution. It is very powerful, and can be used for many things such as query language procedures, views, and versions. The theoretical foundations and the power of this rule system are also discussed in On Rules, Procedures, Caching and Views in Database Systems and A Unified Framework for Version Modeling Using Production Rules in a Database System.
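To make the mechanism concrete, here is a sketch of what a rewrite rule can look like. It is illustrative only: the table, column, and rule names are invented, and the rule simply mirrors updates on one table into a log table:

CREATE RULE log_price_update AS ON UPDATE TO items
    DO ALSO
    INSERT INTO items_log
    VALUES (old.item_id, old.price, new.price, current_timestamp);

Because this is a rewrite rule rather than a trigger, the extra INSERT is produced by rewriting the original query before planning, not by invoking a procedure for each affected row.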
<urn:uuid:e453d012-7017-4815-a74c-2fb779acaa8b>
3.25
177
Documentation
Software Dev.
22.137336
All information/photos courtesy of NOAA.
El Niño in the central and eastern equatorial Pacific Ocean is expected to be a dominant climate factor influencing December-through-February winter weather in the United States, according to the 2009 Winter Outlook released by NOAA's Climate Prediction Center. Such seasonal outlooks are part of NOAA's suite of climate services.
Highlights of the U.S. Winter Outlook (December through February) include:
* Warmer-than-average temperatures are favored across much of the western and central U.S., especially in the north-central states from Montana to Wisconsin. Though temperatures may average warmer than usual, periodic outbreaks of cold air are still possible.
* Below-average temperatures are expected across the Southeast and mid-Atlantic, from southern and eastern Texas to southern Pennsylvania and south through Florida.
* Above-average precipitation is expected in the southern border states, especially Texas and Florida. Recent rainfall and the prospects of more should improve current drought conditions in central and southern Texas. However, tornado records suggest that there will also be an increased chance of organized tornado activity for the Gulf Coast region this winter.
* Drier-than-average conditions are expected in the Pacific Northwest and the Ohio and Tennessee River Valleys.
* Northeast: Equal chances for above-, near-, or below-normal temperatures and precipitation. Winter weather in this region is often driven not by El Niño but by weather patterns over the northern Atlantic Ocean and Arctic, such as the North Atlantic Oscillation. These patterns are often more short-term and are generally predictable only a week or so in advance.
* California: A slight tilt in the odds toward wetter-than-average conditions over the entire state.
* Alaska: Milder-than-average temperatures are expected, except along the western coast. Equal chances for above-, near-, or below-median precipitation for most areas, except above-median precipitation for the northwest.
* Hawaii: Below-average temperatures and precipitation are favored for the entire state.
Special Note: This seasonal outlook does not predict where and when snowstorms may hit or total seasonal snowfall accumulations. Snow forecasts are dependent upon winter storms, which are generally not predictable more than several days in advance.
NOAA understands and predicts changes in the Earth's environment, from the depths of the ocean to the surface of the sun, and conserves and manages our coastal and marine resources. Visit http://www.noaa.gov/ for more information.
<urn:uuid:0fa9d687-5d3a-4178-9079-14de156fc24e>
3.09375
506
Knowledge Article
Science & Tech.
31.964879
Discover the cosmos! Each day a different image or photograph of our fascinating universe is featured, along with a brief explanation written by a professional astronomer.
2003 May 7
Explanation: Look up from Earth's South Pole, and this stellar starscape is what you might see. Alternatively, this patch of sky is also visible from many southern locations as well as from the orbiting International Space Station, where the above image was recently recorded. To the left of the photograph's center are the four stars that mark the boundaries of the famous Southern Cross. The band of stars, dust, and gas crossing the middle of the photograph is part of our Milky Way Galaxy. At the lower left is the dark Coal Sack Nebula, and the bright nebula on the far right is the Carina Nebula. The Southern Cross is such a famous constellation that it is depicted on the national flag of Australia.
Authors & editors:
NASA Web Site Statements, Warnings, and Disclaimers
NASA Official: Jay Norris. Specific rights apply.
A service of: LHEA at NASA / GSFC & Michigan Tech. U.
<urn:uuid:ca04df12-990f-43b3-b625-bada266a59b8>
3.40625
226
Knowledge Article
Science & Tech.
52.270002
In part one of our series on scientific data preservation, we spent some time discussing the challenges of making sure the samples used to generate scientific data get kept around. It might seem that there's an obvious solution to that issue: document things, digitize them, and take advantage of the rapid increases in hard drive capacity. After all, that's what we do with data from one-time events, like earthquakes and astronomical events. It's a nice thought, but two recent developments point out that it's little more than wishful thinking.
The first is that, as the LHC has ramped up the pace of its collisions, software filters have kicked in that are starting to determine which events actually get archived. Instead of a "preserve everything" approach to scientific data, the people running the LHC are now taking a "preserve the interesting stuff and a random sample of the rest" approach. As collision intensities continue to ramp up, that random sample will be an ever-shrinking slice of the full complement of events taking place. At full beam intensity, three levels of filtering will take place, each of which will discard all but one of every 10,000 collisions recorded.
It's possible to write this off as an exception, as the LHC is a one-of-a-kind, multibillion-dollar machine, and we won't build another one like it. But physics isn't the only field that's drowning in a flood of data, as a chart from a paper in Genome Biology reveals. For decades, hard drive capacity per dollar has been doubling every 14 months, comfortably outstripping the first decade's DNA sequencing productivity. But since second-generation DNA sequencing machines appeared on the market, the base-pairs-per-dollar figure has been doubling every five months. Genome sequencing centers are now struggling to cope with the flood.
The reality is that we simply can't save everything. And, as a result, scientists have to fall back on judgement calls, both professional and otherwise, in determining what to keep and how to keep it. We'll consider the what now, and deal with the how separately.
What's really "raw," anyway?
Digital cameras actually provide an illustrative example of the challenges of digitizing scientific data. Ostensibly, the images are simply a recording of the light that hits a sensor at a specific time and place. Except nobody ever actually gets that. Low-end cameras don't output raw files; high-end cameras correct for bad pixels and the like in hardware; things are date-stamped only if you set the camera's clock properly; and you're on your own when it comes to location data. Even if all those hurdles can be cleared, the end results need to be in a format that's easy to interpret and compact, with extraneous data culled and random variability compensated for. So, even though digitization could help provide an exact record, it generally doesn't.
Equivalents to all of this exist for scientific data. It's easy to argue that science should be focused on preserving the raw data, but lots of hardware doesn't even provide raw data anymore, and a lot of the processing is there simply to compensate for defective hardware. On the low end, the purity and concentration of DNA solutions can be determined by using a spectrophotometer to measure the absorption of specific wavelengths of light and performing some minor calculations. Most companies ultimately recognized that this is what their machines were being used for, and put together a one-button program that did all the work and spit out the concentration.
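To see just how thin the layer between the "raw" reading and the recorded number can be, here is a minimal sketch of that one-button calculation. It assumes the textbook conventions (an A260 of 1.0 corresponding to roughly 50 ng/uL of double-stranded DNA at a 1 cm path length, and an A260/A280 ratio near 1.8 read as "pure" DNA); the function names and the simplified handling of dilution are ours, not any vendor's:

def dsdna_concentration_ng_per_ul(a260, dilution_factor=1.0):
    # Standard convention: A260 of 1.0 ~ 50 ng/uL of double-stranded DNA
    # at a 1 cm path length.
    return a260 * 50.0 * dilution_factor

def purity_ratio(a260, a280):
    # An A260/A280 ratio around 1.8 is conventionally read as "pure" DNA.
    return a260 / a280

print(dsdna_concentration_ng_per_ul(0.5))  # 25.0 ng/uL
print(purity_ratio(0.5, 0.27))             # ~1.85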
Nobody ever sees the raw data anymore. And the number never gets used directly; instead, the concentration used for a given experiment (which is derived from the figure the machine spits out) is what ends up being recorded.
DNA sequencing machines are somewhere in the middle. For most machines, the raw data is a curve that represents light emissions. But, obviously, light emissions on their own don't actually say anything about a DNA base. Instead, the metadata associated with it can be converted into a more informative format; for Sanger sequencing, the results are a trace file. [Figure: a Sanger sequencing trace.]
This lets a trained eye determine the identity of the base, and gives a sense of the confidence in this call, based on the height and shape of the curve. But the human genome would never have gotten completed if people had to make judgments about every single base, or manually track which sequencing project every trace file belonged to. Instead, algorithms were developed that make base calls and provide a quality score for each base that reflects the confidence, and an analysis and storage pipeline was developed to ensure that samples were properly identified.
By this point, we're well removed from the raw data, which was actually the amount of light registered by a digital sensor. But the National Center for Biotechnology Information, which is the organization tasked with storing the genome data in the US, stopped requiring trace data with the completion of the human genome. And, for many of the nongenomic sequences it stores, it relies on the investigators that submit data to retain the original copy used to derive the sequences. Some of that dates back to the era of radioactive sulfur and X-ray films, done in labs that have since shut down; you can safely bet that it's no longer available.
At the high end, things like bad pixels affect even the most sophisticated instruments, but NASA calibrates its instruments before scientific use, and releases the adjusted data to the scientific community. So, you can safely assume that recent dumps of Kepler data don't include any artifacts caused by bad hardware. Similar things happen for other hardware. For example, the satellite NASA uses to track ocean levels (currently, Jason-1) suffers from a gradual orbital decay that slowly brings it closer to the ocean. Its raw data is essentially meaningless; anyone wanting to actually analyze ocean levels needs to use the adjusted data provided by NASA.
The decision on how close to the initial instrument to go when saving raw data is, again, something that requires case-by-case judgments based on the instruments and scientific needs, so it's not possible to set a one-size-fits-all policy for preservation. Scientifically, it can make all the difference, as two cases illustrate.
One of the early controversies in the area of climate science arose over discrepancies between the measurement of surface temperatures and satellite-derived measurements of the lower atmosphere. Eventually, however, various sources of instrument error in the satellite record were identified; when corrected for, the two records were brought into rough agreement (you can get a sense of some of the issues from this paper).
Earlier in June came another calibration controversy, this one about data from NASA's WMAP satellite, which images the microwave background that resulted from the Big Bang.
Although the satellite's original calibration seems to have been widely accepted by the cosmology community, a separate group is apparently claiming that it can perform a different calibration and get results that do away with dark matter and dark energy. Without everyone having access to the original, uncalibrated data, none of this would be possible. But, despite its potential importance, scientists in other fields are making justifiable decisions to pitch everything but heavily processed data. All of this, of course, assumes that we can manage to keep any of the data around long enough to argue about it in the first place, an assumption that, as we'll see, is often sorely tested.
<urn:uuid:bcd65e97-3944-4994-968a-e3345aceb477>
3.40625
1,520
Nonfiction Writing
Science & Tech.
33.654004
It can be tempting, when calculating an answer or working with numbers, to just record whatever answer a calculator or computer pumps out. For example: Say we want to multiply 4.8201 by 0.0946. Simply punching these numbers into a calculator, we get 0.45598146. However, writing every calculated value out to eight, nine, or ten places will make your data look sloppy, needlessly complicate calculations, and, most importantly, will greatly exaggerate the level of precision with which our measurements are being made.
Counting Significant Figures
1. All non-zero integers are significant figures.
123 has three significant figures
987654 has six significant figures
2. Zeroes located between non-zero integers are significant figures.
701 has three significant figures
60204 has five significant figures
3. Zeroes to the left of the first non-zero digit are never significant, and trailing zeroes are not significant when they serve only to locate the decimal point (as in a whole number written without a decimal point).
3.14 has three significant figures
15900000000000 also has three significant figures
0.0078 has two significant figures
0.00000000000000000000000017 also has two significant figures
Operations With Significant Figures
When multiplying or dividing, your calculated value can be no more precise than the least precise value in the operation: round the result to match the number of significant figures in the value with the fewest of them. (For addition and subtraction the convention differs: round the result to the least number of decimal places among the values.) Scientific notation should also be used if appropriate.
4.8201 x 0.0946 = 0.45598146 --> 0.456
6563 x 107.28 = 704078.64 --> 704100 -or- 7.041 x 10^5
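Significant-figure rounding is easy to mechanize. Here is a minimal sketch in Python; the helper name is ours, and it performs only the final rounding step (choosing how many figures to keep is still up to you):

import math

def round_sig(x, n):
    # Round x to n significant figures.
    if x == 0:
        return 0.0
    exponent = math.floor(math.log10(abs(x)))
    return round(x, n - 1 - exponent)

print(round_sig(4.8201 * 0.0946, 3))  # 0.456
print(round_sig(6563 * 107.28, 4))    # 704100.0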
<urn:uuid:34b69122-f958-46b3-81a1-c7cf9801229e>
3.765625
358
Structured Data
Science & Tech.
57.990538
Your brain contains about 10 billion neurons, each of which connects to other nerve cells through about 10,000 synapses. Neurons process signals coming into the nervous system, and then produce output signals that stimulate the body's biological functions, everything from walking to kissing. Viewed as a whole, the brain's network of neurons can be seen as a massively parallel information processing system. A computer. And when that computer breaks down, you begin to lose memory or, worse, develop diseases such as Parkinson's or Alzheimer's. Unfortunately you can't take your brain down to Fry's or Best Buy to purchase an upgrade. But what if you could put something in your brain that would enhance the signal processing capabilities of individual neurons? Scientists say they've done just that with carbon nanotubes. From an EE Times article:
The researchers propose engineering carbon nanotube scaffolds as electrical bypass circuitry, not only for faulty neural networks but potentially to enhance the performance of healthy cells to provide "superhuman" cognitive functions. However, many engineering hurdles remain to realizing the potential of augmenting neural networks with carbon-nanotube circuitry, including stabilizing the mechanical interfaces between nanotubes and neurons, determining which signal-sites to record from, which sites to stimulate, and just what kind of signals will affect repairs or improve cognitive functions. Eventually, the researchers hope that carbon nanotube-based circuitry will enable brain-machine interfaces for neuroprosthetics that process sight, sound, smell and motion. Such circuits could, for instance, veto epileptic attacks before they occur, perform spinal bypasses around injuries, and repair or enhance cognitive functions.
So in summary, a lot of engineering challenges remain. But the potential is incredible: healing brain diseases, supercharging the intellect of healthy brains, and eventually building some kind of human-machine interface. If you're into that sort of thing.
<urn:uuid:f07f9288-7c17-4f94-b25f-29c46995d5f0>
3.65625
404
Personal Blog
Science & Tech.
26.407303
Posted on Jan 27, 2009 02:28:30 PM by Dan Kanigan | 1 Comment
The Ares I-X flight test vehicle is being built from a lot of off-the-shelf components, such as the solid rocket booster first stage, which is coming directly from the space shuttle inventory, or the avionics, which are from the Atlas V Evolved Expendable Launch Vehicle. However, one of the lesser-known off-the-shelf parts for Ares I-X is the Roll Control System, or RoCS. The RoCS's four thrusters fire alongside the rocket in short pulses to control the vehicle's roll. After clearing the launch tower, the Ares I-X rocket will be rolled 90 degrees to the same orientation that the Ares I rocket will use. Once that maneuver is completed, the RoCS keeps Ares I-X from rolling during flight like a corkscrew or a football spiraling downfield. This required a rocket engine that could be turned on and off like a thermostat -- only when needed to maintain position within a certain range.
There were actually a couple of choices. One was to use reaction control thrusters from the space shuttle. However, Ares I-X would have needed four thrusters per RoCS module -- eight in all for the mission -- and with the Shuttle production lines shut down and Ares I-X being an expendable rocket, the Shuttle program couldn't afford to part with any of their thrusters.
Another option -- the one eventually chosen -- was the upper stage engine of the Peacekeeper missile system, which was in the process of being demilitarized and dismantled as part of the second Strategic Arms Reduction Treaty (START II). The Peacekeeper's axial engine (or AXE) met several of the Ares I-X requirements: it was a reliable, off-the-shelf system; it was able to handle the on/off pulsing cycle needed for the flight; its thrust was such that only two engines would be required per module; and it was relatively low-cost and available for use. (The Air Force agreed to transfer the axial engines NASA needed, as well as the engines' propellant and pressurization tanks, "for just the cost of shipping," as RoCS team leader Ron Unger put it.)
What a fantastic use of these components: instead of being used for their original mission as part of a nuclear weapon, they are contributing to the first step in America's next generation of space exploration!
Tags: Ares I, Ares I-X, Constellation, roll control system, space shuttle
<urn:uuid:a022c0a5-295f-4fc6-9ea0-fe775c3524d4>
2.84375
542
Personal Blog
Science & Tech.
44.087322
Stokes' theorem states that for an oriented n-dimensional manifold M with boundary ∂M and a compactly supported (n-1)-form ω on M,

$$\int_M d\omega = \int_{\partial M} \omega.$$

Here d is the exterior derivative, which is defined using the manifold structure only. The theorem is to be considered as a generalisation of the fundamental theorem of calculus, and indeed it is easily proved using that theorem.
The theorem is often used in situations where M is an embedded oriented submanifold of some bigger manifold on which the form ω is defined. The theorem easily extends to linear combinations of piecewise smooth submanifolds, so-called chains. Stokes' theorem then shows that closed forms, defined up to an exact form, can be integrated over chains defined only up to a boundary. This is the basis for the pairing between homology groups and de Rham cohomology.
The classical Kelvin-Stokes theorem, which relates the integral of the rotation (curl) of a vector field over a surface Σ in Euclidean 3-space to the integral of the vector field over its boundary, is a special case of the general Stokes theorem (with n = 2) once we identify the vector field with a 1-form using the metric on Euclidean 3-space. The first known statement of the theorem is by William Thomson (Lord Kelvin) and appears in his letter to Stokes. Likewise, the Ostrogradsky-Gauss theorem is a special case if we identify a vector field with the (n-1)-form obtained by contracting the vector field with the Euclidean volume form. The Fundamental Theorem of Calculus and Green's theorem are also special cases of the general Stokes theorem. The general form of the Stokes theorem using differential forms is more powerful and generally easier to work with.
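Written out in classical vector-calculus notation, the special cases mentioned above take the following familiar forms (standard statements, reproduced here for concreteness):

% Kelvin-Stokes theorem (n = 2): surface S with boundary curve \partial S
\int_S (\nabla \times \mathbf{F}) \cdot d\mathbf{S} = \oint_{\partial S} \mathbf{F} \cdot d\mathbf{r}

% Ostrogradsky-Gauss (divergence) theorem: region V with boundary surface \partial V
\int_V (\nabla \cdot \mathbf{F}) \, dV = \oint_{\partial V} \mathbf{F} \cdot d\mathbf{S}

% Fundamental theorem of calculus (n = 1): M = [a, b], \partial M = \{a, b\}
\int_a^b f'(x) \, dx = f(b) - f(a)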
<urn:uuid:ef94c98e-3fbd-4653-b96d-d85019ddde08>
3.046875
333
Knowledge Article
Science & Tech.
42.659056
Author: XianJim lee
It is hard to debug in an embedded system, especially when the bug can't be reproduced in the simulator. Don't be surprised if it takes several days to find out why the system failed, just because there is no suitable debugger! You are lucky if you develop programs in a Linux embedded environment: gdbserver is a powerful tool, and it has helped me solve many difficult problems. It is a pity that it doesn't support shared libraries; you can't insert breakpoints in shared library code. There is a command named add-shared-symbol-files, but it doesn't work.
The mechanism of inserting breakpoints is very simple: generally, the debugger inserts a piece of special instruction at the address, and when the CPU executes that instruction, an exception is thrown and the debugger takes over control of execution. So why doesn't it work for shared libraries? The most likely answer is that the symbols do not match the corresponding addresses. After reading the help information for the command add-symbol-file, I knew that I should specify an address for it. But what address should I specify? We should know where the code of the shared library is located in memory. You will say that is simple: we can consult /proc/$PID/maps. Yes, you are right, but that alone is not enough. The following example is a complete demonstration of debugging a shared library.

1. Let's create a shared library.

int foo(int a, int b)
{
    int s = a + b;
    return s;
}

2. Then create an executable file that calls the shared library.

extern int foo(int a, int b);
int main(int argc, char* argv[])
{
    int s = foo(10, 20);
    return 0;
}

3. Of course we need a Makefile.

all: so main
so:
	gcc -g foo.c -shared -o libfoo.so
main:
	gcc -g main.c -L./ -lfoo -o test
clean:
	rm -f test *.so

4. Make, and prepare for running.

# export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:./

5. Run the gdbserver.

# gdbserver localhost:2000 ./test

6. Connect to the gdbserver and run to the function main.

(gdb) symbol-file test
(gdb) target remote localhost:2000
(gdb) b main
(gdb) c

7. Now it is time to check where the library is loaded.

# ps -ef | grep ./test

(You can get the PID from the output; here it is 7186.)

# cat /proc/7186/maps

It will output something like:

007b1000-007cc000 r-xp 00000000 08:02 2737838 /lib/ld-2.6.so
007cc000-007cd000 r--p 0001a000 08:02 2737838 /lib/ld-2.6.so
007cd000-007ce000 rw-p 0001b000 08:02 2737838 /lib/ld-2.6.so
08048000-08049000 r-xp 00000000 08:02 1759415 /root/writting/gdbserver/test
08049000-0804a000 rw-p 00000000 08:02 1759415 /root/writting/gdbserver/test
4d940000-4da8e000 r-xp 00000000 08:02 2738392 /lib/libc-2.6.so
4da8e000-4da90000 r--p 0014e000 08:02 2738392 /lib/libc-2.6.so
4da90000-4da91000 rw-p 00150000 08:02 2738392 /lib/libc-2.6.so
4da91000-4da94000 rw-p 4da91000 00:00 0
b7efc000-b7efd000 rw-p b7efc000 00:00 0
b7f11000-b7f12000 r-xp 00000000 08:02 1759414 /root/writting/gdbserver/libfoo.so
b7f12000-b7f13000 rw-p 00000000 08:02 1759414 /root/writting/gdbserver/libfoo.so
b7f13000-b7f14000 rw-p b7f13000 00:00 0
bff04000-bff19000 rw-p bffeb000 00:00 0 [stack]
ffffe000-fffff000 r-xp 00000000 00:00 0 [vdso]

This means the code segment of libfoo.so is loaded at 0xb7f11000.

8. With the help of objdump, we can get the offset of the .text section.

# objdump -h libfoo.so | grep text

It will output something like:

.text 00000154 000002f0 000002f0 000002f0 2**4

So, the offset is 0x000002f0.

9. Adding the loaded address and the offset gives the real address: 0xb7f11000 + 0x2f0 = 0xb7f112f0.

10. Now, we can load the symbol file into gdb.
(gdb) add-symbol-file libfoo.so 0xb7f112f0
add symbol table from file "libfoo.so" at
	.text_addr = 0xb7f112f0
(y or n) y
Reading symbols from /root/writting/gdbserver/libfoo.so...done.

11. Done. Debug it as in the normal case.
Well, it works, but it is still complex. If you have a better solution, please let me know; thank you in advance.
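One possible shortcut, depending on the gdb version in use (treat this as a hint to verify rather than a guarantee): once the target is running, gdb's built-in command

(gdb) info sharedlibrary

lists each loaded shared library together with the address range its code is mapped to, which can save you from reading /proc/$PID/maps and running objdump by hand.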
<urn:uuid:5b7d1f91-823e-4c3d-8797-843a852c9c7f>
3.125
1,251
Personal Blog
Software Dev.
102.200753
I know how to find the area of a triangle in three-space, say ABC: First I would find the vectors AB and AC, and then take their cross product, using the fact that the magnitude of the cross product is the area of the parallelogram the two vectors span. Finding the magnitude of that and dividing it by two (since the area of a triangle is half the area of the parallelogram), I would find the area of the triangle ABC.
But how would I find the area of a triangle that does not have a Z component (is in two-space)? For example: Find the area of the triangle with vertices (1, -2), (-1, 3), (2, 4). Since this is in two-space, I can't take the cross product of any two of these vectors to find the area. According to my book, the area of this triangle is 17/2, and I have no idea how they computed that. Any help? I would appreciate learning a systematic approach to finding the area of a triangle in two-space. Thanks in advance.
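A sketch of one systematic approach (not necessarily what the book intended): treat the 2D points as if they had z = 0, so the cross product of the edge vectors has only a z-component, and half its absolute value is the triangle's area. In code:

def triangle_area_2d(p1, p2, p3):
    # Half the absolute z-component of the cross product of the edge
    # vectors p2 - p1 and p3 - p1 (the 2D "shoelace" formula).
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    cross_z = (x2 - x1) * (y3 - y1) - (y2 - y1) * (x3 - x1)
    return abs(cross_z) / 2

print(triangle_area_2d((1, -2), (-1, 3), (2, 4)))  # 8.5 = 17/2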
<urn:uuid:c84c82b4-a916-45fe-b765-1f2210c30c3a>
3.078125
206
Q&A Forum
Science & Tech.
69.698571
4.4. The Galactic Center
The central parsec of the galaxy, identified with the Sagittarius A nebula, contains ionized gas powered by about 10^40 ionizing photons s^-1 (Lacy et al. 1980). A cluster of He I emission line stars has been observed and spectroscopically analyzed (Tamblyn et al. 1996, Najarro et al. 1997). The complete spectrum of infrared fine structure lines that has been observed, combined with the hydrogen Brackett recombination lines (see Shields & Ferland 1994 for a compilation), should in principle allow one to perform an abundance analysis. From a two-component photoionization model, Shields & Ferland (1994) estimate that the abundance of Ar should be about twice solar, but Ne seems rather to have the solar value. The evidence for over-solar metallicity is thus mixed. The N/O ratio is estimated at about 3 - 4 times solar. However, the derived abundances may be clouded by errors in the reddening corrections (the extinction is as high as A_V = 31, so even at far infrared wavelengths reddening becomes important) and by uncertainties in the atomic parameters (mainly those determining the ionization structure). As a consistency check, Shields & Ferland (1994) compared the electron temperature measured from recombination lines with their model predictions. For that, they included heating by dust, and assumed the same grain content as in the model of Baldwin et al. (1991) for Orion. They found the measured temperatures to be consistent with a metallicity 1 - 2 times solar, while 3 times solar would be only marginally consistent. However, with a population of small grains, photoelectric heating would be more important, and larger metal abundances could be acceptable. The Galactic center has since been reobserved by ISO (Lutz et al. 1996), but a detailed discussion of the new results remains to be done.
<urn:uuid:8d4282c9-39d1-43c1-a009-35fb0e7ad2d3>
2.734375
391
Academic Writing
Science & Tech.
43.671044
The idea of this game is to add or subtract the two numbers on the dice and cover the result on the grid, trying to get a line of three. Are there some numbers that are good to aim for?
Five numbers added together in pairs produce: 0, 2, 4, 4, 6, 8, 9, 11, 13, 15. What are the five numbers? (A brute-force sketch appears after this list.)
Different combinations of the weights available allow you to make different totals. Which totals can you make?
Do you notice anything about the solutions when you add and/or subtract consecutive negative numbers?
An investigation involving adding and subtracting sets of consecutive numbers. Lots to find out, lots to explore.
This article suggests some ways of making sense of calculations involving positive and negative numbers.
In this problem, we're investigating the number of steps we would climb up or down to get out of or into the swimming pool. How could you number the steps below the water?
Imagine a very strange bank account where you are only allowed to do two things...
Play this game to learn about adding and subtracting positive and negative numbers.
Investigate different ways of making £5 at Charlie's bank.
Can you be the first to complete a row of three? In this game, you can add, subtract, multiply or divide the numbers on the dice. Which will you do so that you get to the end of the number line first?
The picture shows a lighthouse and many underwater creatures. If you know the markings on the lighthouse are 1m apart, can you work out the distances between some of the different creatures?
In this game the winner is the first to complete a row of three. Are some squares easier to land on than others?
How can we help students make sense of addition and subtraction of negative numbers?
A brief history of negative numbers throughout the ages.
The classic vector racing game brought to a screen near you.
In this article for teachers, Liz Woodham describes resources on NRICH that can help primary-aged children get to grips with
What is the smallest number of answers you need to reveal in order to work out the missing headers?
This article - useful for teachers and learners - gives a short account of the history of negative numbers.
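Returning to the five-numbers puzzle above: it also yields to a short brute-force search. A sketch in Python (the search range is an assumption, chosen generously enough to cover sums between 0 and 15):

from itertools import combinations

target = [0, 2, 4, 4, 6, 8, 9, 11, 13, 15]

# Try every increasing 5-tuple of integers in range; keep those whose
# ten pairwise sums reproduce the target multiset.
for nums in combinations(range(-10, 16), 5):
    sums = sorted(a + b for a, b in combinations(nums, 2))
    if sums == target:
        print(nums)  # prints (-1, 1, 3, 5, 10)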
<urn:uuid:9060d095-b37d-46ac-8863-672cc263de5c>
3.71875
478
Content Listing
Science & Tech.
59.689906
Some of the things I overheard at Stephen Hawking's 70th birthday conference did make me wonder whether I hadn't got the wrong building and stumbled in on a sci-fi convention. "The state of the multiverse". "The Universe is simple but strange". "The future for intelligent life is potentially infinite". And — excuse me — "the Big Bang was just the decay of our parent vacuum"?!
A traditional view of science holds that every system — including ourselves — is no more than the sum of its parts. To understand it, all you have to do is take it apart and see what's happening to the smallest constituents. But the mathematician and cosmologist George Ellis disagrees. He believes that complexity can arise from simple components and that physical effects can have non-physical causes, opening a door for our free will to make a difference in a physical world.
Most of us think that we have the capacity to act freely. Our sense of morality, our legal system, our whole culture is based on the idea that there is such a thing as free will. It's embarrassing, then, that classical physics seems to tell a different story. And what does quantum theory have to say about free will?
"Astronomers are used to large numbers, but few are as large as the odds I'd have given this celebration today," is how Astronomer Royal Martin Rees started his presentation at Stephen Hawking's birthday symposium yesterday. He was talking about the 1960s, when he first met Hawking, who was then already suffering from motor neurone disease. But Rees' prediction has been proved wrong. Hawking turned 70 yesterday, and since the time of their first meeting he has made enormous contributions to cosmology and physics.
Human reasoning is biased and illogical. At least that's what a huge body of psychological research seems to show. But now a psychological scientist from the University of Toulouse in France has come up with a new theory: that logical and probabilistic thinking is an intuitive part of decision making, only its conclusions often lose out to heuristic considerations.
Researchers in Germany have created a rare example of a weird phenomenon predicted by quantum mechanics: quantum entanglement, or, as Einstein called it, "spooky action at a distance". The idea, loosely speaking, is that particles which have once interacted physically remain linked to each other even when they're moved apart and seem to affect each other instantaneously.
Whenever you smell the lovely smell of fresh coffee or drop a tea bag into hot water, you're benefiting from diffusion: the fact that particles moving at random under the influence of thermal energy spread themselves around. It's this process that wafts coffee particles towards your nose and allows the tea to spread around the water. Diffusion underlies a huge number of processes, and it has been studied intensively for over 150 years. Yet it wasn't until very recently that one of the most important assumptions of the underlying theory was confirmed in an experiment.
The only good thing about a wash-out summer is that you get to see lots of rainbows. Keats complained that a mathematical explanation of these marvels of nature robs them of their magic, conquering "all mysteries by rule and line". But rainbow geometry is just as elegant as the rainbows themselves.
It's the 21st of October, and for puzzle lovers this can only mean one thing: the G4G Celebration of Mind.
This annual party celebrates the legacy of Martin Gardner, magician, writer and father of recreational maths, with mathemagical events in his honour happening all over the world.
<urn:uuid:d8321922-a3f8-44f1-a8b3-41ae942cbc40>
2.84375
727
Content Listing
Science & Tech.
45.496606
Schopf claimed to have discovered bacterial fossils in these rocks. He published his results in a highly cited Science paper back in 1993 (Schopf, 1993). The title of the paper, "Microfossils of the Early Archean Apex chert: new evidence of the antiquity of life," establishes his claim. It's worth quoting the abstract of the paper because it shows the confidence Schopf exuded. Not only did he claim that the 3.5-billion-year-old Apex chert contained bacterial fossils but, even more astonishingly, he identified eleven different species and clearly stated that they resembled cyanobacteria.
Eleven taxa (including eight heretofore undescribed species) of cellularly preserved filamentous microbes, among the oldest fossils known, have been discovered in a bedded chert unit of the Early Archean Apex Basalt of northwestern Western Australia. This prokaryotic assemblage establishes that trichomic cyanobacterium-like microorganisms were extant and morphologically diverse at least as early as approximately 3465 million years ago and suggests that oxygen-producing photoautotrophy may have already evolved by this early stage in biotic history.
The data were immediately challenged. There were two problems. First, many paleontologists questioned whether the "fossils" were really fossils. They suggested that the structures could easily be inorganic in nature and not remnants of living organisms. Secondly, the presence of cyanobacteria—among the most complex bacteria—is inconsistent with molecular data. Even though the early tree of life is complicated, the available evidence indicates that cyanobacteria arose late in the evolution of bacterial taxa. It's very unlikely that the earliest forms of life could be cyanobacteria, or even photosynthetic bacteria.
The publicity associated with the presumed discovery of the earliest forms of life was too much to resist. In spite of the criticisms, the "fact" of these "fossils" made it into the textbooks within months of the discovery. The original figures have often been purged from more recent editions but the widespread claim that life originated 3.5 billion years ago persists. Schopf defended and promoted his work in a trade book—The Cradle of Life—published in 1999. In that book he appeared to address most of his critics. He insisted that his "fossils" met all the rigorous tests of science.
The Fossils Aren't Fossils
Over the years, the challengers became more and more emboldened. In 2002 Martin Brasier published a re-analysis of Schopf's original fossils and noticed that the published images were not as complete as they could be. In the figure shown here, Brasier et al. (2002) compare Schopf's original images ("b" and "c") with a larger view of the same material. The "fossils" look much more like inorganic inclusions that just happen to resemble strings of bacteria, according to Brasier.
A debate between Martin Brasier and Bill Schopf took place in April 2002 and it was widely perceived to have resulted in victory for Brasier. The "fossils" aren't fossils. A report in Nature presented the bottom line (Dalton, 2002).
The textbooks say that oxygen-producing microorganisms evolved some 3.5 billion years ago. But as that claim and its author come under attack, the history of life on Earth may have to be rewritten...
Supporters and critics of Schopf alike describe him as a driven and tenacious character — nicknamed 'Bull' Schopf by some — whose energy and enthusiasm has done much to raise the profile of micropalaeontology, and to draw funding into the field. "He has a driving ambition to be in the limelight, and he doesn't like to admit he's wrong," says one former colleague. But these traits have led Schopf into conflict with his collaborators on at least one previous occasion.
A similar piece in Science helps drive the point home (Kerr, 2002).
The search for fossils in rocks formed before the Cambrian explosion of life 540 million years ago "has been plagued by misinterpretation and questionable results," leading paleontologist William Schopf of the University of California, Los Angeles (UCLA), once noted. Now Schopf's own claim for the oldest known fossils--fossils that have entered textbooks as the oldest ever found--is under attack as a misinterpretation of intriguingly shaped but purely lifeless minerals. A paper in this week's issue of Nature argues that the microscopic squiggles in a 3.5-billion-year-old Australian chert are not fossilized bacteria, as Schopf claimed in a 1993 Science paper (30 April 1993, p. 640), but the curiously formed dregs of ancient hot-spring chemistry. "There's a continuum [of putative microfossils] from the almost plausible to the completely ridiculous," says lead author Martin Brasier, a micropaleontologist at the University of Oxford, U.K. "Our explanation is that they are all abiogenic artifacts." If true, the analysis calls into question the fossil record of life's first billion years. It would also raise doubts about the judgment of Schopf, the man chosen by NASA to set the standard for distinguishing signs of life from nonlife at the press conference unveiling martian meteorite ALH84001 (Science, 16 August 1996, p. 864). But Schopf says that such speculation is unwarranted. "I would beg to differ" with Brasier's interpretation, he says. "They're certainly good fossils."
The latest paper by Pinti et al. (2009) extends earlier observations of the Apex chert that re-interpret it as a hydrothermal vent. Temperatures reached 250°C during formation of the vent, and the alternation between molten and cooler forms of material was not conducive to life. Furthermore, deposits of iron oxides and clay minerals could be mistaken for microfossils.
Organic Traces of Early Life?
One of the early signatures of life is trace organic matter. In theory, it is possible to distinguish between organic molecules that form by chemical processes and organic molecules that are synthesized by living organisms. The key is the ratio of the two isotopes of carbon: 12C and 13C. The common isotope is 12C, and living organisms preferentially incorporate 12C when they synthesize carbohydrates, lipids, and other molecules of life. The result is that organic molecules made in cells have a smaller percentage of the heavy isotope, 13C. The presence of "lighter" organic molecules is evidence of life—or so the story goes.
Even this evidence of early life is being challenged. For example, a review of the evidence for life in the 3.7-billion-year-old rocks of western Greenland points out two potential problems (Fedo et al., 2006). First, the material has probably been misidentified—it is not what it was claimed to be. Recent evidence suggests that the rocks are igneous, not sedimentary. Secondly, the isotope ratios may not be accurate and/or they can be explained by non-biological processes. Isotope ratios are not an unambiguous indication of life.
These problems, and others, with the Akilia rocks of western Greenland have been known for many years. They were discussed in a hard-hitting Nature News and Views article by Stephen Moorbath in 2005. You may not understand the technical details (I don't) but there's no mistaking the tone when Moorbath says:
This persuasive discovery seems an almost inevitable, yet highly problematic, consequence to the increasing scientific doubts about the original claim. We may well ask what exactly was the material originally analysed and reported? What was the apatite grain with supposed graphite inclusions that figured on the covers of learned and popular journals soon after the discovery? These questions must surely be answered and, if necessary, lessons learned for the more effective checking and duplication of spectacular scientific claims from the outset.
To my regret, the ancient Greenland rocks have not yet produced any compelling evidence for the existence of life by 3.8 billion years ago. The reader is reminded that another debate on early life is currently in progress on 3.5-billion-year-old rocks in Western Australia, where chains of cell-like structures, long identified as genuine fossils [10], have recently been downgraded by some workers [11] to the status of artefacts produced by entirely non-biological processes. To have a chance of success, it seems that the search for remnants of earliest life must be carried out on sedimentary rocks that are as old, unmetamorphosed, unmetasomatized and undeformed as possible. That remains easier said than done. For the time being, the many claims for life in the first 2.0–2.5 billion years of Earth's history are once again being vigorously debated: true consensus for life's existence seems to be reached only with the bacterial fossils of the 1.9-billion-year-old Gunflint Formation of Ontario [12].
There's another, potentially more serious, problem with using isotope ratios as evidence of early life. Gérard et al. (2009) have recently documented the presence of modern bacteria in drillcore samples of rocks that are 2.7 billion years old. They detected trace amounts of ribosomal RNA that were sufficient to identify more than ten diverse species of bacteria living in these subsurface formations.
If modern bacteria can invade and colonize ancient rocks then it's highly likely that more ancient bacteria can also live in ancient rocks. Over the course of millions of years, these colonizers can leave traces of organic molecules. But those molecules do not show that life existed in those places at the time when the rocks were formed. In other words, just because you have "light" organic molecules in rocks that are billions of years old does not mean that the cells that created those molecules lived billions of years ago. The conclusion of the Gérard et al. (2009) paper is worth quoting:
Our results strongly suggest that contemporary bacteria inhabit what are generally considered exceptionally well-preserved subsurface Archaean fossil stromatolites of the Hamersley Basin, Western Australia. They are possibly in very low numbers, their distribution confined to microfractures where water may circulate (perhaps only intermittently), and their metabolic activities might be extremely low. However, upon geological timescales spanning 2.7 Gy, even such low cell numbers must have contributed significantly to the pool of biogenic signatures associated to these rocks, including microfossils, biological isotopic fractionation and lipid biomarkers.
Although our results do not necessarily invalidate previous analyses, they cautiously question the interpretation of ancient biomarkers or other life traces associated to old rocks, even pristine, as syngenetic biogenic remains when bulk analyses are carried out.
What does all this tell us about early life? It tells us that the evidence for life before 3 billion years ago is being challenged in the scientific literature. You can no longer assume that life existed that early in the history of Earth. It may have, but it would be irresponsible to put such a claim in the textbooks without a note of caution.
What else does this story tell us? It tells us something about how science is communicated to the general public. The claims of early life were widely reported in the media. Every new discovery of trace fossils and trace molecules was breathlessly reported in countless newspapers and magazines. Nobody hears about the follow-up studies that cast doubt on those claims. Nobody hears about the scientists who were heroes in the past but seem less-than-heroic today. That's a shame because that's how science really works. That's why science is so much fun.
Brasier, M.D., Green, O.R., Jephcoat, A.P., Kleppe, A.K., Van Kranendonk, M.J., Lindsay, J.F., Steele, A., and Grassineau, N.V. (2002) Questioning the evidence for Earth's oldest fossils. Nature 416:76-81. [PubMed]
Dalton, R. (2002) Microfossils: Squaring up over ancient life. Nature 417:782-784. [doi:10.1038/417782a]
Fedo, C.M., Whitehouse, M.J., and Kamber, B.S. (2006) Geological constraints on detecting the earliest life on Earth: a perspective from the Early Archaean (older than 3.7 Gyr) of southwest Greenland. Phil. Trans. R. Soc. B 361:851-867. [doi:10.1098/rstb.2006.1836]
Gérard, E., Moreira, D., Philippot, P., Van Kranendonk, M.J., and López-García, P. (2009) Modern subsurface bacteria in pristine 2.7 Ga-old fossil stromatolite drillcore samples from the Fortescue Group, Western Australia. PLoS ONE 4:e5298. [doi:10.1371/journal.pone.0005298]
Pinti, D.L., Mineau, R., and Clement, V. (2009) Hydrothermal alteration and microfossil artefacts of the 3,465-million-year-old Apex chert. Nature Geoscience 2:640-643. [doi:10.1038/ngeo601]
Schopf, J.W. (1993) Microfossils of the Early Archean Apex chert: new evidence of the antiquity of life. Science 260:640-646. [PubMed]
<urn:uuid:a932d18a-898c-44fb-ace8-b0a41c2d7bb0>
3.5625
3,030
Personal Blog
Science & Tech.
50.278337
Large models hold many details. Tons of details. It is precisely this sheer size that makes it so hard for us to grasp the entirety of a complex system in one shot. To ease the process of understanding, we require tools that help us make sense of all these details. To tame these models, we need to understand their inner structure and their inter-relationships. How do we best do that? By means of browsers.
A browser is a specific user interface that allows us to look at the space provided by the model, to navigate from one part of this space to another, and to act upon it. Browsers can be of many kinds, but for the purpose of this discussion we will distinguish between generic and dedicated ones.
In Smalltalk, for example, the central browser is the Inspector. This is a generic tool that allows us to manipulate objects from the point of view of the Smalltalk language. It accommodates any instance of any class, it shows us the inner structure of the instance in terms of its variables, and it allows us to execute code that involves the current instance. In Moose we have the Moose Finder fulfilling a similar role, only it does so by interpreting the objects from the point of view of the meta-model of Moose.
These are great tools, but they are not effective when it comes to navigating a set of objects from the point of view of a model that sits at a higher level and that ignores the internal implementation details. Because the Inspector offers a low-level point of view on the objects, whenever we want to discover a higher-level structure we are required to go through many irrelevant, implementation-related clicks to get to the objects of interest. A better way is to use a browser that supports a flow dedicated to the model of interest. For example, when it comes to manipulating code, we use a code browser. In Smalltalk, we could write code in the Inspector, but we typically prefer not to.
Dedicated browsers are desirable, but they are expensive to build. As a result, we have no dedicated way to browse the large majority of models around us. This situation needs rectification, and Glamour presents the solution in the form of an engine for building dedicated browsers. This chapter describes the details of Glamour. It starts with a short tutorial and then gradually introduces the architecture and the most important concepts.
<urn:uuid:1d521b15-f861-43db-8554-1c3a6713cdf5>
2.734375
486
Documentation
Software Dev.
47.411651
The Great Wall
[Image caption: Looking back, no signs of civilization on the blue planet appear.]
As preliminary reports have circulated that the Hubble Space Telescope may have imaged the first extrasolar planet directly, the question of how an advanced civilization might indicate its presence over such vast distances has come to the forefront. After seeing reflected light from another planet, one might wish to resolve continents, clouds and oceans, even some artifact showing that a lifeform was actively shaping its environment. A case study from the perspective of how the Earth might appear from afar centers on the disputed claims about orbital pictures of the Great Wall of China. So that's one reason an astrobiologist might wonder about detecting habitable planets in visible light: does civilization leave a big enough footprint?
To consider an orbital view of humanity's influence, the European Space Agency's Proba satellite has shown a winding segment of the 7240-km-long Great Wall of China situated just northeast of Beijing. The Great Wall's relative visibility or otherwise from orbit has inspired much recent debate.
[Image caption: Hubble infrared camera NICMOS took this image of what may be the first planet imaged directly around another star. "The infrared is better at determining temperature and the abundance of certain gases that we think of as part of a habitable planet. The optical part of the spectrum is better at looking for variability, looking for clouds." -- Arizona Prof. J. Lunine. Credit: Hubble HST]
An often-repeated myth has been that the Great Wall is the only man-made object visible from the moon. But Apollo astronaut Alan Bean wrote that "The only thing you can see from the moon is a beautiful sphere, mostly white (clouds), some blue (ocean), patches of yellow (deserts), and every once in a while some green vegetation. No man-made object is visible on this scale. In fact, when first leaving earth's orbit and only a few thousand miles away, no man-made object is visible at that point either." Even from orbit, human influence on the globe is not a dominant factor.
The 21 hours spent in space last October by Yang Liwei - China's first ever space traveller - were a proud achievement for his nation. The only disappointment came as Liwei informed his countrymen he had not spotted their single greatest national symbol from orbit. "The Earth looked very beautiful from space, but I did not see our Great Wall," Liwei told reporters after his return.
China has cherished for decades the idea that the Wall was just about the only manmade object visible to astronauts from space, and the news disappointed many. A suggestion was made that the Wall be lit up at night so it can definitely be seen in future, while others called for school textbooks to be revised to take account of Liwei's finding. However, such revisions may be unnecessary, according to American astronaut Eugene Cernan, speaking during a visit to Singapore: "In Earth's orbit at a height of 160 to 320 kilometres, the Great Wall of China is indeed visible to the naked eye." Liwei may well have been unlucky with the weather and local atmospheric or light conditions - with sufficiently low-angled sunlight, the Wall's shadow, if not the Wall itself, could indeed be visible from orbit.
[Image caption: China's Great Wall imaged from the Proba satellite (inset upper left) and from ground level (lower inset).]
What is for sure is that what the human eye may not be able to see, satellites certainly can.
Proba's High Resolution Camera (HRC) acquired this image of the Wall from 600 km away in space. The HRC is a black and white camera that incorporates a miniature Cassegrain telescope, giving it far superior spatial resolution to the human eye. So while the HRC resolves man-made objects down to five square metres, astronauts in low Earth orbit looking with the naked eye can only just make out such large-scale artificial features as field boundaries between different types of crops or the grid shape formed by city streets. They require binoculars or a zoom lens to make out individual roads or large buildings. Proba (Project for On Board Autonomy) is an ESA micro-satellite built by an industrial consortium led by the Belgian company Verhaert, launched in October 2001 and operated from ESA's Redu Ground Station (Belgium). Orbiting 600 km above the Earth's surface, Proba was designed to be a one-year technology demonstration mission of the Agency but has since had its lifetime extended as an Earth Observation mission. It now routinely provides scientists with detailed environmental images thanks to CHRIS - a Compact High Resolution Imaging Spectrometer developed by UK-based Sira Electro-Optics Ltd - one of the main payloads on the 100 kg spacecraft. Also aboard is the HRC, a small-scale monochromatic camera made up of a miniature Cassegrain telescope and a 1024 x 1024 pixel Charge-Coupled Device (CCD), as used in ordinary digital cameras, taking 25-km square images to a resolution of five metres. Proba boasts an 'intelligent' payload and has the ability to observe the same spot on Earth from a number of different angles and different combinations of optical and infra-red spectral bands. A follow-on mission, Proba-2, is due to be deployed by ESA around 2005. NASA's planned Kepler mission will monitor thousands of stars over a four-year period, searching for transiting planets. Kepler will be sensitive enough to detect Earth-sized worlds, if any exist, around several hundred nearby stars. These studies will then lead to the ambitious Terrestrial Planet Finder mission (2012-2015), which will examine extrasolar planets for signs of life. In December 2001, NASA selected the Kepler Mission, a project based at NASA Ames, as one of the next NASA Discovery missions. The Kepler Mission, scheduled for launch in 2006, will use a spaceborne telescope to search for Earth-like planets around stars beyond our solar system. A key criterion for such suitable planets would be whether they reside in habitable zones, or regions sometimes protected by gas giants but with temperate climates and liquid water. One NASA estimate says Kepler should discover 50 terrestrial planets if most of those found are about Earth's size, 185 planets if most are 30 percent larger than Earth, and 640 if most are 2.2 times Earth's size. In addition, Kepler is expected to find almost 900 giant planets close to their stars and about 30 giants orbiting at Jupiter-like distances from their parent stars. By the middle of the next decade, space telescopes should be capable of seeing any 'Earths' and investigating them to see if they are habitable, and, indeed, whether they actually support life. But future prospects of resolving much more than vague details are likely to remain a grand challenge.
<urn:uuid:6c85d3fb-ce46-435d-868c-3a0c5ba97fc1>
3.328125
1,455
Content Listing
Science & Tech.
37.08985
Physics breakthroughs aside, are there more conventional ways we can reach the stars? Centauri Dreams often cites (with admiration) Robert Forward’s work on beamed laser propulsion, which offers a key advantage: The spacecraft need carry no bulky propellant. Forward’s missions involved a 7200-GW laser to push a 785-ton unmanned probe on an interstellar mission. A manned attempt would involve a 75,000,000-GW laser and a vast vehicle of some 78,500 tons. The laser systems involved in such missions, while within our understanding of physics, are obviously well beyond our current engineering. Are there other ways to accomplish such an interstellar mission? One possibility is a hybrid system that combines what is known as Miniature Magnetic Orion technologies with beamed propulsion. The spacecraft would carry a relatively small amount of fission fuel, with the remainder of the propellant — in the form of particles of fissionable material with a deuterium/tritium core — being beamed to the spacecraft. In a recent paper in Acta Astronautica, Dana Andrews (Andrews Space) and Roger Lenard (Sandia National Laboratories) describe these technologies and their own recent studies of the Mini-Mag Orion concept. Mini-Mag Orion, of course, harkens back to the original Project Orion, an attempt to develop a spacecraft that would be driven by successive detonations of nuclear bombs. Mini-Mag Orion takes the concept in entirely new directions, reducing the size of the vehicle drastically by using magnetic compression technology, which Andrews and Lenard have studied using Sandia National Laboratories’ Z-Pinch Machine, the world’s largest operational pulse power device. Their experimental and analytical progress is outlined in the paper referenced below; they now propose a follow-on program to extend their experimental work. The originally envisioned spacecraft would compress small fuel pellets to high density using a magnetic field, directing plasma from the resultant explosion through a magnetic nozzle to create thrust. This highly efficient form of pulsed nuclear propulsion is here paired for interstellar purposes with beamed propulsion methods, taking advantage of a pellet stream that continuously fuels the departing spacecraft. The interstellar Mini-Mag Orion attains approximately ten percent of light speed using these methods, and as Andrews and Lenard show, the hybrid technologies here studied reduce power requirements from the departing star system and the timeframe over which acceleration and power have to be applied. Image: The Mini-Mag Orion interstellar concept, a hybrid starship accelerated by beamed pellet propellants, and decelerated with a magnetic sail. Credit: Roger Lenard/Dana Andrews; Andrews Space. With the bulk of the propellant being supplied externally, deceleration in the target star system is an obvious challenge, one met through the use of a magnetic sail. Here is the authors’ explanation of what is essentially ‘free’ deceleration: In 2003 both Andrews and Lenard postulated using a large superconducting ring to intercept charged particles in interstellar space to slow the spacecraft down from high speeds. Additionally, the solar wind emanating from a star system provides an additional source of charged particles that can interact with the magnetic field. Deceleration can actually begin a sizable distance from the target star system… [T]he first phase of the deceleration starts at 21600 AU with a two-turn superconducting carbon-nanotube-reinforced loop.
This loop captures the charged interstellar medium and deflects it to decelerate the spacecraft. This initial hoop size is 500 km in radius and carries 1,000,000 A of current. The spacecraft decelerates from .1 c to 6300 km/s by the time the spacecraft reaches 5000 AU. This will be quite a light show, so if there are any intelligent life forms with an observing system, they should be able to see the arrival. Quite a light show indeed! But note this: Even in the absence of a paradigm-changing physics breakthrough, Andrews and Lenard, as Forward before them, have demonstrated that there are ways to reach nearby stars with technologies we understand today and may be able to build within the century. Assume methods no more advanced than these coupled with advances in biology and life extension and it is conceivable that long-lived human crews could populate the galaxy in a series of 60 to 90 light year expansions, an interstellar diaspora that, the authors calculate, could occur every four to five thousand years. Work out the numbers and you get half the galaxy populated within a million years (Fermi’s question again resonates). The paper is Lenard and Andrews, “Use of Mini-Mag Orion and superconducting coils for near-term interstellar transportation,” Acta Astronautica 61 (2007), pp. 450-458.
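The deceleration figures quoted above are easy to sanity-check. A minimal sketch in Python (it assumes constant deceleration, which a drag device does not actually deliver, since thrust falls as the ship slows, so treat the result as order-of-magnitude only):

# Rough consistency check of the magsail braking figures quoted above.
AU_M = 1.495978707e11        # meters per astronomical unit
C = 2.99792458e8             # speed of light, m/s

v0 = 0.1 * C                 # arrival speed, m/s
v1 = 6.3e6                   # speed at 5000 AU, m/s
s = (21600 - 5000) * AU_M    # braking distance, m

a = (v0**2 - v1**2) / (2 * s)        # from v1^2 = v0^2 - 2*a*s
t_years = (v0 - v1) / a / 3.15576e7

print(f"mean deceleration ~{a:.3f} m/s^2 (~{a/9.81:.3f} g)")
print(f"braking time at that rate ~{t_years:.1f} years")
# ~0.17 m/s^2 and ~4 years -- gentle by crewed-spaceflight standards.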
<urn:uuid:91ccb7b1-755d-4ed3-b670-4d97fb4e69dc>
3.796875
990
Knowledge Article
Science & Tech.
31.175298
Carrying stuff on your head in the Himalayas

If you think hiking for an afternoon is a lot of work, imagine hiking for a week while carrying a pack from your head that weighs almost as much as you do. High in the Himalayan Mountains in the country of Nepal, men, women and sometimes children working as "porters" transport heavy loads of cargo on their backs up and down steep footpaths to out-of-the-way places. For example, porters carry goods for about 100 kilometers from the Kathmandu valley to a bazaar in the town of Namche, Nepal near Mount Everest. A group of scientists studied these porters and found that they use their heads to carry cargo while using a small amount of energy. The porters are even more energy efficient than African women who carry water and wood on their heads -- the only other group of people to be studied who use their heads to carry cargo. Both of these groups carry goods more efficiently than soldiers carrying backpacks. These findings appear in the 17 June, 2005 issue of the journal Science. Unlike western-style backpacks with two straps that loop around the shoulders and chest, the porters put their goods in a basket with a single strap that goes around the top of the head. This head strap, called a "namlo," connects to the basket called a "doko," which can be filled with goods. The porters carry a T-shaped stick or "tokma" that supports the basket when the porters rest. Guillaume Bastien and his colleagues at the Université Catholique de Louvain in Belgium calculated how much energy the people burned when they carried loads of different weights and walked at different speeds. To calculate how much energy the porters are using to carry their loads, the researchers measured how much oxygen they used and how much carbon dioxide they produced. The porters walk slowly for many hours each day, take frequent rests and carry the greatest loads possible. The scientists are not sure how the porters are able to work so efficiently.
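Converting measured oxygen consumption and carbon dioxide production into power is standard indirect calorimetry. Below is a Python sketch of one common approach, the Weir equation; the example VO2/VCO2 values are hypothetical, not the porters' actual data.

# Indirect calorimetry sketch: the Weir equation converts measured gas
# exchange into energy expenditure. Example inputs are illustrative only.
def weir_kcal_per_min(vo2_l_min: float, vco2_l_min: float) -> float:
    """Energy expenditure (kcal/min) from O2 uptake and CO2 output."""
    return 3.941 * vo2_l_min + 1.106 * vco2_l_min

vo2, vco2 = 1.2, 1.0          # L/min, plausible values for loaded walking
kcal_min = weir_kcal_per_min(vo2, vco2)
watts = kcal_min * 4184 / 60  # 1 kcal = 4184 J

print(f"{kcal_min:.2f} kcal/min ~= {watts:.0f} W of metabolic power")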
<urn:uuid:bf6929cf-81f7-4ab0-9f8a-b1dacf40f9c5>
3.125
452
Knowledge Article
Science & Tech.
51.180502
O'Reilly Book Excerpts: Programming Visual Basic .NET ADO.NET, Part 1 This excerpt is Chapter 8 from Programming Visual Basic .NET, published in December 2001 by O'Reilly. A Brief History of Universal Data Access Database management systems provide APIs that allow application programmers to create and access databases. The set of APIs that each manufacturer's system supplies is unique to that manufacturer. Microsoft has long recognized that it is inefficient and error prone for an applications programmer to attempt to master and use all the APIs for the various available database management systems. What's more, if a new database management system is released, an existing application can't make use of it without being rewritten to understand the new APIs. What is needed is a common database API. Microsoft's previous steps in this direction included Open Database Connectivity (ODBC), OLE DB, and ADO (not to be confused with ADO.NET). Microsoft has made improvements with each new technology. With .NET, Microsoft has released a new mechanism for accessing data: ADO.NET. The name is a carryover from Microsoft's ADO (ActiveX Data Objects) technology, but it no longer stands for ActiveX Data Objects--it's just ADO.NET. To avoid confusion, I will refer to ADO.NET as ADO.NET and to ADO as classic ADO. If you're familiar with classic ADO, be careful--ADO.NET is not a descendant, it's a new technology. In order to support the Internet evolution, ADO.NET is highly focused on disconnected data and on the ability for anything to be a source of data. While you will find many concepts in ADO.NET to be similar to concepts in classic ADO, it is not the same. When speaking of data access, it's useful to distinguish between providers of data and consumers of data. A data provider encapsulates data and provides access to it in a generic way. The data itself can be in any form or location. For example, the data may be in a typical database management system such as SQL Server, or it may be distributed around the world and accessed via web services. The data provider shields the data consumer from having to know how to reach the data. In ADO.NET, data providers are referred to as managed providers. A data consumer is an application that uses the services of a data provider for the purposes of storing, retrieving, and manipulating data. A customer-service application that manipulates a customer database is a typical example of a data consumer. To consume data, the application must know how to access one or more data providers. ADO.NET is comprised of many classes, but five take center stage:
- Connection: Represents a connection to a data source.
- Command: Represents a query or a command that is to be executed by a data source.
- DataSet: Represents data. The DataSet can be filled either from a data source (using a DataAdapter object) or dynamically.
- DataAdapter: Used for filling a DataSet from a data source.
- DataReader: Used for fast, efficient, forward-only reading of a data source.
With the exception of DataSet, these five names are not the actual classes used for accessing data sources. Each managed provider exposes classes specific to that provider. For example, the SQL Server managed provider exposes the SqlConnection, SqlCommand, SqlDataAdapter, and SqlDataReader classes. The DataSet class is used with all managed providers. Any data-source vendor can write a managed provider to make that data source available to ADO.NET data consumers. Microsoft has supplied two managed providers in the .NET Framework: SQL Server and OLE DB.
The examples in this chapter are coded against the SQL Server managed provider, for two reasons. The first is that I believe that most programmers writing data access code in Visual Basic .NET will be doing so against a SQL Server database. Second, the information about the SQL Server managed provider is easily transferable to any other managed provider.
<urn:uuid:b619889e-d127-482f-bfa8-f0385a710ca2>
2.984375
839
Truncated
Software Dev.
43.136288
Blue Marlins, Makaira nigricans

Taxonomy: Kingdom Animalia; Phylum Chordata; Class Actinopterygii; Order Perciformes; Family Istiophoridae; Species Makaira nigricans

Description & Behavior
Largest of the Atlantic marlins, blue marlins, Makaira nigricans (Lacépède, 1802), aka Atlantic blue marlins, billfishes, Cuban black marlins, marlins, ocean gars, ocean guards, and squadrons, commonly reach 2.9 m with a maximum of 5 m and maximum weights between 636 - 820 kg (or 540 - 1,800 kg depending upon the source). Males, however, grow much more slowly than females and do not generally exceed 136 kg; all trophy fish are females. The blue marlin's body is cobalt blue on top, with a silvery white belly, and their upper jaw is famously elongated like a spear. Their tails are high and crescent-shaped and their dorsal fins are pointed at the front end. Their body is covered in embedded scales which end in one or two sharp points. Their lateral line is reticulated, or interwoven like a net, but this characteristic is difficult to see in large specimens. Maturity is reached at about 80 cm in males (40 kg) and 50 cm in females (55 kg).

World Range & Habitat
A highly migratory species, blue marlins are usually found offshore in deep blue tropical or temperate waters. They are known to make regular seasonal migrations, moving toward the equator in winter and away again in summer, with some migrations spanning the entire Atlantic. Some scientists recognize Makaira nigricans and Makaira mazara as two different species based on differences in their lateral line. Many, however, lump the two together as a single species occurring in the Atlantic, Pacific and Indian oceans.

Feeding Behavior (Ecology)
The blue marlin's prey includes octopuses, squid and pelagic fishes such as blackfin tuna and frigate mackerel. They hunt during the daytime, rarely gathering in schools, preferring to hunt alone. Blue marlins have been reported to use their long, sharp bills to slice or stun prey. Very little is known about the spawning of blue marlins except that they are external fertilizers, open water egg scatterers, and they spawn in the eastern Atlantic during the summer. Their eggs are transparent and spherical and measure 1 mm in diameter.

Conservation Status & Comments
A very popular sport fish due to its challenging size and strength — they are also one of the world's fastest fishes — the blue marlin is also marketed for human consumption fresh or frozen. Blue marlins are under intense pressure from longline fishing. In the Caribbean region alone, Japanese and Cuban fishermen annually take over a thousand tons. All vessels within 200 miles (320 km) of the U.S. coastline are required to release any billfish caught. However, the survival rate of released fish is low because of damage during capture. Atlantic blue marlins have not (yet) been evaluated as to whether they are threatened or endangered by the International Union for Conservation of Nature (IUCN). In 2010, Greenpeace International added the Atlantic blue marlin to its seafood red list. "The Greenpeace International seafood red list is a list of fish that are commonly sold in supermarkets around the world, and which have a very high risk of being sourced from unsustainable fisheries."
<urn:uuid:b5ab976e-34a7-416f-85c1-f522060f1221>
3.09375
958
Knowledge Article
Science & Tech.
33.54297
Iodine is separated from the other radioactive species by extraction into carbon tetrachloride. Before this extraction is made, a complete interchange must be effected between the added iodine carrier and the tracer iodine present in the sample. The carrier iodide (I−) is oxidized to periodate (IO4−) in alkaline solution by NaOCl. The IO4− is reduced to I2 by hydroxylamine hydrochloride (NH2OH•HCl) and extracted into CCl4. The I2 is back-extracted by reduction to I− with NaHSO3. Iodide is then precipitated as silver iodide for chemical yield measurement and radioactivity counting.

Reagents:
- CCl4 (Note 1)
- Ethyl alcohol, 95%
- Iodine carrier, 10.0 mg I−/mL (Note 2)
- NH2OH•HCl, 1 M
- NaHSO3, 0.5 M
- AgNO3, 0.1 M

Procedure: In a 250 mL separatory funnel containing a 100 mL water sample (Note 3), add 2 mL of I− carrier solution, 2 mL of NaOH, and 4 mL of NaOCl. Shake the funnel for 2 minutes. Add 50 mL of CCl4, 4 mL of 1 M NH2OH•HCl and 3 mL of conc. HNO3. Shake the funnel for 2 minutes, and allow the phases to separate.
<urn:uuid:8b9968fc-48f6-41da-8186-a5691f0e556d>
2.78125
298
Tutorial
Science & Tech.
67.666439
There is another way, but it may be difficult to make the measurements. There is a device called a spherometer for measuring the curvature of a surface, but you probably do not have one of those lying around. You could accomplish what it does if you could make very accurate thickness measurements with a micrometer or vernier caliper, and if your lens is symmetrical (same radii on both sides). A very accurate measurement of the thickness of the lens in the middle, and at a known distance from the center (25 mm), can be used to find the radius of curvature. Most thin lenses are not knife sharp on the edge, so you might want to move in a bit and make an actual measurement just inside the radius. Is this a possibility?
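To make the suggestion concrete: for a symmetric lens, half the drop in thickness between the center and a point a distance r from the center is the sagitta of one face, and the circle sagitta relation then gives the radius. A Python sketch with hypothetical measurements:

# Radius of curvature from two thickness measurements on a symmetric lens.
# Assumes both faces have the same radius and the measurements lie on a
# common diameter. The example numbers are made up for illustration.
def radius_from_thickness(t_center_mm: float, t_at_r_mm: float, r_mm: float) -> float:
    sag = (t_center_mm - t_at_r_mm) / 2.0    # each face contributes half the drop
    return (r_mm**2 + sag**2) / (2.0 * sag)  # circle sagitta relation

# e.g. 6.00 mm thick at the center, 4.90 mm at 25 mm from the center:
print(f"R ~ {radius_from_thickness(6.00, 4.90, 25.0):.1f} mm")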
<urn:uuid:9ae4d36f-bc46-43c8-b116-e9eb64022136>
2.78125
159
Q&A Forum
Science & Tech.
45.427727
Just over a year ago, the spacecraft Huygens made a 147-minute descent through the dusty atmosphere of Saturn's mammoth moon Titan—it was the most distant touchdown ever by a human-made spacecraft. The landmark trip provided the first views of the surface of the mysterious moon, which is larger than the planet Mercury. When spliced together and sped up, the images sent back from Huygens' descent form a four-minute movie of the landing. Once the craft makes its way through all the dust, Titan looks like a big dusty, orange mountain range. Visible in the picture are drainage channels produced from methane rain and a system of sand dunes—not at all unlike those on Earth. At the conclusion of its descent, Huygens lands on the shoreline of a dry river basin. The surface of Titan, which is a frigid -180 degrees C, is covered with rocks made of water ice. Watch the silent video (with lots of telemetry details) or the narrated version. Meanwhile, new calculations show that a full day on nearby Saturn is actually eight minutes longer than previously thought, at a very quick 10 hours, 47 minutes and 6 seconds long. These new figures could help scientists figure out just what makes the gaseous giant work.

[Image captions: How Cassini/Huygens works (NASA/JPL); Natural color image of Saturn, with tiny moon Enceladus right of center (NASA/JPL/Space Science Institute); Sand dunes on Titan's surface (NASA/JPL top, NASA/JSC bottom); Saturn's rings, as seen from the Cassini spacecraft.]
<urn:uuid:18670e96-bee4-446e-a65a-af2e34cd5b8b>
4.28125
339
Knowledge Article
Science & Tech.
56.855
Gases are the state of matter with the greatest amount of energy. Pressure is created by gas particles running into the wall of the container. Pressure is measured in many units: 1 atm = 101300 Pa = 101.3 kPa = 760 mm Hg = 14.7 psi. Atmospheric pressure is the pressure due to the layers of atmosphere above us.

Kinetic Molecular Theory
The Kinetic Molecular Theory has several assumptions for ideal gases.
- Gases are made of atoms or molecules
- Gas particles are in rapid, random, constant motion
- The temperature is proportional to the average kinetic energy
- Gas particles are not attracted nor repelled from each other
- All gas particle collisions are perfectly elastic (they leave with the same energy they collided with)
- The volume of gas particles is so small compared to the space between them that the volume of the particle is insignificant

Real gases do have a volume (that takes up space which other particles cannot occupy) and they do have attractions/repulsions from one another as well as inelastic collisions. The KMT is used to understand gas behavior. Pressure and volume are inversely proportional. Pressure and temperature are directly proportional. Pressure and number of particles are directly proportional. An expandable container will expand or contract so that the internal and external pressures are the same. Non-expandable containers will explode or implode if the difference in the pressures is too great for the container to withstand.

Symbols for all gas laws: P = pressure; V = volume; n = moles; T = temperature (in Kelvin); R = gas constant; "a" and "b" = correction factors for real gases.

Combined Gas Law: P1V1/(n1T1) = P2V2/(n2T2). When something is held constant, it cancels out.
Dalton's Law of Partial Pressure: Ptotal = P1 + P2 + P3 + ...
Mole fraction: Xi = ni/ntotal
Partial pressure and mole fraction: Pi = Xi * Ptotal
Ideal Gas Law: PV = nRT
Ideal Gas Law with Molar Mass: PV = (m/M)RT, where m is the sample mass and M is the molar mass
Ideal Gas Law with Density: PM = dRT, where d is the gas density
Real Gas Law (van der Waals): (P + an^2/V^2)(V - nb) = nRT

Use the molar volume of a gas at STP (1 mole of any gas at STP = 22.4 L) to convert between moles and liters of a gas in stoichiometry. Then use the appropriate gas law to find the volume at non-STP conditions.

Diffusion and Effusion
Diffusion is the rate at which a gas travels through a container. Effusion is the rate at which gas escapes through a tiny hole in the container. Both are inversely proportional to the square root of the molar mass (heavier molecules travel slower). Graham's Law: rate1/rate2 = sqrt(M2/M1)
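Two of these laws, worked numerically in a short Python sketch (the specific gases are illustrative choices, not from the text):

import math

R = 0.08206  # L*atm/(mol*K)

# Ideal gas law: volume of 1 mol at STP (273.15 K, 1 atm) -> ~22.4 L
n, T, P = 1.0, 273.15, 1.0
V = n * R * T / P
print(f"V = {V:.1f} L (the 22.4 L molar volume quoted above)")

# Graham's law: effusion rate of H2 (M = 2.016) relative to O2 (M = 32.00)
rate_ratio = math.sqrt(32.00 / 2.016)
print(f"H2 effuses ~{rate_ratio:.2f}x faster than O2")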
<urn:uuid:f1ae121f-1518-4b6c-91eb-03d481573459>
4.15625
550
Knowledge Article
Science & Tech.
42.521429
May 18, 2012 A hundred million years ago, everything looked quite different. There were dinosaurs on land and gigantic creatures in the sea, and the continents were arranged quite differently. But one thing would be familiar to the modern eye: pollination. Recently, scientists found pollen grains on tiny insects preserved in amber from the Cretaceous period, providing the oldest known record of pollination. Today, pollination is all around us. Over 80 percent of plant species depend on insects to transfer little nuggets of pollen from the male to female parts of flowers for reproduction. Many flowers have specialized parts to attract insects to them, and many insects have corresponding hairs adapted to gathering and carrying pollen. But it wasn't always that way—pollination has evolved over millions of years, and probably began at the beginning of the Cretaceous, perhaps even with insects like these. The tiny insects here belong to a group called thrips—critters with fringed wings that are less than two millimeters long. Six of the little bugs were preserved in the amber that the researchers found, and on them hundreds of pollen grains, probably from a cycad or ginkgo tree. The work was published in Proceedings of the National Academy of Sciences. Finding these thrips sheds some light on how early insects were pollinating plants, and why. The thrips had little hairs on their bodies, perfect for collecting pollen, but they didn't evolve those hairs just to help plants out. Instead, the researchers think, the species fed their larvae with the pollen, and those larvae probably lived in the female parts of ginkgo and cycad plants. So the insects would go to the male parts of the plants, gather the pollen grains, and bring them back to their larvae on the female organs. In the process, some of that pollen fertilized the trees, setting off a chain of evolutionary adaptations in insects and flowers that continues to this day.
<urn:uuid:aed5a532-4dd7-413e-82b3-d841bfdc3988>
4.0625
485
Truncated
Science & Tech.
47.683348
Karatsuba algorithm with Excel VBA 15.04.2009

Someone posted a new game in my favorite German office forum: "Let's multiply, starting with 2, the result with itself and so on". Have you ever tried to multiply large numbers in Excel? Well, Excel uses a number precision of 15 digits; beyond that, numbers are rounded. Thus, formulas are not suitable. Can VBA do the job? Yes, but only partially, since the data types are restricted too. In addition, we also have to consider the computing time for multiplying large numbers. If you search the Internet, you'll quickly find some interesting procedures for multiplying large numbers. One of them is the "Karatsuba algorithm", which significantly reduces the cost of multiplying two n-digit numbers. The algorithm replaces some multiplications by additions. And how does it work? The basic step of Karatsuba's algorithm is a formula that allows us to compute the product of two large numbers using three multiplications of smaller numbers plus some additions and digit shifts. The Karatsuba algorithm is an example of a "divide and conquer" algorithm, which works by recursively breaking down a problem into two or more sub-problems of the same type, until these become simple enough to be solved directly.

Let's imagine that we want to multiply the numbers 123456789 and 98765. The result will be 12193209765585. First, we note that the two numbers have different numbers of digits (9 and 5 digits). So, let's first prepend some zeros to the two numbers:

Number_1 = 0123456789
Number_2 = 0000098765

As you surely noticed, I also prepended one zero to Number_1. Both now have 10 digits, and this number can be easily divided by 2. Now, we split the numbers into four 5-digit numbers:

Number_1_1 = 01234 and Number_1_2 = 56789
Number_2_1 = 00000 and Number_2_2 = 98765

If we assume that N equals 5, we can use the following rule to recover our base numbers:

Number_1 = Number_1_1 * 10 ^ N + Number_1_2
Number_2 = Number_2_1 * 10 ^ N + Number_2_2

In our example:

Number_1 = 01234 * 10 ^ 5 + 56789 = 0123456789
Number_2 = 00000 * 10 ^ 5 + 98765 = 0000098765

The algorithm initially performs three sub-products, which obey the following rule:

P_1 = Number_1_1 * Number_2_1
P_2 = Number_1_2 * Number_2_2
P_3 = (Number_1_1 + Number_1_2) * (Number_2_1 + Number_2_2)

In our example:

P_1 = 01234 * 00000 = 0
P_2 = 56789 * 98765 = 5608765585
P_3 = (01234 + 56789) * (00000 + 98765) = 5730641595

At this point, we can use the same rule for calculating P_1, P_2 and P_3. A computer program can perform this task recursively. At the end we should obtain small numbers which we can normally multiply. What's missing is the calculation rule to re-assemble the numbers:

Result = P_1 * 10 ^ 2N + (P_3 - P_2 - P_1) * 10 ^ N + P_2

You will get exactly 12193209765585 for our example, which corresponds to the desired result.

Implementation in VBA
As mentioned above, we can't use the data types Long or Double for multiplying large numbers. So we'll have to use strings and implement the addition and subtraction operators ourselves. Let's have a look at the code below:

Public Function mlfpKaratsuba(First As String, Second As String) As String
    Dim a As String
    Dim b As String
    Dim c As String
    Dim d As String
    Dim r As String
    Dim x As String
    Dim y As String
    Dim z As String
    Dim n As Long
    Dim p As Long
    ' Length...
    n = mlfhMax(Len(First), Len(Second))
    n = n + n Mod 2
    n = n / 2
    ' Check...
    If n < 2 Then
        ' Return...
        mlfpKaratsuba = CStr(CLng(CStr(0) & First) * CLng(CStr(0) & Second))
        ' Exit...
        Exit Function
    End If
    ' Decompose...
    If 2 * n > Len(First) Then
        a = Left(String(2 * n - Len(First), CStr(0)) & First, n)
        b = Right(First, n)
    Else
        a = Left(First, n)
        b = Right(First, n)
    End If
    If 2 * n > Len(Second) Then
        c = Left(String(2 * n - Len(Second), CStr(0)) & Second, n)
        d = Right(Second, n)
    Else
        c = Left(Second, n)
        d = Right(Second, n)
    End If
    ' Recurse...
    x = mlfpKaratsuba(a, c)
    y = mlfpKaratsuba(b, d)
    z = mlfpKaratsuba(mlfhAdd(a, b), mlfhAdd(c, d))
    ' Calculate...
    a = x & String(2 * n, CStr(0))
    b = mlfhAdd(y, x)
    b = mlfhSubstract(z, b)
    b = b & String(n, CStr(0))
    ' Result...
    r = mlfhAdd(a, b)
    r = mlfhAdd(r, y)
    ' Return...
    mlfpKaratsuba = r
End Function

In a first step, we calculate the number of digits of our two parameters 'First' and 'Second', which represent the numbers to multiply. Then we compute n, which must be divisible by two. After prepending an appropriate number of zeros to our parameters, we recursively call the function mlfpKaratsuba() to calculate P_1, P_2 and P_3 (x, y and z in our code). The functions mlfhAdd() and mlfhSubstract() respectively add and subtract numbers represented as strings. To calculate the result, we must remember that the calculation also contains string additions and subtractions. The multiplications by powers of 10 ^ N can simply be done by appending zeros. You can download our sample project file from our website. The code writes the results in text files located in the same directory as the workbook.
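For readers who want to double-check results outside Excel, here is a minimal Python rendering of the same recursion (an illustrative sketch; Python's integers are arbitrary-precision, so the string arithmetic helpers aren't needed):

def karatsuba(x: int, y: int) -> int:
    if x < 10 or y < 10:
        return x * y
    n = max(len(str(x)), len(str(y)))
    n += n % 2                   # round the digit count up to even
    half = n // 2
    a, b = divmod(x, 10**half)   # high and low halves of x
    c, d = divmod(y, 10**half)   # high and low halves of y
    p1 = karatsuba(a, c)
    p2 = karatsuba(b, d)
    p3 = karatsuba(a + b, c + d)
    return p1 * 10**(2 * half) + (p3 - p1 - p2) * 10**half + p2

assert karatsuba(123456789, 98765) == 123456789 * 98765
print(karatsuba(123456789, 98765))  # 12193209765585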
<urn:uuid:7c0e2bcb-cbaf-4582-83b8-bb7fdb0bccc7>
3.546875
1,468
Tutorial
Software Dev.
77.45216
Figure 3. Relationship between soil pH and relative plant nutrient availability (a widening bar equates to greater availability). Where nutrients are shown interlocking, they combine at that pH to form insoluble compounds that reduce phosphate solubility. Credit: Taken from R.W. Miller and D.T. Gardiner. Soils in Our Environment. Prentice Hall, 2001.
<urn:uuid:1a201618-78b2-4c8f-9586-5f284aa23ac2>
3.140625
76
Knowledge Article
Science & Tech.
41.832858
Saturday, January 12, 2013 How Erlang does scheduling In this, I describe why Erlang is different from most other language runtimes. I also describe why it often forgoes throughput for lower latency. TL;DR - Erlang is different from most other language runtimes in that it targets different values. This describes why it often seems to perform worse if you have few processes, but well if you have many. From time to time the question of Erlang scheduling gets asked by different people. While this is an abridged version of the real thing, it can act as a way to describe how Erlang operates its processes. Do note that I am taking Erlang R15 as the base point here. If you are a reader from the future, things might have changed quite a lot—though it is usually fair to assume things only got better, in Erlang and other systems. Toward the operating system, Erlang usually has a thread per core you have in the machine. Each of these threads runs what is known as a scheduler. This is to make sure all cores of the machine can potentially do work for the Erlang system. The cores may be bound to schedulers, through the +sbt flag, which means the schedulers will not "jump around" between cores. It only works on modern operating systems, so OSX can't do it, naturally. It means that the Erlang system knows about processor layout and associated affinities, which is important due to caches, migration times and so on. Often the +sbt flag can speed up your system. And at times by quite a lot. The +A flag defines a number of async threads for the async thread pool. This pool can be used by drivers for blocking operations, such that the schedulers can still do useful work while one of the pool threads is blocked. Most notably the thread pool is used by the file driver to speed up file I/O - but not network I/O. While the above describes a rough layout towards the OS kernel, we still need to address the concept of an Erlang (userland) process. When you call spawn(fun worker/0) a new process is constructed, by allocating its process control block in userland. This usually amounts to some 600+ bytes and it varies between 32- and 64-bit architectures. Runnable processes are placed in the run-queue of a scheduler and will thus be run later when they get a time-slice. Before diving into a single scheduler, I want to describe a little bit about how migration works. Every once in a while, processes are migrated between schedulers according to a quite intricate process. The aim of the heuristic is to balance load over multiple schedulers so all cores get utilized fully. But the algorithm also considers if there is enough work to warrant starting up new schedulers. If not, it is better to keep the scheduler turned off as this means the thread has nothing to do. And in turn this means the core can enter power save mode and get turned off. Yes, Erlang conserves power if possible. Schedulers can also work-steal if they are out of work. For the details of this, see the paper referenced at the end of this post. IMPORTANT: In R15, schedulers are started and stopped in a "lagged" fashion. What this means is that Erlang/OTP recognizes that starting a scheduler or stopping one is rather expensive so it only does this if really needed. Suppose there is no work for a scheduler. Rather than immediately taking it to sleep, it will spin for a little while in the hope that work arrives soon. If work arrives, it can be handled immediately with low latency. On the other hand, this means you cannot use tools like top(1) or the OS kernel to measure how efficient your system is executing.
You must use the internal calls in the Erlang system. Many people were incorrectly assuming that R15 was worse than R14 for exactly this reason. Each scheduler runs two types of jobs: process jobs and port jobs. These are run with priorities like in an operating system kernel and are subject to the same worries and heuristics. You can flag processes to be high-priority, low-priority and so on. A process job executes a process for a little while. A port job considers ports. To the uninformed, a "port" in Erlang is a mechanism for communicating with the outside world. Files, network sockets, pipes to other programs are all ports. Programmers can add "port drivers" to the Erlang system in order to support new types of ports, but that does require writing C code. One scheduler will also run polling on network sockets to read in new data from those. Both processes and ports have a "reduction budget" of 2000 reductions. Any operation in the system costs reductions. This includes function calls in loops, calling built-in-functions (BIFs), garbage collecting heaps of that process[n1], storing/reading from ETS, and sending messages (the size of the recipient's mailbox counts; large mailboxes are more expensive to send to). This is quite pervasive, by the way. The Erlang regular expression library has been modified and instrumented even though it is written in C code. So when you have a long-running regular expression, you will be counted against it and preempted several times while it runs. Ports as well! Doing I/O on a port costs reductions, sending distributed messages has a cost, and so on. Much time has been spent to ensure that any kind of progress in the system has a reduction cost[n2]. In effect, this is what makes me say that Erlang is one of a few languages that actually does preemptive multitasking and gets soft-realtime right. Also it values low latency over raw throughput, which is not common in programming language runtimes. To be precise, preemption means that the scheduler can force a task off execution. Everything based on cooperation cannot do this: Python twisted, Node.js, LWT (Ocaml) and so on. But more interestingly, neither Go (golang.org) nor Haskell (GHC) is fully preemptive. Go only switches context on communication, so a tight loop can hog a core. GHC switches upon memory allocation (which admittedly is a very common occurrence in Haskell programs). The problem in these systems is that hogging a core for a while—one might imagine doing an array-operation in both languages—will affect the latency of the system. This leads to soft-realtime, which means that the system will degrade if we fail to meet a timing deadline. Say we have 100 processes on our run-queue. The first one is doing an array-operation which takes 50ms. Now, in Go or Haskell/GHC[n3] this means that tasks 2-100 will take at least 50ms. In Erlang, on the other hand, task 1 would get 2000 reductions, which is sub 1ms. Then it would be put in the back of the queue and tasks 2-100 would be allowed to run. Naturally this means that all tasks are given a fair share. Erlang is meticulously built around ensuring low-latency soft-realtime properties. The reduction count of 2000 is quite low and forces many small context switches. It is quite expensive to break up long-running BIFs so they can be preempted mid-computation. But this also ensures an Erlang system tends to degrade in a graceful manner when loaded with more work.
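To see why a fixed reduction budget matters for latency, here is a toy round-robin simulation in Python (an illustration of the idea only, not how the BEAM is implemented):

from collections import deque

BUDGET = 2000  # reductions per time-slice, as described above

def run(tasks):
    """tasks: dict name -> total reductions needed. Returns (name, finish_time) pairs."""
    queue = deque(tasks.items())
    clock, finished = 0, []
    while queue:
        name, remaining = queue.popleft()
        work = min(BUDGET, remaining)
        clock += work
        if remaining > work:
            queue.append((name, remaining - work))  # preempt, requeue at the back
        else:
            finished.append((name, clock))
    return finished

# One 'array op' worth 100k reductions plus 99 small 1k-reduction tasks:
tasks = {"big": 100_000, **{f"t{i}": 1_000 for i in range(99)}}
for name, t in run(tasks)[:3]:
    print(name, "done at reduction-time", t)
# The small tasks all finish within roughly one round; 'big' finishes last
# instead of making everyone wait 100k reductions up front.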
It also means that for a company like Ericsson, where low latency matters, there is no other alternative out there. You can't magically take another throughput-oriented language and obtain low latency. You will have to work for it. And if low latency matters to you, then frankly not picking Erlang is in many cases an odd choice. "Characterizing the Scalability of Erlang VM on Many-core Processors" http://kth.diva-portal.org/smash/record.jsf?searchId=2&pid=diva2:392243 [n1] Process heaps are per-process so one process can't affect the GC time of other processes too much. [n2] This section is also why one must beware of long-running NIFs. They do not per default preempt, nor do they bump the reduction counter. So they can introduce latency in your system. [n3] Imagine a single core here; multicore sort of "absorbs" this problem up to core-count, but the problem still persists. (Smaller edits made to the document at Mon 14th Jan 2013)
<urn:uuid:8581de6d-7340-4aae-a02a-23c6183f4a36>
2.890625
1,921
Personal Blog
Software Dev.
61.119031
Section 3: Newton's Law of Universal Gravitation An underlying theme in science is the idea of unification—the attempt to explain seemingly disparate phenomena under the umbrella of a common theoretical framework. The first major unification in physics was Sir Isaac Newton's realization that the same force that caused an apple to fall at the Earth's surface—gravity—was also responsible for holding the Moon in orbit about the Earth. This universal force would also act between the planets and the Sun, providing a common explanation for both terrestrial and astronomical phenomena. Figure 5: Newton's law of universal gravitation. Source: © Blayne Heckel. Newton's law of universal gravitation states that every two particles attract one another with a force that is proportional to the product of their masses and inversely proportional to the square of the distance between them. The proportionality constant, denoted by G, is called the universal gravitational constant. We can use it to calculate the minute size of the gravitational force inside a hydrogen atom. If we assign m1 the mass of a proton, 1.67 x 10^-27 kilograms, and m2 the mass of an electron, 9.11 x 10^-31 kilograms, and use 5.3 x 10^-11 meters as the average separation of the proton and electron in a hydrogen atom, we find the gravitational force to be 3.6 x 10^-47 Newtons. This is approximately 39 orders of magnitude smaller than the electromagnetic force that binds the electron to the proton in the hydrogen atom. Local gravitational acceleration The law of universal gravitation describes the force between point particles. Yet, it also accurately describes the gravitational force between the Earth and Moon if we consider both bodies to be points with all of their masses concentrated at their centers. The fact that the gravitational force from a spherically symmetric object acts as if all of its mass is concentrated at its center is a property of the inverse square dependence of the law of universal gravitation. If the force depended on distance in any other way, the resulting behavior would be much more complicated. A related property of an inverse square law force is that the net force on a particle inside of a spherically symmetric shell vanishes. Figure 6: GRACE mission gravity map of the Earth. Source: © Courtesy of The University of Texas Center for Space Research (NASA/DLR Gravity Recovery and Climate Experiment). Just as we define an electric field as the electric force per unit charge, we define a gravitational field as the gravitational force per unit mass. The units of a gravitational field are the same units as acceleration, meters per second squared (m/s^2). For a point near the surface of the Earth, we can use Newton's law of universal gravitation to find the local gravitational acceleration, g. If we plug in the mass of the Earth for one of the two masses and the radius of the Earth for the separation between the two masses, we find that g is 9.81 m/s^2. This is the rate at which an object dropped near the Earth's surface will accelerate under the influence of gravity. Its velocity will increase by 9.8 meters per second, each second. Unlike big G, the universal gravitational constant, little g is not a constant. As we move up further from the Earth's surface, g decreases (by 3 parts in 10^5 for each 100 meters of elevation).
But it also decreases as we descend down a borehole, because the mass that influences the local gravitational field is no longer that of the entire Earth but rather the total mass within the radius to which we have descended. Even at constant elevation above sea level, g is not a constant. The Earth's rotation flattens the globe into an oblate spheroid; the radius at the equator is nearly 20 kilometers larger than at the poles, leading to a 0.5 percent larger value for g at the poles than at the equator. Irregular density distributions within the Earth also contribute to variations in g. Scientists can use maps of the gravitational field across the Earth's surface to infer what structures lie below the surface. Gravitational fields and tides Every object in the universe creates a gravitational field that pervades the universe. For example, the gravitational acceleration at the surface of the Moon is about one-sixth of that on Earth's surface. The gravitational field of the Sun at the position of the Earth is 5.9 x 10^-3 m/s^2, while that of the Moon at the position of the Earth is 3.3 x 10^-5 m/s^2, 180 times weaker than that of the Sun. Figure 7: Plot of the tidal Water Level (WL) at Port Townsend, Washington. Source: © NOAA. The tides on Earth result from the gravitational pull of the Moon and Sun. Despite the Sun's far greater gravitational field, the lunar tide exceeds the solar tide. That's because it is not the gravitational field itself that produces the tides but its gradient—the amount the field changes from Earth's near side to its far side. If the Sun's gravitational field were uniform across the Earth, all points on and within the Earth would feel the same force, and there would be no relative motion (or tidal bulge) between them. However, because the gravitational field decreases as the inverse of the distance squared, the side of the Earth facing the Sun or Moon feels a larger field and the side opposite feels a smaller field than the field acting at Earth's center. The result is that water (and the Earth itself to a lesser extent) bulges toward the Moon or Sun on the near side and away on the far side, leading to tides twice a day. Because the Moon is much closer to Earth than the Sun, its gravitational gradient between the near and far sides of the Earth is more than twice as large as that of the Sun.
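The figures quoted in this section are easy to reproduce. A short Python sketch (standard rounded constants; the tidal comparison uses the leading-order gradient 2GMR/d^3):

G = 6.674e-11                        # m^3 kg^-1 s^-2

# 1) Gravity inside a hydrogen atom (~3.6 x 10^-47 N)
m_p, m_e, r = 1.67e-27, 9.11e-31, 5.3e-11
print(f"F_grav = {G * m_p * m_e / r**2:.2e} N")

# 2) Local gravitational acceleration (~9.81 m/s^2)
M_earth, R_earth = 5.97e24, 6.371e6  # kg, m
print(f"g = {G * M_earth / R_earth**2:.2f} m/s^2")

# 3) Tidal (gradient) accelerations across the Earth
M_sun, d_sun = 1.989e30, 1.496e11    # kg, m
M_moon, d_moon = 7.35e22, 3.844e8    # kg, m
tide = lambda M, d: 2 * G * M * R_earth / d**3
print(f"lunar/solar tide ratio = {tide(M_moon, d_moon) / tide(M_sun, d_sun):.2f}")
# ~2.2 -- the lunar gradient is "more than twice as large," as the text says.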
<urn:uuid:87794702-6740-4320-974e-d4706d505580>
3.734375
1,206
Knowledge Article
Science & Tech.
50.250124
Recall that if we have y = f(g(x)) then y' = f'(g(x)) g'(x). And if y = f(g(h(x))) then y' = f'(g(h(x))) g'(h(x)) h'(x). So to facilitate this process a bit I'm going to rewrite this as: We are going to be using the product rule: y = f(x)g(x) --> y' = f'(x)g(x) + f(x)g'(x). Then we use the chain rule on the individual factors. After a bit of simplifying I get: Now. At the point (2, 4) the slope has a value of -93/2. Thus the tangent line will be y - 4 = (-93/2)(x - 2), since it has slope -93/2 and passes through the point (2, 4). Thus the tangent line is y = (-93/2)x + 97. I'll use the quotient rule here, y = u(x)/v(x) --> y' = (u'(x)v(x) - u(x)v'(x))/v(x)^2, then the chain rule for each factor: After a bit of simplification: Now for the first tangent line. We want the tangent at the point . The slope will be: So the first line has the form . Inserting the point into this I get that . So the first line is: . In a similar fashion the line tangent to the function at the point is: . To find the point of intersection we need to solve the system of equations: Use your favorite method here. It's ugly and if you want help I'll provide it. Until then I'll simply state the solution: The point of intersection is: . 1) Check my math. This is sufficiently complicated I may have made a small error somewhere. I'd simply graph the problem, but my nicer graphing software isn't available. (On my TI-92 it seems to check out, but the resolution is bad.) 2) Whoever came up with this one is a sadist! Note: A graphing calculator would be very handy here. If you don't have one I would recommend using decimals instead of trying to combine the fractions by hand. (This is one of the rare places I suggest that.) We are looking for a place where x and y are the same for both equations. Since both are solved for y, we can say that the right-hand sides of both equations are equal: Then use either of the original equations to find y. I'll use the first:
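The specific functions in this thread didn't survive with the original markup, but the tangent-line recipe itself is mechanical and easy to check with a computer algebra system. A Python/SymPy sketch with a stand-in function (the f below is hypothetical, chosen only because it exercises the quotient and chain rules):

import sympy as sp

x = sp.symbols("x")
f = (x**2 + 1) / sp.sqrt(3 * x - 2)   # stand-in function, not the original problem
a = 2                                 # x-coordinate of the tangent point

slope = sp.diff(f, x).subs(x, a)      # derivative evaluated at x = a
point = f.subs(x, a)                  # y-value at x = a
tangent = sp.simplify(slope * (x - a) + point)
print("tangent line: y =", tangent)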
<urn:uuid:f5a31571-11d9-4caf-99c1-c87de1989fd3>
2.75
479
Tutorial
Science & Tech.
79.353775
"Exploring Materials - Nano Gold" is a hands-on activity in which visitors discover that nanoparticles of gold can appear red, orange or even blue. They learn that a material can act differently when it’s nanometer-sized. "Exploring Tools - Mitten Challenge" is a hands on activity in which visitors build a Lego® structure while wearing mittens. They learn that it is difficult to build small things when your tools are too big. "Exploring Size - Memory Game" is a card game exploring the different size scales - macro, micro and nano - objects within these different scales and the way these objects are measured. Visitors compete to find matching pairs of cards. Visitors will engage in a variety of survey type questions focusing on different aspects of nanotechnology. For each question posed, they will be provided short descriptions about the possible options. They will then place their vote using a marble in the container labeled with their selection. Throughout the day the public will be able to visualize how others have answered the same question by looking at the quantity of marbles in each container. Museum staff can use the data to chart trends in public knowledge about nanotechnology. "Exploring Products - Sunblock" is a hands-on activity comparing sunblock containing nanoparticles to ointment. Visitors learn how some sunblocks that rub in clear contain nanoparticles that block harmful rays from the sun. Scientist Speed Dating is a facilitated, yet informal and high-energy, social activity to encourage a large group of people to speak with one another, ask questions, and learn about specific areas of research and practice within the field of nanoscale science and engineering, as well as the related societal and ethical implications of work in this field. Nano Around the World is a card game designed to get participants to reflect on the potential uses of nanotechnology across the globe. Players each receive three cards: a character card, a current technology card, and a future technology card. They are asked to assume the role of their character to find nanotechnologies that might benefit them. After game play there is a facilitated discussion to help players reflect on the choices they made, the difficulty in finding appropriate technologies for many of the characters, and the possible nanotechnologies that could benefit a wider array of people than current nanotechnologies do. "Exploring Materials - Liquid Crystals" is a hands on activity demonstrating that the way a material behaves on the macroscale is affected by its structure on the nanoscale. Visitors investigate the properties of a heat sensitive liquid crystal and make their own liquid crystal sensor to take home. "Exploring Size - Scented Solutions" is a hands on activity illustrating how small nano is. By sniffing a series of diluted scent solutions, visitors discover that nano-sized particles may be too small to see, but they're not too small to smell! This hands-on activity will guide you in making a synthetic gecko tape with micron sized hairs that mimics that behavior of the gecko foot. The process is called "nanomolding." Also described is an easy setup using Legos for testing how much weight the gecko tape can hold. Significant amount of research is ongoing in the field of synthetic Gecko tape due to its wide variety of applications. This program gives a glimpse of one of the methods used by researchers for making a synthetic gecko tape and its properties.
<urn:uuid:538b9f66-743e-4158-87d5-25735a790c55>
3.625
699
Content Listing
Science & Tech.
34.592928
Goals: MESSENGER was designed to map the surface composition, study the magnetic field and interior structure of our solar system's smallest and innermost planet -- Mercury. It carries eight instruments to study Mercury's polar deposits, core and magnetic dynamo, crust and mantle, magnetosphere, crustal composition, geologic evolution and exosphere. Accomplishments: On 18 March 2011 (UTC), MESSENGER became the first spacecraft to orbit Mercury. During a series of flybys that edged it closer to orbit insertion, the spacecraft revealed more of Mercury than has ever been seen before. Images and data reveal Mercury as a unique, geologically diverse world with a magnetosphere far different than the one first discovered by Mariner 10 in 1975. MESSENGER solved the decades-old question of whether there are volcanic deposits on the planet's surface. MESSENGER orbital images have revealed volcanic vents measuring up to 25 kilometers (15.5 miles) across that appear to have once been sources for large volumes of very hot lava that, after eruption, carved valleys and created teardrop-shaped ridges in the underlying terrain. The spacecraft also found Mercury has an unexpectedly complex internal structure. Mercury's core is huge for the planet's size, about 85% of the planetary radius, even larger than previous estimates. The planet is sufficiently small that at one time many scientists thought the interior should have cooled to the point that the core would be solid. However, subtle dynamical motions measured from Earth-based radar combined with parameters of the gravity field, as well as observations of the magnetic field that signify an active core dynamo, indicate that Mercury's core is at least partially liquid.
<urn:uuid:58052536-7fac-4c76-a83a-5a4fc7567ff7>
4.09375
345
Knowledge Article
Science & Tech.
26.009684
Protecting Many Species to Help Our Own That’s according to the most authoritative compilation of living things at risk — the so-called Red List maintained by the International Union for Conservation of Nature. By generalizing from the few groups that we know fairly well — amphibians, birds and mammals — a study in the journal Nature last year concluded that if all species listed as threatened on the Red List were lost over the coming century, and that rate of extinction continued, we would be on track to lose three-quarters or more of all species within a few centuries. We know from the fossil record that such rapid loss of so many species has previously occurred only five times in the past 540 million years. The last mass extinction, around 65 million years ago, wiped out the dinosaurs. The Red List provides just a tiny insight into the true number of species in trouble. The vast majority of living things that share our planet remain undiscovered or have been so poorly studied that we have no idea whether their populations are healthy, or approaching their demise. Less than 4 percent of the roughly 1.7 million species known to exist have been evaluated. And for every known species, there are most likely at least two others — possibly many more — that have not yet been discovered, classified and given a formal name by scientists. Just recently, for instance, a new species of leopard frog was found in ponds and marshes in New York City. So we have no idea how many undiscovered species are poised on the precipice or were already lost. It is often forgotten how dependent we are on other species. Ecosystems of multiple species that interact with one another and their physical environments are essential for human societies. These systems provide food, fresh water and the raw materials for construction and fuel; they regulate climate and air quality; buffer against natural hazards like floods and storms; maintain soil fertility; and pollinate crops. The genetic diversity of the planet’s myriad different life-forms provides the raw ingredients for new medicines and new commercial crops and livestock, including those that are better suited to conditions under a changed climate. This is why a proposed effort by the I.U.C.N. to compile a Red List of endangered ecosystems is so important. The list will comprise communities of species that occur at a particular place — say, Long Island’s Pine Barrens or the Cape Flats Sand Fynbos in South Africa. This new Red List for ecosystems will be crucial not only for protecting particular species but also for safeguarding the enormous benefits we receive from whole ecosystems. Another important step was the recent creation of a new Intergovernmental Platform on Biodiversity and Ecosystem Services. The organization, created under the auspices of the United Nations, will provide the scientific background for international policy negotiations affecting biodiversity. Do we need to protect so many species? Or can we rely on ecosystems with a depleted number of parts? Recent results from a study of grassland ecosystems shed important new light on these questions. Seventeen grasslands with different numbers of species were created and then studied over many years. The analysis, published in Nature last fall, showed that more than 80 percent of the plant species contributed to the effective functioning of the ecosystems, causing, for instance, a greater buildup of nutrients in soils. 
Another study, published in Science in January, showed that more species allow for better functioning in arid ecosystems, which support nearly 40 percent of the world’s human population. The bottom line is that many species are needed to maintain healthy ecosystems, and this is especially the case in a rapidly changing world, because species take on new roles as conditions change. Benefits provided by ecosystems are vastly undervalued. Take pollination of crops as an example: according to a major United Nations report on the Economics of Ecosystems and Biodiversity, the total economic value of pollination by insects worldwide was in the ballpark of $200 billion in 2005. More generally, efforts to tally the global monetary worth of the many different benefits provided by ecosystems come up with astronomically high numbers, measured in tens of trillions of dollars. These ecosystem services are commonly considered “public goods” — available to everyone for free. But this is a fundamental failure of economics because neither the fragility nor the finiteness of natural systems is recognized. We need markets that put a realistic value on nature, and we need effective environmental legislation that protects entire ecosystems. Richard Pearson is a scientist at the American Museum of Natural History and the author of “Driven to Extinction: The Impact of Climate Change on Biodiversity.” A version of this op-ed appeared in print on June 3, 2012, on page SR5 of the New York edition with the headline: Are We in the Midst Of a Sixth Mass Extinction?.
<urn:uuid:3ca61c7b-f2f8-4145-b11c-f7c4ae45a44b>
3.796875
988
Nonfiction Writing
Science & Tech.
34.890619
Planting trees is a good idea for saving the environment from global warming. However, considering the huge amount of damage already done to the environment, it would be a long wait before the benefits of planting trees really make an impact. All this could change if, rather than waiting for the natural growth cycle of trees, ‘artificial’ trees were used for energy generation. These artificial trees can be used to harness two types of renewable energy, solar and wind, thus replicating the working of a solar-wind harvester. Use of such artificial trees is clean and environmentally friendly. These trees utilize light, heat and wind to harness energy, which is then used to generate clean electricity. This science of imitating the functions of natural trees is termed biomimicry. The concept incorporates three energy generation techniques — photovoltaics (generating electricity from sunlight), thermoelectrics (generating electricity from heat) and piezoelectrics (generating electricity from pressure) — in the shape of a leaf called a ‘nanoleaf’. The nanoleaf forms the core of the entire concept. A nanoleaf is a natural-looking leaf that contains tiny photovoltaic and thermoelectric elements. The nanoleaves utilize much of the light spectrum to generate electricity while reflecting a small portion of the light. They can also convert energy from the invisible part of the spectrum, known as infrared radiation, into electricity using the tiny thermoelectric elements, thus working even after sunset. Tiny piezoelectric elements are fixed in the petioles, twigs and branches of these artificial trees. When wind moves the nanoleaves, causing them to flap back and forth, mechanical stress is felt in the petiole, twigs and branches. This mechanical stress is then converted into energy by the tiny piezoelectric elements incorporated in the tree, generating millions of picowatts of electricity. The artificial energy tree is to be made of recycled materials, and its trunk will look like natural wood. Installing such a tree would involve drilling holes into the ground, then anchoring the base and trunk into the ground. The branches would be assembled as segments. The trees will provide a supply of clean and renewable energy. Because they look like natural trees, they can also be used to provide a scenic view by mimicking gardens and possibly forests, and they can mimic the functions of natural trees, such as acting as sound buffers and providing shade during the summer.
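Because the three mechanisms simply add their contributions, the idea is easy to sketch numerically. The following toy model is not based on any published nanoleaf specification; every figure in it (leaf area, conversion efficiencies, leaf count, weather inputs) is an assumption chosen only to show how the three terms combine:

```java
// Toy model of combined nanoleaf output. All figures are illustrative
// assumptions, not measured values for any real device.
public class NanoleafTree {
    // Hypothetical photovoltaic output per leaf, in watts.
    static double photovoltaic(double irradiance) {       // irradiance in W/m^2
        double leafArea = 0.005;                          // 50 cm^2 leaf, assumed
        double efficiency = 0.10;                         // assumed PV efficiency
        return irradiance * leafArea * efficiency;
    }
    // Hypothetical thermoelectric output per leaf, in watts.
    static double thermoelectric(double deltaT) {         // temperature difference in K
        double coefficient = 0.001;                       // assumed W per K per leaf
        return coefficient * deltaT;
    }
    // Hypothetical piezoelectric output per leaf, in watts.
    static double piezoelectric(double windSpeed) {       // wind speed in m/s
        double k = 1e-6;                                  // assumed scaling constant
        return k * windSpeed * windSpeed;                 // flapping grows with wind
    }
    public static void main(String[] args) {
        int leaves = 100_000;                             // leaves per tree, assumed
        double perLeaf = photovoltaic(800) + thermoelectric(5) + piezoelectric(4);
        System.out.printf("Estimated tree output: %.1f W%n", perLeaf * leaves);
    }
}
```

Note how the piezoelectric term comes out in the microwatt range per leaf, consistent with the "millions of picowatts" figure quoted above; if a vendor ever publishes measured per-leaf numbers, the same structure would give a defensible estimate.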
<urn:uuid:eff4345f-5e6a-4e4f-b587-011b9b78e13e>
3.953125
518
Knowledge Article
Science & Tech.
30.332349
Archives of the Global Climate Change Digest: A Guide to Information on Greenhouse Gases and Ozone Depletion, published July 1988 through June 1999. FROM VOLUME 5, NUMBER 3, MARCH 1992

"Multi Wavelength Measurements of Atmospheric Turbidity and Determination of the Fluctuations in Total Ozone over Antarctica," R. Singh (Radio Sci. Div., Nat. Physical Lab., New Delhi 110012, India), P.K. Pasricha et al., Atmos. Environ., 26A(4), 525-530, 1992. Sun photometer measurements were made at 310, 368 and 500 nm during several Indian expeditions to Antarctica, and at 368 and 500 nm over the ocean on a cruise to one of the expeditions. Optical depth and turbidity due to atmospheric haze aerosols were computed. The wavelength 310 nm, towards the upper limit of the UV-B band that is highly absorbed, is best suited for monitoring fluctuations in total ozone.

"More Rapid Polar Ozone Depletion through the Reaction of HOCl with HCl on Polar Stratospheric Clouds," M.J. Prather (NASA Goddard Inst. Space Studies, 2880 Broadway, New York NY 10025), Nature, 355(6360), 534-537, Feb. 6, 1992. Uses a chemical model to show that this reaction plays a critical part in polar ozone loss by rapidly converting HCl to ClOx. As alternative sources of N-containing oxidants have been converted in late autumn to inactive HNO3 by known reactions on sulfate aerosol, this reaction becomes the most important pathway for releasing the stratospheric chlorine that enters the polar night as HCl.

Two items from J. Geophys. Res., 97(D1), Jan. 20, 1992:

"SAGE II Stratospheric Density and Temperature Retrieval Experiment," P.-H. Wang (Sci. Technol. Corp., POB 7390, Hampton VA 23666), M.P. McCormick et al. The retrieval analysis of solar occultation measurements described involves two steps, one of which inverts the concentration of air molecules, aerosols, ozone and NO2 from the derived atmospheric extinction at five wavelengths.

"Comparison of 2-D Model Simulations of Ozone and Nitrous Oxide at High Latitudes with Stratospheric Measurements," M.H. Proffitt (Aeronomy Lab., NOAA, 325 Broadway, Boulder CO 80303), S. Solomon, M. Loewenstein, 939-944. Evaluates a linear reference relationship between O3 and N2O that has been used to estimate polar winter O3 loss from aircraft data, by comparing it with a model simulation and with satellite measurements. The relationship holds for winter, but is likely to be inappropriate in other seasons.

Two items from Geophys. Res. Lett., 19(1), Jan. 3, 1992:

"Laboratory Measurements of Direct Ozone Loss on Ice and Doped-Ice Surfaces," E.J. Dlugokencky (CMDL, NOAA, R/E/AL2, 325 Broadway, Boulder CO 80303), A.R. Ravishankara, 41-44. Results using ice and solid solutions of nitric acid, sulfuric acid and sodium sulfite show that direct ozone loss on stratospheric particles is not important.

"In Situ Stratospheric Measurements of CH4, 13CH4, N2O and OC18O Using the BLISS Tunable Diode Laser Spectrometer," C.R. Webster (Jet Propulsion Lab., 4800 Oak Grove Dr., Pasadena CA 91109), R.D. May, 45-48.

Two items from ibid., 18(12), Dec. 1991:

"Measurements of ClO and O3 from 21° N to 61° N in the Lower Stratosphere during February 1988: Implications for Heterogeneous Chemistry," J.C. King (Dept. Meteorology, Pennsylvania State Univ., Univ. Pk. PA 16802), W.H. Brune et al., 2273-2276. Examines the possibility that the decadal decline in stratospheric ozone at northern midlatitudes is caused by the heterogeneous reaction of N2O5 on sulfate aerosols, by comparing observations to a 2-D model. Results show that reactive chlorine is being enhanced and heterogeneous chemistry is a likely cause, but the details of the heterogeneous chemistry and other possible chemical mechanisms need to be explored.

"Recent Trends in Stratospheric Total Ozone: Implications of Dynamical and El Chichón Perturbations," S. Chandra (NASA-Goddard, Greenbelt MD 20771), R.S. Stolarski, 2277-2280. An apparent decrease in total ozone of 5-6% during the winter of 1982-83 following the eruption of El Chichón, seen in reprocessed Nimbus-7 TOMS data, is largely explained by the quasi-biennial oscillation; at most 2-4% of the decrease can be attributed to El Chichón. Interannual variability and planetary wave activity can introduce apparent seasonal trends that could affect assessment of total ozone changes caused by chemical perturbations.

Three items from J. Geophys. Res., 96(D12), Dec. 20, 1991:

"Modeling the February 1990 Polar Stratospheric Cloud Event and Its Potential Impact on the Northern Hemisphere Ozone Content," L. Lefèvre (Météo-France, Ctr. Nat. Recherches Météorol., 42 Ave. Coriolis, 31057 Toulouse Cedex, France), L.P. Riishojgaard et al. Balloon-borne and ground-based instruments indicate that a major type II polar stratospheric cloud (PSC) event occurred above Scandinavia in February 1990 at temperatures as low as -90 °C. Short integrations were carried out at high spatial resolution with the "Emeraude" GCM, with emphasis on a localized area downstream from the PSC believed to be the most chemically active air. The largest discrepancy between the total ozone forecast and TOMS data occurs at this location, suggesting the possibility that considerable ozone is destroyed subsequent to formation of the PSC.

"Spectroscopic Measurement of HO2, H2O2 and OH in the Stratosphere," J.H. Park (Atmos. Sci., NASA-Langley, Hampton VA 23665), B. Carli.

"The Influence of Dynamics on Two-Dimensional Model Results: Simulations of 14C and Stratospheric Aircraft NOx Injections," C.H. Jackman (Lab. Atmos., NASA-Goddard, Greenbelt MD 20771), A.R. Douglass et al. Three different dynamical formulations, differing in the advective component of the stratosphere-to-troposphere exchange rate, were used to simulate total ozone and 14C amounts after nuclear tests in the early 1960s, and NOx injections from a proposed fleet of stratospheric aircraft and their effect on ozone. Results show the difficulty of simultaneously modeling constituents with different altitude and latitude dependencies, and that ozone loss from NOx injections is sensitive to the exchange rate used.

Two items from J. Geophys. Res., 96(D11), Nov. 20, 1991:

"Trends in Total Ozone at Toronto between 1960 and 1991," J.B. Kerr (Atmos. Environ. Serv., 4905 Dufferin St., Downsview, Ont. M3H 5T4, Can.). Ground-based measurements show total ozone decreased by about 4.2% during the 1980s because of a decrease in the late winter-early spring season of about 7.0%, consistent with revised TOMS satellite data. The trend is distinct from previous fluctuations, which were presumably due to natural variability, and occurs at other stations.

"Intercomparison of Total Ozone Data Measured with Dobson and Brewer Spectrophotometers at Uccle (Belgium) from January 1984 to March 1991, Including Zenith Sky Observations," H. De Backer (Belgian Meteor. Inst., Ave. Circulaire 3, B-1180 Brussels, Belg.), D. De Muer, 20,711-20,719. Although seven years of quasi-simultaneous observations reveal a significant relative drift between the two instruments of 0.1% per year, this disappears when a strong, downward SO2 trend at the site is accounted for. The question of SO2 tendency must be addressed in any trend analysis using Dobson total ozone data.

Three items from Can. J. Phys., 69(8-9), Aug.-Sep. 1991:

"Inferring Middle Atmospheric Ozone Height Profiles from Ground-Based Measurements of Molecular Oxygen Emission Rates. I: Model Description and Sensitivity to Inputs," R.J. Sica (Dept. Phys., Univ. Western Ontario, London, Ont. N6A 3K7, Can.), 1069-1077. Analysis of a model for the inversion of twilight emission-rate measurements for two spectral bands shows that the method can successfully determine the shape but not the absolute value of the O3 profile.

"Lidar Measurements of the Middle Atmosphere," A.I. Carswell (Dept. Phys., York Univ., 4700 Keele St., N. York, Ont. M3J 1P3, Can.), S.R. Pal et al., 1076-1086. Presents measurements of stratospheric aerosols and ozone and profiles of density and temperature from a new lidar facility.

"Rapid Motion of the 1989 Arctic Ozone Crater as Viewed with TOMS Data," F.E. Bunn (Ph.D. Associates Inc., Kinsmen Bldg., 4700 Keele St., N. York, Ont. M3J 1P3, Can.), F.W. Thirkettle, W.F.J. Evans, 1087-1092. Total ozone values from the NIMBUS-7 TOMS instrument show that an Arctic ozone crater (a thinning of the ozone layer) formed in late January when the vortex moved away from the pole to over Scandinavia, later moving over Toronto and then near Edmonton. A similar, unexpected crater was present in the Antarctic fall, in March 1989. These phenomena were mainly produced by dynamic uplift, but there may have been ozone depletion as well.
<urn:uuid:938b0afd-6203-4651-8b39-8a488f9db156>
2.828125
2,366
Content Listing
Science & Tech.
57.974831
The Current Humidity map shows relative humidity, contoured every 10 percent, for the most recent hour. Relative humidity is defined as the amount of water vapor in a sample of air compared to the maximum amount of water vapor the air can hold at that temperature, expressed as a percentage from 0 to 100%. Humidity may also be expressed as absolute humidity or specific humidity. Relative humidity is an important metric used in forecasting weather, since it indicates the likelihood of precipitation, dew or fog. High humidity makes people feel hotter outside in the summer because it reduces the effectiveness of sweating to cool the body, by slowing the evaporation of perspiration from the skin. This effect is calculated in a heat index table. Warm water has more thermal energy than cool water, and therefore more of it evaporates into warm air than into cold air.
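Since relative humidity is a ratio of the actual water vapor present to the maximum the air could hold, it can be computed from temperature and dew point alone. The sketch below uses the Magnus approximation for saturation vapor pressure; this is a standard textbook formula, not necessarily the method used to produce the map described above:

```java
// Relative humidity from temperature and dew point, using the Magnus
// approximation for saturation vapor pressure.
public class RelativeHumidity {
    // Saturation vapor pressure in hPa at temperature t (degrees Celsius).
    static double saturationVaporPressure(double t) {
        return 6.112 * Math.exp(17.62 * t / (243.12 + t));
    }
    // Relative humidity as a percentage: actual vapor pressure (reached at
    // the dew point) divided by the maximum possible at the air temperature.
    static double relativeHumidity(double tempC, double dewPointC) {
        return 100.0 * saturationVaporPressure(dewPointC)
                     / saturationVaporPressure(tempC);
    }
    public static void main(String[] args) {
        System.out.printf("RH at 30 C with a 20 C dew point: %.0f%%%n",
                relativeHumidity(30, 20)); // about 55%
    }
}
```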
<urn:uuid:ee6a3701-fc63-4278-b007-6b21a84e8e5b>
4.25
177
Knowledge Article
Science & Tech.
25.485223
The change of seasons on Earth has been a cause for celebration since time immemorial. Caused by the tilt of Earth's axis relative to its orbital plane around the sun, seasons have profound effects on our weather and climate. When seasons change, nature reacts differently, depending on location: temperatures change, rain or snow falls, rivers may flood, to name just a few effects. A new slide show, "The Change of Seasons: Views from Space," shows some of the ways seasonal change affects our planet, and invites you to share your own photos of seasonal change where you live: http://www.jpl.nasa.gov/education/seasons.cfm
<urn:uuid:ba06807e-30f9-43a9-8141-334074067410>
3.53125
140
Knowledge Article
Science & Tech.
58.775254
You know that when x=1, the value of y=0; also the slope (y') is flat, and the change of slope (y'') is 0. The only value you don't know at x=1 is y''', and that can be solved for: 3y''' + 5y'' - y' + 7y = 0 ==> 3y''' + 0 - 0 + 0 = 0 ==> y''' must equal 0. So all given values are 0, and everything is flat at x=1 ... Now what happens as you move away from x=1? If y were to change then the slope (y') would change, and you know that the slope is constrained by the equation 3y''' + 5y'' - y' + 7y = 0. So let us say that as x increases, y increases. Then the slope of y goes from 0 to positive, so the rate of change in slope (y'') must also increase, and hence y''' as well! So they must all increase together. What would that do to 3y''' + 5y'' - y' + 7y = 0? I suspect (but haven't got as far as proving) that it is not possible to have y increase, because the various rates of change would make 3y''' + 5y'' - y' + 7y ≠ 0. Sorry, but that is as far as I have got.
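For what it's worth, the suspicion can be nailed down with one standard fact. The sketch below assumes the existence-and-uniqueness theorem for linear ODEs, which the comment above stops short of invoking:

```latex
% The initial value problem under discussion:
\[
  3y''' + 5y'' - y' + 7y = 0, \qquad y(1) = y'(1) = y''(1) = 0.
\]
% The constant function y(x) = 0 satisfies the equation and all three
% initial conditions. A linear ODE with continuous coefficients has
% exactly one solution through a given set of initial values, so
\[
  y(x) \equiv 0
\]
% is the only solution: y can never move away from zero, exactly as the
% growth argument above suggests.
```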
<urn:uuid:ffb1edd4-a72f-4b9f-91a4-b279be4fcf42>
3.453125
308
Comment Section
Science & Tech.
90.098394
Widely distributed throughout Southern, Central and East Europe, North Africa and eastwards to Central Asia, the Black or Water Poplar is almost certainly native to lowland England, but as with many of our tree species its present distribution is largely artificial, the result of widespread planting. Nationally, sources cite figures of c. 7,500 trees, of which c. 600 are female (Jones, 2004); these numbers are regarded by many experts as inaccurate, but the relative proportion of the sexes is unlikely to change greatly. The great disparity in the distribution and abundance of the sexes would be difficult to account for as a natural phenomenon. The distribution of diversity is being elucidated by DNA sequencing techniques; these have identified the wide distribution of particular vegetatively propagated, and hence planted, clones. Most trees are to be found south of a line from the Mersey to the Humber estuaries, with scattered occurrences northwards. Genetically distinct populations occur in the central plain of Ireland. The greatest concentrations in Great Britain occur in the Aylesbury Vale, but significant populations also occur in Wiltshire, along the River Severn, and in Somerset, Suffolk and Shropshire. We currently know of about 300 trees in Greater London, a significant proportion of which are female. Past confusion with hybrids and their backcrosses, and the mapping also of cultivars such as the Lombardy Poplar and of recent conservation/amenity plantings, mean that this map should be viewed with some caution. Careful search is still, however, revealing previously overlooked veteran trees throughout the British range. A species naturally occurring in open wet woodland and on forested floodplains, it has been restricted by clearance and drainage since the Neolithic period to riverbanks, hedgerows and field margins. It does not tolerate dense shade well. In cultivation it is remarkably tolerant of a range of soil conditions and of atmospheric pollutants, and has been widely planted in much drier localities than it would naturally occupy.
<urn:uuid:b58a360f-5ef4-4bb3-86dd-1106f2fc6304>
3.5625
394
Knowledge Article
Science & Tech.
25.499167
The north-western part of the Mediterranean basin is characterised by an anticlockwise (cyclonic) gyre. The current is relatively strong (about 0.25 to 0.5 m/s), and the surface water masses (incoming from the Atlantic Ocean through the Strait of Gibraltar) flow from Corsica towards Spain along the French coast, following the shelf break in the Gulf of Lion. This permanent current is called the Ligurian Current or the North Current. The wind, evaporation, and the seasonal cooling and warming of the sea can generate perturbations of this mean motion. The equations of the water motion due to all these physical processes are approximated using a computer. This kind of calculation is called numerical modelling, and modellers are able to evaluate the velocity, temperature and salinity of a water particle at each position. Obviously, numerical models are not perfect and don't reproduce exactly the behaviour of a water parcel at each position and at any time. You will certainly find differences between the simulation and the actual state of the sea. Nevertheless, this kind of modelling can efficiently help you to understand and forecast the sea's motions.
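As a cartoon of what "approximating the equations on a computer" means, here is a one-dimensional toy: a patch of warm water carried along by a steady current, stepped forward in time with a simple upwind finite-difference scheme. Real circulation models solve far richer three-dimensional equations; the grid spacing, time step and current speed below are merely plausible assumptions:

```java
// A cartoon of numerical ocean modelling: advect a temperature anomaly
// along a one-dimensional current with an upwind finite-difference scheme.
public class ToyAdvection {
    public static void main(String[] args) {
        int n = 100;
        double dx = 1000.0;        // grid spacing in metres (assumed)
        double dt = 1000.0;        // time step in seconds (assumed)
        double u = 0.4;            // current speed in m/s, within 0.25-0.5
        double[] temp = new double[n];
        for (int i = 40; i < 50; i++) temp[i] = 1.0;   // a warm patch
        for (int step = 0; step < 100; step++) {
            double[] next = temp.clone();
            for (int i = 1; i < n; i++)
                next[i] = temp[i] - u * dt / dx * (temp[i] - temp[i - 1]);
            temp = next;
        }
        // After 100 steps the patch has moved about u*dt*steps/dx = 40 cells
        // downstream (with some numerical smearing, one of the imperfections
        // the text mentions).
        for (int i = 75; i < 95; i++)
            System.out.printf("cell %d: %.2f%n", i, temp[i]);
    }
}
```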
<urn:uuid:7ccb1331-4d51-451b-9d37-a2a179bcdd0c>
3.6875
237
Knowledge Article
Science & Tech.
38.61405
In this tutorial, we will discuss the getBytesRead(), getBytesWritten() and getRemaining() methods of the Inflater class. The Inflater class provides support for decompression using the ZLIB library. The getBytesRead() method returns the total number of compressed bytes read from the input stream by the inflater. It works the same as getTotalIn(), but returns a long instead of an int. The getBytesWritten() method returns the total number of uncompressed bytes written. It works the same as getTotalOut(), but returns a long instead of an int. The getRemaining() method returns the total number of bytes remaining in the associated input buffer. In the given example, we will discuss the Inflater class, which is used for decompression. The FileInputStream class creates an input stream and reads bytes from a file. The InflaterInputStream class creates an input stream with a given Inflater; it reads data from the stream and decompresses it. The java.util.zip.Inflater class extends the java.lang.Object class. It provides the following methods:
|long||getBytesRead()||Returns the number of compressed bytes read from the associated input stream.|
|long||getBytesWritten()||Returns the number of uncompressed bytes written out by the inflater.|
|int||getRemaining()||Returns the total number of bytes remaining in the associated input buffer.|
Sample output:
Compressed file : testfile.txt
Number of bytes of Compressed data : 19 bytes
Remaining bytes after decompression : 0 bytes
Number of bytes of Uncompressed data : 59 bytes
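The tutorial's own source file is not reproduced above, so here is a minimal, self-contained sketch of the same idea: compress a byte array in memory with Deflater, decompress it with Inflater, and print the three counters just described. It uses in-memory buffers rather than the FileInputStream/InflaterInputStream classes mentioned in the text:

```java
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class InflaterCountersDemo {
    public static void main(String[] args) throws Exception {
        byte[] input = "some repetitive text, some repetitive text".getBytes("UTF-8");

        // Compress the input with Deflater.
        Deflater deflater = new Deflater();
        deflater.setInput(input);
        deflater.finish();
        byte[] compressed = new byte[256];
        int compressedLength = deflater.deflate(compressed);
        deflater.end();

        // Decompress with Inflater and inspect the counters.
        Inflater inflater = new Inflater();
        inflater.setInput(compressed, 0, compressedLength);
        byte[] result = new byte[256];
        int resultLength = inflater.inflate(result);

        System.out.println("Compressed bytes read   : " + inflater.getBytesRead());
        System.out.println("Bytes remaining in input: " + inflater.getRemaining());
        System.out.println("Uncompressed bytes out  : " + inflater.getBytesWritten());
        System.out.println("Round trip OK: "
                + new String(result, 0, resultLength, "UTF-8")
                      .equals(new String(input, "UTF-8")));
        inflater.end();
    }
}
```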
<urn:uuid:13b19413-3b8c-42f9-a8fb-60a30b925b15>
3.140625
385
Documentation
Software Dev.
36.665959
Feature article: Dragonflies by BK
Researched by Brian Kane
Dragonflies! Where do they come from? What do they eat? Do they sting? Where do they get their name? These are some of the many questions that may occur to us when hordes of dragonflies appear in Broome after the wet season each year. There are a lot of ‘folk names’ given to dragonflies, such as ‘horse stinger’ in the UK. The name may come from the way a captured dragonfly curls its abdomen as if in an attempt to sting. Another explanation is that they could be seen flying round horses in fields. They were really feeding on the flies attracted to the horses. Occasionally a fly would irritate or bite a horse enough to make it twitch or skip about. People seeing it made the inference that it was the dragonfly stinging, rather than an unseen fly biting. Dragonflies are fearsome predators of other flying insects, but these beautiful creatures are harmless to humans. Dragonflies are among the oldest insects on earth; fossilized remains show that they existed 300 million years ago. This is an interesting time span considering that the famous dinosaur footprints at Minyirr (Gantheaume Point) date from the Cretaceous environment of 130 million years ago. Some of these ancient dragonflies had a wingspan over 60 cm (today the largest wingspan belongs to Megaloprepus coerulatus of South America, at 19 cm). There are 4,500 different species of dragonfly in the world today (300 species in Australia), varying in size and colour. They are sunlight-loving, day-flying insects living near water, usually by stagnant pools and marshes. They have four large wings with a lace-like pattern of veins, long slender bodies, a huge head and prominent eyes. Most of them are brilliantly coloured, with bodies that are red, blue, green, brown, yellow, and so on. The colour becomes stronger as the insect grows older. Observation here in Broome indicates that dragonflies are of a green-brown colouration. Dragonflies move through the air at tremendous speeds, sometimes reaching up to 90 km/h. They can fly for hours on end and have been known to travel 30 km or so, but usually they patrol a particular area looking for insects to eat. Mosquitoes, flies and midges are a large part of their diet, and these are plucked from the air. This fact alone should endear these delightful creatures to us. In contrast to its enormous eyes, the dragonfly's antennae (for sensing, touch and smell) are poorly developed and less important to it. The jaws have strong tooth-like projections for biting into its prey. They have three pairs of legs attached to the body just behind the mouth. These are used for seizing prey in mid-air. Mating usually takes place in the air; then it is over to the female to lay her eggs in fresh water, in the stems of water plants or in mud. After 2-5 weeks the eggs hatch into nymphs. These are called "mud-eyes" and are excellent bait for freshwater fish. They have the basic body structure of the adult insect but are fatter and without wings. The nymphs are dull brown in colour and remain underwater until they are ready to change into adult dragonflies. They breathe by means of gills. Nymphs are carnivorous (flesh eating), even tackling tadpoles and small fish. Dragonflies take anything from one to five years, and possibly even longer, to complete their life cycle. During its life as a nymph the insect moults, shedding its skin, as many as ten or fifteen times.
When the nymph is ready to moult for the last time, it comes out of the water and climbs up a plant above the surface of the water. After a short period of rest, the skin splits, the wings expand and a spectacular dragonfly emerges. The species Trapezostigma loewii breeds in warm still waters such as the flood plains of northern Australia. Their emergence as adults is often taken as a signal that the wet season is over. Dragonflies mark the end of the Yawuru season Mankala (the wet) and the beginning of the short season of Marul. This was the time when Aboriginal people moved back to the coast. Last year, this change of season occurred in Broome on March 24th with the first easterly wind change and the immediate appearance of swarms of dragonflies, especially around the Post Office area in Chinatown. It is amazing that dragonflies can fly forwards, sideways, backwards and hover (sort of like a helicopter). It is said that da Vinci wrote many papers on the possibility of such an aircraft after he observed the dragonfly. They are much loved, and there are many internet sites devoted to their study and preservation, such as the British Dragonfly Society (http://www.dragonflysoc.org.uk). Dragonflies have only a short lifespan in the air, because they seem to disappear after two months. However, they have the vital role of keeping the insect population in balance, and their preservation is important to the well-being of everyone who lives here in the Kimberley.
SPECIES FOUND IN THE KIMBERLEY (and in most regions of Australia):
SUB ORDER: ZYGOPTERA
- Protoneuridae - near streams: black
- Coenagrionidae - over lily pads and desert water holes: black and blue, can be red
- Lestidae - bronze and black, dull in the north, form huge colonies
SUB ORDER: ANISOPTERA
- Libellulidae - common: red and blue, females yellow
This dragonfly is on lily pads at Beagle Bay (courtesy CAS)
Broome dragonfly (Aethriamanta circumsignata?)
All-red Diplacodes haematodes. These fellas (because they're all male) are the emissaries of the Dry Season.
Red dragonfly at Black Ledge, April 2008 - Brian Kane
Blue dragonfly, April, Broome (photo BK)
Mating blue dragonflies: the larger one on top is the male, and the smaller female clings to the male upside down while flying. April, Broome (photo BK)
<urn:uuid:0c59585a-60e1-49f2-8d1c-4eb06a4dbbe8>
3.5
1,325
Knowledge Article
Science & Tech.
53.327667
The beast-footed carnivorous dinosaurs What Is a Theropod? The theropod (meaning "beast-footed") dinosaurs are a diverse group of bipedal saurischian dinosaurs. They include the largest terrestrial carnivores ever to have made the earth tremble. What most people think of as theropods (e.g., T. rex, Deinonychus) are extinct today, but recent studies have conclusively shown that birds are actually the descendants of small nonflying theropods. Thus, when people say that dinosaurs are extinct, they are technically not correct. Still, it's not as exciting seeing a sparrow at your birdfeeder as it would be to see a Tyrannosaurus rex there. Our knowledge of the evolutionary history of the Theropoda is constantly under revision, stimulated by new, exciting fossil finds every year or so, such as Mononykus olecranus, a very bird-like theropod found recently in the Mongolian desert, or Giganotosaurus carolinii, a giant theropod probably rivaling the size of T. rex, found recently in Argentina. In fact, the 1960s discovery and study of the remains of Deinonychus antirrhopus helped to revise paleontology's old vision of all dinosaurs as slow, stupid reptiles, and was a key factor in the onset of the controversial hot-blooded/cold-blooded debate. Currently, there are two or three main groups of theropods, depending on whom you ask; we have yet to fully understand their origin. Why is this so? The main reason is the lack of good specimens: theropod remains are fairly rare and, more often than not, fragmentary. Theropods have a poor fossil record compared to most of the ornithischian dinosaurs. Fossils of small theropods are especially rare, since small bones are harder to find and are weathered away easily. Without well-preserved, complete specimens, it is hard to tell who is most closely related to whom using cladistics. Several characters typify a theropod: hollow, thin-walled bones are diagnostic of theropod dinosaurs. A jumbled box containing theropod bones (from the UCMP collections) is shown at right. The hollow nature of the bones is certainly more obvious in 3D, but you should at least be able to make out the general tubular structure of the bones. Other theropod characters include modifications of the hands and feet: three main fingers on the manus (hand), with the fourth and fifth digits reduced; and three main (weight-bearing) toes on the pes (foot), with the first and fifth digits reduced. Most theropods had sharp, recurved teeth useful for eating flesh, and claws were present on the ends of all of the fingers and toes. Note that some of these characters are lost or changed later in theropod evolution, depending on the group in question. Let's take a look at the major groups of theropods.... The Herrerasauridae are an early group represented by Herrerasaurus, which was discovered in a wonderful middle-late Triassic period fossil locality (the famous Ischigualasto Formation) in Argentina in the 1970s. Another herrerasaur is Staurikosaurus, which has been known since the 1960s from remains found in Brazil. More recently (in 1993), another herrerasaur-like fossil was found in the same general area and named Eoraptor, or "dawn thief." It appears to be closely related to the herrerasaurs, but smaller in size and slightly older. Both Eoraptor and the herrerasaurs seem to have been small to medium-sized carnivores. These curious animals have some basic theropod characteristics but lack others; in fact, they lack some dinosaurian characteristics as well.
The Herrerasauridae and Eoraptor may be the earliest group of theropods, or it is quite possible that they are not even theropods at all, but rather non-dinosaurs (dinosauromorphs) closely related to the ancestor of dinosaurs. The fact is, we don't know for sure. Experts in dinosaur systematics are currently embroiled in a controversy over the exact relationships of the Herrerasauridae to theropods and other dinosaurs. A second group of theropods is the Ceratosauria, a more morphologically modified and diverse group which includes the UCMP's very own Dilophosaurus, one of the stars of the novel and movie Jurassic Park. Segisaurus is a small, mysterious theropod known from only one specimen, which is housed in the collections of the UCMP. Recent discoveries have revealed that ceratosaurs formed a more diverse group than was previously expected.... The last, and by far the largest group of theropods is the Tetanurae, consisting of two major clades (sister taxa), the Carnosauria and the Coelurosauria. Some early tetanurines such as Megalosaurus fall outside of this dichotomy, but most are poorly known. The carnosaurs were the huge, fierce predators such as Allosaurus (shown at the top of this page chasing Dryosaurus, an ornithischian dinosaur), and recent headline-makers like the gigantic Carcharodontosaurus and Giganotosaurus, both of which seemed to have reached or exceeded the size of T. rex, making them the largest terrestrial bipeds ever to have terrorized the terrestrial realm. The Coelurosauria were generally smaller in stature, but more diverse, including such famous creatures as Velociraptor (shown at right grasping Protoceratops, a ceratopsian dinosaur) and our feathered friends the birds. Recent studies have agreed that T. rex and the tyrannosaurs belong with the coelurosaurs, not with the carnosaurs as was originally believed. Such is the ever-changing nature of theropod phylogeny; new finds and analyses are frequently overturning old ideas. This area of dinosaur paleontology is in a major state of flux. Enjoy your visit with the fearsome and amazing theropods! You can learn more about theropod groups by either selecting links from the above text, or by clicking on a box in the cladogram pictured above. Take an "audio tour" and hear about the discovery and reconstruction of Dilophosaurus from the discoverer himself, the late Sam Welles.
<urn:uuid:0c5d61de-4edf-4bc7-ac08-ec181a5a2939>
3.578125
1,359
Knowledge Article
Science & Tech.
35.538697
New Technology Telescope
The 3.58-metre New Technology Telescope (NTT) was inaugurated in 1989. It broke new ground for telescope engineering and design, and was the first telescope in the world to have a computer-controlled main mirror. The main mirror is flexible, and its shape is actively adjusted by actuators during observations to preserve optimal image quality. The secondary mirror's position is also actively controlled in three directions. This technology, developed by ESO and known as active optics, is now applied to all major modern telescopes, such as the Very Large Telescope at Cerro Paranal and the future European Extremely Large Telescope.
The design of the octagonal enclosure housing the NTT is another technological breakthrough. The telescope dome is relatively small, and is ventilated by a system of flaps that makes air flow smoothly across the mirror, reducing turbulence and leading to sharper images.
Science goals: star formation, protoplanetary systems, the Galactic center, spectroscopy.
- Images taken with the NTT
- Images of the NTT
- For scientists: more details can be found on the telescope page.
- ESO press releases with results from the New Technology Telescope
Name: New Technology Telescope
Enclosure: Compact optimised enclosure
Type: Optical & near-infrared telescope
Optical design: Ritchey-Chrétien reflector
Diameter, primary M1: 3.58 m
Material, primary M1: ZeroDur (Schott)
Diameter, secondary M2: 0.875 m
Material, secondary M2: ZeroDur (Schott)
Diameter, tertiary M3: 0.84 m × 0.60 m (elliptical)
First light date: 23 March 1989
<urn:uuid:44a8c0b8-9b40-433e-8e01-f7b7d3bf861e>
3.34375
385
Knowledge Article
Science & Tech.
33.541299
Henricia pumila Eernisse et al., 2010
Common name(s): Dwarf mottled henricia; mottled henricia
Synonyms: Cribrella laeviuscula var. crassa?, Henricia leviuscula variety F
Henricia pumila from Sares Head; ray length 18 mm, diameter of central disk 5 mm. (Photo by Dave Cowles, August 2010)
The specific epithet pumila means dwarf. This species probably corresponds to at least some of the individuals described by Fisher (1911) as H. leviuscula variety F.
How to distinguish from similar species: Most other Henricia do not have a mottled aboral side, broadcast spawn their eggs rather than brood them, have longer rays, and have a ratio of ray length (R) to inter-ray disk radius (r) of more than 5.
Geographical range: The type specimen is from San Juan Island, WA. This is the only small, brooding Henricia in the Puget Sound area. The full range is probably from Sitka, Alaska to upwelling areas in Baja California, but the species does not appear to inhabit southern California south of Point Conception.
Biology/natural history: This species broods its eggs and embryos under the central disk until they crawl away as juveniles. Brooding seems to occur January to April.
Lamb and Hanby (2005) list it as Henricia sp. nov. on p. 330.
Eernisse, Douglas J., Megumi F. Strathmann, and Richard R. Strathmann, 2010. Henricia pumila sp. nov.: a brooding seastar (Asteroidea) from the coastal northeastern Pacific. Zootaxa 2329, 23-26.
Fisher, W.K. (1911) Asteroidea of the North Pacific and adjacent waters. Part I. Phanerozonia and Spinulosa. Bulletin of the U.S. National Museum, 76(1), 1-406.
General notes and observations (locations, abundances, unusual behaviors): We frequently encounter this species in the intertidal near Rosario. This closeup of the aboral surface of a ray shows the pattern of pseudopaxillae. Compare the pattern of these ossicles with that seen in Henricia leviuscula. This oral view of the ray shows the lighter color on the oral side. Note that the marginal plates are longer than other ossicles but still not markedly enlarged. The color is creamy, as in the type specimen. Notice the papulae extended among the pseudopaxillae. This individual, with a 5.5 cm total arm spread, was photographed in 2011. Found intertidally on Sares Head.
Rosario Invertebrates web site provided courtesy of Walla Walla University
<urn:uuid:0b553740-6a25-48ae-a4a6-bc39bcea1d68>
2.78125
654
Knowledge Article
Science & Tech.
53.509097
Carolyn asks: "What is it like in the eye of a hurricane? Is it as it appears on the radar: clear and calm?" And Michelle asks: "Why is the eye of the hurricane so much calmer than the surrounding wind/storm?" Let's start with the basics about the eye of a hurricane. The eye is the low-pressure center of a tropical cyclone, and a hurricane is a type of tropical cyclone. In the eye, winds are normally calm and sometimes the sky clears. An example of this can be seen in a YouTube video from a NOAA P-3 aircraft flying in the eye of Hurricane Katrina in 2005. The video shows the sky is blue, and there's a "stadium effect" because it feels like you're on a field looking up at the stands. To get to the eye, you first have to go through the "eye wall," the ring of thunderstorms that surrounds the eye. The heaviest rain, strongest winds and worst turbulence are normally in the eye wall. Now, to the second question: why is the eye of the hurricane calm? You have to look at the way air flows into and around a hurricane. As we said, the eye is a low-pressure area, so air from outside the hurricane tries to move into the eye to equalize the pressure. But the air doesn't go in a straight line. It flows in a curve and ends up blowing in a circle around the eye. In fact, most of the air never reaches the eye, and instead blows in the eye wall. Much of the air then flows upward in the eye wall and exits the storm at the top. Since the winds end up spinning in a ring around the eye, there isn't enough air left to blow in the eye itself, and the eye is relatively calm.
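For readers who would rather see the curve than take it on faith: the deflection described above comes from the Coriolis effect, and a single air parcel pulled toward a low while being deflected will settle into circling the center rather than reaching it. The toy integration below is emphatically not a hurricane model; the pressure-gradient strength, latitude and starting distance are all illustrative assumptions:

```java
// Toy illustration of why inflowing air ends up circling a low: integrate
// one air parcel under a pressure-gradient force pointing at the center
// plus the Coriolis force, using semi-implicit Euler steps.
public class ParcelAroundLow {
    public static void main(String[] args) {
        double x = 200_000, y = 0;       // start 200 km east of the low (m)
        double vx = 0, vy = 0;           // initially still air
        double f = 5e-5;                 // Coriolis parameter, ~20 deg latitude
        double g = 0.02;                 // inward acceleration scale (m/s^2), assumed
        double dt = 60;                  // one-minute time steps
        for (int step = 0; step <= 7200; step++) {   // five simulated days
            double r = Math.hypot(x, y);
            double ax = -g * x / r + f * vy;         // pull to center + Coriolis
            double ay = -g * y / r - f * vx;
            vx += ax * dt; vy += ay * dt;
            x += vx * dt;  y += vy * dt;
            if (step % 1440 == 0)                    // print once per day
                System.out.printf("day %d: r = %.0f km, speed = %.1f m/s%n",
                        step / 1440, r / 1000, Math.hypot(vx, vy));
        }
        // The parcel accelerates inward, gets deflected sideways, and ends
        // up orbiting the center: it never fills the calm eye.
    }
}
```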
<urn:uuid:00378325-474f-4f6f-b093-d5e3b000b081>
3.671875
375
Q&A Forum
Science & Tech.
77.334811
Download and unzip the zip file and you will see a folder. The folder contains the compiled program, and you can double-click on the file to work with the source code. When the game starts, you'll see a screen that looks similar to this:

The program for this tutorial is almost exactly identical to the program used in the tutorial on drawing a sine curve. The difference is that for this tutorial, we're using a for loop.

2. Examining The Program:

Let's examine the C# source code that produces the behavior we see on-screen. First, you'll notice that we create the variable and give it an initial value of 0.0f. While this isn't strictly necessary (we will assign it the value of sin(x) before using it), it's good to see that there are different styles of writing equivalent, correct code. Some people prefer to initialize all of their variables, so we're showing you this example code in order to show you that style of coding.

Let's briefly examine the details of how the loop works. First, we can see that the for loop is a normal, counting for loop:

for (float xPos = 0.0f; xPos < World.WorldMax.X; xPos += 1)

You'll notice that we choose to declare xPos as a float (instead of an integer), mainly to show you that one can declare any type of variable you want in the initialization step of the for loop. You'll also notice that we increment xPos using the counting expression "xPos += 1", which is equivalent to "xPos = xPos + 1", both of which are equivalent to "xPos++". Again, the goal here is to show you that there are multiple, correct ways of writing this loop.

Second, you can see that the body of the loop should all be familiar to you: using the same functions as in the previous tutorial(s), the Y value of the center of the new soccer ball is calculated.

Start from a blank starter project (1000.201, if you need it), and re-do the code from memory as much as possible. On your first try, do what you can, and keep the above code open so that when you get stuck, you can quickly look up what you forgot (and check after you finish a line, so that you can compare your line to the 'correct' line). On the next try, do the same thing, but try to use the finished code less. Repeat this until you can type everything without referring to the tutorial's code. Repeat this exercise daily for several days, so that you really get the hang of this. As you go on, periodically review by re-doing this exercise.

Familiarizing Yourself With Loops: Modifying the program

For this exercise, you should use the same project as above. Go back to the tutorial about animating sinusoidal motion, and run that program. Observe how the program moves the small basketball along the sine wave. You should implement identical functionality here in this program, and you should do that from memory as much as possible. To be clear: do not refer to the source code in that previous tutorial unless you absolutely have to - the more of this exercise that you can figure out (and do) on your own, the more you'll learn from it.
<urn:uuid:63f7a8c1-80aa-4f7b-b0f5-a2c69545ff33>
3.375
728
Tutorial
Software Dev.
56.565556