Rocky Mountain Research Station Publications: Journal Articles, External Publications, and Special Reports. Rill erosion in natural and disturbed forests: 1. Measurements. Robichaud, P. R.; Wagenbrenner, J. W.; Brown, R. E. 2010. Rill erosion in natural and disturbed forests: 1. Measurements. Water Resources Research. 46: W10506. Rill erosion can be a large portion of the total erosion in disturbed forests, but measurements of the runoff and erosion at the rill scale are uncommon. Simulated rill erosion experiments were conducted in two forested areas in the northwestern United States on slopes ranging from 18 to 79%. We compared runoff rates, runoff velocities, and sediment flux rates from natural (undisturbed) forests and from forests either burned at low soil burn severity (10 months or 2 weeks post-fire), high soil burn severity, or subject to skidding of felled logs. Keywords: rill erosion, runoff, forest management. PDF File Size: 1.2 MB. Electronic Publish Date: September 18, 2012. Last Update: September 18, 2012.
By ripcord on Friday, January 16, 2009 - 11:37 pm: John, can you provide a simple, understandable explanation as to how & why, and under what conditions, it's 'sometimes' possible for hot water to freeze faster than cold water? I know it's possible to test this theory and come up with results both ways, but I can't find a 'simple' explanation as to what variables it takes to get which results. By sp123 on Saturday, January 17, 2009 - 01:45 pm: well because the molecules in hot water are moving faster than in cold water, and when you drop the temperature so fast it causes them all to stop so quickly, thus they all freeze up. some of the reason why By frnash on Saturday, January 17, 2009 - 03:01 pm: I am really having difficulty accepting this theory! For an example, let's say we are introducing a quantity of water at an initial temperature somewhere near the boiling point into an environment at an ambient temperature of -40°C (which is conveniently equal to -40°F). 1. A certain amount of energy, at the rate of 1 calorie per gram (the specific heat of water/ice), will have to be removed from that water just to reduce its temperature to the freezing point (0°C, or 32°F); 2. An additional amount of energy, at the rate of 79.72 calories/gram (the heat of fusion), must be withdrawn from that water to convert it to ice (perhaps ice crystals), still at 0°C, or 32°F; the liquid must turn to solid before the temperature can continue to fall. 3. Of course, still more energy (the specific heat of water/ice) will have to be removed from the resulting quantity of ice to reduce its temperature from the freezing point to ambient (say -40°C, conveniently = -40°F, as noted above). The only variant in the above calculations is the initial temperature of the water. Clearly the warmer the water, the more energy it will have to lose in the process.
Thus if it takes any amount of time at all for the energy loss to occur, it is quite impossible for hot water to freeze faster than cold water! Of course for relatively small quantities of water and such a low ambient temperature that incremental energy loss can probably occur in an instant, regardless of the initial temperature of the water, especially if you toss a cup of water up into the air (as suggested in an experiment described elsewhere in this forum; I note that in that case they did not try the experiment with the same quantity of water at an initial temperature close to the freezing point!), thus scattering the water as droplets, thereby increasing the surface area on which that heat transfer will occur! Or is there something I'm missing? By 700xcsp on Saturday, January 17, 2009 - 03:05 pm: It was -23 here Thursday and at work we were talking about this. We took a cup of steaming hot water and tossed it outside in the air and it evaporated. Did the same thing with room temp water and it froze when it hit the ground and the same thing happened with cold water. Just thought I would tell you our little experiment. By frnash on Saturday, January 17, 2009 - 03:28 pm: "Did the same thing with room temp water … and cold water." So the hot water evaporated?… and the other two did not? So perhaps with the hot water by definition at a higher energy level all those high energy li'l water molecules "flew apart" more quickly than in the case of the more "lethargic" cold water molecules, thus increasing the aggregate surface area of the hot water and allowing the heat transfer noted in step 1 of my above note to occur more rapidly? Seems a bit of a stretch to me. Ya never know; "thermodamnamics", not a favorite subject. By thebluff on Sunday, January 18, 2009 - 06:30 pm: I know on our boiler heat, the hot water will freeze very quickly when household temp water lines do not freeze.
my dad is a former heat guy, says that is a true-ism By cmharcou on Monday, January 19, 2009 - 07:25 am: send mythbusters an e-mail. By soopy on Monday, January 19, 2009 - 08:24 am: someone once told me it was due to the condensation that develops on the exterior of hot water pipes in cold weather. Thus the quicker-to-freeze theory? By admin on Monday, January 19, 2009 - 02:15 pm: I did a little internet research on this topic and it seems as though it may be more than just a myth. However, there are some parameters that need to be met as well. First, we are not talking about a volume of water that is tossed into the air; the sites I checked out were all talking about a volume of water within a container, and also did not talk about the entire container freezing, just the top of the water that was exposed to the cold air. There are a number of factors that scientists think may be at work, like the fact that hot water will evaporate quicker, and evaporation takes a lot of energy, and that energy loss can cause the water at the surface to cool faster. Some issues with vapor pressures too and a few other factors were mentioned, but no sound and tested fact as to why the container of hot water sometimes froze faster than a similarly sized container of cool water. Sounds like a good one for Mythbusters to me! PS. Soopy- Whoever told you that condensation will form on a hot water pipe is sure smoking some good stuff. Condensation happens when air encounters an object that is cold enough for the moisture in the air to condense on it. A warm pipe would cause just the opposite effect! By swanker on Monday, January 19, 2009 - 08:16 pm: John check out the video with Adam from the Skyview Lodge, a cup of hot water and minus 25. Too cool. By frnash on Monday, January 19, 2009 - 08:50 pm: Yeh, -25°F, I'd say that's the understatement of the week!
By ubee on Tuesday, January 20, 2009 - 01:03 pm: any plumbers out there? whenever i have gone to fix frozen pipes 99% of the time it was a hot water pipe! other 1% was cold water only! no explanation just observation! By ripcord on Tuesday, January 20, 2009 - 07:37 pm: Thanks John, I did some internet research myself and found out even among the scientific community there's no consensus on a conclusive explanation for this phenomenon. As you stated there are several parameters that need to be met. My own observations show that, while a bucket of hot water will form a 'layer' of ice first, the bucket of cold will freeze 'solid' sooner... so the definition of 'freezes first' is what raises controversy. I checked 'Mythbusters' website and apparently they tried this experiment using ice cube trays... their conclusion: 'if you want to make ice cubes quickly in your freezer, use cold water.' As for pipes freezing, this seems to have more to do with 'supercooling' (where water remains liquid when its temperature is below 32 F), something to do with the molecules not being able to arrange themselves to form ice. (That's a whole nother topic) By lvr1000 on Tuesday, January 20, 2009 - 08:43 pm: I always thought that the hot water pipe usually froze first because hot water is used less in a house than cold, toilets and water softener the main reason. Therefore it sits longer in the pipe before moving. And if you think about it, the water is only hot right after it comes out of the heater. Once it sits, it is the same temp as the cold. By lenny on Saturday, January 31, 2009 - 12:47 am: I did a small test, I put (2) 16 oz plastic cups outside, 1 with boiling water and the other cold tap, the cold froze considerably faster than the boiling cup.
it's a myth IMHO By frnash on Saturday, January 31, 2009 - 02:21 pm: "I put (2) 16 oz plastic cups outside …" As expected for "still" water; curious to hear if it's any different if you toss each cupful of water up in the air … particularly on a very cold day, preferably at least in the minus teens. Maybe a bit late for such temps this season? Certainly no chance of such temps during the next week! By dfattack on Sunday, February 08, 2009 - 10:08 am: It's what "lvr1000" said. In a house the hot water lines are used less, therefore they freeze first. Not sure if "ripcord" was referring to household pipes or not, but that is the answer. By redrev on Tuesday, February 17, 2009 - 04:34 am: an old theory is that hot water freezes quicker, but i think the hot water lines get used less. you used to see a lot of ice makers hooked to hot water so everyone thought it froze quicker. actually they hooked to hot water to make the ice appear more clear.
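For what it's worth, frnash's energy bookkeeping earlier in the thread is easy to check numerically. This is a rough sketch that follows the post's simplification of using 1 cal/g·°C for both liquid water and ice (real ice is closer to 0.5); the function name is my own.

```python
C_WATER = 1.0      # specific heat of water, cal/(g*degC)
C_ICE = 1.0        # the post's simplification; real ice is ~0.5
L_FUSION = 79.72   # heat of fusion, cal/g

def calories_to_chill(mass_g, t_initial_c, t_ambient_c=-40.0):
    """Total heat that must be removed to take liquid water at
    t_initial_c all the way down to solid ice at t_ambient_c."""
    cool_to_freezing = mass_g * C_WATER * (t_initial_c - 0.0)
    freeze = mass_g * L_FUSION
    cool_ice = mass_g * C_ICE * (0.0 - t_ambient_c)
    return cool_to_freezing + freeze + cool_ice

# One gram of near-boiling vs. near-freezing water:
print(round(calories_to_chill(1, 95), 2))  # 214.72
print(round(calories_to_chill(1, 5), 2))   # 124.72
```

As the post argues, the hotter sample always has strictly more heat to shed, so any real "hot freezes first" effect has to come from side channels such as evaporation (losing mass), convection, or supercooling, not from this bulk energy budget.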
STL for C++ Programmers Author: Leen Ammeraal Publisher: John Wiley Reviewer: Bob Adkins Good news—STL, Standard Template Library, is alive and well on Linux. Leen Ammeraal demonstrates this as well as his considerable skills as a master teacher in STL for C++ Programmers. Although the primary focus is not Linux, his many excellent examples are easily adapted to the g++ environment in Linux 2.0++ (his examples adapted for Linux can be found at http://www.cwareco.com/download.html). Ammeraal writes concise yet thorough explanations on each aspect of using STL. STL started in the 1970s with Alexander Stepanov's ideas about designing general algorithms. Stepanov, together with Meng Lee, took these ideas to HP and developed the first C++ based STL. By 1994, STL was accepted into the C++ draft standard by the ANSI/ISO C++ standards committee. STL distinguishes general algorithms from the more specialized data and methods encapsulated by ordinary abstract data types. In this way, complex and powerful algorithms can be implemented independently of the data to which they are applied, allowing for generalization and reuse of these algorithms. In STL the more familiar object abstraction is reserved for data and methods. These are then tailored and bound to the characteristics of their underlying container type, such as sequence containers and associative containers. Examples of this distinction with respect to sequence containers, such as vector objects, are begin, end and insert. These methods access and manipulate the underlying data of the vector container class. However, these methods are specific to the treatment of the data and should not be confused with more general algorithms such as find, sort and other advanced numeric algorithms (e.g., accumulate or inner product). General algorithms are a kind of method abstraction in contrast to more traditional data abstraction. STL support has been available on Linux since GNU's libg++ 2.6.2.
Now with release 18.104.22.168, the library is quite usable for most major features with the exception of name space scoping. There are minor differences with other implementations, such as Borland's BC5 environment, but these differences mostly concern header naming conventions. There is also a curious problem with fstream which involves an unexpected file access mode default. g++ 2.8.0 will offer a more complete STL based on newer code from SGI and a complete redesign of the compiler's template implementation. Unfortunately, g++ 2.8.0 is not expected to fix the problems with using name spaces. From the beginning Leen Ammeraal presents a quick and practical startup for the STL beginner. He then explains how to use the sequence containers (vectors, lists and deques), the associative containers (sets and maps) and, later, examines containers derived from these basic types such as stacks, queues and priority queues. As a simple application, he shows how to build a telephone directory using associative map containers. Later, he demonstrates a more complex map application, called a concordance, which produces a line-oriented index of all words in a text file. He also shows function objects which can be used to build custom ordering relationships among the elements of a container. He moves on to algorithms and the practical details of STL's generic algorithms for manipulating sequences and for sorting. He demonstrates the built-in numeric algorithms which make STL attractive for implementing statistical analysis such as the Least Squares Method. As his final chapter, Ammeraal presents a wonderfully fun example of “Very Large Numbers”. Here Ammeraal uses STL to calculate pi to an arbitrarily large number of digits. Ammeraal exploits the power of STL to reduce the implementation complexities of defining and operating on extremely large numbers.
He notes that, thanks to the STL's vector container, this version is “simpler and more elegant” than an earlier solution he presented in his book Algorithms and Data Structures in C++. For added spice, I modified his program to generate a histogram of the digits computed for pi. At 100,000 places, digit “1” is a very slight favorite. Moreover, with this example Linux shows its strength. After turning on full g++ optimization, I was able to calculate these 100,000 digits in just under 20 minutes. Under DOS/Windows, Ammeraal indicated that this same calculation took several hours using BC5.
or episodically changes the ground surface and complicates flood hazard mapping, especially along the Atlantic coast, which has dunes that are reshaped by storms, and, to a lesser degree, the Gulf coast. Storm surge, tides, and waves are the greatest contributors to coastal flooding. Storm surge is the pulse of water that washes onto shore during a storm, measured as the difference between the height of the storm tide and the predicted astronomical tide. It is driven by wind and the inverse barometric effect of low atmospheric pressure, and is influenced by tides and by uneven bathymetric and topographic surfaces. Faster wind speeds and larger storms create a greater storm surge potential. Storm surge alters topographic features that might otherwise dampen the effects of surge and wave forces. For example, sand dunes that normally prevent storm water progress onto a barrier island may be reshaped or even removed during a severe storm. Water surface elevations at the shoreline are a combination of the average water level determined by wind setup (due to the direct action of wind stresses at the air-sea interface) and wave setup (due to breaking waves, Figure 5.2) and a fluctuating water level caused by wave runup (the maximum extent of high-velocity uprush of individual waves above the average water level). All of these factors are included in coastal flood models to estimate the BFE. Storm surge models are often loosely coupled with wave models to calculate the 1 percent annual chance stillwater elevation (SWEL) and the wave dynamics associated with a coastal flooding event. Recent flood studies in Mississippi and Louisiana used loosely coupled two-dimensional (2-D) surge and wave models to calculate the SWEL and wave setup. The SWEL value (with or without wave setup) from the wave and surge models is used to calculate wave crest values using erosion and wave calculations through the Coastal Hazards Analysis and Modeling Program (CHAMP) and the Wave Height Analysis
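The decomposition just described (an average water level built from the stillwater elevation plus wave setup, topped by the fluctuating wave runup) can be sketched as simple arithmetic. The function name and the sample numbers below are invented for illustration; they are not taken from CHAMP or from any FEMA study.

```python
def shoreline_water_level(stillwater, wave_setup, wave_runup):
    """Average water level (SWEL including wind setup, plus wave setup)
    plus the wave-runup extreme, in metres above a common datum."""
    average_level = stillwater + wave_setup
    return average_level + wave_runup

# Hypothetical 1-percent-annual-chance event components:
print(round(shoreline_water_level(3.0, 0.6, 1.2), 2))  # 5.2
```

The point of the decomposition is that surge and wave models can supply the terms separately, which is why the text describes the surge and wave models as only loosely coupled.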
Images by Eric Coombs, Oregon Department of Agriculture (egg and larval track; collecting adults on beating sheets). If images are downloaded and used from the ODA web site please be sure to credit the photographer.
Impact on target plant: The larvae feed on the seeds in the seedpods. Adults feed on pollen.
Collection and release: Collect adult beetles in the spring during flowering. They can be sweep-netted or beaten onto a beating sheet and aspirated into a collecting vial. Release 100-200 adults per site. The beetle is widespread at most sites, therefore further redistribution will be on an as-needed basis. The seed beetle has been released in 19 counties and recovered in 16.
History and comments: The seed beetle Bruchidius villosus was first released in Oregon in 1998, in Marion and Lane Counties. The beetle is an accidental introduction to the East Coast, but went through the TAG testing protocol for safety in order to be introduced into Oregon. The insect has also been released as a biocontrol agent in New Zealand. The beetle has been introduced at numerous sites since 2000. Populations are increasing, and at nursery sites (>5 years old), the bruchid is outcompeting the seed weevil. Limited redistribution began in 2004. In 2007 and 2008, the bruchid was widely redistributed in W Oregon, continuing through 2010. Extensive surveys in 2009 showed that the beetle is widely established throughout the entire Willamette Valley, at several coastal sites, and at elevations up to 2,700 feet. Experiments are being conducted to see if the beetle will survive on French broom in SW OR.
Colour-magnitude diagrams - in which the absolute magnitude (Mv) of a selection of stars is plotted against their colour (B-V) - are frequently used in determining fundamental properties of the stars, and are commonly used as a teaching tool for explaining stellar evolution. In this latter context, Andrew Gould has created an Hipparcos colour-magnitude diagram (see above) that is colour-coded by transverse velocity, the latter derived from the Hipparcos proper motion measurements. The data are selected to ensure the inclusion of a good sample of luminous stars, a representative sample of local stars, and most of the halo stars observed by Hipparcos. Colour-coding the data according to the transverse velocity of the stars reveals the connection between the photometric and kinematic properties of the local population. Younger stars, typically found early on the main sequence, tend to move slower than later main sequence stars, which are mostly older. See astro-ph/0403506 for a more thorough description of this figure and details on its interpretation. (A high-resolution image is also available there.) Gaia's precise astrometric and photometric measurements for more than 1 billion stars in our Galaxy, and beyond, will result in improved constraints on models of stellar evolution. Image courtesy of Andrew Gould.
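The transverse velocities that colour-code the diagram follow from a standard conversion of proper motion and parallax. This sketch assumes both quantities are given in milliarcseconds, the units the Hipparcos catalogue uses; the function name is my own.

```python
K = 4.74047  # km/s corresponding to 1 arcsec/yr of proper motion at 1 pc
             # (one astronomical unit per year expressed in km/s)

def transverse_velocity(pm_mas_per_yr, parallax_mas):
    """Transverse (tangential) velocity in km/s from a proper motion
    and a parallax, both in milliarcseconds."""
    distance_pc = 1000.0 / parallax_mas      # parallax distance in parsecs
    pm_arcsec = pm_mas_per_yr / 1000.0
    return K * pm_arcsec * distance_pc       # equivalently K * pm / parallax

# Barnard's Star: proper motion ~10358 mas/yr, parallax ~547 mas,
# giving a transverse velocity of roughly 90 km/s.
print(round(transverse_velocity(10358, 547), 1))
```

Because both inputs come straight from astrometry, a colour-magnitude diagram colour-coded this way needs no spectroscopy, which is what makes the kinematic separation of disk and halo stars in the figure possible.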
- Describe what you know about tsunamis.
- Explain why the force of even a small wave is enough to knock you over at the beach. (Hint: Think about the mass of water. One liter of water has a mass of one kilogram.)
- What do waves carry?
- Explain the link between earthquakes and tsunamis.
- What happens to a tsunami as it enters shallow water?
- How did the 2004 earthquake in Indonesia kill people as far away as East Africa?
- Describe the steps in forecasting a tsunami.
- How does an invisibility cloak work?
- List four types of waves an invisibility cloak could redirect.
- Explain the role wind plays in developing a seiche.
- When the wind stops and a seiche wave first rolls back, what force is pulling on the water?
- If you cannonballed into a swimming pool, would your splash create a tsunami or a seiche? Explain your answer.
- In 1954, a three-meter (10-foot) wave struck the Chicago lakeshore. The wave swept away and drowned eight people who had been fishing off a pier jutting into Lake Michigan. Based on your reading of this article, what was the likely source of this wave?
- Would Sébastien Guenneau’s tsunami cloak concept be practical? Why or why not? Explain your reasoning.
- How would you protect a coastal area and its inhabitants from the threat of a tsunami?
The recent hurricanes and skyrocketing oil and gasoline prices helped to prove the existence of a new element. In early October 2005, a major research institution announced the discovery of the heaviest element yet known to science. The new element has been named "Governmentium." Governmentium (Gv) has one neutron, 25 assistant neutrons, 88 deputy neutrons, and 198 assistant deputy neutrons, giving it an atomic mass of 312. These 312 particles are held together by forces called 'morons' which are surrounded by vast quantities of lepton-like particles called 'peons.' Since Gv has no electrons, it is inert. However, it can be detected, because it impedes every reaction with which it comes into contact. A minute amount of Gv causes one reaction to take over four days to complete, when it would normally take less than a second! Gv has a normal half-life of 4 years; it does not decay; but instead undergoes a reorganization in which a portion of the assistant neutrons and deputy neutrons exchange places. In fact, Governmentium's mass will actually increase over time, since each reorganization will cause more morons to become neutrons, forming 'isodopes.' This characteristic of moron promotion leads most scientists to believe that Gv is formed whenever morons reach a certain quantity in concentration. This hypothetical quantity is referred to as 'Critical Morass.' When catalyzed with money, Gv becomes "Administratium' (Am) - an element which radiates just as much energy as Gv, since it has half as many peons but twice as many morons.
From Traffic jams without bottlenecks—experimental evidence for the physical mechanism of the formation of a jam by Yuki Sugiyama et al., New J. Phys. 10 (2008) 033001 (including movies). If ever you have been driving on a crowded highway, chances are high that you have taken part in a similar "experiment", just that no one has captured it on film and put it on YouTube. This happened to me last Monday on my way to work: First, I got stuck in a traffic jam at the merging of three lanes into two - no wonder in rush-hour traffic. But then there was a second full stop, a few kilometres down the road, and for no obvious reason at all - no construction site, no junction, no accident... it was the classical phantom traffic jam. This kind of annoying phenomenon occurs on roads with a steady traffic flow if the distances between cars become too small: As soon as a car slows down a bit for whatever reason, the following car must brake also, and so on. And because drivers are humans and have a reaction time, they brake ever later, and ever harder, and at one point, they come to a full stop. This stop then moves "upstream", in the opposite direction of the traffic flow - it's a shockwave-like phenomenon that has been intensively studied by German physicists since the 1990s. But it seems that no one so far has checked the models that describe the phantom jams in a controlled fashion, and so, the Japanese guys have set up an experiment: Take 22 cars, put them on a road, and tell the drivers to go on at a constant speed. As so often in physics, periodic boundary conditions are a useful trick to simulate a much larger system - the cars are driving on a circular track. It doesn't take long before the shockwave develops. Here is a chart, taken from the paper, that shows the evolution of the flow of the cars: The horizontal axis shows distance along the circular track, the vertical axis indicates time. The lines trace the paths of each of the 22 cars.
The flatter the line, the higher the speed, and a vertical segment of the line means halt. One can see how a perturbation of the steady flow set in after just 40 seconds around metre 150 of the track. At closer inspection, the culprit seems to be a car that was a bit slower than the others for a while. Speeding up (the kink in the orange circle) doesn't help - the following cars have to brake, and the phantom traffic jam can't be avoided anymore. The plot shows nicely how the perturbation - the zone of zero velocity (aka the jam) - travels at constant speed in the direction opposite to the traffic flow. Too bad - phantom traffic jams just happen, it's all physics... - The short paper about this experiment by the Japanese group (Traffic jams without bottlenecks—experimental evidence for the physical mechanism of the formation of a jam, New J. Phys. 10 (2008) 033001) is available as open access. The text is understandable also to non-physicists! And there is a second movie, showing a bird's eye perspective of the developing crisis. - Modern understanding of phantom traffic jams started with the papers Cluster effect in initially homogeneous traffic flow by B. S. Kerner and P. Konhäuser, Phys. Rev. E 48 (1993) R2335 - R2338, and A cellular automaton model for freeway traffic by Kai Nagel and Michael Schreckenberg, J. Phys. I France 2 (1992) 2221-2229. A detailed review is Traffic and Related Self-Driven Many-Particle Systems by Dirk Helbing, Reviews of Modern Physics 73 (2001) 1067-1141, cond-mat/0012229. - Here is a collection of great Java applets which show the emergence of phantom jams in simulations (by Martin Treiber, Technical University Dresden). TAGS: physics, traffic jam, self-organization
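The mechanism in the circular-track experiment can be reproduced with the cellular-automaton model of Nagel and Schreckenberg referenced above. This is a minimal sketch with illustrative parameters (ring length, car count, dawdling probability), not the paper's exact setup.

```python
import random

def step(road, length, vmax=5, p_slow=0.3, rng=random):
    """One parallel update of the Nagel-Schreckenberg ring model.
    road maps cell index -> current speed, one entry per car."""
    cars = sorted(road)
    nxt = {}
    for i, pos in enumerate(cars):
        gap = (cars[(i + 1) % len(cars)] - pos - 1) % length
        v = min(road[pos] + 1, vmax)        # accelerate toward vmax
        v = min(v, gap)                     # never hit the car ahead
        if v > 0 and rng.random() < p_slow:
            v -= 1                          # random human hesitation
        nxt[(pos + v) % length] = v
    return nxt

random.seed(1)
LENGTH = 100
road = {3 * i: 0 for i in range(30)}        # 30 cars, evenly spaced, at rest
jam_seen = False
for _ in range(200):
    road = step(road, LENGTH)
    jam_seen = jam_seen or 0 in road.values()
print(jam_seen)   # cars keep coming to a full stop: the phantom jam
```

Even though every driver follows the same rules and there is no bottleneck, the random hesitation term is enough to nucleate stop-and-go waves that drift backward against the traffic flow, just as in the chart from the paper.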
John Newlands had published his Law of Octaves in 1865. The Law of Octaves had two elements in one box and did not allow space for undiscovered elements, so it was criticized and did not gain recognition. A year earlier (1864) Lothar Meyer published a periodic table which described the placement of 28 elements. Meyer's periodic table ordered the elements into groups arranged in order of their atomic weights. His periodic table arranged the elements into 6 families according to their valence, which was the first attempt to classify the elements according to this property. While many people are aware of Meyer's contribution to the understanding of element periodicity and the development of the periodic table, many have not heard of Alexandre-Emile Béguyer de Chancourtois. de Chancourtois was the first scientist to arrange the chemical elements in order of their atomic weights. In 1862 de Chancourtois presented a paper describing his arrangement of the elements to the French Academy of Sciences. The paper was published in the Academy's journal, Comptes Rendus, but without the actual table. The periodic table did appear in another publication, but it was not as widely read as the Academy's journal. de Chancourtois was a geologist and his paper primarily dealt with geological concepts so his periodic table did not gain the attention of the chemists of the day. The modern periodic table orders the elements according to increasing atomic number rather than increasing atomic weight, but the earlier tables were true periodic tables since they grouped the elements according to periodicity of their chemical and physical properties. Protons, which define elements today, were unknown at the time.
We use polyurethane to make just about everything—garden hoses, furniture, the entirety of my local 99-cent store. It's easy to produce, durable, and dirt cheap. What it isn't is recyclable—there isn't a single natural process that breaks it down. That is, until a newly discovered Amazonian fungus takes a bite. Pestalotiopsis microspora (not shown) is a resident of the Ecuadorian rainforest and was discovered by a group of student researchers led by molecular biochemistry professor Scott Strobel as part of Yale's annual Rainforest Expedition and Laboratory. It's the first fungus species to be able to survive exclusively on polyurethane and, more importantly, able to do so in anaerobic conditions—the same conditions found in the bottom of landfills. This makes the fungus a prime candidate for bioremediation projects that could finally provide an alternative to just burying the plastic and hoping for the best. [Fastcoexist]
There is a vulnerability in Microsoft Windows caused by incorrect processing of malformed Embedded Open Type (EOT) fonts. This vulnerability can be used to achieve remote code execution if a user views a web page containing a reference to a specially crafted font file. From Microsoft: Embedded OpenType (EOT) fonts are a compact form of fonts designed for use on Web pages. These fonts can be embedded in a document. This ensures that a user views the document exactly as the author intended. The EOT format is basically a compressed TrueType font (TTF) file. The TTF file itself can be viewed as a collection of tables. The compression process first transforms some font tables into a different format, divides the file into chunks and then uses a variant of LZ compression to compress each chunk separately. The compressed data thus obtained is added to the EOT header to form a .eot file. The decompression process first analyzes the EOT header, splits the font data into chunks, decompresses each chunk and transforms some of the tables back into TTF format. More on the EOT format and the compression/decompression process can be found at the following links: The vulnerability is an integer overflow that can occur during the conversion of the hdmx table from MicroType (the compressed format used by EOT) back to the TrueType format. By exploiting this integer overflow the attacker can write arbitrary data to a memory location b+x, where b is the buffer location and x is an (almost arbitrary) 32-bit number controlled by the attacker. This vulnerability can be used to achieve remote code execution if a user views a web page containing a reference to a specially crafted font file. Due to the spread and the impact of the vulnerability, exploiting details will not be released at this time.
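The class of bug described here is a classic: a 32-bit size computation wraps around, the resulting allocation comes out far too small, and the data later copied into it lands past the end of the buffer. The arithmetic can be sketched as follows (this is not the actual Windows code, which is not public; the function name and the record sizes are invented for illustration):

```python
MASK32 = 0xFFFFFFFF   # C's 32-bit unsigned arithmetic wraps modulo 2**32

def alloc_size_32(record_count, record_size):
    """What a 32-bit multiply in C yields: the product modulo 2**32."""
    return (record_count * record_size) & MASK32

# A sane hdmx-style table: a small, correct allocation size.
print(alloc_size_32(4, 260))          # 1040

# Attacker-controlled count: the product is 2**33, which wraps to 0,
# so the buffer is allocated far smaller than the data written into it.
print(alloc_size_32(0x40000000, 8))   # 0
```

Once the undersized buffer exists, copying the attacker's (much larger) decompressed table data into it is what produces the controlled write at b+x described above.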
Rare earth magnets are special kinds of magnets, and they behave in a very strange way when exposed to metals — especially copper. The faster they go, the more they slow down. Find out why, and take a look at people trying to force a slow-motion magnet to go fast. Many people have played around with a magnet, but it's tougher to get your hands on a rare earth magnet. Rare earth magnets are made from rare earth elements. These elements are not rare in and of themselves, but they are so scattered throughout the Earth's crust that it's tough to find large amounts of them in one place. They are prized for their unique qualities. Europium, for example, is the reason why you can see red in your TVs and computer screens. Its job cannot be done by anything else. Rare earth magnets are combinations of these elements, tough to get hold of, and have both a stronger magnetic field and a lighter weight than most conventional magnets. This combination of lightness and strength allows them to display a strange quality that we don't see in regular magnets. Moving a magnet around near a coil of wire will induce a current in the wire. Moving current in a wire will create a magnetic field - in other words, an electromagnet. This force opposes the motion of the magnet. This is called Lenz's Law. The faster the magnet goes, the stronger the current it produces. The stronger the current, the stronger the magnetic force. And so, the faster the magnet goes, the more it will be slowed down. Take a look at this, as a falling brick of a rare earth magnet gets slowed down quickly as it picks up speed. (The sudden braking force is also due to the fact that it gets closer to the copper disc.) The most popular demonstration of this law is with a long, hollow copper tube (see top video). Drop anything else through a copper tube, and nothing much will happen. Drop a rare earth magnet through a copper tube, though, and you will see a long, slightly jerky, slow motion drop. 
A very weak magnet will mostly stall only once it picks up speed, towards the middle or the end of the tube. A stronger one, like this, will make many little drops: as it falls, it induces a current and gets stalled by the electromagnetic force of that induced current. As it stops moving, the current drops away, and it falls again, repeating the cycle over and over. Top Image of Rare Earth Element Thulium: Alchemist-hp
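The feedback loop described above (speed induces current, current produces an opposing force) can be sketched numerically. This is a hedged illustration: the mass and drag coefficient below are made-up values, and real eddy-current braking is only approximately linear in speed.

```python
# Minimal numerical sketch of Lenz's-law braking: the drag force grows with
# speed (F = -k*v), so dv/dt = g - (k/m)*v. The mass and drag coefficient
# are assumed illustrative values, not measured ones.
g = 9.81          # m/s^2, gravitational acceleration
m = 0.05          # kg, magnet mass (assumed)
k = 2.0           # kg/s, eddy-current drag coefficient (assumed)

v, dt = 0.0, 1e-4
for _ in range(200_000):              # simulate 20 s of falling
    v += (g - (k / m) * v) * dt       # Euler step of dv/dt = g - (k/m)*v

v_terminal = m * g / k                # analytic terminal velocity
print(v, v_terminal)
```

However fast the magnet starts, the drag grows with speed until it balances gravity, which is exactly the slow, steady descent seen in the copper-tube demonstration.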
According to the general theory of relativity, space itself is affected by the movement of massive objects. Like a ripple in a lake caused by a fallen rock, gravitational waves ripple out from the source of the motion and radiate through space, possibly affecting other objects. Gravitational waves distort an object's gravitational field. This distortion can cause the object to change shape: a spherical configuration could change into an ellipsoid. Gravitational waves can affect all of space. Since nothing can travel faster than the speed of light, however, all of space cannot be affected at once; instead, the waves spread out and flow across space. The strongest gravitational waves come from very massive, dense objects that move at high velocities. The collapse of a star into a black hole could put out a large amount of gravitational waves. But even the strongest gravitational waves should be extremely weak by the time they reach the Earth: only a quadrillionth of their original strength remains. Gravitational waves are hard to detect. Until the 1960s, there were no gravitational wave detectors. During that decade, Joseph Weber built the first ones: massive aluminum cylinders, cooled to low temperatures, which were expected to oscillate in reaction to gravitational waves. Detectors have remained generally the same since then, except that they are sometimes made of niobium instead of aluminum. Even with these detectors, no conclusive evidence for gravitational waves has been found. Scientists have suggested several different ways to look for gravitational waves. One idea is to use the Earth and a spacecraft as free particles; an observer would then only have to look for oscillations in the time it takes for radio signals to travel between the two points.
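The radio-ranging idea in the last paragraph can be put on a rough numerical footing. The strain value below is an assumed order-of-magnitude figure for illustration, not a number from the article; the point is only that a wave of strain h shifts a light-travel time t by roughly h times t.

```python
# Rough scale of the spacecraft-ranging idea: a gravitational wave of strain h
# perturbs a light-travel time t by roughly delta_t ~ h * t. The strain and
# distance below are assumed illustrative values, not figures from the article.
c = 2.998e8            # speed of light, m/s
distance = 7.5e11      # m, roughly Earth to a spacecraft near Jupiter (~5 AU)
t = distance / c       # one-way radio travel time, about 2500 s
h = 1e-16              # assumed wave strain at Earth
delta_t = h * t        # timing wobble the observer would have to resolve
print(t, delta_t)
```

Even with a very long baseline, the timing shift is tiny, which is why such Doppler-tracking searches demand extraordinarily stable clocks.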
Caribbean Monk Seals, Monachus tropicalis

Taxonomy: Animalia › Chordata › Mammalia › Carnivora › Phocidae › Monachus tropicalis

Description & Behavior
Caribbean monk seals, Monachus tropicalis (Gray, 1850), formerly also known as West Indian monk seals or West Indian seals, are now extinct. Adults of this species were grayish-brown; females were slightly darker, with a yellowish color underneath and on their muzzles. They reached between 2-2.4 m in length and weighed about 160 kg. These seals were similar in appearance to the closely related Hawaiian monk seals, Monachus schauinslandi, which still exist but are critically endangered, and Mediterranean monk seals, Monachus monachus, which are also critically endangered. The main predators of Caribbean monk seals were sharks and humans.

World Range & Habitat
There have been no confirmed sightings of Caribbean monk seals since 1952. The species is thought to have originally inhabited the beaches, cays, and reefs of the Caribbean, ranging from the Greater Antilles to the northern Lesser Antilles, the Bahamas, the northeastern coasts of Central America, Mexico's Yucatan Peninsula, and the Florida Keys. The last remaining colony is believed to have lived at Serranilla Bank, halfway between Nicaragua and Jamaica.

Feeding Behavior (Ecology)
Caribbean monk seals probably had diets similar to that of Hawaiian monk seals, which includes regional fish, lobsters, octopuses, reef fishes, and eels. The breeding season for the Caribbean monk seal began in December. Pups were born with black fur coats and likely measured about 1 m in length.

Conservation Status & Comments
Caribbean monk seals were widely hunted for their blubber, used for oil, and for their meat. The crew of Columbus's ship recorded killing eight "sea wolves", likely Caribbean monk seals, in 1495. Local fishermen hunted this species commercially as well.
Caribbean monk seals were known to be very nonaggressive as well as sensitive to disturbance, traits that humans exploited until the species was extinct. Although sightings of a seal-like animal were reported in Puerto Rican waters, near the north coast of Haiti, along the coast of the Dominican Republic, and in the eastern Bahamas, a 6,377 km aerial survey of the former range of Caribbean monk seals in 1973 provided no evidence that members of this species still exist (and there have been no reported sightings since that time). The species is listed as Extinct on the IUCN Red List and as an Appendix I species under CITES.
Elizabeth Kelly
Pilsen Community Academy
1420 W. 17th Street
Chicago IL 60607

Students will reinforce their measurement skills. Students will manipulate a launcher to create specific angles. Students will be introduced to the concept of tangents. This is for the 7th and 8th grades.

Each group of four students will need:
- A water rocket and a pump.
- Launchers to rest the rockets against.
- Protractors to measure the angles at the launch site.
- A tape measure.

1. Students will determine what angle will launch water rockets to the highest altitude. Students will use the tangents of angles to determine the height. The angles to be used are 30, 45, 60, 75, and 90 degrees. Students will launch the rockets at the five different angles listed above. Use two ounces of water for each launch and pump 15 times.
2. Students will measure the distance from the launch site to the spot on the ground below the rocket at its highest altitude.
3. Students will record this data. Example: 30 degrees, 14 ft.
4. Students will then find the altitude by using the formula tangent of the angle = opposite/adjacent. Example: tan 30 = x / (distance from launch site to the spot below the rocket).

Students will turn in their finished data sheet. Observations of the students will be noted during the experiments to determine their use and understanding of a protractor. Students will need to use data from a chart and record accurate data in order to complete their data sheet. For the purpose of this experiment, air resistance and wind will be ignored. Students will learn that the rockets will reach the highest altitude when the rockets are shot from the larger angles.

Sneider, Cary I. Experimenting with Model Rockets. The Regents of the University of California. Berkeley, California. 1989.
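The step-4 formula, height = distance × tan(launch angle), can be checked in a few lines of Python. The 14 ft distance is the sample value from step 3; a real class would plug in each group's measured distances.

```python
# The lesson's height formula: height = distance * tan(launch angle).
# The 14 ft distance is the sample value from step 3 of the activity.
import math

distance_ft = 14.0
heights = {}
for angle in (30, 45, 60, 75):
    heights[angle] = distance_ft * math.tan(math.radians(angle))
    print(angle, round(heights[angle], 1))   # e.g. 30 -> 8.1 ft
# At 90 degrees the tangent is undefined: the rocket is straight overhead,
# so there is no horizontal distance and the triangulation breaks down.
```

Note that the 90-degree launch has to be discussed separately in class, since tan 90° is undefined.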
January 16, 2013

The face of the blue bottle fly (Calliphora vomitoria). New research shows how this carrion-eater carries the mammals of the forest in his stomach. Photo by: J.J. Harrison.

Led by Sebastien Calvignac-Spencer with the Robert Koch Institute, a team of scientists has analyzed the DNA found in the stomachs of carrion-feeding flies, known as blow flies and flesh flies; these are insects that feed on carrion, open wounds, or even feces, taking the animals' DNA with them. Unlike the leech study, which only focused on Vietnam, Calvignac-Spencer and his team took samples from flies in two parks, one in Madagascar (Kirindy Reserve) and another in Cote d'Ivoire (Tao National Park). "The main advantage in using flies is their worldwide distribution. Terrestrial leeches are on the contrary restricted to the tropical belt and even there cannot be found everywhere (for example they do not occur in the two forests in which we led our study)," Calvignac-Spencer told mongabay.com. In fact, since terrestrial leeches are found in neither Kirindy Reserve nor Tao National Park, flies were the next logical choice.

Pygmy hippos in a Kenyan Reserve.

"One surprise was that we could sample the entire local community of primates [in Tao National Park] by analyzing ca. 120 flies," Calvignac-Spencer told mongabay.com, which he called "not a tremendous effort" for getting DNA on all nine primate species in the protected area. In Madagascar's Kirindy Reserve, the researchers were able to identify DNA from four mammal species, including the gray mouse lemur (Microcebus murinus) and the fat-tailed dwarf lemur (Cheirogaleus medius). While four mammals may not sound like much, the researchers were able to get this data from just 40 or so flies. While the study's authors write that carrion flies "represent an extraordinary and thus far unexploited resource of mammal DNA," Calvignac-Spencer says that this doesn't mean leeches should be avoided where available.
"Mammal DNA will probably be of better quality and persist longer in leeches than in flies," he notes. Indeed, last year's study found that DNA in leech blood-meals was still viable after four months, or about the time it takes a terrestrial leech to get hungry again. "As exemplified here, both [subjects] have pros and cons," Calvignac-Spencer says. "So we expand the toolbox for conservation biologists but in no case say that it will supersede in all instances those already available." One other expansion that may be underway is using the new technology to track not just mammals, but birds, reptiles and amphibians too. In fact, Calvignac-Spencer's research identified the DNA of the water rail (Rallus aquaticus) in a number of flies from Kirindy Reserve. In Tao National Park, they were able to extract the DNA of a hornbill and a frog, though they were not able to track it all the way down to the species level.

Terrestrial leech in Borneo. Photo by: Rhett A. Butler.

Still, even as promising as the new method is, there are still some hiccups. For example, a number of the mammals could not be identified to their species level. Rats, mice, and shrews proved particularly difficult. "The reason is that the database to which we compared our sequences did not contain sequences for all possible species," explains Calvignac-Spencer. "This shows that the precision of our assignment (family/genus/species level) critically depend on the quality of reference databases, although an exhaustive database is not a strict prerequisite." Still, the new method has a ton of advantages: it is non-invasive, it requires little more in the field than catching flies, and it may be done on the cheap, according to Calvignac-Spencer, especially if "next generation sequencing strategies are implemented." "We expect that costs in personnel will sharply decrease when compared to classical methods, e.g. transects, etc," he notes.
"Indeed there is no need to train people to recognize species (which takes a lot of time) and setting up fly traps is extremely fast and easy." Someday, leeches and flies may be used not only to identify which species are lurking in the woods, but also to estimate abundances, track population crises, prove or disprove the existence of a cryptic species, and even to find out the best places to look for new species. CITATION: Sébastien Calvignac-Spencer, Kevin Merkel, Nadine Kutzner, Hjalmar Kühl, Christophe Boesch, Peter M. Kappeler, Sonja Metzger, Grit Schubert, Fabian H. Leendertz. Carrion fly-derived DNA as a tool for comprehensive and cost-effective assessment of mammalian biodiversity. Molecular Ecology. 2013. DOI: 10.1111/mec.12183 Does the Tasmanian tiger exist? Is the saola extinct? Ask the leeches (04/30/2012) The use of remote camera traps, which photograph animals as they pass, has revolutionized research on endangered and cryptic species. The tool has even allowed scientists to document animals new to science or feared extinct. But as important as camera traps have become, they are still prohibitively expensive for many conservationists and require many grueling hours in remote forests. A new paper in Current Biology, however, announces an incredibly innovative and cheaper way of recording rare mammals: seek out the leeches that feed on them. The research found that the presence of mammals, at least, can be determined by testing the victim's blood for DNA stored in the leech. How a text message could save an elephant or a rhino from a poacher (01/15/2013) Soon a text message may save an elephant's or rhino's life. The Kenya Wildlife Service (KWS) is implementing a new alarm system in some protected areas that will alert rangers of intruders via a text message, reports the Guardian. Elephants and rhinos have been killed in record numbers across Africa as demand for illegal rhino horns and ivory in Asia has skyrocketed. 
Advanced technology reveals massive tree die-off in remote, unexplored parts of the Amazon (12/12/2012) Severe drought conditions in 2010 appear to have substantially increased tree mortality in the Western Amazon, a region thought largely immune from the worst effects of changes occurring in other parts of the world's largest rainforest, reported research presented last week at the fall meeting of the American Geophysical Union (AGU). The findings suggest that the Amazon may face higher-than-expected vulnerability to climate change, potentially undercutting its ability to help mitigate greenhouse gas emissions by absorbing carbon dioxide through faster growth. Conservationists turn camera traps on tiger poachers (11/12/2012) Remote camera traps, which take photos or video when a sensor is triggered, have been increasingly used to document rare and shy wildlife, but now conservationists are taking the technology one step further: detecting poachers. Already, camera traps set up for wildlife have captured images of park trespassers and poachers worldwide, but for the first time conservationists are setting camera traps with the specific goal of tracking illegal activity. 3-D laser mapping shows elephants have big impact on trees (08/06/2012) Scientists have long known that African elephants (Loxodonta africana) are talented tree-topplers, able to take down even large trees in order to gobble out-of-reach leaves. However the extent of his behavior across a large area has been difficult to quantify. But a new study in Ecology Letters has used a bird's-eye view—with 3-D—of Kruger National Park in South Africa to determine the impact of elephants on trees. 10 African countries to develop satellite-based deforestation tracking systems with help of Brazil (07/30/2012) Ten tropical African countries will receive training and support to develop national forest monitoring systems, reports the United Nations. 
Brazil, which has an advanced deforestation tracking system, will guide the initiative in partnership with the Central Africa Forests Commission (COMIFAC) and the UN Food and Agriculture Organization (FAO). Smartphones promoted as a tool for indigenous forest protection (07/23/2012) Smartphones beeping in the woods may be a welcome presence that augurs the increased ability of indigenous communities to be stewards of their own biodiverse forests. Representatives of these communities and their supporters have advocated that international conservation policies like Reduced Emissions through Deforestation and Degradation (REDD) be increasingly managed by the communities themselves. Google Earth used to discover unknown forest in Angola, scientists find it full of rare birds (07/09/2012) An expedition, followed up by some computer hunting on Google Earth, has discovered large remnants of old growth forest, including thriving bird communities, in the mountains of Angola. The Namba Mountains in Angola were expected to contain around 100 hectares of forest, but an on-the-ground survey, coupled with online research, has discovered numerous forest fragments totaling around 590 hectares in the remote mountains, boosting the chances for many rare species.
These last two bases – called 5-formylcytosine and 5-carboxylcytosine – are actually versions of cytosine that have been modified by Tet proteins, molecular entities thought to play a role in DNA demethylation and stem cell reprogramming. The discovery could advance stem cell research by giving a glimpse into the DNA changes – such as the removal of chemical groups through demethylation – that could reprogram adult cells to make them act like stem cells.

Basic DNA bases
1.–4. adenine, cytosine, guanine, thymine

Variant bases of cytosine
5. 5-methylcytosine (a methyl group is tacked onto a cytosine)
6. 5-hydroxymethylcytosine (Tet proteins can convert 5-methylC, the fifth base, to 5-hydroxymethylC, the sixth)
7. 5-formylcytosine (from 5-methylcytosine, via Tet)
8. 5-carboxylcytosine (from 5-methylcytosine, via Tet)

Science - Tet Proteins Can Convert 5-Methylcytosine to 5-Formylcytosine and 5-Carboxylcytosine

5-methylcytosine (5mC) in DNA plays an important role in gene expression, genomic imprinting, and suppression of transposable elements. 5mC can be converted to 5-hydroxymethylcytosine (5hmC) by the Tet proteins. Here, we show that, in addition to 5hmC, the Tet proteins can generate 5-formylcytosine (5fC) and 5-carboxylcytosine (5caC) from 5mC in an enzymatic activity–dependent manner. Furthermore, we reveal the presence of 5fC and 5caC in genomic DNA of mouse ES cells and mouse organs. The genomic content of 5hmC, 5fC, and 5caC can be increased or reduced through overexpression or depletion of Tet proteins. Thus, we identify two previously unknown cytosine derivatives in genomic DNA as the products of Tet proteins. Our study raises the possibility that DNA demethylation may occur through Tet-catalyzed oxidation followed by decarboxylation.

Much is known about the “fifth base,” 5-methylcytosine, which arises when a chemical tag or methyl group is tacked onto a cytosine. This methylation is associated with gene silencing, as it causes the DNA’s double helix to fold even tighter upon itself. 
Last year, Zhang’s group reported that Tet proteins can convert 5-methylC (the fifth base) to 5-hydroxymethylC (the sixth base) in the first of a four-step reaction leading back to bare-boned cytosine. But try as they might, the researchers could not continue the reaction on to the seventh and eighth bases, called 5-formylC and 5-carboxyC. The problem, they eventually found, was not that Tet wasn’t taking the second and third steps; it was that their experimental assay wasn’t sensitive enough to detect them. Once they realized the limitations of the assay, they redesigned it and were in fact able to detect the two newest bases of DNA. The researchers then examined embryonic stem cells as well as mouse organs and found that both bases can be detected in genomic DNA. The finding could have important implications for stem cell research, as it could provide researchers with new tools to erase previous methylation patterns to reprogram adult cells. It could also inform cancer research, as it could give scientists the opportunity to reactivate tumor suppressor genes that had been silenced by DNA methylation. 18 pages of supplemental information
While most constants are only defined in one namespace, the case-insensitive true, false, and null constants are defined in ALL namespaces. So, this is not valid:

<?php
namespace false;
const ENT_QUOTES = 'My value';
echo ENT_QUOTES; // Outputs as expected: 'My value'
const FALSE = 'Odd, eh?'; // FATAL ERROR!
?>

Fatal error: Cannot redeclare constant 'FALSE' in /Volumes/WebServer/0gb.us/test.php on line 5

You can define a constant by using the define() function or, as of PHP 5.3.0, by using the const keyword outside a class definition. Once a constant is defined, it can never be changed or undefined.

You can get the value of a constant by simply specifying its name. Unlike with variables, you should not prepend a constant with a $. You can also use the function constant() to read a constant's value if you wish to obtain the constant's name dynamically. Use get_defined_constants() to get a list of all defined constants.

Note: Constants and (global) variables are in a different namespace. This implies that for example TRUE and $TRUE are generally different.

If you use an undefined constant, PHP assumes that you mean the name of the constant itself, just as if you called it as a string (CONSTANT vs "CONSTANT"). An error of level E_NOTICE will be issued when this happens. See also the manual entry on why $foo[bar] is wrong (unless you first define() bar as a constant). If you simply want to check if a constant is set, use the defined() function.

These are the differences between constants and variables:
- Constants do not have a dollar sign ($) before them;
- Constants may only be defined using the define() function, not by simple assignment;
- Constants may be defined and accessed anywhere without regard to variable scoping rules;
- Constants may not be redefined or undefined once they have been set; and
- Constants may only evaluate to scalar values.

Example #1 Defining Constants
<?php
define("CONSTANT", "Hello world.");
echo CONSTANT; // outputs "Hello world."
echo Constant; // outputs "Constant" and issues a notice.
?>

Example #2 Defining Constants using the const keyword
<?php
// Works as of PHP 5.3.0
const CONSTANT = 'Hello World';
?>

As opposed to defining constants using define(), constants defined using the const keyword must be declared at the top-level scope because they are defined at compile-time. This means that they cannot be declared inside functions, loops or if statements. See also Class Constants.

Don't let the comparison between const (in the global context) and define() confuse you: while define() allows expressions as the value, const does not. In that sense it behaves exactly as const (in class context) does.

// this works
//  * Path to the root of the application

// this does not
//  * Path to configuration files
const PATH_CONFIG = PATH_ROOT . "/config";

// this does
//  * Path to configuration files - DEPRECATED, use PATH_CONFIG
const PATH_CONF = PATH_CONFIG;

Constant names shouldn't include operators; otherwise PHP doesn't take them as part of the constant name and tries to evaluate them:

define("SALARY-WORK", 0.02); // set the proportion
$salary = SALARY-WORK * $work; // tries to subtract WORK times $work from SALARY
First glimpses inside an anti-atom
Oct 30, 2002

Physicists working on the Antiproton Decelerator at CERN have studied the internal states of anti-hydrogen atoms for the first time. The ATRAP team found that the antiprotons and positrons in their experiment combine to form anti-hydrogen atoms in highly excited states. If the anti-atoms can be trapped in their ground state, it should be possible to compare the atomic structure of anti-hydrogen with ordinary hydrogen and perform the most accurate ever tests of CPT (charge-parity-time) symmetry. Any violation of CPT symmetry would require new physics beyond the Standard Model of particle physics (G Gabrielse et al. 2002 Physical Review Letters in press).

The anti-hydrogen atoms were produced from antiprotons from CERN’s Antiproton Decelerator and positrons from a radioactive sodium-22 source. The positrons, which were trapped between sets of antiprotons in a “Penning” trap, cooled the antiprotons. When both reach a similar temperature, some combine to form anti-hydrogen atoms, each consisting of a positron orbiting an antiproton nucleus. These anti-atoms, which are electrically neutral, drift out of the trap.

Any anti-hydrogen atoms moving along the axis of the apparatus traverse a strong electric field that removes the positron from the anti-atom. This “field-ionisation” technique allows the resulting negatively charged antiprotons to be trapped and counted. Using this technique the researchers were able to produce nearly 170 000 cold anti-hydrogen atoms, meaning that a remarkable 11% of the antiprotons in the Penning trap formed anti-hydrogen atoms. This compares well with previous experiments at CERN: two months ago, researchers on the ATHENA collaboration produced about 50 000 anti-hydrogen atoms using a similar trapping technique. 
ATRAP’s field-ionisation technique also gives information about the internal states of the anti-hydrogen atoms, showing that the principal quantum number n is between about 43 and 55 (where n=1 corresponds to the ground state). By changing the strength of the ionising electric field the researchers hope to discover more about the internal state of the anti-hydrogen atoms, and to learn how to de-excite them to the ground state. This knowledge will be essential because hydrogen atoms and anti-atoms can only be trapped if they are in their ground state. This high rate of production, and the fact that the anti-atoms are formed in highly excited states, suggests that the anti-hydrogen atoms are formed in three-body collisions between two positrons and an antiproton.

The ATRAP collaboration, which includes researchers from the US, Switzerland, Germany and Canada, first demonstrated the cooling of antiprotons with positrons in a Penning trap last year. Since then they have carried out more detailed studies of this cooling process to ensure that the antiproton loss observed during positron cooling is indeed due to the formation of anti-hydrogen and not other mechanisms. The team is confident that every recorded event comes from the production of an anti-hydrogen atom and that their measurements are free of background.

The ultimate goal of the experiments will be to trap cold anti-hydrogen atoms and study their spectra in detail. Comparing the spectra of anti-hydrogen with hydrogen, and studying the transition from the n=2 to the n=1 state in particular, will give researchers new insights into the differences between matter and antimatter.

About the author
Belle Dumé is Science Writer at PhysicsWeb.
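To see why ground-state trapping matters, it helps to put numbers on the reported n = 43-55 states. Assuming (by CPT symmetry) that anti-hydrogen shares ordinary hydrogen's Bohr energy levels, a quick calculation shows how weakly bound they are:

```python
# Back-of-envelope check, assuming (by CPT symmetry) that anti-hydrogen
# shares ordinary hydrogen's Bohr energy levels, E_n = -13.6 eV / n^2.
RYDBERG_EV = 13.606   # hydrogen ground-state binding energy, eV

def binding_mev(n):
    """Bohr-model binding energy of level n, in milli-electronvolts."""
    return RYDBERG_EV / n ** 2 * 1000.0

for n in (1, 43, 55):
    print(n, round(binding_mev(n), 2))
# The n = 43-55 states are bound by only a few meV, thousands of times more
# weakly than the ground state, which is why a modest electric field can
# strip off the positron and why the anti-atoms must be de-excited first.
```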
You know you blog about diverse topics when searches for "creamed spinach" and "mashed turnips" bracket things like "torture prisoners" and "why are echinoderms important". No good invertebrate-focused biologist could ever leave that last question unanswered, so here goes: - They have a very cool water vascular system. - They have mutable connective tissue, meaning they can dynamically vary the rigidity of their skeleton. - They're deuterostomes, meaning that their embryonic development is very similar to chordate (e.g., human, bird) development, and thus they're useful for developmental-biology research. - Many are keystone species in intertidal and marine communities. - They're terribly cute, what with their tube feet and all.
A tour to a century old underground mine provides much more than expected.
Tags: experiments, mines, Minnesota, MINOS, neutrinos, Physics, science, Soudan, tours

On March 10, 2010 at 10:57 pm
Very well done, Katie. Both informative and leaving one wanting to know more. I was aware of this project but not with details such as the one about neutrinos being shot from Illinois’ underground location.

On March 10, 2010 at 11:17 pm
I liked it

On March 10, 2010 at 11:24 pm

On March 11, 2010 at 1:26 am
Fascinating! They used the term “neutrino waves” on Star Trek. Always interesting when I see something from scifi in the real world.

On March 11, 2010 at 3:17 am

On March 11, 2010 at 4:01 am
A fascinating article, it sounds like you had a great day out, it sounds like a great place to visit.

On March 11, 2010 at 4:57 am
Interesting and definitely out of my understanding. Sure the lab exists for better comprehension of what we need to know about matter except ourselves.

On March 11, 2010 at 5:28 am

On March 11, 2010 at 10:38 am
That is really cool. I think this is the same lab that took a photo of the sun – from under the earth (photographic exposure from the neutrinos they are capturing) Unreal!

On March 12, 2010 at 5:38 am
interesting, there is a lot we don’t know about particles that pass right through us (mobiles, wireless, broadband, microwaves etc) what impact do they have on our organs as they pass through? Maybe this place is helping our understanding

On March 15, 2010 at 4:20 pm
really well done maybe you revealed classified experiment down there . take care.:)

On April 9, 2010 at 5:40 pm
I’ve been in two mines before.One a former gold mine the other an underground cave…also discovered in Minnesota. I don’t pretend to understand that part of science,but it was an interesting article that makes one want to find our more.
<urn:uuid:5c9a4619-87b0-4c0c-bb34-fb9549e99fda>
3.109375
483
Comment Section
Science & Tech.
73.406061
Keep Watching the Ice Meet the satellites bringing data to the discussion of global warming - By Ben Iannotta - Air & Space magazine, September 2006 IN EARLY MAY, WHEN MOST PEOPLE in the United States enjoy the warm days of spring, veteran NASA glaciologist Jay Zwally instead heads north to the ice-bound edge of western Greenland. It takes two aircraft, an LC-130 transport and a de Havilland Twin Otter, to get him to a collection of tents pitched on a wooden platform, known by glaciologists as the Swiss Camp. The name is a remnant of its origins as a research site operated by a Swiss university, now used by NASA and others to measure the effects of climate change. Just south of the camp lies the Jakobshavn Glacier, a thick slab of ice that is gradually sliding downhill toward the Greenland coast. At the water’s edge it becomes the Jakobshavn Isbrae, a tongue of ice spilling into a natural inlet. The surface of the glacier melts in the summer, and Zwally needs to prepare his Global Positioning System sensors in the ice to measure how the surface water affects the glacier’s velocity. When Zwally is not on the glacier, he’s in Maryland running the science side of a $283 million NASA satellite project that is watching it. Whether deploying equipment on the ice or into orbit, he is seeking data to answer some of the thorniest questions in the debate over global climate change: How quickly will Earth’s ice melt as the planet continues its expected warming trend, and what will the extra water do to sea levels? In the community of glaciologists, Zwally’s training as a mechanical engineer and physicist is unusual. As a young engineer in the 1960s, he built part of a solar wind detector that flew aboard NASA’s Explorer 34 scientific satellite. He then earned a Ph.D. in physics from the University of Maryland but couldn’t find a job in that field. 
“I got this wonderful opportunity to go to the National Science Foundation and I got into glaciology, polar research,” Zwally says. In 1974, he moved to NASA, and now works at the Goddard Space Flight Center in Greenbelt, Maryland. In the mid-1980s, Zwally first conceived of a satellite that could measure Earth’s ice by bouncing infrared laser pulses off Earth’s ice sheets. In January 2003, NASA launched the Ice Cloud and Land Elevation Satellite, or ICESat, with Zwally serving as project scientist. Forty times a second, ICESat fires its laser, catches the reflections with its telescope, and calculates the height of the ice based on how long it takes the signals to return. ICESat’s altimeter data will enable scientists to calculate the volume of the ice still present in Earth’s glaciers. By comparing the measurements taken over the life of the satellite, scientists will figure out whether Earth’s glaciers overall are losing or gaining ice, and if so how much. ICESat is one of two satellite systems NASA is counting on to make the first truly quantitative, three-dimensional measurements of the planet’s ice cover. The second system, the Gravity Recovery and Climate Experiment, or GRACE, consists of twin satellites that can sense the shrinking of ice sheets by measuring the diminishment of their gravity tug. GRACE was launched in March, 2002.
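The altimetry described above boils down to a time-of-flight calculation: the pulse's round trip at the speed of light gives the range to the surface, and subtracting that range from the satellite's known altitude gives the surface height. A simplified sketch (illustrative numbers only; real ICESat processing also applies atmospheric-delay and precision-orbit corrections):

```python
# Speed of light in vacuum (m/s)
C = 299_792_458

def surface_height(sat_altitude_m, round_trip_s):
    """Height of the reflecting surface below the satellite, in meters.

    sat_altitude_m: satellite altitude above the reference surface
                    (from orbit determination; value here is made up).
    round_trip_s:   laser pulse time of flight, down and back.
    """
    range_to_surface = C * round_trip_s / 2.0  # one-way distance
    return sat_altitude_m - range_to_surface

# A pulse returning after ~4 ms from a hypothetical 600 km orbit:
print(surface_height(600_000, 0.004))
```

Repeating this forty times a second along the ground track, and differencing campaigns years apart, is what turns nanosecond timing into ice-volume change.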
<urn:uuid:a770e595-f9da-48bf-b0fc-3bf5679251fa>
3.375
724
Truncated
Science & Tech.
44.646048
"It's like finding Moby Dick in Lake Ontario," says Tullis Onstott of the nematode worms his Princeton University team discovered living far beneath the Earth's surface in South Africa. The tiny worms – just 500 micrometres long – were found at depths ranging from 900 metres to 3.6 kilometres, in three gold mines in the Witwatersrand basin near Johannesburg. That's an astonishing find given that multicellular organisms are typically only found near the surface of the Earth's crust – Onstott's best guess is in the top 10 metres. The creatures seem to live in water squeezed between the mines' rocks, can tolerate temperatures reaching 43 °C and feed off bacteria. Carbon dating of the water they live in suggests that the worms have been living at these depths for between 3000 and 12,000 years. Click "source" for entire article.
<urn:uuid:47360cee-9491-4562-8fd8-586a7f82df5d>
3.59375
182
Truncated
Science & Tech.
54.762937
Stories in the Ice
by Peter Tyson, Online Producer, NOVA
Nature's Time Machine
How would you like to have a time machine that could take you back anywhere over the past 300,000 years? You could see what the world was like when ice sheets a thousand feet thick blanketed Canada and northern Europe, or when the Indonesian volcano Toba blew its top in the largest volcanic eruption of the last half million years. Well, scientists have such a time machine. It's called an ice core. Scientists collect ice cores by driving a hollow tube deep into the miles-thick ice sheets of Antarctica and Greenland (and in glaciers elsewhere). The long cylinders of ancient ice that they retrieve provide a dazzlingly detailed record of what was happening in the world over the past several ice ages. That's because each layer of ice in a core corresponds to a single year—or sometimes even a single season—and most everything that fell in the snow that year remains behind, including wind-blown dust, ash, atmospheric gases, even radioactivity. Indeed, fallout from the Chernobyl nuclear accident has turned up in ice cores, as has dust from violent desert storms countless millennia ago. Collectively, these frozen archives give scientists unprecedented views of global climate over the eons. More important, the records allow researchers to predict the impact of significant events—from volcanic eruptions to global warming—that could strike us today.
Ice Core Timeline
Special thanks to Mark Twickler, University of New Hampshire
© WGBH | Updated November 2000
<urn:uuid:37d1f1d1-13e0-4b7f-baee-846e697efdbd>
3.6875
388
Truncated
Science & Tech.
31.535992
|Jul13-10, 02:47 AM||#1|
Qualitatively define entropy
I want to know a concrete qualitative definition of entropy. If we define it to be a measure of randomness (disorder) in a system then as per intuition it would mean that a system with less probability in a given microstate will have greater entropy. But as per statistical mechanics
S = -K ∑ [P_i log P_i]
It means a system with lesser probability will have less entropy, isn't it? And how is it possible to express the above equation as follows
S = K log N?
please help me in getting out of this confusion,
|Jul13-10, 06:38 AM||#2|
First let me explain why entropy is related to probability. If you have two systems, then each of them will be in an unknown macrostate X1 and X2 respectively. For one isolated system entropy makes no sense. These two macrostates have some number of microstate realizations N1(X1) and N2(X2). So the compound state (X1,X2) will have N1*N2 realizations which is also proportional to the probability of the compound state p(X1, X2). So the most likely state is the one with N1*N2 -> max
To avoid multiplication we introduce S=log(N) and now our condition is S_1+S_2 -> max (which is no more than saying we want the most likely compound state)
In our model we have (energy) micro states k1, k2, ..., kn for each system. One macro state X specifies how many particles are in each of these states. So X = a1 particles in state k1; a2 particles in state k2; ...
To distribute (a1+a2+...+an) particles in such a way there are (a1+a2+...+an)!/(a1!a2!...an!) possibilities (multinomial distribution). If you now define p_i = a_i/(sum a) and use the approximation [itex]a!\sim a^a[/itex], then you get the equation for entropy
[tex]S\propto -\sum_i p_i\ln(p_i)[/tex]
You see it's all connected and derives from pure probability theory.
|Jul15-10, 12:57 AM||#3|
I have another question, which is as follows:
In statistical mechanics it is said that molecular motional energies are quantised, and in a given microstate of a macrostate they have definite arrangements in energy levels for the total energy of that macrostate. And if molecular motional energies are quantised, it would mean that energy levels are also quantised, isn't it? Then how is it possible that isothermal expansion of a gas into vacuum adds new energy levels and thus causes an increase in entropy, in spite of the quantisation of energy levels?
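The counting argument in post #2 can be checked numerically: for a concrete set of occupation numbers a_i, the log of the exact multinomial count W = (∑a_i)!/(a1!a2!...an!) comes out close to N times the entropy of the fractions p_i = a_i/N. A sketch (the occupation numbers are arbitrary):

```python
import math

def entropy(probs):
    """H = -sum p_i ln p_i (natural log), the formula from the thread."""
    return -sum(p * math.log(p) for p in probs if p > 0)

# Occupation numbers a_i: how many particles sit in each microstate.
counts = [40, 30, 20, 10]
N = sum(counts)

# Exact number of arrangements: multinomial coefficient N!/(a1!...an!)
exact = math.factorial(N)
for a in counts:
    exact //= math.factorial(a)

# Stirling's approximation gives ln(W) ~ N * H(p) with p_i = a_i/N
probs = [a / N for a in counts]
approx_lnW = N * entropy(probs)

print(math.log(exact), approx_lnW)  # the two agree to within a few percent
```

Maximizing S = ln W over macrostates is then literally the same as picking the most probable distribution of particles.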
<urn:uuid:665b43cb-2865-4e8f-a32e-3a27409116e0>
3.046875
788
Comment Section
Science & Tech.
55.347926
Oracle Database is the first database designed for enterprise grid computing, the most flexible and cost-effective way to manage information and applications. An Oracle database is a collection of data treated as a unit. The purpose of a database is to store and retrieve related information. A database server is the key to solving the problems of information management. In general, a server reliably manages a large amount of data in a multiuser environment so that many users can concurrently access the same data.
Oracle Database Application Development
SQL and PL/SQL form the core of Oracle's application development stack. Not only do most enterprise back-ends run SQL, but Web applications accessing databases do so using SQL. Enterprise Application Integration applications generate XML from SQL queries, and content repositories are built on top of SQL tables.
Overview of Oracle SQL
SQL (pronounced SEQUEL) is the programming language that defines and manipulates the database. SQL databases are relational databases, which means that data is stored in a set of simple relations.
SELECT last_name, department_id FROM employees;
Overview of PL/SQL
PL/SQL is Oracle's procedural language extension to SQL. PL/SQL combines the ease and flexibility of SQL with the procedural functionality of a structured programming language, with constructs such as IF ... THEN, WHILE, and LOOP.
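As a small sketch of those procedural constructs working together with SQL (the table and column names here are invented for illustration):

```sql
-- Anonymous PL/SQL block: raise prices on cheap rows of a
-- hypothetical "books" table and report how many were changed.
DECLARE
  v_count NUMBER := 0;
BEGIN
  FOR r IN (SELECT id, price FROM books) LOOP
    IF r.price < 10 THEN
      UPDATE books SET price = price * 1.05 WHERE id = r.id;
      v_count := v_count + 1;
    END IF;
  END LOOP;
  DBMS_OUTPUT.PUT_LINE(v_count || ' rows updated');
END;
/
```

The IF ... THEN test and the cursor FOR loop are ordinary procedural control flow, while the SELECT and UPDATE inside them are plain SQL — which is the combination the paragraph above describes.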
<urn:uuid:6b507915-24c5-4ebd-8160-17f26d99bafa>
3.34375
273
Knowledge Article
Software Dev.
31.528292
Because these processes happen beneath the Sun's visible surface -- the photosphere -- they can't be seen from Earth, and it's very difficult to conduct experiments. Computer simulations provide a way to see what's otherwise unseeable. The difficulty has to do with the complexity of the flow. Along with general features that occur over relatively large scales, such as the granulation pattern on the Sun's surface, the researchers need to observe fine details.Vorticity in the Convection Zone "You want to see the small vortices and how they interact," says Woodward. "Very fast time scales and small length scales are involved in this problem." To approach realistic results, the researchers must allow the gas to be compressible -- its density changes as pressure and temperature change, which increases the computing requirements. The modeling must also replicate large pressure differentials between the bottom and top of the convection zone. With the CRAY T3D, it became possible to more realistically model this depth differential. In October 1994, PSC made two weeks of dedicated time available. Woodward and Porter are still analyzing the pile of data (250 gigabytes) that resulted, a process that could take a year or longer. Nevertheless, scientific visualization has already revealed phenomena not seen before. Temperature in the Convection Zone This rectangular slab is a volume rendering showing a side view of the solar convection zone, roughly the outer third of the sun. It is idealized, with hard walls at the top and bottom. Energy is added from the bottom, to model radiation from nuclear fusion in the Sun's core. Colored fields represent "vorticity," how strongly the gas is spinning. Black areas are weak; green is slightly stronger, and white the strongest. The knotted, densely packed vortex tubes, explains David Porter, show vigorously turbulent regions, especially along the top boundary and in the large downflow lane near the right edge. 
Vertical vortex tubes at the lower center resemble Earth tornados. Here the flow converges in both horizontal directions and expands upward. This view looking down at the surface of the convection zone shows temperature according to color -- aqua (cold) through blue, green, red to yellow (hot). A cellular network of cool downflow regions surrounds isolated, warm upflows in a pattern that resembles "solar granules" observed on the Sun's surface. The second slice (right) is from deeper in the convection zone (one pressure scale height below the surface). Here, the cool downflow lanes have merged into fewer, larger cells. Effects of turbulence can be seen in the knotted downflow structure. Some downflow lanes, crushed into isolated downflowing plumes, are seen as blue dots. In general, the researchers see more turbulence than was apparent in previous simulations. Representative of this are vortex rings, which in the visualizations look like smoke rings, moving up and down. The visualizations also reveal a hierarchy of dynamics. "You have small convection cells at the surface," explains Porter, "and they're embedded in a large convective cell that spans the depth of the layer. So we're seeing the hierarchy of convection theorists had predicted, but only now is numerical experiment capable of resolving and testing these theories." "This is the first three-dimensional calculation of compressible convection we've done," says Woodward, "where we have real confidence in the results." From here, the researchers plan to add more realism -- a stable layer beneath the convection zone and a free surface on top, like taking the lid off a boiling pot. "It's a scientific philosophy of our grand challenge team that we don't want to take a step towards a more complex formulation of the problem, a more realistic model, until we've understood the simplified one. We're ready now to take that step." go back to the main page
<urn:uuid:a8c73cc3-03d4-4566-8890-2b17906294c4>
3.96875
790
Academic Writing
Science & Tech.
39.983227
California Beetle Project > Species Pages > Narpus angustus Scientific name: Narpus angustus Casey Images (click to enlarge) What it looks like: 3-4 mm in length. Its body is dark brown with striae, or shallow punctures, running the length of its elytra. The rows of striae alternate with rows of light brown, short hairs. Where you'll find it: This beetle is found throughout the coastal California mountain ranges, in large, clear, rapid streams. Natural History: These beetles are abundant where bars of coarse gravel drop off into deep pools. Both the adults and the larvae are aquatic. This page was written by Maren Farnum, a 2005 California Beetle Project intern.
<urn:uuid:79a26480-91c6-47d7-9559-f94d2d24b990>
3.375
156
Knowledge Article
Science & Tech.
55.525
Here is an attempt at explaining why. Magnetic fields are additive, and by this fact, one would expect that putting two equal magnets together would double the field strength. However, magnetic field strength drops off rapidly as you move away (for a dipole, roughly as the inverse cube of the distance), so the pull is at its strongest right at the surface, but significantly less so just a little ways away. Now you must consider that the two magnets do not occupy the same space. So, if you have 2 magnets stacked up and you place an object on top, you will feel the full effect of the first magnet, but will be the width of the first magnet away from the second magnet. Therefore the pull from the second is slightly less than the pull from the first, but still having an effect. So you have a pull that is more than a single magnet, but less than double. This continues to be the case as you add magnet stages... the third magnet is two magnet widths away, etc... Additionally, you are still working to overcome drag and gravity forces, so at some point adding magnet stages will cease to have a significant counter effect, for an overall "diminishing returns" as mentioned by Keith. I hope this helps. “Education never ends. It is a series of lessons, with the greatest for the last.” ~ Sir Arthur Conan Doyle (Sherlock Holmes)
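The stacking argument above is easy to check with a toy model: treat each magnet in the stack as a point dipole whose pull falls off as the inverse cube of its distance from the object on top. (All numbers are arbitrary; real magnets are not point dipoles at close range, so this only illustrates the trend.)

```python
def pull(n_magnets, thickness=1.0, offset=0.5):
    """Relative pull on an object sitting on top of a stack.

    Magnet k's centre is (offset + k*thickness) below the object,
    and each magnet contributes ~ 1/distance**3 (dipole far field).
    """
    return sum(1.0 / (offset + k * thickness) ** 3 for k in range(n_magnets))

forces = [pull(n) for n in range(1, 6)]
# Each added magnet helps, but by less and less:
gains = [b - a for a, b in zip(forces, forces[1:])]
print(forces)
print(gains)
```

Running this shows exactly the behaviour described: two magnets pull more than one but much less than double, and each further stage adds a smaller increment than the last.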
<urn:uuid:aad2db3c-f1a6-41ff-bb61-226d63d856f2>
2.921875
274
Comment Section
Science & Tech.
64.363517
Asteroids are massive lumps of rock that orbit the Sun. They can be anything up to 1000 kilometers wide and are sometimes described as minor planets. Most asteroids in the Solar System lie in a belt - the Asteroid Belt - orbiting the Sun between Mars and Jupiter, but some are orbiting relatively nearby. These nearby asteroids are called Potentially Hazardous Asteroids or PHAs because they could collide with the Earth as gigantic meteorites. Meteorites are bits of space debris which fall to the surface of a planet or moon. They are really quite common. More than a tonne's worth of meteorites falls to earth every day. Traveling at several thousand meters per second, meteorites become super-heated by friction with the Earth's atmosphere and explode on impact with the ground. A large meteorite can impact with the force of several nuclear bombs and even cause the climate to change. It's thought that a very large meteorite caused the extinction of the dinosaurs. And a meteorite the size of a small house caused the crater below. It is over one kilometer in diameter and 200 meters deep.
The Solar System
The Solar System is our local neighborhood in space. The Solar System is composed of the Sun and its satellites. The Sun's satellites include major bodies such as planets and countless other, minor bodies such as asteroids and comets - anything in fact that orbits the Sun. There are presently nine planets in the Solar System. To remember their first letters in order from the Sun, remember: Most Very Elderly Men Just Sleep Under News Papers [continues]
Cite This Essay
(2005, 04). Asteroids, Meteorites & the Solar System. StudyMode.com. Retrieved 04, 2005, from http://www.studymode.com/essays/Asteroids-Meteorites-Solar-System-54343.html
<urn:uuid:e706a4a4-fbaf-4393-82d3-f30e3ba872ef>
3.5
499
Truncated
Science & Tech.
54.663929
How to tell a butterfly from a moth
Butterflies and moths are in the same order (Lepidoptera, which comes from the Greek words lepidos for scale and pteron for wings). Many people have trouble telling them apart, but once you know what to look for, it's easy to tell which is which. Butterflies have skinny antennae with knobs or clubs on the ends. Many moths have feather-like antennae, or thread-like antennae without knobs. When resting, butterflies close their wings high above their backs, but can't fold them. Moths fold their wings down on top of their backs at rest. Butterflies usually have long slender bodies. Moths have fat, often fuzzy bodies. You won't see butterflies flying around at night or many moths active during the day. The third life stage (pupal stage) of a butterfly is a smooth chrysalis. A moth spends its pupal stage in a cocoon spun with silk.
<urn:uuid:0957e373-5662-4b49-9e27-eea9eeb2ee3e>
3.484375
218
Knowledge Article
Science & Tech.
66.646684
In this compiler step, the default modifiers for classes, methods, properties, and fields are added where appropriate. Classes and other types are public by default; properties, methods, and events are likewise public by default; and fields are protected by default. A constructor is added to a class definition if it does not already contain one. A statement followed by a conditional modifier such as "if", "unless", or "while":
print "good job" unless score < 75
is replaced with a regular conditional statement (an if ... statement). "if x != null" is replaced by "if x is not null".
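The modifier rewrite can be pictured like this (a sketch following the Boo-style syntax of the examples above; the names are illustrative):

```boo
# Before normalization -- statement with a trailing conditional modifier:
print "good job" unless score < 75

# After normalization -- the modifier becomes a regular conditional:
if not (score < 75):
    print "good job"

# Likewise, the null-check rewrite:
if x != null:       # before
    pass
if x is not null:   # after
    pass
```

Both rewrites preserve behavior; they simply normalize the tree so later compiler stages only have to handle one canonical form of each construct.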
<urn:uuid:d260f580-29da-4ca4-9673-ee28bc6f7419>
2.890625
129
Documentation
Software Dev.
50.90875
Monday, July 16, 2012 at 10:42 AM Webmaster level: All
In a web development context, semantics refers to semantic markup, which means markup used according to its meaning and purpose. Markup used according to its purpose means using heading elements (h1–h6) to mark up headings, paragraph elements (p) for paragraphs, list elements (for instance, menu) for lists, tables for data tables, and so on. Stating the obvious became necessary in the old days, when the Web consisted of only a few web sites and authors used tables to code entire sites, table cells or paragraphs for headings, and thought about other creative ways to achieve the layout they wanted. (Admittedly, these authors had fewer instruments at their disposal than authors have today. There were times when coding a three column layout was literally impossible without using tables or images.) Up until today authors were not always certain about what HTML element to use for what functional unit in their HTML page, though, and “living” specs like HTML 5 require authors to keep an eye on what elements will be there going forward to mark up what otherwise calls for “meaningless” fallback elements like div and span.
To know what elements HTML offers, and what meaning these elements have, it's necessary to consult the HTML specs. There are indices—covering all HTML specs and elements—that make it a bit simpler to look up and find out the meaning of an element. However, in many cases it may be necessary to check what the HTML spec says. For example, take the code element: it represents a fragment of computer code. This could be an XML element name, a filename, a computer program, or any other string that a computer would recognize.
HTML elements carry meaning as defined by the HTML specs, yet ID and class names can bear meaning too. ID and class names, just like microdata, are typically under author control, the only exception being microformats. (We will not cover microdata or microformats in this article.)
ID and class names give authors a lot of freedom to work with HTML elements. There are a few basic rules of thumb that, when followed, make sure this freedom doesn't turn into problems:
- Keep use of IDs and classes to a minimum.
- Use functional ID and class names; if that is not possible, use generic ID and class names.
- Use names that are as short as possible but as long as necessary.
Advantages of using semantic markup
Using markup according to how it's meant to be used, as well as modest use of functional ID and class names, has several advantages:
- It's the professional thing to do.
- It's more accessible.
- It's more maintainable.
“Neutral” elements, elements with ambiguous meaning, and presentational elements constitute special cases. div and span offer a “generic mechanism for adding structure to documents.” They can be used whenever there is no other element available that matches what the contents in question represent. In the past a lot of confusion was caused by the b, strong, i, and em elements. Authors cursed i for being presentational, and typically suggested a 1:1 replacement with em. Not to stir up the past, here's what HTML 5 says, granting all four elements a raison d'être:
- b: “a span of text to be stylistically offset from the normal prose without conveying any extra importance, such as key words in a document abstract, product names in a review, or other spans of text whose typical typographic presentation is boldened”
- strong: “strong importance for its contents”
- i: “a span of text in an alternate voice or mood, or otherwise offset from the normal prose, such as a taxonomic designation, a technical term, an idiomatic phrase from another language, a thought, a ship name, or some other prose whose typical typographic presentation is italicized”
- em: “stress emphasis of its contents”
Last but not least, there are truly presentational elements.
These elements will be supported by user agents (browsers) forever, but shouldn't be used anymore, as presentational markup is not maintainable and should be handled by style sheets instead. Some popular ones are:
How to tell whether you're on track
A quick and dirty way to check the semantics of your page and understand how it might be interpreted by a screen reader is to disable CSS, for example using the Web Developer Toolbar extension available for Chrome and Firefox. This only identifies issues around the use of CSS to convey meaning, but can still be helpful. There are also tools like W3C's semantic data extractor that provide cues on the meaningfulness of your HTML code. Other methods range from peer reviews (coding best practices) to user testing (accessibility).
Do's and Don'ts
- For headings there are heading elements.
- Presentational markup is expensive to maintain.
- Use table elements for tabular data.
- Denote paragraphs by paragraph elements, not line breaks.
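A minimal before/after sketch of the ideas above (the content is invented for illustration):

```html
<!-- Presentational: the meaning is locked into formatting elements -->
<p><font size="5"><b>Contact</b></font><br>
Write to us.<br>
Call us.</p>

<!-- Semantic: heading and list elements carry the meaning;
     appearance is left to the style sheet -->
<h2>Contact</h2>
<ul>
  <li>Write to us.</li>
  <li>Call us.</li>
</ul>
```

With CSS disabled, the second version still reads as a heading followed by a list — which is exactly the screen-reader test described above.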
<urn:uuid:1d850790-2f7c-4007-83c1-2c780881f222>
3.546875
1,100
Personal Blog
Software Dev.
41.334074
Credit: MIT and the HETE-2 Team
Renee, Stargazer and HETE-2
HETE-2, the High Energy Transient Explorer, is a space observatory designed to scan the sky to look for strange explosions in space called Gamma Ray Bursts. HETE-2 will find these bursts and let astronomers know about them within minutes for followup studies. HETE-2 is NASA's newest observatory in space - it was launched on Monday morning, October 9th, 2000, at 1:38 am EDT from the Kwajalein Island missile range. The satellite was launched from a Pegasus rocket nicknamed "Renee". A rather interesting aspect of the launch is that Renee is an air-launched missile - Renee was carried aloft by an aircraft called "Stargazer". At a certain altitude and orientation, Stargazer dropped Renee, Renee's rocket motors were fired, and HETE-2 was put into orbit. The picture above on the left shows Renee attached to Stargazer; HETE-2 is inside the "fairing" or nosecone of the missile. The image on the right shows the HETE-2 observatory undergoing some tests before launch.
Each week the HEASARC brings you new, exciting and beautiful images from X-ray and Gamma ray astronomy. Check back each week and be sure to check out the HEAPOW archive!
Page Author: Dr. Michael F.
Last modified May 26, 2001
<urn:uuid:4408ce32-cd9b-40ff-882f-971203bcfc13>
2.90625
350
Knowledge Article
Science & Tech.
51.46076
Daily Tech: Sun Makes History: First Spotless Month in a Century. The event is significant as many climatologists now believe solar magnetic activity – which determines the number of sunspots -- is an influencing factor for climate on earth. Delta Farm Press: Global cooling gains momentum among scientists. “Carbon dioxide is not to blame for global climate change, Sorokhtin said. “Solar activity is many times more powerful than the energy produced by the whole of humankind. Man’s influence on nature is a drop in the ocean.” Canadian climatologist Timothy Ball said, “If we are facing (a crisis) at all, I think it is that we are preparing for warming when it is looking like we are cooling. We are preparing for the wrong thing.” wattsupwiththat: Livingston and Penn paper: “Sunspots may vanish by 2015″. Belfast Telegraph: Is there a cold future just lying in wait for us? On-Line Opinion: Activity is quiet on the sunspot front. At the time of writing the sun is still spot free. NASA solar physicist David Hathaway points out, quite rightly, that the sun’s behaviour is within major statistical limits - just. The average solar cycle lasts 131 months plus or minus 14 months and the current cycle - the quiet period counts as part of the old cycle - has lasted nearly 143 months. The solar cycle went quiet for years at the beginning of last century before restarting, Hathaway notes, so nothing out of the ordinary has happened - at least, not yet. Another group at the US National Solar Observatory in Tucson, Arizona, William Livingston and Matthew Penn, believe that there may be a deeper process at work. Sunspots are highly magnetic regions that are somewhat cooler than the rest of the sun’s surface (they appear dark compared to the rest of the sun, but if seen separately would appear very bright) and the two researchers have been tracking both the temperature and magnetic strength of the spots. They found that the spots have been warming up and becoming less magnetic. 
An average of the trend is a straight line going down which hits the bottom of the graph at 2014. They have concluded that, although sun spots may appear briefly from time to time in the next few years, they will disappear by 2014. This conclusion is in a paper submitted to the journal Science three years ago but rejected in peer review. With the sun now so quiet the paper has been resurrected from a filing cabinet in the observatory and circulated informally. Dr Livingston told me (by phone from his office in Tucson) that the paper had been rejected on the grounds that it was a purely statistical argument so it would be better to wait and see what happened, and he considered that a fair point. They are now waiting “for the right moment” to resubmit. But what happens after 2014? Dr Livingston says that as they are using a purely statistical argument, without any theory to back it, they do not know. All they know is that the trend reaches zero in 2014. Conventional theory on the sun’s inner workings never forecast anything like this - in fact, forecast the exact opposite - but has been revised to say that the sun will restart some time next year.
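The extrapolation described above is just a straight-line fit to the measured spot field strengths, carried forward to the point where the line crosses the threshold below which spots can no longer stay dark (commonly quoted as roughly 1500 gauss — an assumption here, not stated in the article). A toy version with made-up numbers (not Livingston and Penn's actual data):

```python
def linear_fit(xs, ys):
    """Ordinary least-squares fit y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Hypothetical umbral field strengths in gauss, declining over time:
years  = [1998, 2000, 2002, 2004, 2006, 2008]
fields = [2400, 2300, 2150, 2050, 1900, 1800]

a, b = linear_fit(years, fields)
# Solve a + b*x = 1500 for the year the trend hits the darkness threshold:
crossing = (1500 - a) / b
print(round(crossing, 1))
```

As the researchers themselves note, this is a purely statistical argument: nothing in the fit says what happens after the crossing, only where the trend line runs out.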
<urn:uuid:d875e843-4a5b-4fa1-a5c7-fe34cf954a57>
3.078125
682
Personal Blog
Science & Tech.
52.814161
Order By Clause (Visual Basic) Specifies the sort order for a query result. You can use the Order By clause to sort the results of a query. The Order By clause can only sort a result based on the range variable for the current scope. For example, the Select clause introduces a new scope in a query expression with new iteration variables for that scope. Range variables defined before a Select clause in a query are not available after the Select clause. Therefore, if you want to order your results by a field that is not available in the Select clause, you must put the Order By clause before the Select clause. One example of when you would have to do this is when you want to sort your query by fields that are not returned as part of the result. Ascending and descending order for a field is determined by the implementation of the IComparable interface for the data type of the field. If the data type does not implement the IComparable interface, the sort order is ignored. The following query expression uses a From clause to declare a range variable book for the books collection. The Order By clause sorts the query result by price in ascending order (the default). Books with the same price are sorted by title in ascending order. The Select clause selects the Title and Price properties as the values returned by the query. The following query expression uses the Order By clause to sort the query result by price in descending order. Books with the same price are sorted by title in ascending order. The following query expression uses a Select clause to select the book title, price, publish date, and author. It then populates the Title, Price, PublishDate, and Author fields of the range variable for the new scope. The Order By clause orders the new range variable by author name, book title, and then price. Each column is sorted in the default order (ascending).
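The first two queries described above might look like the following (the books collection and its Title and Price properties are illustrative, not part of the original text):

```vb
' Sort by price ascending (the default); ties broken by title ascending.
Dim cheapestFirst = From book In books
                    Order By book.Price, book.Title
                    Select book.Title, book.Price

' Sort by price descending; ties still broken by title ascending.
Dim priciestFirst = From book In books
                    Order By book.Price Descending, book.Title
                    Select book.Title, book.Price
```

Note that Order By here appears after Select only because both queries sort on fields that the Select clause returns; as the text explains, sorting on a field that is not returned requires placing Order By before Select.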
Barn Pole Paradox
Date: Winter 2011-2012
I do not see how this relativity problem is explained. It's a variation of the barn pole paradox. Can you offer insights?
Einstein's special relativity applies to constant-velocity situations. This means no change of speed or direction. An observer on the ring is constantly changing direction. After half of a revolution, the observer's direction has exactly reversed. Length contraction is much more complex in such a situation. General relativity is needed here.
Dr. Ken Mellendorf
Illinois Central College
You are mixing relativistic effects with non-relativistic effects to create the paradox. In creating what you think is a paradox, you are placing unrealistic constraints on the system (e.g., how do you decouple this massively energetic ring from the silo?).
Hope this helps,
Update: June 2012
The James Webb Space Telescope (JWST) will be the premier observatory of the next decade, serving thousands of astronomers worldwide. It will study every phase in the history of our Universe, ranging from the first luminous glows after the Big Bang, to the formation of solar systems capable of supporting life on planets like Earth, to the evolution of our own Solar System. Moderated by J.D. Harrington, NASA Astrophysics Public Affairs Officer.
• Geoff Yoder, Program Director, NASA HQ, Washington, D.C.
• Eric Smith, Deputy Program Director / Program Scientist, NASA HQ, Washington, D.C.
• John Mather, JWST Project Scientist, NASA Goddard Space Flight Center, Greenbelt, Md.
• Amber Straughn, Astrophysicist / Deputy Project Scientist for Communications & Outreach, NASA Goddard Space Flight Center, Greenbelt, Md.
• Jon Arenberg, Chief Engineer, Northrop Grumman Aerospace Systems, Redondo Beach, Calif.
This Google+ Hangout discussed NASA's James Webb Space Telescope, the agency's flagship science project that will launch in October 2018. Panelists discussed the program's development status, explained how the tennis-court-sized spacecraft will work, described its science objectives after launch, and highlighted its future impact on the world.
Cascading Style Sheets Cascading Style Sheets are a big breakthrough in Web design because they allow developers to control the style and layout of multiple Web pages all at once. Before Cascading Style Sheets, changing an element that appeared on many pages required changing it on each individual page. Cascading Style Sheets work just like a template, allowing Web developers to define a style for an HTML element and then apply it to as many Web pages as they'd like. With CSS, when you want to make a change, you simply change the style, and that element is updated automatically wherever it appears within the site. Both Navigator 4.0 and Internet Explorer 4.0 support Cascading Style Sheets. If you needed any more proof of the problem-solving nature of CSS, the World Wide Web Consortium (W3C) has recommended Cascading Style Sheets (level 1) as an industry standard.
Mystery of the Megavolcano A remote lake in Southeast Asia conceals evidence of Earth's greatest volcanic cataclysm of the last 100,000 years. Miles beneath its placid surface lies a magma chamber that exploded so violently during the Ice Age that gases and ash may have encircled the globe and blotted out the sun for years on end. The Toba eruption may have helped kick the climate into an unprecedented freeze and perhaps even pushed ancestral human populations to the brink of extinction. In a classic science detective story, NOVA pieces together the clues about this great catastrophe and probes questions raised about human evolution and Earth's fragile ecosystem.
Introduction

metal, chemical element displaying certain properties by which it is normally distinguished from a nonmetal, notably its metallic luster, the capacity to lose electrons and form a positive ion, and the ability to conduct heat and electricity. The metals comprise about two thirds of the known elements (see periodic table). Some metals, including copper, tin, iron, lead, gold, silver, and mercury, were known to the ancients; copper is probably the oldest known metal.
The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
Nathalie Baumann, MSc / Biogeographer, Research Associate at ZHAW - the Zurich University of Applied Sciences - Institute of Environment and Natural Resources, Centre of Nature Management-Urban Greening, Competence Centre Green Roofs, is researching several green roofs in peri-urban areas in Switzerland. The driver for this research green roof is improving the Lapwing bird population by increasing biomass and the food base with a biodiversity green roof.

Nathalie says, "Biodiversity design for Northern Lapwings: 18 circles with roof garden soil (4 cm height), and on top of it are placed 'rectangular pieces of grass with soil' (2 cm); these circle surfaces are randomly spread on this gravel roof (former or base substrate). And on top of it we have also put some hay mulch (dried meadow grass). The Northern Lapwings are ground-nesting birds that like open grassland - therefore we try to put on this gravel roof some surfaces where some grassland-type plants can grow and attract insects. Insects and most of their larvae are the basic food of the Northern Lapwing chicks."

In general, on a number of green roof locations in peri-urban areas in Switzerland, breeding pairs of northern lapwings (Vanellus vanellus) and little ringed plover (Charadrius dubius) are being observed and investigated. The investigation focuses on how breeding takes its course on roofs, whether chicks can successfully fledge, and, if necessary, how changes in the design of flat roofs can improve fledging success rates.

The aim of these research plots with hay mulch is to find out about the functioning, use and efficiency of light-weight substrate and its innovation as a cheap, sustainable and renewable resource. The use of this kind of substrate allows plant succession and creates different microhabitats for a variety of insects. Hence, these roofs provide suitable habitats for endangered ground-nesting birds to breed and successfully raise chicks.
Additional thumbnail photos:
For additional info, please contact Stephan Brenneisen, Dr. phil Geograph, ZHAW - Zurich University of Applied Sciences, Institute of Environment and Natural Resources, Centre of Nature Management-Urban Greening, Competence Centre Green Roofs, Grüntal, Postfach 335, CH - 8820, email: email@example.com; or Nathalie Baumann, Dipl. BioGeografin, email: firstname.lastname@example.org.
The Greenroof Projects Database is published, designed, and maintained by Greenroofs.com, LLC, Copyright © 2010. All rights reserved.
The aim of this task is to give you some (additional) experience at jQuery and Ajax programming. Study the HTML and jQuery code in this example to see how class attributes and event handlers can dynamically be added to a "clean" HTML document when the document is loaded. Now consider the simplified user registration document presented here. js/user.js to perform some client-side input validation. More specifically, write jQuery code to dynamically add event handlers to perform the following tests: find_user.php. This script gets the username from the variable user, returns the username if it exists in the database, and returns the empty string otherwise. For testing, the database initially contains the usernames "dylan", "heidi", "jolon", "rodney". Each validation failure should be reported in an alert box (or otherwise). Use Ajax to implement a simple shopping cart application. A left panel should display a list of items for sale with their prices (the catalog). A right panel should display the list of items selected by the user, with their prices, and a total price (the shopping cart). The catalog should be stored in a database table on the server. The shopping cart should be stored in another database table. When the user selects an item from the catalog, or removes an item from the shopping cart, an HTTP request should automatically be sent to the server, which should update the shopping cart, and send the updated cart back to the client (as a JSON object), to be displayed by replacing the content of the right panel only. Complete Task 1 only
Binnig, Gerd (gĕrt bĭnˈĭkh), 1947–, German physicist, Ph.D. Univ. of Frankfurt, 1978. At the IBM Research Laboratory in Zürich, Binnig and fellow researcher Heinrich Rohrer built the first scanning tunneling microscope, an instrument so sensitive that it can distinguish individual atoms. For their innovation they shared the 1986 Nobel Prize in Physics with Ernst Ruska, who invented (1933) the first electron microscope. In 1986 Binnig developed the atomic force microscope, which can image individual atoms in materials that do not conduct electricity.
Properties of Ice

Crystalline Structure of Ice. Ice can assume a large number of different crystalline structures, more than any other known material. At ordinary pressures the stable phase of ice is called ice I, and the various high-pressure phases of ice number up to ice XIV so far. (Ice IX received some degree of notoriety from Kurt Vonnegut's novel Cat's Cradle.) There are two closely related variants of ice I: hexagonal ice Ih, which has hexagonal symmetry, and cubic ice Ic, which has a crystal structure similar to diamond. Ice Ih is the normal form of ice; ice Ic is formed by depositing vapor at very low temperatures (below 140 K). Amorphous ice can be made by depositing water vapor onto a substrate at still lower temperatures.

Each oxygen atom inside the ice Ih lattice is surrounded by four other oxygen atoms in a tetrahedral arrangement. The distance between oxygens is approximately 2.75 Angstroms. The hydrogen atoms in ice are arranged following the Bernal-Fowler rules: 1) two protons are close (about 0.98 Å) to each oxygen atom, much like in a free water molecule; 2) each H2O molecule is oriented so that the two protons point toward two adjacent oxygen atoms; 3) there is only one proton between two adjacent oxygen atoms; 4) under ordinary conditions any of the large number of possible configurations is equally probable.

Phase Diagram of Water and Ice. The plot at right shows the phase diagram of water. The triple point of water -- when ice, water, and water vapor can coexist -- is at a temperature of 0.01°C (273.16 K) and a pressure of 6.1 mbar. Water is the only substance we commonly experience near its triple point in everyday life.

Pressure of Ice and Water. The plot at right shows the equilibrium water vapor pressure of ice and water as a function of temperature, over the range of interest for snow crystal growth.
The pressure units are in mbar, and one can convert to other units using a conversion calculator (1 mbar = 100 Pascal (Newtons/square meter) = 0.75 mm Hg = 0.001 atmospheres). The vapor pressure is well described by the Clausius-Clapeyron relation, and a fit to the data yields the approximations:

Pwater(T) = [2.8262e9 - 1.0897e6*T - 94934*T²

Pice(T) = [3.6646e10 - 1.3086e6*T - 33793*T²]exp(-6150/TK)

where pressures P are in mbar, the temperature T is in degrees Celsius, and TK is in kelvin (note 0°C = 273.15 K). These approximate expressions are accurate to better than 0.1 percent from -50C to 50C.

The plot at the right shows the water vapor supersaturation value, equal to (Pwater - Pice)/Pice. This is the supersaturation level that is typically found in dense clouds, which after all are made of water droplets. Supersaturation levels higher than this are probably quite unusual in the atmosphere.

Quantities related to Ice and the Formation of Snow Crystals:
- Mass of a water molecule
- Ice density (near 0°C)
- Latent heats of sublimation, evaporation
- Heat capacity of ice, water (near 0°C)
- Electric dipole moment of a water molecule
- Intrinsic dielectric polarizability of a water molecule
- Total dielectric polarizability of a water molecule (near 0°C)
- Ice surface energy
- Diffusion constant for water molecules in air at STP
- Critical radius for nucleation
- Coefficient of thermal expansion of ice
- Thermal conductivity of ice (near -20°C)

From B. J. Mason, The Physics of Clouds (Clarendon Press, 1971).
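As a quick numerical check of the fit above, the sketch below evaluates the Pice expression near the triple point, where it should reproduce roughly the 6.1 mbar quoted in the text. (Only the ice branch is coded here, because the Pwater expression is truncated in this copy of the page; the 273.15 K offset is the standard Celsius-to-kelvin conversion.)

```python
import math

def p_ice(t_celsius):
    """Equilibrium vapor pressure of ice in mbar, from the fit quoted above."""
    tk = t_celsius + 273.15  # convert Celsius to kelvin
    return (3.6646e10 - 1.3086e6 * t_celsius - 33793 * t_celsius**2) * math.exp(-6150 / tk)

# Near the triple point (0.01 C) the fit should give roughly 6.1 mbar,
# and the pressure should fall steeply as the temperature drops.
print(p_ice(0.01))
print(p_ice(-20.0))
```

The steep drop with temperature is why cold clouds hold so little vapor, which in turn limits how fast snow crystals can grow at low temperatures.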
JPL's Atmospheric Infrared Sounder Experiment captured this infrared view of Hurricane Ivan in September 2004. The instrument, which flies aboard NASA's Aqua satellite, measures the temperature of cloud tops. The lowest temperatures (seen in purple) are associated with high, cold cloud tops that make up the top of the hurricane. Warmer areas are closer to red.
Basic concepts of classical physics. Some people doubt the correctness of certain physical principles. Some are inventing modifications of physics. It's good to know exactly which principles they do accept as correct, to see whether there is any common ground for discussion. Which of these do you accept as correct?
- Newton's first and second laws, embodied in Fnet = ma. The net force Fnet on a body of mass m causes it to have acceleration a.
- Newton's third law. FAB = -FBA. If body A exerts a force on body B, then B exerts an equal-size and oppositely directed force on A.
- The definition of torque, τ = r × F, where × is the vector (cross) product operator.
- The rotational analogue of Newton's law: τnet = Iα, where I is moment of inertia and α is angular acceleration.
- The definitions of displacement, velocity, acceleration and force as vectors, and the fact that the law of vector addition applies to them.
- The definition of momentum, p = mv. It is a vector and the law of vector addition applies.
- The definition of work, W = F•x, where x is displacement and • is the vector scalar (dot) product operator. Work is a scalar.
- The definition of kinetic energy, Ek = (1/2)mv². Kinetic energy is a scalar.
- The work-kinetic energy principle: Wnet = ΔEk in a closed non-dissipative system. Wnet is the net work done on the system by all external influences.
- Conservation of net mass in a closed system (classical physics).
- Conservation of net momentum in a closed system.
- Conservation of net energy in a closed system.
- Conservation of net angular momentum in a closed system.
- The thermal energy principle: Q = ΔU in a closed system, where U is the internal thermal energy and Q measures the thermal energy transferred into or out of the system. (In older books thermal energy was called "heat".)
- The first law of thermodynamics. ΔU = Q - W, where U is internal energy, Q is the thermal energy added to the system, and W is the work done by the system. This is the usual sign convention.
- The second law of thermodynamics. This has several equivalent statements. One is that no heat engine can have an efficiency greater than that of a Carnot engine operating between the same two temperatures; the maximum efficiency of any engine is therefore e = (Th - Tc)/Th, where the Ts are absolute temperatures of the reservoirs. Note that this efficiency is always less than one, since absolute temperatures are always positive (greater than zero). Another equivalent statement is that the entropy of a closed system never decreases over time.
- The third law of thermodynamics. The entropy of a system approaches a constant value (zero, for a perfect crystal) as its temperature approaches absolute zero.
Aside from any possible carelessness on my part, these are all correct. Furthermore, they are so logically connected that you can't change or discredit any one of them without changing the others. Before you try, see Things to consider before you rewrite classical physics.
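The Carnot bound e = (Th - Tc)/Th is easy to evaluate numerically. A minimal sketch (the reservoir temperatures below are illustrative, not from the text):

```python
def carnot_efficiency(t_hot, t_cold):
    """Maximum efficiency of any heat engine operating between two
    reservoirs at absolute temperatures t_hot and t_cold (in kelvin)."""
    if t_cold <= 0 or t_hot <= t_cold:
        raise ValueError("require 0 < t_cold < t_hot (absolute temperatures)")
    return (t_hot - t_cold) / t_hot

# An engine running between 800 K and 300 K can convert at most 62.5%
# of the extracted heat into work, no matter how cleverly it is built.
print(carnot_efficiency(800.0, 300.0))  # 0.625
```

Because absolute temperatures are always positive, the ratio Tc/Th is always between 0 and 1, so the efficiency can never reach one — exactly the point made above.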
About this Image
Dubai undertook a massive engineering project to create hundreds of artificial islands along its Persian Gulf coastline. Built from sand dredged from the seafloor and protected from erosion by rock breakwaters, the islands were shaped into recognizable forms, including a map of the world (shown here). Satellite images from the past decade have documented the islands' creation.
Credit: NASA image created by Jesse Allen, using data provided courtesy of NASA/GSFC/METI/ERSDAC/JAROS, and U.S./Japan ASTER Science Team.
Name: Ross J.
What exactly makes fruit rot?
The goal of a fruit is to spread its seed, so it needs to rot in order to get the seeds out of the fruit. There are actually hormones, especially ethylene, that promote fruit ripening. If you want to get a piece of fruit to ripen, put it in a bag with an apple, which generates a lot of ethylene.
Mostly mold.... and some bacteria...they have to eat too...:)
Peter Faletra Ph.D.
Office of Science
Department of Energy
bacteria and fungi
In addition to the other excellent answers to your inquiry offered already, I'd like to add a couple more contributors to this not-so-straightforward process. The fruit itself produces enzymes, such as amylases & proteases, which also assist in the tissue breakdown associated with rotting. In fact, the ethylene mentioned by vanhoeck actually promotes the activity of some of these enzymes. Once this process is initiated by the fruit itself, it is much easier for the bacteria & fungi to colonize themselves. This might sound more complicated than you expected, but that probably reflects the fact that it is a crucial property for enabling the plant to "spread its seed", as also noted by vanhoeck.
Thanks for the good question,
Jeff Buzby, Ph.D.
Children's Hosp. of Orange Cnty.
Div. of Educational Programs
Argonne National Laboratory
Update: June 2012
Forecasters and scientists are predicting an El Niño year. Antonio Neves reports on the climate patterns and chain of events that occur around the world. the.Sci covers STEM topics such as science, technology, engineering, and mathematics and puts them in context with current events. These stories explain how things work, who makes them happen, and why they are relevant to teens. Elana Michelle reports on pollution in the Chesapeake Bay, how it affects the watershed, and the roles of government and citizens in this environmental crisis. Correspondent Spencer Michels gets a lesson in space technology and career education from the NASA team leader of the Mars Rover mission. Elana Michelle investigates why young athletes are incurring more concussions during team sports. Julie Iriondo reports on the 100th anniversary of the famous Antarctic expedition by Sir Ernest Shackleton to be recreated in 2014 with modern equipment. Correspondent Miles O'Brien investigates scientific experiments on chimpanzees and the ethical implications for animal rights. The Motorola Foundation
Search our database of handpicked sites Looking for a great physics site? We've tracked down the very best and checked them for accuracy. Just fill out the fields below and we'll do the rest. You searched for We found 13 results on physics.org and 95 results in our database of sites (95 are Websites, 0 are Videos, and 0 are Experiments) Search results on physics.org Search results from our links database Pick your nuclear weapon of choice, drop a virtual nuclear bomb on your area and see its impact on the surrounding area. Ouch. An introduction to nuclear medicine, nuclear reactors, nuclear physics, nuclear power, nuclear waste and war. Includes movie clips of the effects of nuclear detonation. A well presented and comprehensive introduction to nuclear science. Nuclear structure, Antimatter, decay, Cosmic rays, etc. Excellent graphics This nuclear primer explains what meltdown means, and how the situation at Fukushima compares with past nuclear accidents. Site contains arguments for and against Nuclear Power from an American and international perspective. A site to inform readers of the progress of nuclear physics in a wide range of fields. Up to date information on current research for scientists in the nuclear community. Part of a nuclear reactor tour, this site gives a simplified explanation of nuclear energy. Description of nuclear cross section, with good relevant links to other areas of the site. This site provides information on nuclear weapons and war. Contains some biographies of scientists, photographs of bomb blasts and victims, also some audio clips. A site of useful links focusing on nuclear computing, nuclear engineering, fusion, reactors, weapons and waste issues. Showing 1 - 10 of 95
Gravity is a very important force. Every object in space exerts a gravitational pull on every other, and so gravity influences the paths taken by everything traveling through space. It is the glue that holds together entire galaxies. It keeps planets in orbit. It makes it possible to use human-made satellites and to go to and return from the Moon. It makes planets habitable by trapping gases and liquids in an atmosphere. It can also cause life-destroying asteroids to crash into planets. Ask any question below to learn about gravity in space. What is gravity? Is there gravity in space? How do scientists know what the path of an object in space will be? What causes an orbit to happen? Can gravity affect the surface of objects in orbit around each other? What's a gravity well?
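As one concrete illustration of gravity keeping satellites in orbit, here is a short sketch (assuming Newtonian gravity and a circular orbit; the 400 km altitude is an illustrative choice) that estimates how fast a low-Earth-orbit satellite must travel:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # mass of Earth, kg
R_EARTH = 6.371e6    # mean radius of Earth, m

def circular_orbit_speed(altitude_m):
    """Speed for a circular orbit: gravity supplies the centripetal force,
    G*M*m/r^2 = m*v^2/r, so v = sqrt(G*M/r)."""
    r = R_EARTH + altitude_m
    return math.sqrt(G * M_EARTH / r)

print(circular_orbit_speed(400e3))  # roughly 7.7 km/s at 400 km altitude
```

The same formula, with a larger r, gives the slower speeds of the Moon and of geostationary satellites — the orbit is simply the balance between inertia and the gravitational pull described above.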
Among presentations at the National Climate Change Adaptation Forum April 2-4, 2013 were case studies of projects in different ecosystems that are addressing the effects of climate change. Short videos tell stories unfolding in three locations. Sunrise at Black Creek Preserve. Photo: R Rodriguez, Jr., www.scenichudson.org Protecting and restoring freshwater tidal migration zones along the Hudson River Although during Hurricane Sandy they proved the value of natural habitats in mitigating flood damage, the tidal wetlands of the Hudson River could nonetheless drown as sea levels rise. The nonprofit organization Scenic Hudson is undertaking a number of measures to protect the river and its valley from this consequence of climate change, such as building resilient structures; encouraging community conversations about climate-change readiness, land conservation and stewardship; and conducting acquisition and restoration projects. Beaver near its lodge. Photo: NPS Restoring a natural ecosystem engineer to provide riparian areas in Southern Utah Can a nocturnal, semi-aquatic rodent become a superhero in the fight against climate change? The Grand Canyon Trust thinks it's possible. By forming ponds, wetlands and meadows, beaver restore and expand riparian habitat that numerous species depend on. As climate change lengthens droughts and produces more extreme precipitation events, beaver dams could increase the volume of water retained in the mountains, raise the water table and expand riparian areas. To encourage the work of these natural engineers, the Trust is reintroducing beaver in scores of stream segments in southern Utah. Using climate science to strategically guide habitat conservation Saving the entire earth is a daunting prospect, but identifying and protecting areas that offer the most important conservation opportunities is a task of a size that collaborative efforts can tackle.
In Montana, the Trust for Public Land worked with Trout Unlimited, The Nature Conservancy and Montana Fish, Wildlife and Parks to conserve and restore 52,000 acres identified as potentially resilient and pertinent to two at-risk coldwater fisheries, bull trout and Westslope cutthroat trout. The project has multiple benefits both for the species dependent on this habitat and for modeling an approach to public investment in landscape-level conservation.
This thing. Please tell me if my facts are right, since none of my material really covers it and I have had to piece this together from practice problems. 1. This is formed when 2 sound waves encounter each other 2. The distance between the two purple dots is the wavelength of the beat 3. The distance between the green dots is the wavelength of sound I am a little confused because I thought the distance between the purple dots would be the 'wavelength'. But in EK they describe the conditions I stated above. Are these correct? TY
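For reference, the distinction between the two "wavelengths" can be checked numerically. In the sketch below (my own example, not from EK), two nearby tones are summed: the fast oscillation inside the envelope has the wavelength of the sound, while the slowly varying envelope repeats at the beat frequency |f1 - f2|, giving the much longer "beat wavelength":

```python
f1, f2 = 440.0, 444.0   # Hz, two tones close in frequency (illustrative values)
v_sound = 343.0          # m/s, speed of sound in air (assumed)

carrier_freq = (f1 + f2) / 2   # frequency of the fast oscillation heard as the pitch
beat_freq = abs(f1 - f2)       # the loudness envelope repeats this many times per second

wavelength_sound = v_sound / carrier_freq   # spacing of the fast wiggles ("green dots")
wavelength_beat = v_sound / beat_freq       # spacing of the envelope peaks ("purple dots")

print(wavelength_sound)  # well under a metre
print(wavelength_beat)   # tens of metres
```

So both statements can be true at once: the "wavelength of the sound" belongs to the carrier, and the much larger purple-dot spacing belongs to the beat envelope.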
Broadening of Spectral Lines
In the study of transitions in atomic spectra, and indeed in any type of spectroscopy, one must be aware that those transitions are not precisely "sharp". There is always a finite width to the observed spectral lines. One source of broadening is the "natural line width", which arises from the uncertainty in energy of the states involved in the transition. This source of broadening is important in nuclear spectra, such as Mössbauer spectra, but is rarely significant in atomic spectroscopy. A typical lifetime for an atomic energy state is about 10^-8 seconds, corresponding to a natural linewidth of about 6.6 x 10^-8 eV.
For atomic spectra in the visible and uv, the limit on resolution is often set by Doppler broadening. With the thermal motion of the atoms, those atoms traveling toward the detector with a velocity v will have transition frequencies which differ from those of atoms at rest by the Doppler shift. The distribution of velocities can be found from the Boltzmann distribution. Since the thermal velocities are non-relativistic, the Doppler shift in the angular frequency is given by the simple form ω = ω0(1 + v/c).
From the Boltzmann distribution, the number of atoms with velocity v in the direction of the observed light is given by n(v) dv ∝ exp(-mv²/2kT) dv.
The distribution of radiation around the center frequency is then given by I(ω) ∝ exp[-mc²(ω - ω0)²/(2kT·ω0²)].
This is in the form of a Gaussian, and the width at half-maximum is given by δω = (2ω0/c)·sqrt(2kT·ln2/m).
Often it is convenient to express this in terms of wavelength: δλ = (2λ0/c)·sqrt(2kT·ln2/m).
When you move further down the spectrum into the microwave region for molecular rotational spectra, the natural linewidth again emerges as a larger source of broadening than Doppler broadening. At some pressure, the perturbations of rotational energy levels by molecular collisions (pressure broadening) become the limiting factor for resolution.
Atomic Structure Concepts
Haken & Wolf
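To get a feel for the magnitudes, the sketch below evaluates the Gaussian Doppler FWHM, δλ = λ0·sqrt(8·ln2·kT/(m·c²)), which is the wavelength form of the half-maximum width discussed above. The sodium D line at 500 K is my own illustrative choice, not a case from the text:

```python
import math

K_B = 1.381e-23   # Boltzmann constant, J/K
C = 2.998e8       # speed of light, m/s
AMU = 1.661e-27   # atomic mass unit, kg

def doppler_fwhm(wavelength_m, mass_kg, temp_k):
    """Full width at half maximum of a Doppler-broadened (Gaussian) line."""
    return wavelength_m * math.sqrt(8 * math.log(2) * K_B * temp_k / (mass_kg * C**2))

# Sodium D line (589 nm), atomic mass ~23 u, T = 500 K:
dl = doppler_fwhm(589e-9, 23 * AMU, 500.0)
print(dl)  # on the order of 2 picometers
```

A width of a few picometers on a 589 nm line is a fractional broadening of only a few parts per million, yet it is still far larger than the natural linewidth quoted above — which is why Doppler broadening usually sets the resolution limit in the visible and UV.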
For detailed information on Oklahoma earthquakes, or to report an earthquake, go to http://www.okgeosurvey1.gov/ For more information on world-wide earthquakes go to: A brief update on the 2009 Oklahoma earthquakes northeast of Oklahoma City: On average there are about 50 measurable earthquakes each year in Oklahoma, with only a few of these having shaking strong enough to be felt. A total of 43 felt earthquakes in 2009 made this an exceptional year for seismic activity in Oklahoma. Twenty-seven of the felt earthquakes occurred in Oklahoma County, and another 7 were located in Lincoln County. Is the number of felt earthquakes occurring northeast of Oklahoma City unusual? Somewhat, but at this point there is no reason to be alarmed. Small earthquakes such as these can occur anywhere in the world. The US Geological Survey (USGS) estimates that there are as many as 3,000 of these small earthquakes occurring every day. Earthquake swarms like this can go on for many months, and usually do not lead up to a major earthquake. Do we know what is causing the earthquakes? No, without further study it is not possible to determine what is causing the earthquakes. The USGS and the Oklahoma Geological Survey are working together to conduct a limited field study to better measure any future earthquakes that could occur in eastern Oklahoma County.
Sqlite is a lightweight embedded database library. It is a small C library that implements a self-contained, embeddable, zero-configuration SQL database engine and is included by default with PHP 5. Sqlite_Tools is an object-oriented interface to effectively manage and back up Sqlite databases.
Why SQLite Tools
The strength of Sqlite and its superb portability might also be seen as a weakness. Because each database is a single file, it is more exposed to corruption. Client-server relational databases implement a number of built-in features to make database corruption a remote occurrence. Whilst Sqlite does offer some similar functionality (e.g. the synchronous value), there are no PHP functions in this direction, and there is little knowledge of how to successfully maintain and back up multiple databases while ensuring the integrity of the initial and cloned databases.
Sqlite_Tools functionality can be summarized in two different branches: Database Manipulation and Maintenance, and others; each of the functions is well commented, so you should be able to understand its purpose.
Database backup or remote live replication: ftpBackup backs up one or more databases via FTP (see the example).
Sqlite_Tools Output and Usage
Because Sqlite_Tools is a library, its output for most of the functions is basically raw and can be used in connection with the logs function, which keeps the output of each operation (e.g. database opened, database integrity check, database backup on FTP performed) in a local sqlite logs database. On the source link below you can see a number of usage examples.
Updated 11 July 2004
Yes Scott that is a very good point the universe could indeed extend way beyond the horizon of our visible universe, in just the same way as the horizon on Earth limits our view of the surface. However current thinking suggests otherwise. I think we are at crossed purposes Doonhamer? Yes I accept that I may very well be completley wrong to have any doubts about the big bang but even though it is now the accepted model for the universe I still reserve the right to hold with the notion that it may actually turn out to be wrong, which is I believe the proper scientific approach? Perhaps I should expand on the above comments? we could try estimating the age of the universe by measuring the Hubble constant (which is the current expansion rate of the universe) and use it to extrapolate back to the point of the big bang. However this relies heavily on the history of the expansion rate and overall density and composition of the universe as we observe it now, which may not be one and the same thing. So if the universe is 'flat' and composed of more matter then the age of the universe is towards the lower end of any estimate. On the other hand if the universe has very little matter and curved then it could be much older. Or if the universe contains matter that conforms to the cosmological constant, then the universe can be even older still. Measuring the Hubble constant has taxed the greatest minds for many years with the best estimates being from 65 to 80 km/sec/Megaparsec, where a Megaparsec is 1 megaParsec = 3.08568025 × 10 to the power of 22 metres. The best 'guess' being 72 km/sec/Megaparsec. So based on this information it is possible to say that the universe is between 12 and 14 billion years old. However If the universe was flat, and contained mostly ordinary or dark matter, then the age of the universe would be about 9 billion years, so age of the universe would be shorter than the age of oldest stars which obviously couldn't be right could it? 
So either the measurement of the Hubble constant is wrong, or the Big Bang theory is incorrect, or possibly we need to add a type of matter that conforms to the cosmological constant, but that sadly doesn't work either. Of course if the lower estimates of globular cluster ages are right then all is well for the big bang even without a cosmological constant. What helps underpin the expanding big bang universe is the WMAP data, and as long as the origin of large scale structures is right, then the finer structure of the cosmic microwave background will have a bearing on the density, expansion and composition of the universe as we now observe it. As it happens the WMAP data has pinned down these parameters to an accuracy of better than 3% of what is said to be the critical density, and applying that level of precision allows an estimate for the age of the universe of approximately 13.7 billion years, give or take about 1%. The expansion age as given by the WMAP data is therefore greater than that of the oldest globular clusters, so the big bang comes out on top, because if the expansion as measured by WMAP had shown the age of the universe to be less than the oldest globular clusters, well something pretty fundamental must be wrong. So big bang wins............. for now. http://localhostr.com/files/d624ef/tinfoil.gif
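The back-of-the-envelope version of that extrapolation is just the "Hubble time", 1/H0. The sketch below is only a unit conversion, ignoring the density and composition history discussed above, so it brackets rather than reproduces the quoted 12-14 billion year range:

```python
# Crude "Hubble time" estimate: naively extrapolating the present
# expansion rate back to the big bang gives an age of order 1/H0.
# This ignores deceleration/acceleration entirely.

MPC_IN_KM = 3.08568025e19   # 1 Megaparsec in km (= 3.08568025e22 m)
SEC_PER_GYR = 3.1557e16     # seconds in a billion (Julian) years

def hubble_time_gyr(h0):
    """1/H0 in billions of years, for H0 in km/s/Mpc."""
    return MPC_IN_KM / h0 / SEC_PER_GYR

for h0 in (65, 72, 80):
    print(h0, "km/s/Mpc ->", round(hubble_time_gyr(h0), 1), "Gyr")
```

For H0 between 65 and 80 this gives roughly 12 to 15 billion years; folding in the matter content pulls the preferred estimate toward the 13.7 billion year WMAP figure.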
ESA Scientists Capture the Lion's Offspring Down Under 28 Nov 2001 (Source: European Space Agency) ESA Science News Dr. Detlef Koschny ESTEC, Noordwijk, Netherlands After an eventful trip to the other side of the world, ESA's intrepid scientists have returned with a treasure trove of data about the 2001 Leonid meteor shower. From their remote encampment in the Australian outback, the four-man team from the European Space Research and Technology Centre in the Netherlands successfully observed many thousands of shooting stars while carrying out some groundbreaking trials of new scientific experiments. Team leader Detlef Koschny and colleague Roland Trautner happily recounted their successful campaign to capture the Lion's offspring. Question: Were you able to see the Leonids as you had hoped? Koschny: We were rather nervous because the night of the predicted maximum was cloudy - the first cloudy night we had in Australia - but, fortunately, the clouds went away and we had three hours of beautiful Leonids. We saw the first Leonid fireballs through holes in the clouds - this led to quite spectacular views, since the clouds were black and basically invisible (an unknown experience to a European observer, where there are always lights to illuminate the clouds). Miraculously, it slowly but steadily cleared up and one hour after midnight we had beautiful skies: the Magellanic Clouds were blazing, Canopus, Sirius and Achernar brilliant. The show started with about one bright Leonid (-2 magnitude or brighter) per minute. Most of them had orange-yellow heads and left a blueish-green trail that lasted for a few seconds. A small number showed persistent trails for half a minute or so. The highlight was a -2 mag Leonid which flew just above the southern horizon, parallel to it for about 90 degrees! Trautner: It was a great show - amazing! 
The most spectacular view for me was in the morning twilight (on 19 November), when the Sun was painting the sky a cobalt blue, the bright stars and the Milky Way were still visible, and there were brilliant fireballs coming in. It was the most beautiful moment of the whole night. We were very lucky because we had good weather at the end. There had been cloud and smoke from bush fires earlier in the night. The bush fires last for weeks - the farmers just let them burn. We could see them getting closer until they were burning near the road we would have to use on our return journey. Fortunately, the fire was already extinguished when we made our way back to Broome. Koschny: The weather was a worry to us. There had been thunderstorms around Perth, and we were told that the weather was also bad around Wolf Crater, so we decided to camp out at a dry lake nearer Broome. We made the right decision. Question: It sounds as if your observations were successful. Koschny: We captured many meteors and fireballs on video - probably several thousand in total, though we won't know the actual numbers until we analyse our tapes. At one point I saw five meteors within one second. My impression was that the activity was fairly constant for about three hours. It was definitely less activity than the 1999 Leonids that we had observed from Spain. There didn't really seem to be a significant peak, but this may be because of the observing geometry. We saw a bright fireball every minute at first, when the radiant (the apparent source of the Leonids) was low above the horizon. Later, as the radiant rose higher in the sky, we could see a lot more, fainter meteors. Our visual observations were reported via satellite phone to Vladimir Krumov from the International Meteor Organisation, who kindly acted as the coordinator. We obtained about 200 hours of video data from five intensified video cameras. 
Two of the cameras were equipped with objective gratings, so we were able to successfully record meteor spectra showing both emission and absorption lines, and we can now start to analyse the chemistry of these meteors. We also got some nice recordings from the electric field sensor that was measuring the electric field of the atmosphere. The signal was converted to the audio range and recorded on the video tape of our wide angle camera. Although the camera shows about 200 meteors brighter than +1 mag, so far we have not found (heard) any obvious correlation between the electric field and a meteor. We will be analysing the data in detail over the coming weeks to see if we can find any evidence of this. Trautner: We suffered from high temperatures - above 40 C every day. This increased the electric current consumption of the MI probe electronics and blew the fuses. Another problem we encountered was the power supply for our equipment. Fortunately, we were able to recharge our batteries during the day using a solar panel and by linking up to our car batteries and generators. The solar array was very useful - it would have been a disaster if the car batteries had run dry! After a number of MI probe test runs, the display on the laptop controlling the probe died, so that brought my tests to a sudden end. However, I had run sufficient tests before that to get plenty of useful data. It will be very valuable for assessing the performance of the new instrument architecture. Question: You mentioned the threat from bad weather, heat and bush fires. Were there any other problems that you had to overcome? Trautner: We were driving around looking for a good site to set up the MI probe when we had an encounter with a farmer's daughter wielding a rifle! She did not realise that her father had given us permission to be on the property and thought we were trespassing. She told us in no uncertain terms to get off the property. 
It was only after she rang her father that she realised her mistake. She wrote us an apology afterwards. We also had to keep a look out for lizards. Some of them were up to 1.5 metres long and they looked like small crocodiles! They were very shy, but if we saw any of these animals, we were very respectful! We also saw a lot of other animals - kangaroos, bush turkeys, emus, etc. There were plenty of insects too - sometimes they were a real plague. Koschny: All in all, it was a fantastic experience. Especially sitting in the outback, with nighttime temperatures above 20 deg C, three hours away from civilisation, seeing the Magellanic Clouds and the Southern Cross, was something I will never forget. [NOTE: Images supporting this article are available at http://sci2.esa.int/leonids/leonids2001/
The First 360 view of the Full Sun From NASA Heliophysics. Seeing the whole sun front and back simultaneously will enable significant advances in space weather forecasting for Earth, and improve planning for future robotic or crewed spacecraft missions throughout the solar system. These views are the result of observations by NASA’s two Solar TErrestrial Relations Observatory (STEREO) spacecraft. The duo are on diametrically opposite sides of the sun, 180 degrees apart. One is ahead of Earth in its orbit, the other trailing behind. Launched in October 2006, STEREO traces the flow of energy and matter from the sun to Earth. It also provides unique and revolutionary views of the sun-Earth system. The mission observed the sun in 3-D for the first time in 2007. In 2009, the twin spacecraft revealed the 3-D structure of coronal mass ejections which are violent eruptions of matter from the sun that can disrupt communications, navigation, satellites and power grids on Earth.
How does mathematics research affect approaches to everyday problems? What is the “Netflix problem” and what does it have to do with math? To find out, WID communications sat down with Ben Recht, assistant professor of computer sciences and researcher in the institute’s Optimization group, who recently received the Lagrange Prize in Continuous Optimization for studying the math behind making predictions from incomplete data. Question: In addition to your other work, what got you started in this line of research? Recht: In 2006, Netflix announced a prize of $1 million for anyone who could improve its movie recommendation engine. Others and I started playing around with their datasets to see if we could make improvements. At the same time, it was the heyday of this new idea called “compressed sensing,” which revisited the foundation of how we acquire data. Take a video, for example. The raw data in a video is huge since you get every pixel for every frame. So, for instance, we store videos using compression, allowing them to be streamed over the airwaves of the Internet. With compressed sensing, you can combine compression and acquisition, which allows you to acquire medical images faster, make better radar systems or even take pictures with single-pixel cameras. This new model of sensing opened up new directions in engineering and provided a slew of very interesting applied math problems. So academic discussions in addition to the Netflix problem influenced your work on mathematical compressed sensing? What we realized was that we could take these tools that were being used in compressed sensing and apply them to completely different problems — like this problem given by Netflix. The problem looks like this: I have a big database, and almost all of my data is missing. That problem, if you just looked at it, was very related to compressed sensing, so we joined the two. What did you explore in your research, which eventually earned you the Lagrange Prize? 
We were able to show that as long as the number of entries was sufficiently large — the number of things you knew about a person was big enough — then you could fill the whole matrix in without seeing any more entries. To give you a feel for that, even if I had a database with a million users and 10 million products, as long as I saw enough products per person, then I could actually predict with very high accuracy what they would be interested in purchasing. For all practical problems that involve people, that number seemed to be between 25 and 100. It’s relying on the assumption that people aren’t unique, even though we like to think we’re all unique and sporadic. Psychologists have found that the number of factors that influence our behavior are fewer than we think. That’s the only assumption we make. Would you say that you solved the “Netflix problem”? No. What’s interesting about our solution is that we first confirmed that it was truly possible to fill in the entries if you had a computer that could solve this particular problem. And we provided an algorithm that would achieve the type of predictions we could make. If you look at that algorithm, it turned out to be the exact one that the first person who published on the Netflix prize had. What have you learned from this prize-winning research? One of my favorite things about this paper is that the algorithm used to solve the problem was the common sense one. It’s exactly what many talented engineers had proposed as a heuristic, or the most intuitive solution. That’s what I think is the most interesting about the work that I do: showing that simple algorithms and a lot of data are the best solution nine times out of 10. What does receiving the Lagrange Prize mean to you? This prize is given every three years for outstanding achievements in optimization from the past six years. I’m completely flattered. 
The committee is filled with some of the guys who defined the subject, and the fact that they think that this is the best paper in continuous optimization is awesome. –Interview conducted by Marianne English
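For readers curious what "filling in the matrix" looks like in practice, here is a minimal sketch of low-rank matrix completion by alternating projections: repeatedly project onto rank-r matrices with a truncated SVD, then re-impose the observed entries. This is an illustration in the spirit of the simple, common-sense algorithms Recht describes, not the algorithm from the prize-winning paper; the matrix sizes, rank, and sampling rate are invented for the demo.

```python
import numpy as np

# Illustrative low-rank matrix completion (NOT the paper's algorithm):
# alternate between the nearest rank-r matrix (truncated SVD) and the
# set of matrices agreeing with the observed ratings.

rng = np.random.default_rng(0)
r, n_users, n_items = 2, 30, 20
true = rng.normal(size=(n_users, r)) @ rng.normal(size=(r, n_items))

mask = rng.random(true.shape) < 0.7   # observe ~70% of the entries
X = np.where(mask, true, 0.0)         # unknown entries start at zero

for _ in range(500):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    X = (U[:, :r] * s[:r]) @ Vt[:r]   # project to the rank-r set
    X[mask] = true[mask]              # keep observed entries fixed

err = np.abs(X - true)[~mask].max()
print("worst error on the unseen entries:", err)
```

With enough observed entries per row (the "25 to 100" regime mentioned above), the unseen entries are recovered essentially exactly.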
Ask a question about 'Amaranthus albus' Start a new discussion about 'Amaranthus albus' Answer questions from other users Amaranthus albus is an annual species of flowering plant (the flowering plants, also known as Angiospermae or Magnoliophyta, are the most diverse group of land plants; they are seed-producing plants like the gymnosperms, distinguished from them by a series of synapomorphies). It is native to the tropical Americas but is a widespread introduced species (a species living outside its native distributional range that has arrived in an ecosystem or plant community by human activity, either deliberate or accidental) in other places, including Europe, Africa, and Australia. When it dries it forms a tumbleweed: the above-ground part of a plant that, once mature and dry, disengages from the root and tumbles away in the wind. Common names include (United States) tumble pigweed; (Great Britain) pigweed amaranth, prostrate pigweed, white amaranth, and white pigweed.
Water Droplet Experiencing Leidenfrost Effect When a drop of liquid lands on a surface much hotter than its boiling point, the bottom layer of the drop vaporizes instantly. The gas pressure from the vapor layer keeps the liquid droplet from touching the hot surface. The vapor layer is too thin to be easily seen, but it's insulating enough to keep the liquid just below its boiling point. Instead of instantly boiling away, the droplet can survive for several minutes. This image shows a droplet of liquid water experiencing this phenomenon, which is called the Leidenfrost effect. The water droplet shown below was dropped onto a heated plate of polished aluminum and illuminated by a Helium-Neon (red range) laser. Notice the circulation pattern in the droplet, highlighted by reflective particles suspended in the liquid. The bottom "droplet" is a reflection of the actual water drop. This photograph was taken using a handheld digital camera at low speed, approximately 5 cm from the droplet. Image credit: Ilya Lisenker of University of Colorado - Boulder. This image was taken as part of Jean Hertzberg's "Flow Visualization: The Physics and Art of Fluid Flow" mechanical engineering class at the University of Colorado - Boulder, and will be shown at the 63rd Annual Meeting of the APS Division of Fluid Dynamics (DFD), November 2010, in San Antonio, Texas. "Circulation Pattern within a Drop Experiencing Leidenfrost Effect," Ilya Lisenker (2010) Flow Visualization Image Galleries
The Lunar Late Heavy Bombardment
- The small craters can be saturated on the oldest surfaces, but probably not the basins.
- There are more than 40 basins (D > 300 km) on the Moon.
- They are all older than ~3.8 Gyr (Wilhelms 1987).
- Historically, the issue of the nature of the bombardment comes down to the ages of three basins.
- The only good, and agreed upon, ages we have are for the two youngest:
......Imbrium (3.85 Ga) and Orientale (3.82 Ga).
- Nectaris was thought to be either ~3.9 Ga or 4.1 Ga.
- This has led to controversy over whether the lunar basin-forming impacts at ~3.9 Ga were:
  - The tail end of terrestrial planet accretion.
  - A spike in the impact rate (Tera et al. 1974).
- To get a spike you either need to 1) create a new impact population or 2) change the orbits of the planets.
- Four basic dynamical scenarios have been proposed:
  - Break-up of an asteroid (Zappala et al. 1998) - No: requires 1000X the mass of Ceres.
  - Co-orbitals of the Moon (Cuk & Gladman 2008) - No: can't last long enough.
  - Dynamical instability of a 5th terrestrial planet in the asteroid belt (Chambers 2007)
  - Influx of comets from the outer Solar System (Wetherill 1975), triggered by the migration of the giant planets (Levison et al. 2001).
- The best model to date is the so-called Nice model.
Calambokidis, J., J. Barlow, J.K.B. Ford, T.E. Chandler, and A.B. Douglas. 2009. Insights into the population structure of blue whales in the eastern North Pacific from recent sightings and photographic identifications. Marine Mammal Science 25:816-832 (full PDF). The definitive version is available at Wiley, and some of the figures from the ms are provided below. For details and photographs regarding the sightings of blue whales in British Columbia in 2007, see the press release issued in 2007. Blue whales were widely distributed in the North Pacific prior to the primary period of modern commercial whaling in the early 1900s. Despite concentrations of blue whale catches off British Columbia and in the Gulf of Alaska, there had been few documented sightings in these areas since whaling for blue whales ended in 1965. In contrast, large concentrations of blue whales have been documented off California and Baja California and in the eastern tropical Pacific since the 1970s, but it was not known if these animals were part of the same population that previously ranged into Alaskan waters. We document 15 blue whale sightings off British Columbia and in the Gulf of Alaska made since 1997, and use identification photographs to show that whales in these areas are currently part of the California feeding population. We speculate that this may represent a return to a migration pattern that existed in earlier periods for the eastern North Pacific blue whale population. One possible explanation for a shift in blue whale use is changes in prey driven by changes in oceanographic conditions, including the Pacific Decadal Oscillation (PDO), which coincides with some of the observed shifts in blue whale occurrence. Locations of blue whale identifications in British Columbia and Gulf of Alaska and where these individuals were also seen off California.
Read here and here. Climate "scientists" across the world have been blatantly fabricating temperatures in hopes of convincing the public and politicians that modern global warming is unprecedented and accelerating. The scientists doing the fabrication are usually employed by the government agencies or universities, which thrive and exist on taxpayer research dollars dedicated to global warming research. A classic example of this is the New Zealand climate agency, which is now admitting their scientists produced bogus "warming" temperatures for New Zealand. "NIWA makes the huge admission that New Zealand has experienced hardly any warming during the last half-century. For all their talk about warming, for all their rushed invention of the “Eleven-Station Series” to prove warming, this new series shows that no warming has occurred here since about 1960. Almost all the warming took place from 1940-60, when the IPCC says that the effect of CO2 concentrations was trivial. Indeed, global temperatures were falling during that period.....Almost all of the 34 adjustments made by Dr Jim Salinger to the 7SS have been abandoned, along with his version of the comparative station methodology." A collection of temperature-fabrication charts.
Figure 1. Printing and Previewing Plain Text: Here's the main form of the TextPrinting Project in Part II of this series, along with the print preview form from the same project.

Part I of this solution series explores VB's basic printing concepts: the Printer object, page geometry, graphics methods, and the Print common dialog control. In this solution, you'll see how to control where printing occurs on a page. Building on the knowledge gained from Part I, in Part II you'll build a text editor and text-printing utility that includes font selection and print-preview capabilities. The VB6 TextBox control provides the base functionality you need to build the text editor; but what good is a text editor without printing capabilities? Figure 1 shows the main form of the TextPrinting project and the print-preview form from the same project. When sent to a printer, the text will print just as it appears on the preview form, within the specified margins.

Author's Note: Printing formatted text is a very different story. The RichTextBox control provides a Print method, which you can use to send the control's text to the printer. Printing formatted text is a different task from printing unformatted text, so I won't cover the topic in this solution, except to say that your best bet is to either use a third-party control or print through Word using OLE automation.

Figure 2. Printing ListView Content: The figure shows both a ListView control and the result of printing the contents of that ListView using the project from Part III of this series.

Finally, in Part III of this solution series you'll see how to print tabular data by adding two methods, one for previewing and another for printing the items of the ListView control. It's not the most elaborate printing tool, but it's a nice component to add to any interface that uses the ListView control. Figure 2 shows a ListView control displaying a Customer table in detail mode as well as a preview of the control's printout. 
You can use the same technique to print any type of tabular data, including price lists, invoices, and so on. All you have to do is populate the control, set the column widths and call a method to print or preview the items. Although VB6's Printer object is convenient, printing in VB has some serious limitations, such as a lack of support for controls for printing and previewing documents (a situation that was rectified in VB.NET). Generating elaborate printouts with VB6 requires a substantial programming effort; therefore most VB6 developers use third party controls for their printouts. But you don't always want to include a large printing library to gain decent print capabilities; therefore, good printing tools have their place in every developer's utilities collection. You don't have to rely on third-party controls to gain additional printer control in VB. This solution reviews the basics of printing with VB6 and then applies the basic concepts to build some practical tools and utilities.
Most galaxies in the Local Group are dwarf systems, fainter by factors of 100 to more than 10,000 than giant galaxies. These insignificant objects nonetheless provide a strong constraint on the nature of dark matter. While most of the stars in the Universe reside in giant galaxies like the Milky Way, numerically the most common galaxies are dwarf systems hundreds to many thousands of times fainter. We can only detect such dim galaxies if they are relatively nearby; the Local Group, for example, contains about two dozen low luminosity galaxies (Hodge 1995). Luminosity and gas content are key properties of dwarf galaxies. Dwarf ellipticals (dE) are about 100 times fainter than giant galaxies; these gas poor systems typically rotate slowly. Their luminosity profiles are often better fit by exponential laws than by the de Vaucouleurs profiles which fit giant ellipticals. At the other extreme of a simplified low-luminosity Hubble sequence are dwarf spiral (Sm) and irregular (Irr) galaxies with similar luminosities; these systems are gas rich and rotate rapidly. The smallest systems, dwarf spheroidals (dS), are another factor of 100 fainter; they are barely visible against the stellar background of the Milky Way. Below is a summary of these properties:

M < -19:  E -- S0 -- Sa -- Sb -- Sc  Sd  Sm
M < -14:  dE -- dS0 ............... Irr
M <  -9:  dS ...................... Irr
          gas poor --------------> gas rich

Three of the four low-luminosity Sd & Sm spiral galaxies studied by Carignan & Freeman (1985) have rotation curves which rise gently out to the last point measured. While a pure-disk mass model fits the declining rotation curve of NGC 7793, massive halos with central densities of about 0.003 M_sun/pc^3 are apparently required in NGC 247, NGC 300, & NGC 3109. Within their Holmberg radii, all three of these galaxies have halo-to-luminous mass ratios of order unity, as does the fainter spiral UGC 2259. In this respect these galaxies are similar to giant Sc galaxies. 
Further studies indicate that while some dwarf spirals appear to be scaled-down versions of giant spirals, others are completely dominated by their dark halos. Deeper 21-cm observations of NGC 7793 imply that this galaxy has a massive halo after all, and yield central halo densities an order of magnitude higher than those reported above (Carignan & Puche 1990). Still more impressive is the faint dwarf galaxy DDO 154, where 21-cm observations reveal a very regular gas disk extending beyond 5 Holmberg radii (Carignan & Beaulieu 1989). In this galaxy the stars and neutral hydrogen together amount to only about 10% of the mass required to explain the rotation curve; the other 90% of some 4*10^9 M_sun total mass is dark. The central halo density is about 0.015 M_sun/pc^3. As already noted for giant galaxies, there is little direct information on the shape of the dark matter distribution. The halo densities quoted above are generally derived from models based on isothermal sphere. An alternate interpretation is that the dark mass is associated with the neutral hydrogen (Freeman 1993). For example, scaling up the HI mass of DDO 154 by a factor of about 7 yields a good fit to the rotation curve without invoking an additional halo (Carignan & Beaulieu 1989). Dwarf spheroidal galaxies are one to two orders of magnitude fainter than the spiral systems just discussed. The mere presence of such galaxies in the vicinity of the Milky Way is evidence for dark matter; visible stars don't provide enough mass to hold these diffuse objects together against the tides of our galaxy (Faber & Lin 1983). It's impossible to measure rotation curves or integrated velocity dispersions for such faint galaxies; instead, line-of-sight velocities must be measured for individual stars (Aaronson 1983). The core radius r_c and central 1-D velocity dispersion sigma_0 together provide a dynamical estimate of the central mass density rho_0. 
In a constant-density sphere the gravitational potential well is parabolic:

(1)  \Phi(r) = \Phi_0 + \frac{2\pi}{3} G \rho_0 r^2 ,

where \Phi_0 is the potential at r = 0 (see Lecture 7). The core radius is roughly where \Phi(r_c) - \Phi_0 = \sigma_0^2; solving for the central density yields

(2)  \rho_0 = \frac{3 \sigma_0^2}{2\pi G r_c^2} .

A more accurate derivation, based on the isothermal sphere, yields

(3)  \rho_0 = \frac{9 \sigma_0^2}{4\pi G r_c^2}

(BT87, Eq. 4-124b). Thus from the core radius and central velocity dispersion one can directly infer the total central mass density. Comparing this with the observed central luminosity density gives an estimate for the central mass-to-light ratio, (M/L). Results for a number of dwarf spheroidal galaxies are listed in the review articles by Kormendy (1987) and Pryor (1992). V-band mass to light ratios range from 5.7 (Fornax) to 94 (Draco). The central mass densities of these systems range from 0.073 M_sun/pc^3 (Fornax) to 1.3 M_sun/pc^3 (Draco). The faintest dwarf spheroidals such as Draco (M_V = -8.9) seem to be completely dominated by dark matter; the visible stars are basically a population of test objects moving in a potential well generated almost entirely by dark matter. The halos of dwarf galaxies provide an important constraint on non-baryonic forms of dark matter. Cowsik & McClelland (1973) suggested that neutrinos with nonzero rest mass could provide the dark matter in clusters of galaxies. The comoving density of light neutrinos has been constant since the Universe had a temperature of roughly 1 MeV; in physical coordinates the present mean neutrino density is a few hundred per cubic centimeter. This background of low-energy neutrinos is undetectable with present technology. But if these neutrinos have rest masses m_nu of a few tens of eV then they dominate the mass density of the Universe; estimates of the density parameter Omega thus provide upper bounds on neutrino mass. 
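Eq. (3) above is easy to evaluate numerically. The sketch below uses G in "galactic" units; the sigma_0 and r_c values are illustrative round numbers of the right order for a dwarf spheroidal, not measurements quoted in the text.

```python
import math

# Evaluate Eq. (3), rho_0 = 9 sigma_0^2 / (4 pi G r_c^2), with
# G expressed in pc (km/s)^2 / M_sun so the answer comes out in
# M_sun/pc^3.  Input values below are illustrative only.

G = 4.30091e-3  # gravitational constant, pc (km/s)^2 / M_sun

def central_density(sigma0_km_s, r_core_pc):
    """Central mass density (M_sun/pc^3) from the isothermal-sphere fit."""
    return 9.0 * sigma0_km_s**2 / (4.0 * math.pi * G * r_core_pc**2)

# e.g. sigma_0 = 10 km/s, r_c = 200 pc gives roughly 0.4 M_sun/pc^3,
# between the Fornax and Draco values quoted in the text.
print(central_density(10.0, 200.0))
```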
Assuming for simplicity that all neutrino species have the same mass, this bound is

(4)  m_\nu < 100\ {\rm eV}\ (T / 2.7\,{\rm K})^{-3}\ (H_0 / 100\ {\rm km/s/Mpc})^{2}\ \Omega / g ,

where T is the microwave background temperature, H_0 is Hubble's constant, and g = g_{\nu_e} + g_{\nu_\mu} + g_{\nu_\tau} + ..., where g_\nu is the number of spin states of neutrino species \nu; as far as we know, only left-handed neutrinos exist (except in Deep Space Nine; see Krauss 1995). Present values for H_0 imply that m_nu < 50 eV/g in a critical-density world-model, or m_nu < 10 eV/g in an Omega = 0.2 low-density model. Tremaine & Gunn (1979) pointed out that halos of individual objects provided a complementary limit, which on the face of it rules out stable neutral leptons with masses less than about 1 MeV. In the early Universe the neutrino momentum distribution is specified by Fermi-Dirac statistics; the number-density of neutrinos of species \nu with momenta in the range p to p+dp is

(5)  n_\nu(p)\, d^3p = \frac{g_\nu}{h^3}\, \frac{d^3p}{\exp(p/kT) + 1} ,

where T is the temperature, k is Boltzmann's constant, and h is Planck's constant. The maximum of this function at p = 0 is just one-half the limit implied by the Uncertainty Principle. Changing variables from momentum to velocity, the corresponding maximum phase-space mass density is

(6)  f_{\rm max} = \frac{1}{h^3} \sum_\nu g_\nu m_\nu^4 .

The Collisionless Boltzmann Equation preserves phase-space mass density not only in already formed galaxies but even during the collapse of dark halos, so Eq. (6) is a firm upper bound to the maximum phase space density of dark matter in galactic halos. For an isothermal sphere halo model, the latter density is

(7)  f_{\rm max} = \frac{\rho_0}{(2\pi \sigma_0^2)^{3/2}} .

Requiring that Eq. (7) be less than the upper bound given by Eq. (6) thus implies a limit on the neutrino mass:

(8)  m_\nu > 106\ {\rm eV}\ (\sigma_0 / 100\ {\rm km/s})^{-3/4}\ (\rho_0 / 1\ M_\odot\,{\rm pc}^{-3})^{1/4}\ g^{-1/4} ,

where once again all neutrino species are assumed to have the same mass. 
This limit is strongest for those systems with low velocity dispersion and high central density (Tremaine & Gunn 1979). Now if the dark halos of dwarf spheroidals like Draco have the same velocity dispersions as the luminous components then the neutrino mass must be m_nu > 500 eV, contradicting the cosmological limit derived above (e.g. Lin & Faber 1983). This would appear to rule out light leptons, including neutrinos, as the dark matter in dwarf spheroidal galaxies. Because neutrinos are such attractive and elegant dark matter candidates, people have looked hard for loopholes in this argument. Tremaine & Gunn modeled the neutrino halo with an isothermal sphere, and assumed that their velocities are comparable to the stellar velocities. If the neutrinos have a larger velocity dispersion, the limit implied by Eq. (8) is lowered. But to make halos with central densities of 1 M_sun/pc^3 out of neutrinos with cosmologically reasonable masses, the neutrino velocity dispersion would have to be at least 100 km/s; dwarf galaxies like Draco would have halos with core radii of 10 kpc and masses comparable to the mass of the entire Milky Way (Gerhard & Spergel 1992). Such high masses are quite implausible! Halo models with anisotropic velocity distributions may also modify Tremaine & Gunn's limit. In particular, hollow-halo models with tangentially anisotropic velocities permit halos of the necessary mass to be constructed without exceeding the cosmological limit on maximum phase-space density (Ralston & Smith 1991, Madsen 1991). But such anisotropic halo models often turn out to be dynamically unstable (Barnes 1993). It thus appears that Tremaine & Gunn's limit, though somewhat model dependent, precludes the possibility that the halos of dwarf galaxies are composed of massive neutrinos. Last modified: April 18, 1997
Almost three-fourths of the world's surface is covered in water. This water is home to over 20,000 different species of fish. The earliest fossils of fish date back over 400 million years.

There is a wide variety of fish, from the goby, which is less than one half an inch long, to the whale shark, which can be over 60 feet long. Most fish breathe through gills. Gills perform the gas exchange between the water and the fish's blood; they allow the fish to breathe the oxygen in the water.

Fishes are vertebrates that have a skeleton made of either bone or cartilage. About 95% of fishes have skeletons made of bone. These bony fishes have a swim bladder, a gas-filled sac, that they can inflate or deflate, allowing them to float in the water even when not swimming. Fishes with a cartilage skeleton tend to be heavier than water and sink. They must swim to keep afloat. Cartilaginous (cartilage) fish include the ray and the shark.

Most fish swim using a tail fin. Muscles in the tail fin move it from side to side, forcing water backward and propelling the fish forward. Other fins help the fish change direction and stop. Pectoral fins on their sides help them swim up and down. Dorsal and anal fins on the top and bottom keep the fish upright. Pelvic fins on the underside help steer left and right.

Many fish eat plants, while others, such as the shark, eat other fish. If you are interested in more information about fish, check out our fish video collection.

Web Sites about Fish:
- For Kids, maintained by the Monterey Bay Aquarium
- See a QuickTime movie showing the birth of a seahorse at the Birch Aquarium of the Scripps Institution of Oceanography. This will require the free QuickTime plug-in, which you can download.
Formula for ratio of weights at different latitudes

As you know, the Earth is not a perfect sphere, so a person's weight varies from place to place on the surface of the globe. Assuming that the Earth is an ellipsoid (that is, the intersection of the Earth with a plane passing through the Poles is an ellipse), the person's weight depends only on which latitude he or she is on. Suppose he/she weighs at latitude N or S. Then his/her weight at latitude N or S is given by the following formula, which I calculated today:

(1) is the ratio of the Equatorial radius of the Earth to the Polar radius of the Earth.

(2) and are weights, not masses. The person's mass does not vary with location on Earth.

(3) Because the left-hand side is a ratio, it does not matter which unit you use to measure weight. You can use newtons, pounds, stones, or even the unscientific kilos.

The utility of the formula is that if you know your weight at a particular latitude on Earth, you can calculate your weight at any other latitude on Earth.
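The poster's own formula was an image that did not survive extraction, so it cannot be reproduced here. As a substitute sketch of the same idea, here is the standard WGS84 (Somigliana) normal-gravity formula, which gives surface gravity as a function of geodetic latitude on the reference ellipsoid (including the rotation of the Earth, which the thread's pure-ellipsoid model may or may not have considered):

```python
import math

# WGS84 normal-gravity (Somigliana) constants -- standard reference values,
# used here as a stand-in for the poster's lost formula.
GE = 9.7803253359       # gravity at the equator, m/s^2
K  = 1.93185265241e-3   # Somigliana constant
E2 = 6.69437999014e-3   # squared first eccentricity of the WGS84 ellipsoid

def normal_gravity(lat_deg):
    """Surface gravity on the WGS84 ellipsoid at geodetic latitude lat_deg."""
    s2 = math.sin(math.radians(lat_deg))**2
    return GE * (1.0 + K * s2) / math.sqrt(1.0 - E2 * s2)

def weight_ratio(lat1_deg, lat2_deg):
    """Ratio of a person's weight at lat2 to their weight at lat1."""
    return normal_gravity(lat2_deg) / normal_gravity(lat1_deg)

print(f"pole vs. equator: {weight_ratio(0.0, 90.0):.5f}")
```

As the thread notes, the ratio is unit-independent: you weigh about half a percent more at the poles than at the equator, whatever unit you measure in.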
Any of a subgroup of amphibole minerals that are calcium-iron-magnesium-rich and monoclinic in crystal structure. Hornblende, whose generalized chemical formula is (Ca,Na)2-3(Mg,Fe,Al)5(Al,Si)8O22(OH)2, occurs widely in metamorphic and igneous rocks. Common hornblende is dark green to black in colour and usually found in middle-grade metamorphic rocks (formed under medium conditions of temperature and pressure). Such metamorphic rocks with abundant hornblende are called amphibolites.

This entry comes from Encyclopædia Britannica Concise. For the full entry on hornblende, visit Britannica.com.
Look up monthly U.S., Statewide, Divisional, and Regional Temperature, Precipitation, Degree Days, and Palmer (Drought) rankings for 1-12, 18, 24, 36, 48, 60-month, and Year-to-Date time periods. Data and statistics begin in January 1895. Please note, Degree Days are not available for Agricultural Belts.

Contiguous U.S. Temperature Rankings, November 1974
More information on Climatological Rankings (out of 119 years)

|Period|Rank|Ties|Note|
|Sep - Nov 1974|26th Coldest|1976|Coldest since: 1972|
|Sep - Nov 1974|92nd Warmest|1963|Warmest since: 1973|
The world's most ambitious scientific experiment is buried 100 meters underground, straddling Switzerland and France. A billion times every minute, the Large Hadron Collider (LHC) slams together protons, while four giant detectors watch closely. - So how does the Large Hadron Collider work? - Why can slamming tiny particles into each other provide clues about the nature of all space and time? - What mysteries are physicists trying to solve with data from the LHC? - How does the cutting edge of particle physics relate to the world around us, from the patterns of stars in the sky to the fact that they shine at all? Natalia Toro, PI Faculty, works at the intersection of theories and hard data. She will explain how complex collision data from the LHC is being digested and examined right now, and how it may set the course for the science of the future.
Al Gore urges everyone to plant trees in An Inconvenient Truth. But where, asks Dickson Despommier, a 67-year-old microbiologist at Columbia University, can we plant them if, as scientists suggest, more and more of the world's forests will soon become farmland to support our explosive population growth? Nearly 41 percent of Earth's land is now used for agriculture, yet we're on the brink of vast population growth, from 6.7 billion people today to an estimated 9.2 billion by 2050, with the majority living in cities. The only way to make room for enough carbon-sequestering trees to reverse global warming, Despommier argues, is to change the way we farm. Radically.

Despommier envisions blocks of vertical farms in the world's biggest cities, each structure 30 stories high, providing enough food and water for 50,000 people a year, with no waste. He is in discussions with potential investors to build the first prototype. Despommier also sits on the board of New York Sun Works, an eco-friendly engineering firm in Manhattan that in May demonstrated a similar—if much smaller—urban-farm concept on a floating barge.

Q: How did you come to the idea of putting a farm in a skyscraper?

A: About eight years ago, I asked my students to come up with ideas on urban sustainability, and they proposed 13 acres of farmable land on the commercial rooftops of Manhattan. We figured out that it would feed just 2 percent of the city, so I said, "Let's take the 1,723 abandoned buildings in Manhattan, retrofit them and do hydroponics." Then I said, "OK, forget about money, space and time, and design a building that will feed and hydrate 50,000 people a year." I wanted individuals to eat 2,000 calories a day and drink water created by evapotranspiration.

Q: Meaning water from the plants themselves?

A: Right. The condensation comes from the leaves, even though you put the water into the roots. If you had a vertical farm the size of a city block, the plants inside could produce enough water for roughly 50,000 people.
Q: Where would irrigation come from?

A: The sewage. First you'd desludge it. Then you'd filter it through nonedible barrier plants and again through a tower of zebra mussels, the best filtering organism out there. After that, the water would be pristine.

Q: How many different kinds of fruits and vegetables would you grow inside the building?

A: More than 100—strawberries, blueberries, even miniature banana plants. We got a list from NASA of produce that can be grown indoors. It turns out that NASA has a big hydroponics program, because there's no takeout on Mars—you can't send out for a pizza. Genetic engineering and artificial selection will also play an important role in vertical farming because there are a lot of plants, such as traditional corn, that we don't yet know how to grow indoors.

Q: How will this fight global warming?

A: All the governmental reports say the same thing: The biggest polluter is agriculture. I love the look of a wheat field, but it's a huge trade-off to grow food outside the city—40.5 percent of the earth is used for agriculture. As the population grows, the demand for food goes up and more land is cleared for farming. Come up with an alternative to traditional agriculture, and you already have the strategy for sequestering carbon dioxide: planting trees.

Q: How much will all this cost?

A: The first vertical farm could run into the billions of dollars. I envision state-of-the-art stuff: The plants will be placed on automated conveyor belts that move past stationary grow lights and automated nutrient-delivery systems. The first buildings would have to be subsidized, with energy incentives and tax incentives. We're talking about the equivalent of engineering a Saturn rocket.

Q: When could we see the first farm?

A: With funding, there could be a prototype in 5 to 10 years. I hope I live to be 106 and see the skyline dotted with them.
<urn:uuid:a5e30293-1b4f-4ebc-8e21-1a461960825d>
3.046875
931
Audio Transcript
Science & Tech.
59.81458
Object Name: Messier 42
Alternative Designations: M42, NGC 1976, The Great Orion Nebula, Home of the Trapezium
Object Type: Emission and Reflection Nebula with Open Galactic Star Cluster
Right Ascension: 05 : 35.4 (h:m)
Declination: -05 : 27 (deg:m)
Distance: 1.3 (kly)
Visual Brightness: 4.0 (mag)
Apparent Dimension: 85×60 (arc min)

Locating Messier 42: Finding Messier 42 is very easy from a dark sky location by centering on the glowing region in the center of Orion’s “sword”. However, from urban locations, these stars might not be visible, so aim your binoculars or telescope about a fist width south of the three prominent stars that make up the asterism known as Orion’s Belt. It’s a very bright and large object well suited to all sky conditions and instruments! Remember to use low power to get the full majesty of M42 and to increase magnification to study various regions.

What You Are Looking At: Known as the “Great Orion Nebula,” M42 is a great cloud of gas spanning more than 20,000 times the size of our own solar system, and its light is mainly fluorescent. For most observers, it appears to have a slight greenish color – caused by oxygen being stripped of electrons by radiation from nearby stars. At the heart of this immense region is an area known as the “Trapezium” – its four brightest stars form perhaps the most celebrated multiple star system in the night sky. The Trapezium itself belongs to a faint cluster of stars now approaching the main sequence and resides in an area of the nebula known as the “Huygenian Region” (named after the 17th century astronomer and optician Christian Huygens, who first observed it in detail). Buried amidst the bright ribbons and curls of this cloud of predominately hydrogen gas are many star forming regions. Appearing like “knots,” these Herbig-Haro objects are thought to be stars in the earliest stages of condensation.
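The catalogue numbers above are enough to turn the apparent size into a physical one. A small sanity check using the small-angle approximation (the ~100 AU solar-system diameter used for comparison is an assumption of this sketch, not a figure from the article):

```python
import math

DISTANCE_LY = 1300.0       # 1.3 kly, from the data block above
MAJOR_AXIS_ARCMIN = 85.0   # apparent major axis, from the data block above
AU_PER_LY = 63241.1        # astronomical units per light-year

# Small-angle approximation: physical size = distance * angle (in radians)
angle_rad = math.radians(MAJOR_AXIS_ARCMIN / 60.0)
size_ly = DISTANCE_LY * angle_rad
print(f"M42 spans roughly {size_ly:.0f} light-years")

# Compare with the "more than 20,000 solar systems" claim, taking a rough
# ~100 AU solar-system diameter (an assumption for scale):
n_solar_systems = size_ly * AU_PER_LY / 100.0
print(f"that is about {n_solar_systems:,.0f} solar-system diameters")
```

The result lands right around the article's "more than 20,000 times the size of our own solar system."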
Associated with these objects are a great number of faint red stars and erratically luminous variables – young stars, possibly of the T Tauri type. There are also “flare stars,” whose rapid variations in brightness mean an ever changing view. “Orion may seem very peaceful on a cold winter night, but in reality it holds very massive, luminous stars that are destroying the dusty gas cloud from which they formed,” said Tom Megeath, an astronomer at the Harvard-Smithsonian Center for Astrophysics. While studying M42, you’ll note the apparent turbulence of the area – and with good reason. The “Great Nebula’s” many different regions move at varying speeds. The rate of expansion at the outer edges may be caused by radiation from the very youngest stars present. “In this bowl of stars we see the entire formation history of Orion printed into the features of the nebula: arcs, blobs, pillars and rings of dust that resemble cigar smoke,” said Massimo Roberto, an astronomer at the Space Science Telescope Institute in Baltimore. “Each one tells a story of stellar winds from young stars that impact the environment and the material ejected from other stars.” Although M42 may have been luminous for as long as 23,000 years, it is possible that new stars are still forming, while others were ejected by gravitation – known as “runaway” stars. A tremendous X-ray source (2U0525-06) is quite near the Trapezium and hints at the possibility of a black hole present within M42. The Trapezium’s stellar winds also are responsible for the formation of stars inside the nebula – their shock waves compressing the medium and igniting starbirth. “When you look closely, you see that the nebula is filled with hundreds of visible shock waves,” said Bob O’Dell, an astronomer from Vanderbilt University. 
O’Dell was fortunate enough to use Hubble to map Orion’s stellar winds and create a map of two of Orion’s three star-forming regions… Regions where the winds have been blowing continuously for nearly 1,500 years! What else have we learned about the Great Orion nebula in recent years? Try the discovery of 13 drifting gas planets. These rare, “free-floating” objects were confirmed by Patrick Roche of the University of Oxford and Philip Lucas of the University of Hertfordshire just before the turn of the century. They were found with the Hubble Space Telescope while looking for faint stars and brown dwarfs. “The objects are likely to be large gas planets similar in size to Jupiter and consisting primarily of hydrogen and helium,” said Roche. “From the measured brightness and the known distance to the Orion nebula, we knew they did not have enough material for any nuclear processing in their interiors.” Chances are very good these planets may be failed stars – much like our own Jupiter. But these planets don’t orbit a star the same way our solar system’s planets orbit the Sun… they simply roam around. Dr. Roche said that the 13 objects “probably formed in a different way from the planets in our solar system” in that they were not made “out of the residue of material left over from the birth of the sun.” Instead, they formed “like stars via the collapse of a cloud of cold gas,” explained Lucas. “But they possess most of the physical properties and structure of gas giant planets,” added Lucas. History: Messier 42 was possibly discovered in 1610 by Nicholas-Claude Fabri de Peiresc and was recorded by Johann Baptist Cysatus, Jesuit astronomer, in 1611. For fans of the great Galileo: he was the first to mention the Trapezium cluster in 1617, but did not see the nebula. (However, do not despair! For it is my belief that he was simply using too much magnification and therefore could not see the extent of what he was looking at.)
The first known drawing of the Orion nebula was created by Giovanni Batista Hodierna, and after all of these documents were lost, the Orion nebula was once again credited to Christian Huygens in 1656, documented by Edmund Halley in 1716. It then went on to Jean-Jacques d’Ortous de Mairan in his nebulae descriptions, to be added by Philippe Loys de Chéseaux to his list, expounded by Guillaume Legentil in his review, and finally left to Charles Messier to add to his catalog on March 4, 1769. “The drawing of the nebula in Orion, which I present at the Academy, has been traced with the greatest care which is possible for me. The nebula is represented there as I have seen it several times with an excellent achromatic refractor of three and a half feet focal length, with a triple lens, of 40 lignes [3.5 inches] aperture, and which magnifies 68 times. This telescope made in London by Dollond, belongs to M. President de Saron. I have examined that nebula with the greatest attention, in an entirely serene sky, as follows: February 25 & 26, 1773. Orion in the Meridian. March 19, between 8 & 9 o’clock in the evening. [March] 23, between 7 & 8 o’clock. The 25th & 26th of the same month, at the same time. These combined observations and the drawings brought together, have enabled me to represent with care and precision its shape and its appearances. This drawing will serve to recognize, in following times, if this nebula is subject to any changes. There may be already cause to presume this; for, if one compares this drawing with those given by MM. Huygens, Picard, Mairan and by le Gentil, one finds there such a change that one would have difficulty to figure out that this was the same. I will make these observations in the following with the same telescope and the same magnification. In the figure which I give, the circle represents the field of the telescope in its true aperture; it contains the Nebula and thirty Stars of different magnitudes.
The figure is inverted, as it is shown in the instrument; one recognizes there also the extension and the limits of this nebula, the sensible difference between its clearest or most apparent light with that which merges gradually with the background of the sky. The jet of light, directed from the star no. 8 to the star no. 9, passing by a small star of the 10th magnitude, which is extremely rare, as well as the light directed to the star no. 10, and that which is opposite, where there are the eight stars contained in the nebula; among these stars, there is one of the eighth magnitude, six of the tenth, and the eighth of the eleventh magnitude. M. de Mairan, in his Traite de l’Aurore Boreale, speaks of the star no. 7. I report it in my drawing below such as it is at present, and as I have seen; so to speak surrounded by a thin nebulosity. In the night of October 14 to 15, 1764, in a serene sky, I determined with regard to Theta in the nebula, the positions of the more apparent stars in right ascension and declination, by the means of a micrometer adapted to a Newtonian telescope of 4 1/2 feet length. These stars are numbered up to ten; I have reported them in the drawing containing the field of the telescope; and an eleventh of them is beyond the circle. The positions of the stars which are not marked with numbers have been fixed by estimating their relative alignments. One will know easily also the magnitude of the Stars by the model which I have reported on the figure. Those of the tenth and the eleventh magnitude are absolutely telescopic and very difficult to find.” However, it would be Sir William Herschel who would devote much love, time, and attention to the Great Orion Nebula – even though his findings would never be made public. 
As a true master observer, he had quite a talent for sensing what truly might lay beyond the boundary: “In 1783, I reexamined the nebulous star, and found it to be faintly surrounded with a circular glory of whitish nebulosity, faintly joined to the great nebula. About the latter end of the same year I remarked that it was not equally surrounded, but most nebulous toward the south. In 1784 I began to entertain an opinion that the star was not connected with the nebulosity of the great nebula in Orion, but was one of those which are scattered over that part of the heavens. In 1801, 1806, and 1810 this opinion was fully confirmed, by the gradual change which happened in the great nebula, to which the nebulosity surrounding this star belongs. For the intensity of the light about the nebulous star had by this time been considerably reduced, by attenuation or dissipation of nebulous matter; and it seemed now to be pretty evident that the star is far behind the nebulous matter, and that consequently its light in passing through it is scattered and deflected, so as to produce the appearance of a nebulous star. A similar phenomenon may be seen whenever a planet or a star of the 1st or 2nd magnitude happens to be involved in haziness; for a diffused circular light will then be seen, to which, but in a much inferior degree, that which surrounds this nebulous star bears a great resemblance.” But of course, the great Sir William Herschel also had nights from his many notes on M42 where he simply said: “The nebula in Orion which I saw by the front-view was so glaring and beautiful that I could not think of taking any place of its extent.” May your own views be as incredible… Top M42 image credit, Palomar Observatory, 2MASS Sky Survey Image, M42 and Trapezium Hubble Images, Messier’s Historic M42 Sketch and M42 Green Hubble Palette courtesy of Chuck Reese.
The Lycaenidae are members of the Superfamily Papilionoidea, the true butterflies. Worldwide in distribution, this family has approximately 4,700 species that are unevenly distributed. Coppers are especially dominant in north temperate regions, blues are richest in the Old World tropics and north temperate zones, and hairstreaks are particularly abundant in the New World tropics. The adults are typically small to tiny and often brilliantly colored--iridescent blues, bright reds, and oranges. Adults of both sexes have three pairs of walking legs, though most males have fused segments in their front legs. Most adults visit flowers for nectar, but some harvesters feed on wooly aphid honeydew and some hairstreaks feed on aphid honeydew or bird droppings. Females lay urchin-shaped eggs on host leaves or flower buds; the resulting caterpillars are typically slug-shaped. In many species, caterpillars depend on ants for protection, so caterpillars produce sugary secretions that are collected by the ants. Most species overwinter in either the egg or pupal stage.

Hairstreaks are members of the Family Lycaenidae. Richest in tropical habitats, hairstreaks are numerous in the Americas and comprise about 1,000 species. In tropical species, the upperside of small to medium-sized adults is often iridescent blue, due to reflected light from the wing scales. However, most of the North American species are brown above. Migration is rare, but a few species (such as the Gray Hairstreak) are good long-distance colonists. Males perch to await mates, and females lay eggs singly. Caterpillars usually feed on leaves or reproductive structures of woody trees or shrubs. Interestingly, the chrysalids of several species can produce sounds between their abdominal segments, likely related to their interactions with ants. Hairstreaks typically overwinter in the egg or pupal stage.
Typically found in the intertidal zone at the water's edge at a mean distance from sea level of 105 meters (343 feet).

Classification:
- Kingdom: Animalia (C. Linnaeus, 1758) – animals
- Subkingdom: Bilateria ((Hatschek, 1888) Cavalier-Smith, 1983)
- Infrakingdom: Ecdysozoa (A.M.A. Aguinaldo et al., 1997 ex T. Cavalier-Smith, 1998)
- Phylum: Arthropoda (Latreille, 1829) – arthropods
- Subphylum: Mandibulata (Snodgrass, 1938)
- Class: Insecta (C. Linnaeus, 1758) – insects
- Order: Lepidoptera (C. Linnaeus, 1758) – butterflies and moths
- Family: Lycaenidae

Name Status: Accepted Name.

Members of the genus Ogyris: ZipcodeZoo has pages for 0 species and subspecies in this genus.
Some materials have the curious property of being magnetic under normal everyday conditions - for example, they stick to the metallic door of your fridge. Technically speaking, they show a spontaneous magnetisation at room temperature, and are called ferromagnetic, after the Latin name of iron, which is the prototype of a material with these properties. As it turns out, the state of being magnetic is a phase, similar to being solid or fluid, and indeed, one can study phase diagrams for magnetic materials. For example, if the temperature of a magnetic chunk of iron is raised above a certain, specific temperature, the magnetisation is lost. This temperature is called the Curie temperature, after Marie's husband Pierre, a pioneer of solid-state physics. The Curie temperature of iron is 1043 K.

The appearance (or disappearance) of spontaneous magnetisation at the Curie temperature is not only technologically relevant, it is also very useful for geologists: if ferromagnetic minerals in volcanic lava cool down from red-hot molten rock to below the Curie point, they "freeze in" the orientation of the Earth's magnetic field at that very moment. This makes it possible to reconstruct the orientation and strength of the Earth's magnetic field over history.

One goal of physicists in the early years of the 20th century was to understand how spontaneous magnetisation comes about, and to find a quantitative description of the magnetisation as a function of temperature. To this end, they made simplifying assumptions - for example, that atoms behave like miniature compass needles which interact only with their neighbours. One of these models was proposed by the German physicist Wilhelm Lenz in 1920, and then analysed in more detail by his student Ernst Ising - it's the famous Ising model (Ising was born in Cologne, Germany, hence the pronunciation of the name is "eeh-sing", not "eye-sing").
In the Ising model, one assumes that the magnetic moments of atoms can have only two orientations, and that it is energetically favourable if the magnetic moments of neighbouring atoms are oriented in parallel - it costs an energy J to flip one magnetic moment with respect to its neighbour. Then, one applies the rules of statistical mechanics and tries to calculate the magnetisation - the average orientation of the magnetic moments. As it turns out, there is indeed a spontaneous magnetisation below a certain temperature - one of the most elementary examples of spontaneous symmetry breaking. And, even more spectacular from the theorist's point of view, in the special case of a restriction to just two dimensions, Onsager and later Yang (the Yang of parity violation and Yang-Mills theories) could derive an exact formula for the magnetisation M as a function of temperature. It looks pretty complicated, but the interesting thing is that there is only one free parameter in the formula, the Curie temperature T_C, which depends on the energy J necessary to flip a magnetic moment. Essentially, the magnetisation is 1 at zero temperature (meaning that all magnetic moments point in the same direction), and drops to zero like the eighth root of (T_C - T) as the temperature approaches the Curie point.

As nice as it may be to have such a formula, it would be interesting to check in an experiment whether it is correct. However, there is a drawback: it is valid only in two dimensions, i.e. for planar layers just one atom thick, and it works only for magnetic moments which can be only parallel or antiparallel to one fixed direction. Fortunately, progress in materials science in the 1990s has made it possible to produce thin ferromagnetic films only a few atomic layers thick, with magnetic moments which indeed show the restricted orientation with respect to an axis described in the Ising model.
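The displayed formula itself did not survive in this copy of the post; the exact result Onsager and Yang obtained for the square-lattice Ising model is presumably the standard expression

```latex
% Spontaneous magnetisation of the 2D square-lattice Ising model (Yang 1952),
% valid below the Curie temperature (M = 0 for T > T_C):
M(T) = \left[\, 1 - \sinh^{-4}\!\left( \frac{2J}{k_B T} \right) \right]^{1/8},
\qquad T < T_C ,
% with the Curie temperature fixed by sinh(2J / k_B T_C) = 1, i.e.
k_B T_C = \frac{2J}{\ln\left(1 + \sqrt{2}\right)} \approx 2.269\, J .
```

Near the Curie point this behaves as M ∝ (T_C − T)^{1/8} - the "eighth root" behaviour mentioned above - and the only free parameter is indeed T_C, set by the flip energy J.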
So, these films should behave like the Ising model, and one can try to measure the magnetisation as a function of temperature. This is what is shown in this plot by C. Rau, P. Mahavadi, and M. Lu:

Figure taken from C. Rau, P. Mahavadi, and M. Lu: Magnetic order and critical behavior at surfaces of ultrathin Fe(100)p(1×1) films on Pd(100) substrates, J. Appl. Phys. 73 No. 10 (1993) 6757-6759 (DOI: 10.1063/1.352476).

It is, unfortunately, not possible to measure magnetisation directly, so one has to rely on other effects that depend on it - in this case, a method called electron capture spectroscopy (ECS): a beam of ions is shot at the film, the ions capture electrons from the surface and emit light, which can be detected. If the surface is magnetised, the light is polarised, and thus the polarisation of the emitted light is a measure of the magnetisation. This is what is plotted on the vertical axis: the polarisation P, normalised to the polarisation P0 at low temperatures. As the experiment shows, the polarisation - and hence the magnetisation of the film - is nearly constant at low temperatures and drops sharply to zero on approaching a specific temperature, to be identified as the Curie temperature TC. In the figure, the normalised polarisation is shown as a function of temperature T, with temperature normalised to the Curie temperature.

Now one can compare with the theoretical prediction for the magnetisation of the Ising model as a function of temperature. This is the solid black curve. There are no free parameters left, and the agreement with the experimental data is perfect.

Here is an intriguing circle from experiment to theory back to experiment: experimental data on ferromagnets measured more than 100 years ago show the appearance of spontaneous magnetisation as the temperature drops below the Curie point.
Models are constructed to try to understand this, and for a simplified model restricted to two dimensions, an exact formula for the magnetisation can be derived. Finally, real materials show up which correspond to the idealisations and simplifications made in the model; the magnetisation can be measured... and it works! This post is part of our 2007 advent calendar A Plottl A Day.
Posted on April 13th, 2009

On 13th April 2029, twenty years from today, the asteroid designated Apophis (full designation 99942 Apophis, originally designated 2004 MN4) is going to come very close to the Earth. It caused a brief period of concern throughout December 2004 because some initial observations of its trajectory indicated a significant probability - up to 2.7% - that it would strike the Earth some time in 2029. It was discovered by Roy A. Tucker, David J. Tholen, and Fabrizio Bernardi on 19th June 2004 (hence the original 2004 designation). It will fly by at only 18,300 miles above the Earth's surface; at this relatively low altitude, it will pass well inside the orbits of Earth's geosynchronous communications satellites. At its closest approach, the asteroid (about 300 metres wide) will shine as bright as a 3rd-magnitude star and be easily visible to the naked eye from cities across three continents: Europe, Asia, and Africa. After certain calculations, it turns out that there is a small chance (about 1 in 45,000) that the 2029 encounter will bend the orbit of Apophis just enough that, on its return, it actually hits the Earth on 13th April 2036 (or so the experts say). Should such an impact arise, NASA estimates that it could hit the Earth with the equivalent energy of an 880-megaton bomb! As a point of comparison, the 1883 super-eruption of the volcano Krakatoa was equivalent to approximately 200 megatons.
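The quoted impact energy can be sanity-checked from the asteroid's size alone. The back-of-envelope sketch below is mine, not NASA's: the density (about 3.2 g/cm³, typical for a stony asteroid) and the impact speed (about 12.6 km/s) are assumed values not given in the post, so only the order of magnitude should be trusted.

```python
import math

# Assumed parameters (NOT from the post): stony-asteroid density and a
# typical Earth-impact speed for a near-Earth object.
DIAMETER_M = 300.0       # width quoted in the post
DENSITY_KG_M3 = 3.2e3    # assumption: stony asteroid
SPEED_M_S = 12.6e3       # assumption: typical impact speed
MEGATON_J = 4.184e15     # one megaton of TNT, in joules

radius = DIAMETER_M / 2.0
mass = DENSITY_KG_M3 * (4.0 / 3.0) * math.pi * radius ** 3   # sphere of rock
energy_megatons = 0.5 * mass * SPEED_M_S ** 2 / MEGATON_J    # kinetic energy
# Comes out in the high hundreds of megatons - the same ballpark as
# NASA's 880-megaton figure.
```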
(Submitted April 23, 2007) How far away is the furthest known galaxy? The most distant galaxy known today is called IOK-1; with a redshift of 6.964, it is about 12.88 billion light years away from Earth. Here is more information: A galaxy called Abell 1835 IR1916 was found in 2004 and was originally thought to be at redshift 10, or about 13.18 billion light years away, but subsequent attempts to confirm the observation did not detect the same object. Here is the original report: And here is a recent paper that discusses the attempts to confirm the original observation: Jay and Jeff for Ask an Astrophysicist
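To see roughly how a redshift of 6.964 translates into "about 12.88 billion light years", one can integrate the light-travel (lookback) time in a flat ΛCDM cosmology. The parameter values below (H0 = 70 km/s/Mpc, Ωm = 0.3) are my assumptions, not necessarily the ones the observers used, so the answer only agrees to within a few percent:

```python
import math

H0_KM_S_MPC = 70.0          # assumed Hubble constant
OMEGA_M = 0.3               # assumed matter density; flat universe, so:
OMEGA_L = 1.0 - OMEGA_M

# Convert H0 to inverse gigayears: 1 Mpc = 3.0857e19 km, 1 Gyr = 3.156e16 s.
H0_PER_GYR = H0_KM_S_MPC / 3.0857e19 * 3.156e16

def lookback_time_gyr(z, steps=50_000):
    """Light-travel time to redshift z in Gyr, by trapezoidal integration
    of dt = dz / ((1+z) H(z)) with H(z) = H0 sqrt(Om (1+z)^3 + OL)."""
    def integrand(zp):
        return 1.0 / ((1.0 + zp) * math.sqrt(OMEGA_M * (1.0 + zp) ** 3 + OMEGA_L))
    h = z / steps
    total = 0.5 * (integrand(0.0) + integrand(z))
    for i in range(1, steps):
        total += integrand(i * h)
    return total * h / H0_PER_GYR

# For IOK-1 at z = 6.964 this lands near 12.8 Gyr: the light left the
# galaxy roughly 12.8 billion years ago.
```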
Counting Fish from a Submersible August 30, 2001 Andrew Shepard, Director National Undersea Research Program University of North Carolina at Wilmington One objective of our mission is to make a quantitative assessment of the distribution and abundance of fishes living on Oculina Bank. So, how are fish counted in the ocean? Taking a fish census is important because it provides information to fishery managers who must decide whether to regulate the amount of fish that can be harvested. But not all species can be counted the same way. Fish species vary in size and mobility, live in different habitats, have different life histories, and respond differently to disturbances created by devices and vehicles introduced into their environment. Blind or remote fish sampling, in which the people taking the samples cannot see the fish in the water, has drawbacks. Blind sampling methods include trawling and bioacoustic (sonar) sampling. Fish that are smaller than the net's mesh size pass through the trawl. The area sampled by a trawl may not entirely cover the habitat of a target species, because the precise location of the target sample area cannot be visually established. Trawling on a rocky bottom cannot be done without destroying the net and the bottom habitat. So if trawling is done at mid-water depths for a fish that also lives on the bottom, a sampling bias is introduced, affecting the accuracy of the census. Bioacoustic sampling, while useful for estimating the number and biomass of large schools of fish, is not useful on a reef. Species cannot be distinguished, and individuals hiding beneath ledges cannot be "seen" at all. What, then, is the best way to count fish? The best way for many species, according to scientists, is to go beneath the surface for a first-hand look, either with a remotely operated vehicle (ROV) or a human occupied vehicle (HOV). An ROV, depending on the lens angle of the camera attached to it, generally has a narrower field of vision than an HOV. 
This restricts the number of individuals that the fish census taker is able to see and count. Depending on current speed, an ROV may be unable to stop and sit motionless on the bottom to conduct the count, which is often a preferred technique. The shortcomings of blind sampling and fish censuses with ROVs suggest that an HOV may be the best method to count reef fish. Even these marvels of technology, however, cannot produce entirely accurate fish counts. The light and noise of a moving HOV will spook some fish and attract others. In some cases, the lights of an HOV will illuminate prey for predatory species. To adjust for these sources of potential error, fish census takers move the HOV along transect lines to count fish and invertebrates that do not scatter at the sight of an HOV. For bottom-dwelling fish, such as gag and scamp grouper, which spook at the sight of submersible lights, fishery scientists may turn off the lights, park the HOV, and wait for the fish to reappear before beginning their census. An extremely low-light camera helps with these "quiet" counts. Generally, scientists conduct fish counts near dusk, when the fish are most active as they come and go from the reef. On Oculina Bank, ocean explorers are trying to establish the abundance of reef fish, especially different species of grouper, relative to fish counts taken at the same sites in the early 1980s and mid-1990s. In the early 1980s, before the bank was fished heavily, populations of reef groupers were highly abundant. By the mid 1990s, they were highly depleted. Enumerating individual fish creates a new challenge. Even if one is not moving about in an HOV, counting large numbers of fish with the naked eye as they dart here and there is befuddling without the aid of a video pause function. Scientists attempt to count the precise number of individuals in their field of view, provided they are not so numerous as to make it impossible. 
The number of individuals in large schools can only be estimated by counting those in a small area and multiplying that number by the estimated area of the school. To avoid double counting of individuals, scientists often follow a rule of counting only the total number of individuals within their view at one time. The Clelia submersible and ROV both made two dives today on Eau Gallie reef, outside the protected area of the Experimental Oculina Research Reserve. The ROV dives were used to determine where to dive with the Clelia. On the first submersible dive, the Clelia followed the track of the earlier ROV transect, where most of the habitat was covered with coral rubble. Relatively few fish were observed. The track covered by the transect of the second ROV dive showed only a featureless flat plain, again covered with coral rubble. Based on those results, the scientists decided to explore a different area, with more topographic relief, just to the east and inside the reserve. They found 18 different species of fish and swarms of greater amberjack feeding on unfortunate prey, illuminated by the lights of the Clelia.
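The school-size estimate described above is simple proportional scaling; here is a minimal sketch (the function and variable names are mine, not from the expedition log):

```python
def estimate_school_size(count_in_subarea, subarea_m2, school_area_m2):
    """Scale a count made in a small patch up to the whole school,
    assuming roughly uniform density across the school."""
    density = count_in_subarea / subarea_m2       # fish per square metre
    return density * school_area_m2

# e.g. 40 fish counted in a 10 m^2 patch of a school covering ~500 m^2
# suggests on the order of 2000 individuals.
```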
There are several ways you might get out. You could get lucky and drive straight to the road. You might have a map of the field and, with enough landmarks or a compass, find the road home. You might know the field so well that you could find your own way out. You might have a helicopter overhead directing you, or a friend who could not see you but had a beacon on your car and a radio to guide you out. In essence, we use some or all of these techniques to find our way around the deep ocean. We are exploring during this expedition, so we don't know the area well on our initial dives during the first few days. We are underwater, so the helicopter can't actually see us. Our ship, the Seward Johnson II, does have an acoustic (sonar) system coupled with a GPS that can provide us with a position for the submersible, and they can direct the sub to a dive site. But this assumes the scientists know where they want the ship to go in the first place. So, before the Oculina Banks Expedition began, we reviewed descriptions of past dives done in the reserve to determine the areas that the scientists aboard wanted to explore. (Remember, we are evaluating the condition of the bank habitat and the abundance of reef fish to compare them to earlier surveys and judge whether the habitat and fish populations have improved or declined.) We start each dive day by reviewing a side-scan sonar survey done at many of the dive sites in 1995. At 6:00 am, we then conduct a fathometer survey of target features. This acoustic sounding device provides a single line of continuous depth. A line grid provides a picture of the bottom topography. At 7:00 am, we do an ROV survey to visually map out the proposed submersible dive site. At 10:00 am, we dive the submersible and hopefully go exactly where we planned. We do another ROV survey at a nearby location on the site in the afternoon and dive the submersible on the same area if the ROV survey suggests it will be a worthwhile location to explore.
In 2002, we will return to Oculina Bank with a state-of-the-art 3-D mapping system that will provide a detailed chart of the reserve. This product will get us closer to the road.

Islands in the Stream
Project Coordinator

Geomorphology is defined as the study of landscapes, and geomorphologists are those who seek to understand and describe how landscapes are formed. As with any discipline, geomorphologists often focus their efforts, and their careers, on studying specific types of landscapes such as mountain ranges, fluvial drainage systems, and coastal barrier islands. Theirs is the science of understanding the relationship between terra firma and the processes that act upon it, both building it up and tearing it down. As a geomorphologist trained in studying fluvial and coastal systems, I approached my first submersible dive on Oculina Bank wondering what strange and exciting new landscapes I would be able to see first hand, as well as wondering about the forces in the deep marine environment that sculpt it. Cramped in the aft compartment of the Clelia, my legs curled protectively around the high definition video camera that we would be using to record our adventure, I was unable to witness our descent. I was only aware of the change in color, a dimming of the harsh Florida summer sunlight to a refreshingly cool aquamarine, deepening in tone and color the deeper we went. Finally, after reaching bottom in 237 ft of water, I was able to move into position and gaze out through the submersible's large viewing sphere at a world I had only seen up to this point through graphs made from acoustical soundings or images recorded on film - images that do capture the imagination, yet leave one feeling several steps removed from the scene. Even though safely ensconced in the submersible, we were viewing things first hand, able to respond immediately and directly to ideas and thoughts that came to mind based on what we were witnessing. We were exploring.
The target of our investigation was a large series of ridges that had been selected by assessing side-scan imagery, data collected from a fathometer survey, and viewing video taken by the ROV earlier in the day. Unlike terrestrial environments, these hills and ridges were formed not from orogenic processes or the differential erosion of rocks and sediments, but from living creatures. Here, 20 nautical miles from the coast, Oculina varicosa, the ivory tree coral, at some yet to be determined time in the past, gained purchase on scattered ledges, pinnacles, and outcrops of limestone jutting from the seafloor, bathed by the nearby Gulf Stream. With the eye of a geomorphologist, I thought back to the past, visualizing a system of coral reefs and shoreline during a time when sea level was much lower. Visualizing the waters rising over a period of thousands of years, drowning the reefs and shorelines, their sediments slowly lithifying. Visualizing the birth of the coral mounds we were now exploring. Thus, the morphology has changed from low-relief outcrops scattershot in a sea of sediment, to hills and ridges tens of meters high with slopes angling steeply into troughs and swales. The Oculina, growing on the skeletons of previous generations, has formed these rubble-filled piles of coral. I was told by one of the scientists that he once pushed a metal rod into the top of one of these mounds a distance of approximately twenty feet without contacting the substrate. Geomorphology is a study of landscapes and how they form, and here in the submersible I found myself able to witness first hand an extremely unique and very fragile landscape, one that has been forming for thousands of years, and as with all landscapes, providing habitat for a unique assemblage of species. And I found myself also pondering the future. If I were able to visit the banks again in another thousand years, what would I find? How would the landscape have changed?
How would human behaviors influence its shape, function and form? I pondered this because our activities do play a role in geomorphology -- an often dramatic role. In the case of the Oculina landscape, what was once an extensive area of living, intricately intertwined live corals has now, in many locations, been reduced to broken piles of dead coral with only small scattered live colonies. Scars from trawling activities and abandoned fishing gear -- piles of monofilament line -- are apparent throughout the area, evidence of physical destruction that has pulverized many sections of the reef. However, we also recorded several large intact colonies of Oculina coral that were dead. What is causing this? These are the questions that drive our investigation, as well as a sense of responsibility for ensuring that systems such as this will not be completely lost. Again, what will the Oculina Banks landscape look like in a thousand years, and what will be the agents of change?
Evolution as Reproduction with Variability Biological evolution is often thought of as a process by which adaptation is generated through selection. While it is recognized that random variation underlies the process, emphasis is usually placed on selection and resulting adaptation, leaving a sense that it is selection that drives evolution. The simulation below highlights the creative role of random variation, offering a somewhat different perspective: that of evolution as open-ended exploration driven by randomness and constrained by selection, with adaptation as a dynamic, transient consequence rather than an objective. A brief explanation of the simulation and how it can be used is provided below (more to come). Download model (right click to save) This simulation was created with NetLogo and can be run locally (as well as modified) by downloading the model as well as the NetLogo software package, the latter made freely available by Uri Wilensky and the Center for Connected Learning at Northwestern University. |WHAT IS IT?| |Evolution simulations normally focus attention on adaptation as a consequence of selection. This simulation focuses instead on the underlying process of reproduction with variance, and illustrates selection as operating within that context. From this perspective, what drives evolution is not selection but rather random variation, and adaptation is a transient consequence of interactions between random variation and a selective regime which, at any given time, constrains the space explored by random variation.| |HOW IT WORKS| In the presence of selection, agents in the regime under selection pressure die without producing offspring like themselves. To maintain population levels constant, in this case a new agent is born having the x and y characteristics randomly chosen from the distribution of such characteristics in the population at the time.
The degree of selection pressure is modelled as the probability that an agent in the regime under selection pressure dies without producing an offspring like itself. Under low selection pressure an agent in that regime has some probability of producing agents more or less like themselves. |HOW TO USE IT| Step through a simulation by repeatedly clicking the Go button, or start and stop a continuously running simulation using the Go Until button. A record of all variants that have existed during a simulation can be obtained by turning on the Mark switch. In this case, variants will leave a white mark as an indication of their existence. To get rid of the record of variants, click "Reset mark." Bar diagrams display the x and y distributions of the agent population at any given time; mean x and y values of these distributions are continually updated to the right. |THINGS TO NOTICE| |In the absence of selection pressure, there can be directionality to change over time resulting simply from reproduction with variance. Start with a population of 100 or so agents at x = -16 and y = 0. Because agents cannot be smaller than -16, all offspring will be at that value or to the right of it. This is what's called in the literature a "left wall effect." Because of it, the mean x value of the population will over time move progressively toward 0, and until that value is reached, new agents of progressively more rightward x values will be appearing. Directionality to change over time in the absence of selection pressure does not depend on a left wall effect. Start with a population of 100 or so agents at x = 0 and y = 0, and turn Mark on. Notice that the mean x value doesn't change over time but that there is a progressive increase in the number of variants that have existed over time until all possibilities have been tried. By itself, reproduction with variance moves progressively toward exploring all possibilities.
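The "left wall effect" described above can be reproduced outside NetLogo in a few lines. The Python sketch below is my own and is not the NetLogo model itself: agents reproduce with unbiased variance, offspring positions are clipped at the wall x = -16, and the population mean drifts rightward with no selection at all.

```python
import random

def simulate_left_wall(pop_size=200, generations=300, wall=-16.0,
                       variability=1.0, seed=0):
    """Reproduction with variance next to a wall: each generation, every
    agent is replaced by an offspring at parent + Gaussian noise, clipped
    so no agent can sit left of the wall.  Returns the final mean x."""
    rng = random.Random(seed)
    pop = [wall] * pop_size                  # everyone starts at the wall
    for _ in range(generations):
        pop = [max(wall, x + rng.gauss(0.0, variability)) for x in pop]
    return sum(pop) / pop_size

# Starting everyone at -16, the mean drifts to the right over time even
# though the per-offspring noise itself has zero mean.
```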
You can check that while this doesn't depend on selection, it does depend on variability by repeating the observations with variability set to 0. It's interesting to think about selection in this context. Return to the starting condition of a population of 100 or so agents at x = -16 and y = 0 and variability = 1, and now turn on selection with a vertical bar at x = 0 and selection pressure set to maximum. Agents explore all possibilities to the left of the selection regime but none of those to the right. From this perspective, selection can be thought of as a constraint on a tendency to explore that is inherent in reproduction with variance. To observe adaptation, start with a population of agents at x = 0 and y = 0 and run the simulation until the population has become randomly distributed across the space of possibilities. Pause the simulation and create a selection regime using the vertical and horizontal bars. This might involve, for example, selecting against all agents with x values less than 0 or selecting against all agents that have either x or y values less than -3 or greater than 3. Restart the simulation and notice that the population quickly becomes restricted to locations outside the selective regime. That adaptation is a transient phenomenon, dependent on the interaction between random variation and a selective regime, can be illustrated by pausing the simulation again, removing the selection regime, and observing the return of the population to a random distribution across all possibilities. The effects of selection pressures that change with time, and their dependence on the population at any given time, can also be explored to some degree with the simulation. Start, for example, with a population at x = -14 and let the simulation run until the population has expanded somewhat. Pause the model, create a selection regime that selects against agents with x values less than some value, and restart the model.
Notice that the population adapts to the new selection regime, more slowly if there are no current members of the population outside the values selected against and more rapidly if there are. By progressively extending the selection regime, one can move the population progressively toward any given set of characteristics. |THINGS TO TRY| The constraint of selection can be overcome to varying degrees either by reducing selection pressure or by increasing variance. Try out various combinations to see what difference it makes. |CREDITS AND REFERENCES |Model created by Paul Grobstein, August 2010, and available at http://serendip.brynmawr.edu/exchange/grobstein/EvolVariability.|
Locking is essential in threaded programs. It restricts code from being executed by more than one thread at the same time, which makes threaded programs reliable. The lock statement uses a special syntax form to restrict concurrent access; it is compiled into a lower-level implementation based on threading primitives. Here we see a static method A that uses the lock statement on an object. When the method A is called many times on new threads, each invocation of the method accesses the threading primitives implemented by the lock. Only one invocation of A can execute the statements protected by the lock at a single time, regardless of the thread count.

Program that uses lock statement [C#]

using System;
using System.Threading;

class Program
{
    static readonly object _object = new object();

    static void A()
    {
        // Lock on the readonly object.
        lock (_object)
        {
            // Inside the lock, sleep for 100 milliseconds.
            // ... This is thread serialization.
            Thread.Sleep(100);
            Console.WriteLine(Environment.TickCount);
        }
    }

    static void Main()
    {
        // Create ten new threads.
        for (int i = 0; i < 10; i++)
        {
            ThreadStart start = new ThreadStart(A);
            new Thread(start).Start();
        }
    }
}

Possible output of the program

In this example, the Main method creates ten new threads, and then calls Start on each of them. The method A is invoked ten times, but the tick count shows the protected region is executed sequentially - about 100 milliseconds apart. If you remove the lock statement, the methods will be executed all at once, with no synchronization. Let's examine the intermediate representation for the lock statement in the above example method A. In compiler theory, high-level source texts are translated to lower-level streams of instructions. The lock statement here is transformed into calls to the static methods Monitor.Enter and Monitor.Exit. The lock is actually implemented with a try-finally construct, which uses the exception handling control flow.
Intermediate representation for method using lock

.method private hidebysig static void A() cil managed
    .locals init (object obj2)
    L_0000: ldsfld object Program::_object
    L_0007: call void [mscorlib]System.Threading.Monitor::Enter(object)
    L_000c: ldc.i4.s 100
    L_000e: call void [mscorlib]System.Threading.Thread::Sleep(int32)
    L_0013: call int32 [mscorlib]System.Environment::get_TickCount()
    L_0018: call void [mscorlib]System.Console::WriteLine(int32)
    L_001d: leave.s L_0026
    L_0020: call void [mscorlib]System.Threading.Monitor::Exit(object)
    .try L_000c to L_001f finally handler L_001f to L_0026

By using the lock statement to synchronize accesses, we are creating a communication between time and state: the state is connected to the concept of time and sequential accesses to the lock. In the Theory of Relativity, there is also a communication between time and state - the speed of light, which is a constant based on the relation of time and space. This connection is present also in locks - in threading constructs. For a better description of how relativity mirrors concurrent synchronization, please see the wizard book, Structure and Interpretation of Computer Programs.

We examined the lock statement in the C# language, first seeing its usage in an example program and then describing its synchronization. Next, we stepped into the intermediate representation and its meaning in compiler theory. Finally, we related the Theory of Relativity and the complexities of the physical universe to the lock statement.
Have you ever wondered why stars twinkle? It's caused by poor seeing conditions and is the bane of Earthbound astronomers. Some stars twinkle and change color so rapidly that they have been reported as UFOs. Poor seeing conditions are caused by turbulent mixing in the Earth's atmosphere. Light from stars must pass through our atmosphere and is perturbed as it goes through varying layers of air. The lower the elevation of the observer, the more air the light must travel through, and the more perturbation. That is the reason observatories with powerful telescopes are built on top of mountains or launched into space. Poor seeing is also called scintillation. Scintillation, or seeing effects, are always much more pronounced near the horizon than near the zenith (straight up), because light travels through more atmosphere at the horizon than when the object is straight overhead. The worst seeing conditions are found immediately after a cold front has gone through the area. This turbulence is caused by the mixing of different air masses along the surface of the Earth to around 180,000 feet or more. If you are looking at a celestial object such as a planet or the moon during scintillation, study the image for several minutes and the object will come in and out of focus. Recently, ground-based telescopes, including some advanced amateur equipment, have been able to reduce the amount of scintillation with the use of state-of-the-art adaptive optics. This device keeps the image constant and is used for astrometry, lightcurve photometry, variable star studies, and long-exposure astrophotography. The next time you are out on a clear dark night and you see stars twinkle and change color, it's not a UFO, but simply starlight refracting as it travels through our atmosphere.
- With an average depth of less than 20 meters (65 feet), Lake Erie is the shallowest of the Great Lakes. According to the map labeled Lake Erie Depth, the west basin of Lake Erie is (deeper) (shallower) than the east basin.
- In general, shallower areas of a lake store less heat, cool off faster in autumn, and are usually the first to form ice in winter. From the Lake Erie Depth map, it seems likely that winter ice would first form in the (east) (west) basin.
- Check your prediction by comparing the depth map with the long-term average ice cover maps for January and February. For Lake Erie, the ice cover begins in the (deeper) (shallower) basin and spreads to the (deeper) (shallower) basin.
- In general, the deeper areas of a lake are the last to form ice in winter and the first to lose ice in the spring. But for Lake Erie, with the prevailing wind blowing from southwest to northeast along the length of the lake, floating ice is transported to the (eastern) (western) basin.
- According to the ice cover diagrams, Coast Guard ice breaking assistance would most likely be required for ships attempting to transport cargo in the eastern basin of Lake Erie between the ports of Erie, PA and Buffalo, NY in early (January)
- A lake-effect snow is a highly localized fall of snow immediately downwind from an unfrozen lake. It occurs, in part, because of the energy and moisture that the open lake waters add to the cold air blowing across it. Because winds during lake-effect snows often blow from the west, the roads most likely to be closed by lake-effect snows are those between (Detroit and Toledo) (Erie
- The formation of an insulating ice cover limits the transfer to the air of the energy and moisture that is necessary for the development of lake-effect snow. Based on the ice cover maps for January and February, of the two months, the one with the greater potential for lake-effect snow is (January) (February).
Category #4 includes all non-harmful mutations that do not fall under categories #1-#3 above. These organisms greatly outnumber the first three types. By definition, they do not have any direct bearing on the calculations. Using the mean estimates of mutation rates and categorizations we'll now choose a saturation cycle period and calculate how many mutations have occurred. In a period of 1000 saturation cycles (one "year"), 10e32 cell divisions result in: 10e23 non-harmful mutations maximum 10e22 non-harmful mutations mean 10e21 non-harmful mutations minimum Using the above "mean," the following seems (See "It should now be noted . . ." on page . . ./e-boundary/page8.html) to show the populations at the end of the first "year": Category #1: 5.49 * 10e19 (see population "2-A", page 8) (mutation type "C1") Category #2: 5.49 * 10e19 (see population "2-B", page 8) (mutation type "C2") Category #3: 5.1 * 10e17 (see population "2-C", page 8) (mutation type "C3") Category #4: a bit less than 10e22 We will no longer keep track of category #4 as, by definition, these organisms don't have anything leading to any structure that might allow for another energy source. According to baggage principles, these organisms, though they greatly outnumber other non-harmful mutations, won't out-compete average varieties in the long term. Since the best they can do is present an average competition, their effect is no different than the competition offered by the non-mutated original variety. In categories #1-#3 we see that in the timeframe of 1000 saturation cycles secondary non-harmful mutations are to be predicted. That is, the mutants in these three categories have themselves produced descendants and that a portion of these third-level groups each received a non-harmful mutation. Before we can proceed with calculating secondary mutations, we need to establish naming conventions and abbreviations.
This will facilitate charting family trees, timelines, and tables and help keep populations organized. The original population that saturated the oceans of the planet will be designated "1-A". Groups arising directly from the original, through non-harmful mutations, will be named "2-A", "2-B", "2-C", etc. Third "generation" groups will be named "3-A", "3-B", etc., as they came about when second "generation" groups received non-harmful mutations. For the purposes of the simulations, the word "generation" does not refer to a single cell cycle but to the origination of mutant groups. The three basic categories of mutation will be designated "C1", "C2", and "C3". It will be seen that when population sizes are large enough, groups may produce more "generations" of C1, C2, and C3 during the "year". This leads to an accumulation of two qualities in the next "generation" of groups. Now we need to keep track of the accumulated qualities, the two types of relevant qualities being "immediate survivability" and the "potential for another energy source". We'll abbreviate the first quality as "IS" and the second as "PAES". The degrees of these qualities will be designated by the number of mutations that have contributed to them. "IS=3" refers to three degrees of immediate survivability, for example. "PAES=2" refers to two non-harmful mutations that have contributed to the organism's future potential for having a secondary energy source.
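The first-"year" bookkeeping above can be checked with a short script. The figures are taken directly from the text; the only assumption is that the text's "10e22"-style notation means powers of ten (10**22, and so on):

```python
# Toy re-calculation of the "first year" numbers quoted above.
# Assumption: "10e22"-style notation in the text is read as 10**22.
non_harmful_mean = 10**22            # mean non-harmful mutations per "year"

# Category populations at the end of the first "year" (from the text):
c1 = 5.49e19   # population "2-A", mutation type C1
c2 = 5.49e19   # population "2-B", mutation type C2
c3 = 5.1e17    # population "2-C", mutation type C3

# Category #4 is everything left over -- "a bit less than 10e22":
c4 = non_harmful_mean - (c1 + c2 + c3)
print(f"C4 ~ {c4:.4g}")   # slightly below 10**22, as the text states
```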
concatenate result-type &rest sequences => result-sequence
Arguments and Values:
result-type---a sequence type specifier.
sequences---sequences.
result-sequence---a proper sequence of type result-type.
concatenate returns a sequence that contains all the individual elements of all the sequences in the order that they are supplied. The sequence is of type result-type, which must be a subtype of type sequence. All of the sequences are copied from; the result does not share any structure with any of the sequences. Therefore, if only one sequence is provided and it is of type result-type, concatenate is required to copy that sequence rather than simply returning it. It is an error if any element of the sequences cannot be an element of the result sequence. If the result-type is a subtype of list, the result will be a list. If the result-type is a subtype of vector, then if the implementation can determine the element type specified for the result-type, the element type of the resulting array is the result of upgrading that element type; or, if the implementation can determine that the element type is unspecified (or *), the element type of the resulting array is t; otherwise, an error is signaled.
(concatenate 'string "all" " " "together" " " "now") => "all together now"
(concatenate 'list "ABC" '(d e f) #(1 2 3) #*1011) => (#\A #\B #\C D E F 1 2 3 1 0 1 1)
(concatenate 'list) => NIL
(concatenate '(vector * 2) "a" "bc") should signal an error
Affected By: None.
Exceptional Situations:
An error is signaled if the result-type is neither a recognizable subtype of list, nor a recognizable subtype of vector.
An error of type type-error should be signaled if result-type specifies the number of elements and the sum of the lengths of the sequences is different from that number.
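For intuition only, here is a rough Python analogue of the copying behaviour described above. This is not Common Lisp and it ignores result-type validation; it simply illustrates that the result never shares structure with its arguments:

```python
def concatenate(result_type, *sequences):
    """Rough analogue of CL's CONCATENATE for lists and strings.

    A fresh sequence is always built, mirroring the requirement
    that even a single input of the right type must be copied.
    """
    items = [item for seq in sequences for item in seq]
    return "".join(items) if result_type is str else result_type(items)

print(concatenate(str, "all", " ", "together", " ", "now"))  # all together now

src = [1, 2, 3]
out = concatenate(list, src)
print(out == src, out is src)  # True False -- equal, but a fresh copy
```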
NASA may alter the design of its upcoming Mars Science Laboratory rover so it will not only crush and analyse soil and rocks on the Red Planet, but will also store samples for a future mission to deliver to Earth. The possible change could shorten the wait time for a Mars sample return mission, which a new report ranks as the highest scientific priority for future Mars missions. Scientists have been asking for a sample return mission since the 1960s, but cost, mission complexity and lack of appropriate technology have prevented any such missions from going forward. Still, they say studying Martian samples in labs on Earth could teach them much more about the climate, geochemistry and possibility of past or present life on Mars than remote studies with robots - even those as capable as the Mars Exploration Rovers. That view is highlighted in a new NASA-commissioned report by the US National Research Council (NRC), which outlines the first comprehensive strategy devised for the detection of life on Mars since 1995. It states: "The highest-priority science objective for Mars exploration must be the analysis of a diverse suite of appropriate samples returned from carefully selected regions on Mars." The report suggests using a series of spacecraft for sample return rather than a single mission, an approach that would reduce the complexity and weight of each individual probe. In this scenario, one or more missions could collect and store, or "cache", samples; one could retrieve a scientifically promising cache and launch it into orbit around Mars; and a third might then bring the sample back to Earth for detailed analysis. The report also calls for increased funding from NASA to develop the technology needed for such missions. Alan Stern, NASA's associate administrator for science, is hoping to speed up the development of such a programme by seeing if rover missions already on the drawing board could be altered to perform the first task of caching samples. 
He has asked NASA's Mars Science Laboratory (MSL) team to study the feasibility of adding such a capability to the mega-rover, which is due to launch in 2009 to study whether the planet could ever have sustained microbial life. He has also started preliminary talks with the European Space Agency about providing a caching system for ExoMars, a rover mission scheduled for launch in 2013, though he told New Scientist that so far ESA has not expressed an opinion on the matter. At an NRC colloquium on astrobiology and Mars exploration in Pasadena, California, US, on Sunday, many researchers applauded putting a sample return programme in the spotlight. But the idea of caching samples on MSL raised a few eyebrows. For one thing, the rover has already passed its critical design review - a milestone after which significant mission design changes are unusual. "It comes very late in MSL's development," says Bruce Jakosky, a researcher at the University of Colorado in Boulder, US, and chair of the NRC committee that put together the astrobiology strategy report. "We should question whether the astrobiology science objectives can be addressed by the type of sample MSL can obtain. I'd like to see it discussed within the community." Kenneth Nealson of the University of Southern California agrees. He says some scientists are concerned that the powdery samples MSL will collect are the most likely to change with atmospheric exposure on the surface. "It would be more of a technology demonstration," he told New Scientist. "[Still], one sample is always better than zero." Another session on the exploration of the Martian subsurface came to the conclusion that the return of an MSL cache probably would not meet the requirements to help scientists directly address astrobiological questions. 
They said such a mission would be valuable for other areas of scientific interest, but that any discoveries of possible Martian life would most likely require a sample collected from the subsurface - probably between 1 and 10 metres down, where samples would be relatively untouched by oxidising agents at the surface (see Life may lie deep below Martian surface).
Have your say
Thu Dec 13 09:44:43 GMT 2007 by Exolunker
Surface blasted by UV or frozen. A cave search would be much more informative.
More Of The Same . . .
Sat May 31 19:28:10 BST 2008 by Sarah O'conner
When we first gazed at the moon through powerful telescopes, the moon appeared to be full of dirt and rocks. However, we were not convinced so we sent astronauts there. They brought back samples for scientists to analyze and it was confirmed that indeed, this was just dirt and rocks. However, we were not convinced so we went to the moon another dozen times or so just to really be sure it was real dirt and real rocks. When we first gazed at Mars through powerful telescopes it appeared to be a mysterious planet. Our early flybys of this planet revealed a terrain full of dirt and rocks. However, we were not convinced so we sent Rover there. The little robotic vehicle traversed the topography sending back pictures and data and guess what? - more dirt and rocks. However, we were not convinced so we sent Phoenix there.
It landed safely and started analyzing the soil and sending beautifully detailed images of a Martian landscape full of ... dirt and rocks. But wait, this is different. The Phoenix landed in the North Pole area in the hopes of discovering life. Its little sensors microscopically scrutinized the soil and made an amazing discovery. Mars is still full of dirt and rocks. But wait, this is different. The dirt has a pattern to it.
Zap a metal with light and the electrons on the surface ripple into waves - known as plasmons - which emit light of their own. The frequency of that light reflects the electronic nature of the surface and is highly sensitive to contamination. Kevin Tetz and colleagues in the Ultrafast and Nanoscale Optics Group at the University of California, San Diego, have designed a system to exploit that to test for any surface contamination on the surface of, well, anything. Their idea uses a thin layer of metal drilled with nanoscale holes, laid onto the surface being tested. When the perforated plate is zapped with laser light, the surface plasmons that form emit light with a frequency related to the materials touching the plate. A sensitive light detector is needed to measure the frequency of light given off. The team says devices using this approach can be small and portable, will work on very low power, and could detect everything from explosives to bacteria. All that needs to be done now is build a system able to decode the light signatures.
Have your say
Fri Sep 26 21:18:48 BST 2008 by Greg
This device would appear to be a surface scanner requiring direct contact with the surface to function. The sci-fi device in question does not require direct contact and can penetrate beneath the surface of an object. So while this device, if successfully developed, will have many applications, it is by no means comparable to a tricorder.
Fri Sep 26 21:33:14 BST 2008 by Max
Great post Greg! However, you missed a fantastic opportunity to offer a better example than the tricorder. I assume you have one. Otherwise that would make your post disappointingly pointless.
Mon Sep 29 16:14:34 BST 2008 by Robert Smith
There is almost zero water testing for the many endocrine-disrupting farm chemicals, birth-control-tainted sewage outflow, and BPA in Lake Erie, or any lake for that matter. What would be the cheapest way to test for them? We had prices on a mass spec job and it was very high for a year's study. Any ideas? prime3end atty yahoo dotty com I suspect an endocrine-disrupting synergy among these chemicals that is bad for the fish and critters, but bad for people too. Lake Erie is a water source for a score of million, but the Bush EPA (BEPA) doesn't want to study it.
Fri Sep 26 22:20:18 BST 2008 by Joe Sheehy
Not to mention very nerdtastic! By the way, I do believe they said "tricorder LIKE device".
Sure It Can Scan, But Can It Actually Tell You Anything Useful?
Fri Sep 26 22:49:06 BST 2008 by Sudeep
SPR based sensors have been around for a long time. While this might be a novel implementation that can have certain uses, I fail to see how it is a "Universal" scanner. Surface plasmon based sensing alone cannot tell you what exact material you are probing. You might detect frequency shifts indicating the presence of some contaminant/bio-molecule, but that is all the information you will get.
You can even quantify concentrations or percentage surface coverage, but you cannot actually determine exactly what is bound on the surface from surface plasmon sensing alone. If you have to detect bacteria or some bio-molecules, you will need to modify the surface chemistry with appropriate probe molecules to ensure that you actually know what it is that is binding on the surface and causing these frequency changes. There is nothing novel about this; the same technique has been used in SPR-based and optical sensors for many years now. So while this device might have some possible benefits (sensitivity, ease of use, etc.), it certainly is nothing like what it is hyped up to be in this article.
Fluids and Fault Lines by G. Golitsyn: Why large earthquakes are rather rare.
Carl Friedrich Gauss (part II) by S. Gindikin: More about this "prince of mathematicians."
The eye and the sky by V. Surdin: The art of seeing distant objects.
Physics Contest: Tunnel trouble by Larry D. Kirkpatrick and Arthur Eisenkraft: New twists on classic gravitation problems.
Kaleidoscope: Do you know atoms and their nuclei? by A. Leonovich: Getting down to basics.
Digit Demographics: Repartitioning the world by V. Arnold: Population and the powers of two.
At the Blackboard I: Fuel economy on the Moon by A. Stasenko: A practical problem of no use (at present).
At the Blackboard II: The Markov equation by M. Krein: Diophantus, Vieta, Pythagoras ... cameo appearances galore.
Informatics: Cantor Cheese by Don Piele: A new column by an old friend of Dr. Mu.
How Do You Figure?: Challenges in Physics and Math
Brainteasers: Just for the Fun of It!
Scientific crossword puzzle (check out this sample!).
Report on the 40th International Mathematical Olympiad.
Answers, Hints & Solutions
Copyright © 2000 National Science Teachers All rights reserved
Steve wants to be a biohacker. He has no access to a wet lab, but he has taken a few molecular biology classes and is up to date on the state of the art. He sends away for a kit and 4 to 8 weeks later receives it in the mail. It includes a biohacker device to be connected to his computer and a number of vials of specially engineered bacteria. After reading the instructions he carefully inserts a vial into the biohacker, loads up the software, and inserts a slide. His first creation is the same as most inexperienced biohackers': the test page. Steve has started on a path that has been paved by a technological revolution that, as I write this, has yet to occur. The development which will take biohacking out of the universities and military labs and into the garage - where all interesting technology must eventually lead - is in vivo DNA synthesis. That is, making any desired DNA sequence directly inside the cell. For the purposes of this discussion I'm talking about bacterial cells, as the vast majority of biohacking today is done using bacteria. But there's no reason why eukaryotic cells cannot be manipulated just as easily and, in fact, proteins that typically occur in eukaryotic cells may be just what is needed to make in vivo DNA synthesis a reality. I'm specifically thinking about the process of RNA splicing. The most common kind of splicing is cis-splicing, where the introns of a single transcript are spliced out, joining together the exons into contiguous mRNA, but there's another kind. Trans-splicing is the selective joining of two exons that are not within the same transcript. The transcripts are required to share a complementary sequence which causes the two transcripts to line up. The introns are removed as usual. So it seems that by selective expression of two genes containing appropriate complementary sequences and the required indicators to initiate trans-splicing, a level of mRNA synthesis can be achieved.
Imagine a library of "left" and "right" exons of five bases length. This is a total of 2 x 5 x 4 = 40 genes. If each of these genes is promoted by an addressable external signal, we can make 400 different mRNA sequences of 10 bases length on command. Reverse transcriptase can be used to convert the mRNA back into DNA. But it would appear we can do even more than this. Once trans-splicing has joined a left to a right exon, there's no reason why complementary sequences on the right exon cannot be used to initiate trans-splicing of another transcript. Perhaps we will require another library of five-base exons with appropriate complementary sequences - call them right-right-exons - but we definitely won't need any right-right-right-exons, as once we can make reasonably long sequences on command we can encode the appropriate complementary sequence. Now the only limitation is the trans-splicing mechanism - of which k-base-long splices have been reported. Unfortunately, creating a "bootstrap" bacterium for a biohacker device will require the use of a wet lab. I am not currently in the business of biology, and I'm not eligible to participate in the iGEM competition as I have been out of school for quite some time now. If you are interested in this idea and would like to try it out, I would love to hear your results. Similarly, if you are unimpressed by this idea, or think you have heard it before, drop me a line.
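The library arithmetic in the post can be sanity-checked with a few lines. The 40-gene total and the five-base exon length come from the text; the even 20/20 split between "left" and "right" exon genes is my assumption, chosen so that the quoted figure of 400 sequences works out:

```python
EXON_LENGTH = 5     # bases per exon (from the post)
TOTAL_GENES = 40    # "2 x 5 x 4 = 40 genes" (from the post)

# Assumed even split between "left" and "right" exon genes:
left = TOTAL_GENES // 2
right = TOTAL_GENES // 2

print(left * right)        # 400 addressable 10-base mRNA sequences
print(4 ** EXON_LENGTH)    # 1024: size of an exhaustive 5-base library, for scale
```

Note the contrast the sketch makes visible: the addressable scheme reaches 400 ten-base products from only 40 genes, while exhaustively covering every possible five-base exon would already take 1024 genes per side.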
PHP stands for PHP: Hypertext Preprocessor; the first P makes the acronym recursive. PHP was created by Rasmus Lerdorf in 1995, when the name stood for Personal Home Page; it was renamed to its current form with the release of version 3.0 in June 1998. PHP is now hugely popular, as the number of PHP openings on freelancing and other job sites shows. PHP is a server-side scripting language used to build dynamic web pages, similar to ASP and JSP. Plain HTML can serve pages too, but much more can be done with PHP: you can collect data from users, interact with databases, and so on. Even large websites can be created easily and in less time. PHP is designed to be used along with HTML. You can use PHP inside your existing HTML code, or put HTML tags inside your PHP code. PHP is not an isolated language; it adds functionality to an existing HTML page. Once the server processes a request, the output it sends back is a pure HTML document: if you right-click on a page and choose View Source, you will find no trace of the PHP code. Files containing PHP code must be saved with the .php extension. PHP is most often used with a MySQL database, but it can be programmed to interact with any other database.

Sample PHP Program

<html>
<head>
<title>My First PHP Page</title>
</head>
<body>
<?php
echo "Inside PHP";
?>
</body>
</html>

The file with this code is to be saved with the .php extension. If it is placed on a PHP-enabled server and loaded in the browser, it shows the output Inside PHP. The first thing to note in the sample above is that the PHP code is written within the <?php ... ?> tags and that each PHP statement ends with a semicolon. The "echo" keyword is used to output a string; it is not actually a function, but a language construct. Variables in PHP are declared with a $ sign at the beginning, and they are case sensitive. For example, to create a string variable welcome with a value, you would write: $welcome = "Hello Good Morning";
For defining constants, however, no dollar sign is needed. It can be done as follows: define("PI", 3.14);
The basic data types available in PHP are integers, doubles (floating-point numbers) and strings; PHP also supports booleans, arrays and objects. For commenting a single line in PHP there are two options: // and #. For multiple-line comments, start the comment with /* and end it with */.
The different operators available in PHP are:
- Assignment Operators (=)
- Arithmetic Operators (+, -, *, /, %)
- Comparison Operators (==, !=, <, >, <=, >=)
- String Operators (.)
- Combined Arithmetic and Assignment Operators (+=, -=, *=, /=, %=, .=)
The different statements available in PHP are:
- If and if/else if statements
- Switch/case statement
- For, while and do while statements
- Include and require statements
All the above operators and statements are familiar to any programmer with basic programming knowledge, so they are not explained in detail in this article. Include and require statements are nearly identical, and they are used to avoid code repetition. Suppose the same code is to be repeated in many pages; that code can be kept in a single file, and whenever it is required you can use an include statement with that particular file name. For example, if the repeating code is kept in a file named common.php, whenever you want to use that code you can write <?php include("common.php"); ?> or <?php require("common.php"); ?>. You can thus avoid repetition of code. If the named file cannot be loaded, an include statement generates a warning and execution continues, while a require statement generates a fatal error and stops execution of the program. The difference is only in the way in which the failure is handled.
The different built-in functions available are:
- Array manipulation functions (merge, sort, count, etc.)
- Date and time functions (getdate, localtime, date, etc.)
- File system functions (for dealing with flat files)
- Error handling functions
- Directory functions
- Mail functions (for dealing with e-mails)
- Database functions (for dealing with databases)
PHP can also be used to handle many complex software requirements like cookie handling, dynamic image handling, client-server applications, encryption techniques, artificial intelligence and user interactions.
Advantages of PHP:
- Reduced development time: PHP helps to develop web applications rapidly and efficiently.
- Speed: Compared with other scripting languages, PHP shows better performance in terms of speed. Even during complex processes like database interaction, it takes considerably less time, so it is used in applications that need high performance, e.g. server administration and mail functionality.
- Platform independent: PHP supports many platforms, such as Windows and Linux, whereas many scripting languages are designed for a particular platform.
- Easy syntax: Anybody with basic programming knowledge can easily learn the syntax of PHP; it is very similar to C syntax.
- Database connectivity: PHP can be connected to a number of different databases.
- Open source: Users are given a free license to reuse PHP, making it an open-source language.
- PHP supports both structural and object-oriented programming concepts.
- Easy deployment.
Disadvantages of PHP:
- Error handling: The main disadvantage of PHP is in the area of error handling. Compared to other scripting languages, PHP has rather poor error-handling methods.
- Security flaws: As PHP is a web programming language, it is more exposed to vulnerabilities.
- Performance: PHP does not match the performance of C or C++.
Considering the advantages and disadvantages of PHP, as well as the requirements of our web application, it is up to us to make a wise decision on the choice of scripting language.
Ever since brown tree snakes were inadvertently brought into Guam from the Solomon Islands after the Second World War, they've been going about their natural business of targeting bats, birds, lizards, and small mammals -- in the course of which they've (also inadvertently) wiped out or significantly reduced a number of the Pacific island's native species. Over the intervening decades, scientists have developed a range of countermeasures, including the use of traps, snake-detecting dogs, and spotlight searches along airport and seaport fences. With mixed success, these strategies have focused on preventing the snakes from getting onto planes or ships headed to other islands, like Hawaii, where it's expected they would do even more damage. But the new plan is to get proactive and go after the snakes where they live -- in the jungle -- by means of a lethal, over-the-counter drug: In the U.S. government-funded project, tablets of concentrated acetaminophen, the active ingredient in Tylenol, are placed in dead thumb-size mice, which are then used as bait for brown tree snakes. In humans, acetaminophen helps soothe aches, pains, and fevers. But when ingested by brown tree snakes, the drug disrupts the oxygen-carrying ability of the snakes' hemoglobin blood proteins. "They go into a coma, and then death," said Peter Savarie, a researcher with the U.S. Department of Agriculture (USDA) Wildlife Services, which has been developing the technique since 1995 through grants from the U.S. Departments of Defense and Interior. Only about 80 milligrams of acetaminophen--equal to a child's dose of Tylenol--are needed to kill an adult brown tree snake.
Once ingested via a dead mouse, it typically takes about 60 hours for the drug to kill a snake. "There are very few snakes that will consume something that they haven't killed themselves," added Dan Vice, assistant state director of USDA Wildlife Services in Hawaii, Guam, and the Pacific Islands. But brown tree snakes will scavenge as well as hunt, he said, and that's the "chink in the brown tree snake's armor." For more on how to increase the odds that airdropped mice get caught in the high tree branches where brown snakes live, why most other species that might be lured by the mice are already gone, and the hows and whys of radio-tagging bait, read the full story at National Geographic.
Narrator: This is Science Today. Cosmologists at the University of California, Irvine's Center for Cosmology are trying to tackle some of the oldest and longstanding questions in science ... and humanity. Bullock: Questions like how old is the universe? How big is the universe? What is it made of? These are really the questions we're trying to answer. Narrator: James Bullock, who directs the center, is an associate professor of physics and astronomy who studies galaxy formation. Bullock: One of the things we do that I think is fairly unique, is we unite a group of people who study a vast array of things, but with one common goal. We have astronomers to study the properties of the large-scale universe and we have particle physicists to try to understand what the fundamental constituents of nature are. And what's unique here is that these two questions are actually united under common goals. In order to really understand the evolution of the universe, we need to understand how it's made up on its smallest scales and only by uniting these people in a common cause can we actually make progress and I think we're doing that. Narrator: For Science Today, I'm Larissa Branin.
My friend Brightblades is right about one thing: it seems your teacher was working off a caricature of what the theory of evolution actually says. First of all, you should read Sklivvz's excellent answer at this question. Now to address the elephant in the room: the accident at Chernobyl only happened in 1986. That was only 26 years ago. In that timeframe, evolutionary effects in an animal population really would not be noticeable at all. Furthermore, the paper cited by Marta Cz-C actually shows that there have been some changes (in fungi though, not animals):
Fungi seem to interact with the ionizing radiation differently from other Earth's inhabitants. Recent data show that melanized fungal species like those from Chernobyl's reactor respond to ionizing radiation with enhanced growth. Fungi colonize space stations and adapt morphologically to extreme conditions. Radiation exposure causes upregulation of many key genes, and an inducible microhomology-mediated recombination pathway could be a potential mechanism of adaptive evolution in eukaryotes.
Read the rest of the paper for more information on how there have been some other slight changes to fungi at Chernobyl, as well as at other locations throughout the world. Now I am going to repeat a bunch of material from one of my web pages that talks about evolution. That web page is set up mostly to deal with creationist arguments; however, the caricature here is so severe as to warrant this. As I said earlier, evolution is a population phenomenon. Evolution acts upon heritable variation of characteristics, and you can only have variation of this sort within a population. A single individual organism, at least if it's a multicellular eukaryote, has a fixed genome. It can't change what it has inherited. But a large number of organisms can all have different genomes, and can disseminate variation via inheritance to the next generation.
It is upon the population as a whole that evolution acts, with various mechanisms coming into play to remove some variations from the population, and propel other variations to numerical dominance within the population. The organisms in question remain part of that population, and within a generation, those organisms don't change. But the moment a new generation is produced, dissemination of variation can result in the appearance of a new feature in one or more members of that population. If that new feature leads to greater reproductive success for the organism possessing it, that feature spreads through the population, as more and more future offspring inherit it. Over time, the population changes, and more and more organisms with new features appear within that population. Once we understand the basics of inheritance and the mechanisms for change (genes), we have all that is needed for the appearance of cladogenesis events. Split a decent-sized population of living organisms into two, and let's call these new, separate populations A and B. Now let a barrier be erected between population A and population B, so that individuals from one cannot reproduce with individuals from the other. This barrier can be an insurmountable physical obstacle, for example, but this need not be the only form such a barrier can take. Now, first of all, there is no reason whatsoever to think that population A and population B will start off in identical states to begin with. After all, those two populations were derived from an original population comprising lots of organisms with different genomes, and the likelihood of population A and population B being identical at the start of this process is vanishingly small. Then, once our barrier is erected, and our populations are allowed to reproduce separately from that point on, there is no reason to think that those populations will move in the same direction in the long term.
Indeed, it is far more likely that they will be subject to different environmental and ecosystem influences, and those different influences will shape the long-term heredity of those populations. Indeed, that's all that natural selection IS - it's a single, concise term used to encapsulate all of those environmental and ecosystem influences succinctly, and additionally to encapsulate the fact that those influences affect the inheritance of characteristics within a population over the long term. As a consequence, any two separated populations of living organisms that originated from a single population will diverge from each other. If the extant influences on those two populations are sufficiently different, that divergence will take place more rapidly. Eventually, we will arrive at a point where those two populations become sufficiently diverged from each other that individuals from population A can no longer produce viable offspring with individuals from population B, and vice versa. When this happens, we have a speciation event. Indeed, this has been observed taking place in the wild AND in the laboratory, and has been documented in the relevant scientific papers. So, if anyone wishes to assert that there are 'magic barriers' to speciation or other cladogenesis events, then reality doesn't agree. You will not have any animals giving birth to any radically different animals as a result of radiation. Most changes due to radiation will not provide any particular advantage to an animal anyway. Also, a change may be dependent on a previous change and require many generations to fully manifest. All this was demonstrated by the long-term evolution experiment led by Richard Lenski at Michigan State University. So in essence, your teacher was just plain wrong. As for the findings of the past 20 years, we are always learning more and more.
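The population dynamics described above (a variant with a small reproductive advantage spreading through a fixed-size population, with random drift layered on top) can be sketched in a few lines of Python. This is a toy Wright-Fisher-style simulation, not a model of any real population; the population size, fitness advantage, and starting frequency are arbitrary illustrative values I have chosen.

```python
import random

def simulate_selection(pop_size=1000, p0=0.1, fitness_advantage=0.05,
                       generations=200, seed=42):
    """Wright-Fisher style toy model: track the frequency of a beneficial
    variant in a fixed-size population across generations."""
    random.seed(seed)
    p = p0
    history = [p]
    for _ in range(generations):
        # Selection: carriers reproduce slightly more often on average.
        w_variant = 1.0 + fitness_advantage   # fitness of the new variant
        w_original = 1.0                       # fitness of the original
        p_sel = (p * w_variant) / (p * w_variant + (1 - p) * w_original)
        # Drift: the next generation is a random sample of the gene pool.
        carriers = sum(1 for _ in range(pop_size) if random.random() < p_sel)
        p = carriers / pop_size
        history.append(p)
    return history

hist = simulate_selection()
print(f"starting frequency: {hist[0]:.3f}, after 200 generations: {hist[-1]:.3f}")
```

Run it a few times with different seeds: when the variant starts rare, drift can occasionally eliminate it despite its advantage, which is exactly why evolution is a statement about populations over generations rather than about individual organisms.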
For instance, there has been explosive growth in DNA analysis and sequencing capability, which has only provided more support for the theory of evolution: a line of evidence Charles Darwin could not have had any idea about in his day and age, yet one that perfectly supports his conclusions. Again, read the answer provided at Skeptics.
Mountain Climate Simulator
Entry ID: MTCLIM
Abstract: MT-CLIM is a computer program that uses observations of daily maximum temperature, minimum temperature, and precipitation from one location (the "base") to estimate the temperature, precipitation, radiation, and humidity at another location (the "site"). The base and the site can be at different elevations, and can have different slopes and aspects. Better results are obtained when the base and site are relatively close to one another (at the bottom and the top of a valley, for example). Temperature estimates at the site are based on the base temperatures and a user-supplied temperature lapse rate. Separate lapse rates can be supplied for daily maximum and minimum temperature. Precipitation estimates at the site are based on the daily record of precipitation from the base, and a user-specified ratio of annual total precipitation between the site and the base. The estimation of radiation and humidity is more complex, since these parameters are not assumed to be measured at the base station. Humidity estimates are based on the observation that daily minimum temperature is usually very close to dewpoint temperature. The MT-CLIM algorithm includes corrections to this assumption for arid climates. Radiation estimates are based on the observation that the diurnal temperature range (from minimum temperature to maximum temperature) is closely related to the daily average atmospheric transmittance. In conjunction with information about the latitude, elevation, slope, and aspect of the site, this relationship can be used to estimate daily total radiation with a typical error range of +/- 15%. Some of the limitations of MT-CLIM are the use of a single base station for observations and the need for the user to specify the temperature and precipitation relationships with elevation. We have developed an expanded version of the MT-CLIM logic, called Daymet, that uses observations from a large number of base stations.
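Two of the core estimation ideas in the abstract (temperature extrapolation via a user-supplied lapse rate, and humidity from the assumption that daily minimum temperature approximates dewpoint) can be sketched roughly as follows. This is an illustrative reconstruction, not the actual MT-CLIM source code; the function names, the default lapse rate of -6.5 °C/km, and the use of the Magnus saturation-vapor-pressure approximation are my own assumptions.

```python
import math

def site_temperature(base_temp_c, base_elev_m, site_elev_m,
                     lapse_rate_c_per_km=-6.5):
    """Extrapolate temperature from the base to the site with a linear
    lapse rate (assumed default: -6.5 C per km of elevation gain)."""
    return base_temp_c + lapse_rate_c_per_km * (site_elev_m - base_elev_m) / 1000.0

def saturation_vapor_pressure(temp_c):
    """Magnus approximation for saturation vapor pressure, in kPa."""
    return 0.6108 * math.exp(17.27 * temp_c / (temp_c + 237.3))

def estimate_humidity(tmin_c, tmax_c):
    """MT-CLIM-style humidity estimate: assume dewpoint ~= daily Tmin,
    so actual vapor pressure is the saturation value at Tmin; relative
    humidity at the daily mean temperature then follows."""
    e_actual = saturation_vapor_pressure(tmin_c)
    e_sat_mean = saturation_vapor_pressure((tmin_c + tmax_c) / 2.0)
    return min(100.0, 100.0 * e_actual / e_sat_mean)

# Example: base station at 500 m, site at 2000 m.
t_site = site_temperature(20.0, 500.0, 2000.0)
rh = estimate_humidity(tmin_c=5.0, tmax_c=20.0)
print(f"site temperature ~ {t_site:.2f} C, estimated RH ~ {rh:.0f}%")
```

Note that this sketch omits the arid-climate dewpoint correction and the diurnal-range radiation model that the abstract also describes; those involve empirically fitted coefficients.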
(Summary adapted from:
Bristow, K.L., and G.S. Campbell, 1984. On the relationship between incoming solar radiation and daily maximum and minimum temperature. Agricultural and Forest Meteorology, 31:159-166.
Running, S.W., R.R. Nemani, and R.D. Hungerford, 1987. Extrapolation of synoptic meteorological data in mountainous terrain and its use for simulating forest evaporation and photosynthesis. Canadian Journal of Forest Research.
Glassy, J.M., and S.W. Running, 1994. Validating diurnal climatology of the MT-CLIM model across a climatic gradient in Oregon. Ecological Applications, 4(2):248-257.
Kimball, J.S., S.W. Running, and R. Nemani, 1997. An improved method for estimating surface humidity from daily minimum temperature. Agricultural and Forest Meteorology, 85:87-98.
Thornton, P.E., and S.W. Running, 1999. An improved algorithm for estimating incident daily solar radiation from measurements of temperature, humidity, and precipitation. Agricultural and Forest Meteorology, 93:211-228.)
A self-perpetuating bamboo disturbance cycle in a neotropical forest
We investigate a hypothesis for explaining maintenance of forest canopy dominance: bamboo (Guadua weberbaueri and Guadua sarcocarpa) loads and crushes trees, resulting in a self-perpetuating disturbance cycle. Forest inventory data revealed a peculiar pattern of tree form and size class distribution in bamboo-dominated plots within the Tambopata River watershed, Madre de Dios, Peru. Bamboo disproportionately loaded trees 5–29 cm in diameter, and this size class had over seven times more canopy damage than trees in control plots without bamboo. These differences were accompanied by reduced tree basal area and tree density in the 5–29-cm-diameter size class in the presence of bamboo. Elevated tree canopy damage was not apparent for trees ≥30 cm dbh, which are beyond the reach of bamboo. Additional evidence for the impact of bamboo was revealed by an experiment using artificial metal trees. Artificial trees in bamboo-dominated forest plots had nine times higher frequency of physical damage and nine times more plant mass loading as compared with control plots. Our results support the hypothesis that bamboo loading causes elevated physical damage to trees and suppresses tree recruitment, particularly for trees 5–29 cm in diameter. (Published Online July 27 2006) (Accepted March 20 2006)
Key Words: clonal growth; community ecology; competition; disturbance; Guadua; Peru; succession.
Corresponding author. Mailing address: Rt. 1 Box 543, Rowlesburg, WV 26425, USA. Email: firstname.lastname@example.org
The fast-track technology, called marker-assisted selection (MAS), or molecular breeding, takes advantage of rapid improvements in genetic sequencing, but avoids all the regulatory and political baggage of genetic engineering. Bill Freese, a science policy analyst with the Center for Food Safety, a nonprofit advocacy group, calls it “a perfectly acceptable tool. I don’t see any food safety issue. It can be a very useful technique if it’s used by breeders who are working in the public interest.” Molecular breeding isn’t genetic engineering, a technology that has long alarmed critics on two counts. Its methods seem outlandish – taking genes from spiders and putting them in goats, or borrowing insect resistance from soil bacteria and transferring it into corn – and it has also seemed to benefit a handful of agribusiness giants armed with patents, at the expense of public interest. By contrast, molecular breeding is merely a much faster and more efficient way of doing what nature and farmers have always done, by natural selection and artificial selection respectively: It takes existing genes that happen to be advantageous in a given situation and increases their frequency in a population. In the past, farmers and breeders did it by walking around their fields and looking at individual plants or animals that seemed to have desirable traits, like greater productivity, or resistance to a particular disease. Then they went to work cross-breeding to see if they could tease out that trait and get it to appear reliably in subsequent generations. It could take decades, and success at breeding in one trait often meant bringing along some deleterious fellow traveler, or inadvertently breeding out some other essential trait.
As for the normal distribution, you can characterize it as the unique distribution with the following properties: Let $X_1, X_2, \cdots X_n$ be independent identically distributed normal random variables. Then the joint distribution of the vector $X=(X_1, X_2, \cdots X_n)$ is the same as that of $AX$ where $A$ is any orthogonal matrix. So the normal distribution is intimately related to the geometry of real inner product spaces. The $\pi$ comes from the fact that you can integrate such a distribution by first integrating over a sphere and then integrating over $[0,\infty)$. Because the distribution is orthogonally invariant, you pick up a constant corresponding to the area of the sphere. For $n=2$ you get the circle, and this is the usual calculation for computing the normalization constant for the normal distribution. So then the mystery becomes: given that the normal distribution is so closely tied to inner product spaces, why does it show up all the time? The central limit theorem tells us that all that really matters in large scale limits are the first and second moments. The first moment can always be eliminated by re-centering. So all that matters is the second moment. But the second moment comes from the covariance, which is an inner product! (technically, only once you restrict to re-centered random variables, but we are doing that) I'd venture a guess that most, if not all, appearances of $\pi$ in statistics boil down to this fact that covariance is an inner product, and the fact that spheres, which are the norm-level sets for inner product spaces, have areas related to $\pi$.
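To make the $n=2$ calculation explicit, here is the polar-coordinates computation behind the normalization constant:

$$\left(\int_{-\infty}^{\infty} e^{-x^2/2}\,dx\right)^{2} = \int_{\mathbb{R}^2} e^{-(x^2+y^2)/2}\,dx\,dy = \int_{0}^{2\pi}\!\int_{0}^{\infty} e^{-r^2/2}\,r\,dr\,d\theta = 2\pi\left[-e^{-r^2/2}\right]_{0}^{\infty} = 2\pi,$$

so $\int_{-\infty}^{\infty} e^{-x^2/2}\,dx = \sqrt{2\pi}$, which is why the standard normal density is $\frac{1}{\sqrt{2\pi}}e^{-x^2/2}$. The factor of $2\pi$ is exactly the "area of the sphere" contribution described above: the circumference of the unit circle, the norm-level set of the $2$-dimensional inner product space.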
48 is called an abundant number because it is less than the sum of its factors (without itself). Can you find some more abundant numbers?
If the answer's 2010, what could the question be?
There is a clock-face where the numbers have become all mixed up. Can you find out where all the numbers have got to from these ten statements?
Ben's class were cutting up number tracks. First they cut them into twos and added up the numbers on each piece. What patterns could they see?
What happens when you add the digits of a number then multiply the result by 2 and you keep doing this? You could try for different numbers and different rules.
If the numbers 5, 7 and 4 go into this function machine, what numbers will come out?
These sixteen children are standing in four lines of four, one behind the other. They are each holding a card with a number on it. Can you work out the missing numbers?
On the planet Vuv there are two sorts of creatures. The Zios have 3 legs and the Zepts have 7 legs. The great planetary explorer Nico counted 52 legs. How many Zios and how many Zepts were there?
Look on the back of any modern book and you will find an ISBN code. Take this code and calculate this sum in the way shown. Can you see what the answers always have in common?
Using the statements, can you work out how many of each type of rabbit there are in these pens?
There are 44 people coming to a dinner party. There are 15 square tables that seat 4 people. Find a way to seat the 44 people using all 15 tables, with no empty places.
56 406 is the product of two consecutive numbers. What are these numbers?
Can you arrange 5 different digits (from 0 - 9) in the cross?
This problem is based on the story of the Pied Piper of Hamelin. Investigate the different numbers of people and rats there could have been if you know how many legs there are altogether!
EWWNP means Exploring Wild and Wonderful Number Patterns Created by Yourself! Investigate what happens if we create number patterns using some simple rules.
Can you work out the arrangement of the digits in the square so that the given products are correct? The numbers 1 - 9 may be used once and once only.
What is the lowest number which always leaves a remainder of 1 when divided by each of the numbers from 2 to 10?
Can you fill in this table square? The numbers 2 - 12 were used to generate it with just one number used twice.
Can you design a new shape for the twenty-eight squares and arrange the numbers in a logical way? What patterns do you notice?
Cherri, Saxon, Mel and Paul are friends. They are all different ages. Can you find out the age of each friend using the statements?
In a Magic Square all the rows, columns and diagonals add to the 'Magic Constant'. How would you change the magic constant of this square?
Suppose we allow ourselves to use three numbers less than 10 and multiply them together. How many different products can you find? How do you know you've got them all?
Well now, what would happen if we lost all the nines in our number system? Have a go at writing the numbers out in this way and have a look at the multiplications table.
Explore Alex's number plumber. What questions would you like to ask? What do you think is happening to the numbers?
Zumf makes spectacles for the residents of the planet Zargon, who have either 3 eyes or 4 eyes. How many lenses will Zumf need to make all the different orders for 9 families?
This magic square has operations written in it, to make it into a maze. Start wherever you like, go through every cell and go out a total of 15!
Find the next number in this pattern: 3, 7, 19, 55 ...
Use 4 four times with simple operations so that you get the answer 12. Can you make 15, 16 and 17 too?
What is happening at each box in these machines?
Find out what a Deca Tree is and then work out how many leaves there will be after the woodcutter has cut off a trunk, a branch, a twig and a leaf.
Find the product of the numbers on the routes from A to B. Which route has the smallest product? Which the largest?
Where can you draw a line on a clock face so that the numbers on both sides have the same total?
Use the information to work out how many gifts there are in each pile.
On my calculator I divided one whole number by another whole number and got the answer 3.125. If the numbers are both under 50, what are they?
This big box multiplies anything that goes inside it by the same number. If you know the numbers that come out, what multiplication might be going on in the box?
Put operations signs between the numbers 3 4 5 6 to make the highest possible number and lowest possible number.
Work out Tom's number from the answers he gives his friend. He will only answer 'yes' or 'no'.
A game for 2 people using a pack of cards. Turn over 2 cards and try to make an odd number or a multiple of 3.
Don't forget to keep visiting the NRICH projects site for the latest developments and questions.
This group activity will encourage you to share calculation strategies and to think about which strategy might be the most efficient.
How would you count the number of fingers in these pictures?
What do you notice about the date 03.06.09? Or 08.01.09? This challenge invites you to investigate some interesting dates.
Can you find which shapes you need to put into the grid to make the totals at the end of each row and the bottom of each column?
Tom and Ben visited Numberland. Use the maps to work out the number of points each of their routes scores.
Can you work out what a ziffle is on the planet Zargon?
Which is quicker, counting up to 30 in ones or counting up to 300 in tens? Why?
Skippy and Anna are locked in a room in a large castle. The key to that room, and all the other rooms, is a number. The numbers are locked away in a problem. Can you help them to get out?
Amy has a box containing domino pieces but she does not think it is a complete set. She has 24 dominoes in her box and there are 125 spots on them altogether. Which of her domino pieces are missing?
This number has 903 digits. What is the sum of all 903 digits?
Here are the prices for 1st and 2nd class mail within the UK. You have an unlimited number of each of these stamps. Which stamps would you need to post a parcel weighing 825g?
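The abundant-number puzzle at the top of this list is easy to explore with a short script; here is one possible sketch in Python (the function names are my own):

```python
def proper_divisor_sum(n):
    """Sum of the proper divisors of n (its factors, excluding n itself)."""
    return sum(d for d in range(1, n // 2 + 1) if n % d == 0)

def is_abundant(n):
    """A number is abundant when it is less than the sum of its factors."""
    return proper_divisor_sum(n) > n

# 48 is abundant: 1+2+3+4+6+8+12+16+24 = 76 > 48.
abundant_up_to_50 = [n for n in range(1, 51) if is_abundant(n)]
print(abundant_up_to_50)
```

This confirms that 48 is abundant and turns up several more abundant numbers below 50, all of them even; whether an odd abundant number exists makes a nice follow-up question.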