There's a lot of jargon thrown around when you talk about Test Driven Development. Here's a list of common terms used in TDD (and specifically in Test::Unit) that you should know.
Test Method - A method that tests a specific feature. Each method whose name starts with test is a test method. Test methods manipulate the objects to be tested and run a number of assertions on them, which determine whether the test passes or fails. More often referred to as simply a "test."
Assertion - A method that passes or fails depending on whether certain conditions are met. The most basic assertion is the familiar assert_equal. If the two objects passed to assert_equal are not equal, the assertion fails, which causes the test to fail. No further assertions are run in a test once one assertion has failed; all assertions in a test must pass in order for the test to pass.
Test Fixture - The set of objects created before each test and destroyed after it. These objects are created using the setup method and destroyed using the teardown method. The setup and teardown methods are called around each test method, so test methods don't need to worry about any changes they've made to the fixture objects.
Test Case - A collection of test methods and an optional test fixture. Test cases are classes that have inherited from Test::Unit::TestCase. Any methods in them that begin with test_ are considered to be test methods, and will run when the test case is run.
Test Suite - A collection of test cases. A test suite gathers the test cases, runs the test methods, then gathers and formats the results and displays them to the user. If you're simply running test cases, you won't ever have to see test suites. However, gathering your test cases into test suites can help give you a better UI and provides more opportunity to run tests often.
Run - A test run is the running of a test case or test suite. Used as both a noun and verb, you could say "run the tests" or "there were no more errors when I gave it another run."
Test Result - The aggregated results of all the tests. Depending on your user interface, this can look very different. Tests may be displayed as only a single dot when they pass, or all of them could be listed alongside their pass or fail status. It's recommended that you use a user interface with color capability so passes and failures really stand out.
Red and green - Failing tests are red, passing tests are green. You can say a test "is red" or has "gone red." Put simply, it's your job to make all the tests "go green." Many user interfaces will display tests in red and green to help you isolate failing tests quickly. Other interfaces (such as HTML interfaces) may use other graphics, backgrounds, and methods to highlight tests beyond simply red and green colors.
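The vocabulary above is not specific to Ruby's Test::Unit; any xUnit-style framework has the same parts. Here is a minimal sketch using Python's standard unittest module, which follows the same pattern (the class, method, and object names are invented for illustration):

```python
import unittest

class TestShoppingCart(unittest.TestCase):
    """A test case: a collection of test methods plus an optional fixture."""

    def setUp(self):
        # Fixture setup: runs before *each* test method.
        self.cart = []

    def tearDown(self):
        # Fixture teardown: runs after each test, keeping tests independent.
        self.cart.clear()

    def test_starts_empty(self):
        # A test method with a single assertion.
        self.assertEqual(len(self.cart), 0)

    def test_add_item(self):
        self.cart.append("apple")
        # If this assertion failed, the one after it would never run.
        self.assertEqual(self.cart, ["apple"])
        self.assertIn("apple", self.cart)

# A test suite gathers the test methods; a runner collects the test result.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestShoppingCart)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.testsRun, "run,", len(result.failures), "failures")
```

Running this exercises every term in the glossary: two test methods run against a fresh fixture each time, and the runner aggregates their assertions into a single test result.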
Science Fair Project Encyclopedia
A microscope (Greek: micron = small and scopos = aim) is an instrument for viewing objects that are too small to be seen by the naked or unaided eye. The science of investigating small objects using such an instrument is called microscopy, and the term microscopic means minute or very small, not easily visible with the unaided eye; in other words, requiring a microscope to examine.
The most common type of microscope—and the first to be invented—is the optical microscope. This is an optical instrument containing one or more lenses that produce an enlarged image of an object placed in the focal plane of the lens(es).
See also: Microscopy.
Simple optical microscope
A simple microscope, as opposed to a standard compound microscope (see below) with multiple lenses, is a microscope that uses only one lens for magnification. Van Leeuwenhoek's microscopes consisted of a single, small, convex lens mounted on a plate with a mechanism to hold the material to be examined (the sample or specimen). This use of a single, convex lens to magnify objects for viewing is still found in the magnifying glass, the hand-lens, and the loupe.
Compound optical microscope
The diagrams below show compound microscopes. In its simplest form—as used by Robert Hooke, for example—the compound microscope would have a single glass lens of short focal length for the objective, and another single glass lens for the eyepiece or ocular. Modern microscopes of this kind are usually more complex, with multiple lens components in both objective and eyepiece assemblies. These multi-component lenses are designed to reduce aberrations, particularly chromatic aberration and spherical aberration. In modern microscopes the mirror is replaced by a lamp unit providing stable, controllable illumination.
Compound optical microscopes can magnify an image up to 1000× and are used to study thin specimens as they have a very limited depth of field. Typically they are used to examine a smear, a squash preparation, or a thinly sectioned slice of some material. With a few exceptions, they utilize light passing through the sample from below and special techniques are usually necessary to increase the contrast in the image to useful levels (see contrast methods). Typically, on a standard compound optical microscope, there are three objective lenses: a scanning lens (4×), low power lens (10×), and high power lens (40×). Advanced microscopes often have a fourth objective lens, called an oil immersion lens. To use this lens, a drop of oil is placed on top of the cover slip, and the lens moved into place where it is immersed in the oil. An oil immersion lens usually has a power of 100×. The actual power of magnification is the product of the powers of the ocular (usually 10×) and the objective lenses being used.
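That multiplication rule is easy to check. A quick sketch using the objective powers quoted above, with the usual 10× ocular:

```python
# Total magnification = ocular power x objective power.
OCULAR = 10
objectives = {"scanning": 4, "low power": 10, "high power": 40, "oil immersion": 100}

for name, power in objectives.items():
    print(f"{name}: {OCULAR * power}x total")
```

The oil-immersion combination gives the 1000× maximum mentioned at the start of the paragraph.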
To study the thin structure of metals (see metallography) and minerals, another type of microscope is used, where the light is reflected from the examined surface. The light is fed through the same objective using a semi-transparent mirror.
The stereo, binocular or dissecting microscope is designed differently from the diagrams above, and serves a different purpose. It uses two eyepieces (or sometimes two complete microscopes) to provide slightly different viewing angles to the left and right eyes. In this way it produces a three-dimensional (3-D) visualisation of the sample being examined.
The stereo microscope is often used to study the surfaces of solid specimens or to carry out close work such as sorting, dissection, microsurgery, watch-making, small circuit board manufacture or inspection, and the like. Great working distance and depth of field are important qualities for this type of microscope. Both qualities are inversely correlated with resolution: the higher the resolution (i.e., magnification), the smaller the depth of field and working distance. A stereo microscope has a useful magnification of up to 100×. The resolution is at best on the order of that of an average 10× objective in a compound microscope, and often considerably lower.
Other types of optical microscope include:
- the inverted microscope for studying samples from below; useful for cell cultures in liquid;
- the student microscope designed for low cost, durability, and ease of use; and
- the research microscope which is an expensive tool with many enhancements.
A lens magnifies by bending light (see refraction). Optical microscopes are restricted in their ability to resolve features by a phenomenon called diffraction which, based on the numerical aperture (NA or AN) of the optical system and the wavelength of light used (λ), sets a definite limit (d) to the optical resolution. Assuming that optical aberrations are negligible, the resolution is given by: d = λ / (2·NA)
Due to diffraction, even the best optical microscope is limited to a resolution of 0.2 micrometres.
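Plugging illustrative numbers into the diffraction-limit formula shows where the 0.2 micrometre figure comes from. The wavelength and numerical aperture below are typical values, not ones given in the article:

```python
def abbe_limit_nm(wavelength_nm, numerical_aperture):
    """Diffraction-limited resolution d = wavelength / (2 * NA)."""
    return wavelength_nm / (2 * numerical_aperture)

# Green light (~550 nm) through a high-NA oil-immersion objective (NA ~ 1.4):
d = abbe_limit_nm(550, 1.4)
print(f"{d:.0f} nm")  # about 196 nm, i.e. roughly 0.2 micrometres
```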
History of the microscope
It is impossible to say who invented the compound microscope. The Dutch spectacle-makers Hans Janssen and his son Zacharias Janssen are often said to have invented the first compound microscope in 1590, but this was a declaration by Zacharias Janssen himself midway through the 17th century. The date is certainly unlikely, as it has been shown that Zacharias Janssen was only just born around 1590. Another favorite for the title of 'inventor of the microscope' was Galileo Galilei, who developed an occhiolino, or compound microscope, with a convex and a concave lens in 1609. Christiaan Huygens, another Dutchman, developed a simple two-lens ocular system in the late 1600s that was achromatically corrected and therefore a huge step forward in microscope development. The Huygens ocular is still being produced to this day, but it suffers from a small field size, and its eye relief is uncomfortably close compared to modern widefield oculars.
Anton van Leeuwenhoek (1632-1723) is generally credited with bringing the microscope to the attention of biologists, even though simple magnifying lenses were already being produced in the 1500s, and the magnifying principle of water-filled glass bowls had been described by the Romans (Seneca). Van Leeuwenhoek's home-made microscopes were actually very small, simple instruments with a single very strong lens. They were awkward to use, but they enabled van Leeuwenhoek to see highly detailed images, mainly because a single lens does not suffer the lens faults that are doubled or even multiplied when several lenses are used in combination, as in a compound microscope. It actually took about 150 years of optical development before the compound microscope could provide the same quality of image as van Leeuwenhoek's simple microscopes. So although he was certainly a great microscopist, van Leeuwenhoek is, contrary to widespread claims, certainly not the inventor of the microscope.
Other types of microscopes
See also microscopy
- Atom probe
- Atomic force microscope
- Electron microscope
- Field ion microscope
- Field emission microscope
- Phase contrast microscope, see Frits Zernike
- Scanning tunneling microscope
- Virtual microscope
- X-ray microscope
- Total internal reflection fluorescence microscope
- Confocal laser scanning microscopy
- Angular resolution
- How to prepare an onion cell slide
- Microscope image processing
- Microscope slide
- Microscopy laboratory in: A Study Guide to the Science of Botany at Wikibooks
- Micscape - a monthly magazine directed towards the amateur microscopist
- Microscope Directory
- Royal Microscopical Society
- The Microscope - quarterly journal
- virtual microscope on plankton
- A virtual polarization microscope (requires Java)
The contents of this article is licensed from www.wikipedia.org under the GNU Free Documentation License. Click here to see the transparent copy and copyright details
The Reynolds number is the most important dimensionless number in fluid dynamics and provides a criterion for determining dynamic similarity. Where two similar objects, in perhaps different fluids and with possibly different flow rates, have similar fluid flow around them, they are said to be dynamically similar. The Reynolds number is defined as Re = ρ vs L / μ = vs L / ν, where:
- vs - mean fluid velocity,
- L - characteristic length (equal to diameter 2r if a cross-section is circular),
- μ - (absolute) dynamic fluid viscosity,
- ν - kinematic fluid viscosity: ν = μ / ρ,
- ρ - fluid density.
The Reynolds number is the ratio of inertial forces (vsρ) to viscous forces (μ/L) and is used for determining whether a flow will be laminar or turbulent. Laminar flow occurs at low Reynolds numbers, where viscous forces are dominant, and is characterized by smooth, constant fluid motion, while turbulent flow, on the other hand, occurs at high Reynolds numbers and is dominated by inertial forces, producing random eddies, vortices and other flow fluctuations.
The transition between laminar and turbulent flow is often indicated by a critical Reynolds number (Recrit), which depends on the exact flow configuration and must be determined experimentally. Within a certain range around this point there is a region of gradual transition where the flow is neither fully laminar nor fully turbulent, and predictions of fluid behaviour can be difficult. For example, within circular pipes the critical Reynolds number is generally accepted to be 2300, where the Reynolds number is based on the pipe diameter and the mean velocity vs within the pipe, but engineers will avoid any pipe configuration that falls within the range of Reynolds numbers from about 2000 to 4000 to ensure that the flow is either laminar or turbulent.
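The laminar/turbulent rule of thumb for pipes can be sketched in a few lines. The fluid properties and pipe diameter below are illustrative values, not from the article:

```python
def reynolds_number(velocity, length, kinematic_viscosity):
    """Re = vs * L / nu, with L the pipe diameter for circular pipes."""
    return velocity * length / kinematic_viscosity

NU_WATER = 1.0e-6   # kinematic viscosity of water near 20 C, m^2/s (approximate)
DIAMETER = 0.025    # a 25 mm pipe

for v in (0.05, 0.5):  # mean velocities in m/s
    re = reynolds_number(v, DIAMETER, NU_WATER)
    regime = "laminar" if re < 2000 else "turbulent" if re > 4000 else "transitional"
    print(f"v = {v} m/s -> Re = {re:.0f} ({regime})")
```

A tenfold change in mean velocity is enough to move this pipe from clearly laminar (Re ≈ 1250) to clearly turbulent (Re ≈ 12500), straddling the 2000-4000 band engineers avoid.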
The similarity of flows
In order for two flows to be similar, they must have the same geometry and equal Reynolds numbers. When comparing fluid behaviour at homologous points in a model and a full-scale flow, the following holds: vs* L* / ν* = vs L / ν,
where quantities marked with * concern the flow around the model and the others the real flow. This allows us to perform experiments with reduced models in water channels or wind tunnels, and correlate the data to the real flows. Note that true dynamic similarity may require matching other dimensionless numbers as well, such as the Mach number used in compressible flows, or the Froude number that governs free-surface flows. Some flows involve more dimensionless parameters than can be practically satisfied with the available apparatus and fluids (preferably air or water), so one is forced to decide which parameters are most important. This is why good experimental modelling requires a fair amount of experience and good judgement.
Example on the importance of Reynolds number
If an aeroplane wing needs testing, one can make a small scaled-down model of the wing and test it as a tabletop model in the lab at the same Reynolds number that the actual aeroplane is subjected to. The results from the lab model will then be similar to those for the actual plane's wing. Thus we need not bring a plane into the lab to test it. This is an example of "dynamic similarity," and it is what the Reynolds number is all about.
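When model and full-scale flow share the same fluid (so ν* = ν), matching Reynolds numbers reduces to matching the product vs·L, meaning a smaller model needs a proportionally higher speed. A sketch with made-up numbers:

```python
def model_velocity(full_velocity, full_length, model_length):
    """Same-fluid Reynolds matching: vs* = vs * (L / L*)."""
    return full_velocity * full_length / model_length

# A wing of 2 m chord flying at 50 m/s, tested as a 1:4 model (0.5 m chord):
print(model_velocity(50.0, 2.0, 0.5), "m/s")  # 200.0 m/s
```

This scaling is also why matching very large Reynolds numbers on small models can become impractical, forcing the compromises about which dimensionless parameters to satisfy that the paragraph above describes.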
White Guinea Pigs
Name: Yvonne K.
If two white guinea pigs produce only white offspring, which gene is dominant and which is recessive, and why?
It's probably recessive, since only white offspring are produced. However, if the white guinea pigs are both homozygous dominant, they would also produce only white offspring. So you need more information to say for sure.
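The answer's reasoning, that both a recessive × recessive and a homozygous dominant × homozygous dominant cross give uniform offspring, can be checked with a tiny Punnett-square sketch; the allele labels are illustrative:

```python
from itertools import product

def cross(parent1, parent2):
    """All offspring genotypes from a one-gene cross (a Punnett square)."""
    return {"".join(sorted(a + b)) for a, b in product(parent1, parent2)}

print(sorted(cross("ww", "ww")))  # ['ww'] -> all white if white is recessive
print(sorted(cross("WW", "WW")))  # ['WW'] -> all white if white is dominant
print(sorted(cross("Ww", "Ww")))  # ['WW', 'Ww', 'ww'] -> a heterozygous cross would reveal dominance
```

Only a cross involving heterozygous parents (or a test cross against a non-white animal) would distinguish the two cases, which is why more information is needed.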
Update: June 2012
The Evolving World Of Technology: The History And Future
Technology in today's world is continuously evolving as time goes by. It involves the use of devices that are carefully programmed to perform specific tasks in order to help people do their jobs more easily and quickly. And over the last hundred centuries of human existence, it has affected not only the lives of every individual but also the environment we live in and the development of science.
When it comes to the history of technology, it is impossible for anyone to trace its first real beginnings; no one can tell exactly how it began. In fact, the history of technology is very long and complicated. Some historical records indicate, however, that the story of technology starts around the time when Homer and Hesiod defined it as the 'expression of manual craft and artful skill'. Aristotle later introduced 'technologia', a Greek word that eventually came to mean the productive science.
Other records also state that technology has passed through many historical periods, including the Renaissance, when research and learning began to bloom again after the Dark Ages, and the Age of Discovery during the 15th century. But it was during the 16th century that real exploration and discovery began; it is, in fact, the century when modern science emerged and when inventions considered to be among the greatest discoveries were introduced.
Scientists were not yet recognized as such during the 17th century; those scientists, including the great Isaac Newton, were considered philosophers rather than scientists. It was in this century that many major changes in philosophy and science took place. The 18th century was different: it was during this century that steam engines and other machines were invented to act as substitutes for animals at work. The following century was all about machinery, especially its tools; machines were invented to make parts for other machines, and tools to make other tools.
Technology, as well as discoveries and science, continued to develop at an ever-increasing rate during the 20th century. It was during this century that the inventions of airplanes, automobiles, telephones, and other things marked the great history of technology, along with improvements and discoveries in computers, mobile phones, spacecraft, and more.
Research in nanotechnology, biotechnology, and information technology has been the main focus during the current century. In line with this are the technologies continuously enjoyed by people of all ages, including wireless connection to the Internet, artificial intelligence, powerful microchips, LEDs, and many others.
The technology of today's world holds a lot of promise for the centuries to come. It will be an age in which touch-screen appliances, voice control, and portable devices are commonplace. These things only show how powerful technology is: it is indeed a system of continuous change and improvement.
Ascraeus Mons Lava Flow
Figure 1. Flow margin and channel outlines of the Ascraeus Mons flow. Flow sections (proximal, medial, distal), distances from inferred source area, and previous topography (dome, crater) are labeled. Outline is of a mosaic of non map-projected THEMIS daytime infrared images. (b) Locations of THEMIS and MOC visible images listed by figure number.1
Longer than any known lava flow on Earth, the channeled lava flow of Ascraeus Mons Volcano, Mars is 690 km (429 miles) long1. Though enormous compared to terrestrial counterparts, the Ascraeus Mons flow has similar morphological features to the 1907 flow on Mauna Loa Volcano and the Mana flows of Mauna Kea Volcano, Hawaii1. Combining field work data from the Hawaiian field sites, orbital images of Ascraeus Mons, and experimental modeling, we were able to measure the width, length, and volume at different points along the Ascraeus Mons flow, calculate effusion rates, estimate the flow's duration, and interpret the formation processes for the flow's morphologic features.
The Ascraeus Mons lava flow is situated in the saddle region between Pavonis Mons and Ascraeus Mons1. The source volcano, Ascraeus Mons, is one of the four large Tharsis shield volcanoes; it measures 375 km by 870 km at the base and is 15 km high1. The lava flow has a flow thickness of ~110 m and a ~35 m deep channel. We calculated effusion rates of 19,000-29,000 m3/s and eruption durations of 3 to 7 Earth months, assuming a constant eruption and effusion rate1. In comparison, the 1907 Mauna Loa flow lasted 15 days, reached a total length of ~222 km along the eastern limb, had a channel depth of 2 to 7 m, and an average effusion rate of ~119 m3/s 2. See more information about the 1907 Mauna Loa flow.
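The duration estimate is just erupted volume divided by effusion rate under the constant-rate assumption. In the sketch below, the volume is an illustrative figure chosen to be consistent with the quoted rates and durations, not a value taken from the paper:

```python
SECONDS_PER_DAY = 86_400

def eruption_duration_days(volume_m3, effusion_rate_m3_per_s):
    """Duration = erupted volume / effusion rate (constant rate assumed)."""
    return volume_m3 / effusion_rate_m3_per_s / SECONDS_PER_DAY

VOLUME = 2.4e11  # m^3, illustrative only
for rate in (19_000, 29_000):
    days = eruption_duration_days(VOLUME, rate)
    print(f"{rate} m^3/s -> {days:.0f} days (~{days / 30:.1f} months)")
```

For that assumed volume, the two quoted effusion rates bracket eruption durations of roughly three to five Earth months, inside the 3-7 month range reported above.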
The two flows are quantitatively different, as demonstrated by the difference of at least one order of magnitude in flow dimensions and eruption parameters. When comparing the morphologic features, however, qualitative similarities are apparent. Both flows have three distinct regions: the proximal, medial, and distal zones. The proximal section has an indistinct or very thin flow margin, the medial section has well-defined levees and a flow margin, and the distal section is a wide, thick, non-channeled lobe1. The flow width increased downstream at the boundary of the medial and distal zones for the Ascraeus Mons flow, the 1907 flow, and the experimental wax flows1. Also, the island structures observed within the channel of the medial section of the Ascraeus Mons flow resemble island features in the Mauna Loa flow and most likely formed during surges of lava in the channel, as observed during the 1984 eruption of Mauna Loa.
Another aspect of the research involved determining why a portion of the Martian lava channel within the medial section cuts off and then reappears farther downstream, creating a section along the "channeled" portion of the flow where no channel is present (Figure 1a). Three potential mechanisms have been suggested based on observations of terrestrial lava channels: 1) roofing-over of the channel, 2) stalling and solidification of remnant lava in the channel, or 3) lava backing up behind a dam that formed within the channel1.
Overall, understanding the morphology of the Ascraeus Mons lava flow on Mars propels our knowledge of the emplacement processes and eruption conditions on Ascraeus Mons volcano and the other Tharsis volcanoes, allowing us to envision what it was like on the surface as these volcanoes formed.
1. Garry et al. (2007), Morphology and emplacement of a long channeled lava flow near Ascraeus Mons Volcano, Mars.
2. Zimbelman et al. (2008), paper on the 1907 flow, in press.
Discover the cosmos! Each day a different image or photograph of our fascinating universe is featured, along with a brief explanation written by a professional astronomer.
March 15, 1996
The McMath-Pierce Solar Observatory
Credit and Copyright: Bill Keel, University of Alabama
Explanation: This odd-looking structure silhouetted in the foreground houses the three largest solar telescopes in the world. Located on Kitt Peak, Arizona, the largest telescope inside the McMath-Pierce Facility is 1.6 meters in diameter and contains only mirrors. The telescope contains no windows or lenses because focusing bright sunlight would overheat them. Visible in the background of this sunrise photo are the Moon and Venus. The telescopes are used in many research projects, including determining the Sun's structure, researching the cause of the solar corona, monitoring sunspots and solar flares, and observing bright planets and comets near the Sun. The telescopes even help monitor the Earth's atmospheric content of ozone and CFCs!
Authors & editors:
NASA Technical Rep.: Sherri Calvo. Specific rights apply.
A service of: LHEA at NASA/ GSFC
This is an excerpt from the HTML5 Solutions: Essential Techniques for HTML5 Developers book by Apress (where I'm one of the author).
It was already possible to send any kind of file from your computer to a remote server with the older versions of HTML by using the form, and the <input type="file"> element in particular. This form control, however, had the limit of being able to send only one file at a time. If the user wanted to upload a photo album and thus send several photos, the developer had to resort to other technologies.

Now, with HTML5 and the addition of a single attribute, it is possible to manage everything without using any external language.
HTML5 introduces a new attribute to the file input type, multiple, to improve file upload usability. multiple is a Boolean attribute that indicates whether the user should be allowed to specify more than one value. It's specified inline in the markup of the input tag:
<input type="file" multiple />
This attribute is supported by the latest versions of Safari, Chrome, Firefox, Internet Explorer, and Opera. The input control will be rendered according to the browser: either as a simple text input with a button on the side to select the files (e.g., Opera), or as only a button (e.g., Chrome and Safari). Some browsers, such as Chrome, use the same button label used for a simple file input type; however, they report the number of selected files to the user (but not their file names, as Opera and Firefox do). To carry out a multiple selection, the user holds the SHIFT, CTRL, or CMD key after clicking the Choose Files or Add Files button.
How to build it
From a technical point of view, the only thing you need to do to allow the user to upload multiple files is add the multiple attribute in the declaration of the file input tag.
Solution 4-5: Sending multiple files

<form action="#" method="post" enctype="multipart/form-data">
  <fieldset>
    <legend>Solution 4-5: Sending multiple files</legend>
    <label>Upload one or more files:</label>
    <input type="file" name="multipleFileUpload" multiple />
  </fieldset>
</form>
The files that the user selects will have to be sent to the server and processed using a server-side language. Some programming languages, such as PHP, require you to add brackets to the name attribute of the tag in order to send multiple files:

<input name="filesUploaded[]" type="file" multiple />
By doing so, PHP will construct an array data type, which will contain the uploaded files on the server. If you don't specify the brackets, the language will process the files in order and only provide the last file to your script.
[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index][Subject Index][Author Index]
Re: Re-emergence of lost features.
> One may wonder whether the genetic information to redisplay an old feature
> can still be there after tens of millions of years - after all, unexpressed
> genetic material undergoes much faster mutation than expressed stuff which
> is subject to evolutionary honing.
Atavistic features within species are actually not that uncommon - horses
with multiple digits on the feet are one prominent example.
As for the return of "lost features," one of my favorite examples involves
the hylid frog Gastrotheca, which has lower dentition. No other frog has
lower teeth. Hylids are very derived frogs, and Gastrotheca is a very
derived hylid - we could either posit a great many independent losses, or a single re-acquisition of the lost teeth.
Moreover, IF the argument that birds have digits 2-3-4 in the wings is
correct (and I seriously doubt that it is), then this is a similar case -
the fourth digit would have been regained through a reorientation of the
primary axis of digital development.
> Are any examples of normal-winged birds with throw-back clawed hands known?
You mean like the modern hoatzin, which loses the claws before maturity?
>And why did the phorusrhacoids lose out to the early carnivorous mammals
>when a similar design survived for so long before?
Phorusrhacoids spent most of their history in South America prior to the
appearance of large-bodied marsupial predators. The only other competing
terrestrial predators were "sebecoid" crocodyliforms. Their appearance in
North America was relatively recent (Pleistocene).
Christopher Brochu, Ph.D.
Postdoctoral Research Scientist
Department of Geology
Field Museum of Natural History
Lake Shore Drive at Roosevelt Road
Chicago, IL 60605 USA
phone: 312-922-9410, ext. 469
09 Dec 2008:
NASA Satellite Technology Can Monitor Natural Oil Seepage
Scientists are using NASA satellites to track natural oil slicks seeping to the surface of the world's oceans, providing better leads on potential sources of greenhouse gas emissions as the slicks break up and release carbon dioxide. Such natural seepage accounts for almost half of the oil that enters the earth's oceans, according to a report in New Scientist. While typical satellite radar images enable scientists to monitor seepage spots every 8 to 16 days, new techniques of analyzing NASA's MODIS images can detect a broader spectrum of wavelengths, including the visual range, allowing a scan of the surface of the earth daily. That is particularly significant since the sheen of an oil slick can disintegrate within two days. One research team used MODIS to monitor the northwestern Gulf of Mexico, and the image of a naturally occurring slick can be seen, at left. Scientists say monitoring areas of persistent seabed oil seepage provides an opportunity to study the unique seafloor ecosystems that have evolved near seepage vents, potentially leading to the development of new ways to clean up man-made oil spills.
Yale Environment 360 is a publication of the Yale School of Forestry & Environmental Studies.
Goats, like most hoofed mammals, have horizontal pupils. The purpose of those elongated pupils is to allow them to scan the horizon for possible predators.
When a goat’s head tilts up (to look around) and down (to munch on grass), an amazing thing happens. The eyeballs actually rotate clockwise or counterclockwise within the eye socket. This keeps the pupils oriented to the horizontal.
That seems impossible to believe. Eyeballs might be able to scan from side to side, or swivel up and down, but how could they actually rotate clockwise or counterclockwise?
To test out this theory, I took photos of Lucky the goat's head in two different positions, down and up. I then tilted one of the photos so that the slope of his forehead (marked by arrows) was constant. The photos prove that the pupils actually swivel about 25 degrees within the head.
This post is a joint outcome from a couple classes I took in the spring term, one of which was on two-phase fluid flow and the other of which was on scientific communication. The science communication class included a project in which we were to translate a bit of technical literature to a popular science level, and I selected for this purpose the discussion of boiling processes presented in my two-phase flow course. It turns out that boiling processes are a lot more interesting than I'd expected (at least, to me), so on the off chance that anyone else might also find this interesting I've posted my writing project below. Let me know what you think!
Most of us, even those who don't cook, are probably familiar with the way that water in a heated pot on the stove changes as it boils. First it begins to steam slightly, then it begins to show a scattering of tiny bubbles on the bottom surface. Eventually a few larger bubbles form and start to drift up towards the top, and finally the whole pot becomes consumed by rapidly rising columns of bubbles, roiling the surface, steaming madly, and sometimes even overflowing the pot if we're not careful.
Engineers who deal with boiling in industrial contexts need to know far more detail than this. If you're using the heat produced by a nuclear reactor to boil water to produce steam which turns a turbine to provide electrical power, you need to know exactly how much heat the water can absorb as it boils, in order to make sure that your reactor doesn't overheat and melt down. The rate of heat absorption depends strongly on the details of the boiling process, including the type of surface the water touches as it absorbs the heat, the size of the bubbles, the rate of bubble formation, and many other subtle parameters of the system.
In order to sort it all out, scientists and engineers studying the boiling process have spent a lot of time watching the laboratory equivalent of heated pots of water. What have they learned? Well, for one thing, it turns out that a watched pot does indeed boil, eventually. It's just that most of us lack these researchers' patience in attending to the slow and subtle processes involved. If you do decide to sit down someday and pay close attention, here's what their work tells us you'll see:
- Convection: Convection is actually the process that takes place before the real boiling starts. When a pot of water is heated from the bottom, the water on the bottom gets hot more quickly than the water on top. Since hot water is less dense than cool water, the hot water begins to rise toward the top of the pot, while the cool water begins to sink. When the hot water reaches the top, the most energetic molecules will steam away, causing the hot water to cool. At the same time, the cool water at the bottom is in contact with the hot surface of the pot and begins to heat up. Eventually the water on top has cooled enough and the water on the bottom has heated enough that the water on the bottom again becomes less dense and the cycle continues.
This process of convection, wherein heat is moved from the bottom of the pan to the top by the motion of the water, tends to organize itself into a multitude of tiny little hexagonal regions, each of which has cool water sinking at the edges and hot water upwelling in the center. These cells organize into a honeycomb pattern, which can sometimes be seen if there are small particles suspended in the water, such as tea leaves. This phenomenon is referred to by scientists as Rayleigh-Bénard convection, and there are dozens of cool YouTube videos you can watch to see it for yourself. Here's one example:
- Onset of boiling: As the bottom surface of the pan gets hotter, the water in contact with it doesn't simply heat up, it actually begins to vaporize. This forms tiny bubbles, which are usually initially trapped by surface tension in the little rough spots of the surface. These rough spots are called "nucleation sites", and it turns out that the chance that a bubble will form at a particular site depends on the size of the site. Nucleation sites that are too big or too small will not be able to form bubbles.
If the surface is sufficiently smooth, such as the interior of a glass or Pyrex container, there are no nucleation sites which are the right size for bubbles to form, and the water can actually be heated significantly above its boiling point without being able to pass into the bubble formation stage. This is called superheating. When a liquid is superheated, even the slightest disturbance, such as putting in a spoon to stir it, can cause a sudden explosive rush of bubble formation, which in turn can burn you if you're splashed by the hot liquid or steam. This is why it's important to be careful if you microwave water in a glass container: superheating of the water is not an uncommon result!
Here's the Mythbusters' demonstration of superheating water in a microwave:
Note that despite what they say, it's definitely possible to superheat tap water (probably depending on your local water source). I've had it happen myself.
- Ordinary boiling: As the pan is heated still further, the initial tiny bubbles begin to grow in size until eventually they are too buoyant for the surface tension to hold them down any longer. Bubbles begin to break free and float to the water surface, where they burst and release their trapped vapor. As the pan continues to warm, bubbles grow and escape more quickly at each nucleation site, and more and more nucleation sites become able to form bubbles. Eventually bubbles begin to form and escape so quickly that successive bubbles from the same site merge together to form amorphous vertical globs called "slugs", or even a continuous column of vapor rising from the nucleation site to the surface of the water.
This is the point where a chef would consider the water to be at a "rolling boil". A thick cloud of steam rises from the liquid from the continuous bursting of bubbles at the surface, and the surface itself roils and churns. From a scientific perspective, this is also the point at which heat is being transferred from the stove heating element to the liquid in the pot as quickly as possible. Fortunately, kitchen stoves do not usually heat water beyond this point. Science, however, goes further yet.
- Transition and film boiling: If you did have a stove that could get hot enough, the next step in the boiling process would be for even the water between the columns of bubbles to begin to vaporize on the bottom surface of the pot, and the columns themselves to begin to merge. This is a dangerous regime to enter, because at this point the bottom of the pan starts to become completely covered with water vapor, and liquid water is no longer able to reach the surface. A similar phenomenon is displayed in the Leidenfrost effect, which is what allows the guy in this video to safely pour liquid nitrogen over his hand (don't try this at home!):
This is also the phenomenon which allows water droplets to dance and skate around just above the surface of a hot skillet:
In the ordinary boiling regime, heat can be transferred from the pot surface to the water relatively quickly by converting liquid water into water vapor, because the water vapor absorbs a great deal of energy as it forms. However, once the vapor has been formed, it absorbs energy at a much slower rate. If only vapor is in contact with the surface of the pot, the rate of heat absorption by the water will be very slow indeed. When the rate of heat transfer from the pot to the water has slowed down, the temperature of the pot itself begins to rise rapidly. If you have a powerful enough source of heat, your pot, or even the heating element itself, can get hot enough to melt. Usually this is not a desirable outcome.
I bet you never knew that a bit of kitchen science could be so important!
Van P. Carey (2008). "Regimes of Pool Boiling" (Chapter 7, Section 1), in Liquid-Vapor Phase Change Phenomena: An Introduction to the Thermophysics of Vaporization and Condensation Processes in Heat Transfer Equipment, 2nd edition. ISBN 978-1591690351.
Question: What kind of clouds do persistent contrails create? A persistent contrail is technically a cloud, correct? And given the altitude, it would be a cirrus cloud, correct?
The reason I ask is, somebody who viewed my video (Chemtrails and Chemclouds debunked: http://tinyurl.com/cllnq45) made the point that, if contrails form cirrus clouds, then showing cirrus clouds from 1905 is pointless - it proves nothing. I disagree with him.
It has long been known that in the local universe the mix of morphological types differs in different galactic environments, with ellipticals and S0's dominating in the densest clusters and spirals dominating the field population (Hubble and Humason 1931). This so-called density-morphology relation has been quantified by Oemler (1974) and Dressler (1980) and is found to extend over five orders of magnitude in space density (Postman and Geller 1984). Whether this relation arises at formation (nature) or is caused by density driven evolutionary effects (nurture) remains a matter of debate. More recent studies of clusters of galaxies at intermediate redshifts show that both the morphological mix and the star formation rate strongly evolve with redshift (Poggianti et al. 1999; Dressler et al. 1997; Fasano et al. 2000). In particular the fraction of S0's goes down and the spiral fraction and star formation rate go up with increasing redshift. There are many physical mechanisms at work in clusters or during the growth of clusters that could affect the star formation rate and possibly transform spiral galaxies into S0's. In this review I will limit myself to the role that the hot intracluster medium (ICM) may play.
The first suggestion that an interaction between the ICM and disk galaxies may affect the evolution of these galaxies was made immediately after the first detection of an ICM in clusters (Gursky et al. 1971). In a seminal paper on "the infall of matter into clusters" Gunn and Gott (1972) discuss what might happen if there is any intergalactic gas left after the cluster has collapsed. The interstellar material in a galaxy would feel the ram pressure of the intracluster medium as it moves through the cluster. A simple estimate of the effect assumes that the outer disk gas gets stripped off when the local restoring force in the disk is smaller than the ram pressure. Thus disks get stripped up to the so-called stripping radius where the forces balance. They estimate that for a galaxy moving at the typical velocity of 1700 km/s through the Coma cluster the ISM would be stripped in one pass. This would explain why so few normal spirals are seen in nearby clusters. In particular it would explain the existence of so many gas poor, non star forming disk galaxies first noticed by Spitzer and Baade (1951) and later dubbed anemics by van den Bergh (1976).
Ram pressure stripping is but one way in which the ICM may affect the ISM. The effects of viscosity, thermal conduction and turbulence on the flow of hot gas past a galaxy were considered by Nulsen (1982), who concluded that turbulent viscous stripping will be an important mechanism for gas loss from cluster galaxies. While the above mentioned mechanisms would work to remove gas from galaxies and thus slow down their evolution, an alternative possibility is that an interaction with the ICM compresses the ISM and leads to ram pressure induced star formation (Dressler and Gunn, 1983; Gavazzi et al. 1995).
On the observational side there has long been evidence that spiral galaxies in clusters have less neutral atomic hydrogen than galaxies of the same morphological type in the field (for a review see Haynes, Giovanelli and Chincarini 1984). The CO content however does not seem to depend on environment (Stark et al. 1986; Kenney and Young 1989). Both single dish observations and synthesis imaging results of the Virgo cluster show that the HI disks of galaxies in projection close to the cluster center are much smaller than the H I disks of galaxies in the outer parts (Giovanelli and Haynes, 1983; Warmels 1988a, b, c; Cayatte et al. 1990, 1994). All of these phenomena could easily be interpreted in terms of ram pressure stripping. Dressler (1986) made this even more plausible by pointing out that the gas deficient galaxies seem statistically to be mostly on radial orbits which would carry them into the dense environment of the cluster core. However nature turned out to be more complicated than that. In a comprehensive analysis of HI data on six nearby clusters Magri et al. (1988) conclude that the data cannot be used to distinguish between inbred and evolutionary gas deficiency mechanisms or among different environmental effects. Although H I deficiency varies with projected radius from the cluster center, with the most H I poor objects close to the cluster centers, no correlation is found between deficiency and (relative radial velocity)², as would be expected from ram pressure stripping.
In more recent years a number of developments have taken place. First there was a flurry of activity on the theoretical front: for the first time detailed numerical simulations of the effects of ram pressure stripping appeared. Since then both improved statistics on HI deficiency and detailed multiwavelength observations of cluster galaxies undergoing trauma have appeared. More recently detailed comparisons have been made between individual systems and numerical simulations. Finally synthesis imaging of neutral hydrogen no longer needs to be limited to a few selected systems in nearby clusters, and results of volume limited surveys of entire clusters at redshifts between 0 and 0.2 have started to appear in the literature. In this review I will first discuss what we have learned about the statistical properties of the H I content of cluster galaxies. Then I will review some of the recent numerical work that has been done and compare these with observational results. After that I will discuss what we have learned from imaging surveys, and in conclusion I will discuss the importance of the ICM interaction for galaxy evolution.
Gadolinium is a component of compact disks.
Atomic Number: 64
Atomic Symbol: Gd
Atomic Weight: 157.2
Electron Configuration: [Xe]6s²4f⁷5d¹
Atomic Radius: 237 pm (Van der Waals)
Melting Point: 1313 °C
Boiling Point: 3273 °C
Oxidation States: 3
From gadolinite, a mineral named for Gadolin, a Finnish chemist. The rare earth metal is obtained from the mineral gadolinite. Gadolinia, the oxide of gadolinium, was separated by Marignac in 1880 and Lecoq de Boisbaudran independently isolated it from Mosander's yttria in 1886.
Gadolinium is found in several other minerals, including monazite and bastnasite, both of which are commercially important. With the development of ion-exchange and solvent extraction techniques, the availability and prices of gadolinium and the other rare-earth metals have greatly improved. The metal can be prepared by the reduction of the anhydrous fluoride with metallic calcium.
Natural gadolinium is a mixture of seven isotopes, but 17 isotopes of gadolinium are now recognized. Although two of these, ¹⁵⁵Gd and ¹⁵⁷Gd, have excellent capture characteristics, they are only present naturally in low concentrations. As a result, gadolinium has a very fast burnout rate and has limited use as a nuclear control rod material.
As with other related rare-earth metals, gadolinium is silvery white, has a metallic luster, and is malleable and ductile. At room temperature, gadolinium crystallizes in the hexagonal, close-packed alpha form. Upon heating to 1235°C, alpha gadolinium transforms into the beta form, which has a body-centered cubic structure.
The metal is relatively stable in dry air, but tarnishes in moist air and forms a loosely adhering oxide film which falls off and exposes more surface to oxidation. The metal reacts slowly with water and is soluble in dilute acid.
Gadolinium has the highest thermal neutron capture cross-section of any known element (49,000 barns).
Gadolinium yttrium garnets are used in microwave applications and gadolinium compounds are used as phosphors in color television sets.
Gadolinium ethyl sulfate has extremely low noise characteristics and may find use in duplicating the performance of amplifiers, such as the maser.
The metal is ferromagnetic. Gadolinium is unique for its high magnetic moment and for its special Curie temperature (above which ferromagnetism vanishes) lying just at room temperature, meaning it could be used as a magnetic component that can sense hot and cold.
A recent joke on the comedy panel show 8 out of 10 cats prompted this question. I'm pretty sure the answer's no, but hopefully someone can surprise me.
If you put a person in a balloon, such that the balloon ascended to the upper levels of the atmosphere, is it theoretically possible that an orbiting satellite's (i.e. a moon's) gravity would become strong enough to start pulling you towards it, taking over as the lifting force from your buoyancy?
Clearly this wouldn't work on Earth, as there's no atmosphere between the Earth and the moon, but would it be possible to have a satellite share an atmosphere with its planet such that this would be a possibility, or would any shared atmosphere cause too much drag to allow for the existence of any satellite?
If it were possible, would it also be possible to take a balloon up to the satellite's surface, or would the moon's gravity ensure that its atmosphere was too dense near the surface for a landing to be possible, thus leaving the balloonist suspended in equilibrium? Could you jump up from the balloon towards the moon (i.e. jumping away from the balloon in order to lose the buoyancy it provided)?
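One rough way to frame the question is to compare the gravitational accelerations involved. The sketch below is a toy calculation with made-up, Earth/Moon-like numbers; it ignores buoyancy, drag, tides, and orbital mechanics entirely, and none of the distances correspond to a real system:

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def grav_accel(mass, distance):
    """Gravitational acceleration toward a body of the given mass
    at the given distance (m) from its center."""
    return G * mass / distance**2

# Toy numbers (illustrative only): a balloonist high above an
# Earth-like planet, with a Moon-like satellite passing unusually close.
planet_mass = 6.0e24   # kg
moon_mass = 7.3e22     # kg
r_planet = 7.0e6       # m, balloonist's distance from the planet's center
r_moon = 1.0e6         # m, balloonist's distance from the moon's center

toward_planet = grav_accel(planet_mass, r_planet)
toward_moon = grav_accel(moon_mass, r_moon)

# The moon "wins" only if its pull exceeds the planet's pull minus
# whatever lift the balloon's buoyancy still provides at that altitude.
print(f"toward planet: {toward_planet:.2f} m/s^2")
print(f"toward moon:   {toward_moon:.2f} m/s^2")
```

Even with the moon implausibly close, its pull here is only about half the planet's, which hints at why the scenario stays in joke territory: a moon orbiting that close to a shared atmosphere would face drag and tidal problems long before the balloonist did.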
The fundamental question raised by these postulates of special relativity is how different coordinate systems (reference frames) are related, i.e., how one transforms between them. If (x, y, z, t) denotes the coordinates of some event in frame S, what are the coordinates (x’, y’, z’, t’) in the frame S’ moving at velocity v relative to S? But first, a clarification on proper time and coordinate time:
Proper time is time measured between events by use of a single clock, where these events occur at the same place as the clock. It depends not only on the events but also on the motion of the clock between the events. An accelerated clock will measure a shorter proper time between two events than a non-accelerated (inertial) clock between the same events.
In standard special relativity, we often want to express results in terms of a spacetime coordinate system relative to an implied observer. In this case, an event is specified by one time coordinate and three spatial coordinates. The time measured by the time coordinate is referred to as coordinate time, to distinguish it from proper time.
The answer is given by the Lorentz transformations:
x’ = γ(x − vt)
y’ = y
z’ = z
t’ = γ(t − vx/c²)
where γ = 1/√(1 − v²/c²)
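As a quick numerical check, here is a minimal Python sketch (illustrative numbers only) that applies a boost along the x-axis and verifies that the spacetime interval s² = c²t² − x² − y² − z² comes out the same in both frames:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def lorentz_boost(x, y, z, t, v):
    """Transform the event (x, y, z, t) from frame S to frame S'
    moving at velocity v along the x-axis relative to S."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    x_p = gamma * (x - v * t)
    t_p = gamma * (t - v * x / C**2)
    return x_p, y, z, t_p  # y and z are unchanged by an x-boost

# Example event: 1 light-second away along x, 2 seconds after the
# origin event, viewed from a frame moving at half the speed of light.
x, y, z, t = C * 1.0, 0.0, 0.0, 2.0
xp, yp, zp, tp = lorentz_boost(x, y, z, t, 0.5 * C)

# The spacetime interval is the same in both frames (Lorentz invariance).
s2 = (C * t) ** 2 - x**2 - y**2 - z**2
s2p = (C * tp) ** 2 - xp**2 - yp**2 - zp**2
assert math.isclose(s2, s2p)
```

Note that coordinate time transforms (t’ ≠ t), while the interval, and hence the proper time along a worldline, does not, which is exactly the distinction the paragraphs above draw.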
(Notes from a lecture delivered by Bruce Gordon,
“Relativity Theory and the Nature of Time”)
To get more light in a tight spot, solar panels should be three dimensional, according to a study detailed today.
Researchers at the Massachusetts Institute of Technology published a paper in the journal Energy and Environmental Science this week which found that building a solar array with panels at different angles can significantly improve performance. The best improvements were in cloudy conditions, in winter months, and in locations far from the equator.
Using simulations and small test structures, the group found power increased by between two and 20 times compared to a set of flat panels. In initial tests, though, it …
This 3-D image, called an anaglyph, shows the topography of Vesta’s eastern hemisphere. To create this anaglyph, two differently colored images are superimposed with an offset to create depth. When viewed through red-blue glasses this anaglyph shows a 3-D view of Vesta’s surface. The images used to generate the two differently colored images that make up this anaglyph were obtained during the approach phase of NASA’s Dawn mission in July 2011. At the time the distance from Dawn to Vesta was about 5,200 kilometers (3,200 miles), which results in an image resolution of about 500 meters (1,600 feet) per pixel. The depth effect or topography differences in this anaglyph were calculated from the shape model of Vesta.

A number of Vesta’s large features are clear in this anaglyph. Firstly, the equatorial troughs are visible around Vesta’s equator. These troughs encircle most of the asteroid and are up to 20 kilometers (12 miles) wide. Secondly, to the north of these troughs there are a number of highly degraded, old, large craters. Vesta’s heavily cratered nature is clear from this anaglyph because younger, fresher craters are overlain onto many sets of older, more degraded craters. Due to Vesta’s angle towards the Sun the northernmost part of Vesta has yet to be illuminated and studied and is shown in shadow in this anaglyph. Finally, in the southern hemisphere there are generally fewer craters than in the northern hemisphere. Also visible protruding out from Vesta’s south polar region is a side view of the central complex of the Rheasilvia impact basin.
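The red-blue superposition described above can be sketched in code. This is only an illustration of the general anaglyph technique, not the Dawn processing pipeline: it uses NumPy with synthetic grayscale views (the `left` and `right` arrays are stand-ins for the two offset images), shifting one view horizontally to mimic the offset that creates the depth effect:

```python
import numpy as np

def make_anaglyph(left, right):
    """Combine two grayscale views (2-D uint8 arrays of equal shape)
    into a red-cyan anaglyph: the left view feeds the red channel,
    the right view feeds the green and blue channels, so red-blue
    glasses deliver a different view to each eye."""
    h, w = left.shape
    rgb = np.zeros((h, w, 3), dtype=np.uint8)
    rgb[..., 0] = left   # red channel  <- left-eye view
    rgb[..., 1] = right  # green channel <- right-eye view
    rgb[..., 2] = right  # blue channel  <- right-eye view
    return rgb

# Synthetic stand-ins: the "right" view is the "left" view shifted
# a few pixels horizontally, mimicking the stereo offset.
left = (np.random.default_rng(0).random((64, 64)) * 255).astype(np.uint8)
right = np.roll(left, 3, axis=1)
anaglyph = make_anaglyph(left, right)
```

In a real pipeline the two inputs would be images of the same terrain taken from slightly different viewing angles (or rendered from a shape model, as for Vesta), rather than a simple pixel shift.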
The Dawn mission to Vesta and Ceres is managed by NASA’s Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, for NASA’s Science Mission Directorate, Washington D.C. UCLA is responsible for overall Dawn mission science. The Dawn framing cameras have been developed and built under the leadership of the Max Planck Institute for Solar System Research, Katlenburg-Lindau, Germany, with significant contributions by DLR German Aerospace Center, Institute of Planetary Research, Berlin, and in coordination with the Institute of Computer and Communication Network Engineering, Braunschweig. The Framing Camera project is funded by the Max Planck Society, DLR, and NASA/JPL.
More information about Dawn is online at http://dawn.jpl.nasa.gov.
Image credit: NASA/JPL-Caltech/UCLA/MPS/DLR/IDA
Using this graphic and referring to it is encouraged, and please use it in presentations, web pages, newspapers, blogs and reports.
For any form of publication, please include the link to this page and give the cartographer/designer credit (in this case Hugo Ahlenius, UNEP/GRID-Arendal)
UNEP World Conservation Monitoring Centre. 2005. Global Cold-Water Coral Distribution (points). Cambridge, UK: UNEP-WCMC
Uploaded on Tuesday 21 Feb 2012
Coldwater coral reefs, distribution
Hugo Ahlenius, UNEP/GRID-Arendal
Scientists are just beginning to learn about the many species in the remote, deep waters of the polar oceans. Corals, for example, are not limited to the warm, shallow waters of the tropics. They also exist in many cold, deep waters all over the world, including Arctic and sub-Antarctic waters. Coral reefs are marine ridges or mounds, which have formed over millennia as a result of the deposition of calcium carbonate by living organisms, predominantly corals, but also a rich diversity of other organisms such as coralline algae and shellfish. The coldwater reefs are highly susceptible to deep-sea trawling and ocean acidification from climate change, which has its greatest impacts at high latitudes, while tropical reefs will become severely damaged by rising sea temperatures.
Peak oil - Dec 24
Click on the headline (link) for the full text.
Many more articles are available through the Energy Bulletin homepage
Bill McKibben, Sierra Club Magazine
Fossil fuels burned brightly in their day, but now it's time to make the leap to safer, cleaner, climate-friendly alternatives
EXPLORERS USED TO AMUSE their European audiences by telling of heathen tribesmen who would panic when an eclipse rubbed out the noonday sun. The natives would scream or pray or make ritual sacrifices to appease the god on whom they had always depended, a god now acting so irrationally. Our chief deity--the cheap energy that has made our lives rich and easy--is about to be eclipsed as well, and the sounds you hear (motorists moaning about the price of gas, politicians loudly insisting that sacrificing wilderness in the Far North will save the day) are no different. Except that solar eclipses pass quickly. This change is forever; fossil fuel was a onetime gift--and the sooner we understand that, the sooner we can go about the realistic task of doing without it.
Much of what passes for discussion about our energy woes is spent imagining some magic fuel that will save us. Solar power! Fusion power! Hydrogen power! But such wishful thinking hides the basic fact of our moment in time: We've already had our magic source of energy. Fossil fuel was as good as it gets: compact, abundant, and easy to handle and transport. All you had to do was stick a drill bit in the ground or scrape off a few feet of soil above a vein of black rock and you were set. Learning to use coal, gas, and oil kicked off the Industrial Revolution and, in subsequent centuries, underwrote the chemical revolution, transportation revolution, agricultural revolution, and electronics revolution. (Right now, even as you read this, fossil fuel is producing hundreds of billions of revolutions per minute.) Pretty much every action of modern life involves burning hydrocarbons, and it's modern life that we've come to like.
So it's no wonder that we start to get a little jittery when we contemplate the coming end of the fossil-fuel age. The growing recognition that we're approaching a peak in oil production is the most obvious sign, of course--our supply of petroleum is now measured in decades, and as each decade passes, that supply will become harder to find and more expensive to pump. The world's four biggest oil fields are in decline. We're using oil five times as fast as we're discovering new reserves. And just as those of us already in on modernity would like to start hoarding the remaining supply, the Chinese and Indians and lots of others are discovering that they'd enjoy taking their cars out for a spin as well.
If all we were faced with was peak oil, we might be able to keep the circus going. We could figure out ways to replace many uses of gasoline with coal, which is abundant as long as we don't mind removing all the remaining mountaintops in the southern Appalachians (a sacrifice, I predict, we would bring ourselves to make--or rather, we would bring ourselves to call upon Kentuckians and West Virginians to make). But we've got a far deeper problem than that, one coal can only make worse. Global warming, as we've come to understand in the past few years, is not a speculative, distant, or easily managed threat. It's not one more item on the list of problems, somewhere between global terrorism and failing inner-city schools. It's the first civilization-scale challenge humans have managed to create...
(January/February 2007 issue)
The latest issue of the Sierra Club magazine is devoted to energy. Several other articles on energy are online, in addition to the well-written overview by McKibben. I think this is the most prominent mention of peak oil by the Sierra Club.
The Sierra Club has an energy section on their website, with the focus on global warming.
The blind spot of the Sierra Club seems to be long distance travel. For example, they don't mention air travel at all in their Ten Things You Can Do to Help Curb Global Warming - much more important than changing lightbulbs or having your car tuned. Half of the pages in the paper copy of this issue of their magazine are devoted to "Sierra Club Outings," those beautiful far-off locations which can only be reached by burning large quantities of fossil fuels.
Jerome a Paris, who was interviewed for McKibben's article, has a related post today on The technology of community.
A Primer on Reserve Growth - part 1 of 3
Rembrandt, The Oil Drum: Europe
The difference in vision between so-called "optimists" and "pessimists" with respect to the peak in world oil production is often caused by a view of future technological development in the oil industry. This development influences both conventional and unconventional oil production. Only a part of the oil in an oil field can be produced. It is claimed by oil companies and various institutes that technological advancement will increase the recoverable amount, thereby postponing the peak in conventional oil for several decades. In essence this means that the amount of recoverable reserves increases over time due to changes in technology, economics, and insight. Expected recoverable reserves also increase over time because of past underestimates. This is why the term is called "reserve growth".
The only institute that has done extensive studies with respect to the growth of recoverable reserves over time is the United States Geological Survey. In their World Petroleum Assessment 2000, the USGS claims that between 1996 and 2025 worldwide conventional oil reserves will increase by 730 billion barrels due to reserve growth.
A large number of forecasting institutes such as the International Energy Agency and Energy Information Administration take the figure of 730 billion barrels from the USGS for granted. In addition to forecasting institutes, oil companies often claim that reserve growth is the key to postponing the worldwide peak of conventional oil production. The question is to what extent the USGS prediction can be relied upon.
Two weeks ago I posted a piece about the discovery forecast of the USGS. In this second post with respect to the USGS World Petroleum Assessment 2000 we take a first glance at what reserve growth really is and what we can learn from studying the worldwide recovery factor of conventional oil fields.
(23 Dec 2006)
How to address Contrarian Arguments: "We have huge reserves"
Luís de Sousa, The Oil Drum: Europe
In this second installment of the Contrarian Arguments series we'll look into the "We have huge reserves" rhetoric.
The first part can be found here: Part I : Fundamentals.
We have huge reserves, but I have bad news for you, they've been huger:
[graph: Regular Oil Reserves, as computed from Colin Campbell's "Growing Gap" graph]
The "Huge Reserves" kind of argument is probably the most important one to address; beyond all the madness and delusional arguments like infinite oil, this one can be used by serious geologists and researchers. It is the kind of argument you can get from people who have seriously (or close to it) studied the stuff, but came to slightly different conclusions from those reached by the regular peak oil researcher.
At the head of the serious people making this kind of argument is CERA, our nemesis. So we'll look more closely at CERA's work and understand what differentiates our conclusions.
(24 Dec 2006)
Rail-Volution on Peak Oil
The Rail-Volution conference last November had a session on Peak Oil. At the conference papers page, scroll down to: "Oil or Not -- Are We in a Transportation Energy Crisis?"
The peak oil presentations (in PDF) can be accessed directly:
Do we know enough to make decisions? (34 pages/1mb PDF)
Gary Landrio, Vice President, Rail Operations, Stone Consulting & Design, Warren, Pennsylvania
Transportation Energy Crisis?
(30 pages/0.7mb PDF)
Todd Litman, Executive Director, Victoria Transport Policy Institute, Victoria, British Columbia, Canada
Future Oil Supply Uncertainty and Impacts on Transportation/Land Use Planning
(17 pages/0.5mb PDF)
Rex Burkholder, Councilor, District 5, Metro Council, Portland, Oregon
Peak Food and Population Overshoot
John Rawlins, Whatcom Watch
A wall chart in the Whatcom Community College physics lab shows the historical (and projected future) curve of oil extraction along with other geo-petroleum data. There is also a curve on the chart that, because of its color, is difficult to see and is easy to overlook entirely. When I ask a student in my energy class to notice it and tell everyone else what the label on the curve is, there’s always a moment of realization and the dawning of a major future problem in a world with declining oil availability. The nearly invisible curve shows world population versus time, and the population curve correlates perfectly with the oil extraction rate curve.
Before oil (and natural gas) humans used manual labor to grow food, and the amount of food determines an upper limit on population. The large-scale, increasing use of oil and natural gas in the industrial world’s food-growing enterprise has meant ever-increasing quantities of food - until now. Therefore, population increase over the past 150 years correlates very well with oil extraction.
John Rawlins has a B.S. in physics and a Ph.D. in nuclear physics. He retired in 1995 from the Westinghouse Hanford Co. at the Hanford site in Eastern Washington. Currently, he teaches physics and astronomy at Whatcom Community College.
Concise summary of the issues. This is the third in a series by physicist Rawlins: "Fossil Fuels at Peak." Previous articles:
Part 1: A Personal Peak Oil Discovery Process
Part 2: Changes in Energy Infrastructure to Take Decades and Trillions of Dollars
We Don't Know Jack
Chris Nelder, Energy and Capital
Sometimes I feel sorry for journalists on the energy beat. Most of them are your basic college-educated, liberal arts generalists, with a flair for communication but not necessarily math or science.
Unfortunately, in a world waking up to the fearsome reality of peak oil, good numbers are hard to come by. This is especially true in the oil business, where "tight holing" – keeping information top secret – is a term as old as the business itself.
But there are some numbers that are available, and some of them are pretty reliable. And even a journalist has the math skills needed to work with those numbers – it's basic arithmetic.
So when Chevron announced a “new” find in September of some three to 15 billion barrels of oil in the Gulf of Mexico, with a possible production rate of 300,000 to 500,000 barrels per day of light sweet crude, I looked for some good journalism about this exciting new discovery.
But I didn’t find it.
The Globe and Mail published an article called “Peak oil theorists don’t know Jack,” a short little puff piece that carefully skirted any facts and selectively quoted the press release, while suggesting that peak oil fears should be forgotten.
(21 Dec 2006)
This example describes how to split a Text node into three new nodes in a DOM document. The methods used for splitting the text node in the DOM document are described below:
Element root = doc.getDocumentElement() - allows direct access to the root of the DOM document.
Element paragraph = (Element) root.getFirstChild() - creates a new node named paragraph and stores the first child of the root in it.
Text newText = text.splitText(5) - splits the text node into two nodes at the specified offset.
XML code used by the program:
<name>Rose India in Rohini</name>
Output of the program:
Text node before splitting is: Rose India in Rohini
First split node is: India in Rohini
Second split node is: in Rohini
Third split node is: Rohini
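The same sequence of splits can be sketched with Python's xml.dom.minidom, which implements the same DOM Text.splitText(offset) method; the offsets below are chosen to reproduce the output above (the tutorial itself uses Java, so this is an illustrative equivalent, not the tutorial's own code):

```python
from xml.dom.minidom import parseString

doc = parseString("<name>Rose India in Rohini</name>")
root = doc.documentElement          # the <name> element
text = root.firstChild              # Text node "Rose India in Rohini"
print("Text node before splitting is:", text.data)

# splitText(offset) keeps the first `offset` characters in the original node
# and returns a new sibling Text node holding the remainder.
parts = []
node = text
for offset in (5, 6, 3):            # "Rose ", "India ", "in " are cut off in turn
    node = node.splitText(offset)
    parts.append(node.data)         # remainder at the time of the split

for i, part in enumerate(parts, 1):
    print(f"Split node {i} is:", part)
# Split node 1 is: India in Rohini
# Split node 2 is: in Rohini
# Split node 3 is: Rohini
```

After the three splits the `<name>` element holds four sibling Text nodes ("Rose ", "India ", "in ", "Rohini") whose concatenation is still the original string.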
(16 Apr 2010 01:11 GMT) - The eerie glow that straddles the night time zodiac in the eastern sky is no longer a mystery. First explained by Joshua Childrey in 1661 as sunlight scattered in our direction by dust particles in the solar system, the source of that dust was long debated. In a paper to appear in the April 20 issue of The Astrophysical Journal, David Nesvorny and Peter Jenniskens put the stake in asteroids. More than 85 percent of the dust, they conclude, originated from Jupiter Family comets, not asteroids…
A toolchain is a collection of tools used to develop software for a certain hardware target. Toolchains are based on particular versions of the compiler, libraries, special headers and other tools. A cross-toolchain is a toolchain for compiling binaries for a different CPU architecture than the host CPU.
Scratchbox is a cross-compilation toolkit for embedded Linux application development. It is designed for compiling software for different target CPU architectures. Scratchbox allows creating several target environments. Each target is a separate environment that has a selected toolchain, a target CPU and its own file system.
Building toolchains is not always trivial. Scratchbox uses scripts for building predefined toolchains. After building a toolchain it should be tested to verify that it works properly. Testing toolchains is harder than building them.
Scratchbox toolchains provide the cross compilation tools for compiling binaries for the target environment. Each
toolchain is built for a certain CPU target and they are based on certain gcc and C-library sources.
Scratchbox toolchains can be used both inside and outside Scratchbox. In Scratchbox each target uses a certain
toolchain with a specific target CPU. Scratchbox uses wrappers to make the toolchains appear as if they were native
toolchains. Outside Scratchbox the toolchains are used as any normal cross compilation toolchains.
All the installed toolchains can be found in the /scratchbox/compilers directory.
The toolchains use either the glibc or uClibc C library. The glibc toolchain is based on the Debian-patched glibc-2.3.2. Other packages used for building the toolchains are binutils, gcc and the Linux kernel headers. Several patches are applied when building the toolchains; which patches depends on the toolchain configuration and the packages used, e.g. when using uClibc, binutils and gcc are patched with uClibc patches.
Scratchbox.org offers prebuilt toolchains for x86 and ARM targets. Only these targets are currently supported, because it would be too much work to support all the different configurations. With the Scratchbox toolchain sources you can build your own custom toolchains.
Scratchbox toolchain building scripts allow you to easily build predefined toolchains, and give you the possibility to build custom toolchains for different targets. Changing the toolchain's binutils, compiler or C-library packages can be a more demanding task.
Scratchbox uses a gcc wrapper for wrapping most of the toolchain command binaries. In the /scratchbox/compilers/bin/ directory you can see the linked binaries. The wrapper knows how to handle each command and, depending on the command, it might change some of the command parameters. It then runs the actual command from the correct path, which depends on the selected target. The gcc wrapper reads all the target-specific information from the target configuration files, which can be found in the /scratchbox/users/username/targets/ directory.
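The wrapper idea can be illustrated with a short sketch. This is not Scratchbox's actual wrapper (the real one is a compiled binary with many more rules), and the configuration values below are hypothetical:

```python
import os

# Hypothetical target configuration; the real wrapper reads this kind of
# information from the files under /scratchbox/users/username/targets/.
TARGET_CONF = {
    "compiler_path": "/scratchbox/compilers/arm-gcc/bin",
    "extra_flags": ["-msoft-float"],
}

def wrap(argv):
    """Rewrite a compiler command line the way a cross-toolchain wrapper might."""
    cmd = os.path.basename(argv[0])                        # e.g. "gcc", "g++", "cpp"
    real = os.path.join(TARGET_CONF["compiler_path"], cmd)
    # Depending on the command, some parameters may be added or changed;
    # a real wrapper would then exec the real binary from the target path.
    return [real] + TARGET_CONF["extra_flags"] + argv[1:]

print(wrap(["gcc", "-c", "hello.c"]))
# ['/scratchbox/compilers/arm-gcc/bin/gcc', '-msoft-float', '-c', 'hello.c']
```

The point of the design is that build systems keep invoking plain `gcc`, while the wrapper transparently redirects each call to the toolchain selected for the current target.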
Scratchbox also has an ld wrapper for linking the binaries properly. When compiling inside Scratchbox a fake-native ld is used; outside Scratchbox a normally behaving ld is used. For example, inside Scratchbox the dynamically linked binaries are linked against the libraries that are in standard library paths, not in the toolchain's library path.
The gcc wrapper uses ccache by default. The default cache directory is /scratchbox/ccache/. Ccache can be disabled by setting the environment variable SBOX_USE_CCACHE to
"no". The cache directory can be changed with the CCACHE_DIR environment variable.
Scratchbox toolchain build scripts are available in the sb-toolchains source package that is available at the Scratchbox download page for each Scratchbox version.
Scratchbox toolchains are built with the GAR system. It is a mechanism for automating the
compilation and installation of third-party source code. It appears in the form of a tree of directories containing
Makefiles and other ancillary bookkeeping files (such as installation manifests and checksum lists).
The toolchains can currently be built with two different systems. The old one uses the sb-toolchains/meta/target-kit directory, while the new system uses the sb-toolchains/meta/gcc-glibc directory.
The old system allows only selecting the target architecture, compiler name and the build directory. It doesn't allow selecting the source packages that are used or the patches that are applied. The new toolchain build system is far more flexible. It uses configuration files for building toolchains. The configuration file is passed to the meta makefile as a parameter.
The new build system uses separate build directories for separate toolchain build phases. This makes it more flexible to change build components in the toolchain build, e.g. changing gcc's configuration arguments without breaking the build system for other toolchains. This would be done by creating a new build directory under the sb-toolchains/cc directory and changing the CC_DIR variable in the config file.
The sb-toolchains source package is also used to build arch tools and device tools, which are target dependent. The build instructions in this document automatically build all of them; there is currently no documented way to disable this.
Toolchains have to be built with an already existing compiler. Scratchbox has the HOST target for this. HOST's
host-gcc toolchain is for compiling binaries for the host. It's configured to make the compiled binaries use the
Scratchbox's host libraries.
When building toolchains you need write privileges to the /scratchbox/compilers and /scratchbox/device_tools directories. The scratchbox source package contains the scripts/permhack script for
changing the privileges, but you can also do it like this:
The following will explain toolchain building with the new system.
The toolchain configuration file includes information such as the compiler name, target architecture, softfloat support
and all source package and patch information. For example see the existing configuration files in the sb-toolchains/meta/gcc-glibc directory.
For each toolchain component there are several variables, e.g. for the C library:
version number of the C-library
the build directory of the C-library headers
the build directory of the C-library
the C-library source tar
the patches that are applied to the source tar
a special script that can be used to apply complex patches
The sb-toolchains/packaging/create_packages script generates deb and rpm packages from the compiled toolchains. If
you have made changes to the packages then fix the version numbers in the sb-toolchains/packaging/common.var file. At
the moment the script supports only creating arm, ppc and x86 toolchain packages.
For packaging one toolchain, use the ./build_one_toolchain script. Run it outside Scratchbox so it can also create the packages.
All created compilers should at least be tested to compile, link and run a simple test program. The test program should be tested to link both statically and dynamically. These tests should be done with both a C and a C++ program.
The sb-toolchains/test_tools/test_scripts/reg_tests.sh script can be used for these tests.
These tests are a collection of tests for the C and C++ frontends of gcc. The tests can be found in gcc's testsuite directory.
The results of running the testsuite are various *.sum and *.log files in the testsuite subdirectories. The *.log files contain a detailed log of the compiler invocations and the corresponding results; the *.sum files summarise the results. These summaries contain status codes for all tests:
PASS: the test passed as expected
XPASS: the test unexpectedly passed
FAIL: the test unexpectedly failed
XFAIL: the test failed as expected
UNSUPPORTED: the test is not supported on this platform
ERROR: the testsuite detected an error
WARNING: the testsuite detected a possible problem
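A quick way to get an overview of a *.sum file is to tally these status codes; a minimal sketch (the sample lines below are made up for illustration):

```python
from collections import Counter

# The result codes that appear at the start of lines in gcc's *.sum summaries.
CODES = {"PASS", "XPASS", "FAIL", "XFAIL", "UNSUPPORTED", "ERROR", "WARNING"}

def tally(lines):
    """Count result codes in lines formatted like 'PASS: testname ...'."""
    counts = Counter()
    for line in lines:
        code = line.split(":", 1)[0].strip()
        if code in CODES:
            counts[code] += 1
    return counts

sample = [
    "PASS: gcc.c-torture/execute/sample-1.c compilation",
    "XFAIL: gcc.dg/special/sample-weak.c",
    "PASS: g++.dg/template/sample.C (test for excess errors)",
]
print(tally(sample))  # Counter({'PASS': 2, 'XFAIL': 1})
```

When comparing a new toolchain against a known-good one, the interesting lines are usually the FAIL and XPASS entries, since those mark deviations from the expected baseline.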
The Scratchbox toolchains should be tested with these tests. These tests can be run with reg_tests.sh (see Section 4.1) or manually as described in GCC Testing.
About This Report
The goal of the Yet Another Haskell Tutorial is to provide a complete introduction to the Haskell programming language. It assumes no knowledge of the Haskell language or familiarity with functional programming in general. However, general familiarity with programming concepts (such as algorithms) will be helpful. This is not intended to be an introduction to programming in general; rather, to programming in Haskell. Sufficient familiarity with your operating system and a text editor is also necessary (this report only discusses installation and configuration on Windows and *nix systems; other operating systems may be supported – consult the documentation of your chosen compiler for more information on installing on other platforms).
What is Haskell?
Haskell is called a lazy, pure functional programming language. It is called lazy because expressions which are not needed to determine the answer to a problem are not evaluated. The opposite of lazy is strict, which is the evaluation strategy of most common programming languages (C, C++, Java, even ML). A strict language is one in which every expression is evaluated, whether the result of its computation is important or not. (This is probably not entirely true, as optimizing compilers for strict languages often do what's called "dead code elimination" – this removes unused expressions from the program.) It is called pure because it does not allow side effects. (A side effect is something that affects the "state" of the world. For instance, a function that prints something to the screen is said to be side-effecting, as is a function which affects the value of a global variable.) Of course, a programming language without side effects would be horribly useless; Haskell uses a system of monads to isolate all impure computations from the rest of the program and perform them in a safe way (see Chapter 9 for a discussion of monads proper or Chapter 5 for how to do input/output in a pure language).
Haskell is called a functional language because the evaluation of a program is equivalent to evaluating a function in the pure mathematical sense. This also differs from standard languages (like C and Java), which evaluate a sequence of statements, one after the other (this is termed an imperative language).
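Laziness can be loosely illustrated even outside Haskell. Python evaluates eagerly by default, but generators delay each step until a value is actually demanded, which roughly mimics call-by-need evaluation:

```python
log = []  # records which computations actually ran

def expensive(label):
    # In a strict language this would run for every element, needed or not.
    log.append(label)
    return label

def values():
    # Each yield is suspended until the consumer asks for the next value.
    yield expensive("a")
    yield expensive("b")
    yield expensive("c")

vals = values()              # nothing has been evaluated yet
first = next(vals)           # only "a" is computed
print(first, log)            # "b" and "c" are never evaluated
```

This is only an analogy: Haskell's laziness applies to every expression by default and memoizes results, whereas in Python it must be opted into explicitly.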
The BioWeatherMap initiative looks to uncover insight into the geographic and temporal distribution of microbial life through a distributed, volunteer environmental sensing effort. The intent is to gather environmental samples from around the world that will be DNA sequenced for ongoing discovery and surveillance.
This effort teamed with Autodesk to explore the visualization aspects of the data collections at the TED Global event this past July, and work is ongoing to help uncover new insights.
The goal is to address fundamental questions such as “How diverse is the microbial life around us?”, “How do microbial communities in different habitats change over time?” and “How can advanced sequencing technologies best be utilized to address issues in biodiversity, public health, and biosurveillance?”
Uncovering the diversity of microbial organisms will reveal unknown details of the life around us, with as much as 70% of organisms said to be microscopic. Understanding the makeup and spread of these organisms will aid public health efforts to track, report and monitor emerging disease threats. This mapping also has implications for natural resources, with new insights into the impact of microbial diversity on soil and crop yields.
Atomic Number: 14
Atomic Weight: 28.0855
Discovery: Jons Jacob Berzelius 1824 (Sweden)
Electron Configuration: [Ne]3s23p2
Word Origin: Latin: silicis, silex: flint
Properties: The melting point of silicon is 1410°C, boiling point is 2355°C, specific gravity is 2.33 (25°C), with a valence of 4. Crystalline silicon has a metallic grayish color. Silicon is relatively inert, but it is attacked by dilute alkali and by halogens. Silicon transmits over 95% of all infrared wavelengths (1.3–6.7 µm).
Uses: Silicon is one of the most widely used elements. Silicon is important to plant and animal life. Diatoms extract silica from water to build their cell walls. Silica is found in plant ashes and in the human skeleton. Silicon is an important ingredient in steel. Silicon carbide is an important abrasive and is used in lasers to produce coherent light at 456.0 nm. Silicon doped with gallium, arsenic, boron, etc. is used to produce transistors, solar cells, rectifiers, and other important solid-state electronic devices. Silicones range from liquids to hard solids and have many useful properties, including use as adhesives, sealants, and insulators. Sand and clay are used to make building materials. Silica is used to make glass, which has many useful mechanical, electrical, optical, and thermal properties.
Sources: Silicon makes up 25.7% of the earth's crust, by weight, making it the second most abundant element (exceeded by oxygen). Silicon is found in the sun and stars. It is a principal component of the class of meteorites known as aerolites. Silicon is also a component of tektites, a natural glass of uncertain origin. Silicon is not found free in nature. It commonly occurs as the oxide and silicates, including sand, quartz, amethyst, agate, flint, jasper, opal, and citrine. Silicate minerals include granite, hornblende, feldspar, mica, clay, and asbestos.
Preparation: Silicon may be prepared by heating silica and carbon in an electric furnace, using carbon electrodes. Amorphous silicon may be prepared as a brown powder, which can then be melted or vaporized. The Czochralski process is used to produce single crystals of silicon for solid-state and semiconductor devices. Hyperpure silicon may be prepared by a vacuum float zone process and by thermal decompositions of ultra-pure trichlorosilane in an atmosphere of hydrogen.
Element Classification: Semimetallic
Isotopes: There are known isotopes of silicon ranging from Si-22 to Si-44. There are three stable isotopes: Si-28, Si-29, Si-30.
Density (g/cc): 2.33
Melting Point (K): 1683
Boiling Point (K): 2628
Appearance: Amorphous form is a brown powder; crystalline form has a gray metallic color
Atomic Radius (pm): 132
Atomic Volume (cc/mol): 12.1
Covalent Radius (pm): 111
Ionic Radius: 42 (+4e) 271 (-4e)
Specific Heat (@20°C J/g mol): 0.703
Fusion Heat (kJ/mol): 50.6
Evaporation Heat (kJ/mol): 383
Debye Temperature (K): 625.00
Pauling Negativity Number: 1.90
First Ionizing Energy (kJ/mol): 786.0
Oxidation States: 4, -4
Lattice Structure: Diamond (cubic)
Lattice Constant (Å): 5.430
CAS Registry Number: 7440-21-3
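As a quick consistency check, the Kelvin values listed above agree with the Celsius figures quoted under Properties (1410°C and 2355°C):

```python
def kelvin_to_celsius(k):
    # Celsius = Kelvin - 273.15
    return k - 273.15

print(round(kelvin_to_celsius(1683)))  # 1410  (melting point)
print(round(kelvin_to_celsius(2628)))  # 2355  (boiling point)
```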
- Silicon is the eighth most abundant element in the universe.
- Silicon crystals for electronics must have a purity of no more than one non-silicon atom per billion silicon atoms (99.9999999% pure).
- The most common form of silicon in the Earth's crust is silicon dioxide in the form of sand or quartz.
- Silicon, like water, expands as it changes from liquid to solid.
- Silicon oxide crystals in the form of quartz are piezoelectric. The resonance frequency of quartz is used in many precision timepieces.
References: Los Alamos National Laboratory (2001), Crescent Chemical Company (2001), Lange's Handbook of Chemistry (1952), CRC Handbook of Chemistry & Physics (18th Ed.), International Atomic Energy Agency ENSDF database (Oct 2010)
This is an issue of interest, in as much as carbon in the atmosphere is increasing. A friend of mine, getting her PhD out at Berkeley, wrote this to me the other day: (Talking about the IPCC)
A decade ago, they were pretty conservative about assigning certainty to things, and they continue to be, but better models and more evidence have lead them, and most of the scientific community to accept that man-induced climate change is real. Now, it is acceptable to think about quantifying future climate changes. With the earth being a complex system and computer power not being quite good enough to run more complex models, the goal is to find as much consensus as possible.Nicole is one of those people whose opinions I would take to the bank. As she notes, climate change is still a great big mystery. In my mind, to borrow from Sir Winston Churchill, "It is a riddle, wrapped in a mystery, inside an enigma; ...."
What I usually say to people is that no matter what someone thinks about what is going on from year to year in climate currently, just have them think about what happens when you put a blanket on. It is harder for heat to escape from around your body, so you warm up. There is no question that the amount of Carbon Dioxide that we have put, and most likely will continue to put, into the atmosphere serves as an additional blanket. Take pretty much any physical model of the earth, add some carbon dioxide, and it will heat up. Tens of complex global models also agree with this. Another simple effect of warming is that the ocean, if warmed, will thermally expand. This alone will raise sea level. So, taking this as something we do know, we then have to answer the question, how does this extra warming affect the world and by what magnitude.
So to answer your questions, of course, for some areas, climate change could be good. I heard that people are starting to think about expanding vineyards in the north of England, for example, in anticipation of shifting climate. And, I think any reduction in carbon dioxide or other greenhouse gases will help increase certainty in the future. It's just something we've let run a little too wild in my opinion. But, no, I don't think the answers can really be quantified yet. I think we can count on some more extreme climate that may need to be dealt with.
Over at Powerline is this posting, "A Scientific Theory is Judged by its Predictive Power". As Nicole noted, folks were thinking about expanding vineyards in England. In fact, ten years ago there was this prediction:
However, the warming is so far manifesting itself more in winters which are less cold than in much hotter summers. According to Dr David Viner, a senior research scientist at the climatic research unit (CRU) of the University of East Anglia, within a few years winter snowfall will become "a very rare and exciting event".Note the pedigree of Dr Viner. CRU (Climatic Research Unit) at the University of East Anglia. These are the chaps who were revealed to have so arrogantly tried to shut up their backsliding colleagues last year in "Climategate".
"Children just aren't going to know what snow is," he said
These days England is experiencing record low temperatures and Europe itself is having a hard time with the cold weather.
This cold weather could indicate that the "Atlantic Conveyor Belt" has broken down, or maybe the "Gulf Stream" isn't as powerful as it once was. However, in March, NASA said that if anything, the Atlantic Conveyor Belt had sped up a bit.
I remember from back when I was young, in 1980, while stationed at Clark Air Base, the 26th Aggressor Squadron had as its theme song "Oh Lord It's Hard to be Humble." Since the job of the pilots of that squadron was to teach other fighter pilots how to kill them (the Aggressors), and they were very good in air-to-air combat, they had to check and make sure their "humble" was OK, so they didn't let their egos get in the way of their mission, teaching others how to kill them.
Would that some of those scientists who talk about climate change were humble and thus helpful.
What would also be helpful is the opening of a discussion on what would be the best climate for humans. Or better put, why is this the best of all possible worlds, climate wise?
Things I know:
- Carbon Dioxide is building in the atmosphere.
- We are too dependent on foreign oil
- The climate is likely to change.
- Lots of developing nations (e.g., China) want to get theirs before we destroy the world economy in order to prevent heavy snowfall in Europe.
Things I don't know:
- Where climate change is going.
I am, however, willing to look at reasonable global solutions that will muster the agreement of 200 nations and will result in economic development for those in the world looking for economic development. We still have folks out there who deserve an opportunity for meaningful work and a decent diet.
Regards — Cliff
Big Questions for Astro Observatory (ASTRO 1 & 2)
In order to understand how the Universe has changed from its initial simple state following the Big Bang (only cooling elementary particles like protons and electrons) into the magnificent Universe we see as we look at the night sky, we must understand how stars, galaxies and planets are formed.
Following the Big Bang and the gradual cooling of the Universe the primary constituents of the cosmos were the elements hydrogen and helium. Even today, these two elements make up 98% of the visible matter in the Universe. Nevertheless, our world and everything it contains—even life itself—is possible only because of the existence of heavier elements such as carbon, nitrogen, oxygen, silicon, iron, and many, many others. How long did it take the first generations of stars to seed our Universe with the heavy elements we see on Earth today? When in the history of the Universe was there a sufficient supply of heavy elements to allow the formation of prebiotic molecules and terrestrial-like planets upon which those molecules might combine to form life?
These reactions are explained in Maitland Jones 2nd ed., §9.1
In this four-dimensional example we show a general addition reaction: the chlorohydrin formation of E-2-pentene.
The first step is the formation of a chloronium ion, followed by the attack of water from the opposite side of the molecule. So this is an example of an anti addition.
The remaining pages are about the stereochemistry of the products formed.
Polymerase chain reaction (PCR) is now such a fundamental technique in the biotechnology laboratory that L. A. Pray wrote in 2004: "PCR is to biology what petroleum is to transportation." PCR is a basis for multiple ways — ranging from DNA fingerprinting and sequencing to mutagenesis — to analyze and detect nucleic acids. Real-time PCR is an extremely valuable analytical tool that not only reveals what DNA is present but how much. Real-time PCR is becoming the most widely used PCR application for genomic and gene expression analysis in research laboratories; it also is rapidly establishing itself as a technique in the clinical diagnostic lab. The need for faster, more accurate, and more economical systems with a high throughput has fueled the popularity of real-time PCR.
Using genomic DNA as the template for amplification, real-time PCR can be used in infectious disease diagnostics to rapidly determine levels of specific pathogens in various tissues. The molecular diagnostic laboratory also relies heavily on real-time PCR for detecting aneuploidies and diagnosing other genetic diseases. In microbiology laboratories, real-time PCR can be used to detect and quantitate various microbial contaminants in environmental samples.
The GMO Investigator real-time PCR starter kit is designed for teaching students the principles of PCR and its use in testing foods for the presence of genetic modifications. This kit can be used to quantitate the amount of DNA in a plant and to compare it with the level of genetically modified DNA that is recovered from each food sample. It is even possible to determine what fraction of a food product is made with genetically modified ingredients, in the same manner as standard testing laboratories do.
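Relative quantitation in real-time PCR is commonly done with the comparative Ct (delta-delta-Ct) method. The source does not state which analysis the kit prescribes, so this is only an illustrative sketch of that standard calculation, with made-up Ct values:

```python
def fold_change(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
    """Comparative-Ct (delta-delta-Ct) estimate, assuming ~100% PCR efficiency."""
    delta_sample = ct_target_sample - ct_ref_sample      # normalize to a reference gene
    delta_control = ct_target_control - ct_ref_control
    return 2 ** -(delta_sample - delta_control)

# A sample whose target crosses the detection threshold 2 cycles earlier
# (after normalization) contains roughly 4x as much of that sequence.
print(fold_change(24.0, 20.0, 26.0, 20.0))  # 4.0
```

The 2^(-ΔΔCt) form follows from each PCR cycle ideally doubling the amplicon, which is why efficiency close to 100% is an assumption of the method.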
Pray LA (2004). Consider the Cycler. The Scientist 18, 34–37.
Ernest D. Courant
particle accelerator development
...required—the largest weighs approximately 40,000 tons. A means of increasing the energy without increasing the scale of the machines was provided by a demonstration in 1952 by Livingston, Ernest D. Courant, and H.S. Snyder of the technique of alternating-gradient focusing (sometimes called strong focusing). Synchrotrons incorporating this principle needed magnets only...
New Zealand Mudsnail (Potamopyrgus antipodarum)
New Zealand mudsnails found in the Duluth-Superior Harbor.
New Zealand mudsnails on a penny to show size
Species and Origin: A tiny snail that reproduces asexually. Native to New Zealand, it was accidentally introduced with imported rainbow trout in Idaho in the 1980s and into the Great Lakes via ballast water from ocean-going ships.
Impacts: Densities can reach 100,000 to 700,000 per square meter. They outcompete species that are important forage for native trout and other fishes and provide little nutrition to fish that eat them.
Status: First discovered in the late 1980s in the Snake, Idaho, and Madison Rivers, they quickly spread to other western rivers. They were discovered in Lake Ontario, and later in Thunder Bay, Lake Superior in 2001. In fall of 2005, they were discovered in the Duluth-Superior harbor. See US map.
Where to look: Look on docks, rocks, and other hard surfaces along the shorelines and bottoms of lakes, rivers, and streams.
Regulatory classification (agency): It is proposed as a prohibited invasive species (DNR), which means import, possession, transport, and introduction into the wild will be prohibited.
Means of spread: They likely spread by attaching to recreational fishing gear, other types of equipment placed in the water, or in fish shipments.
How can you help?
- Inspect and remove visible animals, plants, and mud from waders, recreational fishing equipment, research gear, and other field equipment.
- Rinse everything with 120° F water, or dry equipment for 5 days.
- Report suspected infestations.
The wealth of the waters around the Falkland Islands is well documented with regard to fisheries, and the islands are a world hot-spot for many seabird species. However, little is known about the relative importance of Falkland Islands' waters for marine mammals. There are approximately 25 species occurring in Falkland Islands' waters, although many are restricted to deep oceanic waters, are rarely seen, or are on migration, passing en route to Antarctica or to warmer waters at lower latitudes. Some species are resident in the Falklands, including the tourist-attracting killer whales at Sea Lion Island and the two coastal dolphin species – Commerson's and Peale's dolphins.
Commerson's Dolphin (Cephalorhynchus commersonii)
Dusky Dolphin (Lagenorhynchus obscurus)
Southern Right Whale Dolphin (Lissodelphis peronii)
Hourglass Dolphin (Lagenorhynchus cruciger)
Spectacled Porpoise (Phocoena dioptrica)
Peale's Dolphin (Lagenorhynchus australis)
Southern Right Whale (Eubalaena australis)
Pygmy Right Whale (Caperea marginata)
Minke Whale (Balaenoptera acutorostrata)
Sei Whale (Balaenoptera borealis)
Blue Whale (Balaenoptera musculus)
Fin Whale (Balaenoptera physalus)
Humpback Whale (Megaptera novaeangliae)
Sperm Whale (Physeter macrocephalus)
Long-Finned Pilot Whale (Globicephala melas)
Killer Whale (Orcinus orca)
Arnoux's Beaked Whale (Berardius arnuxii)
Southern Bottlenose Whale (Hyperoodon planifrons)
Gray's Beaked Whale (Mesoplodon grayi)
Hector's Beaked Whale (Mesoplodon hectori)
Strap-Toothed Whale (Mesoplodon layardii)
Cuvier's Beaked Whale (Ziphius cavirostris)
Andrew's Beaked Whale (Mesoplodon bowdoini)
The Peale's dolphin is the most numerous and most frequently encountered cetacean around the Falkland Islands and is seen in groups of 1 to 15 animals. It is present around the Falklands throughout the year and is restricted to shelf waters less than 200 m deep. A continuous distribution from the Falklands to South America seems probable, supported by transect observations recording the species from the Falklands coast to Chile. The species is inquisitive and frequently approaches vessels to bow-ride, which may make it more visible during surveys. Peale's dolphins are also seen in inshore waters, where they often feed along kelp beds; in this environment they can overlap with Commerson's dolphins.
In the Falkland Islands, Commerson's dolphins are relatively commonly sighted in inshore coastal waters all year, particularly in sheltered waters of less than 10m in depth, such as bays, harbours, river mouths and around kelp beds. Almost all of the records of this species made during at-sea surveys of 1998 - 2000 are within 10km of the shore and no records were made further than 25km offshore. They appear to be opportunistic bottom feeders, taking mysid shrimps, fish and squid.
Locally, the Commerson's dolphin is called the Puffing pig.
The fin whale is classified as Endangered. Like the blue whale, the fin whale was severely reduced worldwide by modern commercial whaling. Fin whales were once commonly sighted in the Falkland Islands as they migrated from the coast of Brazil to their summer feeding grounds in the Antarctic. The whaling records from New Island (West Falklands) during 1900 – 1905 are mostly of fin whales but it is possible that they were mistaken for sei whales. Today, fin whales are not often sighted in Falkland Islands waters. During at-sea surveys in 1998 – 2000, 57 fin whales were sighted particularly between November and January and were most commonly sighted in waters greater than 200m on the continental slope and adjacent to the Burdwood Bank.
The sei whale is classified as Endangered. Sei whales were heavily exploited in Southern Hemisphere whaling grounds once the stocks of blue and fin whales had been reduced. The extent to which stocks have recovered since then is uncertain because there has been relatively little research in recent years. Sei whales passed the Falkland Islands on migration, which supported a whaling industry at New Island in the early part of the 20th century. Sei whales were recorded on 31 occasions in groups of 1-3 individuals during the at-sea surveys of 1998 - 2000. They were most common during the austral summer between November and April, on the Patagonian shelf and in shallower waters to the east of the Falklands.
The southern minke whale has a circumpolar distribution from Antarctica to almost equatorial regions. In the Falkland Islands, minke whales were recorded on 60 occasions during at-sea surveys of 1998– 2000, usually alone and mostly during the austral summer over the Patagonian shelf, around East Falklands and to the northwest of the Falklands zone. There is some scientific whaling of minke whales in Antarctic waters by Japanese whaling fleets, but this occurs in the Pacific/Indian sectors of the Southern Ocean and is unlikely to involve the stock of minke whales that migrate through the waters of the Falkland Islands.
The sperm whale has a global distribution, and individuals found north and south of the equator are thought to be from separate breeding stocks, with seasonal movement from the equator to the polar regions. Sperm whales sighted in Falkland Islands' waters are most likely to be males, as most females and their calves remain in warmer waters at lower latitudes. During the at-sea surveys of 1998 - 2000, 28 individuals were sighted on 21 occasions throughout the year, particularly in waters deeper than 200 m around the Burdwood Bank and in the extreme north. Sperm whales have stranded in the Falkland Islands on five occasions, the last being at Race Point Farm in 2011. There is a relatively high level of interaction with sperm whales in the toothfish longline fishery, with sperm whales often appearing once line hauling commences. The Falkland Islands Fisheries Department adopted a new method of fishing, which involves protecting the hooks in a net sleeve to prevent sperm whales targeting the toothfish.
Seven species of beaked whale have been recorded stranded in the Falkland Islands, with only the southern bottlenose whale recorded as a live sighting. Most sightings were between September and February in deep oceanic waters off East Falkland during the at-sea surveys of 1998 - 2000. The biology, distribution and abundance of most beaked whale species are not well known. Most species appear to have circumpolar distributions from Antarctica to the low latitudes. The frequency of strandings in the Falkland Islands suggests that some species, such as Gray's beaked whale and the strap-toothed whale, are more common than Andrew's beaked whale and Hector's beaked whale. Most beaked whales normally inhabit deep ocean waters (>2,000 m) or continental slopes (200 - 2,000 m), where they feed on deep-water mesopelagic squid and some fish species. The anatomy and behaviour of beaked whales make them very sensitive to anthropogenic noise such as sonar and airgun arrays, which may increase strandings.
The killer whale is the largest member of the dolphin family and has a worldwide distribution in both coastal and oceanic waters. Behaviour varies within its range but killer whales often form strong family groups, with pods specialising in one prey. During the summer months in the Falkland Islands when penguins and pinnipeds are breeding, killer whales are commonly sighted in coastal waters and there appears to be at least one resident pod to the southeast of the archipelago around Sea Lion Island and Beauchêne Island. Killer whales seen in the Falkland Islands fit the description of the A-type whale. On Sea Lion Island, the killer whales use ambush and shallow water hunting techniques along rocky outcrops and beaches used by elephant seal pups and juveniles.
Photos - Killer whales at Sea Lion Island stalking elephant seal pups (left) and a rare sighting of an individual breaching off Sea Lion Island
The long-finned pilot whale has a worldwide distribution in both coastal and oceanic waters. In the Falkland Islands, it was one of the more frequently recorded cetacean species during the 1998 - 2000 at-sea surveys, with 27 records of 872 animals in pods of between 2 and 200 whales, particularly in water depths greater than 200 m and during the winter months. Long-finned pilot whales are often seen in association with other cetacean species, particularly southern right whale dolphins and hourglass dolphins. The long-finned pilot whale has a propensity to strand, and it is the most commonly stranded whale in the Falkland Islands. Five hundred and seventy-five long-finned pilot whales were sampled from six mass strandings of between 27 and 273 animals between 2000 and 2006. The pilot whales feed mainly on the mesopelagic squid Moroteuthis ingens, with hoki (Macruronus magellanicus) being of secondary importance, especially for large males. | <urn:uuid:46d951a1-93bc-40fe-9e26-1e1869979db0> | 3.375 | 1,968 | Knowledge Article | Science & Tech. | 31.84269 |
The intrinsic angular momentum of a particle is known as its spin. The fermions all have spin ½, whereas the bosons have spin one. The component of the spin in the direction of motion of a particle is called helicity. This means that the fermions can have helicity ±½.
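Helicity as described here has a compact standard definition (the formula below is standard quantum mechanics, not taken from the original text):

```latex
% Helicity: projection of the spin onto the direction of motion
h \;=\; \frac{\vec{S}\cdot\vec{p}}{|\vec{p}|}
% For a spin-1/2 fermion this gives h = \pm\tfrac{1}{2}\hbar;
% a massless spin-1 boson has only h = \pm\hbar (no h = 0 state).
```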
Massless particles may exist in just one helicity state. Neutrinos only exist in negative helicity states, known as left handed states, and anti-neutrinos in positive helicity, right handed states.
The W and Z bosons have spin 1, so they may have helicity ±1 or zero. As the photon and gluon are massless, they cannot exist in a helicity-zero state. However, virtual photons and gluons do have mass, so they may have the zero-helicity state.
Particles with helicity ±1 are said to be transversely polarised and those with zero helicity are longitudinally polarised. | <urn:uuid:c0e6b60c-6e43-469d-9714-b43ef10f8e2f> | 3.859375 | 193 | Knowledge Article | Science & Tech. | 36.566083 |
A silicon chip levitates individual atoms used in quantum information processing. Photo: Curt Suplee and Emily Edwards, Joint Quantum Institute and University of Maryland. Credit: Science.
These advances could enable the creation of immensely powerful computers as well as other applications, such as highly sensitive detectors capable of probing biological systems. “We are really excited about the possibilities of new semiconductor materials and new experimental systems that have become available in the last decade,” said Jason Petta, one of the authors of the report and an associate professor of physics at Princeton University.
Petta co-authored the article with David Awschalom of the University of Chicago, Lee Basset of the University of California-Santa Barbara, Andrew Dzurak of the University of New South Wales and Evelyn Hu of Harvard University.
Two significant breakthroughs are enabling this forward progress, Petta said in an interview. The first is the ability to control quantum units of information, known as quantum bits, at room temperature. Until recently, temperatures near absolute zero were required, but new diamond-based materials allow spin qubits to be operated on a table top, at room temperature. Diamond-based sensors could be used to image single molecules, as demonstrated earlier this year by Awschalom and researchers at Stanford University and IBM Research (Science, 2013).
The second big development is the ability to control these quantum bits, or qubits, for several seconds before they lapse into classical behavior, a feat achieved by Dzurak’s team (Nature, 2010) as well as Princeton researchers led by Stephen Lyon, professor of electrical engineering (Nature Materials, 2012). The development of highly pure forms of silicon, the same material used in today’s classical computers, has enabled researchers to control a quantum mechanical property known as “spin”. At Princeton, Lyon and his team demonstrated the control of spin in billions of electrons, a state known as coherence, for several seconds by using highly pure silicon-28.
Quantum-based technologies exploit the physical rules that govern very small particles — such as atoms and electrons — rather than the classical physics evident in everyday life. New technologies based on "spintronics", which uses electron spin rather than the electron charge used in today's devices, could be much more powerful than current technologies.
In quantum-based systems, the direction of the spin (either up or down) serves as the basic unit of information, which is analogous to the 0 or 1 bit in a classical computing system. Unlike in our classical world, an electron spin can assume both 0 and 1 at the same time, a feat called superposition, which greatly enhances the ability to do computations.
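That 0-and-1-at-once behaviour can be sketched numerically. The snippet below is my own minimal illustration of textbook quantum mechanics (the amplitudes and the Born measurement rule), not code from the article:

```python
import math

# One spin qubit as a pair of complex amplitudes (a, b):
#   |psi> = a|0> + b|1>,  with |a|^2 + |b|^2 = 1
# "Spin up" plays the role of the classical 0, "spin down" of the 1.

def measure_probabilities(a, b):
    """Born rule: the chance of reading out 0 or 1 when the qubit is measured."""
    return abs(a) ** 2, abs(b) ** 2

# A classical-like state: definitely 0
print(measure_probabilities(1, 0))              # (1, 0)

# Equal superposition: "both 0 and 1 at the same time"
s = 1 / math.sqrt(2)
p0, p1 = measure_probabilities(s, s)
print(round(p0, 3), round(p1, 3))               # 0.5 0.5
```

On measurement the superposition yields 0 or 1 with equal probability, which is what distinguishes a qubit from a classical bit.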
A remaining challenge is to find ways to transmit quantum information over long distances. Petta is exploring how to do this with collaborator Andrew Houck, associate professor of electrical engineering at Princeton. Last fall in the journal Nature, the team published a study demonstrating the coupling of a spin qubit to a particle of light, known as a photon, which acts as a shuttle for the quantum information.
Yet another remaining hurdle is to scale up the number of qubits from a handful to hundreds, according to the researchers. Single quantum bits have been made using a variety of materials, including electronic and nuclear spins, as well as superconductors.
Some of the most exciting applications are in new sensing and imaging technologies rather than in computing, said Petta. “Most people agree that building a real quantum computer that can factor large numbers is still a long ways out,” he said. “However, there has been a change in the way we think about quantum mechanics – now we are thinking about quantum-enabled technologies, such as using a spin qubit as a sensitive magnetic field detector to probe biological systems.”
Awschalom, David D., Bassett, Lee C. Dzurak, Andrew S., Hu, Evelyn L., and Petta, Jason R. 2013. Quantum Spintronics: Engineering and Manipulating Atom-Like Spins in Semiconductors. Science. Vol. 339 no. 6124 pp. 1174–1179. DOI: 10.1126/science.1231364
The research at Princeton University was supported by the Alfred P. Sloan Foundation, the David and Lucile Packard Foundation, US Army Research Office grant W911NF-08–1-0189, DARPA QuEST award HR0011-09–1-0007 and the US National Science Foundation through the Princeton Center for Complex Materials (DMR-0819860) and CAREER award DMR-0846341.
Catherine Zandonella | Source: EurekAlert!
Further information: www.princeton.edu
| <urn:uuid:42277280-ecde-41c4-82b7-e96e19e19c19> | 3.671875 | 1,582 | Content Listing | Science & Tech. | 41.220231 |
Oh boy! It’s time for the last law.
Newton’s Third Law of Motion states that for every action there will be an equal and opposite reaction. In other words:
Newton’s Third Law of Motion Cake: If you push cake, it pushes back.
Now that may seem like a weird idea. How can cake push? It doesn’t even have arms.
Well, let’s look at Newton Bunny. Here he is skating on Galileo’s Pond:
What would happen if we placed a giant chocolate cake right in front of him?
SMACK! The cake hits him.
Now wait a minute! Does he hit the cake or does the cake hit him? The answer is both. It takes two sides to create a reaction. If Newton didn’t hit the cake, he wouldn’t put a dent in the cake. And if the cake didn’t hit him back, he would just somehow pass through it like it was a ghost-cake.
So Newton hits the cake and the cake hits him back. Like this:
Newton is pushing a cake and the cake pushes just as hard back. So the cake stays where it is and so does Newton.
But cake is just cake, and eventually it reaches a point where it can’t push back any harder. What if Newton pushes cake beyond its pushing limit?
Anytime there is an unbalanced force something changes to balance it. The extra force is converted into motion. In Newton’s case, he falls into the cake.
Uh, oh! Newton pushed too hard. He fell into the cake.
So if you push something, it pushes back, otherwise you would fall through it.
Here’s a little review. See if you can fill out these activity sheets:
(Answers: Top – any path is fine, but once the bunny hits the cake he will be forced to stop. Bottom – The bunnies on the left hand side of the page will push the wall over because the bunny is pushing harder than the wall pushes back.)
In my last post I mentioned we would be discussing sneaky forces. These are forces you can’t see. Newton’s Third Law says that every action requires a reaction. So if something changes (reacts), you need an action. According to Newton’s First Law of Motion, if you threw a cake into the air, it would keep sailing up, up, and away. But cake won’t do that. Eventually it comes crashing down to the ground. Splat! Even though you can’t see it, something is pulling the cake down. Do you know what it is? Which of the following sneaky forces do you think is acting on the cake in the air?
Gravity is the reason a cake falls to the ground. Here are some other stories. See if you can identify the sneaky force at work.
1) A cake is sliding along a table. No one touches it, but it comes to a stop anyway. What stopped the cake?
2) Buster Bunny delivered a cake to his cousin in jail. He baked a saw into the cake. A police officer with a magnet took the cake away from him without touching the cake. What force took the cake?
3) A cake resting on ice is placed next to a fan. The fan is turned on, and the cake slides away from the fan. What moved the cake?
Here are the answers:
1) Friction, 2) Magnetism, 3) Wind
So there you have it. Now you know all about Newton’s Three Laws of Motion Cake. Here are the three laws in review:
Newton’s 1st Law of Cake: Cake on a plate will not get up and start moving by itself. If the cake is moving, it will keep going until something stops it.
Newton’s 2nd Law of Cake: If you want to punch a cake harder, you must either hit it with something bigger or punch it faster.
Newton’s Third Law of Cake: If you push cake, it pushes back.
And now for a really nerdy aside: I’m a civil engineer and my husband is a physicist. We were having a discussion/disagreement about Newton’s Third Law. Now my husband is the kindest man I’ve ever met, but he didn’t like the way I was treating Newton’s Third Law, so he said, “Well, you just don’t think about motion because you’re a civil engineer. Civil engineers never do any work.” I had to laugh because he’s right. The whole point of civil engineering is avoiding work… Work is Force multiplied by Distance (W = F×d), and civil engineers build stationary objects. We make things that bend and wiggle but hopefully nothing that moves too much. Movement is the domain of mechanical engineers. So civil engineers build airports and bridges, and mechanical engineers build airplanes and cars. Also here is a free body diagram of cake being eaten by an alligator.
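The no-work joke can be checked against the formula itself. A tiny illustrative calculation (the numbers are invented for the example):

```python
def work(force_newtons, distance_meters):
    """Mechanical work: W = F * d, with the force along the direction of motion."""
    return force_newtons * distance_meters

# A mechanical engineer's car: 500 N applied over 10 m does real work.
print(work(500, 10))        # 5000 (joules)

# A civil engineer's bridge: enormous force, zero displacement...
print(work(500_000, 0))     # 0 -- no work done, just as the joke says
```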
Don’t worry, my husband and I resolved our disagreement and the above is physicist approved.
Have fun with physics! And go smash some cake! | <urn:uuid:725613bc-1f7f-4051-aade-0387799f262e> | 3.3125 | 1,111 | Personal Blog | Science & Tech. | 76.014309 |
Perl 6 Basics Tablet: Revision 9
1st law of language redesign: Everyone wants the colon for their particular syntax.
Basics here doesn't mean easy, but fundamental. That mostly translates to how to format and reformat data (numbers, strings and more).
Please note that any Perl 6 source code is treated as Unicode by default. Also, use strict; and use warnings; are enabled implicitly.
Unless you use blocks, a Perl program executes one statement after another in linear progression. They have to be separated by a semicolon (;), except before and after a closing curly brace, where it is optional.
Spaces and Indentation
Perl doesn't care about indentation. Spaces still carry no meaning in many places, though those places have become fewer.
Like in Perl 5 and many other languages of its league a "#" tells the compiler to ignore the rest of the line.
Converting into numerical context still means: take, from left to right, all digits and other characters up to the first character that clearly doesn't belong to a number definition, and stop there.
A single underscore is allowed only between any two digits in a literal number, like:
$people = 3_456_789; # same as 3456789
0b binary - base 2, digits 0..1
General Radix Form
:10<42> # same as 0d42 or 42
$float = 60.2e23 # becomes automatically 6.02e24
To distinguish them from a division operation, you have to group them with parentheses.
As always, .perl gives you an almost source-code-like formatting, which here results in "3/7". Adding .nude you get "(3/7)", the nude source code.
Heredocs are now normal quoted strings, only with a special delimiter.
Q :to 'EOT';
To make templates in which variables and closures are evaluated, take the normal double quote and just add the adverb for the heredoc delimiter or define with other adverbs what exactly you want to have evaluated.
The .perl method is a built-in Data::Dumper (pretty printer) which gives you structured data the way you write it in Perl source code. | <urn:uuid:3565f3c6-73c5-4fb6-a643-7ce8d08429b3> | 3.125 | 466 | Documentation | Software Dev. | 53.9559 |
|Feb23-04, 02:11 PM||#35|
In condensed matter physics, the ground state of a metal at T = 0 K corresponds to what is known as the "vacuum state" in quantum field theory. In this configuration, the states below the Fermi energy are completely occupied, while the states above the Fermi energy are completely empty.
Now, at finite temperatures, or due to fluctuations, you can have what is known as a single-particle excitation above the Fermi energy. When this occurs, you have an electron in a state above the Fermi energy, and a hole left behind in the filled states below the Fermi energy. But here's the deal - you can describe this new system EITHER by describing the electron that is above the Fermi energy, OR the hole in the filled states below it. [Refer to Mattuck's "A Guide to Feynman Diagrams in the Many-Body Problem"] In other words, you can set your "universe" to be the empty states and consider the presence of electrons as your elementary excitation, or you can shift your universe to be filled with electrons and consider your elementary excitation to be these positive holes. It is somewhat similar to shifting your "gauge", or potential.
In this respect, the holes behave no different than a positive particle in vacuum (i.e if you shift your "vacuum" to be the level of negative electrons). We give it all the attributes of a particle - it has mass (or effective mass more accurately), charge, spin, etc... In fact, in condensed matter, the holes are the "antimatter" equivalent of the excited electron - i.e. they can anhilate to produce energy.
Now, is this nothing more than a mathematical artifact? It isn't. The concept of holes as a valid entity comes into play in many instances beyond just semiconductors. In high-Tc superconductors, the majority of the families of the cuprates are hole-doped! One actually removes electrons from the filled Mott insulator of the copper-oxide plane. The resulting holes behave like any other positively charged particle. In fact, this is the most common description of this family of compounds. Contrast that with the electron-doped cuprates, which generally have lower Tc than their hole-doped counterparts, and you can already tell that there are some real physics differences involved here.
Keep in mind that these concepts, and the questions that have been asked in this thread, make more sense once one studies a little many-body physics. Only then do things like "holes", "excitations", etc., become transparent. It is only within the many-body context that these things have definite meanings.
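The electron-picture versus hole-picture bookkeeping can be sketched with a toy occupation model. This is my own illustration (the state labels and charge units are invented for the example), not anything from the post:

```python
# Toy band: N single-particle states below the Fermi energy.
# Electron picture: track which states are occupied.
# Hole picture: track which states of the filled "vacuum" are empty.
N = 8
occupied = {0, 1, 2, 4, 5, 6, 7}       # state 3 is empty -> one excitation
holes = set(range(N)) - occupied

# Both descriptions give the same total charge (electron charge = -1):
charge_electron_picture = -len(occupied)
charge_hole_picture = -N + len(holes)  # filled "vacuum" plus +1 per hole
print(holes, charge_electron_picture, charge_hole_picture)   # {3} -7 -7
```

Either bookkeeping describes the same physical state, which is the sense in which the hole is as real a quasiparticle as the electron.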
|Feb23-04, 11:39 PM||#36|
I honestly have no idea what you're talking about. What "tone" am I supposed to change? *What* inflammatory attitude? As far as I know, I'm behaving exactly the same as I always do on numerous forums (several of which I moderate.) You'll have to be far more specific if you want me to understand your complaints.
I certainly am pointing out your physics errors in no uncertain terms. Is this what you're really objecting to? If not, then please quote me the specific sentences in my messages which give you problems. Also please tell me which forum rules they violate.
You're obviously not an internet newbie. You should be well aware that "tone" of messages is frequently all in the mind of the reader, not in the message itself, and therefore is not a reliable indicator. If you're sure that I'm misbehaving in some way, you need to make certain that the "tone" is not all in your own head. It's easy: quote me the specific passages where I break the rules of physicsforums.com
I suggest you take a look at this article:
Email lists: flamewars and psychology
Also this one about electric current that I posted earlier:
Which way does "electricity" flow in circuits?
If you have a few hours, take a look at my collected writings:
Or the rest of my site:
amasci.com: the good stuff
William J. Beaty
University of Washington
| <urn:uuid:46d1b657-a76a-4cea-9333-1bfcbe6eacad> | 3.03125 | 1,014 | Comment Section | Science & Tech. | 46.445549 |
One natural disaster we don’t have to worry about in Texas is volcanoes, and after looking at these photos of Mount Etna in Italy we can be glad of that.
Mount Etna rises about two miles above sea level on the island of Sicily and regularly erupts. The most recent series of eruptions began early on February 19, and this is believed to be the largest eruption of Mt. Etna since at least 2000.
The series of eruptions culminated on Saturday night, when the volcano sent fountains of lava more than 800 meters into the air, according to an estimate by Boris Behncke of the Osservatorio Etneo. His photograph of the eruption appears below.
As Erik Klemetti notes, the eruption in the photo below is taller than two Empire State buildings.
Another perspective of Etna’s eruptions appears in the false-color satellite image below that combines shortwave infrared, near-infrared, and green light. It comes via NASA’s Earth Observatory.
This combination of imaging techniques makes it easy to differentiate between fresh lava (red), snow (blue), clouds (white), and forest (purple — just kidding, it’s green).
This is Europe’s tallest active volcano, so enjoy its activity from afar. | <urn:uuid:4c5fdd2e-9a21-4682-a777-6fb031843c1b> | 3.046875 | 271 | Personal Blog | Science & Tech. | 42.030563 |
Back in September 2010, astronomers announced the discovery of a remarkable and exciting planet: it was three times our mass (high, but far closer to Earth conditions than the super-Jupiters usually found) and orbiting in the "Goldilocks zone" of its star… which meant that it could possibly have liquid water on its surface! This achingly earth-like planet made a major buzz, and in fact I used its characteristics to estimate that there could be billions of Earthlike planets in our galaxy.
But there’s just one small, really eensy-teensy problem: the planet may not exist. But it also might. Maybe.
We’re still early in the game here, and there’s a lot going on… but it’s worth peeking a bit deeper. There’s science here, and math, and even some interesting media jiggery-pokery.
We know of more than 500 planets orbiting other stars, and astronomers have a diverse set of tools to find them. The first were discovered by what’s called reflexive motion (a nice animation of this is on the Astrobio.net site); as a planet orbits a star, the planet’s gravity tugs on the star, causing a tiny Doppler shift in the starlight. This is a very small and difficult thing to measure, but techniques improved vastly in the 1990s, and most planets have been discovered this way. The success of this technique has been confirmed by other methods, too, including planetary transits, when the orbiting planet passes in front of the star from our viewpoint, and blocks a little bit of its light. Several planets detected using reflexive motion were confirmed by subsequent transits. We know the method works.
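The size of the reflexive-motion signal follows from momentum conservation. Here is a back-of-the-envelope sketch; the Jupiter/Sun numbers are approximate and are my own illustration, not figures from the article:

```python
# Momentum balance for a circular orbit: m_star * v_star = m_planet * v_planet,
# so the star's reflex speed is the planet's orbital speed scaled by the mass ratio.
def reflex_speed(m_planet, m_star, v_planet):
    return v_planet * m_planet / m_star

M_SUN = 1.989e30        # kg
M_JUPITER = 1.898e27    # kg
V_JUPITER = 13_070.0    # Jupiter's mean orbital speed, m/s

v = reflex_speed(M_JUPITER, M_SUN, V_JUPITER)
print(round(v, 1))      # roughly 12.5 m/s: the tiny Doppler wobble being measured
```

A wobble of order 10 m/s on a star is why the technique only became practical once spectrographs reached meter-per-second precision in the 1990s.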
But like any technique, things get fuzzy when you push it. Gliese 581 is a red dwarf star a mere 20 light years away; it's one of the closest stars in the sky to us. Two different teams of astronomers, one Swiss and one American, have observed the star for a long time, and they both confirm the existence of four planets around the star (more on that in a sec). But one of the teams (Steven Vogt and Paul Butler) claimed they found two more planets: Gliese 581 f and g, with the latter being the planet in question.
Odd planet out
Almost immediately, the planet was called into doubt; the Swiss team re-examined their data and could not be absolutely certain that Gliese 581 g was there, but still gave it a thumbs-up at the 90+% level. That’s not too bad.
Interestingly, not too long after the announcement I was at a meeting with several astronomers, and one noted that Vogt’s team made a big assumption: all the planet orbits were circular. If in fact one of the planets had an elliptical orbit it could set up a false-positive, making it look like another planet was there when it wasn’t. According to Vogt this turns out not to be the case; I contacted him and he let me know that orbital ellipticity was one of the characteristics they modeled as a variable. In other words, their computer model made no assumptions about orbit shape, but in fact the best fits in the end were circular orbits.
Still and all, there have been some questions about the planet’s existence, and I’ve been holding back from posting until something happened. Well, something did: Philip Gregory, an emeritus astronomer with the University of British Columbia, has analyzed both data sets using sophisticated statistical techniques, and he concluded that Gliese 581 g almost certainly wasn’t real. In fact, he says the odds of it being a false alarm are 99.9978%!
So which is it? Is it 90+% certain to be real, or 99.9978% certain it isn’t?
Let me be up front with you: I don’t know. Gregory analyzed the data using Bayesian analysis, a method of looking at the statistical certainty of a set of observations. This is fiendishly complex in practice and to be honest is not something I’m familiar with. However, in his paper, Gregory himself claims that Vogt and Butler underestimated the amount of noise in their data. Vogt disputes this, saying that Gregory adds noise to their data rather arbitrarily. I’ll admit that it seemed odd to me that Gregory would add noise the way he did, but again I’m no expert.
Vogt also notes that how you run the computer model will change whether or not you find the planet. This part interests me, because I’ve run into similar situations myself. If you tell your computer that one of the planets (in this case, Gliese 581 d) has a highly elliptical orbit, then Gliese 581 g disappears: when you calculate the statistics, it’s far more probable that the planet does not exist. But if you keep Gliese 581 d’s orbit circular, Gliese 581 g can be seen in the data. These two different assumptions lead to two different solutions, where one has Gliese 581 g in it and the other doesn’t.
Which one is right? Vogt claims 581 g exists. I won’t go into details (the math gets a bit hairy) but basically he claims that statistically speaking, his solution fits the data better than Gregory’s.
He said/He said
Well, that’s science! Two people disagree, and they make their cases. Vogt’s disagreements with Gregory’s methods are reasonable, in that he can make his case scientifically and mathematically. He may not be correct, but that’s a matter to be hammered out using science and peer review. Given that the claims are pretty specific (methods used, input parameters, statistical measurements), I think this will work itself out pretty rapidly.
However, the media got involved, and then things got a bit sticky.
I was tipped off to this matter with a link to the (Australian) ABC site which wrote about this disagreement. The following passage, I’ll admit, made me cringe a little. Note that the HIRES data are the observations by Vogt and Butler, while HARPS is from the other, Swiss, team:
Dr Steve Vogt says he and his colleagues "stand solidly" by their original findings.
"I have studied [the paper] in detail and do not agree with his conclusions," he says.
Vogt is concerned that Gregory has unfairly manipulated the HIRES data.
"By doing so, he finds a solution that is more consistent with the HARPS data only," he says.
OK, yikes. The word "manipulated" is pretty loaded. It’s easily interpreted as meaning the data are somehow being changed unfairly, and on purpose.
But then I saw an article in the Toronto Star that said this:
The revelation Gregory put forward is being dismissed by Vogt, who was quoted by the Australian Broadcasting Corporation as saying Gregory “manipulated” the numbers.
Egads. That made me cringe a lot. Note this is a second-generation quote; the Star was using something written in the ABC article. The Star continued with this:
"Vogt is not familiar with the Bayesian techniques so he might assume that I am manipulating the data. I attribute that to a lack of awareness on his part," said the soft-spoken Gregory.
Oh my. Well, to me the use of the word "manipulate" would be pretty accusatory in this context coming from a scientist when discussing the work of another, and this is why I initially contacted Vogt. He sent me the email he had sent to the ABC, and the word "manipulate" is nowhere in it. To a layman his email would be strongly worded, but as a scientist I see him attacking Gregory’s work, not the man himself. What he said wouldn’t draw any surprise at all were it said at a scientific conference, for example.
But the Star article actually got a response from Gregory about the "manipulation". That line I quoted above is a bit loaded, in my opinion, right down to the adjective "soft-spoken" used to describe Gregory. It’s almost as if the media were playing up the contention between the two men, trying to frame the story as being personal (with one scientist the aggressor, and the other the defender) as opposed to just a scientific difference of opinion.
Again, I strongly suspect that if Vogt and Gregory got together (or when Gregory’s paper goes through the review process; it’s been submitted but not peer-reviewed yet) this would all get figured out pretty quickly.
[UPDATE: As I was putting the final edits on this, Wired posted a pretty good article about all this.]
To g, or not to g?
So, does Gliese 581 g exist? I can only form an opinion right now based on what I’ve seen, and I don’t like to speculate over much. However, Vogt has good rebuttals to the opposing claims, and the Swiss team of astronomers does seem to back him on the existence of the planet.
What we really need are more and more sensitive observations. That’s going to be the rule and not the exception as we move forward in looking for earth-like planets. They’re small, and move slowly, and make themselves very difficult to detect with our current hardware. But progress moves on, and whether Gliese 581 g exists or not, finding another Earth orbiting another star is only a matter of time. Count on it.
The reference count is important because today's computers have a finite (and often severely limited) memory size; it counts how many different places there are that have a reference to an object. Such a place could be another object, or a global (or static) C variable, or a local variable in some C function. When an object's reference count becomes zero, the object is deallocated. If it contains references to other objects, their reference count is decremented. Those other objects may be deallocated in turn, if this decrement makes their reference count become zero, and so on. (There's an obvious problem with objects that reference each other here; for now, the solution is ``don't do that.'')
Reference counts are always manipulated explicitly. The normal way is to use the macro Py_INCREF() to increment an object's reference count by one, and Py_DECREF() to decrement it by one. The reference count increment is a simple operation.
It is not necessary to increment an object's reference count for every local variable that contains a pointer to an object. In theory, the object's reference count goes up by one when the variable is made to point to it and it goes down by one when the variable goes out of scope. However, these two cancel each other out, so at the end the reference count hasn't changed. The only real reason to use the reference count is to prevent the object from being deallocated as long as our variable is pointing to it. If we know that there is at least one other reference to the object that lives at least as long as our variable, there is no need to increment the reference count temporarily. An important situation where this arises is in objects that are passed as arguments to C functions in an extension module that are called from Python; the call mechanism guarantees to hold a reference to every argument for the duration of the call.
However, a common pitfall is to extract an object from a list and hold on to it for a while without incrementing its reference count. Some other operation might conceivably remove the object from the list, decrementing its reference count and possibly deallocating it. The real danger is that innocent-looking operations may invoke arbitrary Python code which could do this; there is a code path which allows control to flow back to the user from a Py_DECREF(), so almost any operation is potentially dangerous.
A safe approach is to always use the generic operations (functions whose name begins with "PyObject_", "PyNumber_", "PySequence_" or "PyMapping_"). These operations always increment the reference count of the object they return. This leaves the caller with the responsibility to call Py_DECREF() when they are done with the result; this soon becomes second nature.
Page:Popular Science Monthly Volume 44.djvu/697
THE ICE AGE AND ITS WORK.
By ALFRED R. WALLACE, F. R. S.
ERRATIC BLOCKS AND ICE-SHEETS.
IT is little more than fifty years ago that one of the most potent agents in modifying the surface features of our country was first recognized. Before 1840, when Agassiz accompanied Buckland to Scotland, the Lake District, and Wales, discovering everywhere the same indications of the former presence of glaciers as are to be found so abundantly in Switzerland, no geologist had conceived the possibility of a recent glacial epoch in the temperate portion of the northern hemisphere. From that year, however, a new science came into existence, and it was recognized that only by a careful study of existing glaciers, of the nature of the work they now do, and of the indications of the work they have done in past ages, could we explain many curious phenomena that had hitherto been vaguely regarded as indications of diluvial agency. One of the first fruits of the new science was the conversion of the author of Reliquiæ Diluvianæ — Dr. Buckland, who, having studied the work of glaciers in Switzerland in company with Agassiz, became convinced that numerous phenomena he had observed in this country could only be due to the very same causes. In November, 1840, he read a paper before the Geological Society on the Evidences of Glaciers in Scotland and the North of England, and from that time to the present the study of glaciers and of their work has been systematically pursued with a large amount of success. One after another crude theories have been abandoned, facts have steadily accumulated, and their logical though cautious interpretation has led to a considerable body of well-supported inductions on which the new science is becoming firmly established. 
Some of the most important and far-reaching of these inductions are, however, still denied by writers who have a wide acquaintance with modern glaciers; and as several works have recently appeared on both sides of the controversy, the time seems appropriate for a popular sketch of the progress of the glacial theory, together with a more detailed discussion of some of the most disputed points as to which it seems to the present writer that sound reasoning is even more required than the further accumulation of facts. The works referred to are: Do Glaciers Excavate? by Prof. T. G. Bonney, F. R. S. (The Geographical Journal, vol. i, No. 6); The Glacial Nightmare and the Flood, by Sir H. H. Howorth, M. P., F. R. S.; Fragments of Earth Lore, by Prof. James Geikie, F. R. S.;
Comprehensive Description
Biology
Inhabits coastal waters, mostly around coral reefs. Usually seen well above the bottom, frequently in aggregations. Young individuals are usually found over weed beds. Feeds mainly at night (Ref. 9987). Feeds on a combination of plankton and benthic animals including fishes, crustaceans, worms, gastropods and cephalopods. Juveniles feed primarily on plankton (Ref. 9710). Spawning occurs throughout the year, with peaks at different times in different areas (Ref. 26938). Marketed fresh and frozen (Ref. 9987). Has been reared in captivity (Ref. 35420).
Benthic disturbance by fishing gear in the Irish Sea: a comparison of beam trawling and scallop dredging
Kaiser, M.J., Hill, A.S., Ramsay, K., Spencer, B.E., Brand, A.R., Veale, L.O., Prudden, K., Rees, E.I.S., Munday, B.W., Ball, B. and Hawkins, S.J. (1996) Benthic disturbance by fishing gear in the Irish Sea: a comparison of beam trawling and scallop dredging. Aquatic Conservation Marine and Freshwater Ecosystems, 6, (4), 269-285. (doi:10.1002/(SICI)1099-0755(199612)6:4<269::AID-AQC202>3.0.CO;2-C).
Full text not available from this repository.
1. The distribution of effort for the most frequently used mobile demersal gears in the Irish Sea was examined and their potential to disturb different benthic communities calculated. Fishing effort data, expressed as the number of days fished, was collated for all fleets operating in the Irish Sea in 1994. For each gear, the percentage of the seabed swept by those parts of the gear that penetrate the seabed was calculated.
2. For all gears, the majority of fishing effort was concentrated in the northern Irish Sea. Effort was concentrated in three main locations: on the muddy sediments between Northern Ireland and the Isle of Man (otter and Nephrops trawling); off the north Wales, Lancashire and Cumbrian coast (beam trawling); the area surrounding the Isle of Man (scallop dredging).
3. In some areas, e.g. between Anglesey and the Isle of Man, the use of scallop dredges and beam trawls was coincident. A comparative experimental study revealed that scallop dredges caught much less by-catch than beam trawls. Multivariate analysis revealed that both gears modified the benthic community in a similar manner, causing a reduction in the abundance of most epifaunal species.
4. Although beam trawling disturbed the greatest area of seabed in 1994, the majority of effort occurred on grounds which supported communities that are exposed to high levels of natural disturbance. Scallop dredging, Nephrops and otter trawling were concentrated in areas that either have long-lived or poorly studied communities. The latter highlights the need for more detailed knowledge of the distribution of sublittoral communities that are vulnerable to fishing disturbance. ©British Crown Copyright 1996.
Subjects: G Geography. Anthropology. Recreation > GC Oceanography; Q Science > QH Natural history > QH301 Biology
Divisions: University Structure - Pre August 2011 > School of Ocean & Earth Science (SOC/SOES)
Date Deposited: 27 May 2011 12:43
Last Modified: 02 Mar 2012 12:17
Contributors: Kaiser, M.J.; Hill, A.S.; Ramsay, K.; Spencer, B.E.; Brand, A.R.; Veale, L.O.; Prudden, K.; Rees, E.I.S.; Munday, B.W.; Ball, B.; Hawkins, S.J.
Difference b/w int const * and const int *
Please help me understand the difference between int const *p and const int *p — why do we use them, and what is the significance of each?
Last edited by deep725; 02-07-2004 at 02:13 PM.
int const * <--- pointer to const (same as const int *)
int * const <--- const pointer
without boring you with a long lecture on the types of const, there's a simple rule: if the const appears after the *, as in
int * const p;
Then the pointer is const, whereas the object to which it points isn't const. If the const appears before the *, either as
const int * p;
int const * p;
Then the object is const, whereas the pointer isn't.
Const pointers are rather rare, although they have some valid uses. pointers to const objects are much more common and are used in several ways such as passing read only arguments to a function, ensuring that a C-string isn't altered etc.
Why Earthquakes Are Hard to Measure
Earthquakes are very hard to measure on a standard scale of size. The problem is like finding one number for the quality of a baseball pitcher. You can start with the pitcher's win-loss record, but there are more things to consider: earned-run average, strikeouts and walks, career longevity and so on. Baseball statisticians tinker with indexes that weigh these factors (for more, visit the About Baseball Guide).
Earthquakes are easily as complicated as pitchers. They are fast or slow. Some are gentle, others are violent. They're even right-handed or left-handed. They are oriented different ways—horizontal, vertical, or in between (see Faults in a Nutshell). They occur in different geologic settings, deep within continents or out in the ocean. Yet somehow we want a single meaningful number for ranking the world's earthquakes. The goal has always been to figure out the total amount of energy a quake releases, because that tells us profound things about the dynamics of the Earth's interior.
Richter's First Scale
The pioneering seismologist Charles Richter started in the 1930s by simplifying everything he could think of. He chose one standard instrument, a Wood-Anderson seismograph, used only nearby earthquakes in Southern California, and took only one piece of data—the distance A in millimeters that the seismograph needle moved. He worked up a simple adjustment factor B to allow for near versus distant quakes, and that was the first Richter scale of local magnitude ML:
ML = log A + B
A graphical version of his scale is reproduced on the Caltech archives site.
You'll notice that ML really measures the size of earthquake waves, not an earthquake's total energy, but it was a start. This scale worked fairly well as far as it went, which was for small and moderate earthquakes in Southern California. Over the next 20 years Richter and many other workers extended the scale to newer seismometers, different regions, and different kinds of seismic waves.
Later "Richter Scales"
Soon enough Richter's original scale was abandoned, but the public and the press still use the phrase "Richter magnitude." Seismologists used to mind, but not any more.
Today seismic events may be measured based on body waves or surface waves (these are explained in Earthquakes in a Nutshell). The formulas differ but they yield the same numbers for moderate earthquakes.
Body-wave magnitude is
mb = log(A/T) + Q(D,h)
where A is the ground motion (in microns), T is the wave's period (in seconds), and Q(D,h) is a correction factor that depends on distance to the quake's epicenter D (in degrees) and focal depth h (in kilometers).
Surface-wave magnitude is
Ms = log(A/T) + 1.66 logD + 3.30
mb uses relatively short seismic waves with a 1-second period, so to it every quake source that is larger than a few wavelengths looks the same. That corresponds to a magnitude of about 6.5. Ms uses 20-second waves and can handle larger sources, but it too saturates around magnitude 8. That's OK for most purposes because magnitude-8 or greater events happen only about once a year on average for the whole planet. But within their limits, these two scales are a reliable gauge of the actual energy that earthquakes release.
The biggest earthquake whose magnitude we know was in 1960, in the Pacific right off central Chile on May 22. Back then, it was said to be magnitude 8.5, but today we say it was 9.5. What happened in the meantime was that Tom Hanks and Hiroo Kanamori came up with a better magnitude scale in 1979.
This moment magnitude, Mw, is not based on seismometer readings at all but on the total energy released in a quake, the seismic moment Mo (in dyne-centimeters):
Mw = 2/3 log(Mo) - 10.7
This scale therefore does not saturate. Moment magnitude can match anything the Earth can throw at us. The formula for Mw is such that below magnitude 8 it matches Ms and below magnitude 6 it matches mb, which is close enough to Richter's old ML. So keep calling it the Richter scale if you like—it's the scale Richter would have made if he could.
The U.S. Geological Survey's Henry Spall interviewed Charles Richter in 1980 about "his" scale. It makes lively reading.
PS: Earthquakes on Earth simply can't get bigger than around Mw = 9.5. A piece of rock can store up only so much strain energy before it ruptures, so the size of a quake depends strictly on how much rock—how many kilometers of fault length—can rupture at once. The Chile Trench, where the 1960 quake occurred, is the longest straight fault in the world. The only way to get more energy is with giant landslides or asteroid impacts.
Web standards checklist
The term web standards can mean different things to different people. For some, it is 'table-free sites', for others it is 'using valid code'. However, web standards are much broader than that. A site built to web standards should adhere to standards (HTML, XHTML, XML, CSS, XSLT, DOM etc) and pursue best practices (valid code, accessible code, semantically correct code, user-friendly URLs etc).
In other words, a site built to web standards should ideally be lean, clean, CSS-based, accessible, usable and search engine friendly.
• Quality of code
1. Does the site use a correct Doctype?
A doctype (short for 'document type declaration') informs the validator which version of (X)HTML you're using, and must appear at the very top of every web page. Doctypes are a key component of compliant web pages: your markup and CSS won't validate without them
2. Does the site use a Character set?
If a user agent (eg. a browser) is unable to detect the character encoding used in a Web document, the user may be presented with unreadable text. This information is particularly important for those maintaining and extending a multilingual site, but declaring the character encoding of the document is important for anyone producing XHTML/HTML or CSS.
3. Does the site use Valid (X)HTML?
Valid code will render faster than code with errors. Valid code will render better than invalid code. Browsers are becoming more standards compliant, and it is becoming increasingly necessary to write valid and standards compliant HTML.
4. Does the site use Valid CSS?
You need to make sure that there aren't any errors in either your HTML or your CSS, since mistakes in either place can result in botched document appearance.
5. Does the site use unnecessary classes or ids?
6. Is the code well structured?
Semantically correct markup uses html elements for their given purpose. Well structured HTML has semantic meaning for a wide range of user agents (browsers without style sheets, text browsers, PDAs, search engines etc.)
7. Does the site have any broken links?
Broken links can frustrate users and potentially drive customers away. Broken links can also keep search engines from properly indexing your site.
8. How does the site perform in terms of speed/page size?
• Degree of separation between content and presentation
1. Does the site use CSS for all presentation aspects (fonts, colour, padding, borders etc)?
2. Are all decorative images in the CSS, or do they appear in the (X)HTML?
The aim for web developers is to remove all presentation from the html code, leaving it clean and semantically correct.
• Accessibility for users
1. Are "alt" attributes used for all descriptive images?
2. Does the site use relative units rather than absolute units for text size?
3. Do any aspects of the layout break if font size is increased?
4. Does the site use visible skip menus?
5. Does the site use accessible forms?
6. Does the site use accessible tables?
For data tables, identify row and column headers... For data tables that have two or more logical levels of row or column headers, use markup to associate data cells and header cells.
7. Is there sufficient colour brightness/contrasts?
8. Is colour alone used for critical information?
9. Is there delayed responsiveness for dropdown menus (for users with reduced motor skills)?
10. Are all links descriptive (for blind users)?
• Accessibility for devices
1. Does the site work acceptably across modern and older browsers?
2. Is the content accessible with CSS switched off or not supported?
3. Is the content accessible with images switched off or not supported?
4. Does the site work in text browsers such as Lynx?
5. Does the site work well when printed?
6. Does the site work well in Hand Held devices?
7. Does the site include detailed metadata?
8. Does the site work well in a range of browser window sizes?
• Basic Usability
1. Is there a clear visual hierarchy?
2. Are heading levels easy to distinguish?
3. Is the site's navigation easy to understand?
4. Is the site's navigation consistent?
5. Does the site use consistent and appropriate language?
6. Does the site have a sitemap page and contact page? Are they easy to find?
7. For large sites, is there a search tool?
8. Is there a link to the home page on every page in the site?
9. Are links underlined?
10. Are visited links clearly defined?
• Site management
1. Does the site have a meaningful and helpful 404 error page that works from any depth in the site?
2. Does the site use friendly URLs?
3. Do your URLs work without "www"?
While this is not critical, and in some cases is not even possible, it is always good to give people the choice of both options. If a user types your domain name without the www and gets no site, this could disadvantage both the user and you.
Benefits of valid XHTML & CSS
• Increased interoperability. XHTML pages can easily be viewed on wireless devices like PDAs and cell phones.
• Cleaner, more logical markup, allowing better integration with older existing systems.
• Future transition to more advanced technology. Allows future XML technology to be easily integrated in to an existing site.
• Greater accessibility, broadening your potential customer base. Pages will work with screen readers for the visually impaired.
• Bandwidth conservation. Pages are smaller than old HTML designs and will load much faster for slower internet connections.
• Increased readability by search engines
Some black holes may be so ancient that they predate the stars themselves, forming instead in the chaotic first moments after the Big Bang. There might even be some black holes out there from the universe before the Big Bang.
This is one of those stories with a ton of "ifs" and some fairly daring leaps of logic, but nothing about it is actually impossible. So, with those caveats in mind, let's look at the actual theory. Let's assume that the lifespan of the universe is cyclical, beginning with a Big Bang and a period of frenzied expansion, which eventually leads to a similar phase of contraction into what is ultimately the Big Crunch, which then leads to another Big Bang.
Based on what we know about dark energy and the accelerating expansion of our universe, it doesn't seem terribly likely that we're headed for a Big Crunch, but let's assume for the sake of argument that our Big Bang really did follow on from the Big Crunch of a previous universe. A key thing to keep in mind is that a Big Bang and a Big Crunch would look and act almost exactly the same, except of course they happen in opposite directions.
One of the more intriguing outgrowths of thinking about the physics of the Big Bang is the notion of primordial black holes. Unlike regular black holes, which form out of the remnants of collapsed stars, primordial black holes would have formed in the immediate aftermath of the Big Bang itself, when the universe was very small and thus very dense. Parts of this ancient maelstrom would have been particularly dense, and these pockets would have formed into small black holes, which then would have spread throughout the universe.
It's a neat idea, but primordial black holes remain strictly theoretical. But that hasn't stopped physicists Bernard Carr and Alan Coley from taking these two ideas and knitting them together into one spectacular proposal: that there are primordial black holes that formed not in the Big Bang at the beginning of our universe but in the Big Crunch at the end of the one before.
Carr and Coley calculate that black holes of surprisingly reasonable mass could have survived the Big Crunch/Bang - black holes with as much mass as our Sun could have survived the journey from one universe to the next. This means that there really could be objects in our universe that are older than the universe.
But there's a problem, at least if you're interested in some actual hard evidence for all this. We haven't seen a primordial black hole, but astrophysicists have a decent idea of what they should look like - in fact, primordial black holes are one of the dozens of potential candidates to explain the extremely powerful gamma ray bursts that are occasionally seen. The thing is, these pre-universe black holes would look almost exactly the same as regular primordial black holes, and we currently have no way to detect them...again, assuming they exist in the first place.
So for now, black holes from before the Big Bang will probably need to remain just a theory. But hey - it's still one hell of a theory.
What is Marine Biology?
Simply put, marine biology is the study of life in the oceans and other saltwater environments such as estuaries and wetlands. All plant and animal life forms are included from the microscopic picoplankton all the way to the majestic blue whale, the largest creature in the sea—and for that matter in the world.
The study of marine biology includes a wide variety of disciplines such as astronomy, biological oceanography, cellular biology, chemistry, ecology, geology, meteorology, molecular biology, physical oceanography and zoology and the new science of marine conservation biology draws on many longstanding scientific disciplines such as marine ecology, biogeography, zoology, botany, genetics, fisheries biology, anthropology, economics and law.
Like all scientific disciplines, the study of marine biology also follows the scientific method. The overriding goal in all of science is to find the truth. Although following the scientific method is not by any means a rigid process, research is usually conducted systematically and logically to narrow the inevitable margin of error that exists in any scientific study, and to avoid as much bias on behalf of the researcher as possible. The primary component of scientific research is characterization by observations. Hypotheses are then formulated and then tested based on a number of observations in order to determine the degree to which the hypothesis is a true statement and whether or not it can be accepted or rejected. Testing is then often done by experiments if hypotheses can produce predictions based on the initial observations.
The essential elements of the scientific method are iterations and recursions of the following four steps:
Characterization (observations and measurements)
Hypothesis (a theoretical, hypothetical explanation)
Prediction (logical deduction from the hypothesis)
Experiment (test of all of the above)
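A minimal sketch (my construction, not from the article) of this iterate-until-accepted loop, using a toy "experiment" that compares predictions against observations:

```python
# Toy model of the four-step cycle: characterize, hypothesize, predict,
# experiment -- repeating until the hypothesis survives testing.
observations = [3.5, 3.9, 3.6, 3.8]         # characterization by observation

hypothesis = 0.0                            # initial hypothetical explanation
for _ in range(100):                        # iterations and recursions
    prediction = hypothesis                 # logical deduction from hypothesis
    error = sum(o - prediction for o in observations) / len(observations)
    if abs(error) < 1e-9:                   # experiment: prediction holds?
        break
    hypothesis += error                     # revise the hypothesis and repeat

print(f"accepted hypothesis: {hypothesis:.2f}")  # converges to the mean
```

Real research is far less mechanical, of course; the point is only the loop structure of revise-and-retest.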
These steps are all used in the study of marine biology, which includes numerous subfields including:
- Microbiology: The study of microorganisms, such as bacteria, viruses, protozoa and algae, is conducted for numerous reasons. One example is to understand what role microorganisms play in marine ecosystems. For example, bacteria are critical to the biological processes of the ocean, as they comprise 98% of the ocean's biomass, which is the total weight of all organisms in a given volume. Microbiology is also important to our understanding of the food chain that connects plants to herbivorous and carnivorous animals. The first level in the food chain is primary production, which occurs at the microbial level. This is an important biological activity to understand as primary production drives the entire food chain.
Scientists also study marine microbiology to find new organisms that may be used to help develop medicines and find cures for diseases and other health problems.
- Fisheries and Aquaculture: studied to protect biodiversity and to create sustainable seafood sources, given the world's dependence on fish for protein. There are many areas of study in this field.
- The ecology of fisheries includes the study of their population dynamics, reproduction, behavior, food webs, and habitat.
- Fisheries management includes studies on the impact of overfishing, habitat destruction, pollution and toxin levels, and ways to increase populations for sustainability as seafood.
- Aquaculture includes research on the development of individual organisms and their environment. The objective is most often to develop the knowledge needed to cultivate certain species in a designated area in open water or in captivity in order to meet consumer demand. Technological advances have enabled seafood "farms" to produce high-demand products that traditional commercial fisheries cannot meet. This is a controversial area however, and an issue that will become of greater importance as our fish stocks continue to decline.
- Environmental marine biology: includes the study of ocean health. It is important for scientists to determine the quality of the marine environment to ensure water quality is sufficient to sustain a healthy environment. Coastal environmental health is an important area of environmental marine biology so that scientists can determine the impact of coastal development on water quality for the safety of people visiting the beaches and to maintain a healthy marine environment. Pollutants, sediment, and runoff are all potential threats to marine health in coastal areas. Offshore marine environmental health is also studied. For example, an environmental biologist might be required to study the impact of an oil spill or other chemical hazard in the ocean. Environmental biologists also study Benthic environments on the ocean bottom in order to understand such issues as the chemical makeup of sediment, impact of erosion, and the impact of dredging ocean bottoms on the marine environment.
- Deep-sea ecology: advances in the technology of equipment needed to explore the deep sea have opened the door to the study of this largely unknown space in the sea. The biological characteristics and processes in the deep-sea environment are of great interest to scientists. Research includes the study of deep-ocean gases as an alternate energy source; how animals of the deep live in the dark, cold, high-pressure environment; and deep-sea hydrothermal vents and the lush biological communities they support.
- Ichthyology: is the study of fishes, both salt and freshwater species. There are some 25,000+ species of fishes including: bony fishes, cartilaginous fishes, sharks, skates, rays, and jawless fishes. Ichthyologists study all aspects of fish from their classification, to their morphology, evolution, behavior, diversity, and ecology. Many ichthyologists are also involved in the field of aquaculture and fisheries.
- Marine mammalogy: This is the field of interest to most aspiring marine biologists. It is the study of cetaceans (the families of whales and dolphins) and pinnipeds (seals, sea lions, and the walrus). Their behaviors, habitats, health, reproduction, and populations are all studied. These are some of the most fascinating creatures in the sea; therefore, this is an extremely competitive field, and difficult to break into because the competition for research funding is also quite heavy.
One area of research currently being conducted on whales is the impact of military sonar on their health and well-being. The scientific community believes that high frequency sound waves cause internal damage and bleeding in the brains of whales, yet the military denies this claim. Military sonar can also interfere with the animal's own use of sonar for communication and echolocation. More research is needed; however, in recent years science has proven the claims to be valid and the military has begun limiting its use of sonar in specific areas.
- Marine ethology: The behavior of marine animals is studied so that we understand the animals that share the planet with us. This is also an important field for help in understanding how to protect endangered species, or how to help species whose habitats are threatened by man or natural phenomena. The study of marine animal behavior usually falls under the category of ethology because most often marine species must be observed in their natural environment, although there are many marine species observed in controlled environments as well. Sharks are most often studied in their natural habitat for obvious reasons.
Why Study Marine Biology?
Life in the sea has been a subject of fascination for thousands of years. One of the most important reasons for the study of sea life is simply to understand the world in which we live. The oceans cover 71% (and rising) of this world, and yet we have only scratched the surface when it comes to understanding them. Scientists estimate that no more than 5% of the oceans have been explored. Yet, we need to understand the marine environment that helps support life on this planet, for example:
Health of the oceans/planet
Pollution (toxicology, dumping, runoff, impact of recreation, blooms)
Dissolution of carbon dioxide...
Sustainability and biodiversity
Impacts on the food chain...
Research and product development
Alternate energy sources....
How is Marine Biology Studied?
Advances in technology have opened up the ocean to exploration from the shallows to the deep sea. New tools for marine research are being added to the list of tools that have been used for decades such as:
- Trawling - has been used in the past to collect marine specimens for study, though trawling can be very damaging to delicate marine environments and it is difficult to collect samples discriminately. However, when used in the midwater environment, trawls can be very effective at collecting samples of elusive species with a wide migratory range.
- Plankton nets - plankton nets have a very fine weave to catch microscopic organisms in seawater for study.
- Remotely operated vehicles (ROVs) - have been used underwater since the 1950s. ROVs are basically unmanned submarine robots with umbilical cables used to transmit data between the vehicle and researcher for remote operation in areas where diving is constrained by health or other hazards. ROVs are often fitted with video and still cameras as well as with mechanical tools for specimen retrieval and measurements.
- Underwater habitats - the National Oceanic and Atmospheric Administration (NOAA) operates Aquarius, a habitat 20 m beneath the surface where researchers can live and work underwater for extended periods.
- Fiber optics - Fiber optic observational equipment uses LED light (red light illumination) and low light cameras that do not disturb deep-sea life to capture the behaviors and characteristics of these creatures in their natural habitat.
- Satellites - are used to measure vast geographic ocean data such as the temperature and color of the ocean. Temperature data can provide information on a variety of ocean characteristics such as currents, cold upwelling, climate, and warm water currents such as the Gulf Stream. Satellites are also used for mapping marine areas such as coral reefs and for tracking marine life tagged with sensors to determine migratory patterns.
- Sounding - hydrophones, the underwater counterparts of microphones, detect and record acoustic signals in the ocean. Sound data can be used to monitor waves, marine mammals, ships, and other ocean activities.
- Sonar - similar to sounding, sonar is used to find large objects in the water and to measure the ocean's depth (bathymetry). Sound waves travel farther in water than in air, which makes them well suited to detecting underwater echoes.
- Computers - sophisticated computer technology is used to collect, process, analyze, and display data from sensors placed in the marine environment to measure temperature, depth, navigation, salinity, and meteorological data. NOAA implemented computer technology aboard its research vessels to standardize the way this data is managed.
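As a small illustration of the echo-ranging principle behind the sounding and sonar entries above (the 1,500 m/s sound speed is a typical assumed average for seawater, not a figure from this article):

```python
# Echo sounding: depth is half of (sound speed * two-way travel time).
# Real surveys correct the sound speed for temperature, salinity, and
# pressure; 1500 m/s is only an assumed average.
SOUND_SPEED_SEAWATER_MS = 1500.0

def depth_from_echo(round_trip_seconds: float) -> float:
    """Estimated bottom depth in metres from a sonar ping's round trip."""
    return SOUND_SPEED_SEAWATER_MS * round_trip_seconds / 2.0

print(depth_from_echo(4.0))  # a 4 s echo implies a bottom about 3000 m down
```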
Marine Biology versus Biological Oceanography
The difference between the terms "marine biology" and "biological oceanography" is subtle, and the two are often used interchangeably. As mentioned above, marine biology is the study of marine species that live in the ocean and other salt-water environments. Biological oceanography also studies marine species, but in the context of oceanography. So a biological oceanographer might study the impact of cold upwellings on anchovy populations off the coast of South America, where a marine biologist might study the reproductive behavior of anchovies.
Help us continue to share the wonders of the ocean with the world, raise awareness of marine conservation issues and their solutions, and support marine conservation scientists and students involved in the marine life sciences. Join the MarineBio Conservation Society or make a donation today. We would like to sincerely thank all of our members, donors, and sponsors; we simply could not have achieved what we have without you, and we look forward to doing even more.
LLVM (formerly Low Level Virtual Machine) is a compiler infrastructure written in C++; it is designed for compile-time, link-time, run-time, and "idle-time" optimization of programs written in arbitrary programming languages. LLVM was originally implemented for C and C++, but its language-agnostic design (and its success) has since spawned a wide variety of front ends: languages with compilers that use LLVM include Objective-C, Fortran, Ada, Haskell, Java bytecode, Python, Ruby, ActionScript, GLSL, D, and Rust.
The name LLVM was originally an initialism for Low Level Virtual Machine, but the initialism caused widespread confusion because the scope of the project is not limited to the creation of virtual machines. As the scope of LLVM grew, it became an umbrella project that included a variety of other compiler and low-level tool technologies as well, making the name even less apt. As such, the project abandoned the initialism.
Now, LLVM is a brand that applies to the LLVM umbrella project, the LLVM intermediate representation, the LLVM debugger, the LLVM C++ standard library, etc.
In the past there was no explanation for various phenomena involving action at a distance, like magnetism or gravity, that occurred in a vacuum. For this reason the ether was hypothesized as something that would connect things and carry the information from the positive charge to the negative charge (staying within the field of magnetism); in short, it was inconceivable that this attractive force could exist in empty space. At a later stage it was thought to .......... Then, with General Relativity, gravity came to be thought of as a deformation of space-time, by which (for example) the Earth has a fixed course in its orbit around the Sun. Excuse me if I wrote inaccuracies in my introduction, but I did not know how to explain it otherwise. So here is my question: if in the past the ether was introduced because it was inconceivable that a force could be transmitted in a vacuum, why now can the vacuum deform according to General Relativity? If it is empty, what deforms? Do I have to re-introduce the ether as something to deform? I hope no one laughs at this silly question of mine; instead, please help me understand a little better, as far as possible. Luca
Welcome to physics.se .
The luminiferous aether was postulated because of the existence of light as electromagnetic waves. Physicists at the time had studied waves in various media and could not conceive of a wave existing without a medium to carry it, similar to water for waves on the sea, or air for sound waves. So this hypothesized medium was an inertial frame against which everything had some velocity.
The Michelson-Morley experiment was the first to disprove the existence of the luminiferous aether, and several later and more recent experiments have confirmed the result. The disproof shows that there is no preferred inertial frame for electromagnetic waves, since they always move with velocity c in any frame.
The behaviour of matter and electromagnetic waves is consistent with this when described in terms of special relativity, which ensures that the velocity of light is c. Any physics model that is Lorentz invariant assures that this is true. In fact, field theoretical models for particle physics populate the vacuum with virtual pairs of particles, BUT the whole model is Lorentz invariant, so this teeming population of the vacuum is not the luminiferous aether of old.
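As a concrete illustration of that frame independence, here is a small sketch of relativistic velocity addition; the formula w = (u + v)/(1 + uv/c^2) is the standard special-relativistic one:

```python
# Relativistic velocity addition. Newtonian physics would add speeds as
# u + v; Lorentz invariance replaces that with (u + v)/(1 + u*v/c^2),
# so a light signal measured from any moving frame still comes out at c.
C = 299_792_458.0  # speed of light, m/s

def add_velocities(u: float, v: float) -> float:
    return (u + v) / (1.0 + u * v / C**2)

print(add_velocities(0.9 * C, 0.9 * C) / C)  # ~0.9945, still below c
print(add_velocities(C, 0.5 * C) / C)        # ratio 1.0: light stays at c
```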
General relativity equations also respect Lorentz invariance, so even if it is hard to visualize space as an underlying dynamical system, since it is Lorentz invariant it cannot be thought of as a substitute for the ancient luminiferous aether, either.
It is Lorentz invariance, found to be validated experimentally again and again, that disproved the existence of the ether.
In general relativity, it is the geometry of the space itself which is dynamic.
These points illustrate why the introduction of ether is unnecessary. I think 'deform' is not the best word to use when thinking of General Relativity. A better way to think of it is in terms of a geometry of space and time which is non-Euclidean.
I found this NASA graphic illustrating Jason Major's Universe Today article "A Moon With Two Suns", this graphic illustrates the known worlds of the Kepler-47 planetary system. Kepler-47 is of note as it is the first planetary system discovered with multiple planets orbiting two stars. (Last September I noted the discovery of the Kepler-16 planetary system, which has only one planet orbiting two stars.)
Kepler-47c is of note for orbiting in the habitable zone of the system. While it is likely some kind of superterrestrial world or ice giant, Kepler-47c may possess relatively Earth-like moons. NASA is excited.
While the inner planet, Kepler-47b, orbits in less than 50 days and must be a sweltering world, the outer planet, Kepler-47c, orbits every 303 days, putting it in the "habitable zone," where liquid water might exist. But Kepler-47c is slightly larger than Neptune, and hence in the realm of gaseous giant planets, difficult to imagine as suitable for life. That does not preclude the chance that it has a large moon with a solid surface and liquid water lakes or seas. Kepler-16b was likened to Tatooine, Luke Skywalker's home planet in the movie Star Wars, a world with a double sunset. Kepler-47c suggests a different possible scene: our hero standing on a moon, gazing at a double sunset, with a Neptune-class planet rising behind her.
The research team used data from the Kepler space telescope, which measures dips in the brightness of more than 150,000 stars, to search for transiting planets. Using ground-based telescopes at the McDonald Observatory at the University of Texas at Austin, they made crucial spectroscopic observations to determine characteristics of the stars in the binary system which is 4,900 light-years from Earth. They are orbiting each other very fast, eclipsing each other every 7.5 days. One star is similar to the Sun in size, but only 84 percent as bright. The second star is a red dwarf star only one-third the size of the Sun and less than one percent as bright.
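As a rough check on these figures, Kepler's third law relates orbital period to orbital distance. In the sketch below, the combined stellar mass (~1.0 solar mass) and the 49.5-day inner period are my illustrative assumptions, consistent with the description above but not quoted there:

```python
# Kepler's third law in convenient units: a^3 (AU) = M_total (solar
# masses) * P^2 (years). M_TOTAL_SUNS is an assumed value for a roughly
# Sun-sized primary plus a much smaller red dwarf.
M_TOTAL_SUNS = 1.0

def semi_major_axis_au(period_days: float,
                       m_total_suns: float = M_TOTAL_SUNS) -> float:
    p_years = period_days / 365.25
    return (m_total_suns * p_years**2) ** (1.0 / 3.0)

for name, period in [("Kepler-47b", 49.5), ("Kepler-47c", 303.0)]:
    print(f"{name}: a = {semi_major_axis_au(period):.2f} AU")
```

The ~0.9 AU result for Kepler-47c is consistent with a habitable-zone orbit around a slightly sub-solar-luminosity pair.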
At first glance, Kepler-47c seems to have a very eccentric orbit. This would create serious seasonal effects on any moons orbiting Kepler-47c, and on the planet itself.
NASA eClips: Preparing to Launch Ares 1-X
Real World: Preparing to Launch Ares I-X. Preparing for future space missions, NASA is testing the Ares rocket's prototype, Ares I-X. By finding the vehicle's center of gravity, engineers can calculate the exact mass of the vehicle and the forces felt by the astronauts during launch.
Ares I-X Completes a Successful Flight Test NASA's Ares I-X test rocket lifted off Oct. 28, 2009, at 11:30 a.m. EDT from Kennedy Space Center in Florida for a two-minute powered flight. The flight test lasted about six minutes from its launch from the newly modified Launch Complex 39B until splashdown of the rocket's booster stage nearly 150 miles downrange.
The 327-foot-tall Ares I-X test vehicle produced 2.6 million pounds of thrust to accelerate the rocket to nearly 3 g's and Mach 4.76, just shy of hypersonic speed. It capped its easterly flight at a suborbital altitude of 150,000 feet after the separation of its first stage, a four-segment solid rocket booster.
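A few unit conversions help put those figures in metric terms. The conversion factors below are standard; the 295 m/s speed of sound is my assumed representative value for thin high-altitude air (it is about 340 m/s at sea level):

```python
# Metric sanity checks on the Ares I-X figures quoted above.
LBF_TO_N = 4.4482216152605   # pounds-force to newtons
FT_TO_M = 0.3048             # feet to metres

thrust_n = 2.6e6 * LBF_TO_N  # 2.6 million pounds of thrust
apogee_m = 150_000 * FT_TO_M # 150,000 ft suborbital apogee
height_m = 327 * FT_TO_M     # 327-foot-tall vehicle
speed_ms = 4.76 * 295.0      # Mach 4.76 at an assumed 295 m/s sound speed

print(f"thrust ~ {thrust_n / 1e6:.1f} MN")
print(f"apogee ~ {apogee_m / 1000:.1f} km")
print(f"height ~ {height_m:.0f} m")
print(f"speed  ~ {speed_ms:.0f} m/s")
```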
Parachutes deployed for recovery of the booster and the solid rocket motor, which were recovered at sea and will be towed back to Florida by the booster recovery ship, Freedom Star, for later inspection. The simulated upper stage and Orion crew module, and the launch abort system will not be recovered.
The flight test is expected to provide NASA with an enormous amount of data that will be used to improve the design and safety of the next generation of American spaceflight vehicles, which could again take humans beyond low Earth orbit.
Related Mathematics Problems
These problems provide a mathematical introduction to some of the issues related to the Ares I-X launch.
Problem 282: Exploring the Ares 1-X Launch: The Hard Climb to Orbit Students learn about the energy required to send a payload into orbit by studying the Ares 1-X rocket launch. [Grade: 8-10 | Topics: Algebra II] [Download PDF]
Problem 281: Exploring the Ares 1-X Launch: Energy Changes Students learn about kinetic and potential energy while studying the Ares 1-X rocket launch. [Grade: 8-10 | Topics: Algebra II] [Download PDF]
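In the spirit of Problem 281, a minimal kinetic/potential-energy calculation; the mass, speed, and altitude below are hypothetical placeholders chosen for a worked example, not Ares I-X data:

```python
# Energy changes during a launch: KE = m*v^2/2, PE = m*g*h.
G0 = 9.80665  # standard gravity, m/s^2

def kinetic_energy_j(mass_kg: float, speed_ms: float) -> float:
    return 0.5 * mass_kg * speed_ms**2

def potential_energy_j(mass_kg: float, height_m: float) -> float:
    return mass_kg * G0 * height_m

# Hypothetical rocket state, for illustration only:
m, v, h = 800_000.0, 1_400.0, 45_000.0
print(f"KE ~ {kinetic_energy_j(m, v):.3e} J")
print(f"PE ~ {potential_energy_j(m, h):.3e} J")
```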
minute
minute, in timekeeping, 60 seconds, now defined in terms of radiation emitted from atoms of the element cesium under specified conditions. The minute was formerly defined as the 60th part of an hour, or the 1,440th part (60 × 24 [hours] = 1,440) of a mean solar day—i.e., of the average period of rotation of the Earth relative to the Sun. The minute of sidereal time (time measured by the stars rather than by the Sun) was a fraction of a second shorter than the mean solar minute. The minute of atomic time is very nearly equal to the mean solar minute in duration.
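The arithmetic behind that comparison is easy to check; the sidereal-day length of about 86,164.0905 s is a standard value, not one quoted in the article:

```python
# A mean solar day is 86,400 s; a sidereal day (one rotation of Earth
# relative to the stars) is about 86,164.0905 s. Divide each into 1,440
# minutes and compare.
solar_minute_s = 86_400.0 / 1_440
sidereal_minute_s = 86_164.0905 / 1_440

print(f"solar minute    = {solar_minute_s} s")
print(f"sidereal minute = {sidereal_minute_s:.4f} s")
print(f"difference      = {solar_minute_s - sidereal_minute_s:.4f} s")
```

The difference of roughly 0.16 s is the "fraction of a second" the definition refers to.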
An article on the BBC news website discusses IBM's latest development in artificial intelligence, which is modelled on the way that the brain is wired via neural connections that build and strengthen but also adapt and change over time, making the brain plastic and malleable. The computer chips used by IBM are capable of rewiring their connections when they encounter new information, in a way similar to how biological synapses are believed to behave in the brain.
This allows the computer to learn and adapt over time, an interesting (and human) feature based on the ways in which synaptic connections in humans (and other animals) physically connect themselves when useful information is presented (that is, something which can help us better predict outcomes). IBM’s computer doesn’t do this by physically ‘soldering’ and removing connections, but rather by amplifying and minimising certain signals, teaching the computer how much ‘attention’ to pay to certain signals.
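A toy sketch of that idea (my construction, not IBM's actual SyNAPSE design): treat each "synapse" as a weight that is amplified whenever the units on both sides are active together, and slowly decays otherwise:

```python
# Hebbian-style toy update: amplify a connection that carries useful
# coincident activity, let unused connections fade. This is purely
# illustrative of "amplifying and minimising certain signals".
def update_weight(w: float, pre: bool, post: bool,
                  lr: float = 0.1, decay: float = 0.01) -> float:
    if pre and post:
        return min(1.0, w + lr)   # amplify: pay more attention to this signal
    return max(0.0, w - decay)    # minimise: let an unused connection fade

w = 0.5
for pre, post in [(True, True), (True, True), (False, True),
                  (True, False), (True, True)]:
    w = update_weight(w, pre, post)

print(f"final weight ~ {w:.2f}")
```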
This seems to me an intriguing parallel with the way that our emotions teach us what to pay attention to (what in the world is important and what is not), and it will be intriguing to see how well their model of the mind can replicate the way that thinking and experience ‘emerges’ from the neuronal level computations of our brains.
You can read more about IBM's SyNAPSE project at the website.
We are living in an age of unprecedented language creation. Between the explosion of languages on the JVM and the new native languages, we find ourselves with a happy surfeit of very interesting choices. These options are not just the toy creations of comp-sci undergraduates, but sophisticated products with extensive libraries and active communities. Where they tend to be weak, however, is in tooling. And unfortunately, for many language developers, tooling is a metaphor for the coding front end: They strive to create editor plugins to provide basic syntax assistance. The more important support for debugging is often consigned to the use of printf-like statements to dump trace statements and variables' contents to the console.
I have always found this substitution of printf for debugging to be a profoundly wrong conflation of two concepts. Yet, because we've all had the experience of using printf or its equivalents to help chase down bugs, we tend to go along with the proposal. Some well-known developers even proclaim their preference for printf.
There are multiple aspects of printf statements that make them very poor substitutes and, in fact, at times dangerous tools.
Location. Martin advises "judiciously placed print statements." Well, if you're in a serious debugging mode, judiciously placing printf is a very difficult thing to do. It implies some strong knowledge of the nature of the cause of the defect you're chasing. My experience is that, frequently, you get the first attempt at printf wrong, and then must start to guess where else to place the statements. Sometimes, it's not even guessing: You need to put them at several upstream points to coarsely locate where a variable unexpectedly changes values. Finally, when you get the right location, you must then add new statements to track down why the variable is changing. It's a mess that brings me to the second point.
Time cost. Every
Complexity. While conceptually nothing is simpler than dumping a variable to the console, in fact, it's no trivial matter. This is particularly true of data structures, especially those containing pointers, where writing a useful dump statement is real work. Debuggers handle this transparently and allow you to walk lists and arrays with no difficulty.
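To make the pointer problem concrete, here is an illustrative Python sketch of the kind of helper you end up writing once a naive dump meets a circular, pointer-laden structure:

```python
class Node:
    """Singly linked list node -- the classic pointer-laden structure."""
    def __init__(self, value):
        self.value = value
        self.next = None

def dump(node):
    """Cycle-safe textual dump; a naive traversal would loop forever."""
    seen, parts = set(), []
    while node is not None:
        if id(node) in seen:          # already visited: the list is circular
            parts.append("<cycle>")
            break
        seen.add(id(node))
        parts.append(str(node.value))
        node = node.next
    return " -> ".join(parts)

a, b, c = Node(1), Node(2), Node(3)
a.next, b.next, c.next = b, c, a      # deliberately circular
print(dump(a))                        # 1 -> 2 -> 3 -> <cycle>
```

A debugger gives you this traversal, cycle detection included, for free.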
Clean up. Congratulations, you found the bug! Now, it's time to clean up your printf statements.
In almost every dimension, the dumping of variables to the console is an inferior alternative to using the debugger. It takes just as long, if not longer, to find defects, and the practice inserts detrimental artifacts into the codebase.
When I hear developers say that they're happy debugging with printf
The material presented here tends to resume the literature dealing mainly with the structural description of the microbial loop and discusses some functional aspect in action within the microbial food webs. For more detailed information, the interested readers can refer to the literature listed below.
Since Pomeroy (1974), it has been shown that the microbial consortia play a key role in both structure and function of open ocean ecosystems (Azam et al., 1983). Figure 1 shows a cartoon representation of the oceanic global carbon cycling.
Two major pathways, herbivorous and microbial, determine the transformation and transfer of matter in the ocean (Fig. 2). The relative flux intensity within each pathway depends upon a "competition" between the bacterial and particle-grazer pathways. Due to the dominance of bacterial production in oligotrophic environments and in most of the mesopelagic water column, fluxes are strongly diverted towards pathway n°1. However, it is important to appreciate the conditions that determine flux partitioning between these paths.
Transfer pathways hypothesis: In marine ecosystems, the "Microbial Loop" can be distinguished from the "Microbial Food Webs" in that the former consists of the pathways relating heterotrophic bacteria to bacterivorous protists (zooflagellates) and Dissolved Organic Matter (DOM), while the latter includes all microbial communities below ca 100 µm, including all the ≤ 10 µm primary producing organisms. Therefore, as much as it is true that the Microbial Loop can mainly act as a carbon sink, the Microbial Food Web is the crucial link for the whole ecosystem. The sink aspect is mostly due to the fact that a considerable amount of the Particulate Organic Carbon (POC) that passes through bacterial production ends up in DOM pools. Four issues are considered:
- Transfer pathways hypothesis – size of primary producers: The size of the primary producers determines whether a microbial community acts mainly as a trophic sink or link in marine ecosystems. For instance, when cyanobacteria dominate an ecosystem, the primary production is mostly trapped within the Microbial Loop. Under such conditions not more than 6% of the primary production can reach higher trophic levels, whereas when > 2 µm phytoplankton dominate, up to 20% is transferred, which would constitute the total basic supply for the whole system.
- Transfer pathways hypothesis – patchiness: In addition to the size of the primary producers, patchiness in the oligotrophic pelagial is probably the most important feature in the structure and function of the water column ecosystem. We believe that the primary productivity is regulated by small-scale microbial interactions, mainly through feed-backs ("mutualism") between free-living bacteria and phytoplankton, optimizing the use of mineral nutrients at low concentrations; a condition characterizing the oligotrophic environments that represent most open ocean ecosystems. Patchiness must also be important for protozoan survival, because oligotrophic concentrations of prey (≤ 20 µg C l-1) hardly support optimal growth conditions (Km) for most microphagous organisms, which nevertheless develop normally in such poor conditions. Therefore, appropriate spatial distribution of both nutrients and prey appears as an essential survival condition: "hot spots".
- Transfer pathways hypothesis – protozoan feeding mechanisms: The importance of distributional patterns of bacteria, for instance, can be seen in its effect on the composition of the bacterivorous community. For protistan bacterivores, the food intake is dominated either by encounter or filter feeding mechanisms, often occurring in phagotrophic flagellates and ciliates, respectively. The phagotrophic flagellates (zooflagellates) mainly control the bacterial production at rather high bacterial concentrations, and the ciliates remove the production of > 1 µm hetero- and autotrophic cells, and to some extent of bacteria. Indeed, open ocean ciliates are mostly Polyhymenophorean, with multiple mouth-surrounding membranelles that give a relatively high specific water filtration flux, due to large inter-membranelle spacing, allowing them to survive on nano-sized prey at regular low oligotrophic concentrations. However, one can notice that some polyhymenophorean ciliates are so small (≤ 12 µm) that they can retain bacteria as well, even at low open-ocean bulk phase concentrations (high water flux through the mouth filtration structure), where zooflagellates can become almost inefficient grazers on bacteria.
Figure 3. Cartoon diagram of space distribution of microbial communities relative to their food uptake kinetic characteristics. V = food uptake or ingestion rate. S = substrate or prey concentrations.
- Transfer pathways hypothesis – spatial organization of food web: With regard to NH4+ [a major regenerated inorganic supply for phytoplankton, provided mainly by protozoan excretion through the grazing of the "extra-biomass" production of both bacteria and microalgae], the bacteria-phytoplankton mutualism depends upon low Km-low Vmax (low flow) and high Km-high Vmax (high flow) uptake systems for NH4+ in free-living bacteria and phytoplankton, respectively. Bacteria could thus be more active than phytoplankton at low NH4+ concentrations (outside of phytoplankton-bacteria "hot spots" or "nutrient spheres"), and vice versa at high NH4+ concentrations (inside of "nutrient spheres", in the vicinity of phytoplankton cells). A similar situation seems to exist for the bacterial affinity for their own (5'-nucleotidase mediated) orthophosphate liberated within these "hot spots". Figure 3 shows a cartoon diagram suggesting that, as with the bacteria-phytoplankton uptake systems for NH4+, the protozoan activity itself seems also to be regulated through a Km-Vmax ingestion system for particulate foods. Some experimental results suggest that zooflagellates and small polyhymenophorean oligotrichous ciliates both exhibit high Km-high Vmax ingestion behaviour for bacteria and zooflagellates, respectively. This situation involves an effective ingestion of both bacteria and zooflagellates in the vicinity of phytoplankton cells, ultimately leading to an important liberation of NH4+ in these "hot spots". On the other hand, there are large polyhymenophorean oligotrichous ciliates, such as large Oligotrichina and Tintinnina, that exhibit low Km-either low or high Vmax (high flow) ingestion behaviour and feed on both zooflagellates and phytoplankton cells outside and inside of "hot spots", as well as on whole "hot spots" themselves, widely dispersed in space.
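The Km-Vmax partitioning described above can be sketched with Michaelis-Menten-style uptake kinetics, V = Vmax * S / (Km + S); the parameter values below are illustrative assumptions, not measurements:

```python
def uptake_rate(s: float, vmax: float, km: float) -> float:
    """Michaelis-Menten-style uptake: V = Vmax * S / (Km + S)."""
    return vmax * s / (km + s)

# Illustrative (assumed) parameters for the two strategies:
low_flow = dict(vmax=1.0, km=0.1)     # low Km / low Vmax, e.g. free-living bacteria
high_flow = dict(vmax=10.0, km=5.0)   # high Km / high Vmax, e.g. phytoplankton

for s in (0.05, 50.0):                # outside vs inside a nutrient "hot spot"
    v_lo = uptake_rate(s, **low_flow)
    v_hi = uptake_rate(s, **high_flow)
    leader = "low-flow" if v_lo > v_hi else "high-flow"
    print(f"S={s:g}: low-flow {v_lo:.2f}, high-flow {v_hi:.2f} -> {leader} wins")
```

The crossover reproduces the qualitative point of the text: the low-Km system outcompetes at dilute bulk-phase concentrations, while the high-Km/high-Vmax system dominates inside "hot spots".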
These large ciliates thus appear as a major pathway between microbial and primary production and higher trophic levels in open-ocean oligotrophic situations. Indeed, metazoans such as copepods show both a preference for and high clearance rates on large ciliates relative to small ciliates, further supporting the idea that the link to higher trophic levels is through large ciliates, which can exist between "hot spot" patches.
Pushing the speculation further, we could imagine that the whole ocean ecosystem is organized in a series of nested boxes (like Russian dolls) of “patch” and “inter-patch” environments, inhabited by communities with high-Km/high-Vmax and low-Km/low-Vmax systems, respectively.
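The low-Km/low-Vmax versus high-Km/high-Vmax argument is simply the behaviour of Michaelis-Menten (saturation) kinetics at the two ends of the substrate range. The sketch below illustrates it with hypothetical parameter values; the kinetic constants are made up for illustration, not measured values from the text:

```python
def uptake(S, Vmax, Km):
    # Michaelis-Menten uptake rate: V = Vmax * S / (Km + S)
    return Vmax * S / (Km + S)

# Hypothetical (Vmax, Km) pairs: bacteria as the low-flow system,
# phytoplankton as the high-flow system.
bacteria = dict(Vmax=0.5, Km=0.05)
phyto = dict(Vmax=5.0, Km=5.0)

low_S, high_S = 0.01, 10.0   # NH4+ far from / inside a "hot spot"
b_low, p_low = uptake(low_S, **bacteria), uptake(low_S, **phyto)
b_high, p_high = uptake(high_S, **bacteria), uptake(high_S, **phyto)
```

At low substrate the low-Km system takes up faster; at high substrate the high-Vmax system does, which is exactly the spatial partitioning the hypothesis describes.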
1. The space shuttle fleet has been made up of Columbia, Challenger, Discovery, Atlantis, and Endeavour. The space shuttle is 184 feet long; the orbiter is 122 feet long.
2. It takes only 8 minutes for the space shuttle to accelerate to a speed of more than 17,000 miles per hour. The liftoff weight of the space shuttle is 4.5 million pounds.
3. The main engine on the space shuttle weighs as much as a train locomotive but puts out as much horsepower as 39 locomotives.
4. The space shuttle is one of the most complicated and innovative machines ever built. The first shuttle launch was a huge leap in technology, because it represented a spacecraft that was reusable.
5. Space Shuttle Columbia was the first ship in the NASA fleet. It completed 27 missions before being destroyed during re-entry, killing all seven of its astronauts.
6. The Space Shuttle Challenger disaster occurred on January 28, 1986, when the space shuttle Challenger broke apart 73 seconds into its flight, leading to the deaths of its seven crew members.
7. Crews range in size from five to seven people. Over 600 crew members have flown on shuttle missions. The shuttle has also sent more than 3 million pounds of cargo into space. The longest any shuttle has stayed in orbit is 17.5 days, in November 1996.
8. A space shuttle and its boosters ready for launch are the same height as the Statue of Liberty but weigh almost three times as much.
9. The shuttle launches like a rocket, orbits like a spacecraft and lands like a plane.
10. Endeavour flew its final mission, the 134th space shuttle flight, in May 2011; the program's last flight was made by Atlantis in July 2011.
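Fact 2 implies a rough average acceleration, which is easy to work out (a back-of-envelope figure; the real acceleration varies through the ascent):

```python
MPH_TO_MS = 0.44704          # metres per second per mile per hour

v = 17000 * MPH_TO_MS        # ~7,600 m/s at orbital insertion
a = v / (8 * 60)             # average over the 8-minute ascent, ~15.8 m/s^2
g_load = a / 9.81            # about 1.6 g on average
```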
NES Video Chat: Let's Talk About Meteors, Meteorites and Comets
Dr. Bill Cooke and Rhiannon Blaauw answered questions on Jan. 12, 2012 about meteors, meteorites and comets and their potential danger to spacecraft.
Cooke, the lead for NASA's Meteoroid Environment Office, and Blaauw, a meteor physicist, both have astronomy degrees and work in the Meteoroid Environment Office at NASA's Marshall Space Flight Center in Huntsville, Ala. The MEO provides NASA with models of the meteoroid environment, which are used in the design of protective shields on spacecraft. The MEO improves its models by analyzing meteor observation data collected using equipment such as radar and all-sky and low-light-level cameras. The all-sky cameras currently are being placed around the United States. They detect fireballs, or very bright meteors, every night and post the results on a public website. Looking at these data helps Cooke and Blaauw determine the number and size of space rocks hitting Earth every day, how fast they were going, and whether or not Earth is experiencing a meteor shower.
› Space Math: Problem 18 -- Meteorite Impact Risks
› Space Math: Problem 26 -- Astronomy as a Career
› Finding Impact Craters With Landsat
› Lunar Impact Monitoring (Reading Level: grade 12)
› Find a Meteorite
› NASA's All Sky Fireball Network
› Lunar Impacts
› Meteor Counter iPhone App
Special Relativity: Kinematics
Time Dilation and Length Contraction
The most important and famous results in Special Relativity are that of time dilation and length contraction. Here we will proceed by deriving time dilation and then deducing length contraction from it. It is important to note that we could do it the other way: that is, by beginning with length contraction.
In the frame of an observer on the train, O_A, the light from a laser on the floor of the carriage travels vertically up to a mirror at height h and back down, at speed c, so the round-trip time is:

t_A = 2h/c
In the frame of an observer on the ground, call her O_B, the train is moving with speed v (see part ii of the figure). The light then follows a diagonal path as shown, but still with speed c. Let us calculate the length of the upward path: we can construct a right-triangle of velocity vectors since we know the horizontal speed as v and the diagonal speed as c. Using the Pythagorean Theorem we can conclude that the vertical component of the velocity is √(c² − v²). Thus the ratio of the diagonal (hypotenuse) to the vertical is c/√(c² − v²) = 1/√(1 − v²/c²). But we know that the vertical of the right-triangle of lengths is h, so the hypotenuse must have length h/√(1 − v²/c²). This is the length of the upward path. Thus the overall length of the path taken by the light in O_B's frame is 2h/√(1 − v²/c²). It traverses this path at speed c, so the time taken is:
t_B = (2h/c) · 1/√(1 − v²/c²)
Clearly the times measured are different for the two observers. The ratio of the two times is defined as γ ≡ t_B/t_A = 1/√(1 − v²/c²), a quantity that will become ubiquitous in Special Relativity.
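The algebra above can be checked numerically; a minimal sketch (the speed 0.6c is an arbitrary choice for illustration):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def gamma(v, c=C):
    # Lorentz factor: gamma = 1 / sqrt(1 - v^2/c^2) = t_B / t_A
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

g = gamma(0.6 * C)  # at v = 0.6c, gamma = 1.25
```

So a laser cycle that takes t_A = 1 s on the train is measured as t_B = 1.25 s from the ground.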
All this might seem innocuous enough. So, you might say, take the laser away and what is the problem? But time dilation runs deeper than this. Imagine O_A waves to O_B every time the laser completes a cycle (up and down). Thus according to O_A's clock, he waves every t_A seconds. But this is not what O_B sees. He too must see O_A waving just as the laser completes a cycle; however, he has measured a longer time for the cycle, so he sees O_A waving at him every t_B seconds. The only possible explanation is that time runs slowly for O_A; all his actions will appear to O_B to be in slow motion. Even if we take the laser away, this does not affect the physics of the situation, and the result must still hold. O_A's time appears dilated to O_B. This will only be true if O_A is stationary next to the laser (that is, with respect to the train); if he is not, we run into problems with simultaneity and it would not be true that O_B would see the waves coincide with the completion of a cycle.
Unfortunately, the most confusing part is yet to come. What happens if we analyze the situation from O_A's point of view: he sees O_B flying past at v in the backwards direction (say O_B has a laser on the ground reflecting from a mirror suspended above the ground at height h). The relativity principle tells us that the same reasoning must apply and thus that O_A observes O_B's clock running slowly (note that γ does not depend on the sign of v). How could this possibly be right? How can O_A's clock be running slower than O_B's, but O_B's be running slower than O_A's? This at least makes sense from the point of view of the relativity principle: we would expect from the equivalence of all frames that they should see each other in identical ways. The solution to this mini-paradox lies in the caveat we put on the above description; namely, that for t_B = γt_A to hold, O_A must be at rest in her frame. Thus the opposite, t_A = γt_B, must only hold when O_B is at rest in her frame. This means that t_B = γt_A holds when events occur in the same place in O_A's frame, and t_A = γt_B holds when events occur in the same place in O_B's frame. When v ≠ 0 (and hence γ > 1) this can never be true in both frames at once, hence only one of the relations holds true. In the last example described (O_B flying backward in O_A's frame), the events (laser fired, laser returns) do not occur at the same place in O_A's frame, so the first relation we derived (t_B = γt_A) fails; t_A = γt_B is true, however.
We will now proceed to derive length contraction given what we know about time dilation. Once again observer O_A is on a train that is moving with velocity v to the right (with respect to the ground). O_A has measured her carriage to have length l_A in her reference frame. There is a laser light on the back wall of the carriage and a mirror on the front wall, as shown in the figure.
t_A = 2l_A/c
This is because the light traverses the length of the carriage twice at velocity c. We want to compare the length as observed by O_A to the length measured by an observer at rest on the ground (O_B). Let us call the length O_B measures for the carriage l_B (as far as we know so far l_B could equal l_A, but we will soon see that it does not). In O_B's frame, as the light is moving towards the mirror the relative speed of the light and the train is c − v; after the light has been reflected and is moving back towards O_A, the relative speed is c + v. Thus we can calculate the total time taken for the light to go up and back as:
t_B = l_B/(c − v) + l_B/(c + v) = (2l_B/c) · 1/(1 − v²/c²) = (2l_B/c)γ²
But from our analysis of time dilation above, we saw that when O_A is moving past O_B in this manner, O_A's time is dilated, that is: t_B = γt_A. Thus we can write:
γt_A = γ(2l_A/c) = t_B = (2l_B/c)γ²  ⇒  l_B = l_A/γ
Note that γ is always greater than one; thus O_B measures the train to be shorter than O_A does. We say that the train is length-contracted for an observer on the ground.
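A numeric sketch of the result l_B = l_A/γ (the 100 m carriage and the speed 0.6c are arbitrary illustration values):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def contracted_length(l_A, v, c=C):
    # l_B = l_A / gamma = l_A * sqrt(1 - v^2/c^2)
    return l_A * math.sqrt(1.0 - (v / c) ** 2)

l_B = contracted_length(100.0, 0.6 * C)  # 80.0 m as measured from the ground
```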
Once again the problem seems to arise if we turn the analysis around and view it from O_A's point of view: she sees O_B flying past to the left with speed v. We can put O_B in an identical (but motionless) train and apply the same reasoning (just as we did with time dilation) and conclude that O_A measures O_B's identical carriage to be short by a factor γ. Thus each observer measures their own train to be longer than the other's. Who is right? To resolve this mini-paradox we need to be very specific about what we call 'length.' There is only one meaningful definition of length: we take the object we want to measure, write down the coordinates of its ends simultaneously, and take the difference. What length contraction really means, then, is that if O_A compares the simultaneous coordinates of his own train to the simultaneous coordinates of O_B's train, the difference between the former is greater than the difference between the latter. Similarly, if O_B writes down the simultaneous coordinates of his own train and O_A's, he will find the difference between his own to be greater. Recall from Section 1 that observers in different frames have different notions of simultaneous. Now the 'paradox' doesn't seem so surprising at all; the times at which O_A and O_B are writing down their coordinates are completely different. A simultaneous measurement for O_A is not a simultaneous measurement for O_B, and so we would expect a disagreement as to the observers' concepts of length. When the ends are measured simultaneously in O_B's frame, l_B = l_A/γ, and when they are measured simultaneously in O_A's frame, l_A = l_B/γ. No contradiction can arise because the criterion of simultaneity cannot be met in both frames at once.
Be careful to note that length contraction only occurs in the direction of motion. For example, if the velocity of an object is given by v = (v_x, 0, 0), length contraction will occur in the x-direction only. The other dimensions of the object remain the same to any inertial observer.
Cell signals: A nanowire probe (thin inverted V) can record signals from cells without harming them.
Source: “Three-Dimensional, Flexible Nanoscale Field-Effect Transistors as Localized Bioprobes” Bozhi Tian et al.
Science 329: 830-834
Results: Researchers at Harvard have made biocompatible nanoscale probes that use transistors to take precise electrical and chemical readings inside cells. The tips of the probes are about the size of a virus.
Why it matters: To create complex bioelectronics such as neural prosthetics designed for fine control of artificial limbs, researchers need to create better interfaces with single cells. Existing electrodes can take intracellular measurements. But to be accurate, they must be large in comparison to the cell and can damage it.
This work also represents the first time digital devices, in the form of transistors at the tips of the probes, have been integrated with cells.
Methods: Using a process that they developed, the researchers grow millions of V-shaped silicon nanowires at a time. The tip of each V acts as a very small transistor that can be inserted into a cell to send and receive electrical signals. The probe is more sensitive than a passive electrode, and it can enter cells without damaging them both because it’s so small and because it’s coated with a double layer of fatty molecules, just like a cell membrane. When placed near the membrane, the cell will actually pull the electrode inside. The electrical and chemical activity inside the cell changes the behavior of the transistor to produce a reading.
Next steps: The researchers want to incorporate circuits made from the nanoprobes into medical devices, including scaffolds for making artificial tissues. These circuits could “innervate” artificial tissue, mimicking the role of nerves to measure and respond to electrical signals propagating through the nervous system. The researchers also aim to take advantage of the electrodes’ ability to send electrical signals in addition to recording them. Applications could include neural interfaces with two-way communication between muscles and the nervous system.
Capturing Lost Energy
Device harvests power from heat as well as light in solar radiation.
Source: “Photon-enhanced thermionic emission for solar concentrator systems” Jared W. Schwede et al.
Nature Materials 9: 762-767
Results: A device built by researchers at Stanford University converts both the light and the heat in the sun’s radiation into an electrical current.
Why it matters: Conventional solar cells can use only a narrow band of the sun’s energy; the rest of the spectrum is lost as heat. The most common type of silicon solar cells convert 15 percent of the energy in sunlight into electricity. But Stanford researchers realized that the light in solar radiation could also enhance the performance of a device called a thermionic energy converter, which usually uses only heat. They say that such devices could in theory convert solar energy with 50 percent efficiency.
Methods: A thermionic energy converter consists of two electrodes separated by a small space. When one electrode is heated, electrons jump across the gap to the second electrode, generating a current. The Stanford researchers found that when they replaced the metal typically used to make the top electrode with a semiconducting material like the ones used in solar cells, photons hitting that electrode also drove current in the device. The Stanford prototype converts about 25 percent of the light and heat energy in radiation into electricity at 200 °C. Conventional thermionic energy converters require temperatures around 1,500 °C, which is impractical for many applications, and conventional solar cells don’t function well above around 100 °C.
Next steps: The researchers are working to make the device more efficient by testing different semiconducting materials for use as the top electrode. They’re also redesigning the system to work in conjunction with a solar concentrator that would raise temperatures to 400 to 600 °C. That would produce enough excess heat to harness with a steam engine. | <urn:uuid:025c5124-e41a-4927-9389-074f7757b31c> | 3.828125 | 834 | Content Listing | Science & Tech. | 33.008095 |
Narrator: This is Science Today. A groundbreaking experiment, which demonstrated that a piece of semiconductor material can slow down light pulses, may one day lead to very high speed network communications. Connie Chang-Hasnain, who led the University of California, Berkeley project, says it may also be used to add ‘the eyes’ to tiny wireless devices known as smart dust sensors.
Chang-Hasnain: All those sensors are supposed to transmit information and what better information than a video, right? So those video information need to be connected and today, there’s no way to connect them and to intelligently process them.
Narrator: But their semiconductor experiment offers hope that such technology is on the horizon.
Chang-Hasnain: With the all-optical buffer available and the slow light device available, we will be able to allow better utilization of this information, so we can do environmental protection monitoring of resources, monitoring earthquake and to the next extent, to control these monitoring devices as well.
For Science Today, I’m Larissa Branin. | <urn:uuid:15315a9a-a9ec-466f-b8d1-78e3a1a6f39a> | 3.25 | 226 | Audio Transcript | Science & Tech. | 30.000368 |
Coast erosion is the process of wearing away material from the coastal profile due to imbalance in the supply and export of material from a certain section. It takes place in the form of scouring in the foot of the cliffs or in the foot of the dunes. Coast erosion takes place mainly during strong winds, high waves and high tides and storm surge conditions, and results in coastline retreat. The rate of erosion is correctly expressed in volume/length/time, e.g. in m3/m/year, but erosion rate is often used synonymously with coastline retreat, and thus expressed in m/year.
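As a quick illustration of the units, a volumetric erosion rate can be converted to an equivalent coastline retreat by dividing by the height of the active coastal profile; both the relation and the numbers below are a simplifying assumption for illustration, not from the text:

```python
def retreat_rate(volume_rate_m3_per_m_yr, active_profile_height_m):
    # Hypothetical conversion: retreat (m/yr) = volumetric loss per metre of
    # coast (m^3/m/yr) spread over the vertical extent of the active profile (m).
    return volume_rate_m3_per_m_yr / active_profile_height_m

r = retreat_rate(20.0, 8.0)  # 2.5 m/yr of coastline retreat
```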
Erosion will take place on the shoreface and on the beach if the export is greater than the supply of material; this means that the level of the seabed and the beach will decrease. The deficit can be due to both cross-shore processes and longshore processes. Erosion due to cross-shore processes mainly occurs during extreme events associated with storm surge, and is partially a reversible process (this is also referred to as dune erosion). The most important reason for long-term erosion is a deficit in the littoral drift budget, which is often caused by a deficit in the supply of sand to the area in question (this process is also referred to as structural erosion).
- Types and background of coastal erosion: explanation of two different types of coastal erosion, dune erosion and structural erosion.
- Articles on different causes of erosion: Natural Causes of Coastal Erosion and Human Causes of Coastal Erosion
- Articles on the background of erosion: Coastal Hydrodynamics And Transport Processes
- Erosion for different coastal types: Accretion and erosion for different coastal types, see also Coastal zone characteristics (description of different coastal types) and Classification of coastlines (classification of different coastal types).
- Biogeomorphology of aquatic systems: Interaction between ecology and geomorphology of a system
- Coastal Erosion along the Changjiang Deltaic Shoreline | <urn:uuid:2a60cbf1-bf85-4dfb-9ccf-65a62867b3fc> | 3.953125 | 412 | Knowledge Article | Science & Tech. | 26.834702 |
In physics (especially astrophysics), redshift happens when light or other electromagnetic radiation from an object moving away from the observer is increased in wavelength, or shifted to the red end of the spectrum. In general, whether or not the radiation is within the visible spectrum, "redder" means an increase in wavelength, equivalent to a lower frequency and a lower photon energy, in accordance with, respectively, the wave and quantum theories of light.
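The shift is usually quantified by the dimensionless parameter z; the definition below is the standard one, added here for illustration (the H-alpha wavelength of 656.3 nm is just an example):

```python
def redshift(lam_observed, lam_emitted):
    # z = (lambda_observed - lambda_emitted) / lambda_emitted
    # z > 0 is a redshift (receding source), z < 0 a blueshift.
    return (lam_observed - lam_emitted) / lam_emitted

z = redshift(656.3 * 1.1, 656.3)  # H-alpha observed 10% longer: z = 0.1
```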
Astronomers have used a novel planet hunter instrument to detect a new possible life supporting, Earth-like exoplanet. In fact the new planet is classed as a super-Earth, since it has a minimum mass of 7.1 times that of our planet, and is properly located on its parent star’s orbit to support the presence of liquid [...]
The first galaxies formed very fast after the Big Bang – in cosmic time, that is. It’s estimated that the earliest ones appeared some 500 million years after the Big Bang, a period about which researchers know very little. How they observed it Even though they are typically very bright, such galaxies are quite hard [...] | <urn:uuid:f66365a4-9b87-4fd9-893d-a9066bdd7c8f> | 3.796875 | 229 | Content Listing | Science & Tech. | 48.057489 |
The SimpleHTTPServer module defines a request-handler class, interface-compatible with BaseHTTPServer.BaseHTTPRequestHandler, that serves files only from a base directory.
The SimpleHTTPServer module defines the following class:
class SimpleHTTPRequestHandler(request, client_address, server)
A lot of the work, such as parsing the request, is done by the base class BaseHTTPServer.BaseHTTPRequestHandler. This class implements the do_GET() and do_HEAD() functions.
The SimpleHTTPRequestHandler defines the following member variables:
server_version
This will be "SimpleHTTP/" + __version__, where __version__ is defined in the module.

extensions_map
A dictionary mapping suffixes into MIME types. The default, signified by an empty string, is considered to be application/octet-stream. The mapping is used case-insensitively, and so should contain only lower-cased keys.
The SimpleHTTPRequestHandler defines the following methods:
do_HEAD()
This method serves the 'HEAD' request type: it sends the headers it would send for the equivalent GET request. See the do_GET() method for a more complete explanation of the possible headers.

do_GET()
The request is mapped to a local file by interpreting the request as a path relative to the current working directory.

If the request was mapped to a directory, the directory is checked for a file named index.html or index.htm (in that order). If found, the file's contents are returned; otherwise a directory listing is generated by calling the list_directory() method. This method uses os.listdir() to scan the directory, and returns a 404 error response if the listdir() fails.

If the request was mapped to a file, it is opened and the contents are returned. Any IOError exception in opening the requested file is mapped to a 404, 'File not found' error. Otherwise, the content type is guessed by calling the guess_type() method, which in turn uses the extensions_map variable. A 'Content-type:' header with the guessed content type is output, followed by a blank line signifying the end of the headers, and then the contents of the file are output. If the file's MIME type starts with text/, the file is opened in text mode; otherwise binary mode is used.
For example usage, see the implementation of the test() function. | <urn:uuid:c6ca205f-0ec9-4c9a-ab08-c0db3ca68cfa> | 2.828125 | 441 | Documentation | Software Dev. | 37.319101 |
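A minimal end-to-end example of the behaviour documented above. It uses http.server, the Python 3 successor of this module (an assumption here, since this page documents the Python 2 names), and exercises the index.html lookup for a directory request:

```python
import http.server
import os
import socketserver
import tempfile
import threading
import urllib.request

# Serve a scratch directory containing an index.html file.
os.chdir(tempfile.mkdtemp())
with open("index.html", "w") as f:
    f.write("hello from index.html")

# Port 0 lets the OS pick a free port.
httpd = socketserver.TCPServer(("127.0.0.1", 0),
                               http.server.SimpleHTTPRequestHandler)
port = httpd.server_address[1]
threading.Thread(target=httpd.serve_forever, daemon=True).start()

# A request for the directory "/" is answered with index.html's contents.
resp = urllib.request.urlopen("http://127.0.0.1:%d/" % port)
body = resp.read().decode()
content_type = resp.headers["Content-type"]
httpd.shutdown()
```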
This module provides a more portable way of using operating system dependent functionality than importing an operating system dependent built-in module like posix or nt.
This module searches for an operating system dependent built-in module like mac or posix and exports the same functions and data as found there. The design of all of Python's built-in operating system dependent modules is such that as long as the same functionality is available, it uses the same interface; for example, the function os.stat(path) returns stat information about path in the same format (which happens to have originated with the POSIX interface).
Extensions peculiar to a particular operating system are also available through the os module, but using them is of course a threat to portability!
Note that after the first time os is imported, there is no performance penalty in using functions from os instead of directly from the operating system dependent built-in module, so there should be no reason not to use os!
The os module contains many functions and data values. The items below and in the following sub-sections are all available directly from the os module.
error
When exceptions are classes, this exception carries two attributes, errno and strerror. The first holds the value of the C errno variable, and the latter holds the corresponding error message from strerror(). For exceptions that involve a file system path (such as chdir() or unlink()), the exception instance will contain a third attribute, filename, which is the file name passed to the function.
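The attributes can be seen by provoking a failure; the path below is simply a hypothetical name assumed not to exist:

```python
import errno
import os

path = "/nonexistent-file-for-demo"
try:
    os.unlink(path)
except OSError as e:
    # The exception carries errno, strerror, and (for path functions) filename.
    err, msg, fname = e.errno, e.strerror, e.filename
```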
os.path
os.path.split(file) is equivalent to, but more portable than, posixpath.split(file). Note that this is also an importable module: it may be imported directly as os.path.
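A small portability sketch of the function mentioned above (the path is an arbitrary example):

```python
import os.path

# os.path dispatches to the right implementation (posixpath, ntpath, ...)
# for the platform the program is running on.
head, tail = os.path.split("/usr/local/bin/python")
# head == "/usr/local/bin", tail == "python"
```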
Precession is a change in the orientation of the rotational axis of a rotating body. It can be defined as a change in direction of the rotation axis in which the second Euler angle (nutation) is constant. In physics, there are two types of precession: torque-free and torque-induced.
In astronomy, "precession" refers to any of several slow changes in an astronomical body's rotational or orbital parameters, and especially to the Earth's precession of the equinoxes. See Precession (astronomy).
Torque-free precession occurs when the axis of rotation differs slightly from an axis about which the object can rotate stably: a maximum or minimum principal axis. Poinsot's construction is an elegant geometrical method for visualizing the torque-free motion of a rotating rigid body. For example, when a plate is thrown, the plate may have some rotation around an axis that is not its axis of symmetry. This occurs because the angular momentum (L) is constant in the absence of torques. Therefore, it will have to be constant in the external reference frame, but the moment of inertia tensor (I) is non-constant in this frame because of the lack of symmetry. Therefore, the spin angular velocity vector (ω) about the spin axis will have to evolve in time so that the matrix product L = Iω remains constant.
The torque-free precession rate of an object with an axis of symmetry, such as a disk, spinning about an axis not aligned with that axis of symmetry, can be calculated as follows:

ω_p = (I_s ω_s) / (I_p cos α)

where ω_p is the precession rate, ω_s is the spin rate about the axis of symmetry, α is the angle between the axis of symmetry and the axis about which it precesses, I_s is the moment of inertia about the axis of symmetry, and I_p is the moment of inertia about either of the other two perpendicular principal axes. They should be the same, due to the symmetry of the disk.
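As a quick numeric check of this formula (the thin uniform disk values below are a textbook idealization, not from the text): for a thin disk I_s = ½mr² and I_p = ¼mr², so for small tilt angles the wobble rate is roughly twice the spin rate.

```python
import math

def torque_free_precession_rate(I_s, I_p, omega_s, alpha):
    # omega_p = I_s * omega_s / (I_p * cos(alpha))
    return I_s * omega_s / (I_p * math.cos(alpha))

# Thin uniform disk: I_s = 1/2 m r^2, I_p = 1/4 m r^2
m, r = 1.0, 0.1
I_s, I_p = 0.5 * m * r**2, 0.25 * m * r**2
omega_p = torque_free_precession_rate(I_s, I_p, omega_s=10.0,
                                      alpha=math.radians(5.0))
# For a 5 degree tilt, omega_p comes out just over 2 * omega_s
```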
For a generic solid object without any axis of symmetry, the evolution of the object's orientation, represented (for example) by a rotation matrix R that transforms internal to external coordinates, may be numerically simulated. Given the object's fixed internal moment of inertia tensor I₀ and fixed external angular momentum L, the instantaneous angular velocity is ω(R) = R I₀⁻¹ Rᵀ L. Precession occurs by repeatedly recalculating ω and applying a small rotation vector ω dt for the short time dt, e.g. via the matrix exponential of the skew-symmetric matrix [ω dt]ₓ. The errors induced by finite time steps tend to increase the rotational kinetic energy, E(R) = ω(R)·L/2; this unphysical tendency can be counteracted by repeatedly applying a small rotation vector v perpendicular to both ω and L, noting that E(R(v)) ≈ E(R) + v·(ω × L).
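The update scheme just described can be sketched directly. Everything below (the principal moments, L, and the step size) is an arbitrary test case, and the rotation update uses Rodrigues' formula for the matrix exponential of [ω dt]ₓ:

```python
import numpy as np

def skew(v):
    # Skew-symmetric matrix [v]_x, so that skew(v) @ u == np.cross(v, u)
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def exp_rotvec(v):
    # Rodrigues' formula: the rotation matrix exp([v]_x) for rotation vector v
    theta = np.linalg.norm(v)
    if theta < 1e-15:
        return np.eye(3)
    K = skew(v / theta)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def free_precession(I_principal, L, dt, steps):
    # Integrate the orientation R (body -> external) of a torque-free body:
    # omega(R) = R I0^{-1} R^T L, then R <- exp([omega dt]_x) R each step.
    R = np.eye(3)
    I_inv = np.diag(1.0 / np.asarray(I_principal, dtype=float))
    E0 = E = None
    for _ in range(steps):
        omega = R @ I_inv @ R.T @ L       # instantaneous angular velocity
        E = 0.5 * omega @ L               # rotational kinetic energy
        if E0 is None:
            E0 = E
        R = exp_rotvec(omega * dt) @ R    # apply the small rotation omega*dt
    return R, E0, E

# Hypothetical test case: asymmetric body, L not along a principal axis.
R, E_start, E_end = free_precession([1.0, 2.0, 3.0],
                                    np.array([0.5, 0.0, 1.0]),
                                    dt=1e-4, steps=5000)
```

For this torque-free case L is constant by construction, R stays orthogonal because each update is an exact rotation, and the drift in kinetic energy measures exactly the finite-step error discussed above.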
Another type of torque-free precession can occur when there are multiple reference frames at work. For example, the Earth is subject to local torque-induced precession due to the gravity of the sun and moon acting upon the Earth's axis, but at the same time the solar system is moving around the galactic center. As a consequence, an accurate measurement of the Earth's axial reorientation relative to objects outside the frame of the moving galaxy (such as distant quasars commonly used as precession measurement reference points) must account for a minor amount of non-local torque-free precession, due to the solar system's motion.
Torque-induced precession (gyroscopic precession) is the phenomenon in which the axis of a spinning object (e.g., a part of a gyroscope) "wobbles" when a torque is applied to it, which causes a distribution of force around the axis on which it acts. The phenomenon is commonly seen in a spinning toy top, but all rotating objects can undergo precession. If the speed of the rotation and the magnitude of the torque are constant, the axis will describe a cone, its movement at any instant being at right angles to the direction of the torque. In the case of a toy top, if the axis is not perfectly vertical, the torque is applied by the force of gravity tending to tip it over.
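The text does not give the rate of this torque-induced precession; a standard fast-top result (stated here as an addition, valid when the spin is much faster than the precession) is Ω = τ/(I_s ω_s) = mgr/(I_s ω_s), where r is the distance from the pivot to the center of mass. A quick sketch with hypothetical numbers:

```python
import math

def top_precession_rate(m, g, r, I_spin, omega_spin):
    # Omega = torque / spin angular momentum = m*g*r / (I_spin * omega_spin)
    # (fast-top limit: omega_spin >> Omega)
    return m * g * r / (I_spin * omega_spin)

# Hypothetical toy top: 100 g, center of mass 3 cm from the pivot,
# I_spin = 5e-5 kg m^2, spinning at 100 rad/s.
Omega = top_precession_rate(0.1, 9.81, 0.03, 5e-5, 100.0)
```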
The device depicted on the right is gimbal mounted. From inside to outside there are three axes of rotation: the hub of the wheel, the gimbal axis, and the vertical pivot.
To distinguish between the two horizontal axes, rotation around the wheel hub will be called spinning, and rotation around the gimbal axis will be called pitching. Rotation around the vertical pivot axis is called rotation.
First, imagine that the entire device is rotating around the (vertical) pivot axis. Then, spinning of the wheel (around the wheelhub) is added. Imagine the gimbal axis to be locked, so that the wheel cannot pitch. The gimbal axis has sensors, that measure whether there is a torque around the gimbal axis.
In the picture, a section of the wheel has been named dm1. At the depicted moment in time, section dm1 is at the perimeter of the rotating motion around the (vertical) pivot axis. Section dm1 therefore has a large angular velocity with respect to the rotation around the pivot axis, and as dm1 is forced closer to the pivot axis of the rotation (by the wheel spinning further), due to the Coriolis effect it tends to move in the direction of the top-left arrow in the diagram (shown at 45°), in the direction of rotation around the pivot axis. Section dm2 of the wheel starts out at the vertical pivot axis, and thus initially has zero angular velocity with respect to the rotation around the pivot axis, before the wheel spins further. A force (again, a Coriolis force) would be required to increase section dm2's velocity up to the angular velocity at the perimeter of the rotating motion around the pivot axis. If that force is not provided, then section dm2's inertia will make it move in the direction of the top-right arrow. Note that both arrows point in the same direction.
The same reasoning applies for the bottom half of the wheel, but there the arrows point in the opposite direction to that of the top arrows. Combined over the entire wheel, there is a torque around the gimbal axis when some spinning is added to rotation around a vertical axis.
It is important to note that the torque around the gimbal axis arises without any delay; the response is instantaneous.
In the discussion above, the setup was kept unchanging by preventing pitching around the gimbal axis. In the case of a spinning toy top, when the spinning top starts tilting, gravity exerts a torque. However, instead of rolling over, the spinning top just pitches a little. This pitching motion reorients the spinning top with respect to the torque that is being exerted. The result is that the torque exerted by gravity - via the pitching motion - elicits gyroscopic precession (which in turn yields a counter torque against the gravity torque) rather than causing the spinning top to fall to its side.
Gyroscopic precession also plays a large role in the flight controls on helicopters. Since the driving force behind helicopters is the rotor disk (which rotates), gyroscopic precession comes into play. If the rotor disk is to be tilted forward (to gain forward velocity), its rotation requires that the downward net force on the blade be applied roughly 90 degrees (depending on blade configuration) before that blade gets to the 12 o'clock position. This means the pitch of each blade will decrease as it passes through 3 o'clock, assuming the rotor blades are turning CCW as viewed from above looking down at the helicopter. The same applies if a banked turn to the left or right is desired; the pitch change will occur when the blades are at 6 and 12 o'clock, as appropriate. Whatever position the rotor disc needs to be placed at, each blade must change its pitch to effect that change 90 degrees prior to reaching the position that would be necessary for a non-rotating disc.
To ensure the pilot's inputs are correct, the aircraft has corrective linkages that vary the blade pitch in advance of the blade's position relative to the swashplate. Although the swashplate moves in the intuitively correct direction, the blade pitch links are arranged to transmit the pitch in advance of the blade's position.
Precession is the result of the angular velocity of rotation and the angular velocity produced by the torque. It is an angular velocity about a line that makes an angle with the permanent rotation axis, and this angle lies in a plane at right angles to the plane of the couple producing the torque. The permanent axis must turn towards this line, since the body cannot continue to rotate about any line that is not a principal axis of maximum moment of inertia; that is, the permanent axis turns in a direction at right angles to that in which the torque might be expected to turn it. If the rotating body is symmetrical and its motion unconstrained, and, if the torque on the spin axis is at right angles to that axis, the axis of precession will be perpendicular to both the spin axis and torque axis.
Under these circumstances the angular velocity of precession is given by:

    ω_p = m g r / (I_s ω_s)

in which I_s is the moment of inertia, ω_s is the angular velocity of spin about the spin axis, m g is the force responsible for the torque, and r is the perpendicular distance of the spin axis from the axis of precession. The torque vector originates at the center of mass. Using ω = 2π/T, we find that the period of precession is given by:

    T_p = 4π² I_s / (m g r T_s)
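As a worked illustration, the sketch below (Python; the gyroscope's mass, lever arm, spin rate, and moment of inertia are made-up values, not from the text) evaluates the precession rate from the torque and spin angular momentum, and checks that the rate and period expressions agree:

```python
import math

# Illustrative gyroscope parameters (assumed, for demonstration only)
I_s = 0.002        # moment of inertia about the spin axis, kg*m^2
omega_s = 100.0    # spin angular velocity, rad/s
m, g, r = 0.5, 9.8, 0.04   # mass (kg), gravity (m/s^2), lever arm (m)

torque = m * g * r                  # gravitational torque, N*m
omega_p = torque / (I_s * omega_s)  # precession angular velocity, rad/s
T_s = 2 * math.pi / omega_s         # spin period, s
T_p = 4 * math.pi**2 * I_s / (torque * T_s)  # precession period, s

print(omega_p)                        # about 0.98 rad/s for these numbers
print(T_p * omega_p / (2 * math.pi))  # ~1.0: the two expressions are consistent
```

Note that a faster spin (larger ω_s) gives a slower precession, which is why a vigorously spun top wanders only gradually.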
There is a non-mathematical way of visualizing the cause of gyroscopic precession. The behavior of spinning objects simply obeys the law of inertia by resisting any change in direction. If a force is applied to the object to induce a change in the orientation of the spin axis, the object behaves as if that force was applied 90 degrees ahead, in the direction of rotation. Here is why: A solid object can be thought of as an assembly of individual molecules. If the object is spinning, each molecule's direction of travel constantly changes as that molecule revolves around the object's spin axis. When a force is applied, molecules are forced into a new change of direction at places during their path around the object's axis. This new change in direction is resisted by inertia.
Imagine the object to be a spinning bicycle wheel, held at the axle in the hands of a subject. The wheel is spinning clock-wise as seen from a viewer to the subject’s right. Clock positions on the wheel are given relative to this viewer. As the wheel spins, the molecules comprising it are travelling vertically downward the instant they pass the 3 o'clock position, horizontally to the left the instant they pass 6 o'clock, vertically upward at 9 o'clock, and horizontally right at 12 o'clock. Between these positions, each molecule travels a combination of these directions, which should be kept in mind as you read ahead. If the viewer applies a force to the wheel at the 3 o'clock position, the molecules at that location are not being forced to change direction; they still travel vertically downward, unaffected by the force. The same goes for the molecules at 9 o'clock; they are still travelling vertically upward, unaffected by the force that was applied. But, molecules at 6 and 12 o'clock ARE being "told" to change direction. At 6 o'clock, molecules are forced to veer toward the viewer. At the same time, molecules that are passing 12 o'clock are being forced to veer away from the viewer. The inertia of those molecules resists this change in direction. The result is that they apply an equal and opposite force in response. At 6 o'clock, molecules exert a push directly away from the viewer. Molecules at 12 o'clock push directly toward the viewer. This all happens instantaneously as the force is applied at 3 o'clock. This makes the wheel as a whole tilt toward the viewer. Thus, when the force was applied at 3 o'clock, the wheel behaved as if the force was applied at 6 o'clock--90 degrees ahead in the direction of rotation.
Precession causes another peculiar behavior for spinning objects such as the wheel in this scenario. If the subject holding the wheel removes one hand from the axle, the wheel will remain upright, supported from only one side. However, it will immediately take on an additional motion; it will begin to rotate about a vertical axis, pivoting at the point of support as it continues its axial spin. If the wheel was not spinning, it would topple over and fall if one hand was removed. The initial motion of the wheel beginning to topple over is equivalent to applying a force to it at 12 o'clock in the direction of the unsupported side. When the wheel is spinning, the sudden lack of support at one end of the axle is again equivalent to this force. So instead of toppling over, the wheel behaves as if the force was applied at 3 or 9 o’clock, depending on the direction of spin and which hand was removed. This causes the wheel to begin pivoting at the point of support while remaining upright.
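The behavior described in the last two paragraphs follows from the vector rule dL = τ dt: the torque adds to the spin angular momentum, so the axle swings at right angles to the applied force instead of toppling. A minimal sketch (Python; the magnitudes are illustrative):

```python
# Spin angular momentum L along +x (the wheel's horizontal axle);
# gravity acting on the unsupported end produces a torque about +y.
L = [10.0, 0.0, 0.0]     # kg*m^2/s, along the axle
tau = [0.0, 2.0, 0.0]    # N*m, torque from gravity about the support point
dt = 0.1                 # s

# dL = tau * dt: update each component of L
L_new = [Li + ti * dt for Li, ti in zip(L, tau)]
print(L_new)  # [10.0, 0.2, 0.0]

# The z-component stays zero: the axle swings horizontally (precession)
# rather than dropping, just as the unsupported bicycle wheel does.
```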
The special and general theories of relativity give three types of corrections to the Newtonian precession, of a gyroscope near a large mass such as the earth, described above. They are:
- Thomas precession a special relativistic correction accounting for the observer's being in a rotating non-inertial frame.
- de Sitter precession a general relativistic correction accounting for the Schwarzschild metric of curved space near a large non-rotating mass.
- Lense-Thirring precession a general relativistic correction accounting for the frame dragging by the Kerr metric of curved space near a large rotating mass.
In astronomy, precession refers to any of several gravity-induced, slow and continuous changes in an astronomical body's rotational axis or orbital path. Precession of the equinoxes, perihelion precession, changes in the tilt of the Earth's axis to its orbit, and the eccentricity of its orbit over tens of thousands of years are all important parts of the astronomical theory of ice ages.
Axial precession (precession of the equinoxes)
Axial precession is the movement of the rotational axis of an astronomical body, whereby the axis slowly traces out a cone. In the case of Earth, this type of precession is also known as the precession of the equinoxes, lunisolar precession, or precession of the equator. Earth goes through one such complete precessional cycle in a period of approximately 26,000 years or 1° every 72 years, during which the positions of stars will slowly change in both equatorial coordinates and ecliptic longitude. Over this cycle, Earth's north axial pole moves from where it is now, within 1° of Polaris, in a circle around the ecliptic pole, with an angular radius of about 23.5 degrees.
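The quoted figures are easy to cross-check against one another; a short sketch (Python, using the approximate 26,000-year cycle from above):

```python
cycle_years = 26_000                   # approximate length of one full precessional cycle
deg_per_year = 360.0 / cycle_years     # degrees of precession per year
years_per_degree = cycle_years / 360.0

print(round(years_per_degree))         # 72 -- i.e. about 1 degree every 72 years
print(round(deg_per_year * 3600, 1))   # about 50 arcseconds of precession per year
```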
Hipparchus is the earliest known astronomer to recognize and assess the precession of the equinoxes at about 1° per century (which is not far from the actual value for antiquity, 1.38°). The precession of Earth's axis was later explained by Newtonian physics. Being an oblate spheroid, the Earth has a nonspherical shape, bulging outward at the equator. The gravitational tidal forces of the Moon and Sun apply torque to the equator, attempting to pull the equatorial bulge into the plane of the ecliptic, but instead causing it to precess. The torque exerted by the planets, particularly Jupiter, also plays a role.
The orbit of a planet around the Sun is not really an ellipse but a flower-petal shape because the major axis of each planet's elliptical orbit also precesses within its orbital plane, partly in response to perturbations in the form of the changing gravitational forces exerted by other planets. This is called perihelion precession or apsidal precession.
Discrepancies between the observed perihelion precession rate of the planet Mercury and that predicted by classical mechanics were prominent among the forms of experimental evidence leading to the acceptance of Einstein's Theory of Relativity (in particular, his General Theory of Relativity), which accurately predicted the anomalies.
- Boal, David (2001). "Lecture 26 - Torque-free rotation - body-fixed axes". Retrieved 2008-09-17.
- DIO 9.1 ‡3
- Bradt, Hale (2007). Astronomy Methods. Cambridge University Press. p. 66. ISBN 978 0 521 53551 9.
- Max Born (1924), Einstein's Theory of Relativity (The 1962 Dover edition, page 348 lists a table documenting the observed and calculated values for the precession of the perihelion of Mercury, Venus, and Earth.)
- An even larger value for a precession has been found, for a black hole in orbit around a much more massive black hole, amounting to 39 degrees each orbit.
Warming Affects Ecosystems, Not Just Biodiversity
Ecosystems perform important tasks – like nutrient cycling, breakdown of waste and carbon storage – on which humans depend, so it's important we understand how climate change might affect them.
Researchers measured the effect of a 4-degree rise in temperature on communities of benthic organisms in fresh-water bodies – creatures that live at the 'bottom' of the ecosystem.
They found that the number of species in the community didn't change, but the balance of small and large organisms shifted significantly, and this in turn affected how efficiently organic material broke down in the water.
Lots of studies have looked into the potential ecological effects of global warming, but the relationship between community structure and ecosystem functioning in the context of climate change is not so well understood.
A team of researchers, led by Dr. Gabriel Yvon-Durocher of Queen Mary, University of London, wanted to see how warming might affect the structure of benthic communities and what impact that would have on ecosystem functioning. They established such communities in a series of outdoor tanks – designed to mimic shallow lakes – then raised the temperature of half the tanks by four degrees.
After allowing the communities a year to develop, PhD student Matteo Dossena sampled the experiment, once in April and again in October. He identified and weighed nearly 20,000 organisms ranging from micro-organisms through to invertebrates like dragonflies, whose larvae live in the sediments at the bottom of the water.
The study revealed that the diversity of species in the community was unaffected by warming, but the relative number of small versus large organisms was strongly affected by the temperature change.
The researchers also observed a large seasonal difference in the effect of warming on community structure. In the spring, there was a big decline in the number of larger organisms, while in the autumn there were relatively more of the larger organisms in the warmed tanks. In the tanks that hadn't been warmed, the size structure of the community stayed the same year-round.
"The effect of warming on the seasonality of community size structure was unexpected," says Yvon-Durocher. "From similar studies on plankton we were expecting to see an increase in smaller organisms at higher temperatures, but the marked seasonal change in response to warming was a surprise."
The electrons and antiprotons travel side by side in one of the straight sections of the Recycler. The electrons absorb energy from the antiprotons shrinking the size and spread of the antiprotons and, just like a cooled gas, the antiprotons becomes denser. This allows the Recycler to store more antiprotons and then, when transferred, increases the luminosity in the Tevatron.
|last modified 9/9/2004 email Fermilab|
[Physics FAQ] - [Copyright]
Updated May 1996 by PEG (thanks to Colin Naturman).
Updated August 1993 by SIC.
Original by John Blanton.
In 1935 Albert Einstein and two colleagues, Boris Podolsky and Nathan Rosen (EPR) developed a thought experiment to demonstrate what they felt was a lack of completeness in quantum mechanics. This so-called "EPR Paradox" has led to much subsequent, and still ongoing, research. This article is an introduction to EPR, Bell's Inequality, and the real experiments that have attempted to address the interesting issues raised by this discussion.
One of the principal features of quantum mechanics is that not all the classical physical observables of a system can be simultaneously well defined with unlimited precision, even in principle. Instead, there may be several sets of observables that give qualitatively different, but nonetheless complete (maximal possible), descriptions of a quantum mechanical system. These sets are sets of "good quantum numbers," and are also known as "maximal sets of commuting observables." Observables from different sets are "noncommuting observables".
A well known example is position and momentum. You can put a subatomic particle into a state of well-defined momentum, but then the value of its position is completely ill defined. This is not a matter of an inability to measure position to some accuracy; rather, it's an intrinsic property of the particle, no matter how good our measuring apparatus is. Conversely, you can put a particle in a definite position, but then the value of its momentum is completely ill defined. You can also create states of intermediate "knowledge" of both observables: if you confine the particle to some arbitrarily large region of space, you can define the value of its momentum more and more precisely. But the particle can never have well-defined values of both position and momentum at the same time. When physicists speak of how much "knowledge" they have of two noncommuting observables in quantum mechanics, they don't mean that those observables both have well-defined values that are not quite known; rather, they mean the two observables do not have completely well-defined values.
(Technically speaking, the situation is a little more complicated. Even for observables that don't commute, it is sometimes possible for both to have well-defined values. Such subtleties are very important to those who examine the derivation of Bell's Inequality in great detail in order to find hidden assumptions. For the purposes of this short article, we'll overlook these finer points.)
Position and momentum are continuous observables. But the same situation can arise for discrete observables, such as spin. The quantum mechanical spin of a particle along each of the three space axes is a set of mutually noncommuting observables. You can only know the spin along one axis at a time. A proton with spin "up" along the x-axis has an undefined spin value along the y and z axes. You cannot simultaneously measure the x and y spin projections of a proton. EPR sought to demonstrate that this phenomenon could be exploited to construct an experiment that would demonstrate a paradox that they believed was inherent in the quantum mechanical description of the world.
They imagined two physical systems that are allowed to interact initially so that they will subsequently be defined by a single quantum mechanical state. (For simplicity, imagine a simple physical realization of this idea—a neutral pion at rest in your lab, which decays into a pair of back-to-back photons. The pair of photons is described by a single two-particle wave function.) Once separated, the two systems (read: photons) are still described by the same wave function, and a measurement of one observable of the first system will determine the measurement of the corresponding observable of the second system. (Example: the neutral pion is a scalar particle—it has zero angular momentum. So the two photons must speed off in opposite directions with opposite spin. If photon 1 is found to have spin up along the x-axis, then photon 2 must have spin down along the x-axis, since the total angular momentum of the final-state, two-photon, system must be the same as the angular momentum of the initial state, a single neutral pion. You know the spin of photon 2 even without measuring it.) Likewise, the measurement of another observable of the first system will determine the measurement of the corresponding observable of the second system, even though the systems are no longer physically linked in the traditional sense of local coupling.
(By "local" is meant that influences between the particles must travel in such a way that they pass through space continuously; i.e. the simultaneous disappearance of some quantity in one place cannot be balanced by its appearance somewhere else if that quantity didn't travel, in some sense, across the space in between. In particular, this influence cannot travel faster than light, in order to preserve relativity theory.)
QM creates the puzzling situation in which the first measurement of one system should "poison" the first measurement of the other system, no matter what the distance between them. (In one commonly studied interpretation, the mechanism by which this proceeds is "instantaneous collapse of the wave function". But the rules of QM do not require this interpretation, and several other perfectly valid interpretations exist.) One could imagine the two measurements were so far apart in space that special relativity would prohibit any influence of one measurement over the other. For example, after the neutral-pion decay, we can wait until the two photons are light years apart, and then "simultaneously" measure the x-spin of the photons. QM suggests that if say the measurement of photon 1's x-spin happens first, then this measurement must instantaneously force photon 2 into a state of well-defined x-spin, even though it is light years away from photon 1.
How do we reconcile the fact that photon 2 "knows" that the x-spin of photon 1 has been measured, even though they are separated by light years of space and far too little time has passed for information to have travelled to it according to the rules of special relativity? There are basically two choices. We can accept the postulates of QM as a fact of life, in spite of its seemingly uncomfortable coexistence with special relativity, or we can postulate that QM is not complete: that there was more information available for the description of the two-particle system at the time it was created, but that we didn't know that information, perhaps because it cannot be known in principle, or perhaps because QM is currently incomplete.
So, EPR postulated that the existence of such "hidden variables", some currently unknown properties, of the systems should account for the discrepancy. Their claim was that QM theory is incomplete: it does not completely describe physical reality. System II knows all about System I long before the scientist measures any of the observables, thereby supposedly consigning the other noncommuting observables to obscurity. Furthermore, they claimed that the hidden variables would be local, so that no instantaneous action at a distance would be necessary. Niels Bohr, one of the founders of QM, held the opposite view that there were no hidden variables. (His interpretation is known as the "Copenhagen Interpretation" of QM.)
In 1964 John Bell proposed a mechanism to test for the existence of these hidden variables, and he developed his famous inequality as the basis for such a test. He showed that if the inequality were ever not satisfied, then it would be impossible to have a local hidden variable theory that accounted for the spin experiment.
Using the example of two photons configured in the singlet state, consider this: in the hidden variable theory, after separation, each photon will have spin values for each of the three axes of space, and each spin will have one of two values; call them "+" and "−". Call the axes x, y, z, and call the spin on the x-axis x+ if it is "+" on that axis; otherwise call it x−. Use similar definitions for the other two axes.
Now perform the experiment. Measure the spin on one axis of one photon and the spin in another axis of the other photon. If EPR were correct, each photon will simultaneously have properties for spin in each of axes x, y and z.
Next, look at the statistics. Perform the measurements with a number of sets of photons. Use the symbol N(x+, y−) to designate the words "the number of photons with x+ and y−". Similarly for N(x+, y+), N(y−, z+), etc. Also use the designation N(x+, y−, z+) to mean "the number of photons with x+, y− and z+", and so on. It's easy to demonstrate that for a set of photons
(1) N(x+, y−) = N(x+, y−, z+) + N(x+, y−, z−)
because the z+ and z− exhaust all possibilities. You can make this claim if these measurements are connected to some real properties of the photons.
Let n[x+, y+] be the designation for "the number of measurements of pairs of photons in which the first photon measured x+, and the second photon measured y+". Use a similar designation for the other possible results. This is necessary because this is all that it is possible to measure. You can't measure both x and y for the same photon. Bell demonstrated that in an actual experiment, if (1) is true (indicating real properties), then the following must be true:
(2) n[x+, y+] <= n[x+, z+] + n[y−, z−].
Additional inequality relations can be written by just making the appropriate permutations of the letters x, y and z and the two signs. This is Bell's Inequality, and it is proved to be true if there are real (perhaps hidden) variables to account for the measurements.
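Inequality (2) can be checked numerically. The sketch below (Python; the pair count, detector model, and random seed are illustrative assumptions, not part of the original derivation) simulates photon pairs carrying definite, perfectly anti-correlated spin values along all three axes, which is exactly what a local hidden-variable theory supposes, and confirms that the resulting coincidence counts obey the inequality. Quantum mechanics predicts correlations that violate such inequalities for suitably chosen detector angles, which is what experiments of the Aspect type test.

```python
import random

def bell_counts(n_pairs, seed=1):
    """Local hidden-variable model: each photon pair carries definite spin
    values (+1 or -1) along x, y and z, with photon 2 perfectly
    anti-correlated with photon 1, as in the singlet-state example."""
    rng = random.Random(seed)
    n = {}  # (axis1, result1, axis2, result2) -> coincidence count
    for _ in range(n_pairs):
        spin1 = {a: rng.choice((+1, -1)) for a in "xyz"}  # hidden variables
        a1 = rng.choice("xyz")   # detector 1 axis, chosen at random
        a2 = rng.choice("xyz")   # detector 2 axis, chosen at random
        key = (a1, spin1[a1], a2, -spin1[a2])  # photon 2 gives the opposite sign
        n[key] = n.get(key, 0) + 1
    return n

n = bell_counts(90_000)
lhs = n.get(("x", +1, "y", +1), 0)
rhs = n.get(("x", +1, "z", +1), 0) + n.get(("y", -1, "z", -1), 0)
print(lhs <= rhs)  # True: hidden-variable coincidence counts obey inequality (2)
```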
At the time Bell's result first became known, the experimental record was reviewed to see if any known results provided evidence against locality. None did. Thus an effort began to develop tests of Bell's Inequality. A series of experiments was conducted by Aspect ending with one in which polarizer angles were changed while the photons were in flight. This was widely regarded at the time as being a reasonably conclusive experiment that confirmed the predictions of QM.
Three years later, Franson published a paper showing that the timing constraints in this experiment were not adequate to confirm that locality was violated. Aspect measured the time delays between detections of photon pairs. The critical time delay is that between when a polarizer angle is changed and when this affects the statistics of detecting photon pairs. Aspect estimated this time based on the speed of a photon and the distance between the polarizers and the detectors. Quantum mechanics does not allow making assumptions about where a particle is between detections. We cannot know when a particle traverses a polarizer unless we detect the particle at the polarizer.
Experimental tests of Bell's Inequality are ongoing, but none has yet fully addressed the issue raised by Franson. In addition there is an issue of detector efficiency. By postulating new laws of physics, one can get the expected correlations without any nonlocal effects unless the detectors are close to 90% efficient. The importance of these issues is a matter of judgment.
The subject is alive theoretically as well. Eberhard and later Fine uncovered further subtleties in Bell's argument. Some physicists argue that it may be possible to construct a local theory that does not respect certain assumptions in the derivation of Bell's Inequality. The subject is not yet closed, and may yet provide more interesting insights into the subtleties of quantum mechanics.
The explain_open_or_die function is used to call the open(2) system call. On failure an explanation will be printed to stderr, obtained from the explain_open(3) function, and then the process terminates by calling exit(EXIT_FAILURE).
This function is intended to be used in a fashion similar to the following example:

    int fd = explain_open_or_die(pathname, flags, mode);
pathname: The pathname, exactly as to be passed to the open(2) system call.
flags: The flags, exactly as to be passed to the open(2) system call.
mode: The mode, exactly as to be passed to the open(2) system call.
- This function only returns on success. On failure, it prints an explanation and exits; it does not return.
Definition at line 27 of file open_or_die.c.
pauses a StopWatch
subroutine pause_watch (watch, clock, err)
Pauses the specified clocks of the specified watches. This is useful when you want to temporarily stop the clocks to avoid timing a small segment of code, for example printed output or graphics, but do not know which watches or clocks are running. When pause_watch is called, the information about which of the clocks were running is maintained, so that a subsequent call to end_pause_watch will restart only those clocks that were running. Watches that are paused can not be started, stopped, reset, or paused again until they are resumed by end_pause_watch. However, they can be read and printed.
One or more watches must be specified. The argument watch can be a single variable of type watchtype (see stopwatch(3)) to pause one watch, an array of type watchtype to pause several watches, or a variable of type watchgroup (see stopwatch(3)) to pause the watches in a group.
The optional argument clock specifies which clocks to pause on the specified watch(es). If omitted, the current default clocks (see option_stopwatch(3)) are paused. If present, clock must be a character string containing 'cpu', 'user', 'sys', or 'wall', or an array of such character strings.
If present, the optional intent OUT integer argument err returns a status code. The code is the sum of the values listed below.
An error message will be printed to a specified I/O unit (unit 6 by default) if print_errors is TRUE (default is TRUE). The error message contains more detail about the cause of the error than can be obtained from just the status code, so you should set print_errors to TRUE if you have trouble determining the cause of the error.
If abort_errors is TRUE (default is FALSE), the program will terminate on an error condition. Otherwise, the program will continue execution but the watch(es) will not be paused.
See option_stopwatch(3) for further information on print_errors, abort_errors and I/O units.
The relevant status codes and messages are:
In addition to the run time diagnostics generated by StopWatch, the following problem may arise:
type (watchtype) w1, w2(3)
type (watchgroup) g1
integer errcode
call pause_watch(w1)
call pause_watch(w2, err=errcode)
call pause_watch(g1, (/'cpu ', 'wall'/), errcode)
The first call pauses the default clocks on a single watch. The second call pauses the default clocks on three watches given as an array and returns a status code. The third call pauses the cpu and wall clocks on the watches in the group g1, and returns a status code.
It cannot be determined whether or not a watch variable or watch group has been created (passed as an argument to create_watch or create_watchgroup). If a watch or watch group that has never been created is passed into pause_watch, it might generate a Fortran error due to passing a pointer with undefined association status to the Fortran intrinsic function associated. Some compilers will allow this as an extension to the Fortran 90 standard and recognize that the pointer is not associated, in which case the ``Watch needs to be created'' error message is generated.
The .NET Framework provides the ability to provide custom behavior for a type of component while it is in design mode. Designers are classes that provide logic that can adjust the appearance or behavior of a type at design time. All designers implement the IDesigner interface. Designers are associated with a type or type member through a DesignerAttribute. A designer can perform tasks at design time after a component or control with which a designer is associated has been created.
Designers can be built to perform a variety of types of tasks in design mode. Designers can:
Alter and extend the behavior or appearance of components and controls in design mode.
Perform custom initialization for a component in design mode.
Access design-time services and configure and create components within a project.
Add menu items to the shortcut menu of a component.
Adjust the attributes, events, and properties exposed by a component with which the designer is associated.
Designers can serve an important role in assisting with the arrangement and configuration of components, or to enable proper behavior for a component in design mode that otherwise depends on services or interfaces available only at run time.
Some controls may require visual cues in design mode to make configuration easier. For example, a Panel object might not have a visible border at run time. Without a border, the panel is not visible on a form with the same background color. Therefore, the designer for the Panel object draws a dotted-line border around the panel.
The System.ComponentModel.Design namespace provides the basic interfaces that developers can use to build design-time support.
Wednesday, March 19, 2008
INL Advanced Test Reactor test site
An earlier post described the start of testing of the multiple-layer-coated fuel grains that form the billiard-ball-sized fuel pebbles in the pebble bed reactor. Idaho National Laboratory used its Advanced Test Reactor to expose these test fuel grains to radiation levels much higher than in an operational PBR, thus simulating years of exposure in a few months. The multiple, coated layers of silicon carbide and ceramic graphite contain the radioactive products of fission. These tested fuel grains have not failed, at the level of 9% burn-up of the uranium within. Tests will continue to see if a 12-14% burnup can be achieved by year-end.
Tuesday, March 4, 2008
Will the public rethink nuclear power?
I have not posted anything to this blog about pebble bed reactors for months. I have been busy developing a way to educate the general public about the broader issues of nuclear power.
Most people to whom I have presented the pebble bed reactor have been encouraging and supportive. The most common query I receive is "what about the waste?".
I now think public acceptance of nuclear power will depend on reprocessing to burn up the most hazardous radioactive waste. Also, reprocessing will wondrously provide a century of power just from the existing spent fuel inventories at nuclear power plant sites. Not only can non-fissile U-238 be bred into plutonium fuel, but abundant thorium can also be bred into U-233 fuel. We can have fuel that meets all our energy needs for thousands of years and waste that decays in a few hundred.
I have tried to rethink the advantages and disadvantages of pebble bed reactor technology, which I summarize below.
- Passive safety makes core meltdown impossible.
- Modularity allows smaller plants, with less capital investment risk, and distributed siting.
- Small size permits factory mass production and on-site assembly.
- High-temperature, air-cooled reactor needs no water for cooling.
- 50% efficiency means 2/3 the fuel use.
- High temperature permits direct hydrogen production.
- Multi-layer pebbles containing all reaction waste products are ready for burial.
- Technology learning curve not yet fully traversed.
- Licensing in the US will require new NRC skills and knowledge.
- US needs more nuclear power now, from already approved designs.
- Fuel supply will be strained at the proposed one-unit-per-week installation schedule.
- Reprocessing fuel in the hard pebbles will be difficult.
DARTMOUTH COLLEGE ILEAD
ENERGY POLICY AND ENVIRONMENTAL CHOICES:
RETHINKING NUCLEAR POWER
This is an 8-week course developed for the Dartmouth Ilead continuing education department. The course meets 2 hours a week beginning March 31, 2008, at Dartmouth College in Hanover NH. More information is available at http://rethinkingnuclearpower.googlepages.com.
The PowerPoint slides and audio of the talks will be posted after each session.
Energy units, uses, sources
Social benefits, demand growth, conservation, developing world
Periodic table, nuclear fission, nuclear power plants
Chernobyl, Three Mile Island
Radiation, health, safety, waste
Nuclear weapons proliferation
3. Environmental choices
Oil and gas depletion
Global warming, mining, coal, oil shale, tar sands
Wind, hydro, solar
Corn, sugarcane, cellulosic ethanol, biodiesel
Uranium and thorium availability
4. Current technology
Submarines and ships
Operating nuclear power plants, industry structure, NRC
Current products: GE, Westinghouse, Toshiba, Areva
5. Nuclear power plant visit
6. New technologies
High temperature gas reactors, liquid metal reactors
Hydrogen production, hydrocarbon synthesis, coal-to-liquid, electric cars
7. Global Nuclear Energy Partnership
Integral fast reactor, waste reprocessing
Fuel supply for non-nuclear nations
Current public awareness, funding, activities
Antinuclear activism, Union of Concerned Scientists, Caldicott
Public opinion, NEI, environmentalist shifts
Congressional and presidential candidates' views
Hydrogen fusion requires two hydrogen nuclei to get close enough (typically a few fm) to fuse. Much of the problem of creating a fusion reactor is overcoming the Coulomb repulsion between a pair of nuclei - hence the millions of degrees needed for Maxwellian distributions, and the Bremsstrahlung losses in inertial confinement.
If we could align the paths of two neutral Hydrogen atoms (of whichever isotopes), what would the repulsion look like between them as they approach collision? Obviously at long range there is negligible force as both are neutral. But as they approach each other, what happens to the electron distribution?
Intuitively, I expect a bonding cloud to form between the nuclei, and antibonding clouds beyond them. This would presumably attract at first until reaching the usual Hydrogen covalent bond length, after which the internuclear repulsion would increasingly dominate.
But how does that compare to bare ionic collision? How much lower is the potential barrier?
Obviously if it was significantly lower and we could somehow engineer the collision to achieve fusion, the cross section would be larger than ionic fusion, but how much?
Or would the barrier be just as high over the final few femtometers? | <urn:uuid:f717eb18-d602-49e2-b576-051b52d01569> | 3.203125 | 251 | Q&A Forum | Science & Tech. | 40.866778 |
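For scale, the bare-ion barrier being asked about can be estimated from the point-charge Coulomb potential V(r) = e²/(4πε₀r). A minimal Python sketch (point charges only; screening, tunneling, and nuclear forces are all ignored):

```python
import math

# Point-charge Coulomb energy V(r) = e^2 / (4*pi*eps0*r) for two protons,
# evaluated at a nuclear separation and at the H2 covalent bond length.
# Illustrative scales only: screening and nuclear physics are ignored.

E_CHARGE = 1.602176634e-19   # elementary charge, C
EPS0 = 8.8541878128e-12      # vacuum permittivity, F/m
EV = 1.602176634e-19         # joules per electronvolt

def coulomb_ev(r_m):
    """Coulomb energy of two unit charges separated by r_m meters, in eV."""
    return E_CHARGE**2 / (4 * math.pi * EPS0 * r_m) / EV

barrier_keV = coulomb_ev(3e-15) / 1e3   # ~3 fm, roughly where fusion occurs
bond_eV = coulomb_ev(0.74e-10)          # 0.74 angstrom, the H2 bond length

print(f"bare barrier near 3 fm: {barrier_keV:.0f} keV")
print(f"bare nuclear repulsion at the H2 bond length: {bond_eV:.1f} eV")
```

On these numbers the energies in play at angstrom scales are tens of eV, while the barrier itself is hundreds of keV, which suggests why electron screening can only nibble at the edges of the problem.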
Cenomanian climate zones
New thoughts about the cretaceous climate and oceans
1. William W. Hay (a)
2. Sascha Floegel (b)
a Department of Geological Sciences, University of Colorado at Boulder, 2045 Windcliff Dr. Estes Park, CO 80517, USA
b. GEOMAR | Helmholtz-Zentrum für Ozeanforschung Kiel, Gebäude Ostufer, Wischhofstr. 1–3, D-24148 Kiel, Germany
Several new discoveries suggest that the climate of the Cretaceous may have been more different from that of today than has been previously supposed. Detailed maps of climate sensitive fossils and sediments compiled by Nicolai Chumakov and his colleagues in Russia indicate widespread aridity in the equatorial region during the Early Cretaceous. The very warm ocean temperatures postulated for the Mid-Cretaceous by some authors would likely have resulted in unacceptable heat stress for land plants at those latitudes, however, and may be flawed.
Seasonal reversals of the atmospheric pressure systems in the Polar Regions are an oversimplification. However, seasonal pressure differences between 30° and 60° latitude become quite pronounced, being more than 25 hPa in winter and less than 10 hPa in summer. This results in inconstant winds, affecting the development of the gyre-limiting frontal systems that control modern ocean circulation. The idea of Hasegawa et al. (2011), who suggest a drastic reduction in the size of the Hadley cells during the warm Cretaceous greenhouse, is supported by several numerical climate simulations. Rapid contraction of the Hadley cell such that its sinking dry air occurs at 15° N latitude rather than 30° N is proposed to occur at a threshold of 1,000 ppmv CO2 in the atmosphere. This change will probably be reached in the next century.
There is a really, really important chunk here about RUBISCO: the equatorial terrestrial environment may have been a wasteland and lifeless at times. Huber pointed this out about the PETM as well. It is entirely possible we could get to that point with global warming/climate change. The above, btw, does not address, as far as I can see, the evidence of ice in Australia during the Early Cretaceous.
|Cerambycidae ~ Longhorn Beetles|
page 1 page 2 page 3
The members of this family are named for their long antennae, sometimes exceedingly so. The antennae of males are usually longer than those of females, and often the antennae are attached to the head in a strange notch at the front of the eye. Sometimes the notch is so deep that it splits the eye in half! In other cases, the antennae are just very close to normally shaped eyes.
Longhorn beetle larvae are called round-headed borers and most feed on dead and decaying wood. Some species feed on living plants. They tunnel inside the wood and so are rarely seen, only emerging as adults. Most species have a limited flight time during which they may be found.
Adult longhorn beetles feed on flower nectar, sap, or leaves and bark. They tend to be strong flyers and are sometimes attracted to lights at night. While individuals of some species are almost always the same size, at times there can be rather pronounced variation, both within the same gender and also between males and females. Males can be much smaller than females.
Beyond these generalizations, there is great diversity in this family. The size ranges from quite tiny to very large, and there are a variety of body shapes. I've never found any particular species in great numbers; usually they are solitary, mating pairs, or at most a few gathered around a good feeding or breeding spot.
For general identification purposes, it helps to divide this family into its well-delineated subfamilies. These groupings are so useful that they are given common names. Many species in our area are in the round-necked longhorn subfamily (Cerambycinae). Although quite diverse, these beetles all have protruding jaws, rather than having them tucked underneath the head. When viewed from above, it is often possible to see the jaws clearly at the front point of the face.
Several longhorns are similar in shape and color: sort of long and brown. One that I've seen several times right around our house is Eburia mutica. This is a medium sized beetle, about 15 mm long, and has a velvety texture with small white spots on the elytra (wing covers). The spots are a bit variable, especially those in the center of the body. Although individuals in our area seem to always have very small dots, I've seen images of this species where the same marks are larger and even doubled so that each dot is instead a pair of oval spots. The color of E. mutica is reddish brown, with a dusting of gray. The antennae are really long and there are small rounded black bumps on the pronotum.
About the same size or a bit smaller than the previous species, Anelaphus moestus is a very dark brown color with no markings. The antennae are not quite so long either. The best distinguishing feature is the covering of hair, especially on the elytra. This is a beetle that has ended up in our house a couple of times, no doubt attracted by the lights. At first glance it resembles a nondescript click beetle but there are plenty of obvious differences between the two families.
Another beetle in the same size range, also brown, is Elaphidion linsleyi. Distinguishing characteristics on this one include spikes, on the antennal segments as well as at the rear end of the elytra, and a blotchy look caused by the grayish coating that covers an otherwise sort of shiny and dimpled body.
One slender brown beetle that has some very distinctive features is Styloxus fulleri. This is a smaller species, at about 12-13 mm in length. The elytra do not completely cover the other pair of wings, but end about two thirds down the length of the body. The antennae are super long, with very long thin segments. The eyes are huge on a rather small head. I've only seen this species during November.
The Hickory Borer (Knulliana cincta), is an impressive beetle, reaching up to about 30 mm in length, although some individuals are smaller. The color is brown, with a pair of diagonal tan spots on the elytra, which are sometimes not all that clearly seen. The body is covered with grayish hair and there are spines sticking out of the rear of the elytra. There are also small spines on the sides of the pronotum. The legs are rather long and very slender.
Although there are longhorn beetles that are specifically called flower longhorns, there are also some round-necked borers that are likely to be found feeding on nectar. One that I've seen very occasionally is Batyle suturalis. This is a small (about 10-12 mm long) rust colored species with varying amounts of black on the body. The antennae are not terribly long and are black. The texture is shiny with many small dimples in the elytra, and the whole beetle is covered with sparse long hairs.
Longhorn beetles that feed at flowers often resemble wasps, both in movement and coloration. While the previous species matches the color of a paper wasp (Polistes carolina), a Callidium species I've seen is metallic blue-black, similar to several species of sphecid wasps. This small longhorn has beaded-looking antennae and thick femurs. The pronotum is rather flattened and wide.
A fairly common longhorn that shows up on flowers in the spring is Placosternus difficilis. This beetle is obviously a wasp mimic, with its yellow stripes on a black body. The legs and antennae are reddish. It is medium sized at about 15 mm in length.
During a very dry autumn, I saw several longhorns flying about that looked quite similar to the previous species. However, there were differences in pattern, and the fact that they were out of season, which caused me to look them up. The Painted Hickory Borer (Megacyllene caryae) looks amazingly like Placosternus difficilis. However, it has mostly white markings that differ slightly, as shown in the photos, with just a touch of yellow.
At the same time, there were a few more robust and definitely more yellow longhorns flying about as well. Although I could never get a photo of one, I suspect they were the Locust Borer (Megacyllene robiniae). I took the accompanying photo in Denver, as this is a very widespread and common species. Even if I am mistaken in thinking that M. robiniae occurs here (I usually only trust a good photo or close-up examination for identifications), it is helpful to see the three look-alikes side by side.
One excellent wasp mimic that I've not seen at flowers but have seen mating around its larval host, dead ash branches, is the Red-headed Ash Borer (Neoclytus acuminatus). These small (about 11-12 mm long), slender reddish beetles move about with the same jerky motions as wasps, and fly readily. Their antennae are not particularly long but their legs are.
Wasps are not the only hymenopterans that longhorn beetles can mimic; ants are another model. Species in the genus Euderces are convincing mimics of the acrobat ants (genus Crematogaster). Our most common species is also the smallest. At under 4 mm in length, Euderces reichei really does resemble an ant as it feeds at flowers in the spring. A second species, Euderces picipes, is similar but a bit larger (a whopping 5 mm long) and the white band near the center is angled more. This species seems less common and is slightly more elongate in shape than E. reichei. When dealing with insects this small, a single millimeter difference in size is actually quite easy to notice.
Longhorn beetles not only mimic wasps and ants, but also fireflies, which have toxic chemicals in their bodies and very recognizable warning colors: red/orange head and black body. Quite a number of other insects display this simple and effective color pattern, which probably works as long as the mimic is about the same size as a firefly. Stenosphenus dolosus is a rather infrequently seen longhorn. The body is very shiny, with the pronotum exceedingly so. The reddish front and dark rear probably give some protection to this conspicuous diurnal beetle.
Callimoxys sanguinicollis is a very interesting looking firefly mimic, with a dull and dimpled general appearance. The pronotum is orange and the rest of the body is black, except for the rear legs, which have yellow femurs that end in a black swollen part. The elytra are reduced, only reaching two thirds down the abdomen. The lower half of each wing cover is narrow and they do not connect in the center when folded.
One more flower-feeding longhorn that has the firefly colors is a Rhopalophora species. This slender beetle is about 8 mm long, has very thin antennae and legs with obvious swellings. It appears for a brief time in the spring and can be rather numerous on clustered flowers.
The longhorn with the shortest elytra I've seen is Molorchus bimaculatus. It is under 7 mm in length and the elytra only cover less than the front half of the abdomen. When a tiny insect like this is crawling about flowers, covered with pollen and showing obvious wings, it is hard at first glance to tell if it is a beetle or wasp.
Another very tiny longhorn is Obrium maculatum. The cryptic brown and tan markings of this beetle help it blend right in with leaves or plant debris. It is rather translucent and its 5 mm length makes it pretty difficult to see.
I got the first part, but I'm having problems with the second part.
Please help! Thanks in advance! =)
Three charges are arranged in a triangle as shown.
1) What is the net electrostatic force on the charge at the
origin? The Coulomb constant is 8.98755 × 10⁹ N · m²/C².
Answer in units of N. My answer = 9.886 × 10⁻⁶
2) What is the direction of this force (as an angle between -180°
and 180°, measured from the positive x-axis, with counterclockwise positive)?
Answer in units of degrees. | <urn:uuid:4b0c0790-ff50-455e-a098-6df07cbec10f> | 2.96875 | 131 | Q&A Forum | Science & Tech. | 89.992477 |
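Since the figure isn't reproduced here, the component values in the sketch below are placeholders; the point is the angle convention the problem asks for, which is exactly what `math.atan2` returns:

```python
import math

# Hypothetical net-force components in newtons (the actual triangle of
# charges isn't shown here, so these are stand-in values for illustration).
Fx, Fy = -3.0e-6, 4.0e-6

magnitude = math.hypot(Fx, Fy)           # |F| = sqrt(Fx^2 + Fy^2)

# atan2 gives an angle in (-180 deg, 180 deg] measured from the positive
# x-axis with counterclockwise positive -- the convention requested.
angle_deg = math.degrees(math.atan2(Fy, Fx))

print(f"|F| = {magnitude:.3e} N at {angle_deg:.2f} degrees")
```

Note that `atan2(Fy, Fx)` takes the y-component first; feeding plain `atan(Fy/Fx)` instead would put this second-quadrant force in the wrong quadrant.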
Lightning storms show us that there's plenty of electricity up there in the clouds, and scientists have been trying to harness it since Ben Franklin's famous kite experiment.
Now there may be a breakthrough. Research presented today at the annual meeting of the American Chemical Society has defined how atmospheric water vapor becomes electrically charged, and this could lead to devices that can create electricity from the air itself.
If successful, this would validate a hypothesis presented by Nikola Tesla in 1892 when he said "Ere many generations pass, our machinery will be driven by a power obtainable at any point of the universe." I'm just not sure if he realized it would take another 118 years for the world to catch up. | <urn:uuid:d39a39cf-d404-44fd-bbce-5d512a39184d> | 3.078125 | 144 | Truncated | Science & Tech. | 38.914143 |
Limit Examples (part 2) More limit examples
- OK, hopefully, my tool is working now.
- But anyway, so we were saying when x is equal to minus 0.001,
- so we're getting closer and closer to 0 from the negative
- side, f of x is equal to minus 1,000, right?
- You can just evaluate it yourself, right?
- And as you see, as x approaches 0 from the negative direction,
- we get larger and larger-- or I guess you could say smaller and
- smaller negative numbers, right?
- You get-- you know, if it's minus 0.0001, you'd get minus
- 10,000, and then minus 100,000, and then minus 1 million, you
- could imagine the closer and closer you get to zero.
- Similarly, when you go from the other direction, when you say
- what is-- when x is 0.01, there you get positive 100, right?
- When x is point-- the thing is frozen again-- when it's 0.001,
- you get positive 1,000.
- So as you see, as you approach 0 from the negative direction,
- you get larger and larger negative values, or I guess
- smaller and smaller negative values.
- And as you go from the positive direction, you get larger
- and larger values.
- Let me graph this just to give you a sense of what this graph
- looks like because this is actually a good graph to know
- what it looks like just generally.
- So let's say I have the x-axis.
- This is the y-axis.
- Change my color.
- So when x is a negative number, as x gets really, really,
- really negative, as x is like negative infinity, this
- is approaching zero, but it's still going to be a
- slightly negative number.
- And then as we see from what we drew, as we approach x is equal
- to 0, we asymptote, and we approach negative
- infinity, right?
- And similarly, from positive numbers, if you go out to
- the right really far, it approaches 0, but it's
- still going to be positive.
- And as we gets closer and closer to 0, it spikes up, and
- it goes to positive infinity.
- You never quite get x is equal to 0.
- So in this situation, you actually have as x approaches--
- so let me give you a different notation, which you'll
- probably see eventually.
- I might actually do a separate presentation on this.
- The limit as x approaches 0 from the positive direction,
- that's this notation here, of 1/x, right?
- So this is as x approaches 0 from the positive direction,
- from the right-hand side, well, this is equal to infinity.
- And then the limit as x-- this pen, this pen-- the limit as x
- approaches 0 from the negative side of 1/x.
- This notation just says the limit as I approach
- from the negative side.
- So as I approach x equal 0 from this direction, right, from
- this direction, what happens?
- Well, that is equal to minus infinity.
- So since I'm approaching a different value when I
- approach from one side or the other, this limit
- is actually undefined.
- I mean, we could say that from the positive side, it's
- positive infinity, or from the negative side, it's negative
- infinity, but they have to equal the same thing for
- this limit to be defined.
- So this is equal to undefined.
- So let's do another problem, and I think this should
- be interesting now.
- So let's say, just keeping that last problem we had in mind,
- what's the limit as x approaches 0 of 1/x squared?
- So in this situation, I'll draw the graph.
- That's my x-axis.
- That's my y-axis.
- So here, no matter what value we put into x, we get a
- positive value, right?
- Because you're going to square it.
- If you put minus-- you could actually-- oh, let me do it.
- It'll be instructive, I think.
- Once again, obviously you can't just put x equal to 0.
- You'll get 1/0, which is undefined.
- But let's say 1 over x squared.
- What does 1 over x squared evaluate to?
- So when x is 0.1, 0.1 squared is 0.01, so 1 over x squared is 100.
- Similarly, if I do minus 0.1, minus 0.1 squared is positive
- 0.01, so then 1 over that is still 100, right?
- So regardless of whether we put a negative or positive number
- here, we get a positive value.
- And similarly, if I put-- if we say x is 0.01, if you evaluate
- it, you'll get 10,000, and if we put minus 0.01, you'll get
- positive 10,000 as well, right?
- Because we square it.
- So in this graph, if you were to draw it, and if you have a
- graphing calculator, you should experiment, it
- looks something like this.
- I can see this dark blue.
- So from the negative side, it approaches infinity, right?
- You can see that.
- As we get to smaller and smaller-- as we get closer and
- closer to 0 from the negative side, it approaches infinity.
- As we go from the positive side-- these are actually
- symmetric, although I didn't draw it that symmetric-- it
- also approaches infinity.
- So this is a case in which the limit-- oh, that's
- not too bright.
- I don't know if you can see -- the limit as x approaches 0
- from the negative side of 1 over x squared is equal to
- infinity, and the limit as x approaches 0 from the positive
- side of 1 over x squared is also equal to infinity.
- So when you go from the left-hand side, it
- equals infinity, right?
- It goes to infinity as you approach 0.
- And as you go from the right-hand side, it
- also goes to infinity.
- And so the limit in general is equal to infinity.
- And this is why I got excited when I first started
- learning limits.
- Because for the first time, infinity is a legitimate answer
- to your problem, which, I don't know, on some metaphysical
- level got me kind of excited.
- But anyway, I will do more problems in the next
- presentation because you can never do enough limit problems.
- And in a couple of presentations, I actually give
- you the formal, kind of rigorous mathematical
- definition of the limits.
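The tables built up in the video are easy to reproduce. A short sketch evaluating both functions near zero:

```python
# Reproduce the video's tables: evaluate 1/x and 1/x^2 as x approaches 0.
for x in (0.1, 0.01, 0.001, -0.001, -0.01, -0.1):
    print(f"x = {x:>7}   1/x = {1/x:>9.1f}   1/x^2 = {1/x**2:>12.1f}")

# 1/x blows up to +infinity from the right and -infinity from the left,
# so its two-sided limit at 0 is undefined. 1/x^2 blows up to +infinity
# from both sides, so its limit at 0 is infinity.
right = 1 / 0.001                            # large positive
left = 1 / -0.001                            # large negative
both = [1 / x**2 for x in (0.001, -0.001)]   # both large positive
```

Squeezing x closer to zero makes the pattern more extreme, which is exactly the asymptotic behavior sketched in the graphs above.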
How Does a One-way Mirror Work?
One-way mirrors are designed to allow a view from one side.
CREDIT: InnervisionArt, Shutterstock
How can the good guys see through the mirror while the bad guy sees nothing but a shiny surface?
The mirror used in your favorite detective show has a thin layer of partially reflective coating. Only about 80 percent of the light is reflected, while 20 percent passes through. Importantly, the suspect’s room is about 10 times brighter than the spying detectives’ room, so the light that passes through the mirror is enough to give the detectives a good look at the suspect.
Because the detectives’ room is not as bright, the scant amount of light that passes through into the very bright room is not enough to create an image.
Michael Le Page, biology features editor
"Snow chaos: and still they claim it's global warming." "Snowmageddon delivers another blow to global warming." "The mini ice age starts here."
A few months ago, these were the kind of headlines that were appearing in newspapers and blogs. After very cold winter weather in many parts of the northern hemisphere, the notion of global warming was ripe for mockery. The family of senator Jim Inhofe - who called global warming "a hoax" - built an igloo in Washington DC with a sign saying "Al Gore's new home".
And now? The winter weather has given way to a series of extraordinary heatwaves.
According to meteorologist Jeff Masters, nine countries have recorded their hottest ever temperatures this year, from the 53.5 °C recorded on 26 May in Pakistan to the 44 °C recorded in Russia on 11 July.
If these records are officially confirmed, it will mean more national heat records have been set in one year than ever before. So should "global warmists" be crowing about how this record-breaking heat proves they were right all along?
No: the record-breaking heat does not "prove" global warming.
Just as extreme winter weather does not prove the world is cooling, so a few heatwaves do not prove the world is warming. After all, it's not hot everywhere: the southern cone of South America is currently enduring a cold snap that has killed dozens of people.
The various measures of average global temperatures, however, do suggest that surface temperatures are the hottest they have been since records began.
According to the US National Oceanic and Atmospheric Administration, for instance, June was the fourth consecutive warmest month on record. According to NASA, the average temperature over the past 12 months has been the hottest ever.
Hot weather can often be blamed on El Niño. During El Niño events, some of the vast amount of heat stored in the Pacific Ocean is transferred to the atmosphere, so the highest surface temperatures usually occur during El Niños.
But the latest El Niño was not especially strong. What's more, we are currently getting less heat from the sun than we have for decades. This means that it's the combination of a strong underlying global warming trend due to rising carbon dioxide, together with a moderate El Niño, that explains why the planet is so hot at the moment.
A La Niña has now begun - the opposite of El Niño. In a La Niña the Pacific Ocean soaks up atmospheric heat, lowering surface temperatures. If it's a strong La Niña, 2010 might not turn out to be the hottest calendar year on record. But the next time there is an El Niño, especially if it coincides with a high in solar activity, we are likely to see a lot more records shattered. | <urn:uuid:8260326e-5823-4d72-9ef5-fafb6fba259e> | 3.3125 | 576 | Nonfiction Writing | Science & Tech. | 49.599749 |
One In The Hand
Eggs are traditionally thought of as being very fragile, but in fact the physics behind their shape is astounding.
- raw egg
- plastic bag or glove (for the unconfident!)
Challenge audience members to break the egg just by squeezing it. Let them wrap the egg in a plastic bag or wear a glove if they're worried… Believe it or not, it can't be done!
How Does it Work?
The shape of an egg is actually one of the strongest designs possible. The curved structure means that applying pressure to any particular area actually spreads the force out over the entire egg. So just squeezing it won't cause it to break. Of course applying a very sharp force to one point WILL cause it to break – which is why we usually tap the egg on the side of a bowl to break it when cooking.
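That even spread can be put into rough numbers with the standard thin-shell result for a spherical pressure vessel: stress = pressure × radius / (2 × thickness). The egg dimensions and squeeze force below are assumed values for illustration only:

```python
# Thin-walled spherical shell under uniform pressure: membrane stress
# sigma = p * r / (2 * t). An even squeeze produces only modest stress;
# a point load (a tap, or a ring on a finger) doesn't follow this formula,
# which is why a concentrated force still breaks the egg.
# Dimensions below are rough assumptions for a hen's egg, not measurements.

radius = 0.022        # m (~22 mm effective radius)
thickness = 0.00035   # m (~0.35 mm shell)

def membrane_stress(pressure_pa):
    """Membrane stress in Pa for this shell geometry."""
    return pressure_pa * radius / (2 * thickness)

# A firm 50 N squeeze spread over ~10 cm^2 of palm contact:
contact_area = 10e-4                      # m^2
pressure = 50 / contact_area              # Pa
stress_mpa = membrane_stress(pressure) / 1e6

print(f"membrane stress from an even squeeze: {stress_mpa:.2f} MPa")
```

A couple of MPa is comfortably below the strength of shell material, which is why the squeeze in the hand fails to break it.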
Tips for Success
Ask your volunteers to remove any rings etc. before trying this trick – the sharp uneven force from such metal objects can cause the egg to break. Check your eggs for hairline fractures before attempting this trick – if there is any existing damage to the egg it won't work.
Did You Know?
The ornate and intricate arched doorways and ceilings in many old buildings aren’t just there for their aesthetic qualities. Arches are in fact one of the strongest building structures. In effect, every brick or piece of masonry within the arch is falling on all the others, distributing the weight evenly over the structure. | <urn:uuid:537358f1-6c02-4819-86da-68942d08931f> | 3.796875 | 309 | Tutorial | Science & Tech. | 62.688085 |
Line of infinite charge and a gaussian sphere
Construct a spherical gaussian surface centered on an infinite line of charge. Calculate the flux through the sphere and thereby show that it satisfies Gauss's law.
I know how I can do it for a cylinder, but a sphere?
I know that the flux will be zero at the two points where the wire (which spans one diameter of the sphere) pierces the surface,
but wouldn't I have to integrate over a big hemispherical surface and then multiply by two? Wouldn't that be tedious?
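The integral is less tedious than it looks, because E · n turns out to be constant over the sphere. A sketch (pure Python, midpoint-rule sum) that computes the flux from the geometry and compares it with Gauss's law, Φ = λ(2R)/ε₀, since the sphere encloses a length 2R of wire:

```python
import math

# Infinite line charge (density lam, C/m) along the z-axis; gaussian
# sphere of radius R centered on the line. Gauss's law: the sphere
# encloses a length 2R of wire, so flux = 2 * lam * R / eps0.
EPS0 = 8.8541878128e-12
lam = 1e-9    # arbitrary test value
R = 0.5

N = 200                       # midpoint-rule grid in theta and phi
dtheta = math.pi / N
dphi = 2 * math.pi / N

flux = 0.0
for i in range(N):
    theta = (i + 0.5) * dtheta
    s = R * math.sin(theta)                  # distance from the wire
    for j in range(N):
        phi = (j + 0.5) * dphi
        E = lam / (2 * math.pi * EPS0 * s)   # line-charge field magnitude
        ex, ey = E * math.cos(phi), E * math.sin(phi)  # points off the axis
        nx = math.sin(theta) * math.cos(phi)           # outward normal
        ny = math.sin(theta) * math.sin(phi)           # (z-part unused: Ez=0)
        flux += (ex * nx + ey * ny) * R**2 * math.sin(theta) * dtheta * dphi

predicted = 2 * lam * R / EPS0
print(f"integrated flux = {flux:.4f}, Gauss's law gives {predicted:.4f}")
```

Working the dot product by hand, E · n collapses to λ/(2πε₀R), a constant, so the analytic calculation is just that constant times the sphere's area 4πR², giving 2λR/ε₀ with no hemispheres needed.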
Scientists from ETH Zurich have developed a nanomaterial that protects other molecules from oxidation. Unlike many such active substances in the past, the ETH-Zurich researchers' antioxidant has a long shelf life, which makes it just the ticket for industrial applications.
In findings that could help overcome a major technological hurdle in the road toward smaller and more powerful electronics, an international research team involving University of Michigan engineering researchers, has shown the unique ways in which heat dissipates at the tiniest scales.
Mobile phones that bend, self-powered nanodevices, new and improved solar cell technology and windows that generate electricity are but a few of the potential products from the union of semiconductors and graphene.
There were high hopes of using carbon nanotubes, particularly for ultra-fast water transport to desalinate seawater. However, a simulation now reveals that these ultra-fast transport rates might not have been properly grounded after all. Researchers who work with experiments and computer models have...
Catalysts can stop working when atoms on the surface start moving. At the Vienna University of Technology, this dance of the atoms could now be observed and explained.
Tiny particles of matter called quantum dots, which emit light with exceptionally pure and bright colors, have found a prominent role as biological markers. In addition, they are realizing their potential in computer and television screens, and have promise in solid-state lighting.
Los Alamos National Laboratory scientists have designed a new type of nanostructured-carbon-based catalyst that could pave the way for reliable, economical next-generation batteries and alkaline fuel cells, providing for practical use of wind- and solar-powered electricity, as well as enhanced hybri...
What may be the ultimate heat sink is only possible because of yet another astounding capability of graphene. The one-atom-thick form of carbon can act as a go-between that allows vertically aligned carbon nanotubes to grow on nearly anything.
A fractal is a geometric structure that can repeat itself towards infinity. Zooming in on a fragment of it, the original structure becomes visible again. A major advantage of a 3D fractal is that the effective surface rises with every next step.
Method for attaching molecules to metal surfaces could find applications in medicine, electronics and other fields.
Researchers developed a portable way to produce ultracold atoms for quantum technology and quantum information processing, a scientific breakthrough that was published and featured on the front cover of Nature Nanotechnology.
The year 1609 was noteworthy for two astronomical milestones. That was when Galileo built his first telescopes and began his meticulous study of the skies. Within months he discovered the four major satellites of Jupiter, saw that Venus (like our moon) has illuminated phases and confirmed earlier observations of sunspots—all evidence that undermined the Aristotelian model of an unchanging, Earth-centered cosmos.
During that same year, Johannes Kepler published Astronomia Nova, which contained his detailed calculation of the orbit of Mars. It also established the first two laws of planetary motion: that planets follow elliptical orbits, with the sun at one focus, and that planets sweep through equal areas of their orbits in a given interval.
Small wonder, then, that when the United Nations General Assembly declared an International Year of Astronomy to promote the wider appreciation of the science, it selected 2009, the quadricentennial of those standout accomplishments (among many) by Galileo and Kepler that informally founded modern astronomy.
Currently astronomers can look beyond the familiar planets and moons to entirely new systems of worlds around other stars. As I write this, the tally stands at 344 known extrasolar planets. Only a handful of these bodies were found by telescopic means that Galileo or Kepler would have recognized, but each one owes its discovery to their work.
A recent and surprising trend is the apparent abundance of planets turning up close to very small stars—suns that may not be much larger than the planets circling them. Astronomers Michael W. Werner and Michael A. Jura have more in their article starting on page 26, including why the existence of these unlikely planetary systems might imply that the universe is chock-full of planets.
This year also marks the 50th anniversary of the famous “Two Cultures” lecture by C. P. Snow, the English physicist and novelist. Snow’s speech, and his later books that elaborated on it, argued that communication and respect between the sciences and humanities had broken down. Literary intellectuals, he said, were often nonplussed at their own ignorance of basic science and yet would be aghast at a scientist unfamiliar with Shakespeare; conversely, scientists were more likely to have some schooling in the arts. This asymmetrical hostility hurt society, Snow maintained, because it impeded the embrace of what science and technology could do to eliminate poverty and inequality.
Even today critics disagree about whether Snow’s thesis is better seen as controversial or clichéd. If the “two cultures” is a problem, however, some leaders—not just in science but also industry, government and nongovernmental organizations—are overcoming it spectacularly. They are doing what they can to ensure that the fruits of scientific knowledge are constructively applied to improve well-being and prosperity. This month, with our Scientific American 10 honor roll, we are proud to recognize a few of them.
Note: This story was originally published with the title, "Inspirational Orbits".
Scientists at the European Center for Nuclear Research (CERN) in Geneva were cheered like rock stars on July 4 when they formally announced that they had almost certainly nabbed the biggest and most elusive catch in modern physics: the Higgs boson.
Dubbed the "God particle," the Higgs boson is "the missing cornerstone of particle physics," said CERN director Rolf Heuer. This "milestone in our understanding of nature" essentially confirms that the universe was formed the way scientists believe it was.
Two teams of atom-smashing researchers at CERN’s Large Hadron Collider independently verified, with 99.99997 percent certainty, the new subatomic particle, which is a near-perfect fit for what physicists have expected of the Higgs boson since its existence was first theorized 48 years ago.
"It’s the Higgs," British physicist Jim Al-Khalili tells Reuters. "The announcement from CERN is even more definitive and clear-cut than most of us expected. Nobel prizes all round." So what does this all mean, and where does it leave us? Here, four questions answered about the God particle:
1. Why is this such a big deal?
Finding a Higgs-like boson validates much of how scientists believe the universe was formed. The media calls the Higgs boson the God particle because, according to the theory laid out by British physicist Peter Higgs and others in 1964, it’s the physical proof of an invisible, universe-wide field that gave mass to all matter right after the Big Bang, forcing particles to coalesce into stars, planets, and everything else. If the Higgs field, and Higgs boson, didn’t exist, the dominant Standard Model of particle physics would be wrong. "There’s no understating the significance" of this discovery, says Jeffrey Kluger at TIME. "No Higgs, no mass; no mass, no you, me, or anything else."
2. Have they found the Higgs boson, or something else?
As momentous as this discovery is, "missing entirely from all of the high-fives and huzzahs today was a single, tiny word: ‘the,’" says TIME’s Kluger. Instead of claiming to have found "the Higgs boson," the scientists were only willing to say they’d found "a Higgs." That’s pretty typical of "the most skeptical profession on earth," says Martin White at Australia’s The Conversation. But scientists have been busy on theories that "may one day supersede the Standard Model," and many of them do "predict more than one Higgs boson," each with different masses, energy levels, and other attributes. If this new discovery turns out to be "an exotic Higgs rather than the common garden variety," that will be "as popular as it would be earth-shattering."
3. Who gets the Nobel prize?
This Higgs breakthrough is "good news for physicists, but one dreadful headache for the Nobel committee," says Ian Sample in Britain’s The Guardian. Traditionally, each Nobel prize in the sciences is awarded to no more than three individuals, but literally thousands of people made this new discovery possible. "All deserve credit," but even the leaders of the CERN teams should hold off on writing their acceptance speeches: The likely laureates will be Peter Higgs and two of the other four living theoretical physicists whose 50-year-old work was just validated. This isn’t the first time the Nobel judges have faced this quandary: "Restricting those honored with a Nobel helps maintain their prestige. But in modern science, few discoveries are born in final form from so few parents."
4. What does this discovery mean for me?
Unless you’re a physicist, you probably still have no idea what the Higgs boson is — I don’t, says Robert Wright at The Atlantic. So why should you care about this discovery? Well, it’s an important step toward a possible understanding of how the universe formed — pretty interesting stuff — but the very fact that we don’t really get it "means we should all try to have some intellectual humility, especially when opining on grand philosophical matters, because the thing we’re using to try to understand the world — the human brain — is, in the grand scheme of things, a pretty crude instrument." On a more practical note, "the massive scientific effort" that led to the Higgs discovery has already changed your life, says The Associated Press. CERN scientists developed the World Wide Web "to make it easier to exchange information among one another."
What this means for the world’s future:
You wouldn’t have a cell phone or an iPad without the electron’s tunneling effect. This basic aspect of quantum physics has real-world consequences that are both negative (it limits the minimum size of integrated circuits and their power-loss characteristics) and positive, with practical uses in electronics.
At this point, nobody knows what the likely applications might be for the Higgs boson. In fact, right now nobody knows for sure what its exact characteristics are, or even whether there is only one type of Higgs boson or perhaps multiple types. The scientists at CERN and elsewhere still have a lot more data to sift through, and a lot more testing to accomplish.
But the good news for scientists, and eventually for you, is that the existence of this particle has been proven, that it’s almost certainly the Higgs boson, and that the Standard Model of particle physics is correct. This in turn means that resources can be focused on this part of the field. Eventually those resources can be used to describe the Higgs boson more exactly, to determine how it manages to impart mass to other particles, and to learn what it is about other particles that makes them more or less affected by the Higgs field and the virtual particles that comprise it.
But what will it mean for you? Well, right now nobody knows, just as nobody knew until recently what the quantum effects of the electron could do for you. As it turned out, not only are we able to take advantage of those quantum effects, but we understand the limits imposed on devices we build because of those quantum effects. And while the existence of electrical resistance has been known since electricity started being used, who would have suspected the existence of negative resistance in tunneling devices?
Does this mean that we could see the emergence of effects such as negative mass, and thus anti-gravity, as an outgrowth of the Higgs boson discovery? Probably not, since the description of mass doesn’t seem to allow for negative numbers. But that only means that we don’t understand mass as thoroughly as we need to. Perhaps the discovery of the Higgs boson will help us with that understanding. Ultimately, it’s the growth in understanding that’s critical to the applications that may come from this discovery.
One other note – if the reference to CERN seems strangely familiar to you, that capable laboratory was responsible for a development that affects your life every day. The World Wide Web was invented at CERN and that’s the site for the first ever Web server.
Tom Yulsman is co-director of the Center for Environmental Journalism at the University of Colorado. His work has appeared in a variety of publications, including the New York Times, Washington Post, Climate Central, the Daily Climate and Audubon.
As cold temperatures swept into much of the United States on Monday, and wicked winter weather paralyzed much of Western Europe, the high temperature in Tromsø, Norway—above the Arctic Circle—topped out at nearly 40 degrees Fahrenheit.
And it rained.
I know this for a fact, because I am here in Tromsø for the Arctic Frontiers conference, which ironically enough is partly about climate change. The organizers handed out—yes, you guessed it—umbrellas.
The long and short of the answer to what’s going on is, of course, complex—and not fully understood. What we take to be weird weather is certainly one of those things that happens from time to time. And in wintertime, one of the factors that can bring it on is a phenomenon known as the Arctic Oscillation.
Right now it is in a negative phase. Typically, this means that atmospheric pressure at sea level is higher than normal over the central Arctic. Meanwhile, it is lower than normal over middle latitudes. This pattern tends to bring relatively warm temperatures to parts of the Arctic, and Arctic chill to Europe and North America.
But there may be even more going on. My science-writing colleague Andrew Freedman wrote an excellent post at Climate Central about a phenomenon called “sudden stratospheric warming,” which is tightly related to the Arctic Oscillation. As he wrote late last week:
This phenomenon…started on Jan. 6, but is something that is just beginning to have an effect on weather patterns across North America and Europe.
While the physics behind sudden stratospheric warming events are complicated, their implications are not: such events are often harbingers of colder weather in North America and Eurasia. The ongoing event favors colder and possibly stormier weather for as long as four to eight weeks after the event, meaning that after a mild start to the winter, the rest of this month and February could bring the coldest weather of the winter season to parts of the U.S., along with a heightened chance of snow.
Freedman’s story includes a terrific map-based animation of the development of this event, culminating on January 18 with a big patch of red indicating warm temperatures covering northern Norway and other parts of the Arctic. (On that day, it rained here in Tromsø too.)
And there may be even more to what’s going on. There is some evidence that global warming is counter-intuitively linked to outbreaks of frigid winter weather in North America and Europe. The theory was spelled out in detail for general audiences in a New York Times op-ed column in 2010 by one of its chief proponents, Judah Cohen. In a nutshell, here’s how it works:
A warmer climate has resulted in a significant reduction in Arctic sea ice in summer and fall. The greater extent of open water exposed to the sun in summer absorbs more energy, and then causes more water vapor to wind up in the atmosphere above it during the fall. Like a lake effect snow that happens downwind of the Great Lakes, more snow tends to fall in parts of Siberia as a result. And that, in turn, disrupts the jet stream in such a way as to send Arctic air plunging into North America and Europe.
Or so the theory goes. It is still an active area of research.
Meanwhile, back here in Tromsø, on any given day during the cold season there is a 70 percent chance that precipitation will fall. Not surprisingly, it usually comes as snow. But the city has a January climate closer to that of Boston than the North Pole, with high temperatures averaging in the upper 20s. For this, the roughly 70,000 residents of Tromsø can thank the warm Norwegian Current, a tongue of the Gulf Stream. Moreover, the locals insist that rain during winter is not unheard of. And statistics bear them out. On those days when precipitation falls during the cold season, 10 percent of the time it comes as rain.
But so far it has rained on 50 percent of the days that I have been here.
So one thing is a bit clearer than the precise causes of the umbrella weather here and the parka conditions further south: On my way to dinner tonight, I will definitely be bringing an umbrella.
Yes. A "listener object" is an object that has listener methods, but it may have other methods.
An event listener is an object that "listens" for events from a GUI component, like a button. The Java system represents an event as an object. When the user generates an event, the system creates an event object, which is then sent to the listener that has been registered for the GUI component.
When an event is generated by the GUI component, a method in the listener object is invoked. To be able to respond to events, a program must first: (1) create a listener object, an instance of a class that implements the appropriate listener interface, and (2) register that listener with the GUI component.
In the picture, the component is the button, contained in a frame. The user event is a click on that button. An event object is sent to the registered listener. This is done by the Java system, which manages the GUI components. It is up to the listener to do something.
(Thought Question:) Does the Java system create an Event object every time the user interacts with a component?
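To make the two steps concrete, here is a minimal sketch (the class name ListenerDemo, the inner class ClickCounter, and its clicks field are invented for illustration): a listener object that implements ActionListener is registered on a JButton, and doClick() stands in for the user's click, so the Java system creates an ActionEvent and delivers it to the registered listener.

```java
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import javax.swing.JButton;

public class ListenerDemo {

    // A listener object: it has the listener method actionPerformed,
    // but it may also have other methods and state (here, a counter).
    static class ClickCounter implements ActionListener {
        int clicks = 0;

        public void actionPerformed(ActionEvent event) {
            // The Java system created the ActionEvent object and
            // delivered it to this registered listener.
            clicks++;
        }
    }

    public static void main(String[] args) {
        System.setProperty("java.awt.headless", "true"); // no display needed for this demo

        JButton button = new JButton("Press me");   // the component
        ClickCounter listener = new ClickCounter(); // the listener object
        button.addActionListener(listener);         // register the listener

        button.doClick(); // programmatically simulate the user's click
        System.out.println("Clicks seen by the listener: " + listener.clicks);
    }
}
```

Each doClick() call results in one event delivered to every listener registered on the button, which is one way to probe the thought question above.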
The previous chapters discussed how to extend Python, that is, how to extend the functionality of Python by attaching a library of C functions to it. It is also possible to do it the other way around: enrich your C/C++ application by embedding Python in it. Embedding provides your application with the ability to implement some of the functionality of your application in Python rather than C or C++. This can be used for many purposes; one example would be to allow users to tailor the application to their needs by writing some scripts in Python. You can also use it yourself if some of the functionality can be written in Python more easily.
Embedding Python is similar to extending it, but not quite. The difference is that when you extend Python, the main program of the application is still the Python interpreter, while if you embed Python, the main program may have nothing to do with Python -- instead, some parts of the application occasionally call the Python interpreter to run some Python code.
So if you are embedding Python, you are providing your own main program. One of the things this main program has to do is initialize the Python interpreter. At the very least, you have to call the function Py_Initialize() (on Mac OS, call PyMac_Initialize() instead). There are optional calls to pass command line arguments to Python. Then later you can call the interpreter from any part of the application.
There are several different ways to call the interpreter: you can pass a string containing Python statements to PyRun_SimpleString(), or you can pass a stdio file pointer and a file name (for identification in error messages only) to PyRun_SimpleFile(). You can also call the lower-level operations described in the previous chapters to construct and use Python objects.
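The sequence described above — initialize the interpreter, hand it a string of statements — can be condensed into a minimal host program. This is a sketch, not the demo from Demo/embed/: the Py_Finalize() call is not mentioned in the text but pairs with Py_Initialize() in this era's C API, and the print statement uses the old Python syntax this document describes. Compiling it requires the Python headers and linking against the Python library, so it is shown untested here.

```c
#include <Python.h>

int main(int argc, char *argv[])
{
    /* Initialize the interpreter before any other Python/C API call. */
    Py_Initialize();

    /* Hand a string of Python statements to the embedded interpreter. */
    PyRun_SimpleString("from time import time, ctime\n"
                       "print 'Today is', ctime(time())\n");

    /* Shut the interpreter down when the application is done with it. */
    Py_Finalize();
    return 0;
}
```

The same host program could instead call PyRun_SimpleFile() with a file pointer, or build and call Python objects directly using the lower-level operations from the previous chapters.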
A simple demo of embedding Python can be found in the directory Demo/embed/ of the source distribution.
NASA scientists have found that cirrus clouds, formed by contrails from aircraft engine exhaust, are capable of increasing average surface temperatures enough to account for a warming trend in the United States that occurred between 1975 and 1994. According to Patrick Minnis, a senior research scientist at NASA’s Langley Research Center in Hampton, Va., there has been a one percent per decade increase in cirrus cloud cover over the United States, likely due to air traffic. Cirrus clouds exert a warming influence on the surface by allowing most of the Sun’s rays to pass through but then trapping some of the resulting heat emitted by the surface and lower atmosphere. Using a general circulation model, Minnis estimates that cirrus clouds from contrails increased the temperatures of the lower atmosphere by anywhere from 0.36 to 0.54°F per decade. Minnis’s results show good agreement with weather service data, which reveal that the temperature of the surface and lower atmosphere rose by almost 0.5°F per decade between 1975 and 1994.
This enhanced infrared image from the Moderate Resolution Imaging Spectroradiometer (MODIS), aboard NASA’s Terra satellite, shows widespread contrails over the southeastern United States during the morning of January 29, 2004. Such satellite data are critical for studying the effects of contrails. The crisscrossing white lines are contrails that form from planes flying in different directions at different altitudes. Each contrail spreads and moves with the wind. Contrails often form over large areas during winter and spring.
For information about why NASA studies contrails, read: Clouds Caused By Aircraft Exhaust May Warm The U.S. Climate.
Image courtesy NASA Langley Research Center
- Terra - MODIS
The Transiting Exoplanet Survey Satellite (TESS) is a planned space telescope for NASA's Small Explorer program, designed to search for extrasolar planets using the transit method. Led by the Massachusetts Institute of Technology with seed funding from Google, TESS was one of 11 proposals selected for NASA funding in September 2011, down from the original 22 submitted in February of that year. On April 5, 2013, it was announced that TESS, along with the Neutron star Interior Composition ExploreR (NICER), had been selected for launch in 2017.
Mission concept
Once launched, the telescope would conduct a two-year all-sky survey program for exploring transiting exoplanets around nearby and bright stars. TESS would be equipped with four wide-angle telescopes and charge-coupled device (CCD) detectors, with a total size of 192 megapixels. Science data will be processed and stored for three months onboard, and only data of interest will be transmitted to Earth for further analysis. Data collected by the spacecraft are also stored for three months, enabling astrophysicists to search the data for an unexpected, transient phenomenon, such as a gamma-ray burst.
Scientific objectives
The survey will focus on G- and K-type stars with apparent magnitudes brighter than 12. Approximately 2 million of these stars would be studied, including the 1,000 closest red dwarfs. TESS is predicted to discover 1,000–10,000 transiting exoplanet candidates that are Earth-sized or larger, with orbital periods of up to two months. These candidates could later be investigated by the HARPS spectrograph and the future James Webb Space Telescope. The development team at MIT is so optimistic about the mission that they have suggested that the first manned interstellar space missions may be to planets discovered by TESS.
Warming will trigger some processes which speed further warming, and other effects which mitigate it. The balance between these positive and negative feedbacks is a major cause of uncertainty in climate predictions.
For example, as the diagram shows, decreasing ice cover will mean exposed land absorbs more heat and speeds warming further.
In contrast, for example, plants' CO2 intake is likely to increase as higher temperatures increase growth rates, somewhat countering the warming effect.
1 Light coloured ice reflects back the Sun's energy efficiently.
2 Exposed land is darker coloured and absorbs more energy.
3 As the ice melts, more land is exposed. This absorbs more heat, melting more ice.
4 The altitude of the melting ice is reduced so it becomes harder for new ice to form.
Exercise 5.12: printAverage
Write a method named printAverage that accepts a Scanner for the console as a parameter and repeatedly prompts the user for numbers. Once any number less than zero is typed, the average of all non-negative numbers typed is displayed. Display the average as a double, and do not round it. For example, a call to your method might look like this:
Scanner console = new Scanner(System.in);
printAverage(console);
The following is one example log of execution for your method:
Type a number: 7
Type a number: 4
Type a number: 16
Type a number: -4
Average was 9.0
If the first number typed is negative, do not print an average. For example:
Type a number: -2
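One possible solution is sketched below; it is my own, not the official answer (the class name AverageDemo and the running-sum approach are choices, not requirements). It keeps a sum and a count of non-negative numbers, stops at the first negative one, and prints the average only if at least one non-negative number was read. The main method feeds the method a Scanner over a fixed string so the sketch runs without interactive input.

```java
import java.util.Scanner;

public class AverageDemo {

    public static void printAverage(Scanner console) {
        double sum = 0; // running total of non-negative inputs
        int count = 0;  // how many non-negative inputs were read

        System.out.print("Type a number: ");
        double number = console.nextDouble();
        while (number >= 0) {
            sum += number;
            count++;
            System.out.print("Type a number: ");
            number = console.nextDouble();
        }

        // If the very first number was negative, print no average.
        if (count > 0) {
            System.out.println("Average was " + (sum / count));
        }
    }

    public static void main(String[] args) {
        // A Scanner over a fixed string stands in for console input here;
        // with real console input you would pass new Scanner(System.in).
        printAverage(new Scanner("7 4 16 -4"));
    }
}
```

Because sum is a double, the division sum / count is floating-point, so 7, 4, 16 yields 9.0 unrounded, matching the example log.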
string fgets(int fp, int length);
Returns a string of up to length - 1 bytes read from the file pointed to by fp. Reading ends when length - 1 bytes have been read, on a newline (which is included in the return value), or on EOF (whichever comes first).
If an error occurs, returns false.
People used to the 'C' semantics of fgets should note the difference in how EOF is returned.
A simple example follows:
Example 1. Reading a file line by line
The Salton Trough
Description: This video illustrates the creation of the Salton Trough. The video shows the location of the Salton Trough, then, with pictures, explains the extensional forces that have created it.
References: http://fire.biol.wwu.edu/trent/alles/GeologySaltonTrough.pdf, http://www.cnsm.csulb.edu/departments/geology/VIRTUAL_FIELD/Salton_Sea/saltmain.htm, http://science.nationalgeographic.com/science/earth/the-dynamic-earth/plate-tectonics-article.html, http://ca.water.usgs.gov/groundwater/gwatlas/basin/terminal.html
The hypothesis: In the shadows of deep craters that pock the south pole of the Moon there might be ever-frozen water.
The experiment: Guide the final stages of the Lunar Crater Observation and Sensing Satellite (LCROSS) rocket into one of the craters and crash it into the surface, hopefully sending a plume of dust into the air that could be analyzed.
The event: On October 9, 2009 the LCROSS Centaur rocket crashed into the crater Cabeus, followed by the LCROSS package itself, which recorded (mostly by spectroscopy) information from the impact plume.
The first results: Just one month after impact, scientists announced that LCROSS did indeed find water. [SciTechStory: On the Moon or elsewhere: Follow the water]
The published results (some presented October 21, 2010 at a joint news conference): Yes, there is more water on the Moon than originally suspected. (Most scientists thought the Moon to be one of the driest places known to man.) In fact, substantial frozen water can be found in locations other than the bottom of deep craters. This is a piece of deduction that comes as something of a surprise. It follows from the measurements of temperatures in the area of the Moon’s south pole, which are the coldest ever directly measured in the solar system. One location registered 27 degrees Kelvin (-246 Centigrade, -411 Fahrenheit): that’s 27 degrees above absolute zero, the point at which atoms no longer move. At 100 degrees Kelvin, water is not only frozen but will remain inert for billions of years. In fact, at 100 Kelvin many ‘volatiles’ such as hydrogen, methanol, ammonia, and carbon dioxide will also remain permanently frozen. Scientists believe temperatures this cold exist not only in the shadows of craters, but also in the subsurface around the Moon’s south pole, in the lunar permafrost. There may be significant water in the soil even in areas that receive some direct sunlight; this could be true for up to 30% of the area around the pole.
This is a good thing. Working in somewhat warmer sunlight to extract water, hydrogen, and other substances from the soil of this area could make expeditions, settlement, and eventually commercial utilization possible. If you think about it, how well would machinery work at 100 Kelvin? (If it would work at all.)
For science, the treasure is in the preservation of materials that have been on the Moon – unaltered – for a billion years or more. The spectrographic analyses reveal the presence of many kinds of molecules, including those of hydrogen, oxygen, carbon, and nitrogen – the building blocks of life. The layers of material present the geological history of the Moon, perhaps all the way back to the days when it was still volcanically active.
However, there is the problem of fluff. Literally, the ‘soil’ of the crater is fluffy (‘light’, ‘airy’) in the extreme – so fluffy it could swallow astronauts and equipment far worse than quicksand. Is all the soil around the south pole like that? Probably not, but that’s unknown. As reported by Emily Lakdawalla at the press conference this problem was only partially addressed:
One questioner asked how easy it would be to get water out of the material that LCROSS crashed into. [Anthony] Colaprete answered, demurring a bit, saying that other people had thought much more about this problem than he or the other lunar scientists had. But he pointed out that the fluffy material is much easier to deal with than digging into solidly frozen ground; “you just scoop it up.” You can even just warm the surface, he said — the crater was steaming, and all you need to do is cover an area, then warm it up, to release the water.
However, in my conversation with Pete [Schultz] later, I learned that this ease of accessing the water cuts both ways. I asked Pete what would happen if you stuck a shovel into an area of the type that [Igor] Mitrofanov was talking about, a place that is not permanently shadowed, where there is ice-bearing material a few centimeters below the surface. Pete said, first of all, that despite all the results shown today we don’t really know what things look like much below the surface; all the material that got lofted upward was likely from pretty close to the surface, and the higher things went, the closer it was to the surface when it started. It could be a veneer of material, but we don’t know if it’s a veneer or not. It’s quite possible that it goes very, very deep, and if so, it could be very, very old — possibly old enough for these deposits to preserve volcanic gases left over from the later stages of the Moon’s geologic activity. Secondly, he said, this material is probably so delicate, that even sticking a shovel into the ground might warm it enough to make the water and other, even more volatile stuff (like molecular hydrogen and ammonia) go away — just the shovel will warm it up.
[Source: Planetary Society Blog]
I encourage everyone to read Emily Lakdawalla’s Planetary Society blog entry, LCROSS finds lots of water in accessible places at the Moon’s south pole – but we’ll have to tread carefully. It’s a fine piece of science writing that exposes the texture of real scientific enquiry and her infectious enthusiasm for the field. (She was a NASA deputy project manager and holds a master’s degree in geology.)
As you can see, the published results are far richer than the immediate results. In fact, they reveal a much more complex picture of conditions at the Moon’s south pole. Yes there is water, probably in quantities significant for human activity. There’s a lot more, a veritable treasure trove of materials including silver, manganese, and other ‘resources’ that humans like to grab. However, the environment is more difficult (some would say hostile) than expected. The scientists who made the reports are excited by what they found, but they’re also sanguine about accessibility. There is always the problem of something costing more to extract – especially in energy – than it is worth.
Astronomers from UC Berkeley have identified 33 pairs of waltzing black holes, closing the gap somewhat between the observed population of super-massive black hole pairs and what had been predicted by theory. "Astronomical observations have shown that 1) nearly every galaxy has a central super-massive black hole (with a mass of a million to a billion times the mass of the Sun), and 2) galaxies commonly collide and merge to form new, more massive galaxies. As a consequence of these two observations, a merger between two galaxies should bring two super-massive black holes to the new, more massive galaxy formed from the merger. The two black holes gradually in-spiral toward the center of this galaxy, engaging in a gravitational tug-of-war with the surrounding stars. The result is a black hole dance, choreographed by Newton himself. Such a dance is expected to occur in our own Milky Way Galaxy in about 3 billion years, when it collides with the Andromeda Galaxy."
coondoggie writes "NASA is looking to reduce the deadly impact of helicopter crashes on their pilots and passengers with what the agency calls a high-tech honeycomb airbag known as a deployable energy absorber. So in order to test out its technology NASA dropped a small helicopter from a height of 35 feet to see whether its deployable energy absorber, made up of an expandable honeycomb cushion, could handle the stress. The test crash hit the ground at about 54MPH at a 33 degree angle, what NASA called a relatively severe helicopter crash."
In response to Phil Plait’s post on why there are no green stars, Matt Springer of Built on Facts talks about Planck’s law, the Stefan-Boltzmann law, and toasters.
Tags: Astronomy, Blackbody, physics, Planck's Law, Radiation, Science, Spectrum, Stefan-Boltzmann Law
This entry was posted on August 4, 2008 at 4:11 pm and is filed under Math/Science.