How Do We Know Light Behaves as a Wave?

Visit The Physics Classroom's Flickr website and enjoy a photo overview of the topic of light interference. Looking for a lab that coordinates with this page? Try the Ripple Tank Lab from The Laboratory.

PhET Simulation: Wave Interference. This PhET simulation provides a virtual environment for demonstrating a wealth of wave properties and behaviors, including two-point source light interference.

Treasures from TPF. Need ideas? Need help? Explore The Physics Front's treasure box of catalogued resources on the wave nature of light.

Optics Interference: Ripple Tank Program. Demonstrate the effect of changing separation distance and wavelength upon a 2D interference pattern with this Open Source Physics (OSP) simulation.

Two Point Source Interference

Wave interference is a phenomenon that occurs when two waves meet while traveling along the same medium. The interference of waves causes the medium to take on a shape that results from the net effect of the two individual waves upon the particles of the medium. Wave interference can be constructive or destructive in nature. Constructive interference occurs at any location along the medium where the two interfering waves have a displacement in the same direction. For example, if at a given instant in time and location along the medium the crest of one wave meets the crest of a second wave, they will interfere in such a manner as to produce a "super-crest." Similarly, a trough meeting a trough interferes constructively to produce a "super-trough." Destructive interference occurs at any location along the medium where the two interfering waves have displacements in opposite directions. The interference of a crest with a trough is an example of destructive interference, which tends to decrease the resulting displacement of the medium.
Interference principles were first introduced in Unit 10 of The Physics Classroom Tutorial. The principles were subsequently applied to the interference of sound waves in Unit 11. A defining moment in the history of the debate concerning the nature of light occurred in the early years of the nineteenth century. Thomas Young showed that an interference pattern results when light from two sources meets while traveling through the same medium. To understand Young's experiment, it is important to back up a few steps and discuss the interference of water waves that originate from two points. In Unit 10, the value of a ripple tank in the study of water wave behavior was introduced and discussed. If an object bobs up and down in the water, a series of water waves in the shape of concentric circles will be produced within the water. If two objects bob up and down with the same frequency at two different points, then two sets of concentric circular waves will be produced on the surface of the water. These concentric waves will interfere with each other as they travel across the surface of the water. If you have ever simultaneously tossed two pebbles into a lake (or somehow simultaneously disturbed the lake in two locations), you undoubtedly noticed the interference of these waves. The crest of one wave will interfere constructively with the crest of the second wave to produce a large upward displacement. The trough of one wave will interfere constructively with the trough of the second wave to produce a large downward displacement. And the crest of one wave will interfere destructively with the trough of the second wave to produce no displacement. In a ripple tank, this constructive and destructive interference can be easily controlled and observed. It represents a basic wave behavior that can be expected of any type of wave.
The interference of two sets of periodic, concentric waves with the same frequency produces an interesting pattern in a ripple tank. The diagram at the right depicts an interference pattern produced by two periodic disturbances. The crests are denoted by the thick lines and the troughs by the thin lines. Thus, constructive interference occurs wherever a thick line meets a thick line or a thin line meets a thin line; this type of interference results in the formation of an antinode. The antinodes are denoted by a red dot. Destructive interference occurs wherever a thick line meets a thin line; this type of interference results in the formation of a node. The nodes are denoted by a blue dot. The pattern is a standing wave pattern, characterized by the presence of nodes and antinodes that are "standing still" - i.e., always located at the same position on the medium. The antinodes (points where the waves always interfere constructively) seem to be located along lines - creatively called antinodal lines. The nodes also fall along lines - called nodal lines. The two-point source interference pattern is characterized by a pattern of alternating nodal and antinodal lines. There is a central line in the pattern: the line that bisects the line segment drawn between the two sources is an antinodal line. This central antinodal line is a line of points where the waves from each source always reinforce each other by means of constructive interference. The nodal and antinodal lines are included on the diagram below. A two-point source interference pattern always has an alternating pattern of nodal and antinodal lines. There are, however, some features of the pattern that can be modified. First, a change in wavelength (or frequency) of the sources will alter the number of lines in the pattern and the proximity, or closeness, of the lines.
An increase in frequency will result in more lines per centimeter and a smaller distance between each consecutive line. A decrease in frequency will result in fewer lines per centimeter and a greater distance between each consecutive line. Second, a change in the distance between the two sources will also alter the number of lines and the proximity of the lines. When the sources are moved farther apart, more lines are produced per centimeter and the lines move closer together. These two general cause-effect relationships apply to any two-point source interference pattern, whether it is due to water waves, sound waves, or any other type of wave. Any type of wave, whether it be a water wave or a sound wave, should produce a two-point source interference pattern if the two sources periodically disturb the medium at the same frequency. Such a pattern is always characterized by alternating nodal and antinodal lines. Of course, the question should arise, and indeed did arise in the early nineteenth century: Can light produce a two-point source interference pattern? If light is found to produce such a pattern, then it will provide more evidence in support of the wavelike nature of light. Before we investigate the evidence in detail, let's discuss what one might observe if light were to undergo two-point source interference. What would happen if a "crest" of one light wave interfered with a "crest" of a second light wave? What would happen if a "trough" of one light wave interfered with a "trough" of a second light wave? And finally, what would happen if a "crest" of one light wave interfered with a "trough" of a second light wave? Whenever light constructively interferes (such as when a crest meets a crest or a trough meets a trough), the two waves act to reinforce one another and produce a "super light wave."
On the other hand, whenever light destructively interferes (such as when a crest meets a trough), the two waves act to destroy each other and produce no light wave. Thus, the two-point source interference pattern would still consist of an alternating pattern of antinodal lines and nodal lines. However, for light waves the antinodal lines are equivalent to bright lines and the nodal lines are equivalent to dark lines. If such an interference pattern could be created by two light sources and projected onto a screen, then there ought to be an alternating pattern of dark and bright bands on the screen. And since the central line in such a pattern is an antinodal line, the central band on the screen ought to be a bright band. In 1801, Thomas Young successfully showed that light does produce a two-point source interference pattern. In order to produce such a pattern, monochromatic light must be used. Monochromatic light is light of a single color; by using such light, the two sources will vibrate with the same frequency. It is also important that the two light waves be vibrating in phase with each other; that is, the crest of one wave must be produced at the same precise time as the crest of the second wave. (This is often referred to as coherent light.) To accomplish this, Thomas Young used a single light source and projected the light onto two pinholes. The light then diffracts through the pinholes, and the pattern can be projected onto a screen. Since there is only one source of light, the two sets of waves that emanate from the pinholes are in phase with each other. As expected, the use of a monochromatic light source and pinholes to generate in-phase light waves resulted in a pattern of alternating bright and dark bands on the screen. A typical appearance of the pattern is shown below. Young's two-point source interference experiment is often performed in a Physics course with laser light.
It is found that the same principles that apply to water waves in a ripple tank also apply to light waves in the experiment. For instance, a higher-frequency light source should produce an interference pattern with more lines per centimeter and a smaller spacing between lines. Indeed, this is observed to be the case. Furthermore, a greater distance between slits should also produce an interference pattern with more lines per centimeter and a smaller spacing between lines. Again, this is observed to be the case. Most astounding of all is that Thomas Young was able to use wave principles to measure the wavelength of light. Details on the development of Young's equation and further information about his experiment are provided in Lesson 3 of this unit. For now, the emphasis is on how the same characteristics observed of water waves in a ripple tank are also observed of light waves. Thomas Young's findings provided even more evidence to the scientists of the day that light behaves as a wave. After all, can a stream of particles do all this?
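The two cause-and-effect relationships above can be tried numerically. This sketch uses the standard small-angle double-slit fringe-spacing relation, spacing = wavelength x screen distance / slit separation (Young's equation itself is developed in Lesson 3); the wavelengths, screen distance, and slit separations below are illustrative assumptions, not values from Young's experiment.

```python
# Fringe spacing on a screen for two-point-source (double-slit) interference,
# using the small-angle relation: delta_y = wavelength * L / d,
# where L is the slit-to-screen distance and d is the slit separation.
# All numeric values below are illustrative assumptions.

def fringe_spacing(wavelength_m, screen_distance_m, slit_separation_m):
    """Distance between adjacent bright (antinodal) bands on the screen, in meters."""
    return wavelength_m * screen_distance_m / slit_separation_m

L = 1.0                                       # slit-to-screen distance (m), assumed
red = fringe_spacing(650e-9, L, 0.10e-3)      # lower-frequency red light
blue = fringe_spacing(450e-9, L, 0.10e-3)     # higher-frequency blue light
wide = fringe_spacing(650e-9, L, 0.25e-3)     # same red light, slits farther apart

print(f"red light:        {red * 1e3:.2f} mm between bright bands")
print(f"blue light:       {blue * 1e3:.2f} mm (higher frequency -> closer lines)")
print(f"wider separation: {wide * 1e3:.2f} mm (greater separation -> closer lines)")
```

Running the sketch confirms the qualitative claims: raising the frequency (blue versus red) or moving the slits farther apart both squeeze the bright bands closer together.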
I recently bumped into an acquaintance outside the Staples Center after a Lakers game. Neither of us was aware the other was at the game. Yet, even if we had known and were intentionally keeping an eye out for one another, we probably would never have been able to connect; the crowds at Lakers games are just too heavy. Proteins inside the cell face a similar problem. The cell interior is jam-packed with a large number and variety of proteins. Even the simplest bacterium harbors several thousand different types of proteins, with numerous copies of each biomolecule existing in the cell's interior. In many instances, proteins must interact and bind in a highly specific manner with other proteins to carry out their function. These protein-protein interactions (PPIs) are selective. If the wrong proteins bind to each other, the interaction is of no use to the cell. The jam-packed environment of the cell complicates things. Just like two friends searching for one another in a crowd, proteins are more likely to encounter a protein "stranger" than the desired "friend." Biochemists are currently working to understand the specificity of PPIs and how proteins avoid unintended interactions with "strangers." Recently, Harvard scientists identified some of the key factors that control PPIs.1 Their research adds to the body of evidence supporting the notion that life's chemistry is designed, stemming from the work of a Creator.

Factors Controlling PPIs

As I wrote previously, protein surfaces are carefully structured to allow strong interactions between protein pairs while minimizing the strength of unwanted interactions between protein "strangers." The most recent work by the Harvard scientists indicates that the concentration of PPI-participating proteins in the cell is also carefully designed. Proteins that do not engage in PPIs have surfaces that prevent these biomolecules from accidentally interacting with other proteins.
Because of this structural feature, these proteins can exist at relatively high levels inside the cell. Proteins that interact with only one other protein have specific regions on their surfaces designed to promote the PPI; the remainder of their surfaces is designed to eschew PPIs. Proteins that interact with at least two other proteins also possess specially designed regions that promote binding with their multiple partners. These multi-partner proteins are much more likely to take part in unintended interactions with the wrong partners because more of their surface is devoted to PPIs. To control unwanted interactions, the concentration of these particular proteins is carefully balanced inside the cell. In other words, protein structure and concentrations have to be precisely regulated to promote the PPIs critical for life. As I point out in my book The Cell's Design, high-precision structures and interactions, exemplified by PPIs, are hallmark features of biochemical systems and, by analogy to fine-tuned human designs, point to the work of a Creator.
Today, life on the moon. The University of Houston's College of Engineering presents this series about the machines that make our civilization run, and the people whose ingenuity created them.

As I sit down to type, I first have to turn off my SETI screensaver. My PC computes fast Fourier transforms in the background while I work -- analyzing radio signals from space. Many people sign up to do that. I don't really expect to find an alien message saying, "Take me to your leader," but the screen-saver is fun. The larger question, "Is there life out there?" has been around for a long time. It began demanding our attention after Galileo turned his new telescope on the moon in 1609. Up to then, the moon had been a celestial crystalline sphere. Now it became craggy, pitted and imperfect -- like earth itself. If the moon and the planets were like earth, then mightn't they, too, support sentient life?

Five years after Galileo's telescope had sent shock waves through seventeenth-century astronomy, John Wilkins was born in England. By the age of twenty, he'd finished a master's degree at Oxford. When he was twenty-four, he published a book with the title The Discovery of a World in the Moone: Or A Discourse Tending To Prove that 'tis probable there may be another habitable World in that Planet. This was the first such study of the new secular moon. In it, Wilkins argues thirteen propositions -- some accurate, some not. Contrary to the Church's thinking, he argues that the moon is solid and opaque. Like earth, it's made of base and corruptible matter. It has no light of its own. It only reflects the sun. It has mountains and valleys, and it suffers meteor impacts. This was six years before the death of Galileo, and he's confident that earth orbits the sun just as the moon orbits earth. It's long before Newton, yet he accurately describes how orbiting bodies stay aloft.
Wilkins says the moon has an Atmo-Sphere because that's what creates the fuzzy edge of the shadow cast on Earth during a solar eclipse. He blew that one. He didn't realize he was looking at the same penumbra we see in ordinary shadows. By now Wilkins had taken Holy Orders, and, when he speaks of life on the moon, his arguments shift from physical to theological. He worries about how Adam might've sired its inhabitants and whence their salvation might come.

Wilkins's book was popular. A translation appeared in France, and Cyrano de Bergerac read it. Cyrano's last novel, published after his death, was a wildly fanciful science-fiction tale of traveling to that earthlike moon. The Dutch/English scientist Christiaan Huygens was fifteen years younger than Wilkins and undoubtedly knew him. In 1698, Huygens wrote his own book promoting the likelihood of intelligent life on other worlds. So the seed was sown long before NASA. Sure, they got a few things wrong, but Wilkins's eleventh proposition was prophetic: ... as their world is our Moone, So our world is their Moone. And we remember that mystical NASA photo of earth, rising like a great companion planet, over the rim of the moon -- while living beings were there to watch it, after all.

I'm John Lienhard, at the University of Houston, where we're interested in the way inventive minds work.

Wilkins, J., The Discovery of a World in the Moone. Or, A Discorse Tending To Prove that 'tis probable there may be another habitable World in the Planet. London, Printed by E.G. for Michael Sparl and Edward Forrest, 1638. (Actually, Wilkins's name did not appear on the first edition's title page. My source was a facsimile edition published by Da Capo Press, Inc., in 1972.) My thanks to Pat Bozeman, UH Library, for calling my attention to the Wilkins book. Should you, too, wish to run SETI calculations on your home PC, check the website, http://setiathome.ssl.berkeley.edu/ I am grateful to Carol Lienhard and Governor William P.
Hobby (who have both run these programs for some time) for urging me to join them. For lack of space, I've omitted much of Wilkins's distinguished life. He was, for example, Christopher Wren's teacher. He was a friend to the distinguished cosmologist Thomas Burnett. And he was the primary founding father of the Royal Society. Wilkins's thirteen propositions in his book about the moon are worth recounting here. They are:

1) That the strangenesse of this opinion is no sufficient reason why it should be rejected, because other certaine truths have beene formerly esteemed ridiculous, and great absurdities entertained by common consent.
2) That a plurality of worlds doth not contradict any principle of reason or faith.
3) That the heavens doe not consist of any such pure matter which can priviledge them from the like change and corruption, as these inferiour bodies are liable unto.
4) That the Moone is a solid, compacted, opacous body.
5) That the Moone hath not any light of her owne.
6) That there is a world in the Moone, hath beene the direct opinion of many ancient, with some moderne Mathematicians, and may probably be deduced from the tenents of others.
7) That those spots and brighter parts which by our sight may be distinguished in the Moone, doe shew the difference betwixt the Sea and Land in the Moone.
8) That the spots represent the Sea, and the brighter parts the Land.
9) That there are high Mountaines, deepe vallies, and spacious plaines in the body of the Moone.
10) That there is an Atmo-Sphere, or an orbe of grosse vaporous aire, immediately encompassing the body of the Moone.
11) That as their world is our Moone, so our world is their moone.
12) That tis probable there may bee such meteors belonging to that world in the Moone, as there are with us.
13) That tis probable there may be inhabitants in this other World, but of what kind they are is uncertaine.

Click on the thumbnail above to see a full-size image of the frontispiece and title page of Wilkins's book.
Wilkins's heliocentric representation of the solar system, and his explanation of orbital trajectories.

The Engines of Our Ingenuity is Copyright © 1988-2000 by John H. Lienhard.
The EPIC-MOS instruments on-board XMM-Newton
- What do the cameras measure?
- Example observations by EPIC-MOS
- Modes of the EPIC-MOS Camera
- X-ray telescope mirrors

For many more images of both XMM-Newton hardware and astronomical observations, please see the ESAC XMM-Newton gallery page.

XMM-Newton has 3 X-ray telescopes on-board, each with its own European Photon Imaging Camera (EPIC). Although the X-ray telescopes focus the X-rays using mirrors, the configuration is more reminiscent of a refracting telescope (explained below). Two of the X-ray telescopes use EPIC-MOS (Metal Oxide Semi-conductor) cameras, developed and built at the University of Leicester's Space Research Centre, and the third telescope uses an EPIC-PN camera, built at Germany's Max-Planck-Institut für extraterrestrische Physik (Garching) and the Astronomisches Institut Tübingen. Both EPIC-MOS cameras consist of seven CCDs, with each CCD made up of 600x600 pixels - the EPIC-MOS are essentially two 2.5 Mpx digital X-ray cameras! The EPIC-PN camera is of a different design, and contains 12 CCDs of 64x189 pixels.

What do the cameras measure?

For each X-ray, the EPIC cameras record the following fundamental data:
- » the time when the X-ray arrived - this allows astronomers to see the rate at which X-rays arrive at the telescope, and so how the X-ray brightness of the astronomical object changes over time. A graph of brightness against time is known as a light-curve. Light-curves allow astronomers to see if the target suddenly brightens, or if it is fading.
- » where the X-ray hit the camera's sensor - this enables an image to be produced, allowing astronomers to see exactly where the X-rays originate in space.
- » the energy of the X-ray - measuring the energy of the X-rays allows astronomers to understand the physical processes occurring at the target.
Astronomers can directly measure such quantities as the temperature of the target, how many X-rays are absorbed by intervening gas, what chemical elements are present, and much more - all by looking at the energies of the incoming X-rays. These quantities can also be used in combination. For example, astronomers will often make a light-curve of the low-energy X-rays, and another light-curve of the high-energy X-rays, and compare the two. If a cloud of gas drifts over the source of X-rays, it preferentially absorbs low-energy X-rays - so if an astronomer sees a dip in the low-energy light-curve, but no change in the high-energy light-curve, then they will presume that an absorbing cloud of gas has drifted over the source of X-rays. Also, by comparing a low-energy image with a high-energy image, an astronomer can instantly see where the hotter and cooler gas is physically located.

Example observations by EPIC-MOS

OY Carinae: a binary star system
This light-curve shows the number of X-rays counted over time - and the X-ray source appears to suddenly turn off, and back on again! OY Carinae is actually a pair of stars, and the X-rays originate from just the smaller, white dwarf star. As the two stars orbit one another, the second, larger star passes in front of the white dwarf once every 90 minutes, completely hiding the white dwarf (and the X-rays) from view for about 4 minutes. For more details, see the HEASARC picture of the week archive.

Geminga: a rapidly moving neutron star
This image is of Geminga, a neutron star moving rapidly through space at 120 kilometres per second, leaving behind a pair of tails of X-ray hot gas stretching 3 million million kilometres across the sky. The neutron star itself is only 20-30 kilometres across and is the dense remains of an exploded star. Geminga lies at a distance of about 500 light-years.
Its rapid movement creates a shock-wave that compresses the gas of the interstellar medium and its naturally embedded magnetic field by a factor of four. For more details, see the European Space Agency website.

Capella: a bright giant star
Our Sun produces X-rays in its outer atmosphere (the corona), due to the intense magnetic field heating the gas up to millions of degrees centigrade - hot enough for the gas to emit X-rays. Some stars, such as Capella, have an even more active corona than the Sun does.

Modes of the EPIC-MOS Camera

The number of images (frames) taken per second by the EPIC cameras is limited by the speed at which each frame can be read out - there are 360,000 pixels to read per CCD! The CCDs are continuously exposed to the sky, and at the end of each frame, all 600 columns are rapidly transferred together to a read-out storage area within a few milliseconds. The data in this storage area is then read out row by row (taking 2.6 seconds in full-frame mode - 600 times longer than the transfer time), before being prepared for transmission to Earth. In full-frame mode, the data from all 360,000 pixels are read out from the storage area in 2.6 seconds, and so this limits the number of images that can be taken per second. However, astronomers can choose between receiving a full 600x600 image once every 2.6 s, a smaller 300x300 image every 0.9 s (large window mode), or a tiny 100x100 image every 0.3 s (small window mode). For very high time resolution, astronomers use the EPIC-MOS timing mode, which takes a 100x1 pixel image every 1.5 ms! The ability to use these modes is, however, dependent on the target. A source could be either too faint, such that an X-ray is not even detected within a frame time, or too bright - if two X-rays arrive at the same pixel at the same time (called pile-up), confusion can arise (has one high-energy X-ray arrived, or two low-energy X-rays?!).
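How likely pile-up is for a given source can be roughly estimated. Assuming, as a simplification not made explicit in the text, that photon arrivals follow Poisson statistics, the chance of two or more X-rays landing within the same frame falls quickly as the frame time shortens - which is why the window modes help with bright sources. The frame times below are the ones quoted above; the 0.5 count/s source rate is purely illustrative.

```python
import math

# Chance that two or more X-rays from a source arrive within one frame
# ("pile-up" confusion), assuming Poisson-distributed photon arrivals.
# Frame times are the EPIC-MOS values quoted in the text;
# the count rate is an illustrative assumption.

def pileup_probability(count_rate_per_s, frame_time_s):
    """P(>= 2 photons in a single frame) for a Poisson arrival process."""
    mu = count_rate_per_s * frame_time_s        # expected photons per frame
    return 1.0 - math.exp(-mu) * (1.0 + mu)

for mode, frame_s in [("full-frame", 2.6), ("large window", 0.9), ("small window", 0.3)]:
    p = pileup_probability(0.5, frame_s)        # a 0.5 count/s source, assumed
    print(f"{mode:12s}: {p:.1%} of frames affected")
```

Note this is only a per-frame sketch; in the real instrument pile-up happens per pixel, so the true fractions are lower, but the trend across modes is the same.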
In full-frame mode, pile-up can occur if more than one X-ray is detected from a source every 1.4 seconds. X-ray observatories cannot use the conventional mirrors that we are used to in everyday life, since X-rays would just pass straight through the gaps between the atoms! The reason is that different wavelengths of light have different interaction cross-sections (or interaction areas; a measure of the likelihood that an X-ray will interact with an atom). The larger the wavelength of light, the larger its interaction cross-section, and the larger the chance of an interaction, i.e. a reflection. Optical light has a large wavelength and a large interaction cross-section, and so is easily reflected by (interacts with) a polished surface. X-rays, however, have a short wavelength (high energy) and a small interaction cross-section - so they can pass between the atoms of a mirror without interacting with them at all.

X-ray mirrors focus the X-rays from space (coming in from the right) by redirecting the X-rays, through reflection, into a focus point on the cameras at the far left.

Imagine a fence (which represents a mirror), with evenly spaced posts (the atoms within the mirror). A large football (our optical photon, with a large wavelength and so a large interaction cross-section) will be easily reflected off the fence. However, a small tennis ball (our X-ray, with a small interaction cross-section) will pass straight through the fence. So how do we focus an X-ray? Somehow, we need to get the incoming X-rays to interact with (i.e. "hit") more atoms. We can do this by turning the mirrors edge-on, so that the X-rays are more likely to hit an atom, thus bringing the X-rays to a focus point. The X-rays skip off the atoms like a stone skipping over water. As you can see with the fence analogy, an incoming X-ray will see more atoms (or fence posts) when the mirror (fence) is placed edge-on.
A large football (representing optical light, with a large interaction cross-section) reflects back to us off the fence posts (the spacing of the fence posts represents the spacing of atoms in a mirror). However, the small tennis ball (representing an X-ray photon, with a small interaction cross-section, or "size") passes straight through the fence (atoms). So how do we focus an X-ray (or tennis ball)? Place the fence (or mirror) edge-on, and the tennis ball (X-ray) will be more likely to hit a fence-post (or atom), enabling it to be brought to a focus.

Here is how to imagine it if you prefer to think of photons as wave packets of energy. We have two mirrors, one vertical and one almost edge-on. The optical wave (red, top) reflects off the mirror atoms (green) - but the (blue) X-ray passes straight through. However, the X-ray cannot avoid hitting an atom when the mirror is placed edge-on, bringing all the X-rays to a focus point.

Indeed, arguably the most complex part of X-ray observatories is the manufacture of the mirrors. The biggest problem with X-ray mirrors is that only a tiny fraction of the incoming X-rays are brought to a focus - most just pass between the mirrors. To get around this problem, the designers of the XMM-Newton observatory nested 58 mirrors within each other (below, left) in an attempt to catch as many X-rays as possible. The designers of NASA's Chandra X-ray Observatory, by contrast, opted for only 4 mirrors (below, right) - but 4 mirrors so accurately made that the resulting images are much sharper than those taken by XMM-Newton.

The XMM-Newton mirrors (left) and the Chandra mirrors (right).

Page maintained by Dr Tao Song
Brouwer, P. E. M. (1996) Decomposition in situ of the sublittoral Antarctic macroalga Desmarestia anceps Montagne. Polar Biology, 16, 129-137. ISSN 0722-4060. Official URL: http://dx.doi.org/10.1007/BF02390433

Large amounts of detached Antarctic macroalgae accumulate in hollows of the seabed, where decomposition rates of the detached macroalgae are expected to be low, caused by lack of contact of the major part of the macroalgae with the sediment. To determine decomposition rates in Antarctic waters, untreated and pre-killed Desmarestia anceps fronds contained in nylon net bags were studied for 10 months under natural conditions in Factory Cove, Signy Island. Physical decomposition was shown to be more important than microbial decomposition. A weight loss of 40% occurred in untreated material within 313 days, while pre-killed material almost all disappeared within 90 days. Despite the weight loss, changes in chlorophyll a content were negligible during the experiment. Changes in the C:N ratio and tissue N indicated low rates of microbial decomposition. Therefore, it was concluded that weight loss was mainly caused by fragmentation, and particles disappearing from the nets accounted for most of the loss of original tissue. It remains unknown how long nutrients stay in Antarctic macroalgal litter before they become available to the system. [KEYWORDS: Spartina alterniflora; nitrogen; dynamics; carbon; litter; degradation; sediments; detritus; decay; field]
Japan launched a new earthquake warning system in 2007. It uses sensors that pick up the faster-moving and less destructive P-waves produced by an earthquake and transmits a warning broadcast on television and radio before the destructive S-waves and surface waves arrive, giving recipients of the message precious seconds to seek shelter. (P-waves are generally about 1.7 times faster than S-waves.) The system gives people as much as a 50-second warning before strong shaking arrives in places far from the epicenter. The system is only really effective for places a decent distance away from the epicenter. Eight earthquake warnings were issued in the first year the system was in operation. The main problem with them was that few people were actually aware of the warnings. Japanese companies have begun developing cell phones and other devices that can pick up the warnings better. Japan also has a tsunami warning system that issues warnings on television.

Practical uses of the detection of P-waves

Shinkansen bullet trains
The Tohoku Shinkansen is a high-speed passenger train (bullet train) that connects Tokyo to the northern city of Morioka in Honshu, Japan. The line is protected by a seismic early warning system, which includes two sets of accelerometers: one set is deployed along the line (wayside system), while the other comprises eight accelerometers placed along the eastern coast of Honshu (coastal system). The coastal system is designed to protect the train against earthquakes originating in the highly active offshore subduction zone. It causes trains to automatically stop when the ground acceleration exceeds a preset limit.

Earthquake-prepared elevators
Otis, the elevator maker, has equipped many of its Japanese elevators with seismic safety equipment. About half the elevators Otis maintains in Japan - including most in high-rise buildings and regions with severe earthquake risk - are equipped with seismic detectors.
At the first vibration of the quake (the P-wave), these devices return the elevators to the ground floor so passengers can exit, then keep them out of service until Otis can check their safety. The detectors did their job. Some 16,700 elevators in the areas affected by the quake were shut down by the emergency systems. Otis, which had worldwide revenues of $11.58 billion in 2010 and manufactured about 40,000 of the 80,000 elevators it services in Japan, didn't receive any report of trapped or injured passengers. "All the elevators operated as they were supposed to," said a company spokesperson.

* Many factories in earthquake-prone areas have machines that automatically shut down when they sense vibrations from a quake.
* Japanese gas companies have installed meters that shut off the gas supply in the event of a tremor.
* Nuclear power plants and other hazardous facilities are also shut down automatically.

Important remark: P-waves and S-waves arrive close together at places close to the epicenter, where a warning would be a matter of a couple of seconds at most. As a result, such equipment will only be reliable when the epicenter is at least 100 km (around 15-20 seconds of travel time for P-waves, and around 30-40 seconds for S-waves) from a populated area. Practically speaking, it will mainly be used for subduction-triggered earthquakes. Other countries able to use these systems include, among others, South American countries, the Philippines, Indonesia and New Zealand. However, only about 2 seconds are required to shut down essential facilities like power systems for hospitals and other services, so different systems are applicable for life safety versus life continuation. Therefore, even at a much closer distance, early warning systems are extremely useful.

Something more about the speed of P-waves and S-waves
The speed of an earthquake wave is not constant but varies with many factors. Speed changes mostly with depth and rock type. P-waves travel between 4 and 13 km/sec.
S-waves are slower and travel between 3.5 and 7.5 km/sec. Generally speaking, the destructive S-wave travels at approximately 50-60% of the velocity of the P-wave. Make the calculation yourself with an average P-wave speed of 8 km/sec and a distance from the epicenter of 240 km. The P-wave will then arrive 30 seconds after the rupture. The destructive S-wave, traveling at 4.8 km/sec (60% of 8), will arrive after 50 seconds. The 20-second time lapse (in this example) is enough for many installations to shut down safely.

Credits: Part of this article is derived from Jeff Hayes and MIT.
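The arithmetic above can be sketched in a few lines of code; the speeds and distance are the illustrative example values from the text, not measured data.

```python
def arrival_times(distance_km, vp_kms=8.0, s_fraction=0.6):
    """Return (P-wave arrival, S-wave arrival, warning window) in seconds.

    vp_kms and s_fraction are the example values used in the text:
    an 8 km/s P-wave and an S-wave at 60% of that speed.
    """
    t_p = distance_km / vp_kms
    t_s = distance_km / (vp_kms * s_fraction)
    return t_p, t_s, t_s - t_p

# Example from the text: 240 km from the epicenter.
t_p, t_s, window = arrival_times(240.0)
print(t_p, t_s, window)  # roughly 30 s, 50 s, and a 20 s warning window
```

The warning window grows linearly with distance, which is why the text notes that the scheme works best for remote (e.g. offshore subduction-zone) epicenters.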
Contained in: block-level elements, inline elements except BUTTON

The BUTTON element defines a submit button, reset button, or push button. Authors can also use INPUT to specify these buttons, but the BUTTON element allows richer labels, including images and emphasis. However, BUTTON is new in HTML 4.0 and not as widely supported as INPUT. For compatibility with old browsers, INPUT should generally be used instead of BUTTON. The TYPE attribute of BUTTON specifies the kind of button and takes the value submit (the default), reset, or button. The NAME and VALUE attributes determine the name/value pair sent to the server when a submit button is pushed. These attributes allow authors to provide multiple submit buttons and have the form handler take a different action depending on the submit button used. Some examples of BUTTON follow:

<BUTTON NAME=submit VALUE=modify ACCESSKEY=M>Modify information</BUTTON>
<BUTTON NAME=submit VALUE=continue ACCESSKEY=C>Continue with application</BUTTON>
<BUTTON ACCESSKEY=S>Submit <IMG SRC="checkmark.gif" ALT=""></BUTTON>
<BUTTON TYPE=reset ACCESSKEY=R>Reset <IMG SRC="x.gif" ALT=""></BUTTON>
<BUTTON TYPE=button ID=toggler ONCLICK="toggle()" ACCESSKEY=H>Hide <strong>non-strict</strong> attributes</BUTTON>

The ACCESSKEY attribute, used throughout the preceding examples, specifies a single Unicode character as a shortcut key for pressing the button. Entities (e.g. &#233;) may be used as the ACCESSKEY value. The boolean DISABLED attribute makes the BUTTON element unavailable. The user is unable to push the button, the button cannot receive focus, and the button is skipped when navigating the document by tabbing. The TABINDEX attribute specifies a number between 0 and 32767 to indicate the tabbing order of the button. A BUTTON element with TABINDEX=0 or no TABINDEX attribute will be visited after any elements with a positive TABINDEX. Among positive TABINDEX values, the lower number receives focus first.
In the case of a tie, the element appearing first in the HTML document takes precedence. The BUTTON element also takes a number of attributes to specify client-side scripting actions for various events. In addition to the core events common to most elements, BUTTON accepts the following event attributes:
Android is able to animate any View object, and each and every View has a startAnimation() method that launches an animation. Animations exist as independent objects that can be applied to Views. Animation objects can even be populated from XML resources that live in the res/anim directory. You can download the example program from here.

Let's look at the animation file below (the XML was mangled by the limitations of the blog engine). This file can be found as res/anim/magnify.xml in the example program bundle.

<?xml version="1.0" encoding="utf-8"?>

This is a scaling animation. The attributes mean the following:
- The content of the view is first shrunk to half its size (fromXScale, fromYScale=0.5) before it starts to grow to double its standard size, i.e. four times its initial animated size (0.5->2.0). This is done on both the X and Y axes.
- The pivot point from which the object grows is 0% on the X axis. This means that the leftmost points of the object stay in place and the object grows rightward to double size. Meanwhile, the pivot point on the Y axis is the middle of the object (50%), so the object grows up and down from its center line.
- The animation starts immediately (startOffset=0) and finishes in 400 msec.
- Just because we animate, the list layout is not recalculated. Hence the generous top and bottom paddings around the list elements: there is enough space provided for the TextView to grow.
- In the list row layout (res/layout/row.xml), the layout_width is set to fill_parent. This is a seemingly random choice, but actually the program does not work well with wrap_content as layout_width. Whenever a list element is selected, its color changes to the highlight color. If the width is wrap_content, then the size of the TextView (hence the highlighted screen area) is just the length of the text. When the animation starts, the text grows past this size and the end of the text becomes unreadable.
It is therefore very important to allow the list row to occupy the entire available width: the entire row will then be highlighted, and the animation cannot grow larger than the highlighted area.
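The full res/anim/magnify.xml did not survive the blog engine's mangling; a reconstruction consistent with the attributes described above would look roughly like this (the exact file in the bundle may differ):

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- Scale animation: shrink to half, then grow to double, pivoting on
     the left edge horizontally and the center line vertically. -->
<scale xmlns:android="http://schemas.android.com/apk/res/android"
    android:fromXScale="0.5"
    android:toXScale="2.0"
    android:fromYScale="0.5"
    android:toYScale="2.0"
    android:pivotX="0%"
    android:pivotY="50%"
    android:startOffset="0"
    android:duration="400" />
```

Placed in res/anim, this resource can be loaded and handed to a View's startAnimation() method as described above.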
With 700 physicists from 90 different institutions in 20 countries working on an experiment, you expect interesting results. And the DZero experiment at Fermi National Accelerator Laboratory is living up to the expectation. Scientists at Fermilab have been studying a subatomic particle known as the B_s meson (pronounced ‘B sub s’). Their work suggests that this particle actually oscillates between matter and antimatter more than 17 trillion times per second. The data come from over 1 billion events at Fermilab’s Tevatron particle accelerator, and more precise results are expected soon from a different Fermilab collaboration. And the more we learn, the better: exactly how particles turn into their own antiparticles, and with what frequency, is a major issue that could point to answers in an even bigger one, the balance between matter and antimatter in the universe. For if matter and antimatter appeared in equal numbers at the time of the Big Bang, their mutual annihilation should have left nothing behind but energy. So how did matter survive? One solution is that there was, for reasons unknown, an imbalance between matter and antimatter. Back in 1985, physicist John Cramer, in one of his engaging columns for Analog, said that the early universe could have had 100,000,001 protons for every 100,000,000 antiprotons, making the universe we see around us “…the few ragged survivors of the ‘antimatter wars’ of 16 billion years ago.” We’ve revised the date on that, knowing from the WMAP results that the universe’s birth occurred roughly 13.7 billion years ago. But the idea of an early imbalance remains, even if unsatisfying. But the fluctuation of particles into their own antiparticles may be giving us clues about a more robust solution. The frequency of matter/antimatter oscillation has never been measured to this degree of confidence. 
Studies of such oscillations go back to the 1980s, when a different kind of meson (the B_d meson) was found to oscillate at a higher rate than predicted by theory. The current studies aim to firm up our knowledge of how B_s mesons fit into the picture, with an eye toward uncovering new interactions that any deviation from prediction might reveal. Out of such results could come ways to evaluate exotic supersymmetry theories, whose predictions may be put to the test.
Mauquoy, D., Blaauw, M., van, Geel, B., Borromei, A., Quattrocchio, M., Chambers, F.M. and Possnert, G. 2004. Late Holocene climatic changes in Tierra del Fuego based on multiproxy analyses of peat deposits. Quaternary Research 61: 148-158. What was done Changes in temperature and/or precipitation were inferred from plant macrofossils, pollen, fungal spores, testate amebae and peat humification in peat monoliths collected from the Valle de Andorra about 10 km to the northeast of Ushuaia, Tierra del Fuego, Argentina (54° 45' S latitude). Value-enhanced by an improved 14C wiggle-match dating technique, the new chronologies were compared with other chronologies of pertinent data from both the Southern and Northern Hemispheres in an analysis that helped the authors come to their final important conclusions. What was learned Mauquoy et al. report finding evidence for a period of warming-induced drier conditions from AD 960-1020 that "seems to correspond to the Medieval Warm period (MWP, as defined in the Northern Hemisphere)." They note that "this interval compares well to the date range of AD 950-1045 based on Northern Hemisphere extratropical tree-ring data (Esper et al., 2002)," and they thus conclude that this correspondence "shows that the MWP was possibly synchronous in both hemispheres, as suggested by Villalba (1994)." What it means Once again, we have evidence far removed from the site of origin (North Atlantic Ocean) of the concept of the Medieval Warm Period, demonstrating the truly global nature of this prior period of warmth that equals or exceeds that of the present without the benefit of today's supposedly enhanced greenhouse effect, which climate alarmists claim is primarily due to the 100-ppm increase in the air's CO2 concentration that has occurred over the intervening millennium, but more particularly over the last century. 
The global expression of this prior period of significant warmth bears witness to the fact that the high CO2 concentrations of today need not be the source of our present global warmth, since whatever caused the high global temperatures of the Medieval Warm Period may well be the cause of the high global temperatures of today. And that cause is not CO2. Esper, J., Cook, E.R. and Schweingruber, F.H. 2002. Low-frequency signals in long tree-ring chronologies for reconstructing past temperature variability. Science 295: 2250-2253. Villalba, R. 1994. Tree-ring and glacial evidence for the Medieval Warm Epoch and the 'Little Ice Age' in southern South America. Climatic Change 26: 183-197. Reviewed 16 June 2004
In an article published in the journal Ecology and Evolution, Forest Service Southern Research Station researchers Susan Loeb and Eric Winters discuss the findings of one of the first studies designed to forecast the responses of a temperate zone bat species to climate change. The researchers modeled the current maternity distribution of Indiana bats and then modeled future distributions based on four different climate change scenarios. “We found that due to projected changes in temperature, the most suitable summer range for Indiana bats would decline and become concentrated in the northeastern United States and the Appalachian Mountains,” says SRS research ecologist Loeb. “The western part of the range (Missouri, Iowa, Illinois, Kentucky, Indiana, and Ohio)—currently considered the heart of Indiana bat maternity range—would become unsuitable under most climates that we modeled. This has important implications for managers in the Northeast and the Appalachian Mountains as these areas will most likely serve as climatic refuges for these animals when other parts of the range become too warm.” In general, bat species in temperate zones such as Indiana bats may be more sensitive than many other groups of mammals to climate change because their reproductive cycles, hibernation patterns, and migration are closely tied to temperature. Indiana bat populations were in decline for decades due to multiple factors, including the destruction of winter hibernation sites and loss of summer maternity habitat. Due to conservation efforts, researchers saw an increase in Indiana bat populations in 2000 to 2005, but with the onset of white-nose syndrome populations are declining again, with the number of Indiana bats reported hibernating in the northeastern United States down by 72 percent in 2011. 
The study predicts even more declines due to temperature rises from climate change, with much of the western portion of the current range forecast to be unsuitable for maternity habitat by 2060. "Our model suggests that once average summer (May through August) maximum temperatures reach 27.4°C (81.3°F), the climatic suitability of the area for Indiana bat maternity colonies declines," says Loeb. "Once they reach 29.9°C (85.8°F), the area is forecast to become completely unsuitable. Initially, Indiana bat maternity colonies may respond to warming temperatures by choosing roosts that have more shade than the roosts that they currently use. Eventually, it is likely that they will have to find more suitable climates." The models the researchers produced provide resource managers guidance on areas that are likely to contain maternity colonies now and in the future, depending on the availability of suitable habitat in those areas. "Managers in the western parts of the range should be aware of the potential changes in summer distributions due to climate change and not assume that declines are due to habitat loss or degradation," says Loeb. "Management actions that foster high reproductive success and survival will be critical for the conservation and recovery of the species." Access the full text of the article: http://onlinelibrary.wiley.com/doi/10.1002/ece3.440/abstract

Susan Loeb | Source: EurekAlert!
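The two temperature thresholds quoted by Loeb can be turned into a simple suitability index. The linear ramp between the thresholds below is an assumption made for illustration; it is not the authors' fitted distribution model.

```python
def maternity_suitability(avg_summer_max_c):
    """Illustrative climatic suitability index for Indiana bat maternity
    colonies, based on the two thresholds quoted in the article: suitability
    starts declining at 27.4 C and the area becomes fully unsuitable at
    29.9 C. The linear interpolation between them is an assumption."""
    T_DECLINE, T_UNSUITABLE = 27.4, 29.9
    if avg_summer_max_c <= T_DECLINE:
        return 1.0
    if avg_summer_max_c >= T_UNSUITABLE:
        return 0.0
    return (T_UNSUITABLE - avg_summer_max_c) / (T_UNSUITABLE - T_DECLINE)

# Cooler summers remain suitable; intermediate ones partially so.
for t in (25.0, 28.0, 31.0):
    print(t, maternity_suitability(t))
```

A real species-distribution model would combine many climate variables, but the sketch shows why a small temperature rise can shift the predicted range so sharply near the upper threshold.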
Why is it that when you touch really cold metal, your skin gets stuck? (Lansing State Journal, November 18, 1992) When you touch cold metal, your skin is at a higher temperature than the metal. The heat from your hand flows from your finger to the metal. Metal conducts heat so well that if you were to touch a very cold piece of metal, the heat from your skin would be carried away very quickly, which lowers the temperature of your skin. If the metal were cold enough, the temperature near your skin could be lowered below the freezing point of water. The moisture from your skin would then freeze, "sticking" your skin to the metal. By pouring warm water over the skin, the frozen water is melted, freeing your finger.
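A rough way to quantify this: when two bodies suddenly touch, the interface temperature settles near a weighted average of their temperatures, weighted by each material's thermal effusivity (the standard semi-infinite-solid contact result). The effusivity values below are typical textbook approximations, not measured data.

```python
def contact_temperature(t1_c, e1, t2_c, e2):
    """Interface temperature (deg C) of two bodies in sudden contact.

    e1, e2 are thermal effusivities in J/(m^2 K s^0.5). High-effusivity
    materials like metals dominate the weighted average.
    """
    return (e1 * t1_c + e2 * t2_c) / (e1 + e2)

# Approximate effusivities (illustrative values):
SKIN = 1500.0
ALUMINUM = 24000.0
WOOD = 400.0

# Skin at 33 C touching a -20 C surface:
print(contact_temperature(33.0, SKIN, -20.0, ALUMINUM))  # well below 0 C: moisture freezes
print(contact_temperature(33.0, SKIN, -20.0, WOOD))      # still above 0 C: no sticking
```

This is why cold wood at the same temperature does not grab your skin: its low effusivity keeps the contact surface close to skin temperature.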
|May3-11, 05:17 AM||#1|
Which one of the statements below specifies the critical basis of the apparatus used in the Michelson/Morley experiment?
A) The half-silvered mirror must reflect exactly half the light.
B) Mirror 1 and mirror 2 must be identical.
C) The distance travelled by light using either path must be equal.
D) The components must not move relative to each other.
I don't think A or B are right. If the different distances were known C would be fine. If the relative speeds and distances between parts were known then D should also be fine, I would think. Any ideas?
|May3-11, 04:46 PM||#2|
D is not fine. If the components of the interferometer moved, and you observed a shift in the interference pattern, how would you know whether the shift was due to the components moving or due to the aether changing direction relative to the device? In theory, you can precisely measure the movements of the interferometer's components and account for them. In practice, it's impossible to measure sub-micrometer movements that precisely.
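For context on why the fringe pattern matters at all: the classical aether analysis predicts a fringe shift of about N = 2 L v^2 / (lambda c^2) when an equal-arm apparatus is rotated 90 degrees. The sketch below uses commonly quoted 1887 parameters (roughly 11 m effective path per arm, 500 nm light, Earth's 30 km/s orbital speed); these numbers come from standard textbook treatments, not from this thread.

```python
def expected_fringe_shift(arm_m, wavelength_m, v_ms, c_ms=3.0e8):
    """Classical aether prediction for the Michelson-Morley fringe shift
    on a 90-degree rotation, assuming both arms have effective length arm_m."""
    return 2 * arm_m * (v_ms / c_ms) ** 2 / wavelength_m

# ~11 m effective path per arm, ~500 nm light, ~30 km/s orbital speed.
shift = expected_fringe_shift(11.0, 500e-9, 3.0e4)
print(shift)  # about 0.4 fringes; the observed shift was far smaller
```

The predicted shift of a few tenths of a fringe was well within the instrument's resolution, which is why the null result was so significant.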
Wild Things: Life as We Know It Orchids, Baboons, Ancient Reptiles and More... - By T. A. Frail, Jesse Rhodes, Jessica Righthand, Brandon Springer and Sarah Zielinski - Smithsonian magazine, September 2010 Researchers in Ecuador have discovered a new trick from a famously deceptive family of flowers. The Dracula orchid, so named because its modified leaves resemble flying bats, has a central petal that looks like the small, white mushroom where Zygothrica flies mate. The flowers even give off a fungal scent. Why? Flies that congregate on an orchid unwittingly gather pollen and transfer it to the next flower they visit. Learn more about dracula orchids at the Encyclopedia of Life. “Resolution of Body Temperature by Some Mesozoic Marine Reptiles,” Aurélien Bernard et al., Science, June 11, 2010 “Lord of the Flies: Pollination of Dracula Orchids,” Lorena Endara et al., Lankesteriana, April 2010 “Structure of Social Networks in a Passerine Bird: Consequences for Sexual Selection and the Evolution of Mating Strategies,” Kevin P. Oh and Alexander V. Badyaev, The American Naturalist, September 2010 “Strong and Consistent Social Bonds Enhance the Longevity of Female Baboons,” Joan B. Silk et al., Current Biology, August 10, 2010 “The Function of Bilateral Odor Arrival Time Differences in Olfactory Orientation of Sharks,” Jayne M. Gardiner and Jelle Atema, Current Biology, July 13, 2010
I started using Git a few years ago, and to be sure, I cannot say that I know everything about it. However, I know enough of it to say that I am much more efficient using Git than other source control methods. In this post, I will summarize some of the commands everyone needs to know in order to start using Git.

Creating a Repository

Creating a repository is just a matter of initializing a directory with git metadata and adding the files to the repository. For example, suppose you want to create a repository of the directory /home/john/myproject. To do this, you need to use the commands

git init
git add .
git commit -m 'first version'

The first git command just initializes the .git directory, where all git metadata is stored. The second command adds everything under the current directory to the repository. Finally, the git commit command creates the initial revision of the project in the repository.

The next common operation in a repository is to make changes to the existing files. After you make the needed changes using your text editor or IDE, the following commands will do the trick:

git add ChangedFile1 ChangedFile2
git commit -m 'description of change'

The git add is necessary even if the file already exists, because without it git doesn't know whether the file needs to be updated or not. In git parlance, git add is used to populate the "staging area", which will be written by the commit. This is a difference from SVN, for example, where any commit accepts all changes by default. With git you need to be specific about what is being committed.

Optionally, if you prefer to use a GUI to view and commit the changes, you can use the command

git gui

This will invoke the Tk interface, which works both on UNIX and Windows (and probably Mac OS X).

Deleting and Recovering

Another common operation is to remove files. This can be done with

git rm FileName

Notice that this will work only if the file hasn't been modified.
If you need to delete a file that has been changed, use

git rm -f FileName

To recover a file that has been deleted or changed, you can use the following:

git checkout -- FileName

With the commands above you can do pretty much anything you need in terms of normal usage of a VCS. However, creating branches is where the fun is. Git favors the creation of branches, because creating them is very fast and cheap (as measured in memory usage). To create and use a new branch, type

git branch BranchName
git checkout BranchName

The first command creates the branch, while the second changes to the new branch. You can always go back using

git checkout master

where master is the default branch name. Git makes it very easy to create branches, but also to merge the results of two or more branches. To merge a branch called "BranchName" into the current branch, you can use the command

git merge BranchName

Git does a great job of resolving conflicts automatically. However, if it can't resolve a conflict, you will be able to edit the files manually and commit them as normal. You can visualise the results of merges and of other branches using a graphical tool. Just type

gitk --all

and you will be able to see all branches in the repository, along with the history. For a short log of changes, you can also type

git log --oneline

There are thousands of advanced tricks you can learn with git, since each part of the system may work independently of the others. I will give you just a flavor of what you can do. For example, if you want to change the order in which commits appear in a branch, you can use

git rebase -i CommitID

The commit id is the hexadecimal number listed on each commit when you use git log. The command above will open an editor and allow you to edit the commits that have been made since CommitID. You can change the order in which they are applied, merge two or more commits, or even remove some of them. In this way, you can rewrite the history of your commits, and make changes that you wouldn't be able to do in a system such as SVN.
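The whole basic workflow can be exercised end to end in a throwaway repository; the file names, branch name and commit messages here are made up for illustration.

```shell
#!/bin/sh
# Walk through the basic init/add/commit/branch/merge workflow.
set -e
repo=$(mktemp -d)
cd "$repo"

git init -q
git config user.email "john@example.com"  # commits need an identity
git config user.name "John"

echo "first line" > notes.txt
git add .
git commit -q -m 'first version'

main=$(git symbolic-ref --short HEAD)     # 'master' or 'main', depending on git version

git branch experiment
git checkout -q experiment
echo "second line" >> notes.txt
git add notes.txt
git commit -q -m 'change on branch'

git checkout -q "$main"
git merge -q experiment                   # fast-forward merge
git log --oneline
```

Because the branch was never diverged from, the merge is a fast-forward and the final log shows both commits on the default branch.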
The physics of a ball rolling inside a cylinder is worked out in the paper by Gualtieri et al; they analyze the various forces involved, notably a Coriolis torque that acts around the -axis. A physical demonstration of the process is described in the paper by Matsuura. We enhanced the work of Gualtieri et al by allowing an initial spinning -axis. This matches the behavior of a golf ball, which would be spinning at a rate proportional to its speed as it falls into the cup (see second snapshot). Solving the differential equations yields a formula of the form . But in order to learn where points are at time requires the numerical solution of a differential equation so that the angular velocity at each instant is as it should be. The angular velocity is computed from by the following equation, where is the radius of the ball, is the radius of the cylinder, and is a constant that represents the (negative of) the angular speed of the ball around the central axis of the cylinder: . Note that is a linear function of . The program for the soccer ball was taken from a Demonstration by Greg Wilhelm. M. Gualtieri, T. Tokieda, L. Advis-Gaete, B. Carry, E. Reffet, and C. Guthman, "Golfer's Dilemma," American Journal of Physics (6), 2006 pp. 497–501. A. Matsuura, "Strange Physical Motion of Balls in a Cylinder," Proc. of Bridges Conference, Banff, 2005.
ON-SITE WASTEWATER DISPOSAL

Wastewater from sinks, tubs, showers, dishwashers, clothes washers and toilets flows into the household septic system. The wastewater (averaging about 55 gallons per person per day according to the Massachusetts Title 5 Sanitation Code) contains human waste products in addition to other solids, grease, dirt, chemicals, bacteria and viruses. Wastewater enters the septic system, where solids settle to the bottom in a sludge layer; grease floats on top as a scum layer. Liquids flow out to the distribution box and from there to the leach field. Bacteria play a major role in converting organic material within the septic system to ammonium and nitrate. These and other forms of nitrogen participate in the nitrogen cycle.

Fate of Nitrogen from Septic Systems
Some nitrogen is adsorbed to soil particles and does not enter groundwater; some is denitrified to nitrogen gas (N2) and escapes to the atmosphere. However, nitrate (NO3-) is very soluble in water and much of it travels with groundwater to a coastal embayment. In areas where there is no oxygen but there is organic matter, some nitrate can be converted to nitrogen gas. In the presence of oxygen, most of the ammonium (NH4+) is changed to nitrate.
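The per-capita design figure above translates directly into household loads. The four-person household below is a made-up example; the gallons-to-liters factor is the standard US conversion.

```python
LITERS_PER_GALLON = 3.785  # US gallon

def daily_wastewater_liters(occupants, gallons_per_person=55):
    """Daily household wastewater volume, using the Title 5 design figure
    of about 55 gallons per person per day quoted in the text."""
    return occupants * gallons_per_person * LITERS_PER_GALLON

# A hypothetical four-person household:
print(daily_wastewater_liters(4))  # 220 gallons, roughly 833 liters per day
```

Numbers like these are what septic tank and leach field capacities are sized against.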
The vegetation of an infrequently burned Tasmanian mountain region

Kirkpatrick, JB and Harwood, CE (1980) The vegetation of an infrequently burned Tasmanian mountain region. Proceedings of the Royal Society of Victoria, 91. pp. 79-107. ISSN 0035-9211

The Mt. Bobs-Boomerang area in southern Tasmania is rugged and mountainous (600-1080 m above sea level), with a perhumid cool (Thornthwaite classification) climate and a range of geological substrates including mudstone, sandstone, limestone and dolerite. 164 species of vascular plants, all native to Tasmania, have been recorded in the study area. The subalpine vegetation is composed primarily of rainforest and scrub communities. Fires have had major effects on these communities, but are rare; the period since the last fire varies between about 50 and 500 years. A small area of herbland and heathland occupies the poorly drained valley floors, and different herbland communities are found on the flats and limestone cliffs around Lake Sydney. Above the treeline, which occurs at about 1000 m on Mt. Bobs and the Boomerang, heathland is the major vegetation formation. Herblands are found in sheltered sites with the longest snowlie, and fjaeldmark, much of it associated with a pattern of non-sorted solifluction terraces, occupies the highest, most exposed part of the mudstone-capped Boomerang. Exposure to strong winds, snowlie, substrate type, degree of waterlogging and fire frequency appear to be the major environmental determinants of the plant communities.

Deposited By: Professor J.B. Kirkpatrick
Deposited On: 26 Sep 2007
Last Modified: 18 Jul 2008 20:08
There are many different ways of conceptually approaching quantum field theory. I find that the most satisfying way of understanding quantum field theory is to think of it as a cross between special relativity and quantum mechanics. Quantum mechanics deals with the behavior of some given number of particles which may interact with each other and/or with some external forces. The quantum theory that you're interested in (this includes the number of particles, all external forces, and the way they interact) can usually be represented by a mathematical operator known as the Hamiltonian. For example, your Hamiltonian might describe the interaction between an electron and a proton, or a single electron sitting in a magnetic field. What special relativity adds to this is the notion that matter and energy are one and the same (most readily demonstrated by Einstein's famous formula E = mc²), and therefore particles can be created or destroyed (at a cost of some amount of energy), so that the number of particles is no longer a constant in your theory. Therefore, if we want an accurate picture of quantum mechanics which obeys the laws of special relativity, it is necessary to change how we construct our theory. For example, when you turn on a light switch, you're creating zillions of photons. There is no theory in nonrelativistic quantum mechanics which can describe this. Conceptually, what quantum field theory does to solve this problem is to "combine" all quantum theories with N particles into a general theory with all possible combinations of particles. Mathematically, it expands your Hilbert space (the space of all possible quantum states with N particles) into a much bigger space, called Fock space, the space of all possible quantum states with any combination of the particles in your theory. The cumbersome mathematics of quantum field theory (e.g. infinite-dimensional path integrals) can be radically simplified by making use of what are known as Feynman diagrams.
A simple way of understanding a Feynman diagram is that it represents a possible path a system could take to get from one configuration to another. For example, for a photon to get from point a to point b, it might just go directly from a to b without doing anything, or it might split into an electron and a positron; then the electron and positron could collide, annihilating one another and producing another photon, which propagates the rest of the way to point b. Or this electron-positron process could happen twice. These are three of the infinitely many possible diagrams which contribute to the process of the photon propagating from point a to point b. For Feynman diagrams that don't look like total crap, see Peskin & Schroeder's An Introduction to Quantum Field Theory, or pretty much any textbook on the subject. One interesting prediction quantum field theory makes is the presence of virtual particles, which appear and disappear out of nowhere; they require no energy cost to appear (i.e. they can violate conservation of energy), as long as they disappear sufficiently quickly. Feynman diagrams for these particles have no endpoints like the diagrams above; they appear only as a combination of connected loops. While quantum field theory has some problems (regularization and renormalization procedures generally have you adding and subtracting infinities in ways that would make most mathematicians lose their cookies), it has provided the most accurate and precise description of particle physics to date. If the quantum theory you're studying is the Standard Model, it is possible to accurately predict the behavior of the electromagnetic force, strong nuclear force, and weak nuclear force.
Generalizing to quantum field theory in curved spacetime, it is possible to predict the behavior of those forces in an external gravitational field. However, a way has not yet been discovered to "quantize" this gravitational field properly, so gravity must be treated as an external force, and gravitons must remain a mystery to particle physicists, at least until string theorists can shed some light on the subject.
Hey, I'm new here and I need some math help FAST! Is there anyone who can solve these and show the work? I'm kind of confused... Sorry if it's sloppy. Thanks in advance.

If the polygons are regular, then the distances from the vertices to the center are all equal, so you can solve for the angle between the line segments joining two adjacent vertices to the center (this is the angle at the center that is contained by the two line segments). For the pentagon the angle would be 360/5 degrees, and for the hexagon the angle would be 360/6 degrees. From here you can use trigonometric ratios, the Pythagorean theorem and the area formula for triangles to solve for the area of each.

In the second one you have the square where the length from center to vertices is 8. This makes 4 isosceles triangles with angles 45, 45, 90; you can find the angles that are 90 by doing a similar trick as above: 360/4. So you have a right triangle with legs of length 8 and 8, and you can solve for the hypotenuse to get the length of one side of the square; thus you can get the area. The area of the circle is (pi)(r squared). You can find the probability of hitting the square by taking the ratio of the square's area to the circle's area: area of square / area of circle x 100 = %. Then you can find the probability of hitting the shaded area by taking 100% minus the calculated probability of hitting the square. I hope this helps in some way or another.
These two group activities use mathematical reasoning - one is numerical, one geometric.
- Suppose there is a train with 24 carriages which are going to be put together to make up some new trains. Can you find all the ways that this can be done?
- How could you put eight beanbags in the hoops so that there are four in the blue hoop, five in the red and six in the yellow? Can you find all the ways of doing this?
- This magic square has operations written in it, to make it into a maze. Start wherever you like, go through every cell and go out a total of 15!
- In how many ways could Mrs Beeswax put ten coins into her three puddings so that each pudding ended up with at least two coins?
- Suppose we allow ourselves to use three numbers less than 10 and multiply them together. How many different products can you find? How do you know you've got them all?
- Take 5 cubes of one colour and 2 of another colour. How many different ways can you join them if the 5 must touch the table and the 2 must not touch the table?
- There are 4 jugs which hold 9 litres, 7 litres, 4 litres and 2 litres. Find a way to pour 9 litres of drink from one jug to another until you are left with exactly 3 litres in three of the jugs.
- Using the cards 2, 4, 6, 8, +, - and =, what number statements can you make?
- Using the statements, can you work out how many of each type of rabbit there are in these pens?
- Start with three pairs of socks. Now mix them up so that no mismatched pair is the same as another mismatched pair. Is there more than one way to do it?
- What do the digits in the number fifteen add up to? How many other numbers have digits with the same total but no zeros?
- My briefcase has a three-number combination lock, but I have forgotten the combination. I remember that there's a 3, a 5 and an 8. How many possible combinations are there to try?
- There are 44 people coming to a dinner party. There are 15 square tables that seat 4 people. Find a way to seat the 44 people using all 15 tables, with no empty places.
- Find your way through the grid starting at 2 and following these operations. What number do you end on?
- El Crico the cricket has to cross a square patio to get home. He can jump the length of one tile, two tiles and three tiles. Can you find a path that would get El Crico home in three jumps?
- Lolla bought a balloon at the circus. She gave the clown six coins to pay for it. What could Lolla have paid for the balloon?
- An investigation involving adding and subtracting sets of consecutive numbers. Lots to find out, lots to explore.
- This problem is based on the story of the Pied Piper of Hamelin. Investigate the different numbers of people and rats there could have been if you know how many legs there are altogether!
- Can you put the numbers 1 to 8 into the circles so that the four calculations are correct?
- Find the product of the numbers on the routes from A to B. Which route has the smallest product? Which the largest?
- Imagine that the puzzle pieces of a jigsaw are roughly a rectangular shape and all the same size. How many different puzzle pieces could there be?
- Kate has eight multilink cubes. She has two red ones, two yellow, two green and two blue. She wants to fit them together to make a cube so that each colour shows on each face just once.
- Arrange 9 red cubes, 9 blue cubes and 9 yellow cubes into a large 3 by 3 cube. No row or column of cubes must contain two cubes of the same colour.
- Can you substitute numbers for the letters in these sums?
- Choose four different digits from 1-9 and put one in each box so that the resulting four two-digit numbers add to a total of 100.
- Ben has five coins in his pocket. How much money might he have?
- This challenge is to design different step arrangements, which must go along a distance of 6 on the steps and must end up at 6 high.
- Can you find the chosen number from the grid using the clues?
- Ten cards are put into five envelopes so that there are two cards in each envelope. The sum of the numbers inside it is written on each envelope. What numbers could be inside the envelopes?
- This problem is based on a code using two different prime numbers less than 10. You'll need to multiply them together and shift the alphabet forwards by the result. Can you decipher the code?
- Here are four cubes joined together. How many other arrangements of four cubes can you find? Can you draw them on dotty paper?
- Add the sum of the squares of four numbers between 10 and 20 to the sum of the squares of three numbers less than 6 to make the square of another, larger, number.
- When intergalactic Wag Worms are born they look just like a cube. Each year they grow another cube in any direction. Find all the shapes that five-year-old Wag Worms can be.
- There is a clock-face where the numbers have become all mixed up. Can you find out where all the numbers have got to from these ten statements?
- What do you notice about the date 03.06.09? Or 08.01.09? This challenge invites you to investigate some interesting dates.
- Your challenge is to find the longest way through the network following this rule. You can start and finish anywhere, and with any shape, as long as you follow the correct order.
- Can you find which shapes you need to put into the grid to make the totals at the end of each row and the bottom of each column?
- Find all the numbers that can be made by adding the dots on two dice.
- There are 78 prisoners in a square cell block of twelve cells. The clever prison warder arranged them so there were 25 along each wall of the prison block. How did he do it?
- Place the numbers 1 to 6 in the circles so that each number is the difference between the two numbers just below it.
- Place the numbers 1 to 10 in the circles so that each number is the difference between the two numbers just below it.
- Can you order the digits from 1-6 to make a number which is divisible by 6 so when the last digit is removed it becomes a 5-figure number divisible by 5, and so on?
- Two children made up a game as they walked along the garden paths. Can you find out their scores? Can you find some paths of your own?
- What happens when you add three numbers together? Will your answer be odd or even? How do you know?
- Sweets are given out to party-goers in a particular way. Investigate the total number of sweets received by people sitting in different positions.
- Can you use the information to find out which cards I have used?
- How many models can you find which obey these rules?
- My coat has three buttons. How many ways can you find to do up all the buttons?
- In this problem it is not the squares that jump, you do the jumping! The idea is to go round the track in as few jumps as possible.
Hey guys, I have a small problem. I have a program that repeats using a do-while. The program basically does some simple calculations: say you enter a bet, and it calculates how much you won based on the bet. The problem is I want to tally all the money the person gets when the loop is executed multiple times. How can I do that? Here is the basic structure of the program:

int player_balance = 100;
do {
    // program to be looped
    // adjust player_balance inside the loop based on amount won or lost
} while (player_balance > 0);
cout << "Sorry, you're out of money" << endl;

BTW, what follows while is an expression or condition, not a statement.

Thank you very much, I can now calculate the overall money won... One more question: I want the user to restart the program by choosing yes or close it by choosing no, but I also want the program to only run 4 times. Here's the code:

do {
    // program to be looped
} while (c == 'y' || cnt <= 4);

When the c == 'y' is at the front, it runs regardless of whether cnt is at 5; likewise with cnt <= 4 it runs to four even if the user chooses to stop before cnt is 4. What am I doing wrong?
The Arabia Terra region on Mars is populated with numerous craters, filled with deposits of various materials that, over time, have become severely eroded. The latest images acquired by the HRSC camera show many features of this kind, known as 'yardangs', in Danielson Crater; the different types of material these contain could be explained by changes in the climate. The Robotics and Mechatronics Center (RMC) at the German Aerospace Center (Deutsches Zentrum für Luft- und Raumfahrt; DLR) is exhibiting at AUTOMATICA, the leading international exhibition for automation and mechatronics, which is being held in Munich from 22 to 25 May. Even though it doesn’t quite qualify as a 'proper' planet, the second most massive asteroid in the Solar System, Vesta – which has a diameter of approximately 530 kilometres – exhibits numerous planetary characteristics. This is just one of the many significant results of NASA's Dawn mission, published in the journal Science on 11 May 2012. The Dawn spacecraft has been orbiting Vesta since 16 July 2011. The German Aerospace Center (DLR) is involved in the mission. New images from the HRSC camera on board the Mars Express spacecraft show numerous dried up river valleys and various former crater lakes in the Acidalia Planitia region. They are further evidence of the existence of water on the surface of Mars for an extended period of time. Such areas are of particular interest to the search for microbial life, which may have developed here under these circumstances. Alpine and polar lichens could also survive on Mars. Planetary researchers at the German Aerospace Center (Deutsches Zentrum für Luft- und Raumfahrt; DLR) simulated the conditions on Mars for 34 days and exposed various microorganisms to this environment. The Moon continues to be a fascinating research objective for scientists from around the world. 
The DLR Institute of Planetary Research collaborated with NASA's Lunar Science Institute to hold a two-day Lunar Symposium, which took place on 19 and 20 April 2012 at the Adlershof Forum in Berlin. In the Tharsis volcanic region, almost the size of Europe, the Martian highlands have arched up into a shield several thousand metres in height as a consequence of volcanic processes. Quite a few unusual topographic features can be observed there. Ius Chasma is one of the main graben in Valles Marineris, one of the largest known canyon systems in the Solar System. Over a length of 940 kilometres, Ius Chasma forms the northern boundary between the western half of this enormous valley system and the Martian highlands. Amateur astronomers who on occasion observe Mars through the eyepiece of their telescopes are quite familiar with the region of Syrtis Major; when observing conditions are good, it can be easily identified as a dark spot on Mars. NASA's Dawn spacecraft has been in its lowest orbit around asteroid Vesta since mid-December 2011. During November the orbit was gradually lowered to an altitude of 210 kilometres above the asteroid's surface.
Published by the American Geological Institute and Trends in the Geosciences

A NEW LOOK AT NATURAL DISASTERS
by Timothy A. Cohn and Kathleen K. Gohn

Natural hazards become disasters only when they collide with people.

ABOVE: El Niño brought on heavy rains in 1998 that plagued the northwestern United States with landslides. Pictured is a landslide at Magnolia Bridge, Seattle. The porch at the back of the light-colored house was undermined and collapsed. The head of the slide was covered with plastic sheets to prevent additional rainwater from entering it. Photo courtesy of U.S. Geological Survey.

We are paying a high price for the way we live on our beautiful but dangerous planet. Last year the world experienced deadly earthquakes in Turkey, Taiwan, Colombia and Greece; floods and devastating landslides in Venezuela; hurricanes along the Atlantic Coast that forced evacuation of millions; and numerous smaller disasters. As bad as these events were, they were not extraordinary viewed in the context of previous 20th-century occurrences. The century began with America's deadliest natural disaster: the 1900 hurricane that hit Galveston, Texas, and killed at least 6,000 people. The 1902 eruption of Mount Pelee on the island of Martinique destroyed the capital city and took more than 30,000 lives. The Tangshan, China, earthquake of 1976 killed at least 300,000 people, and perhaps more than twice that number. In 1998, Hurricane Mitch killed more than 10,000 people in Central America and devastated the regional economy. While the global death toll remained depressingly high throughout the 20th century, the economic cost of natural disasters has skyrocketed. Seven of the 10 costliest U.S. disasters occurred since 1989. The President's Office of Science and Technology Policy estimates that, on average, natural disasters cost the United States one billion dollars each week.
The floods on the Mississippi in 1993 caused tens of billions of dollars in damages, as did recent California earthquakes and eastern U.S. hurricanes. One such disaster prompted then-Representative Bill Emerson (R-Mo.) and Sen. Ted Stevens (R-Alaska) to note in 1995 that “Hurricane Andrew and California’s Northridge earthquake together cost $24 billion, more than what the government spends annually on running the federal court system, aiding higher education and pollution control, combined.”

Numerous well-intentioned efforts have attempted to reduce the human and economic costs of natural disasters. However, reducing natural disaster losses is not as simple as it first appears. Structural solutions are often unsatisfactory; the human, environmental and economic cost of attempting to engineer a society completely resistant to natural disasters is too high. The United States, for example, has spent billions of dollars on attempts to control flooding with dams, levees, and channelization. These efforts have not only impaired the functioning of natural ecosystems, but, by altering the floodplain, have, in some cases, actually increased flood peaks downstream.

Is There a Solution?

Gilbert White, the author of the first national assessment of natural hazards in 1975, recently wrote: “If the nation is to benefit fully from the growing and deepening knowledge of natural hazards, some effective method must be found to translate that understanding into operative public policy and private procedures.” In 1997, Public Private Partnership 2000 (PPP2000) was established to seek opportunities for government and private-sector organizations to work together to develop new strategies that will reduce vulnerability to natural hazards in the United States.
The partnership is cosponsored by the Subcommittee on Natural Disaster Reduction, a subcommittee of the National Science and Technology Council’s Committee on the Environment and Natural Resources; the Institute for Business and Home Safety, a property/casualty insurance organization dedicated to reducing deaths, injuries, property damage, economic losses and human suffering caused by natural disasters; and more than 20 private-sector organizations. The creation of PPP2000 recognized that past approaches to reducing the economic and social impacts of natural hazards cannot fully solve the problem, which is simply too large and too complex to be handled by any one group. Developing durable and comprehensive solutions will require continuing dialog among, and concerted action by, all sectors of our society.

From 1997 to 1999, the partnership conducted a series of public policy forums, bringing together about 100 stakeholders at each forum to discuss reducing the human and economic toll of natural disasters. The topics ranged from public health aspects of natural disasters to the problems of megacities, from structural design issues to business alliances and financial instruments for risk management.

Mother Nature vs. Human Nature

Much has been learned about specific aspects of natural hazards — for example, the assessment of earthquake hazards, or safe construction of dams. However, while progress in specific technical issues is important, the value of the PPP2000 forums lay in discovering the unexpected patterns and themes that appear when experts in different areas begin to talk to each other. Natural hazards, which the geologic record shows have been shaping the planet for millions of years, are not a problem to be solved but an essential part of how Earth functions. Ecosystems and individual species have evolved to coexist with them, and preventing their occurrence alters the natural system, often in undesirable ways.
In 1996, the Department of the Interior was in the unusual position of creating an artificial flood on the Colorado River to mimic the natural spring floods in order to restore streambanks in the Grand Canyon. Similarly, the U.S. Army Corps of Engineers has recently begun to de-channelize the Kissimmee River in Florida to restore the original flood-based ecosystem the Corps had altered 35 years earlier. Wildland fire and beach erosion are other examples of hazards for which the once-standard approach — controlling natural processes — is now considered economically and environmentally unsound. Natural hazards become disasters only when they collide with people. It is natural disasters that require solutions, and lasting solutions must be rooted in the realities of human behavior. As FEMA’s Associate Director of Mitigation Mike Armstrong said: “We can’t control mother nature; we can affect human nature.” Natural disasters are neither acts of God nor simple technical problems. Rather, they result from human decisions about how we choose to live and build. As Dennis Mileti has stated in his 1999 Disasters by Design, natural disasters “are symptoms of broader and more basic problems. Losses from hazards — and the fact that the nation cannot seem to reduce them — result from short-sighted and narrow conceptions of the human relationship to the natural environment.”

Better information about the natural environment is a key to reducing losses, and we can acquire it in three ways. Improved technology for warning systems will yield immediate benefits (lahar warning systems, for example, are being tested along the slopes of Mount Rainier in Washington state). Assessment of hazards across the nation will allow for better land-use planning, building codes and mitigation. And basic research can lead to fundamental breakthroughs like the theory of plate tectonics, which greatly improved our knowledge of earthquake and volcano hazards.

While the advances in science and technology have been impressive, their utility depends on educated citizens and policy-makers using this information to make better decisions. As USGS Director Charles “Chip” Groat explains, “science by itself will not protect us. Federal, state and local governments, the private sector, volunteer and charitable organizations and individual citizens must work together in applying the science to make our communities safer.” The U.S. government plays important roles in protecting its citizens from hazards: providing emergency response and assisting recovery, funding hazards research and collecting and maintaining hazards information. The government should also consider how its decisions influence our vulnerability to hazards. In recent years the government has revisited policies that have created incentives for reckless behavior. For example, well-intentioned government policies related to hazards insurance have sometimes had unfortunate consequences. Some structures have been damaged and rebuilt scores of times; if not for insurance and taxpayer subsidies, the owners of such structures would have been forced to incorporate the true costs of hazards into their decisions about whether to rebuild. Reducing these repetitive losses — which we all pay for — will require a change in government’s role, from one of rescuing victims to one of providing the information people need to protect themselves from future disasters. This change from reaction to prevention will require an investment in research and planning.

Where Do We Go from Here?

We have an opportunity to prevent future natural disasters, rather than just pick up the pieces after disasters strike.
Just as we need exercise and good diets, not faster ambulances and sharper scalpels, as a first defense against heart disease, we need to develop good habits about how we live and build on this planet. The Chairman of the Subcommittee on Natural Disaster Reduction, William H. Hooke, has put forward a “Bill of Rights,” a vision for a world in which natural hazards do not turn into disasters.
To improve performance of graphics output, most graphics devices provide some form of buffering. By default, Scheme's graphics procedures flush this buffer after every drawing operation. The procedures in this section allow the user to control the flushing of the output buffer.

graphics-enable-buffering graphics-device
Enables buffering for graphics-device. In other words, after this procedure is called, graphics operations are permitted to buffer their drawing requests. This usually means that the drawing is delayed until the buffer is flushed explicitly by the user, or until it fills up and is flushed by the system.

graphics-disable-buffering graphics-device
Disables buffering for graphics-device. By default, all graphics devices are initialized with buffering disabled. After this procedure is called, all drawing operations perform their output immediately, before returning. graphics-disable-buffering flushes the output buffer if necessary.
Joined: 16 Mar 2004 | Posted: Wed Mar 21, 2007 11:58 am | Post subject: Reading data stored as nuclear spins - Quantum Computing

A University of Utah physicist took a step toward developing a superfast computer based on the weird reality of quantum physics by showing it is feasible to read data stored in the form of the magnetic "spins" of phosphorus atoms. "Our work represents a breakthrough in the search for a nanoscopic [atomic scale] mechanism that could be used for a data readout device," says Christoph Boehme, assistant professor of physics at the University of Utah. "We have demonstrated experimentally that the nuclear spin orientation of phosphorus atoms embedded in silicon can be measured by very subtle electric currents passing through the phosphorus atoms." The study by Boehme and colleagues in Germany will be published in the December issue of the journal Nature Physics and was released online Sunday, Nov. 19, 2006. "We have resolved a major obstacle for building a particular kind of quantum computer, the phosphorus-and-silicon quantum computer," says Boehme. "For this concept, data readout is the biggest issue, and we have shown a new way to read data." Boehme, who joined the University of Utah faculty earlier this year, conducted the study with colleagues at the Hahn-Meitner Institute in Berlin and the Technical University of Munich.

A Bit about Quantum Computing

In modern digital computers, information is transmitted by flowing electricity in the form of electrons, which are negatively charged subatomic particles. Transistors in computers are electrical switches that store data as "bits," in which "off" (no electrical charge) and "on" (charge is present) represent one bit of information: either 0 or 1. For example, with three bits, there are eight possible combinations of 1 or 0: 1-1-1, 0-1-1, 1-0-1, 1-1-0, 0-0-0, 1-0-0, 0-1-0 and 0-0-1.
But three bits in a digital computer can store only one of those eight combinations at a time. Quantum computers, which have not been built yet, would be based on the strange principles of quantum mechanics, in which the smallest particles of light and matter can be in different places at the same time. In a quantum computer, one "qubit" – quantum bit – could be both 0 and 1 at the same time. So with three qubits of data, a quantum computer could store all eight combinations of 0 and 1 simultaneously. That means a three-qubit quantum computer could calculate eight times faster than a three-bit digital computer. Typical personal computers today calculate 64 bits of data at a time. A quantum computer with 64 qubits would be 2 to the 64th power faster, or about 18 billion billion times faster. (Note: billion billion is correct.) Researchers are exploring many approaches to storing and processing information in nanoscopic form – on the scale of molecules and atoms, or one billionth of a meter in size – for quantum computing. They include optical quantum computers that would hold data in the form of on-off switches made of light, ions (electrically charged atoms), the size or energy state of an electron's orbit around an atom, so-called "quantum dots" of material and the "spins" or magnetic orientation of the centers or nuclei of atoms. A New Spin on Quantum Computers Boehme's new study deals with an approach to a quantum computer proposed in 1998 by Australian physicist Bruce Kane in a Nature paper titled "A silicon-based nuclear spin quantum computer." In such a computer, silicon – the semiconductor used in digital computer chips – would be "doped" with atoms of phosphorus, and data would be encoded in the "spins" of those atoms' nuclei. Externally applied electric fields would be used to read and process the data stored as "spins." Spin is difficult to explain. 
A simplified way to describe spin is to imagine that each particle – like an electron or proton in an atom – contains a tiny bar magnet, like a compass needle, that points either up or down to represent the particle's spin. Down and up can represent 0 and 1 in a spin-based quantum computer, in which one qubit could have a value of 0 and 1 simultaneously. In the new study, Boehme and colleagues used silicon doped with phosphorus atoms. By applying an external electrical current, they were able to "read" the net spin of 10,000 of the electrons and nuclei of phosphorus atoms near the surface of the silicon. A real quantum computer would need to read the spins of single particles, not thousands of them. But previous efforts, which used a technique called magnetic resonance, were able to read only the net spins of the electrons of 10 billion phosphorus atoms combined, so the new study represents a million-fold improvement and shows it is feasible to read single spins – something that would take another 10,000-fold improvement, Boehme says. But the point of the study, he adds, is that it demonstrates it is possible to use electrical methods to detect or "read" data stored as not only electron spins but as the more stable spins of atomic nuclei. "We discovered a mechanism that will allow us to measure the spins of the nuclei of individual phosphorus atoms in a piece of silicon when the phosphorus is close [within about 50 atoms] to the surface," Boehme says. With improved design, it should be possible to build a much smaller device that "lets us read a single phosphorus nucleus." Sources: Nanowerk and University of Utah Further information: http://www.nanowerk.com/news/newsid=1047.php This story was first posted on 20th November 2006.
Home > News > Turning Gold Dust Into Clean Air March 20th, 2008 Turning Gold Dust Into Clean Air At age 82, William Miller might be excused for kicking back, playing with grandkids or simply puttering around. He's doing none of those things. Instead, Miller is leading a start-up with two big ambitions: to clean up diesel emissions and to spread a new approach to designing a key ingredient in countless chemical reactions, namely catalysts. "I like to be an explorer. This is an exploration," declares Miller. And that approach--the exploration of both a new technology and of the opportunity for using that nascent tool--is at the core of genuine innovation. Over the course of his 53-year career, Miller has been a computer scientist, chairman of a major software maker, an adviser to both venture capitalists and government leaders, and a teacher. These days, he is chairman and co-founder of Nanostellar, a four-year-old, 22-person start-up in Redwood City, Calif. Miller is too savvy to use the hyperbolic language common to most start-up founders. But he exudes quiet confidence that Nanostellar has a shot at making a genuine difference, both in reducing greenhouse gas emissions and in changing commercial chemistry. Nanostellar is developing novel chemical catalysts that promise big improvements over existing ingredients. The company's first product: fine powders of precious metals--gold, platinum and palladium--that when used to coat a filter for a diesel truck or car can reduce its toxic emissions by as much as 40% over existing catalytic converters. At Nanostellar's heart is a computer program that predicts how different compounds will work under specified conditions. Think of it as a design tool for chemists: "We have CAD-CAM tools for mechanical engineers, computer-aided design for circuit makers," says Pankaj Dhingra, Nanostellar's chief executive. "The impact our tool could have on the world of chemicals is absolutely humongous." 
Understanding W3C Schema Complex Types (10 tags): W3C XML Schemas aren't so hard, says Donald Smith. In four steps he shows how to easily understand and use complex types.
W3C XML Schema Design Patterns: Dealing With Change (2 tags): Designing schemas that support data evolution is beneficial in situations where the structure of the XML documents being processed may change as the application matures, but still need to be validated with the original schema.
Using W3C XML Schema (2 tags): A comprehensive introduction to XML Schema, a W3C XML language for describing and constraining the content of XML documents. Includes quick reference tables.
W3C XML Schema Design Patterns: Avoiding Complexity (2 tags): Previous attempts to define an effective subset of W3C XML Schema have thrown the baby out with the bathwater, says Dare Obasanjo, who proposes a less conservative set of guidelines for working with W3C XML Schema.
Using XML Catalogs with JAXP (2 tags): XML Catalogs offer a way to manage local copies of public DTDs, schemas, or any XML resource that exists outside of the referring XML instance document. Find out how to use them in Java with JAXP.
|Feb23-08, 10:38 AM||#1| Gravity as consequence of universe expansion I would like to suggest an explanation of the nature of the force of gravity. First I will state some well-known facts, then I will suggest an explanation and finally a hint for the way it might be proved correct. 1. Analysis of the spectrum of light from galaxies reveals a shift towards longer wavelengths proportional to each galaxy's distance, in a relationship described by Hubble's law, indicating that space-time is undergoing a continuous and uniform expansion (Wikipedia). The greater a galaxy's distance from us, the faster it moves away from us. This fact is usually exemplified with the simile of two points on the surface of an expanding balloon, moving apart from each other as it is inflated. The pull of gravity is usually explained as the fall of objects down a slope in a surface warped (sunk) by more massive objects (simile of the balls on an elastic surface), with lighter balls falling towards a bigger ball down the depression the latter creates. Now, this model describes very well HOW objects move in space because of gravity, but not WHY, since they should not fall in the absence of other forces, no matter how deep the depression is. 2. Apparently the most recent observations claim that this expansion is accelerating. Going back to the simile of the balls on an elastic surface, they could fall towards each other's holes if this elastic surface was accelerating upwards. Let's imagine that the whole set of balls on an elastic surface is in a lift or elevator travelling upwards with an increasing speed. Its very acceleration would make the balls warp the surface they are on, and the lightest balls would fall towards the heaviest. And here comes my speculation: Imagine the elastic surface (two dimensions) of a sphere or globe (three dimensions) is a simile of our universe (three spatial dimensions) as the surface of a hypersphere (four dimensions) which is expanding at an accelerating speed.
That accelerating expansion (like a four-dimensional balloon being blown up) makes the objects placed on its surface sink, warping it. The more massive the objects, the deeper the warp, and as a result lighter objects fall towards them. This would explain gravity in an easier way than today's speculations, where gravitons (not yet found) or masses placed in other dimensions are needed to provoke its effects in our universe. (In my opinion, the latter theory implies an endless series of masses pulling from equally endless dimensions...) Hint to a method to check this speculation: How could we possibly prove this theory right? I suppose we could by comparing the value of the acceleration of the expansion of the universe (which I do not know) and the gravitational constant (G = 6.67 × 10⁻¹¹ N·m²/kg²). But that is too far away from my capabilities. I leave the idea for cleverer minds. To sum up, gravity would simply be the result of the warp of space, caused by the inertia of the mass placed in it as a consequence of its accelerating expansion. Namely, in a universe with no expansion there would be no gravity and objects would not be attracted to each other. With zero expansion the gravitational constant G would be zero. Going back to the simile, if the lift or elevator stopped accelerating, the balls would stop pressing on the elastic surface. |Feb23-08, 03:08 PM||#2| This is not, as I am sure you understand, a new idea. Unfortunately, an analogy is not a theory. It has been shown that no assumed "elasticity" of the space-time surface that would be warped can result in the observed gravitational force. |Feb24-08, 05:36 AM||#3| Thanks a lot HallsofIvy. Could you recommend me any author or article to learn more about this topic? |Feb24-08, 12:35 PM||#4| Gravity as consequence of universe expansion Blade_Runner, this article has a similar idea, in that the expansion is causing gravity by a shadowing effect. It's not a mainstream idea, as you can imagine.
|Mar2-08, 08:22 AM||#5| Interesting idea, especially since the faster an object is moving the more mass it has (as shown in GR), hence it would have a stronger gravitational field. Is this thinking correct, anyone?
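The thread above turns on the numerical value of G. As a minimal illustration of how the constant enters Newton's law (nothing specific to the expansion speculation), here is a short Python sketch computing surface gravity; the Earth mass and radius figures are standard reference values, not from the thread:

```python
G = 6.674e-11  # gravitational constant, N*m^2/kg^2


def surface_gravity(mass_kg: float, radius_m: float) -> float:
    """Newtonian acceleration g = G*M/R^2 at the surface of a body."""
    return G * mass_kg / radius_m ** 2


# Earth: M ~ 5.972e24 kg, R ~ 6.371e6 m, giving the familiar g ~ 9.8 m/s^2
g_earth = surface_gravity(5.972e24, 6.371e6)
```

In the poster's scenario, G going to zero with zero expansion would make this acceleration vanish for any mass.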
The Amateur Scientist; March 2000; Scientific American Magazine; by Carlson; 2 Page(s) In January of last year I described a delightful device for detecting microfluctuations in the earth's magnetic field. The instrument was a sensitive torsion balance consisting of two small rare-earth magnets affixed to a taut nylon fiber with a tiny mirror attached to the fiber to reflect a laser beam onto a distant wall. When the instrument was properly nulled with additional magnets to cancel the earth's average magnetic field, an infinitesimal change in the earth's field rotated the rare-earth magnets and deflected the laser beam. Originally developed by Roger Baker of Austin, Tex., this homemade magnetometer created quite a stir in the amateur community. But the device required constant visual monitoring to collect data, so it wasn't really suitable for serious science. Baker, however, suggested how someone could convert his unit into a research-grade instrument. This month I'm delighted to report that Joseph A. Diverdi, a chemist in Fort Collins, Colo., has met that challenge brilliantly.
Molecular beacons would be ideal diagnostics for detecting point mutations in disease genes if single-base mismatches weren't so hard to distinguish. These noose-shaped DNA segments are engineered to light up when they bind to target DNA, such as a mutated cancer gene. However, it has been difficult to detect the difference between complete complementarity and binding that is mismatched by one or two nucleotides, because an imperfect match still has a chance—though a smaller one—of binding and fluorescing. Xudong Fan and Yuze Sun of the University of Michigan bypassed the problem by creating an amplification step based on physics rather than biochemistry. They inserted the molecular beacons and target sequences that differed by one nucleotide into the head of a liquid laser, thereby replacing the laser's light-generating crystal or usual liquid dye with the sample medium. When mismatched, the probes lit up in the laser chamber, but the fluorescence was not strong enough to create the feedback needed to initiate an emitted laser beam. But when the researchers mixed just one part of target sequence DNA with 50 parts of nontarget sequence, the laser emitted bright light, indicating a match. The device is useful for low detection limits, says Weihong Tan at the University of Florida, but setting the threshold for laser activation will take a bit of work, so the tool is not ready for mass consumption just yet. (Angew Chem Int Ed, 51:1236-39, 2012.)
| COMPARING METHODS | DETECTION | READABLE OUTPUT | IDEAL USE | SIGNAL TO BACKGROUND RATIO |
| LASER DETECTION | Optofluidic laser cavity | On/off laser (digital scale) | Detection of target sequences from a pool of single-base-mismatched sequences | 240:1 |
| FLUORESCENCE DETECTION | Regular vial or cuvette | Intensity of fluorescence (analog scale) | Detection of target and single-base-mismatched sequences separately | ~1 |
This image shows colliding galaxies. Scientists have also found long, complex molecules such as amino acids, proteins, and polycyclic aromatic hydrocarbons (PAHs) in distant molecular clouds in interstellar space. But just as the formation of coacervates in the Miller-Urey experiment did not constitute life, the presence of these molecules in deep space does not mean living beings exist there.
Super-Kamiokande is a 50,000 ton water Cerenkov detector located at a depth of 2700 meters water equivalent in the Kamioka Mozumi mine in Japan. Image courtesy of the University of Maryland.
The Super-Kamiokande
Super-Kamiokande is a neutrino detector located in the Kamioka Mozumi mine in Japan. Water fills this huge tank. In fact, it is the world's largest underground neutrino detector experiment (built under a joint Japan-US collaboration). Super-Kamiokande is a big cylindrical tank, about 40 m in diameter and 40 m in height. The walls are covered with about 13,000 photomultiplier tubes (PMTs), very sensitive devices that ultimately help scientists track neutrinos.
WASHINGTON — Theodore H. Maiman, a physicist who built the first operational laser in the United States and promoted its many medical applications after initial public concern that he created a "death ray," died May 5 at Vancouver General Hospital in British Columbia. He was 79 and had systemic mastocytosis, a rare genetic disorder. Lasers amplify light waves of atoms that have been stimulated to radiate and concentrate them in a very narrow, intense beam. They have wide applicability in daily life, from performing surgical procedures to reading bar codes. They are featured in rock concert light shows and can be handy in removing tattoos. Mr. Maiman made his laser discovery May 16, 1960, using a standard high-power flash lamp and a synthetic ruby crystal that fit into the palm of his hand. He described his approach, done for California-based Hughes Research Laboratories on a tight budget, as "ridiculously simple." One of his major breakthroughs was the use of the ruby, which had previously and erroneously been thought unworkable for laser development. Having worked with rubies on the maser, an earlier technology of amplified microwaves, he calculated the range of chromium concentration in ruby crystal needed for the laser experiment. Mr. Maiman performed his work at an aggressive moment in laser research and had antagonistic relationships with many fellow physicists competing to build a workable apparatus. Charles Townes of Columbia University, who invented the maser, was pursuing laser development with his brother-in-law, Arthur Schawlow of Bell Laboratories in New Jersey. In 1960, Townes and Schawlow, both future Nobel laureates, were the first to receive a patent for an optical maser, in essence the laser. But it was a paper patent -- without any functioning device to support it. Mr. Maiman created the first workable laser, and he always thought Townes and other powerful challengers tried to belittle his contribution. In what Mr.
Maiman also perceived as an anti-industry bias, a prominent American scientific journal rejected his laser findings. The British journal Nature soon printed it, though. That July, to ensure that competing labs would not steal any publicity, Hughes arranged a New York news conference with Mr. Maiman . "When it was all over, one reporter came up to me and asked me about using the laser in developing weapons," he told an interviewer decades later. "I told him I didn't think it very likely. He asked me if I would deny that the laser could be used that way, and I said no. The next day there were headlines in every newspaper around the country, screaming: `L.A. man discovers science-fiction death ray.' " Years later, after the laser had been widely adopted for medical and industrial uses, he adopted his standard response: "I don't know of anyone who's been killed by a laser, even by accident, but I do know several people who have been healed by lasers." Theodore Harold Maiman was born July 11, 1927, in Los Angeles and raised mostly in Denver. He developed early skill in electrical engineering from his father, an AT&T scientist, and paid his tuition at the University of Colorado by repairing electrical appliances and radios. After receiving an undergraduate degree in engineering physics in 1949, he entered Stanford University, where he received a master's degree and a doctorate in physics. Mr. Maiman immediately joined Hughes, primarily known as an aerospace contractor. He contributed improvements to Townes' maser, including drastically reducing its size and weight. Mr. Maiman began work on the laser in 1959, and he said Hughes never properly backed him financially. All told, he had a $50,000 budget. "It was almost a bootleg project for me," Mr. Maiman told a Vancouver reporter in 2000. "They tried to pull funding from me twice." 
Frustrated over what he regarded as Hughes' indifference to his work, he left in 1961 and started several businesses to refine and apply laser technology to medical and technical needs. "A laser," he told the New York Times in 1964, " is a solution seeking a problem." This did not extend to national defense, and he criticized the "Star Wars" anti-missile defense system pushed during the Reagan administration. Among his professional honors, he was inducted into the National Inventors Hall of Fame in 1984. Three years later, he won the Japan Prize, a prestigious science and technology honor for which he received $352,000. In recent years, he helped develop the biomedical engineering curriculum at Vancouver's Simon Fraser University.
Vegetation dynamics in response to fire and slashing in remnants of Western Basalt Plains grasslands Meredith Henderson and Colin Hocking - Vegetation dynamics in response to fire and slashing in remnants of Western Basalt Plains grasslands: Preliminary results (PDF - 193 KB) About the report The Western Basalt Plains is an area of flat to undulating land extending from Melbourne to Hamilton, in the west of Victoria. The area, covering some 10% of the state, historically was dominated by lowland grassland with Themeda triandra, Danthonia spp., Stipa spp. and Poa labillardieri appearing as main canopy species. Since European settlement, however, this area of native grassland has been much reduced and only 0.1% remains. Less than one-thousandth of a percent of this is reserved for conservation (DCE, 1992; Lunt, 1994). Maintenance of biodiversity is an issue of critical importance in reserve management. The remnants of Western Basalt Plains grassland need to be actively managed through aboveground biomass reduction (Stuwe and Parsons, 1977; McDougall 1989; Lunt, 1994; Lunt and Morgan 1997 [this conference]). Previous studies show that without frequent above ground biomass reduction, Themeda triandra detritus accumulates and tends to swamp out forbs in the inter-tussock gaps (Stuwe and Parsons, 1977; Lunt, 1991). In Western Basalt Plains grasslands, reduction in biodiversity can be attributed to the loss of the inter-tussock forb component. This project investigates the dynamics of the vegetation in remnants of Western Basalt Plains grasslands, in the west of Melbourne, in response to fire and slashing. Results presented here address changes in biomass of Themeda triandra and exotic species with burning and slashing and, gap size changes with burning and slashing.
While I’ve been so caught up in work and trying desperately to manage blog dramatics today, I very nearly missed writing this post. Today is Ada Lovelace Day, a day when you pick a woman who has inspired great change in this world in the fields of science, technology, engineering and math (STEM), and post about how they’ve influenced your life. Lovelace herself, as the daughter of the tempestuous and erratic poet Lord Byron but raised by her mother in a strict scientific framework, is widely regarded as the world’s first computer programmer. Yes, before computers. It’s no wonder she’s a no-brainer for my pick. Lovelace was deeply intrigued by Babbage’s plans for a tremendously complicated device he called the Analytical Engine, which was to combine the array of adding gears of his earlier Difference Engine with an elaborate punchcard operating system. It was never built, but the design had all the essential elements of a modern computer. In 1842 Lovelace translated a short article describing the Analytical Engine by the Italian mathematician Luigi Menabrea, for publication in England. Babbage asked her to expand the article, “as she understood the machine so well”. The final article is over three times the length of the original and contains several early ‘computer programs,’ as well as strikingly prescient observations on the potential uses of the machine, including the manipulation of symbols and creation of music. Although Babbage and his assistants had sketched out programs for his engine before, Lovelace’s are the most elaborate and complete, and the first to be published; so she is often referred to as “the first computer programmer”.
Babbage himself “spoke highly of her mathematical powers, and of her peculiar capability — higher he said than of any one he knew, to prepare the descriptions connected with his calculating machine.” Babbage’s computational engine is the stuff of steampunk wet dreams, and this woman — this incredible woman — built computer programs for a nonexistent computer, more than a hundred years before it was ever built. Words fail when trying to describe the awe and reverence I hold for such genius. Who’s your favorite STEM researcher, and why?
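As a coda: Lovelace's famous Note G laid out a procedure for computing Bernoulli numbers on the planned engine. The sketch below is my own homage, not a transcription of her table of operations; it generates Bernoulli numbers (with the B₁ = +1/2 convention) using the standard Akiyama-Tanigawa algorithm and exact rational arithmetic:

```python
from fractions import Fraction


def bernoulli(n: int) -> Fraction:
    """Return the nth Bernoulli number (convention B_1 = +1/2),
    computed with the Akiyama-Tanigawa algorithm over exact fractions."""
    row = [Fraction(0)] * (n + 1)
    for m in range(n + 1):
        row[m] = Fraction(1, m + 1)
        # Sweep back down the row, turning harmonic terms into Bernoulli terms.
        for j in range(m, 0, -1):
            row[j - 1] = j * (row[j - 1] - row[j])
    return row[0]


# B_0..B_8: 1, 1/2, 1/6, 0, -1/30, 0, 1/42, 0, -1/30
```

A machine that never existed, and an algorithm she would have recognized: seven lines today, dozens of punched-card operations then.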
The Annihilation of Matter By Dr. Michael Bisconti I am just beginning to develop this article. Modern physics tells us that matter does not exist in the way we think it exists. Matter does not exist in space. How is that possible? Matter is not objective. Matter is an idea. This is easily demonstrated by the dual nature of matter – wave and particle.
<< Operator (C# Reference) The left-shift operator (<<) shifts its first operand left by the number of bits specified by its second operand. The type of the second operand must be an int. The high-order bits of first operand are discarded and the low-order empty bits are zero-filled. Shift operations never cause overflows. User-defined types can overload the << operator (see operator); the type of the first operand must be the user-defined type, and the type of the second operand must be int. When a binary operator is overloaded, the corresponding assignment operator, if any, is also implicitly overloaded.
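The reference above describes C#. The following Python sketch only illustrates the described behavior (high-order bits discarded, low-order bits zero-filled, no overflow) by masking to 32 bits explicitly, since Python integers are arbitrary-precision. The `& 31` on the count mirrors how C# treats a 32-bit left operand; treating the value as unsigned here is a simplification of my own, not part of the reference:

```python
MASK32 = 0xFFFFFFFF  # keep only the low 32 bits, as a 32-bit register would


def shl32(value: int, count: int) -> int:
    """Left-shift a 32-bit value: high-order bits fall off, zeros fill in."""
    return (value << (count & 31)) & MASK32
```

For example, `shl32(0x80000001, 1)` gives `2`: the top bit is discarded and a zero enters from the right, with no overflow raised.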
Scattering is a general physical process where some forms of radiation, such as light, sound, or moving particles, are forced to deviate from a straight trajectory by one or more localized non-uniformities in the medium through which they pass. In conventional use, this also includes deviation of reflected radiation from the angle predicted by the law of reflection. Reflections that undergo scattering are often called diffuse reflections, and unscattered reflections are called specular (mirror-like) reflections. nanoHUB.org, a resource for nanoscience and nanotechnology, is supported by the National Science Foundation and other funding agencies.
Gravity is the force law that acts universally on all forms of matter and energy. If something exists, it must gravitate. Furthermore, gravity exerts the same acceleration on objects of different masses and energies! A feather is pulled by the Earth with the same acceleration as is an elephant - even a rather large elephant with weight management issues. This is known as the "equivalence principle": the principle that the amount of stuff within an object - i.e. its inertial mass - is directly related to the amount that gravitates - i.e. its gravitational mass. These universal aspects of gravity led Einstein to propose that gravity is really not a force, but the result of distortions in distances in space. Since all objects have to sail through time and space in a universal way, if the fabric of space-time gets deformed, all things moving on it feel it and can get deflected through a common trajectory. This is the framework of General Relativity: gravity is the curvature of space-time itself; things appear to change course under a gravitational "force" simply because the fabric over which they move is distorted by nearby massive objects. The first video shows a cartoon of this idea. Very recently, the string theorist Erik Verlinde proposed an intriguing new perspective on gravity - one that further capitalizes on the universal character of this force law. Verlinde proposed that gravity may be a statistical effect, an entropic force - not a fundamental one akin to electromagnetism. He illustrates his proposal through an analogy with other known entropic effects such as osmosis or polymer dynamics. In the second video, we see the well-known processes of diffusion and osmosis. When two solutions of salt water of different concentrations are separated by a semipermeable membrane, over time the two sides equilibrate in salt density. The details of any fundamental force laws acting between the molecules of the water and salt are not important.
The water from the less dense solution flows towards the higher density side for statistical reasons: the flow would maximize the level of disorder or entropy in the system (see previous post on the arrow of time). Simply put, the configuration of equilibrated uniform solutions is a more likely arrangement of water and salt molecules and hence will be attained eventually due to statistical considerations. We could be misled to think of this process as one driven by some force that pushes water from one side to the other. However, we should know that this "force" is not a fundamental one, but a fictitious one: it is statistically driven, a collective effect, entropic in origin. This was a larger dose of biology than I intended… so, let's quickly move on to the physics of gravity. Verlinde proposes that gravity is also an entropic effect rather than a fundamental force law. This is motivated from recent results in string theory, but it can in fact be phrased in an independent and self-contained manner. The universal attributes of gravity encourage such a general scheme or interpretation. And recent suggestions from string theory about gravity having a "holographic" nature further bolster it (I'll write more about this in a future post). Verlinde's proposal may suggest that there is no reason to worry about developing a theory of quantum gravity - a rather elusive project of theoretical physics for decades that has only recently been realized within string theory. If gravity is a statistical effect, perhaps there is no sense in worrying about its quantum realm at small distances - any more than we would worry about the forces between molecules in a gas in an effort to understand osmosis. Herein also lies the problem with Verlinde's proposal. At the end of the day, this proposal amounts to the following curious observation: that gravity at large distances has all the hallmarks of a statistical effect.
This is not, however, inconsistent with a microscopic formulation of gravity through string theory - nor does it remove the need to understand the small-scale structure of space-time… The proposal is a statement about long distance gravity only. It is still an intriguing exercise, since it attempts to identify a minimal set of independent physical principles that can realize the well-known long distance attributes of gravity from a statistical point of view. So, the entropic proposal for gravity to me is simply an interesting story - with a great deal of elegance and pedagogical value. However, it is not a framework that can complete or replace a description of space-time at the smallest distances - a description of quantum gravity. Quoting the ghost of Niels Bohr from a century ago, the proposal is crazy - but it is not crazy enough to be correct…
What if some phylogenies were simply irresolvable? That is, what if, no matter how much data we collected, it would be impossible to reconstruct, with a high level of certainty, an accurate representation of the tree of life? That would suck. A lot. I have mentioned how this can result from long branch attraction or lineage sorting. But are there any taxa where this appears to be a major problem? Antonis Rokas and Sean Carroll have published an essay in PLoS Biology that addresses the issue of bushes (or irresolvable nodes) in the tree of life. They point out four clades in which no single tree has a high level of support: The types of data considered are gene sequences, parsimony informative characters (PI-characters), and rare genomic changes (RGCs; insertions, deletions, and other events less common than nucleotide substitution). The four clades shown are (A) human/chimp/gorilla, (B) elephant/sirenian/hyrax, (C) tetrapod/coelacanth/lungfish (the vertebrate tree), and (D) chordate/arthropod/nematode (the metazoan tree). The numbers of each type of data that support each tree are shown along with the percent of data sets that support a particular tree. There is an obvious vertebrate and animal bias here, but that's also where most of the data are. Rokas and Carroll show that, while some data highly support one phylogeny, none of the phylogenies are consistently supported by any of the larger data sets. Why is this? In all four of these examples, the external branches (those leading to the tips of the tree) are longer than the internal branches (those near the root). These trees are considered "bushy", and this type of topology may lead to incongruent results if not enough mutations accumulate on the short internal branch. It's the internal branch that tells you which two species should cluster together, and which one is the outgroup.
Additionally, the same mutations can occur independently on different branches (homoplasy), which may mislead a tree reconstruction algorithm. This is shown in the following figure. The trees on the left show the ideal scenario: long internal branches and lots of mutations on those branches. The trees on the right are a long branch attraction problem waiting to happen. The vertical hatches are mutations that support the correct tree, while the circles and x's are homoplasies that support an incorrect tree. The lengths of the branches represent the number of changes that have occurred along that particular lineage. When the internal branches get too short, there is more support for incorrect trees than for correct trees due to an excess of homoplasies. How long must the internal branch be to overcome long branch attraction? That depends on the rate of homoplasy. If, for example, 5% of all changes are homoplasies, then the internal branch must be at least 5% as long as the external branches or you will recover the incorrect tree. If the external branches are too long, they can be broken up by adding more species to your sample (i.e., split one of the external branches into an internal branch with two shorter external branches). But that's only possible if there are more species to be sampled, and for some of the examples above there are not. A common belief amongst systematists is that by adding more data, one can resolve a bushy tree. Rokas and Carroll point out, however, that extra data may only artificially increase the statistical confidence. Phylogeneticists measure the significance of nodes by randomly resampling their data (with replacement) and reconstructing the tree using this resampled data. The process is repeated thousands of times for each data set, and the number of random samples that support a particular node is reported. That value is known as a bootstrap, and a higher value represents greater confidence in that node.
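The resampling procedure just described is easy to sketch. The Python toy below is my own illustration, not code from the paper: it treats alignment columns as the resampling units and a caller-supplied `build_tree` function as the inference step. With a simple majority-vote stand-in for tree building, it reproduces the post's point that bootstrap support rises with data-set size even when the underlying signal stays at 55:45:

```python
import random


def bootstrap_support(columns, build_tree, n_reps=200, seed=1):
    """Fraction of bootstrap replicates that recover the tree inferred
    from the full data, resampling columns with replacement."""
    rng = random.Random(seed)
    observed = build_tree(columns)
    hits = sum(
        build_tree([rng.choice(columns) for _ in columns]) == observed
        for _ in range(n_reps)
    )
    return hits / n_reps


# Stand-in "tree builder": each informative column votes for topology A or B.
majority = lambda cols: max(set(cols), key=cols.count)

small = ['A'] * 55 + ['B'] * 45    # 100 informative columns, 55:45 signal
large = ['A'] * 550 + ['B'] * 450  # same 55:45 signal, ten times the data
```

Running `bootstrap_support` on `large` yields a noticeably higher value than on `small`, even though the proportion of characters favoring each topology is identical.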
It turns out that these bootstrap values are dependent on the size of one’s data set — the larger the data set, the greater the bootstrap value. So, even if a small, medium, and large data set all have 55% of the informative changes supporting one tree and 45% of the changes supporting another tree, the larger data set will give you a tree with higher bootstrap values. Rokas and Carroll argue that high bootstrap support for trees built using whole genome sequences (i.e., the metazoan tree) may lead to a false sense of confidence in that particular topology. Is all hope lost? Are we ever going to fully understand the tree of life? If the branching order of a clade is nearly impossible to resolve, it can be represented as a multifurcation (as opposed to the typical bifurcating tree). Also, the lengths of the branches provide a lot of information. For example, if there are a lot of short internal branches, we know that a clade experienced a rapid radiation. Rokas and Carroll encourage their readers to adopt this glass-half-full perspective rather than view the bushiness of certain parts of the tree with chagrin. Rokas A, Carroll SB. 2006. Bushes in the tree of life. PLoS Biol 4: e352. doi: 10.1371/journal.pbio.0040352
Radar and Sonar
William C. Vergara

Sometimes, when conditions are right, you can hear your own echo. If you shout "hello," the sound may bounce back at you from a large object. You then hear your own voice coming back. Your returning voice is called an echo. Radar and sonar are electronic devices that use the principle of an echo to detect and locate objects. Both radar and sonar locate objects from the echo of a signal that is bounced off the object. Radar uses radio waves, which are a type of electromagnetic energy. Sonar uses the echo principle by sending out sound waves underwater or through the human body to locate objects. Sound waves are a type of acoustic energy. Because of the different type of energy used in radar and sonar, each has its own applications. The word "radar" was formed from the first letters of the term "radio detection and ranging." A radio wave is a type of electromagnetic radiation. (Microwaves, X-rays, and light waves are other types.) It is the fundamental part of this form of technology. "Detection," as used here, means finding an object or target by sending out a radio signal that will bounce back off the target as a radio echo. "Ranging" means measuring the distance to the target from the radar set (the device that sends out the radio signal and picks up the returning echo). A true radar system uses radio waves. Another system, called optical radar or lidar (from the first letters of the term "light detection and ranging"), is based on the same principle as radar but uses light waves.

How Radar Works

Radar sets, also called radar systems, come in many different sizes, depending on the job they are expected to do. But all have four main parts -- a transmitter, an antenna, a receiver, and an indicator (display screen). The transmitter produces the radio waves. When a radio wave strikes an object such as an airplane, part of the wave is reflected back to the radar set. The signal is detected by the antenna as a radio echo.
The returning echo is sent to the receiver, where its strength is increased, or amplified. The echo is usually displayed as an image that can be seen on the indicator. The usual type of indicator is the plan position indicator, or PPI. On the face of its large tube, the operator sees a map-like picture of the surrounding region. This picture looks as if it were made looking down at the area from high above the radar set. On the indicator, the echoes appear as bright spots, called blips. The blips show where land areas are located. Blips also show the position of targets, such as planes and ships. The radar operator can pick out these targets because they are moving, while the land areas are not. A common type of radar is called pulse radar. This type of radar sends out radio waves in short bursts, or pulses. The distance to a target is determined by the time it takes the signal to reach the target and the echo to return. Radio signals travel at a known speed -- about 186,000 miles (300,000 kilometers) per second, the speed of light. If the radio signal comes back in 1/1,000 second, then the round trip is 186 miles (300 kilometers). The target must be half that far, or 93 miles (150 kilometers) away. Pulsed transmission helps determine the distance more accurately. Why is this so? Imagine that you are about to shout across a canyon to make an echo. If you shout a long sentence, the first words will come back before you can finish. It would be impossible to hear the entire echo clearly because it would be mixed with your own speech. But if you shout a short word the echo comes back crisp and clear with no interference from the transmitter (you). The location of the target in relation to the radar set is found in a different way. The radar antenna sends out radio pulses in a narrow beam, much like the beam of a flashlight. The antenna and its beam are rotated slowly through all possible directions, searching the entire horizon for targets. 
An echo is reflected from a ship or other object only if the narrow beam happens to strike it. The returning echoes are amplified by the receiver, then go to the indicator, which displays the range and direction of the target.

Uses of Radar

Radar has both military and civilian uses. The most common civilian use is to help navigate ships and planes. Radar sets carried on a ship or located at an airport pick up echoes from other ships and planes and help prevent collisions. On ships, they also pick up echoes from buoys in channels when the ships enter or leave port. Radar sets help commercial airplanes land when visibility is bad or in the event of mechanical failure. Radar is also used in meteorology, including weather prediction. Weather forecasters use it, normally combined with lidar (optical radar), to study storms and locate hurricanes and blizzards. Doppler radar is based on the principle of the Doppler effect -- that is, the frequency of a wave changes as the source of the wave moves toward or away from the receiver. By analyzing changes in the frequency of reflected radio waves, Doppler radar can track the movement of storms and the development of tornadoes. An improved Doppler radar system called Next-Generation Radar (NEXRAD) can predict weather more accurately and farther into the future. Scientists use radar to track the migrations of birds and insects and to map distant planets. Because it can tell how fast and in which direction a target is moving, radar is used by police to locate speeding automobiles and control street traffic. Similar systems are used in tennis to measure the speed of serves and to call faults. Balloon-borne radar supports officials fighting drug trafficking. Surface-wave radar detects surface waves of the ocean to warn ships of icebergs and nearby vessels. Historically, there have been two main military uses of radar: search radar and fire-control radar. Search radar is the kind already discussed.
It continually searches the horizon to find targets. Fire-control radar helps aim a gun or missile so that it will hit the target when fired and must be more accurate than search radar. The U.S. military has also developed specialized types of radar. For example, Miniature Synthetic Aperture Radar (MSAR) is used on aircraft to provide high-quality images in all kinds of weather.

History of Radar

Radar technology began with experiments using radio waves in the laboratory of German physicist Heinrich Hertz in 1887. He discovered that these waves could be sent through many different materials but were reflected by others. In 1900, a radio pioneer, Nikola Tesla, noticed that large objects could produce reflected radio waves that are strong enough to be detected. He knew that reflected radio waves were really radio echoes. So he predicted that such echoes could be used to find the position and course of ships at sea. Pulse radar was introduced in the United States in 1925. In 1935, radar was patented under British patent law partly as a result of the research led by Scottish physicist Sir Robert Alexander Watson-Watt. This patented radar later developed into the radar system that proved effective against German air raids on Britain during World War II (1939-45). The term "radar" was first used by U.S. Navy scientists during that war. Advances in both military and civilian applications of radar continued throughout the 1900's. By the early 2000's, researchers were targeting their efforts at improving radar's range, quality, imaging, and size and reducing its cost. The word "sonar" comes from the first letters of "sound navigation ranging." Sonar can detect and locate objects under the sea by echoes, much as porpoises and other marine animals navigate using their natural sonar systems.

How Sonar Works

There are two types of sonar sets: active and passive. An active sonar set sends out sound pulses called pings, then receives the returning sound echo.
Passive sonar sets receive sound echoes without transmitting their own sound signals. In active sonar sets, the sound signals are very powerful compared with ordinary sounds. Most sonar sets send out sounds that are millions of times more powerful than a shout. Each ping lasts a fraction of a second. Some sonar sets emit sounds you can hear. Other sonar signals are pitched so high that the human ear cannot hear them. These signals are called ultrasonic waves. ("Ultra" means "beyond," and "sonic" means "sound.") The sonar set has a special receiver that can pick up the returning echoes. The location of underwater objects can then be determined by the length of time that elapses between sending the signal and hearing the returning echo.

Uses of Sonar

Sonar has many uses. Submarines use sonar to detect other vessels. Sonar is also used to measure the depth of water, by means of a device called a Fathometer. (One fathom equals 6 feet, or about 1.8 meters.) The Fathometer measures the time it takes for a sound pulse to reach the bottom of the sea and return to the ship. Fishing boats use Fathometers to locate schools of fish. Oceanographers use sonar to map the contours of the ocean floor. Sound signals can also be sent into the mud or sand on the ocean floor and strike a layer of rock underneath. An echo then comes back, giving the distance to the rock layer. The same principle is used in searching for oil on land. A sonar pulse is sent into the ground. Echoes come back from the different layers of soil and rock and tell geologists what kinds of soils and rocks are present. This helps them identify areas for drilling that are most likely to contain oil or gas. This subterranean mapping is called seismic exploration. A special kind of sonar used in medicine is called ultrasonography or echoscopy. High-frequency sound waves produce different echoes when reflected by different body organs.
Doctors can use these echoes to detect disease and to monitor the growth of an unborn child. Extremely high-frequency sound waves are used in medicine and industry to clean many kinds of materials by shaking loose tiny particles of dirt or other matter.

History of Sonar

It was nature itself that invented "sonic radar," or sonar, well before humans did. For example, bats fly in the dark with poor sight without hitting obstacles and locate prey by means of sound pulses humans cannot hear. In 1906, American naval architect Lewis Nixon invented the first sonar-like listening device to detect icebergs. During World War I (1914-18), a need to detect submarines increased interest in sonar. French physicist Paul Langevin constructed the first sonar set to detect submarines in 1915. At first, these sonar sets could only "listen" to returning signals. By 1918, Britain and the United States had built sonar sets that could send out, as well as receive, sound signals. The U.S. military began using the term "sonar" during World War II. As with radar, new military applications for sonar are constantly being developed. For example, in the early 2000's, the U.S. Navy introduced a sonar system to help clear military mines.
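The echo-ranging arithmetic that runs through this article can be collected into a short sketch. The radar range example follows the text; the Doppler relation (shift = 2vf/c for a target moving directly along the beam) is a standard textbook formula not spelled out above, and the sonar numbers (a 1,500 m/s sound speed, a 2-second ping) are illustrative assumptions.

```python
# Echo ranging: distance = (signal speed * round-trip time) / 2.
SPEED_OF_LIGHT_KM_S = 300_000   # radio waves, approximate (as in the article)
SOUND_SPEED_SEA_M_S = 1_500     # sound in seawater; an assumed typical value,
                                # it varies with temperature and salinity

def radar_range_km(echo_delay_s):
    """Target distance: half the round trip covered at the speed of light."""
    return SPEED_OF_LIGHT_KM_S * echo_delay_s / 2

def doppler_shift_hz(target_speed_m_s, transmit_freq_hz):
    """Frequency shift of an echo from a target moving along the beam.
    The factor of 2 arises because the wave is shifted once on the way
    out and again on reflection (standard radar formula, shift = 2*v*f/c)."""
    return 2 * target_speed_m_s * transmit_freq_hz / 3.0e8

def sonar_depth_m(ping_delay_s):
    """Water depth from a Fathometer ping's round-trip time."""
    return SOUND_SPEED_SEA_M_S * ping_delay_s / 2

# The article's radar example: an echo after 1/1,000 second means a
# 300 km (186-mile) round trip, so the target is 150 km (93 miles) away.
assert radar_range_km(1 / 1000) == 150

# A hypothetical storm cell moving at 30 m/s, seen by a 3-GHz radar,
# shifts the echo by 600 Hz.
assert doppler_shift_hz(30, 3.0e9) == 600

# A ping returning after 2 seconds implies roughly 1,500 m of water
# (about 820 fathoms at 6 feet, roughly 1.83 m, per fathom).
assert sonar_depth_m(2) == 1500
```

The same halving of the round trip appears in all three cases; only the signal speed changes between radar and sonar.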
String Convert To From ASCII
From Erlang Community

You want to get the number representing the ASCII value of a character, or you want to display the character associated with a particular ASCII numerical value. All ASCII character values are identical to integers, and strings of characters are identical to lists of integers. Consequently, in most cases character values are already in the format you wish:

1> $C.
67
2> H = $C.
67
3> H.
67

Because characters are represented simply as integer values (the $C notation is just a syntactic convenience), any integer that maps to an ASCII (or Unicode) code point may be used where a character is desired:

1> io:format("Number ~w is character ~c\n", [101, 101]).
Number 101 is character e
ok
2> io:format("~w~n", ["Sample"]).
[83,97,109,112,108,101]
ok
3> io:format("~s~n", [[83, 97, 109, 112, 108, 101]]).
Sample
ok
4> io:format("~s~n", [lists:map(fun(C) -> C+1 end, "HAL")]).
IBM
ok
X-Ray Imaging: Current-Driven Magnetic Domain-Wall Motion in Nanowires

The quest to increase both computer data-storage density and the speed at which one can read and write the information remains unconsummated. One novel concept is based on the use of a local electric current to push magnetic domain walls along a thin nanowire. A German, Korean, Berkeley Lab team has used the x-ray microscope XM-1 at the ALS to demonstrate that magnetic domain walls in curved permalloy nanowires can be moved at high speed by injecting nanosecond pulses of spin-polarized currents into the wires, but the motion is largely stochastic. This result will have an impact on the current development of magnetic storage devices in which data is moved electronically rather than mechanically as in computer disk drives. The physical effect in current-driven domain-wall motion is based on a spin torque that is exerted by spin-polarized conduction electrons on the magnetic moments in the wire, so that a domain wall moves as the torque rotates the moments. In contrast to today's magnetic hard drives where a disk mechanically spins under a read head that reads the data stored on the disk at fixed positions, the current would move the domain walls, which represent the bits, electronically to a locally fixed read-out sensor. This idea, called racetrack memory and patented in 2004 by Stuart Parkin (IBM Almaden Research Center), would combine the advantages of both solid-state and magnetic memory devices. However, there are still several open questions blocking the jump from physics to technology (including how to boost the readout speed) whose answers depend on a better understanding of the physics involved.
To investigate the fundamental processes of spin-torque-driven motion of domain walls in curved ferromagnetic permalloy (Ni80Fe20) wires, a widely used material in disk drives, the collaboration injected pulses of nanosecond duration and of high current density to drive the motion of a single domain wall along the wire. By making polarized x-ray images with XM-1 before and after the current pulse was injected, they tracked the location of the domain wall with 25-nanometer spatial resolution. The results showed that the magnetic domain walls moved at a speed of 110 meters per second, which is very fast on the nanometer scale, 100 times faster than reported before, and is in accord with a theory of spin-torque transfer. It is believed that the nanosecond pulses reduced the chances that a wall would be pinned by imperfections in the crystalline structure during its brief motion, thereby explaining the high speed. Although this is encouraging news for technological development, repetitive pulse experiments also showed that many of the pulses gave smaller speeds or no movement at all, so that the current-driven motion followed a statistical distribution comparable to Barkhausen jumps in the case where domain motion is driven by an applied magnetic field. Since domain-wall pinning associated with disorder plays a significant role in field-driven Barkhausen avalanches, one can assume a similar mechanism in case of current-driven domain-wall motion. The random nature of the domain-wall jumps means that reliable reading and writing await the ability to minimize the effect of inhibiting defects by better control of materials (perhaps by changing the wire geometry). 
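As a rough plausibility check on the numbers above (the 1-nanosecond pulse duration used here is an assumption for illustration; the text specifies only pulses of "nanosecond duration"), a wall moving at the reported speed covers a distance well above the microscope's 25-nanometer resolution during a single pulse:

```python
WALL_SPEED_M_S = 110   # domain-wall speed reported in the experiment
RESOLUTION_NM = 25     # XM-1 spatial resolution, from the text

def displacement_nm(pulse_duration_s):
    """Distance the wall travels during one current pulse, in nanometers."""
    return WALL_SPEED_M_S * pulse_duration_s * 1e9

# A hypothetical 1-ns pulse moves the wall about 110 nm -- several times
# the 25-nm resolution, so before/after images can resolve the jump.
assert displacement_nm(1e-9) > RESOLUTION_NM
```

This is why imaging before and after each pulse, rather than during it, suffices to track the wall's motion.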
In the meantime, high-resolution soft x-ray microscopy with a 10-nm spatial resolution in combination with ultrafast time resolution in the femtosecond regime and sufficient intensity for snapshot imaging not only will be a powerful analytical tool for characterizing the dynamics in real time with nanoscale spatial resolution, but also provides an accurate experimental tool for testing theoretical models of current-induced phenomena in magnetic materials at the nanoscale. Research conducted by G. Meier, M. Bolte, R. Eiselt, and B. Krüger (University of Hamburg, Germany); D.-H. Kim (Chungbuk National University, South Korea); and P. Fischer (Center for X-Ray Optics, Berkeley Lab). Research funding: U.S. Department of Energy, Office of Basic Energy Sciences (BES), and the German Science Foundation (DFG). Operation of the ALS is supported by BES. Publication about this research: G. Meier, M. Bolte, R. Eiselt, B. Krüger, D.-H. Kim, and P. Fischer, "Direct imaging of stochastic domain-wall motion driven by nanosecond current pulses," Phys. Rev. Lett. 98, 187202 (2007).
Hall, Charles James: Hall Photon Theory
Exploring Force Fields & The Speed of Light

Below is a short excerpt from Charles James Hall's Photon Theory. The complete text of the first paper is in the Appendix of Millennial Hospitality III, The Road Home. The complete text of the later paper is in the Appendix of Millennial Hospitality IV, After Hours. The series, an autobiography of Charles James Hall's encounters with the extraterrestrials he calls the Tall Whites on the Indian Springs gunnery ranges in the mid 60s, can be ordered directly from the publisher AuthorHouse, as well as Amazon, or through any book store in the world.

Hall Photon Theory Explores Force Fields Which Enable Travel Faster Than the Speed of Light

Charles James Hall
Master of Arts, Nuclear Physics
San Diego State University
San Diego, California, 1973

Copyrighted on Sep 19, 2006. Hall Photon Theory was originally presented in a scientific paper authored and copyrighted on January 27, 1998. Copyright TXU 836-633

In the study of Physics, a physically real force field is a force field that can store and transfer energy. One physically real force, of course, is the force of gravity. The force of gravity generates a physically real gravitational field. As is well known, energy is readily stored and transmitted in a gravitational field. Any two gravitational fields can interact, thereby generating forces between their two parent objects and transferring energy between them. Two more physically real fields that readily store and transfer energy are the well known electric field and the well known magnetic field. Currently the world of physics recognizes only four interactions, or forces. These are gravitation, electromagnetism, the strong force (a short-range force that holds atomic nuclei together), and the weak force (the force responsible for slow nuclear processes, such as beta decay). . . .
Hall Photon Theory( HPT) hypothesizes that there exists in the world of physics more fundamental basic physically real forces than Einstein was aware of. HPT theorizes that the equations relating to Einstein's Theories of Relativity give bizarre results for velocities near the speed of light because Einstein's equations are missing the additional terms that should be present in order to adequately describe all of the physically real forces that are present in the physical environment in which the high speed flight is occurring. These additional terms describe additional physically real considerations such as the additional force fields that are present, the design coefficient of the space craft, the detailed nature of the resistance of space itself to motion, turbulence within the various force fields, etc. . . . HPT hypothesizes that scientists may observe the effects of many of these additional physically real forces by carefully restudying and rethinking many of the famous scientific experiments such as the Michelson-Morley experiment.
A View From Space: Weather Forecasting for Kids

Have you ever watched the weather reports on television and wondered how exactly the reporters know what the weather will be days from now? In a way they do guess, but it's definitely not a random guess! Weather forecasters, or meteorologists, use a number of special methods, devices, and computer software to help them out. Day after day, meteorologists use computers to record data about the weather. This large amount of information about the past weather helps them to predict what the weather will be like in the future. As a simple example, suppose you see dark, black clouds in the sky several times, followed by heavy rain. The next time you see those clouds, you'll know immediately that it will probably rain soon. In a similar way, using more advanced scientific techniques, meteorologists are able to observe the weather and generate patterns and predictions. Meteorologists use a number of symbols that they lay out on a weather map to quickly convey information about the weather at certain points. These symbols display information like temperature, air pressure, cloudiness, wind speed and direction, and precipitation (rain or snow). Quite often, you might hear meteorologists talking about high or low pressure systems. This is quite important in discussing changes in the weather. High pressure systems are masses of fresh, cool air that is usually quite dry. They signal good weather and clear skies. On the other hand, low pressure systems bring heavy, warm, wet air that usually means stormy weather ahead. Another popular term is fronts. A front is the line that forms between two different masses of air. Very often, it creates a storm. A cold front forms when a mass of cold air advances and pushes warm air out of its way; a warm front is exactly the opposite, with warm air advancing over retreating cold air. Meanwhile, a stationary front occurs when warm and cold air masses meet but neither advances, so the boundary between them stays in place.
Finally, an occluded front forms when a warm front is overtaken by a cold front. Weather satellites are very important for meteorologists. These satellites transmit data and images from outer space to help scientists see how cloud systems, ocean currents and storms are moving. Doppler Radar is a device that can tell scientists about changes in rain, snow and wind. It helps them to detect changes in the weather early so that they can warn people of upcoming bad weather. When we look at satellite weather images, we can spot clouds or storm systems since they look like large white or gray swirls above the land. By examining the direction they are moving in, we can tell which areas will be affected by that weather. There are many ways in which we can learn about the weather. The newspapers and television weather channel give us daily reports about upcoming weather. Even more useful are local weather websites that also show hourly reports as well as long-term weather predictions. Finally, you can even learn to predict the weather yourself by carrying out experiments at home! To learn more about the weather and weather forecasting, check out these helpful resources.
Higgs Boson discovery tops Science picks for 2012
- Published: 21/12/2012 at 08:46 AM
- Online news:

The discovery of the Higgs Boson, an invisible particle that explains the mystery of mass, leads a list of the top 10 scientific advances of 2012 released Thursday by the US journal Science. Graphic distributed on July 4, 2012 by the European Organization for Nuclear Research (CERN) shows a representation of traces of a proton-proton collision measured in the Compact Muon Solenoid (CMS) in the search for the Higgs Boson particle. Without the Higgs Boson, scientists believe, we and all the other joined-up atoms in the Universe would not exist. The particle, sometimes called the "God Particle," is named after 83-year-old Peter Higgs, a shy, soft-spoken Briton who in 1964 published the conceptual groundwork for the particle. Belgian physicist Francois Englert, 79, separately contributed to the theory. The other major advances, according to Science, are:

-- Scientists in Germany used a new technique to sequence the complete genome of an enigmatic group of humans called the Denisovans, based on a tiny sample teased from a finger bone some 80,000 years old found in a cave in Siberia. Nothing was known about the Denisovans other than that they were contemporaries of the Neanderthals, another "cousin" species of modern humans.

-- Japanese scientists created viable egg cells using embryonic stem cells from adult mice. The breakthrough raises the possibility that women who are unable to produce eggs naturally could have them created in a test tube from their own cells and then implanted in their body.

-- NASA engineers landed the 3.3 ton Mars Curiosity rover on the Red Planet by using an innovative landing system that dangled the vehicle, with its wheels out, at the end of three cables.
"The flawless landing reassured planners that NASA could someday land a second mission near an earlier rover to pick up samples the rover collected and return them to Earth," Science said.

-- Use of an X-ray laser, which shines one billion times brighter than traditional synchrotron sources, allowed scientists to determine the structure of a protein involved in the transmission of African sleeping sickness. "The advance demonstrated the potential of X-ray lasers to decipher proteins that conventional X-ray sources cannot," Science said.

-- A new tool let researchers modify or deactivate genes in test animals. This technology could be as effective, and even cheaper, than current gene-targeting techniques and could let researchers focus on specific roles for genes and mutations in healthy and sick people.

-- Scientists confirmed the existence of Majorana fermions, particles that can act as their own antimatter and destroy themselves. Scientists believe that "qubits" made of Majorana fermions could be used to more efficiently store and process data than the bits currently used in digital computers.

-- The ENCODE Project showed that 80 percent of the human genome is active and helps turn genes on and off. The new information could help scientists understand genetic risk factors for diseases.

-- A brain-machine interface allowed paralyzed humans to move a mechanical arm with their minds and perform movements in three dimensions. The experimental technology is promising for patients paralyzed by strokes and spinal injuries.

-- Researchers in China discovered the final unknown parameter of a model describing how sub-atomic particles known as neutrinos change as they travel at near-light speed. The results suggest that neutrino physics may someday help researchers explain why the universe contains so much matter and so little antimatter.

About the author
- Writer: AFP
- Position: News agency
You are confusing class instantiation with class loading. Before the class can be instantiated, the JVM must "know what it is". To do this, it loads the class. Loading takes place when the JVM starts up. I think it's more complex than that, but for now you can think of all your classes being loaded before your program starts. You can load a class later though, during runtime, with the Class.forName method. This is called dynamic because you do it while your program is already running. I know precious little about this subject, but I believe forName makes it possible for you to make a program that creates a new class file and then load it and use it. You can't do this with the new keyword, because the class must already be present.

Stephan van Hulst wrote:Loading takes place when the JVM starts up. I think it's more complex than that, but for now you can think of all your classes being loaded before your program starts.

Let's take away that delusion, shall we? Even with static loading, the class is loaded once it's needed. That can be when you call a static method, access a static field, or create an instance, but it won't be loaded if it's not needed. I think the static vs dynamic distinction is related to the required class availability. Statically loaded classes must be available at compile time and runtime, whereas dynamically loaded classes only need to be available at runtime. That could mean creating a class on the fly during runtime (using the right tools), or downloading the JAR file with the class file in it during runtime and using a different ClassLoader to load it.
Preamble: Readers should be aware that the abbreviation PFCs is being used for two different classes of products. PFCs (perfluorocarbons), covered by the Kyoto Protocol, are organofluorine compounds that contain exclusively carbon and fluorine. Examples are CF4 (tetrafluoromethane) and C2F6 (hexafluoroethane or perfluoroethane). These substances have high global warming potential but do not bioaccumulate and are considered to be of a low order of toxicity. They are gases or volatile liquids. While other compounds which contain atoms other than carbon and fluorine are also sometimes called PFCs, they should in effect be considered specifically as fluorocarbon derivatives. Examples are PFOA (perfluorooctanoic acid: C8HF15O2) and PFOS (perfluorooctanesulfonic acid: C8HF17O3S). These are typically surfactants with long carbon chains, with uses in fluoropolymer preparations and for water repellency. The properties of these substances - bioaccumulation and stability - mean that voluntary and regulatory measures are being taken to control their use and minimize their emissions to the environment. As part of the accelerated EU phase-out of HCFC solvent use under European Regulation EC 2037(2000), the precision cleaning market is changing, with “not-in-kind” technologies taking over the main market share. “In-kind” solvents such as HFEs and HFCs are expected to reach a level of no more than 15% of the previous volume of HCFC solvents. This value of 15% was a typical conversion factor during the previous change of fluorine-containing solvent from CFC-113 to HCFC-141b, and the reduction comes from continuing technical advances in equipment. When any piece of equipment reaches the end of its service life, it is good practice to attempt to recover all of the components for safe disposal or, better still, re-use. In the case of refrigeration equipment, the fluid in the system can be drained out and reprocessed to be used again.
Recovery of the insulating gas in plastic foam insulation is more difficult but technically possible and, again, the gas may be re-used.
<urn:uuid:40db9bc9-d021-479c-9b6f-2a0af0d43248>
3.09375
488
Knowledge Article
Science & Tech.
27.565044
WIMPs (Weakly Interacting Massive Particles) WIMPs, by definition, interact only very weakly and are expected to have a mass between 10 and 1000 GeV. The only detectable interaction that WIMPs may undergo with `normal' matter is scattering, and this is what dark matter detectors try to observe via different processes. WIMPs come from the theory of supersymmetry (SUSY), and there are several reasonable possibilities. The dark matter WIMP is understood to be the Lightest Supersymmetric Particle (LSP), which is stable. The four main LSP contenders are the sneutrino, the gravitino, the axino and the neutralino. The sneutrino is the SUSY boson partner of the neutrino and is expected to have a mass in the range 55.0 GeV - 2.3 TeV. The sneutrino is ruled out as the dominant dark matter component, however, by direct galactic dark matter searches, which have probed the sneutrino-nucleon cross-section region and have not observed these particles. The gravitino is the spin-3/2 fermion SUSY partner of the as-yet-undiscovered graviton. Its weak gravitational coupling to matter, however, means it is almost impossible to detect directly, and so it is a less favoured WIMP candidate at present. The axino is the superpartner of the axion and is expected to have similar properties to the gravitino; axinos are not, however, currently among the most popular dark matter candidates. There are four neutralinos, which are combinations of the fermion superpartners of the neutral B, Z and Higgs bosons. The lightest neutralino is the favourite of the four to be a dark matter candidate and is expected to have a mass in the range 37 GeV - 500 GeV, with the lower limit coming from collider experiments and the upper limit from WMAP's measurement of the mass-energy density of matter in the Universe. The lightest neutralino is currently the favourite dark matter candidate worldwide.
SUSY WIMPs, specifically the neutralino, are currently the most favoured candidates for dark matter, since no extra restrictions need be forced on SUSY theory for them to account for dark matter; it is simply a natural result of SUSY particles decoupling after the Big Bang. The majority of dark matter experiments therefore search for these particles.
<urn:uuid:670971a9-5712-468c-8f1e-b2b9645d948c>
3.125
531
Knowledge Article
Science & Tech.
35.291652
Saturday, February 2, 2002 11:03 AM In addition to what Duncan said, there is one more important difference between sleep and wait. When a Thread goes to sleep it retains any monitors that it held prior to going to sleep. When a Thread calls wait it releases the monitor of the object that it is waiting on. It is extremely important that you are aware of this, because sleep is one of the easiest ways to create a deadlock condition in your code. The deadlock occurs because the Thread went to sleep while holding on to the monitor, so other Threads that are contending for the monitor can't get it. If the Thread is asleep for a short time and releases the monitor relatively quickly after waking up, you might not even notice that you have a deadlock condition in your code, so the bug can go undetected for a long time. But the problem can begin to compound itself under the following conditions:

1 - There are a lot of Threads contending for the monitor while the Thread is sleeping. Even though the Thread may be sleeping for a short period, your code can suffer a serious degradation in performance because of the contention overhead.

2 - The Thread sleeps for an extended period of time.

3 - The sleep happens in a loop that won't exit until some condition has been met, but the condition changing is dependent on some other Thread gaining the monitor that the sleeping Thread is holding on to. This is the classic deadlock scenario.

Finally, here is a good Thread programming practice that you should always follow: if there is a call to wait on an object without a timeout being specified, you MUST ensure that eventually there is a call to notify or notifyAll on the same object. Failing to do so is another major cause of deadlock in Java code.
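The monitor-retention difference is easy to demonstrate. Below is a minimal sketch (class and method names are my own invention): one worker holds the lock while sleeping, another releases it by calling wait, and the main thread times how long it is kept out of the same monitor.

```java
public class SleepVsWaitDemo {
    private static final Object lock = new Object();
    private static boolean ready = false;   // guard against a lost notify

    // Measure how long the main thread is kept out of the monitor
    // while the given worker holds (or releases) it.
    private static long blockedMillis(Runnable worker) throws InterruptedException {
        ready = false;
        Thread t = new Thread(worker);
        t.start();
        Thread.sleep(100);                  // let the worker grab the lock first
        long start = System.currentTimeMillis();
        synchronized (lock) {               // blocks only if the worker kept the monitor
            ready = true;
            lock.notifyAll();               // release any wait()-ing worker
        }
        long blocked = System.currentTimeMillis() - start;
        t.join();
        return blocked;
    }

    public static void main(String[] args) throws InterruptedException {
        long sleepBlocked = blockedMillis(() -> {
            synchronized (lock) {
                try { Thread.sleep(500); }  // sleep() does NOT release the monitor
                catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            }
        });
        long waitBlocked = blockedMillis(() -> {
            synchronized (lock) {
                try { while (!ready) lock.wait(); } // wait() releases the monitor immediately
                catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            }
        });
        System.out.println("sleep() kept us out for ~" + sleepBlocked + " ms");
        System.out.println("wait() kept us out for ~" + waitBlocked + " ms");
    }
}
```

On a lightly loaded machine the sleep case blocks the main thread for roughly 400 ms, while the wait case lets it in almost immediately.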
<urn:uuid:6c87f708-2fc2-44fa-87a0-372a1957c7b3>
2.921875
364
Comment Section
Software Dev.
54.617222
Time-Resolved Studies of Light Propagation in Crassula and Phaseolus Leaves Publication/Journal/Series: Photochemistry and Photobiology Publisher: American Society for Photobiology Time-resolved transmittance was used to extract the in vivo optical properties of leaves of green plants experimentally. In time-resolved transmittance measurements, an ultrashort light pulse is directed onto the surface of the object and the transmitted light is measured with a time resolution in the range of picoseconds. A table-top terawatt laser was used to generate 200 fs light pulses at 790 nm with a repetition rate of 10 Hz. The light pulses were focused through a cuvette filled with water to produce white-light pulses, and optical filters were placed in the beam path to select the wavelength of the light focused onto the leaf surface. The time profiles of the light transmitted through the leaves were recorded with a streak camera having a time resolution of about 2.5 ps. Results from Crassula falcata and Phaseolus vulgaris studied at 550, 670 and 740 nm are reported. The three selected wavelength regions represent medium, high and low absorption of light, respectively. A library of curves was generated using Monte Carlo simulation, and the absorption and scattering coefficients were extracted by comparison of experimental curves with this library. Our results suggest that in the case of the thin (200 μm) Phaseolus leaves, and certainly in the case of the thick (4 mm) Crassula leaves, light scattering plays an important role in light transport through the leaf and should also affect light flux in these leaves. - Physics and Astronomy - Biology and Life Sciences - ISSN: 0031-8655
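The Monte Carlo step of the method can be illustrated with a toy model. The sketch below is my own simplification, not the authors' code: photons random-walk through a one-dimensional slab with isotropic scattering, each path is weighted by Beer-Lambert absorption, and arrival times are histogrammed. Repeating this over a grid of scattering and absorption coefficients would yield the kind of "library of curves" that measured time profiles are matched against. All parameter values are illustrative assumptions.

```java
import java.util.Random;

public class LeafMonteCarlo {
    // Toy time-resolved transmittance curve for a slab of thickness L (mm),
    // scattering coefficient mus and absorption coefficient mua (per mm).
    static double[] transmittanceHistogram(double L, double mus, double mua,
                                           int photons, int bins, double psPerBin) {
        final double C_MM_PER_PS = 0.2998 / 1.4;   // light speed in tissue, assuming n ~ 1.4
        double[] hist = new double[bins];
        Random rng = new Random(1);                // fixed seed for reproducibility
        for (int p = 0; p < photons; p++) {
            double z = 0, path = 0, w = 1.0;
            for (int step = 0; step < 10_000; step++) {
                double s = -Math.log(1 - rng.nextDouble()) / mus; // exponential free path
                double u = 2 * rng.nextDouble() - 1;              // isotropic direction cosine
                z += s * u;
                path += s;
                w *= Math.exp(-mua * s);                          // Beer-Lambert absorption
                if (z < 0) break;                                 // back-scattered out: lost
                if (z > L) {                                      // transmitted: record arrival time
                    int bin = (int) (path / C_MM_PER_PS / psPerBin);
                    if (bin < bins) hist[bin] += w;
                    break;
                }
            }
        }
        return hist;
    }

    public static void main(String[] args) {
        // 0.2 mm slab (a Phaseolus-like leaf), mus = 50/mm, mua = 0.5/mm (assumed values)
        double[] h = transmittanceHistogram(0.2, 50, 0.5, 20_000, 100, 1.0);
        for (int i = 0; i < 10; i++) System.out.printf("%d ps: %.3f%n", i, h[i]);
    }
}
```

Fitting would then amount to finding the (mus, mua) pair whose simulated histogram best matches the streak-camera trace.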
<urn:uuid:74186cd2-a77c-401e-9375-8643cea3a7b2>
3.046875
361
Academic Writing
Science & Tech.
30.283333
IT SOUNDS all wrong: drilling holes in a piece of wood to make it more resistant to knocks. But it works, because the energy from the blow gets distributed throughout the wood rather than focusing on one weak spot. The discovery should lead to more effective and lighter packaging materials. Carpenters have known for centuries that some woods are tougher than others. Hickory, for example, was turned into axe handles and cartwheel spokes because it can absorb shocks without breaking. White oak, on the other hand, is much more easily damaged, although it is almost as dense. Julian Vincent at Bath University and his team were convinced the wood's internal structure could explain the differences. Many trees have tubular vessels that run up the trunk and carry water to the leaves. In oak they are large, and arranged in narrow bands, but in hickory they are smaller, and more evenly distributed. ...
<urn:uuid:5054aa55-992e-44f1-995b-63508e03751a>
3.90625
210
Truncated
Science & Tech.
49.974342
A revolutionary new computer based on the apparent chaos of nature can reprogram itself if it finds a fault. OUT of chaos comes order. A computer that mimics the apparent randomness found in nature can instantly recover from crashes by repairing corrupted data. Dubbed a "systemic" computer, the self-repairing machine now operating at University College London (UCL) could keep mission-critical systems working. For instance, it could allow drones to reprogram themselves to cope with combat damage, or help create more realistic models of the human brain. Everyday computers are ill suited to modelling natural processes such as how neurons work or how bees swarm. This is because they plod along sequentially, executing one instruction at a time. "Nature isn't like that," says UCL computer scientist Peter Bentley. "Its processes are distributed, decentralised and probabilistic. And they are fault tolerant, able to heal themselves. A computer should be able to do that." Today's computers work steadily through a list of instructions: one is fetched from the memory and executed, then the result of the computation is stashed in memory. That is then repeated – all under the control of a sequential timer called a program counter. While the method is great for number-crunching, it doesn't lend itself to simultaneous operations. "Even when it feels like your computer is running all your software at the same time, it is just pretending to do that, flicking its attention very quickly between each program," Bentley says. He and UCL's Christos Sakellariou have created a computer in which data is married up with instructions on what to do with it. For example, it links the temperature outside with what to do if it's too hot. It then divides the results up into pools of digital entities called "systems". Each system has a memory containing context-sensitive data that means it can only interact with other, similar systems.
Rather than using a program counter, the systems are executed at times chosen by a pseudorandom number generator, designed to mimic nature's randomness. The systems carry out their instructions simultaneously, with no one system taking precedence over the others, says Bentley. "The pool of systems interact in parallel, and randomly, and the result of a computation simply emerges from those interactions," he says. It doesn't sound like it should work, but it does. Bentley will tell a conference on evolvable systems in Singapore in April that it works much faster than expected. Crucially, the systemic computer contains multiple copies of its instructions distributed across its many systems, so if one system becomes corrupted the computer can access another clean copy to repair its own code. And unlike conventional operating systems that crash when they can't access a bit of memory, the systemic computer carries on regardless because each individual system carries its own memory. The pair are now working on teaching the computer to rewrite its own code in response to changes in its environment, through machine learning. "It's interesting work," says Steve Furber at the University of Manchester, UK, who is developing a billion-neuron, brain-like computer called Spinnaker (see "Build yourself a brain"). Indeed, he could even help out the UCL team. "Spinnaker would be a good programmable platform for modelling much larger-scale systemic computing systems," he says. This article appeared in print under the headline "Machine, heal thyself" Build yourself a brain The systemic computer takes its lead from nature (see main story), but so does Spinnaker, an ambitious project at the University of Manchester, UK, to build a one-billion-neuron computer from microchips. The idea is to create a supercomputer that works just like the human brain using the same ARM chips that power most smartphones. 
The team wants to do parallel simulation of large-scale neural networks using the equivalent of 1 per cent of the human brain's neuron count. They are well on their way: using chips that model 1000 neurons each, their system has created the equivalent of 750,000 neurons. "We're advancing slowly but steadily," says project leader Steve Furber. Have your say Thu Feb 14 21:37:32 GMT 2013 by Eric Kvaalen "The team wants to do parallel simulation of large-scale neural networks using the equivalent of 1 per cent of the human brain's neuron count. They are well on their way: using chips that model 1000 neurons each, their system has created the equivalent of 750,000 neurons." How does that come out to "well on their way"? The human brain has about 90 milliard neurons ((long URL - click here) ), so 1% would be 900 million neurons. They don't have even 1/1000 of that. Thu Feb 14 23:46:14 GMT 2013 by Michael Dowling I would be happy with a PC running this operating system - no more blue screens of death that were recently plaguing my Win7 machine! Sat Feb 16 13:50:16 GMT 2013 by Tyrone First off, this sounds like the computer equivalent of perpetual motion. Second, despite claiming each system is equal in priority to every other system, there still must be a supervisory program of some type. If not, then such a computer is intrinsically insecure against virus infection.
Sat Feb 16 16:16:18 GMT 2013 by Michael Dowling "Crucially, the systemic computer contains multiple copies of its instructions distributed across its many systems, so if one system becomes corrupted the computer can access another clean copy to repair its own code". It may not be the computer equivalent of perpetual motion, but maybe it would be less vulnerable to viral infection: if one system became infected or corrupted, a fresh copy could continue work, and maybe the infected system could be quarantined some way.
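The scheme the article describes (data bundled with its instructions, a pseudorandom scheduler instead of a program counter, and redundant instruction copies enabling self-repair) can be caricatured in a few lines. This is my own toy sketch, not the UCL implementation; every name in it is invented.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Caricature of a "systemic" computation: each system bundles data with its
// own copy of the instruction, a pseudorandom scheduler replaces the program
// counter, and a corrupted copy is repaired from a randomly chosen clean peer.
public class SystemicToy {
    static class Sys { String code = "inc"; int data = 0; }

    static List<Sys> run(int nSystems, int steps, long seed) {
        List<Sys> pool = new ArrayList<>();
        for (int i = 0; i < nSystems; i++) pool.add(new Sys());
        pool.get(0).code = "????";                   // simulate a corrupted system

        Random rng = new Random(seed);               // stands in for nature's randomness
        for (int s = 0; s < steps; s++) {
            Sys sys = pool.get(rng.nextInt(pool.size()));
            if (sys.code.equals("????")) {           // corrupted: self-repair
                Sys peer = pool.get(rng.nextInt(pool.size()));
                if (!peer.code.equals("????")) sys.code = peer.code;
            } else {
                sys.data++;                          // execute its own instruction
            }
        }
        return pool;
    }

    public static void main(String[] args) {
        for (Sys s : run(5, 1000, 42))
            System.out.println(s.code + " -> " + s.data);
    }
}
```

The point of the toy is only that no system takes precedence, and the redundant copies let the pool heal itself as a side effect of ordinary scheduling.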
<urn:uuid:4182bc71-bfe5-4d2d-a544-8947a80013b0>
3.578125
1,336
Comment Section
Science & Tech.
45.164371
Here’s a very cool (and somewhat mesmerizing) way to visualize surface winds across the country. It’s a project by Google computer scientists. Click hint.fm/wind and see animated streams of air come together, flow apart, and accelerate and decelerate. The map uses National Weather Service data, and refreshes hourly. Jason notes “how the swirls in the wind map coincide with the areas of low pressure and how the strongest winds – indicated by the densest concentration of streams – coincide with the regions where temperatures are changing most dramatically.” All Posts By Paul Poteet, unless otherwise indicated.
<urn:uuid:4bec1b73-459f-4a48-a88c-fd135117dfcd>
2.765625
186
Personal Blog
Science & Tech.
41.264983
We all know takeout food sometimes requires special utensils to be eaten properly. The same is true for fish. (The food they’re eating, not takeout fish.) Below, behold the first video of a reef fish using a tool — and traveling a great distance to find it. The orange-dotted tuskfish, a species of wrasse, is the second type of wrasse documented using tools in the past few months. A blackspot tuskfish was caught on camera earlier this year; now the first video has been published. The fish digs around in the sand to find a choice clam, picks it up, then swims for a while until it finds a good rock. It proceeds to throw the clam against said rock to open it. This is a fish, remember — not the type of creature you might expect to see using tools. Dolphins, elephants, rats, sure — but a fish? “It requires a lot of forward thinking, because there are a number of steps involved. For a fish, it's a pretty big deal,” said Giacomo Bernardi, professor of ecology and evolutionary biology at the University of California, Santa Cruz, who shot the video. The fact that this behavior has been seen in other fish indicates it may not be a recent evolution, but a deep-seated behavioral trait in wrasses — and maybe other fishes, too.
<urn:uuid:a8bd15c4-77ce-4b3c-8585-40e7a0c883db>
3.109375
345
Truncated
Science & Tech.
59.491596
Oil companies big and small have used technology to find a bounty of oil and natural gas so large that worries about running out have melted away. New imaging technologies let drillers find oil and gas trapped miles underground and undersea. The result is an abundance that has put the United States on track to become the world's largest producer of oil and gas in a few years. Northwestern University researchers have recently developed a graphene-based ink that... A massive telescope buried in the Antarctic ice has detected 28 extremely high-energy... Two scientists in Switzerland have developed a device that can create 3D images of living cells... Specialty drugmaker Biogen Idec said Tuesday it submitted a new injectable multiple sclerosis drug to the Food and Drug Administration for U.S. market approval. The drug, called Plegridy, is intended to treat patients with relapsing forms of multiple sclerosis. An international team of researchers may have found what cause a dramatic cooling near the end of the last major Ice Age more than 12,000 years ago. The recently published study, which involved the study of rock melted into carbon spherules, describes evidence of a major cosmic event near the end of the Ice Age. The ensuing climate change forced many species to adapt or die. Until recently people believed much of the rain forest’s carbon floated down the Amazon River and ended up deep in the ocean. Research showed a decade ago that rivers exhale huge amounts of carbon dioxide, though it left open the question of how that was possible. A new study resolves the conundrum, proving that woody plant matter is almost completely digested by bacteria living in the Amazon River. Duke University engineers have developed a novel method for producing clean hydrogen, which could prove essential to weaning society off of fossil fuels and their environmental implications. 
The Duke engineers, using a new catalytic approach, have shown in the laboratory that they can reduce carbon monoxide levels to nearly zero in the presence of hydrogen and the harmless byproducts of carbon dioxide and water. The solar industry in Georgia is pushing a power monopoly to expand its use of solar energy as it plans to meet the state's electricity needs over the next two decades. State utility regulators heard testimony Tuesday on the energy plans from Southern Co. subsidiary Georgia Power, which must submit new plans every three years. You're standing near an airport luggage carousel and your bag emerges on the conveyor belt, prompting you to spring into action. How does your brain make the shift from passively waiting to taking action when your bag appears? A new study from investigators at the University of Michigan and Eli Lilly may reveal the brain's "switch" for new behavior. Sandia National Laboratories has developed key components of a software tool to help the Army's PEO GCS analyze countless what-if scenarios that can be manipulated as technology advances and the global environment, the federal budget, or other factors change. Sandia calls this advanced combination of modeling, simulation, and optimization decision support software the Capability Portfolio Analysis Tool (CPAT). Meeting the demand for more data storage in smaller volumes means using materials made up of ever-smaller magnets, or nanomagnets. One promising material for a potential new generation of recording media is an alloy of iron and platinum with an ordered crystal structure. A new study has identified two unique methods for storing energy using wind power. A team from Pacific Northwest National Laboratory and Bonneville Power Administration has located two sites in Washington that could serve as multi-megawatt facilities. They say power for about 85,000 homes each month could be stored in porous rocks deep underground for later use. 
Columbia University has signed a licensing agreement with Varian Medical Systems for new imaging software that facilitates 3D segmentation, the process by which anatomical structures in medical images are distinguished from one another—an important step in the precise planning of cancer surgery and radiation treatments. Pulsars rotate rapidly, emitting powerful and regular beams of radiation that are seen as flashes of light, blinking on and off at intervals from seconds to milliseconds. Their predictability could be useful for future navigation systems. Built to test and validate next-generation X-ray navigation technology, the Goddard X-ray Navigation Laboratory Testbed will demonstrate the feasibility of this approach. A fried breakfast food popular in Spain provided the inspiration for the development of doughnut-shaped droplets that may provide scientists with a new approach for studying fundamental issues in physics, mathematics, and materials. The doughnut-shaped droplets, a shape known as toroidal, are formed from two dissimilar liquids using a simple rotating stage and an injection needle. Over the past few decades, scientists have developed many devices that can reopen clogged arteries, including angioplasty balloons and metallic stents. While generally effective, each of these treatments has drawbacks, including the risk of side effects. A new study analyzes the potential usefulness of a new treatment that combines the benefits of angioplasty balloons and drug-releasing stents, but may pose fewer risks. In a new business, sometimes the better part of wisdom is knowing when to quit, a new study concludes. Even though persistence is a key to business success, entrepreneurs might be more successful if they not only knew when to start a business and take risks, but also knew when to abandon it and find something that provides a greater opportunity, the research team concluded. 
Waterproof fabrics that whisk away sweat could be the latest application of microfluidic technology developed by bioengineers at the University of California, Davis. The new fabric works like human skin, forming excess sweat into droplets that drain away by themselves, says inventor Tingrui Pan.
<urn:uuid:98be622a-4084-4541-a509-4ffd436ec3e9>
2.9375
1,163
Content Listing
Science & Tech.
31.040474
The greenhouse effect is the process in which the emission of infrared radiation by the atmosphere warms a planet's surface. The name comes from an analogy with the warming of air inside a greenhouse compared to the air outside the greenhouse. The Earth's average surface temperature is about 33°C warmer than it would be without the greenhouse effect. In addition to the Earth, Mars and especially Venus have greenhouse effects. The Earth receives energy from the Sun in the form of radiation. The Earth reflects about 30% of the incoming solar radiation. The remaining 70% is absorbed, warming the land, atmosphere and oceans. For the Earth's temperature to be in steady state so that the Earth does not rapidly heat or cool, this absorbed solar radiation must be very nearly balanced by energy radiated back to space in the infrared wavelengths. Since the intensity of infrared radiation increases with increasing temperature, one can think of the Earth's temperature as being determined by the infrared flux needed to balance the absorbed solar flux. The visible solar radiation mostly heats the surface, not the atmosphere, whereas most of the infrared radiation escaping to space is emitted from the upper atmosphere, not the surface. The infrared photons emitted by the surface are mostly absorbed in the atmosphere by greenhouse gases and clouds and do not escape directly to space. The reason this warms the surface is most easily understood by starting with a simplified model of a purely radiative greenhouse effect that ignores energy transfer in the atmosphere by convection (sensible heat transport) and by the evaporation and condensation of water vapor (latent heat transport). In this purely radiative case, one can think of the atmosphere as emitting infrared radiation both upwards and downwards. The upward infrared flux emitted by the surface must balance not only the absorbed solar flux but also this downward infrared flux emitted by the atmosphere. 
The surface temperature will rise until it generates thermal radiation equivalent to the sum of the incoming solar and infrared radiation. For more information about the topic Greenhouse effect, read the full article at Wikipedia.org.
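As a rough check on the radiative balance described above, equating absorbed sunlight to blackbody emission gives Earth's no-greenhouse "effective" temperature. The sketch below is a back-of-the-envelope illustration of mine; the solar-constant and albedo values are standard approximations.

```java
public class EffectiveTemperature {
    public static void main(String[] args) {
        double S = 1361.0;             // solar constant at Earth, W/m^2
        double albedo = 0.30;          // ~30% of sunlight is reflected, as stated above
        double sigma = 5.670374e-8;    // Stefan-Boltzmann constant, W/(m^2 K^4)

        // Sunlight is intercepted over a disc (pi r^2) but emitted over the
        // whole sphere (4 pi r^2), hence the factor of 4.
        double absorbed = S * (1 - albedo) / 4.0;

        // Steady state: sigma * T^4 = absorbed  =>  T = (absorbed / sigma)^(1/4)
        double T = Math.pow(absorbed / sigma, 0.25);
        System.out.printf("Effective temperature: %.0f K (%.0f C)%n", T, T - 273.15);
        // The observed mean surface temperature is ~288 K (15 C); the ~33 K
        // difference is the greenhouse warming quoted in the text.
    }
}
```

The result, about 255 K, is consistent with the article's statement that the surface is roughly 33°C warmer than it would be without the greenhouse effect.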
<urn:uuid:b0f7bbca-d686-4ead-95ab-cc6f30c40163>
4.25
446
Knowledge Article
Science & Tech.
27.755625
In my mind, where I imagine people are so interested in what I do that they hang on every carefully chosen word I write, I imagine some unspecified mob of readers looking over my I *heart* cryptozoology post and going “Whoa now, pardner!” (yes, you all sound like cowboys in my mind) “You just said there was a difference between cryptozoology and real zoology, but you deal with cryptic species all the time! What’s up with that?” Cryptic species and cryptid species are two very different beasts. Cryptids are the mysterious, unidentified creatures that cryptozoology deals with. They haven’t been studied in depth by virtue of the fact that they haven’t been tracked, captured, or even well documented. In contrast, a cryptic species is one that is morphologically and ecologically indistinguishable from another species, but is genetically distinct. A cryptic species has been under our microscopes the whole time, but has been studied as if it were one species when it is really two. A perfect example of this was the discovery in 2006 of a cryptic species of hammerhead shark, physically identical, but genetically distinct, and sharing territory with the… Well, what do we call the original species when a cryptic is discovered? As a unit of evolution, both species are technically cryptic. They’re both morphologically indistinguishable from each other, yet genetically distinct. Usually, the dominant population keeps the original scientific name. In the case of the hammerhead, the globally distributed population remained Sphyrna lewini while the Atlantic population was designated the cryptic species awaiting reclassification. In other cases, both species are reassigned. Sometimes one, both, or many new cryptics receive a three-word naming scheme. The hunt for cryptics is often part of a larger study to characterize contributors to an ecosystem. As more ecological studies rely on genetic tools to assess ecological parameters, more cryptic species are being discovered.
The hunt for cryptids is a quest to discover completely new animals that have yet to be described by science.
<urn:uuid:16981fed-8343-46b5-9fa2-b90c4bc10ee6>
2.921875
432
Personal Blog
Science & Tech.
31.025833
Links to information on species of frogs, toads, and salamanders located in the southeastern United States and the U.S. Virgin Islands, with information on appearance, habitats, calls, and status, plus photos, glossary, and provisional data. Manual for research program on the nesting habits of sea turtles of the Virgin Islands, with descriptions of species, nesting behavior, observation methods, record keeping, tagging, and tissue sample collection. (PDF file, 121 pp.) Describes research to assess the effectiveness of the current system and distribution of marine reserves and protected areas in the Virgin Islands and Puerto Rico for conserving reef ecosystems and resources. By measuring the current and historical growth rates of coral skeletons, and using field experiments, we intend to find out whether rising atmospheric CO2 and rising sea levels will cause coral reefs to erode and cease to function.
<urn:uuid:ee15fb27-1238-440f-b620-9a7fe4bb204b>
2.90625
177
Content Listing
Science & Tech.
33.30446
This is an infrared image of Jupiter showing regions which are hot. The Source of Heat from Within Jupiter Frequently in astronomy, the luminosity of a star is calculated. The luminosity indicates the energy output, and the temperature, of the star. When the luminosity of the outer planets was calculated, that of Jupiter and Saturn was very high, indicating that these planets are giving off a lot of energy, more energy in fact than they are receiving from the sun. There are several ways in which astronomical objects produce energy from inside. The first is thermonuclear fusion. This method of energy production is reserved for stars. Another method is by way of radioactive material within the ground. This method is at work in most terrestrial bodies. For the giant planets, the method which seems to be at work is that energy is given off when a planetary body is shrinking, or collapsing on itself. The fact that Jupiter is still collapsing indicates that the process of planet formation is still going on. This process provides the heat from within which causes the unusual motions in the atmosphere.
<urn:uuid:568a9586-12b4-442f-97c9-63fb6d18d287>
3.609375
618
Content Listing
Science & Tech.
47.031376
At least 1,000 people have been injured in Russia as the result of a meteor exploding in the air. The energy of the detonation appears to be equivalent to about 300 kilotons of TNT, said Margaret Campbell-Brown of the department of physics and astronomy at the University of Western Ontario. Meanwhile, an asteroid approached Earth but did not hit it, coming closest at about 2:25 p.m. ET. You probably have some questions about both of those events, so here's a brief overview: 1. Are these events connected? The meteor in Russia and the asteroid that passed by on Friday afternoon are "completely unrelated," according to NASA. The trajectory of the meteor differs substantially from that of asteroid 2012 DA14, NASA said. Estimates of the meteor's size are preliminary, but it appeared to be about one-third the size of 2012 DA14. The term "asteroid" can also be used to describe the rock that exploded over Russia, according to the European Space Agency and NASA, although it was a relatively small one. 2. What's the difference between an asteroid and a meteorite and other space rocks? According to NASA, here's how you tell what kind of object is falling from the sky: Asteroids are relatively small, inactive rocky bodies that orbit the sun. Comets are also relatively small and have ice on them that can vaporize in sunlight. This process forms an atmosphere of dust and gas; you might also see a "tail" of dust or gas.
That sea level rise isn’t the same everywhere. The moon’s pull, oceanic currents, the Earth’s rotation—these all play a role in what ocean water is where. Turns out the U.S. East Coast is experiencing sea level rise three to four times higher than the global average, according to a study from the U.S. Geological Survey in the journal Nature Climate Change. (Scientific American is part of Nature Publishing Group.) That’s bad news for the highly populated region and suggests storm surges are going to prove ever more problematic from New York City to Cape Hatteras.
The best way to learn how radio wave propagation works is to understand the basic principles of ionization and atmospheric impacts on shortwave, mediumwave, and other radio signals. The best article I have found on the subject is called HF Propagation from DeltaDX.com. Study it carefully and review the characteristics not just from the notes but compare them with what you hear in your own location, and note it during each month. This builds a history that you can refer to for predicting conditions during various times of the year. Here is a very basic Gray Line map so you can monitor sunrise/sunset around the world to target stations and nations for listening. You can customize this map for your own region at the Earth View website at this link. The best page for current propagation conditions is without a doubt from DX.QSL.net. Also here is a tutorial on Space Weather from John A. Kennewell. Lastly, this is the link to explain the WWV Geophysical Alert Message broadcast each hour at 18 minutes past on 2.5, 5.0, 10, 15, and 20 MHz. Current propagation conditions from www.hamqsl.com: Tables from NOAA explaining the various storm levels and alerts are below. For example, a K index of 1-3 is quiet, 4-5 is active, and so on.
I was thinking about Fourier's law in heat transfer today and for some reason I am just not understanding the relationships it gives us. Fourier's law tells us that if the heat transfer rate is kept constant, then a larger thermal conductivity produces a smaller temperature gradient. I am confused about the physical reason behind this, or I am just misunderstanding the definition of thermal conductivity. I thought that since thermal conductivity is the ease of heat transfer through a material, a high thermal conductivity would mean heat is easily transferred, so one side of the material would be at a much higher temperature than the other and a large temperature gradient would be created. In fact, the opposite is actually the case.
Climate Literacy Essential Principle 2 *2a. Earth's climate is influenced by interactions involving the Sun, ocean, atmosphere, clouds, ice, land, and life. Climate varies by region as a result of local differences in these interactions. *2b. Covering 70% of Earth's surface, the ocean exerts a major control on climate by dominating Earth's energy and water cycles. It has the capacity to absorb large amounts of solar energy. Heat and water vapor are redistributed globally through density-driven ocean currents and atmospheric circulation. Changes in ocean circulation caused by tectonic movements or large influxes of fresh water from melting polar ice can lead to significant and even abrupt changes in climate, both locally and on global scales. *2c. The amount of solar energy absorbed or radiated by Earth is modulated by the atmosphere and depends on its composition. Greenhouse gases—such as water vapor, carbon dioxide, and methane—occur naturally in small amounts and absorb and release heat energy more efficiently than abundant atmospheric gases like nitrogen and oxygen. Small increases in carbon dioxide concentration have a large effect on the climate system. *2d. The abundance of greenhouse gases in the atmosphere is controlled by biogeochemical cycles that continually move these components between their ocean, land, life, and atmosphere reservoirs. The abundance of carbon in the atmosphere is reduced through seafloor accumulation of marine sediments and accumulation of plant biomass and is increased through deforestation and the burning of fossil fuels as well as through other processes. *2e. Airborne particulates, called "aerosols," have a complex effect on Earth's energy balance: they can cause both cooling, by reflecting incoming sunlight back out to space, and warming, by absorbing and releasing heat energy in the atmosphere. 
Small solid and liquid particles can be lofted into the atmosphere through a variety of natural and man-made processes, including volcanic eruptions, sea spray, forest fires, and emissions generated through human activities. *2f. The interconnectedness of Earth's systems means that a significant change in any one component of the climate system can influence the equilibrium of the entire Earth system. Positive feedback loops can amplify these effects and trigger abrupt changes in the climate system. These complex interactions may result in climate change that is more rapid and on a larger scale than projected by current climate models
Why Your Shared Library Code Is Slow **Position Independent Code** When you compile a shared library (with -fPIC, in the case of gcc), you are specifying that you want the compiler to generate *position independent code*. Position independent code never uses straightforward machine addressing modes to access global variables, static variables, or non-static functions. Instead, position independent code accesses these kinds of data indirectly. **PLTs and the GOT** To invoke a non-static function, position independent code uses a so-called *procedure linkage table*, or PLT. A PLT is a section of the address space that is both writable and executable. It holds short sequences of instructions, stub functions. There is one stub per non-static function called. Calls to non-static functions are transformed by the compiler to calls of the stub functions in the PLT. To access a global variable or a static variable, position independent code uses a so-called *global offset table*, or GOT. A GOT is a table with one entry per global or static variable referred to by the program/library in which the GOT appears; at link-time (which is load-time, for shared libraries) each entry is filled in with the address of its variable. **Symbol Resolution and Lazy Binding** The operating system's dynamic loader initialises PLT entries at load-time (when a program that uses the shared library is exec'd). Their initial values are a sequence of instructions that invokes the run-time linker (the dynamic loader) to resolve the address of the function to which the PLT entry will indirect. To *resolve* an entry means to find the actual address for the entry's symbol. The address for the resolved entry is then cached. The operating system's dynamic loader *resolves* the PLT entries when a program that is linked to your shared library first uses the symbol in the PLT, and not before. This procedure is called *lazy binding*.
Lazy binding is typically the default mode of operation for non-static function invocation in shared libraries. GOT entries are not bound lazily. The operating system's dynamic loader resolves the GOT entries at load time. When a global variable or static variable is referenced, it is done via the resolved GOT entries in the invoking code. **Mechanisms for Indirection** *Resolved PLT Entry* To invoke a function via the PLT, as compared to invoking a function directly, there are many more instructions to execute, and there is a change in the locality of the instruction stream. Here's why. There are two cases to consider: the case in which the PLT entry has already been resolved, and the case in which the PLT entry has not yet been resolved (the first access of the function). First, let us consider the case in which the PLT entry has already been resolved. This is most common. In this case, the calling code simply calls the PLT entry as if it were a function. The resolved PLT entry will be a jump instruction, followed by the address of the non-static function that is actually to be executed. This small sequence of instructions would have been placed there when the PLT entry was bound (lazily). So, to call a non-static function in position independent code, the instructions from the PLT are first loaded into the CPU's instruction stream, then they execute a jump to the address of the desired function. The pseudo-assembly code might look like this: call pltentryfor_foo ; calling foo() indirectly Remember, those code sequences in the PLT do not exist in your shared library. Only the addresses of those code sequences (relative to the calling code) are known at compile time. *Unresolved PLT Entry* Now, let us consider the case in which the PLT entry has not been resolved. This happens when the symbol is first used at run-time. This lazy binding is useful if no code which uses the symbol is ever invoked. In that case, there is no run-time cost. 
The initial value of any PLT entry is a sequence of instructions which invokes the run-time linker. The run-time linker resolves the real function address, and stores it in the PLT entry, so that the next time that the PLT entry is invoked, it will call the desired function. This resolution process, performed by the run-time linker, is very costly in comparison to the cost of a simple direct call. The cost is paid only upon the first invocation of the function. Accesses of global variables and static variables are via the GOT. For each access of such a variable, the compiler generates code to load the appropriate GOT entry. The entry is the address of the desired variable. With that address, the code will then access the value of the desired variable. As compared to non-PIC, this method results in extra code to load the GOT entry, and then an additional data access to fetch the value itself. The generated code does not know the exact address of the GOT at compile time, but it knows the address of the GOT relative to its own address. Code fragments like the following pseudo-assembly language are common in IA32 PIC:
call __i686.get_pc_thunk.bx
add ebx, GLOBAL_OFFSET_TABLE
mov eax, [ebx + the_offset_into_the_GOT]
The __i686.get_pc_thunk.bx function simply puts the value of the PC into the EBX register. To that is added the (possibly negative) offset of the GOT relative to the currently executing code. The result is the address of the GOT in register EBX. Next, the value of the GOT entry with the index corresponding to some desired global variable is loaded into register EAX. After this sequence of instructions, the actual value of the global still has not been fetched. Only the global's address is known. An additional load instruction is required to obtain the value of the global. Note that these indirect function accesses happen *even when functions within the same shared library* are the calling functions!
This is because it is permitted for users of the shared library to override a symbol (function name or global variable) with their own versions. This is true for ELF, but not necessarily all DSOs. This behaviour can be overridden with the -Bsymbolic-functions linker flag. One often sees this flag passed to the linker by gcc with a command-line switch that looks like this:
-Wl,-Bsymbolic-functions
If the magnitude of run-time overhead for function invocations and global variable references discussed above reminds you of the overheads associated with interpreted languages and virtual machines, then you understand. DSOs are trading performance for the various benefits of run-time linking. Many would suggest that the benefits of PIC do not outweigh the performance costs. This run-time overhead might be one reason why your DSO is slow.
Conrotatory motion maintains C2 symmetry, and disrotatory motion maintains Cs symmetry. We may therefore draw an orbital energy correlation diagram, in which the orbitals of butadiene are classified under Cs symmetry (a',a'',a',a''), on the lhs, and under C2 symmetry (b,a,b,a), on the rhs. The bonds of cyclobutene which are involved are shown in the middle. Lines are then drawn connecting the cyclobutene orbitals under Cs symmetry (a',a',a'',a''), and under C2 symmetry (a,b,a,b). The diagram clearly shows that the disrotatory path is favoured photochemically and the conrotatory path is favoured thermally. We shall use C2v symmetry with the axes as shown. For H2CO, we can form CH bonding orbitals (a1 and b2), the O lone pair (2pyO = b2) and the CO π orbital (b1). Thus the electronic configuration is 1a1^2 1b2^2 2b2^2 1b1^2. For H2, the σ orbital has a1 symmetry. For CO, the σ bonding orbital has a1 symmetry, and the π orbitals have b1+b2 symmetry. Thus the separated molecules have electronic configuration 1a1^2 2a1^2 1b2^2 1b1^2. This is different from the molecule, and therefore there will be an orbital crossing, and the transition state will have a lower symmetry. Indeed the b2 and a1 orbitals must have the same symmetry, and this happens in Cs, where there is only a plane of symmetry. The transition state is shown below, and there is a very high barrier (75 kcal mol^-1). The above arguments, entirely based on symmetry, which predict whether there is a barrier in a reaction, form the basis of the Woodward-Hoffmann rules, examples of which are discussed in Part III Organic Chemistry
The pygmy beaked whale, also known as the bandolero beaked whale, Peruvian beaked whale, and lesser beaked whale, is the smallest of the mesoplodonts and one of the newest discoveries. There were at least two dozen sightings of an unknown beaked whale named Mesoplodon sp. A before the initial classification, and those are now believed to be synonymous with the species. Physical evidence of the species was first described in 1991 from a skeleton and a rotting carcass found in Bahia de La Paz, Baja California in 1990. The body of the pygmy beaked whale is the rather typical spindle shape of the genus, although the tail is unusually thick. The melon is somewhat bulbous and slopes down into a rather short beak. The mouthline in males has a very distinct arch with two teeth protruding slightly from the gum line before the apex. The coloration is typically dark gray on top and lighter below, especially on the lower jaw, throat, and behind the umbilicus. Males may have a distinct pale "chevron" pattern on their backs. The size for this species is only around 4.5 meters (13 feet) long in mature animals, and around 1.6 meters (5 ft) when born.
Population and distribution
This beaked whale has been recorded in the eastern tropical Pacific between Baja California and Peru through sightings and strandings. Another specimen, apparently of the same species, washed up in New Zealand, which indicates a presence in the western Pacific as well.
No population estimates have been made. Little is known about the group behaviors of this whale, and small groups have been seen. Stomach contents reveal at least one specimen is a fish eater, as opposed to the squid normally eaten by the genus. This species may be quite vulnerable to gillnets in Peru, since scientists found six dead adults in a very small sample. However, there is not enough evidence to determine anything about the species. - MNZ MM002142, collected from Oaro overbridge, south of Kaikoura, New Zealand, 19 October 1993. - Encyclopedia of Marine Mammals. Edited by William F. Perrin, Bernd Wursig, and J.G.M Thewissen. Academic Press, 2002. ISBN 0-12-551340-2 - Sea Mammals of the World. Written by Randall R. Reeves, Brent S. Steward, Phillip J. Clapham, and James A. Owell. A & C Black, London, 2002. ISBN 0-7136-6334-0
As quoted from spaceweather.com ASTEROID FLYBY: NASA radars are monitoring 2005 YU55, an asteroid the size of an aircraft carrier, as it heads for a Nov. 8th flyby of the Earth-Moon system. There is no danger to our planet. At closest approach on Tuesday at 3:28 pm PST (23:28 UT), the 400m-wide space rock will be 324,600 kilometers away, about 85% the distance from Earth to the Moon. Professional astronomers are eagerly anticipating the flyby as the asteroid presents an exceptionally strong radar target. Powerful transmitters at Goldstone and Arecibo will ping the space rock as it passes by, revealing the asteroid's shape and texture in crisp detail, and pinpointing its orbit for future flyby calculations. A movie from JPL explains: Asteroids this big have passed by Earth at similar distances many times before, but this is the first time astronomers have known about the flyby in advance. For instance, a similar encounter occurred in 1976 when 2010 XC15 split the distance between Earth and the Moon. Researchers didn't discover that space rock until 24 years after the flyby. The Nov. 8, 2011, passage of 2005 YU55 thus represents a rare opportunity for asteroid research. Experienced amateur astronomers should be able to photograph 2005 YU55 as it zips through the constellations Aquila and Pegasus glowing like an 11th magnitude star. Even under the full moonlight of Nov. 8th, such a bright asteroid is within reach of mid-sized backyard telescopes. The timing of the flyby favors observers in western Europe and eastern parts of North America. Check Sky & Telescope for observing tips or go straight to JPL for the object's ephemeris.
In C++, polymorphism lets a variable refer to objects of different data types. The only catch is that the different data types must be members of the same inheritance hierarchy and they must be lower in the hierarchy than the variable's data type. The cool part is this ability lets the same instruction behave differently, based on the actual object's data type instead of the variable's data type. Consider the hierarchy of the UML class diagram in Figure 1. Here, I have derived the Manager and Salesperson classes from the Employee class. This lets me use an Employee pointer to point to an Employee, Manager, or Salesperson object. The following code is an attempt at polymorphism: Employee* pEmp = new Manager; cout << *pEmp << endl; However, this attempt fails since the code does not invoke the Manager's insertion operator (<<) as I intended; it invokes the Employee's insertion operator. This happens because the indirection operation (*) is performed first, so *pEmp returns the data type of the pointer. In this case, because pEmp is an Employee pointer, the *pEmp operation returns the Employee data type in spite of pointing to a Manager object. After *pEmp is done, the function call is matched using the Employee data type. Consequently, the Employee's insertion function is matched and not the Manager's function as intended. In C++, polymorphism normally requires virtual methods. Because only methods can be inherited, many programmers think the insertion (<<) and extraction (>>) operators cannot display polymorphic behavior because these operators are implemented as nonmember friend functions. However, there are several ways of making these operators polymorphic. In this article, I compare and contrast three different techniques.
Order of Operations - Arrange students in pairs. Give each pair of students 16 blank cards. Have them write the numbers 0-9 and the symbols +, -, ×, ÷, (, and ) on the cards. Have one student use three numbers, two operations, and the parentheses to construct an expression. Have the other student evaluate the expression. Be sure to have the first student check the answer. - Problems like the following are great for helping students practice with grouping symbols: Place two parentheses in this expression to yield 17. 6 + 5 4 3 2 + 1 - When working with parentheses, it is strongly recommended that students use pencil. This will allow students to erase as they evaluate expressions. - Be sure to add words such as parentheses, order, and operation to your students' spelling and vocabulary lists. It is important to try to use these as much as possible so students will become adept with the language of mathematics. - Some students may draw parentheses that look like the letter "C" while others may use too much space between the parentheses and the numbers or operations. A good test is to see whether or not other students can read the expression written by another student. - When having students practice working on problems with parentheses, be sure to mix in problems without parentheses that require knowledge of the order of operations.
A class instance is created by calling a class object (see type-class). A class instance has a namespace implemented as a dictionary which is the first place in which attribute references are searched. When an attribute is not found there, and the instance’s class has an attribute by that name, the search continues with the class attributes. If a class attribute is found that is a user-defined function object or an unbound user-defined method object whose associated class is the class (call it C) of the instance for which the attribute reference was initiated or one of its bases, it is transformed into a bound user-defined method object whose im_class attribute is C and whose im_self attribute is the instance. Static method and class method objects are also transformed, as if they had been retrieved from class C; see type-class. See section [3.4.2] for another way in which attributes of a class retrieved via its instances may differ from the objects actually stored in the class’s __dict__. If no class attribute is found, and the object’s class has a __getattr__ method, that is called to satisfy the lookup. Attribute assignments and deletions update the instance’s dictionary, never a class’s dictionary. If the class has a __setattr__() or __delattr__() method, this is called instead of updating the instance dictionary directly. Class instances can pretend to be numbers, sequences, or mappings if they have methods with certain special names. See special-method-names.
- Futurity.org - http://www.futurity.org - Hermit crabs gather to evict neighbors Posted By Robert Sanders-UC Berkeley On November 5, 2012 @ 11:06 am In Top Stories | 2 Comments UC BERKELEY (US) — Most social animals get together for protection or to mate or hunt, but terrestrial hermit crabs socialize to steal each other’s houses. All hermit crabs appropriate abandoned snail shells for their homes, but the dozen or so species of land-based hermit crabs—popular terrarium pets—are the only ones that hollow out and remodel their shells, sometimes doubling the internal volume. This provides more room to grow, more room for eggs—sometimes a thousand more eggs—and a lighter home to lug around as they forage. But empty snail shells are rare on land, so the best hope of moving to a new home is to kick others out of their remodeled shells, says Mark Laidre, a University of California, Berkeley, postdoctoral fellow who reports this unusual behavior in Current Biology . When three or more terrestrial hermit crabs congregate, they quickly attract dozens of others eager to trade up. They typically form a conga line, smallest to largest, each holding onto the crab in front of it, and, once a hapless crab is wrenched from its shell, simultaneously move into larger shells. “The one that gets yanked out of its shell is often left with the smallest shell, which it can’t really protect itself with,” says Laidre, who is in the department of integrative biology. “Then it’s liable to be eaten by anything. For hermit crabs, it’s really their sociality that drives predation.” Laidre says the crabs’ unusual behavior is a rare example of how evolving to take advantage of a specialized niche—in this case, land versus ocean—led to an unexpected byproduct: socialization in a typically solitary animal. 
“No matter how exactly the hermit tenants modify their shelters, they exemplify an important, if obvious, evolutionary truth: living things have been altering and remodeling their surroundings throughout the history of life,” writes UC Davis evolutionary biologist Geerat J. Vermeij in a commentary in the same journal. For decades, Vermeij has studied how animals’ behavior affects their own evolution—what biologists term “niche construction”—as opposed to the well-known Darwinian idea that the environment affects evolution through natural selection. “Organisms are not just passive pawns subjected to the selective whims of enemies and allies, but active participants in creating and modifying their internal as well as their external conditions of life,” Vermeij concludes. Laidre conducted his studies on the Pacific shore of Costa Rica, where the hermit crab Coenobita compressus can be found by the millions along tropical beaches. He tethered individual crabs, the largest about three inches long, to a post and monitored the free-for-all that typically appeared within 10-15 minutes. Most of the 800 or so species of hermit crab live in the ocean, where empty snail shells are common because of the prevalence of predators like shell-crushing crabs with wrench-like pincers, snail-eating puffer fish, and stomatopods, which have the fastest and most destructive punch of any predator. On land, however, the only shells available come from marine snails tossed ashore by waves. Their rarity and the fact that few land predators can break open these shells to get at the hermit crab may have led the crabs to remodel the shells to make them lighter and more spacious, Laidre says. The importance of remodeled shells became evident after an experiment in which he pulled crabs from their homes and instead offered them newly vacated snail shells. None survived. 
Apparently, he says, only the smallest hermit crabs take advantage of new shells, since only the small hermit crabs can fit inside the unremodeled shells. Even if a crab can fit inside the shell, it still must expend time and energy to hollow it out, and this is something hermit crabs of all sizes would prefer to avoid if possible. UC Berkeley's Miller Institute funded the research. Source: UC Berkeley
|Dec21-12, 06:22 AM||#1| solar energy in space Is it possible to use solar energy to provide locomotion in Space? I appreciate that Solar sails have been mooted but is there any way that the energy from the sun can be used in such a way as to approach the source of the light? |Dec21-12, 09:13 AM||#2| If you can eject some mass (using solar energy, if you like), it is possible. If you can find some way to catch and accelerate solar wind, it might be possible, but the density is very low. |Dec25-12, 12:55 PM||#3| A solar sail can bring you anywhere more or less. This results from space mechanics, where you go farther from the Sun (or the attracting body) by accelerating on the orbit, and nearer by braking along the orbit. So in the typical use, a solar sail "tows" and is inclined 45° to the orbital speed and to the Sun's direction, in order to brake or accelerate. Much more efficient, even to go away from the Sun, because paths in a gravity field are very much a matter of kinetic and gravitation energy, but the means used give a speed, so this speed acts far better if it combines parallel with the existing big speed (30km/s for Earth on its orbit!) to produce the V2 effect. Because Sunlight is stronger nearer to the Sun, a Solar sail is in fact better to reach the inner planets or orbits. One Japanese craft presently uses it that way. I also computed that a Solar sail spacecraft, with very reasonable assumptions, would have reached a Solar polar orbit within two years and have stayed there indefinitely: compare it with Ulysses which took years to fly by Jupiter because our rockets can't provide 40km/s for a polar Solar orbit. And then, Ulysses flew once over each pole in many years, then all was over. A sail goes there in two years, stays there, changes the distance and inclination at will... Checkmate! 
I had some thoughts at Solar sail technology there; I'm not pleased with the beams I imagined, but other ideas are good to my eyes: how to thin the polymer film, how to test the unfurling... To go far and fast away from the Sun, say to the heliopause in a few years, a sail would first brake to go near, and from that better position, accelerate and escape our Solar system easily. There are "amateur" websites especially for that topic, very interesting, with advanced thoughts. What a sail can't do well is brake when it arrives at an outer planet like Uranus or Neptune. But well, maybe we could have rockets for that purpose, combined with the Oberth effect and capture using massive moons. I consider every space agency should have very active and concrete projects on this topic. This technology IS feasible and brings a lot. The Solar wind is extremely weak, much more so than Sunlight pressure. It diminishes less than Sunlight with distance but should remain fainter. It has been proposed only because, being composed of charged particles, it could be deflected by a huge (really) loop of current whose immaterial area weighs nothing. A wire was considered, an electron beam as well, curved by the Sun's induction. To get more thrust, Sunlight can heat an ejected working fluid, in a "Solar thermal rocket". This is technology within grasp. ESA has a research project for a craft from Low-Earth-Orbit to geostationary, towing a satellite that has brought the necessary hydrogen, and back to LEO for the next customer, within a few weeks. Perfectly feasible and very useful. I have thrown a few thoughts at it there, including a very low-pressure chamber and a windowless hole in the chamber to let the concentrated light in. I will throw more thoughts at it. Some day. I consider this is the proper way to transport people to Mars and back.
Chemical rockets and hardware preset in Martian orbit would permit a shorter trip to Mars or a short stay on Mars; Solar thermal rockets enable both at the same time. Other combinations are possible, like Solar electricity that powers an ion thruster and its variants, but they need much more light collecting area than the Solar thermal engine without saving global mass on a big mission like manned Mars. On a smaller mission, for instance hopping from one asteroid vicinity to another, this has been done by JPL with huge success.
Global Climate Change is threatening the delicate balance of temperatures needed for the survival of coral reef life. In the struggle for life on this planet, man has clearly won. On land his traces are everywhere, but recently he has gone under water too, and the results are devastating. Hundreds of square miles of reef have been poisoned, dynamited, polluted, overfished or indirectly damaged by activities on the adjacent land. A reef is not vast: often it is but a tiny rim around an island, a few hundred feet wide and then sand; a thin skin of oasis in a vast span of desert. Like the many symbiotic relationships found on the reef itself, there are a few places where an interdependency has developed between man and the reef, where they provide services to each other to the benefit of both. One such place is Bonaire, a small Caribbean island. Spurred by the bad example of its sister island Curacao, Bonaire had the foresight in the seventies to halt the destruction of the reef by declaring the sea around the island a Marine Park. Thanks to its success, the island economy now floats virtually entirely on diving tourism. The tourists are required to follow a course on how to behave under water and are not allowed to use diving gloves, because they are not supposed to touch anything anyway. Here, eco-tourism is bringing improved livelihoods to much of the population. It is in its own way a symbiotic relationship with life both on land and in the water, as the island's wildlife, from the boobies on the cliffs of the north-western tip of the island to the flamingos on the salt flats in the south-east, is also protected. All organisms on the reef are more or less dependent upon each other, but some have developed such an intricate relationship that they could no longer live without their partner.
Alliances, partnerships and the enemies they fend off are part of this drama. This photographic exhibit provides a glimpse into the symbiotic relationships of Bonaire, along with examples of symbiotic relationships in reefs in many other parts of the world, imparting a deep appreciation of the wonder, beauty, and diversity of the life of coral reefs, from microorganisms to oceanic fish. Photographs by Jan C. Post
An asymptote is a line that a graph approaches but never intersects. For example, consider the graph of Q = 1/P, where P is plotted on the horizontal axis and Q on the vertical axis. As P grows, the curve approaches the P-axis (the line Q = 0) but never touches it: the curve extends to infinity, always getting closer to Q = 0 without ever reaching it. The line Q = 0 is therefore a horizontal asymptote. Horizontal asymptotes arise most often when the function is a fraction whose numerator stays bounded while its denominator grows without limit. Our example Q = 1/P is such a fraction: as P heads toward infinity, the numerator remains 1 while the denominator gets bigger and bigger, so the whole fraction gets smaller and smaller, although it never equals zero. The value will be 1/2, then 1/3, then 1/10, even 1/10000, but never quite 0. Thus Q = 0 is a horizontal asymptote of the function Q = 1/P. This is how an asymptote is defined. Now let us look at the types of asymptote, which include vertical asymptotes and horizontal asymptotes. Horizontal asymptotes are horizontal lines that the graph of a function approaches as p tends to plus or minus infinity. The horizontal line u = c is a horizontal asymptote of a function u = f(p) if it satisfies lim p → −∞ f(p) = c or lim p → +∞ f(p) = c. This is the asymptote definition. In coordinate geometry, an asymptote of a curve is a line that the curve approaches arbitrarily closely, tending to meet it only at infinity, without ever intersecting it.
In classical mathematics it was said that an asymptote and its curve never meet, but in modern usage, such as in algebraic geometry, an asymptote of a curve is a line that is tangent to the curve at a point at infinity. Equivalently, an asymptote of a curve is a line such that the distance between the curve and the line approaches zero as they tend to infinity. As noted above, there are two types of asymptote: the vertical asymptote and the horizontal asymptote.
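The behaviour described above is easy to check numerically; here is a minimal sketch evaluating Q = 1/P at increasing values of P and confirming that the values shrink toward the asymptote Q = 0 without ever reaching it:

```python
# The function Q = 1/P approaches the horizontal asymptote Q = 0:
# successive values shrink toward zero but never reach it.
def f(p):
    return 1 / p

values = [f(p) for p in (2, 3, 10, 10_000, 1_000_000_000)]
assert all(v > 0 for v in values)              # never actually zero
assert values == sorted(values, reverse=True)  # strictly decreasing
```

However large P becomes, f(P) stays strictly positive, which is exactly what "approaches but never intersects" means for this curve.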
The java.io.FilterReader class is for reading filtered character streams. Important points about FilterReader: the class itself provides default methods that pass all requests to the contained stream; subclasses of FilterReader should override some of these methods and may also provide additional methods and fields. Following is the declaration for the java.io.FilterReader class: public abstract class FilterReader extends Reader Following are the fields of the java.io.FilterReader class: protected Reader in -- The underlying character-input stream. protected Object lock -- The object used to synchronize operations on this stream. |S.N.||Constructor & Description| |1||protected FilterReader(Reader in) | Creates a new filtered reader. |S.N.||Method & Description| |1|| void close() | This method closes the stream and releases any system resources associated with it. |2|| void mark(int readAheadLimit) | This method marks the present position in the stream. |3|| boolean markSupported() | This method tells whether this stream supports the mark() operation. |4|| int read() | This method reads a single character. |5|| int read(char[] cbuf, int off, int len) | This method reads characters into a portion of an array. |6|| boolean ready() | This method tells whether this stream is ready to be read. |7|| void reset() | This method resets the stream. |8|| long skip(long n) | This method skips characters. This class inherits methods from java.io.Reader and java.lang.Object.
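FilterReader is an instance of the decorator pattern: hold the contained stream, delegate everything to it by default, and override only the operations you want to filter. For comparison, here is a rough Python analogue of the same pattern; the class name and the uppercasing transform are invented purely for illustration and are not part of any Java or Python standard API:

```python
import io

# A hedged Python analogue of java.io.FilterReader's decorator pattern:
# wrap an underlying reader, delegate by default, override what you filter.
class UpperCaseReader:
    def __init__(self, inner):
        self.inner = inner                   # the contained stream ("protected Reader in")

    def read(self, size=-1):
        return self.inner.read(size).upper() # the overridden, filtering read

    def close(self):
        self.inner.close()                   # releases the underlying stream

reader = UpperCaseReader(io.StringIO("filtered text"))
assert reader.read() == "FILTERED TEXT"
reader.close()
```

A Java subclass of FilterReader would do the same thing: call the constructor with the wrapped Reader, then override read() to transform the characters passing through.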
Divide by zero: this code was compiled with gcc 4.4.5 on Ubuntu; on execution it threw the error "Floating point exception". But on changing the datatype of j to float it produced "inf". I'm not able to understand why it does not behave the same with j as an int. int i = 60; int j = 0; "inf" is defined as part of the IEEE floating-point standard (IEEE 754-2008 - Wikipedia, the free encyclopedia); in other words, "inf" is not part of any integer number standard. For IEEE floating-point numbers, dividing a regular number by 0 yields inf. For integers, dividing by 0 generally causes an interrupt; there is no standard result. If your compiler supports IEEE floating point, dividing by zero gives a "special" infinite value. The C standard does not actually require that floating-point values be represented using IEEE standard formats. There are real compilers that do not use IEEE formats (I remember using one compiler that only used IEEE floating point if enabled with a command-line switch). Dividing by zero gives undefined behaviour according to the C standard, which means any result is acceptable (reporting inf, crashing, etc.). There is also no requirement for consistent behaviour between types.
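For contrast, a quick sketch of how another language makes its own choice here: Python raises an exception for both integer and floating division by zero, even though IEEE 754 infinity is perfectly representable as a value. This underlines the thread's conclusion that there is no consistent behaviour across languages and types:

```python
import math

# Python raises ZeroDivisionError for both int and float division by
# zero -- unlike C with IEEE floats, where 60.0f / 0.0f evaluates to inf.
for numerator in (60, 60.0):
    try:
        numerator / 0
        raised = False
    except ZeroDivisionError:
        raised = True
    assert raised

assert math.isinf(float("inf"))   # IEEE infinity is still representable
```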
Honeybees have many adaptations for defense: Adults have orange and black striping that acts as warning coloration. Predators can learn to associate that pattern with a painful sting, and avoid them. Honeybees prefer to build their hives in protected cavities (small caves or tree hollows). They seal small openings with a mix of wax and resins called propolis, leaving only one small opening. Worker bees guard the entrance of the hive. They are able to recognize members of their colony by scent, and will attack any non-members that try to enter the hive. Workers and queens have a venomous sting at the end of the abdomen. Unlike in queens, and unusually among stinging insects, the stings of Apis workers are heavily barbed, and the sting and venom glands tear out of the abdomen, remaining embedded in the target. This causes the death of the worker, but may also cause a more painful sting and discourage the predator from attacking other bees or the hive. A stinging worker releases an alarm pheromone which causes other workers to become agitated and more likely to sting, and signals the location of the first sting. Honeybees are subject to many types of predators, some attacking the bees themselves, others consuming the wax and stored food in the hive. Some predators are specialists on bees, including honeybees. Important invertebrate enemies of adult bees include crab spiders and orb-weaver spiders, wasps in the genus Philanthus (called "beewolves"), and many species of social wasps in the family Vespidae. Vespid wasp colonies are known to attack honeybee colonies en masse, and can wipe out a hive in one attack. Many vertebrate insectivores also eat adult honeybees. Toads (Bufo) that can reach the entrance of a hive will sit and eat many workers, as will opossums (Didelphis). Birds are an important threat: the bee-eaters (Meropidae) in particular in Africa and southern Europe, but also the flycatchers around the world (Tyrannidae and Muscicapidae).
Apis mellifera in Africa are also subject to attack by honeyguides. These birds eat hive comb, consuming bees, wax, and stored honey. At least one species, the greater honeyguide (Indicator indicator), will guide mammal hive predators to hives, and then feed on the hive after the mammal has opened it up. The main vertebrate predators of hives are mammals. Bears frequently attack the nests of social bees and wasps, as do many mustelids such as the tayra in the Neotropics and especially the honey badger of Africa and southern and western Asia. In the Western Hemisphere skunks, armadillos and anteaters also raid hives, as do pangolins (Manis) in Africa. Large primates, including baboons and chimpanzees, raid hives as well. Some insects are predators in hives too, including wax moth larvae (Galleria mellonella, Achroia grisella), hive beetles (Hylostoma, Aethina), and some species of ants. In their native regions these tend not to be important enemies, but where honeybees have not co-evolved with these insects and have no defense, they can do great harm to hives. See the Ecosystem Roles section for information on honeybee parasites and pathogens. - Beewolves (Philanthus) - Crab spiders (Thomisidae) - vespid wasps (Vespidae) - bee-eaters (Meropidae) - honeyguides (Indicatoridae) - bears (Ursidae) - honey badgers (Mellivora capensis) - skunks (Mephitidae) - toads (Bufo) Anti-predator Adaptations: aposematic
One of Python’s most useful features is its interactive interpreter. This system allows very fast testing of ideas without the overhead of creating test files, as is typical in most programming languages. However, the interpreter supplied with the standard Python distribution is somewhat limited for extended interactive use. The goal of IPython is to create a comprehensive environment for interactive and exploratory computing. To support this goal, IPython has two main components: - An enhanced interactive Python shell. - An architecture for interactive parallel computing. All of IPython is open source (released under the revised BSD license). Interactive parallel computing Increasingly, parallel computer hardware, such as multicore CPUs, clusters and supercomputers, is becoming ubiquitous. Over the last three years, we have developed an architecture within IPython that allows such hardware to be used quickly and easily from Python. Moreover, this architecture is designed to support interactive and collaborative parallel computing. The main features of this system are: - Quickly parallelize Python code from an interactive Python/IPython session. - A flexible and dynamic process model that can be deployed on anything from multicore workstations to supercomputers. - An architecture that supports many different styles of parallelism, from message passing to task farming, all handled within a single framework. - Both blocking and fully asynchronous interfaces. - High-level APIs that enable many things to be parallelized in a few lines of code. - Write parallel code that will run unchanged on everything from multicore workstations to supercomputers. - Full integration with Message Passing Interface (MPI) libraries. - A capabilities-based security model with full encryption of network connections. - Share live parallel jobs with other users securely. We call this collaborative parallel computing. - A dynamically load-balanced task farming system. - Robust error handling.
Python exceptions raised in parallel execution are gathered and presented to the top-level code. For more information, see our overview of using IPython for parallel computing. Portability and Python requirements As of the 0.11 release, IPython works with Python 2.6 and 2.7. Versions 0.9 and 0.10 worked with Python 2.4 and above. IPython now also supports Python 3, although for now the code for this is separate, and kept up to date with the main IPython repository. In the future, these will converge to a single codebase which can be automatically translated using 2to3. IPython is known to work on the following operating systems: - Most other Unix-like OSs (AIX, Solaris, BSD, etc.) - Mac OS X - Windows (CygWin, XP, Vista, etc.) See here for instructions on how to install IPython.
What is Zeff in chemistry? Zeff, the effective nuclear charge (also written Z*), measures the positive nuclear charge actually felt by a valence electron once the shielding of the other electrons is taken into account. You can estimate Zeff using Slater's rules and the equation Zeff = Z - S, where Z is the atomic number and S is the screening constant obtained from the electronic configuration. Example (Na): write the configuration in Slater groups, (1s)^2 (2s,2p)^8 (3s)^1. For the 3s electron, S = (2 x 1.00) + (8 x 0.85) = 8.8, so Zeff = 11 - 8.8 = 2.2, close to the measured value. Example (O): for a valence electron on oxygen, S = 3.45, so Zeff = 8 - 3.45 = +4.55; the effective nuclear charge experienced by a valence electron on an O atom is +4.55. Example (ions): Cl- and K+ share the argon configuration (1s)^2 (2s,2p)^8 (3s,3p)^8, for which S = 11.25, giving Zeff = 17 - 11.25 = 5.75 for Cl- and 19 - 11.25 = 7.75 for K+. The extra electron gained by Cl- adds shielding and lowers Zeff, which is one way to see why an anion is larger than the isoelectronic cation. Zeff also connects to orbital energies: they scale roughly as -Zeff^2/n^2, so as Zeff increases an electron is bound more tightly, while as the principal quantum number n increases it is bound less tightly. Ionization energies are roughly proportional to Zeff^2, so a measured ionization energy can be used to back out an empirical Zeff. Within a shell, electrons with lower l quantum numbers penetrate closer to the nucleus and shield those with higher l quantum numbers. Reference: Atkins, P. W., and Julio de Paula. "13.10 Penetration and Shielding." Elements of Physical Chemistry. Oxford: Oxford UP, 2009. Print.
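The worked examples above can be collected into one small function. This is a deliberately simplified sketch of Slater's rules covering only an s/p valence electron (it ignores the special 0.30 factor within the 1s group and the separate rules for d and f electrons); `shells` lists the Slater-group populations from innermost to outermost:

```python
# Simplified Slater's rules for an s/p valence electron (assumption:
# same-group electrons shield 0.35, the n-1 shell 0.85, deeper shells 1.00;
# the 1s and d/f special cases are not handled).
def zeff(Z, shells):
    *inner, outer = shells
    S = 0.35 * (outer - 1)            # other electrons in the same group
    if inner:
        S += 0.85 * inner[-1]         # the n-1 shell
        S += 1.00 * sum(inner[:-1])   # deeper shells shield completely
    return Z - S

assert abs(zeff(11, [2, 8, 1]) - 2.20) < 0.01   # Na, as computed above
assert abs(zeff(8,  [2, 6])    - 4.55) < 0.01   # O
assert abs(zeff(17, [2, 8, 8]) - 5.75) < 0.01   # Cl-
```

Running the same function on K+ with the argon configuration reproduces the 7.75 quoted above.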
Assembly: Microsoft.Xna.Framework (in microsoft.xna.framework.dll) A texel represents the smallest unit of a texture that can be read from or written to by the GPU. A texel is composed of 1 to 4 components; specifically, a texel may use any one of the texture formats represented in the SurfaceFormat enumeration. A Texture2D resource contains a 2D grid of texels. Each texel is addressable by a (u, v) vector. Since it is a texture resource, it may contain mipmap levels. Figure 1 shows a fully populated 2D texture resource. Figure 1. Texture2D Resource Architecture This texture resource contains a single 3×5 texture with three mipmap levels.
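The figure's claim that a 3×5 texture carries three mipmap levels follows from repeatedly halving each dimension (flooring, clamped to 1) until the level is 1×1. A quick sketch of that arithmetic (plain Python, not XNA API code):

```python
# Number of mipmap levels for a width x height texture: each level halves
# both dimensions (integer division, clamped to a minimum of 1) until 1x1.
def mip_levels(width, height):
    levels = 1
    while width > 1 or height > 1:
        width, height = max(width // 2, 1), max(height // 2, 1)
        levels += 1
    return levels

assert mip_levels(3, 5) == 3      # matches Figure 1: 3x5 -> 1x2 -> 1x1
```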
Fig 1: The top three panels show typical photographic views of deep-sea species. The lower panels show views that more closely approximate the real view under natural light. Note that animals that appear quite conspicuous under the submersible floodlights, such as the starfish and the fish, are fairly cryptic under ambient illumination. Note also that certain animals, such as the small white starfish in the rightmost panels, are still quite visible under natural illumination. The blue color in the lower panels is merely added for effect, given that most vision by deep-sea species is monochromatic. Sönke Johnsen, PhD The world does not look the same to everyone. If we do not take this into account, we cannot really understand the lives of other animals. One of our goals during this mission is to see the ocean world as its inhabitants do, concentrating on two situations far from our own experience: 1) vision in the deep sea and 2) UV vision. As Figures 1 and 2 show, the visibility of an animal depends a great deal on the illumination. Deep-sea animals often appear quite conspicuous when photographed by full-spectrum cameras under full-spectrum illumination. In reality, however, they are viewed under nearly monochromatic blue light by animals without color vision. Fig 2: This deep-sea galatheid crab is highly visible under the full-spectrum submersible lights. At blue-green wavelengths, however, it is well camouflaged. At red wavelengths it is again quite conspicuous, because the disruptive coloration on the legs is no longer effective. These images (made by using only the blue channel of the digital imaging system) show that under these conditions the world looks quite different. Some species become quite well-hidden, others remain very obvious. Therefore, if we are to understand vision and coloration in the deep sea, we must view this world through the eyes of its inhabitants.
One of our goals on this mission is to do just this. Unfortunately, the light at the depth of the coral reef is too dim for good photography. Also, the sub pilots do not like to drive around in the dark! Therefore, we will cover the still camera of the Johnson-Sea-Link submersible with filters that will mimic the response of the typical deep-sea visual system under the ambient light found at 300 m depth. We determined what sort of filter to use by first measuring or obtaining the following: - the underwater spectrum at that depth (see below) - the spectral sensitivity of typical deep-sea inhabitants, measured by co-explorer Tammy Frank - the color of the lights on the submersible - the color sensitivity of the camera on the submersible These four pieces of information together tell us what filter we need to put in front of the camera. With this filter in place, we are now seeing the world like a fish in the deep. While we may not think of the shallow world as holding any visual mysteries, it does hold at least one -- ultraviolet (UV) light. This light is invisible to us, but not to many of the animals out there. In fact it looks as though about half the fish that live either near the surface or in coral reef habitats are able to see UV light. Fig 3: Simultaneous images of a coral reef at green and UV wavelengths. Note how the increased background light silhouettes the fish in the UV image. A common reason given for UV vision is that it may help animals find food. While seeing any new wavelength band helps, UV light is special in a few ways. First, opaque animals are more visible in the UV because the background water near the surface is quite bright at these wavelengths (Figure 3). Second, while many plankton are clear as glass at visible wavelengths, they are often quite visible in the UV due to increased light scattering and the presence of UV-protective pigments (i.e. suntan lotion).
In fact, the presence of both UV light and predators with UV vision presents a real problem for transparent plankton – do I protect myself from the sun, or from animals that want to eat me? We will study this problem by filming transparent animals underwater using specialized UV cameras and by measuring the underwater UV levels using spectroradiometers.
I think I've found a concise definition of type safety. I found it on the C2 wiki, which is a great source of programming-related info. Anyway, here it is: Any declared variable will always reference an object of either that type or a subtype of that type. A more general definition is that no operation will be applied to a variable of a wrong type. There are additionally two flavors of type safety: static and dynamic. If you say that a program is type safe, then you are commenting on static type safety. That is, the program will not have type errors when it runs. You can also say that a language or language implementation is type safe, which is a comment on dynamic type safety. Such a language or implementation will halt before attempting any invalid operation. Taking the first sentence, this rules out any type of conversion, so int → float is type unsafe, and it rules out any type of dynamic cast too. That basically rules out C, C++, Java and C# as type safe. Moving on to the main paragraph, we see that as well as static type safety there is also the concept of dynamic type safety. Using this as our benchmark still rules out C and C++, but deems Java and C# to be dynamically type safe (if we choose to ignore the issues surrounding conversions of primitives, of course). This laxer definition of type safety also includes languages like Smalltalk, Python and Ruby. So all modern OO languages are dynamically type safe. If this is true, what is the dynamic-versus-static typing debate all about? Is type safety an oxymoron? Reading on further on the C2 wiki: There are various degrees of type safety. This is different from TypeChecking. See also StronglyTypedWithoutLoopholes, which is another term for (at least) dynamic type safety. So using the "degrees of type safety" argument, Java could be said to be more type safe than, say, Smalltalk. This kind of makes sense, since even though Java is not fully statically type safe, it is partially so. So type safety is relative.
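The dynamic type safety described above is easy to demonstrate in one of the dynamically type safe languages just mentioned. In Python, an invalid operation halts with a TypeError at runtime rather than being performed on the wrong kind of value:

```python
# Dynamic type safety in action: Python halts with a TypeError before
# applying an operation to a value of the wrong type, instead of silently
# misinterpreting it as an unsafe language might.
def safe_add(a, b):
    try:
        return a + b
    except TypeError:
        return None   # the invalid operation was halted, not performed

assert safe_add(1, 2) == 3
assert safe_add("3", 4) is None
```

This is exactly "StronglyTypedWithoutLoopholes": the error is caught at the moment of the invalid operation, not at compile time.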
So you can rate languages on their degree of type safety. Statically typed languages are generally more type safe than dynamically typed languages. If you click on the link CategoryLanguageTyping you will find out that what we usually refer to as static typing isn't actually called static typing at all; the proper name is Manifest Typing. Static Typing means something else and includes Type Inference. Given the common use of the term static typing, I have chosen up to now not to use the proper term, which is in fact Manifest Typing.

So what does all this buy us? At best we are partially type safe if we choose to use a language like Java. Partially? Is that useful? Either I'm safe or I'm not, right? For example, when releasing to production, I can't tell the QA Manager that I believe my program is partially safe. He wants to know whether my program is safe. So how do I know that my program is safe? Simple: I test it!

I could go into strong versus weak typing and the consequences, but the links are there if you're interested. No program is type safe, and to claim so is a bit of an oxymoron. IMO typing is no substitute for well-thought-out tests, but type checks can help to detect and track down bugs (either at compile time or runtime). Where I believe manifest typing is useful is in improving the readability of code, improving the comprehension of large systems, and improving tooling support for code browsing and editing. Examples of this are the code completion and refactoring features in Eclipse. Smalltalk has these features too, but with manifest type annotations, tools have that much more information to work with. The downside of manifest typing is that all type systems use 'structural types'. Structural types are based on the structure of your code. Depending on the code annotations available, manifest structural types can limit expressiveness.
This is why languages like Scala have invented a more expressive set of type annotations, to overcome the type constraints imposed by languages like Java. Strongtalk's type annotations are even more expressive. This had to be the case because the Strongtalk type annotations had to be applied to the existing Smalltalk 'blue book' library, and this was originally written without any manifest type constraints whatsoever. The other downside of manifest types is that your code is more verbose.

So ideally what you want is:

* Manifest type annotations that can express intent and do not constrain you (or no type annotations at all, or type inference)
* Strongly typed without loopholes at runtime
* Tests that tell you whether your code is safe

Type safe doesn't exist, and partial type safety is a poor substitute for the above!
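To make the static/dynamic distinction concrete, here is a minimal sketch in Python (a hypothetical example, not from the article): a dynamically type safe language lets an ill-typed call compile and even start running, but halts with a type error before performing the invalid operation, rather than silently misinterpreting the value as C would.

```python
# Dynamic type safety: the runtime refuses to apply an operation to a
# value of the wrong type and raises TypeError instead of proceeding.

def safe_add(a, b):
    """Add two values; Python checks the operand types only at runtime."""
    return a + b

print(safe_add(2, 3))            # fine: both operands are ints

try:
    safe_add(2, "three")         # an invalid operation on mixed types
except TypeError as e:
    print("halted before an invalid operation:", e)
```

A statically type safe language would reject the second call before the program ever ran; Python instead guarantees only that the bad operation never actually executes.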
Returns the length in characters of the value of EXPR. If EXPR is omitted, returns the length of $_. If EXPR is undefined, returns undef. Like all Perl character operations, length() normally deals in logical characters, not physical bytes. For how many bytes a string encoded as UTF-8 would take up, use length(Encode::encode_utf8(EXPR)) (you'll have to use Encode first). See Encode and perlunicode.
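The character-versus-byte distinction the documentation describes can be sketched in Python for illustration (an analogy, not Perl itself): `len()` on a string counts logical characters, while `len()` on its UTF-8 encoding counts physical bytes, much like `length(EXPR)` versus `length(Encode::encode_utf8(EXPR))`.

```python
# 'café' is 4 logical characters, but 'é' occupies two bytes in UTF-8,
# so the encoded form is 5 bytes long.
s = "caf\u00e9"
print(len(s))                    # characters: 4
print(len(s.encode("utf-8")))    # bytes: 5
```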
The Sun's energy, of course, comes from fusion. I think there's a small and totally insignificant amount of fission going on as well. The majority of the Sun's mass is hydrogen, and the vast majority of what isn't hydrogen is helium (with the ratio changing over billions of years as hydrogen is fused into helium). But since the Sun formed from the same nebula as Earth and the other planets, it should contain all the same elements that Earth has. According to this web page, "About 67 elements have been detected in the solar spectrum." Thus it's very likely that the Sun contains small amounts of uranium-235. The environment is such that it can't form enough of a concentrated mass to trigger a fission chain reaction, but U-235 can, with a low probability, decay by spontaneous fission. Other very heavy isotopes can do the same thing. I've never heard of any research that indicates that spontaneous fission actually occurs in the Sun, but given its composition it seems almost inevitable that it would, in trivial and insignificant quantities. For all practical purposes, the answer is no, there is no significant nuclear fission in the Sun. But strictly speaking, yes, there probably is. Even if it were significant, it wouldn't produce any kind of "perpetual motion". To do that, you'd need, for example, element A fusing into element B, and then element B fissioning back into element A. That can happen (rarely), but it can't produce net energy; if one of the reactions produces energy, the other must absorb energy. Oh yes, also hydrogen bombs (link is to Martin Beckett's answer). The fission and fusion are both exothermic (i.e., they produce energy), but the fission applies to very heavy elements and the fusion to very light ones.
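A back-of-the-envelope check (my own arithmetic, not part of the answer above) confirms that fusing hydrogen into helium is exothermic: four protons are heavier than one helium-4 nucleus, and the mass defect appears as energy via E = mc².

```python
# Masses in unified atomic mass units (u); 1 u = 931.494 MeV/c^2.
M_PROTON = 1.007276    # u
M_HELIUM4 = 4.001506   # u, helium-4 nucleus
U_TO_MEV = 931.494

mass_defect = 4 * M_PROTON - M_HELIUM4   # ~0.0276 u lost per helium made
energy_mev = mass_defect * U_TO_MEV      # ~25.7 MeV released
print(round(energy_mev, 1))
```

(The full proton-proton chain releases slightly more, about 26.7 MeV, once positron annihilation is included.)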
Tuesday, February 16, 2010 Oceans: Earth Ball Catch Beg, borrow, steal or buy an inflatable globe or other spherical representation of the Earth that can be tossed around the classroom. Have students throw the globe to one another around the room. When a student catches the globe, he/she should look to see if his/her left thumb is on water or land. The student will call out "land" or "water" and the teacher (or another student) keeps a tally of land and water catches on the board. At the end of the game, analyze the data and you should find that about 70% of the time, the student's thumb was on water. A great introduction to a study of oceans and water - emphasizing the large percentage of the Earth that is covered with water.
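For teachers who also cover probability, the game is easy to simulate (a hypothetical sketch; the 70% figure comes from the activity above): each catch is an independent trial with a 70% chance the thumb lands on water, so a long tally should converge on roughly 70% water catches.

```python
# Simulate many globe catches; each has a 0.70 probability of "water".
import random

random.seed(42)                  # fixed seed for a repeatable classroom demo
catches = 10_000
water = sum(1 for _ in range(catches) if random.random() < 0.70)
print(f"water fraction: {water / catches:.2f}")
```

With only a classroom-sized number of tosses the fraction will wobble more, which is itself a nice talking point about sample size.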
Single-channel recording is achieved by pressing a fire-polished glass pipette, which has been filled with a suitable electrolyte solution, against the surface of a cell and applying light suction. Under such conditions, the glass pipette and the cell membrane will be less than 1 nm apart. A tight seal has two advantages: 1) better electrical isolation of the membrane patch, and 2) a high seal resistance reduces the current noise of the recording, permitting good time resolution of single-channel currents, whose amplitude is on the order of 1 pA.

Classically, three different configurations of the patched membrane can be used for single-channel recording: cell-attached, outside-out and inside-out patches. In the cell-attached configuration, the pipette contacts the cell membrane and forms a gigaohm seal. Long-term stable recordings with low background noise can be performed in this configuration with minimal disruption to the intracellular milieu. In the outside-out configuration, the external surface of the patch is exposed to the external recording media, offering the opportunity to repetitively expose the channels to different drugs at various concentrations. In the inside-out patch configuration, it is the internal face of the membrane that is exposed to the external solution. This provides access to intracellular receptor binding sites and also enables studies of intracellular signaling pathways.

It is now possible to record single-channel current activity from many cell types - from mammalian species, insects, invertebrates and also plants. The recording of single-channel currents enables detailed kinetic analyses of native and recombinant ion channels, including those that have been subject to natural or intended mutations to their structure.
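The link between seal resistance and noise can be made quantitative with a small sketch (illustrative values and bandwidth of my choosing, not from the text): the thermal (Johnson) current noise of a seal resistance R over bandwidth B is i = sqrt(4·k·T·B / R), so a gigaohm seal keeps the background well below the ~1 pA single-channel currents.

```python
# Thermal current noise of the pipette-membrane seal resistance.
import math

K_BOLTZMANN = 1.380649e-23   # J/K
T = 295.0                    # K, room temperature
B = 1_000.0                  # Hz, assumed recording bandwidth

def thermal_current_noise(resistance_ohms):
    """RMS Johnson current noise in amperes for a given seal resistance."""
    return math.sqrt(4 * K_BOLTZMANN * T * B / resistance_ohms)

for r in (1e9, 10e9):        # compare a 1 gigaohm and a 10 gigaohm seal
    print(f"{r / 1e9:.0f} Gohm seal: {thermal_current_noise(r) * 1e12:.3f} pA rms")
```

A tenfold tighter seal cuts the noise by sqrt(10), which is why gigaseal quality matters so much for resolving picoamp events.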
Science Fair Project Encyclopedia Johann Deisenhofer (born September 30, 1943) is a German biochemist who, along with Hartmut Michel and Robert Huber, received the Nobel Prize for Chemistry in 1988 for their determination of the structure of a membrane-bound complex of proteins and co-factors that is essential to photosynthesis. Deisenhofer earned his doctorate from the Technical University Munich for research work done at the Max Planck Institute for Biochemistry in Martinsried, West Germany, in 1974. He conducted research there until 1988, when he joined the scientific staff of the Howard Hughes Medical Institute and the faculty of The University of Texas Southwestern Medical Center at Dallas. Together with Michel and Huber, Deisenhofer determined the three-dimensional structure of a protein complex found in certain photosynthetic bacteria. This membrane protein complex, called a photosynthetic reaction center, was known to play a crucial role in initiating a simple type of photosynthesis. Between 1982 and 1985, the three scientists used X-ray crystallography to determine the exact arrangement of the more than 10,000 atoms that make up the protein complex. Their research increased the general understanding of the mechanisms of photosynthesis and revealed similarities between the photosynthetic processes of plants and bacteria. The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.
McIntosh's Law of Thermodynamics

by Derek Potter

Creationists often say that evolution is contrary to the second law of thermodynamics (2LT or SLOT) and hence impossible. Such a statement is liable to be met with uncomprehending stares from physicists who are not familiar with creationism. As its name suggests, thermodynamics is about the flow of heat. Why on earth should a law about heat flow have any bearing on whether evolution is possible? It turns out that 2LT refers to a physical quantity called entropy, which is sometimes loosely described as a measure of the microscopic disorder in a system. Entropy is also an indicator of the 'mixed-upness' of a system and indicates thermal energy that is no longer available for conversion into useful work. Entropy cannot decrease spontaneously in an isolated ("closed") system. Hence, say creationists, order cannot arise spontaneously in nature. The argument goes back at least as far as Henry Morris's The Remarkable Birth of Planet Earth, 1974, and is hinted at in The Genesis Flood (Whitcomb and Morris, 1961).

However, it is obvious that this argument is wrong for at least three reasons. Firstly, living things are not isolated systems. Individual organisms are not isolated because they feed off sunlight or each other. Secondly, their genetic information, which is what we are interested in, is not a thermodynamic system at all. Thirdly, the meaning of the word "disorder" in 2LT is very precisely defined in terms of microstates. It has nothing to do with structure or function in living things. Any of these alone is sufficient to refute the Argument from 2LT. Thus the argument is traditionally answered by a one-liner: "living things are not a closed system". End of subject. Nevertheless, one cannot help but feel a twinge of sympathy for creationists who see their pet sophistry swept aside on a technicality about heat flow.
It doesn't seem to stop them though, as they continue to tell us that new information cannot arise in nature. By now, of course, we are light-years away from thermodynamics. It is true that Shannon information (e.g. data on a computer disk) has a mathematical parallel to thermal entropy. Yet the irony is that 2LT, if it applied at all, would actually mean that such information can only ever increase spontaneously! But in any case, Shannon information has nothing to do with the meaning or usefulness of the information as required to make it relevant to evolution. Again, creationists seem to miss these two rather important points, either of which instantly invalidates their argument.

So there the matter would probably have rested were it not for the fact that creationists have a specialist in thermodynamics, Professor Andrew McIntosh, in their ranks. Non-UK readers should note that in the UK, "professor" is not merely a courtesy extended to all teachers; it is a prestigious title bestowed in recognition of someone's authority in a subject. If anyone knows about 2LT it should be McIntosh. Yet the Argument from Authority must surely be undermined by a press release from his employers, the University of Leeds, saying "Professor Andrew McIntosh's directorship of Truth in Science, and his promotion of that organisation's views, are unconnected to his teaching or research at the University of Leeds in his role as a professor of thermodynamics. As an academic institution, the University wishes to distance itself publicly from theories of creationism and so-called intelligent design which cannot be verified by evidence." Compartmentalization of one's ideas is one thing, but it is clear that McIntosh does not play that game when away from work. A comprehensive summary of his theories is to be found on the Answers in Genesis website under the heading Andrew McIntosh, mathematics. This is an extract from one of his books, In Six Days.
The document misapplies quite simple physics to "prove" irreducible complexity and intelligent design. Given McIntosh's acknowledged authority in thermodynamics, it behoves us to see exactly what he is saying. This will require reference to the basic physics behind thermodynamics, namely statistical mechanics. The subject itself is non-controversial and can be checked out in Wikipedia [see this link] or an undergraduate text. The problems arise in deciding where exactly its theorems can be applied and where they cannot.

McIntosh devotes the opening section to a discussion of World Views. This has no bearing on science, though one gets the impression he is trying to soften his readers up into a frame of mind that is more receptive to the subsequent material. In particular, it seems we need to be open to the idea of an invisible unnamed thing that likes to tinker with biochemistry when no-one's looking, its wonders to perform. Be that as it may, the next section is entitled Order and the Second Law of Thermodynamics, which sounds more promising.

Here, McIntosh describes entropy in strictly thermal terms and is very clear that 2LT is inviolate. He then sweeps on to assure us that "entropy is effectively a measure of the disorder in that system"... "In overall terms, disorder increases, cars rust and machines wear out." He then applies this principle to the genome. Finally, for no obvious reason, he cites Prigogine, saying "Despite attempts by G. Nicolis [and I.] Prigogine and coworkers to find auto-organization by random processes within living creatures, sustained order can never be achieved, because no new information is available." Now this is a truly amazing sequence. 2LT is a law. It cannot be broken. The principle that things fall apart is a homely truism about man-made objects. It is already well documented as Sod's Law.
McIntosh subsumes both in a universal principle saying "There is a fundamental law in the universe to which there is no known exception... In overall terms, disorder increases... No spontaneous reversal of this process has ever been observed for a closed system." Which is true - as long as you mean microscopic disorder. However it is clear that McIntosh wants to apply it to macroscopic things since he uses machines breaking down as the example. So important is this "fundamental law" - hitherto unknown to science - that it deserves a name in honour of its discoverer: McIntosh's Law. Unfortunately it is clearly untrue. All systems without exception are subject to 2LT. However, 2LT does not forbid macroscopic order from arising spontaneously. Just shake some mud with some water and leave it to stand. Lo and behold, the particles sort themselves out and you get beautiful layers forming at the bottom of the jar, coarse material at the bottom, fine stuff at the top. McIntosh's examples are selective. Cars certainly rust and machines do wear out - but crystals form and embryos grow as well. Both 2LT and McIntosh's examples illustrate a fairly simple principle called the Equal Probability Assumption (EPA), which is made in statistical mechanics. Briefly it says that all microstates are equally likely. Obviously, if we see a system with a certain amount of "order" imposed on it, most microstates are ruled out: only the ones that meet our criterion for order are possible. However, since nature has no preferences, it follows that, given enough time for it to settle, the system invariably ends up in one of the much more abundant "disordered" states. This is the basis of 2LT. The notion of order only comes into it because, by definition, order is only a small subset of all possible conditions. This leads to the popular illustration "rooms get untidy, they don't tidy themselves". 
The reason is that there are many more ways for a room to be untidy than there are for it to be tidy, and the room has no overwhelming preference for some states over others. The EPA itself is sufficient as a basis for statistical mechanics. It can be precisely justified in the case of microstates because the fundamental laws governing the evolution of the system are time-reversible - which leads fairly naturally to the EPA through some rather mathematical reasoning. (Mathematicians would probably work with volumes of phase space rather than "numbers of microstates".) However, the EPA is not justified in the case of states defined on macroscopic variables. Temperature and pressure, for example, just define a phase space of two dimensions. The phase space of the microstates, where the EPA applies, may have on the order of Avogadro's number of dimensions! Not surprisingly, nature does prefer states where the temperature and pressure are evened out: there are many more such "disorderly" microstates than "orderly" ones where the gas molecules are rounded up into a corner of the box. As a matter of fact, the same reasoning could be applied to any identifiable subset of the microstate ensemble. If we choose to label some states as "gruesome" then the EPA entails that gruesomeness cannot spontaneously increase in an isolated system.

McIntosh's example of rusting is not fundamental physics either. In fact, the chemical reaction is a complicated re-arrangement of electronic orbitals and the process involves the release of energy as heat. The heat energy has countless possible modes, both as photons radiated away and as phonons vibrating the solid. Hence the process is unlikely to reverse itself significantly, though the odd molecule may very well break up in a spontaneous reversal from time to time. McIntosh's Law therefore applies in many situations, but especially in cases where humans have a preference for particular states of the system and nature does not.
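The microstate bookkeeping behind the EPA can be made concrete with a toy count (my own illustration, not McIntosh's): put N gas molecules at random into the left or right half of a box. Every microstate is equally likely, yet the "ordered" state with all molecules on one side is a single microstate out of 2^N, so near-equal splits utterly dominate.

```python
# Count microstates for N molecules distributed over two half-boxes.
from math import comb

N = 50
total = 2 ** N                  # every left/right assignment is one microstate
all_left = 1                    # exactly one way to pack everything left
near_even = sum(comb(N, k) for k in range(20, 31))  # 20-30 molecules on the left

print(f"P(all left)   = {all_left / total:.1e}")    # vanishingly small
print(f"P(20-30 left) = {near_even / total:.2f}")   # the overwhelming majority
```

No preference for disorder is needed; the "disordered" macrostates simply contain almost all of the equally likely microstates.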
McIntosh puts it the other way round, implying that nature actually abhors order. This is incorrect. McIntosh's Law arises precisely because, in most cases, nature has no preferences at all. There is no mysterious "fundamental law" about it. But what of those instances when order does arise spontaneously? Crystallisation is a good example. There are even some substances, like ceric sulphate, that are less soluble in hot water than cold. Classical thermodynamics tells us the entropy increases as heat is added. Yet, in this case, the crystalline "order" increases at the same time. Clearly this is another exception to McIntosh's Law. 2LT and McIntosh's Law are not even remotely interchangeable. In some cases, nature does have preferences (though never at the microscopic level). Nature has a preference for things to lie on the ground. Apples and walls fall down and, what is more, they stay down. This is because the laws of mechanics are not reversible: kinetic energy is dissipated as heat on impact. Otherwise the apples would bounce for ever. Yet on one view, apples lying all at the same height is more orderly than having them bouncing all over the place. The order arises because macroscopic collisions are dissipative - irreversible. Thus nature has a preference for some states: the ones that there is no escape from. Gravity without dissipation just leads to bouncing and no orderly layers of apples on the ground. McIntosh's Law ought to apply to piles of apples but it is generally not held to do so because it only applies to certain types of order. One must assume that McIntosh means states that create the "appearance of design". Machines wear out, living things mutate and die out. The last steps in the argument take us from McIntosh's Law to some arcane work that failed to produce auto-organization, then, finally, to his interpretation that this failure was "because no new information is available". 
The only hint as to what he means by that is the preceding statement: "That which is dead ... has no teleonomy within it to convert the sun's energy to useful work". It is beginning to look as though he is not talking about information at all but purpose. Yet he wants to apply McIntosh's Law to purpose itself: presumably because it always works at the microscopic level of 2LT and often works in everyday situations where we prefer machines to be free from rust.

This impression is confirmed by the next section entitled Entropy, information and the living world. Here McIntosh argues against abiogenesis by chance, a pointless task really as no-one suggests that life arose that way - unless, of course, the universe is ridiculously big, in which case almost anything will happen by chance somewhere. But it is clear that McIntosh is suggesting that the organisation of the first replicators needed a lot of structure, function or purpose and that such things cannot have arisen spontaneously because it's contrary to his Law - and his Law is proved by the fact it works for 2LT. Oddly enough, he grudgingly acknowledges that science doesn't invoke pure chance at all but suggests a kind of molecular evolution that preceded cellular life. He says "Though Dawkins has argued for a seemingly endless series of small advantageous mutations singled out by natural selection operating at the micro level there are formidable arguments against his position" and then proceeds to cite Behe's ideas of irreducible complexity to claim that chance is the only mechanism available to orthodox science. "The microbiologist Behe has also ably rebutted Dawkins... there is no mechanism in Darwinian evolution to add new information to a species at the macro level." Which, of course, is precisely where Behe is wrong, since there clearly is - that is the whole point of Darwin's theory: that "macro" changes arise out of many "micro" changes.
Behe, of course, compounds his own error by asserting that changes to the "information" do occur but they are always harmful. McIntosh does not distinguish between abiogenesis and evolution when discussing the DNA code. This, then, is the build-up to irreducible complexity. Since all scientists except creationists agree with Dawkins and most laugh at Behe, why does McIntosh attack this strawman? Thus McIntosh as good as says '2LT is really an unbreakable law about entropy but it's also a manifestation of the rule of thumb that everything degrades. So the rule of thumb is an unbreakable law and the genome must degrade.'

In the next section entitled Flight in the natural world - complexity all can observe, McIntosh demonstrates that he cannot imagine how flight could evolve, from which he deduces that it did not. For example: "One can envisage the odd scenario of the supposedly half-evolved hummingbird either with the ability to hover and a sparrow beak, unable to feed, or the long beak but no ability to hover, which would mean flying into the flower with no ability to stop!" It just isn't worth analysing such a simplistic Argument from Incredulity.

The rest of the article is predominantly more of the same: a flat refusal to admit that "information" - in the loose sense of function - is added to the genome all the time by mutation and natural selection. McIntosh's thinking is quite clearly revealed towards the end: "Those [examples] which have been considered in this chapter could be added to by many more intricately balanced mechanisms which overwhelmingly testify to a creative hand." Only they don't. They only ever occur in systems which differ radically from man-made machines. The systems replicate. No lathe ever made another lathe except by the agency of a human operator. Living things do it on their own. If McIntosh could demonstrate that living things popped up fully formed then he could argue that their design speaks of intelligence.
But since they replicate spontaneously, with variation and natural selection, his analogy is completely false. After that, the fact that his argument cannot be traced back to thermodynamics but only to a law of his own devising may come as no surprise. © Derek Potter 2007

Notes: There is a discussion of Derek Potter's article above in the BCSE's public forum, in the Science sub-forum. Please feel free to join the forum and comment, make suggestions, debate and so on. Science Just Science has a pretty good article on its wiki about the abuse by creationists of the 2nd Law of Thermodynamics. This is a pretty good primer for those trying to get their heads around it and can be found at http://www.justscience.org.uk/wiki/tiki-read_article.php?articleId=15.
Testing an application prepared to run concurrent code can become a nightmare for old-fashioned testing platforms. Multicore testing requires new techniques, new expertise and new hardware. For example, you cannot guarantee a parallelized application's accuracy by testing it on computers with single-core microprocessors.

I'm going to borrow a sentence from Bram Stoker's "Dracula": We learn from failure, not from success! One of the most frustrating experiences with multicore programming is a parallelized application generating unexpected random problems. If this application had already passed the testing process successfully, it would be an even more annoying situation. Why could this happen? Because testing techniques also have to go parallel.

Usually, the best computers (workstations or servers) are dedicated to running the final version of the applications. Nowadays, there is a great probability of having at least four or more logical processing cores in a server (four hardware threads). You can parallelize an existing algorithm and you can debug it using a dual-core CPU (two logical processing cores, two hardware threads). Then, an extensive testing process could be performed on many different dual-core computers (again, two logical processing cores, two hardware threads). The application could offer accurate results; it could work as expected. However, when running the application on the server, something could go wrong. A hidden bug could appear - a bug generated by an unexplored concurrency. Two hardware threads do not guarantee real concurrency all the time the algorithm is scheduled to run in parallel. The great problem is the operating system: the scheduler, the kernel and all the other processes and software threads that are competing for processing time. They can prevent some real concurrency from happening, because two threads are not always running in parallel. This situation can mask some concurrency bugs. It's a question of time.
Some instructions are not running in parallel - they are not running at the same time - because other threads are stealing processing time. However, when you move to the parallel processing power offered by the server, the additional hardware threads (logical cores) offered by this computer will enable the software threads to run in parallel. Hence, real concurrency will happen. Pure concurrency bugs will appear because the instructions that produce the problem will run at exactly the same time.

How can you detect these pure concurrency bugs? You have to use the appropriate hardware to let real parallelism happen. You cannot test a parallelized algorithm running on single-core microprocessors. You need more logical cores, more hardware threads. You have to use the adequate hardware according to the kind of parallelization you're willing to create. It doesn't mean that you need 256 logical cores to develop an application that could be capable of scaling to this number of cores. However, it means that sometimes two logical cores aren't enough. Once you face this kind of horrible and difficult-to-detect bug, you'll learn to create better parallelized algorithms. You'll learn many things from failure.

The recently launched Intel Parallel Studio offers an excellent toolbox to detect these bugs. It is available for the C/C++ programming languages. Most modern IDEs are adding features to help developers detect and solve these bugs. However, I do believe Intel Parallel Studio is the most complete toolbox. I'd love to see versions for .NET and the JVM (Java Virtual Machine) in the future. Don't forget to check your testing platforms and environments before deploying the final version of a parallelized application. Doing so, you'll avoid terrifying concurrency nightmares.
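The bug class described above - a problem that only appears when the racing instructions truly overlap - can be sketched deterministically (a hypothetical example in Python, not from the post): two threads each read a shared counter, pause in the window between read and write, then write back a stale value. The pause stands in for the overlap that extra hardware threads make likely.

```python
# A lost-update race: both threads read the counter before either writes,
# so one increment is silently lost and the final value is 1, not 2.
import threading
import time

counter = 0

def unsafe_increment():
    global counter
    value = counter          # read the shared state
    time.sleep(0.05)         # window in which the other thread also reads
    counter = value + 1      # write back a now-stale value

threads = [threading.Thread(target=unsafe_increment) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)               # 1: one increment was lost
```

On a single hardware thread the same read-modify-write sequences may happen to serialize and the bug never fires; real parallelism removes that accidental protection, which is exactly why testing hardware matters.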
Wayne Stollings wrote: The recent CO2 concentration increase in the atmosphere. The seasonal cycle and the average trend. Looking at the seasonal cycle, we see the true CO2 absorptive ability of the biosphere. The present 295 ppm is significantly higher than the 180 ppm average of the last 1.1 million years, and significantly higher than the 230 ppm of the max in the range of that period. Yet you still have denialists here thinking or pushing that humans are not to blame for the malevolent climate changes of now and worse in the relatively near future. This used to be a forum free of such nonsense as spewed by snow and cotton.

Monitoring stations across the Arctic this spring are measuring more than 400 parts per million of the heat-trapping gas in the atmosphere. The number isn't quite a surprise, because it's been rising at an accelerating pace. Years ago, it passed the 350 ppm mark that many scientists say is the highest safe level for carbon dioxide. It now stands globally at 395. So far, only the Arctic has reached that 400 level, but the rest of the world will follow soon. "The fact that it's 400 is significant," said Jim Butler, global monitoring director at the National Oceanic and Atmospheric Administration's Earth System Research Lab in Boulder, Colo. "It's just a reminder to everybody that we haven't fixed this and we're still in trouble." Carbon dioxide is the chief greenhouse gas and stays in the atmosphere for 100 years. Some carbon dioxide is natural, mainly from decomposing dead plants and animals. Before the Industrial Age, levels were around 275 parts per million. For more than 60 years, readings have been in the 300s, except in urban areas, where levels are skewed. The burning of fossil fuels, such as coal for electricity and oil for gasoline, has caused the overwhelming bulk of the man-made increase in carbon in the air, scientists say.
Found a new "Ultimate" guide for molecular cloning construct design. It seems useful to most researchers. This molecular cloning guide covers:

Subcloning: a technique used to move a particular gene of interest from a parent vector to a destination vector in order to further study its functionality. Use PCR reactions to amplify an insert DNA fragment, followed by subcloning (restriction endonuclease digestion of insert and vector DNA, then fragment ligation). Alternatively, use DNA oligos and PCR reactions to synthesize a gene without source DNA as template, followed by subcloning (restriction endonuclease digestion and fragment ligation).

TA TOPO Cloning: TA cloning is a subcloning technique that doesn't use restriction enzymes and is easier and quicker than traditional subcloning.

Directional TOPO Cloning: Use a PCR reaction to amplify a DNA fragment. The resulting PCR products have four additional bases (CACC) at the 5´ ends that come from the specially designed forward PCR primer. With a special ligation kit, this fragment is directly ligated into a linearized vector DNA (D-TOPO vector, which contains GTGG overhangs at the 5´ end) without pre-digestion with restriction endonucleases. The fragment can only be inserted in the forward orientation.

Recombination cloning: a cloning method based on the site-specific recombination of lambda bacteriophage.
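The restriction-digest step of subcloning can be sketched computationally (a toy example with a made-up sequence; EcoRI and its GAATTC site are standard, but the helper function is my own): the enzyme recognizes GAATTC and cuts after the first G on each strand, leaving complementary AATT overhangs that let insert and vector ligate.

```python
# Cut a DNA sequence at every EcoRI recognition site (G^AATTC on the
# top strand) and return the resulting fragments.
ECORI_SITE = "GAATTC"
CUT_OFFSET = 1               # EcoRI cuts between G and AATTC

def digest(seq):
    """Return the fragments produced by a complete EcoRI digest of seq."""
    fragments, start = [], 0
    pos = seq.find(ECORI_SITE)
    while pos != -1:
        fragments.append(seq[start:pos + CUT_OFFSET])
        start = pos + CUT_OFFSET
        pos = seq.find(ECORI_SITE, pos + 1)
    fragments.append(seq[start:])
    return fragments

print(digest("ATATGAATTCGGCCGAATTCTT"))
```

Each internal fragment here begins with AATT, the single-stranded overhang that base-pairs with a compatible cut in the destination vector before ligation.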
<urn:uuid:f29a6877-e260-4ea7-8493-4332c7067381>
3.21875
310
Comment Section
Science & Tech.
29.345049
In his lab, USC's Dave Hutchins is simulating possible future atmospheres and temperatures for the Earth. He says he's trying to figure out how tiny organisms that form the base of the food web will react to a more carbon-intense ocean. Burning fossil fuels doesn't just put more carbon into the atmosphere and help warm the climate. It's also changing the chemistry of sea water. KPCC's Molly Peterson visits a University of Southern California researcher who studies the consequences of a more corrosive ocean. Tailpipes and refineries and smokestacks as far as the eye can see in Los Angeles symbolize the way people change the planet's climate. They remind Dave Hutchins that the ocean's changing too. Hutchins teaches marine biology at USC. He says about a third of all the carbon, or CO2, that people have pushed into Earth's atmosphere ends up in sea water – "which is a good thing for us because if the ocean hadn't taken up that CO2 the greenhouse effect would be far more advanced than it is." He smiles. Hutchins says that carbon is probably not so good for the ocean. "The more carbon dioxide that enters the ocean the more acidic the ocean gets." On the pH scale, smaller numbers represent more acidity. The Monterey Bay Aquarium Research Institute estimates we've pumped 500 million tons of carbon into the world's oceans. Dave Hutchins at USC says that carbon has already lowered the pH value for sea water. "By the end of this century we are going to have increased the amount of acid in the ocean by maybe 200 percent over natural pre-industrial levels," he says. "So we are driving the chemistry of the ocean into new territory – into areas that it has never seen." Hutchins is one of dozens of scientists who study the ripples of that new chemistry into the marine ecosystem. Now for an aside. I make bubbly water at home with a soda machine, and to do that, I pump carbon dioxide into the water.
That is a very unscientific, informal, high-speed version of what's happened since people started burning fossil fuels, and that makes water more acidic. One big consequence of acid in the ocean is that it reacts with other chemicals and stunts coral's growth, and makes it harder for mollusks such as clams to grow shells. Just for fun, I'm carbonating some sea water, and dropping shells into the jars. I'll post pictures of the results here. Poking stuff to see how it reacts is one of Dave Hutchins' favorite things to do. "It's a manipulative experiment, we call it. You change something in a controllable way and that will give you a lot of insight into how the unperturbed system works," he says. In his lab at USC, Dave Hutchins shows me containers of water – labeled with numbers that represent years. He carefully pumps measured mixes of air into those containers – to imitate a more carbon-intense atmosphere in the future – and he monitors tiny organisms called phytoplankton to see how they do. His lab is a more controlled environment than the open ocean. "Everything is changing for them out there right now. Not just acidity, or temperature. Not just the pH. But everything that supplies them," Hutchins says, getting more animated. "Nutrients, light, other organisms they have to compete with are shifting around, and it's important to understand how they're going to react." Hutchins says a more corrosive ocean chemistry could shift the entire food web. At the bottom of that web, drifting plant-like organisms may feed well off chemicals that become more common. Further up the food chain, Hutchins says, sea snails and other animals with shells may struggle – and that could spell trouble for the Pacific salmon and larger fish that eat them. He says ocean acidification is like several experiments at once, out of control, overlapping and nowhere near over. "That CO2 is coming from us, it's coming from China, India, deforestation in Brazil and Southeast Asia," he says.
"And it’s entering the global ocean, but the impacts of that are going to make themselves felt on a local basis." Dave Hutchins says it's alarming that he and other scientists find it so hard to predict how chemistry will change the ocean in the next hundred years. But he adds that unpredictability also gives him good reason to work harder at finding out.
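The "200 percent" figure can be made concrete with the pH arithmetic. pH is the negative base-10 logarithm of hydrogen-ion concentration, so a given percent increase in acidity corresponds to a fixed pH drop. The starting pH of 8.2 in this sketch is a commonly cited pre-industrial surface-ocean value, used here purely for illustration, not a number from the transcript:

```python
import math

def h_ion(ph: float) -> float:
    """Hydrogen-ion concentration (mol/L) implied by a pH value."""
    return 10.0 ** (-ph)

def percent_more_acidic(ph_before: float, ph_after: float) -> float:
    """Percent increase in [H+] when pH moves from ph_before to ph_after."""
    return (h_ion(ph_after) / h_ion(ph_before) - 1.0) * 100.0

# A ~200% increase in [H+] means tripling it, which is a pH drop of
# log10(3), roughly 0.48 units -- regardless of the starting pH.
drop_for_tripling = math.log10(3)
print(round(percent_more_acidic(8.2, 8.2 - drop_for_tripling)))  # 200
```

The logarithmic scale is why a seemingly small pH change reads as a large percentage: even a drop of just 0.1 pH units is about a 26 percent increase in hydrogen-ion concentration.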
<urn:uuid:0b32932b-036c-4faf-86d4-c9423fbe40c9>
3.71875
936
Audio Transcript
Science & Tech.
54.309302
Overview of Atomic Units
Two of the more important atomic units are the Bohr radius and the hartree. It is often useful to express the hartree, a unit of energy, in more familiar terms, specifically joules and/or kilocalories per mole. A table of constants and conversion factors is provided to assist in converting values appropriately. © Copyright 1999-2000 The Shodor Education Foundation, Inc.
<urn:uuid:2873ac72-6d69-4a8d-8c77-a774235aebe7>
3.34375
95
Knowledge Article
Science & Tech.
31.161033
Freshwater Pearl Mussel Habitat and Distribution The freshwater pearl mussel has very specialised habitat requirements which are found in many upland rivers in Scotland. Pearl mussels live partially or totally buried in coarse sand or fine gravel, often around boulders and other large rocks that help to stabilise the river bed. Pearl mussels also require fast-flowing water that is free from pollution. But, despite having a shell that is made from calcium, they are only found in water that is low in this mineral. Hence their original distribution in Scotland reflects that of non-calcareous rock. They were once found throughout Scotland except for the Tweed catchment, parts of the Central belt and the sandstone area of Caithness. But now, due to a number of threats, pearl mussel distribution has been reduced to a few remnant populations in Southern Scotland and some more abundant populations in the Highlands. Pearl mussels are known to be sensitive to pollution, with lower oxygen levels, slight increases in nutrient levels, silt and heavy metals all causing damage. As mussels often live buried in the river bed and require clean flowing water, anything that clogs up the spaces in the sediment can be extremely damaging. This is particularly true for juveniles, which most often live entirely buried in the river bed and can be the most vulnerable to enriched or silty conditions. In the few populations where pearl mussels are still abundant, it is thought they can play an important role in helping to cleanse the river water. An adult can filter up to 50 litres of water every day, and therefore a very large population will help to maintain the high water quality needed by the resident fish species and other animals and plants living in the river.
<urn:uuid:37567933-ee2e-4708-be1a-2687fddfb19e>
4.125
354
Knowledge Article
Science & Tech.
38.035495
An advantage over earlier Fortran standards is that the standard now requires the compiler to be able to diagnose deviations from the permitted standard. A Fortran 90 compiler is required to be able to signal
- use of syntax not defined in the standard.
- violation of the syntax rules.
- use of kinds not available.
- use of obsolete constructs (or statements).
- use of non-Fortran characters (for example, Swedish or Cyrillic characters) outside of character strings or comments.
- violation of the scope rules for variable names, the construct names of DO loops, and the corresponding names on IF and CASE constructs, and for operators.
- the reason that a program is not accepted by the compiler.
This means that a compiler is permitted to include extensions to Fortran 90, but it must be possible to ask the compiler to signal any use of extensions outside the Fortran 90 standard.
<urn:uuid:bf4a9203-dcbd-4782-a425-5c2d7098e7da>
3
193
Documentation
Software Dev.
46.193227
Evolution is the process of change through time. Many things evolve. Language evolves, so that English spoken four hundred years ago during the time of playwright Shakespeare was very different from English spoken today. Technology evolves over time as new things are invented and modified. Scientific theories evolve as more data are collected and analyses are done. Life evolves over very long timescales as species change over many, many generations. This is called biological evolution. Evolution is the foundation for the modern sciences of paleontology (the study of fossils and the history of life), ecology (the study of ecosystems), and biology (the study of life), including medical science. The links below highlight resources on Windows to the Universe that are about biological evolution and related topics. The Evidence of Evolution Exploratour examines the scientific evidence of biological evolution. Take the tour to travel through 10 web pages about the scientific theory that explains how and why living things change over many generations. Take the whole tour or explore individual topics listed below. Topics include: - What is Evolution - Fossil Evidence of Macroevolution - Leonardo DaVinci's Findings and the Ages of Fossils - The Theory of Natural Selection - What Is a Theory Anyway? - Dog Breeds: An Example of Artificial Selection - Peppered Moths: An Example of Natural Selection - Bacteria and Antibiotics: An Example of Evolution by Natural Selection - Molecular Evolution - Evolution Resources to Explore More! Evolution in the News - Analysis of Protein from T. rex Bone Confirms Dinosaurs' Link to Birds - Fossil Record Suggests Insect Assaults on Foliage May Increase with Warming Globe - Missing Link Between Whales and Four-Footed Ancestors Discovered - Did Life First Form in a Mica Sandwich at the Bottom of an Ancient Sea? - Using Leaves From the Past to Tell the Future! - Digging Woolly Rhinos - Did T-Rex Have a Plant-Eating Relative?
- Diamonds are Ancient History - Old Rocks Give New Clues about Ancient Earth Podcasts about Evolution - Evolution Revolution - A new study suggests human evolution has been supercharged the past 40,000 years. (1 min. 30 sec.) from NSF - Distant Whaletive - A research team has discovered the missing link between whales and their four-footed ancestors. (1 min. 30 sec.) from NSF - Got Mica? - Earth's first life may have formed inside a primordial soup that was sandwiched between the many layers of the mineral mica. (1 min. 30 sec.) from NSF Scientists who Researched Evolution - Charles Darwin was the first scientist to publish a comprehensive theory of evolution, in the 19th century. - Stephen Jay Gould expanded Darwin's theory with his own concept of punctuated equilibrium in the 20th century. - Leonardo da Vinci did a little bit of everything! When he was not painting the Mona Lisa, he was a scientist and discovered how sedimentary rocks and fossils are formed. - Earth History - Geologic Time - Sedimentary Rocks - Sedimentary Facies - Changing Planet: Changing Mosquito Genes Classroom Activity and Video
<urn:uuid:a4ab0ea9-87a0-4208-94b2-3961be484786>
3.703125
678
Content Listing
Science & Tech.
33.446659
Isn't this bat cute! Click on image for full size. Courtesy of Corel Photography Tropical Rain Forest Mammals Did you know that some mammals can fly? That's right, bats and flying squirrels are mammals! There are many kinds of bats in the tropical rain forest. There are large bats, like the Indian flying fox. Its wings spread out over 5 feet! Then there are the vampire bats. These bats suck the blood of animals for food. But don't worry, they only take a little! In case you were wondering, bats in the rain forest don't sleep in caves. You can find them hanging from the trees, sometimes covering a whole tree! You might also be interested in: Have you ever been to a rainforest? Rainforests have very different trees than the ones you might climb in your yard. Thousands of species of plants and animals live in the rain forests of the world. But...more This picture was taken from high above our planet. Looking at the Earth from very far away like this we can see that some parts of our planet look light in color, and some parts look dark. The color of...more Did you know that many species of birds live in the desert? You have probably heard of the roadrunner or seen the cartoon. The roadrunner is a real bird that lives in the desert! It prefers to run rather...more Deserts are very hot and dry places. Deserts get very little rain each year. So how do plants and animals live here? This section on the desert ecosystem will explain how! Do you know what a desert looks...more There are all kinds of insects in the desert! Some of them cause a lot of problems. The locusts fly from place to place, eating all the plants they see. But not all desert bugs are bad. There isn't a...more There are many species of mammals in the desert!
Many of them dig holes in the ground to live in. These holes are called burrows. Rats and hamsters live in burrows. Bigger mammals, like the wild horse,...more Biomes are large areas of the world where there are similar plants, animals, and other living things. The living things are adapted to the climate. Explore the links below to learn more about different...more
<urn:uuid:8574b88f-a194-4a89-8b9b-a9eb322e019f>
3.25
533
Content Listing
Science & Tech.
72.458612