Dataset schema (the metadata fields repeated after each record below):
  text     large_string   length 148 to 17k
  id       large_string   length 47
  score    float64        2.69 to 5.31
  tokens   int64          36 to 7.79k
  format   large_string   13 classes
  topic    large_string   2 classes
  fr_ease  float64        20 to 157
Just Ask Antoine! Why is the anion suffix -ide used to name molecular compounds? - ...It states in my book that all binary compounds, molecular or ionic, end in -ide. But why is this true if ONLY anions get a different ending? Anish, your book is correct. Binary molecular compounds are named as though they were ionic. Since one-element anions always have an -ide ending, binary molecular compounds will always have an -ide ending too. But there is a difference in the way binary molecular and binary ionic compounds are named. Because there are often many different molecular binary compounds for the same two elements, numerical prefixes (mono, di, tri, tetra, penta, hexa, hepta, octa, nona, deca, undeca, and dodeca) are usually used with molecular compounds. The prefixes are usually omitted in ionic compound names, because the names of the cations and anions are usually all you need to know to work out the stoichiometry of the compound. For example, the molecular compounds SF4 and SF6 are sulfur tetrafluoride and sulfur hexafluoride; but the ionic compounds FeF2 and FeF3 are iron(II) fluoride and iron(III) fluoride.
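To make the prefix rule concrete, here is a small Python sketch of it (not from the original answer); the element and anion tables contain just enough entries for the examples, and vowel elision in names like "monoxide" is not handled.

```python
PREFIXES = {1: "mono", 2: "di", 3: "tri", 4: "tetra", 5: "penta", 6: "hexa",
            7: "hepta", 8: "octa", 9: "nona", 10: "deca", 11: "undeca", 12: "dodeca"}

ELEMENT = {"S": "sulfur", "C": "carbon", "Fe": "iron"}     # first element in the formula
IDE = {"F": "fluoride", "O": "oxide", "Cl": "chloride"}    # second element, with -ide ending

def binary_molecular_name(first, n1, second, n2):
    """Name a binary molecular compound with numerical prefixes.
    By convention 'mono' is dropped from the first element."""
    p1 = "" if n1 == 1 else PREFIXES[n1]
    p2 = PREFIXES[n2]
    return f"{p1}{ELEMENT[first]} {p2}{IDE[second]}"

print(binary_molecular_name("S", 1, "F", 4))   # sulfur tetrafluoride
print(binary_molecular_name("S", 1, "F", 6))   # sulfur hexafluoride
print(binary_molecular_name("C", 1, "O", 2))   # carbon dioxide
```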
<urn:uuid:0845306a-769c-4be7-895b-e9a1c5f300f2>
3.296875
316
Q&A Forum
Science & Tech.
35.152988
A program is required to somehow overwrite or modify email addresses in a text file to make them unidentifiable. Background & Techniques: I recently needed to send the log file from a newsletter mailing to help diagnose a minor problem. However, the log file contained the email addresses of each recipient, and sending them in unsecured plain text did not seem like a good idea. This program obfuscates (disguises) email addresses in one of two ways: replacing the name portion with a random word, or replacing the entire email address with an "Address removed" phrase. In both cases, email addresses are identified by the embedded @ symbol. As an exercise that might be useful some day, I added an option to obfuscate the word following a given word, for example, changing the word following the word "Password:". Two sample test files are included in the downloadable zip files below. No worries, the addresses and passwords have already been obfuscated. :-) Non-programmers are welcome to read on, but may want to jump to the bottom of this page to download the executable program now. Designing the inputs was a problem for a couple of reasons. Typically, word delimiters do not distinguish between the beginning and ending of a word, but I had decided to use "@" as the end-of-word delimiter to tell me that this was the name portion of an address. The start-of-word delimiters were the traditional ones such as space, comma, colon, semicolon, etc. So we needed start and end delimiters specified separately. Also, allowing the user to specify the delimiters as a string of characters works fine except for the invisible space character, so I added a separate check box to specify the "space is a delimiter" condition. For the second mode, replacing the entire address, I needed to recognize words containing an "@" sign as addresses, so a "word contains" option was added to the inputs. Finally, for the "Password" option, we need a way to indicate that the word following the recognized word is the one to be obfuscated. Program logic is simple; a loop until "end-of-file" reads a line from the selected input file, checks for words requiring changes, modifies the line as directed, and writes the line to the output file whether it was modified or not. Suggestions for Further Explorations Copyright © 2000-2013, Gary Darby. All rights reserved.
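Since the original download is a Delphi program, here is only a rough Python sketch of the same line-by-line logic; the option names, word list, and file names below are placeholders, not the program's actual interface.

```python
import random
import re

WORDS = ["apple", "river", "candle", "marble", "quartz"]   # stand-in random-word list

def obfuscate_line(line, mode="name", trigger=None):
    """Replace email addresses (recognized by the embedded @) and, optionally,
    the word following a trigger word such as "Password:"."""
    out, replace_next = [], False
    for token in re.split(r"(\s+)", line):        # keep the whitespace so the line can be rebuilt
        if replace_next and token.strip():
            token, replace_next = "xxxxxxxx", False
        elif "@" in token:
            if mode == "name":                    # keep the domain, randomize the name portion
                _, _, domain = token.partition("@")
                token = random.choice(WORDS) + "@" + domain
            else:                                 # replace the whole address
                token = "<Address removed>"
        if trigger and token == trigger:
            replace_next = True
        out.append(token)
    return "".join(out)

# Read the input file line by line and write every line out, modified or not.
with open("mail.log") as src, open("mail_obfuscated.log", "w") as dst:
    for line in src:
        dst.write(obfuscate_line(line.rstrip("\n"), mode="name", trigger="Password:") + "\n")
```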
<urn:uuid:f5de287c-4698-4c4b-8f5e-4864f3d65386>
3.046875
524
Documentation
Software Dev.
38.700812
I'm peppered with emails asking me if articles like this one (which claims there is no Greenhouse Effect at all on Venus) could be right. Michael Hammer has some 20 patents in spectroscopy, and he explains why the Greenhouse Effect — where CO2 and other gases absorb and emit infrared — is very real, and backed by empirical evidence. The calculations, which apply the Stefan-Boltzmann Law to the atmospheres of Earth and Venus, argue that the Greenhouse Effect is not detectable. But not detectable (by that method) does not "prove" the effect is zero. Other methods, like satellite observations of Earth's atmosphere and countless lab experiments, tell us that the Greenhouse Effect is real. (The Stefan-Boltzmann Law is used to create the first graph below.) Huffman's calculations suggest other factors are more important than greenhouse gases (with which we heartily agree) and that Hansen et al were barking up the wrong tree by pretending that Venus "shows" us anything much about the Greenhouse Effect. (Indeed, the IPCC mention "Venus" in their first Assessment Report back in 1990 as one of the three key reasons.) So here in middle-of-the-road centrist land, the people who claim Earth could become more like Venus are wildly exaggerating, but the people who claim that Venus "proves" that the Greenhouse Effect doesn't exist are just as wrong. For your average reader, sorry it is esoteric, but there will be avid interest by some science aficionados in this topic. Guest Post: Michael Hammer Joanne sent me an email to ask my opinion on the "Huffman blog": There is no Greenhouse Effect on Venus. Let's start with a plot of the longwave infrared emission from Earth as seen by the Nimbus satellite. This is not model output, it is real experimental data. A plot is shown below: The horizontal axis shows wavenumbers, which are the reciprocals of wavelengths. The vertical axis shows the energy density of the radiation from the Earth observed by Nimbus. The dotted overlaid traces represent the emission spectrum from black bodies at various temperatures (calculated from Planck's law, which is known with a very high degree of surety). Without any atmosphere, Earth's emission pattern as seen from space would look like one of these dotted lines. Note the big bite out of the spectrum at around 660 wavenumbers and the smaller bite at around 1000 wavenumbers. The former is at the CO2 absorption line and the latter at the ozone (O3) absorption line. Those two bites represent energy that is not being radiated to space that would be if there was no atmosphere. In short, it is the signature of a greenhouse gas reducing Earth's radiation to space at the greenhouse wavelengths. I invite those who disagree to give an alternate explanation for what is causing these notches. So how to explain the Venus data cited in Mr Huffman's blog? Shown below is a plot of the temperature versus altitude profile for Venus with the corresponding profile for Earth overlaid (from the site http://www.datasync.com/~rsf1/vel/1918vpt.htm). The similarity is not as close as claimed. In fact the two profiles are only similar between 50 km and 60 km and quite different at other altitudes. Source: Shade Tree Physics. Earth data based on The Digital Dutch 1976 Standard Atmosphere Calculator. Venus: J.M. Jenkins, P.G. Steffes, D.P. Hinson, J.D. Twicken, and G.L. Tyler in their article, Radio Occultation Studies of the Venus Atmosphere with the Magellan Spacecraft, Icarus, Vol. 110, 79-94, 1994.
OK, but what about that bit between 50 km and 60 km – surely that still needs some explanation. Well, distance from the sun is not the only thing affecting received sunlight. Venus is covered with clouds which, according to the same source, have their tops at an altitude of around 60 km. These clouds give Venus an albedo of 0.6, which means 60% of the sun's energy is reflected back out to space and 40% is absorbed. Earth by comparison has an albedo of 0.3, which means 70% is absorbed. The ratio of energy absorbed by Venus relative to Earth is thus the inverse ratio of distance to the sun squared, times the ratio of the fractions of energy absorbed: (93/67)^2 * 0.4/0.7 = 1.1. Venus only absorbs 10% more solar energy than does Earth, yet its temperature at equivalent atmospheric pressure is 66°C vs 14°C. The difference in black body emission is 749 watts/sqM versus 390 watts/sqM. The close equivalence cited by Mr Huffman would appear to only exist if one ignores the difference in albedo. The greenhouse effect is real; however, that does not mean its incremental impact is anywhere near as great as claimed by AGW advocates. The real debate is about how large the effect is. Specifically, whether the direct effect of about 0.8°C per doubling of CO2 is amplified by claimed massive positive feedbacks or diminished by negative feedbacks such as occur in all stable systems. But that is a different issue. Often combined with the argument that there is "no greenhouse effect on Venus" is the argument that it's not possible due to the second law of thermodynamics. Michael Hammer helped to write: Why greenhouse gas warming doesn't break the second law of thermodynamics, and So what is the Second Darn Law? Michael Hammer is an electrical engineer who has spent over 30 years conducting research for a major international spectroscopy company. In the course of this work he generated around 20 patents which have been registered in multiple countries. Patents are rarer and more rigorous than peer reviewed papers, only available for economically valuable work, and costing thousands of dollars to process and maintain. Spectroscopy deals with the interaction between electromagnetic energy (light) and matter, and it is this interaction which forms the basis of the so-called "greenhouse effect" in the atmosphere. Other posts by Michael Hammer. WUWT has a post by Ira Glickstein on the Greenhouse effect as seen in the emission spectra.
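The arithmetic above is easy to check; this short Python snippet (not part of the original post) reproduces the absorbed-energy ratio and the two black-body fluxes from the Stefan-Boltzmann law, using the distances, albedos and temperatures quoted in the text.

```python
# Solar energy absorbed per unit area, Venus relative to Earth
d_earth, d_venus = 93.0, 67.0                  # mean distance from the Sun, million miles
ratio = (d_earth / d_venus) ** 2 * (1 - 0.6) / (1 - 0.3)
print(round(ratio, 2))                         # ~1.10, i.e. Venus absorbs about 10% more

# Black-body emission at the quoted temperatures (Stefan-Boltzmann law)
sigma = 5.67e-8                                # W m^-2 K^-4
for label, T in [("Venus, 66 C", 66 + 273.15), ("Earth, 14 C", 14 + 273.15)]:
    print(label, round(sigma * T ** 4), "W/m^2")   # ~750 and ~386, close to the 749 and 390 quoted
```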
<urn:uuid:3cddf73a-a27a-4328-bc54-848ab9d2c4ca>
3.0625
1,347
Personal Blog
Science & Tech.
55.420588
Name: Matt D Carter Why do organisms mutate? Not sure if you mean "how" or "how come." If the second, then it's because this ensures a variation in the characteristics of the offspring. Each new generation of cheetahs includes a few who can run faster (but need to sleep more, say) and a few who can't run as fast (but who can go longer without food), and a lot who are pretty much the same as their parents. Then the environment does its job, and whoever isn't good enough at getting food dies and has no kids. The genes for the abilities that don't do the job get deleted from the species, and the average ability of the cheetah to get food rises. By and by they become really good at it. You still need to preserve the ability to adapt to the environment at this stage, because the environment changes. Evolution is what makes species improve, and the two key parts of it are mutation and natural selection (the deletion process; it's not "natural" in the sense of "normal" but "natural" in the sense of "done by Nature"). Now if you mean "how," then the answer I think is usually assumed to be occasional errors in duplicating DNA by the cell. The machinery for copying DNA is just set up so that every now and then it copies the recipe for a new cell as "1 egg and 1 eye of newt" instead of "1 egg and 1 teaspoon baking soda." X-rays and certain chemicals increase the rate of mutation, but I'm not sure this is significant in general.
<urn:uuid:7fce203f-d31d-4da2-b96d-7ab1c901cb41>
2.796875
367
Q&A Forum
Science & Tech.
57.925
Pair production is not the same as decay of a particle. A particle can decay into two components according to its decay probability without needing an extra interaction. A lambda in its rest frame will decay into a proton and a pion, for example, within a predictable decay time. There is no rest frame for the photon since its mass is 0 and it is always travelling with the velocity of light. If it were to decay spontaneously into an electron-positron pair, they do have a rest mass and a rest frame, and their invariant mass would be at least 2*m_e, which would then have to be the mass of the photon. A contradiction. It can interact, though, with the fields of other particles. How does the photon interact? The interaction probabilities can be calculated given the charges of the target particles, the easiest way using Feynman diagrams. One can envisage a photon as sequentially turning into virtual loops of e+e-. One of the virtual electrons interacts with the field of a real charged particle, exchanging enough energy and momentum so that both e+ and e- become real while energy and momentum are conserved in a three-body interaction. The nucleus helps by ensuring that the momentum in the final state (e+ e- nucleus) is the same as in the initial state (photon plus nucleus).
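A compact way to state the kinematic argument, in natural units with c = 1 (a reformulation, not part of the original answer):

```latex
p_\gamma^2 = m_\gamma^2 = 0, \qquad
(p_{e^+} + p_{e^-})^2 = 2m_e^2 + 2\,(E_+ E_- - \vec p_+ \cdot \vec p_-) \;\ge\; (2 m_e)^2 .
```

Four-momentum conservation would force these two invariants to be equal, which is impossible, so a free photon cannot turn into a real e+e- pair; a nucleus (or some other field) must absorb the recoil momentum.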
<urn:uuid:734b5528-8f19-45f6-b536-e3e2e125e5c5>
3.875
275
Q&A Forum
Science & Tech.
47.34306
This method can be used when simple, unquantified, base call quality values are available. Instead of simply counting base type frequencies it sums the quality values. Hence a column of 4 bases A, A, A and T with confidence values 10, 10, 10 and 50 would give combined totals of 30/80 for A and 50/80 for T (compared to 3/4 for A and 1/4 for T when using frequencies). As with the unweighted frequency method, this sets the confidence value of the consensus base to be the fraction of the chosen base type weights over the total weights (62.5 in the above example). The quality cutoff parameter controls which bases are used in the calculation. Only bases with quality values greater than or equal to the quality cutoff are used; otherwise they are completely ignored and have no effect on either the base type chosen for the consensus or the consensus confidence value. In the above example, setting the quality cutoff to 20 would give a T with confidence 100 (100 * 50/50). In the event that more than one base type is calculated to have the same weight, and this exceeds the consensus cutoff, the bases are assigned in descending order of precedence: A, C, G and T. This is Rule IV of Bonfield, J.K. and Staden, R. The application of numerical estimates of base calling accuracy to DNA sequencing projects. Nucleic Acids Research 23, 1406-1410 (1995).
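Here is a minimal Python sketch of the rule as described above (it is not the Staden package's actual code or API): quality values are summed per base type, bases below the cutoff are ignored, ties fall back to the A, C, G, T precedence, and the confidence is the chosen weight as a percentage of the total.

```python
def weighted_consensus(calls, quality_cutoff=0, consensus_cutoff=0.0):
    """calls: list of (base, quality) pairs for one alignment column."""
    weights = {"A": 0, "C": 0, "G": 0, "T": 0}
    for base, quality in calls:
        if quality >= quality_cutoff:          # bases below the cutoff have no effect at all
            weights[base] += quality

    total = sum(weights.values())
    if total == 0:
        return "-", 0.0                        # nothing passed the cutoff

    best = max(weights.values())
    for base in "ACGT":                        # ties resolved in descending precedence (Rule IV)
        if weights[base] == best:
            confidence = 100.0 * weights[base] / total
            # how the real program reports calls below the consensus cutoff may differ
            return (base, confidence) if confidence >= consensus_cutoff else ("-", confidence)

# The example from the text: A, A, A, T with qualities 10, 10, 10, 50
print(weighted_consensus([("A", 10), ("A", 10), ("A", 10), ("T", 50)]))       # ('T', 62.5)
print(weighted_consensus([("A", 10), ("A", 10), ("A", 10), ("T", 50)], 20))   # ('T', 100.0)
```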
<urn:uuid:fd4916e5-d4ce-4069-be4c-462e8a602a8b>
2.734375
301
Documentation
Science & Tech.
57.973171
According to astronomers studying background radiation data gathered by the Planck space probe, the universe is 80-million years older than previously thought. So now when somebody asks you how old the universe is, you can confidently tell them, "80-million years older than previously thought" because you never knew the original figure in the first place. WTF are they teaching in school these days? The Planck space probe looked back at the afterglow of the Big Bang, and those results have now added about 80 million years to the universe's age, putting it at 13.81 billion years old. The findings released Thursday bolster a key theory called inflation, which says the universe burst from subatomic size to its now-observable expanse in a fraction of a second. The probe, named for the German physicist Max Planck, the originator of quantum physics, also found that the cosmos is expanding a bit slower than originally thought, has a little less of that mysterious dark energy than astronomers had figured and has a tad more normal matter. But scientists say those are small changes in calculations about the universe, whose numbers are so massive. Not gonna lie, trying to wrap my head around the scale of the universe and how it was formed and whether there are infinite universes — that kind of thinking makes my head hurt. I'm a simple man, you know? Some might argue too simple. Others would probably argue mentally deficient. And you know what I call those people? Friends and family. "Don't forget us." And Geekologie readers. Thanks to Pyrblaze, who, like me, can't even fathom 13.81-billion years and starts spazzing out whenever the Burger King drive-thru line takes too long. LIKE IT WAS TODAY. The long nights, relentless Christmas adverts and brisk chill in the air are all signs the year is coming to an end, and what better way to see in the next than with British documentary legend Sir David Attenborough? The first episode of his new three-part natural history series Galapagos 3D, written and presented by the man himself, will be airing New Year's Day on Sky 3D in the UK. Like most of his projects, it's sure to be a stunning visual treat that'll make you forget about even the worst of New Year hangovers. So, don't forget to stoke the fire, switch on your 3D TV, and enjoy an educational tour of the Galapagos Islands to start off your 2013. Source: BSkyB. Human Rights Watch has just issued a 50-page report titled 'Losing Humanity: The Case Against Killer Robots' that urges governments to ban the development of fully autonomous robots designed to kill. A one-page addendum to the report written by yours truly adds, "Just ban them all so we can go get drunk and take turns punching each other in the privates." A solid piece of legislation if I do say so myself. "Giving machines the power to decide who lives and dies on the battlefield would take technology too far," said Steve Goose, Arms Division director at Human Rights Watch. "Human control of robotic warfare is essential to minimizing civilian deaths and injuries." "Losing Humanity" is the first major publication about fully autonomous weapons by a nongovernmental organization and is based on extensive research into the law, technology, and ethics of these proposed weapons. It is jointly published by Human Rights Watch and the Harvard Law School International Human Rights Clinic. Agreed — no robot should ever be given the power to decide if a human being lives or dies. That power should only be given to me.
And I say NUKE THE ENTIRE PLANET. *mashing big red button* “You do realize that’s just a fake button we installed to see if you’d push it, right?” Um, YEAH — I realized it last night when I snuck out of bed to hit it the first time. Thanks to Kringle Fantastico, MarkE and NoodleRamen Konbu egg, who promised to stand up and fight the robots which is awesome because now I won’t have to. *stretching out on sofa* Don’t let me down, guys! Latest missive from the games console wars: Sony has announced cumulative sales of its PlayStation 3 reached 70 million units on November 4, a few days short of the machine’s sixth birthday. Microsoft’s rival Xbox 360 console achieved the 70 million cumulative sales milestone back in September, according to Microsoft’s FY12Q3 earnings, although the Xbox 360 is about a year older than the PS3 so has had longer to clock up sales. The Nintendo Wii, also launched back in fall/winter of 2006, still leads the pack — having achieved worldwide sales of 97.18 million units as of the end of September. Sony has also announced that sales of the PlayStation Move controller — its answer to rivals’ gesture-based games peripherals such as Microsoft’s Kinect and Nintendo’s Wii Remote — passed 15 million unit sales globally on November 11. The Move controller launched back in September 2010 and is now supported by more than 400 titles, says Sony. The cumulative number of software titles for PS3 has reached 3,590 with more than 595 million units sold worldwide. While the PlayStation Network, launched at the same time as the PS3, now operates in 59 countries and regions. Sony said the PSN gives PS3 owners access to 170,000 pieces of downloadable digital content including 57,000 game content. The look and feel of the PS3 has evolved over the years, with a more streamlined design, a larger hard disk drive and new features added via software updates. Back in September Sony launched a version of the console that’s more than 50 percent smaller and lighter than the original PS3 — and a quarter smaller and a fifth lighter than the slim PS3 model launched back in 2009. Sony’s release follows below TOKYO, Nov. 16, 2012 /PRNewswire/ – Sony Computer Entertainment Inc. (SCE) today announced that the cumulative sales of the PlayStation®3 (PS3®) computer entertainment system reached a milestone of 70 million units*1 worldwide as ofNovember 4, 2012 – less than six years after the platform launched in 2006. SCE also announced that sales of the PlayStation®Move motion controller surpassed 15 million units*1 worldwide as of November 11, demonstrating continued growth and momentum of the PS3® platform. The PS3® system has delivered high quality, award-winning entertainment experiences since its launch. Throughout its lifecycle, the PS3® system has continued to evolve with more streamlined design, larger Hard Disk Drive (HDD) capacity, and new features through software updates. In September 2012, SCE launched the new PS3® system, which has a reduced volume and weight of more than 50 percent compared to the original PS3® model, and of 25 percent and 20 percent respectively compared to the slim PS3 model launched in 2009. The new PS3® has been well received by consumers around the world. Along with the introduction of PS3® in November 2006, SCE launched PlayStation®Network, which now operates in 59 countries and regions*2 around the world. 
PlayStation Network supports free community-centric online gameplay, exclusive games from independent developers and major publishers, and a broad range of entertainment applications across movies, music, and sports. PS3 owners can access 170,000 downloadable digital content including 57,000 game content worldwide from PlayStation Network*3. In October 2012, SCE redesigned PlayStation®Store for PS3®, offering a more streamlined and accessible store experience, including a stunning new user interface, simple search, and powerful content discovery. The new store is now available in Europe and North and Latin America with more countries and regions to follow. PlayStation®Plus, the subscription service package on PlayStation®Store that offers exclusive benefits such as discounts on games or online storage for game saves, started to offer an “Instant Game Collection” in North America and Europe in July 2012. The Instant Game Collection enables PS Plus members to enjoy popular titles from third party developers and publishers as well as SCE Worldwide Studios at no extra cost. SCE has also enhanced the content offering for PS Plus members in Japan in November. Introduced in September 2010, the PlayStation®Move motion controller that enables users to intuitively play games is now supported by a wide range of titles with more than 400 as of November 2012, including Sports Champions 2, LittleBigPlanet Karting (Sony Computer Entertainment). Additionally, this month marks the global launch of Wonderbook™, a new peripheral that delivers the next evolution of storytelling and a unique experience exclusively on PS3. Wonderbook*4 uses the PlayStation®Eye camera to take augmented reality to spectacular new places, while drawing players into new worlds and allowing them to interact with stories as they tilt or rotate it, or simply turn the pages*5. PS3® has gained tremendous support from 3rd party developers and publishers worldwide. Cumulative number of software titles for PS3® reached 3,590 with more than 595 million units sold worldwide*6. More exciting and attractive new titles are to be released from third party developers and publishers as well as SCE Worldwide Studios, including Assassin’s Creed III (Ubisoft Entertainment.), Call of Duty: Black Ops 2 (Activision, Inc), PlayStation All-Stars Battle Royale (Sony Computer Entertainment), towards the holiday season. In addition, software titles that support “cross platform feature” such as LittleBigPlanet 2: Cross Controller Pack, PlayStation, Sly Cooper: Thieves in Time (Sony Computer Entertainment) are also expected to be released. With these titles, SCE will deliver a ground-breaking gaming experience by leveraging the capabilities of both PS3® and PlayStation®Vita. SCE will continue to further expand the PS3® platform and create a world of computer entertainment that is only possible on PlayStation. Give it to me straight, is that where Dr. Manhattan is from or not? A rogue planet with no star to orbit was recently discovered 100-light years away from earth. It’s caused scientists to speculate that sunless planets are a lot more common than previously thought and realize a lot of planets run away from home at an early age and never return. “F*** warmth and growing plants — I want it to be night time all the time,” I imagine them saying before packing a little suitcase and climbing out the window on a bedsheet. 
The free-floating object, called CFBDSIR2149, is likely a gas giant planet four to seven times more massive than Jupiter, scientists say in a new study unveiled Wednesday. The planet cruises unbound through space relatively close to Earth (in astronomical terms), perhaps after being booted from its own solar system. "If this little object is a planet that has been ejected from its native system, it conjures up the striking image of orphaned worlds, drifting in the emptiness of space," study leader Philippe Delorme of the Institute of Planetology and Astrophysics of Grenoble in France said in a statement. Hoho, so they don't run away from home — they get kicked out! That's like, way sadder. Now I feel bad. I didn't realize things were going to get this depressing. You should probably pour me a cocktail to cheer me up a bit. "You already have a drink." Yeah well it needs a friend. Back me up, Pluto. "Sooooo lonely." Thanks to Bria, who agrees rogue planets are the coolest planets because they live by their own rules. Except gravity and stuff, they still have to follow those. That's just the man trying to keep you down! Question by scrount: What will technology be like 100 years from now? What new inventions will there be by that time? Look at us here in 2011 compared to the early 1900s. While we don't have flying cars or something even greater like light speed travel or time travel, our technology is way more advanced now than it was 100 years ago. Something like HD TV will probably be the new black and white TV by that time. How much further do you think we'll progress in another 100 years? And not just speaking of technology but society as a whole. Answer by Jeff Watts: technology in the future will make us all look like chickens and we could finally figure out why the chicken crossed the road. Give your answer to this question below! Picture kind of related: he may have gotten tickets. In a story way better than the piece of shit movie with a similar premise, a German man who forgot where he parked his car in 2010 during a night of drunken revelry has just located his vehicle after almost two years of searching. No word if he's sober now, but I'm definitely not. Authorities discovered it by chance last month after a traffic warden noticed that its inspection stickers had expired – 4 kilometres from the spot where the now 33-year-old craftsman originally thought he had parked. "The weird thing is that it turned up so far away, although the owner was pretty sure of where he had left it," said police spokesman Alexander Lorenz. In the trunk were 40,000 euros worth of tools including power drills and electric screwdrivers, Lorenz said. Whoa whoa whoa — there were 40,000 euros (~$52,000) worth of tools in the trunk?! If that had been my car I would have definitely made a better effort to find it. You know, posted MISSING signs and stuff like that. Maybe offered a reward. "And not paid it?" *wink* Also, the authorities should probably go ahead and slap ol' Kraut von Boozenstein with a retroactive DUI. Thanks to vince, who once got so drunk he forgot where he lived. Haha, WELCOME TO MY LIFE. At Tesla's event, CEO Elon Musk has finally taken the wraps off of its Superchargers, which it has already set up in four California cities and which it expects to roll out nationally across the US in the next two years. According to Musk, the solar-powered systems will put more power back into the grid than the cars use while driving. Oh, and for you Model S owners?
You will always be able to charge at any of the stations for free. According to Musk, the economies of scale developed while building the Model S have helped it get costs down on the chargers, although he did not offer specifics. During the event we also saw video of drivers charging their vehicles at stations today that Tesla apparently constructed in secret. GigaOm reports they’re using technology from (also owned by Musk) SolarCity, and can charge a Model S with 100 kilowatts good for three hours of driving at 60mph in about 30 minutes. Filed under: Transportation If you dig Quentin Tarantino flicks, an eight movie, 10-disc Blu-ray boxed set is on the way that will probably pique your interest. Lionsgate and Miramax are collaborating on the Tarantino XX set, which captures 20 years of the filmmaker’s career and includes Reservoir Dogs, True Romance, Pulp Fiction, Jackie Brown, Kill Bill Vol. 1, Kill Bill Vol. 2, Death Proof and Inglourious Basterds. Additionally, there’s special collectible packaging and artwork (shown after the break, along with the press release and full list of specs) and the two extra discs are filled with five hours of all new bonus interviews, retrospectives and the like. It seems unlikely to answer the mystery of what was in Marcellus Wallace’s briefcase, but it will be available November 20th with an MSRP of $ 119.99, although Amazon is currently listing it at $ 83.97. Are animations of Curiosity’s Mars landing not enough to feed your space exploration appetite? Try this on for size: a group of scientists from the Harvard-Smithsonian Center for Astrophysics and the Heidelberg Institute for Theoretical Studies have generated what’s billed as a full-fledged simulation of the universe. Arepo, the software behind the sim, took the observed afterglow of the big bang as its only input and sped things up by 14 billion years. The result was a model of the cosmos peppered with realistically depicted galaxies that look like our own and those around us. Previous programs created unseemly blobs of stars instead of the spiral galaxies that were hoped for because they divided space into cubes of fixed size and shape. Arepo’s secret to producing accurate visualizations is its geometry; a grid that moves and flexes to mirror the motions of dark energy, dark matter, gasses and stars. Video playback of the celestial recreation clocks in at just over a minute, but it took Harvard’s 1,024-core Odyssey super computer months to churn out. Next on the group’s docket is tackling larger portions of the universe at a higher resolution. Head past the jump for the video and full press release, or hit the source links below for the nitty-gritty details in the team’s trio of scholarly papers. Filed under: Science
<urn:uuid:9ccf1ddd-f6c1-4bb3-a6f4-0b728c0a48f4>
2.875
3,663
Comment Section
Science & Tech.
51.432314
The Fried Egg Nebula has cracked open a rare hypergiant star. A telescope in Europe recently captured the best image of the unusual cosmic phenomenon. The reason for the name is that the nebula resembles a fried egg, with a yellow, yolk-like center and a milky white around it. The yellow hypergiant at the heart of the Fried Egg Nebula is the closest such star to Earth. At just 13,000 light years away, the monster is called IRAS 17163-3907. How exciting that the Very Large Telescope at the European Southern Observatory allowed astronomers to make this unique find so close to their home planet. Can you believe that the hypergiant at the center of the Fried Egg Nebula is 1,000 times larger than the Sun, and it shines 500,000 times brighter? That is incredible. If it were placed at the center of our solar system, Earth would actually sit very deep within the huge yellow star; Jupiter would be the first planet not actually inside the star. Another interesting thing about IRAS 17163-3907 is that it could very well be the next supernova in the sky, because it is likely to die an explosive death. Unfortunately this is not a phenomenon that you will be around to see. The video below zooms in on the amazing Fried Egg Nebula.
<urn:uuid:263acaab-dd73-4977-87c8-ae54d2509bce>
3.28125
282
Truncated
Science & Tech.
55.999139
We will write our charge distribution as a function ρ and our current distribution as a vector-valued function J, though these are not always "functions" in the usual sense. Often they will be "distributions" like the Dirac delta; we haven't really gotten into their formal properties, but this shouldn't cause us too much trouble since most of the time we'll use them — like we've used the delta — to restrict integrals to smaller spaces. Anyway, charge and current are "conserved", in that they obey the conservation law ∇·J = -∂ρ/∂t, which states that the amount of current "flowing out of a point" is the rate at which the charge at that point is decreasing. This is justified by experiment. Coulomb's law gives the differential contribution to the electric field from a small piece of the charge distribution, and we get the whole electric field by integrating this over the charge distribution. This, again, is justified by experiment. The Biot-Savart law says that electric currents give rise to a magnetic field. Given the current distribution we have a differential contribution to the magnetic field at a point, which again we integrate over the current distribution to calculate the full magnetic field there. This, again, is justified by experiment. The electric and magnetic fields give rise to a force by the Lorentz force law. If a test particle of charge q is moving at velocity v through electric and magnetic fields E and B, it feels a force of F = q(E + v × B). But we don't work explicitly with force as much as we do with the fields. We do have an analogue for work, though: electromotive force. One unexpected source of electromotive force comes from our fourth and final experimentally-justified axiom, Faraday's law of induction. This says that the electromotive force around a circuit is equal to the rate of change of magnetic flux through any surface bounded by the circuit. Using these four experimental results and definitions, we can derive Maxwell's equations. When we worked out Ampère's law in the case of magnetostatics, we used a certain identity, which we often write as ∂ρ/∂t = -∇·J. That is, the rate at which the charge at a point is increasing is the negative of the divergence of the current at that point, which measures how much current is "flowing out" from that point. This may be clearer if we integrate this equation over some macroscopic region V: the rate of change of the total amount of charge within V is equal to the amount of current flowing inwards across the boundary of V, so this flow of current is the only way that the charge in a region can change. This is another physical law, borne out by experiment, and we take it as another axiom. But we might note something interesting if we couple this with Gauss' law. Recall that in deriving Ampère's law we had to assume that the current was divergence-free; when things are not static, combining the continuity equation with Gauss' law shows that a certain composite quantity, the current plus a term proportional to the rate of change of the electric field, is always divergence-free. The derivative term isn't associated with any electric charge moving around, and yet it still behaves like a current for all intents and purposes. We call it the "displacement current", and we add it into Ampère's law to see how things work without the magnetostatic assumption. This additional term is known as Maxwell's correction to Ampère's law.
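The displayed equations did not survive in this copy of the post. In SI units (a choice of convention the original may or may not share), the continuity equation and Gauss' law combine to give the divergence-free composite quantity referred to above:

```latex
\frac{\partial \rho}{\partial t} + \nabla\cdot\mathbf{J} = 0,
\qquad
\nabla\cdot\mathbf{E} = \frac{\rho}{\varepsilon_0}
\quad\Longrightarrow\quad
\nabla\cdot\!\left(\mathbf{J} + \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}\right)
  = \nabla\cdot\mathbf{J} + \frac{\partial \rho}{\partial t} = 0 .
```

Adding the second term to the current in Ampère's law then gives the corrected equation ∇×B = μ₀(J + ε₀ ∂E/∂t).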
<urn:uuid:2d25a8b0-223a-4af1-a990-b9c1aa83357f>
3.59375
717
Academic Writing
Science & Tech.
36.30802
Learn more physics! If there were a way it could be measured, hypothetically, what would the gravity effect be at the absolute center of the Earth? - Billy Mills (age 65), Wadley, Georgia USA Since there would be about equal amounts of earth at equal distances in every direction, there wouldn't be any gravitational field. It wouldn't have any particular direction to point. People making descriptions of the hot iron in the center of the Earth (to understand the Earth's magnetism) do need to use good descriptions of the gravitational field, and it does fall to zero at the middle. (published on 10/22/2007)
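As a rough numerical illustration of that cancellation (not part of the original answer), one can fill a sphere with equal point masses on a symmetric grid and evaluate their combined pull; the units, grid spacing and test points below are arbitrary choices.

```python
import numpy as np

# Fill a unit sphere (total mass 1, G = 1) with point masses on a grid symmetric about the centre.
ax = np.arange(-0.975, 1.0, 0.05)                  # no grid node sits exactly at the origin
X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
pts = np.stack([X, Y, Z], axis=-1).reshape(-1, 3)
pts = pts[np.linalg.norm(pts, axis=1) <= 1.0]      # keep only the points inside the sphere
m = 1.0 / len(pts)                                 # each point carries an equal share of the mass

def g(at):
    """Net gravitational acceleration at the point 'at' due to all the point masses."""
    d = pts - at
    r = np.linalg.norm(d, axis=1, keepdims=True)
    return (m * d / r ** 3).sum(axis=0)

print(np.linalg.norm(g(np.zeros(3))))                 # ~0: the pulls cancel by symmetry
print(np.linalg.norm(g(np.array([0.5, 0.0, 0.0]))))   # ~0.5: only the mass inside that radius matters
```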
<urn:uuid:ba8d1eda-255a-4df9-9914-473ebc2af1d7>
3.5
144
Q&A Forum
Science & Tech.
60.135659
2003 Lemelson-MIT Prize Winner While a student at the California Institute of Technology, Leroy Hood received words of wisdom from his mentor William Dreyer: "If you want to practice biology, do it on the leading edge and if you want to be on the leading edge, invent new tools for deciphering biological information." With this advice, Hood invented some of modern molecular biology's core instruments, profoundly impacting research and medicine. His DNA sequencer made possible the Human Genome Project (to identify the nearly 30,000 genes in human DNA). Hood and colleague Stephen Kent created the protein synthesizer, an instrument that assembles long peptides from amino acid subunits. Hood (and others) also invented the DNA synthesizer for synthesizing DNA fragments—a key development for gene mapping and the polymerase chain reaction (PCR). Working with a team at Caltech, Hood developed a prototype automated DNA sequencer in 1985—a machine that rapidly determines the order of the four letters across the strings of DNA in cells. The process involves labeling each of the four DNA letters with different fluorescent dyes, using a laser to make the DNA chemicals glow in red, green, blue or orange, and then reading those signals with a computer. Over the next 17 years, this machine made DNA sequencing 3,000 times faster, facilitating the Human Genome Project, for which Hood was an early advocate and key player. In 1992, Hood created the Department of Molecular Biotechnology at the University of Washington, succeeded by the Institute for Systems Biology in Seattle, WA—co-founded in 2000. Integrating biology, medicine, computation and technology, the Institute focuses on how systems operate by studying all of their elements together. Inspired by his teachers while a young boy growing up in Montana, Hood has worked to improve science education in grades K-12, emphasizing analytical and inquiry-based thinking. Hood received his B.S. in Biology from Caltech (1960), his M.D. from Johns Hopkins University (1964), and his Ph.D. in biochemistry from Caltech (1968). Hood was the recipient of the Kyoto Prize for Advanced Technology (2002), the Lasker Award for Studies of Immune Diversity (1987) and others. He has founded or co-founded more than 10 companies, including Applied Biosystems and Amgen, that commercialize these technologies. Hood is currently working with NanoString Technologies to create a high-speed, ultra-sensitive bar-coding system for identifying individual molecules. In September 2007, Hood was inducted into the National Academy of Engineering. He is now one of only seven people who have been elected to all three National Academies. Institute for Systems Biology
<urn:uuid:81022007-125f-4b26-b175-663ea1c670a9>
3.265625
599
Knowledge Article
Science & Tech.
34.38
Amazing Discoveries in the Amazon One of the most extraordinary species, the Ranitomeya amazonica, a frog with an incredible burst of flames on its head, and contrasting water-patterned legs. The frog’s main habitat is near the Iquitos area in the region of Loreto, Peru, and is primary lowland moist forest. The frog has also been encountered in the Alpahuayo Mishana National Reserve in Peru. A member of the true parrot family, the Pyrilia aurantiocephala has an extraordinary bald head, and displays an astonishing spectrum of colors. Known only from a few localities in the Lower Madeira and Upper Tapajos rivers in Brazil, the species has been listed as ‘near threatened’, due to its moderately small population, which is declining owing to habitat loss. The Amazon River dolphin or pink river dolphin was recorded by science in the 1830s, and given the scientific name of Inia geoffrensis. In 2006, scientific evidence showed that there is a separate species of the Amazon river dolph – Inia boliviensis – of the dolphin in Bolivia, although some scientists consider it a subspecies of Inia geoffrensis. In contrast to the Amazon River dolphins, their Bolivian relatives have more teeth, smaller heads, and smaller but wider and rounder bodies. A blind and tiny, bright red new species of catfish that lives mainly in subterranean waters. Found in the state of Rondonia, Brazil, the fish Phreatobius dracunculus began to appear after a well was dug in the village of Rio Pardo, when they were accidentally trapped in buckets used to extract water. The species has since been found in another 12 of 20 wells in the region. Did you know...The Amazon region comprises the largest rainforest and river system on Earth. It consists of over 600 different types of terrestrial and freshwater habitats, from swamps to grasslands to montane and lowland forests, and it houses an incredible 10% of the world’s known species, including endemic and endangered flora and fauna. Learn more about the Amazon Our work in the Amazon This work is being achieved through: - Promoting the responsible use of natural resources and sustainable management - Ensuring environmental and social standards for infrastructure development, particularly road and dam projects - Developing national programmes for reducing emissions from deforestation - Consolidating and expanding protected areas Join us in our mission to protect the Amazon and our living planet
<urn:uuid:121b2bac-a2ff-47f9-90cf-fdbe800609e5>
3.515625
524
Knowledge Article
Science & Tech.
29.597452
The achievement was detailed in the Sept. 7 edition of the journal Nature, then cited in an Oct. 10 article in the New York Times on the soccer-ball-shaped carbon molecules, which some believe may have a host of medical and industrial uses. With Scott on the research team are former BC visiting scientist Marc Gelmont of Israel and seven colleagues from Albert-Ludwigs University in Germany. "They said it couldn't be done," Scott said of the creation of the fullerene, or "buckyball," so named because of a resemblance to renowned architect R. Buckminster Fuller's geodesic dome. Fullerenes, identified only in the past 15 years, are new elemental forms of carbon, joining diamond and graphite. Harold Kroto of Great Britain and Americans Robert Curl Jr. and Richard Smalley won the Nobel Prize for Chemistry in 1996 for their discovery of the 60-atom "buckyball," shaped like a soccer ball. While fullerenes of varying sizes have since been fashioned, each representing a new elemental form of carbon, geometry dictates that 20 atoms is the smallest a buckyball can be. But no one had actually created one that tiny, until Scott and his colleagues did. In so doing, the team confirmed controversial theories on the nature and properties of these carbon molecules. "This greatly expands the range of size of fullerenes, while opening up their range of possible uses in the future," said Scott. "If you can make the smallest one, you can, in principle, make any of them between 20 and 60 carbon atoms." He said potential applications of the research include the development of AIDS-HIV protease inhibitors and other medicines, and the creation of light-emitting electronic materials used in flat-screen televisions and pocket calculators.
<urn:uuid:c38c4afe-7946-4bff-9e83-e4af27ace496>
3.40625
409
Nonfiction Writing
Science & Tech.
42.868757
A charge Q = 2.70 x 10-04 C is distributed uniformly along a rod of length 2L, extending from y = -27.4 cm to y = +27.4 cm, as shown in the diagram above. A charge q = 4.75 x C, and the same sign as Q, is placed at (D,0), where D = 89.5 cm. Consider the situation as described above and the following If the statement is true, answer `T', if it is false, answer `F', and if the answer cannot be determined from the information provided, answer `C'. For example if `B' and `C' are true and there is not enough information to answer `D' and the rest are false, then answer `FTTCF'. A)The net force on q in the y-direction does not equal zero. B)The magnitude of the force on charge q due to the small segment dy is dF=(kqQ)dy/(2Lr*r) C)The net force on q in the x-direction equals zero. D)The total force on q is in the east direction. E)The charge on a segment of the rod of infinitesimal length dy is given by dQ=(Q/L*L)dy
<urn:uuid:1d8eb9c1-8881-40d2-abcc-7030267ff57b>
3.75
286
Q&A Forum
Science & Tech.
90.971068
I am using Runtime.getRuntime().exec("javac " + path + file) to compile a Java file...it works just fine. Then when I use the same code to run it, Runtime.getRuntime().exec("java " + path + class), it comes back with a NoClassDefFoundError. Doesn't that usually mean that the classpath is not set correctly? But if I can do it with javac then why not java? Thanks Cardwell Joined: Sep 29, 2000 Is the JVM putting the output class file in the classpath? "JavaRanch, where the deer and the Certified play" - David O'Meara Joined: Jan 30, 2000 My guess: use the -d option of javac to make javac create the class files in appropriate directories for their packages. E.g. if Test.java has a class Test in package mypkg, then "javac -d classdir Test.java" will create the file in classdir/mypkg/Test.class. I've never figured out why this isn't automatic, but you need to specify the option.
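For illustration only, here is the same compile-then-run sequence scripted in Python rather than through Runtime.exec; the directory, package and class names are made up, and it assumes a JDK on the PATH. The point is pairing javac -d with java -cp and the fully qualified class name.

```python
import pathlib
import subprocess

src = pathlib.Path("src/mypkg/Test.java")    # hypothetical source file declaring "package mypkg;"
out = pathlib.Path("classes")
out.mkdir(exist_ok=True)

# Compile with -d so the class file lands in classes/mypkg/Test.class
subprocess.run(["javac", "-d", str(out), str(src)], check=True)

# Run with -cp pointing at that directory and the fully qualified class name;
# otherwise the JVM cannot find the class and reports NoClassDefFoundError.
subprocess.run(["java", "-cp", str(out), "mypkg.Test"], check=True)
```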
<urn:uuid:3a2fa520-2ac0-4d35-8c2b-484c52439e4a>
2.75
245
Comment Section
Software Dev.
76.770537
© CI/photo by Stephen Richards A rapid assessment survey of one of the three highest priorities for study in Papua New Guinea. The Muller Range had been recognized as a High Biodiversity Priority area by the Papua New Guinea Conservation Needs Assessment and was included in a World Heritage nomination. The rugged topography and difficulty of accessing the sparsely populated interior of the Muller Range have made documentation of the area difficult. The RAP team consisted of both PNG and international scientists with expertise in plants, invertebrates (ants, katydids, odonates, spiders), mammals, birds, reptiles and amphibians. Meet the team >> A site on the southern edge of the Muller Range in central-western Papua New Guinea. The scientists spent a week at each of three camps at 500, 1,600 and 2,875 meters altitude. 500 m. Camp 1 was on a low ridge in lowland rainforest at the base of the Muller Range. Surprisingly, the vegetation here contained some floristic elements more typical of montane forests elsewhere in New Guinea, presumably reflecting the incredibly wet environment at this site. Camp 1 also had the highest overall biological diversity found at the three sites and many new species of ants, katydids, odonates (dragonflies and damselflies) and spiders were documented. 1,600 m. Camp 2 was wet, steep, and located in montane Nothofagus forest where the shrubs, trees, and even the ground were covered with dripping moss. The abundance and diversity of invertebrates was lower here than in the lowland forest, and permanent water was scarce so that odonates were absent altogether. However a number of new and interesting species of katydids, ants and spiders were found, along with several new frog species and many new plants. Camp 2 also had a high abundance of small mammals, many possums, and signs of longbeaked echidnas and tree kangaroos. These larger animals have been hunted to rarity or extinction in many areas of New Guinea, and our sightings of tree kangaroos and numerous possums and cuscus indicate that hunting pressure is low and fauna populations in the remote interior of the Muller Range are still healthy. 2,875 m. Camp 3 was located in an unusual mosaic of sub-alpine fernland and extremely dense and mossy upper-montane forest. Many orchids and rhododendrons were present and, although animal diversity was low, many species found by the RAP team were of great interest biogeographically including new species of frogs, katydids, stick insects and possibly ants. The Muller Range is an unexplored, biologically unknown region of Papua New Guinea that straddles Southern Highlands and Western Provinces and represents a major gap in our knowledge of PNG's biological diversity. The objectives of the RAP survey were to: - Collect biodiversity data for the area to aid local and regional conservation, management, and corridor planning, - Contribute to a greater understanding of the fauna and flora in the Muller Range. Incredibly, the RAP team discovered over 120 species new to science including 9 new plants, two of which have already been described by the team's senior botanist Wayne Takeuchi. Other highlights included 22 new vertebrates (frogs and mammals) and over 100 new insects (damselflies, ants, and orthopterans including katydids and stick insects) and spiders. 
See the species found >> The team also observed many Birds of Paradise, and a number of bird species with plumage patterns distinctly different from populations elsewhere in New Guinea, suggesting that the Muller Range is a biogeographically significant area. The results of this expedition are being made available to the National and Provincial Governments in the hope that our remarkable biological discoveries will add impetus to the declaration of the Muller Range Karsts as a World Heritage Area. Plants: Approximately 700 species documented, including nine species new to science. Also several extremely significant distributional records of poorly known genera and species, thus providing valuable data on distribution and conservation status. Many taxa photographed for the first time. Spiders: About 90 species of spiders were caught, belonging to about 19 families, including 'true spiders' (Araneomorphae) and mygalomorphs (Mygalomorphae). It is likely, given prior knowledge of PNG spiders, especially in the Muller Range area, that over 50 percent of the spider species are new to science. Particularly significant discoveries include the first subsocial species of the genus Anelosimus from New Guinea and an entirely new genus of theridiid spider. Ants: Over 200 species of ants recorded from the lowland site, which is very high diversity, of which a high number (29) are new species for science. Fewer species at higher elevations, but several additional new species documented. The three species found at Camp 3 (2,875 m) represent the highest recorded elevation for ants in New Guinea. Katydids and other orthoptera: Approximately 150 species, of which as many as 30 are new to science. Diversity at the 500 m camp was exceptionally high and may have exceeded 300 species with further sampling effort. Dragonflies and damselflies: 44 species, of which at least 6 are new species for science. An extremely significant discovery was that larvae of the damselfly genus Papuagrion are semi-terrestrial and live in Pandanus leaves above the forest floor. This is a lifestyle never reported for dragonflies or damselflies anywhere in the world before. A number of significant distributional records were also made, and the high diversity of odonata at Camp 1 (40 species) indicates the importance of freshwater conservation in these habitats. Herpetofauna (frogs and reptiles): More than 60 species documented, including 20 frogs new to science. Also extremely valuable data on distributions and habitat requirements of poorly-known frog species. Birds: Approximately 130 species documented, including at least 12 species of Birds of Paradise. Several birds with unusual plumage or appearance require taxonomic investigation. Mammals: 23 species of mammals, including likely 2 new species (a bat and a possum). Also sightings of Doria's Tree Kangaroo, many possums, and numerous signs of Long-beaked Echidnas (Zaglossus), indicating that healthy populations of large mammals survive in this area.
<urn:uuid:378c67f9-9ff8-477a-9dbf-56818f6e7eca>
3.796875
1,316
Knowledge Article
Science & Tech.
29.313773
weevil, common name for certain beetles of the snout beetle family (Curculionidae), small, usually dull-colored, hard-bodied insects. The mouthparts of snout beetles are modified into down-curved snouts, or beaks, adapted for boring into plants; the jaws are at the end of the snout. The bent antennae usually project from the middle of the snout. The largest weevils are about 3 in. (7.6 cm) long, with the average length being about 1/4 in. (0.6 cm). The snout varies greatly in length among the different species; in the curculios, or nut weevils, it may be longer than the body. Different weevil species attack different parts of plants—fruits, seeds, leaves, stems, or roots. In most species the female lays her eggs inside the plant tissue, on which the growing larvae feed. The granary weevil and rice weevil are serious pests of stored cereal grains. The thousands of other destructive weevil species include the sweet-potato, vegetable, alfalfa, clover leaf, strawberry, and pine weevils, as well as the cotton boll weevil, the most serious weevil pest in the United States. The seed weevils, including the bean weevil, are not true weevils, but boring beetles of another family; they feed on leguminous crops, such as peas and beans. Weevils cause millions of dollars' worth of damage annually. The bark beetles, or engraver beetles, are related to the weevils. True weevils are classified in the phylum Arthropoda, class Insecta, order Coleoptera, family Curculionidae. The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
<urn:uuid:a4aaa30a-727a-4ace-851c-c33d28e27c83>
3.734375
410
Knowledge Article
Science & Tech.
51.353337
What causes the jet stream that is caused by aircraft? Is it dependent on the altitude of the aircraft? Do passenger aircraft have enough speed to cause a vapor trail? Most jet contrails are produced from passenger aircraft. On a clear summer day it's not unusual to see the sky criss-crossed with jet contrails. The temperature of the air is more important than the altitude or the speed of the jet. Burning jet fuel (like burning most fuels) produces water vapor, which is expelled into the air behind the airplane and mixes with the very cold air. As the jet contrail cools to the dew point or frost point, the water vapor turns into water droplets, or more often ice crystals, forming the visible contrail. Something similar happens with the exhaust from the tailpipe of a car during the cold temperatures of winter; you see a visible plume of water vapor as the exhaust from the car cools. You will also see this happening in the exhaust from a power plant stack. The principle is the same. David R. Cook Atmospheric Physics and Chemistry Section Environmental Research Division Argonne National Laboratory Contrails are "clouds" produced by the exhaust of aircraft. At high enough altitudes where the temperatures are cold enough, the combustion byproducts contain enough moisture and condensation nuclei to cause the moisture to condense and form clouds along the track of the aircraft. These contrails usually spread out and dissipate as the cloud mixes with the ambient air. But sometimes, if conditions are right, the clouds persist, and can even expand, sometimes spreading nearly over the whole sky. Here is a link that has a good discussion of contrails and some photos. Wendell Bechtold, meteorologist Forecaster, National Weather Service Weather Forecast Office, St. Louis, MO
<urn:uuid:7040e366-b98a-480f-b574-c25d8bd1f937>
3.890625
428
Q&A Forum
Science & Tech.
49.40015
Aug 3, 2009, 11:39 AM  #1
question about conservation of energy/mass
I've been thinking about this problem today. I tried reasoning my way through it and I haven't been able to. I might be totally missing something obvious, so if so please feel free to laugh and point it out, but for me right now this is confusing. It deals with the conservation of energy/mass: Let's say that a 10,000 kg spaceship is at rest in outer space. The crew decides that they want to then begin moving, so they fire their engines. They accelerate at 500 m/s^2 for 10 seconds.
F = mA
F = (10,000 kg)(500 m/s^2)
F = 5,000,000 N
xt = x0 + v0*t + (A*t^2)/2
xt = 0 + 0 + ((500 m/s^2)*(10 s)^2)/2
xt = 25,000 m
W = Fd
W = (5,000,000 N)(25,000 m)
W = 1.25 x 10^11 J
Their engines expended 1.25 x 10^11 joules of energy doing this process (I'm assuming 100% efficiency for simplicity, but even if it weren't it shouldn't change the nature of my problem).
vt = v0 + A*t
vt = 0 + (500 m/s^2)(10 s)
vt = 5,000 m/s
KE = (m*v^2)/2
KE = ((10,000 kg)(5,000 m/s)^2)/2
KE = 1.25 x 10^11 J
Since there was 100% efficiency, it makes sense that the energy that the ship used would then be converted to its kinetic energy. This is conservation of energy, in that the energy stored as fuel is now the ship's kinetic energy. Now the crew of the ship decides they want to stop moving after reaching their destination. The ship then chooses to fire their rockets in the same fashion, except in the opposite direction: 500 m/s^2 for 10 seconds. The force applied to the ship by the engines is negative (because the acceleration is in the opposite direction to the first burst) and the work done on the ship is negative. Therefore, after the 10 seconds the ship's velocity will be 0 and therefore kinetic energy will be 0. However, during the burn, the engines were still expending energy as they were fired: another 1.25 x 10^11 J. One would think that because the ship is reducing its kinetic energy, the engines would actually be taking that energy back in and storing it for use later. However, in this example, the engines are actually expending more energy. So now to conclude, the ship now has 0 joules of KE and over the process has expended 2.5 x 10^11 joules of energy. What has happened to this energy from a standpoint of conservation of energy? Where is it? I don't understand how it was conserved, because although in the first half it would have been transferred to KE, once the ship decelerated I see it as being "lost forever". What am I missing?
Aug 3, 2009, 12:23 PM  #2
You need to think about what the 'engines' are doing. Consider the case of some kind of thrusters. These work by converting stored chemical potential energy in the fuel into kinetic energy. The fuel is ignited in the thrusters, causing the fuel to be expelled from the thrusters and the spaceship to be propelled forwards (conservation of momentum/Newton's 3rd law). This increases the kinetic energy of the fuel and of the spaceship. Now consider slowing down. The fuel is ignited in the thrusters and directed such that it is expelled outwards in the direction of travel. This causes a backward acceleration (i.e. deceleration) of the spaceship (again conservation of momentum/Newton's 3rd law). This increases the kinetic energy of the fuel and decreases the kinetic energy of the spaceship.
However, this time the fuel has more kinetic energy (because the spaceship was already travelling in the direction it was expelled), which balances the loss in kinetic energy of the spaceship. Energy remains conserved.

Of course, in reality the expelled fuel has mass, so you would need to use the full version of Newton's second law... but that's a separate issue.

Aug 3, 2009, 05:45 PM · #3

Ah, that makes sense. Thanks!
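A rough numerical check of this bookkeeping is sketched below. Only the 10,000 kg ship mass comes from the thread; the parcel mass and exhaust speed are illustrative assumptions, and the model treats each burn as a single expelled parcel.

```python
# Toy model: the ship changes speed by throwing discrete exhaust parcels.
# Momentum is conserved exactly; the energy "released" by a burn is the total
# change in kinetic energy of ship + parcel (the chemical energy converted).

M_SHIP = 10_000.0    # ship mass before any burn, kg (from the thread)
M_PARCEL = 200.0     # exhaust parcel mass, kg (assumed)
U_EXHAUST = 4_500.0  # exhaust speed relative to the ship, m/s (assumed)

def kinetic(m, v):
    return 0.5 * m * v * v

def burn(ship_m, ship_v, direction):
    """Eject one parcel; direction=-1 speeds the ship up, +1 slows it down."""
    parcel_v = ship_v + direction * U_EXHAUST                 # rest-frame parcel velocity
    new_m = ship_m - M_PARCEL
    new_v = (ship_m * ship_v - M_PARCEL * parcel_v) / new_m   # momentum conservation
    released = (kinetic(new_m, new_v) + kinetic(M_PARCEL, parcel_v)
                - kinetic(ship_m, ship_v))
    return new_m, new_v, parcel_v, released

m, v = M_SHIP, 0.0
m, v, exhaust1_v, e1 = burn(m, v, -1)   # accelerate
m, v, exhaust2_v, e2 = burn(m, v, +1)   # decelerate; ship ends near rest

final_ke = kinetic(m, v) + kinetic(M_PARCEL, exhaust1_v) + kinetic(M_PARCEL, exhaust2_v)
print(f"ship velocity after both burns: {v:9.2f} m/s")
print(f"energy released by burn 1:      {e1:12.4g} J")
print(f"energy released by burn 2:      {e2:12.4g} J")
print(f"total released / final KE:      {e1 + e2:12.4g} J / {final_ke:12.4g} J")
```

The two burns release nearly the same chemical energy, but the second exhaust parcel leaves faster in the original rest frame (it was thrown in the direction the ship was already moving), so it carries away the ship's kinetic energy. The total energy released matches the final kinetic energy of the exhaust, which is where the "missing" energy ends up.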
<urn:uuid:11d79626-ec37-4736-b705-2393ee10c472>
3.046875
1,095
Comment Section
Science & Tech.
74.587121
Posted by greg2213 on May 3, 2010

Never mind that hundreds of peer-reviewed papers have shown that the Medieval Warm Period (MWP) was warmer than the present period, and world-wide; certain groups of people have scoffed at the idea that there is nothing unusual about modern times, climate-wise. Now even Jones, Briffa, and other unimpeachable sources say that it was warmer back then (at least in Greenland).

See: The Medieval Warm Period in Greenland

Posted by greg2213 on February 15, 2010

One of the chief scientists behind global warming says that any modern warming is similar to, and not statistically significantly different from, prior warming.

Do you agree that according to the global temperature record used by the IPCC, the rates of global warming from 1860-1880, 1910-1940 and 1975-1998 were identical?

An initial point to make is that in the responses to these questions I've assumed that when you talk about the global temperature record, you mean the record that combines the estimates from land regions with those from the marine regions of the world. CRU produces the land component, with the Met Office Hadley Centre producing the marine component. Temperature data for the period 1860-1880 are more uncertain, because of sparser coverage, than for later periods in the 20th Century. The 1860-1880 period is also only 21 years in length. As for the two periods 1910-40 and 1975-1998, the warming rates are not statistically significantly different (see numbers below). I have also included the trend over the period 1975 to 2009, which has a very similar trend to the period 1975-1998. So, in answer to the question, the warming rates for all 4 periods are similar and not statistically significantly different from each other.

This was from a BBC interview with Dr. Jones, via WattsUpWithThat.

Caveats: so why is Dr. Jones saying this? New data? New review of data? Tired of fighting? Tired of pressure to conform? CYA in case AGW falls apart?
<urn:uuid:6d422728-9cda-4aa4-895c-c0fa404e2d1f>
3.03125
480
Personal Blog
Science & Tech.
57.178421
Science Finds Amazing New Uses for Sound (Jul, 1931)

by DR. SERGIUS P. GRACE
Assistant to Vice President, Bell Telephone Laboratories
As told to J. EARLE MILLER

Thanks to astounding discoveries made recently in the field of sound, you will soon be able to talk around the world, deaf mutes will hear, and communication in battle areas will be revolutionized. The amazing inventions which make such feats possible are described in this article.

IN A recent lecture on the new marvels being developed in the Bell Laboratories I placed my finger against the ear of one of the members of the audience, and he "heard" music and speech, though not a sound was audible on the stage. That was electrostatic projection of speech directly into the human brain. The speech had been transformed into high voltage electric current, passed through my body, while the ear drum and surrounding tissue of the subject acted as one plate of a condenser receiver.

Just a laboratory experiment, so far, yet it is one of many new discoveries that are opening new leads into the field of hearing, helping improve devices for the transmission of sound, and bringing hearing back to the deaf. It is quite possible, as some research workers have suggested, that science may make it practical to "hear" without using the ear drum at all. Hearing, it has been discovered, is partly an electrical manifestation in which a minute current is generated in the auditory nerve. At Princeton University not long ago electrical contacts were established to the auditory nerve and brain of a cat, the minute current in the nerve amplified through radio tubes, and words spoken into the ear of the living cat in one room were reproduced through loud speakers in another.

At Bell Laboratories we not only have developed special apparatus to make it easier for deaf people to use the telephone, but we have also created several portable outfits, known as audiphones, which help these unfortunate ones to join again in conversation with their friends. Few people realize that there is a definite upper limit to the loudness that the ear can stand, so that extreme cases of deafness are beyond the aid of even the most potent amplifier.

Under the guidance of Dr. Harvey Fletcher we have developed the audiometer, an instrument to measure hearing. It shows conclusively that no two cases of deafness are quite alike, and that a hearing aid that proves successful for one person may be quite unsuited for another. Dr. Fletcher has developed a simple method in which someone reads off lists of words, at a distance of three feet from the pick-up device. One list contains fifty words such as "bat, bite, boot, beat" in which the vowel is different for each; the other has fifty words in which the consonant differs, such as "by, high, thy, guy, why". After making this test with several hearing aids, and scoring errors in the consonants twice as heavily as in the vowels, the user can decide which device best suits his particular kind of deafness.

The projection of speech direct into the brain is only one of the fantastic marvels of recent years. Every day the air is filled with trans-oceanic radio telephone messages, winging their way between New York and Europe, South America, Australia and South Africa. From any telephone in the United States you can call any telephone subscriber in those far flung quarters of the globe, but when you talk no eavesdropper can listen in.
Far down in lower New York your voice passes through a speech inverter mechanism which turns it into a new language, unintelligible to the ear. With patient practice one can learn some of the inverted words, but many are sounds which the human larynx cannot master.

The speech inverter is quite simple in theory. Normal telephone speech falls between 100 and 2,900 cycles. When the speech is inverted, for every vibration is substituted a new one whose frequency is equal to some selected constant frequency, less the frequency of the original sound. We use 3,000 cycles for the constant frequency, and a deep bass note of 100 cycles is thus transformed into a high soprano of 2,900 cycles, while the high notes are inverted to deep bass. The result is much like passing light through a camera lens, which inverts the image, so the top of the picture is focused on the bottom of the film in the camera, and the bottom of the image appears at the top. At the other end of the trans-oceanic radio phone circuits similar inverters reverse the process, and the party at the other end hears normal speech.

If there is another war the speech inverter may revolutionize the secrecy problem, not only in radio conversations, but on the telephone lines at the front. For, by simply changing the cycle according to a pre-determined arrangement, the enemy would be unable to pick up and translate the messages, even with a similar machine. Learning to speak a few words of the inverted language, so you can talk into a microphone and have them come out of the inverter changed into intelligible English, is an interesting experiment. If, for example, you can say "Cyaneon Playafeen Acecilofin" into the transmitter, the inverting apparatus will repeat back "Illinois Telephone Association". There are always portions of the sounds that no human throat can master. Such a simple word as company, for example, becomes crink-a-nope.

Speaking of possible war uses of new sound apparatus, the "talking light", one of the most interesting things still in the laboratory experimental stage, offers an opportunity to develop secret wireless communication from the front lines to the rear. The talking light is an electric arc in which the flame acts as a loud speaker, and can be made to speak with almost the volume of a good dynamic speaker. The principle behind the phenomenon was discovered by the inventor of the telephone, Alexander Graham Bell, and Hammond V. Hayes, one of the early Bell System engineers. They found that speech could be transmitted by a beam of light, and also that when a telephone transmitter was connected across the terminals of an electric arc between carbon rods the flaming arc would reproduce the words spoken into the transmitter. At the same time beams of light were sent out which could be used to transmit speech several miles. We have found that the light not only talks, but that it is modulated by the voice current just as the glow of a neon lamp is modulated by the signals reaching a television receiver. It is possible by using photoelectric cells to pick up the beam of modulated light to reproduce it directly at a distant point as spoken words. Such a system would be the last word in directional wireless, for no one could listen in save by inserting a photoelectric tube in the beam, and with the latter directed from the front lines toward the rear, the enemy could not eavesdrop.
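The frequency-inversion rule described above (new frequency = 3,000 cycles minus the original frequency) can be sketched numerically. This is only an illustration; the sample rate, carrier value, and test tone below are assumptions chosen to make the arithmetic obvious.

```python
import numpy as np

FS = 8000          # samples per second
CARRIER = 3000.0   # the inverter's "constant frequency", cycles per second
F_IN = 400.0       # test tone, cycles per second

t = np.arange(FS) / FS                           # one second of samples
tone = np.sin(2 * np.pi * F_IN * t)
mixed = tone * np.cos(2 * np.pi * CARRIER * t)   # multiply by the carrier

spectrum = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(mixed.size, 1 / FS)
print(freqs[spectrum > 0.5 * spectrum.max()])    # -> [2600. 3400.]
```

Mixing with the carrier puts energy at 3,000 - 400 = 2,600 cycles and at 3,000 + 400 = 3,400 cycles; the band-limited telephone channel keeps only the difference component, so a 400-cycle tone comes out at 2,600 cycles and vice versa, which is exactly the inversion the article describes.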
Audiences listening to some of my talks on our work at the Bell Laboratories have been mystified by the fact that, though I am constantly walking about the stage, and no microphone is in sight, my voice reaches them, greatly amplified, through public address loud speakers. The secret is a tiny microphone, no larger than a quarter, hidden in the breast pocket of my coat, and connected to a trailing wire passing down to the floor inside my clothing. Originally a development of an improved transmitter for telephone operators, the device is now serving in a totally different field.

Another use for small and extremely sensitive microphones is in guarding the vaults of the Federal Reserve and many other banks. Tiny microphones are set in the vault walls, and they are so sensitive that a hand tapping against the vault will set them off, while they are so adjusted that they remain insensitive to footsteps passing near, to trucks in the street, and to the rumbling subway trains. A very clever electrical engineer might possibly locate one of the circuits, and, after study, determine just the right amount of resistance to put into the line and so circumvent the microphones while bank robbers could work. So, to guard against that bare chance, the watch service officers are provided with a device by which, at frequent intervals, they can sweep through a whole cycle of resistance changes and determine whether any one has been tampering with the circuit. The cleverest bank robber doesn't have a chance with this system.

Another laboratory experiment which always fascinates the public is the speech delay mechanism, which, in one form, is just a clever amusement device, and, in another, is an important part of the trans-oceanic radio telephone system. With a long coil spring connected to a telephone transmitter at one side of the stage it is possible to so delay the movement of the voice vibrations along the wire that you can speak in one end and hear the words come out the other quite a bit later. That's more or less a mechanical affair, the shape of the coiled spring actually delaying the movement of the waves. But down in lower New York every message passing through on the radio-telephone circuits is delayed for a couple of hundredths of a second by electrical means.

The trans-Atlantic wireless-telephone service can work in only one direction at a time, because the sending and receiving stations are tuned to the same wave-length. A voice current going to Europe is picked up also by the American receiver, and unless the wire line between receiver and transmitter is blocked, the signal would "loop the loop", causing the circuit to howl. If you are talking from New York to London you control the circuit so long as your voice continues. When you stop, relays must work to reverse the channel, so the person at the other end can answer. Your voice, as electrical impulses, travels with the speed of light, while the relays, being mechanical, must have a fraction of a second to work. That's where the speech-delay apparatus comes in. The man in London may start talking instantly, but the apparatus will store up his voice in London for an instant while the relays are doing their work.

While primarily concerned with the development of the telephone, we are constantly contributing to many other fields. One of the most recent developments was the elimination of the last disturbing sound from talking picture films. In the early days of sound recording on movie film there were many outside noises.
Eventually they were eliminated one by one, until finally only a swishing sound remained. By changing the mechanism of the light valve at the recording end it is now possible to eliminate this last noise, and a remarkable change in talking pictures was the result.

Research in the recording of talking picture sounds on wax discs also has resulted in a marked improvement in reproduction. We have gone back to Edison's original method, as used on the old-time cylinder records. He utilized what is known as "hill and dale" recording, the cutting stylus engraving a line of varying depth, instead of the waving, side to side method which Berliner developed with the first disc records. The hill and dale method was not a success because of the limitations of the acoustical method of recording. In following the wavy line on present day records, the reproducing needle is thrown back and forth from side to side of the groove, and does not follow accurately the path traced by the cutting stylus. By reverting to the hill and dale method, and using a permanent needle in the form of a sapphire point, our engineers have attained practically perfect reproduction. The reproducing mechanism itself is a marvel of lightness, weighing so little that the playing life of records is extended indefinitely. We have records which have been played a thousand times without appreciable wear.

Besides contributing to such widely separate fields as better hearing and the talking pictures, our staff also is assisting in the research designed to isolate the cause of cancer, and in other biological fields. This is an offshoot of our investigation into the crystalline structure of metals and alloys, a subject of great importance in the manufacture of telephone apparatus. Francis F. Lucas, with his photo-micrographic equipment which utilizes ultra-violet light and magnifies as much as 5,000 diameters, has been able to explain many obscure things about why metals harden under heat treatment, and why they crack in service. Applying his technique in the field of biology, he has been able to take pictures of living cells without the use of stains, which might damage or change their structure. He is able to take photographs of successive layers right through the cell, at intervals of one one-hundred thousandth of an inch. In other words, if the cell is one-thousandth of an inch thick he can get one hundred photographs showing its structure at as many layers. Pictures taken of the surface of brain tissue have shown such startling things that plans are being considered for improved apparatus by which a "map" of a section of brain may be made, just as aerial photographic maps are made by assembling large numbers of photographs taken at appropriate intervals. If the full power of Dr. Lucas' equipment were utilized, a map of a section of brain only one-fourth of an inch square could be made with an enlargement to more than 104 feet square. Of course, a larger area would first be photographed in smaller detail, and then special sections selected for further magnification and study.

There is no apparent connection between our work and study of cells, but the photo-micrographic equipment has already contributed toward one of the major improvements in telephonic communication, an improvement which is saving at least $10,000,000 worth of lead per year in making telephone cables. Lead hardened with antimony has been used for cable sheaths, but they had an annoying habit of breaking down after a few years of use. Dr.
Lucas discovered by micro-photographs that the antimony after a certain period disassociated itself from the lead. As a result of that discovery a new mixture was found, one so much stronger that a thinner, lighter lead covering could be used, and the result was an enormous saving in lead. His studies of all the metals and alloys which go into telephone equipment manufacture also have contributed to general improvements in many lines.

Another field to which the telephone industry has contributed is the improvement of radio broadcasting. Crystals to control the frequency of a broadcasting station and keep it on its exact wave length have been developed, and so successfully that two stations in Iowa, under the same management, are broadcasting all the time on the same wave length, instead of giving only half time service as would be necessary if they shared the wave.
<urn:uuid:cda62c99-c425-4f6b-beff-dc1a7e3ce701>
3.546875
3,047
Nonfiction Writing
Science & Tech.
38.84945
As if living in Hawaii weren't a great enough life, scientists have found a kind of caterpillar there that lives the best of both worlds—in water and on land. In the Proceedings of the National Academy of Sciences, Daniel Rubinoff's team found that 12 species in the Hawaiian moth genus Hyposmocoma are amphibious in their caterpillar stage, the first amphibious insects ever found.

While most caterpillars are terrestrial (living on land), there are a few—0.5 percent—that are aquatic. However, all of the caterpillars seen before preferred either one or the other. Even classical amphibians, like the toad, often live mainly in one environment and seldom return to the other, perhaps just to lay eggs. But the Hyposmocoma caterpillars seem to have adopted a chilled-out Hawaiian way of life, comfortable with whatever environment they might be in. "They can stay underwater for an indeterminate period of time, or out of the water," said Rubinoff, an entomologist. "There's no other animal that I'm aware of that can do that" [Honolulu Advertiser].

Rubinoff was actually studying the moth because of a different quirk: In its caterpillar stage, the insect builds a sort of container for itself from silk and whatever base material might happen to be lying around. Researchers have also found cases in the shapes of cigars, candy wrappers, oyster shells, dog bones and bowties. "We're running out of names to describe them," Rubinoff says [Science News]. During an excursion to document this weirdness, a surprise shoved him in a different direction: Rubinoff saw caterpillars he previously thought to be landlubbers living happily in water.

So he brought a bunch of specimens to the lab, first testing how they took to water. When the insects flourished, he stranded them in petri dishes with only a bit of carrot and no water. The caterpillars seemed equally at ease in both situations. Whether they're under water or without a drop of moisture for the duration of their adolescence, "these guys don't care," says Rubinoff [ScienceNOW]. They do have a preference for faster-moving water rather than still pools, however. Rubinoff says the caterpillars don't have gills, but rather breathe through their skins while underwater. Thus, a rushing, oxygen-laden stream is their best friend, and their strong silk anchors them against the current.

You can always count on the isolation of islands to spur weird and cool examples of evolution. Hyposmocoma doesn't disappoint. Rubinoff guesses from his genetic analysis that they've been evolving in Hawaii for 20 million years, and he guesses there are actually twice as many species as the 400 already discovered. In 2005, Rubinoff described a caterpillar that hunts down and eats snails. Other caterpillars in this genus feed mostly on rotting wood in the manner of termites, which are relative newcomers to Hawaii [Science News].

DISCOVER: The Clever Tricks That Let Caterpillars Reach Butterflyhood (photo gallery)
80beats: A Gentleman Frog That Takes Monogamy & Parenting Seriously
80beats: Tricky Caterpillars Impersonate Queen Ants to Get Worker Ant Protection
Discoblog: Frogs Pee Away Scientists' Attempts To Study Them

Image: Patrick Schmitz and Daniel Rubinoff
<urn:uuid:66e99843-80f2-4a38-81ac-23640d7cbf8d>
3.515625
741
Personal Blog
Science & Tech.
41.209058
With about 6000 species worldwide, the morphological diversity within the brushfoots is immense. There have been decades of debates about how to classify the group and what traits are important and useful. For our purposes, the uniting characteristic of the brushfoots is the reduction of the front pair of legs into small, brush-like appendages that serve no real function, rather like the human appendix or tailbone. As a result, while they still have 3 pairs of legs (an insect characteristic), only two of those leg pairs are actually functional. Brushfoots are some of our largest and most recognizable butterflies, including the monarch (Danaus plexippus), painted lady (Vanessa cardui), California tortoiseshell (Nymphalis californica), and mourning cloak (Nymphalis antiopa).
<urn:uuid:bfcd1cfc-75bf-4302-81fd-46023d71976b>
3.328125
166
Knowledge Article
Science & Tech.
22.560923
Roll over Einstein: Pillar of physics challenged

Published: September 23, 2011

A startling find at one of the world's foremost laboratories that a subatomic particle seemed to move faster than the speed of light has scientists around the world rethinking Albert Einstein and one of the foundations of physics. Now they are planning to put the finding to further high-speed tests to see if a revolutionary shift in explaining the workings of the universe is needed — or if the European scientists made a mistake.

Researchers at CERN, the European Organization for Nuclear Research outside Geneva, who announced the discovery Thursday are still somewhat surprised themselves and planned to detail their findings on Friday. If these results are confirmed, they won't change at all the way we live or the way the universe behaves. After all, these particles have presumably been speed demons for billions of years. But the finding will fundamentally change our understanding of how the world works, physicists said.

Only two labs elsewhere in the world can try to replicate the results. One is Fermilab outside Chicago and the other is a Japanese lab put on hold by the March tsunami and earthquake. Fermilab officials met Thursday about verifying the European study and said their particle beam is already up and running. The only trouble is that their measuring systems aren't nearly as precise as the Europeans' and won't be upgraded for a while, said Fermilab scientist Rob Plunkett.

"This thing is so important many of the normal scientific rivalries fall by the wayside," said Plunkett, a spokesman for the Fermilab team's experiments. "Everybody is going to be looking at every piece of information." Plunkett said he is keeping an open mind on whether Einstein's theories need an update, but he added: "It's dangerous to lay odds against Einstein. Einstein has been tested repeatedly over and over again."

Going faster than light is something that is just not supposed to happen according to Einstein's 1905 special theory of relativity — the one made famous by the equation E equals mc2. The speed of light — 186,282 miles per second (299,792 kilometers per second) — has long been considered a cosmic speed limit.

"We'd be thrilled if it's right because we love something that shakes the foundation of what we believe," said famed Columbia University physicist Brian Greene. "That's what we live for."

The claim is being greeted with skepticism inside and outside the European lab. "The feeling that most people have is this can't be right, this can't be real," said James Gillies, a spokesman for CERN. CERN provided the particle accelerator to send neutrinos on a breakneck 454-mile (730-kilometer) trip underground from Geneva to Italy. France's National Institute for Nuclear and Particle Physics Research collaborated with Italy's Gran Sasso National Laboratory for the experiment, which has no connection to the atom-smashing Large Hadron Collider, which is also located at CERN.

Gillies told The Associated Press that the readings have so astounded researchers that "they are inviting the broader physics community to look at what they've done and really scrutinize it in great detail." That will be necessary, because Einstein's special relativity theory underlies "pretty much everything in modern physics," said John Ellis, a theoretical physicist at CERN who was not involved in the experiment. "It has worked perfectly up until now." And part of that theory is that nothing is faster than the speed of light.
CERN reported that a neutrino beam fired from a particle accelerator near Geneva to a lab in Italy made the trip 60 nanoseconds faster than light would have. Scientists calculated the margin of error at just 10 nanoseconds, making the difference statistically significant. Given the enormous implications of the find, they spent months checking and rechecking their results to make sure there were no flaws in the experiment.
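To put the reported figures in perspective, here is a quick back-of-the-envelope check. The 730 km baseline and the 60 ± 10 ns timing numbers are taken from the article; the speed of light is the standard value.

```python
C = 299_792_458.0      # speed of light, m/s
BASELINE = 730_000.0   # Geneva to Gran Sasso, m (approximate figure from the article)
EARLY = 60e-9          # reported early arrival, s
ERROR = 10e-9          # reported measurement uncertainty, s

light_time = BASELINE / C
print(f"light travel time over the baseline: {light_time * 1e3:.3f} ms")
print(f"claimed speed excess: {EARLY / light_time:.2e} of c")   # roughly 2.5e-5
print(f"naive significance: {EARLY / ERROR:.0f} sigma")         # roughly 6
```

So the claim amounted to neutrinos beating light by about 25 parts per million over a flight of roughly 2.4 milliseconds, a tiny but, if real, revolutionary excess.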
<urn:uuid:3dccc6db-0283-414c-930b-c55487543ede>
2.796875
1,042
Truncated
Science & Tech.
42.993553
The popular JUnit test framework is integrated into Android. JUnit, if used properly, brings two major benefits to the test implementation. - JUnit enforces the hierarchical organization of the test cases - The pattern JUnit is based on guarantees the independence of the test cases and minimizes their interference. The most important impact of using JUnit is the way one test method is executed. For each test method, the TestCase descendant class is instantiated, its setUp() method is invoked, then the test method in question is invoked, then the tearDown() method is invoked then the instance becomes garbage. This is repeated for each test method. This guarantees that test methods are independent of each other, their interference is minimized. Of course, "clever" programmers can work around this pattern and implement test cases that depend on each other but this will bring us back to the mess that was to be eliminated in the first place. JUnit also provides ways to collect test classes into test suites and organize these suites hierarchically. JUnit has been included into Android and Android-specific extensions were provided in the android.test package. Android test cases classes should inherit from android.test.AndroidTestCase instead of junit.framework.TestCase. The main difference between the two is that AndroidTestCase knows about the Context object and provides method to obtain the current Context. As many Android API methods need the Context object, this makes coding of Android test cases much easier. AndroidTestCase already contains one test method that tests the correct setup of the test case class, the testAndroidTestCaseSetupProperly method. This adds one test method to every AndroidTestCase descendant but this fact rarely disturbs the programmer. Those familiar with the "big", PC-based JUnit implementations will miss a UI-oriented TestRunner. The android.test.AndroidTestRunner does not have any user interface (not even console-based UI). If you want to get any readable feedback out of it, you have to handle callbacks from the test runner. You can download the example program from here. Our example program contains a very simple test runner UI, implemented as an Activity. The UI provides just the main progress indicators, details are written into the log so this implementation is nowhere near to the flexible test runners of the "big" JUnit. The test suite contains two test case classes. One of them (the MathTest) is really trivial. It demonstrates, however, the hidden testAndroidTestCaseSetupProperly() test method that explains the two extra test cases (the test runner will display 5 test cases when we have really only 3). The main lesson from the other test case class (ContactTest) is that individual test cases must be made independent from each other. This test case class inserts and deletes the test contact for each test method execution, cleaning up its garbage after each test execution. This is the effort that must be made to minimize interference. ExampleSuite organizes these two test classes into a suite. As we respected the JUnit test method naming conventions, we can leave the discovery of test methods to JUnit. At this point, we have everything to write structured, easy to maintain tests for Android, using the Android test instrumentation mechanism to drive the UI. This will be my next step.
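The per-method lifecycle described above is the general xUnit pattern rather than something unique to Android. Python's unittest module (itself modeled on JUnit) follows the same rules, so a short sketch there illustrates the isolation guarantee; the class and method names below are invented for the example.

```python
import unittest

class ContactLikeTest(unittest.TestCase):
    def setUp(self):
        # Runs before EVERY test method, on a freshly created instance.
        self.contacts = ["test contact"]

    def tearDown(self):
        # Runs after every test method, so each test cleans up its own garbage.
        self.contacts.clear()

    def test_add(self):
        self.contacts.append("second contact")
        self.assertEqual(len(self.contacts), 2)

    def test_starts_clean(self):
        # Passes even if test_add ran first: a new instance plus setUp() means
        # the mutation made there is never visible here.
        self.assertEqual(self.contacts, ["test contact"])

if __name__ == "__main__":
    unittest.main()
```

An AndroidTestCase subclass behaves the same way, with the added convenience of the getContext() accessor for the Context object mentioned above.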
<urn:uuid:0c4c03de-aaff-4347-9d26-3ea9327e0ff8>
2.828125
679
Personal Blog
Software Dev.
41.566308
Alas, if anyone had a solid answer to this, we'd publish it and become famous exoplanet scientists. In our own solar system, certainly most of the planetary mass is in the outer planets. The reason that is often cited has to do with the ice line. Beyond a certain distance, water and methane and other such materials are solid rather than gaseous. More solid matter $\Rightarrow$ faster accretion $\Rightarrow$ bigger final products. However, the relative sizes of the outer planets amongst one another is still not so easily determined. If you believe the Nice model, which suggests Uranus and Neptune have swapped orbits, then the outer planet masses were originally monotonically decreasing with distance from the Sun, but there are no quantitative predictions regarding this. As it turns out, though, there are hundreds of planetary systems out there that look nothing like our Solar system. Radial velocity and transiting surveys have found plenty of "hot Jupiters" - Jupiter-mass and above planets orbiting closer to their stars than Mercury around the Sun. The transiting data obtained with the Kepler mission is given on the Kepler website. Note the large masses at small separations. On the other hand, with direct imaging, we have found large objects orbiting their stars much further than Neptune orbits the Sun. See for instance Fomalhaut b and HR 8799 c and b. There may very well be underlying trends, but seeing them amongst all this diversity will require a lot more data. Right now exoplanet science is in its infancy - it is more exploratory than systematic, especially given how hard it is to detect small planets. The only hard restriction is if a proposed arrangement of planets (usually close together and massive) proves to be dynamically unstable on (astronomically) short timescales. Since such systems would not last for very long, we do not expect to see many of them. Of course, calculating such dynamics can be almost as tricky as doing the dust simulation you wanted to avoid. Ultimately, if you are procedurally generating systems for some visualization/gaming software, no one can fault you for having a little poetic license when it comes to inputting parameters for what these systems may look like.
<urn:uuid:c5bd8f24-52df-4f6e-969f-84d5889f1dc1>
3.375
456
Q&A Forum
Science & Tech.
36.367357
I understand that when you pluck a guitar string, then a bunch of harmonic frequencies are produced rather than just the frequency of the desired note. If this is true, why does C2 sound so different ... Close ended instruments have twice the wavelength, because the wave must travel twice the distance to repeat itself. Why must a wave reach a lower density medium (air in this case) to repeat? When ... can you recommend some instructive online source where I could find some information about physics of musical theory. It can be either basic or advanced. I'd like to improve my guitar play ...
<urn:uuid:d4d7dbd9-157f-4ec4-a620-57976ddd0ae7>
3.3125
121
Q&A Forum
Science & Tech.
59.736432
Researchers have constructed a molecular catalyst that can oxidize water to oxygen very rapidly. In fact, these scientists have managed to reach speeds approximating those of natural photosynthesis. The speed with which natural photosynthesis occurs is about 100 to 400 turnovers per second. Scientists have now reached over 300 turnovers per second with their artificial photosynthesis. The research findings play a critical role for the future use of solar energy and other renewable energy sources.

Last 5 posts in Science Updates
- Artificial forest for solar water-splitting: First fully integrated artificial photosynthesis nanosystem - May 16th, 2013
- Significant improvement in performance of solar-powered hydrogen generation - May 15th, 2013
- Solar panels as inexpensive as paint? - May 13th, 2013
- 'Power plants': How to harvest electricity directly from plants - May 9th, 2013
- Value in concentrating solar power to add to electric grid calculated - May 7th, 2013
<urn:uuid:01382e96-0000-4e6d-84b3-69d552e16af0>
3.53125
191
Content Listing
Science & Tech.
20.543467
An amazing article from the NYTimes:

Across millions of acres, the pines of the northern and central Rockies are dying, just one among many types of forests that are showing signs of distress these days. From the mountainous Southwest deep into Texas, wildfires raced across parched landscapes this summer, burning millions more acres. In Colorado, at least 15 percent of that state's spectacular aspen forests have gone into decline because of a lack of water.

"A lot of ecologists like me are starting to think all these agents, like insects and fires, are just the proximate cause, and the real culprit is water stress caused by climate change," said Robert L. Crabtree, head of a center studying the Yellowstone region. "It doesn't really matter what kills the trees — they're on their way out. The big question is, Are they going to regrow? If they don't, we could very well catastrophically lose our forests."

How bad could this get? We don't have the science on that yet. But just as a yardstick of comparison, if forest loss became severe enough that forests became net neutral in carbon emissions, the rate of growth of CO2 concentrations would increase by 50%. Forest management and better tilling practices are the most promising ways to induce carbon sequestration. But how much hope is there of that, when we are not even fully supporting an end to tropical deforestation or the thinning of our own fire-prone western forests?

Many scientists had hoped that serious forest damage would not set in before the middle of the 21st century, and that people would have time to get emissions of heat-trapping gases under control before then. Some of them have been shocked in recent years by what they are seeing.

"The amount of area burning now in Siberia is just startling — individual years with 30 million acres burned," Dr. Swetnam said, describing an area the size of Pennsylvania. "The big fires that are occurring in the American Southwest are extraordinary in terms of their severity, on time scales of thousands of years. If we were to continue at this rate through the century, you're looking at the loss of at least half the forest landscape of the Southwest."
<urn:uuid:f2417002-eda1-4536-affd-4c064e7f2248>
3.15625
466
Personal Blog
Science & Tech.
50.055917
We'll stick with the background vector space V with inner product ⟨·,·⟩. If we want another inner product to actually work with, we need to pick out a bilinear form (or sesquilinear, over ℂ). So this means we need a transformation B to stick between bras and kets: the new form sends a pair of vectors to ⟨v|B|w⟩.

Now, for our new bilinear form to be an inner product it must be symmetric (or conjugate-symmetric). This is satisfied by picking our transformation to be symmetric (or hermitian). But we also need our form to be "positive-definite". That is, we need ⟨v|B|v⟩ ≥ 0 for all vectors v, and for equality to obtain only when v = 0.

So let's look at this condition on its own, first over ℝ. If B is antisymmetric, then by taking the adjoint we see that ⟨v|B|v⟩ = −⟨v|B|v⟩, and thus this expression must be zero. But an arbitrary transformation B can be split into a symmetric part S = (B + B*)/2 and an antisymmetric part A = (B − B*)/2. It's easy to check that ⟨v|B|v⟩ = ⟨v|S|v⟩. So the antisymmetric part of B might as well be trivial, and the concept of being "positive-definite" only makes real sense for symmetric transformations.

What happens over ℂ? Now we want to interpret the positivity condition as saying that ⟨v|B|v⟩ is first and foremost a real number. Then, taking adjoints we see that ⟨v|B|v⟩ = ⟨v|B*|v⟩. Thus the transformation B − B* must always give zero when we feed it two copies of the same vector. But now we have the polarization identities to work with! The real and imaginary parts of ⟨v|(B − B*)|w⟩ are completely determined in terms of expressions like ⟨u|(B − B*)|u⟩. But since these are always zero, so is the rest of the form. And thus we conclude that B = B*. That is, positive-definiteness only really makes sense for Hermitian transformations.

Actually, this all sort of makes sense. Self-adjoint transformations (symmetric or Hermitian) are analogous to the real numbers sitting inside the complex numbers. Within these, positive-definite matrices are sort of like the positive real numbers. It doesn't make sense to talk about "positive" complex numbers, and it doesn't make sense to talk about "positive-definite" transformations in general.

Now, there are three variations that I should also mention. The most obvious one is for a transformation to be "negative-definite". In this case, we have ⟨v|B|v⟩ ≤ 0 for all v, with equality only for v = 0. We can also have transformations which are "positive-semidefinite" and "negative-semidefinite". These are just the same as the definite versions, except we don't require that equality only obtain for v = 0.
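For reference, the polarization step invoked above can be written out explicitly. With the bra-ket convention used here (conjugate-linear in the bra, linear in the ket), one form of the complex polarization identity is

$$\langle v|B|w\rangle \;=\; \frac{1}{4}\sum_{k=0}^{3} i^{-k}\,\bigl\langle v+i^{k}w \big| B \big| v+i^{k}w \bigr\rangle,$$

so over ℂ a transformation whose "diagonal" values ⟨u|B|u⟩ all vanish gives the zero form. Over the real numbers the symmetric analogue is ⟨v|B|w⟩ = ¼(⟨v+w|B|v+w⟩ − ⟨v−w|B|v−w⟩). (If the opposite linearity convention is preferred, the factors i^{-k} become i^{k}.)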
<urn:uuid:4daab168-7d7a-499f-b254-599027cca174>
2.6875
555
Personal Blog
Science & Tech.
47.867692
Science Fair Project Encyclopedia

Inge Lehmann (May 13, 1888 - February 21, 1993), Fellow of the Royal Society (London) 1969, was a Danish seismologist who, in 1936, argued that the Earth must not only have a molten interior, but also a solid core at its center, which deflects P waves. She presented this discovery in a famous 1936 paper laconically titled P′, which dealt with P waves and other aspects of seismography. She was awarded the Tagea Brandt Rejselegat twice, in 1938 and 1967.

See also: Richard Dixon Oldham

The contents of this article is licensed from www.wikipedia.org under the GNU Free Documentation License.
<urn:uuid:7a7ccfad-e5ea-457c-af04-b9141ed58661>
3.09375
146
Knowledge Article
Science & Tech.
58.003173
Science Fair Project Encyclopedia

PNA can also refer to the Palestinian National Authority or Pakistan National Alliance.

PNA is peptide nucleic acid, a chemical similar to DNA or RNA but differing in the composition of its "backbone." DNA and RNA have a sugar-phosphate backbone (built on deoxyribose or ribose), whereas PNA's backbone is composed of repeating N-(2-aminoethyl)-glycine units linked by peptide bonds. The various purine and pyrimidine bases are linked to the backbone by methylene carbonyl bonds. PNAs are depicted like peptides, with the N-terminus at the first (left) position and the C-terminus at the right.

Since the backbone of PNA contains no charged phosphate groups, the binding between PNA/DNA strands is stronger than between DNA/DNA strands due to the lack of electrostatic repulsion. Early experiments with homopyrimidine strands (strands consisting of only one repeated pyrimidine base) have shown that the Tm ("melting" temperature) of a 6-base thymine PNA/adenine DNA double helix was 31°C, in comparison to an equivalent 6-base DNA/DNA duplex that denatures at a temperature less than 10°C. Mixed base PNA molecules are true mimics of DNA molecules in terms of base-pair recognition. PNA/PNA binding is stronger than PNA/DNA binding.

Synthetic peptide nucleic acid oligomers have been used in recent years in molecular biology procedures, diagnostic assays and antisense therapies. Due to their higher binding strength it is not necessary to design long PNA oligomers for use in these roles, which usually require oligonucleotide probes of 20-25 bases. The main concern with the length of the PNA oligomers is to guarantee specificity. PNA oligomers also show greater specificity in binding to complementary DNAs, with a PNA/DNA base mismatch being more destabilizing than a similar mismatch in a DNA/DNA duplex. This binding strength and specificity also applies to PNA/RNA duplexes. PNAs are not easily recognized by either nucleases or proteases, making them resistant to enzyme degradation. PNAs are also stable over a wide pH range. Finally, their uncharged nature should make crossing through cell membranes easier, which may improve their therapeutic value.

It has been hypothesized that the earliest life on Earth may have used PNA as a genetic material due to its extreme robustness, and later transitioned to a DNA/RNA-based system. See RNA world hypothesis for related information.

The contents of this article is licensed from www.wikipedia.org under the GNU Free Documentation License.
<urn:uuid:56fbb378-12b1-4d92-99e0-774de3dcfdb8>
3.4375
569
Knowledge Article
Science & Tech.
32.800453
Learning Haskell with Chess

1 Exercise 1

1.1 Learning Targets
- recapitulate Haskell types (type, data, product and sum types)
- equality (in particular if using Helium)
- pretty printing pieces, boards, ...

1.2 Tasks
- Define data types that represent boards, squares, positions, pieces and game states.
- Helium: Implement suitable eq-functions.
- Implement a function prettyBoard that transforms a board into a clearly arranged string representation (human readable :-)). Support this function with auxiliary functions that pretty print pieces, squares, ...
<urn:uuid:539f6395-8823-49d9-a64a-114a353a538d>
3.140625
120
Content Listing
Software Dev.
36.983459
Time and again, extreme claims about global warming (aka global climate change) turn out to be lacking in one major aspect. That aspect is truth. Today’s story from the London Telegraph tells how the warmest October on record could be explained, considering the unusual cold, snow, and ice activity around the world during the month. It seems that NASA’s Goddard Institute for Space Studies, run by Gore apologist, and often inaccurate Dr. James Hansen, recorded October as the hottest on record. This was startling. Across the world there were reports of unseasonal snow and plummeting temperatures last month, from the American Great Plains to China, and from the Alps to New Zealand. China’s official news agency reported that Tibet had suffered its “worst snowstorm ever”. In the US, the National Oceanic and Atmospheric Administration registered 63 local snowfall records and 115 lowest-ever temperatures for the month, and ranked it as only the 70th-warmest October in 114 years.
<urn:uuid:b80b7022-4a4f-465b-a889-11fb4c2b0125>
2.984375
207
Personal Blog
Science & Tech.
44.431667
(PHP 4 >= 4.3.0, PHP 5) proc_close — Close a process opened by proc_open() and return the exit code of that process proc_close() is similar to pclose() except that it only works on processes opened by proc_open(). proc_close() waits for the process to terminate, and returns its exit code. If you have open pipes to that process, you should fclose() them prior to calling this function in order to avoid a deadlock - the child process may not be able to exit while the pipes are open. Returns the termination status of the process that was run. In case of an error then -1 is returned. Note: Unix Only: proc_close() is internally implemented using the waitpid(3) system call. To obtain the real exit status code the pcntl_wexitstatus() function should be used.
<urn:uuid:619408ee-f4f4-4d6b-961b-e0e384674044>
2.71875
188
Documentation
Software Dev.
59.025997
The sun put on quite the display this May Day. It unleashed a colossal wave of super-hot plasma into space. Known as a coronal mass ejection (CME), this spectacular event was caught on camera by NASA scientists.

Are you a poet and you didn't even know it? Well, now you can be. For those of the galactic persuasion, NASA has a chance to make your work famous with a trip to the red planet!

A high-tech NASA telescope in orbit escaped a major collision with a Soviet-era Russian spy satellite last year. The crash would have resulted in an explosion that would have released as much energy as two and a half tons of explosives.

The second mission of the newly developed Vega launcher has entered its final preparation phase one day before liftoff. The lightweight launcher is readied to take off in the night of May 3 with a trio of satellites as its payload.

Most of us may not be aware of the significance that moons had throughout history. They played great roles surrounding harvests and festivals during ancient times. And they've also made many historical marks surrounding cultural customs and even death.

You may not be able to journey to Mars yourself, but your message sure can. NASA is inviting members of the public to submit their names and a personal message online for a DVD that will be carried aboard a spacecraft that will study the upper atmosphere of the Red Planet.

The asteroid that was selected to be explored by a NASA mission has a new name, after a naming contest involving thousands of young students around the world. A third-grade student in North Carolina won with his entry 'Bennu', the name of an Egyptian avian deity.

If Saturn were human, it would have had a lot of plastic surgery--or just have some really great genes. The planet has somehow maintained its youthful appearance, complete with a bright surface. Now, scientists have discovered how.

A documentary that premiered this month actually shows proof that aliens may exist.

Blasting off into space has just gotten a little bit pricier--and by "a little bit," we mean a lot. NASA has just signed a new deal that will keep American astronauts flying on Russian spacecraft through 2017, but at a cost of $70.7 million per seat.

A new image from the space-based X-ray observatory Chandra shows the enormous cloud of hot gas resulting from the epic collision of two large, colliding galaxies. This unusually large reservoir of gas contains as much mass as 10 billion Suns, spans about 300,000 light years, and radiates at a temperat...
<urn:uuid:96df60b3-d6ad-4189-bfb6-28d880fb3e3a>
2.734375
544
Content Listing
Science & Tech.
57.91998
Do we need to retrench, pull back from the coasts because of sea-level rise or rebuild differently? Where do we go from here? We need an integrated approach that covers three areas: engineering, ecologically based adaptation and policies. Let's take engineering. There is a whole spectrum of engineering approaches, like innovative designs for subway grates. Right now we have open grates. We need to design and implement grates that close. We need to see what we can do with the tunnels, which is much more challenging. We need to think about drains. Some of those are innovative engineering solutions, some involve standards and regulations, like building culverts to withstand more drainage. At the other end of the engineering spectrum is tidal barriers. What we recommended on that issue is: it needs to be studied. We need to begin the next set of feasibility studies: a significant study of the costs, benefits, feasibility in our New York coastal estuary, and the economic, environmental and societal costs and benefits. There is a lot of cost involved in creating such a big, engineered structure. There would be environmental changes we would need to understand. On the societal side: Which areas are protected? Some would be, some would not be, and how do we deal with the communities in different places? The second part is ecologically based adaptation. There are the wetlands [that can absorb] coastal flooding, but there is also inland flooding. We also coexist with a wonderful, mixed deciduous and evergreen forest here. How can we coexist with a forest ecosystem in more effective ways, because of the damage caused by trees falling into power lines or trees falling into houses? Then again, having forested areas is good in terms of absorbing storm water. How can we really develop ecosystem-based adaptations that are effective for our region? The third area is design and policy. Right now, what we call the societally acceptable level of risk is basically the one-in-100-year storm. Sandy was a wake-up shout for us all to think about: Is that the right level or do we need to change that? Is it really now the one-in-500-year storm? Or, as in Europe, is it even more? Then there's coastal communities. Can there be a reduction in insurance premiums for homeowners who take adaptive measures? That's an incentive for doing a better job. Overarching all of this is design, urban planning. What we really need to do is recover, rebuild and create a vibrant and sustainable coastal city region. Let's do this in creative ways. For example, the Dutch are not just looking to engineering solutions, they are looking at a mix of solutions. So there are the iconic floating houses but they are also doing a lot with raising apartment buildings and allowing water to slosh in and out when floods come. We have to accept that we are a coastal region. There are going to be coastal floods. How do we live with it? Is there enough room in New York City for things like wetlands to make a significant impact? We have about 1,500 miles [2,400 kilometers] of coastline in the estuary. One of the things we need to do is see what areas are available to restore or maintain or reconstitute. The [New York City Department of Environmental Protection] has already started that with the Bluebelt Program [preserving wetlands for storm water management in Staten Island]. How did those work during Sandy? It was a very large storm. Did they work for a certain amount of time or up to what level of flooding? 
What's the potential for expanding those areas?
<urn:uuid:9c274719-d654-465b-979e-36acb20d9dc0>
2.6875
746
Audio Transcript
Science & Tech.
53.17338
A UVV (upward vertical velocity) MAX is a region of lifting air in the atmosphere. This lifting is on the large scale. Some example lifting processes are orographic lifting, low-level warm air advection, divergence aloft, frontal lifting, and convergence in the lower troposphere. The region with the greatest amount of lifting will be at the bull's eye of the UVV MAX. High UVV makes cloud formation and precipitation formation very likely. Rising air cools and condenses out moisture once it rises enough to reach saturation. Generally, a UVV of 6 or greater will lead to clouds and precipitation (if the air is not initially too dry). For drier air, higher values of UVV will be needed to generate clouds and precipitation.
<urn:uuid:5586622f-217e-4bbe-a4d0-18189e5df7bc>
3.4375
159
Knowledge Article
Science & Tech.
49.256178
Title: National database for calculating fuel available to wildfires
Author: McKenzie, Donald; French, Nancy H.F.; Ottmar, Roger D.
Source: Eos. 93(6): 57-58

Description: Wildfires are increasingly emerging as an important component of Earth system models, particularly those that involve emissions from fires and their effects on climate. Currently, there are few resources available for estimating emissions from wildfires in real time, at subcontinental scales, in a spatially consistent manner. Developing subcontinental-scale databases and applications in fire science requires a framework that uses both fine-scale and coarse-scale data with attention to minimizing extrapolation errors, while ensuring spatial consistency in outputs. The estimation of actual fuel amounts is likely the greatest source of uncertainty in calculating carbon release and other emissions from wildfires, particularly large fires that burn multiple vegetation types. To reduce this uncertainty, the Fuel Characteristic Classification System (FCCS) provides both a conceptual framework and a software tool for quantifying fuels over spatial domains from a few square meters to many square kilometers.

Keywords: smoke emissions, carbon accounting, climate change, fuels

McKenzie, Donald; French, Nancy H.F.; Ottmar, Roger D. 2012. National database for calculating fuel available to wildfires. Eos. 93(6): 57-58.
<urn:uuid:93e2fca5-0068-4030-9b0e-73d84f894420>
2.9375
406
Truncated
Science & Tech.
33.135065
Climate and Global Change

Warm near the equator and cold at the poles, our planet is able to support a variety of living things because of its diverse regional climates. The average of all these regions makes up Earth's global climate. Climate has cooled and warmed throughout Earth history for various reasons. Rapid warming like we see today is unusual in the history of our planet. The scientific consensus is that climate is warming as a result of the addition of heat-trapping greenhouse gases which are increasing dramatically in the atmosphere as a result of human activities.
<urn:uuid:f640adaa-2d33-4c40-bbd7-03ad27b4bcc4>
3.578125
115
Knowledge Article
Science & Tech.
37.192545
(Note: We discuss carbon dioxide because it contributes to slightly over half of current greenhouse warming, but we must remember that methane, CFCs, ozone, and nitrous oxide together account for slightly less than half.)

When I was a graduate student at the University of Washington, learning about weather and climate, I thought climate was boring, compared to tornadoes or thunderstorms. You averaged the temperature, rainfall, or wind, or – whatever – to get the climate of an area. This changed around 1970, when I saw someone give a talk on the disturbing fact that the carbon dioxide in our atmosphere was increasing at a site on Mauna Loa in Hawaii (marked as "1" on Figure 3). The speaker told us that he thought it was possible that this might make Earth's climate warmer over time. This was truly amazing! To me, the percentage of carbon dioxide was one of those numbers you memorized for class, like the conversion factor from Fahrenheit to Celsius. It was not supposed to change.

A few months later, another speaker said that the average global temperature, as far as he could see, would go down for a year or two after a volcanic eruption spewed dust into the stratosphere, and then warm up after the dust settled out. He didn't think much else was happening.

Figure 3. Yearly average carbon dioxide concentration collected at Mauna Loa Observatory, Hawaii, USA. Data from C.D. Keeling and T.P. Whorf, and the Carbon Dioxide Research Group, Scripps Institute of Oceanography. Ppmv = parts per million by volume. For example, 300 ppmv means that out of 1,000,000 molecules in the mixture of gases we call air, 300 are carbon dioxide.

In the meantime, the carbon dioxide kept increasing. In 1997, I worked for the first time with scientists who were measuring how much carbon dioxide was going between the surface and the air at a site near Wichita, Kansas, USA. To take this measurement, we needed a reference value for carbon dioxide, and we used "360 ppmv" (parts per million by volume). From this graph, we were clearly behind – the mean value had already gone up to 364 ppmv. In 2002, we took similar measurements in the same area, and I was surprised to see how much the value had changed in only five years.

The amount of carbon dioxide in Earth's atmosphere has changed a lot over geologic time.
- At the end of the Permian period (about 250 million years ago) scientists have estimated that carbon dioxide in the atmosphere was as high as 10 times what it is today.
- During the mid-Cretaceous period, the dinosaurs also lived in a "greenhouse" world. Again, scientists estimate carbon dioxide could be as high as ~10 times what it is today.

Geologic evidence supports a warmer climate in both cases, especially in Polar Regions. This has a lot to do with the large changes that take place when snow disappears.
<urn:uuid:49b97676-baaa-4d81-aa44-6be861600bc1>
3.5
643
Personal Blog
Science & Tech.
55.507903
(a) Time evolution in the proposed experiment for past-future entanglement extraction. In the first time interval, qubit P interacts with the vacuum field. After a certain time with no interaction, qubit F interacts with the field, getting entangled with qubit P. (b) To activate and deactivate qubit-field coupling, the magnetic flux is varied. Image credit: Sabín, et al. ©2012 American Physical Society Typically, for two particles to become entangled, they must first physically interact. Then when the particles are physically separated and still share the same quantum state, they are considered to be entangled. But in a new study, physicists have investigated a new twist on entanglement in which two qubits become entangled with each other even though they never physically interact. “We show that it is possible in a real experiment to entangle two systems that neither interact with each other nor interact with a common resource at the same time, and without the need of measurements,” Sabín told Phys.org. “The trick is to use the correlations between different times – between past and future – contained in the vacuum of a quantum field.” In quantum theory, the quantum field is the system that contains all particles that are too small to be described classically. Although no particles exist in the vacuum region of a quantum field, physicists have known since the 1970s that this vacuum contains quantum correlations, or entanglement. “The vacuum is globally nothing, but locally is roughly like a cloud consisting of bunches of pairs of particles that die too fast to be detected,” Sabín said. “In quantum field theory, these are called quantum fluctuations. These quantum fluctuations are correlated if we consider different regions of space, and different regions of time as well.” If this vacuum entanglement could be extracted from the vacuum and transferred to actual particles, it could become more than just an odd quantum property and potentially serve as a useful resource for quantum information applications. But experimentally realizing the extraction of vacuum entanglement has been very difficult. The physicists, Carlos Sabín, Borja Peropadre, Marco del Rey, and Eduardo Martín-Martínez at the Institute of Fundamental Physics at the Spanish National Research Council (CSIC) in Madrid (Sabín is now at the University of Nottingham in the UK, and Martin-Martinez is now at the University of Waterloo in Ontario, Canada), have published a paper on this new kind of entanglement in a recent issue of Physical Review Letters. Via Qubits that never interact could exhibit past-future entanglement.
<urn:uuid:d131704f-51c5-4812-8d29-d7591fb1b96a>
3.5625
542
Academic Writing
Science & Tech.
32.905344
Which invasive species is most established in the Bay? More than 5,000 alien species have become established in North America since the founding of the English Colony at Jamestown. Of these, the Maryland Invasive Species Council has identified about 100 that present a serious threat in our region. Which one is the worst? That depends. If you care about Bay grasses, the mute swan might be your species of greatest concern. For a crabber, it might be the mitten crab. And if you are a herring, the blue catfish is a terrifying new predator lying in wait. The Chesapeake Bay Program identified six species that present immediate biological threats in Bay ecosystems: Maryland has ongoing programs to address each of them, and in many cases, their numbers have been dramatically reduced. Giant reed is present in the greatest numbers. Its populations cover thousands of acres and occur in every Maryland coastal community. They are still expanding, but the plant has been removed from many natural habitat areas. The zebra mussel has been kept out of Maryland waters by sharp-eyed boaters and fishermen who have cleaned the hitch-hikers from their gear. These species have the capacity to quickly expand their populations and can rapidly become a major problem, so it is critical that control measures are applied constantly. - Jonathan McKnight
<urn:uuid:b63a1a00-11f5-42f4-8cdf-dbb90f7b7ace>
3.15625
276
Q&A Forum
Science & Tech.
46.31027
Wellman, C.H., Osterloff, P.L. and Mohiuddin, U. (2003) Fragments of the earliest land plants. Nature, 425 (6955). pp. 282-285. ISSN 0028-0836
The earliest fossil evidence for land plants comes from microscopic dispersed spores. These microfossils are abundant and widely distributed in sediments, and the earliest generally accepted reports are from rocks of mid-Ordovician age (Llanvirn, 475 million years ago). Although distribution, morphology and ultrastructure of the spores indicate that they are derived from terrestrial plants, possibly early relatives of the bryophytes, this interpretation remains controversial as there is little in the way of direct evidence for the parent plants. An additional complicating factor is that there is a significant hiatus between the appearance of the first dispersed spores and fossils of relatively complete land plants (megafossils): spores predate the earliest megafossils (Late Silurian, 425 million years ago) by some 50 million years. Here we report the description of spore-containing plant fragments from Ordovician rocks of Oman. These fossils provide direct evidence for the nature of the spore-producing plants. They confirm that the earliest spores developed in large numbers within sporangia, providing strong evidence that they are the fossilized remains of bona fide land plants. Furthermore, analysis of spore wall ultrastructure supports liverwort affinities.
Copyright, Publisher and Additional Information: © 2003 Nature Publishing Group
Academic Units: The University of Sheffield > Faculty of Science (Sheffield) > School of Biological Sciences (Sheffield) > Department of Animal and Plant Sciences (Sheffield)
<urn:uuid:320316fc-ce87-4d42-9884-134148f2296d>
3.59375
406
Academic Writing
Science & Tech.
29.009159
This is pretty tedious. The IPCC tells us that carbon dioxide actually has very little influence on earth's greenhouse effect. That's probably sent a lot of greenhouse hysterics into a dead faint so I'd better explain. The IPCC provides the formula to demonstrate that CO2 constitutes less than 10% of earth's greenhouse effect as defined by Trenberth et al of about 330 Watts per meter squared (W/m2): each doubling of atmospheric CO2 adds 5.35*LN(2) = 3.7 W/m2, and since repeated doubling is only powers of 2, we know that once CO2 reached 512 ppmv (2^9) it would deliver 9 x 3.7 = 33.3 W/m2, or 10% of Trenberth's total "Back Radiation" or greenhouse effect. Since we've long defined the greenhouse effect as 33 °C and know that CO2 delivers less than 10% of that, or 3.3 °C in total at 512 ppmv from 33 W/m2 (9 doublings from 1 ppmv), we know that each doubling of atmospheric carbon dioxide delivers just 0.37 °C (3.3/9). See, we don't even need any marvelous magical multipliers because this is what it is doing in the real world inclusive of all operating feedbacks of both signs. Makes you wonder what all the fuss is about, doesn't it? However, Lemonick is handwringing: Coal is the most abundant and cheapest fossil fuel on the planet, but it's also the dirtiest in terms of how much heat-trapping carbon dioxide (CO2) it spews into the atmosphere when you burn it. One possible way of dealing with coal's globe-warming effect is to capture the CO2 from coal exhaust and bury it deep underground in a process known as carbon capture and sequestration, or CCS. Opponents of the idea have argued, however, that among other potential dangers, CCS could trigger earthquakes. And for years, proponents have said, "tell us something we didn't know." Geologists have been aware since the 1960's that pumping liquids and gases into underground rock formations can trigger earthquakes by adding just a little extra pressure to existing faults in a sort of straw-that-broke-the-camel's-back effect. In 2011 alone, subsurface injection of wastewater from mining operations was blamed or suspected in quakes that shook Arkansas, Colorado, Ohio and Oklahoma. But they were small earthquakes, causing minimal damage and no injuries at all, and if that's the worst consequence of keeping a lid on global temperatures, it might well be worth it. Or maybe not. In a new analysis published last week in Proceedings of the National Academy of Sciences, Mark Zoback and Steven Gorelick of Stanford University point out that in order to be effective, CCS projects need to keep CO2 out of the atmosphere for thousands of years — and that earthquakes too small to endanger life or property could nevertheless create leaks that would make the whole thing a waste of time. The bottom line, according to Zoback: "CCS is a risky proposition. Not that it's impossible, or even inappropriate. It should be done. But at a global scale, it's not likely to reduce CO2 emissions significantly."
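For readers who want to check the arithmetic in the paragraph above, here is a minimal Python sketch. It assumes only what the post itself states: the simplified forcing expression 5.35*ln(C/C0) W/m2, Trenberth's ~330 W/m2 of back radiation, a 33 °C total greenhouse effect, and the post's own (contested) assumption that warming scales linearly with that back radiation.

    import math

    def co2_forcing(c_ppmv, c0_ppmv):
        # Simplified logarithmic forcing quoted in the post: dF = 5.35 * ln(C/C0) in W/m^2
        return 5.35 * math.log(c_ppmv / c0_ppmv)

    per_doubling = co2_forcing(2.0, 1.0)             # about 3.7 W/m^2 per doubling
    at_512_ppmv = co2_forcing(512.0, 1.0)            # 9 doublings up from 1 ppmv
    share_of_back_radiation = at_512_ppmv / 330.0    # fraction of ~330 W/m^2
    # The post's linear scaling of the 33 C greenhouse effect (not a mainstream sensitivity estimate):
    warming_per_doubling = 33.0 * per_doubling / 330.0

    print(f"{per_doubling:.2f} W/m^2 per doubling")
    print(f"{at_512_ppmv:.1f} W/m^2 at 512 ppmv ({share_of_back_radiation:.0%} of back radiation)")
    print(f"{warming_per_doubling:.2f} C per doubling under the post's assumption")

Running this reproduces the numbers quoted above: roughly 3.7 W/m2 per doubling, about 33 W/m2 (10%) at 512 ppmv, and 0.37 °C per doubling under the post's linear assumption.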
<urn:uuid:fae9ce24-128f-4547-927a-22202318abb0>
3.28125
722
Personal Blog
Science & Tech.
61.197102
HTML5 has a lot of cool things in it, but the one thing I wish I could remove is data-attributes, because of the crimes against clean front-end code that they seem to encourage. What is this clean web code you speak of?
- CSS
This is the language we use to declaratively set the visual properties of our UI. It consists of a path matching syntax, and a series of rules. Clean css is a) readable, b) doesn't repeat itself too much, and c) is modular (i.e. you shouldn't have styles intended for one thing leak into another thing). CSS is very hard (and frustrating) to learn, and even harder to write well.
- HTML
A clean HTML snippet for, say, a navigation widget says nothing about whether it sits at the left, right, or bottom of the page. There is nothing that talks about how the links should be pjaxing the main content div of the app. All it describes is a navigation widget at a very high level.
- the DOM
This is where all of those things come together. The DOM is the in-memory representation of your UI. It has event handlers bound to elements, it has styles, and it changes dynamically. When you hit view source in your browser, you are looking at the html. When you open the web inspector, you are looking at the DOM (made to look like html, due to how confused people are about these things).
The role of data attributes
Data attributes are a new way of serializing information into a DOM node about what it represents, so that you are not forced to use the class attribute improperly. For example, a blog post could be marked up with an article tag to represent the post; its class tells us what type of post it is (a video), and the data attribute is used to tell us something about it. This seems pretty obvious to me: class is for the type of thing being represented, data is for that thing's data.
Rails, by contrast, decorates its links and forms with attributes like data-remote="true" and data-confirm. Now, this looks like a very elegant solution to a common problem. But it's not really using data attributes the way they are intended to be used. First we have data-remote="true": why would you use a data attribute for something that obviously should be a class? data-confirm is even worse, since it has a) nothing to do with data, and b) no business being in the HTML. Why does it matter that rails co-opts data attributes? On the small scale, it really doesn't matter at all. More than that, it works very well. You can make arguments about purity and aesthetics, but at the end of the day, we are co-opting technology that was intended to model papers and blog posts, and using it to build applications. Rails as a whole is meant to build things like base camp, which is the smaller end of mid-sized applications, so if you are building that kind of app then they will serve you well (just like the rest of the default rails stack). But it can also lead to completely baffling constructs. One reason we strive for clean code is because it is easy to read. Since HTML is already a very verbose language, this becomes more important. Keeping things simple and focused is the heart of clean HTML, and the previous two examples are almost the antitheses of that. After 10 years we have finally gotten people to stop using inline styles, and the rails community is replacing that with something much worse for maintainable html: inline behaviour. Ok fine, rails is doing it wrong, but there are still valid use cases, right?
<urn:uuid:f27861a8-5dde-4815-989c-25cb1426960b>
2.71875
763
Personal Blog
Software Dev.
63.883715
The Local Group contains more than 35 galaxies, most of which are dwarf ellipticals and irregulars with low mass; this complicated system may be considered as being formed by two main galaxies, M31 and the Milky Way, with other dynamically less important satellites belonging either to one of them or to the pair. This picture is derived from galactic luminosities, but when possible dark matter is taken into account, it is not clear at all. M31 has a visible mass of about 4 × 10^11 M⊙. The Milky Way, 10^11 M⊙. Next are M33 with 4 × 10^10 M⊙, LMC with about 2.3 × 10^10 M⊙, SMC with 6.3 × 10^9 M⊙, IC10 with 3 × 10^9 M⊙ and other minor members. Note that this list, when ordered following the total mass, could be changed. For instance, LMC has a visible mass of 1/5 the mass of the Milky Way. As it has been suggested that irregulars may contain more dark matter than bright galaxies, the total mass of LMC could be as large as, or even more massive than, that of the Milky Way. In this case it could no longer be considered our "satellite". Let us however retain the more standard viewpoint and consider that M31 and the Milky Way are dynamically dominant and form a binary system. M31 has a line-of-sight velocity of -300 km s^-1, and therefore it is approaching us. Taking into account our motion of rotation within the galaxy of about 220 km s^-1, it is easy to deduce that the speed of M31 with respect to the centre of our Galaxy is about -125 km s^-1. Both galaxies are approaching one another, with M31 therefore being an exception in the general motion of expansion of the Universe. There are different interpretations of this fact: a) "Ships passing in the night" Besides the expansion velocity following Hubble's law, galaxies have a peculiar velocity. For instance, our galaxy is moving with respect to the CMB black body at about 620 km s^-1. Within a cluster, peculiar motions are also of the order of 600 km s^-1. Even if these high velocities could be interpreted in other ways, such as bulk motions of large inhomogeneities or only characteristic of rich clusters, it is evident that some thermal-like peculiar velocities of this order of magnitude characterize the velocity dispersion of present galaxies, once the Hubble flow is subtracted. If we write v_i = H0 r_i + V_i for the velocity of a galaxy, where V_i is independent of r_i, then for distances less than V/H0, Hubble's law becomes imprecise and of little use, peculiar velocities being larger than expansion velocities. The law is imprecise for distances shorter than about 10 Mpc and becomes absolutely unsuitable for r < 1 Mpc. Therefore, a simple interpretation for the approaching motion of M31 is that it is due to pure initial conditions, and is unrelated to the mass of the Local Group. Van der Bergh suggested that our Galaxy and M31 might not form any coherent system, and that both galaxies "were passing each other as ships pass in the night" (Lynden-Bell, 1983). b) The "timing" argument of Kahn and Woltjer. The most widely accepted interpretation of the negative velocity of M31 was first given by Kahn and Woltjer (1959). They assumed that this double system has negative energy, i.e. it is held together by gravitational forces. However, considering visible matter only, they estimated the kinetic energy of the system to be about 1.25 × 10^58 erg, and the gravitational energy -6 × 10^57 erg. Even with an apparent positive energy (unbounded system) they considered the possibility of large quantities of intergalactic material in the form of gas, which would render the total energy negative.
This gaseous intergalactic mass was not confirmed by later observations. Instead, today, the argument of Kahn and Woltjer is considered as a proof for either the existence of large dark matter halos surrounding M31 and the Milky Way or (at least) a large common DM super halo pervading the Local Group. They deduced, with a simple order of magnitude argument, that the effective mass was larger than 1.8 × 1012M, about six times larger than the reduced mass of M31 and the Milky Way. Lynden-Bell (1983) has presented a more precise description. It is interesting to note, also in this historic paper, that Kahn and Woltjer (1959) considered that the ram pressure produced by this hypothetical intergalactic gas, due to the motion of both galaxies with respect to it, was responsible for warps of both galaxies. This hypothesis for the origin of warps has today been largely forgotten, but it could explain the coherence in the orientation of the warps of M31, M33 and the Milky Way shown by Zurita and Battaner (1997). This coherence can only be explained by the hypothesis of Kahn and Woltjer and by the magnetic hypothesis (Battaner, Florido and Sanchez-Saavedra, 1990, 1991; Battaner, 1995; Battaner, Florido, 1997; Battaner and Jimenez-Vicente, 1998; Battaner et al. 1991; see also Binney, 1991, and Kuijken, 1997). Coming back to the "timing" argument, let us obtain a similar order of magnitude, by an argument closer to that presented by Lynden-Bell (1983). Suppose that the pregalaxies later to become M31 and the Milky Way were formed at Recombination. Inhomogeneity seeds were previously developed, but at Recombination, photon decoupling allowed matter to freely collapse. Identifying Recombination as the epoch of the Local Group birth, at about 106 years after the Big Bang, is equivalent to this birth being produced at the very beginning of the Universe, as 106 years is negligible when compared with 14 Gyr, at present. Then the Universe was much smoother, so we can assume a vanishing initial transverse velocity. The Local Group, i.e. the two galaxies, were born so close to each other that gravitation was stronger than the expansion effect, so that we assume that during the period of the birth of both galaxies, there was a negligible relative velocity between them, in the line connecting them. Therefore, we assume that 14 Gyr ago, both galaxies were at rest with respect to each other, and since then their mutual gravitational attraction has reduced their separation and is responsible for the 125 kms-1 approaching velocity observed today. The general equations for the orbit in the framework of Newtonian Mechanics adopt the following parametric form where r is the mutual distance, t the time and the eccentricity, while and a are constants. The parameter is called the eccentric anomaly. The sum of the masses of both galaxies, M, is related to these constants, through If were zero, we would have r = a (constant) and = t, e.g. a circular orbit with a constant velocity. But given that our initial transverse velocity was assumed to be null, our orbit cannot be circular, but rather, it will become approximately a straight line. We thus consider = 1. Figure 13. Different possibilities to understand the negative radial velocity of M31. Figure 13 presents various possibilities: the first possibility provides the lowest mass and we will concentrate on this one. We have At the birth (approximately, at the Big Bang) we set t = t1. Then, = 0, as we have assumed. 0, always, as otherwise (37) would imply = 0. 
Therefore, sin = 0, which gives either = 0 or = . But = 0 would imply r1 = 0, while we have started with the distance of the galaxies being a maximum (2a). Therefore, = . Hence, r1 = 2a (as expected), t1 = , = 0, = 2. At the present time, we set t = t2. Then because t2 - t1 = 14, if we adopt 1 Gyr as time unity. We know r2 = 650 (taking 1 kpc as distance unity). We also know = - 125 (if we adopt 1 kpc/Gyr as unity for the velocity; 1 km/s 1 kpc/Gyr !) With (16) and (43) and taking the value of given by (40) Defining, = - Taking the numerical values for r2, and T, the solution of this equation, approximately, gives = 1.59, = 4.73. Hence (with (40)), = 0.18Gyr-1. Therefore (Note that the time of the Big Bang is t1 = 17Gyr. We are not taking the Big Bang as the origin of time!). Then, with (38), we have: In our modest calculation, at the beginning both galaxies were 2a = 1324 kpc apart and they were at rest. Now they are 650 kpc apart (about half the initial distance) and they are approaching at 125 km/s. With all these values, we deduce for the mass of the pair of galaxies which is clearly much more than the visible mass of the pair of about 5 × 1011M. Despite the long calculation, the order of magnitude is just given by M = V2r/G, where r and V are the distance and velocity of M31. c) In the above argument we considered two mass points with mutual attraction, but the dark matter apparently encountered may be distributed in a single extended halo. If the force of gravity acting on the Galaxy were due to this Local Group super-halo, the equation to be integrated would be where is the density of the intergalactic medium, which, for simplicity we assume to be constant. In this case the angular velocity of the periodic motion would be We can, as before, obtain detailed values of and the initial distance between the new born Milky Way and the centre of the Local Group, identified with the position of M31. In this case (r=a, = 0 at t = 0; the origin of time is now the Big Bang, approximately. Now, a is the maximum separation of the Milky Way, instead of 2a, as in the previous case). We adopt r = 650kpc, = - 125kpc/Mpc, T = 14Gyr as before, Dividing the formulae For the density of dark matter in the Local Group, we obtain This value is much lower than the minimum value estimated by Kahn and Woltjer (about 1.6 × 10-28gcm-3) and slightly higher than the critical density to close the Universe ( 10-29gcm-3). The common halo hypothesis is not easy to reject. d) The Local Group, rather than two main galaxies and several satellites together with some minor members, should be considered as a primordial inhomogeneity which has only recently collapsed to form its present galactic members. Like any other inhomogeneity it has evolved through the radiation dominated epoch with = / R, decaying transverse velocities and increasing radial velocities in a moderate collapse. Then inhomogeneities reached an acoustic epoch, which for masses typical of the Local Group began at z = 105 approximately (see later, Fig. 22). After the Recombination epoch the Local Group pursued its process of collapse with the relative density contrast increasing as R, where R is the cosmological scale factor, the transverse velocities decreasing as R-1 and - what is most important for our purposes- the radial velocities increasing as R1/2. After that, the collapse became non-linear and these variations with the cosmic scale factor became complicated and faster. 
As 1 we find ourselves in the non-linear regimen, but we will consider a linear evolution to find typical orders of magnitude. In this picture a naive formula relating the present velocity V0 of an inhomogeneity with present size and actual relative density contrast is (Battaner, 1996) If the Milky Way and M31 were condensations within the Local Group, V0 would be identified with the relative velocity between these two galaxies, with and being typical parameters characterizing the size and the density contrast of the Local Group. This interpretation of the negative recession velocity of M31 is fully compatible with the scenario of an approach between the two galaxies within an expanding universe but somewhat in contrast with present hierarchical models, in which small structures form first, which will be accounted for later. As the velocities, before Recombination, do not reach high values (Florido and Battaner, 1997) we can start our calculations at Recombination. From the above formula, taking V0 125km/s, H0 = 60km/(sMpc) and 0.65Mpc, we obtain 5.5. Then where < > is the average density in the Universe. Hence, for the Local Group Let us adopt for < > = 0.3 × 10-29gcm-3, thus obtaining Let us compare the different results. Methods c) and d) give a similar order of magnitude, about 2.7 × 10-29gcm-3. The mass corresponding to this density depends on the volume. The density surely decreases outwards. Suppose a moderate equivalent radius of 650 kpc; then the mass of the Local Group would be 4 × 1011M, which is approximately the visible mass. Or suppose an equivalent radius of 1 Mpc. In this case, we obtain 1.5 × 1012M, in reasonable agreement with method a). Not only should the results be compared, but also the basic formulae when the numerical coefficients close to unity are ignored. Essentially, methods b) and c) use M V2r/G, where V is the approaching velocity of M31 and r its distance. Of course, the more detailed arguments presented provide a more precise result, but which cannot greatly differ from this value ( 2.3 × 1012M). However, method d) is quite different. The order-of-magnitude lying behind the calculation is of the type M < >. In a critical Universe < > = 3H02/8G. This method is not intrinsically related to the other two. The orders of magnitude coincide because, curiously, V/r is of the order of H0. In most pairs the orbital period is of the order of H0-1. Summarizing, unless M31 and the Milky Way are like "ships passing in the night" (a possibility that cannot be totally disregarded), the Local Group seems to have 4 times more mass than we see as stellar light. But we don't know where this mass lies, whether in galactic dark matter halos or in a large common super halo. The difficulties encountered in the interpretation of the closest binary system are translated to the interpretation of other binary systems.
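The "timing" argument sketched in section (b) can be reproduced numerically. The short Python program below is an illustrative sketch, not the authors' calculation: it assumes a purely radial (eccentricity tending to 1) Kepler orbit that starts from rest 14 Gyr ago, uses the same inputs quoted above (present separation 650 kpc, approach velocity 125 km/s), solves for the eccentric anomaly by bisection, and converts to a total mass via G M = omega^2 a^3.

    import math

    G = 6.674e-11            # m^3 kg^-1 s^-2
    KPC = 3.086e19           # m
    GYR = 3.156e16           # s
    MSUN = 1.989e30          # kg

    r_now = 650.0 * KPC      # present M31 - Milky Way separation
    v_now = -125.0e3         # present radial velocity in m/s (negative = approaching)
    age = 14.0 * GYR         # time since the pair was at rest at maximum separation

    # Radial Kepler orbit: r = a*(1 - cos(chi)), omega*(t - t_max) = chi - sin(chi) - pi,
    # with G*M = omega**2 * a**3.  Combining these, v*age/r depends on chi alone.
    target = v_now * age / r_now

    def f(chi):
        return math.sin(chi) * (chi - math.sin(chi) - math.pi) / (1.0 - math.cos(chi)) ** 2 - target

    lo, hi = math.pi + 1e-6, 2.0 * math.pi - 1e-6   # approaching (post-apocentre) branch
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    chi = 0.5 * (lo + hi)

    omega = (chi - math.sin(chi) - math.pi) / age
    a = r_now / (1.0 - math.cos(chi))
    mass = omega ** 2 * a ** 3 / G
    print(f"chi = {chi:.2f} rad, 2a = {2.0 * a / KPC:.0f} kpc, M = {mass / MSUN:.1e} solar masses")

With these inputs the total mass comes out at a few times 10^12 solar masses, consistent with the order-of-magnitude estimate M ~ V^2 r / G quoted at the end of this section.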
<urn:uuid:2837604f-0d28-4c04-a85b-296e5c110e91>
2.828125
3,133
Academic Writing
Science & Tech.
51.812223
Three children are going to buy some plants for their birthdays. They will plant them within circular paths. How could they do this? On my calculator I divided one whole number by another whole number and got the answer 3.125 If the numbers are both under 50, what are they? Have a go at this well-known challenge. Can you swap the frogs and toads in as few slides and jumps as possible? I have fifteen cards numbered $1 - 15$. I put down seven of them on the table in a row. The numbers on the first two cards add to $15$. The numbers on the second and third cards add to $20$. The numbers on the third and fourth cards add to $23$. The numbers on the fourth and fifth cards add to $16$. The numbers on the fifth and sixth cards add to $18$. The numbers on the sixth and seventh cards add to $21$. What are my cards? Can you find any other solutions? How do you know you've found all the different solutions?
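For the card puzzle above, a short brute-force sketch (in Python, and obviously not part of the original puzzle page) shows one way to answer the final question: because each pair sum fixes the next card once the first card is chosen, trying all fifteen possible first cards is guaranteed to find every solution.

    PAIR_SUMS = [15, 20, 23, 16, 18, 21]   # sums of adjacent cards, left to right

    for first in range(1, 16):
        row = [first]
        for s in PAIR_SUMS:
            row.append(s - row[-1])          # each sum determines the next card
        # keep only rows whose cards are all distinct and all drawn from 1..15
        if len(set(row)) == 7 and all(1 <= card <= 15 for card in row):
            print(row)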
<urn:uuid:66dcb9a2-704f-40de-a5ba-0a697590b0bd>
2.765625
222
Q&A Forum
Science & Tech.
92.29111
The fact that ADTs (algebraic data types) are closed makes it a lot easier to write total functions. Those are functions that always produce a result, for all possible values of their type, e.g.:
maybeToList :: Maybe a -> [a]
maybeToList Nothing  = []
maybeToList (Just x) = [x]
If Maybe were open, someone could add an extra constructor and the maybeToList function would suddenly break. In OO this isn't an issue when you're using inheritance to extend a type, because when you call a function for which there is no specific overload, it can just use the implementation for a superclass. I.e., you can call printPerson(Person p) just fine with a Student object if Student is a subclass of Person.
In Haskell, you would usually use encapsulation and type classes when you need to extend your types. For example:
class Eq a where
    (==) :: a -> a -> Bool
instance Eq Bool where
    False == False = True
    False == True  = False
    True  == False = False
    True  == True  = True
instance Eq a => Eq [a] where
    []     == []     = True
    (x:xs) == (y:ys) = x == y && xs == ys
    _      == _      = False
The == function is completely open: you can add your own types by making them an instance of the Eq class. Note that there has been work on the idea of extensible datatypes, but that is definitely not part of Haskell yet.
<urn:uuid:48af348d-5270-4dd2-a24c-8abb3857fdcb>
2.921875
325
Q&A Forum
Software Dev.
49.931752
How to determine atmospheric extinction
Use this table to refine your comet observing.
October 26, 2009
In the December 2009 issue, I wrote the story, "How to observe comets." In it, I suggest all observers estimate how bright a comet appears. To get the whole picture, however, you must factor in one other thing.
This photograph of Comet Halley was taken during its closest pass by Earth — 39 million miles. Photo by NOAO
<urn:uuid:0e0a6393-e952-4af9-89ce-1ff9f7151ccd>
3.25
259
Truncated
Science & Tech.
43.75674
Radiation Induced Chemical Reactions 1947-67 Brookhaven over the last twenty years has taken a leading role in the study of chemical effects of radiation and in the conversion of this subject from empirical groping to a sophisticated branch of chemical science. The aim in this field is, first, to infer what reactions actually are produced by radiation in various systems, in terms of the nature and behavior of the chemical species formed in intermediate stages of the over-all process, and, second, to explain how these reactions are brought about by electronic excitation of the irradiated material. Understanding radiation effects in water is basic both to fundamental radiobiology and to design and control of water-moderated reactors. The nature of the reducing and oxidizing radicals formed in water irradiation was first demonstrated here. It was shown by a study of salt effects that the predominant reducing species has a negative charge and is a hydrated electron. This not only introduced an important new species to chemistry, but also made possible the first correct treatment of water decomposition and re-formation in nuclear reactors. Accurate measurement of absolute reaction rates of the hydroxyl and perhydroxyl radicals in irradiated water, difficult to make because [the reactions are] so rapid, were also first done here. Now a great many laboratories are engaged in studying formation, properties, and reaction rates of the hydrated electron and of the hydroxyl and other radicals. A new superoxide of hydrogen, H203, was first found and characterized at BNL. N. F. Barr and A. O. Allen, "Hydrogen Atoms in the Radiolysis of Water," J. Phys. Chem. 63, 928 (1959). H. A. Schwarz, "Determination of Some Rate Constants for the Radical Processes in the Radiation Chemistry of Water," J. Phys. Chem. 66, G. Czapski and H. A. Schwarz, "The Nature of the Reducing Radical in Water Radiolysis," J. Phys. Chem. 66, 471 (1962). The physical processes underlying chemical changes in irradiated liquids are not as clear-cut as in gases or in ionic or valence-bonded crystals. The slowing down and capture of free electrons formed in liquids by ionization are not well understood. One approach to this problem is to measure the number of free ions which escape immediate recombination. The first complete absolute measurement for an organic liquid of this quantity and of its temperature coefficient was made at Brookhaven; the results have led to some improved theories of the processes of electron moderation. Recently made measurements of ion yields in various liquids have shown surprising differences between related compounds, which points up the degree of ignorance that still prevails in this field. A. Hummel and A. O. Allen, "Ionization of Liquids by Radiation. I. Methods for Determination of Ion Mobilities and Ion Yields at Low voltage," J. Chem. Phys. 44, 3426 (1966). A. Hummel, A. O. Allen, and F. H. Watson, Jr., "Ionization of Liquids by Radiation. II. Dependence of the Zero-Field Ion Yield on Temperature and Dielectric Constant," J. Chem. Phys. 44, 3431 (1966). Systems of interest in biology and technology are often not pure substances or simple solutions but heterogeneous systems containing many different material phases. 
The effects of radiation on heterogeneous systems were first looked at here and the interesting result was found that energy initially taken up in a solid was transferred to molecules adsorbed on its surface, where it selectively excited certain states and thus produced specific modes of decomposition. Study of such systems has been taken up at many other laboratories. J. G. Rabe, B. Rabe, and A. O. Allen, "Radiolysis and Energy Transfer in the Adsorbed State," J. Phys. Chem. 22, 1098 (1966). Last Modified: June 28, 2012
<urn:uuid:e3aa125e-02da-46ea-a191-005d124fa3d8>
3.28125
894
Knowledge Article
Science & Tech.
48.226414
Makefiles
Makefiles are something of an arcane topic--one joke goes that there is only one makefile in the world and that all other makefiles are merely extensions of it. I assure you, however, that this is not true; I have written my own makefiles from time to time. In this article, I'll explain exactly how you can do it too!
Understanding Make -- Background
If you've used make before, you can safely skip this section, which contains a bit of background on using make. A makefile is simply a way of associating short names, called targets, with a series of commands to execute when the action is requested. For instance, a common makefile target is "clean," which generally performs actions that clean up after the compiler--removing object files and the resulting executable. Make, when invoked from the command line, reads a makefile for its configuration. If not specified by the user, make will default to reading the file "Makefile" in the current directory. Generally, make is either invoked alone, which results in the default target, or with an explicit target. (In all of the below examples, % will be used to indicate the prompt.) To execute the default target:
    % make
To execute a particular target, such as clean:
    % make clean
Besides giving you short build commands, make can check the timestamps on files and determine which ones need to be recompiled; we'll look at this in more detail in the section on targets and dependencies (and there is a small illustration of the timestamp idea at the end of this article). Just be aware that by using make, you can considerably reduce the number of times you recompile.
Elements of a Makefile
Most makefiles have at least two basic components: macros and target definitions. Macros are useful in the same way constants are: they allow you to quickly change major facets of your program that appear in multiple places. For instance, you can create a macro to substitute the name of your compiler. Then if you move from using gcc to another compiler, you can quickly change your builds with only a one-line change.
Comments
Note that it's possible to include comments in makefiles: simply preface a comment with a pound sign, #, and the rest of the line will be ignored.
Macros
Macros are written in a simple x=y form. For instance, to set your C compiler to gcc, you might write:
    CC=gcc
To actually convert a macro into its value in a target, you simply enclose it within $(): for instance, to convert CC into the name of the compiler:
    $(CC) a_source_file.c
might expand to
    gcc a_source_file.c
It is possible to specify one macro in terms of another; for instance, you could have a macro for the compiler options, OPT, and the compiler, CC, combined into a compile-command, COMP:
    COMP = $(CC) $(OPT)
There are some macros that are specified by default; you can list them by typing
    % make -p
For instance, CC defaults to the cc compiler. Note that any environment variables that you have set will be imported as macros into your makefile (and will override the defaults).
Targets
Targets are the heart of what a makefile does: they convert a command-line input into a series of actions. For instance, the "make clean" command tells make to execute the code that follows the "clean" target. Targets have three components: the name of the target, the dependencies of the target, and finally the actions associated with the target:
    target: [dependencies]
        <command>
        <command 2>
        ...
Note that each command must be preceded by a tab (yes, a tab, not four, or eight, spaces). Be sure to prevent your text editor from expanding the tabs!
The dependencies associated with a target are either other targets or files themselves. If they're files, then the target commands will only be executed if any of the dependent files have changed since the last time the command was executed. If the dependency is another target, then that target's commands will be evaluated in the same way. A simple command might have no dependencies if you want it to execute all the time. For example, "clean" might look like this:
    clean:
        rm -f *.o core
On the other hand, if you have a command to compile a program, you probably want to make the compilation depend on the source files to compile. This might result in a makefile that looks like this:
    CC = gcc
    FILES = in_one.c in_two.c
    OUT_EXE = out_executable

    build: $(FILES)
        $(CC) -o $(OUT_EXE) $(FILES)
Now when you type "make build", if the dependencies in_one.c and in_two.c are older than the object files created, then make will reply that there is "nothing to be done." Note that this can be problematic if you leave out a dependency! If this were an issue, one option would be to include a target to force a rebuild. This would depend on both the "clean" target and the build target (in that order). The above sample file could be amended to include this:
    CC = gcc
    FILES = in_one.c in_two.c
    OUT_EXE = out_executable

    build: $(FILES)
        $(CC) -o $(OUT_EXE) $(FILES)

    clean:
        rm -f *.o core

    rebuild: clean build
Now when rebuild is the target, make will first execute the commands associated with clean and then those associated with build.
When Targets Fail
When a target is executed, it returns a status based on whether or not it was successful--if a target fails, then make will not execute any targets that depend on it. For instance, in the above example, if "clean" fails, then rebuild will not execute the "build" target. Unfortunately, this might happen if there is no core file to remove. Fortunately, this problem can be solved easily enough by including a minus sign in front of the command whose status should be ignored:
    clean:
        -rm -f *.o core
The Default Target
Typing "make" alone should generally result in some kind of reasonable behavior. When you type "make" without specifying a target in the corresponding makefile, it will simply execute the first target in the makefile. Note that in the above example, the "build" target was placed above the "clean" target--this is more reasonable (and intuitive) behavior than removing the results of a build when the user types "make"!
Reading Someone Else's Makefile
I hope that this document is enough to get you started using simple makefiles that help to automate chores or maintain someone else's work. The trick to understanding makefiles is simply to understand all of your compiler's flags--much (though not all) of the crypticness associated with makefiles is simply that they use macros that strip some of the context from an otherwise comprehensible compiler command. Your compiler's documentation can help enormously here. The second thing to remember is that when you invoke make, it will expand all of the macros for you--just by running make, it's very easy to see exactly what it will be doing. This can be tremendously helpful in figuring out a cryptic command.
Advanced Makefile Tricks
Learn about special macros and other fancy uses of makefiles
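As a closing aside (not part of the original tutorial), the timestamp rule described in the background section can be illustrated outside of make itself. The Python sketch below mimics the decision make takes for a file target: rebuild only when the target is missing or older than one of its dependencies. The file names are simply the examples used above, and the sketch assumes the dependency files exist.

    import os

    def needs_rebuild(target, dependencies):
        # Rebuild if the target does not exist yet...
        if not os.path.exists(target):
            return True
        target_time = os.path.getmtime(target)
        # ...or if any dependency was modified more recently than the target.
        return any(os.path.getmtime(dep) > target_time for dep in dependencies)

    if needs_rebuild("out_executable", ["in_one.c", "in_two.c"]):
        print("run: gcc -o out_executable in_one.c in_two.c")
    else:
        print("nothing to be done")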
<urn:uuid:15f954cd-52fa-47cb-98bd-c02ecd5950b1>
3.140625
1,543
Tutorial
Software Dev.
44.178618
lightning, electrical discharge accompanied by thunder, commonly occurring during a thunderstorm. The discharge may take place between one part of a cloud and another part (intracloud), between one cloud and another (intercloud), between a cloud and the earth, or earth and cloud; more rarely observed is the electrical discharge sometimes called "upward lightning," a superbolt between a cloud and the atmosphere tens of thousands of feet above the cloud. Lightning may appear as a jagged streak (forked lightning), as a vast flash in the sky (sheet lightning), or, rarely, as a brilliant ball (ball lightning). Illumination from lightning flashes occurring near the horizon, often with clear skies and the accompanying thunder too distant to be audible, is referred to as heat lightning. Charges are believed to accumulate in cloud regions as ice particles and droplets collide and transfer electric charges, with smaller, lighter ice particles and droplets carrying positive charges higher and heavier particles and droplets carrying negative charges lower. In a lightning strike on the ground, a negatively charged leader propagates from a negatively charged cloud region in a series of steps toward the ground; once it gets close to the ground a positively charged streamer rises to meet it. When the streamer meets the leader, an electrical discharge flows along the completed channel, creating the lighting flash. Long-lasting lightning flashes with lower current are more damaging to nature and humans than shorter flashes with higher currents. Lightning may also be produced in snowstorms or in ash clouds created by volcanic eruptions. Space probes have photographed lightning on Jupiter and recorded indications of it on Venus, Saturn, Uranus, and Neptune. Benjamin Franklin, in his kite experiment (1752), proved that lightning and electricity are identical. See also lightning rod. More on lightning from Fact Monster: See more Encyclopedia articles on: Weather and Climate: Terms and Concepts
<urn:uuid:44337ccc-89bb-4dfa-8020-ae6a141e4ca9>
3.671875
378
Knowledge Article
Science & Tech.
27.375409
Western Bark Beetle Strategy Glossary Aggregating pheromone - chemical compound released by an insect (male or female) to attract others of its species. Bark beetles - group of beetles, mainly of the family Scolytidae, whose adults bore through the bark of host trees to lay their eggs, and whose larvae tunnel and feed under the bark. Generation time - the time it takes for an organism to develop from egg to reproductive adult. Host specific - one species (e.g. bark beetle) lives on or within another specific host species (e.g ponderosa pine). Larva - an individual that emerges from the egg and differs markedly from the adult form. Bark beetle larvae are white, legless grubs. Life history - the combined activities and requirements of a species throughout its life, including patterns of development, environmental or habitat requirements. mating strategies, predators and their avoidance, etc. Limiting resource - an essential material required by an organism such that limited quantities of the material limit the ability of the organism to successfully complete its life cycle and reproduce. Mass attack - the aggregation of bark beetle adults on one host tree in sufficiently large numbers to overcome the host’s defense mechanisms. Natural Recovery - The use of natural processes to revegetate an area after a disturbance (such as bark beetle infestation) and the acceptance of resulting conditions, even though it may take many years to attain stocked forested conditions. Niche - the place where an organism lives, including habitat, food resources, nesting resources, behavioral patterns of the organism, and a variety of other resources that are species specific. Pupa - the stage of insect development between the last larval instar and the emergence of a new adult. Typical adult features begin to form. Salvage harvesting - removal of trees that are dead, dying, or deteriorating to utilize remaining merchantable wood before it becomes worthless. Sanitation harvesting - removal of infected or infested dead or damaged trees, or of susceptible trees, to prevent or reduce the spread of the infestation. Sanitation harvesting is intended to remove currently infested trees where beetle life stages still remain under the bark. This is the most efficient form of beetle management. The objective is to reduce residual beetle populations available for subsequent attack within the stand or in adjacent susceptible stands. Resilience - The ability of a social or ecological system to absorb disturbances while retaining the same basic structure and ways of functioning, the capacity for self-organization, and the capacity to adapt to stress and change. Restoration - The process of assisting the recovery of an ecosystem that has been degraded, damaged, or destroyed. Tree hazard - Tree hazard is the likelihood of property damage or personal injury from tree failure.
<urn:uuid:199f2436-a019-441c-b9c5-078a9c960ada>
3.4375
569
Structured Data
Science & Tech.
23.326452
The 20 Hottest Years on Record Global Warming 101 Global average surface temperatures pushed 2005 into a virtual tie with 1998 as the hottest year on record. For people living in the Northern Hemisphere—most of the world's population—2005 was the hottest year on record since 1880, the earliest year for which reliable instrumental records were available worldwide. Because most global warming emissions remain in the atmosphere for decades or centuries, the energy choices we make today greatly influence the climate our children and grandchildren inherit. We have the technology to increase energy efficiency, significantly reduce these emissions from our energy and land use, and secure a high quality of life for future generations. We must act now to avoid dangerous consequences. The year 2005 exceeded previous global annual average temperatures despite having weak El Niño conditions at the beginning of the year and normal conditions for the rest of the year. (El Niño is a period of warmer-than-average sea surface temperatures in the east-central Pacific Ocean that influences weather conditions across much of the globe.) In contrast, the record-breaking temperatures of 1998 were boosted by a particularly strong El Niño. The record heat of 2005 is part of a longer-term warming trend exacerbated by the rise of heat-trapping gases in our atmosphere that is due primarily to our burning fossil fuels and clearing forests. Nineteen of the hottest 20 years on record have occurred since 1980 (see table). The record surface temperatures of the past 20 years reinforce other indications that global warming is under way. For example, the observed rise in average surface temperatures has been accompanied by warming of the atmosphere and oceans, and increased melting of ice and snow. These observations, summarized briefly below, paint a consistent picture of widespread and significant changes in global climate over the past several decades. Evidence of Twentieth Century Global Warming Warming of the Troposphere A 2005 re-analysis of satellite observations of temperature trends in the troposphere—the layer of atmosphere extending about five miles up from Earth's surface—uncovered errors in previous studies. The updated studies show that air temperatures have increased in the past 20 years or so, consistent with the fundamental understanding that increases in surface temperatures are accompanied by increases in air temperatures above the surface. The new results are also consistent with recent increases in tropospheric water vapor, which would be expected when rising temperatures accelerate ocean evaporation. By comparing several sets of data from satellites and weather balloons, these new atmospheric analyses account for drifts in satellite orbits and changes in instrumentation over the measurement period. While the corrected results represent only one of several pieces of global warming evidence, they are important in part because the earlier flawed analysis has often been cited. Melting of Snow and Ice Further evidence of widespread warming comes from observations of seasonal snow and frozen ground coverage. The extent and duration of frozen ground have declined in most locations. Snow cover in the Northern Hemisphere has declined about five percent over the past 30 years, particularly in late winter and spring, and the freezing altitude has risen in every major mountain chain. Alpine and polar glaciers have retreated since 1961, and the amount of ice melting in Greenland has increased since 1979. 
Over the past 25 years, the average annual Arctic sea ice area has decreased by almost five percent and summer sea ice area has decreased by almost 15 percent. The collapse of the Larsen Ice Shelf off the Antarctic Peninsula appears to have no precedent in the last 11,000 years. Melting of the Greenland Ice Sheet Satellites are used to map the extent and duration of snowmelt on the Greenland ice sheet. The dark red area represents the extent of snowmelt in 2005—the most extensive in the 27-year history of data collection. Figure courtesy of NOAA and CIRES Warming of the Oceans Oceans comprise 97 percent of Earth's water. They have an average depth of approximately 13,000 feet (4 kilometers). It takes a great deal of heat to raise the temperature of this huge body of water, and the oceans have absorbed the bulk of Earth's excess heat over the past several decades. (See figure, "Estimates of Earth's Heat Balance.") From 1955 to 1998, the upper ~9,800 feet (3,000 meters) of the ocean have warmed by an average 0.067 degrees Fahrenheit (0.037 Estimates of Earth's Heat Balance The oceans have absorbed the bulk of Earth's excess heat over the past several decades. If only a small fraction of the heat currently stored in the oceans were released, it would significantly warm the atmosphere and melt the world's glaciers. For a hypothetical example, if the average temperature of the world's oceans increased by 0.18 degree Fahrenheit (0.1 degree Celsius) and this heat was transferred instantly to the atmosphere, the air temperature would increase by about 180 degrees Fahrenheit (100 degrees Celsius). In reality, the circulation and redistribution of this excess heat is a slow process. Even if we could maintain atmospheric CO2 concentrations at today's level, stored heat released by the oceans will cause Earth's average surface temperature to continue rising approximately one degree Fahrenheit (half a degree Celsius) in the coming decades. To put this into perspective, this is the same as the global average temperature rise that occurred over the last century. The warming of the oceans and the melting of glaciers worldwide have already caused sea levels to rise during the twentieth century, and most of this rise has come in the past few decades. The Role of Natural Variability Human-induced warming is superimposed on natural processes to produce the observed climate. Because these natural fluctuations (which are always present) play a role in determining the precise magnitude and distribution of temperature in a particular year, record warmth in any one year is not in itself highly significant. What is noteworthy, however, is that global average temperatures experienced a net rise over the twentieth century, and the average rate of this rise has been increasing. When scientists attempt to reproduce these twentieth century trends in their climate models, they are only able to do so when including human-produced heat-trapping emissions in addition to natural causes. The years 1998 and 2005 are so similar (i.e., within the error range of the different analysis methods or a few hundredths of a degree Celsius) that independent groups (e.g., NOAA, NASA, and the United Kingdom Meteorological Office) calculating these rankings based on reports from the same data-collecting stations around the world disagree on which year should be ranked first. Annual global rankings are based on combined land-air surface temperature and sea surface temperature since 1880. Dr. 
Marcia Baker (professor emeritus in Earth and Space Sciences and Atmospheric Sciences at the University of Washington) prepared this summary with input from Dr. Brenda Ekwurzel (climate scientist at the Union of Concerned Scientists). Arctic Climate Impact Assessment. 2004. Impacts of a Warming Arctic. Cambridge, UK: Cambridge University Press. Available at http://www.acia.uaf.edu. Arrenhius, S. 1896. On the influence of carbonic acid in the air upon the temperature of the ground. Philosophical Magazine 41:237-276. Barnett, T.P., D.W. Pierce, and R. Schnur. 2001. Detection of anthropogenic climate change in the world's oceans. Science 292:270-274. Domack, E., D. Duran, A. Leventer, S. Ishman, S. Doane, S. McCallum, D. Amblas, J. Ring, R. Gilbert and M. Prentice. 2005. Stability of the Larsen B Ice Shelf on the Antarctic Peninsula during the Holocene Epoch. Nature 436:681-685. Fu, Q., C. M. Johanson, S. G. Warren and D. J. Seidel. 2004. Contribution of stratospheric cooling to satellite inferred tropospheric temperature trends. Hansen, J., L. Nazarenko, R. Ruedy, M. Sato, J. Willis, A. Del Genio, D. Koch, A. Lacis, K. Lo, S. Menon, T. Novakov, J. Perlwitz, G. Russell, G. A. Schmidt, N. Tausnev. 2005. Earth's energy imbalance: Confirmation and implications. Science Intergovernmental Panel on Climate Change. 2001. Climate Change 2001: The Scientific Basis. Cambridge, U.K.: Cambridge University Press. Krabill, W., E. Hanna, P. Huybrechts, W. Abdalati, J. Cappelen, B. Csatho, E. Frederick, S. Manizade, C. Martin, J. Sonntag, R. Swift, R. Thomas and J. Yungel. 2004. Greenland Ice Sheet: Increased coastal thinning. Geophysical Research Levitus, S., J. Antonov, and T. Boyer. 2005. Warming of the world ocean, 1955-2003. Geophysical Research Letters 32. Mears, C.A., and F.J. Wentz. 2005. The effect of diurnal corrections on satellite-derived lower tropospheric temperatures. Science 309:1548-1551. Mote, P. W., A..F. Hamlet, M.P. Clark and D. P. Lettenmaier 2005. Declining mountain snowpack in western North America. Bulletin of the American Meteorological Rodhe, H., and R.J. Charlson, eds. 1998. The Legacy of Svante Arrhenius: Understanding the Greenhouse Effect. Royal Swedish Academy of Sciences, Stockholm University. Santer, B.D., T.M.L. Wigley, C. Mears, F.J. Wentz, S.A. Klein, D.J. Seidel, K.E. Taylor, P.W. Thorne, M.F. Wehner, P.-J. Gleckler, J.S. Boyle, W.D. Collins, K.W. Dixon, C. Doutriaux, M. Free, Q. Fu, J.E. Hansen, G.S. Jones, R. Ruedy, T.R. Karl, J.R. Lanzante, G.A. Meehl, V. Ramaswamy, G. Russell, and G.A. Schmidt. 2005. Amplification of surface temperature trends and variability in the tropical atmosphere. Science 309:1551-1556. Sherwood, S., J. Lanzante, and C. Meyer. 2005. Radiosonde daytime biases and late-20th century warming. Science 309:1556-1559. Siegenthaler, U., T.F. Stocker, E. Monnin, D. Lüthi, J. Schwander, B. Stauffer, D. Raynaud, J.-M. Barnola, H. Fischer, V. Masson-Delmotte and J. Jouzel. 2005. Stable carbon cycle-climate relationship during the late Pleistocene. Steffen, K., and R. Huff. 2005. Greenland Melt Extent, 2005. Cooperative Institute for Research in Environmental Sciences (CIRES), University of Colorado at Boulder and National Oceanic and Atmospheric Administration (NOAA). Available at http://cires.colorado.edu/science/groups/steffen/greenland/melt2005. United Kingdom Climate Research Unit (CRU). 2005. Global Temperature for 2005: Second warmest year on record. Norwich U.K. Available at http://www.cru.uea.ac.uk/cru/press/2005-12-WMO.pdf U.S. 
Department of Energy (DOE). 2005. Emissions of Greenhouse Gases in the United States 2004. DOE/EIA-0573(2004). Washington DC. Available at ftp://ftp.eia.doe.gov/pub/oiaf/1605/cdrom/pdf/ggrpt/057304.pdf. U.S. National Aeronautics and Space Administration (NASA). 2005. Global Temperature Trends: 2005 Summation. NASA Goddard Institute for Space Studies (GISS). New York, NY. Available at http://data.giss.nasa.gov/gistemp/2005/. U.S. National Oceanic and Atmospheric Administration (NOAA). 2006. Climate of 2005 –Annual Report. National Climate Data Center (NOAA) Asheville, NC. Available at http://www.ncdc.noaa.gov/oa/climate/research/2005/ann/global.html Read from Looking Glass News Ice Cap is Melting at a Frighteningly Fast Rate the Mall, Heating the Planet Our Cross to Bear business & global warming Warming Debate Suppressed Global Warming Denial Lobby Faces "Catastrophic Loss of Species" Half of 2006 Is Warmest on Record says Earth's temp at 400-year high roof of world turns to desert Warming Hits Canada's Remotest Arctic Lands ice swells ocean rise Was America's Warmest on Record temperature off Santa Barbara now highest in 1,400 years bears drown as ice shelf melts
<urn:uuid:04bf35e3-2665-4368-b3a7-15c6057e2aec>
3.78125
2,919
Content Listing
Science & Tech.
58.821431
Exploring Mars Image Center
Most of these image collections are based on Lunar and Planetary Institute slide sets and include explanatory captions, a locator map, a glossary, and suggested references for further study.
The Red Planet: A Survey of Mars - An overview of Mars, including its volcanos, the Valles Marineris canyon system, features formed by running water, the SNC meteorites, and its two small moons, Phobos and Deimos. Recent images from Mars Pathfinder and the Hubble Space Telescope are also included. 40 images.
Volcanoes on Mars - Illustrates various geologic features on Mars, including some of the best examples of Viking Orbiter images that include constructional volcanic landforms. 20 images.
Stones, Wind, and Ice: A Guide to Martian Impact Craters - Illustrates the diversity of martian impact craters and demonstrates their role in understanding the geological evolution of Mars. 30 images.
The Winds of Mars: Aeolian Activity and Landforms - An overview of the types of aeolian activity and landforms found on Mars. 30 images.
Ancient Life on Mars?? - An examination of martian meteorites retrieved from Antarctica and recent evidence pointing to possible life on ancient Mars. 40 images.
Three-dimensional Images of Mars - A selection of three-dimensional stereo images of Mars. These images require red-blue stereo glasses for viewing. 10 images. Other stereo images of Mars can be found in LPI's 3-D Tour of the Solar System.
<urn:uuid:43130bc8-810d-43ce-baee-f2c9f032c373>
3.140625
322
Content Listing
Science & Tech.
36.174346
Our voyage through the solar system, in search of conditions in which life could originate, has encountered an enormous diversity of habitats--from the dry soils of Mars to the sulphurous geysers of Io and the ocean of Europa. Several space missions are planned for the coming decade that may shed more light on the issues highlighted by this course. Until then we can only speculate on the presence, or otherwise, of micro-organisms on bodies other than Earth, within our close neighbourhood. If we are going to speculate, then we need not confine ourselves to the solar system--the concept of a Habitable Zone, as discussed before, could extend to other areas in the universe. This session takes a look at other stars, in our galaxy and beyond, whose associated planets might be home to living organisms.
The Sun, as captured by the SOHO spacecraft. © ESA / NASA
Only a few types of stars are likely to support planetary systems. For a planetary system to exist, the presence of dust, or at least some type of solid material, is necessary. Without several generations of stellar cycling to produce these elements, no planets could form. Our sun has various characteristics that are essential for the existence of planets:
Age: At 4.56 billion years old, the sun is young enough to have high amounts of elements other than hydrogen and helium. The oldest stars do not contain enough of the other elements to allow for the existence of planets.
Stability: The sun has been stable over an extended period, allowing time for life not just to arise, but to evolve into the vast biodiversity that is seen on the Earth today. Unstable stars, such as rapidly rotating pulsars, that are susceptible to sudden outbursts of radiation, explosion or extremely rapid spin are much less likely to provide long stable periods in which life could evolve.
Size: Size is important. Stars much bigger than the sun tend to have a much shorter lifetime--the sun's total lifetime will be 10 billion years whilst a very large star might only exist for 1 or 2 billion years. This would not leave a long enough stable period for life to evolve. Additionally, massive stars emit much more ultraviolet radiation and this can lead to the stripping of the atmosphere followed by the destruction of organic molecules on the planet's surface. Massive stars are therefore unlikely hosts for life-bearing planetary systems. Stars that are smaller than the sun are cooler and emit less heat. So for a planet to be within the Habitable Zone of such a star, it would have to orbit close to the star, resulting in problems from higher gravitational attraction.
So although there are many billions of stars in our galaxy, many of them are types (too old, too big, too small, too energetic) that are unlikely to support planetary systems for sufficient lengths of time for lifeforms to arise or evolve.
Two galaxies colliding: NGC 2207 on the left, IC 2163 on the right. © NASA/ Space Telescope Science Institute.
Galaxies are vast accumulations of stars and they evolve and change with time. They collide and merge with each other, cannibalising dust and gas from their companions in the process. The observation programme at the Anglo-Australian Observatory has so far located about one hundred thousand galaxies that are distributed through the universe in sheets and ribbons, with voids in between them. If the type of star is important for life, then so too is the location of the star within a galaxy, and also the type of galaxy.
Dust-poor elliptical galaxies are likely to be depleted in the elements necessary for planet building. Spiral galaxies (like our Milky Way) have plenty of dust mixed in with the gas, and so have the potential for planet formation. Within galaxies there are regions where conditions for life are not favourable. Towards the central hub of a spiral galaxy, the stars are closer together and this results in higher gravitational instabilities for the individual stars. These conditions also result in higher radiation pressures and stellar winds. Examining the position of the sun in this context, we can see that it is located in the Habitable Zone of the Milky Way. It is far out enough to be relatively isolated from its nearest neighbours but it is not on the fringes of the galaxy where stars are formed at a lower rate. In these regions it is unlikely that sufficient amounts of heavier elements (carbon, oxygen, silicon etc) will have been produced to allow planet-building to take place. On the basis of the parameters detailed above, the search for life outside our solar system should be directed towards Earth-like planets, orbiting sun-type stars at distances considered to be within the Habitable Zone of the observed star, which should be located within the Habitable Zone of its galaxy. The techniques and instrumentation needed to observe planets outside our solar system have been developed relatively recently. The first observation of a planet orbiting a sun-type star was made in 1995, when a large body was detected orbiting the star 51 Pegasi. Since then, more than 50 giant planets have been detected around nearby sun-type stars, and more are detected each month. So far, none of the planets discovered has had the characteristics that would designate it as a likely candidate for further study as the host for life: they have all been Jupiter-like gas giants and situated close to their stars. The next stage in the search for extra-solar planets can only come with technical improvements. Both the European Space Agency and NASA have missions in the planning stage (Darwin and Terrestrial Planet Finder respectively), which will have the capability both to detect Earth-like planets and to determine their characteristics. The discovery of micro-organisms on Earth that are able to survive in conditions of extreme heat, cold, pressure and radiation encourages the view that there may be life elsewhere. There is an abundance of organic materials and water within the solar system, and the potential for their presence on planets around other stars. But even if microbial life is widespread within the solar system or galaxy, there is no guarantee that it has managed to survive and evolve into sentient or intelligent beings. If the results of missions planned for the next decade are as inconclusive as those carried out in the past, then perhaps, although we might not be alone, we do not yet have anyone with whom we can talk.
<urn:uuid:f94020cb-1a34-47a8-ac10-85a8486a3dc1>
4.0625
1,280
Knowledge Article
Science & Tech.
36.148228
Bryozoans can be found in the following collections that may be useful for ocean acidification studies: The Discovery collections originate from a number of different expeditions that took place from 1901-1999. The collection of pteropods may be of particular importance for ocean acidification research, and there are a wide variety of other marine specimens as well as ocean bottom deposits and residues. Learn more. Scientists on the 1872-76 HMS Challenger expedition collected a vast amount of natural history material from around the globe, which is accompanied by extensive taxonomic and summary reports. Get information about the Challenger collections housed at the Natural History Museum. See a summary of what can be found in the Dry Invertebrate Store. These collections include beach sand, forams, bryozoa and diatoms - including foram and diatom ooze from the HMS Challenger and Porcupine expeditions. Find out more. The collections at these institutions were surveyed for the UK Ocean Acidification Research Programme.
<urn:uuid:a909ca8d-4c5e-42d4-9a36-ede251c7cd4c>
3.359375
207
Knowledge Article
Science & Tech.
25.317424
Opening a new window into the mysteries of animal design and the nature of life, biologists described here today how they had decoded almost the entire genetic rule book for making the Drosophila fruit fly, an organism whose study is deeply interwoven with the progress of modern biology. The achievement closes a cycle in scientific history that began in 1910, when Thomas Morgan of Columbia University chose Drosophila as the experimental animal with which he and his students would work out many of the basic principles of genetics. The announcement today is a milestone in the effort to sequence the human genome, because it seems to validate a high-risk decoding strategy adopted by Dr. J. Craig Venter of the Celera Corporation, which is competing against a public consortium of university centers to be first to decode the human genetic inheritance. Despite the rivalry in decoding the human genome, the work described here was a collaboration between a team of scientists led by Dr. Venter and another led by Dr. Gerald M. Rubin, the University of California biologist who directs the public consortium's work on the fly. Celera reported in December that it had sequenced the fruit-fly genome. But the results are only now being made public, both in lectures given here at the annual meeting of the world's fly biologists and in articles in this Friday's issue of Science. ''Because fly cell biology and development have much in common with mammals, this sequence may be the Rosetta stone for deciphering the human genome,'' two biologists not connected with the sequencing, Dr. Thomas B. Kornberg and Dr. Mark A. Krasnow, write in a commentary in Science. They call completion of the fly genome ''a monumental technical feat.'' Celera chose the fruit-fly genome as a pilot project for tackling the human genome, which is far larger. The Celera approach with which the fly was sequenced involves assembling the genome from millions of small fragments in a single giant computation. The public consortium, by contrast, follows a more cautious strategy of collecting the small pieces into segments and then joining these into a complete genome. Arriving for an afternoon session today, 1,300 fly biologists at their annual meeting found on their chairs a gift from Celera, a CD-ROM of the genome sequence of their favorite organism. Dr. Venter was given a standing ovation after the president of the fly biologists' association, Dr. Gary Karpen of the Salk Institute, said of the Drosophila genome that ''we are about to be handed an incredible tool that many of us only dreamed about for many years.'' Borrowing from the marketing language of computer programmers, the Venter-Rubin teams termed today's version of the fly genome ''Release 1,'' meaning that perfected editions would be published in the future. At present there remain 1,299 gaps in the 120 million chemical units of DNA that have been decoded. The gaps are small and are expected to be closed within a few months. Some biologists say the Drosophila work does not guarantee that the Venter approach will meet with similar success with the human genome. ''Though a major accomplishment, I don't think it proves they can assemble the human genome,'' said Dr. Richard Durbin, deputy director of the Sanger Centre in Britain, a leading member of the consortium. But in an interview earlier this week, Dr. Venter said, ''We are 100 percent certain that there are no problems in assembling the human genome.''
<urn:uuid:7d4025ce-aa6d-4036-a12b-9d9345a78647>
3.15625
717
Truncated
Science & Tech.
44.027688
An Introduction to WSIL
Locating WSIL Files
Once an inspection document has been created, a consumer needs to be able to find it. WSIL's decentralized document-based model would make locating these files difficult if it were not for a couple of simple conventions defined in the specification. The first convention employs the use of a fixed filename, inspection.wsil, located at common entry points. Consumers would only need to ping a URL such as http://example.org/inspection.wsil or http://examples.org/services/inspection.wsil to discover the file's existence for retrieval. The second convention employs embedded references in other documents, such as HTML or other WSIL documents. WSIL advocates use of a meta tag in an HTML document that establishes a link to the inspection document's location. This is similar in nature to the way stylesheets, RSS syndication files, and weblog channel rolls can be autodiscovered. (However, they make use of HTML's link tag, which is the appropriate element to use for this function.)
A Hypothetical Use of WSIL by Google
Let's suppose for a moment that Google launched its Web services API without the fanfare with which it was greeted. How would Google "advertise" the availability of their services, particularly to other Web-services-aware applications? How would a service consumer discover and bind to Google's services? First, Google would start by creating a WSIL document. Example 2 is what their inspection document might look like. To keep it simple, I'm assuming once again the WSDL file contains only one service description.
Example 2: A Hypothetical WSIL for Google
<?xml version="1.0" encoding="UTF-8"?>
<inspection xmlns="http://schemas.xmlsoap.org/ws/2001/10/inspection/">
  <abstract>The Google Search API</abstract>
  <service>
    <abstract>Google Search</abstract>
    <description referencedNamespace="http://schemas.xmlsoap.org/wsdl/"
        location="http://api.google.com/GoogleSearch.wsdl"/>
    <description referencedNamespace="http://www.w3.org/1999/xhtml"
        location="http://api.google.com/documentation/search.html"/>
  </service>
</inspection>
Note the two description tags in the service block. One points to a WSDL file, and the other to an XHTML document, presumably with a description of the service. (This HTML document doesn't actually exist on Google's site, but it illustrates WSIL's ability to reference multiple, different service descriptions for different purposes.) Once the inspection document has been created, Google could place it at http://www.google.com/inspection.wsil or perhaps http://api.google.com/inspection.wsil. Google could also embed the following HTML in its Web pages:
<meta name="serviceInspection" content="http://www.google.com/inspection.wsil" />
There is no cost to doing both. In fact, it's advantageous to do so, because it gives potential service consumers the option of locating the inspection document as they prefer.
The WSIL specification provides for a certain amount of extensibility through the use of XML namespaces. This extension mechanism is important because it allows for the evolution of service descriptions and repositories without having to revise the base specification. Currently, the WSIL specification defines extensions for WSDL and UDDI. Example 3 illustrates what an inspection file referencing a complex WSDL file or a UDDI repository might look like. Sticking to the basics, we won't go into the details of how these extensions are utilized.
Example 3: A Sample WSIL Document With WSDL and UDDI Extensions
<?xml version="1.0" encoding="UTF-8"?>
<inspection xmlns="http://schemas.xmlsoap.org/ws/2001/10/inspection/"
    xmlns:wsilwsdl="http://schemas.xmlsoap.org/ws/2001/10/inspection/wsdl/"
    xmlns:wsiluddi="http://schemas.xmlsoap.org/ws/2001/10/inspection/uddi/">
  <abstract>Acme Industries Public Web Services</abstract>
  <service>
    <name>Store Finder Service</name>
    <abstract>A service to perform a geographical search of Acme store locations.</abstract>
    <description referencedNamespace="http://schemas.xmlsoap.org/wsdl/"
        location="http://example.org/services/storefinder.wsdl">
      <wsilwsdl:reference>
        <wsilwsdl:referencedService xmlns:ns1="http://example.org/services/storefinder.wsdl">
          ns1:StoreFinder
        </wsilwsdl:referencedService>
      </wsilwsdl:reference>
    </description>
  </service>
  <link referencedNamespace="urn:uddi-org:api">
    <abstract>Acme Industries Public e-Commerce Services</abstract>
    <wsiluddi:businessDescription location="http://example.org/uddi/inquiryapi">
      <wsiluddi:businessKey>3C9CADD0-5C39-11D5-9FCF-BB3200333F79</wsiluddi:businessKey>
      <wsiluddi:discoveryURL useType="businessEntity">
        http://example.org/uddi?3C9CADD0-5C39-11D5-9FCF-BB3200333F79
      </wsiluddi:discoveryURL>
    </wsiluddi:businessDescription>
  </link>
</inspection>
In this example, unlike the previous examples, we make reference to a WSDL file that is presumed to have multiple service descriptions, of which we identify the one being referenced. We also provide a link to a UDDI repository that contains the service descriptions for Acme Industries. Simplicity begins to give way to extensibility, but the utility and adaptability provided make it a reasonable tradeoff. Other bindings for service description mechanisms are easily authored by following a few simple rules, which are documented in section 5.0 of the WSIL 1.0 specification. XMethods, a directory of publicly-available Web services, is an early adopter of WSIL and has developed a binding extension for its service. A section of their WSIL file is listed in Example 4.
Example 4: A Subset of XMethods.net's WSIL File
<service>
  <abstract>Develop Your Own Applications Using Google</abstract>
  <description referencedNamespace="http://schemas.xmlsoap.org/wsdl/"
      location="http://api.google.com/GoogleSearch.wsdl"/>
  <description referencedNamespace="http://www.xmethods.net/">
    <wsilxmethods:serviceDetailPage location="http://www.xmethods.net/ve2/ViewListing.po?serviceid=73855">
      <wsilxmethods:serviceID>73855</wsilxmethods:serviceID>
    </wsilxmethods:serviceDetailPage>
  </description>
</service>
The extension defined by the wsilxmethods namespace provides a URI of a dynamically-generated HTML page detailing the service. The extension also provides the service's XMethods ID separately, which could be helpful in utilizing other functionality of the XMethods site.
A Good Start That Is Far From Perfect
Hopefully, at this point you now understand the advantages and utility of WSIL in Web service discovery, and are interested in putting it to use. IBM and Microsoft have done an admirable job in developing WSIL, but it does suffer from issues that are commonplace in the initial release of a specification that was developed by a closed select committee. Here I will note some of these issues that I found in my review of the specification. It is by no means a complete and thorough list, but simply my personal observations from working with markup languages over the years. The specification misuses the meta tag for embedding a link in a service provider's HTML.
As I mentioned previously, CSS stylesheets, RSS syndication files, and weblog channel rolls facilitate autodiscovery through the link tag, which, unlike meta, was designed for such a use. (A brief sketch of what a link-based reference might look like appears at the end of this article.) The specification could use more metadata tags, such as the name of the service owner or a support email address, to better document the service description. The versatile name tags are helpful, but may not be sufficient. Take XMethods' use of WSIL into consideration. The WSIL specification assumes the service provider is delivering an inspection document, which is not true in the XMethods case. In this case, additional meta information would be helpful and warranted. Ideally, the metadata could also be extensible to allow the embedding of more granular forms of meta information for specific problem domains. The tag semantics of the specification could use some refinement to keep the document smaller and more readable. For example, the referencedNamespace attribute used by the link tags could have been represented as nsref. While referencedNamespace is syntactically more descriptive, nsref is smaller and more consistent with the XML namespace declaration (xmlns) and HTML hyperlink references (href). Another such instance is the location attribute, which defines a URI and could instead be represented by a shorter name. Another naming nit: index.* is the common convention for naming a document that is served by default when a specific document is not specified in an HTTP request. WSIL should follow this same convention and use index.wsil for its fixed-name discovery convention.
WSIL is a simple, lightweight mechanism for Web service discovery that complements, rather than competes with, UDDI. WSIL's model is a decentralized one that is document-based, and it leverages the existing Web infrastructure already in place. While UDDI can be seen as a phone book, WSIL is more like a business card. Looking at it another way, WSIL is like the RSS of Web services. I find WSIL intriguing because of its simplicity and lightweight implementation. It's what the Web services space needs now. I would go so far as to argue that WSIL should have preceded UDDI as the solution we all need for services discovery, and it is a logical stepping stone toward extensive Web services repository services for those who eventually require them. As a low-function, lightweight specification that leaves the processing logic to the developer, I am equally intrigued and enthusiastic about the potential for innovative and novel applications that will undoubtedly arise from the accessibility of this information. Who knows what our understanding of "service discovery" could be in a year or two?
Timothy Appnel has 13 years of corporate IT and Internet systems development experience and is the Principal of Appnel Internet Solutions, a technology consultancy specializing in Movable Type and TypePad systems.
Axis contains the WSIL toolkit that IBM donated to the Apache Project.
http://www.oreillynet.com/pub/wlg/2113
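On the link-element point raised in the critique above, the following is only a hedged sketch of what WSIL autodiscovery might look like if it used HTML's link element (as stylesheets and RSS do) instead of the meta tag. The rel value is an illustrative assumption; WSIL 1.0 actually specifies the meta form shown earlier in this article.
<link rel="serviceInspection"
      type="text/xml"
      href="http://www.google.com/inspection.wsil" />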
<urn:uuid:2f170692-cb4d-49b6-b9c6-01ac28727c0b>
2.78125
2,404
Nonfiction Writing
Software Dev.
39.759259
Eight Weeks of Prototype: Week 1, Beginning with Prototype DOM Traversal Methods
In addition to being able to select elements in the DOM using select(), Prototype also allows you to select elements using the up(), down(), next() and previous() methods. These methods help you to easily find elements relative to a given element. Unlike select(), each of these methods is used to retrieve exactly one element (if found). Because of this (and because each element is returned with the extended Prototype functionality), you can chain these calls together, as you will see at the end of this section.
The up() method
To find an element further up the DOM tree from a given element (that is, to find one of its ancestors) you can use the up() method. If no arguments are passed to up(), then the element's parent is returned. If you don't just want an element's parent element but rather one of its other ancestors, there are several different combinations of arguments that can be used. Firstly, you can specify a numerical index. For instance, using up(0) will retrieve the element's parent (the same as omitting the argument), using up(1) will return the element's grandparent, and using up(2) will return the great-grandparent. Alternatively, you can pass a CSS selector to up(). The first matched element is returned. For example, if you have an image inside an HTML table (e.g. <table><tr><td> <img /> </td></tr></table>), you can use imgElt.up('table') to retrieve the table element (in this case using just imgElt.up() might return the <td> element instead). You can also specify a numerical index along with a CSS selector. For example, if you have an image within two nested div elements, you can select the outer div by using imgElt.up('div', 1) (the first element from the target has an index of 0, which is the default value if the second argument isn't specified). Listing 7 shows some examples of how to use the up() method to find an element's ancestors. One of the most useful aspects of up() is that you can easily find an element without caring which elements lie between the element you want to find and the element you're searching on. That is, because you can use selectors to find the parent, you don't mind whether the element is the parent, the grandparent or otherwise.
The down() method
The down() method is the opposite of the up() method, in that it searches within an element's descendants rather than in its ancestors. That is, it looks for elements within the target element. As with up(), you can either specify no arguments, a numerical index, a CSS selector, or a CSS selector with a numerical index. Specifying no arguments will result in the first child being returned. Using down() is very similar to using the select() method covered earlier in this article, except that only a single element is returned using down() (remember that select() returns an array). Because of this, we can deduce that someElt.down('.foo') is effectively equivalent to someElt.select('.foo')[0]. The important difference is that trying to reference a particular element fails when the select() call returns an empty array; this is not an issue when using down(). Listing 8 shows some examples of using down() to find an element's descendants. (Listings 7 and 8 are not reproduced in this excerpt; a brief sketch of equivalent calls appears below.)
The next() and previous() methods
You can find sibling elements (that is, any element with the same parent element as the search target) using the next() and previous() methods. As suggested by their names, next() finds sibling elements that appear in the document after the search target, while previous() finds only siblings that appear before the search target.
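Since Listings 7 and 8 are missing from this excerpt, here is a minimal sketch of the kinds of calls they describe. The markup, element IDs and class names are hypothetical, not taken from the original listings.
// Assumed markup:
// <div id="outer"><div id="inner"><table><tr><td><img id="photo" /></td></tr></table></div></div>
var img = $('photo');
img.up();                  // the parent <td> element
img.up('table');           // nearest ancestor matching the selector: the <table>
img.up('div', 1);          // the second matching <div> ancestor: #outer
$('outer').down();         // the first child: #inner
$('outer').down('img');    // the first matching descendant: #photo
$('outer').down('div', 0); // same as $('outer').down('div'): #inner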
The arguments used for next() and previous() work in the same manner as with down(). That is, you can use a numerical index or a CSS selector. Listing 9 shows several examples of using next() and previous(). (Listing 9 is not reproduced in this excerpt; a short sketch appears at the end of this section.)
Chaining traversal calls together
Because calls to up(), down(), next() and previous() each return a single element that has been extended with extra Prototype functionality, we can chain calls to these functions together. For example, calling elt.down().up() will return the original element elt (note, however, that calling elt.up().down() will not necessarily return the original element; this will depend on the ordering of elements within elt's parent). Similarly, elt.next().previous() will also return the original element. Obviously there is little use for these examples in particular; however, you may encounter situations where chaining these calls together is extremely useful. One such example might be to search all siblings of an element. Using elt.next(someSelector) only finds matching siblings after the given element, while elt.previous(someSelector) only finds matching siblings before it. If you wanted to search both before and after, you could do so by checking next() and previous() in turn. When chaining calls together, there is a risk that one of the later calls in the chain may cause an error due to an earlier call not returning an element (for instance, calling previous() on an element with no siblings will not return a valid element). Because of this, you should only chain your calls together when you know it cannot fail. Otherwise, you should make each call in a separate statement and check the return values accordingly.
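In place of the missing Listing 9, a small sketch of sibling traversal, chaining, and the defensive non-chained style recommended above might look like this (IDs and classes are hypothetical):
// Assumed markup: <ul id="menu"><li>One</li><li id="two">Two</li><li class="last">Three</li></ul>
var item = $('two');
item.next();            // the <li class="last"> element
item.previous();        // the first <li>
item.next('.last');     // the next sibling matching the selector
item.previous('li', 0); // the nearest preceding <li>
item.next().previous(); // chained calls: back to #two
// A broken link in the chain is a problem, so check each step when unsure:
var sib = item.next('.missing'); // no such sibling: returns undefined
if (sib) {
  sib.hide();
}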
<urn:uuid:5600b974-3dc8-44d7-b800-a6ad46a841f5>
3.796875
1,151
Tutorial
Software Dev.
50.74732
Although there is substantial evidence that Northern Hemisphere species have responded to climatic change over the last few decades, there is little documented evidence that Southern Hemisphere species have responded in the same way. Here, we report that Australian migratory birds have undergone changes in the first arrival date (FAD) and last date of departure (LDD) of a similar magnitude as species from the Northern Hemisphere. We compiled data on arrival and departure of migratory birds in southeast Australia since 1960 from the published literature, Bird Observer Reports, and personal observations from bird watchers. Data on the FAD for 24 species and the LDD for 12 species were analyzed. Sixteen species were short- to middle-distance species arriving at their breeding grounds, seven were long-distance migrants arriving at their nonbreeding grounds, and one was a middle-distance migrant also arriving at its nonbreeding ground. For 12 species, we gathered data from more than one location, enabling us to assess the consistency of intraspecific trends at different locations. Regressions of climate variables against year show that across south-east Australia average annual maximum and minimum temperatures have increased by 0.17°C and 0.13°C decade⁻¹ since 1960, respectively. Over this period there has been an average advance in arrival of 3.5 days decade⁻¹; 16 of the 45 time-series (representing 12 of the 24 species studied) showed a significant trend toward earlier arrival, while only one timeseries showed a significant delay. Conversely, there has been an average delay in departure of 5.1 days decade⁻¹; four of the 21 departure time-series (four species) showed a significant trend toward later departure, while one species showed a significant trend toward earlier departure. However, differences emerge between the arrival and departure of short- to middle-distance species visiting south-east Australia to breed compared with long-distance species that spend their nonbreeding period here. On average, short- to middle-distance migrants have arrived at their breeding grounds 3.1 days decade⁻¹ earlier and delayed departure by 8.1 days decade⁻¹, thus extending the time spent in their breeding grounds by ~11 days decade⁻¹. The average advance in arrival at the nonbreeding grounds of long-distance migrants is 6.8 days decade⁻¹. These species, however, have also advanced departure by an average of 6.9 days decade⁻¹. Hence, the length of stay has not changed but rather, the timing of events has advanced. The patterns of change in FAD and LDD of Australian migratory birds are of a similar magnitude to changes undergone by Northern Hemisphere species, and add further evidence that the modest warming experienced over the past few decades has already had significant biological impacts on a global scale.
<urn:uuid:10332d17-c836-4593-8b7e-60c3ae4f8e6a>
3.28125
576
Academic Writing
Science & Tech.
35.073248
In the 1970s, Apollo-era astronauts left a seismic experiment on the moon. Now, new analysis of 30-year-old data from the Apollo Passive Seismic Experiment may give new insights into the lunar core. Researchers report that the lunar core may be much like the core of the Earth, with a solid inner core and molten outer core. However, outside the moon's outer core, the moon also has a thick partial melt layer, according to the new analysis. We'll talk about what the researchers did, and what they discovered. Produced by Annette Heist, Senior Producer
<urn:uuid:9b6d0aa4-6993-49ab-8e2b-eea1e106e70f>
3.15625
120
Truncated
Science & Tech.
49.681816
Have you ever wondered how clouds form? We all learn the water cycle in school – water falls from the clouds in the form of rain or snow and collects on the ground. The water on the ground heats up and turns to vapor and the vapor travels up into the atmosphere and creates clouds. But how do those clouds form? Here’s an experiment that demonstrates how the water molecules join together and form a cloud. Before you start on your own cloud, let’s learn a little more about clouds. A cloud is a lot of droplets of water and or ice crystals, depending on the temperature. The droplets float in the air molecules. Even though we don’t see them, water molecules are in the air all around us. These airborne water molecules are called water vapor. When the molecules are bouncing around in the atmosphere, they don’t normally stick together. Clouds on Earth form when warm air rises and its pressure is reduced. The air expands and cools, and clouds form as the temperature drops below the dew point. In other words, cold air cannot hold as much water vapor as warm air. Invisible particles in the air in the form of pollution, smoke, Q: Tell us a little about who you are… A: I am a very personable and outgoing person who loves to make the best out of every situation. I feel as though no man or woman is better than the next and that we all need to work close to one another to really expand our horizons in life. Having the privilege to work for such a great company has allowed me to have a multitude of opportunities that I never thought were possible or that I would ever even come across in my life. I have been with the company now for 3 1/2 years and still wake up every morning excited to come to my job and work with my friends. It really takes a unique work environment to be able to say that your coworkers are not just team members but lifelong friends and that is one of my favorite things about working at Steve Spangler Science. Q: What do you do at Steve Spangler Science? A: I help oversee the production of the fun kits and educational products that we provide to our great customers. I also We’re back from the Warner Bros. studios in Burbank, California with lots of fun stories from our latest appearance on the Ellen DeGeneres Show. When I say ‘we’… I mean ‘we’ because there’s no way I could pull these segments off by myself. Jeff Brooks, Carly Reed and Lisa Brooks traveled with me and worked hard backstage and on the outside location shoot to make everything run smoothly. Unlike other talk shows, the people at the Ellen Show are used to pulling off big stunts… but even this one had everyone a little on edge because no one really knew what was going to happen to all of those ping pong balls. Watch the video… The line-up of science demos was as follows… Cloud in a Bottle – a really visual way of creating a water vapor cloud instantly in a 2-liter bottle. The second demo was a Dust Explosion using a very fine spore called Lycopodium, a fine yellow powder derived from the spores of Lycopodium clavatum (stag’s horn club moss, running ground pine). By itself, the powder is not flammable. When the fine powder is dispersed in the air and each particle is surrounded by oxygen, it’s very flammable…and the
<urn:uuid:224eea93-acbc-4445-9488-4248b4d2a6c1>
3.34375
728
Personal Blog
Science & Tech.
58.237566
In this section...
7.2.1 Predefined Numeric Types
7.2.2 Ada Model
7.2.4 Accuracy Constraints
7.2.6 Precision of Constants
7.2.7 Subexpression Evaluation
7.2.8 Relational Tests
Summary of Guidelines from this section
Avoid the predefined numeric types in package Standard. Use range and digits declarations and let the implementation do the derivation implicitly from the predefined types. A range such as 0 .. 86_400, for example, exceeds the range of Integer on a machine with a 16-bit word. The first example below allows a compiler to choose a multiword representation if necessary.
type Second_Of_Day is range 0 .. 86_400;
rather than
type Second_Of_Day is new Integer range 1 .. 86_400;
or
subtype Second_Of_Day is Integer range 1 .. 86_400;
This applies to more than just numerical computation. An easy-to-overlook instance of this problem occurs if you neglect to use explicitly declared types for integer discrete ranges (array sizes, loop ranges, etc.) (see Guidelines 5.5.1 and 5.5.2). If you do not provide an explicit type when specifying index constraints and other discrete ranges, a predefined integer type is assumed.
Language Ref Manual references: 3.5.4 Integer Types, 3.5.7 Floating Point Types, 3.5.9 Fixed Point Types, 8.6 The Package Standard, C Predefined Language Environment, F Implementation-Dependent Characteristics
Language Ref Manual references: 3.5.7 Floating Point Types, 4.5.7 Accuracy of Operations with Real Operands
Floating point calculations are done with the equivalent of the implementation's predefined floating point types. The effect of extra "guard" digits in internal computations can sometimes lower the number of digits that must be specified in an Ada declaration. This may not be consistent over the implementations where the program is intended to be run. It may also lead to the false conclusion that the declared types are sufficient for the accuracy required. The numeric type declarations should be chosen to satisfy the lowest precision (smallest number of digits) that will provide the required accuracy. Careful analysis will be necessary to show that the declarations are adequate.
Language Ref Manual references: 3.5.7 Floating Point Types, 4.5.7 Accuracy of Operations with Real Operands, 13.7.3 Representation Attributes of Real Types
Language Ref Manual references: 4.5.7 Accuracy of Operations with Real Operands, F Implementation-Dependent Characteristics
Language Ref Manual references: 2.7 Comments, 3.5.4 Integer Types, 3.5.6 Real Types, 4.5.7 Accuracy of Operations with Real Operands
See also Guideline 3.2.5.
Language Ref Manual references: 2.4 Numeric Literals, 3.2 Objects and Named Numbers, 4.1 Universal Expressions, 4.9 Static Expressions and Static Subtypes
Language Ref Manual references: 3.3 Types and Subtypes, 3.3.2 Subtype Declarations, 3.4 Derived Types, 4.4 Expressions, 4.5 Operators and Expression Evaluation, 4.5.7 Accuracy of Operations with Real Operands
Use <= or >= to do relational tests on real valued arguments, avoiding strict equality and inequality. Forms such as the following can be used to test for approximate equality:
abs (X - Y) <= Float_Type'Small -- (1)
abs (X - Y) <= Float_Type'Base'Small -- (2)
abs (X - Y) <= abs X * Float_Type'Epsilon -- (3)
abs (X - Y) <= abs X * Float_Type'Base'Epsilon -- (4)
And specifically for "equality" to zero:
abs X <= Float_Type'Small -- (1)
abs X <= Float_Type'Base'Small -- (2)
abs X <= abs X * Float_Type'Epsilon -- (3)
abs X <= abs X * Float_Type'Base'Epsilon -- (4)
Exact equality (=) and inequality (/=) tests are a general problem in real valued computations.
Because of the way Ada comparisons are defined in terms of model intervals, it is possible for the values of the Ada comparisons A < B and A = B to depend on the implementation, while A <= B evaluates uniformly across implementations. Note that for real values in Ada, "A <= B" is not the same as "not (A > B)". Further explanation can be found in Cohen (1986) pp. 227-233. Type attributes are the primary means of symbolically accessing the implementation of the Ada numeric model. When the characteristics of the model numbers are accessed symbolically, the source code is portable. The appropriate model numbers of any implementation will then be used by the generated code. Although zero is technically not a special case, it is often overlooked because it looks like the simplest and, therefore, safest case. But in reality, each time comparisons involve small values, evaluate the situation to determine which technique is appropriate. Regardless of language, real valued computations have inaccuracy. That the corresponding mathematical operations have algebraic properties usually introduces some confusion. This guideline explains how Ada deals with the problem that most languages face.
Language Ref Manual references: 4.5.2 Relational Operators and Membership Tests
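The comparison forms above can be packaged as a reusable test. The following is only an illustrative sketch, not part of the original guideline; the generic name and the choice of form (3) as the tolerance are assumptions.
-- Hypothetical generic approximate-equality test using the 'Epsilon attribute (form (3) above).
generic
   type Float_Type is digits <>;
function Approximately_Equal (X, Y : Float_Type) return Boolean;

function Approximately_Equal (X, Y : Float_Type) return Boolean is
begin
   return abs (X - Y) <= abs X * Float_Type'Epsilon;
end Approximately_Equal;
-- Hypothetical instantiation: function Close_Enough is new Approximately_Equal (My_Float);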
<urn:uuid:8c554dc4-0b31-4ee8-b5f6-4e070144bdf9>
2.859375
1,104
Documentation
Software Dev.
51.444411
Glenn Chaple's observing basics: A star by any other name
February 2005: What is the identity of the star BD +88°8? How about HD 8890, BS 424, or SAO 308? If you're still shaking your head, I have a surprise for you.
February 1, 2005
Quick trivia question: What is the identity of the star BD +88°8? Never heard of it? How about HD 8890, BS 424, or SAO 308? Still drawing a blank? Surely you have encountered Struve 93 or ADS 1477 during your nighttime astronomical forays. If you're still shaking your head, I have a surprise for you. All of the aforementioned codes actually are catalog names for one famous star.
<urn:uuid:55996e6f-32e8-4b49-8560-ec52a9d5d807>
2.6875
320
Truncated
Science & Tech.
60.430034
Nature: Thanks to their ability to switch on and off and amplify signals, transistors are a key component of electrical devices. Several projects have demonstrated the possibility of creating transistors that are controlled by photons instead of electrical signals. The most recent, created by Wenlan Chen of MIT and her colleagues, relies on only a single photon to achieve this functionality. The proof-of-concept project uses a cooled cloud of cesium atoms and a principle called electromagnetically induced transparency in which a photon with a specific energy switches the cloud of cesium atoms between excited and ground states. When the atoms are in ground states, they allow light to pass through the cloud. Neither Chen’s transistor nor other optical transistors are likely to replace traditional ones any time soon, however, as the size and energy costs are significantly greater. MIT Technology Review: Last week researchers revealed an optical invisibility cloak large enough to hide a human, but it was limited to only working in a single direction. That achievement has already been surpassed by Hongsheng Chen of Zhejiang University in China and his colleagues. They realized that with most invisibility cloaks, researchers worried about maintaining light’s phase and polarity, but that it isn’t necessary for visible light because humans aren’t sensitive to changes in those characteristics. That realization allowed Chen and colleagues to use conventional optical components to steer light around a hidden central area. They demonstrated two versions of their cloaking device: The first is square, which hides the central area from four directions; the second is a hexagon, which works in six directions. Nature: Raman spectroscopy and scanning tunneling microscopes (STMs) have been used together to produce increasingly more detailed images of molecules. A new advancement in the pairing has produced pictures of individual molecules as well as measurements of the strength of the molecules’ bonds. By itself, Raman spectroscopy uses a laser to cause molecules to vibrate, and the way the light is scattered can be used to determine the molecule’s structure. The technique doesn’t work well with small samples. An STM uses a very fine metal tip held just 1 nm from a surface to allow electrons to quantum tunnel across the gap, and the strength of the electric current is used to map surfaces with atomic resolution. An international team of researchers has succeeded in using an STM to narrow the focus of the Raman laser. They use another laser to create oscillations in the STM tip’s electric field. When the frequency of those oscillations matches the frequency of the Raman laser, the beam becomes significantly stronger. At 0.3 nm, the resolution of the new arrangement is still less detailed than that of other techniques, such as atomic force microscopy, but the Raman–STM pairing allows for measuring the strength of molecular bonds as well. Science News: To mask data transferred between a sender and a receiver, researchers have developed a cloaking device that creates holes in time and space. Joseph Lukens of Purdue University in West Lafayette, Indiana, and colleagues—whose paper was published online in Nature—manipulated the flow of photons so that minute gaps occurred in the light wave. They then injected an electrical signal, consisting of binary data, that went undetected because the data bits passed through the gaps. 
Such a device could advance the field of secure communications by concealing transmissions from potential eavesdroppers. New York Times: An Israeli company has developed a device that can read aloud text and identify objects. Designed for the visually impaired, OrCam consists of a camera mounted on a pair of glasses, a small computer that is worn in a pocket, and a bone-conduction earpiece. According to OrCam’s website, all you have to do is point and OrCam “will understand what you want on its own, whether it’s to read, find an item, catch a bus or cross the road.” It can read newspapers and books, labels on medicine bottles, and text on computers and phones. OrCam can also be taught by the wearer to recognize personal items, credit cards, and currency as well as the faces of family and friends. The device uses computer vision algorithms developed by Amnon Shashua, a computer science professor at Hebrew University; fellow faculty member Shai Shalev-Shwartz; and former grad student Yonatan Wexler. Shashua has also supplied similar camera technology to the automobile industry. OrCam is one of a half-dozen devices currently being developed in the field of computer vision. New Scientist: Sony has developed a flat-panel electronic display that is more energy efficient and provides richer color than conventional LCD displays. The new display uses quantum dots—tiny semiconductors that confine electric charge and emit light. Each dot’s color depends on its size and shape. Although each spans just 10% of the visible spectrum, the dots can be mixed to produce 100% of the color range visible to the human eye. The company that developed the technology, QD Vision, claims that its Color IQ technology provides “the most radiant reds, brilliant blues and gorgeous greens you will ever see.” Other benefits of the new technology include lower costs and the potential for extremely thin displays. BBC: Signals sent over fiber-optic cables collect noise as they travel: The farther a signal travels, the noisier it becomes. Now, Xiang Liu of Bell Laboratories and his colleagues have demonstrated a way to increase both the speed and the distance that a fiber-optic signal can travel and still maintain its clarity. They duplicated a signal and sent the pair of light beams down a cable. At the end of the cable, when the two signals were recombined, the parts of one of the signals that were disrupted by noise were overwritten by the unchanged parts of the other signal. The overwriting technique allows for the removal of signal repeaters, which are used in fiber optics to extend the distance a signal can travel but which tend to limit the power and speed of the data transmission. Liu’s test setup used a 12 800-km-long fiber-optic cable—longer than the longest transoceanic cable—and reached speeds of 400 Gb/s, four times the speed of the best commercial cable systems. Popular Science: In combat, knowing you are being watched is just as important as being able to watch your enemy. San Diego-based defense contractor Torrey Pines Logic is developing a robotic system that can determine when someone is using a viewing device to look toward the system. The Beam 100 Optical Detection System uses laser pulses with a range of just over 1000 m in a 360-degree field of view. When the pulses are reflected back to the system, it examines the light for the signature characteristics of optical glass. Optical glass could be any sort of lens used in binoculars, cameras, or even rifle scopes. 
If it detects a reflection that it judges to be optical glass, it informs the user and indicates the direction and distance. The device is designed purely as a detection system; the use of lasers or other weapons specifically to blind is forbidden by international law. Independent: A large, unmapped, densely forested area of eastern Honduras may be the site of an ancient city called Ciudad Blanca, first reported by Hernán Cortés in 1526. Cortés never found the city and neither have any subsequent explorers. Now Steve Elkins, a filmmaker and amateur archaeologist, has teamed with archaeologists from Colorado State University to use lidar to map part of the area. Lidar creates a 3D topological map of the ground and structures on it by firing billions of pulses of laser light that can penetrate the organic forest canopy. From data collected over one week, the researchers mapped a 155-km2 area, which revealed what may be a network of plazas and pyramids. Possibly dating back to 500 CE, the city also appears to have had paved roads, parks, and advanced irrigation systems. To prevent looting, the city’s precise location has not been revealed. In partnership with the Honduran government, Elkins plans to lead a ground expedition to explore the area and make a documentary film of the effort. MIT Technology Review: Most video display systems use light-emitting backplanes covered with filters that create the individual pixels. That system requires that the light source remain on, which drains battery power. In addition, the filters reduce the brightness of the screen’s light. Lumiode, a startup in New York, has developed a display technology that uses an array of LEDs as individual pixels, with each LED covered by a layer of silicon to control the amount of light emitted. The company’s prototype is a 50 x 50 array of LEDs just 1 mm2. Lumiode claims that the display is 30 times brighter and 10 times more efficient than other displays. However, the display is currently limited to just a single color, although the company plans to add a color-controlling layer on top of the LED wafer. Lumiode CEO Vincent Lee says that he expects his company to develop a 320 x 240 pixel prototype within the next year. Lee hopes that his company can partner with electronics makers to incorporate the display in heads-up devices such as Google Glass or to create displays on car windshields.
<urn:uuid:65fcca32-625c-4464-97d5-5f2848eb28ae>
3.859375
1,927
Content Listing
Science & Tech.
38.152936
Part 14 - Exceptions
A mechanism designed to handle runtime errors or other problems (exceptions) inside a computer program. Exceptions are very important, as they are raised whenever an error occurs in the system. (Or at least they should be.) An exception that is raised and never caught stops the program. To handle the situation, exceptions must be caught. Exceptions are caught either in a try-except statement, a try-ensure statement, or a try-except-ensure statement. Exceptions are derived from a simple base Exception class. Catching an exception prevents the code from stopping and lets the program keep running even after it would have normally crashed. There can be multiple except statements, in case the code can raise multiple kinds of exceptions. Try-ensure is handy if you are dealing with open streams that need to be closed in case of an error. Note, however, that an ensure statement alone doesn't prevent the exception from bubbling up and causing the program to crash. try-except-ensure combines the two: if you don't solve the problem in your except block, the ensure block still runs before the exception propagates further. There are times that you want to raise Exceptions of your own. If Execute is called with an improper value of i, then the Exception will be raised. In production environments, you'll want to create your own Exception types.
- Think of an exercise
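The tutorial's code listings do not survive in this excerpt, and the exact language it uses (with try / except / ensure) is not identified here. As a rough analogue only, the same pattern in Python looks like the following, where ensure corresponds to Python's finally; the function and exception names are assumptions that merely echo the tutorial's wording (Execute, i):
class ImproperValueError(Exception):
    """A custom exception type, as suggested for production code."""
    pass

def execute(i):
    if i < 0:
        raise ImproperValueError("i must be non-negative, got %d" % i)
    return i * 2

try:
    result = execute(-1)
except ImproperValueError as err:
    # Catching the exception keeps the program running.
    print("handled:", err)
finally:
    # The ensure/finally block runs whether or not an exception was raised;
    # this is where open streams would be closed.
    print("cleanup runs either way")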
<urn:uuid:9af48e83-d256-49d7-87b4-bea6a1ee8f1e>
4.1875
276
Documentation
Software Dev.
52.097727
Not nearly as spectacular as astrophysical or planetary missions, magnetospheric studies are nevertheless closer to the Earthling's wellbeing, since the plasma environment in the immediate vicinity of the Earth is a far greater factor than distant supernova explosions or methane in the Martian atmosphere. Earth's magnetosphere shields us from space and solar radiation, acting as an interface between the Sun and the planet and converting solar energy into 'space weather' phenomena. On the other hand, the Sun, being the source of life on our planet, is a source of disturbance as well, occasionally ejecting clouds of energetic particles that interact with Earth's magnetosphere and initiate geomagnetic storms. More than 10 spacecraft are currently looking into the changes in the plasma environment, both inside and outside the Earth's magnetosphere, providing data that link up the Sun and the Earth (and even more with interplanetary probes carrying plasma instruments). The data are already used for 'space weather' prediction, which can anticipate actual events by up to 90 minutes or less, depending on solar wind velocity. Russia, despite its recent failures in planetary research, has been more successful in plasma studies, beginning with the INTERBALL mission (four spacecraft in Earth's proximity, operating in 1995-2001) and including three CORONAS solar observatories, the last of which, Coronas-Foton, was unfortunately lost well before its scheduled end date due to a malfunction of its service systems. Currently, two missions that could be launched in the near future are being discussed.
RESONANCE: The more the better
The RESONANCE mission includes four similar spacecraft designed to measure plasma parameters of the Earth's inner magnetosphere. It succeeds the earlier INTERBALL and current Cluster missions, the latter designed by the European Space Agency. All of them share the common idea that simultaneous observations made at different points can substantially enhance our understanding of fast plasma processes. INTERBALL, in particular, was part of the International Living With a Star program that included several spacecraft in different regions of near-Earth space. The idea of simultaneous observations brings interesting results. For example, recent measurements made by similar instruments onboard the Cluster mission and the Mars Express spacecraft around Mars have given substantial support to the hypothesis that the magnetic field protects our planet from losing oxygen, while Mars, being exposed to the solar wind in the absence of an internal magnetic field, cannot withstand its 'ripping' effect. While following this general pattern of multi-spacecraft observations, RESONANCE is unique as well thanks to its orbit, which allows the four spacecraft to stay in the same region of the magnetosphere for a long time. Moreover, as the distance between the spacecraft is changeable, multi-scale observations are also possible. The mission will use a new type of bus, namely MKA-FKI (short for 'small spacecraft for fundamental space research'), currently under development by the Lavochkin design bureau. International collaboration on the project includes Russia, Ukraine, Austria, Bulgaria, Greece, Poland, the Czech Republic, Slovakia, the USA, Finland, and France. The launch will be performed in pairs and is scheduled for the end of 2014 or the beginning of 2015.
Interheliozond: approaching the Sun
On the other end of the solar-terrestrial chain is the Sun, whose variability acts as a pacemaker for terrestrial cycles. To understand solar phenomena, high spatial and temporal resolution is crucial.
Interheliozond attempts to penetrate deeper into the solar corona and to look at the star from different angles. The project, planned for 2017, implies that a spacecraft will be sent along a long trajectory, involving gravitational maneuvers near Venus, into the solar vicinity, approximately to a point 21 million km from the Sun (1/7 of the distance from the Sun to the Earth). Moreover, as the spacecraft will be orbiting the star thrice as fast as the Earth, it will provide data from other regions of the solar surface that are otherwise invisible from the Earth, and even from the polar regions of the star, since the spacecraft will temporarily leave the ecliptic plane. If implemented, such a project will significantly contribute to solar physics, which currently seeks new methods to observe the Sun with ever greater resolution and precision. The quality of the data is crucial for 'space weather' prediction, which becomes no less significant as the number of satellites increases.
<urn:uuid:2daef1c8-4c6b-496c-ac99-3f66e9f86586>
3.625
876
Knowledge Article
Science & Tech.
20.551684
See also the Dr. Math FAQ: Browse High School Euclidean/Plane Geometry Stars indicate particularly interesting answers or good places to begin browsing. Selected answers to common questions: Pythagorean theorem proofs. - The Role of Postulates [03/29/2003] Who decided what were postulates and what were theorems? Why is it okay that postulates aren't proven? - Rotating a Point [04/08/2003] Find the image of a triangle with vertices A(0,1),B(-2,0),and C(-4,-5) under a rotation of 90 degrees counterclockwise about the origin. Is there a formula I can use instead of drawing a picture? - Rule of Three [01/23/2002] How high above the surface of the earth must a person be raised to see 1/3 (one third) of its surface? - Scanning for Mountains [06/02/1999] I am standing on a hill scanning the horizon to see a mountain. How high must the mountain be for high-tech optical equipment to be any use? - Sextant Theorem [05/13/2001] What mathematical theorem is behind using a sextant, and can it be - The Shortest Crease [12/29/1997] A piece of paper is 6 units one side and 25 units on another side... - Short History of Geometry [09/15/2001] Were there any people who helped to develop geometry besides Euclid? - Side Length of a Regular Octagon, Without Trigonometry [09/19/2003] How can I find the length of one side of a regular octagon that has a (side-to-side) diameter of 16 feet, without using trigonometry? - Software for Displaying Geometric Shapes [9/29/1995] Do you know where I could find a geometry program that would display - Spherical Triangles [10/26/1996] Why can't you use the Pythagorean formula to measure the distance between two points on Earth? - Spiral Baffle in a Cylinder [6/24/1996] We have a cylinder, 48 inches in diameter, to put a spiral baffle inside... What would the radius be? - Square Peg, Round Peg [08/22/1997] Which fits better, a square peg in a round hole, or a round peg in a - Sum of Two Arcs [01/30/2003] Three points are taken at random on the circumference of a circle. What is the chance that the sum of any two arcs so determined is greater than the third? - Surface Area and Volume: Cubes and Prisms [05/27/1998] What is the definition of surface area and volume? What are the differences and similarities between surface area and volume? - Symmetry Proof [09/27/2001] Given an angle with vertex O and a point P inside the angle, drop perpendiculars PA, PB to the two sides of the angle, draw AB, and drop perpendiculars OC, PD to line AB. Then show that AC=BD. - Tangent of 90 Degrees [5/24/1996] Why is the tangent of 90 degrees undefined? - Tanker Bearings [06/11/2000] From ship A, the bearing of an oil tanker is 300 degrees; from ship B, 1000 m due west of A, the bearing of the tanker is 060 degrees. Is the oil tanker the same distance from A as from B? - Taping a Cylinder [01/29/2001] If I want to wrap sticky tape around a cylinder to cover it, what is the relation between the diameter of the cylinder, the thickness of the tape, and the angle between the diameter of the cylinder and the length of the - Theorems and Postulates [12/02/2006] If SSS, SAS, and AAS are theorems, why do other books still use them - Thinking About Proofs [09/24/1997] How do you know what statement to write next in a proof? What reasons do - Three Colors on a Plane [03/04/2003] Given three colors, green, red, and blue, that must be painted onto a plane. The entire plane is to be covered. 
When the three colors are placed on the plane every straight line that can be drawn will pass through exactly two colors. - Three-dimensional Counterparts for Two-dimensional Objects [03/04/1998] Three-dimensional counterparts for lines, polygons, perpendicular lines, and collinear lines. - A Three-Legged Stool [06/26/2001] Why is a three-legged stool steady, while a four-legged stool can be - Three Spheres in a Dish [08/04/1999] What is the radius of a hemispherical dish if 3 spheres with radii of 10 cm are inside the dish and the tops of all 3 spheres are exactly even with the top of the dish? - Three Tapers and a Length [8/25/1995] I have three tapers (or angles) that intersect with each other. Theoretically, if I were given the total length from 0 to Y and the total height 0 to X and of course the three angles, is it possible to calculate each angles length and height? - Tiling a Floor [06/30/1999] How many square yards is a 12ft. by 15ft. room? How many 8" x 8" tiles would you need for a 30 sq. ft. room? - Traceable Mathematical Curves [10/27/1997] Is there any way to tell just by looking if a curve is traceable or not? Is there some property of a curve that will tell you this? Do curves have - Translation [9/11/1996] What does translation mean? - Trapezoid Diagonals and Midpoints of Parallel Sides [03/04/2002] In a trapezoid, why are the midpoints of the parallel sides collinear with the intersection of the diagonals? - Triangle and Circle with same Center [7/8/1996] An equilateral triangle and a circle have the same center... find the length of the side of the triangle. - Triangle and Circumscribed Circle [03/23/1998] How can you find the radius of a circle circumscribed around any triangle given the three outside points of the triangle. - A Triangle in a Circle [05/26/2000] Suppose you randomly place 2 points on the circumference of a circle. What is the probability that a 3rd point placed randomly on the circle's circumference will form a triangle that will contain the center of the - Triangle in Randomly Colored Plane [10/28/2002] Prove: Assume that all points in the real plane are colored white or black at random. No matter how the plane is colored (even all white or all black) there is always at least one triangle whose vertices and center of gravity (all 4 points) are of the SAME color. - Triangles within a Triangle [11/10/1996] If multiple small equilateral triangles are drawn within a larger one, what is the relation between the number of small triangles lying on the base of the big triangle and the total number contained within the big - A Triangle with Three Right Angles [12/01/1999] How can you make a triangle with three right angles? - Trisecting a Line [11/03/1997] How would you trisect a line using a compass and a straight edge? - Trisecting a Line [01/25/1998] Is it possible to trisect a line? (Using propositions 1-34, Book 1 of - Trisecting a Line [01/30/1998] How do I trisect a line using only a straightedge and compass? - Trisecting a Line Segment [08/13/1999] How can I measure one-third of a line of an unknown length using a compass and a straightedge? - Trisecting an Angle [06/15/1999] I've come up with a method of approximately trisecting any angle. Can you tell me how accurate it is?
<urn:uuid:b922b552-1a0e-4674-8915-f62f8c5c757d>
3.125
1,851
Q&A Forum
Science & Tech.
69.50813
Plants and Freezing. I'd like to know: is it possible to freeze a plant and revive it? If so, what mechanisms are involved? While this might be possible with some rare species of plants, in general it is not viable to do this. When you freeze a plant, the water in each of the cells freezes and expands, which lyses the cell: it basically makes the cell wall burst open. When this happens to all of the cells in the plant, it withers very quickly and dies because it no longer has any structural support and cannot segregate the components of each cell as needed. This is a difficult question because it depends upon the plant, the freezing temperature, the freezing history (i.e., has the plant had the opportunity to adapt to colder weather?), and a number of other variables. Some plants even generate their own "antifreeze". There is a very interesting book entitled "Ice" by Mariana Gosnell that devotes a couple of chapters to the freezing mechanisms in plants. I think you will find a complete answer to your question there. It is possible to freeze certain plants that have the capability to withstand freezing. Often such plants are from temperate climates and live more than just a single growing season. Jim Tokuhisa, Ph.D. Update: June 2012
<urn:uuid:8ab6ded3-78b3-4991-ae82-d13354161d09>
2.6875
296
Q&A Forum
Science & Tech.
55.464725
Pomona College Magazine Volume 40, No. 3 Sidebar: Pluto or Bust Three billion miles. Nine years of rocket-powered space flight, just to get to a place where the sun is merely the brightest star in the sky. Ask Colleen Hartman why she fought so hard to help sell NASA on the upcoming "Pluto mission," and the former director of the space agency's Outer Planets Program doesn't miss a beat. "I get excited about Pluto because we've never been there," says the relentlessly energetic Dr. Hartman, who spent more than a decade beating the drums for the $488-million "New Horizons" voyage to distant Pluto, now scheduled for liftoff in 2006. "Pluto is the ninth and last planet in the solar system--and it's the only one we've never seen up close. With this mission, we're going to get a glimpse of the tiniest planet in our solar system, and also at the nearby Kuiper Belt. This is a totally new region for space exploration. Who knows what surprises may be waiting for us, three billion miles out there?" An astrophysicist who's also an accomplished computer engineer, Hartman says she's especially intrigued by the ingeniously conceived power plant that will keep the golf cart-sized Pluto probe racing toward its target. "The trip to Pluto will be a long, long, cold journey," says the veteran space-explorer, "and the biggest problem you face is finding an energy source that will power the instruments and other devices on board. "You can't do it with electrical batteries, because they're big and heavy and they don't last very long. And you can't use sunlight, because once you get past Mars, solar radiation falls off rapidly and it becomes far too weak to use for energy." The solution? "All you need is a radioisotope thermal-electric generator [RTG], which uses decaying radioactive material [plutonium] to produce heat that can be converted into electricity. "With an RTG, you've got an almost limitless supply of energy to power all the electrical devices on board. And of course, you can also use 'celestial mechanics' to add speed at times, by taking advantage of the 'slingshot effect' produced by gravity, as the spacecraft passes various planets." So what will the "New Horizons" probe hopefully find in 2015, as it soars within 6,000 miles of Pluto, with a diameter only one-fifth as large as Earth's? "I don't think anyone knows the answer to that question," says Hartman, while noting that some astronomers believe the runt of the solar system may actually be an enormous comet. "When it comes to Pluto, you're stepping off the edge of the known solar-system world. And that's exactly why I've always been so enthused about this project. You know, when I was a student at Pomona, I used to complain that all the big discoveries had already been made. Galileo had already looked through his telescope, and the microbe-hunters had already discovered germs with their microscopes. "But after studying the planets for 20 years at NASA, I've come to see how wrong I was, as a student. The fact is, we've barely scratched the surface. We've only begun to understand the planets and the deep space beyond them. "I think the Pluto mission will be a wonderful way for all of us to keep reminding ourselves about the mystery and the wonder waiting for us, as we continue our quest to explore the universe." |Top of Page|
<urn:uuid:25c2e9a3-73ae-4f3a-94f8-195a0969e015>
2.921875
752
Truncated
Science & Tech.
53.871881
SchrodingerZ writes "Though solar eclipses are fairly common on Earth (much more in the southern hemisphere), yesterday the Mars Curiosity Rover caught sight of a partial solar eclipse in Gale Crater on the Red planet. The martian moon Phobos took a small bite out of the sun on the 37th day (Sol 37) of the rover's martian mission. The Curiosity Rover was able to take a picture of the rare event through a 'neutral density filter that reduced the sunlight to a thousandth of its natural intensity.' This protects the camera from the intense light rays seen during an eclipse or looking directly at the sun. It is possible a short movie of the event could be compiled from the data in the near future. More solar transits of Mars's moon (including the second moon Deimos) are predicted to happen in the days to come."
<urn:uuid:fddc18d3-9e81-4b12-aed2-eb1e9a8c04b8>
3.0625
174
Comment Section
Science & Tech.
49.024643
Launch Date: August 30, 1991 Mission Project Home Page - http://www.lmsal.com/SXT/ The Yohkoh Mission is a Japanese Solar mission with US and UK collaborators. It was launched into Earth orbit in August of 1991 and provided valuable data about the Sun's corona and solar flares. The satellite carried four instruments - a Soft X-ray Telescope (SXT), a Hard X-ray Telescope (HXT), a Bragg Crystal Spectrometer (BCS), and a Wide Band Spectrometer (WBS). Yohkoh suffered a spacecraft failure in December 2001 that has put an end to this mission. During the solar eclipse of December 14th the spacecraft lost pointing and the batteries discharged. The spacecraft operators were unable to command the satellite to point toward the Sun. There were four instruments on the satellite that detect energetic emissions from the Sun: The Bragg Crystal Spectrometer (BCS) consists of four bent crystal spectrometers. Each is designed to observe a limited range of soft x-ray wavelengths containing spectral lines that are particularly sensitive to the hot plasma produced during a flare. The observations of these spectral lines provide information about the temperature and density of the hot plasma, and about motions of the plasma along the line of sight. Images are not obtained, but this is offset by enhanced sensitivity to the line emission, high spectral resolution, and time resolution on the order of one second. The Wide Band Spectrometer (WBS) consists of three detectors: a soft x-ray, a hard x-ray, and a gamma-ray spectrometer. They were designed to provide spectra across the full range of wavelengths from soft x-rays to gamma rays with a time resolution on the order of one second or better. Like the BCS, images are not obtained. The Soft X-Ray Telescope (SXT) images x-rays in the 0.25 - 4.0 keV range. It uses thin metallic filters to acquire images in restricted portions of this energy range. SXT can resolve features down to 2.5 arc seconds in size. Information about the temperature and density of the plasma emitting the observed x-rays is obtained by comparing images acquired with the different filters. Flare images can be obtained every 2 seconds. Smaller images with a single filter can be obtained as frequently as once every 0.5 seconds. The Hard X-Ray Telescope (HXT) observes hard x-rays in four energy bands through sixty-four pairs of grids. These grid pairs provide information about 32 spatial scales of the x-ray emission. This information is combined on the ground to construct an image of the source in each of the four energy bands. Structures with angular sizes down to about 5 arc seconds can be resolved. These images can be obtained as frequently as once every 0.5 seconds.
<urn:uuid:12e0f5ac-eebf-4764-8df2-e10d6b669934>
3.34375
588
Knowledge Article
Science & Tech.
52.344466
Added 1 new A* page:A lot of what NASA does isn't just looking out at space, but looking back at Earth *from* space. For instance, how 'bout them forests?| A map released in 2010 showed forest heights around the world, with a particular focus on the continental United States. Using readings from reflected laser pulses ("LIDAR") collected by three satellites--NASA's ICESat, Terra, and Aqua--the agency was able to determine the heights of trees in forest, and compile that into a detailed map. The highest forests were found in the Pacific Northwest (yay! :) and parts of southeast Asia. Here's the map of forest heights in the United States: image by NASA Earth Observatory/Image by Jesse Allen and Robert Simmon/Based on data from Michael Lefsky (source) They went one better in 2011, putting together even more detailed maps, collaborating "with the U.S. Forest Service and the U.S. Geological Survey (USGS) to assemble a national forest map from space-based radar and optical sensors, computer modeling, and a massive amount of ground-based data." An estimated 5 million trees were measured! The "space-based radar" data came "from the Shuttle Radar Topography Mission, which was flown on the space shuttle Endeavour in 2000." Wikipedia says Endeavour was outfitted with two radar antennae for the mission: "One antenna was located in the Shuttle's payload bay, the other – a critical change from the SIR-C/X-SAR, allowing single-pass interferometry – on the end of a 60-meter (200-foot) mast that extended from the payload bay once the Shuttle was in space. The technique employed is known as Interferometric Synthetic Aperture Radar," which uses "differences in the phase of the [reflected radar] waves" to determine heights in great detail. They combined that data with Landsat satellite images and on-the-ground measurements by the U.S. Forest Service to come up with a very detailed map of biomass across the country--not just how high the trees are, but how dense they are, so they were also able to map the overall carbon content of the forests. Here's the US biomass map: image by NASA Earth Observatory; map by Robert Simmon, based on multiple data sets compiled and analyzed by the Woods Hole Research Center. Data inputs include the Shuttle Radar Topography Mission, the National Land Cover Database (based on Landsat) and the Forest Inventory and Analysis of the U.S. Forest Service. Caption by Michael Carlowicz. (source) Here's a zoomed-in view of the Pacific Northwest: image by Robert Simmon, based on data from Woods Hole Research Center (source) Those grid patterns in the center (just below Seattle, which is on the mid-east side of that big inlet, which is Puget Sound) are from logging! |The coastal Pacific Northwest of the United States has the tallest trees in North America, averaging as much as 40 meters (131 feet) in height. It has the densest biomass—the total mass of organisms living within a given area—in the country.| |A rule of thumb for ecologists is that the amount of carbon stored in a tree equals 50 percent of its dry biomass. So if you can estimate the biomass of all the trees in the forest, you can estimate how much carbon is being stored. If you keep tracking it over time, you can know something about how much carbon is being absorbed from the atmosphere or lost to it.| |In a recent report by the U.S. 
Forest Service, researchers noted that while the federal government owns slightly less than 50 percent of the forest land in the Pacific Northwest, it controls more than 67 percent of the old-growth in the region. That percentage is rising not because of new federal acquisitions, but because harvesting removed about 13 percent (491,000 acres) of old- and "late-successional" forest on non-federal lands. (The main reason for old-growth loss on federal lands is forest fire.) Here's the Seattle area at the released map's highest detail level; my biomass is just to the left of that biggish lake in the north central area!
<urn:uuid:e2dcbd00-19a3-427b-a752-22623c09b00c>
4.03125
895
Knowledge Article
Science & Tech.
50.999276
Summary for Ozyptila nigrita (Araneae). About this species: Distribution: The species is confined to southern England. It is widespread in north-western and central Europe as far north as Sweden, but has not been recorded from Ireland. Habitat and ecology: The spider occurs mainly on short calcareous chalk and limestone grassland, often in stony areas, and especially near the coast. It is also occasionally found on sand dunes. Males are adult from March to July, with a peak of activity in May and June, and again from August to October; females from March to September. UK Biodiversity Action Plan priority species. The species is abundant on some calcareous grassland sites, but rather local. It appears to have declined sharply over the past 20 years. Threats: the loss of calcareous grassland to agricultural improvement, and possibly public pressure at some coastal sites. Management and conservation: Maintain short calcareous grassland by grazing and possibly rotational disturbance. Text based on Dawson, I.K., Harvey, P.R., Merrett, P. & Russell-Smith, A.R. (in prep.).
<urn:uuid:be8d6a24-37b7-4808-832d-51e818ef38ae>
2.953125
260
Knowledge Article
Science & Tech.
44.963865
First results from a new instrument: Greenland--Greenland's ice sheets are melting extensively, even in some inland areas, according to an image generated from data obtained by a Japanese climate-observation satellite.That picture is astounding. Here's a picture of melting days in 1992, and again ten years ago in 2002: Data from the Japan Aerospace Exploration Agency's Shizuku satellite shows the ice has been in retreat most noticeably in the southern part of the vast island. "In the south, ice is melting in many locations, even in inland areas at high altitudes," said Kazuhiro Naoki, who analyzed the satellite data. In the image, the different hues of blue represent how many days the ice melted. Darker blue indicates where ice melted for longer periods. The Shizuku satellite, which was carried into space on an H-2A rocket in May, observed the ice sheets between July 3 and 9. The data was analyzed at JAXA's Earth Observation Research Center. Here's five years ago: The spread of a lengthening melt across the south is very evident. The solid band of melt anomalies on the western coast is absent in 1992, a broken line in 2002 and 2007, and a solid line now. Is there any part of Greenland that will be safe from melt in 2050, 2060, 2070? Do the models project extension of melt into the interior of Greenland of the kind we're seeing? More to come.
<urn:uuid:b9ad016d-f6d7-4df4-8608-9ef0663eb557>
3.421875
307
Personal Blog
Science & Tech.
51.040085
Further to Greenland's current warming & melting glaciers, I bring you this analysis of a paper published in Nature Geoscience: model simulations show that "ice acceleration, thinning and retreat begin at the calving terminus and then propagate upstream through dynamic coupling along the glacier." What is more, they find that "these changes are unlikely to be caused by basal lubrication through surface melt propagating to the glacier bed," which phenomenon is often cited by climate alarmists as a cause of great concern with respect to its impact on sea level. Nick et al. conclude that "tidewater outlet glaciers adjust extremely rapidly to changing boundary conditions at the calving terminus," and that their results thus imply that "the recent rates of mass loss in Greenland's outlet glaciers are transient and should not be extrapolated into the future [italics added]." And if this advice is followed, the extreme sea-level-rise scenarios promoted by the alarmists, such as Gore and Hansen, fail to materialize. Kinda like a dam on a river, huh? You dam the water, it backs up. The dam breaks, the backed up water flows. Yeah, the ice is a bit more dense but same principle. Word watch: We've seen the abject failure of the term "Global Warming" as temperatures cooled. This gave way to the second abject failure of "Climate Change," a ludicrously neutral term on which everyone can agree--climate changes, so what? The new one creeping around is "Global Weirding" in which almost every notable weather event is branded an inexplicable anomaly. Nice to see such despair in action on the PR front. OK, so some warmists are stuck in the last hunnert years, ignoring any data older than that & completely missing factors which have periodicities much greater than a century. Fine. We got that covered, too. In this analysis the authors have been able to provide evidence that the mid 20th century had far more extreme conditions than any subsequent time: The two researchers report that with respect to all discrete five-year periods (pentads) between 1950 and 2004, "the 2000-04 pentad has the second longest mean predicted melt duration on Novaya Zemlya (after 1950-54), and the third longest on Svalbard (after 1950-54 and 1970-74) and Severnaya Zemlya (after 1950-54 and 1955-59) [italics added]," which findings clearly reveal the 1950-54 pentad to have experienced the longest melt season of the past 55 years on all three of the large Eurasian Arctic ice caps. In spite of almost everything we have heard from climate alarmists over the past two decades about global warming becoming ever more intense, especially in the Arctic, conditions during the middle of the past century seem to have been even more extreme in this respect than they have been at any subsequent time, especially on these three major ice caps and their associated glaciers. Holy smokes, take a day off, galavantin' all over the province, free lunch from a fellow artist/photographer (thank you again!), beer & BS, gossiping about ehMac & the research world explodes! Background: When I was in university, I had the privilege of taking a couple of topics courses from Archie Stalker, now deceased & formerly of the Geological Survey of Canada (fantastic Quaternary scientist). He was the kind of old school, hands on researcher that would tell you to lick your finger & stick it in the sediment you are analyzing, then place it in your mouth & feel the texture. 
You could always tell if the sample was a paleosol, a lacustrine sediment, a clay or something else just by putting it in your mouth. Very cool. He had personally hiked & mapped more of southern Alberta than anyone else before or since. Hard to keep up with him, even at 65. At any rate, soils. A brief analysis of a paper published in Quaternary Research wherein the authors analyze soils in the Italian Alps & conclude that the Roman Warm Period & the Medieval Warm Period were both warmer & longer than the current warming period. Among a number of other interesting findings, Giraudi determined that between about 200 BC and AD 100 -- i.e., during the Roman Warm Period -- "soils developed in areas at present devoid of vegetation and with permafrost," indicative of the likelihood that temperatures at that time "probably reached higher values than those of the present [italics added]." He also concluded that "analogous conditions likely occurred during the period of [the] 11th-12th centuries AD, when a soil developed on a slope presently characterized by periglacial debris," while noting that "in the 11th-12th centuries AD, frost weathering processes were not active and, due to the higher temperatures than at present [italics added] or the longer duration of a period with high temperatures [italics added], vegetation succeeded in colonizing the slope." He also determined that "the phase of greatest glacial expansion (Little Ice Age) coincides with a period characterized by a large number of floods in the River Po basin," and that "phases of glacial retreat [such as occurred during the Roman and Medieval Warm Periods] correlate with periods with relatively few floods in the River Po basin." This study provides a double refutation of the climate-alarmist claim that late 20th-century temperatures were the warmest of the past two millennia. And it demonstrates that in this part of Europe, cooler periods have generally experienced less flooding than have warmer periods. Italics from the analysis, bold mine. Also wanted to give credit where credit is due: In a sense, I'm glad all these warmists have come out with their claims about AGW. The opportunity provided has been a wonderful impetus to research the truth about the complexity of global warming & all the natural causes & cycles contained therein. Moving past the google wars and its google link duels. It seems the much celebrated anti climate change champion and scientist is in a heap of trouble, once again. Andrew Weaver Sues Tim Ball for Libel University of Victoria Professor Andrew Weaver, the Canada Research Chair in Climate Modelling and Analysis, has filed suit for libel against freelance climate change denier Tim Ball. The suit (attached below) arises from an article that Ball penned for the right-wingy Canada Free Press website, which has since apologized to Weaver for its numerous inaccuracies and stripped from its publicly available pages pretty much everything that Ball has ever written. In the article, Ball, a former geography professor at the University of Winnipeg with an indifferent academic record and a lifetime peer-reviewed literature output of just four articles (none of them in atmospheric physics), assailed Weaver as uninformed about climate, unqualified to teach and compromised by his lavish funding, accusations for which he offered no proof whatever. 
Weaver, a member of the Royal Society of Canada who has authored more than 190 papers, was also a lead author on three of the four reports of the Intergovernmental Panel on Climate Change (IPCC), and is lined up as a lead author on the fifth. He's also won pretty much all the academic and teaching awards that are available to a Canadian professor who has not yet had his 50th birthday. Ball, famously slow to notice the obvious, apparently didn't realize that he was overmatched. Of course, it's not the first time. Ball sued University of Lethbridge Professor Dan Johnson in October 2006 over imagined slights in a letter to the editor that Johnson had written to the Calgary Herald. When both Johnson and the Herald filed devastating Statements of Defence, Ball turned tail and ran. But regardless that the suit had exposed the numerous falsehoods that once coloured Ball's resume - and regardless that a University of Calgary audit confirmed that Ball had been accepting money that had been sluiced through a university slush fund that had been set up to conceal the money's oil industry origins, Ball has continued to write and speak, claiming some higher knowledge of the workings of climate change - actually, of the lack of climate change. Just one more crack in the warmists' theory of AGW causing severe weather. It apparently doesn't. Magnetic Polar Shifts Causing Massive Global Superstorms Superstorms can also cause certain societies, cultures or whole countries to collapse. Others may go to war with each other. (CHICAGO) - NASA has been warning about it…scientific papers have been written about it…geologists have seen its traces in rock strata and ice core samples… Now "it" is here: an unstoppable magnetic pole shift that has sped up and is causing life-threatening havoc with the world's weather. Forget about global warming—man-made or natural—what drives planetary weather patterns is the climate and what drives the climate is the sun's magnetosphere and its electromagnetic interaction with a planet's own magnetic field. When the field shifts, when it fluctuates, when it goes into flux and begins to become unstable anything can happen. And what normally happens is that all hell breaks loose. Magnetic polar shifts have occurred many times in Earth's history. It's happening again now to every planet in the solar system including Earth. The magnetic field drives weather to a significant degree and when that field starts migrating superstorms start erupting. The superstorms have arrived The first evidence we have that the dangerous superstorm cycle has started is the devastating series of storms that pounded the UK during late 2010.
<urn:uuid:ba74f3a5-6e9c-4a12-bdab-a2df6d379adb>
2.84375
1,972
Comment Section
Science & Tech.
36.031847
is not a new phenomenon, nor is extinction. Many times in Earths history, the climate has changed sometimes rapidly and drastically and species have become extinct. At least five times, more than 50 percent of species inhabiting the planet have died out, and as few as 2 to 4 percent of the species that have ever lived are believed to survive today. Some scientists say that in the face of impending climate change, the world may be headed into another mass extinction event. As temperatures warm, the American pika, which lives on moist, cool mountaintops, such as Mount Evans in Colorado, shown here, does not have much room for upslope migration. Courtesy of Steven Morello. The difference today is that the world is inhabited by close to 7 billion people and biodiversity has been put into small refuges rather like islands, said Richard Leakey, one of the worlds foremost anthropologists and wildlife ecologists, at the Stony Brook World Environmental Forum, which he convened last May on Long Island to discuss climate change and biodiversity. Scientists, he said, need to be talking now about what climate change is going to do to life as we have known it. Developing a better understanding of climates effects on various species, as well as better protecting and connecting the existing refuges, will help better prepare the world for any changes to come, meeting participants said. In its Third Assessment Report, the Intergovernmental Panel on Climate Change (IPCC) estimated that Earth will warm by between 1.4 and 5.8 degrees Celsius by 2100. While humans may be able to adapt to warming at the low end of this range, other life forms might face more serious consequences; warming at the high end of the range could be catastrophic for all life on the planet, considering 5 to 7 degrees Celsius is the difference between an ice age and an interglacial period, says Stephen Schneider, a climatologist at Stanford University who has served on the IPCC. Still, the climate change debate is characterized by deep uncertainty, Schneider says, noting that there will always be uncertainty about future events. Still, he says, if the IPCC projections are correct even on the lower end of the range, likely effects could include more frequent heat waves and less frequent cold spells; increased weather extremes, including drought and storms; loss of farming productivity; and rising sea levels and sea-surface temperatures. No place will be immune, he says, including areas set aside as protected habitats. More than 1.9 million species have been cataloged on Earth, but scientists believe that at least 5 million to 30 million species exist, according to the World Conservation Union (IUCN). Over the past 500 years, human activity has forced 844 known species to extinction, and 15,589 known species are facing extinction right now. The current extinction rate, since A.D. 1500, is estimated to exceed the natural extinction rate by 100 to 1,000 times, IUCN says. And climate change will only exacerbate this rate as further stress is put on an already stressed system, says Lee Hannah, a climate change biologist at the Center for Applied Biodiversity Science with Conservation International. But just as climate change will not affect the whole world equally, it will not affect all species in the same way, Hannah says. The fossil record clearly shows, he says, that species respond individually to climate change, not as coherent communities. 
So although scientists are already seeing some of the changes to come, especially in higher latitudes, such as birds migrating and breeding earlier in spring, and fish moving to cooler waters farther north, it is important to conduct bioclimatic modeling studies to give us a better picture of what could happen, Hannah says. Three hundred species of these exotic plants called proteas, which are endemic to the Cape Floristic Region of South Africa, stand a 21 to 40 percent chance of extinction if the climate warms as projected in mid-range estimates by the Intergovernmental Panel on Climate Change, according to new research in BioScience. Courtesy of Guy Midgley. Not only warming temperatures pose problems, says Thure Cerling, a geologist at the University of Utah in Salt Lake City. The water balance will also change, causing trouble for species that depend on certain equilibriums of precipitation and evaporation. For example, the American pika, which depends on moist, cool mountaintop climates, is quickly facing extinction due to climate change. Because the small rodent-like mammals already live in tiny niches atop mountains, they do not have much room to move up-slope and they are not physiologically designed to migrate, according to the World Wildlife Fund (WWF). Although some species can migrate, such as the grizzly bear, species that depend on cooler temperatures, such as those that live in higher latitudes or altitudes such as pikas or polar bears, will be even more threatened because of less room for habitat expansion, Hannah says. Climate change impacts are equally dramatic in the oceans, says Jane Lubchenco, a marine ecologist at Oregon State University in Corvallis. Were already seeing increased sea-surface temperatures, upwelling, more storms, increased acidification and circulation changes, she says. Although scientists do not know enough yet about all the effects of these changes on marine organisms, Lubchenco says, they do know that corals, which cannot migrate, are bleaching and dying quickly, and fish that can migrate, such as tuna, are moving to cooler waters. Climate change is a reality that at this point cannot be turned around, Hannah says. But we dont have to throw up our hands into the air in exasperation, Cerling adds. We dont have to lose the rest of the megafauna we have on the planet. But we do have to do something now to protect it if we dont want to lose it. A key step in that process is resilience building, says Lara Hansen, chief scientist of the Climate Change Program at WWF. Resilience building changes the way protected areas and resources are managed by considering not only what the ecosystems or habitats (and everything in the ecosystems) need right now, but also what they might need 20, 50 or 100 years from now. Part of what ecosystems need is more connectivity between protected areas a way to change what are now postage-stamp-sized refuges surrounded by human activities to interconnected systems that give plants and animals more room in which to operate, she says. About 12 percent of Earths land surface is protected, says Jeff McNeely, chief scientist at IUCN, while less than 1 percent of the ocean is protected, Lubchenco says. Merely setting aside land or ocean acreage, however, is insufficient, Lubchenco says its hugely important to pay attention to whats happening around the reserve as well as whats happening inside. As the climate changes, for example, threatened species may need to change locations to survive, McNeely says. 
Having spaces between and surrounding protected areas managed in ways that do not discourage species from spreading out would then become key, he says. Even better, would be to manage these in ways that actively encourage dispersal, for example, by creating national forests and building wildlife underpasses or corridors where highways cut through the habitat, such as has been done in the Los Angeles area, he says. Protected areas are great, but they wont [preserve biodiversity] alone, Hansen says. Countries also need to take active steps to reduce greenhouse gas emissions to curtail global warming, she says. Indeed, Hannah says, we need to stop anything that is currently threatening ecosystems because climate change will only heighten the threats. It is important to emphasize that extinctions estimated due to climate change are not inevitable, he says, but if we cant do the simple stuff like protecting parks now, we have little hope of addressing a complex threat like climate change later. Climate Change Program at World Wildlife Fund World Conservation Union (IUCN) Stephen Schneider's Web site Jane Lubchenco's Web site Stony Brook World Environmental Forum Center for Applied Biodiversity Science at Conservation International Convention on Biological Diversity Back to top
<urn:uuid:9673532a-e17a-43d6-a5ea-30ed37b1765e>
3.9375
1,682
Knowledge Article
Science & Tech.
32.367554
Revision Control with Arch: Introduction to Arch Arch quickly is becoming one of the most powerful tools in the free software developer's collection. This is the first in a series of three articles that teaches basic use of Arch for distributed development, to manage shared archives and script automated systems around Arch projects. This article shows you how to get code from a public Arch archive, contribute changesets upstream and make a local branch of a project for disconnected use. In addition, it provides techniques to improve performance of both local and remote archives. Revision control is the business of change management within a project. The ability to examine work done on a project, compare development paths and replicate and undo changes is a fundamental part of free software development. With so many potential contributors and such rapid release of changes, the tools developers use to manipulate these changes have had to evolve quickly. Early revision control was handled with tape backups. Old versions of a project would be dragged out of backup archives and compared line by line with the new copy. The process of restoring a backup from tape is not quick, so this is not an efficient method by any means. To work around this lag, many developers kept old copies of files around for comparison, and this was soon integrated into early development tools. File-based revision control, such as that used by the Emacs editor, uses numbered backup files so you can compare foo.c~7~ with foo.c~8~ to see what changed. Versioned backup files even were integrated into the filesystem on some early proprietary operating systems. For nearly two decades, the preferred format for third-party contributions to free software projects has been a patch file, sometimes called a diff. Given two files, the diff program generates a listing that highlights the differences between them. To apply the changes specified in the diff output, a user need only run it through the patch program. In the 1990s, the Concurrent Versions System (CVS) became the default for managing the changes of a core group of developers. CVS stores a list of patches along with attribution information and a changelog. A primitive system of branching and merging allows users to experiment with various lines of development and then fold successful efforts back into the main project. CVS has its limitations, and they are becoming a burden for many projects. First, it does not store any metadata changes, such as the permissions of a file or the renaming of a file. In addition, check-ins are not grouped together in a set, making it difficult to examine a change that spanned multiple files and directories. Finally, nearly all operations on a remote CVS repository require that a new connection be opened to the server, making it difficult for disconnected use. Efforts such as the Subversion Project have come a long way toward fixing the flaws found in CVS. Subversion is effectively a CVS++, and it supports file metadata change logging and atomic check-ins. What it still requires is a centralized server on the network that all clients connect to for revision management operations. A new generation of revision control systems has sprung up in the past few years, all operating on a distributed model. Distributed revision control systems do away with a single centralized repository in favor of a peer-to-peer architecture. Each developer keeps a repository, and the tools allow easy manipulation of changes between systems over the network. 
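The unified diff format at the heart of this patch-based workflow is easy to demonstrate. Below is a minimal sketch using Python's standard difflib module; the two file revisions are invented for illustration and have nothing to do with any particular project:

```python
import difflib

# Two hypothetical revisions of the same file, as lists of lines.
old = ["def greet(name):\n", "    print 'Hello, ' + name\n"]
new = ["def greet(name):\n", "    print('Hello, ' + name)\n"]

# unified_diff() emits the same textual format produced by `diff -u`,
# which is what the patch program (and patch-based revision control) consumes.
for line in difflib.unified_diff(old, new, fromfile="hello.py (old)", tofile="hello.py (new)"):
    print(line, end="")
```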
Projects such as Monotone, DARCS and Arch are finding popularity in a world where free software development happens outside of well-connected universities, and laptops are much more common. One of the most promising distributed systems today is GNU Arch. Arch handles disconnected use by encouraging users to create archives on their local machines, and it provides powerful tools for manipulating projects between archives. Arch lacks any sort of dedicated server process and uses a portable subset of filesystem operations to manipulate the archive. Archives are simply directories that can be made available over the network using your preferred remote filesystem protocol. In addition, Arch supports archive access over HTTP, FTP and SFTP. One advantage to not having a dedicated dæmon is that no new code is given privilege on your server machine. Thus, your security concerns are with your SSH dæmon or Web server, which most system administrators already are keeping tabs on. Another advantage is that for most tasks no root privilege is needed to make use of Arch. Developers can begin using it on their own machines and publish archives without even installing Arch on the Web server machine. This affects the pattern of adoption as well. Using CVS or Subversion is a top-down decision made for an entire project team, although Arch can be adopted by one or two developers at a time until everyone in the group is up to speed.
<urn:uuid:7d39614a-a9e2-4b5f-b07c-e5a7b42cfaf5>
2.8125
1,471
Content Listing
Software Dev.
36.629154
When a car ignition coil increases the voltage from 12v DC to 10,000v DC, what happens to the value of the current? Voltage and current have a direct relationship. Does the current increase also? Nope. As voltage is increased, current DEcreases. What stays the same is power, which is voltage times current. Richard Barrans Jr., Ph.D., M.Ed. Department of Physics and Astronomy University of Wyoming No, Paul, it is the inverse, for transformers. what is conserved is power. So when voltage goes _up_ by a factor of N, the current goes _down_ by a factor of N. But I should tell you that is for an efficient AC transformer, which transfers energy continuously, and is never suddenly interrupted. Ignition coils waste some power and are also calculated differently. They store and discharge energy. While the 12v is applied and the primary-coil current ramps up, energy is being stored. The energy stored is E = (1/2) * L * I^2, where L is the inductance of the primary coil, and I is the current. (It is exactly like the formulas for energy on a charged capacitor: E = (1/2) * Capacitance * Voltage^2 or kinetic energy of a moving mass: E = (1/2) * Mass * velocity^2. ) After the current is ramped up, all further time running high current adds no more magnetic stored energy, it is just wasted in the resistance of the wire, making the coil get hot. (Many gimmicks on market to minimize that. Capacitor-discharge ignition is one of them.) Then a switch (the distributor points) opens and stops the primary current. That forces a corresponding current to flow in the secondary, about 1000x less current, but capable of charging the spark cable to 1000x higher voltage. More, actually, if the cable's capacitance is small. That stored energy insists on getting out somewhere, and merely charging the cable to high voltage is not quite enough. At some point the spark-plug reaches its breakdown voltage, and all the stored energy is discharged into that spark. End of story until next time the points close, re-starting the 12v charge-up. It is hard to get a good value of L of an ignition coil, but E = (1/2) * Voltage * ChargeTime * FinalCurrent also works. Charge-time is the time mentioned above, the time during which the primary coil current is linearly increasing. (if it's not linear, rather an exponential decay to he final current, extrapolate the initial ramp rate with a straight line to where it would reach the final current, and read that time interval.) It's some milliseconds; you can see it on an oscilloscope when hooked up to measure current in the primary side of the coil. And I think the actual turns ratio of wire-windings inside an ignition coil may be less than 1000:1, maybe it is around 200:1. The top 5x or 10x of voltage increase happens because of the inductive voltage surge that always happens when an inductor with current is suddenly open-circuited. The peak voltage the ignition coil can reach must always be higher than what the spark-plug requires to spark over. Maybe 2x higher. All that being said, thinking of it as a transformer still kind of works, including the inverse relationship of voltage and current. Yes they have a direct relationship, but not in the manner you see with typical transformers in steady-state applications. In a step-up transformer, output voltage is greater, but at a lower current than the input current (conservation of energy). 
On the 12V side of an ignition coil, current ramps up over time (inductors resist changes in current) until it reaches a few amps, slowly building up energy in the magnetic field. When it gets disconnected, the magnetic field collapses and creates "back EMF" at a much higher voltage on the secondary side because of the turns ratio. There is no current until the voltage reaches the dielectric strength of the air-fuel mixture at the spark plug, at which point the gasses ionize and form nearly a short circuit. At this point there is a very brief pulse of current (around 10-100 microseconds long) that spikes and then decays. The current can be quite high (nearly a tenth of an amp), but being relatively brief, does not violate conservation of energy. Yes, voltage is directly related to current times resistance (Ohm's Law). When you have a step-up voltage coil, what happens is that the voltage increases but the current decreases. In physics you have the law of the conservation of energy (Joules). Power is the rate of energy consumption (Joules/sec). Electrical power (Watts) is the product of current (amps) times voltage. So if you increase the voltage, the current decreases unless you have what is called an "active device" to add power, like a battery, alternator/generator, or power company adding power to the circuit. "Passive devices" just consume power; "active devices" add power to a system (but they really don't add power, they just convert power from one form, such as chemical power in batteries, to electrical power). Update: June 2012
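To make the energy and turns-ratio relations quoted in these answers concrete, here is a rough numerical sketch. All component values (inductance, peak current, turns ratio, back-EMF) are assumed, round figures chosen for illustration; they are not measurements of any particular coil.

```python
# Back-of-the-envelope ignition-coil numbers using the relations quoted above.
# All component values are assumed, round figures for illustration only.

L_primary   = 8e-3      # primary inductance in henries (assumed)
I_final     = 4.0       # primary current when the points open, in amps (assumed)
turns_ratio = 100       # secondary:primary turns (assumed)

# Energy stored in the primary's magnetic field: E = (1/2) * L * I^2
energy_J = 0.5 * L_primary * I_final**2
print(f"stored energy: {energy_J * 1000:.0f} mJ")          # ~64 mJ

# Ideal-transformer view: voltage steps up by the turns ratio,
# current steps down by the same factor, so V*I (power) is unchanged.
V_primary_kick = 300.0                                      # assumed primary back-EMF spike, in volts
V_secondary = V_primary_kick * turns_ratio                  # ~30 kV available at the plug
I_secondary = I_final / turns_ratio                         # ~40 mA, briefly
print(f"secondary: ~{V_secondary / 1000:.0f} kV at ~{I_secondary * 1000:.0f} mA")
```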
<urn:uuid:b2f20f11-b3a5-4b73-81d1-14375c0b9159>
2.90625
1,195
Q&A Forum
Science & Tech.
53.035269
Animals and Homing Name: Belle M. How do birds like homing pigeons know where to go to a place that is hundreds of miles away? Cats and dogs can do this to does it have to do with the earth's magnetic fields This is one of the great mysteries of the animal world. I don't think anyone knows the answer for sure yet, despite much research, but from what we know so far, it is probably a combination of many different clues, depending on the species, the area, the season, and the weather. There is evidence that pigeons and some other birds can sense magnetic fields, also some pretty clear evidence that many birds use the stars and sun to help them. Simple visual clues may also be used for short distances or over familiar territory. If you check a good library or do an internet search on "homing" or "bird migration" you should turn up many interesting references. Click here to return to the Zoology Archives Update: June 2012
<urn:uuid:9d0ac0f9-071c-4a5a-8ddb-921c1805be0b>
3.1875
214
Knowledge Article
Science & Tech.
52.535698
6. Recent HRIBF Research - Use of 7Be Beam in Wear Studies (Proof-of-Principle Experiment) [U. Greife (Colorado School of Mines), spokesperson] Currently in the United States, about 200,000 hip-joint replacement surgeries are performed each year. Worldwide, the number is nearly 1 million [DeG04, Feh00]. Unfortunately, these implants do not last forever, but seem to have useful lifetimes (limited by wear) between 10 and 20 years. The aim of wear studies on artificial hip joints is to extend the lifetime of the implants through development of more durable materials, as well as to make recommendations for the patients on lifestyle (activity) choices. Substantive work has been going on in the past decade to improve the lifetime of metal-plastics joints. The wear of plastics is nearly exclusively measured by gravimetric methods. Due to the low wear, long test times are necessary to achieve reasonable accuracy. Also, fluid soak from the lubrication fluids can lead to high systematic errors in this method. We have performed a proof-of-principle experiment at the Holifield Radioactive Ion Beam Facility (HRIBF) to demonstrate the in-principle viability of a radiotracer method based on uniformly implanted 7Be for wear analysis. The radioactive 7Be for the experiment was obtained from the ATOMKI cyclotron institute in Debrecen, Hungary, where it had been produced via the 7Li(p,n)7Be reaction. After chemical extraction from the 7Li matrix at HRIBF, the 7Be material was transferred to a sputter cathode for injection in the HRIBF tandem accelerator. A new 7Be implantation setup had been developed at the Colorado School of Mines and was installed at a free beam line at HRIBF (Fig. 6-1). The activity available resulted in 7Be-beam currents of 10^5-10^6 ions per second at the sample location. Figure 6-1: 7Be implantation setup during beam time at HRIBF. The 8-MeV energy of the beam was transformed into a broad energy distribution (measured with a silicon detector in the implantation position) using a wheel of 20 foils (increasing thickness from zero to 10 μm in 0.5 μm increments) and additional energy "smearing" foils. Based on the foil thicknesses, an activity plateau (with depth) in the polyethylene from 0 to approximately 9 μm was achieved, on which wear studies were subsequently performed. Total 7Be implantation doses varied from 10^9 to 10^10 nuclei on the 7 pins used. For artificial hip joints with metal-plastic couplings, a typical wear range lies between 0.1-1 mg/cm^2 per 1 million motion cycles. This corresponds to a depth wear of about 1.08-10.8 μm per 1 million motion cycles. Motion simulators work at a speed of about 1 Hz, giving a time of about 2 weeks for one million cycles, well matched to the 7Be half-life. The wear studies were performed with a specifically designed motion simulator supplied and operated by Rush University Medical Center in Chicago. The pin-on-disk (POD) design replicates the motion trajectories of artificial joints. For this experiment, two types of plastic material were available: one conventional high-density polyethylene, the other cross-linked high-density polyethylene. The latter material had been advertised as significantly superior to the previously used conventional materials. However, due to the lower wear and the problems of the gravimetric method with fluid soak, a direct quantitative comparison had not been possible. The wear studies were performed at Argonne National Laboratory and used a 20% Germanium detector setup for 7Be gamma detection.
The plastic pins underwent a known number of wear cycles in the POD system (lubricated with bovine serum) before being cleaned and transferred for activity measurement. The results of the complete wear experiment (extending over 4 months) are depicted in Fig. 6-3. Shown in the figure is the fraction of activity worn off (natural decay corrected) as a function of wear cycles. Clearly visible are the different behaviors of the conventional material (rising group of 4 samples) and the cross-linked material (relatively flat group of 3 samples), with a preliminary result of a wear ratio of approximately 13 (40.6% per 10^6 cycles conventional; 3.1% per 10^6 cycles cross-linked). Error analysis and further simulations of implantation depth are still ongoing. Figure 6-3: Cumulative 7Be activity wear loss as a function of wear cycles. This proof-of-principle experiment shows the usefulness and practical potential of 7Be implantation as a radiotracer for wear studies. Further analysis and experiments have to look at improvements in the activity measurements (to reduce scatter and systematic error), the influence of radiation dose on mechanical properties (which will provide upper limits on the allowable 7Be implantation dose) and the possibilities of extending the method to natural materials. The experiment was performed as a collaboration between the Colorado School of Mines (U. Greife, L. Erikson, N. Patel), Rush University Hospital (M. Wimmer, Y. Dwiwedi, M. Laurent), Oak Ridge National Laboratory [K. Chipps (Rutgers), D. Bardayan, J. Blackmon (LSU), C. Gross, D. Stracener, M. Smith, C. Nesaraja, R. Kozub (TTU)] and Argonne National Laboratory (E. Rehm, I. Ahmad).
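The unit conversions behind the quoted wear figures, and the decay correction mentioned for the activity data, can be sketched as follows. The polyethylene density (about 0.93 g/cm^3) and the 7Be half-life (about 53.2 days) are standard reference values assumed here; they are not parameters reported in the article.

```python
import math

# Convert areal mass wear to depth wear, and decay-correct a 7Be count rate.
# Density and half-life are textbook values assumed for illustration,
# not figures taken from this experiment.

RHO_PE = 0.93          # g/cm^3, typical high-density polyethylene (assumed)
T_HALF_DAYS = 53.2     # 7Be half-life (standard reference value)

def depth_wear_um(areal_wear_mg_per_cm2):
    """Depth removed (micrometres) for a given areal mass loss (mg/cm^2)."""
    depth_cm = (areal_wear_mg_per_cm2 * 1e-3) / RHO_PE
    return depth_cm * 1e4

def decay_corrected(activity_now, elapsed_days):
    """Scale a measured 7Be activity back to the start of the wear test."""
    return activity_now * math.exp(math.log(2) * elapsed_days / T_HALF_DAYS)

print(depth_wear_um(0.1))            # ~1.08 um per million cycles, as quoted above
print(depth_wear_um(1.0))            # ~10.8 um per million cycles
print(decay_corrected(1000.0, 14))   # counts scaled back over a ~2-week, 1-million-cycle run
```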
<urn:uuid:8e675e81-1382-4634-a4ab-abff95c02e61>
2.921875
1,167
Academic Writing
Science & Tech.
44.731244
A Near Miss On August 18, 2002, asteroid 2002 NY40 passed within 325,000 miles of Earth. If this fact doesn't scare you just a little bit, it should. This is a near-miss - it passed just 1.3 times farther from the Earth than the distance to the Moon. 2002 NY40 is an example of a Near-Earth Object, or NEO, and is one of thousands of these space rocks that range in size from a few feet up to a mile or more across. This one is about 2200 feet in diameter. Should it hit the Earth, it would blast a crater about 4.4 miles across. Though 2002 NY40 is small compared to the asteroid that killed the dinosaurs 65 million years ago, it still represents a threat - if it hits a city, it would create a natural disaster unlike any in recorded history, equivalent to 50,000 Hiroshima-type bombs. This image is actually a series of short (1 second) images, with 1 second in between - it took just 90 seconds for the asteroid to cross this field... it was moving so quickly across the sky that it moved the diameter of the full moon in a bit less than 4 minutes! 2002 NY40 does not pose a threat in the future, according to calculations. But keep in mind that most NEOs are found after they've already passed close to the Earth...
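As a rough check that the quoted figures hang together, the apparent speed across the sky implies the transverse velocity computed below. This is only a sketch; the 0.5-degree apparent diameter of the full Moon is a standard value assumed here, and nothing in this snippet comes from the original observation report.

```python
import math

# Infer the asteroid's transverse speed from the figures quoted above.
distance_km = 325_000 * 1.609          # closest-approach distance (~523,000 km)
moon_diameter_deg = 0.5                # apparent size of the full Moon (assumed standard value)
crossing_time_s = 4 * 60               # "a bit less than 4 minutes"

# Arc length subtended by half a degree at that distance:
arc_km = distance_km * math.radians(moon_diameter_deg)
speed_km_s = arc_km / crossing_time_s
print(f"~{arc_km:.0f} km covered, i.e. ~{speed_km_s:.0f} km/s across the sky")
# -> roughly 4,600 km in 240 s, about 19 km/s of transverse motion,
#    a plausible encounter speed for a near-Earth asteroid.
```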
<urn:uuid:8ff2f07b-8477-43e8-a40d-d762876714af>
3.484375
307
Knowledge Article
Science & Tech.
79.411857
Atlas recording spawning and nursery areas of fish in the Great Lakes and associated rivers listed by area and then by species. A 14-volume atlas in PDF format. Published in 1982 by the U.S. Fish and Wildlife Service. Manual for research program on the nesting habits of sea turtles of the Virgin Islands, with descriptions of species, nesting behavior, observation methods, record keeping, tagging, and tissue sample collection. (PDF file, 121 pp.) Description of the use of a miniature video-camera system deployed at nests of passerine species in North Dakota to videotape predation of eggs or nestlings by animals such as mice, ground squirrels, deer, cowbirds and others. Literature review of sago pondweed, a submersed angiosperm that attracts waterfowl, but is also a nuisance plant that clogs irrigation systems. Includes classification, distribution, habitat, physiology, management, and economics.
<urn:uuid:24fabb3d-f5d5-49de-938b-8025a10ee86e>
2.6875
192
Content Listing
Science & Tech.
39.136429
This is an image of Saturn in false color.

Saturn's Belts and Zones

The striped cloud bands on Saturn, like Jupiter's, are divided into belts and zones. In a belt, the wind flows very strongly in one direction only. In a zone, the wind flows very strongly in exactly the opposite direction. These kinds of winds are called "zonal winds". Measurements show that the winds of Saturn, within a belt or a zone, can reach 500 m/sec (1100 miles/hour), but are usually about 100 m/sec (225 miles/hour). On Earth, the most powerful winds during a hurricane can exceed 100 miles per hour. So the winds of Saturn make for a pretty rough environment.
<urn:uuid:57793021-10a4-432b-b3be-5f3a713f9c5a>
3.5
535
Content Listing
Science & Tech.
66.320458
An Open Universe

An Eternal Universe

If the universe does not contain enough matter to stop its expansion, it will continue to expand forever. Using the currently understood laws of physics, we can project into the future what the Universe may look like in very distant eras. Two astrophysicists at the University of Michigan have outlined the future history of the Universe. They have divided the future into eras.

The current era is known as the Stelliferous, or Star-Filled, Era. In this era the Universe is filled with stars and galaxies and planets, as it is today. At the end of this era all stars will have exhausted their fuel and died, leaving behind only remnants of their once glorious era.

The next era is known as the Degenerate Era. In this era the universe is made of dead planets, brown dwarfs, white dwarfs, neutron stars, black holes, and some theoretical forms of dark matter. At the end of this era all protons, the building blocks of atomic nuclei, disintegrate.

The next era is the Black Hole Era, because black holes will be the only gravitationally important objects left in the universe. However, black holes do not last forever: they slowly evaporate through Hawking radiation. After that the Universe will be composed only of radiation and of particles with an infinite lifetime, such as electrons, positrons, and neutrinos. From this point on interesting things might continue to happen, but we have reached the limits of our knowledge.
<urn:uuid:581ba702-b04c-422b-a719-a38ab41cc327>
3.59375
702
Content Listing
Science & Tech.
54.986409
In a brief review regarding the nature of supernovae, Italian astronomer Nino Panagia highlights the major issues associated with the current understanding of supernovae. Panagia points out that while supernovae of Type Ia are used as perfect "standard candles" (that is, distance markers), they are not so perfect. Type Ia supernovae provide the evidence for the accelerated expansion of our universe, and if their results are questioned, so too must be the conclusions about our universe. Read about the questions that remain regarding supernova explosions and their nature, online here.

You hear a lot about supernova explosions, but did you ever wonder what it's like to be there when a Type II (core-collapse) supernova explodes? In a paper released today, Canadian astronomers provide the latest computer simulations of Type II supernovae. You can see their animations, which have been enhanced with the effects of the additional heating caused by neutrinos, online at http://www.cita.utoronto.ca/~fernandez/alpha_movies.html And if you wish to read about how these simulations were done, you can read their paper online at http://xxx.lanl.gov/PS_cache/arxiv/pdf/0812/0812.4574v1.pdf

If you think cosmologists have settled on the best model of the universe, you are mistaken. Just today, there appeared a paper online at http://xxx.lanl.gov/PS_cache/arxiv/pdf/081/0812.3912v1.pdf which seeks to provide an alternative to the standard dark energy model while still explaining the Type Ia supernova results that lead us to believe in the accelerating expansion of the universe. In another paper available online at http://xxx.lanl.gov/PS_cache/arxiv/pdf/0809/0809.3761v2.pdf and accepted for publication in Physical Review Letters, Canadian astronomers provide yet another alternative view of cosmology, without the need for the standard view of dark energy. And yet another paper on dark energy modeling was released today, and is available online at http://xxx.lanl.gov/PS_cache/arxiv/pdf/081/0812.3901v1.pdf within which the interactions of dark energy and cold dark matter are examined.

Cosmic dust is an important field of study because it is the dust in nebulae that helps form stars and planets. So where did all the dust out there come from? One NASA scientist joined a group of Japanese astronomers in modeling the formation and evolution of dust in primordial supernovae. They use observations of supernova remnants from the oldest known supernovae (called Population III supernovae) to test the results of their models. Learn more about the origins of cosmic dust online at http://xxx.lanl.gov/PS_cache/arxiv/pdf/0812/0812.1448v1.pdf And if you are curious about cosmic dust in general, you may want to take a look at a popular-level presentation done by a colleague (Joe Weingartner) at http://physics.gmu.edu/~joe/NOVAC.pdf

The accelerating expansion of the universe and the existence of dark energy were invoked to explain the first results of a survey of Type Ia supernovae. In a continuing report about high-redshift (most distant) supernovae, scientists of the ESSENCE team provide the latest information from their data after four years of operation. If you were curious about the acronym, ESSENCE stands for "Equation of State: SupErNovae trace Cosmic Expansion." To learn more about this effort to better understand dark energy and the accelerating expansion of the universe, see the team's latest paper at http://xxx.lanl.gov/PS_cache/arxiv/pdf/0811/0811.4424v1.pdf
<urn:uuid:e7d35175-60ea-405b-ae67-b0fac487dc4d>
3.0625
848
Content Listing
Science & Tech.
51.333865
Yesterday morning, when scanning the news at spiegel online, a headline in the science section made me curious: Bleistift statt schwarzer Löcher, pencils instead of black holes. That short piece turned out to be a quite sensible description of a recent experiment on the Klein paradox in single layers of graphite. There was even a link to the original paper on the arxiv: cond-mat/0604323. I had heard before of the funny features of electrons in graphene, as these single atomic layers of graphite are called, and I followed up the story. While I am not an expert on these things, I find them quite remarkable and interesting.

In 1929, Oskar Klein, of Klein-Gordon and Kaluza-Klein fame, applied the Dirac equation to the typical textbook problem of an electron hitting a potential barrier. While in nonrelativistic quantum mechanics the electron can tunnel into the barrier, albeit with an exponential damping, in the relativistic problem something strange happens if the barrier height is on the order of the electron rest energy, V ∼ mc². Then, as Klein found out, the barrier is nearly transparent for the electron, and even perfectly transparent in the limit of infinite barrier height.

Oskar Klein (www-groups.dcs.st-and.ac.uk/~history/Biographies/Klein_Oskar.html)

This very odd situation, called the Klein paradox, is nowadays usually explained by the effect of pair creation: the barrier, which is repulsive for electrons, is attractive for positrons. Thus, there are positron states inside the barrier with the same energy level as the incoming electron state. This means that electron-positron pairs are created, which are responsible for the transparency of the barrier. A steep and high potential barrier implies a very strong electric field. The pair creation at the barrier thus corresponds to pair creation in strong fields. Experimental evidence for this effect - the so-called charged vacuum - was long sought-after in heavy ion collisions, but so far without success. The problem is that electric fields strong enough for the spontaneous creation of electron-positron pairs occur only in the vicinity of superheavy nuclei, with Z ∼ 170. Such nuclei do not exist in nature - they have to be created, albeit for a very short while, in heavy ion collisions.

The problem with the experimental verification of spontaneous pair creation in high-energy physics is, obviously, the electron mass, which necessitates very strong fields. Things would be much easier if one had massless charged Dirac particles at hand. Enter graphene: carbon atoms in graphite form very neat layers with a hexagonal, honeycomb structure. This layered arrangement of the atoms explains nicely the properties of graphite, such as its suppleness, which is why it is used in pencils. There are now even pictures of the honeycomb structure, thanks to atomic force microscopy:

Graphite layer (www.physik.uni-augsburg.de/exp6/imagegallery/afmimages/afmimages_e.shtml)

Graphite is a quite good electric conductor. If one prepares single layers of graphite, or graphene, the conduction electrons are confined to this layer. Now, in this two-dimensional system, the peculiar hexagonal structure leads to a linear relation between momentum and energy for the excitations of the conduction electrons. Thus, these electronic excitations behave as massless Dirac fermions, instead of massive electrons!
This remarkable feature has been exploited in several recent experiments - and one of these experiments is the experimental study of the Klein paradox referred to in the spiegel piece. The barrier in the experiment is created by some semiconductor material inserted into the graphene layer. By applying different electrostatic potentials to the semiconductor, the barrier height for the massless quasi-electrons can be tuned. Now, potential differences of some 100 meV instead of some 0.5 MeV do the job of reaching the regime of pair creation and the Klein paradox. In the experiment, reflection and transmission coefficients are measured, and they correspond neatly to Klein's calculations! This is definitely one more example where one of the standard textbook situations of quantum mechanics is actually realised in a beautiful experiment.

Much more information about the experiment, and the special features of graphene, can be found on the News and Publications web page of the Mesoscopic Physics Group at the University of Manchester, who actually started the experimental exploration of graphene and did the Klein paradox experiment. For the Klein paradox as such, I am now studying a paper from the arxiv, quant-ph/9905076.

But what about the black holes? The spiegel piece probably took it from a news item at Science: Black Hole in a Pencil. I guess Bee is much more qualified to comment on that, once she finds a little time to breathe. The point, I suppose, is that small charged black holes would naturally provide electric fields strong enough for pair creation, and thus for testing situations like the Klein paradox in experiment. If only charged black holes could be produced more easily than nuclei with Z = 170...

Physics Klein paradox Graphene Pair Creation
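As a hedged illustration of why graphene makes Klein tunnelling measurable, the sketch below evaluates the idealized transmission formula of Katsnelson, Novoselov and Geim (2006) for a sharp square barrier, valid when the barrier is much higher than the quasiparticle energy. The Fermi velocity and the example numbers are assumptions chosen only to show the perfect transmission at normal incidence; they are not the parameters of the experiment discussed above.

    import numpy as np

    HBAR_EV_S = 6.582e-16           # hbar in eV*s
    V_FERMI = 1.0e6                 # assumed Fermi velocity in graphene, m/s
    HBAR_VF = HBAR_EV_S * V_FERMI   # eV*m

    def klein_transmission(E_eV, V0_eV, D_m, phi_rad):
        """Transmission of a massless Dirac quasiparticle through a sharp
        barrier of height V0 and width D, in the high-barrier limit."""
        k = E_eV / HBAR_VF                       # wavevector outside the barrier
        ky = k * np.sin(phi_rad)                 # conserved transverse component
        q = (V0_eV - E_eV) / HBAR_VF             # wavevector magnitude inside
        qx = np.sqrt(max(q**2 - ky**2, 0.0))
        return np.cos(phi_rad)**2 / (1.0 - np.cos(qx * D_m)**2 * np.sin(phi_rad)**2)

    # Perfect transmission at normal incidence, however high the barrier:
    print(klein_transmission(E_eV=0.08, V0_eV=0.2, D_m=100e-9, phi_rad=0.0))  # -> 1.0

An ordinary massive electron hitting a barrier of comparable relative height would instead tunnel with an exponentially small probability, which is the contrast the experiment exploits.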
<urn:uuid:d7343616-d905-48a4-b41f-8cad14562c8c>
2.984375
1,099
Personal Blog
Science & Tech.
40.996807
An experiment aimed at recreating the domestication of wolves and their subsequent evolution into the modern dog has yielded an unexpected benefit for anyone who has ever wished to be the owner of a pet fox.

Tod from "The Fox and The Hound"

The experiment was begun in 1959 by biologist Dmitry Belyaev, and continues today under the supervision of Lyudmila Trut. Starting with a base population of 30 male foxes and 100 vixens selected from the calmest foxes that researchers on the team could find in fur farms in Siberia, the project initiated a selective breeding program. From the first generation bred from the base population, the tamest kits were selected to breed the next generation, and so on. An unaltered group was created as a control, as well as a group bred selectively for aggressive traits. In order to make sure that any changes noted are due to the selective breeding, human contact with the foxes is kept to a minimum. The grading is strict, and the process involves the kits undergoing a series of tests beginning at one month of age and continuing until they reach sexual maturity at around eight months, when they are assigned to one of four classes: Class III, II, I or IE. Class III is made up of the least domesticated foxes, which are aggressive to or fearful of the handlers. Class II foxes are calm when handled but are not affectionate. Members of Class I were the tamest, until Class IE was established after the sixth generation of foxes displayed behaviors so like those of dogs that they were labeled the "Domesticated Elite". (Trut 1999)

Trut with one of the tame foxes in Siberia

The researchers were surprised at how quickly the traits of domesticated animals appeared in their foxes. The morphological changes observed in only the ninth generation (1969) included the first kits being born with piebald colouring. This was attributed to the fact that kits were being selected for breeding on the basis of their calm demeanor and receptivity to human contact; the coat colour of the foxes did not lead to the survival issues it would in the wild, where animals that cannot camouflage easily are at a disadvantage. (Ratliff 2011) More important to the experiment were the behavioral changes. By only the second generation (1960), the kits were more approachable. By the fourth (1964), the kits would wag their tails and approach humans on their own, and even allow themselves to be petted. By the sixth generation (1966) many foxes displayed full affinity with handlers, following them around like dogs and even licking the handlers affectionately. This led to the creation of Class IE. (Trut 1999)

In recent years the team have begun the process of obtaining permits to sell the surplus foxes as pets. Selling the surplus elite as pets will help raise funds to continue the research, and is also a welcome alternative for the research team to selling the foxes to fur farms. (Ratliff 2011) The ongoing experiment has produced many breakthroughs in the study of the domestication of wild animals, and the researchers also hope it will provide insight into human domestication and the development of our sociality. The research has gone a long way towards answering the question of domestication and how it is achieved.
References:
Trut, L. N. 1999, 'Early Canid Domestication: The Farm-Fox Experiment', researcher at the Institute of Cytology and Genetics, Novosibirsk, Russia, viewed 19 March 2012.
Ratliff, E. 2011, 'Animal Domestication: Taming the Wild', contributor to National Geographic Magazine, viewed 19 March 2012.
Walt Disney Productions 1981, The Fox and The Hound, image retrieved 20 March 2012.
Child, D. n.d., retrieved 20 March 2012.
Glebova, N. 2011, retrieved 20 March 2011.
<urn:uuid:4254c73e-5c01-4811-b70c-627144f74b69>
3.515625
806
Knowledge Article
Science & Tech.
42.151618
Tuesday, April 6, 2010

The Ethiopian Amber

Image courtesy PNAS/Matthias Svojtka

This is really neat: the American Museum of Natural History reports the discovery of a nice chunk of Cretaceous amber from Ethiopia. While pieces of the translucent golden fossilized tree resin are well known from other parts of the world, this is the first significant piece from Cretaceous Africa. It dates to about 95 million years ago, at which point Africa was an island continent separated from South America by a narrow sea. The discovery is presented in the latest issue of the Proceedings of the National Academy of Sciences.

Resin is the sticky stuff that gets all over my hands when I gather fallen pine limbs for backyard campfires. You're probably familiar with its ability to trap insects and other organic material; it played a key role in that Spielberg movie. You know the one. It was an adaptation of a novel. Not Minority Report.

Chemical analysis reveals that the resin may be derived from a flowering tree, or angiosperm, similar to members of a living group related to legumes. The Cretaceous was the period when angiosperms really diversified, and many of the lines that wound up producing today's familiar flowers, trees, and fruits and vegetables popped up. If this resin was from such a tree, it would be great, but it may alternatively be from a previously unknown kind of conifer.

Such a dense piece of an ancient ecosystem contains a wealth of information, so it is being examined by scientists from more than a dozen institutions around the world. So far, they've turned up 30 insects and spiders, as well as plant material, fungi, and bacteria. It's like sending a robotic probe back in time and having it bring back a terrarium's worth of critters to study.

As pretty as it is, this hunk of fossil resin probably won't make as big of a splash as, say, a big theropod. But what it reveals about the ecosystem in which the dinosaurs of ancient Ethiopia lived is arguably of much greater value than the skeleton of one creature. While big dinosaurs amp up the imagination like a shot of adrenaline, pondering the complicated interactions of every member of the ecosystem gives a more sustained rush, a point Scott Sampson makes very nicely in Dinosaur Odyssey. It used to be common to plop dinosaurs into Mordor-like wastelands of fuming volcanoes, maybe throwing the odd palm tree into the background. Thanks to discoveries like this, that lazy old image is just about dead.
<urn:uuid:c2a1bd43-7b72-486a-9c89-c14693b19fb7>
3.09375
526
Personal Blog
Science & Tech.
43.911
An electrolyte which gives ions of which at least one is of colloidal size. This term therefore includes hydrophobic sols, ionic association colloids, and polyelectrolytes. PAC, 1972, 31, 577 (Manual of Symbols and Terminology for Physicochemical Quantities and Units, Appendix II: Definitions, Terminology and Symbols in Colloid and Surface Chemistry) on page 607. IUPAC. Compendium of Chemical Terminology, 2nd ed. (the "Gold Book"). Compiled by A. D. McNaught and A. Wilkinson. Blackwell Scientific Publications, Oxford (1997). XML on-line corrected version: http://goldbook.iupac.org (2006-) created by M. Nic, J. Jirat, B. Kosata; updates compiled by A. Jenkins. ISBN 0-9678550-9-8. doi:10.1351/goldbook
<urn:uuid:ab0f91ae-00ef-48b8-b83a-8eb47812197d>
3.046875
202
Structured Data
Science & Tech.
58.49886
Credit: Image courtesy of K. Iwasawa, G. Miniutti and A. Fabian and ESA.

What must it be like near a black hole? The intense gravity generated by the singularity at the heart of the black hole should produce all sorts of strange behavior, as predicted by Einstein's Theory of Relativity. We probably won't ever have a first-hand account, since even if we could bridge the immense distance to even the nearest black hole, it's doubtful anyone could survive the voyage. Perhaps the closest we can come is shown in the above image. This image was obtained with the XMM-Newton X-ray observatory. The upper panel shows the variation of radiation produced by iron nuclei arising near the central supermassive black hole in the active galaxy NGC 3516. This variation is similar to a model, shown in the bottom panel, in which the emission arises from a spot on the accretion disc around the central black hole, illuminated by a corotating flare located at a radius of only 3.5 - 8 Schwarzschild radii (the Schwarzschild radius being the "size" of the black hole).
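To put the "3.5 - 8 Schwarzschild radii" of the caption into metres, one can use r_s = 2GM/c². The black-hole mass below is a placeholder (the caption does not state the mass of the black hole in NGC 3516), so the printed distances are purely illustrative.

    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    C = 2.998e8          # speed of light, m/s
    M_SUN = 1.989e30     # solar mass, kg

    def schwarzschild_radius_m(mass_kg):
        """r_s = 2*G*M/c^2, the 'size' referred to in the caption."""
        return 2.0 * G * mass_kg / C**2

    # Assume a 10-million-solar-mass black hole purely for illustration:
    r_s = schwarzschild_radius_m(1.0e7 * M_SUN)
    print(f"r_s = {r_s:.2e} m; 3.5-8 r_s is {3.5*r_s:.2e} to {8*r_s:.2e} m")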
<urn:uuid:d9e00aa4-33e3-4608-8fe8-a3a46367e337>
3.71875
320
Truncated
Science & Tech.
57.582591
...centimetres in width, while the butterfly Ornithoptera victoriae of the Solomon Islands has a wing span exceeding 30 centimetres. One of the longest insects is the phasmid (walkingstick) Pharnacia serratipes, which reaches a length of 33 centimetres. The smallest arthropods include some parasitic wasps, beetles of the family Ptiliidae, and mites that are less than 0.25 millimetre...
<urn:uuid:9ba3da42-942b-4257-b542-43006477c9ba>
2.84375
154
Knowledge Article
Science & Tech.
48.493821
In fluid dynamics, turbulence or turbulent flow is a fluid regime characterized by chaotic, stochastic property changes. This includes low momentum diffusion, high momentum convection, and rapid variation of pressure and velocity in space and time. Flow that is not turbulent is called laminar flow. The (dimensionless) Reynolds number characterizes whether flow conditions lead to laminar or turbulent flow; for pipe flow, for example, a Reynolds number above about 4000 indicates turbulent flow, while a Reynolds number between roughly 2100 and 4000 corresponds to transitional flow. At very low speeds the flow is laminar, i.e., the flow is smooth (though it may involve vortices on a large scale). As the speed increases, at some point the transition is made to turbulent flow. In turbulent flow, unsteady vortices appear on many scales and interact with each other. Drag due to boundary layer skin friction increases. The structure and location of boundary layer separation often change, sometimes resulting in a reduction of overall drag. Because the laminar-turbulent transition is governed by the Reynolds number, the same transition occurs if the size of the object is gradually increased, the viscosity of the fluid is decreased, or the density of the fluid is increased. Turbulence causes the formation of eddies of many different length scales. Most of the kinetic energy of the turbulent motion is contained in the large-scale structures. The energy "cascades" from these large-scale structures to smaller-scale structures by an inertial and essentially inviscid mechanism. This process continues, creating smaller and smaller structures and producing a hierarchy of eddies. Eventually this process creates structures that are small enough that molecular diffusion becomes important and viscous dissipation of energy finally takes place. The scale at which this happens is the Kolmogorov length scale. In two-dimensional turbulence (as can be approximated in the atmosphere or ocean), energy actually flows to larger scales. This is referred to as the inverse energy cascade and is characterized by a k^(-5/3) power spectrum. This is the main reason why large-scale weather features such as hurricanes occur.
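A minimal sketch of the pipe-flow classification just described; the thresholds are those quoted above, and the example fluid properties are generic textbook values for water rather than data from any particular experiment.

    def reynolds_number(density_kg_m3, velocity_m_s, diameter_m, viscosity_pa_s):
        """Re = rho * v * D / mu for flow in a circular pipe."""
        return density_kg_m3 * velocity_m_s * diameter_m / viscosity_pa_s

    def flow_regime(re):
        """Classify the flow using the thresholds mentioned in the text."""
        if re < 2100:
            return "laminar"
        if re <= 4000:
            return "transitional"
        return "turbulent"

    # Water (about 1000 kg/m^3, 1.0e-3 Pa*s) at 0.5 m/s in a 2.5 cm pipe:
    re = reynolds_number(1000.0, 0.5, 0.025, 1.0e-3)
    print(re, flow_regime(re))        # ~12500 -> turbulent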
<urn:uuid:7b797acf-95bb-40c4-8dd0-10c27b6a57ed>
3.640625
440
Knowledge Article
Science & Tech.
36.620712
MadSci Network: Evolution

I'll assume from your mention of dentition that you are sticking with mammals only. I think this is fine because this is the group that is most easily identifiable to a class of first-year students, and they show the most divergence. Eyesight and dentition are good starts and can be easily shown in a lab setting. Another anatomical difference is in the digestive tract. Carnivores have rather simple digestive tracts with few specialized organs. The ceca/appendices are usually reduced. Because of the high amount of animal fat digested, most carnivores have a gall bladder. Because plant material is so difficult to digest, herbivores usually have very specialized organs such as a four-chambered stomach or an enlarged cecum for fermentation. Ruminants, such as cows, regurgitate and remasticate their food to aid in digestion. Because herbivores take in very little animal fat, they usually have a reduced or absent gall bladder. Because mammals are generally rather similar to one another, I don't think there are going to be many differences between herbivores and carnivores. There will be differences in digestive enzymes between the two groups due to the difference in nutrition. There will most likely be greater differences between the various groups of herbivores because of the great differences in the nutritional quality of various plants. Remember, some plants are poisonous to some organisms, but perfectly edible to others. Carnivores rarely have this problem because meat is usually all edible. Behaviorally, carnivores are generally considered more intelligent. Dogs, cats, bears, and even dolphins are considered very intelligent animals. Even chimpanzees, which are considered the most intelligent animals besides humans, are now known to sometimes hunt smaller monkeys. This heightened intelligence may stem from a predator's need to 'problem solve' while hunting and ambushing. Another small difference is in social groups. Some herbivores, such as zebras, gather in very large groups, which allows for protection in numbers. Some carnivores, such as canines, form groups, but their groups are not as large as those of herbivores. Both herbivores and carnivores can have well-structured and highly organized groups. Try the links in the MadSci Library for more information on Evolution.
<urn:uuid:91d5ba3d-812c-4459-b19c-393965bb6ff1>
3.296875
461
Comment Section
Science & Tech.
27.466853
The somewhat derogatory term "bourbakism" proliferates in many public discussions about mathematics and the ways of teaching mathematics. We hear many funny anecdotes about commutativity as a method of calculation as well as the separate addition of numerators and denominators. Professional mathematicians and teachers divide into hostile groups that debate, with the alienation and indignation of medieval scholastics, the "problem of the naturalness of zero" as well as the priority rights between the relations "greater than," "greater than or equal to," and "strictly greater than." All these stories and philippics are nice and true to some extent, but they rest upon a clear-cut misunderstanding. It stands to reason to recall that there was no teacher whose name was Bourbaki. It is also reasonable to bear in mind that the treatise of Bourbaki is written in imitation of Euclid's Elements; the style of Bourbaki's Elements of Mathematics is exactly the style of Euclid's Elements.

Any serious criticism of the books by Bourbaki rests on pretensions to their content rather than their style. Bourbaki's treatise is evidently incomplete. Many important mathematical theories are absent or covered inadequately. A few volumes present the dead ends of exuberant theories. All these shortcomings are connected with the major, capital distinction between the books by Euclid and Bourbaki. In his Elements, Euclid set forth a theory that was almost complete in his times, the so-called "Euclidean" plane and space geometry. Most of this section of science was made clear once and forever in his epoch. The Bourbaki project, in contrast, was implemented in a period of very rapid progress in mathematics. Many books of the treatise became obsolete at the exact moment of publication. In particular, functional analysis had been developing contrary to what one might imagine reading the book Topological Vector Spaces. The heroic and ambitious plan of Bourbaki to present the elements of the whole mathematics of the twentieth century in a single treatise along the methodological lines of Euclid was doomed to failure. Mathematics renews and enriches itself with outstanding, brilliant achievements much faster than the books of Bourbaki's treatise were compiled. It is no wonder that the mathematical heroes who created twentieth-century mathematics distinctly and immediately scented the shortcomings of Bourbaki. The treatise encountered severe criticism and even condemnation for omitting many important topics.

As usual, this serious criticism convened all sorts of educationists and would-be specialists in "propaedeutics" who are hardly aware of what is going on in real mathematics. Everyone knows that to criticize a book for incompleteness is a weak argument, since it is strange to judge a text by what is absent from it. Grudges against the content of the treatise therefore transform, by necessity, into criticism of its form. The terseness, conciseness, and lapidary quality of the exposition fall victim to criticism and even ostracism by the adversaries of the malicious "bourbakism" in education. One of the famous mathematicians of the past observed with a witty smile:

Also, if examined "objectively," Euclid's work ought to have been any educationist's nightmare. The work presumes to begin from a beginning; that is, it presupposes a certain level of readiness, but makes no other prerequisites. Yet it never offers any "motivations," it has no illuminating "asides," it does not attempt to make anything "intuitive," and it avoids "applications" to a fault.
It is so "humorless" in its mathematical purism that, although it is a book about "Elements," it nevertheless does not unbend long enough in its singlemindedness to make the remark, however incidentally, that if a rectangle has a base of 3 inches and a height of 4 inches then it has an area of 12 square inches. Euclid's work never mentions the name of a person; it never makes a statement about, or even an (intended) allusion to, genetic developments of mathematics; it makes no cross references, except once, the exception being in proposition 2 of Book 13, where the text refers to, and repeats the content of, the "first theorem of the tenth book," which, as it happens, is Euclid's "substitute" for the later axiom of Archimedes. Euclid has a fixed pattern for the enunciation of a proposition, and, through the whole length of 13 books, he is never tempted to deviate from it. In short, it is almost impossible to refute an assertion that the Elements is the work of an unsufferable pedant and martinet... Euclid's work became one of the all-time best sellers. According to "objective" Pestalozzi criteria, it should have been spurned by students and "progressive" teachers in every generation. But it nevertheless survived intact all the turmoils, ravages, and illiteracies of the dissolving Roman Empire, of the early Dark Ages, of the Crusades, and of the plagues and famines of the later Middle Ages. And, since printing began, Euclid has been printed in as many editions, and in as many languages, as perhaps no other book outside the Bible.

Euclid's book is a totally appalling, terse and formal presentation of axioms, definitions, lemmas and theorems, without any motivation or digression, lacking any illuminating examples from physics, economics, or social or spiritual life. However, it is a book that has lived for about two and a half millennia and shows no indication of dying. In contrast, the textbooks that define the area of a figure by sowing it with some grain or cutting it out of a sheet of paper fail to survive any gerontological test. We must avoid mixing together the full-time and extramural forms of training, the transfer and the saving of knowledge. The Babylonian texts on mathematics are in fact problem-books with solutions. This style of teaching is still alive. However, no problem-book of any sort can compare with Euclid's Elements in its long-term impact on mathematics and culture as a whole. Any student's notes of a mathematical course still remind us of Euclid's Elements and its successor in style, Bourbaki's Elements of Mathematics.

In common parlance, bourbakism stands for "formalistic structural mathematics," whatever that bizarre term means. In fact, this vogue word rarely implies anything more than a simple reference to the centuries-old tradition of shortening and saving mathematical theories in axiomatic form. This marvelous and noble tradition stems from the writings of Euclid. Elimination of extravagance and the pursuit of consistency, clarity, terseness, and rationality of exposition stimulate, organize, and discipline mind and thought, revealing the intrinsic beauty and harmony of mathematics. It is exactly the impersonal style of Euclid's Elements, lacking any temporal inklings, that makes them especially valuable and allows anybody to understand what they tell us when centuries have elapsed. The "verbal" problems, practical motivations and emphasis on a person's creativity, as well as the subjective coloring of exposition and present-day allusions, are absolutely obligatory gadgets in the tool-kit for training.
However, the particular products of these immortal teaching tools are rather volatile, momentary, and fragile; they often die at the very moment of their enunciation. Science must preserve old knowledge as well as meet the challenges of today by solving new and pending problems. Therefore, teaching has the twofold task of preserving and transferring knowledge, the "filling of a pail" in combination with the "lighting of a fire," i.e., the initiation and stimulation of the creative search for new knowledge. There is no reason to oppose the transfer and preservation of knowledge to the training of creativity and practical skills in raising and solving the problems of today. Preservation of mathematical knowledge in the impersonal and dry style of textbooks never excludes the possibility of the teacher's creative search. On the contrary, the style of Euclid presupposes perpetual creativity, calling on the teacher to find and use subtle personal adjustments, subjective keys and even mysteries for igniting students' interest in mathematics and their understanding of its place and role in science, industry, and other areas of public life, as well as for training the skills of applying mathematics to practical problems. The everlasting duty of the teacher is to destroy the obstacles to the understanding of mathematics, reveal the liberating essence of its free thinking, and explain that mathematics is the most human of all human sciences. There is no math without a man or a woman. The physical world still prevails, but math vanishes without men and women. We people do math. We do it thinking about everyone, and we do it for everybody. The purpose and essence of mathematics reside in the freedom it brings to us. Mathematics welcomes everyone, combining free access, democracy, and openness with the indisputable prohibition of any prejudice, subjectiveness, and arbitrariness of judgements. One of the most personalized sciences, requiring everybody's personal effort to solve even the simplest arithmetical problem, mathematics has learned to make the complex simple and comprehensible to each of us. The most human of sciences, mathematics has elaborated its beautiful "unhuman" form of the objective transfer of knowledge in writing: the classic style of the Hellenistic Elements. There are no royal ways to mathematics; the road to mathematics was paved by Euclid. The style of Euclid not only lives in the books by Bourbaki but also proliferates in hundreds of thousands of students' notes throughout the world. This style is an achievement and an article of pride of our ancient science.
<urn:uuid:d9d3d89b-05f3-407d-8af2-92e1cdbc9e5c>
3.171875
2,264
Nonfiction Writing
Science & Tech.
26.964448
The world's second largest ice cap may be melting three times faster than indicated by previous measurements, according to newly released gravity data collected by satellites. The Greenland Ice Sheet shrank at a rate of about 239 cubic kilometres per year from April 2002 to November 2005, a team from the University of Texas at Austin, US, found. In the last 18 months of the measurements, ice melting appeared to accelerate, particularly in southeastern Greenland.

"This is a good study which confirms that indeed the Greenland ice sheet is losing a large amount of mass and that the mass loss is increasing with time," says Eric Rignot, from NASA's Jet Propulsion Laboratory in Pasadena, California, US, who led a separate study that reached a similar conclusion earlier in 2006 (See Greenland's glaciers are speeding to the ocean). His team used satellites to measure the velocity of glacier movement and calculate net ice loss. Yet another technique, which uses a laser to measure the altitude of the surface, determined that the ice sheet was losing about 80 cubic kilometres of ice annually between 1997 and 2003. The newer measurements suggest the ice loss is three times that.

"Acceleration of ice mass loss over Greenland, if confirmed, would be consistent with proposed increased global warming in recent years, and would indicate additional polar ice sheet contributions to global sea level rise," write the University of Texas researchers in the journal Science.

The satellites that provided the new data are the Gravity Recovery and Climate Experiment (GRACE) pair. These identical US and German satellites fly 220 kilometres from one another. They use a microwave ranging system and the Global Positioning System to measure precisely the distance between them. Tiny changes in that distance reflect changes in the Earth's gravity field, which in turn reflects the distribution of mass in the part of the Earth below.

"The gravity data are spectacular in providing precise information about what is happening to the ice sheets," says NASA climatologist James Hansen, director of the Goddard Institute for Space Studies in New York, US. "They provide the net effect of mass change, due to both melting and snowfall changes. It confirms our expectation that the warming climate will cause Greenland ice to shrink."

Based on the glaciology of the region, Rignot says he does not think that the north-eastern part of Greenland's ice cap has lost as much ice as the Texas team suggests - 74 cubic kilometres annually. Other factors could account for the discrepancy, acknowledges Clark Wilson, one of the University of Texas team. For instance, scientists do not fully understand the ocean tides in the Arctic Ocean, and there are not a lot of weather stations to monitor air pressure there. GRACE only measures changes in gravity due to changing mass - it cannot tell if that results from changes in air, water, rock or ice. So to find changes due to ice loss alone, the researchers have to subtract the estimated contribution of water and air. If those contributions are not well known, the result is higher uncertainty in the interpretation.

"We're hoping as time goes on, we'll have improved tide models, improved atmospheric pressure estimates and also better ways to use the GRACE data themselves," Wilson told New Scientist.

The Greenland Ice Sheet holds about 2.85 million cubic kilometres of ice - 10% of the world's ice mass. If it all melted, it would raise the average sea level about 6.5 metres.
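As a rough cross-check on the sea-level figure just quoted, here is a back-of-envelope sketch (not the calculation used by the researchers); it ignores ocean-area changes and other refinements, so it lands near, but not exactly on, the article's 6.5 metres.

    ICE_VOLUME_KM3 = 2.85e6      # Greenland Ice Sheet volume quoted above
    ICE_DENSITY = 917.0          # kg/m^3, typical glacial ice
    WATER_DENSITY = 1000.0       # kg/m^3, fresh water
    OCEAN_AREA_KM2 = 3.61e8      # approximate global ocean surface area

    def sea_level_rise_m(ice_volume_km3):
        """Sea-level equivalent of an ice volume spread over the oceans."""
        water_volume_km3 = ice_volume_km3 * ICE_DENSITY / WATER_DENSITY
        return water_volume_km3 / OCEAN_AREA_KM2 * 1000.0   # km -> m

    print(f"{sea_level_rise_m(ICE_VOLUME_KM3):.1f} m")      # roughly 7 m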
This is not GRACE's first measurement of an ice sheet. Another team at the University of Colorado, Boulder, US, similarly used the GRACE system to show that the Antarctic ice sheet was losing about 152 cubic kilometres annually from 2002 to 2005 (See Gravity reveals shrinking Antarctic ice).

"We should be making plans for the next generation of gravity satellites, but with the cutback in NASA funding for Earth science, this is not happening," says Hansen, who earlier in 2006 accused officials at NASA headquarters of trying to stop him from speaking out on greenhouse gas emissions (See Top climatologist accuses US of trying to gag him).

Journal reference: Science (DOI: 10.1126/science.1129007)

Have your say

Thu Nov 08 19:49:31 GMT 2007 by Susana Lopez
I feel really sad that the ice caps are starting to melt. Not only is it affecting the animals that live there but it will affect us soon, because the sea level is rising. I hope there is still hope in order to prevent the ice caps from melting and for global warming to stop. I am glad that there is people that are concerned about it.

Mon Nov 26 21:35:51 GMT 2007 by Mike Gillen
False! The sea levels are not rising. The northern caps are melting, yet the souther ice caps are growing. This is all natural. You watch way too many political debates and cnn.

Fri Dec 07 02:29:30 GMT 2007 by Adam Pauls
I object. I am 9 yers old and even I know that global warming is very bad and dangerous to my future. If we can't stop polluting the air we will all die.

Thu Jan 22 17:27:24 GMT 2009 by brian
I think it would be nice if the plannet was a little warmer. I like summer. Don't you? We should increase greenhouse gas output so we can speed it up. I don't plan on going to the icecaps but I do plan on going to the BEACH

Sat Nov 21 21:08:22 GMT 2009 by Ole Heinrich
Unfortunately there are no animals living on the ice cap §.-)

Southern Ice Caps Growing At Record Rate
Mon Nov 26 21:40:03 GMT 2007 by Mike Gillen
Global Warming is a political ploy.

Fri Jan 04 06:25:14 GMT 2008 by Tblom
Ummm... Actually there is evidence that Antartic ice IS shrinking, like the Ross ice shelf disintegrating. (approx. The size of Rhode Island) and also that glaciers are falling into the sea faster than ever before. (On both ends of the world, do your research, don't listen to republicans!)
<urn:uuid:1cbc8d05-4e31-4b1d-98cf-7236e61728c0>
3.890625
1,392
Comment Section
Science & Tech.
54.718559
Last updated 05 April 2013, created 20 March 2012, viewed 660

Use these pictures from the Encyclopaedia Britannica to learn about volcanoes. A volcano is an opening in Earth's crust. When a volcano erupts, hot gases and melted rock from deep within Earth find their way up to the surface. This material may flow slowly out of a fissure, or crack, in the ground, or it may explode suddenly into the air. Volcanic eruptions may be very destructive. But they also create new landforms. There are more than 1,500 potentially active volcanoes in the world today.
<urn:uuid:f011e68b-2ab4-4df1-9f0d-8b52f5fc39d8>
3.6875
129
Knowledge Article
Science & Tech.
61.205
The Science Guys > June 2001

Why does a stream of water from a faucet become smaller as it falls?

Everyone has seen this phenomenon in their home. Turn on the water and adjust it so that the water flows in a steady, smooth manner (called laminar flow). You will observe that the stream narrows as it falls toward the sink. Why does this happen? Water does have a cohesiveness that holds it together, but that is not why the stream gets smaller. At first thought it appears there is less water at the bottom of the stream than at the top, but this is not the case.

First, we are talking about a smooth, steady flow. By smooth we mean non-turbulent and by steady we mean the stream stays the same from one moment to the next. That is, it does not change in time. This being the case, the amount of water in any section of the stream stays the same. Pick any inch of the stream and the amount of water in that section remains constant over time. For this to be valid, the amount of water flowing into that section must equal the amount flowing out of that section. Or, phrased in a more general way, the amount of water flowing through any cross-section of the stream per second (the flow rate) must be the same at every point.

How can we represent the amount of water flowing through any cross-section of the stream? Let's imagine a special highway and follow a group of cars as they travel down it. There won't be turnoffs, exits, or entrance ramps, and furthermore you tell the drivers that the same number of cars must pass any given point on the highway every second. To maintain this constant flow rate, when the highway is broad the drivers know they must slow down, because the road can accommodate more cars. But when the highway narrows the drivers must speed up to maintain the constant flow rate, because fewer cars can pass abreast down a narrow highway. Therefore the flow rate for the cars is proportional to both the cross-section size of the highway and the speed of the cars.

Now consider two points along the highway. At point one, the flow is proportional to the cross-section (A1) and the speed (v1) at that point. And at point two, the flow is proportional to the cross-section at point two (A2) and the speed at point two (v2). Since the same number of cars must pass both points, A1 times v1 must equal A2 times v2. Although some people do not appreciate mathematical expressions, this fact is probably best represented in that manner: (A1) x (v1) = (A2) x (v2). In fluid physics this equation is called the equation of continuity, which simply says "what flows in must equal what flows out."

The water that emerges from the faucet is falling. What happens to any object that falls under the influence of gravity? It travels faster the further it falls (at least over short distances). From the above mathematical expression, one can understand that if we have a higher velocity (larger v2) at the bottom of the stream, then the cross-section (A2) is going to have to be smaller in order for the flow rate to remain the same. Thus the stream (A2) gets smaller the further (and faster) the water falls. If the stream falls far enough, the water reaches a terminal speed and the stream stops narrowing as it falls.
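The continuity argument above is easy to turn into numbers. The sketch below combines A1*v1 = A2*v2 with free-fall kinematics, v = sqrt(v0^2 + 2*g*h); the faucet speed and stream size in the example are made-up illustrative values.

    import math

    G = 9.81   # gravitational acceleration, m/s^2

    def stream_area(a0_m2, v0_m_s, drop_m):
        """Cross-sectional area of a smooth falling stream after a given drop.

        Continuity: A0*v0 = A*v, with v = sqrt(v0^2 + 2*g*h) for free fall.
        """
        v = math.sqrt(v0_m_s**2 + 2.0 * G * drop_m)
        return a0_m2 * v0_m_s / v

    # A 1 cm^2 stream leaving the faucet at 0.5 m/s, measured 10 cm below the spout:
    a0 = 1.0e-4   # m^2
    print(stream_area(a0, 0.5, 0.10))   # about 3.4e-5 m^2, roughly a third of the original area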
<urn:uuid:e143d54e-0fa0-4d4c-a26e-80cc19266ab7>
3.953125
750
Tutorial
Science & Tech.
67.900078