JSON: Writing Output July 15, 2011 In the previous exercise we wrote a function to read JSON input and parse it into an object in the native language. In today’s exercise we write the inverse function. Your task is to write a function that takes a JSON object and writes it in text format. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.
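A minimal sketch of such a writer in Python (the function name and the subset of string escaping handled are my own choices; a complete solution would also escape control characters and handle non-finite numbers):

```python
def to_json(obj):
    """Serialize a native object (dict, list, str, number, bool, None) to JSON text."""
    if obj is None:
        return "null"
    if obj is True:              # check bools before ints: True is an int in Python
        return "true"
    if obj is False:
        return "false"
    if isinstance(obj, str):
        # Escape backslashes and double quotes; full JSON escaping covers more.
        escaped = obj.replace("\\", "\\\\").replace('"', '\\"')
        return '"' + escaped + '"'
    if isinstance(obj, (int, float)):
        return repr(obj)
    if isinstance(obj, (list, tuple)):
        return "[" + ",".join(to_json(x) for x in obj) + "]"
    if isinstance(obj, dict):
        items = (to_json(str(k)) + ":" + to_json(v) for k, v in obj.items())
        return "{" + ",".join(items) + "}"
    raise TypeError("not JSON-serializable: %r" % (obj,))

print(to_json({"a": [1, 2.5, None], "b": "hi"}))  # {"a":[1,2.5,null],"b":"hi"}
```

The recursive structure mirrors the parser from the previous exercise: each JSON production (object, array, string, number, literal) gets one branch.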
Without the .f the number gets interpreted as an integer, hence (int)0 instead of the desired (float)0. The .f tells the compiler to interpret the literal as a floating point number of type float. There are other such constructs, for example 0UL, which means (unsigned long)0, whereas a plain 0 would be an (int)0. The .f is actually two components: the ., which indicates that the literal is a floating point number rather than an integer, and the f suffix, which tells the compiler the literal should be of type float rather than the default double type used for floating point literals. Disclaimer: the "cast construct" used in the above explanation is not an actual cast, but just a way to indicate the type of the literal. If you want to know all about literals and the suffixes you can use in them, you can read the C++ standard (1997 draft), or alternatively have a look at a decent textbook, such as Stroustrup's The C++ Programming Language. As an aside, in your example (float)1/3 the literals 1 and 3 are actually integers, but the 1 is first cast to a float by your cast, then the 3 gets implicitly converted to a float because it is the right-hand operand of a floating point operator. (The operator is floating point because its left-hand operand is floating point.) Edit: expanded a bit on the precise meaning of .f and included the links to further documentation.
The world’s astronomers, under the auspices of the International Astronomical Union (IAU), have concluded two years of work defining the lower end of the planet scale - what defines the difference between “planets” and “solar system bodies”. If the definition is approved by the astronomers gathered 14-25 August 2006 at the IAU General Assembly in Prague, our Solar System will consist of 12 planets: Mercury, Venus, Earth, Mars, Ceres, Jupiter, Saturn, Uranus, Neptune, Pluto, Charon and 2003 UB313. The three new proposed planets are Ceres, Charon (Pluto’s companion) and 2003 UB313. There is no change in the planetary status of Pluto. Read more: http://www.iau2006.org/mirror/www.iau.org/NEWS.55.0.html In this artist’s impression the planets are drawn to scale, but without correct relative distances. Credit: The International Astronomical Union/Martin Kornmesser Evidence continues to mount that the next solar cycle (Solar Cycle 24) is beginning. For the second time in a month, a backward sunspot has appeared. The first backward spot, sighted on July 31st, was tiny and fleeting. The latest, however, is big and sturdy, bipolar sunspot 905: “Backward” means magnetically backward. Compared to how sunspots have been during the past 11-year solar cycle, the north and south magnetic poles of sunspot 905 are reversed. This is what happens when one solar cycle gives way to another–sunspots reverse polarity. The onset of Solar Cycle 24 is big news, because the cycle is expected to be intense, but don’t expect any big storms right away. Solar cycles take years to ramp up to full power. The next Solar Max is expected in 2010. 
Courtesy SpaceWeather.com The IAU (International Astronomical Union) members gathered at the 2006 General Assembly agreed that a “planet” is defined as a celestial body that (a) is in orbit around the Sun, (b) has sufficient mass for its self-gravity to overcome rigid body forces so that it assumes a hydrostatic equilibrium (nearly round) shape, and (c) has cleared the neighbourhood around its orbit. This means that the Solar System consists of eight “planets”: Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus and Neptune. A new distinct class of objects called “dwarf planets” was also agreed. It was agreed that “planets” and “dwarf planets” are two distinct classes of objects. The first members of the “dwarf planet” category are Ceres, Pluto and 2003 UB313 (temporary name). More “dwarf planets” are expected to be announced by the IAU in the coming months and years. Currently a dozen candidate “dwarf planets” are listed on the IAU’s “dwarf planet” watchlist, which keeps changing as new objects are found and the physics of the existing candidates becomes better known. The “dwarf planet” Pluto is recognised as an important prototype of a new class of trans-Neptunian objects. The IAU will set up a process to name these objects. From IAU Press Release, 24 August, 2006 Learn more: http://news.bbc.co.uk/go/em/fr/-/2/hi/science/nature/5282440.stm Thanks for visiting our new website. We are in the process of getting this site up and going and adding links and other information that can help you discover, enjoy, experience and learn astronomy. Our Stargeezer graphics atop this web page were designed by my friend Dave Tunnell. While I’m the astronomy wizard, Dave is the creative spark behind our site. Visit his website at www.tunnellvision.net I welcome your thoughts and questions. Drop me a line anytime at firstname.lastname@example.org
Episode 411: Describing magnetic fields The field around a permanent magnet should be familiar to your students. In practice, where we want a controllable field, we use electromagnets. In this episode, students learn about these fields and the factors that determine their strength and direction. - Demonstration and discussion: The field around a permanent magnet (20 minutes) - Student experiment: Field plotting (20 minutes) - Student questions: Revision questions on magnetic fields (20 minutes) - Student experiments: Measuring flux densities (30 minutes) - Discussion: Mathematical formulae (10 minutes) - Student questions: Calculating flux density (10 minutes) Demonstration and discussion: The field around a permanent magnet Your specification may require the study of the magnetic field due to a permanent magnet but even if this is not the case, such work forms a good introduction to magnetic fields. The use of two permanent magnets will remind students that there is a magnetic field around each magnet. (This can be done quickly with an OHP or by allowing the students to experiment with a pair of magnets.) Like other fields, the magnetic field is a way of describing a region of space where other magnets will experience a force. It can be represented by field lines that show both the size and direction of the force. How is the field strength represented? How is its direction shown? - By the spacing of the field lines: the closer together the lines, the stronger the field. - By arrows showing the direction a compass points or 'free north pole' moves. Can we find a ‘unit’ or ‘free’ pole? A discussion of why not will introduce/remind students of magnetic domains. If there is no 'unit pole', then in any definition of the magnetic field, it is not possible to simply extend the idea of unit charge/mass found in electric and gravitational fields. How can we show up magnetic fields? This gives the opportunity to do some field plotting with iron filings or plotting compasses. There may be a computer program available to extend this further.
Iron filings and a horseshoe magnet (Advancing Physics) If your specification requires it, then this is a good time to define neutral points as places where two or more fields cancel out. Student experiment: Field plotting Having covered magnetic fields for permanent magnets, you can move on quickly to revise the basic magnetic field patterns due to the electric current in a long straight wire, small flat coil and solenoid. Again, this revision is a reminder of pre-16 ideas and demonstrations. Students can look at some field patterns. If you use the worksheet, you will have to explain that flux is a new term that, for the moment, is simply being used as another word for the field pattern. Its significance will become much clearer quite quickly and it would probably confuse students if a more formal approach were used at this stage. The work is useful because it introduces alternating fields from an alternating current and shows how a search coil can be used to investigate these. Episode 411-1: Magnetic field shapes seen as flux patterns (Word, 174 KB) For some specifications, this will serve as a good revision of the basic pre-16 ideas used to describe magnetic fields and it will be possible to move quickly on to the idea of flux density and the force on a conductor. Student questions: Revision questions on magnetic fields The ideas covered above can be reinforced with an activity based on using magnets in automatic train protection. One section suggests that students ‘check that’ but this could be made into a written exercise before a couple of questions are attempted. Episode 411-2: Brush up on magnetism (Word, 43 KB) Some more questions, revising basic ideas about magnetic fields. Episode 411-3: Magnetism reminders (Word, 39 KB) Student experiments: Measuring flux densities Some specifications require a more detailed investigation of the magnetic fields due to currents. 
Your students should be able to measure the fields due to a long straight wire (sometimes a difficult experiment in which to get good results), a small flat coil and a solenoid. There are many possible approaches and the choice of apparatus will depend on what you have available. A calibrated Hall probe is useful, but the nature of the relationships can be deduced with a.c. and a search coil. (If you use a calibrated probe then you will need to explain that the unit for field/flux density is the tesla (T) and that this will be defined very soon.) Whichever flux measurement technique is available, you need only set your students the task of establishing how the flux density depends on the current flowing and the distance (radial distance from a long wire, and along the axis of a flat coil or solenoid). Episode 411-4: Fields near electric currents (Word, 198 KB) Discussion: Mathematical formulae For a long, straight, current-carrying wire, students will probably find that the field is proportional to the current but the 1/r relationship for distance is not always easy to confirm. Offer them the equation B = µ₀I/2πr, where µ₀ = 4π × 10⁻⁷ N A⁻² is a constant known as the permeability of free space, and ask if their results are compatible with this. For a solenoid, students should be able to check the relationship of field to both current and the number of turns per unit length, hence B = µ₀NI/L. The mathematical formula for the field for a small flat coil is not required. For a coil wound around an iron core the field is given by B = µNI/L, where µ depends on the type of iron or other magnetic core material. Student questions: Calculating flux density Episode 411-5: Flux and flux density (Word, 96 KB) Download this episode Episode 411: Describing magnetic fields (Word, 750 KB)
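Students' measurements can be checked against the two formulae numerically; a short sketch (the function names are my own, not from the episode materials):

```python
import math

MU_0 = 4 * math.pi * 1e-7  # permeability of free space, N A^-2

def field_long_wire(current, r):
    """Flux density (tesla) at radial distance r (m) from a long straight wire: B = mu_0 I / 2 pi r."""
    return MU_0 * current / (2 * math.pi * r)

def field_solenoid(current, turns, length):
    """Flux density (tesla) inside an air-cored solenoid, N turns over length L (m): B = mu_0 N I / L."""
    return MU_0 * turns * current / length

# A 2 A current measured 5 cm from the wire:
print(field_long_wire(2.0, 0.05))     # 8e-06 (i.e. 8 microtesla)
# A 500-turn, 0.25 m solenoid carrying 2 A:
print(field_solenoid(2.0, 500, 0.25))
```

Doubling the current should double each prediction, and halving r should double the wire's field, which is exactly the proportionality the experiment asks students to establish.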
Temporary satellites are a result of the gravitational pull of Earth and the Moon. Both bodies pull on one another and also pull on anything else in nearby space. The most common objects that get pulled in by the Earth-Moon system’s gravity are near Earth objects (NEOs) — comets and asteroids are nudged by the outer planets and end up in orbits that bring them into Earth’s neighbourhood... One implication is that the study of the cosmos can be facilitated by visiting/sampling these temporary moons rather than trying to access more distant bodies. They found that the Earth-Moon system captures NEOs quite frequently. “At any given time, there should be at least one natural Earth satellite of 1-meter diameter orbiting the Earth,” the team said. These NEOs orbit the Earth for about ten months, enough time to make about three orbits, before leaving. 24 January 2012 When did the Earth have two moons? The last time was in the autumn of 2006. But after orbiting the Earth for less than a year, it departed. Details via PhysOrg:
Let’s look at the dimensions of antisymmetric tensor spaces. We worked out that if $V$ has dimension $d$, then the space of antisymmetric tensors with $n$ tensorands has dimension $\binom{d}{n} = \frac{d!}{n!(d-n)!}$. One thing should leap out about this: if $n$ is greater than $d$, then the dimension formula breaks down. This is connected with the fact that at that point we can’t pick any $n$-tuples without repetition from $d$ basis vectors. So what happens right before everything breaks down? If $n = d$, then we find $\binom{d}{d} = 1$. There’s only one independent antisymmetric tensor of this type, and so we have a one-dimensional vector space. But remember that this isn’t just a vector space. The tensor power $V^{\otimes d}$ is both a representation of $\mathrm{GL}(V)$ and a representation of the symmetric group $S_d$, which actions commute with each other. Our antisymmetric tensors are the image of a certain action from the symmetric group, which is an intertwiner of the $\mathrm{GL}(V)$ action. Thus we have a one-dimensional representation of $\mathrm{GL}(V)$, which we call the determinant representation. I want to pause here and point out something that’s extremely important. We’ve mentioned a basis for $V$ in the process of calculating the dimension of this space, but the space itself was defined without reference to such a basis. Similarly, the representation of any element of $\mathrm{GL}(V)$ is defined completely without reference to any basis of $V$. It needs only the abstract vector space itself to be defined. Calculating the determinant of a linear transformation, though, is a different story. We’ll use a basis to calculate it, but as we’ve just said the particular choice of a basis won’t matter in the slightest to the answer we get. We’d get the same answer no matter what basis we chose.
Red-crowned roofed turtle (Batagur kachuga) Also known as: Bengal roof turtle. Synonyms: Emys kachuga, Emys lineata, Kachuga kachuga. The red-crowned roofed turtle is classified as Critically Endangered (CR) on the IUCN Red List (1). Information on the red-crowned roofed turtle (Batagur kachuga) is being researched and written and will appear here shortly. This information is awaiting authentication by a species expert, and will be updated as soon as possible. If you are able to help please contact: IUCN Red List (June, 2012)
What is ASP.NET? Y'know, a lot of people talk about ASP.NET, but it's surprisingly hard to find a clear and coherent definition of exactly what it is and how it fits into the world of Web development. Fortunately, Wikipedia has a very nice definition: "ASP.NET is a set of web development technologies marketed by Microsoft. Programmers can use it to build dynamic web sites, web applications and XML web services. It is part of Microsoft's .NET platform and is the successor to Microsoft's Active Server Pages technology." The Wikipedia entry continues with: "Even though ASP.NET takes its name from Microsoft's old web development technology, ASP (Active Server Pages), the two differ significantly. Microsoft has completely rebuilt ASP.NET, based on the CLR shared by all Microsoft .NET applications. Programmers can write ASP.NET code using any of the different programming languages supported by the .NET framework, usually (proprietary) Visual Basic.NET, JScript .NET, or (standardized) C#, but also including open-source languages such as Perl and Python. ASP.NET is faster because the entire web site is precompiled to one or a few dll files on a Web Server and the Web Site runs faster compared to the previous scripting technology. "ASP.NET attempts to simplify developers' transition from Windows application development to web development by allowing them to build pages composed of controls similar to a Windows user interface. A web control, such as a button or label, functions in very much the same way as its Windows counterpart: code can assign its properties and respond to its events. Controls know how to render themselves: whereas Windows controls draw themselves to the screen, web controls produce segments of HTML which form part of the resulting page sent to the end-user's browser. "ASP.NET uses the .NET Framework as an infrastructure. The .NET Framework offers a managed runtime environment (like Java), providing a virtual machine with JIT and a class library. 
"The numerous .NET controls, classes and tools can cut down on development time by providing a rich set of features for common programming tasks. Data access provides one example, and comes tightly coupled with ASP.NET. A developer can make a page to display a list of records in a database, for example, significantly more readily using ASP.NET than with ASP." Not enough data? There's also an official ASP.NET site that offers tons of information, downloads, developer tools, starter kits, and much more. Indeed, one of the more popular tools on this site is Visual Web Developer 2005. In any case, I hope this answers your questions. In addition to the ASP.NET site, remember that there are also a range of ASP.NET books available too, if you prefer learning that way.
discovery of photovoltaic effect The development of solar cell technology stems from the work of the French physicist Antoine-César Becquerel in 1839. Becquerel discovered the photovoltaic effect while experimenting with a solid electrode in an electrolyte solution; he observed that voltage developed when light fell upon the electrode. About 50 years later, Charles Fritts constructed the first true solar cells using...
Rocking movement in the anti-stress protein Hsp90 Proteins are the motors of the cell: They transport, among other things, nutrients, move our muscles, convert substances chemically or fold other proteins. The so-called heat shock protein Hsp90 is eminently important for our cells since it plays a decisive role in many basic processes – in humans as well as in bacteria or yeasts. For example, it is decisive in folding polypeptide chains into functioning proteins with very precisely defined spatial structures. Especially when cells are exposed to stress through heat or poisonous substances, Hsp90 production increases to keep the damage in check. The anti-stress protein is a dimer (which consists of two identical proteins) and can be roughly divided into three segments: the N terminal domain at the top, the middle domain and the C terminal at the bottom. Hsp90 taps the energy it requires for its work from the slow splitting of ATP, the fuel of every cell. In this process, the two strands move in opposing directions, albeit only a few nanometers. Some time ago we were able to observe this scissor-like N terminal movement in real time. Our recent experiments show that the familiar one-ended scissor movement at the N terminal domains has to be extended to a rocking movement at both ends of the protein. Hsp90 opens and closes in a scissors-like manner at the C terminal as well – something hitherto unknown in dimers. We used the so-called FRET technology (FRET = Förster Resonance Energy Transfer) by attaching two fluorescent molecules at precisely defined positions in the Hsp90 and using these as a molecular ruler: When one pigment is illuminated, the other glows with increasing intensity the closer the two pigments get to each other. Using this effect, we were able to observe the nanometer-scale, double-ended scissor movement in individual Hsp90 dimers.
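The "molecular ruler" works because transfer efficiency falls off steeply with dye separation. A small illustrative sketch using the standard Förster relation E = 1/(1 + (r/R₀)⁶); this is textbook FRET, not a formula quoted in the article, and the example Förster radius is a typical value, not the dye pair used here:

```python
def fret_efficiency(r, r0):
    """Förster transfer efficiency for dye separation r and Förster radius r0 (same units)."""
    return 1.0 / (1.0 + (r / r0) ** 6)

# Efficiency changes steeply around r0 (a few nanometers for common dye pairs),
# which is what makes FRET a nanometer-scale ruler:
print(fret_efficiency(2.5, 5.0))   # close pair: high efficiency
print(fret_efficiency(5.0, 5.0))   # at r0: exactly 0.5
print(fret_efficiency(10.0, 5.0))  # far pair: low efficiency
```

This steep sixth-power dependence is why the open and closed conformations of the dimer show up as two well-separated FRET states in the histograms described below.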
Particularly interesting is that the double scissor movements at the N and C terminals are closely coupled: The Hsp90 dimer obviously opens and closes in alternation at each end, like a rocker. Surprisingly, ATP bound at the N terminal domains regulates the motion at the C terminal end. Thus Hsp90 has to communicate internally across an unusually long distance of almost ten nanometers. The Hsp90 molecules (size 5-10 nm) were caged in lipid vesicles with a diameter of about 200 nm (not to scale). The vesicles were immobilized via biotinylated lipids onto a solid substrate in a micro fluidic chamber and mounted in a prism-type TIR microscope. Single molecule fluorescence from the donor and acceptor were detected simultaneously by an EMCCD camera (A). Matching time traces were overlaid (B) and FRET efficiencies determined (C). The cumulated histogram of all FRET efficiencies shows two states that are clearly separated by a threshold (D), which allows the rate constants to be determined from a separation of the time trace into open and closed states (E).
"Dynamics of heat shock protein 90 C-terminal dimerization is an important part of its conformational cycle"
C. Ratzke, M. Mickler, B. Hellenkamp, J. Buchner and T. Hugel
PNAS, Online Early Edition in the week of August 23, 2010
PhD student in the group of Prof. Thorsten Hugel, TU Munich
2006 - 2008 Master of Science in Biochemistry, TU Munich
2005 - 2006 Bachelor of Science in Biochemistry, TU München
2003 - 2005 Grundstudium in Biochemistry, Universität Regensburg
M. Mickler, M. Hessling, C. Ratzke, J. Buchner & T. Hugel: The large conformational changes of Hsp90 are only weakly coupled to ATP hydrolysis. Nature Structural & Molecular Biology 16, 281 - 286 (2009)
AFAIK, Windows is in C because that was what was available back then, but C++ is much more powerful than C. True that you can get things done in C, but you can get things done better in C++. For example, I wish you good luck trying to manage collections in C. You'll have to use C arrays, meaning you will have to keep dynamically allocating memory, copying data around, removing pointers, etc. It is just not worth it. In C++, you could create a simple array class that would do it for you:

template <class T>
class Array
{
    static const int C_StdCapacityIncrease = 32;
public:
    Array(int initialSize = 0, int capacityIncrease = C_StdCapacityIncrease)
        : _arrPtr(initialSize ? new T[initialSize] : 0)
        , m_size(initialSize)
        , _capacity(initialSize)
        , _capacityIncrease(capacityIncrease)
    {}

    ~Array() { delete[] _arrPtr; _arrPtr = 0; }

    int Add(const T &item)
    {
        if (_capacity <= m_size)
            Grow();
        _arrPtr[m_size] = item;
        return m_size++;
    }

private:
    void Grow()
    {
        _capacity += _capacityIncrease;
        T *tempPtr = new T[_capacity];
        memcpy(tempPtr, _arrPtr, m_size * sizeof(T)); // needs <cstring>; only safe for POD types
        delete[] _arrPtr;
        _arrPtr = tempPtr;
    }

    T *_arrPtr;
    int m_size;
    int _capacity;
    int _capacityIncrease;
};

That would be just an array class with very basic functionality, of course, and look at all those lines, because all that is needed to properly maintain a C array. In C, you would have to code standalone procedures to take care of all that, like this:

//The C way
void* CreateArray(int initialSize, size_t sizeOfItem)
{
    return malloc(initialSize * sizeOfItem);
}

void FreeArray(void* arrPtr)
{
    free(arrPtr);
}

The C way then imposes a lot more work for each array you create: You must ensure you delete memory yourself, you must keep track of sizes and capacities yourself, etc. And don't even get me started on removing an item from the middle of the array. :-S C++ produces far better code, IMHO. P. S.: I know that if you are using C++ the best would be to use std::vector. I just made an array class as an example of the complexities that can be abstracted into a class.
What History Teaches Us About Our Environmental Challenges It seems that the environmental challenges we face are truly daunting, perhaps so daunting that we may never be able to overcome them even if we do our best. But according to MIT professor Susan Solomon, it's often helpful — and heartening — to look to the past. Solomon points out that recent decades have seen major environmental progress: In the 1970s, the United States banned indoor leaded paint following evidence that it was poisoning children. In the 1990s, the United States put in place regulations to reduce emissions of sulfur dioxide — a move that significantly reduced acid rain. Beginning in the 1970s, countries around the world began to phase out leaded gasoline; blood lead levels in children dropped dramatically in response. During this period, Solomon herself contributed to a milestone in environmental protection: In 1985, scientists discovered that the Earth’s protective ozone layer was thinning over Antarctica. In response, Solomon led an expedition whose atmospheric measurements helped show that chlorofluorocarbons (CFCs) — chemicals then used in aerosols and as coolants in refrigerators and air conditioners — were to blame for ozone depletion. Her discovery ultimately contributed to the basis for the United Nations' Montreal Protocol, an international treaty designed to protect the ozone layer by phasing out CFCs and other ozone-depleting chemicals. "I find it tremendously uplifting to look back at how our world has changed," says Solomon, now the Ellen Swallow Richards Professor of Atmospheric Chemistry and Climate Science at MIT. "I think young people today are growing up at a time when they don't know that we actually have made tremendous progress on a whole series of past environmental challenges," Solomon says.
"Climate change has been called the mother of all environmental issues — and I think our approach to this problem can only be better informed if we understand better what we've done in the past." Industrial Plant photo via Shutterstock. Read more at MIT.
Python is an easy to learn, powerful programming language. It has efficient high-level data structures and a simple but effective approach to object-oriented programming. Python is a general-purpose, high-level programming language whose design philosophy emphasizes code readability. Python's syntax allows programmers to express concepts in fewer lines of code than would be possible in languages such as C,[11][12] and the language provides constructs intended to enable clear programs on both a small and large scale.[13] Python supports multiple programming paradigms, including object-oriented, imperative and functional programming styles.
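A hypothetical three-line illustration of those high-level data structures (my own example, not from the page being described):

```python
from collections import Counter

# Count word frequencies in one expression, with no manual loops or memory management.
words = "the quick brown fox jumps over the lazy dog the end".split()
print(Counter(words).most_common(1))  # [('the', 3)]
```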
Sail-World.com: Crabs help Great Barrier Reef coral combat white syndrome A particular species of crab has been assisting Great Barrier Reef coral in combating white syndrome, a deadly disease that causes coral tissue to disintegrate. Joseph Pollock, from James Cook University’s School of Marine and Tropical Biology and the Australian Institute of Marine Science (AIMS), has been studying the disease, and its unlikely helper, the 'furry coral crab'. The study, Cymo melanodactylus crabs slow progression of white syndrome lesions on corals, describes how coral-associated crabs help protect their coral hosts from disease, both of which appear throughout the Indo-Pacific. Mr Pollock said another research team at JCU had originally reported on an association between white syndrome and these crabs. Initially, it was thought the crabs were to blame for the disease, rather than helping cure it, he said. 'Researchers originally speculated that they may actually cause the disease, since diseased colonies have high numbers of crabs and it is known that these crabs can eat coral tissue,' he said. 'I have been doing a number of assays to determine the cause of this disease and it didn’t seem that the crabs could cause the amount of destruction you see with this disease, in which the coral tissue essentially just falls off of the coral skeleton. 'This was the first study to report that they actually slow the disease progression. To my knowledge, this is also the first study to demonstrate that coral-dwelling invertebrates have the potential to slow disease progression on their host.' Mr Pollock said it was not entirely clear how the crabs slowed the disease progression. 'We hypothesize that it may be in a manner similar to maggot debridement therapy, an ancient medical treatment that is actually still in use today.'
Mr Pollock likened it to actor Russell Crowe’s character in the film Gladiator when he has his shoulder wound cleaned by maggots. 'Essentially, the crabs could be slowing the disease by simply feeding on sloughing coral tissue and potentially harmful microbes at the lesion front.' Mr Pollock said he had been studying white syndrome coral disease at Lizard Island, about 240km north of Cairns and 27km off the Far North Queensland coast, for about two years but this set of experiments was quite brief. 'We basically collected healthy and diseased coral colonies from the field, manipulated the crab numbers, and observed the fragments for three weeks.' The disease had a devastating effect on corals, he said. 'The disease is pretty nasty. Essentially, the coral's tissue just falls off of the skeleton and it is often fatal to the coral. Imagine your skin and muscle starting to fall off at your fingertips and spreading over the entire body leaving behind only skeleton. 'It is also interesting that we found these crabs to be very strongly attracted to white syndrome colonies. This means that when a coral is infected with the disease, crabs from nearby coral colonies could migrate to the diseased colony, slowing the disease. This could be a very interesting feedback mechanism whereby these crabs help to slow coral disease on reefs.' The experiments were performed by Mr Pollock and Mr Sefano Katz, a collaborator from Israel who is completing an internship at JCU. JCU’s Professor Bette Willis and Dr David Bourne from AIMS provided feedback and guidance on the manuscript. This work was funded by a Lizard Island Research Foundation Fellowship awarded to Mr Pollock for study at Lizard Island Research Station, a facility of the Australian Museum, for two years.
<urn:uuid:901a1474-7ea4-4b09-b9f4-07b514d3e58d>
2.78125
856
Truncated
Science & Tech.
43.56125
Anticipating snow and cabin-fever over the holidays, I planned our biggest experiment yet, which I called 'The Great Paper Airplane Experiment.' Our local Boys and Girls Club (special thanks to Reggie Brodie at BGCAA for making it available!) happens to have a fabulous new facility with a full size basketball court and stage, which was the perfect indoor venue for flying paper airplanes. The goal of the experiment was simple -- does the length of paper influence how far a paper airplane flies? For the sake of the experiment, each scientist was given three sheets of paper: 8 1/2 X 8 1/2 inches, 8 1/2 X 11 inches and 8 1/2 X 15 inches. With the help of our local printer (big thanks to everybody at Freestate Press here in Annapolis!) we color coded the sizes -- orange short planes, light blue medium planes, and pink long planes. Our participants had all the practice paper they could fold and were asked to fold any paper airplane design they chose in three configurations -- short, medium, and long. After practicing, we marked off 'launch pads' and laid out distances in tape on the floor in concentric rings every five feet. To measure the longest flights we used a surveyor's tape measure. Finally, I printed out 'test flight logs' on which each scientist could write down his or her name, the plane's name if it had one, the launch type (how hard the plane was thrown), the flight pattern (did it fly curvy or straight), and the distance each plane flew. Here's a pdf of the test flight logs We invited over 60 kids from Annapolis to help us create the planes and gather the data we would need to test our hypothesis. Parents were encouraged to help design, fold, and fly the planes. We ended up with a good variety of planes. Most people folded variations on the basic 'dart' design, but we had some very creative entries. Again, the goal was to see in general if a certain length of paper was better than another. 
Several hundred planes were made of all shapes and sizes, and our scientists had a great time trying to figure out what made a good paper airplane. I am attaching our complete data set because if I didn't, I would probably be accused of faking the data since it came out so regular and perfect. The planes built with square paper flew an average of only 10.5 feet, while the standard letter sheet planes flew an average of 12.5 feet. Finally, the longest planes flew the longest distance, averaging 16.25 feet -- significantly farther than the other planes. I have to say that as I watched the planes fly and looked over the Test Flight Logs I wasn't confident that we would get a statistically meaningful result. In fact, I tabulated the results manually after creating the spreadsheet to confirm the numbers -- it just looked too perfect! Feel free to use the parent's guide below to conduct your own experiment. And if you upload your data to the comments section I will include it with ours in an update. So start folding! We ended the day handing out Certificates of Achievement to all the scientists who helped us advance the art and science of paper airplane aerodynamics. Another great launch This is a great experiment for kids of all ages that teaches both science and engineering principles. All you need is several sheets of paper of various sizes and your favorite paper airplane design. Starting with a standard sheet of paper, fold your favorite design of paper airplane -- it can be just about any design you can think of -- then launch your plane. Next, take a standard sheet of paper and cut it down to a square. Then fold the same design as best you can again. The plane will either be shorter or longer (depending on how you oriented the sheet when building the first plane). Give this plane a toss. Does it fly better or worse than the last one? Have its aerodynamic properties improved or become worse?
Next, take a legal sheet of paper and fold the same design again as best as you can. Now the plane will either be very short and wide or very long and narrow. How does it fly? Have you improved its aerodynamic properties or made them worse? We decided that the distance flown would be our measure of 'success', but you can modify the experiment to suit your needs. For example -- what makes a good stunt flyer? Or a good glider? We measured and recorded as many flights as we could and averaged our data. Finally, look at your paper airplane. What attributes does it have that are improved by different sheet sizes? Are the wings bigger in one size than others? Is the plane more stable in one configuration than the others? What can you do to maximize the design based on the different sheets? What can you do to the design to make it fly better? The internet is full of paper airplane resources -- too many to list here! You can find some here, here and here. Type 'paper airplane' into any search engine and start folding. Also, Scholastic, Inc. has instructions for schools that want to participate in the 'National Paper Airplane Contest' .
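Tabulating the Test Flight Logs boils down to averaging the recorded distances within each paper size. Here's a minimal sketch of that step -- note the distances below are made-up stand-ins chosen to reproduce our reported averages, not the actual flight logs:

```python
# Mean flight distance per paper size.
# These distances are illustrative placeholders, not real log data.
flights = {
    "short (8.5 x 8.5 in)": [8.0, 11.0, 12.5],
    "medium (8.5 x 11 in)": [10.0, 13.0, 14.5],
    "long (8.5 x 15 in)": [14.0, 16.0, 18.75],
}

for size, distances in flights.items():
    mean = sum(distances) / len(distances)
    print(f"{size}: {mean:.2f} ft")
```

With real logs you would simply replace each list with the full set of measured distances for that color group.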
<urn:uuid:c751b2da-80a4-4179-aaa3-9b8d363ae858>
2.796875
1,065
Personal Blog
Science & Tech.
60.868093
On multi-developer software projects, you can sometimes speed up every developer's builds a lot by allowing them to share the derived files that they build. SCons makes this easy, as well as reliable. To enable sharing of derived files, call the CacheDir function in any SConscript file. Note that the directory you specify must already exist and be readable and writable by all developers who will be sharing derived files. It should also be in some central location that all builds will be able to access. In environments where developers are using separate systems (like individual workstations) for builds, this directory would typically be on a shared or NFS-mounted file system. Here's what happens: every time a file is built, it is stored in the shared cache directory along with its MD5 build signature. On subsequent builds, before an action is invoked to build a file, SCons will check the shared cache directory to see if a file with the exact same build signature already exists. If so, the derived file will not be built locally, but will be copied into the local build directory from the shared cache directory:

% scons -Q
cc -o hello.o -c hello.c
cc -o hello hello.o
% scons -Q -c
Removed hello.o
Removed hello
% scons -Q
Retrieved `hello.o' from cache
Retrieved `hello' from cache

Note that the CacheDir feature still calculates MD5 build signatures for the shared cache file names even if you configure SCons to use timestamps to decide if files are up to date. (See Chapter 6 for information about using timestamps for up-to-date decisions.) Consequently, using CacheDir may reduce or eliminate any potential performance improvement from using timestamps for up-to-date decisions. Actually, the MD5 signature is used as the name of the file in the shared cache directory in which the contents are stored.
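The call this section alludes to ("in any SConscript file") is SCons's built-in CacheDir function. A minimal SConstruct sketch might look like this -- the cache path is only an example; use any central directory your whole team can read and write:

```python
# SConstruct -- sketch of enabling a shared derived-file cache.
# The path is illustrative; it must already exist and be writable
# by every developer sharing the cache.
CacheDir('/usr/local/build_cache')

# An ordinary target; its derived files (hello.o, hello) will be
# pushed to and retrieved from the shared cache as described above.
Program('hello', 'hello.c')
```

Note this is a configuration fragment executed by scons itself, not standalone Python: CacheDir and Program are names provided by the SCons build environment.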
<urn:uuid:c40d7c99-2b9f-4e90-b7bf-0f4ee1be9318>
2.984375
412
Documentation
Software Dev.
41.366777
Naked Egg Experiment

Which came first, the rubber egg or the rubber chicken? This experiment answers the age-old question, "Which came first, the rubber egg or the rubber chicken?" It's easy to make a rubber egg if you understand the chemistry of removing the eggshell with vinegar. What you're left with is a totally embarrassed naked egg and a cool piece of science.

- Raw egg
- Vinegar
- Graduated cylinder or tall glass

- Place the egg in a graduated cylinder or tall glass and cover the egg with vinegar.
- Look closely at the egg. Do you see any bubbles forming on the shell? Leave the egg in the vinegar for a full 24 hours.
- Change the vinegar on the second day. Carefully pour the old vinegar down the drain and cover the egg with fresh vinegar. Place the glass with the vinegar and egg in a safe place for a week - that's right, 7 days! Don't disturb the egg but pay close attention to the bubbles forming on the surface of the shell (or what's left of it).
- One week later, pour off the vinegar and carefully rinse the egg with water. The egg looks translucent because the outside shell is gone! The only thing that remains is the delicate membrane of the egg. You've successfully made an egg without a shell. Okay, you didn't really make the egg - the chicken made the egg - you just stripped away the chemical that gives the egg its strength.

How does it work? Let's start with the bubbles you saw forming on the shell. The bubbles are carbon dioxide gas. Vinegar is an acid called acetic acid - CH3COOH - and white vinegar from the grocery store is usually about 5% acetic acid and 95% water. Eggshells are made up of calcium carbonate. The vinegar reacts with the calcium carbonate by breaking the chemical into its calcium and carbonate parts (in simplest terms). The calcium part floats around in the solution while the carbonate part reacts to form the carbon dioxide bubbles that you see.
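Written out, the reaction described above is calcium carbonate plus acetic acid yielding calcium acetate, water, and carbon dioxide: CaCO3 + 2 CH3COOH -> Ca(CH3COO)2 + H2O + CO2. The chemistry is standard; as a quick sanity check that the equation balances, here is a small atom-counting sketch:

```python
from collections import Counter

# Atom counts for each species in:
#   CaCO3 + 2 CH3COOH -> Ca(CH3COO)2 + H2O + CO2
reactants = [
    ({"Ca": 1, "C": 1, "O": 3}, 1),          # calcium carbonate
    ({"C": 2, "H": 4, "O": 2}, 2),           # acetic acid, coefficient 2
]
products = [
    ({"Ca": 1, "C": 4, "H": 6, "O": 4}, 1),  # calcium acetate
    ({"H": 2, "O": 1}, 1),                   # water
    ({"C": 1, "O": 2}, 1),                   # carbon dioxide
]

def total(side):
    """Sum atom counts over one side of the equation."""
    atoms = Counter()
    for formula, coeff in side:
        for element, n in formula.items():
            atoms[element] += coeff * n
    return atoms

assert total(reactants) == total(products)  # the equation balances
print(total(reactants))
```

Both sides come out to 1 Ca, 5 C, 8 H, and 7 O, so every atom that leaves the shell ends up in the acetate, water, or the CO2 bubbles you watched form.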
Some of the vinegar will also sneak through, or permeate, the egg's membrane and cause the egg to get a little bigger. This flow of a liquid from one solution through a semi-permeable membrane and into another less concentrated solution is called osmosis. That's why the egg is even more delicate if you handle it. If you shake the egg, you can see the yolk sloshing around in the egg white. If the membrane breaks, the egg's insides will spill out into the vinegar. Yes, you've made a pickled egg! Allowing the egg to react with the carbon dioxide in the air will cause the egg to harden again. Amazing! Science Fair Connection: To be an effective science fair project, something has to change in the experiment. The Naked Egg experiment described above is really just a cool science demonstration because it doesn't contain a variable, or something that changes that allows you to run more tests and make comparisons. If you want to use the Naked Egg experiment for the science fair, consider one of the following questions or, better yet, make up your own! - Do organic or free-range eggs have an eggshell that is stronger or weaker than generic eggs? Conduct your own test on several different kinds of eggs all at the same time to observe any differences in the time required for the vinegar to dissolve the shell. - What happens if the egg is hardboiled? Does the shell still break down in the vinegar? How does that compare to using a raw egg? - Try using concentrated vinegar instead of traditional vinegar. Does it make a difference? If you want to cut down on the time it takes for the eggshell to disappear, try using either 1 M hydrochloric acid or 3 M hydrochloric acid. Be careful - this is really strong stuff! (Note: The acid version of the Naked Egg experiment is only recommended for teachers and other scientists.) 
- What would happen if you put an egg in one glass with vinegar and then put a different object in vinegar in another glass--maybe a piece of fruit with an outer "shell" or peel like an orange? Soak the objects in the same amount of vinegar for the same amount of time and then compare the results. Just remember that to be "Science Fair Certified," a project must change something, create a new experiment, and then make comparisons. If you are able to do that with the variable that you choose, you should be on your way to a great project! April 20th, 2011 Click the thumbnail below to see the video. jon - November 9, 2012 Its cool how the vineger eats at the shell eating the calsium of the egg. Then turns into a bouncy ball! lol Naked Egg can also be Shrinking Egg Stephany - April 21, 2011 Love the Naked Egg experiment. I use in class it to show semi-permeability. After making the naked egg with the vinegar, put the naked egg in a glass with corn syrup for a day or two, and watch the egg shrink to just the size of the yolk. The kids LOVE it! Levi - November 28, 2010 i did this experiment in school and it was pretty fun tevis jones - November 18, 2010 i think that this experiment is really awesome.. im also planning on doing it for my science fair project at school. Sam - November 3, 2010 This sounds like a COOL thing to do! If your science teacher is gross like mine then this is the project for you!!! Kristin - April 13, 2010 I have been doing this experiment with my pre-kindergarten class for several years. 2 years ago we were bouncing it on the floor like a rubber ball. It's fun but I don't recommend bouncing too many times. Last year I wasn't patient enough and tried to show my class what was happening by taking the egg out after just a 3 or 4 days and the nasty thing popped in my hands. Egg yolk everywhere. It is a cool experiment though. My students make predictions about what they think will happen to the egg. 
My favorite this year is that "a dinosaur will hatch." Abi - March 4, 2010 I love this video soo much that I'm going to do it for a Science Fair Project. You all should really try it for a Science Project. It's fun to watch the bubbles start to form once you put the egg in the vinegar. Annie Los Angeles, California - January 18, 2010 I loved to see the naked egg without the shell.
<urn:uuid:41b56212-e32d-47de-9e45-834bbc892a71>
3.734375
1,371
Personal Blog
Science & Tech.
67.476396
If gravity only attracts mass, and if light doesn't have mass, why does it appear to bend when passing a planet? Question asked by: knowitall. No comments have been added to this question.
<urn:uuid:fd57174a-f7ce-4aa6-b221-c9c29d36700c>
2.875
254
Q&A Forum
Science & Tech.
61.180703
If E.T. is out there, it may be a lot easier to find him than we thought, mostly because there are a lot more places for him to live. Scientists looking for life (or at least earthlike life) have always obeyed a simple rule: follow the water. Biology is a wet process, after all, and generally the wetter the better. Now, the Herschel Space Observatory has spotted an infant solar system about 175 light-years from Earth that seems fairly awash in primordial water. The finding suggests many more such systems may be out there, and offers tantalizing clues about how our own biologically rich world began as well. Herschel, which was launched by the European Space Agency in 2009, hovers in space 930,000 miles (1.5 million km) from Earth at what's known as a Lagrange point, a gravitationally quirky spot where the pull of the planet Earth and the sun balance out. This allows a spacecraft placed just so to remain locked in place on the far side of the planet, shielded from solar interference. In the case of Herschel, that's important, because the readings it takes are exquisitely precise, scanning the skies in the far infrared and submillimeter wavelengths. Turning its gaze toward a star known as TW Hydrae, a comparatively cool orange dwarf just 10 million years old, the telescope recently found a vast disk of dusty material moving in a solar orbit about 200 times as far from the star as Earth is from our own sun. Dust is just dust in the visible spectrum, but operating in the extreme infrared, Herschel was able to spot the surprising signal of water, lots and lots of water, created as ultraviolet light from the star knocked individual water molecules free from the traces of ice that cling to the dust grains. "These are the most sensitive [infrared] observations to date," said NASA project scientist Paul Goldsmith, who collaborates with the European investigators in analyzing Herschel findings. "It is a testament to the instrument builders that such weak signals can be detected."
What struck Goldsmith and the others was not just the vast quantity of water ice surrounding TW Hydrae, but also its location. Water halos have been found in the warm inner reaches of young solar systems before, but the proximity of the solar fires usually blasts the vapor farther into space where it gets locked up as ice in outer planets and moons. That's what happened in our own solar system, and helps explain why Mercury, Venus and Mars are so dry and the distant gas giants are so icy. What that model doesn't explain, of course, is how Earth got so wet. One of the prevailing theories has long been that incoming comets crashed into our planet, carrying water ice with them. That scenario became even more plausible as a result of two studies earlier this month: one that found that comets in our solar system carry the same chemical signature as the water in Earth's oceans; and another that discovered what amounts to a hailstorm of comets striking a planet circling Eta Corvi, a bright star visible in our northern hemisphere. What's happening out there could have easily happened here. The new findings push the knowledge frontier further, since the colder region where the TW Hydrae vapor disk was found is exactly where comets could more easily form, but where the raw materials for that to happen had not been seen until now. Says Herschel astronomer Michiel Hogerheijde of the Leiden Observatory in the Netherlands: "Our observations of this cold vapor indicate enough water exists in the disk to fill thousands of Earth's oceans." None of this means that TW Hydrae will necessarily give rise to a garden spot like Earth. Water is a necessary ingredient for life as we know it, and comets are handy couriers, but a lot of other tumblers have to fall just right for biology to take hold.
Still, if astronomical history, not to mention simple arithmetic, suggests one thing, it's that what happens in one spot in the cosmos has a pretty fair chance of being repeated at least a few times in the infinitely vast spaces beyond. The possibility that that kind of repetition includes life is beginning to seem more compelling than ever.
<urn:uuid:5d6cff7a-a9e6-4dc0-ad56-ac2a9aaad3f6>
3.71875
853
Truncated
Science & Tech.
45.410385
On to another topic of far greater global importance... It is astounding to me that people who would seem to have a modicum of scientific sense can even vaguely entertain the various concepts of "carbon sequestration" as being in any way valid approaches to "solving" the problem of carbon releases from burning fossil fuels. The problems of scale here are huge, and should be obvious to anyone who sits down for a quiet moment to think about the global carbon cycle. Let's just look at the U.S. piece of the pie here. Our annual carbon releases from burning petroleum and coal run around 2 x 10^15 gC/yr. This is an enormous amount of matter. It averages 7,000,000 gC per capita each year -- 7 metric tonnes of carbon for each of us to be somehow captured out of the atmosphere and trapped permanently in solid (or aqueous solution) form. We can't capture it as pure carbon; that would use up pretty much all the energy that we got from burning the fuel in the first place. All the proposed schemes capture it either as CO2 or biomass. Both of these weigh considerably more per mole of C. Of course, CO2 is not stable in solid or aqueous form at the surface of the earth. There are proposed elaborate schemes to transfer captured CO2 directly into the crust or deep ocean for long-term storage; the practicality of these schemes on the massive scale required is highly dubious. Dubious as well are the conclusions that this "stored" CO2 would remain in place for thousands of years without adverse geological, ecological, or oceanographic consequences. Additionally, direct CO2 capture is only applicable to large-scale point sources (i.e. big power plants); these only represent about half of the problem. Realistically we're probably talking about making calcium carbonate (CaCO3) out of the CO2 and then disposing of this product somehow. This compound weighs about 8 times as much as the carbon it contains. So now we're talking about 50 metric tonnes of solid waste per person in the U.S.
to be disposed of. This is about 60 times greater than the per capita creation of municipal solid waste in this country. Imagine a solid waste disposal problem 60 times bigger than what we already face! Or, picture for every coal train headed towards a U.S. power plant, and every oil tanker headed into a U.S. port, eight trains and tankers loaded with sequestered carbon headed the other direction -- headed to where? An additional issue with trapping CO2 this way is the rest of the reaction. Where does all this calcium come from? Fortunately Ca is a very common element on earth, though much of the solid deposits are carbonates and aluminosilicates that are not really useful for carbon trapping. Some proposals involve direct catalyzed reactions with CaO (as well as other rock-forming metal oxides) to produce stable carbonates. Calcium oxide is in theory an abundant mineral in the crust; however it occurs mostly intermingled molecularly with the other common metal oxides of the earth, rarely in pure form. Does this mean that for every barrel of oil or ton of coal we extract, we will now be required to also mine 10 times as much ordinary rock to crush up to yield the metal oxides? Is increasing our rate of strip mining by an order of magnitude the direction we want to take here? The simplest source of calcium for making carbonates would probably be the chloride and sulfate salts. These are reasonably common; seawater effectively contains vast reserves of CaCl2 in aqueous solution. The reaction is simple: just bubble the CO2 through the solution, and the calcium carbonate forms without the need for a catalyst or any additional energy inputs. Still, gathering and concentrating all these calcium salts will require massive energy and infrastructure. Once you manage this, you then run against the next problem. Reactions for trapping CO2 that way generally look like this: CaCl2 + CO2 + H2O --> CaCO3 + 2 HCl. Oops...
for every molecule of CO2 you trap, you produce 2 molecules of hydrochloric acid. Calcium sulfate will produce sulfuric acid instead. Any reaction that produces a stable carbonate from CO2 and a mineral salt will produce an acid as well. This means that those 50 metric tonnes of calcium carbonate we make for each person each year will be paired with about 100,000 liters (nearly 30,000 gallons) of concentrated hydrochloric acid also needing disposal. Nationwide this will be the equivalent of about 30 trillion liters of concentrated HCl -- 30 cubic kilometers of one of the most corrosive substances known. Exactly what will we do with all of this? As an additional note, if we just passively allow or actively encourage the shallow ocean to fix all this CO2 instead of doing it ourselves in our own facilities, we are doing the exact same reaction, just effectively dumping all these waste products directly in the ocean. The atoms won't just go away, and the stoichiometries are unavoidable. What about biomass? Terrestrial net primary production (NPP, the amount of carbon taken from the atmosphere and turned into biomass by living plants) averages roughly 500 gC m^-2 per year -- higher in some places, lower in others, but within a factor of 2 of this value for most land areas. This means that our U.S. annual carbon releases are equivalent to the NPP of 4 x 10^12 m^2 of land surface. This is nearly half of the 9.1 x 10^12 m^2 land area of the entire U.S. including Alaska. So what we are talking about here is somehow taking half of all the NPP of the entirety of the U.S. and capturing it forever so that it will never be returned to the atmosphere. Anyone who has ever looked out a window and examined the real world should know immediately that this is ludicrous. Biomass capture could never be, in any practical sense, more than a small drop from a huge bucket of global carbon. Some engineers who do realize these insurmountable obstacles instead turn to "geoengineering" solutions.
Many of these take the form of seizing control of the global marine ecosystem, turning it upside down (in some cases literally), and converting the global ocean into a giant, intensively managed carbon sink. Others do the same to the stratosphere. Let's set aside the obvious problem of the fact that in a world where we can't even keep our highways paved we will never find the resources for this sort of massive global infrastructure undertaking. Let's focus on the more obvious matter: anyone with the smallest bit of environmental scruples should be revolted, horrified, and up in arms against this sort of "solution." There is only one solution to the global carbon problem: stop burning so many fossil fuels. This will happen eventually. If it is not done willingly (and it never has been before, on the long-term and large-scale), it will happen when it is forced by economic and environmental constraints and harsh realities. P.S. If you find errors in my numbers please let me know and I will fix them! Though there might be quantitative slip-ups, the qualitative issues will not change.
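The back-of-envelope numbers above are easy to check. Here is a quick sketch; the population (rounded to 300 million) and molar masses are my assumed inputs, matching the essay's order-of-magnitude figures rather than any exact source:

```python
# Back-of-envelope check of the essay's numbers (rounded inputs).
US_CARBON_G_PER_YR = 2e15      # ~2 x 10^15 gC/yr, as stated above
US_POPULATION = 3.0e8          # ~300 million people (assumed)

per_capita_c = US_CARBON_G_PER_YR / US_POPULATION          # gC per person per year
caco3_per_capita = per_capita_c * (100.09 / 12.011)        # CaCO3 is ~8.3x heavier per mole of C

print(f"carbon per capita: {per_capita_c / 1e6:.1f} tonnes")       # ~6.7 t (essay rounds to 7)
print(f"CaCO3 per capita:  {caco3_per_capita / 1e6:.1f} tonnes")   # ~56 t (essay says 50-60)

# Land area whose net primary production would match emissions:
NPP_GC_PER_M2_YR = 500
land_m2 = US_CARBON_G_PER_YR / NPP_GC_PER_M2_YR
US_LAND_M2 = 9.1e12
print(f"NPP-equivalent land: {land_m2:.1e} m^2 "
      f"({land_m2 / US_LAND_M2:.0%} of U.S. land area)")
```

The last line reproduces the "nearly half of the U.S. land area" conclusion (about 44%).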
<urn:uuid:a98883e6-2816-42e3-8701-60e159c872f0>
2.765625
1,511
Personal Blog
Science & Tech.
52.497239
It is widely known that green plants play a critical role in the global carbon cycle by sequestering carbon dioxide from the atmosphere and converting it into organic compounds, using solar energy through the process of photosynthesis and releasing oxygen as a by-product. In addition to oxygen, plants have been shown to emit from their green leaves a number of complex organic compounds collectively called volatile organic compounds (VOCs). VOCs are a complex mixture of carbon- and hydrogen-containing chemical species (excluding elemental carbon, carbon monoxide, and carbon dioxide) which are volatile at normal temperature and pressure. In precise terms, VOCs are those organic compounds whose vapor pressure ranges from 0.13 kPa to 101.3 kPa at 293 K. VOCs also include oxygenated, halogenated and sulphur-containing hydrocarbons. VOCs are basically grouped into methane and non-methane hydrocarbons (NMHCs). VOCs are emitted both from anthropogenic and natural sources. The important anthropogenic sources of NMVOCs include fossil fuel combustion, processing of organic chemicals and organic wastes. From anthropogenic sources globally, 103 Tg of NMVOCs are emitted yearly. The author and his co-workers at Jawaharlal Nehru University (JNU) have estimated that India emits about 8 million tonnes of NMVOCs per annum.
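The quantitative definition above (vapor pressure between 0.13 kPa and 101.3 kPa at 293 K) translates directly into a one-line classification rule. This is only a sketch; the numeric examples are approximate, and regulatory definitions of VOCs differ between jurisdictions:

```python
def is_voc(vapor_pressure_kpa: float) -> bool:
    """Return True if a compound's vapor pressure at 293 K falls in
    the VOC range quoted above (0.13 kPa to 101.3 kPa)."""
    return 0.13 <= vapor_pressure_kpa <= 101.3

# Illustrative, approximate values near 293 K:
print(is_voc(10.0))     # benzene, roughly 10 kPa -> classified as a VOC
print(is_voc(0.0001))   # a heavy, low-volatility oil -> not a VOC
print(is_voc(200.0))    # a gas well above atmospheric pressure -> not a VOC
```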
<urn:uuid:a6361964-4bc8-419a-be99-20c863838755>
3.703125
279
Knowledge Article
Science & Tech.
23.899662
The Greek word “zeo” means to boil, and “lithos” means stone; thus zeolite means “the rock that boils.” Because of their unique porous properties, zeolites are used in a variety of applications. What are zeolites? Zeolites are honeycomb-like small rigid crystals working much like a sponge. When heated, the pores open. Acting like squeezed sponges, zeolites filter substances by trapping large molecules. This helps some chemical reactions to take place. For instance, zeolites in laundry detergent exchange magnesium and calcium ions from hard water with their own sodium ions. That exchange improves the lathering effect of the detergent in the water, which has now become soft due to the exchange of ions. Referred to as molecular sieves, zeolites contribute to a clean, safe environment in various different ways. They are often used to remove toxic wastes, in water softening and purification, and in the separation and removal of gases and solvents. They are therefore used to filter air and water to help clean up the environment. On golf courses, zeolites help hold water and distribute plant nutrients throughout the grass. The crystal structures can be loaded with nitrogen and potassium required by the plants and combined with other slow-dissolving salts such as calcium and phosphorus. The zeolites store this multivitamin combination for plants and release it slowly as and when needed for growth. This method has the added advantage of preventing loss of water and nutrients to the ground.
<urn:uuid:36b9bc4e-a30b-4500-a10c-c57814264b9a>
3.453125
320
Knowledge Article
Science & Tech.
43.255011
The Most Energy Efficient Building In America - Science Insider Reported May 2011

HOW CAN HOMES CONSERVE ENERGY? There are many ways in which houses can conserve energy. Improvements in energy-efficient lighting can reduce power usage by as much as 65 percent. In fact, if every American household changed just five of the most-used lighting fixtures to energy-efficient technology, they would save a total of $6 billion in costs and reduce power usage by the equivalent of the annual output of more than 21 power plants. Many homes have high-performance, energy-efficient windows -- featuring double glazing or special coatings -- to reduce heat loss in cooler climates and heat gain in warmer climates. These two factors account for 50 percent of a home's heating and cooling needs. Replacing window frames with low-conductance materials like wood, vinyl and fiberglass can also improve a home's insulating capability.

ON THE GRID: The nation's power grid boasts more than 6,000 interconnected power generation stations. Power is sent around the country via half a million miles of bulk transmission lines carrying high-voltage charges of electricity. From these lines, power is sent to regional and neighborhood substations, where the electricity is then stepped down from high voltage to a current suitable for use in homes and offices. The system has its advantages: distant stations can provide electricity to cities and towns that may have lost power. But unusually high or unbalanced demands for power -- especially those that develop suddenly -- can upset the smooth flow of electricity. This can cause a blackout in one section of a grid, or ripple through the entire grid, shutting down one section after another, making it difficult to restore power from neighboring stations.

The American Society of Civil Engineers and the Materials Research Society contributed to the information contained in the TV portion of this report.
If you would like more information, please contact: National Renewable Energy Laboratory U.S. Department of Energy, Golden Field Office
<urn:uuid:2cf97311-132e-489e-aeca-94cef4835537>
3.328125
404
Truncated
Science & Tech.
31.137893
Browsing: All Content in Algebra II for Rational Root Theorem

Resource Name: Watch Your P's and Q's
Topic (Course): Rational Root Theorem (Algebra II)
Technology: TI-Nspire
Type: Activity

Students will investigate rational roots of polynomials graphically and numerically. Students will use the Rational Zero Theorem and test roots by plugging them into the given function using spreadsh... More: lessons, discussions, ratings, reviews,...
<urn:uuid:127d57af-a7fe-447d-9886-5e7f0c327bb5>
2.75
124
Content Listing
Science & Tech.
38.803262
Before i posted this, i tried reading my notes again and again! so any help....id be thankful! here is the problem...

consider f(x) = ln x, with x0 an element in (0,1)

confirm the continuity of f in (0,1)

Let x0 be an element in (0,1). then

|ln x - ln x0| = |ln(x/x0)| = |ln(1 - (x0 - x)/x0)|

and this is smaller than or equal to 2|(x0 - x)/x0| if |(x0 - x)/x0| < 1/2

ok so first.. why is |(x0 - x)/x0| < 1/2?? did someone decide to randomly pick 1/2...? and why is it |(x0 - x)/x0| < 1/2... where did the ln go..? wouldnt it make more sense if it was .. ln|(x0 - x)/x0| < 1/2 ?

Take delta = min(epsilon*x0/2, x0/2)

how was delta chosen to be these?? thank you! any help... would save me a lot of time..effort. im looking through books..online.. believe me im trying but this is logic (and i am poor at it!) i havent written up the whole problem, only the stage i am stuck at so far!
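[Editor's note, not part of the original post: the 1/2 in these notes is not random. One standard justification of the bound being used, with t = (x0 - x)/x0, is:

```latex
% Why |ln(1-t)| <= 2|t| whenever |t| <= 1/2:
% write the logarithm as an integral and bound the integrand,
% using that 1 - s >= 1/2 for |s| <= 1/2.
\[
\bigl|\ln(1-t)\bigr|
  = \left|\int_{0}^{t}\frac{ds}{1-s}\right|
  \le |t|\cdot\max_{|s|\le 1/2}\frac{1}{1-s}
  = |t|\cdot\frac{1}{1-\tfrac{1}{2}}
  = 2|t|,
\qquad |t|\le \tfrac{1}{2}.
\]
```

The restriction |t| < 1/2 is exactly what keeps the denominator 1 - s at least 1/2, which is where the factor 2 comes from. The choice delta = min(epsilon*x0/2, x0/2) then does two jobs at once: the x0/2 term guarantees |t| <= 1/2 so the bound applies, and the epsilon*x0/2 term makes 2|x - x0|/x0 < epsilon.]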
<urn:uuid:f772f0eb-2ffd-421c-b84c-31768b65c0b8>
2.828125
322
Q&A Forum
Science & Tech.
116.780044
I have always been intrigued by the extraordinary insights of the self-taught mathematician Srinivasa Ramanujan. He worked in almost complete isolation from the mathematical community, and independently rediscovered many existing results while also making his own unique contributions. He didn’t even share notation with the rest of the community, somehow finding his way without being led. I’m convinced that this remarkable life must be showing us something about the very nature of the thoughts he followed – something we have neglected about the nature of mathematics itself. He was brought to my attention again when Scientific American wrote about India’s response to his 125th birthday on December 22. Last year Prime Minister Manmohan Singh declared 2012 to be a National Mathematics Year in India in honor of Ramanujan. But then, even more newsworthy, I found a number of reports about how mathematicians were able to show that a hunch Ramanujan had about the properties of a class of functions (that were never before heard of) was correct. The story was reported in the Daily Mail on December 28. While on his death-bed in 1920, Ramanujan wrote a letter to his mentor, English mathematician G. H. Hardy, outlining several new mathematical functions never before heard of, along with a hunch about how they worked. Decades later, researchers say they’ve proved he was right – and that the formula could explain the behaviour of black holes. ‘We’ve solved the problems from his last mysterious letters,’ Emory University mathematician Ken Ono said. In each of the accounts of this development, some reference was made to the fact that Ramanujan’s insight was contained in a dream. From Daily News and Analysis: Ramanujan, a devout Hindu, thought these patterns were revealed to him by the goddess Namagiri. However, no one at the time understood what he was talking about. The same statement appeared in the Daily Mail with an image of the goddess.
A more thorough discussion of Ramanujan’s insight can be found in the article What is a Mock Modular Form? published by the American Mathematical Society. From the Huffington Post: Ramanujan believed that 17 new functions he discovered were “mock modular forms” that looked like theta functions when written out as an infinite sum (their coefficients get large in the same way), but weren’t super-symmetric. Ramanujan, a devout Hindu, thought these patterns were revealed to him by the goddess Namagiri. Ramanujan died before he could prove his hunch. But more than 90 years later, Ono and his team proved that these functions indeed mimicked modular forms, but don’t share their defining characteristics, such as super-symmetry. In developing mock modular forms, Ramanujan was decades ahead of his time, Ono said; mathematicians only figured out which branch of math these equations belonged to in 2002. “Ramanujan’s legacy, it turns out, is much more important than anything anyone would have guessed when Ramanujan died,” Ono said. I enjoyed this video posted on youtube that helps bring both the math story and the personal story to life. It was suggested here that, given Ramanujan’s religious life, it isn’t really a surprise that he would attribute his vision to a Hindu goddess. But a statement like that is just suggesting that we needn’t think about it. There is something to think about, and it’s not whether mathematics is really divine or not. While there may be no easy way to address this question, the peculiarities of Ramanujan’s work should encourage some of us to wonder about how spiritual vision, dream vision, waking vision, and what I’m tempted to call cognitive vision (the perception of pure structural meaning) are related.
<urn:uuid:3eedb26e-dc63-4d3d-9778-c0ff2865700b>
3.171875
826
Personal Blog
Science & Tech.
38.899946
The ANOVA F-test to compare the means of k normally distributed populations is not applicable when the variances are unknown and not known to be equal. A special case, k=2, is the famous Behrens-Fisher problem (Behrens, 1929; Fisher, 1935). Welch's (1951) test was proposed to fill this void, a generalization of his earlier work (Welch, 1947). This m-file does not require the full data samples; it works with only the size, mean and variance of each sample.

Syntax: function welchanova(x,alpha)
x - data nx2 matrix (Col 1 = data; Col 2 = sample code)
alpha - significance level (default=0.05)

Outputs:
- Summary statistics from the samples
- Decision on the null-hypothesis tested
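The same computation can be sketched from summary statistics alone. The function below is my own Python rendering of Welch's (1951) statistic, not the m-file itself:

```python
def welch_anova(sizes, means, variances):
    """Welch's (1951) ANOVA from per-group summary statistics.

    sizes, means, variances: per-group sample size, sample mean, and
    unbiased sample variance. Returns (F, df1, df2); no equal-variance
    assumption is made.
    """
    k = len(sizes)
    w = [n / v for n, v in zip(sizes, variances)]        # precision weights
    W = sum(w)
    grand = sum(wi * m for wi, m in zip(w, means)) / W   # weighted grand mean

    # Between-group variability, weighted by precision.
    A = sum(wi * (m - grand) ** 2 for wi, m in zip(w, means)) / (k - 1)

    # Correction term for unequal variances.
    lam = sum((1 - wi / W) ** 2 / (n - 1) for wi, n in zip(w, sizes))
    F = A / (1 + 2 * (k - 2) / (k * k - 1) * lam)
    df1 = k - 1
    df2 = (k * k - 1) / (3 * lam)
    return F, df1, df2
```

With k = 2 the correction term in the denominator vanishes and the statistic reduces to the square of Welch's t.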
<urn:uuid:9a57f667-1db7-4cd1-84a6-5a24633f8867>
2.921875
176
Knowledge Article
Science & Tech.
59.252095
What relationship, if any, is there among the quality, size, and amount of fruit found in an area, and the number and variety of birds found in the same area? On coffee plantations in Latin America, the relationship between the 2 is unmistakable and compelling. Coffee plantations make up 42% of all dedicated cropland in northern Latin America. Many coffee plantations are agricultural areas that combine trees with crops, also known as agroforestry systems.

Shade grown coffee farm

Because of the nature of these plantations, a larger variety and number of trees can be incorporated onto the sites. This diversity and quantity of trees, particularly fruit-bearing trees such as guava and banana trees, makes coffee plantations an excellent location to study the relationship between the abundance of fruit and the number of species of fruit-eating birds. For farmers and other land management specialists concerned about the conservation of birds and other animals, figuring out this relationship can help them support local species. For this study, scientists from the University of Georgia, Southeast Partners in Flight, and the Smithsonian Migratory Bird Center conducted research at the University of Georgia's San Luis Research Station in Northwestern Costa Rica. In 2008 the team surveyed birds and fruit trees for 10 months at 6 coffee plantations to determine if birds responded to the availability of fruit. This study was different from previous surveys because the team used a different metric to determine the abundance of fruit resources in the area. This measurement is called Fruit Energy Availability, or FEA. FEA is a new concept that incorporates 3 pieces of information: the number of fruits, the size of the fruits, and the quality (energy content) of the fruits. This combined value is a better indicator of the value of the tree to birds that eat its fruit. Of the 113 bird species the team recorded, 80 were observed eating fruit during the study. Calorie values were calculated for 27 plant species producing fruits consumed by the birds on the 6 coffee plantations.
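Those three pieces of information combine into a single kcal value per site per month. As an illustration only (the exact formula and all the numbers below are my assumptions, not taken from the study), FEA can be sketched as calories per fruit times fruit count, summed over fruiting tree species:

```python
def fruit_energy_availability(trees):
    """Total FEA (kcal) for one site in one month.

    trees: list of (fruit_count, kcal_per_fruit) per fruiting species,
    where kcal_per_fruit already folds together fruit size and quality.
    """
    return sum(count * kcal for count, kcal in trees)

# Hypothetical site: a guava-like and a banana-like species in fruit.
site = [(300, 30.0), (40, 90.0)]
fea = fruit_energy_availability(site)

# Thresholds reported in the article: frugivores migrate in above
# ~12,000 kcal and leave when FEA drops below ~500 kcal.
status = "attracting" if fea >= 12_000 else ("losing" if fea < 500 else "holding")
```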
The abundance of fruit varied from location to location and changed over time due to seasonal temperatures and rainfall. The team discovered a direct relationship between the average monthly FEA for each site and the number of different bird species found in each area, or species richness. The study clearly showed that plantations with higher average monthly FEA values also had more birds. FEA was found to be an excellent tool for predicting bird species richness. For example, the types of birds observed on the 6 coffee plantations were very similar. However, bird species richness varied between plantations. During the study, 65% of the time there were only 3 or fewer tree species with fruits found on the plantations. The FEA value for the tree species present explained 52% of the variation in bird species richness. Additionally, once the FEA value reached a certain threshold, in this case 12,000 kcal, birds such as parrots, pigeons, new world flycatchers, and thrushes would migrate into the habitat. Once the FEA dropped below 500 kcal, birds would leave. Consistent, high quality sources of food lead to stable bird populations. Birds that eat fruit are an important part of the ecosystem, spreading the seeds of the plants they eat throughout the forest. Sadly, fruit-eating birds that spread seeds over long distances are predicted to have a much higher extinction rate over the next century than most other birds. However, the incorporation of FEA into land management practices of agroforestry areas such as coffee plantations can help with bird conservation. By evaluating the amount of energy a tree's fruit can provide and including a variety of plants with high FEA values, agricultural areas such as the coffee plantations in this study can help ensure bird species survival. This article summarizes the information in this publication: Bird community response to fruit energy. 2010. Valerie E. Peters, Rua Mordecai, C. Ronald Carroll, Robert J. Cooper, Russell Greenberg. 
Journal of Animal Ecology 79(4): 824–835. 1. The abundance and predictability of food resources have been posited as explanations for the increase of animal species richness in tropical habitats. However, the heterogeneity of natural ecosystems makes it difficult to quantify a response of animal species richness to these qualities of food resources. 2. Fruit-frugivore studies are especially conducive for testing such ecological theories because fruit is conspicuous and easily counted. Fruit-frugivore research in some locations has demonstrated a relationship between animal abundance and fruit resource abundance, both spatially and temporally. These studies, which typically use fruit counts as the variable of fruit abundance, have never documented a response of species richness at the community level. Furthermore, these studies have not taken into account factors influencing the detection of an individual within surveys. 3. Using a combination of nonstandard approaches to fruit-frugivore research, we show a response of bird species richness to fruit resources. First, we use uniform and structurally similar, one-ha shade-grown coffee plots as replicated experimental units to reduce the influence of confounding variables. Secondly, we use multi-season occupancy modelling of a resident omnivorous bird assemblage in order to account for detection probability in our analysis of site occupancy, local immigration and local emigration. Thirdly, we expand our variable of fruit abundance, Fruit Energy Availability (FEA), to include not only fruit counts but also fruit size and fruit quality. 4. We found that a site’s average monthly FEA was highly correlated (0·90) with a site’s average bird species richness. In our multi-season occupancy model 92% of the weight of evidence supported a single model that included effects of FEA on initial occupancy, immigration, emigration and detection. 5. 
These results demonstrate that fruit calories can broadly influence the richness of a neotropical bird community, and that fluctuations of FEA explains much of the site occupancy patterns of component species. This study shows that in depauperate, managed landscapes fruit resource abundance supports more species and fruit constancy allows for higher levels of avian persistence, an important practical concept for conservation planning. Teachers, Standards of Learning, as they apply to these articles, are available for each state.
<urn:uuid:97846231-ce32-49a3-a196-978738d6ae9b>
3.875
1,242
Knowledge Article
Science & Tech.
34.223437
First light from the Far-Infrared Spectroscopy of the Troposphere (FIRST) instrument
Article first published online: 4 APR 2006
Copyright 2006 by the American Geophysical Union.
Geophysical Research Letters, Volume 33, Issue 7, April 2006

How to Cite: et al. (2006), First light from the Far-Infrared Spectroscopy of the Troposphere (FIRST) instrument, Geophys. Res. Lett., 33, L07704, doi:10.1029/2005GL025114.

- Issue published online: 4 APR 2006
- Article first published online: 4 APR 2006
- Manuscript Accepted: 2 MAR 2006
- Manuscript Revised: 13 FEB 2006
- Manuscript Received: 2 NOV 2005

We present first light spectra that were measured by the newly-developed Far-Infrared Spectroscopy of the Troposphere (FIRST) instrument during a high-altitude balloon flight from Ft. Sumner, NM on 7 June 2005. FIRST is a Fourier Transform Spectrometer designed to measure accurately the far-infrared (15 to 100 μm; 650 to 100 wavenumbers, cm−1) emission spectrum of the Earth and its atmosphere. The flight data successfully demonstrated the FIRST instrument's ability to observe the entire energetically significant infrared emission spectrum (50 to 2000 cm−1) at high spectral and spatial resolution on a single focal plane in an instrument with one broad spectral bandpass beamsplitter. Comparisons with radiative transfer calculations demonstrate that FIRST accurately observes the very fine spectral structure in the far-infrared. Comparisons also show excellent agreement between the atmospheric window radiance measured by FIRST and by instruments on the NASA Aqua satellite that overflew the FIRST flight. FIRST opens a new window on the spectrum that can be used for studying atmospheric radiation and climate, cirrus clouds, and water vapor in the upper troposphere.
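The abstract quotes the band both as a wavelength range and as a wavenumber range. The two are related by the standard conversion wavenumber [cm^-1] = 10^4 / wavelength [μm], which a reader can verify (the conversion is textbook spectroscopy, not something specific to this paper):

```python
def um_to_wavenumber(lambda_um):
    """Convert wavelength in micrometres to wavenumber in cm^-1."""
    return 1e4 / lambda_um

# The far-infrared band quoted in the abstract: 15 to 100 um.
assert round(um_to_wavenumber(100)) == 100   # 100 um -> 100 cm^-1
assert round(um_to_wavenumber(15)) == 667    # 15 um -> ~667 cm^-1 (~650 as quoted)
```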
<urn:uuid:d7be7568-5a0d-4677-8ddb-47422d7233fb>
2.796875
397
Academic Writing
Science & Tech.
41.400784
I'm reading Nano: The Essentials by T. Pradeep and I came upon this statement in the section explaining the basics of scanning electron microscopy. However, the equation breaks down when the electron velocity approaches the speed of light as mass increases. At such velocities, one needs to do relativistic correction to the mass so that it becomes... We all know about the famous theory of relativity, but I couldn't quite grasp the "why" of its concepts yet. This might shed new light on what I already know about time slowing down for me if I move faster. Why does the mass of an object increase when its speed approaches that of light?
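For readers who want to see the size of the correction the book is alluding to: the rest mass m0 gets replaced by m = γ·m0 with γ = 1/√(1 − v²/c²). A short sketch of how γ grows with speed (the speeds chosen below are my illustration, not from the book):

```python
import math

C = 299_792_458.0          # speed of light, m/s
M0_KG = 9.109_383_7e-31    # electron rest mass, kg

def gamma(v):
    """Lorentz factor for speed v in m/s."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

def relativistic_mass(v):
    """'Relativistic mass' m = gamma * m0, the framing the book uses."""
    return gamma(v) * M0_KG

# At 10% of c the correction is only ~0.5%; at 70% of c it is ~40%.
assert abs(gamma(0.1 * C) - 1.0) < 0.01
assert gamma(0.7 * C) > 1.4
```

Note that many modern treatments keep mass invariant and put γ into the momentum instead, p = γ·m0·v; the "mass increases" language is the older convention that the quoted textbook uses.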
<urn:uuid:a397a31b-1da4-4dfa-a492-d8058e726b0d>
2.765625
138
Q&A Forum
Science & Tech.
63.367807
Major Section: MISCELLANEOUS Sometimes an event will announce that it is ``redundant''. When this happens, no change to the logical world has occurred. This happens when the logical name being defined is already defined and has exactly the same definition, from the logical point of view. This feature permits two independent books, each of which defines some name, to be included sequentially provided they use exactly the same definition. When are two logical-name definitions considered exactly the same? It depends upon the kind of name being defined. deflabel event is never redundant. This means that if you have a deflabel in a book and that book has been included (without error), then references to that label denote the point in history at which the book introduced the label. See the note about shifting logical defuns) event is redundant if for each function to be introduced, there has already been introduced a function with the same name, the same formals, and syntactically identical :measure, type declarations, body (before macroexpansion), and an appropriate mode (see the discussion of ``appropriate Exceptions: (1) If either definition is declared (see xargs), then the two events must be identical. (2) It is permissible for one definition to have a t and the other to have no explicit guard (hence, the guard is implicitly t). (3) The :measure check is avoided if we are skipping proofs (for example, during include-book), and otherwise, the new definition may have a (:? v1 ... vk), where (v1 ... vk) enumerates the variables occurring in the measure stored for the old definition. verify-guards event is redundant if the function has already had its guards verified. defthm event is redundant if there is already an axiom or theorem of the given name and both the formula (after macroexpansion) and the rule-classes are syntactically identical. Note that a defaxiom can make a subsequent defthm redundant, and a defthm can make a subsequent defaxiom redundant as well. 
defconst is redundant if the name is already defined either with a defconst event or one that defines it to have the defstobj is redundant if there is already a defstobj event with the same name that has exactly the same field descriptors (see defstobj), in the same order, and with the same :renaming value if supplied for either event. defmacro is redundant if there is already a macro defined with the same name and syntactically identical arguments, guard, and body. defpkg is redundant if a package of the same name with exactly the same imports has been defined. deftheory is never redundant. The ``natural'' notion of deftheory forms is that the names and values of the two theory expressions are the same. But since most theory expressions are sensitive to the context in which they occur, it seems unlikely to us that two deftheorys coming from two sequentially included books will ever have the same values. So we prohibit redundant theory definitions. If you try to define the same theory name twice, you will get a ``name in use'' error. in-theory event is never redundant because it doesn't define any push-untouchable event is redundant if every name supplied is already a member of the corresponding list of untouchable symbols. remove-untouchable event is redundant if no name supplied is a member of the corresponding list of untouchable symbols. reset-prehistory event is redundant if it does not cause any change. set-body event is redundant if the indicated body is already the defdoc events are never redundant because they don't define any name. encapsulate event is redundant if and only if a syntactically encapsulate has already been executed under the same include-book is redundant if the book has already been included. Note About Appropriate Modes: Suppose a function is being redefined and that the formals, guards, types, stobjs, and bodies are identical. When are the modes ( logic) ``appropriate?'' Identical modes are appropriate. 
But what if the old mode was :program and the new mode is :logic? This is appropriate, provided the definition meets the requirements of the logical definitional principle. That is, you may redefine ``redundantly'' a :program mode function as a :logic mode function provided the measure conjectures can be proved. This is what verify-termination does. Now consider the reverse style of redefinition. Suppose the function was defined in :logic mode and is being identically redefined in :program mode. This is inappropriate. We do not permit the downgrading of a function from :logic mode to :program mode, since that might produce a logical world in which there were theorems about a :program mode function, violating one of ACL2's basic assumptions.

Note About Shifting Logical Names: Suppose a book defines a function fn and later uses fn as a logical name in a theory expression. Consider the value of that theory expression in two different sessions. In session A, the book is included in a world in which fn is not already defined, i.e., in a world in which the book's definition of fn is not redundant. In session B, the book is included in a world in which fn is already identically defined. When the book's definition of fn is used as a logical name in a theory expression, it denotes the point in history at which fn was introduced. Observe that those points are different in the two sessions. Hence, it is likely that theory expressions involving fn will have different values in session A than in session B. This may adversely affect the user of your book. For example, suppose your book creates a theory via deftheory that is advertised just to contain the names generated by the book. But suppose you compute the theory as the very last event in the book using:

(set-difference-theories (universal-theory :here)
                         (universal-theory fn))

where fn is the very first event in the book and happens to be a defun event. This expression returns the advertised set if fn is not already defined when the book is included.
But if fn were previously (identically) defined, the theory is larger than advertised. The moral of this is simple: when building books that other people will use, it is best to describe your theories in terms of logical names that will not shift around when the books are included.

Note About Unfortunate Redundancies: Notice that our syntactic criterion for redundancy of definitions does not allow redefinition to take effect unless there is a syntactic change in the definition. The following example shows how an attempt to redefine a function can fail to make any change.

(set-ld-redefinition-action '(:warn . :overwrite) state)
(defmacro mac (x) x)
(defun foo (x) (mac x))
(defmacro mac (x) (list 'car x))
(defun foo (x) (mac x)) ; redundant, unfortunately; foo does not change
(thm (equal (foo 3) 3)) ; succeeds, showing that redef of foo didn't happen

The call of macro mac was expanded away when the first definition of foo was processed, so the new definition of mac is not seen when foo is redefined; yet our attempt at redefinition failed! An easy workaround is first to supply a different definition of foo, just before the last definition of foo above. Then that final definition will no longer be redundant.

The phenomenon illustrated above can occur even without macros. Here is a more complex example, based on one supplied by Grant Passmore.

(defun n3 () 0)
(defun n4 () 1)
(defun n5 () (> (n3) (n4))) ; body is normalized to nil
(thm (equal (n5) nil)) ; succeeds, trivially
(set-ld-redefinition-action '(:warn . :overwrite) state)
(defun n3 () 2)

If now we execute (thm (equal (n5) nil)), it still succeeds even though we expect (> (n3) (n4)) = (> 2 1) = t. That is because the body of n5 was normalized to nil. (Such normalization can be avoided; see the brief discussion of :normalize in the documentation for defun.)
So, given this unfortunate situation, one might expect at this point simply to redefine n5 using the same definition as before, in order to pick up the new definition of n3. Such ``redefinition'' would, however, be redundant, for the same reason as in the previous example: no syntactic change was made to the definition. The same workaround applies as before: redefine n5 to be something different, and then redefine n5 again to be as desired.

A related phenomenon can occur for encapsulate. As explained above, an encapsulate event is redundant if it is identical to one already in the database. Consider then the following contrived example.

(encapsulate () (defun foo (x) x))
(set-ld-redefinition-action '(:warn . :overwrite) state)
(defun foo (x) (cons x x))
(encapsulate () (defun foo (x) x)) ; redundant!

The last encapsulate event is redundant because it meets the criterion for redundancy: it is identical to the earlier encapsulate event. A workaround can be to add something trivial to the encapsulate, for example:

(encapsulate ()
  (deflabel try2) ; ``Increment'' to try3 next time, and so on.
  (defun foo (x) x))

The examples above are suggestive but by no means exhaustive. Consider the following example.

(defstub f1 () => *)
(set-ld-redefinition-action '(:warn . :overwrite) state)
(defun f1 () 3)
(defstub f1 () => *) ; redundant -- has no effect

The reason that the final defstub is redundant is that defstub is a macro that expands to a call of encapsulate; so this is very similar to the immediately preceding example.
<urn:uuid:1e05b61c-831e-489e-9aa3-b86e484d4882>
2.765625
2,310
Documentation
Software Dev.
39.226362
The Universe is rarely static, although the timescales involved can be very long. Since modern astronomical observations began we have been observing the birthplaces of new stars and planets, searching for and studying the subtle changes that help us to figure out what is happening within. The bright spot located at the edge of the bluish fan-shaped structure in this Hubble image is a young star called V* PV Cephei, or PV Cep. It is a favourite target for amateur astronomers because the fan-shaped nebulosity, known as GM 1-29 or Gyulbudaghian’s Nebula, changes over a timescale of months. The brightness of the star has also varied over time. “This planet is not terra firma. It is a delicate flower and it must be cared for. It’s lonely. It’s small. It’s isolated, and there is no resupply. And we are mistreating it. Clearly, the highest loyalty we should have is not to our own country or our own religion or our hometown or even to ourselves. It should be to, number two, the family of man, and number one, the planet at large. This is our home, and this is all we’ve got.” — Scott Carpenter, Mecury 7 astronaut Every day is Earth day 🌍 Space News of the Day: Three Exoplanets May Be Life-Sustainable After four years circling space looking for new planets, the Kepler spacecraft has identified three planets that look like they could possibly sustain life. The first two, known as Kepler-62e and Kepler-62f (shown above, top and middle), are approximately 1,200 light-years away and have estimated temperatures of -3 degrees C (26.6 F) and -65 degrees C (-85 F) respectively. The third planet, Kepler-69c (shown above, bottom) boasts a summer day-like temperature of 27 degrees C (80.6 F). Some scientists think these planets could actually be covered in oceans, but they are unsure if they would be composed of water or some other liquid. Oh, I love Space News! 
Solar Storms, With a Chance of Proton Showers Sun with Solar Flare Image Credit: NASA Solar Dynamics Observatory Total Solar Eclipse Phases Credit: Jerry Lodriguss Ever wonder what happens if you cry in space?
<urn:uuid:717045a7-aac6-44f5-9004-cb1e20f6c5be>
3.328125
503
Personal Blog
Science & Tech.
57.003584
Light is a form of energy. To create light, another form of energy must be supplied. There are two common ways for this to occur, incandescence and luminescence. Incandescence is light from heat energy. If you heat something to a high enough temperature, it will begin to glow. When an electric stove's heater or metal in a flame begin to glow "red hot", that is incandescence. When the tungsten filament of an ordinary incandescent light bulb is heated still hotter, it glows brightly "white hot" by the same means. The sun and stars glow by incandescence. Luminescence is "cold light" that can be emitted at normal and lower temperatures. In luminescence, some energy source kicks an electron of an atom out of its lowest energy "ground" state into a higher energy "excited" state; then the electron returns the energy in the form of light so it can fall back to its "ground" state. With few exceptions, the excitation energy is always greater than the energy (wavelength, color) of the emitted light. If you lift a rock, your muscles are supplying energy to raise the rock to a higher-energy position. If you then drop the rock, the energy you supplied is released, some of it in the form of sound, as it drops back to its original low-energy position. It is somewhat the same with luminescence, with electrical attraction replacing gravity, the atomic nucleus replacing the earth, an electron replacing the rock, and light replacing the sound. There are several varieties of luminescence, each named according to the source of energy, or the trigger for the luminescence: Fluorescence and Photoluminescence are luminescence where the energy is supplied by electromagnetic radiation (rays such as light, which will be discussed later). Photoluminescence is generally taken to mean "luminesce from any electromagnetic radiation", while fluorescence is often used only for luminescence caused by ultraviolet, although it may also be used for other photoluminescences. 
Fluorescence is seen in fluorescent lights, amusement park and movie special effects, the redness of rubies in sunlight, "day-glo" or "neon" colors, and in emission nebulae seen with telescopes in the night sky. Bleaches enhance their whitening power with a white fluorescent material. Photoluminescence should not be confused with reflection, refraction, or scattering of light, which cause most of the colors you see in daylight or bright artificial lighting. Photoluminescence is distinguished in that the light is absorbed for a significant time, and generally produces light of a frequency that is lower than, but otherwise independent of, the frequency of the absorbed light. Chemiluminescence is luminescence where the energy is supplied by chemical reactions. Those glow-in-the-dark plastic tubes sold in amusement parks are examples of chemiluminescence. Bioluminescence is luminescence caused by chemical reactions in living things; it is a form of chemiluminescence. Fireflies glow by bioluminescence. Electroluminescence is luminescence caused by electric current. Cathodoluminescence is electroluminescence caused by electron beams; this is how television pictures are formed on a CRT (Cathode Ray Tube). Other examples of electroluminescence are neon lights, the auroras, and lightning flashes. This should not be mistaken for what occurs with the ordinary incandescent electric lights, in which the electricity is used to produce heat, and it is the heat that in turn produces light. Radioluminescence is luminescence caused by nuclear radiation. Older glow-in-the-dark clock dials often used a paint with a radioactive material (typically a radium compound) and a radioluminescent material. The term may be used to refer to luminescence caused by X-rays, also called photoluminescence. Phosphorescence is delayed luminescence or "afterglow". 
When an electron is kicked into a high-energy state, it may get trapped there for some time (as if you lifted that rock, then set it on a table). In some cases, the electrons escape the trap in time; in other cases they remain trapped until some trigger gets them unstuck (like the rock will remain on the table until something bumps it). Many glow-in-the-dark products, especially toys for children, involve substances that receive energy from light, and emit the energy again as light later. Triboluminescence is phosphorescence that is triggered by mechanical action or electroluminescence excited by electricity generated by mechanical action. Some minerals glow when hit or scratched, as you can see by banging two quartz pebbles together in the dark. (The visible light emitted is often a secondary fluorescence effect, from electroluminescence in the ultraviolet). Thermoluminescence is phosphorescence triggered by temperatures above a certain threshold. This should not be confused with incandescence, which occurs at higher temperatures. In thermoluminescence, heat is not the primary source of the energy, only the trigger for the release of energy that originally came from another source. It may be that all phosphorescences have a minimum temperature, but many have a minimum triggering temperature below normal temperatures and are not normally thought of as thermoluminescences. Optically stimulated luminescence is phosphorescence triggered by visible light or infrared. In this case red or infrared light is only a trigger for release of previously stored energy.
<urn:uuid:88aa856f-4c75-47ae-a461-d50c6bfeeba8>
4
1,181
Knowledge Article
Science & Tech.
23.465575
Science Fair Project Encyclopedia The American sycamore (Platanus occidentalis), also known as American plane and Buttonwood, is one of the species of Platanus native to North America, where it is rather confusingly very often just called Sycamore, which can refer to other types of tree. It forms a massive tree, typically reaching up to 30-40 metres high. In its native range, it is often found in riparian and wetland areas. The range extends from Iowa to Ontario and Maine in the north, Nebraska in the west, and south to Texas to Florida. Closely related species (see Platanus) occur in Mexico and the southwestern states of the U.S.A. It is sometimes grown for timber, and has become naturalised in some areas outside its native range. American sycamore is susceptible to Plane anthracnose disease (Apiognomonia veneta, syn. Gnomonia platani), an introduced fungus naturally found on the Oriental plane P. orientalis, which has evolved considerable resistance to the disease. Although rarely killed or even seriously harmed, American sycamore is commonly partially defoliated by the disease, rendering it unsightly as a specimen tree. As a result, American sycamore is not often planted; the more resistant London plane (P. x hispanica; hybrid P. occidentalis x P. orientalis) being preferred instead. The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License. Click here to see the transparent copy and copyright details
<urn:uuid:0ceaa0a6-f5e5-4cf8-ac8f-3e9e3509979d>
3.546875
327
Knowledge Article
Science & Tech.
29.761053
To ensure the comparability of observation results and to estimate seasonal and year-to-year variations in oceanographic variables, it was suggested in Stockholm as early as 1899 that measurements should be made at standard depths and on standard sections. At the beginning of the 20th century observations started on the Kola Section in the Barents Sea (Knipovich 1906), and by the 1930s a network of such sections had been developed in the area (Figure 3.2.1). In the last 50 years regular observations of ecosystem components in the Barents Sea have been conducted both at sections and by area-covering surveys from ships and airplanes. In addition, many long- and short-term special investigations have been conducted, designed to study specific processes or to fill knowledge gaps. The quality of large hydrodynamical numerical models is now also at a level where they are useful for filling observation gaps in time and space for some parameters. Satellite data and hindcast global reanalysed datasets are also useful information sources. Old "G.O. Sars" and "Vilnius" during an intercalibration run of acoustic equipment. The observation system for the ecosystem and human activities in the Barents Sea is based on existing time-series of data collected by a number of Norwegian and Russian institutes. The contribution of different institutes to this monitoring is reflected in Tables 3.1.1-3.1.2. Monitoring methods are often developed for one or several target species or ecosystem variables (e.g. temperature and salinity). Utilisation of a measurement platform is essential for building up a broad knowledge of the ecosystem structure and variability, and therefore observations are conducted as broadly as possible. However, it is an impossible task to monitor all species in the ecosystem (e.g. ~3000 species of benthos, ~200 species of fish, ~25 species of marine mammals, etc.).
Therefore, historically, the main effort in biological monitoring has been on the key species, but in recent years there has been more focus on species diversity and trophic interactions. During a year an ecosystem component (e.g. zooplankton) is often monitored by multiple measuring platforms (e.g. sections, surveys, fixed stations, etc.). This chapter is therefore divided into two parts. The first part describes the monitoring "platforms", in a broad understanding of the word (chapter Monitoring of the ecosystem - Monitoring platforms). The second part describes the monitoring from the ecosystem-component perspective (chapter Monitoring of the ecosystem - Monitoring divided by ecosystem components). It should be emphasised that even though the institutions participating in the preparation of this report are responsible for the vast majority of ecosystem monitoring in the Barents Sea, others also conduct monitoring in this ocean. This report focuses on the monitoring conducted by the institutions that have contributed to it.
<urn:uuid:c48cf392-a581-4410-9665-5bb4da11f382>
3.1875
613
Knowledge Article
Science & Tech.
35.516051
Ethan A. Huff - March 14, 2011 Since 2006, the use of nanoparticles in consumer products has skyrocketed by over 600 percent. Nanotechnologies, which involve the manipulation of elements and other matter on the atomic and molecular scale, are now used in over 1,300 commercial and consumer products. And that number is expected to jump nearly three-fold by 2020. But are these nanoparticles safe for humans and the environment, particularly when used in food-related applications? According to data provided by the Project on Emerging Nanotechnologies (PEN), a group formed in 2005 for the purpose of “creat[ing] an active public and policy dialogue” on nanotechnology, nanoparticles are now used in everything from car batteries and appliances, to aluminum foil and non-stick cookware. The “Food and Beverage” section of PEN even includes various vitamin and mineral supplements that contain nanoparticles, as well as McDonald’s hamburger boxes.
<urn:uuid:a402fabf-099a-4a42-a416-b88c922804a1>
2.90625
203
Personal Blog
Science & Tech.
27.385705
Arthropods first appeared over 560 million years ago and are now the most abundant and diverse group of multicellular metazoans on Earth, with over a million species described and millions more left to be discovered! Arthropods are bilateral animals with internal and external segmentation. Most have a distinct head region and regional specialization along the rest of the body. Each body segment originally had a pair of segmented appendages. They usually have a pair of compound eyes in addition to one or more simple eyes. Arthropods have a body which is covered by a well-developed exoskeleton, so they need to moult to increase in size. They have an open circulatory system and a complete gut. Their nervous system is very similar to that of worms, with a dorsal brain and a pair of ventral nerve cords. Most arthropods reproduce sexually, but they may have direct, indirect or mixed development. This subphylum contains scorpions, spiders and "sea spiders". All of these groups share a similar body architecture, being divided into an anterior prosoma (the cephalothorax) and a posterior opisthosoma (abdomen). Uniramous appendages with many joints are found on each segment. The four pairs of walking legs of chelicerates lack extensor muscles, but possess flexor muscles. All members of this subphylum have chelicerae and pedipalps as their first and second prosomal appendages. The rest of the appendages vary from group to group, but all chelicerates lack antennae. Gas exchange is accomplished by book gills, book lungs, or tracheae, while waste products are excreted by coxal glands and/or Malpighian tubules. Members of this order are specialized predators. Their compact bodies can be divided into two regions: the gnathosoma (mouth) and the idiosoma (body). Larval mites have a short gnathosoma with stocky pedipalps on either side of paired chelicerae. Their legs have a strong, curved claw to help the larvae stay attached to their host.
The gnathosoma of adults is a simple, short channel that leads directly to the esophagus. Their pedipalps are well-developed and are used both as a tactile organ and to hold on to prey. Mites usually have two pairs of lateral eyes, with a medial eye spot between them. Their eyes detect light intensity, direction and wavelength. Gas exchange occurs mainly through the integument by diffusion. Larger species have a network of tubes to take environmental gases directly to their internal organs, while mites with thick exoskeletons have pores in the integument to let gases through. Mites osmoregulate by having large porous areas of their cuticle for the exchange of salts to maintain water balance. Water mites have an open circulatory system. All organ systems are found in the hemocoel and are bathed in hemolymph. The hemolymph circulates through the hemocoel by body movements. Undigested food accumulates in the midgut and is absorbed by the hemolymph. The hemolymph deposits these wastes in the excretory tubule as insoluble crystals. They are ultimately discharged through the excretory pore by pressure generated by body movements. Water mites have a complex lifestyle, with several stages and multiple hosts. Eggs are laid on plants, wood or stones. The six-legged larvae attach themselves to an insect host and feed on its bodily fluids. When they are fully engorged, they detach from their host and attach themselves to plants, where they enter an inactive stage. During this time, the larval tissues are resorbed and the body is reconfigured into an eight-legged deutonymph. Deutonymphs are an active, sexually immature stage lacking the completely hardened body of the adult and the final arrangement of the setae on the body. They are predaceous and usually feed on the insect larvae which they parasitized as larvae. The deutonymph stage is the major growth stage of water mites.
After reaching maturity, deutonymphs find a plant or soft substrate, embed their chelicerae and transform into another inactive stage, which emerges as an adult in a few days. Adults become sexually mature a few days after emerging. Many species reproduce without touching a member of the other sex. The male drops a spermatophore (a packet of sperm) that the female picks up and stores in spermathecae until needed. Mites only feed on fluids drawn from their hosts or prey. They use their muscular pharynx to suck the fluids out of their prey and into their digestive system. Mites can have serious impacts on other aquatic invertebrates. They often parasitize 20 - 50% of a population of adult aquatic insects. This parasitism affects the growth and reproductive ability of the insects. The free-living forms are often major predators on small aquatic organisms such as fish eggs, insect larvae, and small crustaceans. Mites do not have any major predators. Adults and deutonymphs are predaceous and often feed upon the immature stages of the species that they parasitize as larvae. Crustaceans are invertebrates belonging to the phylum Arthropoda and include such familiar groups as barnacles, crabs, crayfish, lobster, water fleas and pill bugs. Crustaceans are key players in aquatic food webs. The majority of plankton in freshwater is composed of cladocerans and copepods, which are the major consumers of phytoplankton. Benthic crustaceans are often both scavengers and consumers of plant life found on the lake bottom. Collectively, these crustaceans serve as a key food source for fish, especially during their juvenile stage. Aside from their role in aquatic food webs, the largest species of crustaceans are of considerable economic importance. Lobster, shrimp and even freshwater crayfish each support important fishing industries. They are also an increasingly important target for aquaculture activities.
In fact, the value of crustaceans produced in aquaculture is already as great as that of fish! Adults of the smallest species are less than 0.1 mm in length and weigh less than 1 mg. By comparison, the heaviest crustacean is the mud crab, which reaches a peak weight of 40 kg. The Japanese spider crab is the largest living arthropod, with a leg span of 4 m. Many crustaceans employ standard sexual reproduction, while other crustaceans reproduce by cyclic or obligate parthenogenesis, where males are unknown or rare. Females in parthenogenetic systems produce eggs which do not require fertilization to develop. Aside from this variation in mating system, many crustaceans produce two types of eggs: one which develops immediately, while the other may diapause for up to several hundred years. Crustaceans show extraordinary diversity in body shape and form, bearing anywhere from 3 to 50 pairs of limbs. However, crustaceans do share common features such as jointed, paired appendages, and two pairs of antennae. All crustaceans are enclosed in a protective exoskeleton made of chitin, which must be shed (or "moulted") to accommodate growth. Most crustaceans are carnivores or scavengers, though herbivores and detritivores are not uncommon. Cannibalism can occur at very high densities, or when individuals have just moulted and are vulnerable to attack. Food is taken into the mouth and passed to the gastric mill where it is ground into small particles. Digestion occurs in the midgut and waste is passed out of the hindgut. All crustaceans have an open circulatory system and employ either haemoglobin or haemocyanin as a respiratory pigment. Most crustaceans have a dorsal heart, but some smaller crustaceans simply circulate their hemolymph with body movements. Crustaceans osmoregulate in freshwater by producing copious amounts of urine.
Most freshwater crustaceans have thoracic and abdominal gills with which they exchange gases, while the rest simply diffuse gases across their body integument. Crustaceans have developed a complex tripartite brain and paired, ganglionated ventral nerve cords. They often possess both compound eyes and an array of simple eyes. Zooplankton show particular sensitivity to light as they undergo daily migrations up and down the water column to stay in the best light conditions. Chemosensory systems have evolved to allow them to locate food and mates while avoiding predators.
<urn:uuid:ee3efdda-30a9-4fa5-8372-c0901baf18cb>
3.984375
1,814
Knowledge Article
Science & Tech.
34.55159
Found 0 - 10 results of 59 programs matching keyword "solar viewing" Watch the beginning of Venus's transit across the disk of the sun, one of the rarest astronomical events. Watch the conclusion of Venus's 6.5-hour journey across the disk of the sun, one of the rarest astronomical events. Senior Exploratorium Scientist Paul Doherty demonstrates how you can make your own sun viewer. You can safely view sunspots, eclipses and transits with this equipment that you may have lying around the house! To learn more about the upcoming Transit of Venus visit: http://www.exploratorium.edu/venus/question3.html This After Dark event presented a collection of objects, organizations, and activities that use various alternative energy sources, and also looked at sustainably raised food. Astronomer Dr. Isabel Hawkins's journey to the stars began with two chance moments of enchantment with celestial bodies in her native Argentina. Inspired by the mystery of the sky, she went on to study physics and astronomy in California and then to work for 20 years as a research astronomer at UC Berkeley. Now retired from research and devoted to inciting a love of the stars and sky in young people, Dr. Hawkins reflects on her own initial moments of inspiration, on sharing her love of stars with others, and on how astronomy can, and should, remind us of our connection to one another, under a canopy of mystery. Dr. Laura Peticolas is a physicist at UC Berkeley's Space Physics Research group. She studies the Aurora to learn more about the Earth and the workings of our Solar System. She's currently working with NASA's Mars data to understand why the Martian aurora looks the way it does. In this podcast she discusses her research, her inspiration and how and why scientists sonify data. We tour the NOAA Atmospheric Research Observatory at the South Pole where scientists are monitoring carbon dioxide levels, CFCs, solar radiation, and the ozone hole.
An Exploratorium and NASA Sun-Earth Connection Education Forum Event Overnight eclipse viewing party at the Exploratorium begins July 31, 2008, at 9 p.m. and continues through Friday, August 1 in the wee hours. San Francisco's Exploratorium brings its fifth eclipse expedition team to remote Xinjiang Province in Northwestern China, very close to the Mongolian border, where the Exploratorium will webcast a total solar eclipse live to the world. Spend the Night at the Exploratorium! See the eclipse in person live at the Exploratorium. Pack your sleeping bag and camp out on the museum floor for an overnight eclipse party...or come to the viewing party in Second Life and enjoy the live webcast, exhibits, and music. How does the interaction of solar radiation on sea ice affect climate change? Please join us as we chat live with Dr. Don Perovich, an expert in the fields of albedo effect and sea-ice mass balance on climate. Want to get off the grid but think it's just too expensive? UCB's Dr. Jeff Grossman explains how nanotechnology may be used to make solar panels cheaper. We'll also hear from philosopher Patrick Lin of the Nanoethics Group about ethical dilemmas that crop up when we try to improve our lives through nanotechnology.
<urn:uuid:00b49044-30f9-4b29-9670-ea9590d14ebc>
2.78125
681
Content Listing
Science & Tech.
51.004428
The last article was on <stdlib.h> Standard Library. This article is on <assert.h> Diagnostics for Programmers. I am assuming a knowledge of C programming on the part of the reader. There is no guarantee of accuracy in any of this information nor suitability for any purpose. If used properly, assertions will allow programmers to much more easily document and debug their code with no impact on runtime performance. Assertions are not meant to be used for production code as they cause the program to terminate with an error condition. Since assertions are never to be used in production code they are not useful in finding runtime errors such as a failure to allocate memory. You must still handle failed return conditions of all function calls the same as always. Instead, what assertions allow you to do is document the assumptions that you make as you program and debug the obvious logic errors that you have made. As you program around these logic errors you can modify your assertions to not die on errors that you are now handling. The example is rogers_example06.c. In this program I will demonstrate the use of assertions by using a simple program that asks for two numbers and then divides the first number by the second. Compile the program with the following: gcc -DNDEBUG rogers_example06.c -o assert and then run ./assert and try to divide by zero. The flag -DNDEBUG will cause your assertions to generate no runtime code. This flag should be used in all production environments. Your program will core dump with almost no indication of the problem. Now recompile the program with the following: gcc rogers_example06.c -o assert Now run the program again and again try to divide by zero. This time it should be much more apparent what the problem is and very easy to locate the exact line that had the problem. As always, if you see an error in my documentation please tell me and I will correct myself in a later document.
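The C program rogers_example06.c itself is not reproduced here, but the same precondition idea can be sketched in Python, whose assert statements are likewise stripped when the interpreter runs with the -O flag, analogous to compiling C with -DNDEBUG. The divide function below is an illustrative stand-in, not the article's program:

```python
# Illustrative sketch of the article's precondition idea, in Python.
# Running `python -O` strips assert statements, much as compiling the C
# program with -DNDEBUG removes the runtime check.

def divide(numerator, denominator):
    # Document the assumption: callers must not pass a zero denominator.
    assert denominator != 0, "denominator must be nonzero"
    return numerator / denominator

print(divide(10, 2))  # 5.0
# divide(1, 0) raises AssertionError with a clear message here; under
# `python -O` the assert vanishes and you get a bare ZeroDivisionError
# instead, just as the NDEBUG build of the C program fails with little
# indication of the problem.
```

As in C, the assertion documents a logic assumption during development; real error handling for production code still has to be written separately.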
See corrections at end of the document to review corrections to the previous articles. void assert(int expression); A correction to Part *: The Standard C Library, P. J. Plauger, Prentice Hall PTR, 1992 The Standard C Library, Parts 1, 2, and 3, Chuck Allison, C/C++ Users Journal, January, February, March 1995 STDIO(3), BSD MANPAGE, Linux Programmer's Manual, 29 November 1993
<urn:uuid:a6706e94-5eaa-4084-8cac-e419f49b0e8d>
2.8125
517
Documentation
Software Dev.
55.800662
Introducing Dr. Bulinski: Hermit Crab Researcher (Part 2) Miles Lightwood (ML): Another idea you shared was an alternate approach to addressing the shortage with printed shells. Please explain. Dr. Katherine Bulinski (KB): One approach that was initially considered is that these printed shells could be distributed in the natural habitat if a hermit crab shortage had been identified. We discussed the use of biodegradable plastics for such an application so that the environmental impact should be less than if you used a non-biodegradable plastic. I think it is very important to be wary of manipulating a natural ecosystem by introducing man-made (or in this case, machine made) products as we may not recognize all of the possible effects of our actions. Another more immediately practical approach would be to encourage people that have hermit crabs to use the printed shells instead of the natural shells available at pet stores. Part of the reason for the shell shortage in some parts of the world is over-collecting of shells in different regions. In the natural environment, empty snail shells would either be used by a hermit crab, be broken apart naturally to become carbonate sand, slowly dissolve into ocean water, become buried or become a surface for small organisms to grow on. When possible we should try to leave natural ecosystems as untouched by people as possible, so printing shells for commercial use so that natural shells can remain a part of the ecosystem would be a positive goal for this project. ML: Is there any advice or insight on hermit crabs or their shells you can provide for Project Shellter contributors? KB: Hermit crabs evolved over time to use the shells of certain species of gastropod as homes. Most snail shells curve to the right and hermit crabs evolved to have an abdomen that curves to the right to make effective use of the shape of the snail shell. 
In my paper I cite research that shows crabs select shells based upon several criteria: - Opening (aperture) size - the opening must be the correct size for the crab to use its larger left claw as kind of protective barrier - Opening (aperture) shape - some crabs prefer round openings, while others may prefer oval openings - Shell length and width - the shell must be the correct size to allow the crab to fully retract when threatened - Shell weight/thickness - the crab expends energy to haul around the shell and therefore it must not be too heavy. The shell must be sturdy enough to withstand being carried around and to withstand possible attacks from predators, so it must not be too light either. - Shell damage - in the natural world, many of the shells that are used by hermit crabs are not pristine. Many have holes and chips along the aperture but are still used because shells are so limited in certain ecosystems. Hermit crabs prefer shells that are undamaged as crabs in damaged shells are more easily evicted by other hermit crabs and these crabs are also more vulnerable to predators. When printing shells for this project it may be necessary to experiment with any or all of these properties to create a shell that a hermit crab will call home. Additionally, the interior surface of a natural shell is smooth, so the lines I see on 3D prints might need to be sanded or otherwise smoothed. ML: Do you have any last thoughts to share with Project Shellter contributors? KB: Hermit crabs play an important scavenging role in both marine and terrestrial ecosystems and I hope this project helps to conserve the snail shells that are found in their natural habitat. I wish all contributors success and am looking forward to watching the project unfold! ML: One more thing: do you think a printed shell will have the sound of the ocean in it like a real one? KB: (Laughs) I don’t know, but there’s only one way to find out! Thank you Dr. Bulinski for acting as research advisor! 
I look forward to sharing the crowd-sourced science of Project Shellter with you. Project Shellter is social! Follow, share and contribute to help save hermit crabs by keeping natural shells in the wild!
<urn:uuid:d9928361-3feb-44a2-8df9-047d0038d486>
3.46875
908
Audio Transcript
Science & Tech.
44.336723
Plants influence everything from food chains to climate change. Yet we know little about the traits that allow plants to survive and adapt to their habitat or to a warming planet. We are a long way from being able to harness these adaptations for our own benefit. Our failure to understand these traits is in part due to the split of biology research into molecular–cellular and ecological–evolutionary disciplines. A new generation of plant scientists is needed to realize the full potential of plant genetics. Plants form the basis of most food chains on the planet. To pass on their genes, plants must find mates, avoid being eaten and compete for resources in an ever-changing environment — all while being rooted to the spot. They have evolved a myriad of strategies to deal with these environmental challenges. Most adaptation strategies are chemical, many involving the production of secondary metabolites, such as alkaloids and steroids, which we, in turn, rely on as the basis of our pharmacological recipe book. Some 100,000 secondary metabolites have been discovered thus far, and technological advances will probably see this number double in the next decade. The environment shapes plants, but plants also influence the environment. They store carbon, fix nitrogen and produce oxygen [1]. They shape weather patterns, provide flood defence, purify water, provide food, and offer solace and inspiration. With nearly 7 billion humans affecting the environmental composition of the planet, however, plants are being forced to function under conditions outside their recent evolutionary experience, and it is unclear what the knock-on effects will be. Modelling studies are beginning to unravel the links between our changing environment and ecosystem health, but more research is needed to inform legislation.
<urn:uuid:a2bccebc-3b27-4778-8e05-d37fb67a35af>
3.90625
341
Knowledge Article
Science & Tech.
31.889326
2 space rocks hours apart point up the danger Bill Cooke, head of the Meteoroid Environments Office at NASA's Marshall Space Flight Center in Huntsville, Ala., said the space agency takes asteroid threats seriously and has poured money into looking for ways to better spot them. Annual spending on asteroid detection at NASA has gone from $4 million a few years ago to $20 million now. "NASA has recognized that asteroids and meteoroids and orbital debris pose a bigger problem than anybody anticipated decades ago," Cooke said. Schweickart's B612 Foundation — named after the asteroid in Antoine de Saint-Exupery's "Le Petit Prince" — has been unwilling to wait on the sidelines and is putting together a privately funded mission to launch an infrared telescope that would orbit the sun to hunt and track asteroids. Its importance cannot be overstated, Schweickart warned. Real life is unlike movies such as "Armageddon" and "Deep Impact." Scientists will need to know 15, 20 or 30 years in advance of a killer rock's approach to undertake an effective asteroid-deflection campaign, he said, because it would take a long time for the spacecraft to reach the asteroid for a good nudge. "That's why we want to find them now," he said. As Chodas observed Friday, "It's like a shooting gallery here." Associated Press writer Alicia Chang in Los Angeles contributed to this story.
<urn:uuid:238db8a1-4026-4966-8adf-279a0708647d>
3.140625
295
Truncated
Science & Tech.
45.908845
Plasma shock waves Launched by two Russian rockets in the summer of 2000, the four Cluster spacecraft then used their own thrusters to get to an elliptical orbit with a closest distance to the Earth of nearly 20 000 km, and getting as far away as nearly 120 000 km: nearly a third of the distance to the moon. Cluster's task is to sample the Earth's magnetosphere. Cluster senses many different physical processes, including the shockwave generated when the solar wind slams into the Earth. Our planet and its magnetic field are an obstacle to the supersonic plasma as it flows away from the Sun. Therefore a 'bow shock' forms to slow down and deflect the plasma around the Earth. Shocks form wherever an obstacle sits in a supersonic plasma flow. The Hubble space telescope has captured images of a bow shock, about 0.25 light years across, formed ahead of a star ploughing through the Orion Nebula. Astrophysical shocks are very energetic (they're the source of some of the highest-energy particles we know about), but to understand how those particles get such huge amounts of energy, we need to know more in general about how shocks work in plasmas. The Earth's bow shock is a good case to study, because it's close enough that we can get plenty of information back about the physics happening there. Cluster is not the first spacecraft to visit the Earth's magnetosphere. So what makes it so different? The answer is that the four spacecraft fly in close formation, giving us four tracks through whatever physical process is happening around the spacecraft. It is therefore possible to sample a volume of space and make measurements in three dimensions. Imperial College London developed 3-axis magnetometers, University College London's Mullard Space Science Laboratory designed and built electron detectors, and Sheffield University developed the wave processors for each satellite. Find out more about the Cluster mission at the European Space Agency's website!
<urn:uuid:5b65925f-b516-4c78-b38c-33f869cc74c3>
4.125
406
Knowledge Article
Science & Tech.
47.67656
The Science Guys Science Guys > January 2003 What is gravity and where is it the strongest in the United States? Of the four fundamental forces - gravity, electromagnetic, strong nuclear, and weak nuclear - gravity is the one with which people are most familiar. Isaac Newton is credited with 'discovering gravity'. What he actually did was give us an analytical interpretation of gravity, that is, describe the quantities that determine the gravitational force on an object. According to Newton, any two objects have an attractive force trying to pull them together. The magnitude of this force depends upon the mass of each object and the distance between the centers of the two objects. Mathematically, we say the force of gravity depends directly upon the masses of the objects and inversely upon the distance between the objects squared. [ F = G M1 M2 / D^2 ] The G in the relationship is a constant that is called the universal gravitational constant. For everyday objects like people, cars, balls, and planes the force of gravity between any two of these objects is so tiny it is insignificant. However when one of the objects is very massive, such as the Earth, then the force of gravity becomes significant. Your weight is actually just the force of gravity between your body's mass and the Earth's mass. We feel the Earth pulling on us with a force that we call gravity. The 'force of gravity' is often expressed in terms of the acceleration the gravitational force will give an object when the object is dropped. This acceleration of gravity is written as a small g and is used to describe the strength of gravity at different locations on Earth as well as on other planets. In general, the closer the centers of two objects, the greater the force of gravity becomes. Therefore, you would expect gravity in the United States to be stronger wherever you are closest to the center of the Earth. The lowest spot in the U.S.
is Death Valley, so gravity would be expected to be stronger there (where you are closest to the Earth's center). However, the Earth is not uniform material throughout its interior and gravity depends upon exactly how much mass is between you and the center of the Earth. Therefore, as you move around the U.S., the acceleration due to gravity (g) varies from about 9.79 to 9.81 meters per second squared. The Earth's average is 9.80 m/s^2 (32 ft/s^2), which is generally reported as the acceleration of gravity on Earth. This means a dropped object will speed up 32 ft/s every second it falls, assuming no air resistance. For comparison, the acceleration of gravity on the Moon is only 1.63 m/s^2 due to its composition and smaller size. Newton's explanation of gravity is adequate for everyday life. However, to understand the most elaborate details of gravity one must include Einstein's gravitational theory, which is mathematically complex.
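Newton's relationship can be evaluated directly. The following sketch (constants and function names are mine, not the column's) plugs the Earth's mass and mean radius into F = G M1 M2 / D^2 and recovers a g close to the 9.80 m/s^2 average quoted above:

```python
# Sketch: Newton's law of gravitation, F = G * M1 * M2 / D^2.
G = 6.674e-11        # universal gravitational constant, N m^2 / kg^2
M_EARTH = 5.972e24   # mass of the Earth, kg
R_EARTH = 6.371e6    # mean radius of the Earth, m

def gravitational_force(m1, m2, d):
    """Attractive force (N) between masses m1, m2 (kg) separated by d (m)."""
    return G * m1 * m2 / d**2

def surface_gravity(mass, radius):
    """Acceleration g = G * M / R^2 (m/s^2) at the surface of a body."""
    return G * mass / radius**2

g = surface_gravity(M_EARTH, R_EARTH)
print(f"g at Earth's mean radius: {g:.2f} m/s^2")  # close to the 9.80 average

# Your weight is just the gravitational force between you and the Earth:
print(f"Weight of a 70 kg person: {gravitational_force(70, M_EARTH, R_EARTH):.0f} N")
```

Changing the radius argument slightly mimics moving closer to or farther from the Earth's center, which is why g varies a little from place to place.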
<urn:uuid:c577fc7c-3027-4d19-89df-a2706c1f98b2>
3.5625
605
Knowledge Article
Science & Tech.
51.646222
Atomic Energy Levels (More about Bohr's Atomic Model for Hydrogen) The electric (Coulomb) force between the proton and the electron _is_ the centripetal force that holds the electron in a circular orbit, k e^2 / r^2 = m v^2 / r. The total energy of the system is the kinetic energy plus the electric potential energy, E = (1/2) m v^2 - k e^2 / r. From setting the electric force equal to the centripetal force, we know (1/2) m v^2 = k e^2 / (2 r), so the total energy is E = - k e^2 / (2 r). This equation connects energy with orbital radius r. But what determines r? Bohr found that the following condition gave the desired results: An electron is allowed only to be in a state or orbit such that its angular momentum, L = mvr, is equal to an integer n multiplied by h divided by 2π, m v_n r_n = n (h / 2π), n = 1, 2, 3, . . . where h is again Planck's constant (h = 6.63 x 10^-34 J s). v and r now carry subscripts as v_n and r_n to indicate that they correspond to a particular value of n. n is called a quantum number. Restricting the angular momentum to particular, discrete values is referred to as the quantization of angular momentum. This quantization means the energy is now restricted to particular, discrete values. Using the centripetal force equation and the quantization of angular momentum, we can solve for r_n, r_n = n^2 h^2 / (4 π^2 m k e^2), n = 1, 2, 3, . . . This means the total energy is restricted to the following particular, discrete, quantized values, E_n = - 2 π^2 m k^2 e^4 / (h^2 n^2), n = 1, 2, 3, ... Evaluating this gives E_1 = -13.6 eV, so E_n = E_1 / n^2. We can represent this on an energy-level diagram. Recall what negative energies mean. The hydrogen atom is a bound state. We must provide 13.6 eV of energy -- or do 13.6 eV of work -- to break it apart or to separate the proton and electron so they are infinitely far away from each other. Photons are emitted as the hydrogen atom makes a transition from one allowed state to another allowed state. The photon's energy is equal to the difference in energy of those two states. Absorption of photons is the reverse process of emission.
If a photon with energy equal to the difference in energy of two states of an atom passes by, that photon may be absorbed and its energy will put the atom into a higher energy state. The photon's energy equals the change in energy of the atom because energy is conserved. If the photon's energy is not equal to the difference in energy of two states of the atom, the photon will not be absorbed. This explains the line spectra observed in absorption. In continuous or white light, photons of all wavelengths are present. Only those with particular energies (or wavelengths) corresponding to differences in energy between allowed states will be absorbed; all others pass by untouched. (c) Doug Davis, 1997; all rights reserved
<urn:uuid:ed752b03-3efa-4f2e-a04b-2a80d55e3f16>
3.71875
601
Academic Writing
Science & Tech.
56.871042
The challenges currently presented by our world are unprecedented in scale and urgency: climate change; deforestation; freshwater shortage; biodiversity loss; food insecurity; desertification and more, all of which require resolute address. The recent fires in Russia and tragic flooding in Pakistan have demonstrated the increasing severity of climate change at only 0.8° Celsius global average temperature rise, while also serving as stark precursors of what may be to come at 2°C or more. One of the challenges with the current climate change mitigation approach is that it focuses too much on reducing CO2, which has a very long atmospheric lifetime. Dr. David Archer of the University of Chicago in the US states: “The idea that anthropogenic CO2 release may affect the climate for hundreds of thousands of years has not reached general public awareness.” What this means is that any CO2 reduction today may lower future heating, but it will not bring about planetary cooling in our lifetimes. Instead, scientists and the Arctic Council are recommending we focus on reducing “shorter-lived climate forcers,” to create rapid cooling by reducing methane, ozone and black carbon. Methane and black carbon are much more potent than CO2 in terms of trapping heat in the atmosphere, and methane dissipates out of the atmosphere in about 12 years. Black carbon goes in a few months and ozone in a few hours. Averaged over 20 years, methane traps around 72 times more heat than CO2. Over a 5-year period, methane is seen to be even more potent, trapping around 100 times more heat than CO2. At the same time, methane leaves the atmosphere within a decade, while CO2 will stay around warming the planet for hundreds, possibly thousands of years. The UN points out that livestock is the single greatest source of methane, at 36 percent of total anthropogenic methane.
Furthermore, because methane is a building block of ozone, reductions of methane anywhere will reduce ozone, which is the third most prevalent greenhouse gas after CO2 and methane.

Short-lived Climate Forcers

Until recently, black carbon was not actually considered to have a warming potential at all. Black carbon or soot is produced by the burning of vegetation and fossil fuels and is 680 times more heat-trapping than CO2. One of the biggest problems black carbon creates is its absorption of heat from the sun's rays, which also warms the surrounding air. When this sooty residue lands on the snowy regions of our planet, not only does it melt the snows faster but it also darkens the surface, consequently reflecting less of the sun's rays back into the atmosphere. Over 90 percent of the black carbon emitted by nations in the Arctic region comes from agriculture, forest or peat fires. Scientists found that 60% of the black carbon particles in Antarctica were carried there by the wind from South American forests, which are burned to clear land mainly for livestock grazing or the growing of soy for animal feed. Because over 80 percent of agriculture in the Amazon is for cattle grazing and raising soy for animals, reducing consumption of animal products is possibly the fastest way to reduce shorter-lived climate forcers. With black carbon staying in our atmosphere for a matter of a few weeks, as opposed to CO2, which remains for 100 years or more, addressing black carbon is a key to mitigating climate change. Two thirds of the plant and animal species on earth reside in tropical forests, of which 20 million hectares are destroyed every year, releasing 2 billion tons of CO2 into the atmosphere and contributing 20 to 25% of global warming. According to the UN Environmental Program Year Book 2009, and a World Bank report, our planet is quickly approaching “tipping points” which will cause drastic climate changes within the next few years.
One of the greatest dangers is the collapse of the Greenland or West Antarctic ice sheets, entailing a subsequent sea-level rise of 20 feet, which would submerge low-lying islands, coastal regions and flood river deltas upon which the global food supply depends, perhaps as early as 2100. The UN reports that desertification is not only accelerating but also contributing to climate change, as the loss of vegetation reduces carbon sinks and increases emissions from biodegrading plants.
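The heat-trapping comparisons quoted earlier (methane at roughly 72 times CO2 over 20 years, black carbon at 680 times) amount to a simple CO2-equivalent multiplication. A sketch using only the factors cited in this piece:

```python
# 20-year global warming potential (GWP) factors quoted in the text above.
GWP_20YR = {"CO2": 1, "CH4": 72, "BC": 680}

def co2_equivalent(tonnes, gas):
    """Tonnes of CO2 that would trap the same heat over 20 years."""
    return tonnes * GWP_20YR[gas]

# One tonne of methane traps as much heat over 20 years as 72 tonnes of CO2.
ch4_equiv = co2_equivalent(1, "CH4")
```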
<urn:uuid:375858f4-b926-4811-a9fa-43ff932eaee3>
3.84375
859
Comment Section
Science & Tech.
35.202292
The Pacific will compress by 250 miles at the Equator during the 7 of 10 S American roll. This 250 mile compression is the sum total of 47 miles due to the tilting of the Philippine Plate, and 78 miles due to the folding of the Mariana Plate and Mariana Trench. An additional 125 miles of compression comes from the overlapping of the Pacific Plate parts in the center of the Pacific. ZetaTalk Explanation 12/4/2010: The Mariana Islands on the lifting eastern edge of the Mariana Plate will tilt and move an estimated 47 miles closer to the Philippine Islands. The Mariana Plate and the Mariana Trench to the east of this plate will essentially disappear, having folded, with only the Mariana Islands in a tentative survival situation. This provides an estimated 125 miles of room for S America to roll to the west, but the plate boundaries in the central Pacific have also been steadily adjusting. Overall S America now has 250 miles to roll, dragging the Caribbean and pushing over the Cocos and Nazca plates before it. This 250 miles is the degree of rip in the south Atlantic Rift, affording the African Plate roll room to maneuver. The Pacific, per the Zetas, is not one plate but four. ZetaTalk Explanation 6/2010: The Pacific Plate is assumed to be a single plate, but it is not. Hawaii, which rides higher after every major adjustment in the area, is rising, and this can only be the case if there is subduction of a plate somewhere, pushing the plate that Hawaii rides on up. The Society Islands are on a chain that forms a line with the Hawaii Islands, and such a rise is not a coincidence. This is also a fault line, where a plate that is subducting under the Americas is rising commensurately along these island chains. There is a fault line running from Kamchatka to the Society Islands, and both will rise during the pole shift.
When compression along this border occurs, the Aleutian Islands would be pulled into a bow, snapping with quakes, and the ocean buoys would show heaving of the floor due to temporarily compressed magma in the area. This occurred on June 24, 2011 when the Aleutian Islands suffered a 7.2 quake and buoys in the area showed a heaving or undulating floor in the days before the quake. Clearly, something happened on the ocean floor! This was reflected all the way to the plate under Australia on that same day, June 24, 2011, as noted by a sharp-eyed Pole Shift ning member. The curve under Sumatra and Java lifted, while the Coral Sea experienced more water. As the Zetas have long explained, the Earth changes, plate movements of the 7 of 10 scenarios especially, cause rock layers to slide across each other, releasing trapped methane gas. Birds are acutely sensitive to methane, thus the canary in the coal mine situation where miners are warned of methane in the air when their canaries stop chirping. Bird deaths around the world were featured in Issue 211 of this newsletter, and were mentioned within ZetaTalk in 2005 and 2007 when there were Earth changes. ZetaTalk Explanation 1/9/2007: Why were these Earth farts and moving ground experienced from Italy and the UK throughout the US and even in Australia, all seemingly simultaneously? We have explained that the plates of the globe have been loosened up, the rock fingers holding them firmly against one another broken off, so a fluidity has resulted. Methane kills by asphyxiation, a lack of oxygen. - Health Effects of Methane - When methane is present at high concentrations, it acts as an asphyxiant. Asphyxiants displace oxygen in the air and can cause symptoms of oxygen deprivation (asphyxiation). It is not expected to cause unconsciousness (narcosis) due to central nervous system depression until it reaches high concentrations. Massive fish kills have been reported in the past few months.
Many of these are due to the heavy winter in the northern hemisphere, where so much ice formed over inland lakes that the fish were deprived of oxygen. But excluding all reports that have any kind of an explanation, such as cold fronts killing tilapia in the Philippines, or fish dying in calderas that might be emitting volcanic gasses, or aquatic life in a frozen inland lake or downstream from a possible pollutant discharge, there are still reports worldwide, affecting a wide variety of species, as noted on the Pole Shift ning. Those cases, which have no explanation from the experts or occurred on ocean beaches where pollution is unlikely to have caused the kill, remain a mystery. Only plate movement with resultant sudden release of a methane cloud can explain these deaths. - Massive Fish Die-Off likely due to Oxygen Depletion March 8, 2011 - Redondo Beach officials said initial assessments suggest oxygen depletion in the King Harbor basins caused the massive fish die-off. - 1,000's of Dead Fish have been Washing up on Western Australian Beaches March 10, 2011 - Dead fish, eels and crayfish have been found on shores - with the coastline of Green Head, some 290km north of Perth, covered in now-rotting carcasses. In a cleanup operation at Green Head, 20km north of Jurien Bay, more than 15,000 dead fish were collected. - 50 Dead Birds fall from the Sky near Sterling [Kansas] April 7, 2011 - The strangest part was they all died within minutes of each other. - The Dead Turtles Found on the Beach, Australia April 15, 2011 - Ten dead green turtles were found on an Australian beach at Boyne Island and Tannum Sands, near the mouth of the Boyne River. Autopsies were carried out, after which experts excluded boat strikes, starvation, and predation from the list of possible causes of death.
- Mass Stranding of "Sea Frogs" on a Kamchatka Peninsula Beach April 18, 2011 - Along much of the island's coastline, massive strandings of marine frogs from the Bering Sea have been observed. Such mass strandings of marine frogs were not recorded even once on the island during many years of observation. - Fears of Ocean Toxicity off Cape Town April 20, 2011 - Hundreds of dead abalone have washed up on Melkbos beach near Cape Town in recent weeks, prompting fears that the ocean in the area between Bloubergstrand and Melkbosstrand is toxic. - Ecuador, Playa Villamil Dead Fish Row May 23, 2011 - A row of dead fish, at least five feet wide and 3 km in length, appeared on Saturday on the beach, about 2 km along the road to Villamil, in the canton of Playas, Guayas Province. - Mystery Disease Kills 300 Sheep Within an Hour May 28, 2011 - A Saudi farmer who went into his barn to take his 300 sheep on their daily pasturing was shocked when he found them all dead. The farmer said he checked the sheep an hour earlier and they were all alive in their barn at his farm in the western town of Qunfudha. - 600 Dead Penguins Wash up in Uruguay June 8, 2011 - About 600 dead penguins, along with dead turtles, dolphins and albatrosses have washed up on Uruguay's Atlantic coast over the past few days. Experts are trying to determine what has killed the animals.

Global Warming Fail

According to Al Gore, the Global Warming debate is alive and well, and not a debate at all! Despite the embarrassment of fraudulent data and the fact that the Earth changes are not lining up with projections, Gore plunges on. But matters are getting heated (no pun intended). Tempers are flaring. - Scientists Face Death Threats Over Climate June 20, 2011 - The chief of one of Australia's peak science bodies says she has received a death threat relating to the climate change debate.
The Federation of Australian Science and Technological Societies (FASTS) says misleading claims about climate science are spilling over into attacks on the credibility of scientific research in general. - Al Gore Blasts Obama On Climate Change For Failing To Take 'Bold Action' June 22, 2011 - Former Vice President Al Gore is going where few environmentalists - and fellow Democrats - have gone before: criticizing President Barack Obama's record on global warming. In a 7,000-word essay for Rolling Stone Magazine, Gore says Obama has failed to stand up for "bold action" on global warming and has made little progress on the problem since the days of Republican President George W. Bush. He says the president "has simply not made the case for action." Per the Zetas, admitting that it was all a hoax, a part of the Planet X cover-up, would be just too humiliating. ZetaTalk Perspective 6/25/2011: How to salvage the Global Warming excuse? This has been the darling of the cover-up crowd, the reason for the erratic weather, the high tides and rising seas, even the reason the world's populace should shift to a more vegetarian diet and less reliance on fossil fuels. Poorly managed, this ultimately was revealed for the hoax it is, data manipulated at the highest levels. Now those who have been hurt by legislated demands that they adjust their carbon footprint are furious and demanding blood. Should Al Gore apologize? The Global Warming hoax marched past the revelation of their fraudulent data and they will march past lack of cooperation from Obama, holding firm as the alternative is unthinkable. The alternative is humiliation, but more than that it would raise the question of why such a fraud was considered necessary. However, apologies will not be forthcoming due to the humiliation factor, even when the presence of Planet X is obvious. 
Electromagnetic pulse, strong enough to stop all electronics on Air France 447 when it flew over the magnified Atlantic Rift, will increasingly create havoc on Earth. This is also true of the charged tail of Planet X, which has been wafting over the Earth, creating a blood-red Moon on occasion. Clocks have started surging. Clocks and watches come in many varieties. Some, atomic clocks or watches, regularly check in with the Navy's master clock and reset if out of sync. This includes the Internet, and servers or PC's attached to the Internet, which in the main adjust their time to the Navy's. But most clocks are simply plugged into the electric grid or running on battery. If using the grid, they may require manual adjustment when the power is out. So why should those clocks suddenly be running fast? This occurred in Sicily in early June, 2011 as reported in - Clocks in Sicily Mysteriously Jump Ahead June 9, 2011 - For over a week, the digital clocks in Catania have been perplexing owners by skipping ahead 15 to 20 minutes every day. The inexplicable time changes caught the attention of two computer technicians in Sicily, who turned to Facebook to confirm the phenomenon. Experts in the University of Catania's electronic engineering department have been unable to give a singular answer to explain the irregularity. The Zetas have warned that mankind's electronics will have interference as Planet X approaches and the charged tail wafting the Earth will increasingly be pointed directly at the Earth. ZetaTalk Prediction 2/17/2007: We have stated that electromagnetic surges will occur, increasingly, as Planet X approaches and in particular as the tail wafts past the Earth. Tail effects are already being felt by humans, who report vertigo and nausea due to the gas in the tail. The compass has been erratic for a couple years now, but as the tail is charged, will have more problems than in the past.
The magnetic fields of Planet X and Earth clash, and this is not a steady state but a surging affair. In addition, there are more particles than electrons and magnetons involved, particles man is unaware of, that surge. The Zetas also predicted that the establishment would thrash about to explain electronic interference, a surge or brownout. ZetaTalk Explanation 7/18/2009: There is too much complexity in electrical systems to make a blanket statement. Clearly Air France 447 had electrical interference, as did the Airbus crash off the coast of Yemen. The Airbus is completely dependent upon electrical controls, unlike other manufacturers which allow for manual controls during an emergency. Likewise, trains such as the DC Metro and the Disney Monorail were relying upon wireless signals for their automatic system. This is a window of vulnerability for electromagnetic interference. Where a blanket statement cannot be made, it is clear that interference is occurring when communications on the Internet bounce via satellites, during phone service, and even during the use of household appliances. There have been reports on this chat that appliances which refused to work were found to be sound and working later. Excuses will always be given in the press, no matter how illogical, as the rule is still to deny the presence of Planet X. If and when the Sun begins to have sunspots or activity that can be blamed, this will be cited as the cause. Lies may be spread about the activity of the Sun in order to allow this route to be used, so the public will not know the truth. Up until the present time, the Sun has been too quiescent, so such a claim would be questioned. But a desperate cover-up knows no bounds. An early excuse was to blame the Sun, which was due to have its solar maximum in 2011 or 2012. But the Sun has been too quiescent. Now what? The establishment has amazed everyone by coming up with an excuse to cover the surging electronic clocks. 
They are going to run a yearlong exercise in the US to create surging on the electric grid. This is to start in July, 2011. How on Earth does that explain what is happening in Sicily now? And from comments on the Pole Shift ning, clearly happening in the US already. And if they want to experiment, why not do it in the lab? This shows the desperation of the establishment. - Power-Grid Experiment Could Confuse Electric Clocks June 24, 2011 - A yearlong experiment with America's electric grid could mess up traffic lights, security systems and some computers - and make plug-in clocks and appliances like programmable coffeemakers run up to 20 minutes fast. "A lot of people are going to have things break and they're not going to know why," said Demetrios Matsakis, head of the time service department at the U.S. Naval Observatory, one of two official timekeeping agencies in the federal government. Since 1930, electric clocks have kept time based on the rate of the electrical current that powers them. If the current slips off its usual rate, clocks run a little fast or slow. The group that oversees the U.S. power grid is proposing an experiment that would allow more frequency variation than it does now. The test is tentatively set to start in mid-July. This won't change the clocks in cellphones, GPS or even on computers, and it won't have anything to do with official U.S. time or Internet time. East Coast clocks may run as much as 20 minutes fast over a year, but West Coast clocks are only likely to be off by 8 minutes. You received this Newsletter because you subscribed to the ZetaTalk Newsletter service. If undesired, you can quickly Unsubscribe.
<urn:uuid:3a4bc6b3-5655-4b06-bcf9-71398dfc81df>
3.46875
3,333
Comment Section
Science & Tech.
49.531589
The pair of observers that make up NASA’s Solar Terrestrial Relations Observatory (STEREO) have been traveling since 2006 to reach opposite sides of our star, and they just beamed back the first 360-degree solar images. The satellites are in the same orbital path as Earth, more or less, and have just taken up their final positions — one is where we’ll be in three months, and the other where we were three months ago. (The first has NASA’s least imaginative name to date: STEREO A, for “ahead.” The second is called STEREO B, for…you can probably guess.) [TIME] Seeing the far side of the sun isn’t just a scientific curiosity. It could also help researchers figure out the sun’s violent outbursts, like the coronal mass ejections that could endanger astronauts and foul up satellites if one headed for Earth. “Nothing can hide from us anymore,” says Joseph Gurman, an engineer at NASA’s Goddard Space Flight Center in Greenbelt, Maryland and project scientist on the STEREO mission. The wraparound coverage allows researchers to see violent eruptions emanating from the Sun’s surface that they might otherwise miss, he adds. [Nature] And, says project scientist Richard Harrison, sometimes the best way to see what’s headed right at you is to have eyes on the side. “By being away from the Sun-Earth line, you can look back at the space between the Sun and the Earth and see any of these clouds, these coronal mass ejections that are thrown out of the Sun and are coming our way – you can even see these things passing over the Earth. Those are the key to what Stereo’s all about.” [BBC News] For plenty more about this catalog of the entire sun, check out the post by DISCOVER blogger Phil Plait.
80beats: NASA Probe Will Head to the Sun, Withstand 2600-Degree Heat
80beats: New Looks at the Moon’s Mysterious Core & the Sun’s Scalding Corona
DISCOVER: Space Weather and the havoc it can cause
DISCOVER: Seeing Sun Storms in Stereo
<urn:uuid:180d74ac-5958-443f-ac46-3d815da8a672>
3.5
479
Personal Blog
Science & Tech.
49.650272
One of the new features that we are adding to the language is the concept of a private partial method. In order to discover the rationale behind this feature, let's consider a scenario where a designer generates some code and wishes to allow the user to customize certain behavior at certain points in the generated code. Typically in object oriented systems, there are a few ways that generated code can provide hooks for user customization. The traditional way is that the generated code defines a base class (say Customer), and several virtual/abstract methods are defined and consumed in the base class (say, a ValidateAddress method is defined and consumed in the SubmitChanges method). The user then provides a class that derives from Customer and overrides ValidateAddress so that custom validation can be performed.

1: Class Customer
2:     Protected MustOverride Sub ValidateAddress()
3:     Public Sub SubmitChanges()
4:         ' do some work here
5:         ValidateAddress()
6:         ' do some work here
7:     End Sub
8: End Class
9:
10: Class MyCustomer : Inherits Customer
11:     Protected Overrides Sub ValidateAddress()
12:         ' validate the address here
13:     End Sub
14: End Class

This approach has problems; in particular, requiring a user to inherit from a class to provide customization can lead to fragile code, coupling, etc (see any standard Design Patterns text on why aggregation is preferred over inheritance). One solution for this is to make use of delegates; instead of the user creating a new class and overriding methods, the user can provide functions that perform the custom validation and pass delegates to the SubmitChanges method.
2: Public Delegate Sub ValidateAddressDelegate()
3: Public Sub SubmitChanges(validateAddress As ValidateAddressDelegate)

10: Sub validateAddress()
11:     ' validate the address here
12: End Sub

14: Sub UseCustomer(c As Customer)
15:     c.SubmitChanges(AddressOf validateAddress)
16: End Sub

As you can tell by the example, there are some small difficulties with this approach; where should the method validateAddress live? Should I define a new class to hold these methods to be passed off as delegates? Or a new module? And you can imagine that if you want to validate every field in say a database using this approach, you have to create delegate types for each field, and this can get messy. VB9 makes things slightly easier because we are introducing relaxed delegates (more on this in a later post), but overall, this is an ok solution, maybe not the best. Another possible solution might be to create an event and then raise the event, so that clients who are interested in the event just need to add a handler to the event. The problem here is that the event mechanism is very generic, and it's hard to limit the scope of the event. In general, the problem with all of these solutions is the fact that validation exposed by the generated code is visible outside the generated code. What if I only wanted the user's partial class to customize the validation? None of the above solutions really fit the bill, because the fact that there's customization going on leaks. This is the problem that partial methods solve.
Consider the following example:

1: ' Generated by a designer
2: Partial Class Customer
3:     ' Notice the Partial keyword, and the fact that the method body is empty
4:     Partial Private Sub ValidateAddress(addr As Address)
5:     End Sub
6:
7:     Public Sub SubmitChanges()
8:         ' do some work here
9:         ValidateAddress(addr)
10:         ' do some work here
11:     End Sub
12: End Class
13:
14: ' This is the user part of the partial class
15: Class Customer
16:     ' Note that the signature of ValidateAddress matches the partial signature
17:     ' but with the partial keyword removed
18:     Private Sub ValidateAddress(addr As Address)
19:         ' Do some validation
20:     End Sub
21: End Class

Line 4 above introduces the concept of the partial method declaration. The use of the Partial keyword indicates to the compiler that this is the declaration of the method. The method body is empty. Notice that in the same class, SubmitChanges calls the partial method. Then in the user class (your class that customizes the partial class defined by the designer), you can provide an implementation for the partial method (line 18). The Visual Basic IDE has some cool support to help you generate the exact signature stub, so that you don't have to. When the compiler sees the implementation and it matches a private partial method declaration, the method is now "live" and the call to ValidateAddress in SubmitChanges (line 9) calls the ValidateAddress method you specified in line 18. So now you can extend the code generated by the designer, without leaking the customization points outside the class. As a bonus, if your customer class does not provide an implementation for the ValidateAddress method (i.e., lines 18 to 20 are removed), then the compiler does not have an implementing method for the ValidateAddress partial method, and as a result, the call in line 9 is completely removed by the compiler. So there is no overhead in partial methods at all.
In future versions of Visual Basic, we may revisit and extend the design of partial methods, but I think what we have in VB9 is valuable for these sets of scenarios. In particular, you'll find that the LINQ to SQL designer makes use of partial methods in the code that it generates. This is all cool and all, but how can you take advantage of partial methods? For the most part, your interaction with partial methods will be through code generated by designers such as the LINQ to SQL designer, using the pattern I described above. In addition, you may choose to split a class across separate files for sake of collecting and factoring ideas into their own files; in this case, you may want to separate "stub" or "boilerplate" logic with customization logic. Here, it may make sense to define partial methods and use them in the stub partial class, and customize it in the partial class defined in another file. As always feel free to let me know if you have any questions.
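The same "optional hook" idea can be sketched outside VB. A hypothetical Python analogue (the class and method names are illustrative, not from any real designer output) checks whether the user part supplied an implementation and simply skips the call otherwise, mirroring how an unimplemented partial method call is removed by the compiler:

```python
# Hypothetical sketch of the partial-method pattern: the "generated"
# base class calls a validation hook only if the user provided one.
class Customer:
    def submit_changes(self):
        log = ["work-before"]
        # Look up the hook dynamically; if the user never defined it,
        # the call is skipped entirely (no error, no overhead to speak of).
        hook = getattr(self, "validate_address", None)
        if callable(hook):
            hook()
        log.append("work-after")
        return log

# The "user part": supplying validate_address makes the hook live.
class MyCustomer(Customer):
    def validate_address(self):
        self.validated = True

c = MyCustomer()
c.submit_changes()
```

Unlike real partial methods, this dispatch happens at run time rather than compile time, but the visible behavior (an absent hook is silently skipped) is the same.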
<urn:uuid:7371da23-f1ac-4182-828c-4f2df0000ab5>
2.828125
1,302
Documentation
Software Dev.
35.081604
|Mean distance from Earth||150,000,000 km|
|Visual brightness (V)||-26.8m|
|Relative diameter (dS/dE)||109|
|Surface area||6.09 × 10¹² km²|
|Volume||1.41 × 10²⁷ m³|
|Mass||1.9891 × 10³⁰ kg|
|Relative mass to Earth||333,400|
|Density||1411 kg m⁻³|
|Relative density to Earth||0.26|
|Relative density to water||1.409|
|Surface gravity||274 m s⁻²|
|Relative surface gravity||27.9 g|
|Surface temperature||5780 K|
|Temperature of corona||5 × 10⁶ K|
|Luminosity (LS)||3.827 × 10²⁶ J s⁻¹|
|Period of rotation|
|At equator:||27d 6h 36m|
|At 30° latitude:||28d 4h 48m|
|At 60° latitude:||30d 19h 12m|
|At 75° latitude:||31d 19h 12m|
|Period of orbit around the Galaxy||2.2 × 10⁸ years|
The Sun, sometimes called Sol, is the star in our solar system. The planet Earth and all of her sister planets, both the other terrestrial planets and the gas giants, orbit the Sun. Other bodies that orbit the Sun include asteroids, meteoroids, comets, Trans-Neptunian objects, and, of course, dust. The Sun is a main sequence star, with a spectral class of G2, meaning that it is somewhat bigger and hotter than the average star but far smaller than a red giant star. A G2 star has a main sequence lifetime of about 10 billion years, and the Sun is probably about 5 billion years old, as determined by nucleocosmochronology. At the center of the Sun, where its density is 1.5 × 10⁵ kg m⁻³, thermonuclear reactions (nuclear fusion) convert hydrogen into helium. 3.9 × 10⁴⁵ atoms undergo nuclear reactions there every second. This releases energy which escapes from the surface of the Sun as light. Physicists are able to replicate thermonuclear reactions with hydrogen bombs. Sustained nuclear fusion on earth for electricity generation may be possible in the future, with nuclear fusion reactors. All matter in the Sun is in the form of plasma due to its extreme temperature. This makes it possible for the Sun to rotate faster at its equator than it does at higher latitudes, since the Sun is not a solid body.
The differential rotation of the Sun's latitudes causes its magnetic field lines to become twisted together over time, causing magnetic field loops to erupt from the Sun's surface and trigger the formation of the Sun's dramatic sunspots and solar prominences. For some time it was thought that the number of neutrinos produced by the nuclear reaction in the Sun was only one third of the number predicted by theory, a result that was termed the solar neutrino problem. When it was recently found that neutrinos had mass, and could therefore transform into harder-to-detect varieties of neutrinos while en route from the Sun to Earth, measurement and theory were reconciled. Observation of the Sun can reveal such phenomena as: Several newspapers are called The Sun. Sun is a commonly used name for the computer company Sun Microsystems.
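The luminosity in the table can be cross-checked from the surface temperature using the Stefan-Boltzmann law, L = 4πR²σT⁴. This is a sketch; the solar radius (~6.96 × 10⁸ m) and the Stefan-Boltzmann constant are standard values, not listed in the table:

```python
import math

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4 (standard value)
R_SUN = 6.96e8     # solar radius in m (standard value, not from the table)
T_SURF = 5780      # surface temperature in K, from the table above

# Luminosity of a blackbody sphere: L = 4 * pi * R^2 * sigma * T^4
L = 4 * math.pi * R_SUN**2 * SIGMA * T_SURF**4
# This comes out within about 1% of the table's 3.827e26 J/s.
```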
<urn:uuid:1877909a-933e-493a-b8c3-770fc4274b03>
3.734375
752
Knowledge Article
Science & Tech.
63.302959
In Starting E and Elmer, we see the various ways to get started interacting with an E interpreter. In Example: Finding Text, we introduce the major concepts you need to get started programming in E as a conventional language. We use these concepts to write some simple functions for finding text in files on your disk. In Standalone E Programs, we see how to package our findall function so that it can be invoked from our operating system's shell (MS-DOS or bash), and how to turn it into a launchable GUI application that prompts for its arguments using Swing dialog boxes. In A 15 Minute Introduction to E, Marc Stiegler takes us on a quick tour of E's highlights, focusing on the features that distinguish E from other languages. In Lambda-Based Objects, we see how to define new objects in E. E's object definitions are generalizations of the function definitions you've already written earlier in the tutorial. E objects have all the features found in objects made from traditional classes and prototypes, yet are actually simpler to define, and can even solve problems beyond the scope of traditional class structures. In Introducing Remote Objects, we see how to give objects on different machines access to each other. An object may send a message to any object it has access to, whether it's local or remote. All the inter-machine communication is protected by the strong cryptography of E's Pluribus protocol. Armed with secure distributed objects, Secureit-Echat is a secure two-person chat program, written by Marc Stiegler in 5 pages of E (about 3 of which are user interface) and posted at his site. It's a great small example of how to write distributed secure applications in E. In Simple Money Example, we see how a single page of E code can implement a payment system with most of the security properties required for real distributed electronic money.
Money is the simplest interesting example of a Smart Contract -- an arrangement by which various mutually suspicious entities -- objects or people, it doesn't matter -- may attempt to cooperate with each other while not becoming vulnerable to each other. In the end, one may say that normal object programming is about patterns of computation and abstraction, whereas programming in E is about patterns of cooperation without vulnerability. A world in which cooperation is less risky may be a more cooperative world. Unless stated otherwise, all text on this page which is either unattributed or by Mark S. Miller is hereby placed in the public domain.
<urn:uuid:724d1b4a-6033-4018-824a-7b25060b8000>
3.125
507
Content Listing
Software Dev.
39.51246
Not just birds, but also a few species of bats face a long journey every year. Researchers have studied the migratory behavior of the largest extant family of bats, the so-called “Vespertilionidae” with the help of mathematical models. They discovered that the migration over short as well as long distances of various kinds of bats evolved independently within the family. - Birds migrate together at night in dispersed flocks, new study indicatesMon, 7 Jul 2008, 13:28:53 EDT - Study finds that long-distance migration shapes butterfly wingsThu, 11 Feb 2010, 10:30:05 EST - Migratory behavior affects the size of brains in birdsThu, 29 Apr 2010, 10:19:37 EDT - Study of polar dinosaur migration questions whether dinosaurs were truly the first great migratorsTue, 21 Oct 2008, 16:15:01 EDT - Smithsonian scientists report changes in vegetation determine how animals migrateWed, 11 May 2011, 16:51:20 EDT
<urn:uuid:aeeb4075-a84f-469b-a5cd-a1703d9b9189>
3.96875
205
Content Listing
Science & Tech.
25.264944
3. The entropy change, ΔS, associated with the isothermal (i.e. constant temperature) change in volume of n moles of an ideal gas from volume V1 to volume V2 is given by the equation ΔS = nR ln(V2/V1) where R is the gas constant. If an expansion occurs (V2 > V1), use Equation 1 to deduce whether there will be an increase or decrease in the entropy of the gas. If V2 > V1, then V2/V1 > 1, so ln(V2/V1) > 0, and as n and R are both > 0, we have ΔS > 0: the entropy of the gas increases.
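The same conclusion can be checked numerically. A minimal sketch (the function name and units are my own choices, not from the question):

```python
import math

R = 8.314  # molar gas constant, J/(mol K)

def entropy_change(n, v1, v2):
    """Isothermal entropy change for n moles of ideal gas, V1 -> V2."""
    return n * R * math.log(v2 / v1)

# Doubling the volume of 1 mole: dS = R ln 2, which is about 5.76 J/K
ds = entropy_change(1.0, 1.0, 2.0)
```

Any expansion (V2 > V1) gives a positive ΔS, and any compression gives a negative one, exactly as the sign argument above predicts.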
<urn:uuid:d6714938-7963-4c54-b091-59aa868afcf9>
3.15625
145
Q&A Forum
Science & Tech.
84.24813
Have you tried solving Challenge #10 yet? Go try it first if you haven’t. It’s not too hard, I promise! All of these can be shown by just plugging in the values of φ and ψ, but I’d like to show you some easier ways. The first problem is easy — we know that φ is a solution to the equation x² − x − 1 = 0; that is, φ² = φ + 1. Some simple algebra then gives the desired identity. This is kind of interesting when you think about it. In English, this says that squaring φ is exactly the same as adding 1. Problems 2 and 3 can be solved simultaneously. Since φ and ψ are the roots of the polynomial x² − x − 1, we know it can be factored as x² − x − 1 = (x − φ)(x − ψ). Multiplying out the right side, we get x² − (φ + ψ)x + φψ. For this equation to be true, we obviously must have φ + ψ = 1 and φψ = −1. There is not a nice slick way to prove #4 (at least, not that I am aware of). But doing it by plugging in values isn’t so bad. Next up: we’re going to use these properties to prove some astonishing facts about the relationship between the golden ratio and Fibonacci numbers! Stay tuned!
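These identities are also easy to confirm numerically. A small sketch (writing ψ for the second root is my notational choice):

```python
import math

# The two roots of x^2 - x - 1 = 0
phi = (1 + math.sqrt(5)) / 2   # the golden ratio
psi = (1 - math.sqrt(5)) / 2   # its conjugate root

# "Squaring is exactly the same as adding 1" -- true for both roots
assert abs(phi**2 - (phi + 1)) < 1e-12
assert abs(psi**2 - (psi + 1)) < 1e-12

# Sum and product of the roots, read off from the factorisation
assert abs((phi + psi) - 1) < 1e-12
assert abs(phi * psi - (-1)) < 1e-12
```

Of course floating-point checks are not proofs, but they are a quick sanity check before plugging the values into harder identities like #4.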
<urn:uuid:a25c0b2d-8d15-4ff3-b695-4fc344eeaa36>
2.984375
242
Personal Blog
Science & Tech.
74.509467
We’ve all seen the traditional grade-school scale models of the solar system. Maybe you made one years ago in science class out of painted Styrofoam balls or colored construction paper. Or maybe you've seen one of those giant models hanging from the ceiling of a science museum. Big colorful globes, some with rings around them, some painted swirly colors, others looking more like pitted rocks. For most people, that’s their basic mental image of the solar system. Bright yellow sun in the middle with all the different colored balls circling around it. Neatly contained in an orderly lineup, like different-sized houses on a street. One thing these models that we have become used to can’t show us is the actual scale involved. We’ve all heard of “space.” It’s the place out there, where the guys with the Right Stuff got to go and eat rehydrated meatloaf out of a plastic bag. It's where our cell phone satellites are, and where sci-fi heroes drive around at warp speed visiting strange new worlds, like one might pop in and out of stores at the mall. But how much space is actually in space? That’s an easy question to answer — a lot — but a hard one to really understand. We’re just not made to comprehend sizes and distances like that. We don’t have to. We live here, on Earth, and always have. It’s a finite place, and even then we have a hard time comprehending the size of it. We know about these other places beyond our planet, and we've all heard the numbers representing the miles from here to there … 240,000 miles (to the moon), 34 million miles (to Mars), 93 million miles (to the sun), etc. Big numbers. But they're just that: numbers. The human mind just doesn't work well with those numbers. We might know it’s five miles to work, we get 25 miles to a gallon of gas and Aunt Louise lives in Boca Raton, about 900 miles away. But 34 million miles? OK, that’s far … right? Yeah, it sure is. And you know what there is to do in between? 
No Applebee’s, no rest stops, no trees, no rocks, no air, no nothing. Just space. Lots and lots and lots and lots and lots of space. No kidding, right? I mean, that’s why it’s called space. It’s real, and it’s there, right now. But still, it's hard to picture and even harder to show with grade-school models. Space is difficult to represent in print and unwieldy to replicate as a model, but on a Web page it can be captured fairly simply. This scrollable Web page accurately shows the sheer distance between the planets, in relative scale size too. Consider that the sun makes up 99 percent of all the matter in the solar system, and you see why it’s so big on the first page. Even big ol' Jupiter is inside that remaining 1 percent of matter (and a good-sized portion thereof). At the bottom of the page, there should be a scroll bar. Use the right arrow to start your trip on a horizontal track across the span of space separating the planets. If you try to drag the bar with your mouse, you’ll be going too quickly, so be sure to use the right arrow. Just start scrolling. Mercury. Closest planet to the sun. Not so “close,” is it? Keep going. Second planet in. Venus. Nasty place. Very hot. Wave hi. Earth. Scroll, scroll, scroll … *Bink.* Mars. There are the four “inner” planets. You’ve done a whole lot of scrolling but have only covered a 10th the space of the page. Keep going … Jupiter. Big huh? Well, compared with Earth, yes, but probably not as majestic as you’d expected, based on the models you’ve always seen. The sun could eat it for a snack. (And someday it might.) If you have a minute or two to burn, you can scroll all the way to Saturn, then Uranus, Neptune and finally little demoted Pluto. Then the page stops. It doesn’t keep going out to all the other icy little worlds that exist far beyond Pluto in the vast Kuiper Belt, or into the swarm of snowballs that sometimes become comets, way way out in the Oort Cloud, so very far away yet still held by the sun’s gravity. 
Even out there, where it looks like just another extra-bright star in the sky and sheds slightly more heat than the pad of sticky notes on your desk, the sun still has the gravitational upper hand. And that’s just our solar system. Our family of worlds. Our painted Styrofoam balls. There are a lot more solar systems out there even farther away. Many of the stars you see at night have their own — with their own planets, their own Jupiters and Neptunes, and maybe even their own Earths (named differently, of course) with their own little models of their solar systems and websites trying to demonstrate what it really means to talk about space. Image credit: NASA
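The scrollable page described above is, at heart, just a linear rescaling of orbital distances. A hypothetical sketch of the idea (approximate mean distances; the page width is an arbitrary assumption, not the real site's value):

```python
# Approximate mean distances from the Sun, in millions of miles
DISTANCES = {
    "Mercury": 36, "Venus": 67, "Earth": 93, "Mars": 142,
    "Jupiter": 484, "Saturn": 886, "Uranus": 1784,
    "Neptune": 2793, "Pluto": 3670,
}

def page_positions(page_width_px=100_000):
    """Map each orbit onto a horizontal strip, with Pluto at the far edge."""
    farthest = max(DISTANCES.values())
    return {name: round(d / farthest * page_width_px)
            for name, d in DISTANCES.items()}

pos = page_positions()
```

Run it and the point of the article jumps out: all four inner planets land in the first few percent of the strip, and the rest is mostly empty scrolling.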
<urn:uuid:70d35542-6d8e-4166-a061-13bc988ea02e>
3.203125
1,149
Nonfiction Writing
Science & Tech.
74.002338
The essence of mathematical modelling is simplification. Natural events are the result of multiple interrelated processes, themselves dependent on a history of other processes, in an almost endless web of cause and effect. To study a system, the mathematical modeller begins by identifying the crucial aspects of the system, those that seem to characterize it. Initially, she need not concern herself with how such characteristics come to be, only with how to describe them in simple mathematical terms. This is a very human approach. Many of us will have encountered it when learning a language, and probably even day to day while learning the universal language of mathematics. We may have been given examples of similar triangles or of imaginary numbers to play around with and get a feel for, before being taught the underlying theory. And you will no doubt be aware that as the mathematical journey continues you will uncover yet more subtle and profound detail behind the ideas as they were first presented.
The story of Dr. Moss and the mongooses
Going about their daily, mongoosey activities. Picture copyright Spook Skelton
So it is with our mathematical modeller, Dr. Moss. At first, she will throw out many details of the system and concentrate on its most obvious characteristics. Let us take the example of modelling the spread of disease through a population of mongooses. Mongooses are an excellent example for our purposes because most people won't have a clue what they look like, where they live, or how they go about their daily, mongoosey activities. All of these are vital pieces of information, you might think, and will surely influence any ideas Dr. Moss may form on the spread of disease through their population. Although you would be right to think like this, at such an early stage Dr.
Moss is quite happy to ignore these details and consider instead the basic concepts of "a disease" which spreads "somehow" between various "identical" members of "a population". As with that quiet glass of water on the table, the teeming life of the problem will only come into view if we later apply some magnification. The concept of understanding a process by omitting the details is not a new one. Indeed, this could almost serve as a definition of science, and certainly this is how science has progressed since the Renaissance, and one could argue that sections of early Greek and Arabic sciences pioneered the approach. A crystal clear - if somewhat gory - example is provided by Robert Hooke, a British scientist of the late 17th century. You may have encountered Hooke's Law in physics: the tension in a spring is directly proportional to its extension. Like many learned men of his time (and as we sadly know, nearly all famous individuals of this time were men), Hooke had his thumb in many scientific and technological pies, being heavily involved in the already-influential Royal Society in London. Apart from investigating springs, air pumps, watches, telescopes, microscopes and much more besides, he also performed some truly awful experiments on animals. On one occasion he established the function of the lungs beyond reasonable doubt by opening up the chest of a dog and keeping it alive by pumping air into those organs. This reductionist approach to understanding, which aims at breaking complex phenomena down into component parts and seeking to understand these before integrating them into a coherent picture, is continued to this day. Despite having suffocated many animals in his air pump chambers (to investigate the effects of low air pressure on a living being) and vivisected others, even Hooke was reluctant to repeat the dog experiment, only doing so when others performed it so badly he could no longer stand the suffering of the creatures. Dr. 
Moss, however, need not get her hands dirty. She begins by writing down very simple equations that she thinks contain the principal characteristics of the diseased mongoose population. These equations will involve both a rate of change of the proportion of the population succumbing to disease, and some unknown parameters, which we will consider shortly. The resulting equations will be differential equations. These are equations involving the rate of change of quantities either in time, or in space, or in both, and are a part of calculus.
Buffaloes per orange...
The equations Dr. Moss initially writes down will be independent of the particulars of the disease, geography, climate, and the mongooses themselves, and may serve other situations equally well. These specifics enter the parameters of the system, or influence it from its boundaries in time and space. It is here that levels of complexity and correspondence with the observed world begin to emerge. Firstly, note that we talk in terms of the proportion of the population currently tucked up with the mongoose equivalent of chicken soup. The idea is that it doesn't matter how many mongooses there are (beyond reasonable constraints: the spread of disease through a population of two mongooses is very dull), only the percentage of them that are sick, falling sick, recovering, and contagious. This is an example of normalising, and is an extremely useful trick. Another important point here is that the equations make sense in broad terms when they are reinterpreted. If she is not careful, Dr. Moss may end up writing a lovely equation that measures the spread of disease not in percentage per day, but in buffaloes per orange. Indeed, when she takes all such units out of the equation, she will be in one of two equally attractive positions.
She may find that all the units disappear from the equations (they cancel exactly with each other) in which case she or anyone else is able to measure quantities in whichever units they favour - including buffaloes per orange. Alternatively, the units may gang up with each other in what are known as dimensionless groups. This means that the units are still explicitly present in the equations, but they can be gathered together into expressions which reduce to a single number with no units, so she is still free to measure in buffaloes per orange. Additionally, Dr Moss is able to consider different types of solution depending on the size of the dimensionless groups. This type of dimensional analysis is very powerful indeed and is also a great way to become famous: most dimensional groups are named after the person who popularised them. Let us look over Dr. Moss's shoulder and consider an example of a dimensionless group in her mongoose problem. A gathering of units could well yield the dimensionless group SDCAM0/F, which we shall call Moss's number, where S is the mean susceptibility of each mongoose to the disease, D is the density of mongooses in their environment, C is the contagiousness of the disease, A is the mean area covered by a mongoose in his or her daily wanderings, M0 is the mean weight of a mongoose and F the amount of food available to each mongoose. Now, D is measured in "mongooses per unit area" and so has dimensions Length-2; S is a scalar (an ordinary dimensionless number); and so on. If we write down the units for each of these terms, all the dimensions cancel and what remains is therefore a dimensionless number. It is clear that Moss's number is very important and immediately gives us plenty of qualitative information on the system. If the equation is such that the rate of spread of the disease is proportional to Moss's number, then when it is large we can see that the disease will rapidly spread through the population. 
If it is small, however, most mongooses will be safe. Looking at the number this makes sense: a large value could be telling us that the disease is highly infectious or that there is so little food that the mongooses are weak. A small number could indicate that there are so few mongooses that they rarely come into close contact and thus the disease spreads only slowly. There is a formal mathematical method known as asymptotic analysis that can be used to extract more quantitative results from considering large and small limits of dimensionless groups.
Back to reality
As she gets more of a feel for the system, and begins to see some results from the simplified model, Dr. Moss will build more and more realism into her model. She may try to include factors such as the age variation in the population, or how an individual's daily activities contribute to its likelihood of becoming infected. These factors will usually add further terms to the governing differential equation, or system of differential equations. Or they may change the initial or boundary conditions of the system - the data which filter in through the edges of the environment, or when the disease first appears. Depending on the type of equations involved, such changes can alter the outcome dramatically. Linear equations are reasonably predictable: once you've found a solution for given initial and boundary conditions, you know the rest. An example we met earlier is Hooke's Law: if you double the length of the spring, you double its tension: simple. Nonlinear equations are entirely different beasts. They exhibit what is technically called chaos, though the name is a little misleading because their behaviour is not random - it is controlled by the equations, after all - but can be incredibly rich in detail. A great example of a nonlinear system exhibiting chaos is all around you: the weather.
As with all chaotic systems, a tiny change in the initial conditions can have a dramatic effect: drop the temperature of a small patch of ocean by less than a degree and you could create a monsoon on the other side of the world. Imagine if extending a spring by just a small amount caused the tension to increase by a factor of 10, and a further slight extension produced an almost zero tension!
Chaos in every sense of the word: tornado damage. Photo copyright FEMA
It is probably fair to say that everything is controlled by nonlinear systems of equations, all borrowing data from one another and nudging each other into new solution regimes. But Dr. Moss will have looked first only at the linearised versions, gradually increasing the complexity and nonlinearity until greater and greater correspondence with reality was found. Some systems are so complicated that only linearised versions can be contemplated by hand, and Dr. Moss may decide ultimately to program her computer to try to solve the nonlinear equations.
Mongooses - but will they come to the phone? Image courtesy of African Wildlife Foundation, photo by Darryl and Sharna Balfour
Thus a mathematician has, in the comfort of her office, sat down and systematically worked through an ever-improving system of equations. She has started with the simplest possible case that captures features of the problem. Then, as she began to understand the dynamics of the problem, she will have talked to mongoose experts (that is, scientists whose expertise is mongooses: experts who are mongooses tend not to come to the telephone) and built more and more of the real world into her equations. Finally, she may have been able to characterize the phenomenology of the disease, and explain why perhaps small changes in apparently unrelated factors can yield wildly different prognoses for our mongooses. The chances are she won't have needed to leave her office, and next week is another week.
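The essay never writes down Dr. Moss's equations, but the classic SIR system is the standard simplest model of "a disease spreading somehow between identical members of a population", and it illustrates the role of a dimensionless ratio much like Moss's number. A sketch with purely illustrative parameter values (Euler time-stepping, normalised proportions):

```python
def sir_step(s, i, r, beta, gamma, dt):
    """One Euler step of the SIR equations (proportions, so s + i + r = 1)."""
    ds = -beta * s * i            # susceptibles becoming infected
    di = beta * s * i - gamma * i # infections minus recoveries
    dr = gamma * i                # recoveries
    return s + ds * dt, i + di * dt, r + dr * dt

def run(beta, gamma, days=200, dt=0.1):
    s, i, r = 0.99, 0.01, 0.0
    peak = i
    for _ in range(int(days / dt)):
        s, i, r = sir_step(s, i, r, beta, gamma, dt)
        peak = max(peak, i)
    return s, i, r, peak

# Large transmission-to-recovery ratio: the disease sweeps the population
*_, peak_fast = run(beta=0.5, gamma=0.1)
# Ratio below 1: the outbreak fizzles without ever growing
*_, peak_slow = run(beta=0.05, gamma=0.1)
```

The dimensionless ratio beta/gamma plays exactly the qualitative role described above: large values predict an epidemic, small values predict safety, without needing any mongoose-specific detail.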
About the author
Phil Wilson gained the MSci in Mathematics from University College London in 2000. His interest in all areas of maths blossomed there, and he took courses in most disciplines, culminating in a thesis on the proof of Fermat's Last Theorem. He is currently nearing the end of his PhD at UCL, where he is studying the high-speed flow of air through convoluted ducts. He hopes to pursue a career in research and teaching. He will be teaching Fluid Dynamics at UCL in the Autumn before starting a postdoctoral position modelling biological cell dynamics at Tokyo University.
<urn:uuid:f9c111a8-d4e5-445a-a587-d0df695dfef0>
2.71875
2,420
Comment Section
Science & Tech.
39.463663
This text can be taken as an introduction to General Relativity, in that it independently explains, justifies and uses Einstein's equation of General Relativity, in a simplified way made possible by the simplicity of this particular problem. However, it assumes the concept of Riemannian curvature in 2 dimensions. An introduction to general relativity will be developed later to explain this concept, as well as the more general form of Riemannian curvature in any dimension and Einstein's equation. The variables here will describe properties of space-time and matter as implicit functions of the time t, defined as the age of the universe, that is, the age of a galaxy present at that place whose movement followed the universal expansion. Their derivatives with respect to time will be denoted by the prime notation. Let us fix a choice of 2 galaxies, and let the variable r denote their distance. More precisely, r is defined by summing up the distances in a chain of aligned intermediate galaxies between both, all taken at the same age. In other words, we may just take 2 nearby galaxies, so that r will be small and we shall only consider properties to first order of approximation. Thus the Hubble "constant" (which varies in time) is H = r'/r. The cosmological principle gives symmetries to space-time, so that among the 20 components of space-time curvature only 6 are nonzero: those of the "diagonal" of the 6*6 matrices of curvature in a natural coordinate system around a point, describing the internal curvature of small surfaces in each of the 6 directions of planes defined by pairs of coordinates: (x,y), (x,z), (y,z), (x,t), (y,t), (z,t). Moreover, the symmetries of the problem make the first 3 equal to each other, and also the last 3, so that only 2 variables will describe the components of the space-time curvature in an expanding homogeneous universe. The space curvature (that of a "flat surface" in space) will be denoted R (like "Riemann curvature").
This is the usual notation for the Gaussian curvature of a surface. Next we introduce the mass density ρ, the energy density U = c2 ρ, and the pressure P. Take a third galaxy such that the area of the triangle formed by the 3 galaxies is equal to r2. This property is conserved along time, as the area of any expanding surface keeps the same proportionality to r2. Let K = sum of angles - π. As each galaxy sees each other galaxy fleeing radially, each angle of the triangle remains constant. Therefore K is also constant in time. The equation of 2-dimensional curved geometry says that K = k.r2, where k is the Gaussian curvature of the surface. But this surface is curved in space-time, towards the time direction. Therefore the value of k is a sum of 2 terms: k = R + h, where h is the term due to the external curvature of an instantaneity surface (t = constant) in the time direction. This term is due to universal expansion. If space-time were flat, we would have r proportional to the time t, the galaxies of the same age t would form a sphere with time radius t, and we would have H = 1/t. Changing the geometry of space-time into Euclidean geometry, the external curvature of an instantaneity surface would also be 1/t = H, and the Gaussian curvature would be h = t-2 = H2, taking the same units for time and space. Distinguishing time and space units and converting this result to Minkowskian geometry gives h = - H2/c2. This describes how things go in a slice of space-time between two nearby ages, disregarding the rest of space-time: the expansion H describes the difference of direction of nearby time vectors (orthogonal to the instantaneity surface) in proportion to their distance, and thus the external curvature of the surface. It gives a small negative relativistic contribution (the factor -1/c2) to the intrinsic curvature, corresponding to the intrinsic curvature of a sphere with time radius (that is, a hyperboloid in the flat Minkowskian space-time).
In conclusion, we get the geometric equation: K = (R - H2/c2).r2 = R.r2 - r'2/c2 (which gives back K = - r'2/c2 in a flat space-time). Moreover, as K is constant along time, its first derivative is 0: R'.r2 + 2 R.r.r' = 2r'.r"/c2. Consider an expanding region of space with volume r3 and internal energy E = mc2 = U r3 (so U is the energy density). The internal pressure P in this expanding volume induces a variation of energy E' = - P(r3)' = - 3 P.r'.r2. Dividing both terms by r3 gives U' + 3 U.H + 3 P.H = 0. This equation relates the 10-dimensional components of the stress-energy tensor with the 20-dimensional curvature tensor. Here the study is restricted to a simple case where only two variables appear on each side: on one side are the two variables of space curvature R and space-time curvature r"/r; on the other side are the energy density U and the pressure P. The relation must not involve the variable H, as this does not describe a physical quantity at the considered point but only a way in which physical quantities vary around it. It will be expressed by making both equations proportional to each other. (Sorry, this is not an absolutely rigorous proof, but it is already suggestive, and the formulas are accurate expressions of General Relativity in this case.) Let us denote the proportionality coefficient 3/G*. In fact we will have G* = 8πG/c4, where G is the gravitational constant. So we have G*U' = 3R' and G*(U+P) = 2(R − r"/(r.c2)). Finally, eliminating R between both equations gives the equation for the evolution of the expansion rate. Let us make the equation of the universal expansion look like the equation of motion of a particle in a field of potential energy. From above we have r'2 = c2 (R.r2 - K). Since K is constant, let us interpret -Kc2 as the "total energy" (though it has nothing to do with an energy anymore), and r'2 as the "kinetic energy".
So the "potential energy" is V = -c2 Rr2 = -c2 r2 (G*U+Λ)/3 = -8πG m/3r - c2 r2 Λ/3, where Λ (the cosmological constant) arises as the constant of integration of G*U' = 3R'. The term -c2 r2 Λ/3, coming from the cosmological constant, will be the dominating term for large values of r. Such a potential, proportional to -r2, makes r diverge at exponential speed; it can be neglected for small values of r. In the case P = 0 all along, the parameter m of total mass inside the volume r3 is a constant, so that the remaining term of the potential, -8πG m/3r, behaves as a Newtonian gravitational potential, in -r-1. But it is generally variable otherwise. Near the Big Bang we have periods dominated by hot matter and radiative energy, so that most of the energy consists of particles going at or near the speed of light. In this case we have P = U/3. Let us solve this using the energy equation U' + 3 U.H + 3 P.H = 0, where H = r'/r. So U' + 4U.r'/r = 0, that is (ln U)' = -4(ln r)', so ln U = -4 ln r + constant, and U is proportional to r-4. Thus the "potential" is now proportional to - r-2. If we have matter at rest not interacting with a radiative background, then both separately contribute to the "potential", with respective terms in -r-1 and -r-2.
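The U proportional to r^-4 conclusion for the radiation-dominated era can be checked numerically by integrating U' = -4 U r'/r for some expansion history r(t); the exponential r(t) below is just a convenient, made-up choice, since the conclusion holds for any r(t):

```python
def integrate_radiation_era(r0=1.0, u0=1.0, growth=0.02, steps=5000, dt=0.01):
    """Euler-integrate U' = -4 U r'/r alongside r' = growth * r.
    The product U * r^4 should stay (approximately) constant."""
    r, u = r0, u0
    for _ in range(steps):
        rdot = growth * r
        u += -4.0 * u * (rdot / r) * dt
        r += rdot * dt
    return r, u

r, u = integrate_radiation_era()
# With r0 = u0 = 1, the invariant U * r^4 should remain close to 1
```

The small drift from exactly 1 is pure Euler discretisation error; shrinking dt shrinks it, confirming the analytic result rather than replacing it.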
<urn:uuid:bcc0028c-3433-4d94-8c40-57f8841975fa>
3.765625
1,843
Academic Writing
Science & Tech.
57.355459
Comment: 02:35 - 03:59 (01:24) Source: Annenberg/CPB Resources - Earth Revealed 24, Waves, Beaches and Coasts Keywords: "ocean wave", beach, energy, ripple, wind, crest, trough
Our transcription: When a wave approaches the beach, it's not the water itself that's advancing, but a surge of energy which is moving through the water. It's like the ripple that runs across a field of grain when the wind blows. The individual stalks don't run across the field; they simply bend as the wind strikes them. Or take the wave at a football game, which creates the illusion that the spectators are rippling around the stadium when all they're actually doing is standing up or sitting down. The same principle applies to water waves. Consider what happens to a floating object as a wave of energy passes through the water. That object tends to stay more or less in the same place, tracing a circular motion as it bobs up and down. The individual particles composing the wave behave in a similar way. As the crest of the wave arrives, it lifts the particle up and forward, and then when the trough of the wave follows, the particle falls down and backward. Like the stalk of grain or the football fan, the particle returns to its original position after the disturbance has passed.
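The circular motion described in the transcript is easy to sketch. A minimal model of a surface water particle's orbit as a deep-water wave passes (the orbit radius and wave period are made-up illustrative values):

```python
import math

def particle_position(t, a=0.5, period=8.0):
    """Displacement of a surface water particle as a wave passes.
    a = orbit radius (m), period = wave period (s). The particle
    traces a circle and ends up back where it started."""
    omega = 2 * math.pi / period
    x = a * math.sin(omega * t)   # forward/backward displacement
    z = a * math.cos(omega * t)   # up/down displacement
    return x, z

start = particle_position(0.0)
after_one_period = particle_position(8.0)
```

Just as with the stalk of grain or the football fan, after one full wave period the particle is back at its starting point; only the energy has moved on.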
<urn:uuid:efd57bd8-9a48-44e3-91d5-80d765d97927>
3.796875
326
Knowledge Article
Science & Tech.
54.008225
February 21, 2012 - Earthquakes by Time of Day: I downloaded all earthquakes >= 6.0 magnitude, from 1973 to 2012, from the USGS website. I then categorized them by time of day. The results are presented in the graph above. The axis at the bottom (X) reflects the time of day, e.g., '0AM' is midnight through 12:59:59 AM, '1AM' is 1:00 AM through 1:59:59 AM, etc. The left axis (Y) is the number of earthquakes. The gold line running through the data is a linear best-fit line. What struck me as odd was the significant variance in earthquake counts between hours of the day, given that the sample size was so large (N = 5,329). The lows at 9AM (203), 4PM (202) and 10PM (201) are almost two standard deviations away from the average, which places them near the boundary of the central 95% of a normal distribution and makes them very unusual. The highs at 2PM (246), 6PM (242) and 11PM (240) are also statistically significant, because they are greater than two standard deviations away from the mean. The number of earthquakes between 1AM and 3AM is fairly high and falls off to a low at 9AM. The quantity then increases from 9AM and peaks at 2PM. From 2PM, it plunges and then forms a rather chaotic pattern of highs and lows. Heating and cooling of the outer crust of the earth changes from night to day. Cold tends to contract and heat expands. On average, if sunrise is around 6AM, then about 2PM would represent the higher heat of mid-day and perhaps explain the high of 246 quakes at 2PM? Then there is a drop-off to 4PM and another peak around sunset at 6PM. However, the large variability during the night or day time is not explained and I am perplexed as to why this occurs. With a sample this large, I would expect a normal distribution! (Credit: Data – USGS, Narrative – W. G. Foster) The Master of Disaster
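The author's two-standard-deviation test can be sketched as follows. The counts below are a hypothetical illustration built around the stated overall mean of about 222 quakes per hour (5,329 / 24), not the full USGS hourly table:

```python
import math

def flag_unusual_hours(counts_by_hour, threshold_sigma=2.0):
    """Return {hour: z-score} for hours whose count deviates from the
    24-hour mean by more than threshold_sigma sample standard deviations."""
    vals = list(counts_by_hour.values())
    n = len(vals)
    mean = sum(vals) / n
    std = math.sqrt(sum((v - mean) ** 2 for v in vals) / (n - 1))
    return {h: (c - mean) / std
            for h, c in counts_by_hour.items()
            if abs(c - mean) > threshold_sigma * std}

# Hypothetical data: 23 typical hours near the mean, plus one outlying hour
demo = {h: 222 for h in range(23)}
demo[23] = 260
outliers = flag_unusual_hours(demo)
```

With the real 24 hourly counts substituted for `demo`, this reproduces the blog's test and reports exactly which hours fall outside the two-sigma band.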
<urn:uuid:f3207bc9-4901-4ced-b844-f5e2d3368d11>
3.21875
440
Personal Blog
Science & Tech.
76.340345
The structure of the synapse
In order to understand the synapse and the changes in it associated with learning, we need to know its structure. There are over 1000 different proteins in a synapse, and we need to know their distribution and their numbers, and how changes in these factors modulate synaptic activity. Electron microscopy offers a useful way to examine changes in synapse structure, but identifying the proteins is difficult. Ordinary light microscopy cannot resolve spacing sufficiently small to determine the organization of proteins in the synapse. Super-resolution fluorescent microscopy does have the capability of doing so, and when coupled with electron microscopy offers our best chance to unravel the structure of the synapse. (Fluorescent) Photo Activated Localization Microscopy ((F)PALM) is a method capable of localizing macromolecules within a few nanometers (Hess et al. Biophys. J. 2006; Betzig et al. Science 2006). In the method, one localizes the position of an isolated fluorophore by determining the center of the photon distribution emanating from it. The accuracy of the position is equal to the point spread function (PSF ≈ standard resolution) of the light microscope divided by the square root of the number of photons collected from each fluorophore. Thus if we collect 100 photons from each fluorophore, we can do 10 times better than the standard resolution. The trick is to record from one fluorophore at a time. This is done using photoactivatable or photoswitchable fluorescent proteins. One simply turns on one of the fluorophores, images it until it bleaches, determines the center of the distribution, and then turns on a second fluorophore. One in effect resolves in time in order to resolve in space. The method is quite slow, and hence for accurate mapping one needs to use a fixed specimen. At present, chemical fixation methods do not preserve structures perfectly and can destroy some of the fluorophores.
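The precision formula quoted above (PSF width divided by the square root of the photon count) is simple enough to sketch directly; the 250 nm PSF below is an assumed typical visible-light value, not a number from the text:

```python
import math

def localization_precision(psf_nm, photons):
    """Localization precision of a single fluorophore:
    the PSF width divided by sqrt(collected photon count)."""
    return psf_nm / math.sqrt(photons)

# With an assumed ~250 nm PSF and 100 photons, the precision improves
# tenfold, from 250 nm to 25 nm
sigma = localization_precision(250.0, 100)
```

This is why reducing bleaching at cryo temperatures matters: quadrupling the photon yield halves the localization error.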
The reason for carrying out PALM at cryo temperatures is two fold. First the rate of bleaching is reduced so that more photons can be collected. More photons translate directly into better precision of localization. Second, cryofixation better preserves structures than chemical fixation and it does not destroy fluorophores such as PA-GFP. The key is to construct a cold stage capable of preserving the structure at temperatures below -140° C while allowing one to use a high numerical aperture objective. At temperatures above -140° C, the amorphous ice in which the sample is embedded crystallizes and alters the sample’s structure. A high numerical aperture objective minimizes the PSF, increases the number of photon collected, and hence increases the accuracy of localization. The research program, which is being carried out in the laboratory of Professor Gina Turrigiano, is to construct such a cold stage and a fluorescent microscope. The microscope has been built from optical parts and sits atop an optical table. The instrument has no microscope body incorporated in it, which would restrict the space available for a cold stage. Two wavelength fluorescence is possible with the current design. One is for green fluorescence such as that produced by photoactivatable green fluorescent proteins. The other is for red fluorescence from a different fluorescent protein. The light sources are three diode lasers. One is for activation or switching, one for exciting green fluorescence, and one for exciting red fluorescence. The camera is the highly sensitive EMCCD from Ixon. The xyz stage is from Mad City Labs. All these elements are fully automated with LabView. The cold stage is still under design and construction. In the current design, we plan to transfer in a frozen specimen on a 3 mm electron microscope grid, image it and return it to a cryo grid box for further imaging in an electron cryomicroscope. 
The combined use of super-resolution cryo light microscopy and electron cryomicroscopy on the same synapse will combine the best elements of both methods, permitting one to localize and visualize the structures within the synapse.

Last review: August 9, 2011
<urn:uuid:32390422-7e50-49a0-a628-d7c6116ca424>
3.296875
1,472
Academic Writing
Science & Tech.
48.525632
yellow wood sorrel

...ejecting the true seed. The leaflets, as in other species of the genus, fold back and droop at night. Besides the wood sorrel, about 20 other species occur in North America, among which are the yellow wood sorrel (O. stricta), of the eastern United States and Canada, with yellow flowers; the violet wood sorrel (O. violacea), of the eastern United States, with rose-purple...
<urn:uuid:7e4179de-78b0-4fab-a4ce-059ad7855aec>
2.859375
153
Knowledge Article
Science & Tech.
63.797862
To introduce concepts of astronomy, it is often best to start with actual observing projects to engage the student/pupil in visualizing the solar system that the earth is a part of. The most obvious starting place for this is to look at what causes our days and nights (the earth rotating on its axis once every 24 hours from west toward the east, so that the sun appears to rise in the east and set in the west), and therefore look at both the sun and the moon.

The moon's apparent motion on the sky. The moon also rises in the east and sets in the west (as do the planets, comets, and the stars), but the moon varies quite a bit in rising and setting times (with respect to the sun) because the moon is also orbiting the earth (making a complete circuit around the earth in just under a month). One can see the moon's movement across the sky in only minutes when it is near the horizon (rising or setting), and noticeably over the course of an hour when it is higher in the sky -- as the earth is turning on its axis. It takes the moon about 29.5 days to go from new moon to new moon, new moon being the time when the moon is closest to the sun in the sky. When the moon is closest to the sun, it is invisible from the ground -- lost in the sun's glare, since we are looking at its dark side (the sunlit half faces toward the sun and away from us). As the moon moves eastward around 13 to 15 degrees per day in its orbit about the earth (there are 360 degrees around the whole sky for it to cover in its 27.3-day orbit), it moves noticeably with respect to the background stars each day, causing it to rise nearly an hour later each day/night compared to the previous day/night. Note that the moon is bright enough to be visible in broad daylight through most of its orbit around the earth.

Lunar phases. The motion of the moon around the earth over a month permits us to view the moon from different angles with respect to the sun.
As it emerges from the sun's glare in the evening sky after new moon, it appears for a few days as a crescent moon. After about a week, it reaches "first quarter" phase, where it appears half lit (the other half that is lit is facing out into space, away from the earth). After yet another week, the moon comes to "opposition", where it is opposite the sun in the sky, so that we see it fully lit ("full moon"); the moon is up all night long around full moon. After that, as it moves into the morning sky, less and less of the moon appears lit as each day/night passes, so that about a week after full moon, it reaches half-lit phase again (called "last quarter" moon). And a week after that, it has made full-circle to reach new moon again, after passing through several days of appearing as a crescent moon in the morning sky. Good diagrams that show lunar phases with respect to the moon's position with respect to the earth and the sun in space can be viewed by clicking on the links here and here. A list of dates of lunar phases for the years 2001-2025 is given at this NASA website. Eclipses. Also, the moon's orbit is inclined with respect to the earth's orbit about the sun, so that only occasionally (a couple of times a year) is the moon exactly lined up in space with the sun and the earth; when this happens, an eclipse occurs. A solar eclipse happens at new moon, when the moon crosses in front of the sun (with the moon appearing black because of the relative brightness of the sun). A lunar eclipse happens at full moon, when the earth passes between the moon and the sun (and so the earth's shadow is cast onto the moon's surface, and we see the moon getting darker and then brighter again over a couple of hours). A list of lunar eclipses observable for the years 2008-2015 is given at this NASA website. The moon close-up. The moon appears uneven in shading and texture to the naked eye. 
With binoculars, mountains and craters become visible; the moon has no atmosphere, so no clouds to hide its surface. The larger the optical instrument (telescopes), the more detail on the lunar surface that can be seen. The first spacecraft visited the moon in the 1960s, and since then we have mapped the entire surface with high-resolution photography (and brought back lunar samples via the several Apollo manned landings on the moon). A good website for close-up lunar maps and images can be seen by clicking here. A nice gallery of lunar images can be found by clicking here. Another good gallery of images of the moon can be found here. Detailed technical information on the moon, including data concerning its size, mass, etc., are given at this NASA website.
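The daily-motion figures quoted earlier can be checked with a little arithmetic; the 27.3-day orbital period comes from the text, and everything else follows from it:

```python
# Eastward drift of the moon against the stars, from its 27.3-day orbit.
sidereal_period_days = 27.3
drift_deg_per_day = 360.0 / sidereal_period_days
print(round(drift_deg_per_day, 1))   # ~13.2 degrees/day, matching "13 to 15"

# That drift delays moonrise: the sky turns 360 degrees in ~24 hours, so
# 13.2 extra degrees of turning takes roughly
delay_minutes = drift_deg_per_day / 360.0 * 24 * 60
print(round(delay_minutes))          # ~53 minutes -- "nearly an hour later" each night
```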
<urn:uuid:343301e8-4430-4f41-9515-4db41e2db056>
4.1875
1,028
Knowledge Article
Science & Tech.
62.252034
Today NASA's Jet Propulsion Laboratory announced the results of a study showing that the Expanding Earth hypothesis is false. Using GPS, satellite orbits, satellite laser ranging, and radio astronomy, NASA is convinced there is no significant change in the Earth's width at all. The only "change" measured came out to 0.004 inches (0.1 millimeters) per year, which is well within the margin of error. This change (or margin of error), equal to the width of a human hair, would account for only 4.1 miles (6.6 kilometers) of growth/contraction over the last 65 million years.

|More than 4.1 miles is needed to change this to the Earth we know. Image from Celestial Matters.|
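The 65-million-year extrapolation checks out, assuming (as that figure implies) that the 0.1 millimeter measurement is a yearly rate:

```python
# 0.1 mm/year of radial change accumulated over 65 million years:
rate_mm_per_year = 0.1
years = 65e6
total_km = rate_mm_per_year * years / 1e6   # mm -> km
print(total_km)   # 6.5 km, roughly the 6.6 km (4.1 miles) quoted
```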
<urn:uuid:0c8e87b6-6174-47eb-bce2-607c9facbb71>
3.34375
149
Comment Section
Science & Tech.
67.484506
For Question 44

Considering that the loop is complete, we know that the electric field at the center will be zero. Now consider an element of length ∂l on the ring. The electric field exerted by this element at the loop's center must be equal in magnitude and opposite in direction to the field exerted by the rest of the loop at the center. So

E(at center due to the remaining wire) = E(at center due to ∂l)

which gives the desired answer.

This is not SHM, but it is periodic motion. Using d = ut + (1/2)at², with u = 0 and a = qE/m, the time to reach the wall is

t = (2md/(qE))^(1/2)

and the total time for one full period is T = 2t.
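The crossing time in the second answer follows directly from constant-acceleration kinematics; here is a quick numerical check (all values below are made-up illustrative numbers, not from the original problem):

```python
import math

def crossing_time(m, d, q, E):
    """Time for a charge starting from rest to cover distance d under
    constant acceleration a = qE/m, i.e. t = sqrt(2md/(qE))."""
    return math.sqrt(2 * m * d / (q * E))

# Illustrative (hypothetical) values: m = 2 kg, d = 4 m, q = 1 C, E = 1 V/m
t = crossing_time(2.0, 4.0, 1.0, 1.0)
print(t)   # 4.0 seconds; the full period of the bouncing motion is T = 2t = 8.0
```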
<urn:uuid:4943b51c-763c-42ec-8e99-f180d7550340>
2.6875
209
Tutorial
Science & Tech.
67.00545
No matter how hard you try, maybe giving up maths straight after leaving high school, you inevitably end up having to solve a bleeding problem some time. Or you enjoy the simplistic logic of maths. Either way, these 10 easy tricks to solving a few harder problems will be of use to you.

The first trick is similar to the times-10 trick, where you just add a zero to multiply anything by ten. This one is for multiplying by 11. Take the original number and imagine a space between the two digits (in this example we will use 52): 5_2. Now add the two numbers together and put the sum in the middle: 5_(5+2)_2. That is it – you have the answer: 572. (If the two digits add up to 10 or more, carry the 1 into the first digit: 57 × 11 gives 5_(12)_7, i.e. 627.)

10 Easy Arithmetic Tricks – [Listverse]
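As a quick sanity check, the trick can be coded up and compared against ordinary multiplication for every two-digit number:

```python
def times_11(n):
    """Multiply a two-digit number by 11 via the 'insert the digit sum' trick."""
    tens, ones = divmod(n, 10)
    middle = tens + ones
    # If the digit sum is 10 or more, the 1 carries into the leading digit.
    return (tens + middle // 10) * 100 + (middle % 10) * 10 + ones

assert all(times_11(n) == n * 11 for n in range(10, 100))
print(times_11(52))   # 572, as in the worked example
```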
<urn:uuid:e3ce1b37-044e-4826-ba2e-f7cd1daf8a39>
3
163
Listicle
Science & Tech.
65.752564
Thomas Robinson of Glamorgan is trying to patent (GB2334292) a simple idea that could save lives, especially for unskilled do-it-yourself roofers or aerial and satellite dish installers. Before any ladder is used, the would-be climber throws a ball connected to a line over the roof. The line is pulled tight from the other side of the house to lift up a saddle unit, which sits astride the ridge. Lines from each side of the saddle are then fed through windows on opposite sides of the house and tied to secure the saddle. An inertia reel belt hangs from the saddle and locks the climber tightly if they fall, just as a seat belt locks if pulled too quickly.
<urn:uuid:3d5d2e7f-643f-4b04-9cbf-7e146bbb8c98>
2.71875
175
Truncated
Science & Tech.
54.99
Video: Bubble multiplies

BUBBLES live on in their offspring. A bursting bubble begets daughter versions of itself - which could influence everything from glass-making to atmospheric processes.

The process of a large air bubble popping on a surface has been described, but researchers had no insight into the mechanism, says James Bird of Harvard University. To explore it, Bird's team created hemispherical bubbles using water and glycerol and popped them. High-speed cameras revealed how the popping can trap toruses of air. One torus is formed as the initial rupture grows. Flung outwards by surface tension, the rim of the rupture folds back onto the main body of the bubble. Meanwhile, the outer parts of the bubble collapse down to the surface, crimping off a second air torus. This structure is unstable and collapses into smaller bubbles, which in turn produce even tinier ones (Nature, DOI: 10.1038/nature09069).

The team found that increasing the fluid's viscosity prevents these bubble rings. This may remove unwanted bubbles from industrial processes such as glass-making. The process may also play a part in the mixing between atmosphere and oceans, as smaller bubbles tend to absorb gas faster than big ones and are better at spitting out aerosol droplets when they pop.

In Purities? Thu Jun 10 13:52:56 BST 2010 by Peter
Take a glass of tonic and you'll find that bubbles originate at the glass at places where the glass isn't normal; it might be dirt or a scratch. Simple investigation done in 5 minutes :)
<urn:uuid:5984bd45-dcce-4bc3-ac80-7c9854b37269>
3.453125
475
Truncated
Science & Tech.
52.924382
What Is Hibernate?
by James Elliott, author of Hibernate: A Developer's Notebook

Hibernate is a free, open source Java package that makes it easy to work with relational databases. Hibernate makes it seem as if your database contains plain Java objects like you use every day, without having to worry about how to get them out of (or back into) mysterious database tables. It liberates you to focus on the objects and features of your application, without having to worry about how to store them or find them later.

Most applications have some need to work with data. Java applications, when running, tend to encapsulate data as networks of interconnected objects, but those objects vanish in a puff of logic when the program ends, so there needs to be some way to store them. And sometimes the data is already "out there" before the application is even written, so there needs to be a way to read it in and represent it as objects. Writing code by hand to perform these tasks is tedious and error-prone, and can represent a major portion of the effort involved in the overall application. As good object-oriented developers got tired of this repetitive work, their typical tendency towards enlightened laziness started to manifest itself in the creation of tools to help automate the process. When working with relational databases, the culmination of such efforts was object/relational mapping tools. There have been a variety of such tools, ranging from expensive commercial offerings to the EJB standards built into J2EE. In many cases, however, the tools introduce their own complexity, force the developer to learn detailed rules for using them, and require the classes making up the application to be modified to conform to the needs of the mapping system.
As the tools evolved to handle ever more rigorous and complex enterprise requirements, the intricacies required to use them started to overwhelm the savings you could obtain by doing so in simpler, common cases. This has led to something of a revolution favoring more lightweight solutions, of which Hibernate is an example. Hibernate doesn't get in your way; nor does it force you to change the way your objects behave. They don't need to implement any magical interfaces in order to be blessed with the ability to persist. All you need to do is create an XML "mapping document" telling Hibernate the classes you want to be able to store in a database, and how they relate to the tables and columns in that database, and then you can ask it to fetch data as objects, or store objects as data for you. Compared to most of the alternatives, it's almost magical. There isn't room in an introductory article like this to work through a concrete example of building and using a Hibernate mapping document (that's what the first couple of chapters in my Hibernate: A Developer's Notebook are for, after all). And there are some good examples already on the Web and in the Hibernate online documentation; see "Learning More," on page 2. It really is straightforward, though. Properties in the application objects are associated with appropriate database structures in a simple, natural way. At runtime, Hibernate reads the mapping document and dynamically builds Java classes to manage the translation between the database and Java worlds. There is a simple, intuitive API in Hibernate to perform queries against the objects represented by the database. To change those objects you just interact with them normally in the program, and then tell Hibernate to save the changes. Creating new objects is similarly simple; you just create them in the normal way and tell Hibernate about them so they can get stored to the database. 
The Hibernate API is simple to learn and interacts quite naturally with the flow of your program. You invoke it in sensible places, and it does what you'd like it to. The benefits it brings in terms of automation and code savings greatly outweigh the short time it takes to learn. And you get an additional bonus in that your code doesn't care (or even have to know) what kind of database you're using. My company has had projects forced to change database vendors late in the development process. This can be a horrible mess, but with Hibernate it has required nothing more than a single change to the Hibernate configuration file. This discussion has assumed that you already had a relational database set up, as well as Java classes to map, through the creation of the Hibernate mapping document. There is a Hibernate "toolset" that works at build time to support different workflows. For example, if you've got the Java classes and mapping document, Hibernate can create (or update) the necessary database tables for you. Or, starting with just the mapping document, Hibernate can generate the data classes for you, too. Or it can reverse engineer your database and classes to sketch out a mapping document for you. There are also some alpha plugins for Eclipse to provide intelligent editing support and graphical access to these tools right from within the IDE. If you're in a Hibernate 2 environment, fewer of these tools are provided, but there are third-party options available.
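To give a feel for what such a mapping document looks like, here is a minimal sketch. The class, table, and column names are invented for the example (they are not from this article), and the DTD line follows the Hibernate 2-era conventions the article mentions:

```xml
<?xml version="1.0"?>
<!DOCTYPE hibernate-mapping PUBLIC
    "-//Hibernate/Hibernate Mapping DTD 2.0//EN"
    "http://hibernate.sourceforge.net/hibernate-mapping-2.0.dtd">

<hibernate-mapping>
  <!-- Map a (hypothetical) Track class to a TRACK table. -->
  <class name="com.example.Track" table="TRACK">
    <id name="id" column="TRACK_ID" type="int">
      <generator class="native"/>
    </id>
    <property name="title" column="TITLE" type="string"/>
    <property name="playTime" column="PLAY_TIME" type="time"/>
  </class>
</hibernate-mapping>
```

At runtime, Hibernate reads this document and lets you save and query `Track` objects through its Session API, generating the SQL for whichever database you have configured.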
<urn:uuid:68c17ab9-ebf3-4f5b-b731-9d6cb5ce865f>
2.953125
1,089
Truncated
Software Dev.
36.105778
(PHP 4, PHP 5)

tempnam — Create file with unique file name

Creates a file with a unique filename, with access permission set to 0600, in the specified directory. If the directory does not exist, tempnam() may generate a file in the system's temporary directory, and return the name of that.

dir: The directory where the temporary filename will be created.
prefix: The prefix of the generated temporary filename. Note: Windows uses only the first three characters of prefix.

Returns the new temporary filename, or FALSE on failure.

Changelog:
4.0.6 — Prior to PHP 4.0.6, the behaviour of the tempnam() function was system dependent. On Windows the TMP environment variable will override the dir parameter; on Linux the TMPDIR environment variable has precedence; while SVR4 will always use your dir parameter if the directory it points to exists. Consult your system documentation on the tempnam(3) function if in doubt.
4.0.3 — This function's behavior changed in 4.0.3. The temporary file is also created, to avoid a race condition where the file might appear in the filesystem between the time the string was generated and the time the script gets around to creating the file. Note that you need to remove the file when you no longer need it; this is not done automatically.

Example #1 tempnam() example

$tmpfname = tempnam("/tmp", "FOO");
$handle = fopen($tmpfname, "w");
fwrite($handle, "writing to tempfile");
// ... do something with the file here ...
fclose($handle);
// tempnam() does not clean up after itself, so remove the file when done
unlink($tmpfname);

Note: If PHP cannot create a file in the specified dir parameter, it falls back on the system default. On NTFS this also happens if the specified dir contains more than 65534 files.
<urn:uuid:82fd5697-5f38-4a27-80bb-4c13be024068>
3.609375
379
Documentation
Software Dev.
55.725403
SCU has 1,050 kilowatts of solar panels on four of its buildings, making it the third largest rooftop system among all American colleges and universities. The system generates 1.5 million kilowatt-hours of clean energy and eliminates 511 metric tons of carbon dioxide annually. That's equivalent to taking 127 small cars off the road for an entire year. Among the many initiatives it's launched, SCU has been adding more solar panels to help the University reach its goal of becoming carbon neutral by 2015. The AASHE Campus Solar Photovoltaic Installations database found that converting to solar energy is becoming easier for higher education institutions like SCU because of a 40 percent drop in installation costs over the past four years and new financing mechanisms to hedge against rising electricity prices. The data also revealed that installed solar capacity jumped 450 percent over the past three years in the higher education sector. Furthermore:
- The 137 megawatts (MW) of solar capacity installed on higher education campuses to date is equivalent to the power used by 40,000 U.S. homes.
- The market in 2010 for on-campus solar installations was over $300 million in the U.S.
- Higher education solar installations in 2010 made up 5.4 percent of the total 956 MW installed that year in the U.S.
- Since 2009, the median project size has grown six-fold.
- Only five states installed more solar in 2010 than the 52 MW installed on U.S. campuses in 2010.
Charts and additional analysis from AASHE are available here. Learn more about Santa Clara University's sustainability efforts here.
<urn:uuid:7cf0d098-3cc4-4626-acbb-d6593d665830>
2.859375
342
Knowledge Article
Science & Tech.
50.941659
The World's Worst Invasive Mammals Animals as common as goats, deer, rabbits or mice can have a devastating effect on other wildlife - By Jess Righthand - Smithsonian.com, December 20, 2010 House mouse (© Redmond Durrell / Alamy) Apart from humans, mice (Mus musculus) are thought to be the most widely distributed animal in the world. Humans and mice have carried on a somewhat imbalanced partnership over the past 8,000 years: mice take shelter in man-made structures like houses and pass on diseases such as bubonic plague and salmonella. Mice can devour crops and human food reserves. And perhaps second only to eating, the thing mice do best is breed. Females have five to ten litters per year of around six young each. Their numbers sometimes even reach plague status, with millions of mice yielding extensive economic damage by eating stored food or digging up crops. Mice have also been shown to prey on albatross chicks and cause breeding failures in albatross and petrel populations in places like Gough Island in the South Atlantic.
<urn:uuid:0654cf47-99db-4fa5-bcc2-b356d5b370a9>
3.21875
227
Truncated
Science & Tech.
41.327716
Content in this section supports the concept of growing crops in space and the symbiotic relationship between plants and space travelers. Plants in space are beneficial for a number of reasons. They provide nourishment for the body when eaten as food, and they improve the quality of indoor air. Plants take the carbon dioxide from air to produce oxygen that humans can breathe. Find information about how plants, people, microbes and machines work together in self-contained space vehicles. NASA Engineering Design Challenge: Lunar Plant Growth Chamber What’s for dinner on the moon? Astronauts will need to grow food when they return to the moon and eventually travel to Mars. Join the challenge to design and build a lunar plant growth chamber. Educator Guides for Lunar Plant Growth Chambers: Life Science Themed Units and Camps NASA's Summer of Innovation Project provides theme-based units: The Body, Food, Life Out There?, Plants, and Survival. Professional development training modules are available for educators on the website. Hydroponic Systems Activity Students work with one or more hydroponic systems and collect data for a four-week period to determine which system resulted in the best plant growth. Liftoff to Learning: Plants in Space Elementary students participated in a plant growth experiment with astronauts on the space shuttle. Order the DVD from the Central Operation of Resources for Educators. Segments from the video are listed below. Our World: Plants in Space Find out how plants use light to make their own food in a process called photosynthesis. NASA Edge: Space Life Science Lab Meet NASA scientists Dr. Carlos Calle and Dr. Ray Wheeler as they talk about work done inside the Space Life Sciences Laboratory. Dr. Calle talks about the challenges of protecting NASA assets from dust in space, the moon, Mars and other stellar locations. Dr. Wheeler talks about growing plants in space that could help astronauts protect themselves from radiation via their diet. 
Space Seeds Return to Earth Seed pods from a commercial gardening experiment aboard the International Space Station were returned to Earth. In Search of Moon Trees Scattered around our planet are hundreds of trees that were grown from seeds that had been to the moon and back again. Find out if one is in your neighborhood. You're "stuck" on the moon or relocated to Mars. How are you going to survive for months and possibly years without resupply? This is the challenge you face in the Biogenerative Life Support System Sim. BLiSS Sim is only available for the iPad
<urn:uuid:489d1942-ad23-40c6-a250-666b90abf67b>
3.703125
520
Content Listing
Science & Tech.
50.129455
Genetically engineered canola resistant to two common herbicides has been found growing widely along roadsides in North Dakota, one of the first instances of a biotech crop establishing itself in the wild. This might not even be a problem at all, although critics of biotech crops might conceivably point to it as an example of how hard it is to stop the spread of "gene pollution.'' If this is a problem, it's because a canola plant growing outside of a canola field – on a road or in a field of wheat, for example – could be considered a weed. And if it's resistant to a widely used herbicide, it would remove one option for killing it, although other herbicides could do the job. "If there's a problem in North Dakota, it's that these crop plants are becoming weeds,'' said Cynthia L. Sagers, an associate professor of biology at the University of Arkansas who led the study. The results are to be presented on Friday in Pittsburgh at the annual meeting of the Ecological Society of America.
<urn:uuid:ea7967eb-8468-49a1-b0bf-36f0b57acb04>
3.09375
235
Truncated
Science & Tech.
42.706022
Although real-time operating systems and applications have been available for multicore systems for some years, shared-memory parallel systems still pose some severe challenges for real-time algorithms, particularly as the number of CPUs increases. These challenges can take the form of lock contention, memory contention, and conflicts/restarts for lockless algorithms, among many others. One technology that has recently been added to the real-time arsenal is read-copy update (RCU), which permits deterministic read-side access to read-mostly data structures, even in the face of concurrent updates. In some cases, updates may also be carried out in a deterministic manner. RCU was accepted into the Linux kernel in late 2002, and a real-time variant of RCU was accepted into Linux for real-time use in early 2008. This real-time variant of RCU resulted in significant reductions in the Linux kernel's scheduling latency. More recently, user-level implementations of RCU have appeared on the scene. This talk will give a brief overview of RCU and how it may be used to solve some interesting classes of problems that arise when constructing shared-memory parallel real-time systems.
<urn:uuid:fb06d93c-8040-4bb4-9c40-73b8e18455c9>
2.75
247
Academic Writing
Software Dev.
28.334
Search Course Communities:

Course Topic(s): Ordinary Differential Equations | Analytic Methods

This webpage displays fully worked examples showing the solution of an exact first-order initial value problem. While there is an essentially endless supply of examples, all have the derivative equal to a rational function of x and y. This is part of Andrew Bennett's online text "Elementary Differential Equations" (http://www.math.ksu.edu/math240/book/index.html).

Resource URL: http://www.math.ksu.edu/math240/book/chap1/foex.php

Creator(s): Andrew Bennett
Contributor(s): Andrew Bennett
This resource was cataloged by Douglas Meade
Publisher: Kansas State University, Department of Mathematics, Andrew Bennett
Resource copyright: Andrew Bennett
This review was published on July 10, 2011
<urn:uuid:708a4058-c6c2-4157-9578-f2103ec7bd0a>
2.84375
222
Content Listing
Science & Tech.
42.901631
Sophie built a small tower, made of bricks, in her back garden. On top of it she fitted a large glass light-bulb holder. The diagram shows it as part of a circle. Centre, C, is 20 centimetres above the top of the wall.

1. Calculate the radius of the circular bulb holder.
2. Use this to find the total height of the structure.

(Diagram not shown.)
<urn:uuid:de27a51c-657c-42ba-99b3-9f94fd16957a>
3.421875
94
Q&A Forum
Science & Tech.
84.992794
3D X-Ray Microtomography Reconstructions

At coal measure localities such as Mazon Creek, exceptional preservation of organic material can be found within early-diagenetic siderite (iron carbonate) nodules. The rapid envelopment of fossils within this concretionary material provides a high level of compressional resistance, reducing and in many cases eliminating compaction. Recovery of well-preserved, three-dimensional botanical material with cellular preservation is therefore likely. Traditional methods of study involve serial sections using a combination of thin-sections and/or cellulose acetate peels (e.g. Drinnan et al., 1990). However, in employing this process, data loss through sawing is inevitable.

Wafer-cut section through Stephanospermum braidwoodensis.

In recent years, advances in computational power and a reduction in the cost of X-ray microtomography (XMT) have seen this technology increasingly used for paleontological research; yet the number of palaeobotanical research projects that have used any sort of scanning technology remains low. XMT is a non-destructive technique that enables the capture and visualisation of high-resolution 3D internal data from fossils, in a way previously unattainable. Alan Spencer and Mark Sutton of Imperial College London and Jason Hilton from the University of Birmingham in the UK have recently used this technique in combination with wafering to describe a new species of Medullosan pteridosperm ovule from the Mazon Creek biota: Stephanospermum braidwoodensis. Their 3D reconstruction of the ovule correlated the geometries of different layers with tissue characteristics gathered from wafered sections. This methodological combination produced a virtual reconstruction of the specimen and also enabled serial sections of the holotype to be positioned at predetermined locations.
Besides hugely reducing the amount of material lost during the cutting process, their study suggests that there are still more new species to be discovered from the Mazon Creek assemblage by the use of these exciting new methodologies. Spencer, A.R.T., Hilton, J., Sutton, M.D., 2013. Combined methodologies for three-dimensional reconstruction of fossil plants preserved in siderite nodules: Stephanospermum braidwoodensis nov. sp. (Medullosales) from the Mazon Creek lagerstätte. Review of Palaeobotany and Palynology 188, 1-17. http://www.sciencedirect.com/science/article/pii/S003466671200231X Drinnan, A.N., Schramke, J.M., Crane, P.R., 1990. Stephanospermum konopeonus (Langford) comb. nov.: a medullosan ovule from the Middle Pennsylvanian Mazon Creek Flora of northeastern Illinois, U.S.A. Botanical Gazette 151, 385–401. http://www.jstor.org/stable/10.2307/2995410 Geoscience Precision Cutting Facility at the University of Birmingham. http://www.birmingham.ac.uk/facilities/bgpcf/index.aspx Mazon Creek plant fossils at the Field Museum. http://fieldmuseum.org/explore/our-collections/mazon-creek-flora Mazon Creek invertebrates at the Field Museum. http://fieldmuseum.org/explore/multimedia/mazon-creek-fossil-invertebrates Mazon Creek fossils at the Smithsonian. http://paleobiology.si.edu/mazoncreek/index.html Mazon Creek fossils at the Illinois State Museum. http://www.museum.state.il.us/exhibits/mazon_creek/index.html
<urn:uuid:11e43eac-64a1-46bd-a370-7bff13ae502c>
3.203125
809
Knowledge Article
Science & Tech.
34.489527
In mathematics, the slope or gradient of a line describes its steepness, incline, or grade. A higher slope value indicates a steeper incline. The slope of a line is defined as the rise over the run, m = Δy/Δx: the ratio of the "rise" divided by the "run" between two points on the line, or in other words, the ratio of the altitude change to the horizontal distance between any two points on the line. Given two points (x1, y1) and (x2, y2) on a line, the slope m of the line is m = (y2 − y1) / (x2 − x1).
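The rise-over-run definition translates directly into a few lines of Python. This is an illustrative sketch added here, not part of the original article; the `slope` helper is my own name:

```python
def slope(p1, p2):
    """Slope m = (y2 - y1) / (x2 - x1) between two points on a line."""
    (x1, y1), (x2, y2) = p1, p2
    if x2 == x1:
        # A vertical line has no run, so rise/run is undefined.
        raise ValueError("vertical line: slope is undefined")
    return (y2 - y1) / (x2 - x1)

print(slope((1, 2), (3, 6)))  # rise of 4 over a run of 2 -> 2.0
```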
<urn:uuid:484972c0-654e-4c21-ac72-6e7b645890fd>
3.703125
183
Knowledge Article
Science & Tech.
55.370451
This root program defines a procedure that builds an environment from lists of names and values.

(code zip (vars vals r k)
  ((prim done null? vars)
   (if done
       ((prim p car k) (jump p k r)))
   (prim vars-hd car vars)
   (prim vals-hd car vals)
   (prim vars-tl cdr vars)
   (prim vals-tl cdr vals)
   (prim bind cons vars-hd vals-hd)
   (prim spine cons bind r)
   (const zip (code zip ...))
   (jump zip vars-tl vals-tl spine k))
  ...)

zip calls itself; the value in the (const zip (code zip ...)) instruction is a circular pointer. This avoids an explicit recursive environment construct, or a special top-level environment. The final ... in the structure hides the memo table used by cogen. The code is graphically represented like this: Code structures start with a rectangle labeled with the name of the code. Each straight list of instructions is represented with an oval. Lists ending with an if instruction are labeled with the name of the tested variable; those ending with a jump are linked with a solid edge to the called code. If the jump target is unknown (as when a procedure returns) then the oval is terminal.
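For readers unfamiliar with the instruction notation, here is a rough Python analogue of what zip computes (an illustrative sketch only: the names `zip_env` and `binding` are mine, and ordinary recursion stands in for the jump-based control flow and explicit continuation):

```python
def zip_env(names, vals, r):
    """Pair each name with the corresponding value and cons the
    binding onto the existing environment r, as the zip code does."""
    if not names:                      # (prim done null? vars) / (if done ...)
        return r                       # the original jumps to the continuation here
    binding = (names[0], vals[0])      # (prim bind cons vars-hd vals-hd)
    # (prim spine cons bind r) followed by (jump zip vars-tl vals-tl spine k)
    return zip_env(names[1:], vals[1:], [binding] + r)

env = zip_env(["x", "y"], [1, 2], [])
print(env)  # [('y', 2), ('x', 1)]
```

Note that, as in the original, later bindings end up at the front of the environment spine.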
<urn:uuid:bdfa83ce-5015-420a-be31-331183969947>
3.046875
292
Documentation
Software Dev.
78.536154
- Bulgarian (bg) - Czech (cs) - Danish (da) - German (de) - Greek (el) - English (en) - Spanish (es) - Estonian (et) - Finnish (fi) - French (fr) - Hungarian (hu) - Icelandic (is) - Italian (it) - Lithuanian (lt) - Latvian (lv) - Maltese (mt) - Dutch (nl) - Norwegian (no) - Polish (pl) - Portuguese (pt) - Romanian (ro) - Slovak (sk) - Slovenian (sl) - Swedish (sv) - Turkish (tr) We breathe from the moment we are born until the moment we die. It is a vital and constant need, not only for us but for all life on Earth. Poor air quality affects us all: it harms our health and the health of the environment, which leads to economic losses. But what does the air we breathe consist of and where do the various air pollutants come from? The extent of the sea ice in the Arctic reached a new record low in September 2012. Climate change is melting the sea ice in the region at a rate much faster than estimated by earlier projections. The snow cover also shows a downward trend. The melting Arctic might impact not only the people living in the region, but also elsewhere in Europe and beyond. Copenhagen, 2 July 2011. Up to 150 mm of rainfall in two hours – a city record since measurements began in the mid-1800s. Homes destroyed. Citizens and emergency services struggled to cope. This is one example of how excessive extreme weather events can affect a European capital – events that are expected more often under climate change. Forests are essential to our survival and well-being. Forests clean our air, our water, our soil and they regulate our climate, amongst many other things. Trees and forests are not always associated with urban landscapes. However, there too they provide invaluable, often invisible, services. Simply by acting as 'green oasis' in our concrete jungles, they offer recreation and health services for many European citizens. 
In August 2007, local health authorities in Italy detected a high number of cases of an unusual illness in Castiglione di Cervia and Castiglione di Ravenna, two small villages divided by a river. Almost 200 people were affected and one elderly man died (Angelini et al., 2007). Lower speed limits on motorways are generally associated with road safety. But several European countries are now debating whether they also benefit the environment and, if so, how much. There is no simple way of measuring the environmental benefits of lower speed limits but several factors clearly play a key role. In modern societies, almost everything consumes energy. It is not only electronic gadgets, household appliances or street lighting that need it. Bringing water to our homes or food products to our supermarkets also require energy. Current consumption and production patterns demand a steady and often increasing energy supply. Climate change is happening. The current global average temperature is already about 0.7-0.8 degree Celsius above the pre-industrial level. Even if greenhouse gas (GHG) concentrations had stabilized in the year 2000, temperatures are predicted to increase by 1.2 degrees Celsius above the pre-industrial level by the end of the 21st century. In Eastern France and Western Germany there is 3000km2 of a biosphere reserve called ‘Parc Naturel Régional des Vosges du Nord – Pfälzerwald’. It is the largest uninterrupted forest area in Western Europe. For the first time the waste in Greenland has been analyzed and the result is alarming. All households and industries need to get better at separating their waste. It’s a crucial mission and everyone needs to be involved, if Greenland is to have a cleaner and greener future. It is estimated that honey bees are the most valuable pollinators of crops worldwide. But in recent years there has been a global trend of honey bees declining in numbers. 
As bees fly out to collect pollen from plants, they pollinate them. In the modern world this also means bringing pesticides back to the hive, which kills them or leaves them vulnerable to disease. In cities they are not exposed to pesticides, so The Project City Bees gives bee populations a helping hand, helps pollinate our world, and produces some of the cleanest honey around. As the source of substantial and rapidly growing greenhouse gas emissions, transport must clearly be part of a global agreement to mitigate climate change. Every winter the gates of Copenhagen's famous Tivoli Gardens, an old-world amusement park in the city centre, open to officially mark the beginning of the extended Christmas period. This December the twinkling lights of Tivoli will most likely be outshone by COP 15 — the most important global climate change meeting ever — as thousands of diplomats, politicians, business people, environmentalists, media and climate experts from around the globe flock to the Danish capital. Cities and towns are highly vulnerable to the impacts of climate change and will need to find innovative ways to adapt. Now is the time to start rethinking urban design and management — yet few have taken concrete action. Barcelona is becoming a leader in solar energy use, Malmö is developing a carbon neutral residential area and London is setting ambitious greenhouse gas reduction targets. Cities are joining in the fight against climate change. 'Our water is shut off once or twice a month, sometimes more,' says Baris Tekin from his apartment in Besiktas, an historic district of Istanbul, where he lives with his wife and daughter. 'We have about 50 litres of bottled water in the apartment for washing and cleaning, just in case. If the water is off for a really long time we go to my father's place or to my wife's parents,' says Baris, an economics professor at Marmara University. 
A fisherman's tale: on the night of 6 October 1986 lobster fishermen from the small town of Gilleleje, north of Copenhagen, fishing the Kattegat Sea, found their nets crammed with Norway lobster. Many of the animals were dead or dying. About half were a strange colour. We already have much information to guide strategic climate change response measures at the EU, national, regional and local levels. But the effectiveness and efficiency of actions can be improved with more and better information.
<urn:uuid:692cdbf8-0475-41ac-babb-df99daf65bd0>
3.03125
1,341
Content Listing
Science & Tech.
44.762501
An international team led by scientists from the Max-Planck-Institute for Radio Astronomy has succeeded in observing the heart of a distant quasar with unprecedented sharpness, or angular resolution. The observations, made by connecting radio telescopes on different continents, are a crucial step towards a dramatic scientific goal: to depict the supermassive black hole at the centre of our own galaxy. To emulate the classical mechanics of physics found in space on full-scale replica spacecraft on Earth requires not only a hefty amount of air to 'float' the object, but a precision, frictionless, large surface area that will allow researchers to replicate the effects of inertia on man-made objects in space. The U.S. Naval Research Laboratory recently got that capability with a one-of-a-kind 75,000 gravity offset table made from a single slab of concrete. Two giant donuts of plasma surround Earth, trapped within a region known as the Van Allen Radiation Belts. The belts lie close to Earth, sandwiched between satellites in geostationary orbit above them and satellites in low Earth orbit generally below them. A new NASA mission called the Radiation Belt Storm Probes, due to launch in August 2012, will improve our understanding of what makes plasma move in and out of these electrified belts wrapped around our planet. Through a labyrinth of hallways deep inside a 1950s-era building that has housed research that dates back to the origins of U.S. space travel, a group of scientists in white coats is stirring, mixing, measuring, brushing and, most important, tasting the end result of their cooking. Their mission: Build a menu for a planned journey to Mars in the 2030s. A NASA-created application that brings some of the agency's robotic spacecraft to life in 3D now is available for free on the iPhone and iPad. Called Spacecraft 3D, the app uses animation to show how spacecraft can maneuver and manipulate their outside components. 
Using computer simulations, researchers from the California Institute of Technology have determined that if the interior of a dying star is spinning rapidly just before it explodes in a magnificent supernova, two different types of signals emanating from that stellar core will oscillate together at the same frequency. This could be a piece of "smoking-gun evidence" that would lead to a better understanding of supernovae. A research team using Hubble’s powerful vision to scour the Pluto system to uncover potential hazards to the New Horizons spacecraft has located yet another satellite to the icy dwarf planet Pluto. The moon is estimated to be irregular in shape, 6 to 15 miles across, and in a co-planar orbit with other moons in the system. Its discovery prompts discussion on how such a complex collection of moons occurred. Technology that helps ground-based telescopes cut through the haze of Earth's atmosphere to get a clearer view of the heavens may also be used to collect better data at cutting-edge X-ray lasers like the Linac Coherent Light Source at SLAC National Accelerator Laboratory. NASA's Kennedy Space Center in Florida has announced a new partnership with Cella Energy Inc. that could result in vehicles being powered by hydrogen. The company has formulated a way to store hydrogen safely in tiny pellets that still allow the fuel to be burned in an engine. Because of its rocket work, Kennedy has the infrastructure and experience necessary to handle hydrogen safely. For the first time, researchers at Aalto University in Finland have located where the sounds associated with the northern lights are created. The auroral sounds that have been described in folktales and by wilderness wanderers are formed about 70 m above the ground level in the measured case. Scientists have, for the first time, directly detected part of the invisible dark matter skeleton of the universe, where more than half of all matter is believed to reside. 
The discovery, led by a University of Michigan physics researcher, confirms a key prediction in the prevailing theory of how the universe's current web-like structure evolved. A team of scientists has created an "MRI" of the sun's interior plasma motions, shedding light on how it transfers heat from its deep interior to its surface. The result upends our understanding of how heat is transported outwards by the sun and challenges existing explanations of the formation of sunspots and magnetic field generation. As a powerful summertime storm, known as a derecho, moved from Illinois to the Mid-Atlantic states on June 29, expanding and bringing high levels of destruction with it, NASA and other satellites provided a look at various factors involved in the event, its progression and its aftermath. Data from NASA's Cassini spacecraft have revealed Saturn's moon Titan likely harbors a layer of liquid water under its ice shell. Researchers saw a large amount of squeezing and stretching as the moon orbited Saturn. They deduced that if Titan were composed entirely of stiff rock, the gravitational attraction of Saturn would cause bulges, or solid "tides," on the moon only 3 ft in height. Spacecraft data show Saturn creates solid tides approximately 30 ft in height, which suggests Titan is not made entirely of solid rocky material. In a bold plan unveiled Thursday, a group of ex-NASA astronauts and scientists wants to launch its own space telescope to spot and track small and mid-sized space rocks capable of wiping out a city or continent. They contend that while astronomers routinely look for planet killers like the one that may have wiped out the dinosaurs, not enough attention is paid to smaller objects. About 800 extra-solar planets have been discovered so far in our galaxy, but the precise masses of the majority of them are still unknown. The only previous way to determine mass was to observe a transit, during which the planet’s host is eclipsed. 
Now, scientist Mercedes López-Morales has, for the first time, determined the mass of a non-transiting planet. In 1969, an exploding fireball tore through the sky over Mexico, scattering thousands of pieces of meteorite across the state of Chihuahua. More than 40 years later, the Allende meteorite is still serving the scientific community as a rich source of information about the early stages of our solar system's evolution. Recently, scientists from the California Institute of Technology discovered a new mineral embedded in the space rock—one they believe to be among the oldest minerals formed in the solar system. Turbulent jet streams, regions where winds blow faster than in other places, churn east and west across Saturn. Scientists have been trying to understand for years the mechanism that drives these wavy structures in Saturn's atmosphere. Recent images from NASA’s Cassini spacecraft has revealed the source from which the jets derive their energy. Scientists have mapped Shackleton crater with unprecedented detail, finding possible evidence for small amounts of ice on the crater's floor. Using a laser altimeter on the Lunar Reconnaissance Orbiter spacecraft, the team essentially illuminated the crater's interior with laser light, measuring its albedo, or natural reflectance. The scientists found that the crater's floor is in fact brighter than that of other nearby craters—an observation consistent with the presence of ice. The European Space Agency's Euclid mission to explore the hidden side of the universe—dark energy and dark matter—reached an important milestone that will see it head towards full construction. The European Space Agency (ESA) assembled a top engineering team then challenged them to devise a way for rovers to navigate on alien planets. Six months later, a fully autonomous vehicle was charting its own course through Chile's Mars-like Atacama Desert. 
In the dead of a Martian winter, clouds of snow blanket the Red Planet's poles—but unlike our water-based snow, the particles on Mars are frozen crystals of carbon dioxide. Most of the Martian atmosphere is composed of carbon dioxide, and in the winter, the poles get so cold—cold enough to freeze alcohol—that the gas condenses, forming tiny particles of snow. Now researchers have calculated the size of snow particles in clouds at both Martian poles from data gathered by orbiting spacecraft. Building a terrestrial planet requires raw materials that weren't available in the early history of the universe. The Big Bang filled space with hydrogen and helium. Chemical elements like silicon and oxygen had to be cooked up over time by stars. But how long did that take? How many of such heavy elements do you need to form planets? Scientists had long observed the unusual properties of lunar topsoil but had not taken much notice of the microparticles and nanoparticles found in the soil and their source was unknown. When these tiny glass bubbles were examined, they differed greatly from what is usually found in similar structures on Earth. During a powerful solar blast on March 7, the Fermi Gamma-ray Space Telescope detected the highest-energy light ever associated with an eruption on the sun. The flare produced such an outpouring of gamma rays—a form of light with even greater energy than X-rays—that the sun briefly became the brightest object in the gamma-ray sky.
<urn:uuid:e4569520-f109-4cad-86e6-1d1bf303b0a2>
3.28125
1,846
Content Listing
Science & Tech.
36.191283
by Donna Hesterman Gainesville, FL (SPX) Sep 21, 2012 An international team of scientists is rewriting a page from the quantum physics rulebook using a University of Florida laboratory once dubbed the coldest spot in the universe. Much of what we know about quantum mechanics is theoretical and tested via computer modeling because quantum systems, like electrons whizzing around the nucleus of an atom, are difficult to pin down for observation. One can, however, slow particles down and catch them in the quantum act by subjecting them to extremely cold temperatures. New research, published in the journal Nature, describes how this freeze-frame approach was recently used to overturn an accepted rule of thumb in quantum theory. "We are in the age of quantum mechanics," said Neil Sullivan, a UF physics professor and director of the National High Magnetic Field Laboratory High B/T Facility on the UF campus - home of the Microkelvin lab where experiments can be conducted in near-absolute zero temperatures. "If you've had an MRI, you have made use of a quantum technology." The magnet that powers an MRI scanner is a superconducting coil transformed into a quantum state by very cold liquid helium. Inside the coil, electric current flows friction free. Quantum magnets and other strange, almost otherworldly occurrences in quantum mechanics could inspire the next big breakthroughs in computing, alternative energy and transportation technologies such as magnetic levitating trains, Sullivan said. But innovation cannot proceed without a proper set of guidelines to help engineers navigate the quantum road. That's where the Microkelvin lab comes in. It is one of the few facilities in the world equipped to deliver the extremely cold temperatures needed to slow what Sullivan calls the "higgledy-piggledy" world of quantum systems at normal temperatures to a manageable pace where it can be observed and manipulated. "Room temperature is approximately 300 kelvin," Sullivan said. 
"Liquid hydrogen pumped into a rocket at the Kennedy Space Center is at 20 kelvin." Physicists need to cool things down to 1 millikelvin, one thousandth of a kelvin above absolute zero, or -459.67 degrees Fahrenheit, to bring matter into a different realm where quantum properties can be explored. One fundamental state of quantum mechanics that scientists are keen to understand more fully is a fragile, ephemeral phase of matter called a Bose-Einstein Condensate. In this state, individual particles that make up a material begin to act as a single coherent unit. It's a tricky condition to induce in a laboratory setting, but one that researchers need to explore if technology is ever to fully exploit the properties of the quantum world. Two theorists, Tommaso Roscilde at the University of Lyon, France, and Rong Yu from Rice University in Houston, developed the underlying ideas for the study and asked a colleague, Armando Paduan-Filho from the University of Sao Paulo in Brazil, to engineer the crystalline sample used in the experiment. "Our measurements definitively tested an important prediction about a particular behavior in a Bose-Einstein Condensate," said Vivien Zapf, a staff scientist at the National High Magnetic Field Laboratory at Los Alamos and a driving force behind the international collaboration. The experiment monitored the atomic spin of subatomic particles called bosons in the crystal to see when the transition to Bose-Einstein Condensate was achieved, and then further cooled the sample to document the exact point where the condensate properties decayed. They observed the anticipated phenomenon when they took the sample down to 1 millikelvin. The crystal used in the experiment had been doped with impurities in an effort to create more of a real world scenario, Zapf said. "It's nice to know what happens in pure samples, but the real world, is messy and we need to know what the quantum rules are in those situations." 
Having performed a series of simulations in advance, they knew that the experiment would require them to generate temperatures down to 1 millikelvin. "You have to go to the Microkelvin Laboratory at UF for that," she said. The lab is housed within the National High Magnetic Field Laboratory High B/T Facility at UF, funded by the National Science Foundation. Other laboratories can get to the extreme temperature required, but none of them can sustain it long enough to collect all of the data needed for the experiment. "It took six months to get the readings," said Liang Yin, an assistant scientist in the UF physics department who operated the equipment in the Microkelvin lab. "Because the magnetic field we used to control the wave intensity in the sample also heats it up. You have to adjust it very slowly." Their findings literally rewrote the rule for predicting the conditions under which the transition would occur between the two quantum states. "All the world should be watching what happens as we uncover properties of systems at these extremely low temperatures," Sullivan said. "A superconducting wire is superconducting because of this Bose-Einstein Condensation concept. If we are ever to capitalize on it for quantum computing or magnetic levitation for trains, we have to thoroughly understand it." University of Florida Understanding Time and Space Comment on this article via your Facebook, Yahoo, AOL, Hotmail login. University of Granada investigates location of the Island of Stability of Super-Heavy Elements Lisbon, Portugal (SPX) Sep 20, 2012 An international research group - with the participation of the University of Granada - has achieved to measure the effects of layers on super-heavy elements, which provides useful data on the nuclear structure of these as-yet undiscovered elements in Nature. These results might be useful to locate the so-called "Island of Stability" introduced by a theory that states the existence of high ... 
<urn:uuid:6cd1fc01-836e-4f05-b111-ee2c66bc6261>
3.015625
1,336
Truncated
Science & Tech.
26.704937
A very high level language (VHLL) is a high-level programming language designed to reduce the complexity and amount of source code required to create a program. VHLLs incorporate higher data and control abstraction abilities. A very high level programming language is also known as a goal-oriented programming language. Sleek and simple, VHLLs support rapid prototyping of software programs and applications. Generally, VHLLs don't require typical variable declarations, and they support autotyping of routine tasks and advanced memory management services. Although originally designed for limited and specific uses, modern very high level languages may be applied to a broad and versatile range of software products and services. VHLL examples include Python and Ruby.
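As an illustration of the brevity VHLLs aim for (my own example, not from the original article), counting word frequencies takes only a couple of lines of Python, with no variable declarations or manual memory management:

```python
from collections import Counter

# A task that would take dozens of lines in a lower-level language:
# split a string into words and count how often each one occurs.
text = "the quick brown fox jumps over the lazy dog the end"
freq = Counter(text.split())
print(freq.most_common(1))  # [('the', 3)]
```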
<urn:uuid:3fcdcfc4-a11c-44b9-8e95-df0ae2ae7783>
2.953125
149
Knowledge Article
Software Dev.
23.331316
An extensively peer-reviewed study published last December in the Journal of Atmospheric and Solar-Terrestrial Physics indicates that observed climate changes since 1850 are linked to cyclical, predictable, naturally occurring events in Earth's solar system with little or no help from us. The research was conducted by Nicola Scafetta, a scientist at Duke University and at the Active Cavity Radiometer Solar Irradiance Monitor Lab (ACRIM), which is associated with the NASA Jet Propulsion Laboratory in California. It takes issue with methodologies applied by the U.N.'s Intergovernmental Panel on Climate Change (IPCC) using "general circulation climate models" (GCMs) that, by ignoring these important influences, fail to reproduce the observed decadal and multi-decadal climatic cycles. As noted in the paper, the IPCC models also fail to incorporate climate-modulating effects of solar changes such as cloud-forming influences of cosmic rays throughout periods of reduced sunspot activity. More clouds tend to make conditions cooler, while fewer often cause warming. At least 50-70% of observed 20th century warming might be associated with increased solar activity witnessed since the "Maunder Minimum" of the late 17th century. Dr. Scafetta's study applies an astronomically based model that reconstructs and correlates known warming and cooling phases with decadal and multi-decadal cycles associated with influences of planetary motions, most particularly those of Jupiter and Saturn. This "astronomical harmonics model" was used to address various cycles lasting 9.1, 10-10.5, 20-21, and 60-62 years. The 9.1-year cycle was shown to be likely related to decadal solar/lunar tidal oscillations, while those of ten years and longer duration relate to planetary movements about the Sun that may have solar influences that modulate electromagnetic properties of Earth's upper atmosphere, which can regulate the cloud system. 
Scafetta’s findings contradict IPCC claims that all warming observed from 1970 to 2000 has been man-made (“anthropogenically-induced”) based upon models that exclude natural quasi 20-year and 60-year climate cycle contributions. These cycles have been clearly detected in all global surface temperature records of both hemispheres since 1850, and are also evident in numerous astronomical records. The 60-year cycle is particularly easy to observe in significant surface temperature maxima that occurred in 1880-1881, 1940-1941, and 2000-2001. These momentarily warmer periods coincided with times when orbital positions of Jupiter and Saturn were relatively close to the Sun and Earth. A 60-year modulation cycle also corresponds with warming/cooling induced in the ocean surface which appears to correlate with the frequency of major Atlantic hurricanes, and is seen in the sea level rise since 1700 as well as in numerous ocean and terrestrial records dating back centuries. Further evidence of a 60-year cycle is referenced in ancient Sanskrit texts among observed monsoon rainfall cycles. Scafetta believes that a natural 60-year climate cycle associated with astronomical cycles may also explain calendars adopted in traditional Chinese, Tamil and Tibetan civilizations, since all major ancient civilizations knew about 20-year and 60-year Jupiter and Saturn cycles. Indeed, Scafetta pointed out to me that in the Hindu tradition, the 60-year cycle is known as the cycle of Brihaspati, the name of Jupiter, and that every 60 years special ceremonies are celebrated by some populations, such as the Sigui ceremony among the Dogon people of Africa. Proper reconstructions of natural 20-year and 60-year cycles, along with other independent studies, indicate that the IPCC has seriously overestimated human climate contributions. 
For example, according to all GCM simulations, increased CO2 concentrations should have produced an increased tropical warming trend with altitude, which is contrary to what balloon and satellite observations actually show. GCM interpretations also allege that volcano activity may have contributed an offsetting 0.1-0.2 degrees of cooling influence from 1970 to 2000. However, that conclusion appears to significantly overestimate the volcano signal because the models predicted deep and large cooling spikes associated with eruptions, whereas the spikes observed in global surface temperature records are much smaller. Accordingly, this too suggests that the 1970-2000 warming effect attributed to anthropogenic influences should be reduced. Moreover, some of the observed 0.5 degrees of warming recorded by surface stations during the 1970-2000 period which IPCC models associated with human greenhouse gas emissions may be explained by improperly corrected urban "heat island" effects and other land-use change influences. Finally, three major available global surface temperature record sources report a steady-to-cooling trend since 2001. These measurements contradict the strong warming predicted by all IPCC models during the same period, attributed primarily to a continuing increase in CO2 emissions. Indeed, only one global surface record source shows a slight increase in the temperature since 2001. This occurred because missing temperature data needed to be adjusted or filled in to complete the records, which appears to be the case with NASA Goddard Institute for Space Studies model data resulting from poor sampling during the last decade for Antarctic and Arctic regions and the use of a 1200 km smoothing methodology. 
The Duke University/NASA JPL study estimates that as much as 0.3 degrees of warming from 1970 to 2000 may have been naturally induced by the 60-year modulation during the warming phase, amounting to at least 43-60% of the 0.5-0.7 degrees allegedly caused by human greenhouse emissions. Additional natural warming can be explained by increased solar activity during the last four centuries, as well as simply being part of a natural and persistent warming recovery since the end of the Little Ice Age of AD 1300-1900.

Nicola Scafetta concludes that the scientific method requires that a physical model fulfill two conditions…it must be able to reconstruct as well as predict (or forecast) direct physical observations. Here, he argues that all climate models used by the IPCC can do neither. “They seriously fail to properly reconstruct even the large multi-decadal oscillations found in the global surface temperature which have climatic meaning. Consequently, the IPCC projections for the 21st century cannot be trusted.” In fact, he argues that “By not properly reconstructing the 20-year and 60-year natural cycles we found that the IPCC GCMs have seriously overestimated also the magnitude of the anthropogenic contribution to recent warming.” Unlike the current IPCC models, the astronomical harmonics model can have real climate forecasting value.
By combining current trend information with natural cycle patterns, Scafetta believes that the global temperature “may not significantly increase during the next 30 years mostly because of the negative phase of the 60-year cycle.” He goes on to say: “If multi-secular natural cycles (which according to some authors have significantly contributed to the observed 1700-2010 warming and may contribute to an additional natural cooling by 2100) are ignored, the same projected anthropogenic emissions would imply a global warming by about 0.3-1.2 degrees C by 2100, contrary to the IPCC 1.0-3.6 degree C projected warming.”

Scafetta projects that the global climate may remain approximately steady until 2030-2040 (as was observed from the 1940s to the 1970s) because the 60-year cycle entered into its current cooling phase around 2000-2003. The climate may further cool if additional natural long and short-term cycles also enter into cooling phases. In fact, the present warm period may well be at the top of a natural millennial cycle, as previously occurred during Roman and Medieval times.

When I asked Nicola how confident he is about his prognosis, he responded: “Of course there is a need to wait and see, and as I say in the paper, additional cycles may be needed for a better forecast. After all, ocean tides are currently predicted with 30-40 astronomical harmonic constituents, while in the proposed model I used only four harmonics. However, in the paper I did show that once the proposed model was calibrated from 1850 to 1950, it has been able to reproduce the decadal and multi-decadal modulation of the temperature observed from 1950 to 2011, and vice versa. Since 2000 the model has well captured the steady-to-cooling trend shown in temperature data, while all IPCC GCMs have failed the prognosis by predicting steady warming.” So, as the old expression goes…time will tell.
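The harmonic-constituent approach Scafetta describes — summing a handful of cosine cycles, as tide tables sum astronomical constituents — can be sketched in a few lines. The periods, amplitudes, and peak years below are illustrative placeholders of mine, not his fitted values:

```python
import math

def harmonic_model(year, harmonics):
    """Sum of cosine cycles; each harmonic is (period_yr, amplitude_degC, peak_yr).

    Mirrors the tide-prediction idea of summing constituents -- here only
    a few, versus the 30-40 used operationally for ocean tides.
    """
    return sum(amp * math.cos(2 * math.pi * (year - peak) / period)
               for period, amp, peak in harmonics)

# Illustrative 60-year and 20-year cycles, both peaking near 2001,
# when the article notes a surface temperature maximum occurred.
cycles = [(60.0, 0.15, 2001.0), (20.0, 0.05, 2001.0)]
print(round(harmonic_model(2001.0, cycles), 2))  # both cycles at peak -> 0.2
print(round(harmonic_model(2031.0, cycles), 2))  # 60-yr trough -> negative (cooling phase)
```

A real fit would estimate the amplitudes and phases by regression against the temperature record over a calibration window, then check the model against the withheld years, as the article describes.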
And assuming that his model prediction will prove to be correct, let’s enjoy basking in warmth while it lasts.
Core Text is an advanced, low-level technology for laying out text and handling fonts. It is designed for high performance and ease of use. The Core Text API, introduced in OS X v10.5, is accessible from all OS X application environments. It is also available in iOS 3.2.

The Core Text layout engine is designed specifically to make simple text layout operations easy to do and to avoid side effects. The Core Text font programming interface is complementary to the Core Text layout engine and is designed to handle Unicode fonts natively, unifying disparate OS X font facilities into a single comprehensive programming interface.

This document is intended for developers who need to do text layout and font handling at a low level. If you can develop your application using higher-level constructs, such as NSTextView, then you should use the Cocoa text system, introduced in Text System Overview. If, on the other hand, you need to render text directly into a Core Graphics context, then you should use Core Text. More information about the position of Core Text among other OS X text technologies is presented in “OS X Text Technologies.”

Organization of This Document

This document is organized into the following chapters:

“Core Text Overview” describes the Core Text system in terms of its design goals and feature set. It also introduces the opaque types that encapsulate the text layout and font handling capabilities of the system.

“Common Operations” presents snippets of code with commentary illustrating typical uses of the main Core Text opaque types.

In addition to this document, there are several that cover more specific aspects of Core Text or describe the software services used by Core Text.

Core Text Reference Collection provides complete reference information for the Core Text layout and font API.

CoreTextTest is a sample code project that shows how to use Core Text in the context of a complete Carbon application.

CoreTextArcCocoa is a sample code project that illustrates the use of fonts, lines, and runs in a Core Text Cocoa application.

Core Foundation Design Concepts and Core Foundation Framework Reference describe Core Foundation, a framework that provides abstractions for common data types and fundamental software services used by Core Text.

The following documents provide entry points to the documentation describing the Cocoa text system.

Text System Overview gives an introduction to the Cocoa text system.

Text Layout Programming Guide for Cocoa describes the Cocoa text layout engine.

© 2010 Apple Inc. All Rights Reserved. (Last updated: 2010-03-03)
A Lisp program consists of expressions or forms (see Forms). We control the order of execution of these forms by enclosing them in control structures. Control structures are special forms which control when, whether, or how many times to execute the forms they contain. The simplest order of execution is sequential execution: first form a, then form b, and so on. This is what happens when you write several forms in succession in the body of a function, or at top level in a file of Lisp code—the forms are executed in the order written. We call this textual order. For example, if a function body consists of two forms a and b, evaluation of the function evaluates first a and then b. The result of evaluating b becomes the value of the function. Explicit control structures make possible an order of execution other than sequential. Emacs Lisp provides several kinds of control structure, including other varieties of sequencing, conditionals, iteration, and (controlled) jumps—all discussed below. The built-in control structures are special forms since their subforms are not necessarily evaluated or not evaluated sequentially. You can use macros to define your own control structure constructs (see Macros).
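A small sketch (mine, not from the manual) of the two ideas above — textual-order sequencing, and a special form that decides whether its subforms run at all:

```elisp
;; Textual order: `progn' evaluates each form in sequence and
;; returns the value of the last one.
(progn
  (setq x 2)        ; first form
  (setq y (* x 3))  ; second form; sees the effect of the first
  (+ x y))          ; => 8, the value of the whole progn

;; A special form need not evaluate all of its subforms:
;; only one branch of `if' is ever executed.
(if (> x 1)
    (message "x is big")   ; runs when the test is non-nil
  (message "x is small"))  ; otherwise this branch runs instead
```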
Metadata record for data from ASAC Project 244 See the link below for public details on this project. From the abstracts of the referenced papers: Photoinhibition of Grimmia antarctici (Grimmia is now known as Schistidium) during the summer at Casey, East Antarctica, was indicated by a reduction in photosynthetic capacity (light saturated photosynthetic rate), photosynthetic efficiency (photon ... yield of O2 evolution), photochemical quantum yield (ratio of variable to maximum fluorescence) and rate of fluorescence quenching when plants were exposed to moderate light at low temperature. We suggest that photoinhibition is a major factor limiting bryophyte productivity in Antarctic ecosystems. Variation in leaf pigmentation from green to ginger is observed for Ceratodon purpureus (Hedw.) Brid. in Antarctica. Electron microscopy of ginger and green leaves reveals less thylakoid stacking, a response to greater light exposure, in the ginger leaves. In extremely exposed sites C. purpureus has low chlorophyll a/b ratios which correlate with decreased 77K chlorophyll fluorescence, indicating damage to chlorophyll a. Pigment analysis of ginger moss shows that even when the chlorophyll a/b ratio has not decreased the pigment composition differs from green moss. The increase in anthocyanin and decrease in chlorophyll concentrations largely account for the visual change from green to ginger. The ratio of total carotenoid to chlorophyll varies from 0.35 in green moss to 0.55 in the ginger moss, with violaxanthin increased preferentially. Since these changes in pigmentation are consistent with photoprotection and they are linked to light-dependent variations in chloroplast structure, it appears that photoprotective pigments are a useful adaptation for the bright Antarctic environment.
Can you find a relationship between the number of dots on the circle and the number of steps that will ensure that all points are A game in which players take it in turns to choose a number. Can you block your opponent? A game that tests your understanding of remainders. Make a line of green and a line of yellow rods so that the lines differ in length by one (a white rod) List any 3 numbers. It is always possible to find a subset of adjacent numbers that add up to a multiple of 3. Can you explain why and prove it? Find some examples of pairs of numbers such that their sum is a factor of their product. eg. 4 + 12 = 16 and 4 × 12 = 48 and 16 is a factor of 48. A three digit number abc is always divisible by 7 when 2a+3b+c is divisible by 7. Why? Given the products of adjacent cells, can you complete this Sudoku? Choose any 3 digits and make a 6 digit number by repeating the 3 digits in the same order (e.g. 594594). Explain why whatever digits you choose the number will always be divisible by 7, 11 and 13. Imagine we have four bags containing a large number of 1s, 4s, 7s and 10s. What numbers can we make? Follow this recipe for sieving numbers and see what interesting patterns emerge. For this challenge, you'll need to play Got It! Can you explain the strategy for winning this game with any target? Play the divisibility game to create numbers in which the first two digits make a number divisible by 2, the first three digits make a number divisible by 3... Rectangles are considered different if they vary in size or have different locations. How many different rectangles can be drawn on a chessboard? This package contains a collection of problems from the NRICH website that could be suitable for students who have a good understanding of Factors and Multiples and who feel ready to take on some. . . . Take any pair of numbers, say 9 and 14. Take the larger number, fourteen, and count up in 14s. Then divide each of those values by the 9, and look at the remainders. 
A mathematician goes into a supermarket and buys four items. Using a calculator she multiplies the cost instead of adding them. How can her answer be the same as the total at the till? What is the smallest number with exactly 14 divisors? Make a set of numbers that use all the digits from 1 to 9, once and once only. Add them up. The result is divisible by 9. Add each of the digits in the new number. What is their sum? Now try some. . . . Find some triples of whole numbers a, b and c such that a^2 + b^2 + c^2 is a multiple of 4. Is it necessarily the case that a, b and c must all be even? If so, can you explain why? Find the highest power of 11 that will divide into 1000! exactly. In this activity, the computer chooses a times table and shifts it. Can you work out the table and the shift each time? Which pairs of cogs let the coloured tooth touch every tooth on the other cog? Which pairs do not let this happen? Why? Here is a machine with four coloured lights. Can you develop a strategy to work out the rules controlling each light? What is the smallest number of answers you need to reveal in order to work out the missing headers? Can you find a way to identify times tables after they have been shifted up? Find a cuboid (with edges of integer values) that has a surface area of exactly 100 square units. Is there more than one? Can you find them all? How many integers between 1 and 1200 are NOT multiples of any of the numbers 2, 3 or 5? A game for two people, or play online. Given a target number, say 23, and a range of numbers to choose from, say 1-4, players take it in turns to add to the running total to hit their target. What is the remainder when 2^2002 is divided by 7? What happens with different powers of 2? The number 12 = 2^2 × 3 has 6 factors. What is the smallest natural number with exactly 36 factors? Given the products of diagonally opposite cells - can you complete A student in a maths class was trying to get some information from her teacher. 
She was given some clues and then the teacher ended by saying, "Well, how old are they?" Some 4 digit numbers can be written as the product of a 3 digit number and a 2 digit number using the digits 1 to 9 each once and only once. The number 4396 can be written as just such a product. Can. . . . Complete the following expressions so that each one gives a four digit number as the product of two two digit numbers and uses the digits 1 to 8 once and only once. Explain why the arithmetic sequence 1, 14, 27, 40, ... contains many terms of the form 222...2 where only the digit 2 appears. A number N is divisible by 10, 90, 98 and 882 but it is NOT divisible by 50 or 270 or 686 or 1764. It is also known that N is a factor of 9261000. What is N? A challenge that requires you to apply your knowledge of the properties of numbers. Can you fill all the squares on the board? Do you know a quick way to check if a number is a multiple of two? How about three, four or six? 6! = 6 x 5 x 4 x 3 x 2 x 1. The highest power of 2 that divides exactly into 6! is 4 since (6!) / (2^4 ) = 45. What is the highest power of two that divides exactly into 100!? I put eggs into a basket in groups of 7 and noticed that I could easily have divided them into piles of 2, 3, 4, 5 or 6 and always have one left over. How many eggs were in the basket? What can you say about the values of n that make $7^n + 3^n$ a multiple of 10? Are there other pairs of integers between 1 and 10 which have similar properties? The clues for this Sudoku are the product of the numbers in Find the number which has 8 divisors, such that the product of the divisors is 331776. Ben passed a third of his counters to Jack, Jack passed a quarter of his counters to Emma and Emma passed a fifth of her counters to Ben. After this they all had the same number of counters. A collection of resources to support work on Factors and Multiples at Secondary level. Can you find any perfect numbers? Read this article to find out more... 
What is the value of the digit A in the sum below: [3(230 + A)]^2 = What is the largest number which, when divided into 1905, 2587, 3951, 7020 and 8725 in turn, leaves the same remainder each time? Data is sent in chunks of two different sizes - a yellow chunk has 5 characters and a blue chunk has 9 characters. A data slot of size 31 cannot be exactly filled with a combination of yellow and. . . .
I’ve been fortunate enough to travel around the world as TIME’s environment correspondent: to rainforests, Himalayan mountains, coral reefs. But far and away the most singular spot I ever visited was the far north of Greenland. I saw vast flat ice sheets that spread in all directions, without end. I saw icebergs that could dwarf an ocean liner, suspended in the purest cerulean blue. I saw glaciers so vast as to seem indestructible—and I heard the earth-shattering crack when they broke apart. As the environmental photographer James Balog shows in his new book Ice: Portraits of Vanishing Glaciers, the deep ice of the poles is far from invulnerable. Balog’s breathtaking pictures show how erosion and melting temperatures take their toll on the glaciers, shrinking and carving them. The result is dazzling to look at, but terrifying for the planet. As glaciers melt, more and more water flows into the oceans, raising sea levels. This July scientists were surprised to see the entire Greenland ice sheet essentially turn to mush for a few days in an unusually rapid thaw. Arctic sea ice melted this summer to its smallest extent in decades. The day could well come when the only place we will be able to see glaciers is in photographs like those in Ice.
Leaving a Smaller Footprint - Students work to offset carbon emissions generated by class field trip

May 27, 2011

Three vans travelling 3,000 miles can leave an awfully big carbon footprint—a 6.2-metric-ton footprint, to be exact. That massive footprint was the one potential downside to the 10-day field trip that was the capstone of UI professor Art Bettis’ spring semester geoscience course, Geology Field Trip: Selected National Parks. The College of Liberal Arts and Sciences course, which met once a week during the spring semester, focused on the geologic, biologic, and cultural resources of the National Park System, as well as management and environmental issues. It concluded with a 10-day trip to Florisant Fossil Beds and Great Sand Dunes National Parks in Colorado and Tent Rocks National Monument and Valles Caldera National Preserve in New Mexico.

So, at the beginning of the semester, Bettis issued a challenge to the 15 students in his class: find a way to offset that carbon in order to neutralize the environmental impact of the trip. He didn’t require the students to participate in the challenge, but every one of them wanted to get involved.

To read the complete article by Anne Kapler in Spectator, a monthly newsletter for alumni, visit http://spectator.uiowa.edu/2011/may/smallerfootprint.html
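As a back-of-the-envelope check on the 6.2-metric-ton figure, the arithmetic can be sketched as below. The fuel economy and per-gallon emission factor are my assumptions (roughly what a loaded passenger van and the commonly cited ~8.9 kg of CO2 per gallon of gasoline would give), not numbers from the article:

```python
def trip_co2_metric_tons(vans, miles_each, mpg=13.0, kg_co2_per_gallon=8.9):
    """Estimate trip CO2: total gallons burned times emissions per gallon."""
    gallons = vans * miles_each / mpg
    return gallons * kg_co2_per_gallon / 1000.0  # kg -> metric tons

# Three vans covering 3,000 miles each lands near the article's 6.2 t figure.
print(round(trip_co2_metric_tons(3, 3000), 1))  # -> 6.2
```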
I have a strand of LED lights that need 12 volts to push the color lights and 15 volts to push an all white strand. I have no knowledge of how electricity works. How can I run these lights using regular batteries? These lights are my portable samples so I need something small. These lights can run off 12 volts 24/7 and use 1/4 volt a month (so I'm told) - Art (age 46)

Sure, you can run LEDs off ordinary batteries. It sounds like 8 standard 1.5 V batteries in series (end-to-end) will work for one set, and 10 for the other. How long will the batteries last? My memory of LEDs is that a typical one fairly brightly lit will draw (very roughly) about 0.01 Amp of current. If that's right, then each battery will supply about 0.015 W of power. That's a lot less than a battery supplies in a regular flashlight. My guess is that even with AA batteries, you should be able to run for days. If you need longer between battery changes, and if you have room, you could switch to C or D batteries. I have no idea what "use 1/4 volt a month" means. Maybe it was some other unit? (published on 10/22/2007)
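The series-battery and run-time arithmetic in the answer can be sketched as follows. The ~2000 mAh AA capacity is my assumption (a typical alkaline figure), not part of the original answer:

```python
import math

def cells_needed(strand_volts, cell_volts=1.5):
    """1.5 V cells wired end-to-end (series): voltages add."""
    return math.ceil(strand_volts / cell_volts)

def runtime_hours(cell_capacity_mah, led_current_ma):
    """Series cells all carry the same current, so capacity doesn't add."""
    return cell_capacity_mah / led_current_ma

print(cells_needed(12))          # 8 cells for the color strand
print(cells_needed(15))          # 10 cells for the all-white strand
print(runtime_hours(2000, 10))   # 200.0 hours, i.e. roughly eight days
```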
EEGs can now record brainwaves without the need for electrodes to be inserted into the brain or even for them to be placed on the scalp. The figure shows a brainwave trace for periods when the eyes are open and when they are closed. The red regions respond to the alpha wave (eyes closed) at a frequency of around 9 Hz. (Courtesy University of Sussex Physical Electronics and Instrumentation Group)

Reported by: Harland et al., Applied Physics Letters, 21 October 2002
Lunar Dust Bowl

Despite evidence from two space probes in the 1990s, radar astronomers say they can find no signs of thick ice at the moon's poles. If there is water at the lunar poles, the researchers say, it is widely scattered and permanently frozen inside the dust layers, something akin to terrestrial permafrost.

Lunar Clementine mission shows the South Pole of the Moon. The permanently shadowed region center showed earlier evidence of meteor cratering and ice never exposed to direct sunlight, but Arecibo radar reveals dust. Credit: NASA/DOD Clementine

Using the 70-centimeter (cm)-wavelength radar system at the National Science Foundation's (NSF) Arecibo Observatory, Puerto Rico, the research group sent signals deeper into the lunar polar surface -- more than five meters (about 5.5 yards) -- than ever before at this spatial resolution. "If there is ice at the poles, the only way left to test it is to go there directly and melt a small volume around the dust and look for water with a mass spectrometer," says Bruce Campbell of the Center for Earth and Planetary Studies at the Smithsonian Institution. Campbell is the lead author of an article, "Long-Wavelength Radar Probing of the Lunar Poles," in the Nov. 13, 2003, issue of the journal Nature. His collaborators on the latest radar probe of the moon were Donald Campbell, professor of astronomy at Cornell University; J.F. Chandler of Smithsonian Astrophysical Observatory; and Alice Hine, Mike Nolan and Phil Perillat of the Arecibo Observatory, which is managed by the National Astronomy and Ionosphere Center at Cornell for the NSF.

Suggestions of lunar ice first came in 1996 when radio data from the Clementine spacecraft gave some indications of the presence of ice on the wall of a crater at the moon's south pole. Then, neutron spectrometer data from the Lunar Prospector spacecraft, launched in 1998, indicated the presence of hydrogen, and by inference, water, at a depth of about a meter at the lunar poles.
But radar probes by the 12-cm-wavelength radar at Arecibo showed no evidence of thick ice at depths of up to a meter. "Lunar Prospector had found significant concentrations of hydrogen at the lunar poles equivalent to water ice at concentrations of a few percent of the lunar soil," says Donald Campbell. "There have been suggestions that it may be in the form of thick deposits of ice at some depth, but this new data from Arecibo makes that unlikely." Says Bruce Campbell, "There are no places that we have looked at with any of these wavelengths where you see that kind of signature."

The Arecibo radio telescope is currently the largest single-dish telescope in the world used in radio astronomy. In 1974, Arecibo was used to broadcast a message from Earth to the globular star cluster M13. Credit: NAIC - Arecibo Observatory, David Parker / Science Photo Library

The Nature paper notes that if ice does exist at the lunar poles it would be considerably different from "the thick, coherent layers of ice observed in shadowed craters on Mercury," found in Arecibo radar imaging. "On Mercury what you see are quite thick deposits on the order of a meter or more buried by, at most, a shallow layer of dust. That's the scenario we were trying to nail down for the moon," says Bruce Campbell. The difference between Mercury and the moon, the researchers say, could be due to the lower average rate of comets striking the lunar surface, to recent comet impacts on Mercury or to a more rapid loss of ice on the moon. What makes the lunar poles good cold traps for water is a temperature of minus 173 degrees Celsius (minus 280 degrees Fahrenheit). The limb of the sun rises only about two degrees above the horizon at the lunar poles so that sunlight never penetrates into deep craters, and a person standing on the crater floor would never see the sun.
The Arecibo radar probed the floors of two craters in permanent shadow at the lunar south pole, Shoemaker and Faustini, and, at the north pole, the floors of Hermite and several small craters within the large crater Peary. In contrast, Clementine focused on the sloping walls of Shackleton crater, whose floor can't be "seen" from Earth. "There is a debate on how to interpret data from a rough, tilted surface," says Bruce Campbell.

The Arecibo radar probe is a particularly good detector of thick ice because it takes advantage of a phenomenon known as "coherent backscatter." Radar waves can travel long distances without being absorbed in ice at temperatures well below freezing. Reflections from irregularities inside the ice produce a very strong radar echo. In contrast, lunar soil is much more absorptive and does not give as strong a radar echo.

The Moon is believed to play an important role in Earth's habitability. Because the Moon helps stabilize the tilt of the Earth's rotation, it prevents the Earth from wobbling between climatic extremes. Without the Moon, seasonal shifts would likely outpace even the most adaptable forms of life.

SMART mission surveys the moon.

In addition, because our moon is lifeless, it is one of the most appealing places to look for the preserved records of life elsewhere. At least according to recent estimates for the amount of ejected rocks that might survive there, the Moon may hold clues from the early history of Mars, Venus and Earth. For more than 40 years, the Moon has been visited by automated space probes and by nine manned expeditions, six of which landed on its surface. On the night of September 27th, Europe's lunar probe called SMART-1 launched on a technology demonstration mission to the moon. Much remains to be learnt about our closest neighbour, and SMART-1's payload will conduct observations never performed before in such detail.
The Advanced/Moon Micro-Imaging Experiment (AMIE) miniaturised CCD camera will provide high-resolution and high-sensitivity imagery of the surface, even in poorly lit polar areas. The highly compact infrared spectrometer will map lunar materials and look for water and carbon dioxide ice in permanently shadowed craters.

Recent Lunar Timelines
- Japanese Hiten, Lunar Flyby and Orbiter
- Michael Rampino and Richard Strothers propose Earth could be periodically struck by comets dislodged from orbits when the solar system passes through galactic plane
- US Dept. Defense/NASA Clementine mission, Lunar Orbiter/Attempted Asteroid Flyby
- First commercial lunar mission, AsiaSat 3/HGS-1, Lunar Flyby
- Lunar Prospector launches and enters lunar orbit
- Lunar Prospector tries to detect water on the Moon (polar impact)
- Lunar soil samples and computer models by Robin Canup and Erik Asphaug support impact origin of moon
- SMART 1, to launch lunar orbiter and test solar-powered ion drive for deep space missions
- Japanese Lunar-A, Lunar Mapping Orbiter and Penetrator, to fire two bullets 3 meters into the lunar soil near Apollo 12 and 14 sites
- Japanese SELENE Lunar Orbiter and Lander, to probe the origin and evolution of the moon

Related Web Pages
Making the Moon
Review of Theories of Moon-Forming Impact (Planetary Science Institute)
Big Bang, New Moon (SwRI)
Center for Earth and Planetary Studies
Ion Drive to the Moon
SMART-1: Chips Off the Terrestrial Block
Treasures from the Lunar Attic
End of an Era, Dawn of Another?
by Robert Stonefield

There is a significant weather event that happens every winter but is usually an afterthought: high winds. High winds are more common during the winter months, especially if you live in the higher elevations. These high winds are linked to cold frontal passages (Fig. 1) and developing coastal lows called Nor'easters (Fig. 2) off the Mid-Atlantic or New England coast. In the wake of a cold front, strong pressure rises and an increasing westerly low level jet will bring strong and gusty winds to the area. In the case of a Nor’easter, the pressure gradient between the deepening coastal low and high pressure to the west tightens and generates strong west to northwest winds across the area. Sustained winds of 20-40 mph with gusts up to 60 mph or more are common during these high wind events, especially across the higher terrain. Accompanying these winds are very cold temperatures. Combining the winds and cold temperatures, wind chill values usually drop into the single digits, sometimes below zero, across the mountains.

During these significant events, the National Weather Service (NWS) will issue high wind watches, warnings or advisories. High wind watches are issued when the risk of a high wind event (>/=40 mph sustained for 1 hour or more, or >/=58 mph of any duration) is significant in the 12 to 48 hour time frame, but occurrence, location, severity, or timing is uncertain. High wind warnings are issued when winds of >/=40 mph sustained for 1 hour or more, or >/=58 mph of any duration, are occurring, imminent, or have a significant probability of occurrence within 36 hours. Advisories are issued for wind events not quite as strong as the high wind thresholds that have a significant probability of occurrence in the first 36 hours. Wind advisory criteria are 31-39 mph sustained for 1 hour or more, or 46-57 mph of any duration.
These events are defined as non-life-threatening by themselves, but they could become life-threatening if caution is not exercised.

Figure 1. Cold frontal passage (blue) with wind direction (red).

Figure 2. Developing low pressure system off the Mid Atlantic coast with wind direction (blue).

Figure 3. WFO Blacksburg County Warning area geographical breakdown.

High wind events (cold front and developing low pressure systems) have occurred every year (Fig. 4) since the beginning of this study (1993). The largest number of high wind reports from a cold frontal passage in one year came in 2003, with 91 reports. The largest number of reports in a year from a developing coastal low came in 2007, with 93 reports.

Figure 4. Cold front and developing coastal low high wind reports by year.

Generally, the stronger cold fronts do not cross the region until the winter months (Fig. 5) of December, January and February. From time to time, these strong systems can pass across the region as early as October and as late as May.

Figure 5. Cold fronts and developing coastal lows high wind reports by month.

High winds have been reported at all hours (Fig. 6) of the day and night. Typically, the high wind reports overnight are along and west of the Blue Ridge of southwest Virginia and northwest North Carolina. The winds are generally the strongest in the morning just after sunrise and in the afternoon when mixing occurs.

Figure 6. Cold fronts and developing coastal lows high wind reports by time of day.

During a cold front or developing coastal low high wind event, most high wind reports are along the Blue Ridge of southwest Virginia and northwest North Carolina (Fig. 7). With a developing coastal low, a moderate showing of high wind reports extends further east of the Blue Ridge and into the piedmont counties.

Figure 7. Cold fronts and developing coastal lows high wind reports by county.

On February 10, 2008, an exceptionally strong wind event occurred with the passage of a cold front.
Hurricane-force wind gusts of 74 mph or more were reported across some mountain locations. All 40 counties across Blacksburg's area of responsibility reported numerous power outages, large trees being uprooted, and property damage. Power lines downed by falling trees and limbs sparked several wildfires across the area. The three largest wildfires were Little Cuba (2,700 acres) in Craig County, Black Horse (1,500 acres) in Bedford County, and Green Ridge Mountain (about 4,000 acres) in Roanoke County. The Black Horse fire in Bedford County (Figures 8-10) was started by an all-terrain vehicle operated on a restricted trail. It took state and local agencies and National Guard soldiers three days to bring these fires under control; rain falling on the third day was a big help.

Figure 8/9. Black Horse wildfire in Bedford County.
Figure 10. Infrared satellite image displaying wildfires across the region on February 10, 2008.
Convection in Thunderstorms
Building Atmospheric Giants

The up and down motions associated with convection help fuel monstrous thunderstorms. A thunderstorm feeds off the warm air underneath it. Warm air near the ground rises because it is less dense. When the air reaches the base of the cloud, water vapor in the air condenses and builds onto the cloud. As the water vapor condenses it releases heat, which warms the surrounding air; that air now rises because it is less dense, and the process continues again and again. The air inside a cloud continuously rises and falls, much like a pot of boiling water. The cloud continues to build on itself until it reaches the tropopause, the point 10-12 kilometers above the ground where the atmosphere becomes stable. The tropopause acts like a lid and forces the cloud to spread out at the top, which is why thunderstorms sometimes have an anvil shape. The thunderstorm will continue to grow as long as it has a source of warm air underneath it. Once the supply of warm air is cut off, such as when falling rain cools the air under the cloud, the massive cloud will dissipate.
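The "rises because it is less dense" step follows directly from the ideal gas law: at a fixed pressure, warmer air has lower density. A quick back-of-the-envelope check (the function and temperatures are illustrative, not from the article):

```python
def dry_air_density(temp_c, pressure_pa=101325.0):
    """Dry-air density from the ideal gas law, rho = P / (R_d * T),
    with R_d = 287.05 J/(kg*K), the specific gas constant for dry air."""
    R_d = 287.05
    return pressure_pa / (R_d * (temp_c + 273.15))

environment = dry_air_density(20.0)  # surrounding air at 20 C
parcel = dry_air_density(25.0)       # sun-warmed parcel at 25 C
# The warmer parcel is a few percent less dense, so it is buoyant and rises.
```

The same relation explains why the rising stops at the tropopause: once the surrounding air is no longer cooler (denser) than the parcel, the buoyant forcing vanishes.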
Color composite infrared image using adaptive optics on the Gemini North telescope. Dust obscures this star forming region at optical wavelengths but is visible at longer infrared wavelengths. (Resolution = 0.12 arcseconds FWHM) Photo Credit: Gemini Observatory, US National Science Foundation, and University of Hawaii Institute for Astronomy

Gemini North image at "J", 1.25 µm: the image on the left shows G45.45+0.06 using light with a wavelength slightly longer (redder) than is visible to the human eye. Gemini North image at "K", 2.2 µm: the image on the right was taken using even redder light, revealing the power of infrared radiation for exploring star forming regions.

Most of the stars in this cluster are not seen at visible wavelengths. They are still buried within the large cloud of dust and gas out of which they formed; the dust absorbs the visible light from the stars while some of the redder, infrared light escapes. The reddest objects are likely to be more deeply buried in the cloud and may still be in the process of accumulating material from the cloud to form a star. The cluster is designated G45.45+0.06. The brightest star (at lower left) is in the foreground and is not part of the cluster. The diffuse infrared light seen in this image is both starlight reflected off dust particles in the cloud and glowing hydrogen gas heated by the most massive young stars in the cluster. The three bright stars near the lower right hand corner of the image may be responsible for most of the hydrogen heating; they are about 10 times more massive than our sun and more than 100,000 times brighter. Image analysis, combined with follow-up spectroscopy, will enable astronomers to determine such things as the ages of the stars in the cluster, the principal heating sources, and what may cause an interstellar cloud to collapse and form new stars.
This image was obtained with the Gemini North telescope on Mauna Kea using the University of Hawaii's infrared camera, QUIRC, with the Adaptive Optics system called Hokupa'a. Adaptive optics systems use deformable mirrors to correct for the effects of atmospheric distortions of starlight resulting in significantly sharper images. The United States National Science Foundation has provided financial support to the University of Hawai'i Adaptive Optics Program. Color Composite: "H" band (Blue); "K" band (Green); Narrow band Brgamma filter (Red). Technical Information for the Observations Back to the Gemini North Dedication Page
Turning Over a New Leaf Researchers are trying to copy a trick of nature by turning sunlight into fuel. By Nicola Jones Plants perform a kind of magic: they take the basic ingredients of a little sunlight, water and air, and turn them into fuel to grow leaves and fruit. If only mankind could do the same thing — turning those ephemeral ingredients not into the kind of fuel that produces tomatoes, but the kind that turns on lights and powers a car... Want to share your thoughts on this article? Write to us at email@example.com
1. Common types of shells include sea shells, snail shells, turtle shells and eggshells.
2. There are more than 50,000 varieties of mollusk shells. Some open like clam shells, while others are shaped in a spiral and have a single hole where the animal enters and exits the shell.
3. Seashells are the external skeletons of a class of marine animals called mollusks. People and other mammals have their skeletons on the inside of their bodies, but mollusks have their skeletons on the outside.
4. Seashells are primarily made of calcium.
5. Shells protect the creatures from predators, strong currents and storms.
6. Shells also help camouflage some animals.
7. Some seashells have holes in them. The holes were made by predators who drilled or chipped their way through the shell to get at the animal inside.
8. Shells are big business. Shells are sold in tourist attractions near oceans, and jewelry and adornments for clothing or household items are sold all over the world.
9. Ancient peoples cleaned out and removed the living organisms from the shells, and used them as containers for food and water.
10. Hermit crabs use discarded mollusk shells for self-protection. As a hermit crab grows, it will look for larger shells to use for protection.
Within the last three decades there have been large wildfires consuming the forests in nearly all the mountain ranges above the desert southwest. Just below the ranges, the lack of rain combined with invasive species has caused additional wildfires that have devastated portions of the Sonoran Desert. The link between drought and fire has prehistoric roots, and host David Yetman and Tom Swetnam from the University of Arizona's Laboratory of Tree-Ring Research travel through the desert to higher elevations that contain evidence of drought, fire, and civilization. There is evidence that droughts drove early civilizations out of their dwellings on the Colorado Plateau and forced them to move nearer to the Rio Grande River. Yetman also ventures through a dog-hair thicket that has become dangerous because of previous land management practices and the lack of regular fire to regulate its growth. Additionally featured in this episode is a hike through Organ Pipe Cactus National Monument to see how scientists study the adaptability of desert plants to long-term and short-term droughts. Visit the Website: http://originals.azpm.org/thedesertspeaks/ Episode #1612 / Length: 26 minutes
Lab scientists have developed a rugged, inexpensive neutron detector—made largely of plastic—that could be mass-produced to provide more-widespread border screening for nuclear contraband.

Government agencies are currently fielding neutron detectors at seaports, airports, rail yards, and border crossings to detect contraband plutonium from its neutron emissions. The aim is to foil terrorist attempts to smuggle a plutonium-fueled nuclear bomb or its plutonium parts into the country.

Detonating a nuclear bomb in a city would be devastating. But preventing such an attack is not easy because there are so many entry points to the United States. Each year, 7 million freight containers are unloaded at nearly 400 seaports; 800,000 commercial airline flights and 130,000 private flights land on U.S. soil; and 11 million trucks and 2 million railroad cars enter the country from Canada and Mexico. At each of the fifty or more vehicular border crossings, there are at least ten traffic lanes. To cover all these entry points would require several thousand neutron detectors, possibly tens of thousands.

The most commonly deployed neutron detector—a proportional counter—costs at least $30,000 for a model with a detection area of 1 square meter. Ten thousand of these detectors would cost at least $300 million.

Los Alamos scientist Kiril Ianakiev has developed an attractive alternative: a new breed of neutron detector. The detector's major parts include spark plugs, welding gas, and a briefcase-sized block of plastic that forms its body. The detector is rugged and inexpensive enough to be widely deployed—which is the whole idea.

[figure: detector prototypes]

Ianakiev's detector is also a good neutron detector: it detects 10 percent of the neutrons emitted by plutonium-240 that strike it. (Weapons-grade plutonium typically contains about 5 percent plutonium-240.) By comparison, a proportional counter detects 15 percent of the neutrons.
But a proportional counter is also nearly ten times more expensive. One of Ianakiev's detectors with a 1-square-meter detection area will cost about $4,000. Ten thousand detectors would cost only $40 million.

Leveraging the Microchip

If the electric field is high enough, however, the electrons gain enough energy to ionize more gas atoms, a process that produces more electrons. The resulting "avalanche" of electron-ion pairs—called gas multiplication—amplifies the current pulse. In the early days of radiation detectors, it was far easier to amplify the current pulses with gas multiplication than it was to amplify them with vacuum tubes, which had just been invented in 1906. Now, however, an inexpensive microchip can amplify the current pulses without gas multiplication, allowing Ianakiev to develop a detector that overcomes the limitations of early detector designs. (The sidebar explains how gas-filled radiation detectors work.)

Neutrons are typically detected through one of two capture reactions:

neutron + 3He → triton + proton, and
neutron + 10B → 7Li + alpha particle,

where Li is lithium, He is helium, B is boron, and the superscripts are isotopic numbers. A triton is the nucleus of a tritium atom (hydrogen-3); an alpha particle is the nucleus of a helium atom.

The reaction rates are significant only for neutrons with kinetic energies close to the thermal energy of their surroundings, about 0.025 electronvolt at room temperature. For the 1-million-electronvolt neutrons emitted by plutonium-240, the reaction rates are about one-thousandth those of thermal neutrons. To be detected, therefore, the plutonium neutrons must first lose energy in many glancing blows with a succession of nuclei, a process called moderation. Because light nuclei such as those from hydrogen atoms efficiently moderate neutrons, neutron detectors usually include a block or sheet of a hydrogenous moderator, such as paraffin or polyethylene.
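How many glancing blows does moderation take? A standard textbook estimate (not from the article) uses the mean logarithmic energy loss per elastic collision, xi, which is about 1 for hydrogen; heavier nuclei have much smaller xi, which is why hydrogenous materials such as polyethylene moderate so efficiently:

```python
import math

def collisions_to_thermalize(e0_ev, e_thermal_ev=0.025, xi=1.0):
    """Average number of elastic collisions to slow a neutron from e0_ev
    to thermal energy: N = ln(E0 / Eth) / xi. xi is about 1 for hydrogen.
    Illustrative textbook estimate, not a figure from the article."""
    return math.log(e0_ev / e_thermal_ev) / xi

# A 1-MeV plutonium-240 neutron needs only about 18 collisions with
# hydrogen nuclei to reach the ~0.025 eV energies where capture is likely.
n_hydrogen = collisions_to_thermalize(1.0e6)
```

Halving xi doubles the number of collisions required, so a moderator made of heavier nuclei would need to be much thicker for the same thermalization.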
Tough, Smart, and Modular

Embedded in the detector's body are electronic modules that condition and analyze the detection signal and monitor detector performance. An onboard microprocessor makes the detector easy for untrained operators to use and permits detectors to be networked.

The bottom of the detection cell looks like an oversized metal soap dish. Deposited on the cell's inner surface is a thin layer of lithium-6, which absorbs moderated neutrons and produces alpha particles and tritons. The layer is thick enough for a high reaction rate yet thin enough for about half of the tritons and alpha particles to penetrate the layer and ionize the cell's gas. The optimal thickness for the layer was calculated by Los Alamos scientist Martyn Swinhoe.

Because lithium will not bond directly to polyethylene, the lithium is deposited on a metal substrate that does bond to the plastic. The substrate also prevents the gases emitted by polyethylene from entering the detection volume. A flat polyethylene lid with a lithium undercoat covers the top of the cell and provides a flat surface for an O-ring gas seal. The lid is bolted to the detector's body.

Filling the cell with argon at atmospheric pressure simplifies adding the gas during detector manufacture and eliminates the safety problems of pressurized vessels. With the lid in place, the detection volume is completely enclosed by metal, which improves detection sensitivity by shielding the volume from the electrical noise produced by power lines and other external sources. The metal enclosure is also the detector's negative electrode. A thin aluminum sheet on the detector's exterior electrically shields the embedded electronic modules. The cell's positive electrode is a metal ball screwed onto the end of a modified spark plug, which extends from the lid into the detection cell.
In addition to providing an insulated connection to the positive electrode, the spark plug—built to withstand the harsh, percussive environment of an internal combustion engine—will not vibrate if the detector is bumped.

[figure: computer rendering of detection cells]

The shortest dimension of the detection cell—its depth—equals the longest distance a lithium-produced triton will travel in argon at atmospheric pressure before coming to rest. Because a lithium-produced alpha particle will travel an even shorter distance, both the tritons and the alpha particles ionize as much argon as possible, providing maximum detection sensitivity.

[figure: detector performance]

Head-to-Head with the Proportional Counter

Because a proportional counter uses gas multiplication, its detection signal is highly sensitive to gas impurities. Thus, the gas in a proportional-counter tube must be at least 99.999 percent pure; in fact, about half the cost of a helium-3 proportional-counter tube is in its high-purity gas. In contrast, Ianakiev's detector—which does not use gas multiplication—works even with inexpensive welding-grade argon, which has a purity of 99.5 percent. Furthermore, the small amounts of oxygen, water vapor, and carbon dioxide slowly emitted from the detector's interior surfaces will be absorbed by the lithium coating, so outgassing will not affect detector performance for twenty years or more. Finally, because the proportional counter's wire electrode can easily be made to vibrate—and thereby produce spurious signals—those detectors are susceptible to shock and vibration. Supported by a robust spark plug, the relatively massive spherical electrode in Ianakiev's detector resists vibration.
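Pulling together the cost and efficiency figures quoted in the article gives a rough sense of the trade-off: the plastic detector gives up a third of the proportional counter's efficiency but costs nearly an order of magnitude less per unit. The numbers below are the article's; the comparison code itself is illustrative:

```python
# Figures quoted in the article: 10,000 fielded detectors, unit costs,
# and the fraction of incident plutonium-240 neutrons each design detects.
N_DETECTORS = 10_000

detectors = {
    "proportional counter": {"unit_cost": 30_000, "efficiency": 0.15},
    "plastic detector":     {"unit_cost": 4_000,  "efficiency": 0.10},
}

for name, d in detectors.items():
    fleet_cost = N_DETECTORS * d["unit_cost"]
    # Cost per unit of detection efficiency: a crude figure of merit.
    dollars_per_eff = d["unit_cost"] / d["efficiency"]
    print(f"{name}: fleet ${fleet_cost:,}; ${dollars_per_eff:,.0f} per unit efficiency")
```

By this crude metric the plastic detector delivers detection efficiency at one-fifth the cost of the proportional counter, which is the economic argument the article makes for wide deployment.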