Thu October 4, 2012

Dr. Bärbel Hönisch, Columbia University – Ocean Acidification

In today's Academic Minute, Dr. Bärbel Hönisch of Columbia University's Lamont-Doherty Earth Observatory reveals how rising levels of CO2 are not only warming the atmosphere, but accelerating the acidification of the oceans as well. Bärbel Hönisch is Assistant Professor of Earth and Environmental Science at Columbia University and a geochemist at Columbia's Lamont-Doherty Earth Observatory. Her teaching and research interests include chemical oceanography and paleoceanography. With colleagues, she recently published a study of ocean acidification covering the last 300 million years.

Dr. Bärbel Hönisch – Ocean Acidification

Carbon dioxide emissions from human activities are warming the air and oceans, but it is now clear that these are not our only problems. About a quarter of all CO2 we emit dissolves in the oceans, where it reacts with seawater to form a weak acid. With all the CO2 we are releasing today, the chemistry of the oceans is now changing faster than at any time in the last 300 million years. The potential strain this poses for marine life has been studied in many laboratory experiments, where organisms like corals, oysters, crabs and plankton are exposed to projected future CO2 levels. Many of these show negative effects--but some show no change, or even positive effects. This makes it hard to predict future ecosystem changes. That is why my colleagues and I have turned to the geological record to look for past ocean acidification events that could give us clues. When comparing past and present, we have to look for massive, rapid CO2 releases, because only these compare to what is happening today. The geologic event that best fits this pattern happened about 56 million years ago, when a massive natural release of fossil carbon caused a global temperature increase of 9 to 16 degrees Fahrenheit; massive dissolution of carbonate shells at the seafloor; and extinction among organisms on the seafloor and near the sea surface. This happened despite the fact that the CO2 release and resulting ocean acidification back then was at least 10 times slower than what is happening today. It is unlikely that manmade ocean acidification will kill all life in the oceans. But judging from geological records, it is rather likely that some species will go extinct, and some of them we may miss dearly.
Despite what science tells us about climate change, the world continues to pump carbon dioxide into the atmosphere. Last year, increases in emissions from rapidly growing economies like China and India offset reductions that richer countries have made. In short, the picture looks grim. Some scientists think it’s best not to wait for an international climate treaty with teeth to come along. Instead, they’ve come up with a novel way to stave off the effects of excess carbon: by putting it in the deep sea. It’s a process known as ocean carbon sequestration. Writer Peter Friederici joins us to explain what that means and why, in a recent article for the science magazine Miller-McCune, he calls it the world’s “best bad idea.”
Moths are typically neglected by people interested in natural history, even though there are many hundreds of colorful and interesting species that live in our area. Yet we hardly know they are all around us. If you were to ask the average person about moths, they would probably get concerned about their sweaters. One reason that moths are so poorly known is that they are nocturnal and hard to see, unlike their close relatives the butterflies. But another reason is that until now there has never been a popular, easy-to-use field guide to their identification. Our guest tonight is SEABROOKE LECKIE, biologist and writer. Together with David Beadle, she has co-authored the new PETERSON GUIDE TO MOTHS OF NORTHEASTERN NORTH AMERICA. Tune in tonight and learn about this fascinating group of insects, why we know so little about them even today, and how to see more of these mysterious night-flying creatures in your own backyard.
- Tokens that use double-quoted strings or regexes do interpolation.
- Hence, they can be made context-sensitive by referring to other items in the production rule, for instance by interpolating the value matched by an earlier item.
- The <matchrule:...> directive is another way of building dynamic rules. The part after the colon is evaluated as a double-quoted string each time the directive is encountered, and the value it evaluates to is taken as the name of the subrule to match.
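The excerpt above is cut off mid-sentence, so here is a minimal Parse::RecDescent sketch of <matchrule:...> in action. The grammar, rule names, and inputs are invented for illustration and do not come from the original documentation:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Parse::RecDescent;

# The subrule that parses the body is chosen dynamically: the text matched
# by 'kind' ($item[1]) is interpolated into the <matchrule:...> directive,
# so "header ..." is parsed by header_body and "footer ..." by footer_body.
my $grammar = q{
    entry       : kind <matchrule:$item[1]_body>
                    { print "parsed a $item[1]\n"; }
    kind        : 'header' | 'footer'
    header_body : /H\w+/
    footer_body : /F\w+/
};

my $parser = Parse::RecDescent->new($grammar) or die "Bad grammar\n";
defined $parser->entry('header Hello') or warn "header parse failed\n";
defined $parser->entry('footer Fare')  or warn "footer parse failed\n";
```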
HP OpenVMS Systems Documentation
Guide to the POSIX Threads Library

2.5 Process-Shared Synchronization Objects

You can create synchronization objects (that is, mutexes, condition variables, and read-write locks) that protect data that is shared among threads running in different processes. These are called process-shared synchronization objects.

2.5.1 Programming Considerations

On Tru64 UNIX systems, a process-shared synchronization object is a kernel object. Performing any operation on such an object requires a call into the kernel and thus is of higher cost than the same operation on a process-specific synchronization object.

When debugging a process-shared synchronization object, the debugger cannot currently display the mutex, nor its owner or waiting threads.

As is the case for process-specific synchronization objects, a process-shared synchronization object must be initialized only once; you cannot initialize it in each process that uses it. For independent processes that share a common synchronization protocol using process-shared synchronization objects, there must be some mechanism to determine which single process will initialize those objects. For example, if multiple processes connect to a named memory section, all but one will fail, and the one successful process should have the responsibility of initializing any global process-shared synchronization objects in that memory section. (The other processes must also use some mechanism for waiting until the process-shared object is initialized before attempting to use the shared memory.)

You can create a mutex that protects data that is shared among threads running in different processes. This is called a process-shared mutex. Create a process-shared mutex by using the pthread_mutexattr_setpshared() routine to set the process-shared attribute in an initialized mutex attributes object and then use that attributes object in a call to pthread_mutex_init().

You can create a condition variable used to communicate changes to data that is shared among threads running in different processes. This is called a process-shared condition variable. Create a process-shared condition variable by using the pthread_condattr_setpshared() routine to set the process-shared attribute in an initialized condition variable attributes object and then use that attributes object in a call to pthread_cond_init().

You can create a read-write lock that protects data that is shared among threads running in different processes. This is called a process-shared read-write lock. Create a process-shared read-write lock by using the pthread_rwlockattr_setpshared() routine to set the process-shared attribute in an initialized read-write lock attributes object and then use that attributes object in a call to pthread_rwlock_init().

Each thread can use an area of memory private to the Threads Library where it stores thread-specific data. Use this memory to associate arbitrary data with a thread's context. This allows you to add user-specified fields to the current thread's context or define global variables that have private values in each thread. A thread-specific data key is shared by all threads within the process---each thread has its own unique value for that shared key. Use the following routines to create and access thread-specific data: pthread_key_create(), pthread_setspecific(), and pthread_getspecific().

If a call to one of these routines returns an error, synchronization is not guaranteed. For example, an unsuccessful call to pthread_mutex_trylock() does not necessarily provide actual synchronization. Synchronization is a "protocol" among cooperating threads, not a single operation. That is, unlocking a mutex does not guarantee memory synchronization with all other threads---only with threads that later perform some synchronization operation themselves, such as locking a mutex.

3.3 Sharing Memory Between Threads

Most threads do not operate independently. They cooperate to accomplish a task, and cooperation requires communication.
There are many ways that threads can communicate, and which method is most appropriate depends on the task. Threads that cooperate only rarely (for example, a boss thread that only sends off a request for workers to do long tasks) may be satisfied with a relatively slow form of communication. Threads that must cooperate more closely (for example, a set of threads performing a parallelized matrix operation) need fast communication---maybe even to the extent of using machine-specific hardware operations.

Most mechanisms for thread communication involve the use of memory, exploiting the fact that all threads within a process share their full address space. Although all addresses are shared, there are three kinds of memory that are characteristically used for communication. The following sections describe the scope (that is, the range of locations in the program where code can access the memory) and lifetime (that is, the length of time during which use of the memory is valid) of each of the three types.

3.3.1 Using Static Memory

Static memory is allocated by the language compiler when it translates source code, so the scope is controlled by the rules of the compiler. For example, in the C language, a variable declared as extern is shared by all scopes where the name is defined anywhere, and a static variable is private to the source file or routine, depending on where it is declared. In this discussion, static memory is not the same as the C language static storage class. Rather, static memory refers to any variable that is permanently allocated at a particular address for the life of the program.

It is appropriate to use static memory in your multithreaded program when you know that only one instance of an object exists throughout the application. For example, if you want to keep a list of active contexts or a mutex to control some shared resource, you would not want individual threads to have their own copies of that data.

The scope of static memory depends on your programming language's scoping rules. The lifetime of static memory is the life of the program.

3.3.2 Using Stack Memory

Stack memory is allocated by code generated by the language compiler at run time, generally when a routine is initially called. When the program returns from the routine, the storage ceases to be valid (although the addresses still exist and might be accessible). Generally, the storage is valid while the routine runs, and the actual address can be calculated and passed to other threads; however, this depends on programming language rules. If you pass the address of stack memory to another thread, you must ensure that all other threads are finished processing that data before the routine returns; otherwise the stack will be cleared, and values might be altered by subsequent calls, page fault handling, or other interrupts. The other threads will not be able to determine that this has happened, and erroneous behavior will result.

The scope of stack memory is the routine or a block within the routine. The lifetime is no longer than the time during which the routine or block executes.

3.3.3 Using Dynamic Memory

Dynamic memory is allocated by the program as a result of a call to some memory management routine (for example, the C language run-time routine malloc() or the OpenVMS common run-time routine LIB$GET_VM). Dynamic memory is referenced through pointer variables. Although the pointer variables are scoped depending on their declaration, the dynamic memory itself has no intrinsic scope or lifetime.
It can be accessed from any routine or thread that is given its address and will exist until explicitly made free. In a language supporting automatic garbage collection, it will exist until the run-time system detects that there are no references to it. (If your language supports garbage collection, be sure the garbage collector is thread-safe.) The scope of dynamic memory is anywhere a pointer containing the address can be referenced. The lifetime is from allocation to deallocation.

Typically dynamic memory is appropriate to manage persistent context. For example, in a reentrant routine that is called multiple times to return a stream of information (such as to list all active connections to a server or to return a list of users), using dynamic memory allows the program to create multiple contexts that are independent of all the program's threads. Thus, multiple threads could share a given context, or a single thread could have more than one context.

3.4 Managing a Thread's Stack

For each thread created by your program, the Threads Library sets a default stack size that is acceptable to most applications. You can also set the stacksize attribute in a thread attributes object, to specify the stack size needed by the next thread created. This section discusses the cases in which the stack size is insufficient (resulting in stack overflow) and how to determine the optimal size of the stack.

Most compilers on Compaq VAX based systems do not probe the stack. This makes stack overflow failure modes unpredictable and difficult to analyze. Be especially careful to use as little stack memory as practical. Most compilers on Compaq Alpha based systems generate code in the procedure prologue that probes the stack, which detects if there is not enough space for the procedure to run.

3.4.1 Sizing the Stack

To determine the required size of a thread's stack, add the sizes of the frames, including local variables, for the deepest call tree. Add to that number an extra amount of memory to accommodate interrupts and context switching. Determining this figure is difficult because stack frames vary in size and because it might not be possible to estimate the depth of library routine call frames.

Compaq's Visual Threads includes a number of tools and procedures to measure and monitor stack use. See the Visual Threads product's online help for more information. You can also run your program using a profiling tool that measures actual stack use. This is commonly done by "poisoning" the stack before it is used by writing a distinctive pattern, and then checking for that pattern after the thread completes. Remember: Use of profiling or monitoring tools typically increases the amount of stack memory that your program uses.

3.4.2 Using Stack Overflow Warning and Stack Guard Areas

By default, at the overflow end of each thread's stack, the Threads Library allocates an overflow warning area followed by a guard area. These two areas can help a multithreaded program detect overflow of a thread's stack. Tru64 UNIX 5.0 and OpenVMS Alpha 7.3 include overflow warning support to allow the reporting of stack overflows while a thread can still be assured of executing code. The warning area is a page (or more) that is initially protected to trap writes, but then becomes writable so that it can be used to allow reporting or recovering from the overflow. (On Tru64 UNIX, the warning area is again protected once an overflow has been handled; on OpenVMS it remains unprotected.) A guard area is a region of no-access memory.
When the thread attempts to access a memory location within this region, a memory addressing violation occurs. For a thread that allocates large data structures on the stack, create that thread using a thread attributes object in which a large guardsize attribute value has been set. A large stack guard region can help to prevent one thread from overflowing into another thread's stack region.
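The guide gives no sample code at this point, so here is a minimal sketch of the stacksize and guardsize attributes in use on a generic POSIX platform; the sizes chosen are illustrative assumptions, not values recommended by the guide:

```c
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

/* Worker that makes a large stack allocation, the situation for which
 * Section 3.4.2 suggests enlarging the guard area. */
static void *worker(void *arg)
{
    (void)arg;
    char big[64 * 1024];   /* large data structure on the stack */
    big[0] = 0;
    printf("worker running\n");
    return NULL;
}

int main(void)
{
    pthread_attr_t attr;
    pthread_t tid;

    if (pthread_attr_init(&attr) != 0)
        abort();

    /* Request a 1 MB stack and a 256 KB guard region; a larger guard
     * region helps keep a runaway stack from silently overwriting
     * another thread's stack. */
    pthread_attr_setstacksize(&attr, 1024 * 1024);
    pthread_attr_setguardsize(&attr, 256 * 1024);

    if (pthread_create(&tid, &attr, worker, NULL) != 0)
        abort();
    pthread_join(tid, NULL);
    pthread_attr_destroy(&attr);
    return 0;
}
```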
University of Nottingham researcher Richard Hill and his colleagues used a powerful magnetic field to create a small, zero-gravity "arena" for fruit flies. Though magnetic fields attract magnetic substances such as iron, they also weakly repel certain other materials that are called "diamagnetic." These include water and organic matter--in other words, most of what's in a fruit fly. (Or in you. But there isn't a magnet big enough to try this trick on a person.) By carefully aligning their disc-shaped fly arenas inside a superconducting solenoid magnet, the researchers were able to create environments of roughly 1g (equal to Earth's gravity), 2g (twice Earth's gravity), and 0g (whee!). They also left one fly dish outside the magnet, so they could compare the 1g environments and make sure the magnetic field didn't just make all the flies crazy.

Though the researchers provide many mathematical descriptions of their result, you can see it easily and immediately in this video. The 0g flies are on the top left. Fruit flies' normal behavior is to roam, but the 0g flies are tearing around their dish. Unlike human astronauts, who don't have much choice but to float, fruit flies have grippy little feet and the power of flight. So the weightless flies spent most of their time walking on the floor, walls, and ceiling as usual. (You might spot a few of them floating dazedly in the center of the dish, though.) What was unusual was the speed and amount that they traveled. This might be simply because it's easier for flies to walk without gravity. If it takes less energy than usual to walk, a fruit fly that's putting the normal amount of effort into moving around will find itself at a near-sprint. The 2g flies supported this theory by walking more sluggishly than usual. Another possibility is that the flies' altered perception of gravity affected their behavior. Like human astronauts who go ricocheting off the walls for fun, the fruit flies might have noticed something was different and reacted to that feeling.

The finding that weightless flies speed-walk isn't new: Experiments done on the International Space Station and on the space shuttle Columbia found the same result. But replicating the finding here on Earth shows that it wasn't a fluke caused by some other factor, such as the trauma of takeoff. The flies' altered behavior was directly due to their low-gravity environment, making it relevant to humans and any other organisms we might carry into space. Hill's study also shows that zero-gravity experiments, at least on very small organisms, don't have to be done in space. Studies done inexpensively here on Earth can provide real insights into life in outer space, and help create safer technologies for the lucky humans who get to go.

Hill, R., Larkin, O., Dijkstra, C., Manzano, A., de Juan, E., Davey, M., Anthony, P., Eaves, L., Medina, F., Marco, R., & Herranz, R. (2012). Effect of magnetically simulated zero-gravity and enhanced gravity on the walk of the common fruitfly. Journal of The Royal Society Interface. DOI: 10.1098/rsif.2011.0715

Movie: Hill et al., data supplement; Photo: NASA
1. Nature: "Global warming blamed for 40% decline in the ocean's phytoplankton": "Microscopic life crucial to the marine food chain is dying out. The consequences could be catastrophic." If confirmed, it may represent the single most important finding of the year in climate science. Seth Borenstein of the AP explains, "plant plankton found in the world's oceans are crucial to much of life on Earth. They are the foundation of the bountiful marine food web, produce half the world's oxygen and suck up harmful carbon dioxide." Boris Worm, a marine biologist and co-author of the study, said, "We found that temperature had the best power to explain the changes." He noted, "If this holds up, something really serious is underway and has been underway for decades. I've been trying to think of a biological change that's bigger than this and I can't think of one."

2. Science: Vast East Siberian Arctic Shelf methane stores destabilizing and venting: NSF issues world a wake-up call: "Release of even a fraction of the methane stored in the shelf could trigger abrupt climate warming."

4. Nature Geoscience study: Oceans are acidifying 10 times faster today than 55 million years ago when a mass extinction of marine species occurred, and Geological Society: Acidifying oceans spell marine biological meltdown "by end of century" — Co-author: "Unless we curb carbon emissions we risk mass extinctions, degrading coastal waters and encouraging outbreaks of toxic jellyfish and algae." This is from a special issue of 16 articles in the Philosophical Transactions of the Royal Society B (Biological Science), "Biological diversity in a changing world," which notes "Never before has a single species driven such profound changes to the habitats, composition and climate of the planet."

A biogeochemist quoted by Nature explained that "perhaps [the] most likely explanation is that increasing temperatures have increased rates of decomposition of soil organic matter, which has increased the flow of CO2. If true, this is an important finding: that a positive feedback to climate change is already occurring at a detectable level in soils." Another major study in the February 2010 issue of the journal Ecology by Finnish researchers, "Temperature sensitivity of soil carbon fractions in boreal forest soil," had a similar conclusion. The Finnish Environment Institute, which led the study, explained the results in a release, "Soil contributes to climate warming more than expected".

There were so many important climate science findings this year I didn't get to write on all of them. This one in particular was misunderstood: Reasonable worst-case scenarios for global warming could lead to deadly temperatures for humans in coming centuries, according to research findings from Purdue University and the University of New South Wales, Australia. The study notes that even a 12°F warming would be dangerous for many. In fact, we could well see these deadly temperatures in the next century or century and a half over large parts of the globe on a very plausible emissions path.

For more info on these studies and their findings, read the full piece over on Climate Progress: A stunning year in climate science reveals that human civilization is on the precipice. This post is a quickie, a way for us to share more news with you by quickly covering good news stories on other sites.
In honor of Darwin Day, I'd like to give a little shout out to some of Charles Darwin's contributions to marine science.

Theory of Coral Reef Formation: Onboard the Beagle, Darwin composed the theory of coral reef formation. He described three types of reefs: fringe, barrier, and atoll. His illustrations of reef formation and global reef locations are beautifully detailed. Most impressive is that Darwin came up with the theory without ever having seen a coral reef (though he would eventually see one during the Beagle's voyage through the Pacific). And remember, back then there were no aerial photographs of atolls, etc. But Darwin's theory of coral reef formation wasn't confirmed until 1951, when U.S. government geologists surveying Eniwetok, a Marshall Islands atoll, prior to a hydrogen bomb test there, finally drilled deep enough to resolve the mystery. Scientists immediately erected a small sign next to the borehole which read "Darwin was right". Read more about the debate here.

One marine-related hypothesis Darwin had onboard the Beagle: Darwin posited that bioluminescence in the sea was caused by the same type of bacteria as that on rotten meat (he was wrong).

Darwin's Fishes: Daniel Pauly's book Darwin's Fishes is an encyclopedia of everything Charles Darwin ever wrote about fish, which represented about 0.7% of Darwin's lifetime output. Using fish, Darwin gave the first rigorous account of the importance of colors in biology and also accounts of sexual selection. Pauly also believes that Darwin was able to demonstrate the roles isolated islands play in generating biodiversity (and endemism) using fishes.

Barnacles: Back from the Beagle but still sitting on his theory of natural selection, Darwin began a study of barnacles that lasted eight years (photo of Darwin's barnacle slides). He was first to identify and coin the terms "dwarf males" (paired with a female barnacle lacking all male organs) and "complemental males" (housed within a hermaphrodite barnacle). His taxonomy of barnacles is still in use today. Lots of barnacles are named after Darwin and so are some fish, including this one: Semicossyphus darwini (Galapagos sheephead); drawing by Godfrey Merlen
There are generally two types of variable stars:
- Extrinsic variables, where the changes in brightness are a matter of perspective, such as a star being eclipsed. For example, in a solar eclipse the moon gets between the sun and Earth. Binary star systems sometimes show eclipses, as do planetary systems with giant planets.
- Intrinsic variables, stars whose brightness actually changes. The stars get bigger and smaller over time. Some pulsate at a constant rate, some do not. All of them change the rate at which energy is put out, which changes their appearance to us.

Intrinsic variable stars

There are a number of different types of variable stars.
- Intrinsic variable stars: variation caused by changes in the physical properties of the stars themselves. Three subgroups:
- Pulsating variables: stars whose radius expands and contracts as part of their natural evolutionary ageing processes.
- Eruptive variables: stars that experience eruptions on their surfaces, like flares or mass ejections.
- Cataclysmic or explosive variables: stars which undergo a cataclysmic change, such as novae and supernovae.

Pulsating variables

Cepheids and Cepheid-like
- Classical Cepheids include: Eta Aquilae, Zeta Geminorum, Beta Doradus, RT Aurigae, Polaris, and the namesake Delta Cephei. The North Star (Polaris) is the closest classical Cepheid, but it has many peculiarities and its distance is not certain.
- Type II Cepheids include: W Virginis and BL Herculis.
- Dwarf Cepheids include: Delta Scuti, SX Phoenicis.
- RR Lyrae variables: very common, used as absolute candles in globular clusters.

Long period and semi-regular
- Mira. Typical of a class of stars with pulsation periods longer than 100 days. They are red giant stars in the very late stages of stellar evolution. They will expel their outer envelopes as planetary nebulae and become white dwarfs in a few million years.

Eruptive variables

Protostars are young objects that have not yet completed the process of contraction from a gas nebula to a true star. Most protostars show irregular brightness variations.

Giants and Supergiants

Large stars lose their matter relatively easily. Variability due to eruptions and mass loss is fairly common among giants and supergiants.

Cataclysmic or explosive stars

Supernovae are the most dramatic type of events in the universe. A supernova can briefly emit as much energy as an entire galaxy, brightening by more than 20 magnitudes (over one hundred million times brighter). Supernovae result from the death of an extremely massive star, many times heavier than the Sun. The supernova explosion is caused by a white dwarf or a star core reaching a certain mass/density limit, the Chandrasekhar limit. Then the star collapses in a fraction of a second. This collapse "bounces" and causes the star to explode and emit an enormous amount of energy. The outer layers of these stars are blown away at speeds of many thousands of kilometers per second. The expelled matter may form nebulae called supernova remnants. A well-known example of such a nebula is the Crab Nebula, left over from a supernova that was observed in China and North America in 1054. The core of the star or the white dwarf may either become a neutron star (generally a pulsar) or disintegrate completely in the explosion. A supernova may also result from mass transfer onto a white dwarf from a star companion in a double star system.
The infalling matter pushes the white dwarf over the Chandrasekhar limit. The absolute luminosity of this type of supernova can be calculated from its light curve, so these explosions can be used to fix the distance to other galaxies. One of the most studied supernovae is SN 1987A in the Large Magellanic Cloud.
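As a quick check of the "over one hundred million times brighter" figure quoted above (standard magnitude arithmetic, not part of the original article), a brightening of Δm = 20 magnitudes corresponds to a flux ratio of

```latex
\frac{F_{\text{after}}}{F_{\text{before}}} = 10^{0.4\,\Delta m} = 10^{0.4 \times 20} = 10^{8}
```

that is, a factor of one hundred million, as stated.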
Sci. STKE, 31 July 2007

Plant Science: Ethylene Controls Root Meristem Production

Katrina L. Kelner, Science, AAAS, Washington, DC 20005, USA

In plants, undifferentiated meristem tissue provides stem cells to produce roots and shoots. The root meristem contains a few of these stem cells in a region called the quiescent center. Ortega-Martínez et al. studied Arabidopsis plants with a defect in a gene that controls ethylene biosynthesis and found that it produced more of the gaseous hormone ethylene. The quiescent center cells in these mutants went through more cell divisions than normal, resulting in extra stem cells in the root meristem. Adding exogenous ethylene also increased quiescent cell division, and blocking its synthesis in the mutants prevented extra divisions.

Citation: K. L. Kelner, Ethylene Controls Root Meristem Production. Sci. STKE 2007, tw274 (2007).

Science Signaling. ISSN 1937-9145 (online), 1945-0877 (print). Pre-2008: Science's STKE. ISSN 1525-8882
Abstract: Yichun Fu (Mentor: Dr. Burgmayer)

Molybdenum is a 4d transition metal with crucial biological functions in many enzymes. Despite the differences among the Mo-containing enzymes, they all have a molybdenum cofactor, named Moco, which promotes the catalytic activity of the enzyme. Moco utilizes a dithiolene ligand, molybdopterin. The Burgmayer group has conducted successful research to synthesize molybdopterin via two pathways, using a tetrasulfide and pterinyl alkynes. The focus of this project is synthesizing molybdopterin and its precursors: the tetrasulfide TEA[Tp*Mo(S)S4] and BMOPP, one of the pterinyl alkynes. The goal is to replicate the results of previous experiments at large scale, compare the conditions and effectiveness, and perfect the methods. We compare the differences in the instructions from different resources and determine the best methods via experiments. Another objective is to improve the process of DMF distillation to produce pure tetrasulfide in high yield. DMF is a solvent used in the synthesis of the tetrasulfide. The DMF has to be completely dried and distilled under N2 in order to protect the water-sensitive tetrasulfide from moisture and oxidation. Mass spectrometry, FT-IR, and 1H NMR are used in the project to monitor the reactions and identify the compounds. A Schlenk line is also used to conduct reactions under nitrogen. Therefore, another goal of the project is to acquire the relevant lab skills and develop the research methodology.
A massive protocluster of galaxies at a redshift of z ≈ 5.3

This is the pre-published version harvested from arXiv. The published version is located at http://www.nature.com/nature/journal/v470/n7333/full/nature09681.html

Massive clusters of galaxies have been found that date from as early as 3.9 billion years (3.9 Gyr; z = 1.62) after the Big Bang [1], containing stars that formed at even earlier epochs [2,3]. Cosmological simulations using the current cold dark matter model predict that these systems should descend from 'protoclusters'—early overdensities of massive galaxies that merge hierarchically to form a cluster [4,5]. These protocluster regions themselves are built up hierarchically and so are expected to contain extremely massive galaxies that can be observed as luminous quasars and starbursts [4,5,6]. Observational evidence for this picture, however, is sparse because high-redshift protoclusters are rare and difficult to observe [6,7]. Here we report a protocluster region that dates from 1 Gyr (z = 5.3) after the Big Bang. This cluster of massive galaxies extends over more than 13 megaparsecs and contains a luminous quasar as well as a system rich in molecular gas [8]. These massive galaxies place a lower limit of more than 4 × 10^11 solar masses of dark and luminous matter in this region, consistent with that expected from cosmological simulations for the earliest galaxy clusters [4,5,7].

PL Capak, D Riechers, NZ Scoville, C Carilli, P Cox, R Neri, B Robertson, M Salvato, E Schinnerer, L Yan, GW Wilson, Min Yun, F Civano, M Elvis, A Karim, B Mobasher, and JG Staguhn. "A massive protocluster of galaxies at a redshift of z ≈ 5.3". Nature 470.7333 (2011): 233-235. Available at: http://works.bepress.com/min_yun/12
Science Fair Project Encyclopedia

Topoisomerases (type I: EC 5.99.1.2, type II: EC 5.99.1.3) are enzymes that act on the topology of DNA. The double-helical configuration that DNA strands naturally reside in makes them difficult to separate, and yet they must be separated by helicase proteins if other enzymes are to transcribe the sequences that encode proteins, or if chromosomes are to be replicated. In so-called circular DNA, in which double-helical DNA is bent around and joined in a circle, the two strands are topologically linked, or knotted. Otherwise identical loops of DNA having different numbers of twists are topoisomers, and cannot be interconverted by any process that does not involve the breaking of DNA strands. Topoisomerases catalyze and guide the unknotting of DNA. The insertion of viral DNA into chromosomes and other forms of recombination can also require the action of topoisomerases.

Many drugs operate through interference with the topoisomerases. The broad-spectrum fluoroquinolone antibiotics act by disrupting the function of bacterial type II topoisomerases. Some chemotherapy drugs work by interfering with topoisomerases in cancer cells: type I is inhibited by irinotecan and topotecan, while type II is inhibited by etoposide and teniposide.

Type I topoisomerases

Both type I and type II topoisomerases change the supercoiling of DNA. Type I topoisomerases function by nicking one strand of the DNA double helix, twisting it around the other strand, and religating (reconnecting) the nicked strand. (For this reason, these isomerases are sometimes described as nickases.) Type I topoisomerases change the linking number of a circular DNA strand by 1.

Type II topoisomerases

Type II topoisomerases cut both strands of the DNA helix simultaneously. One free end is twisted, and the severed strands are reattached. Type II topoisomerases change the linking number of a DNA loop by 2. For example, DNA gyrase is a type II isomerase observed in E. coli and most other prokaryotes. Gyrase introduces negative supercoils and decreases the linking number by 2. Gyrase is able to relieve knots in the bacterial chromosome.
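The linking-number bookkeeping above can be stated compactly with the standard decomposition of linking number (Lk) into twist (Tw) and writhe (Wr); this supplement is not part of the original encyclopedia entry:

```latex
Lk = Tw + Wr
\text{Type I:}\quad \Delta Lk = \pm 1 \quad \text{(transient single-strand nick)}
\text{Type II:}\quad \Delta Lk = \pm 2 \quad \text{(duplex passed through a double-strand break)}
```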
Glenn Chaple's Observing Basics: The phases of Venus

January 2009: Begin IYA2009 by replicating Galileo's observations of our "sister planet."

November 24, 2008

Happy International Year of Astronomy! Capitalizing on the fact that 2009 marks the 400th anniversary of Galileo's earliest telescopic forays into the night sky, the International Astronomical Union (IAU) is promoting a global celebration of astronomy.
A closure is a programming technique that allows a function to access variables defined outside of its own scope. In many cases, a closure is created when a function is defined within another function, allowing the inner function to access variables within the outer function.
- Closures, The Term
- Code Rant: What is a Closure?
- PHP inner functions and closure
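Since the last link above concerns PHP, here is a minimal PHP sketch of the idea; the function names are invented for illustration and none of this comes from the pages linked:

```php
<?php
// makeCounter() returns an anonymous function (a closure) that keeps
// access to $count even after makeCounter() has returned.
function makeCounter() {
    $count = 0;                        // variable in the outer scope
    return function () use (&$count) { // captured by reference
        return ++$count;
    };
}

$next = makeCounter();
echo $next(), "\n"; // 1
echo $next(), "\n"; // 2 -- $count persists between calls
```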
Conservation of old-growth dependent mallee fauna Prepared by David Baker-Gabb for the Black-eared Miner Recovery Team, February 2001 (Revised February 2003) The Black-eared Miner Manorina melanotis (Wilson 1911) formerly occurred in the Murray Mallee region of South Australia, Victoria and New South Wales, but is no longer present over much of its historical range. Few birds remain in Victoria and New South Wales, with most colonies now confined to the Bookmark Biosphere Reserve about 50 km north-west of Renmark in South Australia. An intensive management program is under way to save the Black-eared Miner from extinction. Several publications have highlighted the rarity and plight of the bird in the past (Favaloro 1966; Considine 1986; Starks 1988). A Recovery Program for the species commenced in Victoria in 1991, with subsequent actions based on plans by Fitzherbert et al (1992), Middleton (1993), and McLaughlin (1993b). Clarification of the taxonomic status of the Black-eared Miner resulted in a commitment from regional, State and national organisations and agencies to save the species, and the production of a national Recovery Plan (Backhouse et al 1995). This plan was in turn revised when in 1995 many colonies of Black-eared Miners were discovered in the Bookmark Biosphere Reserve (Backhouse et al 1997). This latest (2002-2006) Recovery Plan sets out the actions required to continue and build on the successes already achieved in the recovery of this endangered species. This Recovery Plan conforms to the requirements of the Commonwealth Environment Protection and Biodiversity Conservation Act 1999. It is intended to be the national Recovery Plan for the Black-eared Miner, so that local plans and actions in relevant States clearly originate from the national plan. Subsidiary documents will be prepared as required under relevant State legislation to provide further detail of implementation within that State.
There is an interesting letter in Nature Geoscience this month on what climate changes we have actually already committed ourselves to. The letter, by Matthews and Weaver (sub. reqd.), makes the valid point that there are both climatic and societal inertias to consider. Their figure neatly demonstrates the different issues:

The upper line is often what is referred to as the 'climate change commitment' (for instance Wigley, 2005). This is the warming you get if we keep CO2 (and other GHG and pollutant levels) constant at today's values. (Technically, the figure shows the case staying at year 2000 values). In such a scenario, the planet still has a radiative imbalance, and the warming will continue until the oceans have warmed sufficiently to equalise the situation – giving an additional 0.3 to 0.8ºC warming over the 21st Century. Thus the conclusion has been that because of climate inertia, further warming is inevitable.

However, constant concentrations of CO2 imply a change in emissions – specifically an immediate cut of around 60 to 70% globally and continued further cuts over time. Matthews and Weaver make the point that this is a little arbitrary and that the true impact of climate inertia would be seen only with emissions cut to zero. That is, if we define the commitment as the consequence only of past emissions, then you should set future emissions to zero before you calculate it. This is a valid point, and the consequence of that is seen in the lower lines in the figure.

CO2 concentrations would start to fall immediately since the ocean and terrestrial biosphere would continue to absorb more carbon than they release as long as the CO2 level in the atmosphere is higher than pre-industrial levels (approximately). And subsequent temperatures (depending slightly on the model you are using) would either be flat or slightly decreasing. With this definition then, there is no climate change commitment because of climate inertia. Instead, the reason for the likely continuation of the warming is that we can't get to zero emissions any time soon because of societal, economic or technological inertia.

That is an interesting reframing of an issue that comes up all the time in discussions of adaptation and mitigation. This is because it demonstrates that adaptation (over and above what is necessary to reduce vulnerabilities to current climate conditions) is unnecessary if mitigation is dramatic enough. However, the practical implication of this reframing is small. We are clearly not going to get to zero emissions any time soon, and even the 60-70% cuts required to stabilise concentrations initially seem a long way off. Thus as a practical matter, it doesn't really matter whether the inertia is climatic or societal or technological or economic because the globe will continue to warm under all realistic scenarios (what we do have a possible control over is the magnitude of that warming). Thus further adaptation measures will still be needed.
An Enumeration is a means of retaining and traversing a series of elements one at a time. An object that implements the interface generates the series of elements one by one. It is read-only: we cannot change the value of an object or value in the collection through it. (The example below uses the equivalent Iterator interface.)

Understand with an example:

ArrayList<String> list = new ArrayList<String>() creates an ArrayList of type String. list.add("girish") is the method for adding an element to the list. it.hasNext() returns true if a next value is set in the list. The system shows you an exception if there is no element at the index you specify. To remove the exception from the program defined below, you have to give a proper index in the method list.remove(5).

Output of the program:

cannot remove this index element

Note: to remove this exception you have to give a proper index in the method list.remove(5).
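The tutorial's code survives only in fragments, so here is a minimal runnable reconstruction; the second list element and the caught-exception message are illustrative assumptions:

```java
import java.util.ArrayList;
import java.util.Iterator;

public class EnumerationExceptionExample {
    public static void main(String[] args) {
        // Create an ArrayList and add elements to it.
        ArrayList<String> list = new ArrayList<String>();
        list.add("girish");
        list.add("komal"); // second element added for illustration

        // Iterate over the list: hasNext() returns true while elements
        // remain, and next() returns the next value.
        Iterator<String> it = list.iterator();
        while (it.hasNext()) {
            System.out.println(it.next());
        }

        // The list has only two elements (indices 0 and 1), so removing
        // index 5 throws an IndexOutOfBoundsException.
        try {
            list.remove(5);
        } catch (IndexOutOfBoundsException e) {
            System.out.println("cannot remove this index element");
        }
    }
}
```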
Startling animation reveals New York City's carbon footprint

An eye-opening new animation of New York City visualizes the epic scale of climate pollution coming from our lives. I highly recommend watching to the end of this three-minute video to see what a single year's worth looks like:

My only quibble with this excellent animation is the choice of cool blue for the balloon colour. In my mind a glowing orange colour would better convey the global warming impact this CO2 causes. Maybe something more like this:

Most of these CO2 balloons will linger in our atmosphere for centuries. For example, some of the CO2 balloons from New York City coal burning in the 1800s are still up there. Around the clock CO2 relentlessly traps extra heat energy that would have escaped into space and instead pumps it into our weather and oceans. As New York is discovering, all that extra heat energy will eventually come back to bite. Last year it was Hurricane Irene and Tropical Storm Lee that pounded them. This year superstorm Sandy, amped up on climate steroids, smashed ashore bringing up to $50 billion in damages. (see: Climate change powers "Frankenstorm" Sandy)

NY Governor Andrew Cuomo recently lamented this triple blow of extreme weather events in just two years: "I get it, I've seen this movie three times … Climate change is real, it's here, it's going to happen again." "In just 14 months, two hurricanes have forced us to evacuate neighborhoods—something our city government had never done before. If this is a trend, it is simply not sustainable."

400,000 atomic bombs

Just how much heat energy are we talking about globally? NASA climate scientist James Hansen says the current increase in global warming is "equivalent to exploding 400,000 Hiroshima atomic bombs per day 365 days per year. That's how much extra energy Earth is gaining each day." Every minute another 278 atomic bombs worth of energy – more than four per second. And that is just the daily increase in our climate heating. (See: Global warming increasing by 400,000 atomic bombs every day)

100,000 times the heat

How can such a massive increase in heat energy be possible? It turns out that each molecule of CO2 is so long-lasting, so tireless and so darn efficient at trapping heat that by the time that molecule leaves the atmosphere it will have warmed our planet 100,000 times more than the heat given off when it was first burned. Burn a lump of coal and the global warming that results will be 100,000 times the heat the burning coal gave off. Run a hair dryer on coal-fired electricity and the CO2 will heat the planet as much as eight jumbo jet engines running for the same time. Our croplands and water supplies get blown dry as well. CO2 molecules really are the "Energizer Bunnies" of climate heating -- the "primary control knob" for global temperatures. (see: Cooking up a dead planet)

Billions of littered plastic bags

One other way of visualizing this massive CO2 waste stream is to picture it as a pile of littered plastic shopping bags. Both CO2 and plastic bags are fossil fuel products. Both get littered into our environment where they cause harm for decades and even centuries before they break down. Each balloon in that animation (a tonne of CO2) can be visualized as 100,000 plastic bags (a tonne of plastic bags). It is hard for me to imagine we would tolerate an energy option that pumped out plastic bag litter at such a rate. A taxi cab litters CO2 weighing the same as two plastic bags every city block. An SUV spits out one every second on the highway.
Phht, phhht, phhht. Large buildings using natural gas for heat spew a constant "ticker tape parade" worth. As this animation shows, we would all be literally buried in a metastasizing plastic bag trash pile.

It's global warming, stupid

This NYC balloon animation is just the latest example that shows that Michael Bloomberg -- independent, billionaire, mayor of America's largest city -- understands the growing menace climate change poses to our future. As the cover of his flagship magazine pointed out recently:
The Microwave Limb Sounder (MLS) on NASA's Aura shows weaker than usual ozone transport and strong photochemical loss.

The difference between the means inside and outside the lower stratospheric vortex is an estimate of chemical O3 loss. Vortex chemical loss is at most 75 DU. Weaker than usual transport to high latitudes leads to ozone stratospheric columns outside the vortex ~25 DU lower than the 2005-2010 average.

The MLS 2005-2010 average shows stratospheric columns of ~390 Dobson Units (DU). Why? The vortex has broken down, allowing high O3 to be transported to high latitudes.

2011: MLS O3 columns are less than 260 DU at high latitudes! Why? The 2011 vortex persists through late March, prohibiting the transport of O3-rich air from lower latitudes. Low temperatures also cause significant chemical loss.

MLS has measured stratospheric column O3 for the past 7 years. Between 2005-2010, Arctic mean stratospheric columns (54-88N) averaged 390 Dobson Units (DU) at the end of March. In March 2011, the Arctic stratospheric column mean was less than 340 DU, with areas as low as 240 DU observed.

Seasonally, Arctic column O3 has a significant increase during fall and winter. Transport is responsible for the increase, but polar stratospheric cloud (PSC)-driven loss in the lower stratosphere can diminish it. The polar vortex forms during fall, isolating part of the high latitude air mass from horizontal transport. Most years the vortex has broken down by late March, but in 2011 the vortex was strong and the usual seasonal transport of ozone-rich air to polar latitudes had not yet occurred. This is part of the reason for the low ozone columns observed in late March by MLS and OMI on Aura.

Low temperatures in the Arctic vortex in February and March caused PSC-catalyzed ozone loss. To estimate the chemical loss that occurred, we examine lower stratospheric (LS) ozone partial columns (100-38 hPa) inside the vortex because this is where the ozone loss occurs. By differencing the partial columns outside the LS vortex with those inside we get an upper limit on chemical ozone loss. By comparing the 2011 partial columns above and below the LS with those columns from previous years, we see that ozone was lower in 2011. These column abundances are controlled by transport; thus we can say that part of the cause of the low ozone columns at the end of March was weak transport.
You might think seafaring Vikings–who traveled hundreds of miles on rough seas between 750 and 1050 AD–would be adrift on cloudy days: not only did they lack compasses, but they were often traveling so far north that the sun never set, and thus couldn't use stars to navigate. But scientists are finding new evidence to support the existence of what was once considered a mythical navigational tool: the sólarsteinn, or sunstone.

It all starts with an Icelandic legend about a man named Sigurd. As Nature News reports: The saga describes how, during cloudy, snowy weather, King Olaf consulted Sigurd on the location of the Sun. To check Sigurd's answer, Olaf "grabbed a sunstone, looked at the sky and saw from where the light came, from which he guessed the position of the invisible Sun." In 1967, Thorkild Ramskou, a Danish archaeologist, suggested that this stone could have been a polarizing crystal such as Icelandic spar, a transparent form of calcite, which is common in Scandinavia.

Knowing that our atmosphere can scatter sunlight and polarize it, Gábor Horváth, an optics scientist at Eötvös University in Budapest, and Susanne Åkesson, a migration ecologist from Lund University, Sweden, decided to put calcite to the test. First they tested human abilities. Using photographs of cloudy skies, they tested how well subjects could estimate the sun's position, and found that, with errors of up to 99°, the unaided eye isn't the best navigational device. So in 2005 they traveled the Arctic Ocean measuring the sky's polarization patterns, and their findings were nothing less than eye-opening. From Nature News: The researchers were surprised to find that in foggy or totally overcast conditions the pattern of light polarization was similar to that of clear skies. The polarization was not as strong, but Åkesson believes that it could still have provided Viking navigators with useful information…. "I tried such a crystal on a rainy overcast day in Sweden," she says. "The light pattern varied depending on the orientation of the stone."

The researchers published their findings in the journal Philosophical Transactions of the Royal Society B. Even though archaeologists have yet to find a sunstone in a Viking wreck, which would put all debate over sunstone use to rest, this research still goes a long way in resurrecting what was once considered mere myth. And the coast looks very clear indeed as the scientists gear up for their next study: to test whether volunteers can accurately determine the position of the sun using these clear calcite crystals.
The orbital eccentricity of an astronomical object is a parameter that determines the amount by which its orbit around another body deviates from a perfect circle. A value of 0 is a circular orbit, values between 0 and 1 form an elliptical orbit, 1 is a parabolic escape orbit, and greater than 1 is a hyperbola. The term derives its name from the parameters of conic sections, as every Kepler orbit is a conic section. It is normally used for the isolated two-body problem, but extensions exist for objects following a rosette orbit through the galaxy.

The eccentricity may take the following values:
- circular orbit: e = 0
- elliptic orbit: 0 < e < 1 (see Ellipse)
- parabolic trajectory: e = 1 (see Parabola)
- hyperbolic trajectory: e > 1 (see Hyperbola)

The eccentricity is given by

    e = sqrt(1 + (2 E L^2) / (m_red α^2))

where E is the total orbital energy, L is the angular momentum, m_red is the reduced mass, and α is the coefficient of the inverse-square central force (α is negative for an attractive force, positive for a repulsive one) (see also Kepler problem). Or, in the case of a gravitational force:

    e = sqrt(1 + (2 ε h^2) / μ^2)

where ε is the specific orbital energy (total energy divided by the reduced mass), μ is the standard gravitational parameter based on the total mass, and h is the specific relative angular momentum (angular momentum divided by the reduced mass).

For values of e from 0 to 1 the orbit's shape is an increasingly elongated (or flatter) ellipse; for values of e from 1 to infinity the orbit is a hyperbola branch making a total turn of 2 arccsc(e), decreasing from 180 to 0 degrees. The limit case between an ellipse and a hyperbola, when e equals 1, is a parabola.

Radial trajectories are classified as elliptic, parabolic, or hyperbolic based on the energy of the orbit, not the eccentricity. Radial orbits have zero angular momentum and hence eccentricity equal to one. Keeping the energy constant and reducing the angular momentum, elliptic, parabolic, and hyperbolic orbits each tend to the corresponding type of radial trajectory while e tends to 1 (or in the parabolic case, remains 1). For a repulsive force only the hyperbolic trajectory, including the radial version, is applicable.

For elliptical orbits, a simple proof shows that arcsin(e) yields the projection angle of a perfect circle to an ellipse of eccentricity e. For example, to view the eccentricity of the planet Mercury (e = 0.2056), one must simply calculate the inverse sine to find the projection angle of 11.86 degrees. Next, tilt any circular object (such as a coffee mug viewed from the top) by that angle and the apparent ellipse projected to your eye will be of that same eccentricity.

Etymology

From Medieval Latin eccentricus, derived from Greek ekkentros "out of the center", from ek-, ex- "out of" + kentron "center". Eccentric first appeared in English in 1551, with the definition "a circle in which the earth, sun, etc. deviates from its center." Five years later, in 1556, an adjective form of the word was added.

For an elliptical orbit the eccentricity can be calculated from the radii at apoapsis and periapsis:

    e = (r_a − r_p) / (r_a + r_p)

where:
- r_a is the radius at apoapsis (i.e., the farthest distance of the orbit to the center of mass of the system, which is a focus of the ellipse).
- r_p is the radius at periapsis (the closest distance).

The eccentricity of the Earth's orbit is currently about 0.0167; the Earth's orbit is nearly circular. Over hundreds of thousands of years, the eccentricity of the Earth's orbit varies from nearly 0.0034 to almost 0.058 as a result of gravitational attractions among the planets (see graph). Mercury has the greatest orbital eccentricity of any planet in the Solar System (e = 0.2056). Before 2006, Pluto was considered to be the planet with the most eccentric orbit (e = 0.248). The Moon's value is 0.0549.
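As a quick check of the apoapsis/periapsis formula above, take Earth's aphelion and perihelion distances, about 152.10 and 147.10 million km (well-known values, not figures given in the original article):

```latex
e = \frac{r_a - r_p}{r_a + r_p}
  = \frac{152.10 - 147.10}{152.10 + 147.10}
  = \frac{5.00}{299.20}
  \approx 0.0167
```

which matches the currently quoted eccentricity of 0.0167.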
For the values for all planets and other celestial bodies in one table, see List of gravitationally rounded objects of the Solar System.

Most of the Solar System's asteroids have orbital eccentricities between 0 and 0.35 with an average value of 0.17. Their comparatively high eccentricities are probably due to the influence of Jupiter and to past collisions.

The eccentricity of comets is most often close to 1. Periodic comets have highly eccentric elliptical orbits with eccentricities just below 1; Halley's Comet's elliptical orbit, for example, has a value of 0.967. Non-periodic comets follow near-parabolic orbits and thus have eccentricities even closer to 1. Examples include Comet Hale–Bopp with a value of 0.995 and comet C/2006 P1 (McNaught) with a value of 1.000019. As Hale–Bopp's value is less than 1, its orbit is elliptical and it will in fact return. Comet McNaught has a hyperbolic orbit while within the influence of the planets, but is still bound to the Sun with an orbital period of about 10⁵ years. As of a 2010 epoch, Comet C/1980 E1 has the largest eccentricity of any known hyperbolic comet, at 1.057, and will leave the Solar System for good.

Neptune's largest moon Triton has an eccentricity of 1.6 × 10⁻⁵, the smallest eccentricity of any known body in the Solar System; its orbit is as close to a perfect circle as can currently be measured.

Mean eccentricity

The mean eccentricity of an object is the average eccentricity as a result of perturbations over a given time period. Neptune currently has an instantaneous (current-epoch) eccentricity of 0.0113, but from 1800 A.D. to 2050 A.D. it has a mean eccentricity of 0.00859.

Climatic effect

Orbital mechanics require that the duration of the seasons be proportional to the area of the Earth's orbit swept between the solstices and equinoxes, so when the orbital eccentricity is extreme, the seasons that occur on the far side of the orbit (aphelion) can be substantially longer in duration. Today, northern hemisphere fall and winter occur at closest approach (perihelion), when the earth is moving at its maximum velocity. As a result, in the northern hemisphere, fall and winter are slightly shorter than spring and summer. In 2006, summer was 4.66 days longer than winter and spring was 2.9 days longer than fall. Apsidal precession slowly changes the place in the Earth's orbit where the solstices and equinoxes occur (this is not the precession of the axis). Over the next 10,000 years, northern hemisphere winters will become gradually longer and summers will become shorter. Any cooling effect, however, will be counteracted by the fact that the eccentricity of Earth's orbit will be almost halved, reducing the mean orbital radius and raising temperatures in both hemispheres closer to the mid-interglacial peak.

See also
- A. Berger and M.F. Loutre (1991). "Graph of the eccentricity of the Earth's orbit". Illinois State Museum (Insolation values for the climate of the last 10 million years). Retrieved 2009-12-17.
- "JPL Small-Body Database Browser: C/1995 O1 (Hale-Bopp)". 2007-10-22 last obs. Retrieved 2008-12-05.
- "JPL Small-Body Database Browser: C/2006 P1 (McNaught)". 2007-07-11 last obs. Retrieved 2009-12-17.
- "Comet C/2006 P1 (McNaught) - facts and figures". Perth Observatory in Australia. 2007-01-22. Retrieved 2011-02-01.
- "JPL Small-Body Database Browser: C/1980 E1 (Bowell)". 1986-12-02 last obs. Retrieved 2010-03-22.
- David R. Williams (22 January 2008). "Neptunian Satellite Fact Sheet". NASA.
Retrieved 2009-12-17.
- Williams, David R. (2007-11-29). "Neptune Fact Sheet". NASA. Retrieved 2009-12-17.
- "Keplerian elements for 1800 A.D. to 2050 A.D.". JPL Solar System Dynamics. Retrieved 2009-12-17.
- The seasonal-length figures above refer to 2006, not the current year.
- Prussing, John E., and Bruce A. Conway. Orbital Mechanics. New York: Oxford University Press, 1993.
- World of Physics: Eccentricity
- The NOAA page on Climate Forcing Data includes (calculated) data from Berger (1978) and Berger and Loutre (1991). Laskar et al. (2004) on Earth orbital variations includes eccentricity over the last 50 million years and for the coming 20 million years.
- The orbital simulations by Varadi, Ghil and Runnegar (2003) provide series for Earth orbital eccentricity and orbital inclination.
- Kepler's second law simulation
<urn:uuid:4f6bbb8a-f456-41d6-baa8-963f56cfc529>
3.984375
1,923
Knowledge Article
Science & Tech.
58.252091
The authors are apparently unaware that clouds act as the Earth's negative feedback cooling mechanism and have maintained relatively stable Earth temperatures for millions of years without any evidence of net positive feedback "tipping points." Moreover, the scheme cannot be undone if solar activity enters a lull such as the Maunder or Dalton minimum, and the dust would be a lasting hazard to satellites and space travel.

Publication date: 1 April 2013 [any coincidence that this is April Fools' Day?]

Source: Advances in Space Research, Volume 51, Issue 7

This paper examines the concept of a Sun-pointing elliptical Earth ring composed of dust grains to offset global warming. A new family of non-Keplerian periodic orbits, under the effects of solar radiation pressure and the Earth's J2 oblateness perturbation, is used to increase the lifetime of the passive cloud of particles and, thus, increase the efficiency of this geoengineering strategy. An analytical model is used to predict the orbit evolution of the dust ring due to solar radiation pressure and the J2 effect. The attenuation of the solar radiation can then be calculated from the ring model. In comparison to circular orbits, eccentric orbits yield a more stable environment for small grain sizes and therefore achieve higher efficiencies when the orbit decay of the material is considered. Moreover, the novel orbital dynamics experienced by high area-to-mass-ratio objects, influenced by solar radiation pressure and the J2 effect, ensure the ring will maintain a permanent heliotropic shape, with dust spending the largest portion of time on the Sun-facing side of the orbit. It is envisaged that small dust grains can be released from a circular generator orbit with an initial impulse to enter an eccentric orbit with Sun-facing apogee. Finally, a lowest estimate of 1 × 10¹² kg of material is computed as the total mass required to offset the effects of global warming.
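For a sense of why solar radiation pressure dominates the dynamics of small grains, here is a back-of-the-envelope sketch (not from the paper): the standard β parameter, the ratio of radiation-pressure force to solar gravity for a spherical grain. The density and radiation-pressure efficiency Q_pr below are assumed values, and β is defined relative to solar gravity, so this only illustrates the grain-size scaling, not the Earth-ring dynamics themselves.

```python
import math

# Physical constants (SI)
G     = 6.674e-11   # gravitational constant
M_SUN = 1.989e30    # solar mass, kg
L_SUN = 3.828e26    # solar luminosity, W
C     = 2.998e8     # speed of light, m/s

def beta(radius_m, density=3000.0, q_pr=1.0):
    """Radiation pressure / solar gravity for a spherical grain
    (standard formula; density and Q_pr are assumptions)."""
    return (3 * L_SUN * q_pr) / (16 * math.pi * G * M_SUN * C * density * radius_m)

for r_um in (0.1, 1.0, 10.0):
    print(f"{r_um:5.1f} um grain: beta = {beta(r_um * 1e-6):.3f}")
# Sub-micron grains feel radiation pressure comparable to gravity, which is
# why the paper's "high area-to-mass ratio" orbital dynamics matter.
```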
<urn:uuid:a8357ffe-c2a5-46c7-9eb3-43389ef8549f>
3.703125
386
Academic Writing
Science & Tech.
25.93695
Chemical Reactions in the Atmosphere

Reading Assignment: Read the brief introduction to atmospheric chemistry by Sasha Madronich, Senior Scientist at the National Center for Atmospheric Research (linked below). Read chapter 9 in Manahan.

Homework: HW-10, due Friday, April 4.

We have seen, in earlier discussions, that substances in the lithosphere tend to become more reduced over time. In the lithosphere, for instance, biomass (CH2O) is slowly transformed through a sequence of steps to substances with no oxygen atoms, and then on to compounds with successively larger carbon-to-hydrogen ratios. The final product of this process is a form of pure carbon. Chemical reactions in the atmosphere have the opposite effect, causing atoms to become more oxidized over time. Atoms that enter the atmosphere as gases in a reduced state are oxidized, in a stepwise fashion, to form ionic substances that are washed out of the atmosphere in rainfall. One example of this transformation is the sulfur atom in hydrogen sulfide (H2S, oxidation number of -2) being washed out as sulfate (SO4^2-, sulfur oxidation number of +6). Understanding these transformations is one of the primary objectives for this section of the course. Atmospheric chemical processes are summarized nicely in the graphic from the International Global Atmospheric Chemistry Web page.

Atmospheric pressure decreases when moving from sea level to higher altitudes in a very predictable fashion. Atmospheric pressure at a given altitude is the force exerted on a unit area by the weight of the air directly above it. The light blue line in Figure 32.1 illustrates how air pressure changes with increasing altitude. The Scale Height equation is used by meteorologists to estimate atmospheric pressure as a function of altitude:

P_h = P_o exp(-M g h / (R T))

where M = average molar mass of air (28.92 g/mol, i.e. 0.02892 kg/mol in SI units), g = acceleration due to gravity (9.78 m/s²), h = height in meters, R = gas constant (8.314 J mol⁻¹ K⁻¹), and T is the absolute temperature in kelvins. P_h represents the pressure at the new altitude and P_o represents the atmospheric pressure at sea level.

Atmospheric chemistry at ground level is very different from that which occurs in the thermosphere, and there are a number of reasons why this is so. The density of the atmosphere decreases as one progresses from ground level to the edge of outer space, which means there are many more molecules per unit volume of gas available to react with each other at ground level. Another factor is temperature, which goes from near room temperature at ground level to very cold, then warmer, then very cold again on the way up, before rising steeply in the thermosphere. This affects the average velocity of the molecules, their kinetic energy, and the probability that a collision with another molecule will lead to new products. And finally, the composition of the gases themselves changes dramatically, from large, stable molecules at ground level to mostly small ions and highly charged atoms at the top of the thermosphere.

Let's begin our discussion by looking at the figure of stratification of the atmosphere that we used in the last lecture. As you can see, the atmosphere consists of four distinct regions. The area closest to the earth's surface, the troposphere, extends up about 10-16 km from the earth's surface. The stratosphere is next, and reaches up to about 50 km. The mesosphere lies between 50 and 85 km from the earth's surface, and the thermosphere goes from 85 km to 500 km away from the earth's surface.
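As a quick numerical aside, the Scale Height equation above can be evaluated directly. A minimal sketch using the constants quoted in the notes (the temperature is an assumed value, since the isothermal form needs one):

```python
import math

M  = 28.92e-3   # average molar mass of air, kg/mol (28.92 g/mol from the notes)
g  = 9.78       # m/s^2
R  = 8.314      # J mol^-1 K^-1
T  = 288.0      # K, assumed near-surface temperature (not specified in the notes)
P0 = 101.325    # kPa, sea-level pressure

def pressure(h_m):
    """Isothermal scale-height estimate: P_h = P_o * exp(-M*g*h / (R*T))."""
    return P0 * math.exp(-M * g * h_m / (R * T))

for h_km in (0, 5, 10, 16, 50):
    print(f"{h_km:3d} km: ~{pressure(h_km * 1000):7.2f} kPa")
# Pressure falls by 1/e every R*T/(M*g) ~ 8.5 km, so the troposphere
# (10-16 km) already contains most of the atmosphere's mass.
```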
Figure 31.1 Stratification of the earth's atmosphere showing changes in temperature and pressure with altitude.

The composition of the troposphere is mostly nitrogen and oxygen gases, with smaller amounts of water vapor, argon, carbon dioxide, nitrogen oxides, sulfur oxides, methane and additional trace gases. This region of the atmosphere is where all life processes occur, and it is the region most affected by anthropogenic pollution. The reactions that take place in the troposphere may be acid-base reactions or photochemical reactions, and substances in the troposphere usually have a shorter lifetime than in other atmospheric regions. The ultimate fate of most substances reacting in the troposphere is to be washed out through precipitation events.

The stratosphere is not nearly as dense as the troposphere, and molecules in the stratosphere are therefore exposed to much more intense radiation from the sun. This causes the stable forms of molecules to be smaller in size and to have higher kinetic energy. Stratospheric ozone forms under these conditions, absorbing much of the ultraviolet light coming in from the sun; this absorption increases the average molecular velocity. The composition of the stratosphere is mainly nitrogen, oxygen, nitrogen oxides and ozone.

The mesosphere contains mostly ions of the same molecules that make up the stratosphere. With less of the atmosphere above to shield them, these molecules are exposed to even more intense radiation, which is able to ionize small molecules into positive ions and electrons. The thermosphere consists of a mixture of ions and highly charged atoms that are formed by the still more intense solar radiation at the outer edge of the atmosphere.

The reason for these changes in atmospheric composition is the different amount of solar radiation present at each level of the atmosphere. Molecules act as very effective filters of light. Each layer of the atmosphere absorbs some sunlight, shielding the gases below from the radiation that it removes. The reasons for these changes are illustrated in Figure 31.2, which shows the variation of atmospheric pressure vs. altitude and temperature vs. altitude.

The temperature of the atmosphere at the earth's surface is determined by radiation of energy from the land back into the air, and by the density of the gases in the air. Regions with higher ground temperatures also have higher air temperatures. As you move away from the earth's surface, convective heating has a smaller effect and the air cools. Air temperature starts near 0° Celsius at ground level, and drops to about -60° C at 18 km from the earth's surface. The point where temperature begins to increase defines the break between the troposphere and the stratosphere. Temperature then increases to a value of about 20° C at a distance of 50 km. This is the break between the stratosphere and the mesosphere; going higher, temperature again drops with increasing altitude, reaching a low of -100° C at 85 km from the earth's surface. This defines the break between the mesosphere and the thermosphere, where temperature once more increases with increasing altitude.

The earth's solar radiation budget is an issue of major importance, and underlies the concerns surrounding global warming. Figure 32.2 illustrates the currently accepted values for the solar budget.

Figure 32.2 Earth's radiation budget expressed on the basis of portions of the 1,340 watts/m2 composing the solar flux.
On average, the earth receives 1,340 watts/m2 of energy at the top of the atmosphere, and many different things happen to this radiation before it is ultimately returned to space. The average temperature of the earth is determined by the balance between the energy received from the sun and the amount returned to space by convection and reflection. The amount of light reflected from clouds has a definite effect on the temperature of the earth's surface, and some scientists argue that a short-term solution to global warming would be to generate more clouds. The composition of the atmosphere governs the rate at which infrared radiation is emitted back to space. By adding molecules to the atmosphere that absorb infrared radiation, we have effectively placed a "blanket" over the atmosphere--resulting in warmer temperatures at ground level.

Chemical reactions in the atmosphere can occur as gas phase collisions between molecules, on the surfaces of solid particles (particulate matter), or in aqueous solution (in water droplets). The reactions that take place in water droplets are predominately acid-base reactions (the same processes we studied in chapters 3 and 4). Reactions on particle surfaces are of minor importance, in most cases, because of the short residence time particles spend in the atmosphere. Gas phase reactions dominate the chemical changes that occur to substances in the atmosphere.

The hydroxyl radical (HO·) is by far the most important single species in atmospheric chemistry; it has been called the "Ajax of the atmosphere." There are several reactions that form the hydroxyl radical, but the primary process is one where an O-H bond of a water molecule is broken to form a hydrogen atom (H·) and a hydroxyl radical (HO·). The hydrogen atom can then react with another water molecule to form hydrogen and a second hydroxyl radical, or with an oxygen molecule (O2) to form a second hydroxyl radical and an oxygen atom. The new oxygen atom can then react with another water molecule to form two new hydroxyl radicals. The result of these processes is a roughly constant concentration of about 10 million hydroxyl radicals per cubic centimeter of air at ground level. These reactions are summarized in Figure 32.3.

Figure 32.3 Reactions involved in the formation of the hydroxyl radical

These processes result in the steady-state concentrations of atmospheric hydroxyl radicals shown in Table 32.1.

Table 32.1. Average background concentrations of the hydroxyl radical in the troposphere.

Figure 32.4 Atmospheric reactions involving the hydroxyl radical.

Environmental Chemistry -- ENV 440
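To put the hydroxyl figure quoted above in perspective, a small sketch converting ~10 million radicals per cm³ into a mole fraction (the surface temperature and pressure are assumed values, purely for illustration):

```python
# Convert the ~1e7 OH radicals/cm^3 quoted above into a mole fraction
# at assumed surface conditions (T = 298 K, P = 1 atm).
K_B = 1.380649e-23   # Boltzmann constant, J/K

T = 298.0            # K (assumed)
P = 101325.0         # Pa (assumed)
n_air = P / (K_B * T) * 1e-6   # air number density, molecules per cm^3

oh = 1.0e7                     # OH radicals per cm^3 (from the text)
print(f"air: {n_air:.3e} molecules/cm^3")   # ~2.5e19
print(f"OH mole fraction: {oh / n_air:.1e}")  # ~4e-13, well below a part per trillion
```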
<urn:uuid:5a066273-47fc-4628-9963-425d142fae02>
3.84375
1,950
Academic Writing
Science & Tech.
37.471295
A block B of mass 0.4 kg and a particle P of mass 0.3 kg are connected by a light inextensible string. The string passes over a smooth pulley at the edge of a rough horizontal table. B is in contact with the table and the part of the string between B and the pulley is horizontal. P hangs freely below the pulley (see diagram).
(i) The system is in limiting equilibrium with the string taut and P on the point of moving downwards. Find the coefficient of friction between B and the table.
(ii) A horizontal force of magnitude X N, acting directly away from the pulley, is now applied to B. The system is again in limiting equilibrium with the string taut, and with P now on the point of moving upwards. Find the value of X.
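A worked solution sketch (not part of the original problem statement; assumes g = 9.8 m/s², the usual exam convention, and resolves forces in the standard way):

```python
g = 9.8               # m/s^2 (assumed value)
m_B, m_P = 0.4, 0.3   # masses of block B and particle P, kg

# (i) P about to move down: for P, tension T = m_P * g; for B, friction is
# limiting and acts away from the pulley, so T = mu * N with N = m_B * g.
T = m_P * g
mu = T / (m_B * g)
print(f"mu = {mu:.2f}")   # 0.75

# (ii) P about to move up: B is about to slide away from the pulley, so
# limiting friction now acts toward the pulley. For B: X = T + mu * N.
X = T + mu * m_B * g
print(f"X = {X:.2f} N")   # 5.88 N
```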
<urn:uuid:e92c5e01-2b83-4cdd-8ba3-8b034add72b2>
3.4375
173
Tutorial
Science & Tech.
80.91539
Advantages and Capabilities of Infrasound Monitoring for Bolide Detection

Infrasonic waves have very low signal attenuation in the atmosphere. The diagram below (taken from Beer, 1974) shows the approximate attenuation as a function of frequency and height:

Figure 1. Attenuation of infrasound with frequency and propagation height in the atmosphere.

Infrasound detection is a robust, cost-effective technology for detecting blast waves from bolides. It is possible to detect kiloton explosions at 2000-3000 km ranges, with global coverage for megaton explosions. If bolide infrasound signals are detected at two or more stations, it is possible to geolocate the source of the explosion, as shown below for an event which occurred on August 25, 2000.

Figure 2. Map showing the intersection of infrasound bearings for a bolide occurring on August 25, 2000. The intersecting infrasound azimuth solution and the satellite-determined position are labeled. Only infrasound stations at Los Alamos (DLI), IS59 (Hawaii) and IS25 (Kourou, French Guiana) (bold lines) had accurately determined azimuths.

In addition, bolide infrasound signals have characteristic periods which may be related back to the source energy of the fireball. The signal amplitudes are affected by winds, source altitude and local turbulence effects, and they also decay strongly with range; with appropriate calibrations, amplitudes may be used as a cruder estimate of source energy. An example of an infrasound signal from a large bolide which occurred over the Mediterranean Sea on June 6, 2002 is shown below. The bar shows the airwave associated with the bolide sweeping across the IS26 array in Germany, some 1800 km away.
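The geolocation in Figure 2 amounts to intersecting bearings from two or more stations. A toy flat-earth version of that idea (real processing uses great-circle geometry, travel times, and more stations; the station coordinates below are made up):

```python
import numpy as np

def intersect_bearings(p1, az1, p2, az2):
    """Intersect two rays given start points (x, y) in km and azimuths in
    degrees clockwise from north. Flat-earth toy model only."""
    d1 = np.array([np.sin(np.radians(az1)), np.cos(np.radians(az1))])
    d2 = np.array([np.sin(np.radians(az2)), np.cos(np.radians(az2))])
    # Solve p1 + t1*d1 = p2 + t2*d2 for (t1, t2)
    A = np.column_stack([d1, -d2])
    t1, t2 = np.linalg.solve(A, np.array(p2) - np.array(p1))
    return np.array(p1) + t1 * d1

# Hypothetical stations 1000 km apart, each reporting a back-azimuth to the source
src = intersect_bearings((0, 0), 45.0, (1000, 0), 315.0)
print(src)   # -> [500. 500.], the unique point consistent with both bearings
```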
<urn:uuid:97023721-8114-4b52-8223-5fcb1633e1eb>
3.359375
367
Knowledge Article
Science & Tech.
27.105034
Life on Earth has been modifying the environment for billions of years. Green-plant photosynthesis was essential for the development of our current oxygen-rich atmosphere. The history of increasing oxygen in the atmosphere and ocean is complex, however, and significant free oxygen has been available in the atmosphere only during the past 2.2 billion years. Now new measurements by University of Rochester geochemists have uncovered evidence that even after 2.2 billion years ago, the amount of oxygen in the oceans remained low, perhaps up to the time when multicelled life began to proliferate a few hundred million years ago. Their work, published this week in the journal Science, has been supported in part by the NASA Astrobiology Institute, as well as by other grants from NSF and NASA. Their paper is "Molybdenum isotope evidence for widespread anoxia in mid-Proterozoic oceans," by G.L. Arnold, A.D. Anbar, J. Barling and W.T. Lyons.

The new evidence for a relatively anoxic (oxygen-free) ocean during the Proterozoic period has several implications for astrobiology. It warns us that additional information is needed to determine the actual concentrations of oxygen in the atmosphere and ocean during the Proterozoic. It suggests the possibility that the relatively recent rise in oxygen in the ocean might have been an important environmental stimulus for the evolution of multicelled life. And the work provides important input to efforts to determine what signature of life could be detected in the atmospheres of planets circling other stars. To interpret atmospheric spectra of extrasolar planets, we need to understand how atmospheric oxygen content relates to the evolution of the biosphere. Astrobiologists are particularly interested in possible atmospheric signatures of microbial life, since the Earth was a "microbe only" planet until relatively recently.

As noted in their press release, the research team has pioneered a new method that reveals how ocean oxygen might have changed globally. Previously, geochemists developed ways to detect signs of ancient oxygen in particular areas, but not in the Earth's oceans as a whole. "This is the best direct evidence that the global oceans had less oxygen during that time," says Gail Arnold, a doctoral student of earth and environmental sciences at the University of Rochester and lead author of the research. Arnold examined rocks from northern Australia that were at the floor of the ocean over a billion years ago, using the new method developed by her and her coauthors. Their instrument -- called a Multiple Collector Inductively Coupled Plasma Mass Spectrometer -- was used to examine the chemistry of molybdenum's isotopes within the rocks. Molybdenum is an element that enters the oceans through river runoff, dissolves in seawater, and can stay dissolved for hundreds of thousands of years. By staying in solution so long, molybdenum mixes well throughout the oceans, making it an excellent global indicator. The research team learned that the chemical behavior of molybdenum's isotopes in sediments differs depending on the amount of oxygen in the overlying waters, and as a result that the chemistry of molybdenum isotopes in the global oceans depends on how much seawater is oxygen-poor. Compared to modern samples, their measurements of ancient rocks from Australia point to oceans with much less oxygen. Their press release notes that "how much less oxygen" is the next question. A world full of anoxic oceans could have serious consequences for evolution.
Eukaryotes, the kind of cells that make up all organisms except bacteria, appear in the geologic record as early as 2.7 billion years ago, but multicelled eukaryotes did not appear until much later. One of the paper's authors, Ariel Anbar of the University of Rochester, previously suggested (with paleontologist Andrew Knoll of Harvard University) that an extended period of anoxic oceans might be the key to why the more complex eukaryotes barely eked out a living while their prolific bacterial cousins thrived. "It's remarkable that we know so little about the history of our own planet's oceans," says Anbar. "Whether or not there was oxygen in the oceans is a really straightforward chemical question that you'd think would be easy to answer. It shows just how hard it is to tease information from the rock record and how much more there is for us to learn about our origins."

To help place the new work in context, Anbar addressed the question of whether the new work is consistent with previous estimates of oxygen in the Proterozoic atmosphere and oceans. He noted that the major lines of evidence usually cited for a rise in atmospheric oxygen from "almost nothing" to "something" about 2.2 billion years ago are:
a) The cessation of banded iron formation (BIF) deposition in the oceans
b) The appearance of terrestrial redbeds
c) The disappearance of easily oxidized minerals deposited in terrestrial (land) environments
d) The disappearance of the mass-independent sulfur isotope signature in marine sediments.
Of these, the last three all deal with oxygen in the atmosphere, so only the first (the BIF interpretation) is potentially contradictory in terms of the amount of oxygen in the oceans. To end BIF deposition, you need to change ocean chemistry such that the amount of iron dissolved in the oceans, and hence available to make BIFs, falls markedly. But there are at least two ways to accomplish this: through changes in either available oxygen or iron sulfides. Anbar notes that Don Canfield has suggested that the initial rise of atmospheric oxygen led to an increase in sulfate supply to the oceans, and that sulfate-reducing microbes turned the oceans sulfidic for a billion years. It is also possible that the BIF were deposited in very narrow windows of time. In this case, they may reflect temporary conditions triggered by large volcanic eruptions, rather than indicating the average state of the oceans over this billion-year time span.

We can also ask how the oceans could be in contact with an oxygen atmosphere for a billion years without themselves becoming oxygen-rich. Anbar notes that we have one similar analog today in the Black Sea, which is anoxic in spite of our oxygen-rich atmosphere. The oxygen content of the oceans is not a story of equilibrium with the atmosphere. The oxygen content is balanced by supply, mostly via equilibrium at the surface and physical mixing to depth, vs. loss, mostly due to the biota, consuming oxygen in the course of aerobic respiration. That loss rate of oxygen is largely dictated by the supply of organic carbon from the surface ocean, because it is during the aerobic respiration of that organic carbon that oxygen is consumed. In the Black Sea, the deep waters are anoxic because of a confluence of two factors: primary production in the surface waters, which supplies organic carbon to depth (organic remains settle toward the bottom), combined with very sluggish mixing of the system, which inhibits the supply of oxygenated surface water to depth.
In the Proterozoic, Anbar thinks it is reasonable to postulate that the marine biota were comparably productive. With an atmosphere having only a few percent oxygen, this microbial activity might have kept the oceans anoxic. Thus the absence of oxygen in the ancient oceans is not evidence for low levels of biological activity, but might actually be in part the product of an active anaerobic biota in the oceans. These active microbes may have been mostly bacteria and archaea, however, not the eukaryotes that play such an important role in Earth’s life today. This research suggests that we have much to do to determine the history of oxygen in Earth’s atmosphere and ocean. The Proterozoic seems to be a key period in the history of Earth’s biota. This is when the atmosphere was (perhaps only very gradually) changing to an oxygen-rich composition. It is also the time when life was preparing for the “Cambrian explosion,” when multicelled life forms and complex body types became common. Because the fossil record of the Proterozoic is so sparse, scientists know relatively little about the course of evolution during this span of more than one billion years. As we learn more about this crucial part of Earth’s history, astrobiologists will also be better able to identify ways we can look for evidence of inhabited planets around other stars. I thank Ariel Anbar for providing a copy of this paper and commenting on its significance.
<urn:uuid:d6318736-a5dd-4960-97e3-5bd1ef00c206>
3.8125
1,771
Knowledge Article
Science & Tech.
32.314317
Two subspecies of swamp sparrow nest in the mid-Atlantic states: the southern swamp sparrow and the coastal plain swamp sparrow. Scientists compared the songs of southern swamp sparrows from the mountains of Maryland with the songs of coastal plain swamp sparrows from the tidal marshes of Delaware. The songs differed between the two populations in syllable composition, repertoire size, trill rate, and frequency bandwidth. Scientists played recorded songs of the different subspecies for territorial males and the birds reacted more strongly to songs of their own subspecies. These two subspecies have likely been separated since the last Ice Age, about 10 to 15 thousand years ago. Their appearance does differ, with the coastal birds being darker, but their DNA is very similar. A divergence in song suggests that these two subspecies are on their way to becoming separate species.

This article summarizes the information in this publication: Liu, Irene A., Lohr, Bernard, Olsen, Brian and Greenberg, Russell S. 2008. Macrogeographic Vocal Variation in Subspecies of Swamp Sparrow. The Condor, 110(1): 102-109.

Variation in song can play a central role in species and subspecies recognition among birds. The ability of individuals to distinguish between songs of their own versus songs of a different subspecies potentially strengthens local adaptation of subspecific populations. We investigated the degree of vocal divergence and discrimination between two subspecies of Swamp Sparrow (Melospiza georgiana) to examine how variation in song could influence behavioral response. We recorded songs of Southern (M. g. georgiana) and Coastal Plain (M. g. nigrescens) Swamp Sparrow males in Maryland and Delaware, respectively, and analyzed variation in syllable composition, repertoire size, trill rate, and frequency bandwidth. In addition to describing differences in song characteristics, we performed an estimate of local song type diversity that predicted larger population repertoires in M. g. nigrescens. We then broadcast recordings to evaluate male territorial responses to song and found that males reacted more strongly to songs of their own subspecies than to songs of the other subspecies. The extent of song variation and discrimination suggests the possibility of continued divergence. Further tests may determine whether such results can be generalized beyond the populations studied to the subspecies level, and whether females as well as males differentiate between songs from separate subspecies.
<urn:uuid:1125c178-2a70-4deb-becd-b16bea02a004>
3.359375
507
Academic Writing
Science & Tech.
35.05292
What is the difference between the dot (.) and the dollar sign ($)? As I understand it, they are both syntactic sugar for not needing to use parentheses.

The primary purpose of the ($) operator is to avoid parentheses: anything appearing after it takes precedence over anything that comes before. For example, a line that reads `putStrLn (show (1 + 1))` can drop some or all of its parentheses as `putStrLn (show $ 1 + 1)`, `putStrLn $ show (1 + 1)`, or `putStrLn $ show $ 1 + 1`.

The primary purpose of the (.) operator, by contrast, is not to avoid parentheses but to chain functions: it lets you tie the output of whatever appears on its right to the input of whatever appears on its left. Going back to the same example, you can chain `show` and `putStrLn` as `(putStrLn . show) (1 + 1)`, and if that's too many parentheses for your liking, get rid of them with the dollar sign: `putStrLn . show $ 1 + 1`.

They have different types and different definitions:

    ($) :: (a -> b) -> a -> b
    (.) :: (b -> c) -> (a -> b) -> (a -> c)

In some cases they are interchangeable, but this is not true in general. The typical example where they are is a chain of applications: `f $ g $ h $ x` is equivalent to `f . g . h $ x`. In other words, in a chain of ($) applications, all but the final one can be replaced by (.). (Note that extra parentheses have intentionally been added around `a -> c` in the type signature above.) Hope this helps!

The short and sweet version:
- ($) calls the function which is its left-hand argument on the value which is its right-hand argument.
- (.) composes the function which is its left-hand argument on the function which is its right-hand argument.

($) allows functions to be chained together without adding parentheses to control evaluation order, while the compose operator (.) creates a new function without specifying the arguments. A one-off composition is arguably illustrative, but doesn't really show the convenience of composition: if we only use a helper function once, we can avoid naming it by using a lambda, and composition lets us avoid even the lambda.

One application that is useful and took me some time to figure out from the very short description at Learn You a Haskell: since ($) is itself a function, and parenthesizing the right-hand side of an expression containing an infix operator converts it to a prefix function, one can write a section like `($ 3)` that applies any function it is given to 3. This is handy for lists of functions, for example, and is shorter than writing each application out by hand. Alternatively, you can avoid the '.' and '$' constructions entirely by using pipelining, after you've added in a helper function such as `x |> f = f x`.
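The original answers' code blocks were lost in extraction; the following is a reconstructed, self-contained Haskell sketch in their spirit (representative examples, not the originals):

```haskell
main :: IO ()
main = do
  -- ($) applies a function to an argument; it only changes precedence.
  putStrLn $ show (1 + 1)          -- same as putStrLn (show (1 + 1))

  -- (.) builds a new function by composition; no argument is supplied yet.
  let printIt = putStrLn . show    -- printIt :: Integer -> IO () after defaulting
  printIt (1 + 1)

  -- A chain of ($) can become a chain of (.) with one final ($).
  print $ negate $ abs $ 7 - 10    -- -3; 'print . negate . abs $ 7 - 10' is equivalent

  -- Sectioning ($) itself: apply every function in a list to the value 3.
  print (map ($ 3) [(4 +), (10 *), (^ 2)])   -- [7,30,9]
```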
<urn:uuid:2f693316-fcc3-4c81-a192-f208c5fabdb0>
3.203125
423
Q&A Forum
Software Dev.
49.896562
ENABLING MANAGEMENT RESPONSE OF SOUTHEASTERN AGRICULTURAL CROP AND PASTURE SYSTEMS TO CLIMATE CHANGE

Location: National Soil Dynamics Laboratory

Title: Effects of elevated carbon dioxide and increased temperature on methane and nitrous oxide fluxes: evidence from field experiments

Submitted to: Frontiers in Ecology and the Environment
Publication Type: Peer Reviewed Journal
Publication Acceptance Date: August 31, 2012
Publication Date: December 3, 2012

Citation: Dijkstra, F.A., Prior, S.A., Runion, G.B., Torbert III, H.A., Tian, H., Lu, C., Venterea, R.T. 2012. Effects of elevated carbon dioxide and increased temperature on methane and nitrous oxide fluxes: evidence from field experiments. Frontiers in Ecology and the Environment. 10(10):520-527.

Interpretive Summary: Terrestrial ecosystems are important sources and sinks of greenhouse gases (GHGs), but their source/sink strength is sensitive to rising levels of atmospheric carbon dioxide (CO2), temperature, and changes in precipitation, which in turn can influence global climate. While CO2 has been well studied, climate change may also alter the release of other, more potent GHGs [nitrous oxide (N2O) and methane (CH4)] which are less frequently measured in climate change studies. Here we cover examples illustrating that N2O and CH4 emissions can influence climate change. Net emissions of N2O and CH4 often increase with climate change, resulting in a positive feedback for terrestrial ecosystems, despite a potential increase in carbon sequestration. It is important to include all three GHGs to accurately predict future climate change.

Technical Abstract: Climate change has important effects on carbon (C) cycling in terrestrial ecosystems and carbon dioxide (CO2) exchange with the atmosphere that can provide positive or negative feedbacks to the global climate. However, climate change also affects emissions of the much more potent greenhouse gases nitrous oxide (N2O) and methane (CH4) from terrestrial ecosystems; unlike CO2, these gases have been measured less frequently in climate change experiments. We present several examples from the literature demonstrating that climate change impacts on N2O and CH4 fluxes can significantly contribute to climate change feedbacks in terrestrial ecosystems. Net emissions of N2O and CH4 often increase with climate change resulting in a positive feedback for terrestrial ecosystems, despite the sometimes significant increase in C sequestration. It will be critical to take into account all three greenhouse gases to assess the manifestation of climate change feedbacks in terrestrial ecosystems.
<urn:uuid:1fdfcc8a-b4a7-49fc-9154-1f5484a88705>
2.875
541
Academic Writing
Science & Tech.
23.062642
An Example: the Local Supercluster

Our own galaxy and its Local Group belong to a supercluster called the Local Supercluster. It is similar in shape to a flattened ellipse (pancake), with the Virgo Cluster near its center and the Local Group near one end; its extent in the longest direction is about 40-50 Mpc. The large concentration of galaxies on the left side of the following diagram represents part of the Local Supercluster (see this explanation). Other nearby superclusters include Perseus-Pisces at a distance of about 70 Mpc and Hydra-Centaurus, which is approximately 45 Mpc distant (with the constellations in the names giving the approximate location on the celestial sphere; thus, Perseus-Pisces lies partially in the constellation Perseus and partially in Pisces, as viewed from the Earth).

Motion within the Local Supercluster

Shortly we shall discuss the expansion of the Universe. This general expansion, which increases the distances between galaxies steadily with time, is called the Hubble flow. The deviation of the velocity of a galaxy from the overall Hubble flow is termed the peculiar velocity. By examining the peculiar velocities of clusters and superclusters we can obtain estimates of local mass concentrations that may be responsible for causing the deviation from the Hubble flow.

Mass of the Local Supercluster

As we shall see, the Virgo Cluster at a distance of approximately 16 Mpc from us should, by the Hubble Law, be receding from us at a velocity of about 1100 km/s. However, the measured recessional velocity of the Virgo Cluster is approximately 170 km/s less than this. This difference, which is termed the Virgocentric peculiar velocity of the Local Group, presumably is due to the higher than average gravitational attraction felt between the Local Group and the rest of the Local Supercluster. This may be used to estimate a mass of 10¹⁵ solar masses for the Local Supercluster. The corresponding mass-to-light ratio for the Local Supercluster is about 570, expressed in units of the ratio of the solar mass to the solar luminosity, indicating the presence of large amounts of dark matter in the Local Supercluster.
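A rough numerical sketch of the Virgocentric figures quoted above (the Hubble constant of 70 km/s/Mpc is an assumed round value; the text's numbers are approximate):

```python
H0 = 70.0        # Hubble constant, km/s/Mpc (assumed round value)
d_virgo = 16.0   # distance to the Virgo Cluster, Mpc (from the text)

v_hubble = H0 * d_virgo          # pure Hubble-flow prediction
v_observed = v_hubble - 170.0    # text: measured recession is ~170 km/s less

print(f"Hubble-flow prediction: {v_hubble:.0f} km/s")   # ~1120 km/s, near the ~1100 quoted
print(f"Observed recession:     {v_observed:.0f} km/s")
# The ~170 km/s shortfall is the Local Group's peculiar velocity toward Virgo,
# from which a Local Supercluster mass of ~1e15 solar masses is inferred.
```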
<urn:uuid:087c37bb-4283-40d1-91ad-bed6c6339472>
3.90625
486
Knowledge Article
Science & Tech.
26.646319
Planetary Protection: X-ray Super-Flares Aid Formation of "Solar Systems" This Chandra image shows the Orion Nebula Cluster, a rich cluster of young stars observed almost continuously for 13 days. The long observation enabled scientists to study the X-ray behavior of young Sun-like stars with ages between 1 and 10 million years. They discovered that these young stars produce violent X-ray outbursts, or flares, that are much more frequent and energetic than anything seen today from our 4.6 billion-year-old Sun. The range of flare energies is large, with some of the stars producing flares that are a hundred times larger than others. The different flaring properties of the young Sun-like stars could have important implications for the formation of planets around these stars. According to some theoretical models, large flares could produce strong turbulence in a planet-forming disk around a young star. Such turbulence might affect the position of rocky, Earth-like planets as they form and prevent them from rapidly migrating towards the young star. Therefore, the survival chances of the Earth may have been enhanced by large flares from the young Sun. The different colors for the stars in the image are primarily due to the differences in the amount of gas and dust along the line of sight, which filters out the lower energy X-rays more effectively.
<urn:uuid:d9f45a97-cc41-4e05-a39e-d084a165d48d>
4.15625
271
Knowledge Article
Science & Tech.
45.187391
In Bayesian statistics, a credible interval (or Bayesian confidence interval) is an interval in the domain of a posterior probability distribution used for interval estimation. The generalisation to multivariate problems is the credible region. Credible intervals are analogous to confidence intervals in frequentist statistics. For example, in an experiment that determines the uncertainty distribution of a parameter t, if the probability that t lies between 35 and 45 is 0.95, then 35 ≤ t ≤ 45 is a 95% credible interval.

Choosing a credible interval

Credible intervals are not unique on a posterior distribution. Methods for defining a suitable credible interval include:
- Choosing the narrowest interval, which for a unimodal distribution will involve choosing those values of highest probability density including the mode.
- Choosing the interval where the probability of being below the interval is as likely as being above it. This interval will include the median.
- Assuming the mean exists, choosing the interval for which the mean is the central point.

It is possible to frame the choice of a credible interval within decision theory and, in that context, an optimal interval will always be a highest probability density set.

Contrasts with confidence interval

A frequentist 95% confidence interval of 35-45 means that with a large number of repeated samples, 95% of the calculated confidence intervals would include the true value of the parameter. The probability that the parameter is inside the given interval (say, 35-45) is either 0 or 1 (the non-random unknown parameter is either there or not). In frequentist terms, the parameter is fixed (cannot be considered to have a distribution of possible values) and the confidence interval is random (as it depends on the random sample). Antelman (1997, p. 375) summarizes a confidence interval as "... one interval generated by a procedure that will give correct intervals 95% of the time".

In general, Bayesian credible intervals do not coincide with frequentist confidence intervals for two reasons:
- credible intervals incorporate problem-specific contextual information from the prior distribution whereas confidence intervals are based only on the data;
- credible intervals and confidence intervals treat nuisance parameters in radically different ways.

For the case of a single parameter and data that can be summarised in a single sufficient statistic, it can be shown that the credible interval and the confidence interval will coincide if the unknown parameter is a location parameter (i.e. the forward probability function has the form f(x − μ)), with a prior that is a uniform flat distribution; and also if the unknown parameter is a scale parameter (i.e. the forward probability function has the form f(x/s)/s), with a Jeffreys' prior — the latter following because taking the logarithm of such a scale parameter turns it into a location parameter with a uniform distribution. But these are distinctly special (albeit important) cases; in general no such equivalence can be made.

- ^ Edwards, W., Lindman, H., Savage, L.J. (1963) "Bayesian statistical inference in statistical research". Psychological Review, 70, 193-242
- ^ Lee, P.M. (1997) Bayesian Statistics: An Introduction, Arnold. ISBN 0-340-67785-6
- ^ O'Hagan, A. (1994) Kendall's Advance Theory of Statistics, Vol 2B, Bayesian Inference, Section 2.51. Arnold, ISBN 0-340-52922-9
- ^ Antelman, G. (1997) Elementary Bayesian Statistics (Madansky, A. & McCulloch, R. eds.). Cheltenham, UK: Edward Elgar ISBN 978-1-85898-504-6
- ^ a b Jaynes, E. T.
(1976). "Confidence Intervals vs Bayesian Intervals", in Foundations of Probability Theory, Statistical Inference, and Statistical Theories of Science, (W. L. Harper and C. A. Hooker, eds.), Dordrecht: D. Reidel, pp. 175 et seq Statistics is easy: Confidence Interval Amir H. Ghaseminejad shows the meaning of Confidence Interval with a simple example. There are two schools in statistics. In the frequentist school, The para... Rishiri Mt. Skiing down, In credible So amazing skiing down the steep slope on very natural conditions! Small Sample Size Confidence Intervals Learn more: http://www.khanacademy.org/video?v=K4KDLWENXm0 Constructing small sample size confidence intervals using t-distributions. How to calculate Confidence Intervals and Margin of Error Tutorial on how to calculate the confidence interval and margin of error (interval estimate). Include an example and some discussion on the bell curve and z ... 95% Confidence Interval How to calculate and what it means. Intervals - Epiphany (Intro Solo Cover) One of the best song I' ve ever heard! I cut 700hz frequency to turn down the original solo volume only. The entire backing track may sound a bit different i... Gym Boss Interval Timer Gym Boss Interval Timer review and instructions on how to use it. Perfect for HIIT Training. Crossfit, MMA/Fighting, Weightlifting, Plyometrics, Calisthenics... How to use Excel to Calculate Confidence Interval Tutorial on using Microsoft Excel to determine confidence internals, margin of error, range, max, min and margin of error Like us on: http://www.facebook.com... Plot 95% Confidence Interval.mp4 Two 95% confidence intervals are plotted on a single graph. The two confidence intervals overlap. This means that at the 95% level of confidence, there is no... Stephen's Tutorials Confidence Interval in Excel Constructing a confidence interval in Excel. Finding the margin of error and then applying this to the sample mean. 2 Minute Medicine 2 Minute Medicine Thu, 16 May 2013 10:58:55 -0700 ... of patients with unprotected left main coronary artery disease (ULMCAD). With slightly wider credible interval, the Bayesian analysis concluded that PCI and CABG are comparable treatments for this patient population, and that medical therapy is ... Mon, 13 May 2013 12:20:36 -0700 The researchers estimate each year that there are 1.6 million (90% Credible Interval: 1.2–2 million) episodes of domestically acquired foodborne illness related to 30 specified pathogens. They added that there are believed to be 2.4 million (90%CrI: 1 ... Fri, 10 May 2013 08:02:47 -0700 The figures are based on the roll call vote analysis, and for each legislator provide a mean ideal point (referred to below as the Lib-Con Score) along with the 95 percent credible interval (CI) for this point estimate. Only when a legislator's CI does ... 7thSpace Interactive (press release) Fri, 10 May 2013 05:37:26 -0700 Results: The ratio of NGI risk to that of HCGI is estimated to be 4.5 with a credible interval 3.2 to 7.7. Conclusions: A risk level of 8 HCGI illnesses per 1000 swimmers, as in the 1986 freshwater criteria, would correspond to 36 NGI illnesses per ... 7thSpace Interactive (press release) Thu, 02 May 2013 20:35:19 -0700 Results from Montreal suggest that surgeons are half as likely as gastroenterologists to remove polyps, while those from Calgary were associated with a wide, non-significant Bayesian credible interval. However, residual confounding from patient-level ... 
Fri, 26 Apr 2013 06:46:39 -0700 Using cartographic approaches, we estimate there to be 390 million (95% credible interval 284–528) dengue infections per year, of which 96 million (67–136) manifest apparently (any level of disease severity). This infection total is more than three ... Oops, we seem to be having trouble contacting Twitter You can talk about Credible interval with people all over the world in our discussions.
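To make the equal-tailed construction described above concrete, here is a small Monte Carlo sketch (illustrative only; the Beta posterior is a made-up example, not from the article):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical posterior for a rate parameter: Beta(8, 4), summarized by draws.
draws = rng.beta(8, 4, size=100_000)

# Equal-tailed 95% credible interval: 2.5% below, 2.5% above (includes the median).
lo, hi = np.percentile(draws, [2.5, 97.5])
print(f"95% credible interval: ({lo:.3f}, {hi:.3f})")

# Under the Bayesian reading, the parameter itself lies in (lo, hi) with
# probability 0.95 -- a statement a frequentist confidence interval cannot make.
```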
<urn:uuid:802c1a17-d527-4c32-bbf4-abdd4a2873db>
3.28125
1,709
Knowledge Article
Science & Tech.
49.890297
Science in the News, December 2004 -- On December 26, 2004, a tsunami in the Indian Ocean devastated coastal areas of Indonesia, Sri Lanka, Thailand, India, Somalia, Myanmar, and others, causing more than 225,000 fatalities and leaving more than five million people homeless. This tsunami was triggered by an intense underwater earthquake 6.2 miles below the ocean floor registering 9.0 on the Richter Scale, the strongest earthquake since 1964. This was the first tsunami to occur in the Indian Ocean in over 100 years.

"Tsunami" is a Japanese word that means "wave in the harbor." Also known as tidal waves, tsunamis are very large ocean waves created by underwater disturbances, such as earthquakes or volcanic eruptions. Moving outward from their source, these waves travel very fast, up to 600 miles (1000 kilometers) per hour. While traveling through deep water these waves may only reach a foot or two (30-60 centimeters) in height, and look unremarkable. The waves slow down as they reach shallow water, causing water to pile up into very high (and still very fast) waves as tall as 34 feet (10.5 meters). Rapid changes in water levels are an indication of an approaching tsunami.

Tsunamis can be generated in all of the world's oceans, inland seas, and any large body of water. Most tsunamis occur in the Pacific Ocean, which covers more than one-third of the earth's surface. Between 1900 and 2001, 796 tsunamis were recorded in the Pacific Ocean.

Natural disasters like tsunamis, hurricanes, rogue waves, and storms at sea have claimed many lives and still present a threat to coastal communities despite growing technological capabilities and early warning systems. The National Oceanic and Atmospheric Administration has two tsunami warning centers in the United States - in Hawaii and Alaska. Scientists and world leaders are now planning to build a tsunami warning system in the Indian Ocean, as well as communication and education programs to inform people on how to respond to a tsunami.

Tsunamis can also have a negative effect on the natural environment, as they cause damage to already fragile coral reefs and mangrove swamps. Coral reefs and mangrove swamps are vital feeding and breeding grounds for fish. Therefore, their destruction could leave coastlines vulnerable to erosion, and local communities without a vital source of food.

Wave That Shook The World: This NOVA website is based on the PBS television program of the same name and tells the story of the 2004 tsunami that spread for 3,000 miles around the Indian Ocean basin. The site offers video footage, detailed animation, and scientific analysis.

Tsunami!: Hosted by the Geophysics Program at the University of Washington, this site provides information about the physics of a tsunami, historical, and recent tsunami events, links, and other information. You can also see an animation of the 1960 Chilean tsunami sweeping across the Pacific Ocean [requires QuickTime].

Under a Wall of Water: For an account of the "23 foot wall of water" that struck Papua New Guinea, see this Time magazine article.
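The slow-down-and-pile-up behavior described above follows from the standard shallow-water wave speed c = sqrt(g·d); that formula is textbook physics rather than something stated in the article, but it reproduces the article's speed figures:

```python
import math

g = 9.81  # m/s^2

def wave_speed_kmh(depth_m):
    """Shallow-water wave speed c = sqrt(g * d), converted to km/h."""
    return math.sqrt(g * depth_m) * 3.6

for depth in (4000, 1000, 100, 10):
    print(f"depth {depth:5d} m: ~{wave_speed_kmh(depth):6.0f} km/h")
# ~4000 m of open ocean gives ~713 km/h -- the same order as the article's
# "up to 1000 km/h"; in 10 m of water the wave slows to ~35 km/h and piles up.
```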
<urn:uuid:dd759b16-f508-4056-9ae8-6a6b2aeb8e9f>
4.25
633
Knowledge Article
Science & Tech.
41.769526
The 24 facets are octahedral, and six meet at each vertex. The number of vertices is also 24, as the 24-cell is self-dual. Vertices of a 24-cell centred at the origin of 4-space, with edges of length 1, can be given as follows: 16 vertices of the form (±½,±½,±½,±½), and 8 vertices obtained from (0,0,0,±1) by permuting coordinates. (Note that the first 16 vertices are the vertices of a tesseract, and the other 8 are the vertices of the dual of the tesseract. An analogous construction in 3-space gives the rhombic dodecahedron, which, however, is not regular.) These 24 vectors generate a lattice in R4. If we interpret the vectors as quaternions, then the lattice is closed under multiplication and is therefore a ring.
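A quick computational check of this construction (a sketch using itertools; it counts the vertices and confirms the unit edge length via nearest-neighbor distances):

```python
from itertools import product, permutations

# 16 tesseract-type vertices of the form (+-1/2, +-1/2, +-1/2, +-1/2)
verts = {p for p in product((-0.5, 0.5), repeat=4)}
# 8 vertices obtained from (0, 0, 0, +-1) by permuting coordinates
verts |= {p for s in (1.0, -1.0) for p in set(permutations((0.0, 0.0, 0.0, s)))}
assert len(verts) == 24   # the 24-cell has 24 vertices

def dist2(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v))

vlist = sorted(verts)
# Smallest squared distance between distinct vertices: should be 1 (unit edges).
min_d2 = min(dist2(u, v) for i, u in enumerate(vlist) for v in vlist[i + 1:])
print(len(vlist), min_d2)   # 24 1.0

# Each vertex has exactly 8 nearest neighbors at distance 1 (8 edges per vertex).
counts = {u: sum(dist2(u, v) == 1.0 for v in vlist if v != u) for u in vlist}
print(set(counts.values()))  # {8}
```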
<urn:uuid:ffd94d78-d30c-4567-a171-6986a9e34b29>
2.828125
202
Knowledge Article
Science & Tech.
63.806667
This section explains how to use the match data to find out what was matched by the last search or match operation, if it succeeded. You can ask about the entire matching text, or about a particular parenthetical subexpression of a regular expression. The count argument in the functions below specifies which. If count is zero, you are asking about the entire match. If count is positive, it specifies which subexpression you want.

Recall that the subexpressions of a regular expression are those expressions grouped with escaped parentheses, ‘\(...\)’. The countth subexpression is found by counting occurrences of ‘\(’ from the beginning of the whole regular expression. The first subexpression is numbered 1, the second 2, and so on. Only regular expressions can have subexpressions—after a simple string search, the only information available is about the entire match.

Every successful search sets the match data. Therefore, you should query the match data immediately after searching, before calling any other function that might perform another search. Alternatively, you may save and restore the match data (see Saving Match Data) around the call to functions that could perform another search. Or use the functions that explicitly do not modify the match data, such as string-match-p.

A search which fails may or may not alter the match data. In the current implementation, it does not, but we may change it in the future. Don't try to rely on the value of the match data after a failing search.

match-string count &optional in-string
    This function returns, as a string, the text matched in the last search or match operation. It returns the entire text if count is zero, or just the portion corresponding to the countth parenthetical subexpression, if count is positive. If the last such operation was done against a string with string-match, then you should pass the same string as the argument in-string. After a buffer search or match, you should omit in-string or pass nil for it; but you should make sure that the current buffer when you call match-string is the one in which you did the searching or matching. Failure to follow this advice will lead to incorrect results. The value is nil if count is out of range, or for a subexpression inside a ‘\|’ alternative that wasn't used or a repetition that repeated zero times.

match-string-no-properties count &optional in-string
    This function is like match-string except that the result has no text properties.

match-beginning count
    This function returns the position of the start of the text matched by the last regular expression searched for, or a subexpression of it. If count is zero, then the value is the position of the start of the entire match. Otherwise, count specifies a subexpression in the regular expression, and the value of the function is the starting position of the match for that subexpression. The value is nil for a subexpression inside a ‘\|’ alternative that wasn't used or a repetition that repeated zero times.

match-end count
    This function is like match-beginning except that it returns the position of the end of the match, rather than the position of the beginning.

Here is an example of using the match data, with a comment showing the positions within the text:

(string-match "\\(qu\\)\\(ick\\)"
              "The quick fox jumped quickly.")
     ;0123456789
     ⇒ 4

(match-string 0 "The quick fox jumped quickly.")
     ⇒ "quick"
(match-string 1 "The quick fox jumped quickly.")
     ⇒ "qu"
(match-string 2 "The quick fox jumped quickly.")
     ⇒ "ick"

(match-beginning 1)       ; The beginning of the match
     ⇒ 4                  ;   with ‘qu’ is at index 4.
(match-beginning 2)       ; The beginning of the match
     ⇒ 6                  ;   with ‘ick’ is at index 6.

(match-end 1)             ; The end of the match
     ⇒ 6                  ;   with ‘qu’ is at index 6.
(match-end 2)             ; The end of the match
     ⇒ 9                  ;   with ‘ick’ is at index 9.

Here is another example. Point is initially located at the beginning of the line. Searching moves point to between the space and the word ‘in’. The beginning of the entire match is at the 9th character of the buffer (‘T’), and the beginning of the match for the first subexpression is at the 13th character (‘c’).

(list
  (re-search-forward "The \\(cat \\)")
  (match-beginning 0)
  (match-beginning 1))
     ⇒ (17 9 13)

---------- Buffer: foo ----------
I read "The cat -!-in the hat comes back" twice.
        ^   ^
        9  13
---------- Buffer: foo ----------

(In this case, the index returned is a buffer position; the first character of the buffer counts as 1.)
<urn:uuid:7a8e4ca5-070e-4639-9064-c6997863996d>
2.875
1,035
Documentation
Software Dev.
62.897084
A storm over shifting sands
Desertification advances in China
The Boston Globe, May 5, 2002

LONGBAOSHAN, China - Whipped by the wind, sand from Sky Desert swept through this village last month like sheets of stinging rain, clattering against dried corn husks and piling up in low drifts against buildings. Longbaoshan, a farming community about 40 miles northwest of Beijing, stands on the front line of China's losing war against the country's advancing deserts. Driven by overgrazing, overpopulation, drought, and poor land management, the deserts are slowly consuming vast areas of the country in a looming ecological disaster.

Official figures tell a frightening story. From 1994 to 1999, desertified land grew by 20,280 square miles. Desert blankets more than a quarter of China's territory. Sands threaten herders and farmers in a nation with one-fifth of the world's population but only one-15th of its arable land. Scientists warn of calamity if the government does not stop the sands. "Pastures, farmland, railroads, and other means of transportation will be buried under sand," said Dong Guangrong, a research fellow in environmental engineering at the Chinese Academy of Sciences in western China's Gansu Province.

The environmental damage is visible across northern and northwestern China, the country's driest regions. In areas such as Inner Mongolia, sand dunes are enveloping grasslands, according to a US Embassy report. Not far from Sky Desert, sand and dust pour into the Guanting Reservoir - one of two from which Beijing draws water - at a rate of nearly 3 million tons annually. Silt, fertilizer runoff, and factory pollution made the water unfit for drinking in 1997. Last month, the worst sandstorm in a decade blinded the capital, painting the sky yellow and engulfing 40-story buildings. The storm dumped 30,000 tons of sand on the city.

The effects of China's sandstorms stretch far beyond the capital. The National Aeronautics and Space Administration tracked dust from this spring's storm as it traveled across the Pacific Ocean and swirled high above California.

Officials are trying to stop the sands by building green buffers. A project intended to protect Beijing in advance of the 2008 summer Olympic Games involves reclaiming desertified land in 75 counties. In Xuanhua County, about 90 miles northwest of the capital, officials are trying to finish planting a belt of white poplars and pines around the Yanghe Reservoir to halt an adjacent desert. In the past decade, more than 250,000 soldiers have pitched in, officials say. Officials acknowledged that poplars, which cost about 70 cents each, have limitations. In winter, they have no leaves to block sand. The trees also struggle during drought, which has afflicted Xuanhua County for three years.

Critics question the efforts outside Beijing, arguing that larger deserts in places such as Inner Mongolia contribute more to sandstorms, and thus deserve greater attention. "Putting hundreds of millions of dollars into the Beijing-Tianjin Sand Prevention and Forest Belt Project and ignoring the major sand source regions is ... practicing self-deception," wrote Shi Yuanchun of the China Academy of Sciences.

A key weapon against desertification is water. Demand from industry and rising living standards have tapped rivers dry and have created shortages of drinking water. In 1997, the lower part of the Yellow River, China's "mother" river, ran dry for 226 days. For most of the year, the river's waters never reached the Yellow Sea.
This story ran on page A9 of the Boston Globe on 5/5/2002.
<urn:uuid:70ee8ba4-9ec2-4a67-a308-bf88001306db>
3.09375
756
Truncated
Science & Tech.
51.925882
As concerns about global warming grow, scientists are turning to sophisticated computational models to better understand and ultimately predict the impact of climate change and human activity on biodiversity.

Putting computer science to work on biodiversity

Policy makers are asking tough questions about how climate change may affect populations of species and their habitat. Scientists haven't been able to provide definitive answers because ecosystems, and even each plant or animal, are so complex, and much about them remains unknown. Now, scientists are looking more frequently to hardware and software tools to help them make sense of the huge amounts of data they are collecting on species, and to give them a solid base from which to predict what might happen in the future.

"It's very important for science to become better able to predict the behaviour of ecosystems," says Rich Williams, a researcher with Microsoft Research, heading the Computational Ecology and Environmental Science group in the Cambridge (UK) lab. "Environmental policy making needs to move away from the triage behaviour it often has, and better scientific understanding of ecosystems is vital to that goal."

To help push fundamental advances in ecosystem science, Microsoft Research and its partners are working on common methodologies, computer models and scientific workflow software that eventually could be used broadly by the scientific community. They hope their work will lead to new decision-support tools to better inform policymaking by governments and nongovernmental organisations worldwide.

Ecosystems pose large computational challenges. They are huge entities composed of plants and animals that rely on each other in many ways, such as predators or prey, and that vary greatly in lifespan, movement and physical size. "You have to come up with useful abstractions, and not know every detail," Williams says.

The first steps are to get baseline data on species and then to standardise those data. Much of the data about the biological world cannot be shared and compared easily. They exist in scattered scientific papers, library archives, and in labs where scientists sometimes won't share their hard-to-get field data.

Jorge Soberon knows first-hand the magnitude of the task of compiling reliable data to use for predictions. A senior scientist at the University of Kansas, Soberon is working with Williams and others to create a model of the geographical distribution of species in Mexico's cloud forests. These unique forests occur worldwide on tropical mountains where there is frequent cloud cover or fog. They are rich in biodiversity, and contain many rare or endangered species, some of which occur only in the forests. In Mexico about 20 per cent of the plant species of the country live in the cloud forests, which occupy about 1 per cent of that country's total area. That's 10 times the number of plant species in Mexico's tropical rain forests, Soberon says.

Cloud forests strip moisture from the clouds and fogs to provide water for millions of people who live in villages below them. They are being threatened by farming, alien species and, probably the biggest factor, climate change, according to a report by the UK-based United Nations Environment Programme World Conservation Monitoring Centre. There are already indications of populations suffering heat-related declines. Some butterflies are leaving the mountains to head north, or moving higher into the mountains. "The butterflies require a certain temperature and precipitation," Soberon says. "If the trend in warmer climate continues, they will run out of suitable climate in 50 to 100 years." He adds, "Since [climate] is going to change, we need to know where the species are going to be more endangered, and where the best possibilities are for doing something about it. We hope the results will be used to do policy and decision-making in Mexico."

Soberon is studying the locations where species have been observed in Mexico's cloud forests and their environment, such as temperature and rainfall, so he can understand what conditions the species need in order to live. There already are at least 20 competing methods for using these data to predict the geographic distribution of species. His work involves constructing virtual environments where all the relevant variables can be controlled, and testing and comparing the existing methods for predicting species distributions.

Soberon says his database of species in Mexico's cloud forests so far has nearly 6,000 valid names of vertebrate and plant species and 30,000 records of observations of them, with the aim of having twice that number of observations. Those records can be used with mathematical modelling algorithms to predict the distribution of each of the species. "We will fit the species into the current and very recent past climates, and extrapolate given the future of climates that people like the IPCC (Intergovernmental Panel on Climate Change) provide you with," Soberon explains. "These are big simulations with thousands of species and hundreds of thousands of data points." The results of the project will be available publicly so other scientists can apply the methods to their own databases.

Tracking the invaders

The need for tools that can be broadly used prompted work on invasive species models by Elizaveta Pachepsky, a specialist scientist at the University of California at Santa Barbara. She began developing an invasion tool while doing postdoctoral research at Microsoft Research Cambridge (UK) in collaboration with Ed Baskerville, a visiting software developer. The tool aims to be easy for anyone to use with existing data to help develop models that predict which species are invasive and how fast they spread. The initial focus is on invasive plants, but the technology could be used for other species.

"My idea was to develop a software tool to develop models without having to have a math background," she says. Most models now use software code that is not published or that consists of a command line and lots of mathematical symbols, which can scare off users. Pachepsky is basing her tools on Microsoft's Windows operating system and Silverlight graphical Web presentation.

The goal is to enable prediction and prevention strategies that could help stem invasive species' disruption to ecosystems and economic damage. By the end of the 20th century more than 120 marine species had been transported to Europe by gardeners, on hikers' shoes, in ballast water of ships, and by other means, she says. DAISIE, the first comprehensive database of invasive and alien species in Europe, lists about 9,000 alien species. Only about 10 per cent are invasive, meaning they cause economic damage or threaten biodiversity. Among DAISIE's "100 of the worst" invaders is Acacia dealbata, a fast-growing tree also called the silver or blue wattle. In the United States, an estimated 50,000 invasive species of all kinds cause roughly $137 billion in damage per year.

Pachepsky says there are two components to the tool: one works with existing data (the spread of invasive species over several years), and another predicts what will happen in the coming years. Pachepsky expects a basic Web-based version and a standalone version of the tool to be released within Microsoft this year and then to the scientific community as a free download.

The work done by Pachepsky and Soberon requires collaboration among scientists in different disciplines. "For invasive species, the tool uses math, software development, computation, statistics and ecology. These are five major fields of science and inquiry," says Pachepsky. "Bringing that together in a way that is coherent is a big challenge in science and progress in general."

Microsoft Research's Williams, whose group comprises scientists who are knowledgeable about technology, sees such multidisciplinary collaborations as fertile ground for problem solving. "It's a two-way street," says Williams. "Computational tools and techniques open up new ways of tackling ecological problems. And biological ideas and the needs of scientists inspire the development of novel computational methods. It's really fantastic." He adds that the results can be valuable in other areas as well. "There's already some interest in the potential for tech transfer from the tools that are being developed into product groups at Microsoft."
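The "fit species to climate, then extrapolate" workflow Soberon describes can be illustrated with a toy model. The Java sketch below implements a minimal "climate envelope" predictor — one of the simplest of the 20-odd competing methods alluded to above, not the project's actual algorithms — and every species record and climate value in it is made up for illustration.

```java
import java.util.Arrays;

/** Minimal "climate envelope" sketch: a species is predicted present
 *  wherever every climate variable falls inside the range observed at
 *  its known occurrence sites. All numbers are hypothetical. */
public class EnvelopeModel {
    public static void main(String[] args) {
        // Rows = occurrence records; columns = {mean temp (C), annual rainfall (mm)}
        double[][] occurrences = {{14.0, 1800}, {16.5, 2100}, {15.2, 1500}};

        int vars = occurrences[0].length;
        double[] lo = new double[vars], hi = new double[vars];
        Arrays.fill(lo, Double.POSITIVE_INFINITY);
        Arrays.fill(hi, Double.NEGATIVE_INFINITY);
        for (double[] rec : occurrences)
            for (int v = 0; v < vars; v++) {
                lo[v] = Math.min(lo[v], rec[v]);   // envelope lower bound
                hi[v] = Math.max(hi[v], rec[v]);   // envelope upper bound
            }

        // Candidate grid cells under a hypothetical future climate
        double[][] future = {{15.0, 1700}, {18.0, 1400}, {14.5, 2000}};
        for (double[] cell : future) {
            boolean suitable = true;
            for (int v = 0; v < vars; v++)
                if (cell[v] < lo[v] || cell[v] > hi[v]) suitable = false;
            System.out.println(Arrays.toString(cell) + " suitable: " + suitable);
        }
    }
}
```

Real projects replace the rectangular envelope with statistical models and run it over thousands of species, but the input/output shape — occurrence records plus climate layers in, predicted ranges out — is the same.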
<urn:uuid:a437c0b7-2cab-4ca2-96f0-bfb1fa462061>
3.65625
1,679
Knowledge Article
Science & Tech.
32.472903
This image from the OMI instrument on NASA's Aura satellite shows nitrogen dioxide levels from July 15, 2011 to July 18, 2011 over Ontario and the Great Lakes. The highest levels of NO2 appear in red. The NO2 is measured by the number of molecules in a cubic centimeter. Credit: NASA/James Acker

Aura Detects Pollution in the Great Lakes Region

Fires throughout Ontario are generating pollution that is showing up in data from NASA's Aura satellite in the Great Lakes region. The fires have also forced thousands of residents to evacuate to other areas in Canada, according to CBC News. About 112 fires have ravaged 81,545 acres so far, said the province's minister of natural resources, Linda Jeffrey. Because smoke from the wildfires is blowing south, the Minnesota Pollution Control Agency issued an air pollution alert, according to the Duluth News Tribune.

An image of the pollution from the fires in Ontario was created from data on nitrogen dioxide (NO2) levels over the period from July 15, 2011 to July 18, 2011. It was created from Ozone Monitoring Instrument (OMI) data using the NASA Giovanni system by Dr. James Acker at NASA's Goddard Space Flight Center in Greenbelt, Md.

NO2 forms during fires when nitrogen reacts with oxygen; in fact, NO2 is formed in any combustion process where the oxygen is provided by Earth's atmosphere. Detection of NO2 is important because it reacts with sunlight to create low-level ozone, or smog, and poor air quality. The OMI instrument that flies aboard NASA's Aura satellite is able to detect NO2. Low-level ozone (smog) is hazardous to the health of both plants and animals, and ozone in association with particulate matter causes respiratory problems in humans.

OMI measures NO2 by the number of molecules in a cubic centimeter. The highest concentrations appear in dark red and are located at the southern tip of Lake Michigan.

OMI data are archived at the NASA Goddard Earth Sciences Data and Information Services Center (GES DISC), and are provided by KNMI, the Koninklijk Nederlands Meteorologisch Instituut (Royal Netherlands Meteorological Institute). Dr. P.F. Levelt is the Principal Investigator of OMI, Dr. J. Tamminen is the Finnish Co-PI, and Dr. P.K. Bhartia leads the U.S. OMI science team. Dr. James Gleason (NASA) and Pepijn Veefkind (KNMI) are PIs of the OMI NO2 product.

Rob Gutro / Dr. James Acker
NASA's Goddard Space Flight Center, Greenbelt, Md.
<urn:uuid:ba34e449-693e-447c-bcaf-1d1bc676dec7>
3.65625
557
Knowledge Article
Science & Tech.
52.164439
Recall that anywhere there is an electric charge, there exists an electric field around it throughout space. If the charge moves, the electric field (or any field line) around it moves as well. The motion of the charge causes a change in the electric field intensity (E) at any point in space that varies with distance and time. Also, recall that the motion of a charge creates a magnetic field (B) that is perpendicular to the direction of motion of the charge. The important aspect is that the two effects, E and consequently B, turn out to be perpendicular to each other, and they coexist.

Now, if the motion of the charge is of an oscillatory nature [with a sinusoidal equation: y = yo sin(ωt)], an up-and-down motion for example, the variations of E and B will be sinusoidal as well, with equations of the type E = Eo sin(ωt) and B = Bo sin(ωt), where ω = 2πf. Since such oscillations cause other charges at some distance away (no matter how far) to oscillate accordingly, we believe that oscillations of an electric charge create waves that propagate through space. The propagation of such waves, called "electromagnetic waves," through space may be pictured as two perpendicular sine curves (E and B) advancing along the direction of travel. This is the way field variations are sensed by a distant charge as the waves pass by it.

Note that E is much greater than B. In fact, E = cB, where c is the speed of light (3.00x10^8 m/s in vacuum). The scale on the E-axis is therefore much greater than the scale on the B-axis (an order of 10^8 times greater). This means that the magnetic effect is much weaker than the electric effect. Also, this picture shows one propagation direction only.

Note that the E and B that reach a distant charge as a result of E&M wave propagation cause that distant charge to oscillate accordingly. The transmission of this effect is very fast, but not instant. The speed of propagation of charge oscillations, E&M waves, is measured to be 3.00x10^8 m/s in vacuum. This means 300,000 km/s, or 186,000 miles/s. You may put 186,000 miles on your car in 8 to 10 years. Electromagnetic waves (light being one type of them) travel that distance in one second!

All radio transmitters and cellular phone systems take advantage of E&M waves. The trick is to mount sound waves (voice) onto the E&M waves (called modulation) and send them at the speed of light. This is the maximum possible speed according to Einstein's theory of relativity.

Recall that wave speed (v) is related to wavelength (λ, pronounced "lambda") and frequency (f) by v = f λ. For electromagnetic (E&M) waves, the letter c is commonly used for the wave speed: c = f λ. Since c is a constant for any given medium, if f increases, then λ has to decrease in that medium.

Example 1: An AC source is running at a frequency of 60.0 Hz. This causes the current (moving charges) in wires connected to this generator to flow back and forth at that frequency. The charge oscillations in such wires produce E&M waves that, as we know, propagate at the speed of light, c = 3.00x10^8 m/s. Find the wavelength, λ, of such waves.

Solution: From c = f λ, we get λ = c/f = (3.00x10^8 / 60.0) m = 5,000,000 m = 3100 miles. This is a very long wavelength, and such a wave is therefore very weak! Only 60.0 of these waves pass by a given point in space every second. This means that if there is a charge at one point in space, it oscillates only 60.0 times per second as such waves keep coming to it. The shorter the wavelength, the more energetic the wave is, or the more energy it carries.
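Since c is fixed in vacuum, frequency and wavelength are locked together by c = f λ. The short Java sketch below tabulates wavelengths for a few representative frequencies across the spectrum; the values are approximate and chosen only for illustration.

```java
/** Quick numerical check of c = f * lambda for several frequencies.
 *  For a fixed wave speed, frequency and wavelength are inversely
 *  related: doubling f halves lambda. */
public class WavelengthCheck {
    static final double C = 3.00e8; // speed of light in vacuum, m/s

    public static void main(String[] args) {
        double[] freqs  = {60.0, 2.0e9, 4.3e14, 7.5e14}; // Hz
        String[] labels = {"60 Hz power line", "cell phone (~2 GHz)",
                           "red light", "violet light"};
        for (int i = 0; i < freqs.length; i++) {
            double lambda = C / freqs[i]; // wavelength in meters
            System.out.printf("%-22s f = %.2e Hz  lambda = %.2e m%n",
                              labels[i], freqs[i], lambda);
        }
    }
}
```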
Shorter wavelengths are associated with higher frequencies (c = f λ). Higher frequencies make a distant charge oscillate faster, thus imparting more energy to that charge.

Example 2: Waves transmitted or received by cell phones have wavelengths of about 15 cm, or 6.0 inches. Calculate the frequency of such waves and express it in MHz. Note that MHz means megahertz, or million hertz.

Solution: c = f λ; f = c/λ = (3.00x10^8 / 0.15) s^-1 = 2.00x10^9 Hz = 2000 MHz.

Example 3: White light is a mixture of a large number of different electromagnetic waves whose wavelengths range from about 400 nm (violet) to about 700 nm (red). Find the corresponding frequency for each of these two limiting values in the visible range.

Solution: To be solved by students.

Speed of E&M Waves by Calculation: By solving the wave equation (a differential equation not shown here), it is possible to show that the speed of E&M waves is given by c = 1/√(μoεo).

Exercise: Use the values of εo and μo (from previous chapters, or the front or back cover of your text) to verify that c = 3.00x10^8 m/s.

Energy Carried by Electromagnetic Waves: The energy carried by a wave is proportional to the square of its amplitude (A^2). For E&M waves, it will be Eo^2, or Bo^2, or EoBo, where Eo and Bo are the maximum values of the electric and magnetic field intensities, and Eo = cBo. This energy transfer is expressed in units of joules per second per square meter through space. Since a joule per second is a watt, we may say that the energy carried by a wave is expressed in watts/m^2. It can be shown that the formula for the maximum intensity carried by an electromagnetic wave takes any of the three following forms:

Io = c εo Eo^2 = c Bo^2 / μo = Eo Bo / μo.

For a continuous sinusoidal wave, we may calculate an RMS (root mean square) value. Since rms power is 1/2 of the maximum power, we may write [subscript (o) denotes maximum value]:

Irms = (1/2) c εo Eo^2 = c Bo^2 / (2μo) = Eo Bo / (2μo).

Example 4: A 10-kilowatt AM radio station is tuned in by a radio set at a distance of 5.0 miles = 8.0 km from the transmitter antenna. Isotropic wave propagation in space means that waves are sent out by the transmitter uniformly in all directions. This is not really the case with actual antennas, but for simplicity here, we suppose isotropic propagation of wave energy in space. Assuming isotropic propagation, calculate (a) the wave intensity (Irms) in watts/m^2 at the 8.0-km radius, and (b) the magnitudes of the electric and magnetic field strengths (Eo and Bo) at that radius.

Solution: Visualize a huge sphere (of 8000 m radius, or 5 miles) over whose surface 10,000 watts of energy is to be distributed. How much energy will every m^2 of it receive? This simple division gives us the value of Irms.

Irms = 10,000 W / [4π(8000 m)^2] = 1.2x10^-5 W/m^2 (or 12 microwatts per m^2).

Since Irms = (1/2) c εo Eo^2, solving for Eo we get: Eo = [2 Irms / (c εo)]^(1/2) = 9.7x10^-2 V/m. (Can you verify that volt/meter is the same as N/Coul?)

Since Eo = cBo, we get: Bo = Eo/c = (9.7x10^-2 / 3.00x10^8) = 3.2x10^-10 T.

Test Yourself 1:

1) An electromagnetic wave is a result of the oscillation of (a) an electron (b) a proton (c) a neutron (d) a & b.

2) When a charged particle moves, its electric field (a) moves accordingly (b) remains unchanged (c) varies as sensed by other charges elsewhere (d) a & c.

3) If a charged particle oscillates, its equation of motion is (a) quadratic in time (b) sinusoidal in time (c) neither a nor b.
4) A particle oscillating in the y-direction at a frequency of f Hz and an amplitude of A meters follows (a) y = A sin(2πft) (b) y = A sin(t) (c) y = A sin(ft).

5) If a charged particle oscillates, the ripples generated in its electric field follow (a) E = Eo sin(ωt), where ω = 2πf (b) E = (1/2)Eo t^2 + ωt (c) E = (1/2)Eo t^2.

6) Anywhere a charged particle moves, it generates a magnetic effect that is (a) parallel to its direction of motion (b) perpendicular to its direction of motion (c) neither a nor b.

7) The magnetic effect of an electromagnetic wave is (a) separable from its electric effect (b) not separable from its electric effect (c) only separable at high-frequency oscillations of a charged particle.

8) The magnetic effect of an E&M wave (a) is much stronger than its electric effect (b) is much weaker than its electric effect (c) has the same strength as its electric effect.

9) The speed of E&M waves in vacuum is (a) the same as the speed of light (b) 3.00x10^10 cm/s (c) 3.00x10^5 km/s (d) 186,000 mi/s (e) a, b, & c.

10) Light is (a) an E&M wave (b) a mechanical wave and cannot travel in vacuum (c) a longitudinal wave (d) a transverse wave (e) a & d.

11) The formula for wave speed is (a) v = f λ (b) v = ω λ (c) v = f ω.

12) Frequency is defined as (a) the number of meters per second (b) the number of ωs that occur per second (c) the number of wavelengths (full cycles) that occur per second.

13) Wavelength is (a) the distance between any two crests on a wave (b) the distance between a crest and the next one on a wave (c) the distance between a trough and the next one on a wave (d) b & c.

14) The energy carried by a wave is proportional to (a) its amplitude, A (b) its amplitude squared, A^2 (c) neither a nor b.

15) When a charged particle oscillates at one point in space, other charges in space (a) oscillate instantly as a result (b) will oscillate accordingly at some later time depending on their relative distances (c) both a & b.

16) Since E&M waves move at a constant velocity in a medium with fixed properties (v = 300,000 km/s in vacuum), they do not accelerate (a = 0), and the equation of motion for them is (a) x = (1/2)a t^2 + vi t (b) x = vi t (c) x = Rθ.

17) If a charge starts oscillating now here on the Earth, a charge that is on the Moon, an average distance of 384,000 km away, will start oscillating (a) 1.2 min later (b) 1.28 s later (c) one month later.

18) If a charge starts oscillating now here on the Earth, a charge that is on the Sun, an average distance of 150,000,000 km away, will start oscillating (a) 8.3 min later (b) 500 s later (c) both a & b.

19) A light-year is the distance light travels in 1 yr. If a charge starts oscillating now here on the Earth, a charge that is on the star Alpha Centauri, 4 light-years away, will start oscillating (a) 3x10^8 s later (b) 2 light-years later (c) neither a nor b.

20) From the Earth, it takes a radio signal (an E&M wave) 5.0 s to reach a space station and come back. The space station is (a) 1,500,000 km away (b) 3,000,000 km away (c) 750,000 km away.

21) From the Earth, it takes a radio signal (an E&M wave) 5.0 min to reach a space station and come back. The space station is (a) 45,000,000 km away (b) 90,000,000 km away (c) 75,000,000 km away.

22) The frequency of the E&M waves used for cellular phones is about 2000 MHz. This frequency is (a) 2x10^9 Hz (b) 2x10^6 Hz (c) 2x10^12 Hz.
23) The wavelength of the E&M waves used for cellular phones is (a) 15 cm (b) 6.0 in (c) 0.15 m (d) a, b, & c.

24) The wavelength of a certain red light (of course, an E&M wave) is 680 nm. Its frequency is (a) 4.4x10^14 s^-1 (b) 4.4x10^14 /s (c) 4.4x10^14 Hz (d) a, b, & c.

25) If you are solving for the frequency of an E&M wave, and you come up with f = 2.7x10^-9 /s, for example, (a) you accept the answer and think it must be correct (b) you doubt the answer, thinking that an order of 10^-9 is extremely small to be the frequency of an E&M wave (c) you may think that a charge oscillating once every 10^9 seconds is practically motionless (d) b & c.

26) The frequency of a certain violet light (of course, an E&M wave) is 7.3x10^14 /s. Its wavelength is (a) 4.1x10^-9 m (b) 41.0 nm (c) 410 nm (d) 160 nm.

27) The wavelength of a wave is 750 m in vacuum and it occurs 400,000 times per second. The wave (a) has a speed of 3.0x10^8 m/s (b) has a speed of 3.0x10^5 km/s (c) is electromagnetic, because only E&M waves can travel at that speed in vacuum (d) a, b, & c.

28) The wavelength of a wave is 1500 m in vacuum and it occurs 200,000 times per second. The wave (a) has a speed of 3.0x10^8 m/s (b) has a speed of 1.86x10^5 mi/s (c) is electromagnetic, because only E&M waves can travel at that speed in vacuum (d) a, b, & c.

29) The wavelength of a wave is 3000 m in vacuum and it occurs 100,000 times per second. The wave (a) has a speed of 3.0x10^8 m/s (b) has a speed of 3.0x10^5 km/s (c) is electromagnetic, because only E&M waves can travel at that speed in vacuum (d) a, b, & c.

30) Ultraviolet rays (of course, E&M waves) have frequencies greater than that of violet (fv = 7.5x10^14 /s). An E&M wave of frequency 9.5x10^14 /s is of course UV and not visible. It has a wavelength of (a) 3.2x10^-7 m (b) 320 nm (c) both a & b.

31) X-rays (of course, E&M waves) have frequencies greater than that of ultraviolet (fUV > 7.5x10^14 /s). An E&M wave of frequency 6.5x10^16 /s is of course of the X-ray type, and not visible. It has a wavelength of (a) 4.6x10^-9 m (b) 4.6 nm (c) both a & b.

32) Gamma rays (of course, E&M waves) have frequencies greater than that of X-rays (fX > 10^16 /s). An E&M wave of frequency 5.0x10^21 /s is of course of the gamma type, not visible, and very penetrating. Its wavelength is (a) 6.0x10^-14 m (b) 60 fm (c) a & b. Note: fm means femtometer, which is 10^-15 m.

33) In general, for E&M waves, the speed is constant (3.00x10^8 m/s in vacuum). An E&M wave of (a) lower frequency has of course a greater wavelength (b) higher frequency has of course a smaller wavelength (c) both a & b (d) neither a nor b.
<urn:uuid:c9461e33-1fb6-46bb-972a-66e1e52945c9>
4.09375
3,841
Tutorial
Science & Tech.
84.617286
Low energy transfers to the Moon were first demonstrated in 1991 by the Japanese spacecraft Hiten, as the result of a mission rescue by Edward Belbruno and James Miller. The transfer used by Hiten was a revolutionary new type of low energy transfer to the Moon derived from Weak Stability Boundary Theory (see Capture Dynamics and Chaotic Motions in Celestial Mechanics). Unlike the standard three-day transfer to the Moon, this low energy route does not require large rocket engines to slow down to be captured into lunar orbit. It does, however, take three months instead of three days. Low energy transfers follow special pathways in space, sometimes referred to as the Interplanetary Transport Network. Other missions that have used low energy transfers are SMART-1, of the European Space Agency, and Genesis, of NASA.
<urn:uuid:b48086ef-353b-4a29-bb39-db642d839a0d>
3.640625
158
Knowledge Article
Science & Tech.
36.86
The Doppler Effect allows the distance between a satellite transmitting from space and a radio receiver on the ground to be measured by observing how the frequency received from the satellite transmitter changes as the satellite approaches, passes overhead, and moves away. The received frequency can be displayed on a Doppler curve, a graph relating frequency to time.

As a satellite approaches and passes overhead, the received frequency falls. However, the rate of change in frequency is not constant. At first, the frequency changes slowly. Then the change increases to its greatest rate at the time of closest approach. After passing overhead, the rate of frequency change slows as the satellite moves away.

- As a satellite approaches, the frequency of its transmitter appears to be higher than the actual transmission frequency.
- Overhead is the time of closest approach, when the transmitted frequency and the received frequency are the same.
- As a satellite moves away, the frequency appears to be lower than the actual transmission frequency.
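The shape of the Doppler curve can be reproduced with a simple calculation. The Java sketch below models an idealized straight-line pass; all numbers in it (transmitter frequency, satellite speed, closest-approach distance) are assumed for illustration, not taken from any particular satellite. The printed shift changes fastest near the time of closest approach, as described above.

```java
/** Sketch of a Doppler curve for an idealized straight-line satellite
 *  pass. The received frequency is f0 * (1 - vr/c), where vr is the
 *  component of the satellite's velocity along the line of sight
 *  (positive when receding). */
public class DopplerCurve {
    public static void main(String[] args) {
        double c  = 3.0e8;     // speed of light, m/s
        double f0 = 145.8e6;   // transmitter frequency, Hz (assumed)
        double v  = 7500;      // satellite speed, m/s (typical low orbit)
        double h  = 500e3;     // closest-approach distance, m (assumed)

        // t = 0 is the time of closest approach; x = v*t is the
        // along-track offset from the overhead point.
        for (double t = -300; t <= 300; t += 60) {
            double x = v * t;
            double range = Math.sqrt(x * x + h * h);
            double vr = v * x / range;              // radial velocity
            double fReceived = f0 * (1 - vr / c);
            System.out.printf("t = %5.0f s  shift = %+8.1f Hz%n",
                              t, fReceived - f0);
        }
    }
}
```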
<urn:uuid:9bb3629e-d6b3-4f2d-a922-45f4b5d3da20>
4.21875
219
Knowledge Article
Science & Tech.
32.177385
The following table gives information about the ORBITS of the NINE planets of the SOLAR SYSTEM. The force of GRAVITY makes the planets move in orbits that are nearly circular around the SUN.

  Planet     Distance from Sun   Mass of Planet   Time for 1 Orbit of Sun
             (million km)        (x10^22 kg)      (days)
  Mercury        58                  33.0             88.0
  Venus         108                 487               224.7
  Earth         150                 598               365.2
  Mars          228                  64.2             687.0
  Jupiter       778             190,000              4332
  Saturn      1,429              56,900             10760
  Uranus      2,871               8,690             30700
  Neptune     4,504              10,280             60200
  Pluto       5,913                   1.49          90600

1. a. Plot a graph of the DISTANCE from the SUN (on the x-axis) against the TIME for one ORBIT (on the y-axis) for each of the nine PLANETS. Join the points together to produce a SMOOTH curve.
b. Write down what you can CONCLUDE from the graph.
c. It was once thought that a tenth planet should have an orbit of 420 million km from the Sun. Use your graph to predict how long a YEAR would be on this planet.

2. a. Calculate the SPEED of each of the nine planets using the formula

   SPEED = DISTANCE TRAVELLED / TIME

   1) Distance travelled around an orbit = 2πr
   2) r = distance from the Sun

The best units of speed to use are millions of km per day. Put these in a table:

  THE SPEEDS OF THE PLANETS
  Planet     Distance from Sun    Speed of planet
             (millions of km)     (million km day^-1)
  Mercury        58                   4.14
  Venus         108                   etc.

b. Write down what you can CONCLUDE from this table.
c. Newton's Law of Gravitation states that the force of gravity becomes SMALLER the further the planet is from the Sun. Use this fact and what you know about circular motion to explain why Mercury has such a HIGH speed and why Pluto has such a LOW speed.

THE MOONS OF JUPITER

  Name of     Distance from Jupiter   Mass of Satellite   Time for 1 Orbit
  Satellite   (million km)            (x10^18 kg)         of Jupiter (days)
  Sinope         23.70                    0.078               758.00
  Carme          22.60                    0.096               692.50
  Elara          11.74                    0.78                259.65
  Himalia        11.48                    9.56                250.57
  Callisto        1.88               108,000                   16.69
  Ganymede        1.07               148,000                    7.15
  Europa          0.67                48,000                    3.55
  Io              0.42                89,400                    1.77
  Amalthea        0.18                    7.17                  0.50

3. The above nine moons of JUPITER form a mini solar system.
a. Plot a graph of DISTANCE from Jupiter (on the x-axis) against TIME for one orbit (on the y-axis) for each of the SATELLITES in the above table. Join these points to produce a SMOOTH curve.
b. In what way is the curve SIMILAR to the curve for the PLANETS?
c. In what way is the curve DIFFERENT from the curve for the planets?
d. The radius of SINOPE's orbit around Jupiter is VERY ROUGHLY the same as the radius of Mercury's orbit around the Sun. Explain why Sinope takes so long to complete one orbit whereas Mercury only takes 88 days.

4. Surprisingly, the MASS of a planet does not affect its orbit at all. You should DEMONSTRATE this by plotting a graph of DISTANCE from the Sun (on the x-axis) against the MASS of the planet (on the y-axis). The points should be in a RANDOM scatter, showing that there is NO relation between the two VARIABLES.

5. JOHANN BODE noticed that if you measure the orbital RADII of the planets in the Solar System in AU (1 AU is the mean distance from the Earth to the Sun), then the radii of the orbits seem to obey a MATHEMATICAL rule called BODE'S LAW. Take the numbers 0, 3, 6, 12, ..., doubling at each step. Add 4 to each number, then divide by 10. This is the predicted radius in AU.
a.
Check Bode's Law by constructing the following table of actual radius (in AU) and radius predicted by Bode's Law for all nine planets:

  THE ORBITAL RADII OF THE PLANETS
  Planet     Orbital Radius   Radius   Radius from
             (million km)     (AU)     Bode's Law
  Mercury        58            0.39      0.40
  Venus         108            0.72      0.70
  Earth         150            1.0       etc.

b. Using his Law, Bode predicted there should be a "missing" planet. What should be the ORBITAL RADIUS of this planet? What would be the TIME it takes to orbit the Sun? (So convinced were astronomers that this missing planet must exist that it was given the name CERES. In fact there are just ASTEROIDS at this orbital radius.)
c. If Bode's Law were true, it would have to apply to other planetary systems, such as the moons of Saturn and Jupiter. Try to find a RULE that would fit the ORBITAL RADII of the moons (the rule R = (n + 10) / 10, where n = 0, 3, 6, ..., fits the first few):

  THE ORBITAL RADII OF THE MOONS OF SATURN
  Moon of      Orbital Radius   Orbital Radius   Radius from
  Saturn       (thousand km)    (Scaled)         Rule
  Mimas            186             1.0
  Enceladus        238             1.28
  Tethys           295             1.59
  Dione            377
  Rhea             527
  Titan          1,222
  Hyperion       1,481
  Iapetus        3,561
  Phoebe        12,952

(N.B. Saturn has at least 25 moons, but most are no more than lumps of rock. Those above are over 200 km across.)

CLEARLY, Bode's Law is NOT true in general, and it is just COINCIDENCE that the planets' orbital radii fall close to his numbers.
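For checking answers numerically, the two patterns in this worksheet — the distance-period relation behind questions 1 and 2, and Bode's rule in question 5 — can be computed directly. The Java sketch below uses the table values given above; it is an illustration added for convenience, not part of the original worksheet.

```java
/** Checks two patterns from the worksheet: (1) Kepler's third law,
 *  T^2 proportional to r^3, so T / r^1.5 should be roughly constant;
 *  (2) orbital speeds 2*pi*r / T; (3) Bode's-law predicted radii. */
public class OrbitPatterns {
    public static void main(String[] args) {
        String[] names = {"Mercury", "Venus", "Earth", "Mars", "Jupiter",
                          "Saturn", "Uranus", "Neptune", "Pluto"};
        double[] rMkm  = {58, 108, 150, 228, 778, 1429, 2871, 4504, 5913};
        double[] tDays = {88.0, 224.7, 365.2, 687.0, 4332, 10760,
                          30700, 60200, 90600};

        // Kepler check: T / r^1.5 is nearly the same for every planet.
        for (int i = 0; i < names.length; i++)
            System.out.printf("%-8s T/r^1.5 = %.4f%n",
                              names[i], tDays[i] / Math.pow(rMkm[i], 1.5));

        // Question 2a: orbital speed = 2*pi*r / T (million km per day).
        for (int i = 0; i < names.length; i++)
            System.out.printf("%-8s speed = %.2f million km/day%n",
                              names[i], 2 * Math.PI * rMkm[i] / tDays[i]);

        // Bode's law: take 0, 3, 6, 12, ..., add 4, divide by 10 (AU).
        int n = 0;
        for (int i = 0; i < 9; i++) {
            System.out.printf("Bode radius %d: %.2f AU%n", i + 1, (n + 4) / 10.0);
            n = (n == 0) ? 3 : n * 2; // 0, 3, 6, 12, 24, ...
        }
    }
}
```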
<urn:uuid:2cbcd8e0-49a2-4157-9b75-9aa0bbb07d28>
3.6875
1,340
Tutorial
Science & Tech.
92.2906
12/17/2005 9:08:16 PM

Gravitational Propulsion: Simple

I've read many of the messages posted and decided I would add some comments and, more importantly, set myself (and hopefully some of you) to finding the answer to some fairly simple questions that will reduce the problem to one of required energy, energy generation, and then materials and methods, I think along the lines of what many of you are discussing.

Space is big, particularly if you're not moving. It gets shorter if you travel fast relative to a stationary observer, but not short enough. The speed of light is constant in all reference frames. Both these postulates appear to us to be true. Fermilab, CERN, and even your watch on a 747 (if you fly long enough) demonstrate that moving clocks tick slower, lengths contract, particles last longer if moving relative to stationary observers, etc.

Taking one small step further from the Special Theory, where things get shorter or longer depending on who's observing what, we need to consider the path objects travel on in "space," specifically space-time, and what affects these paths.

Consider a sphere. Imagine that we all live in the (very thin) skin of this sphere. The sphere is expanding, which is why everything in space looks like it's traveling away from us, as caused by the initial energy of the universe, the Big Bang. The degree of curvature of this sphere is determined by the amount of energy. We can imagine the curvature of the sphere to be a three-dimensional shadow in 4 dimensions, adding time, determined geometrically by a metric on R^4 (ds = Sqrt[-dt^2 + dx^2 + dy^2 + dz^2]), some mass, and some imposed rules to derive the shape.

Carl Sagan described space-time, the curvature of our sphere, and its influence on moving massive bodies best, I think, with a bowling ball placed at rest in the center of a trampoline. The bowling ball represents mass, which has the effect of bending space-time (the trampoline), which in turn defines the path of a rolling tennis ball released at the edge of the trampoline. The tennis ball orbits the bowling ball as defined by the curvature of the trampoline. In the same way our planets orbit the Sun, as defined by the Sun's mass and its effect on the curvature of the space-time it exists in locally, relative to our planets and passing objects, including photons/light.

Kepler's equations of motion for massive bodies around the Sun can be derived exactly from the potential (GMm/r) that describes the force between the Sun and planets. Equivalently, the mass of the Sun can be translated into the curvature of the fabric of space-time (of our trampoline). A fairly straightforward geometry calculation (the motion of our tennis ball) to derive the geodesics on this curved manifold will yield the same equations of motion of the masses around the Sun. The result is pretty cool, as the equations of motion derived in terms of curvature and geometry in this way can be represented by a single Christoffel symbol... Einstein's notation for differential equations, a notation he admitted was his best contribution to science.

Anyway, the important point to recognize is that mass bends space-time. Mass is equivalent to energy. If we treat space-time as an object, we can derive equations of motion along the paths that space-time, and the bending that mass/energy defines, lay out.
This is done most dramatically in the study of black holes and the orbits of matter and radiation around them, where the entire mass of a star is equated to energy and curvature from a point source the size of a pea... i.e., the curvature is so great that not even light can escape beyond a radius from the source defined by the star's original mass before collapse... a very big bowling ball.

In deep space, not perturbed by local variations of mass like planets, black holes, dark matter, etc., light travels along lines that are defined by the general curvature of our universe... the 3 deg K background remnant of the Big Bang. Locally, where massive objects are present, light has been observed to bend. We now observe not only light bending around our own Sun, but even moving black holes, detected by the disruption of the observed positions of objects when the hole passes between us and the source.

Back to the question of travel. Obviously the question isn't how we go faster, but how we shorten the path. Light can be made to travel as fast as we want; the speed of light only defines the speed limit in the curved space that we produce. The bigger the bowling ball (energy), the more curvature we produce. Here are some questions that we need to answer:

Can R^4, our best/favorite defined space-time manifold (pick a metric), be bent in such a way as to bring two points closer together... say, between here and Alpha Centauri?

Assuming that our physicist/geometrist friends come back with a solution, and they likely will, we will need the solution as a function of curvature, or bending, at least locally relative to our ship, that will create the mapped curvature solution from a to b. Once we have the curvature required to follow our "light" path, we can equate the curvature immediately to energy. That energy will have to be produced and focused, taking account of any local curvature produced in proximity (open space, or in proximity to another curvature source such as a planet).

Now that we have the required energy solved, we look for a source. Be aware that the mass (energy) of a star that can collapse to the size of a pea to create the amount of curvature needed for a black hole is enormous... We're going to need a lot of energy. Matter-antimatter (we make antimatter all the time) works, as it's 100% efficient, but we're going to need something massive that can produce it on a small relative scale. Any ideas?

A materials problem. With the energy problem solved, we now need to manipulate it, generating a field and phase necessary to control direction... a bad term, as we basically don't move anywhere... another problem for our geometrists.
<urn:uuid:326cfa66-ea0f-4eb1-99ae-134d01e997c3>
2.96875
1,320
Comment Section
Science & Tech.
54.988806
Tuesday, March 16, 2010 Mad science: Can you hear me now? The Guardian reports on an astronomer who claims that one of the unexpected consequences of switching to digital broadcast technology is that it makes the Earth harder to detect from space. In the past, TV and radio programmes were broadcast from huge ground stations that transmitted signals at thousands of watts. These could be picked up relatively easily across the depths of space, astronomers calculated. Now, most TV and radio programmes are transmitted from satellites that typically use only 75 watts and have aerials pointing toward Earth, rather than into space. "For good measure, in America we have switched from analogue to digital broadcasting and you are going to do the same in Britain very soon," Drake added. "When you do that, your transmissions will become four times fainter because digital uses less power." "Very soon we will become undetectable," he said. In short, in space no one will hear us at all. Drake also notes that this goes both ways. Assuming that there's an advanced alien race out there, it is possible that they also let their communications systems develop for maximum efficiency, that is to say, max results for minimum energy expenditure. If they did, then they'd be as invisible to us as we are becoming to them. What is true for humans would probably also be true for aliens, who may already have moved to much more efficient methods of TV and radio broadcasting. Trying to find ET from their favourite shows was going to be harder than we thought, Drake said. Of course, this is only assuming that we want to be found. Perhaps a little cosmic camouflage is a good thing.
<urn:uuid:302a4c9b-cf8b-409c-aab7-e83edab0f827>
2.90625
342
Personal Blog
Science & Tech.
51.794988
Scientific name: Cupido minimus

Our smallest butterfly. Upperwings brown with blue dusting. Undersides pale blue with a row of black spots.

Our smallest resident butterfly is easily overlooked, partly because of its size and dusky colouring, but partly because it is often confined to small patches of sheltered grassland where its sole foodplant, Kidney Vetch, is found. Males set up territories in sheltered positions, perching on tall grass or scrub. Once mated, the females disperse to lay eggs, but both sexes may be found from late afternoon onwards in communal roosts, facing head down in long grass. The butterfly tends to live in small colonies and is declining in most areas. Found throughout Britain and Ireland but rare and localised.

Size and Family
- Family – Blues
- Small Sized
- Wing Span Range (male to female) - 20-30mm

- Listed as a Section 41 species of principal importance under the NERC Act in England
- Listed as a Section 42 species of principal importance under the NERC Act in Wales
- Classified as a Northern Ireland Priority Species by the NIEA
- UK BAP status: Priority species
- Butterfly Conservation priority: Medium
- European Status: Not threatened
- Protected in Great Britain for sale only

The sole foodplant is Kidney Vetch (Anthyllis vulneraria). The larvae live only in the flower heads, where they feed on the developing anthers and seed.

- Countries – England, Scotland, Ireland and Wales
- Mainly south-central England, some eastern Scottish coasts, and both the east and west coasts of Ireland
- Distribution Trend Since 1970s = Britain: -38%

Rare but found on sheltered, warm grassland habitats which have Kidney Vetch. Habitats include chalk and limestone grassland, coastal grasslands and dunes, and man-made habitats such as quarries, gravel pits, road embankments and disused railways.
<urn:uuid:a8d86319-6e56-4940-86c1-c3bbb9c371c4>
3.234375
414
Knowledge Article
Science & Tech.
37.146432
The Byte Code Engineering Library (Apache Commons BCEL™) is intended to give users a convenient way to analyze, create, and manipulate (binary) Java class files (those ending with .class). Classes are represented by objects which contain all the symbolic information of the given class: methods, fields, and byte code instructions, in particular.

Such objects can be read from an existing file, transformed by a program (e.g., by a class loader at run-time), and written back to a file. An even more interesting application is the creation of classes from scratch at run-time.

The Byte Code Engineering Library (BCEL) may also be useful if you want to learn about the Java Virtual Machine (JVM) and the format of Java .class files. BCEL contains a byte code verifier named JustIce, which usually gives you much better information about what's wrong with your code than the standard JVM message.

BCEL is already being used successfully in several projects such as compilers, optimizers, obfuscators, code generators, and analysis tools. Unfortunately there hasn't been much development going on over the past few years. Feel free to help out, or you might want to have a look at the ASM project at ObjectWeb.
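As a minimal illustration of the "read from an existing file" side of the library described above, the sketch below parses a compiled class file and lists its methods. "Hello.class" is a placeholder path; substitute any compiled class.

```java
import org.apache.bcel.classfile.ClassParser;
import org.apache.bcel.classfile.JavaClass;
import org.apache.bcel.classfile.Method;

/** Parse a .class file with BCEL and print its name and methods. */
public class ListMethods {
    public static void main(String[] args) throws Exception {
        // ClassParser reads the binary class file into a JavaClass
        // object holding all of its symbolic information.
        JavaClass clazz = new ClassParser("Hello.class").parse();
        System.out.println("Class: " + clazz.getClassName());
        for (Method m : clazz.getMethods()) {
            System.out.println("  " + m.getName() + " " + m.getSignature());
        }
    }
}
```

From here, the generation side of the API (ClassGen and friends) can be used to modify the parsed class and write it back out, as the text describes.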
<urn:uuid:6eb10ede-c9dc-4df9-a9ab-424090d0d27b>
2.984375
257
Knowledge Article
Software Dev.
45.316927
The basic idea is quite simple: suppose that for a metric trait, two populations A and B have mean values a and b, and that a third population C is formed by mixture between A and B. Unlike allele frequencies, where the admixed population's frequency will be between a and b immediately post-admixture, anthropometric traits may respond in unexpected ways to admixture (e.g., heterosis might cause first-generation offspring to exceed both their parents in height, rather than exhibit an intermediate value). I will leave the justification of the hypothesis that "mixed-origin offspring will possess intermediate metric traits" to the physical anthropologists, who may have gathered data on such things, and, for the present, I will take it for granted.

So, assuming that c, the mean trait in the mixed population, is between a and b, we can easily see that (c-a)(c-b) will be negative, and hence so will be the correlation coefficient (over many traits) between C-A and C-B, where by C-A I denote the k-long vector of differences in mean trait values between populations C and A.

Going back to my analysis of Howells' dataset, I calculated population means for 57 traits over the NORMALIZED_DATA array of modern populations (in which sexual dimorphism has been removed and traits of different scale have been normalized in standard deviation units), and calculated 30*choose(29,2) correlations for each of 30 populations, expressed as a mixture of any pair of the remaining 29. I list below the top 20 anti-correlations (third population as a mixture of the first two), a few of which are discussed further down:

BURIAT ANDAMAN PHILLIPI -0.54005191575771
EGYPT BURIAT NORSE -0.490018084440697
ANDAMAN ANYANG HAINAN -0.48323680182295
BURIAT ANDAMAN HAINAN -0.480939028739347
EGYPT BURIAT ZALAVAR -0.476445836100052
ANDAMAN ANYANG PHILLIPI -0.457902384166767
DOGON BURIAT PHILLIPI -0.416551851781419
BERG EASTER_I ZALAVAR -0.378996437433417
AUSTRALI BURIAT ARIKARA -0.375898166338775
BURIAT EASTER_I MOKAPU -0.37169703838378
ESKIMO ANDAMAN S_JAPAN -0.366611599944932
ESKIMO PERU N_JAPAN -0.354535077363928
TOLAI BURIAT ARIKARA -0.348110323746154
BERG EGYPT ZALAVAR -0.344843098962355
DOGON ESKIMO GUAM -0.344577928128792
TOLAI BURIAT GUAM -0.338804214799388
ESKIMO PHILLIPI GUAM -0.336537918547276
DOGON BURIAT HAINAN -0.332635954428392
TASMANIA BURIAT ARIKARA -0.331301837598433
ESKIMO PERU S_JAPAN -0.330302035072489

Some interesting ones:
- Philippines as Buriat+Andaman; this makes sense if the Philippines is the result of admixture between an "East Asian" and a "Negrito" population
- Norse as Egypt+Buriat; the Howells "Egypt" sample is "Mediterranean" in the classical sense. Perhaps this involves the same "East Eurasian"-like signal of admixture detected by genetic methods? A similar signal also occurs for Zalavar (from Hungary)
- Hainan as Andaman+Anyang; south Chinese as Neolithic Chinese + "Negrito"-like old south Chinese?
- Arikara as Buriat+Australian; admixture between "Australoid" Paleo-Indians and "Mongoloid" ones? Or between 1st-wave Indians and later ones (sensu Reich et al. 2012)?
- Guam as Tolai+Buriat; admixture between "Papuan"-like and East Asian-like people in Polynesia?

And there are some difficult-to-interpret cases (e.g., Philippines as Buriat+Dogon) which may point to limitations of the method; for example, the Dogon may act as a stand-in for the "equatorial"-like physique of the true "Andaman"-like mixing element.
Presumably such limitations can be overcome by limiting the analysis to "selectively neutral" traits, rather than the whole suite of 57 Howells variables used here. I certainly think that the idea ought to be investigated further: it might be redundant when genetic data are available, but may prove useful in the analysis of admixture when such data do not exist, e.g., in anthropological data of prehistoric specimens from hot climates where archaeogenetic evidence may never materialize.
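The statistic described above is easy to reproduce. The Java sketch below builds a toy mixed population C as the midpoint of two made-up trait-mean vectors A and B (plus a little noise) and computes the Pearson correlation over traits between C-A and C-B; for a true mixture it comes out strongly negative, as the argument predicts. All numbers are hypothetical, not Howells data.

```java
import java.util.Random;

/** Toy demonstration of the admixture signal: cor(C-A, C-B) over
 *  traits is strongly negative when C is intermediate between A and B. */
public class MixtureSignal {
    static double correlation(double[] x, double[] y) {
        int n = x.length;
        double mx = 0, my = 0;
        for (int i = 0; i < n; i++) { mx += x[i]; my += y[i]; }
        mx /= n; my /= n;
        double sxy = 0, sxx = 0, syy = 0;
        for (int i = 0; i < n; i++) {
            sxy += (x[i] - mx) * (y[i] - my);
            sxx += (x[i] - mx) * (x[i] - mx);
            syy += (y[i] - my) * (y[i] - my);
        }
        return sxy / Math.sqrt(sxx * syy);
    }

    public static void main(String[] args) {
        // Hypothetical normalized trait means for populations A and B.
        double[] a = {0.8, -1.2, 0.3, 1.5, -0.7};
        double[] b = {-0.5, 0.9, -1.1, 0.2, 1.0};
        Random rng = new Random(42);

        // C = 50/50 mixture of A and B with a little noise.
        double[] ca = new double[a.length], cb = new double[a.length];
        for (int i = 0; i < a.length; i++) {
            double c = 0.5 * (a[i] + b[i]) + 0.05 * rng.nextGaussian();
            ca[i] = c - a[i];
            cb[i] = c - b[i];
        }
        System.out.println("cor(C-A, C-B) = " + correlation(ca, cb));
    }
}
```

In the real analysis this correlation would be computed for every (A, B, C) triple over the 57 normalized traits, and the strongest anti-correlations flagged as candidate mixtures.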
<urn:uuid:fab88e48-1bf5-4d63-99b2-3ab36dc8c90a>
3
1,116
Comment Section
Science & Tech.
45.632101
Magnetospheric Multiscale Mission

The Magnetospheric Multiscale Mission (MMS) is a planned NASA unmanned space mission to study the Earth's magnetosphere using four identical spacecraft flying in a tetrahedral formation. It is designed to gather information about the microphysics of magnetic reconnection, energetic particle acceleration, and turbulence, processes that occur in many astrophysical plasmas.

The mission builds upon the successes of the ESA Cluster mission, but will surpass it in both spatial and temporal resolution, allowing for the first time measurements of the critical electron diffusion region, the site where magnetic reconnection occurs. Its orbit is optimized to spend extended periods in locations where reconnection is known to occur: at the dayside magnetopause—the place where the pressure from the solar wind and the planet's magnetic field are equal—and in the magnetotail—which is formed by pressure from the solar wind on a planet's magnetosphere and which can extend great distances away from its originating planet.

Magnetic reconnection in Earth's magnetosphere is one of the mechanisms responsible for the aurora, and it is important to the science of controlled nuclear fusion because it is one mechanism preventing magnetic confinement of the fusion fuel. The study of turbulence in outer space involves the measurement of motions of matter in stellar atmospheres, like that of the Sun, and magnetic reconnection is a phenomenon in which energy is efficiently transferred from a magnetic field to charged particles.

Personnel and purpose

The principal investigator is James L. Burch of Southwest Research Institute, assisted by an international team of investigators, both instrument leads and theory and modeling experts. The Project Scientist is Thomas E. Moore of Goddard Space Flight Center. Education and public outreach is a key aspect of the mission, with student activities, data sonification, and planetarium shows being developed.

The mission was selected for support by NASA in 2005, with a projected launch date of 2014. System engineering, spacecraft bus design, and integration and test will be done by Goddard Space Flight Center in Maryland. Instrumentation is being improved, with extensive experience brought in from other missions, such as IMAGE, Cluster, and Cassini. In June 2009, MMS was allowed to proceed to Phase C after passing its Preliminary Design Review (PDR), and the mission passed its Critical Design Review in September 2010. The launch is scheduled for October 2014, according to NASA. The craft will be carried to orbit by an Atlas V 421 rocket.

Formation flying

In order to collect the desired science data, the four-satellite MMS constellation must maintain a tetrahedral formation through a defined region of interest in a highly elliptical orbit. The formation will be maintained through the use of a next-generation space-rated GPS receiver, Navigator, to provide orbit knowledge, and regular formation maintenance maneuvers.

References
- Lewis, W.S. "MMS-SMART: Quick Facts". Southwest Research Institute. Retrieved 5 August 2009.
- Vaivads, Andris; Retinò, Alessandro; André, Mats (12 July 2006). "Microphysics of Magnetic Reconnection". Space Science Reviews. Springer Netherlands. doi:10.1007/s11214-006-7019-3. ISSN 0038-6308. Retrieved 5 August 2009.
- "The SMART Team". Mms.space.swri.edu. Retrieved 2012-09-28.
- "Q&A: Missions, Meetings, and the Radial Tire Model of the Magnetosphere". Nasa.gov. 2010-10-01. Retrieved 2012-09-28. - "NASA's Magnetospheric Mission Passes Major Milestone". Nasa.gov. 2010-09-03. Retrieved 2012-09-28. - Lewis, W.S. "MMS-SMART: Quick Facts". Southwest Research Institute. Retrieved 17 April 2013. - "United Launch Alliance Atlas V Awarded Four NASA Rocket Launch Missions" (Press release). United Launch Alliance. 16 March 2009. Retrieved 5 August 2009. - Moldwin, Mark, An Introduction to Space Weather,Cambridge University Press, ISBN 978-0-521-86149-6, 2007. - SWRI to Lead MMS Mission - Curtis, Steve, Magnetospheric Multiscale Mission: Cross-scale Exploration of Complexity in the Magnetosphere, Ch 8 of Nonequilibrium Phenomena in Plasmas, ed. P. Kaw, Springer, ISBN 978-1-4020-3108-3, 2005. - National Academy of Sciences, "The Sun to the Earth - And Beyond", ISBN 978-0-309-08972-2, 2003. online version available. - NASA 2006 Strategic Plan - NASA 2007 Science Plan - MMS Site at Goddard Space Flight Center - MMS pages at SWRI - Educational and public outreach site at Rice University - Podcast site at GFSC - MMS YouTube site - MMS Page at NASA Science Mission Directorate - Space Math Problems
<urn:uuid:21a927ad-caba-480a-b5be-8b90db21a297>
3.265625
1,062
Knowledge Article
Science & Tech.
49.262874
XIV. PROGRESS IN ELECTRICITY FROM GILBERT AND VON GUERICKE TO FRANKLIN

We have seen how Gilbert, by his experiments with magnets, gave an impetus to the study of magnetism and electricity. Gilbert himself demonstrated some facts and advanced some theories, but the system of general laws was to come later. To this end the discovery of electrical repulsion, as well as attraction, by Von Guericke, with his sulphur ball, was a step forward; but something like a century passed after Gilbert's beginning before anything of much importance was done in the field of electricity.

In 1705, however, Francis Hauksbee began a series of experiments that resulted in some startling demonstrations. For many years it had been observed that a peculiar light was seen sometimes in the mercurial barometer, but Hauksbee and the other scientific investigators supposed the radiance to be due to the mercury in a vacuum, brought about, perhaps, by some agitation. That this light might have any connection with electricity did not, at first, occur to Hauksbee any more than it had to his predecessors.

The problem that interested him was whether the vacuum in the tube of the barometer was essential to the light; and in experimenting to determine this, he invented his "mercurial fountain." Having exhausted the air in a receiver containing some mercury, he found that by allowing air to rush through the mercury the metal became a jet thrown in all directions against the sides of the vessel, making a great, flaming shower, "like flashes of lightning," as he said. But it seemed to him that there was a difference between this light and the glow noted in the barometer. This was a bright light, whereas the barometer light was only a glow.

Pondering over this, Hauksbee tried various experiments, revolving pieces of amber, flint, steel, and other substances in his exhausted air-pump receiver, with negative, or unsatisfactory, results. Finally, it occurred to him to revolve an exhausted glass tube itself. Mounting such a globe of glass on an axis so that it could be revolved rapidly by a belt running on a large wheel, he found that by holding his fingers against the whirling globe a purplish glow appeared, giving sufficient light so that coarse print could be read, and the walls of a dark room sensibly lightened several feet away. As air was admitted to the globe the light gradually diminished, and it seemed to him that this diminished glow was very similar in appearance to the pale light seen in the mercurial barometer.

Could it be that it was the glass, and not the mercury, that caused it? Going to a barometer he proceeded to rub the glass above the column of mercury over the vacuum, without disturbing the mercury, when, to his astonishment, the same faint light, to all appearances identical with the glow seen in the whirling globe, was produced.

Turning these demonstrations over in his mind, he recalled the well-known fact that rubbed glass attracted bits of paper, leaf-brass, and other light substances, and that this phenomenon was supposed to be electrical. This led him finally to determine the hitherto unsuspected fact that the glow in the barometer was electrical, as was also the glow seen in his whirling globe. Continuing his investigations, he soon discovered that solid glass rods when rubbed produced the same effects as the tube.
By mere chance, happening to hold a rubbed tube to his cheek, he felt the effect of electricity upon the skin like "a number of fine, limber hairs," and this suggested to him that, since the mysterious manifestation was so plain, it could be made to show its effects upon various substances. Suspending some woollen threads over the whirling glass cylinder, he found that as soon as he touched the glass with his hands the threads, which were waved about by the wind of the revolution, suddenly straightened themselves in a peculiar manner, and stood in a radial position, pointing to the axis of the cylinder.

Encouraged by these successes, he continued his experiments with breathless expectancy, and soon made another important discovery, that of "induction," although the real significance of this discovery was not appreciated by him or, for that matter, by any one else for several generations following. This discovery was made by placing two revolving cylinders within an inch of each other, one with the air exhausted and the other unexhausted. Placing his hand on the unexhausted tube caused the light to appear not only upon it, but on the other tube as well. A little later he discovered that it is not necessary to whirl the exhausted tube to produce this effect, but simply to place it in close proximity to the other whirling cylinder.

These demonstrations of Hauksbee attracted wide attention and gave an impetus to investigators in the field of electricity; but still no great advance was made for something like a quarter of a century. Possibly the energies of the scientists were exhausted for the moment in exploring the new fields thrown open to investigation by the colossal work of Newton.

THE EXPERIMENTS OF STEPHEN GRAY

In 1729 Stephen Gray (died in 1736), an eccentric and irascible old pensioner of the Charter House in London, undertook some investigations along lines similar to those of Hauksbee. While experimenting with a glass tube for producing electricity, as Hauksbee had done, he noticed that the corks with which he had stopped the ends of the tube to exclude the dust seemed to attract bits of paper and leaf-brass, as well as the glass itself. He surmised at once that this mysterious electricity, or "virtue," as it was called, might be transmitted through other substances as it seemed to be through glass.
<urn:uuid:4edb34b1-c9bb-4232-b08a-ef9a2cd2862b>
3.390625
1,188
Knowledge Article
Science & Tech.
32.05179
The cold start to 2010 can be partially explained by the dramatic shift in the mid-latitude jet stream configuration associated with the Arctic Oscillation (AO) and the North Atlantic Oscillation (NAO). The switch began on New Year's Day as an arctic air mass entered the region and started what would be the coldest 2 weeks in climate history across the forecast area. Most of the climate sites across the area set new all-time records for consecutive days with minimum temperatures at 32 degrees or less.

- RECORD BROKEN: Consecutive Days ≤ 32°F
  - Old Record: 8 days (January 17-24, 1977)
  - New Record: 12 days (January 2-14, 2010)

The cold start to 2010 not only set records in the first 2 weeks of January, it ended up being the coldest start to the first four months of the calendar year (January through April) of all time.

- RECORD BROKEN: January to April 2010 Average Temperature
  - Old Record: 55.6°F in 1983
  - New Record: 55.2°F (4.1°F below normal)

The weather turned sharply warmer in May, as it usually does in this region, due to a large ridge of high pressure that extended from the western Atlantic Ocean across the southeast United States. This pattern remained in place over the entire summer season from May through September and caused one of the longest-lasting heat waves on record across the region. Almost all of the climate sites in southeast Georgia and northeast Florida set new all-time consecutive-day streaks of maximum temperatures greater than or equal to 90 degrees.

- RECORD BROKEN: Consecutive # of Days Max Temp ≥ 90°F
  - Old Record: 44 days (Jul-Aug 1992)
  - New Record: 50 days

- RECORD BROKEN: May to September 2010 Average Temperature
  - Old Record: 81.2°F
  - New Record: 81.4°F (2.9°F above normal) in 2010

The dry weather continued into December, but in addition to the severe drought conditions came a return to a negative phase of the NAO and AO, which allowed early season arctic air masses to plunge into the region. This will likely result in one of the coldest Decembers on record, as average temperatures will run almost 10 degrees below normal.

- RECORD BROKEN: Total # of Freezes
  - Old Record: 38 in 1977
  - New Record: 43 in 2010

- RECORD BROKEN: # of December Freezes
  - Old Record: 12 in 2000
  - New Record: 18 in 2010

2010 Average Temperature (Departure from Normal): Jacksonville, FL (JAX) 1.0°F below normal [3rd coldest year on record]

Given the cold latter portion of the 2009-2010 winter and the near record cold we have seen in December, it is tempting to conclude that our area may be in for a brutally cold winter…but that is not necessarily the case. Most winter seasons see pattern shifts at some point, and recent events bear that out…December of 2009 was actually 1.4 degrees warmer than normal in Jacksonville, with 10 days reaching 70 degrees or higher. Despite the early season warmth, the rest of the winter turned abruptly colder, as noted in Table 2. A similar turnaround was noted in the winter of 1989-1990…after a record cold December averaging 7.7 degrees below normal, January and February turned almost balmy, averaging 4.4 and 6.5 degrees above normal, respectively. The bottom line is that what happens early in a season is not necessarily an indicator of what the rest of the season will be like.
<urn:uuid:9ddfc2b8-e151-4567-a3d0-cdd9067f9a15>
3.28125
789
Personal Blog
Science & Tech.
57.093778
4.4. A Mid-Infrared Look Within Galaxies

ISO-CAM CVF studies between 5 and 17 µm are turning out to be powerful diagnostics of the radiation field within the disks of nearby galaxies, allowing us to disentangle the variations in heating intensity and hardness of interstellar radiation. The approach is to relate the intensity to the shape of the continuum, and the hardness to the ratios of ionic fine-structure lines (Tran 1998; Contursi 1998). See also the overview on ISOCAM studies of nearby galaxies by Vigroux et al. 1999, and the studies of NGC 891 by Le Coupanec et al. (1999) and by Mattila et al. (1999). Such studies are valuable in establishing the local relation between mid-infrared emission and the star formation intensity, thereby guiding the interpretation of the global fluxes. The ISOCAM images of galaxies show dust emission in nuclear regions, in the inner barred disk, outlining the spiral arms, and tracing the disk out to the Holmberg radius and beyond (Malhotra et al. 1996, Sauvage et al. 1996, Vigroux 1997, Smith 1998, Roussel et al. 1999, Dale et al. 2000b). There are clear color variations within spiral galaxies, some of which have not yet found satisfactory explanations (Helou et al. 1996; Tran 1998; Vigroux et al. 1999). Dale et al. (1999) describe behavior similar to the ISO-IRAS color diagram within the disks of three star forming galaxies, IC 10, NGC 1313, and NGC 6946, where the 6.75-to-15 µm color drops precipitously as the surface brightness exceeds a certain threshold. The point of inflexion in the color curve occurs at a surface brightness which is a function of the dust column density, whereas the shape of the curve seems invariant, and may result from a rise in both the hardness and intensity of the heating radiation (Figure 8). Dale et al. discuss these findings in the context of a two-component model for the interstellar medium, suggesting that star formation intensity largely determines the mid-infrared surface brightness and colors within normal galaxy disks, whereas differences in dust column density are the primary drivers of mid-infrared surface brightness variations among galaxy disks.

Figure 8. The mid-infrared color as a function of surface brightness for three disk galaxies well resolved by ISOCAM, and smoothed so the resolution corresponds to ~ 200 pc in each case (Dale et al. 1999). All three data sets show roughly the same behavior, indicative primarily of how color and surface brightness evolve as heating intensity increases. The data are consistent with the expectation that a change in total ISM dust column density will shift the curves along the surface brightness axis: NGC 6946 does indeed have an order of magnitude greater column density than the other two galaxies in the product of HI + (2/3)H2 and metallicity.

Rouan et al. (1996), Block et al. (1997) and Smith (1998) have combined ISOCAM and Brγ images with other broad-band and line images to estimate star formation rates, ISM parameters, obscuration and dust properties. These studies again point to AFE carriers as a ubiquitous component of interstellar dust, to the likely destruction of these carriers by ionizing UV, and to dust heating in non-starburst disk galaxies being derived from both old stars and OB stars. In M31, Pagani et al. (2000) demonstrate a very close correlation between mid-infrared emission at both 6.75 and 15 µm and the distribution of neutral gas as traced by HI and CO maps.
The correlation is poorer with ionized gas as traced by Hα, and poorest with UV emission, a result which they attribute to extinction. They conclude that AFE can be excited by visible and near-IR photons, the dominant dust heating vectors in this particular case, and therefore by older disk and bulge stars. They also find evidence that in this environment the AFE carriers are amorphous carbonaceous particles formed in the envelopes of carbon stars, and have not yet been graphitized by ultraviolet radiation.
<urn:uuid:157eb025-b43c-4756-9fb3-6dba263dfa4c>
2.953125
875
Academic Writing
Science & Tech.
50.694165
Brought to you by the Mars Global Surveyor Radio Science Team

Welcome to The Daily Martian Weather Report. Contact with the Mars Global Surveyor spacecraft was lost on November 2, 2006, following a successful 10-year mission to explore and map the red planet Mars. A brief summary of the important discoveries of the MGS mission may be found here.

As one of the mission science teams, the Mars Global Surveyor Radio Science Team conducted a detailed investigation of the martian atmosphere. Results of their study are presented on this site. The precision of the atmospheric measurements is extraordinary. Late martian weather readings were posted throughout the primary and extended mapping phases of the MGS mission. Atmospheric temperature and pressure profiles that have been archived with NASA's Planetary Data System were also made available for query on this site. These profiles illustrate the vertical structure of the atmosphere of Mars.

Frosty rim, low lying ground fog and higher cloud layers over Lomonosov Crater in winter (Mars Orbiter Camera image courtesy of NASA/JPL/Malin Space Science Systems)

The launch of the Mars Global Surveyor spacecraft from the Cape Canaveral Air Station took place on November 7, 1996. After a ten-month cruise to Mars, the MGS spacecraft executed its orbit insertion maneuver on September 12, 1997. The period of the initial orbit around Mars was nearly two days. The mission plan called for a three- to four-month aerobraking sequence to modify the orbit to one suitable for mapping the red planet. The mapping phase of the mission was then scheduled to begin in the spring of 1998, and to continue for one complete martian year (687 days). Unfortunately, problems with one of the two MGS solar panels forced the aerobraking sequence to proceed more slowly than planned. MGS executed its final aerobraking pass through the upper martian atmosphere on February 4, 1999, and successfully performed its aerobraking exit maneuver later that day. MGS executed its transfer to mapping orbit on February 19, 1999, and achieved the desired mapping orbit with a period just under two hours and an altitude of approximately 250 miles.

The primary mapping phase of the MGS mission began in March, 1999, and was completed in January, 2001 after one martian year. An extended mapping mission began on January 31, 2001. A series of further extensions were granted by NASA and the US Congress as the spacecraft proved to be robust and continued to return high quality science data. When contact was finally lost in November, 2006, the mission was in its fourth extended phase. Following a concerted but ultimately unsuccessful effort to command the spacecraft to a safe state and reestablish radio contact, the mission was terminated on January 31, 2007. The long duration of the mission provided a special opportunity to study year to year changes on Mars.
<urn:uuid:6f7fa5cc-66d5-4b56-9d6f-737353b667e3>
3.140625
628
Knowledge Article
Science & Tech.
37.444514
When I'm programming, I usually anthropomorphize the code and the actors involved; when I'm thinking about program flow, I think of it as if real actors were doing the tasks needed, and then I convert that back into code. But when reading other people's code, I oftentimes find it hard to convert back into my internal actor-based representation. I realized that this is because the term "Object Oriented Programming" should likely be called "Subject Oriented Programming".

I'm using the term Subject the way you use it when describing English sentence structure. Take a look at this: "The boy put the box in the bag."

Just to refresh your grade school English (which I had to do as well), take a brief look at the sentence diagramming article on wikipedia. Our sentence above boils down to:

Subject: "The boy"
Predicate & Direct object: "put the box"
Indirect object: "in the bag"

How would you represent that in code? Think about it for a second before continuing.

So, here are the possibilities of how this could be written with a single method call:

// Subject-oriented programming:
// "The boy put the box in the bag"
boy.PutIn(box, bag);

// Direct-object oriented programming:
// "The box was put in the bag by the boy"
box.PutIn(bag, boy);

// Indirect-object oriented programming:
// "The bag, to which was added a box, was done by the boy." OR
// "A bag now has a box that was put there by the boy."
bag.AddTo(box, boy);

So, which is correct via the principles of OOP? What's the correct place for the method that does the work? Is it on the Subject of the sentence? The Direct Object? Were you taught how to design this (i.e., in school)? In practice, method placement is one of the biggest factors in how maintainable and testable your code is, and code is rarely as simple as what I've shown above. How do you figure out where the right place for your methods is?

One code base where this was thought about a lot is Python's handling of string and list methods. The split() and join() methods in Python are both on the string type. In other words, the list type is rigid: its methods contain no string manipulation. This seems good at first, until you read the code:

s = "Hello, world"
# Split the string into the list ["Hello", " world"]:
l = s.split(',')
# Now, take that resulting list, and join it back together:
j = ','.join(l)

Wow, that last line looks weird! What's going on there? We're creating a temporary string "," and then calling the join() method! Weird! For that last line, I would have much preferred to write:

j = l.join(',')

to be symmetric with the original splitting code, but to write things that way would place a string-generating method join() on the list class, which also seems wrong.
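(A hypothetical sketch of the symmetric placement; StrList is an invented name, not anything in Python's standard library:)

class StrList(list):
    # A list that knows how to join itself back into a string.
    def join(self, sep):
        # Delegate to str.join, but read symmetrically with s.split(sep).
        return sep.join(self)

s = "Hello, world"
l = StrList(s.split(','))
j = l.join(',')      # mirrors s.split(',') -- back to "Hello, world"
print(j)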
<urn:uuid:5c6b77e7-529a-4f55-bbd2-6252bbf3ff39>
2.828125
709
Personal Blog
Software Dev.
70.180678
Cosmic Collisions Impact Jupiter
Posted on Sep 19, 2012 in News

Contrary to popular belief, the solar system isn't a quiet and serene neighborhood in the suburbs of the Milky Way. At 11:35 UTC on Monday, September 10, a cosmic vagrant – be it a comet or asteroid – crashed into the atmosphere of Jupiter in a brilliant flash of light. Dan Petersen, an amateur astronomer observing Jupiter, witnessed the impact with his own eyes. Because he did not have a camera attached to his telescope at the time, Petersen described the event on the astronomy forum CloudyNights.com, hoping for confirmation. Luckily, George Bell, another amateur astronomer, just happened to be recording Jupiter when the wayward object impacted. In the video, a dazzling flash of light can easily be seen on the left edge of the limb of Jupiter, lasting for a second and then fading from view.

Impacts of comets and asteroids with Jupiter are of vital importance to life on Earth. Each collision with Jupiter is one fewer collision with Earth. An impact of an asteroid or comet with Earth would bring cataclysmic destruction. An asteroid that measured a mere 100 meters wide flattened over 830 square miles of forest in Siberia in 1908. 65 million years ago, an asteroid just 10 miles wide killed off the dinosaurs, ushering in the age of mammals. Were it not for the presence of Jupiter, such devastating collisions would be much more frequent. The strong gravitational pull of Jupiter clears out many celestial interlopers from the inner solar system, flinging them away from Earth. Other, incoming asteroids are not so lucky. They are pulled in by Jupiter's mass and consumed by the gas giant as they plummet into Jupiter's atmosphere.

The most famous collision of an object with Jupiter is that of comet Shoemaker-Levy 9 in July 1994. In March of that year, professional astronomers Carolyn and Eugene Shoemaker and David Levy discovered the comet and found that it had been captured by Jupiter's gravitational field. Calculations of the comet's trajectory showed that it would eventually impact Jupiter. Between July 16 and July 22 of 1994, telescopes across the world, including the Hubble Space Telescope, were trained on the gas giant. The comet, torn apart into over 20 pieces by Jupiter's pull, plummeted into the southern hemisphere of Jupiter. Each piece burnt in the atmosphere, leaving dark, black blotches, each bigger than Earth, on the cloud-tops, visible for months to come.

The observations of amateur astronomers are vital to our continued understanding of the dynamics of the solar system. Because of the limited amount of time, professional astronomers cannot watch every nook and cranny of our cosmic backyard. Amateurs keep their watchful eyes on our planetary neighborhood, ever vigilant for wandering asteroids. The impact on Jupiter is a reminder that the solar system is not a peaceful place. Asteroids and comets wander in our proximity, and one day, one may just cross paths not with Jupiter, but with Earth.

Great article, hopefully it will scare up some more funding for the space program! Probably won't make this week's paper though. I'm going to hang on to it!
<urn:uuid:dca8854f-b695-4d54-99fe-c9409b2efe33>
3.359375
667
Personal Blog
Science & Tech.
45.610707
Okay, so let's say we have a closed circuit composed of a simple loop of wire following a closed path $C$. There's no battery or anything that might normally induce an electromotive force around the circuit by chemical or other means. And, as we saw when discussing Gauss' law, Coulomb's law gives rise to an electric field that looks like

$\displaystyle E(r) = \frac{1}{4\pi\epsilon_0} \int \rho(s)\, \frac{r-s}{\lvert r-s \rvert^3}\, d^3s$

As we saw when discussing Gauss' law for magnetism, we can rewrite the fraction in the integrand:

$\displaystyle \frac{r-s}{\lvert r-s \rvert^3} = -\nabla\left( \frac{1}{\lvert r-s \rvert} \right)$

So this electric field is conservative, and so its integral around the closed circuit is automatically zero. Thus there is no electromotive force around the circuit, and no current flows.

And yet, that's not actually what we see. Specifically, if we wave a magnet around near such a circuit, a current will indeed flow! Indeed, this is exactly how the simplest electric generators and motors work.

To put some quantitative meat on these qualitative observational bones, we have Faraday's law of induction. This says that the electromotive force around a circuit is equal to the rate of change of the magnetic flux through any surface bounded by that circuit. What? maybe a formula will help:

$\displaystyle \mathcal{E} = -\frac{d}{dt} \int_\Sigma B \cdot dA$

where $\Sigma$ is any surface with $\partial\Sigma = C$. Why can we pick any such surface? Because if $\Sigma'$ is another one then:

$\displaystyle \int_\Sigma B \cdot dA - \int_{\Sigma'} B \cdot dA = \int_{\Sigma - \Sigma'} B \cdot dA$

We can calculate the boundary of this combined surface:

$\displaystyle \partial(\Sigma - \Sigma') = \partial\Sigma - \partial\Sigma' = C - C = \emptyset$

Since our space is contractible, this means that our surface is itself the boundary of some region $V$. But Gauss' law for magnetism tells us that the flux through the boundary of a region is automatically zero. That is, every surface has the same flux, and so it doesn't matter which one we use in Faraday's law.

Now, we can couple this with our original definition of electromotive force, applying Stokes' theorem on the left:

$\displaystyle \oint_C E \cdot dr = \int_\Sigma (\nabla \times E) \cdot dA = -\frac{d}{dt} \int_\Sigma B \cdot dA = -\int_\Sigma \frac{\partial B}{\partial t} \cdot dA$

But this works no matter what surface we consider, so we come up with the differential form of Faraday's law:

$\displaystyle \nabla \times E = -\frac{\partial B}{\partial t}$
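(A quick symbolic check, added here and not part of the original post: a field that is the gradient of a potential is curl-free, which is the "conservative" claim above. A sketch in Python with sympy:)

import sympy as sp

x, y, z = sp.symbols('x y z')
phi = 1 / sp.sqrt(x**2 + y**2 + z**2)      # point-charge potential, up to constants
E = [-sp.diff(phi, v) for v in (x, y, z)]  # E = -grad(phi)

# curl(E), component by component; each should simplify to 0
curl = [sp.diff(E[2], y) - sp.diff(E[1], z),
        sp.diff(E[0], z) - sp.diff(E[2], x),
        sp.diff(E[1], x) - sp.diff(E[0], y)]
print([sp.simplify(c) for c in curl])      # -> [0, 0, 0]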
<urn:uuid:e1d745d7-6886-4301-9688-b32ac87b666d>
3.875
391
Personal Blog
Science & Tech.
44.969477
A Natterer compressor was a type of air compression machine used in early experiments in making liquid oxygen (LOX) in the 1870s. A manually operated screw jack was utilized to compress air or other gases up to ~200 atm (roughly 2,900 pounds per square inch). The device was created by Johann Natterer, a student of the chemist and medical doctor Adolf Martin Pleischl, for experiments creating liquid carbonic acid.
<urn:uuid:002c8ee6-05bb-4039-ad24-4d6a45c9870e>
3.25
270
Q&A Forum
Science & Tech.
45.244357
- Futurity.org - http://www.futurity.org
- Voyager 1 cruises ‘highway’ at solar system’s edge
Posted By Michael Buckley-Johns Hopkins On December 4, 2012 @ 3:56 pm In Science & Technology | 6 Comments

JOHNS HOPKINS / CALTECH (US) — NASA’s Voyager 1 spacecraft, soon to become the first human-built object in interstellar space, is now cruising along an unexpected “magnetic highway” on the outside edge of our solar system. Scientists think that the area on the outskirts of the sun’s influence is the final obstacle Voyager has to negotiate before finally—at least 35 years after launch—crossing over into the void between the stars.

“We believe this is the last leg of our journey to interstellar space,” says Edward Stone, a Voyager project scientist based at the California Institute of Technology (Caltech). “Our best guess is that it’s likely just a few months up to a couple years away. The new region isn’t what we expected, but we’ve come to expect the unexpected from Voyager.”

The “magnetic highway” moniker refers to a connection in the area between our sun’s magnetic field lines and interstellar magnetic field lines. That connection allows lower-energy charged particles that originate inside our heliosphere—the bubble of charged particles the sun blows around itself—to zoom out. It also permits higher-energy particles from outside to stream in. Before entering this region, the sun’s charged particles bounced around in all directions, as if trapped on local roads inside the heliosphere. When Voyager found the highway, scientists operating its Johns Hopkins-built low-energy charged particle detector wondered if the probe had already ventured into interstellar space. Data indicating that the direction of the magnetic field lines has not changed leads the Voyager team, however, to conclude that this region is still inside the solar bubble.

“If we were judging by the charged-particle data alone, I would have thought we were outside the heliosphere,” says Stamatios Krimigis of Johns Hopkins University’s Applied Physics Laboratory and principal investigator for the low-energy charged particle instrument. “In fact,” he says, “our instrument has seen the low-energy particles taking the exit ramp toward interstellar space. But we need to look at what all the instruments are telling us, and only time will tell whether our interpretations about this frontier are correct. One thing is certain: None of the theoretical models predicted any of Voyager’s observations over the past 10 years, so there is no guidance on what to expect.”

Since December 2004, when Voyager 1 crossed a shockwave known as the “termination shock”, the spacecraft has been exploring the heliosphere’s outer layer, called the heliosheath. Here, the solar wind—the stream of charged particles from the sun—abruptly slowed down from supersonic speeds and became turbulent. Voyager 1’s environment was consistent for about five and a half years, but then the spacecraft detected that the outward speed of the solar wind slowed to zero. The intensity of the magnetic field also began to increase. Around May 14, LECP measured a sudden 5 percent increase in cosmic rays—high-energy particles coming into the solar system from elsewhere in the galaxy—followed by a similar increase on July 28. This second increase was accompanied by a decrease (by a factor of five) in low-energy particles, but this lasted only for four days. A few days later, the same up-and-down exchange occurred, but on Aug.
25 the instrument recorded an even larger increase in cosmic rays—bringing the total increase since the end of March to about 30 percent.

Voyager 1 and its twin spacecraft, Voyager 2, were launched 16 days apart in 1977. Between them, they have visited Jupiter, Saturn, Uranus, and Neptune. Voyager 1 is the most distant manmade object, about 11 billion miles (18.5 billion kilometers) from the sun. Voyager 2 is about 9 billion miles (15 billion kilometers) out. While Voyager 2 has seen some gradual changes in charged particles, they are very different from those seen by Voyager 1. Scientists do not think Voyager 2 has reached the magnetic freeway.

“The solar wind measurements speak to the unique abilities of the LECP detector, designed at APL nearly four decades ago,” Krimigis says. “Where a device with no moving parts would have been safer—lessening the chance a part would break in space—our team took the risk to include a stepper motor that rotates the instrument 45 degrees every 192 seconds, allowing it to gather data in all directions and pick up something as dynamic as the solar wind. A device designed to work for 500,000 ‘steps’ and four years has been working for 35 years and well past 6 million steps.”

The new results were described at the American Geophysical Union meeting in San Francisco. The Voyager spacecraft were built and are operated by the Jet Propulsion Laboratory, a division of Caltech. The LECP instrument was designed and built at APL with NASA funding. The Voyager missions are a part of the NASA Heliophysics System Observatory, sponsored by the Heliophysics Division of the Science Mission Directorate in Washington.

Source: Johns Hopkins University

Article printed from Futurity.org: http://www.futurity.org
URL to article: http://www.futurity.org/science-technology/voyager-1-cruises-highway-at-solar-systems-edge/
URLs in this post:
Voyager: http://www.nasa.gov/voyager
American Geophysical Union meeting: http://fallmeeting.agu.org/2012/
Johns Hopkins University: http://www.jhuapl.edu/newscenter/pressreleases/2012/121203.asp
Copyright © 2009 Futurity.org. All rights reserved.
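(A quick arithmetic check of those stepper-motor figures, added here and not part of the original release:)

SECONDS_PER_YEAR = 365.25 * 24 * 3600
steps_per_year = SECONDS_PER_YEAR / 192          # one 45-degree step every 192 seconds
print(f"{steps_per_year:,.0f} steps per year")   # ~164,000
print(f"{35 * steps_per_year:,.0f} steps in 35 years")  # ~5.8 million, the order of the 6 million quoted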
<urn:uuid:0ef44f8f-4f7e-49fe-98fc-4d4dbc7044a3>
2.96875
1,289
Truncated
Science & Tech.
48.494662
Nitrogen oxides are emitted from high temperature combustion sources such as autos, trucks, aircraft and from boilers used to provide heat, steam or electricity. Since air is made up of almost 80 percent nitrogen, when high temperature burning occurs, some of the nitrogen in the air combines with oxygen to form nitric oxide or nitrogen dioxide gas. These gases can form a reddish-brown haze over urban areas or areas near large emitters. Airborne nitrates reduce visibility, contribute to acid rain, play a major role in the formation of ozone smog, or react with other chemicals to form particulate matter. These particles can fall to earth in rain or snow and increase nitrogen levels in soils and water bodies. Nitrates deposited into water contribute to algae blooms that can deplete oxygen. For example, a significant portion of the nitrogen that enters the Chesapeake Bay comes not from surface runoff or water discharges, but from airborne nitrates. What percentage of the nitrates in Iowa water originated from airborne deposition is currently unknown. Nitrogen oxides went unmonitored in Iowa after the 1980s; the state renewed monitoring in 2000.
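(For reference, the underlying reactions, standard thermal-NOx combustion chemistry added here rather than taken from the article, can be written as:)

N_2 + O_2 \;\xrightarrow{\text{high temperature}}\; 2\,NO
2\,NO + O_2 \;\longrightarrow\; 2\,NO_2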
<urn:uuid:6638907e-94ac-47db-906e-5c7e48480b1c>
3.828125
229
Knowledge Article
Science & Tech.
26.174257
so how do i know if the ph is changing?

Indicate whether the pH increases, decreases, or remains the same when each of the following is added:
- (CH3NH3)Cl to a solution of CH3NH2
- pyridinium nitrate, (C5H5NH)(NO3), to a solution of pyridine, C5H5N
- sodium formate to a solution of formic acid
how do you know these?

use a calculator

what grade science is this so i can help you

hi dan can you help me please on the science tab and it has my name on it hannah please thank you very much and have a very wonderful day

Ecological ________ - normal, gradual changes that occur in the types of species that live in an area. Help please and help me fast please!!!!!

if you want to know what it is then go to google and type in your question and there you go and remember to click on the first link to the question so that way you can know what the correct answer is!!!!!!!!!! thanks!

also for this question: Calculate the hydronium ion concentration and pH in a 0.037 M solution of sodium formate, NaHCO2. For the hydronium ion concentration I made an equation: k = x^2/(0.037 - x), but when i look up the k what compound am i looking for? how do i know?

For each of the following reactions, predict whether the equilibrium lies predominantly to the left or to the right. HCN(aq) + SO42-(aq) ⇌ CN-(aq) + HSO4-(aq) I think it has something to do with the Ka but im not sure!

An ocean wave travels 17.1 m in 6.7 s. The distance between the two nearest wave crests is 5.1 m. What is the frequency of the wave? Answer in units of Hz
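(A worked sketch for the sodium formate question, not from the original thread. The K to look up is the Ka of formic acid, since formate is its conjugate base; Ka = 1.8e-4 is an assumed textbook value:)

import math

Ka_formic = 1.8e-4
Kw = 1.0e-14
Kb = Kw / Ka_formic            # hydrolysis constant for the formate ion

C = 0.037                      # mol/L sodium formate
# Kb = x^2 / (C - x); x is tiny here, so x^2 ~ Kb * C
x = math.sqrt(Kb * C)          # [OH-]
h3o = Kw / x                   # [H3O+]
print(f"[OH-]  = {x:.2e} M")   # ~1.4e-06 M
print(f"[H3O+] = {h3o:.2e} M") # ~7.0e-09 M
print(f"pH     = {-math.log10(h3o):.2f}")  # ~8.2: basic, as expected for the salt of a weak acid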
<urn:uuid:0d925678-9af9-47ed-9a0e-36a4e346962d>
2.890625
403
Q&A Forum
Science & Tech.
85.184848
Mon Nov 09 09:48:52 GMT 2009 by Forlornehope

Whenever this idea is suggested, no one ever mentions the Coriolis acceleration. A good example of this is the way in which air flows round atmospheric areas of low pressure, rather than going straight from high to low. Some of the comments here have the idea, the tangential component of velocity has to be increased as the load climbs the cable, but don't seem to be aware that it is a well known part of dynamics. If I cast my mind back 40 years, I think that we covered this in the second year of Mech Eng at Imperial. Is it a fundamental flaw with the concept, or am I missing something?

Tue Nov 10 17:14:39 GMT 2009 by Ted Armitage

Lots of people have discussed the coriolis force without naming it. It is the apparent sideways force you need to overcome when you climb the cable so that your tangential speed increases in proportion to your distance from the centre of rotation. Some have belittled the force: after all, you don't sway sideways as you climb the stairs unless it has been a particularly good party. But, as you say, its cumulative effect over many days has a major effect on weather systems.

Perhaps a rough calculation would help. Imagine the earth viewed from above the north pole to be a disc with the equator at the rim. The radius is 4000 miles and the rim moves at 1000 mph, thus completing a revolution in 24 hours. Halfway between the pole and the equator the disc would only have a tangential speed of 500 mph. Air at the equator moves at 1000 mph, so if this air were transported radially halfway to the pole and retained its tangential speed it would be moving 500 mph faster than the disc. Quite dramatic, but now consider how long that transport would take. (Of course, real winds don't follow radial paths.) If it was transported the 2000 miles as a 50 mph gale it would take 40 hours, giving an apparent tangential acceleration of 500/40 miles per hour per hour: 12.5 mph per hour. Not likely to impress Top Gear at 0 to 60 in 5 hours. That's 12.5*5280/(3600*3600) = 0.005 feet per second per second. To give a feel for this, gravity at the surface is 6400 times as great.

So the sideways acceleration required is tiny (at 50 mph radially), but cannot necessarily be ignored. Perhaps a good comparison is the energy required. A geostationary orbit is about 6.5 times as far from the centre as the surface of the earth, so the tangential speed must increase from 1 to 6.5 thousand mph. This represents an energy increase proportional to (6.5*6.5 - 1*1): about 40. The energy required to lift a payload to geostationary orbit is a bit less (15%) than that required for the escape velocity of 25 thousand mph, representing an energy proportional to 25*25 = 625. So the energy to increase the tangential speed is about one fifteenth the energy to lift the payload. This is definitely significant. Next I should compare the energy required to keep the cable stable and resist wind forces, but that is left as an exercise for the reader.

Re: Space Elevator
Mon Nov 09 17:50:39 GMT 2009 by David Hasselhoff

It's a great idea, but it would be really hard to implement. I mean, gravity would make the bottom want to fall in a certain direction unless the entire thing was flawlessly level. Even then, a gust of wind or the slightest movement would shift the center of gravity. Also, what about airplanes? And what about asteroids and satellites?
I think it may be possible and it's a really cool idea, but it's highly improbable.

Mon Nov 09 19:24:28 GMT 2009 by Mark

What happens if space junk hits it? Yes, I know the odds, but the ISS is watching debris go by, so it is possible given the amount of junk now floating around the planet.

Mon Nov 09 19:27:38 GMT 2009 by james lake

It's high enough. Just like the breathable mix of elements falls into place at sea level, where the gravity is higher, and only gets worse from there, the higher you go the better it gets for launching something. Atoms less densely packed, less gravity to overcome; hell, Sikorsky has the helicopter-type beast that would pull the load right now. Hoist a cable up by helicopters, 4 each, feed them with a fuel line or power them with something better like rocket fuel tech, run that payload up, damn near in orbit anyway, and launch it. Maybe even build up to a space elevator, but the way to get there is specially designed helicopters. I have more on this subject.

Breakthrough In Industrial-scale Nanotube Processing.

Mon Nov 09 20:55:20 GMT 2009 by Ta

Check this out, one more step there.
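(A quick numeric check of Ted Armitage's figures above, using the same round numbers; this is an addition, not part of the original thread:)

g = 32.2                                    # ft/s^2
accel_mph_per_hr = 500 / 40                 # 12.5 mph per hour
accel = accel_mph_per_hr * 5280 / 3600**2   # ~0.005 ft/s^2
print(f"sideways acceleration: {accel:.4f} ft/s^2")
print(f"gravity is ~{g / accel:,.0f} times greater")   # ~6300, close to the 6400 quoted

spin_up = 6.5**2 - 1**2       # tangential kinetic-energy factor, 1 -> 6.5 thousand mph
lift = 0.85 * 25**2           # ~85% of the escape-velocity energy
print(f"spin-up energy ~ 1/{lift / spin_up:.0f} of lift energy")  # ~1/13, roughly the 'one fifteenth' quoted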
<urn:uuid:d5b92634-92a9-4833-b7a8-7d5c9bce3dc4>
2.984375
1,103
Comment Section
Science & Tech.
69.696421
Or. . .
Thu May 27 12:09:28 BST 2010 by Jeff Green

Is it naive to suggest that giving the plane a nice big positive charge would stop the ash getting into the engines in the first place?

Or. . .
Thu May 27 19:26:05 BST 2010 by Allan Brewer

It's a beautifully creative idea. Without knowing the weight of an ash particle and the average charge per particle it's beyond me to do the actual calculation, but my gut says very strongly that with the particle approaching the engine at 500 mph, and with electrostatic repulsion falling off with the square of the distance (i.e., essentially a very short-range effect), there would be nowhere near enough time to repel the particle a sufficient distance to avoid the engine.
<urn:uuid:035abcd6-ab51-4b21-b3c8-8809d5fd7ed2>
2.796875
212
Comment Section
Science & Tech.
64.290909
Cumulus Clouds (Low Cloud) If you look in the photo above you can see the line of Cumulus clouds that appear in the background. These clouds were created by daytime heating and also an unstable atmosphere. The moisture for these clouds was rising from the Pacific Ocean. The darker cloud is the stronger of the cumulus clouds meaning that the majority of the moisture is being absorbed by that cloud. The clouds around this one are also starting to get more moisture from the ocean to build. Above the cumulus clouds you will see the thin layer of white clouds known as cirrus clouds. With these clouds in place it indicates that there is moisture to higher levels which means the potential does exist for these clouds to further develop into Towering Cumulus clouds. The higher cloud is showing the most potential to reach the TCU status as it has a higher vertical profile as you can see in the photo. Image Caption: Photo taken over the Western Pacific with a Navy helicopter in the background. Credit: Joshua Kelly
<urn:uuid:6885f590-0ee4-430a-bb4c-608a59a088d6>
3.3125
203
Knowledge Article
Science & Tech.
51.949454
Nov. 26, 2004 Dutch researcher Lourens Rijniers has discovered why William of Orange's grave, the monument on the Dam in Amsterdam and the Alhambra in Granada are all badly affected by salt damage. Salt can cause a lot of damage in materials with small pores, such as concrete and mortar. This is because the pressure which builds up during the formation of salt crystals causes cracks to develop in the surrounding material. Rijniers proved this with MRI scans of wet porous materials. Rijniers used nuclear magnetic resonance, known in hospitals as MRI, to study salt crystallisation in model systems. The model system was a simplified porous material with pores of equal size. With this material Rijniers carried out the first experiments to demonstrate that the crystallisation of salts causes the build up of a pressure large enough to damage the material. The applied physicist estimated which circumstances could cause damage. Salt damage varies from white spots on masonry and the carbonation of concrete to the erosion of stone and crack formation in statues. For materials with small pores, such as concrete, mortar and limestone, crystallisation was indeed found to result in damage. However, for materials with only large pores, such as brick, this damage mechanism was found to have no effect. It is not yet clear how the damage arises in these materials. Rijniers wetted the model material with solutions of soda and sodium sulphate and studied the crystallisation process with the help of an MRI scanner. He calculated the pressure in the pores from the amount of salt that dissolved per volume of water. The Ph.D. student used theoretical models to explain how the pressure in the pores arose during the crystallisation process. Salt crystallisation is an important cause of damage in building materials and stones. Although it is clear that salt from seawater and the environment is responsible for the damage, the mechanism behind this is still not understood. An improved understanding of this mechanism will make it easier to prevent possible damage. This research was financed by Technology Foundation STW, the Priority Programme Material Research (PPM) and the Center for Building and Systems TU/e-TNO (KCBS).
<urn:uuid:9fe684ca-2d13-47ea-912c-08cb0444e9de>
3.625
485
Academic Writing
Science & Tech.
42.613525
Astronomer argues for a changing cosmos
By R. Cowen

HVEN, Denmark, January 1578—Heavens! Could the teachings of Aristotle and other scholars all be wrong? This placid island seems an unlikely place from which to challenge the prevailing view of the universe. But that's just what Danish astronomer Tycho Brahe has done. Two of his recent discoveries promise to shatter centuries of learned pronouncements that the cosmos is eternal and immutable. The findings suggest instead that chaos, turmoil, and change rule the universe. Just 6 years ago, Tycho observed that stars can suddenly appear in the sky, blazing brighter than the planet Venus at its most luminous, and then fade from view. Now, he declares that the fuzzy, highly unstable objects known as comets reside in a region far beyond the moon, a region of the heavens thought to be unwavering and immutable. The findings have all of Europe agog.

Tycho's odyssey began on the evening of Nov. 11, 1572. Walking back to his alchemy laboratory at Herrevad Abbey, near Copenhagen, the 26-year-old astronomer saw a brilliant white object that outshone Venus. Several of his servants and peasants confirmed his observations, he reported at the time. The object, slightly northwest of the constellation Cassiopeia, remained for 18 months in a patch of sky where no star had ever been seen before. At times, it was so bright that observers could view it in broad daylight. It also changed from white to red to leaden gray. Tycho and other astronomers scrambled to determine whether the new object moved across the sky. Any discernible motion would indicate the point-like object was not a star but an object nearer than the moon, within the so-called sublunary sphere. If so, the theories of Aristotle, Plato, and others who extoll the purity of the heavens could still hold.

The young astronomer had just built a new version of a sextant, a compass-shaped device that accurately measures the latitude and longitude of distant objects. Tycho's sextant, which features 5.5-foot-long arms joined by a brass hinge, is unsurpassed in detecting the subtle movement of distant objects, he says. When he applied the device to the bright apparition, he reports in his book De Stella Nova, it stood stock-still and so must be a star. The startling discovery so intrigued King Frederick of Denmark that he granted this island to Tycho for a new observatory. Still, the astronomer's finding cannot alone refute centuries of scholarly thought. A new study reported by Tycho just a few days ago, however, could force scientists to revise their long-held beliefs.

The newest drama began last November while Tycho was catching fish at dusk in one of his island's many ponds. He noticed what appeared to be a bright star in the western sky. As the evening grew darker, however, he saw that the object had a reddish tail, the telltale signature of a comet. After sketching the comet, Tycho recorded its distance from two nearby stars in order to determine its position. Over the next few weeks, he diligently tracked the fading comet's motion and found that it has no measurable parallax—the extra motion of nearby objects due to Earth's movement through the heavens. Indeed, in a report to the king, Tycho calculates that the comet must lie farther away than 230 times the radius of Earth, or more than four times the distance to the moon. There can be no doubt that the comet is a bona fide celestial body, beyond the sublunary sphere, and thus in direct conflict with the teachings of the ancients, Tycho says.
The king and others seem swayed by Tycho's careful measurements. Whether this comet of 1577 turns out to be an evil omen or a harbinger of good tidings remains to be seen, but it may spark a revolution in the way people view the cosmos. From Science News, Vol. 156, No. 25 & 26, December 18 & 25, 1999, p. vii. Copyright © 1999, Science Service.
<urn:uuid:53cbcbee-97b2-4209-8f6e-de2ad1977ae2>
3.640625
864
Truncated
Science & Tech.
56.537092
Imagine that you're in a car traveling at a high speed and accelerating quickly. You're heading directly toward a brick wall, and you continue to accelerate until you reach that wall. Sounds like you're in a heap of trouble, doesn't it? Not necessarily. When we hear the word acceleration, we usually assume it means "increasing in speed." In many cases, that's exactly right. But in physics, acceleration is defined as "a rate of change of velocity." This change can be either an increase or a decrease in speed, or a change in direction. In the example given here, acceleration refers to a decrease in speed, so the car decelerates as it approaches the brick wall, slowing down until it gently bumps into it.

Just as acceleration has a precise meaning in physics, so do the terms speed and velocity. While it's perfectly acceptable at times to use the two terms interchangeably, each can have its own distinct meaning. Speed is defined as the rate of motion and is calculated by dividing the distance an object travels by the time it takes to travel that distance. Velocity, on the other hand, is a vector quantity -- a measurement of both the rate of motion (i.e., speed) plus direction. In other words, 10 kilometers per hour is a speed; 10 kilometers per hour heading east is a velocity.

Speed, velocity, and acceleration are sometimes depicted graphically. A graph illustrating time vs. speed, for example, provides a record of how the speed of an object changes over time. From such a graph, it's also possible to see whether an object is traveling at a constant rate or accelerating: a line parallel to the time axis indicates constant speed, while a slanted line indicates changing speed. Another way to depict motion graphically is with arrows. Remember we said that velocity is a vector quantity? Another fact to know is that vector quantities can be described by both magnitude and direction. To represent velocity, an arrow's length can show the speed at which an object is traveling (its magnitude), and its orientation can show the object's direction. Acceleration, which is described by a magnitude and a direction, is itself a vector quantity. (Force and displacement are two other vector quantities.)

It's possible for an object to be moving at a constant speed and be accelerating at the same time. Take as an example a car driving in a circle at 30 kilometers per hour. Although its speed is constant, its velocity is continually changing because its direction is continually changing as well. This change of direction results in an acceleration toward the center of the circle. Likewise, a satellite in circular orbit around Earth is traveling at a constant speed and accelerating toward Earth's center.

Engineers who design cars are keenly aware of acceleration resulting from a change in direction. With safety in mind, they use tires that provide enough friction to keep their cars from losing control when rounding corners. To increase the friction force for a fast-moving car, some engineers design the car's body shape to "hug" the ground. When moving fast enough, the car's body deflects air upward, forcing the car downward. This downward push increases the friction force between the tires and the ground.

What is the difference between speed and velocity?
Define acceleration. What does it mean besides an increase in speed?
What if the graph showed acceleration rather than speed? Where would the line be when the car was continuing at a constant speed of 40 mph in the same direction?
Where would the line be if the speed were continually increasing? Decreasing? An object, such as a planet, circling another object, such as the Sun, at a constant speed is said to be accelerating. Explain why this motion is an example of acceleration.
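(A small worked example of the circling-car case discussed above; the 20-meter radius is an assumption, since the passage gives only the speed:)

v = 30 / 3.6        # 30 km/h in m/s
r = 20.0            # assumed radius of the circular path, in meters
a = v**2 / r        # centripetal acceleration, directed toward the center
print(f"speed stays {v:.2f} m/s, yet the car accelerates at {a:.2f} m/s^2 toward the center")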
<urn:uuid:03625274-0919-46e4-b3c1-0a3934b42801>
4.90625
767
Knowledge Article
Science & Tech.
50.696262
Unlike today's Web, web services can be viewed as a set of programs interacting across a network with no explicit human interaction involved during the transaction. In order for programs to exchange data, it's necessary to define strictly the communications protocol, the data transfer syntax, and the location of the endpoint. For building large, complex systems, such service definitions must be done in a rigorous manner: ideally, a machine-readable language with well-defined semantics, as opposed to parochial and imprecise natural languages. It is possible to write service definitions in English; XML-RPC and the various weblogging interfaces are a notable example. But XML-RPC is a very simple system, by design, with a relatively small set of features; it's ill-suited to the task of building large-scale or enterprise applications. For example, you can't use XML-RPC to send an arbitrary XML document from one system to another without converting it to a base64-encoded string.

Almost all distributed systems have a language for describing interfaces. They were often C or Pascal-like, and often named similarly: "IDL" in DCE and Corba, "MIDL" in Microsoft's COM and DCOM. The idea is that after rigorously defining the interface language, tools could be used to parse the IDL and generate code stubs, thus automating some of the grungier parts of distributed programming. The web services distributed programming model has an IDL, too; and as you can probably guess, it's the Web Services Definition Language, WSDL. It's pronounced by spelling out the letters or saying ``whizz-dell,'' which nearly rhymes with ``diesel.''

WSDL derives from two earlier efforts by a number of companies; the current de facto standard is a W3C Note submitted by IBM and Microsoft. There's a web services description working group, which is creating the next version of the note for eventual delivery as a W3C standard. So far the group has published a requirements document and some usage scenarios. One reason to like the requirements document is that it renames some of WSDL's more confusing terms. I find WSDL to be a frustrating mixture of verbosity -- most messages are essentially described three times -- and curious, supposedly helpful defaults, such as omitting the name of a message in an operation. I'll use the now much-discussed Google WSDL to point some of these out. But, first, let's look at the state of web services programming and IDLs.

In the classic IDL world, the definitions were processed by an IDL compiler to generate stubs for clients, which look like local function calls, and dispatch routines for the server that invoke the developer's code. When new applications were developed, the interfaces were designed from scratch, and all the benefits of ``contract'' programming were possible, including clean and regular definition of function semantics. But back in the real world, these distributed systems usually had to interact with existing systems. Often the project involved ``remoting'' an existing application by putting an RPC interface on an existing service. In cases like this, the IDL files could resemble compiler torture-tests, as network-oriented interface languages were coerced into supporting legacy applications. I mention this because it's about the stage at which WSDL and web services are today. The most widespread tools take an existing Java class or COM object and then generate a WSDL definition. This is backwards and half-assed.
It's backwards because -- as we should have learned with earlier infrastructures -- the right thing to do is write the interface first. It's half-assed because while everybody's generating interfaces, nobody is capable of automatically consuming them. So how did we get here? I can think of two ways. First, the vendors recognize that folks aren't going to throw out their existing code. Just like they put an HTTP front-end on legacy applications when the Web became popular, developers are now going to want to put a web services front-end on their existing code.

The reason why we don't yet have good client development -- i.e., WSDL parsing -- is that it requires being able to turn arbitrary XML Schema definitions into useful stubs, which is hard for a couple of reasons. First, it's not clear how to fit web services programming into existing client frameworks. If you're using SOAP RPC, then all of the classic problems of IDL-based computing come back: memory management, transient network errors, etc. If using SOAP to send XML documents, then new issues such as DOM and SAX support must be dealt with. Second, WSDL prefers to use XML Schema to define the data to be transferred, and understanding XML Schema requires a significant amount of effort. As the experiences of the ``soapbuilders'' group (a mailing list of SOAP toolkit providers, working on achieving interoperability across their implementations) have shown, it can require a great deal of work just to be able to properly handle XML Schema's primitive types.

I think there's a third reason, but one that nobody will admit in public. No vendor wants to spend the enormous effort involved in developing client-side WSDL toolkits when Microsoft can practically wipe them off the desktop by providing one of their own. Yes, I realize that this ignores peer-to-peer and servers talking to subservers, but I still stand by the statement.

It's time to examine parts of the Google WSDL, which is part of the Google developer's kit. A WSDL description is a set of:

- Type definitions, contained in the types element, used to describe the data being exchanged; these can be in any description language, although -- and I swear I'm not making this up -- "WSDL prefers" XML Schema.
- Message definitions, appearing as multiple message elements. As we'll see, message definitions are where we get the first hints that WSDL exceeds the 80/20 rule of flexibility and complexity.
- Operation definitions, appearing within a binding element, which confusingly defines something called a ``port.''
- A service definition, contained in the service element. This defines the endpoint (URL) where the server can be found, and -- by referring to the binding, err, port -- specifies how to communicate with it.

The Google WSDL file defines a SOAP RPC interface, which means it follows the encoding rules found in Section 5 of the SOAP 1.1 specification. I'll avoid the SOAP vs. REST discussions now, other than to mention that RPC is a familiar programming model to many developers. Conceptually, the GoogleSearchResult datatype resembles the following fragment of a C/C++ object:

bool documentFiltering;
char* searchComments;
int estimatedTotalResultsCount;
bool estimateIsExact;
ResultElementArray resultElements;
int _numresultElements;

Note that my hypothetical Schema to C mapping required the addition of a new element to keep track of the size of the array.
More interesting is the way the interacting specs require Google to define the ResultElementArray datatype. According to the SOAP RPC encoding rules, arrays are written by generating each element inside a container. XML Schema requires the container to be declared as its own type. SOAP 1.1 requires arrays to have a defaultable attribute that declares the type and size; SOAP 1.2 rightly divides this into two separate attributes. I say ``rightly'' because XML Schema doesn't have a way to let you default an attribute value in the SOAP 1.1 style. Because of this, WSDL provides its own arrayType attribute that does provide a default. Taking all of this together, the fairly straightforward ResultElementArray array requires the following:

<xsd:complexType name="ResultElementArray">
  <xsd:complexContent>
    <xsd:restriction base="soapenc:Array">
      <xsd:attribute ref="soapenc:arrayType" wsdl:arrayType="typens:Resultelement"/>
    </xsd:restriction>
  </xsd:complexContent>
</xsd:complexType>

Given all of this complexity, we shouldn't be surprised that Google apparently missed the text that said the attribute value should have been typens:ResultElement[]. It's also hard not to look at that fragment and despair. All that complication, just to say "we're sending an array." Unfortunately, since WSDL is caught between two other specs, there seems little else that could be done. The WSDL authors couldn't change SOAP, since they were defining a use for it, and one can only imagine the howls if they tried to modify XML Schema.

Let's now look at some message definitions. The following two definitions define a request message and its response. Because they are labeled as ``opname'' and ``opnameResponse,'' WSDL will let us default those names later on.

<message name="doGetCachedPage">
  <part name="key" type="xsd:string"/>
  <part name="url" type="xsd:string"/>
</message>

<message name="doGetCachedPageResponse">
  <part name="return" type="xsd:base64Binary"/>
</message>

In the list above, I said we have our first hint about WSDL's excessive flexibility. First, a message is intended to be an abstract definition -- one that specifies nothing about the bytes on the wire. As the spec concedes, however, "in some cases, the abstract definition may match the concrete representation very closely or exactly." When sending XML, the representation is exact, and it should be possible to omit messages altogether. (A Google search for "optimize the common case" finds over 300,000 hits.)

The message element also shows too much flexibility. The individual message parts can be specified in-line, they can reference a type from the types section, they can have a mix of name and type declarations, and so on. Can anyone look at the doGoogleSearch message and the GoogleSearchResult datatype, and give a good, practical rationale for the style differences? And why don't all message elements appear in their own section?

An operation is defined as a set of message exchanges. WSDL supports two-party communication, and four operation types are defined (single incoming, single outgoing, incoming request with response, outgoing request with reply), although only the obvious two are currently supported: client sends a message; client sends a message and the server responds. Here is an abstract operation definition -- remember, we don't yet know how bytes appear on the wire -- that uses the earlier message formats:

<operation name="doGetCachedPage">
  <input message="typens:doGetCachedPage"/>
  <output message="typens:doGetCachedPageResponse"/>
</operation>

The input and output messages here carry no explicit name; WSDL defaults them as described above.
Finally, we're ready to bring these abstract messages and datatypes down to earth. This is done in the binding element, which has its own set of extensibility elements:

<binding name="GoogleSearchBinding" type="typens:GoogleSearchPort">
  <soap:binding style="rpc" transport="http://schemas.xmlsoap.org/soap/http"/>
  <operation name="doGetCachedPage">
    <soap:operation soapAction="urn:GoogleSearchAction"/>
    <input>
      <soap:body use="encoded" namespace="urn:GoogleSearch" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/>
    </input>
    <output>
      <soap:body use="encoded" namespace="urn:GoogleSearch" encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/>
    </output>
  </operation>

This is where the fairly clean WSDL elements fall apart into a set of nasty special-case elements. For that reason alone, I can see why binding is its own element, but I still think the separation comes at the cost of too much redundancy and repetition. For example, notice the duplication of attributes in each soap:body element. The use attribute is an example of excessive flexibility: we've already declared our intent to use SOAP RPC encoding through the style attribute in the soap:binding element above. While SOAP defines document and RPC styles, WSDL doubles this to define ``document/encoded'', ``document/literal'', ``rpc/encoded'', and ``rpc/literal''.

The service element ties the abstract messages and their concrete realization together with an endpoint (in this case, a SOAP URL). So we now know what to send, and where to send it. Next month we'll write some code to do just that.
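(To make the binding concrete, here is a hypothetical client-side sketch, not from the article, of the SOAP RPC call this WSDL implies for doGetCachedPage. The endpoint URL is an invented placeholder, and the Google SOAP service itself has long been retired; everything else -- the SOAPAction, namespace, encoding style, and the key/url string parts -- comes straight from the fragments above:)

import urllib.request

ENDPOINT = "http://api.example.com/search/beta2"   # placeholder, not the real endpoint

def do_get_cached_page(key, url):
    # Build the Section-5-encoded RPC envelope implied by the binding above.
    envelope = f"""<?xml version="1.0" encoding="UTF-8"?>
<SOAP-ENV:Envelope
    xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
    SOAP-ENV:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
  <SOAP-ENV:Body>
    <ns1:doGetCachedPage xmlns:ns1="urn:GoogleSearch"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xmlns:xsd="http://www.w3.org/2001/XMLSchema">
      <key xsi:type="xsd:string">{key}</key>
      <url xsi:type="xsd:string">{url}</url>
    </ns1:doGetCachedPage>
  </SOAP-ENV:Body>
</SOAP-ENV:Envelope>"""
    request = urllib.request.Request(
        ENDPOINT,
        data=envelope.encode("utf-8"),
        headers={"Content-Type": "text/xml; charset=utf-8",
                 "SOAPAction": "urn:GoogleSearchAction"},
    )
    with urllib.request.urlopen(request) as response:
        # The base64Binary cached page rides back in doGetCachedPageResponse.
        return response.read()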
<urn:uuid:0d6d1549-1e3f-4d76-b9d6-cb68d3928a0f>
2.734375
2,912
Comment Section
Software Dev.
41.367849
After the decimation suffered during World War II, mankind took a look at all the new technologies it had created to fight the war and turned its gaze towards the stars. Since the late 1940s this onward and upward reach has helped to fuel the engines of our ingenuity, but what has fueled those stellar ambassadors that now dot our solar system and beyond?

To move from the surface of the earth to this new ocean a rocket must be moving about 7 miles per second. That takes a lot of energy. Many different propellants have been used. The very first rocket fuels were a mix of kerosene and liquid oxygen. Alcohol, hydrogen peroxide, and liquid hydrogen have also been used, in addition to solid fuels. Solid fuels can provide thrust without the need for all the refrigeration and containment equipment that some of the liquid fuels, such as liquid hydrogen and oxygen, require.

Once the probe is beyond the reach of the atmosphere there is no way to change what's on board. The probe cannot drop by the local Radio Shack and pick up a fresh pair of AA batteries. While the probe is being built on Earth, the engineers must make sure that they provide a source that will give the probe the right amount of power. Too little power and the scientific instrumentation won't work; too much power could overheat the probe. On-board chemical batteries can be used, but they take space that could be used for scientific instruments. Solar panels can be used, but only up to a certain distance from the sun. Beyond the orbit of Jupiter, probes need an internal power supply that will last for years. They use the heat from the decay of a radioactive isotope.

Early probes like Sputnik and Explorer 1 used chemical batteries to power their systems. In March of 1958 Vanguard 1, the 4th artificial satellite and the 1st powered by solar energy, was launched. Probes with solar panels have more space on board for scientific instruments than probes that use only chemical batteries. Probes sent into the inner solar system (sun to Mars) are almost all powered using solar arrays. Mariner 2, the first USA probe to Venus, suffered the loss of one of its solar arrays, but because it was closer to the sun, it was able to operate using the remaining array. No American manned spacecraft have made use of solar arrays yet (the new Multi-Purpose Crew Vehicle may), but the Russian Soyuz spacecraft have used them since 1967.

The International Space Station (ISS) is the largest man-made structure outside our atmosphere. Larger than a football field (but smaller than a football pitch), this outpost orbits the earth every hour and a half. It is also powered completely by solar power. Past the atmosphere, solar power becomes more practical and more consistent (there is no night in space). Because of the orbital path of the ISS, it is eclipsed by the earth for 30 minutes out of every hour and a half. The station makes use of rechargeable batteries to make sure it is never without power.

As probes go farther and farther away from the sun, the light that can reach them grows weaker and weaker. Until August of 2011, no probe to Jupiter had ever been powered just by solar panels. Juno, the latest probe to Jupiter, has the largest solar arrays ever given to a deep space probe and is the first Jupiter probe to use them. Jupiter receives only 4% of the sunlight we enjoy on Earth.
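That 4% figure is just the inverse-square law at work; a quick sanity check in Python (5.2 AU is Jupiter's mean distance from the sun):

# Sunlight intensity falls off as 1/r^2 with distance from the sun.
r_jupiter = 5.2                  # mean orbital radius in AU (Earth = 1)
fraction = 1.0 / r_jupiter**2
print(round(fraction * 100, 1))  # 3.7 -- i.e. roughly the 4% quoted above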
Advances in solar technology have now made it practical to use solar panels out to 5 Astronomical Units (AU) from the sun. All other deep space probes have used a radioisotope thermoelectric generator (RTG). An RTG works by converting the heat from the decay of a radioactive fuel into electricity. American probes have been using Plutonium-238 (an isotope of plutonium) since the late 1960s. It has a half-life of about 88 years. RTGs have powered all our interplanetary probes (the Voyagers, the Pioneers, and most recently New Horizons). However, NASA has begun to run out of fuel for the RTGs, and the creation of more is full of political and safety considerations.

The technology that we've made to go out to the 'verse with will also help us here on the cool, green hills of earth. RTGs have been used, mainly by Russia, to provide power for off-the-grid lighthouses. Advances in solar panels for space are used down here on terra firma. With the reliability of solar power in space, there are even attempts to construct orbital solar collectors to beam down electricity. There will be from heaven to Earth more than is dreamt of.
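That 88-year half-life is what makes an RTG practical for decades-long missions; here is a sketch of the falloff in available decay heat (ignoring thermocouple degradation, which in practice erodes electrical output faster):

def rtg_power(p0_watts, years, half_life=87.7):  # Pu-238: the ~88 years above
    """Power remaining after `years` of radioactive decay."""
    return p0_watts * 0.5 ** (years / half_life)

print(round(rtg_power(100.0, 35), 1))  # 75.8 -- about 3/4 of launch power after 35 years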
<urn:uuid:edf65a9f-2c35-4140-991a-5c0740e49a34>
3.859375
1,014
Personal Blog
Science & Tech.
51.321042
Ocean warming may increase the abundance of marine consumers

Warmer ocean temperatures could mean dramatic shifts in the structure of underwater food webs and the abundance of marine life, according to a new study by researchers at the University of North Carolina at Chapel Hill, UNC Coastal Studies Institute and DePauw University. Michael F. Piehler, Program Head in Estuarine Ecology and Human Health at UNC Coastal Studies Institute and an Assistant Professor at the UNC Institute of Marine Sciences in Morehead City, N.C., is a study co-author.

Until now, little has been known about how changes in temperatures might affect the total productivity and growth of all marine consumers (such as animals, fungi and bacteria) relative to their prey (including algae and plants). The study, published online Aug. 25, 2009 in the journal PLoS Biology, looked at a simple underwater food chain and how temperature changes affect organisms' growth and metabolism. In warmer temperatures, these processes happen faster. As a result, demands for food and nutrients increase with temperature.

Researchers placed tiny zooplankton (consumers in the food chain) and phytoplankton (which are photosynthesizing producers) in small containers and incubated them at different temperatures and in two nutrient scenarios reflecting low and high resource supply conditions for phytoplankton. The results suggest that higher temperatures could lead to an increase in the number of consumers in the ocean, such as zooplankton or fish, but a reduction in the overall mass of living creatures in the sea.

Mary O'Connor, the study's lead author, said the findings have implications for how marine and other ecosystems might respond to climate change. "Small changes in ocean temperature, like those expected with climate change or even just a warmer summer, have fundamentally different effects on marine consumers and their food supply," said O'Connor, who carried out the research while a graduate student at UNC and is now a postdoctoral fellow at the National Center for Ecological Analysis and Synthesis in Santa Barbara, Calif.

"This means we may be able to understand some important consequences of ocean temperature change before we go out and study temperature effects on every single species," O'Connor said.

"The components of this theory have been around for decades, but I think we are just starting to comprehend the enormous range of processes and patterns in nature that are very strongly influenced by temperature," said John Bruno, Associate Professor of marine sciences at UNC Chapel Hill and a co-author of the study.

Ocean temperature averages about 30 C (86 F) in the tropics and 2 C (35.6 F) in the polar regions, and varies between summer and winter. Climate models predict ocean temperatures will rise between 2 C and 7 C (a change of roughly 4 F to 13 F) in different parts of the world in the next 100 years, and increases of 1 C to 4 C (about 2 F to 7 F) have already been observed. All of these types of changes would affect the food chains of the ocean, O'Connor said.

Other study authors are Dina M. Leech, formerly a postdoctoral researcher at UNC Coastal Studies Institute and now an Assistant Professor at DePauw University; and Andrea Anton, a doctoral student in the UNC Curriculum in Ecology.
<urn:uuid:10993d20-0091-47f8-b031-2ff975f77c02>
3.09375
670
Academic Writing
Science & Tech.
32.042023
You never create an actual streambuf object, but only objects of classes derived from class streambuf. Examples are filebuf and strstreambuf, which are described in the man pages filebuf(3CC4) and ssbuf(3), respectively. Advanced users may want to derive their own classes from streambuf to provide an interface to a special device or to provide other than basic buffering. Man pages sbufpub(3CC4) and sbufprot(3CC4) discuss how to do this.

Apart from creating your own special kind of streambuf, you may want to access the streambuf associated with an iostream to use the public member functions, as described in the man pages referenced above. In addition, each iostream has a defined inserter and extractor that takes a streambuf pointer. When a streambuf is inserted or extracted, the entire stream is copied.

ifstream fromFile("thisFile");
ofstream toFile("thatFile");
toFile << fromFile.rdbuf();

We open the input and output files as before. Every iostream class has a member function rdbuf that returns a pointer to the streambuf object associated with it. In the case of an fstream, the streambuf object is of type filebuf. The entire file associated with fromFile is copied (inserted) into the file associated with toFile. The last line could also be written like this:

fromFile >> toFile.rdbuf();

The source file is then extracted into the destination. The two methods are entirely equivalent.
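Assembled into a complete program (spelled here with the modern standard headers rather than the compiler-specific ones this manual documents), the copy idiom is simply:

#include <fstream>

int main() {
    std::ifstream fromFile("thisFile");
    std::ofstream toFile("thatFile");
    toFile << fromFile.rdbuf();  // insert the whole streambuf: copies the file
    return 0;
}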
<urn:uuid:2e8537f0-2c65-4f68-8d2c-94d550c46367>
3.0625
322
Documentation
Software Dev.
42.65206
You can configure the Network Server to use a specific number of threads to handle connections. You can change the configuration on the command line or by using the servlet interface.

The minimum number of threads is the number of threads that are started when the Network Server is booted. This value is specified as a property, derby.drda.minThreads=<min>.

The maximum number of threads is the maximum number of threads that will be used for connections. If more connections are active than there are threads available, the extra connections must wait until the next thread becomes available. Threads can become available after a specified time, which is checked only when a thread has finished processing a communication. To set the maximum from the command line, run:

java org.apache.derby.drda.NetworkServerControl maxthreads <max> [-h <hostname>] [-p <portnumber>]

You can also use the derby.drda.maxThreads property to assign the maximum value. A <max> value of 0 means that there is no maximum and a new thread will be generated for a connection if there are no current threads available. This is the default. The <max> and <min> values are stored as integers, so the theoretical maximum is 2147483647 (the maximum size of an integer), but the practical maximum is determined by the machine configuration. To set the time slice, run:

java org.apache.derby.drda.NetworkServerControl timeslice <milliseconds> [-h <hostname>] [-p <portnumber>]

You can also use the derby.drda.timeSlice property to set this value. A value of 0 milliseconds indicates that the thread will not give up working on the session until the session ends. A value of -1 milliseconds indicates to use the default. The default value is 0. The maximum number of milliseconds that can be specified is 2147483647 (the maximum size of an integer).
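Putting the pieces together, a session pinning the thread pool and time slice might look like this (the host, port, and numeric values are illustrative; localhost and 1527 are simply the Network Server defaults):

java org.apache.derby.drda.NetworkServerControl maxthreads 50 -h localhost -p 1527
java org.apache.derby.drda.NetworkServerControl timeslice 1000 -h localhost -p 1527

Or, equivalently, as properties:

derby.drda.minThreads=10
derby.drda.maxThreads=50
derby.drda.timeSlice=1000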
<urn:uuid:e60fd46a-7c16-4c1a-bc25-34ee4605deba>
2.96875
394
Documentation
Software Dev.
50.751925
London Zoological Society reports on health of planet Earth

Tuesday, October 24, 2006

The annual Living Planet Report compiled by the Zoological Society of London and the Global Footprint Network, published this week by WWF International, assessed "the health of the planet's ecosystems" and measured the demands people make on the planet's resources.

The Living Planet Index is based on population trends between 1970 and 2003 of over 3,600 populations of more than 1,300 vertebrate species from around the world. There has been an overall decline of around 30 per cent over the 33-year period. Tropical species populations declined by around 55 per cent on average from 1970 to 2003, while temperate species populations have shown little overall change.

The Ecological Footprint measures the area of biologically productive land and sea required to sustain humanity. In 2003 this was 14.1 billion global hectares, or 2.2 global hectares per person (a global hectare is a hectare with world-average ability to produce resources and absorb wastes). The total supply of productive area, or biocapacity, in 2003 was 11.2 billion global hectares, or 1.8 global hectares per person.

Demand overshot supply first in the 1980s and has been increasing every year since. By 2003 the overshoot was about 25 per cent. Thus, it took approximately a year and three months for the Earth to produce the ecological resources humanity used in that year.

The Living Planet Index and the Ecological Footprint, along with other measures, have been adopted as indicators for the 2010 targets of the Convention on Biological Diversity.

The report concludes with a description of the roles of the various disciplines required to "shift humanity's current trajectory on to a path that will remain within the biological capacity of the planet". James P. Leape, the Director General of WWF International, summarises: "The message of these two indices is clear and urgent: we have been exceeding the Earth's ability to support our lifestyles for the past 20 years, and we need to stop. We must balance our consumption with the natural world's capacity to regenerate and absorb our wastes. If we do not, we risk irreversible damage".

- "The Living Planet Report" — WWF International, October 23, 2006
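The "year and three months" figure follows directly from the overshoot arithmetic; as a check (Python):

demand = 14.1                # billion global hectares used in 2003
supply = 11.2                # billion global hectares of biocapacity
ratio = demand / supply      # ~1.26, i.e. the ~25 per cent overshoot
print(round(ratio * 12, 1))  # ~15.1 months: about a year and three months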
<urn:uuid:885d738e-9b75-46e0-a264-9d859197287d>
3.34375
453
Knowledge Article
Science & Tech.
43.216489
Sivapullaiah, Puvvadi Venkata and Manju (2005) Kaolinite - alkali interaction and effects on basic properties. In: Geotechnical and Geological Engineering, 23 (5). pp. 601-614.

The influence of the type and amount of clays present in soils on their properties is well understood. The clays exert their influence through their large specific surface area and the charges on them. Their effect is mostly exhibited through inter-particle bonding and subsequent particle associations. The mineralogical influence of soils in water is well documented. However, changes in the soil-water system caused by the presence of some contaminants can greatly influence soil behaviour. Some of the changes are due to the formation of new compounds through interactions between the soil and the pollutant. The paper reports the effect of the interaction of kaolinite mineral with alkali on the index properties of soils, from which the geotechnical behaviour can be understood. Detailed X-ray diffraction studies have shown that sodium aluminum silicate hydroxide hydrate (NASH) is formed by clay-alkali reactions. The type and amount of the compound formed is influenced by the concentration of the alkali solution. While the compound is formed in smaller quantities with 1 N NaOH solution, a significantly higher quantity is formed with 4 N NaOH solution. The presence of alumina is shown to play a significant role: it was observed that the formation of sodium aluminum silicate hydroxide hydrate is reduced in the presence of alumina. The specific gravity of the contaminated clay soil was reduced, which confirms the formation of new compounds. Water adsorption and the specific surface area of the soil are also influenced by the soil-alkali interaction. The changes in the free swell and index properties of the soil in the presence of alkali have been explained by the changes in soil fabric and the formation of the new compound.

Item Type: Journal Article
Additional Information: Copyright of this article belongs to Springer.
Keywords: Alkali; Alumina; Index properties; Kaolinite; Sodium aluminum silicate hydroxide hydrate; X-ray diffraction
Department/Centre: Division of Mechanical Sciences > Civil Engineering
Date Deposited: 30 Apr 2007
Last Modified: 19 Sep 2010 04:31
<urn:uuid:03fd4449-198e-4539-bf9b-e8184f17ab25>
2.875
504
Academic Writing
Science & Tech.
28.511873
Besides working with individual objects, applications often need to deal with collections of objects. Figure 8 gives a simple example of using collections with JiBX. Here the classes represent the basics of an airline flight timetable, which I'll expand on for the next examples. In this example I'm using three collections in the root class.

The Figure 8 binding definition uses a collection element for each collection. In the case of the first two collections there's a nested structure element to provide the details of the items present in the collection. I've highlighted the definitions for the collection of carriers in green, and the actual carrier information in blue, to emphasize the connection between the different components. The collection of airports is handled in the same way as the collection of carriers. The collection of notes differs from the other collections both in that it's stored as an array, and in that the values are simple ones.

In the case of the Figure 8 binding the collections are homogeneous, with all items in each collection of a particular type. You can also define heterogeneous collections, consisting of several types of items, by just including more than one structure (or value) element as a child of the collection element. Figure 9 demonstrates using a heterogeneous collection for the carrier and airport data from Figure 8, with the structure definitions for the carrier and airport components (shown in green) combined in a single collection.

Figure 9. Heterogeneous collection, with factory

Figure 9 also demonstrates one way to work with collection interfaces (shown in blue): I've changed the type of the collection field to an interface type, with a factory supplying instances of an implementation class. As of the JiBX 1.1 release there's an easier way to accomplish the same effect as using a factory to supply instances of an implementation class. This is to use the new create-type attribute to specify the class used when creating new instances of an object. See the object attribute group descriptions for the full details.

As with structure elements with multiple child components, heterogeneous collections can be either ordered (meaning the items of each type may be repeated, but the different types of items must always occur in the specified order) or unordered (meaning the items can be in any order). Either way, the child components of a collection are always treated as optional by JiBX (so zero or more instances are accepted).

The collection element is generally similar to the structure element in usage and options, but accepts some additional attributes that are unique to working with collections of items. Most of the added attributes are for when you want to implement a custom form of collection, using your own methods to add and retrieve items in the collection. Another attribute, item-type, can be used to specify the type of items in the collection.

For the prior examples I've used embedded structure elements to define the structure of items in the collection. This isn't the only way to use collections, though. You can instead leave a collection element empty to tell the binding compiler that objects in the collection will have their own mapping definitions. Specifying the type of items can be useful in this case to avoid ambiguity. Figure 10 shows an example of using mapping definitions in this way.

Figure 10. Collections with mappings

In Figure 10 I've converted the embedded carrier and airport structure definitions used in the earlier examples into their own mapping elements.
The binding uses an item-type attribute to specify that the first collection (shown in blue) contains only carriers, while the second collection (shown in green) uses a generic object type.

You can nest collections inside other collections. This is the approach used to represent multidimensional arrays, or Java collections made up of other collections. You can also use value elements directly as the child of a collection element, though only if the value representation is as an element. This is the way you'd handle a collection of simple values. The collection element will work directly with all standard Java collections implementing the java.util.Collection interface.

Figure 11 gives a more complex example of working with collections; a sketch of the key attributes appears after this paragraph. This builds on the Figure 10 XML and data structures. The prior collections of carrier and airport elements are still present, but now the XML representation uses wrapper elements (carriers and airports, respectively) for the collections of each type. The blue highlighting in the diagram shows this change. In the binding definition, the addition of the wrapper element is shown by just adding a name attribute to each collection element.

Figure 11. Collections and IDs

I've also added route and flight information to the Figure 11 binding. The most interesting part about these additions is the use of references back to the airport and carrier information. The carrier reference linkages are highlighted in green, the airport linkages in magenta. In the Java code, the linkages are direct object references. On the XML side, these are converted into ID and IDREF links - each carrier or airport defines an ID value, which is then referenced by flight or route elements. The binding definition shows these linkages through the use of an ident="def" attribute on the child value component of a mapping element supplying the ID, and an ident="ref" attribute on an IDREF value component that references an ID.

Using ID and IDREF links allows references between objects to be marshalled and unmarshalled, but is subject to some limitations. Each object with an ID must have a mapping in the binding. The current JiBX code also requires that you define objects in some consistent way, though the references to the objects can be from anywhere (even before the actual definitions of the objects). In other words, you have to define each object once and only once. In Figure 11 the definitions occur in the carriers and airports collections. The current code also prohibits using IDREF values directly within a collection (so the definitions can be from a collection, but not the references) - to use references in a collection you need to define some sort of wrapper object that actually holds the reference. However, see JiBX extras for some support classes which extend the basic JiBX handling in these areas.
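Since the figures themselves aren't reproduced here, the fragment below sketches how the attributes named in this section fit together. It is a hypothetical binding assembled for illustration (the class and field names are invented), not one of the actual figure listings:

<binding>
  <mapping name="carrier" class="org.example.Carrier">
    <value name="ident" field="ident" ident="def"/>      <!-- defines the ID -->
    <value name="name" field="name"/>
  </mapping>
  <mapping name="timetable" class="org.example.TimeTable">
    <collection name="carriers" field="carriers"
        item-type="org.example.Carrier"/>  <!-- wrapped, homogeneous collection -->
  </mapping>
</binding>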
<urn:uuid:f609fc38-e426-4d6d-90a0-1e653a046650>
2.734375
1,227
Documentation
Software Dev.
37.389042
See also the Dr. Math FAQ: Browse High School Euclidean/Plane Geometry Stars indicate particularly interesting answers or good places to begin browsing. Selected answers to common questions: Pythagorean theorem proofs. - Non-Euclidean Geometry for 9th Graders [12/23/1994] I would to know if there is non-euclidean geometry that would be appropriate in difficulty for ninth graders to study. - Non-parallel Glide Reflections [10/21/1998] A glide reflection consists of a line reflection and a translation parallel to the reflection. What if the translation is not parallel to - No Slope: An Ambiguous Term [05/08/2003] What is the distinction between a line with a slope of zero and a line with no slope? - Number of Equations Needed in a Simultaneous Linear System [10/29/2003] Could you tell me why we need the same number of equations as variables in order to get a unique solution to a system of simultaneous linear equations? - Obtuse and Oblique [7/29/1996] Are the terms "obtuse" and "oblique" interchangeable? - Optimization: Minimum Area [11/07/1997] How do you fold a piece of paper (rect. with width a and unlimited length) so one corner just reaches the righthand side for minimum area? - The Order of a Proof [01/29/1999] How can you figure out what order to put your proof in? - Parallel Lines [12/31/1998] What are some ways of proving lines parallel - geometrically and - Parallel Lines and Three-Dimensional Space [12/18/2005] My math book says two lines that are each perpendicular to the same third line are not necessarily parallel to each other. How can that be? They would not touch, and isn't that the definition of parallel - Parallel Lines and Transversals Proof [09/28/1998] Prove: If two angles are cut by a transversal and the same-side angles are supplementary, then the lines are parallel. - Parallel Lines, Concentric Circles [04/24/2003] If parallel means that two lines never intersect, would the lines that form a circle drawn inside another circle be considered parallel? - Parallel Lines: Euclidean and Non-Euclidean Geometry [4/25/1996] If two lines are parallel, can they intersect? - Parallel Lines in Projective Space [05/18/1998] Do parallel lines intersect at infinity? Is this in projective space? - Parallel Lines: Two Column Proof [09/09/1998] Could you break down the steps in doing a two column proof to show that two lines are parallel given certain congruent angles? - Path Length or Displacement? [10/17/2001] A body moves from A due east 5m to B, then from B due north 6m to C, and from C due west 5m to D. Calculate total distance covered from A to D. - Path Less Than 1 + sqrt(3) [03/18/2003] Is there a way to connect the four vertices of a square (of side length 1) such that the path travelled is less than 1 + sqrt(3)? - Perimeter of 1000m [07/13/1999] Find the shape with a perimeter of 1000m and the largest possible area. - Perimeter of a Line [08/25/2002] Does a line have perimeter? - Perimeter of a Reuleaux Triangle [04/15/2001] How can I find the perimeter of a Reuleaux triangle of width h? - Planes and Lines [10/26/1996] Do planes and lines contain the same number of points? - Planting Trees [08/13/2002] I have to plant 10 trees in 5 rows with 4 trees in each row. - Point and Line [04/07/2001] How does something without dimension create something with dimension? - Point Equidistant from 3 Other Points [04/11/1999] How do you find a point that is equidistant from three other points? 
- Polygon Angles [02/14/1997] What is the sum of the measure of the angles in polygons with sides 3-50? - Polyominoes [09/08/1997] I am using polyominos, but I do not know how to tell my dad what they are. How can I tell him so he will know? - Precision in Measurement: Perfect Protractor? [10/16/2001] Given that protractors are expected to be accurate to the degree, and in some instances the minute or second, how are angles accurately constructed and marked? - Problems about the Angle between the Hands of a Clock [09/08/2005] At 1:45, the angle between the hands of a clock is 142.5 degrees. When is the next time the angle between the hands will be 142.5 degrees? In addition to that specific problem, this talks about general strategies for solving problems involving angles between the hands of a clock. - Proof of Congruency [10/13/1996] Line PR bisects angles QPS and QRS; prove that segments RQ and RS are - Proof of Heron's Area Formula [12/30/1997] I need to write a proof of Heron's Area Formula. - Proof of Perpendicularity [10/23/1999] How can you prove that two lines (neither vertical) are perpendicular if and only if the product of the gradients is equal to -1? - Proofs and Reasons [01/03/1999] Write a two-column proof for the following theorem: AC is greater than BC and AP = BQ. - Proportions of Exact Enlargements [03/18/1998] How are two objects related if one is an "exact enlargement of the - Proving Lines Congruent [03/29/2002] Prove line AL is congruent to line CM. - Ratios, Geometry, Trigonometry [06/10/1999] A homeschool teacher asks for help with triangles, flagpoles, and - Real-World Carpentry and Trigonometry [11/19/2002] I'm trying to come up with a formula to calculate the height of an arc at the midpoint of the chord that defines it knowing only the length of the arc and the length of the chord. - Reflection Points on a Circle-Shaped Mirror [09/30/2003] Points A and B are located within a circle. If A were a light emitting point and B a light receiving point, then B would receive light from points P on the circle. How can I find these points? - Reflection, Rotation, Translation and Glide Reflection [06/27/2005] Considering the four symmetry transformations--reflection, rotation, translation, and glide reflection--is it possible to express transformations in the two-dimensional plane as a composition of at most three reflections? - Reflex Angle [11/30/1998] What is a reflex angle? - Research in dynamic geometry [08/27/1997] I would like to know about research into learning Geometry using Dynamic - Reuleaux Curve Applications [05/25/2002] What is the Reuleaux curve used for?
<urn:uuid:7f498345-aeeb-4138-9c91-41b61e37ff27>
3.46875
1,665
Q&A Forum
Science & Tech.
65.660111
2.2. The effect on the acoustic peaks

We will now focus our attention on possible effects of primordial magnetic fields on small angular scales, that is, on temperature as well as polarization anisotropies of the CMBR. By small angular scale (< 1°) we mean angles which correspond to a distance smaller than the Hubble horizon radius at the last scattering surface. Therefore, what we are concerned with here are anisotropies that are produced by causal physical mechanisms which are not related to the large scale structure of the space-time.

Primordial density fluctuations, which are necessary to explain the observed structures in the Universe, give rise to acoustic oscillations of the primordial plasma when they enter the horizon some time before the last scattering. The oscillations distort the primordial spectrum of anisotropies by the following primary effects: a) they produce temperature fluctuations in the plasma; b) they induce a velocity Doppler shift of photons; c) they give rise to a gravitational Doppler shift of photons when they climb out of or fall into the gravitational potential well produced by the density fluctuations (Sachs-Wolfe effect).

In the linear regime, acoustic plasma oscillations are well described by standard fluid dynamics (continuity + Euler equations) and Newtonian gravity (Poisson's equation). In the presence of a magnetic field the nature of plasma oscillations can be radically modified, as Magneto-Hydro-Dynamics (MHD) has to be taken into account. To be pedagogical, we will first consider a single-component plasma and neglect any dissipative effect, due for example to a finite viscosity and heat conductivity. We will also assume that the magnetic field is homogeneous on scales larger than the plasma oscillation wavelength. This choice allows us to treat the background magnetic field B0 as a uniform field in our equations (in the following, symbols with the 0 subscript stand for background quantities whereas the subscript 1 is used for perturbations). Within these assumptions one writes down the linearized equations of MHD in comoving coordinates (4), where $a$ is the scale factor, $\hat{B} \equiv B a^2$, and $\delta = \rho_1/\rho_0$, $\phi_1$ and $\mathbf{v}_1$ are small perturbations on the background density, gravitational potential and velocity respectively; $c_S$ is the sound velocity. Neglecting its direct gravitational influence, the magnetic field couples to the fluid dynamics only through the last two terms in Eq. (12). The first of these terms is due to the displacement-current contribution to $\nabla \times \mathbf{B}$, whereas the latter accounts for the magnetic force of the current density. The displacement-current term can be neglected provided that $v_A \ll c$, where $v_A$ is the so-called Alfvén velocity.

Let us now discuss the basic properties of the solutions of these equations, ignoring for the moment the expansion of the Universe. In the absence of the magnetic field there are only ordinary sound waves, involving density fluctuations and longitudinal velocity fluctuations (i.e. along the wave vector). By breaking the rotational invariance, the presence of a magnetic field allows new kinds of solutions, listed below (useful references on this subject are [59, 60]); in the dispersion relations, $\theta$ is the angle between $\mathbf{k}$ and $\mathbf{B}_0$. Fast magnetosonic waves involve fluctuations in the velocity, density, magnetic field and gravitational field. The velocity and density fluctuations are out of phase by $\pi/2$. Eq. (2.17) is valid for $v_A \ll c_S$; for such fields the wave is approximately longitudinal.
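The display equations this passage originally carried correspond to the standard linearized ideal-MHD system. As a sketch, written here in flat, non-expanding coordinates (the comoving version referred to above adds scale-factor terms), they read:

$$
\begin{aligned}
&\partial_t \rho_1 + \rho_0\, \nabla \cdot \mathbf{v}_1 = 0,\\
&\rho_0\, \partial_t \mathbf{v}_1 = -c_S^2\, \nabla \rho_1 - \rho_0 \nabla \phi_1
  + \frac{1}{4\pi}\left(\nabla \times \mathbf{B}_1\right) \times \mathbf{B}_0,\\
&\partial_t \mathbf{B}_1 = \nabla \times \left(\mathbf{v}_1 \times \mathbf{B}_0\right),
  \qquad \nabla \cdot \mathbf{B}_1 = 0,\\
&\nabla^2 \phi_1 = 4\pi G\, \rho_1,
\end{aligned}
$$

with $v_A \equiv B_0/\sqrt{4\pi\rho_0}$. The resulting dispersion relations are $\omega = v_A k \cos\theta$ for Alfvén waves and $\omega^2/k^2 \simeq c_S^2 + v_A^2 \sin^2\theta$ for fast magnetosonic waves in the limit $v_A \ll c_S$ quoted above.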
Detailed treatments of the evolution of MHD modes in the matter-dominated and radiation-dominated eras of the Universe can be found in Refs. [61, 62]. The possible effects of MHD waves on the temperature anisotropies of the CMBR were first investigated by Adams et al. In the simplest case of magnetosonic waves, they wrote the linearized equations of the fluctuations in Fourier space, one set for the baryon component of the plasma and one for the photon component. In these equations $V = i\,\mathbf{k} \cdot \mathbf{v}$, $R = (p_b + \rho_b)/(p_\gamma + \rho_\gamma) = 3\rho_b/4\rho_\gamma$, and $c_b$ is the baryon sound velocity in the absence of interactions with the photon gas. As is evident from these equations, the coupling between the baryon and photon fluids is supplied by Thomson scattering with cross section $\sigma_T$. In the tight-coupling limit ($V_b \simeq V_\gamma$) the photons provide the baryon fluid with a pressure term and a non-zero sound velocity. The magnetic field, through the last term in Eq. (1.21), gives rise to an additional contribution to the effective baryon sound velocity. In the case of longitudinal waves this amounts to a shift of order $v_A^2$ in $c_S^2$. In other words, the effect of the field can be somewhat mimicked by a variation of the baryon density.

A complication arises due to the fact that the velocity of the fast waves depends on the angle between the wave vector and the magnetic field. As we mentioned previously, we are assuming that the magnetic field direction changes on scales larger than the scale of the fluctuation. Different patches of the sky might therefore show different fluctuation spectra depending on this angle.

Figure 2.1. The effect of a cosmic magnetic field on the multipole moments. The solid line shows the prediction of a standard CDM cosmology ($\Omega = 1$, $h = 0.5$, $\Omega_B = 0.05$) with an n = 1 primordial spectrum of adiabatic fluctuations. The dashed line shows the effect of adding a magnetic field equivalent to $2 \times 10^{-7}$ Gauss today. From Ref.

The authors of Ref. performed an all-sky average, summing also over the angle between the field and the line of sight. The effect on the CMBR temperature power spectrum was determined by a straightforward modification of the CMBFAST numerical code. From Fig. 2.1 the reader can see the effect of a field $B_0 = 2 \times 10^{-7}$ G on the first acoustic peak. The amplitude of the peak is reduced with respect to the free-field case. This is a consequence of the magnetic pressure, which opposes the infall of the photon-baryon fluid into the potential well of the fluctuation. Although this is not clearly visible from the figure, the variation of the sound velocity, hence of the sound horizon, should also produce a displacement of the acoustic peaks. The combination of these two effects may help to disentangle the signature of the magnetic field from other cosmological effects (for a comprehensive review see ) once more precise observations of the CMBR power spectrum become available. Adams et al. derived an estimate of the sensitivity to B which MAP and PLANCK satellite observations should allow one to reach, by translating the predicted sensitivity of these observations to $\Omega_b$. They found that a magnetic field with strength today $B_0 > 5 \times 10^{-8}$ G should be detectable. It is interesting to observe that a magnetic field cannot lower the ratio of the first to second acoustic peak, as shown by recent observations.

Alfvén waves may also leave a signature on the CMBR anisotropies. There are at least three main reasons which make this kind of wave of considerable interest.
The first is that Alfvén waves should leave a quite peculiar imprint on the CMBR power spectrum. In fact, as we discussed above, these waves do not involve fluctuations in the density of the photon-baryon fluid. Rather, they consist only of oscillations of the fluid velocity and of the magnetic field; this is what the equations describing Alfvén waves express, assuming that the wavelength is smaller than the Hubble radius and that relativistic effects are negligible. Since the gravitational Doppler shift (Sachs-Wolfe effect) is absent in this case, the cancellation against the velocity Doppler shift which occurs for the acoustic modes does not take place for the Alfvén waves. This could provide a clearer signature of the presence of magnetic fields at the last scattering surface.

The second reason why Alfvén waves are so interesting in this context is that they are vector (or rotational) perturbations. As a consequence they are well suited to probe peculiar initial conditions such as those that might be generated by primordial phase transitions. It is remarkable that whereas vector perturbations are suppressed by the expansion of the Universe and cannot arise from small deviations from the isotropic Friedmann Universe for $t \to 0$, this is not true in the presence of a cosmic magnetic field (5).

The third reason for our interest in Alfvén waves is that for this kind of wave the effect of dissipation is less serious than it is for sound and fast magnetosonic waves. This issue will be touched upon in the next section.

A detailed study of the possible effects of Alfvén waves on the CMBR anisotropies has been performed independently by Subramanian and Barrow and by Durrer et al., who reached similar results. We summarize here the main points of the derivation as given in Ref. In general, vector perturbations of the metric take a form in which B and H are divergence-free, 3d vector fields supposed to vanish at infinity. Two gauge-invariant quantities are conveniently introduced by the authors of Ref.: one representing the vector contribution to the perturbation of the extrinsic curvature, the other the vorticity.

In the absence of the magnetic field, and assuming a perfect-fluid equation of state, the vorticity equation of motion has, in the radiation-dominated era, the solution of constant vorticity, which clearly does not describe waves and, as we mentioned, is incompatible with an isotropic universe as $t \to 0$. In the presence of the magnetic field, Durrer et al. found equations which describe Alfvén waves propagating at the velocity $v_A(\hat{e} \cdot \hat{k})$, where $v_A$ is the Alfvén velocity and $\hat{e}$ is the unit vector in the direction of the magnetic field (6). In this case some amount of initial vorticity is allowed, connected to the amplitude of the magnetic field perturbation $B_1$.

In the general form of the CMBR temperature anisotropy produced by vector perturbations, $V$ is a gauge-invariant generalization of the velocity field. We see from that expression that besides the Doppler effect Alfvén waves give rise to an integrated Sachs-Wolfe term. However, since the geometric perturbation decays with time, the integrated term is dominated by its lower boundary and just cancels in $V$. Neglecting a possible dipole contribution from vector perturbations today, Durrer et al. obtained the result that, as predicted in Ref., Alfvén waves produce Doppler peaks with a periodicity which is determined by the Alfvén velocity.
Since, for reasonable values of the magnetic field strength, $v_A \ll 1$, these peaks will be quite difficult to detect. Durrer et al. argued that Alfvén waves may leave a phenomenologically more interesting signature on the statistical properties of the CMBR anisotropies. In the absence of the magnetic field all the relevant information is encoded in the $C_\ell$ coefficients, defined from the two-point correlation over sky directions with $\mu \equiv \hat{n} \cdot \hat{n}'$. By introducing the usual spherical-harmonics decomposition, the $C_\ell$'s are just the variances of the multipole amplitudes $a_{\ell m}$. Because of its spin-1 nature, the vorticity vector field induces transitions $\ell \to \ell \pm 1$, and hence a correlation between the multipole amplitudes $a_{\ell+1,m}$ and $a_{\ell-1,m}$. This new kind of correlation is encoded in a second set of coefficients, $D_\ell$.

Durrer et al. determined the form of the $C_\ell$ and $D_\ell$ coefficients for the case of a homogeneous background magnetic field in the range $-7 < n < -1$, where $n$ determines the vorticity power spectrum. On the basis of these considerations they found that the 4-year COBE data allow one to obtain a limit on the magnetic field amplitude, in the range $-7 < n < -3$, on the order of $(2-7) \times 10^{-9}$ Gauss.

4. Similar equations were derived by Wasserman for the purpose of studying the possible effect of primordial magnetic fields on galaxy formation. Back.

5. Collisionless matter, like e.g. gravitons after the Planck era, may however support nonzero vorticity even with initial conditions compatible with an isotropic universe. Back.

6. Differently from the authors of Ref., Durrer et al. assumed a homogeneous background magnetic field. This however is not a necessary condition for the validity of the present considerations. Back.
<urn:uuid:f6db00bc-be60-40e0-8737-0872b0e7bbb2>
2.828125
2,631
Academic Writing
Science & Tech.
43.396395
Prove that you can make any type of logic gate using just NAND gates.

Can you set the logic gates so that this machine can decide how many bulbs have been switched on?

What will happen when you switch on these circular circuits?

Can you think like a computer and work out what this flow diagram does?

Can you invert this confusing sentence from Lewis Carroll?

Investigate circuits and record your findings in this simple introduction to truth tables and logic.

This article explains the concepts involved in scientific mathematical computing. It will be very useful and interesting to anyone interested in computer programming or mathematics.

Creating a schedule to cook a meal consisting of two different recipes, plus rice.

Learn about the link between logical arguments and electronic circuits. Investigate the logical connectives by making and testing your own circuits and fill in the blanks in truth tables to record. . . .

Investigate how logic gates work in circuits.

Providing opportunities for children to participate in group narrative in our classrooms is vital. Their contrasting views lead to a high level of revision and improvement, and through this process. . . .

Sort these mathematical propositions into a series of 8 correct steps.

Have a go at being mathematically negative, by negating these statements.
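The first challenge has a tidy constructive answer -- every gate can be built from NAND alone -- which a few lines of Python can verify:

def NAND(a, b): return not (a and b)

def NOT(a):    return NAND(a, a)
def AND(a, b): return NOT(NAND(a, b))
def OR(a, b):  return NAND(NOT(a), NOT(b))
def XOR(a, b):
    c = NAND(a, b)
    return NAND(NAND(a, c), NAND(b, c))

# Exhaustive truth-table check of the constructions:
for a in (False, True):
    for b in (False, True):
        assert AND(a, b) == (a and b)
        assert OR(a, b) == (a or b)
        assert XOR(a, b) == (a != b)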
<urn:uuid:ab7f5dc2-ca73-48bd-9dc4-dbc93cf860d4>
3.828125
261
Content Listing
Science & Tech.
42.856753
When winds blow across the ocean surface, kinetic energy is transferred from the wind to surface water as a result of the friction between the wind and the ocean surface. The kinetic energy transferred to the ocean surface sets the surface layer of water in motion and generates both waves and currents. Sunlight shining on the oceans adds thermal energy to the surface layer; currents carry the thermal energy to other regions.

The process of energy transfer from winds to waves and currents is complex and depends on many factors, including wind speed, air-sea temperature difference, and roughness of the surface (whether or not waves are already present and how high they are). Therefore the percentage of the wind's energy that is converted into kinetic energy of ocean currents is variable.

Wind-generated currents transport large volumes of water and thermal energy across the oceans. Winds are the primary energy source for currents that flow horizontally in the ocean surface layers (less than 1 km deep). Surface currents are often called "wind-driven currents" or "wind drift currents." Density-driven ocean currents, generated by variations in water density, lead to a convective flow or movement.

Heat Transport by Currents

Ocean circulation transfers heat from the tropics toward the poles, moderating mid- and high-latitude climates. Heat transported by currents moderates the climates of regions into which they flow. For example, heat is transported north across the Atlantic first by the Gulf Stream and then east across the Atlantic by the North Atlantic Drift. This transported heat moderates Western Europe's climate. Both wind-driven and density-driven currents are important structures in the heat-transport system.

Boundary currents flow parallel to the coast, and two types are important. Western boundary currents, on the western edge of the oceans, are fast, narrow jets; the East Australian Current is one example. Eastern boundary currents, on the eastern edge of the oceans, are generally weaker than western boundary currents.
<urn:uuid:2acf6d4f-52bf-4681-a585-b0cdb5a1aa38>
4.125
396
Knowledge Article
Science & Tech.
37.374563
Jan 4, 2011, 1:42 AM Post #5 of 5

What the code does is this. Imagine you call the script like so:

somescript.pl -n file.txt

The program now shifts the first argument (-n) into $input. It then goes on to check if $input is defined and is "-n". If that is the case it sets $numeric to 1, and shifts the second argument (file.txt) into $input, which will now have the value "file.txt". The difference between this call and the call somescript.pl file.txt will be that $numeric is set to 1, which I guess is meant to change some behavior later on in the program. I hope this clarifies things.
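For reference, the fragment being described presumably looks something like this -- a reconstruction from the explanation above, not the original poster's code:

my $numeric = 0;
my $input = shift;                      # first argument, e.g. "-n"
if (defined $input && $input eq "-n") {
    $numeric = 1;                       # remember the flag...
    $input = shift;                     # ...and take the real filename instead
}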
<urn:uuid:e5aad3ee-fcb8-40c7-a168-6869de554847>
2.703125
154
Comment Section
Software Dev.
89.802668
Jan 19, 2001, 11:30 AM Post #1 of 1

(From the Perl FAQ)

Is it a Perl program or a Perl script?

It doesn't matter. In ``standard terminology'' a program has been compiled to physical machine code once, and can then be run multiple times, whereas a script must be translated by a program each time it's used. Perl programs, however, are usually neither strictly compiled nor strictly interpreted. They can be compiled to a byte-code form (something of a Perl virtual machine) or to completely different languages, like C or assembly language. You can't tell just by looking whether the source is destined for a pure interpreter, a parse-tree interpreter, a byte-code interpreter, or a native-code compiler, so it's hard to give a definitive answer here.
<urn:uuid:fb005ad4-a9d7-4414-8944-f3594691f0d0>
2.96875
169
Comment Section
Software Dev.
50.275225
“The familiar three dimensions of space—height, width and length—may have been just one or two when the universe was formed, some physicists say. According to the report, their proposal holds that the familiar three dimensions of space could have been folded into two or just one at extremely high energies and temperatures. Such conditions characterized the universe just after the “Big Bang” that gave it birth. As the universe cooled, in this view, more spatial dimensions would have appeared one by one.”
<urn:uuid:9c35267d-8158-4b44-a4ac-c2beac0e5ebb>
3.171875
110
Truncated
Science & Tech.
41.036401
What are Black Holes Anyway?

Black holes are not really holes at all. They are the opposite of empty! Black holes have the most matter stuffed into the least space of any objects in the universe. Because they are so compact, they have very strong gravity. Here on Earth, gravity is what makes things fall down, rather than just float away, when you let go of them. Gravity is what you are measuring when you step on a scale to weigh yourself. Your weight is the amount of force that Earth's gravity exerts on you. The more matter your body contains, the more you weigh. Likewise, the more matter an object has, the stronger its gravity.

The gravity of a black hole is so strong that not even light can escape. Even if a bright star is shining right next to a black hole, you cannot see the black hole. Instead of reflecting the light as other objects do, the black hole just swallows the starlight forever. Any matter that gets too close to a black hole gets swallowed up as well.

There are at least two kinds of black holes

One kind is called a stellar-mass black hole. You can think of it as a "one-big-star" black hole. This type of black hole forms when a big star burns up all its fuel and explodes (called a supernova). Then what's left collapses into a super-compact object—a black hole. Stars must contain quite a bit more matter than our Sun for this to happen. So our Sun, and most stars, will never become black holes.

Stellar-mass black holes are only a few tens of kilometers across—maybe about 40 miles. Just imagine. Our Sun is so huge that about one million Earths would fit inside it. A star with enough matter to become a black hole contains maybe 10 times as much matter as the Sun. Now imagine a star with that much matter, shrinking into a space no farther across than the distance you can drive a car in less than one hour!

Another kind of black hole is called a supermassive black hole. You can think of this type as a "million-big-star" black hole, because it contains as much matter as one million to 100 million Suns! Astronomers think that supermassive black holes are lurking at the centers of galaxies, including our own Milky Way galaxy. They don't know yet how these humongous black holes are formed.

Learning More About Black Holes

Scientists really want to learn more about black holes and other strange and massive objects in the Universe. One space mission that is helping them do just that is a space telescope called XMM-Newton. It was launched into Earth orbit in 1999 by NASA and the European Space Agency. It observes the universe in high-energy x-rays, a type of light that we can't see with our eyes. Matter, such as gas and dust particles, near black holes puts out x-rays as it swirls around at nearly the speed of light just before the black hole swallows it up. By observing these x-rays, XMM-Newton can help scientists understand the black hole.
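The "tens of kilometers" figure comes straight from the Schwarzschild radius, r = 2GM/c^2. A quick check for a black hole of 10 solar masses (Python):

G     = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c     = 2.998e8     # speed of light, m/s
M_sun = 1.989e30    # mass of the Sun, kg

def schwarzschild_radius_km(solar_masses):
    return 2 * G * solar_masses * M_sun / c**2 / 1000.0

r = schwarzschild_radius_km(10)
print(round(r), round(2 * r))   # ~30 km radius, ~59 km (~37 mile) diameter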
<urn:uuid:bb511965-cc23-4d45-a9fe-265d7ac8d6d6>
4.15625
640
Knowledge Article
Science & Tech.
66.377238
A vector is a mathematical object that has both magnitude (a numeric value) and direction. Velocity is a vector because it has both magnitude and direction. For example, a velocity of 33 kilometers per hour (kph) southeast has a magnitude of 33 and a direction of southeast. Speed is not a vector, and direction is not a vector, but speed and direction together, modifying the same object, form a vector. Here are some other examples of vectors:

- Displacement can be a vector when describing the location of one point with respect to another point (whether those points represent two objects, or one object in motion). For example, "New York is 500 miles north of Virginia" or "The ball rolled 3 feet to the left."
- Force can be a vector, since the gravitational force that pulls you toward the earth has both a magnitude and a direction.
- Rotation, when modified with a direction, is a vector. Think of a clock hand, rotated 90° clockwise.

Graphically, a vector is usually represented as an arrow (in other words, if you had to show a vector in a graph, that's how you'd sketch it). Mathematically, a vector's direction is often specified by an angle. To use the example given above, "33 kph southeast" may alternatively be described as "33 kph at 45 degrees."

In Flash, vectors are used primarily with physics applications. This is because multiple vectors (of the same type) can be added together to form one resultant vector. Adding vectors is called superposition. For example, if a balloon is floating in the air, several forces are being exerted on it simultaneously, such as force from the wind, gravitational force, and a buoyant force (that is, the force that is pushing the balloon up). With three forces acting on one balloon, it might be difficult to figure out what the balloon will do. Will it rise or will it fall? Using superposition, you can add the vectors together to find the resultant vector (and determine the balloon's next move). One vector is much easier to work with than three.

Vectors can be divided up into x and y components (in this context, the word components refers to pieces). This is called resolving a vector. You already did this same thing in the "Projection" section. Resolving a vector is nothing more than projecting it along the coordinate system axes. To add vectors together, you must:

1. Resolve all of the vectors into their x and y components. Those pieces are the remaining two sides of the right triangle.
2. Add all of the x components together.
3. Add all of the y components together.

Let's use the example we started above. Imagine a balloon in the air with three forces acting on it:

- A gravitational force with a magnitude of 10 at an angle of 90°
- A buoyant force with a magnitude of 8 at an angle of 270°
- A wind force with a magnitude of 5 at an angle of 45°

To add the vectors together (looking back at our three-step checklist above), the first step is to resolve each vector into its components. What follows is the balloon example we've been using, written in ActionScript. Nothing will appear on the screen; this is purely a mathematical exercise to introduce you to the role of ActionScript in this process. Later in the book, after I've introduced you to the other concepts necessary to understanding them, we'll delve into many more practical examples. In the code below, I've used the number 1 appended to the ends of all variables associated with the gravitational force; 2 for the buoyant force; and 3 for the wind force. (The lines that begin with // are comment lines, for information only.)
To try this ActionScript yourself, open the Actions panel in Flash and enter the ActionScript below, or open the force_example.fla file from the Chapter03 folder on the CD-ROM.

//Gravitational force
angle1 = 90;
magnitude1 = 10;
//Buoyant force
angle2 = 270;
magnitude2 = 8;
//Wind force
angle3 = 45;
magnitude3 = 5;
//Resolve the vectors into their components
x1 = magnitude1*Math.cos(angle1*Math.PI/180);
y1 = magnitude1*Math.sin(angle1*Math.PI/180);
x2 = magnitude2*Math.cos(angle2*Math.PI/180);
y2 = magnitude2*Math.sin(angle2*Math.PI/180);
x3 = magnitude3*Math.cos(angle3*Math.PI/180);
y3 = magnitude3*Math.sin(angle3*Math.PI/180);

Notice the Math.PI/180 factor in each line of ActionScript above. Remember that the trigonometric functions only work with angles measured in radians. This factor converts the angle from degrees to radians. The next two steps are to add all of the x components and y components together to form two resultant vectors:

//Add the x pieces
x = x1 + x2 + x3;
//Add the y pieces
y = y1 + y2 + y3;

You now have the sum of all the forces in the x direction and the sum of all the forces in the y direction. Add these two lines of ActionScript to display the result in the output window:

trace("Force in the x direction="+x);
trace("Force in the y direction="+y);

When you test the SWF file, you will see that the force in the y direction is about 5.54. Since this number is greater than 0, the balloon will be forced to move toward the ground. The force in the x direction is about 3.54. This means that the balloon will be forced to move to the right.

Still lost? There is hope! To many people, math is a dry subject. It is understandable if, when you've finished this chapter, you feel like you have grasped only part of it. Everything will make more sense when you start to see the practical uses of the math you've seen here, and the concepts will become more solidified in your mind. It may make sense for you to reread parts of this chapter when you start to use trigonometry in your games. With the concepts and techniques in this chapter, you are adding practical skills to your programming toolkit. You will find that these things will come in handy frequently. We will revisit vectors and explore more examples of vector uses in the chapters on physics and collision reactions.
<urn:uuid:9dea656a-cc39-4d66-aaef-15ff3f34296d>
4.40625
1,360
Tutorial
Science & Tech.
61.172449
Galaxy clusters are the largest gravitationally bound objects in the universe. They have three major components:

- Hundreds of galaxies containing stars, gas and dust;
- Vast clouds of hot (30 - 100 million degrees Celsius) gas that is invisible to optical telescopes;
- Dark matter, a mysterious form of matter that has so far escaped direct detection with any type of telescope, but makes its presence felt through its gravitational pull on the galaxies and hot gas.

The hot gas envelops the galaxies and fills the space between galaxies. It contains more mass than all the galaxies in the cluster. Although the galaxies and hot gas clouds are very massive, scientists have determined that about 10 times more mass is needed to hold the cluster together. Something, namely dark matter, must exist to provide the additional gravity.

Astronomers think that galaxy clusters form as clumps of dark matter and their associated galaxies are pulled together by gravity to form groups of dozens of galaxies, which in turn merge to form clusters of hundreds, even thousands of galaxies. The gas in galaxy clusters is heated as the cluster is formed. This heating can be a violent process as gas clouds enveloping groups of galaxies collide and merge to become a cluster over billions of years. Chandra images provide dramatic evidence of these mega-mergers. Cosmic "weather systems" millions of light years across are observed, as relatively cool 50-million-degree Celsius clouds of gas fall into much larger and hotter clouds.

It takes a long time to build a galaxy cluster. Exactly how long depends on details such as the amount of dark matter in the universe, whether the dark matter is hot or cold, how fast the universe is expanding, etc. The pressure in the hot gas is an accurate probe of the amount of dark matter in clusters of galaxies. By using this information, and X-ray surveys to count the number of large clusters in the universe, astronomers can test the various theories for the content and evolution of the universe.

Chandra observations of the clouds of hot gas in clusters of galaxies will provide other clues to the origin, evolution and destiny of the universe. Combined X-ray and microwave observations can measure the effect of the cluster gas as it scatters the cosmic microwave background streaming through the cluster from the depths of the universe. The amount of scattering makes it possible to estimate the distance to the cluster. This information can be used to estimate the size and age of the universe.

Another intriguing question is the ultimate fate of the colossal gas reservoirs in galaxy clusters. The crush of all the gas and dark matter in the cluster pushes the particles in the center of the cluster closer together. This causes them to collide more frequently and to slowly lose their energy to radiation, like a tire with a slow leak. In a billion years or so, this radiation leak will take its toll and, if there is no energy source to offset the losses, the gas will cool and slowly settle – in what is called a cooling flow – onto a massive galaxy in the center of the cluster.

Early X-ray observations indicated that the cooling was occurring at such a rate that hundreds of new stars or cool gas clouds should be forming every year in the centers of many clusters. As astronomers began searching for this cool matter, they found some, but not nearly enough.
New observations of galaxy clusters by Chandra and the XMM Newton X-ray Observatory, together with radio observations, may point to a resolution of this problem. They show that in a number of cases, the inflow of cooling gas appears to be deflected by magnetic fields, and perhaps heated by explosions from the vicinity of a supermassive black hole at the core of the central galaxy. Whether or not such violent activity will explain the shortage of cool gas should become clear in the next few years.
<urn:uuid:d7b074fd-91eb-4fa1-967b-993d8003a1c6>
4.5
767
Knowledge Article
Science & Tech.
42.042156
It is well known that the past decade or so has seen less global warming than might have been expected – but what is the cause? This is more of a discussion post, rather than any new analysis. The most recent decade has seen observed global temperatures at the lower limit of the model projections. There seem to be 3 possibilities for this relative slowdown in the rate of warming:
1) Internal climate variability
2) The assumed radiative forcings are wrong
3) The climate simulators used are too sensitive to greenhouse gases
I think there is evidence that all 3 possibilities are playing some role. Firstly, climate simulators show a range of internal variability behaviours, and a decade with no global warming (or even a cooling) is not implausible – various analyses indicate that around 5% of decades should exhibit a cooling trend globally, perhaps because the warming is in the deeper ocean. In fact, we might have expected a cooling decade sometime in the next few decades anyway. If internal variability is the sole cause, then we might expect a more rapid warming over the next decade.
However, another option is that the radiative forcings used in the climate simulations are somehow incorrect. The most obvious culprit would be the emission of aerosol precursors, which help cool the planet. The scenarios used by the IPCC optimistically tend to show a rapid reduction in aerosol emissions from 2005 onwards, which may not have happened. So, perhaps some of the relative lack of warming is due to the fact that we are not using the observed forcings, but projected forcings after 2005? In addition, even if the projections have the correct forcings, the models may be too sensitive to aerosol reductions, and so the projections produce a more rapid warming than that observed. Other candidates for producing incorrect forcings are stratospheric effects or volcanic aerosols.
The final possibility is that the higher climate sensitivity models are too sensitive to greenhouse gases. Recent analyses in the 'Detection & Attribution' framework have suggested this, along with initial work on examining hindcasts from decadal predictions.
What would help answer this? More time to see what happens, of course. A better understanding of recent aerosol trends, and whether the models are responding correctly, would be very beneficial. Also, more regular updates to observed emissions and radiative forcings to allow more concrete detection and attribution of trends.
My suspicion is that all 3 possibilities are playing some role. The next few years will be very interesting indeed!
<urn:uuid:054d0e44-0480-4370-8fc5-1078c6bd575b>
3.3125
509
Personal Blog
Science & Tech.
30.182384
Super Efficient Feeding Habits of Blue Whales
As most people will know, the blue whale is the largest animal alive and probably the largest animal that has ever lived. When a blue whale calf is born it is as big as a fully-grown hippopotamus, and during its first seven months it will drink about 400 litres of its mother's milk every day. It is known that a blue whale's mouth is big enough to accommodate 100 adults, but what Bob Shadwick and his colleagues from the University of British Columbia wanted to know was how much a blue whale could eat in a single mouthful and how much energy it would burn while foraging.
Typically a blue whale will dive to about 100 metres or more for food, with the longest recorded dive being 36 minutes. This length of time is most unusual because, even with the colossal supplies of oxygen that these huge creatures carry in their blood and muscles, dives do not usually last longer than 15 minutes. Bob Shadwick's theory was that the act of feeding must use up a great deal of energy. He explained how whales lunge repeatedly through deep shoals of krill, engulfing their own body weight in water before filtering out the nutritious crustaceans. It was thought that the huge drag effect of slowing down to feed and then accelerating again simply took its toll. Proving this seemed impossible until Shadwick and his student Jeremy Goldbogen discovered that, with the skilful use of hydrophones, pressure sensors and two-axis accelerometers, it would be possible to use the resulting measurements to calculate the energetic cost of blue whale lunges.
A number of whales were studied, and Goldbogen discovered that dives lasted for between 3.1 and 15.2 minutes, with a whale lunging up to six times in a single dive. Goldbogen had previously established that he could calculate the speed of a whale from the acoustic noise of the water swishing past a hydrophone. In this way he was able to establish the speed of the whales as they lunged repeatedly during each dive. The team wanted to calculate the forces exerted on the whales as they accelerated with their colossal mouthful of water. Noting that their mouths seemed to inflate like parachutes as the whales engulfed the krill, Goldbogen tracked down a parachute expert who was able to help him build a mathematical model to calculate the forces involved. In this way the researchers were able to determine that the whales used about 3,226 kilojoules of energy on each dive, but they still needed to know how much energy could be extracted from one mouthful of krill. Goldbogen precisely estimated the size of a blue whale's mouth and then calculated the volume of water and the amount of krill it could hold. He found that the resulting energy could be anything from 34,776 kilojoules to an amazing 1,912,680 kilojoules. This led the team to the conclusion that on a foraging dive a blue whale could provide itself with 90 times as much energy as it used up.
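A quick back-of-the-envelope check of the article's own figures (illustrative only; the numbers come straight from the text above, and the quoted "90 times" presumably reflects a typical mouthful somewhere between the two extremes):

/* Energy gained vs. energy spent per dive, using the article's figures. */
#include <stdio.h>

int main(void) {
    double cost_kj      = 3226.0;     /* energy used, per the article */
    double gain_low_kj  = 34776.0;    /* leanest mouthful */
    double gain_high_kj = 1912680.0;  /* densest mouthful */

    printf("Efficiency, lean krill:  %.0fx\n", gain_low_kj / cost_kj);   /* ~11x */
    printf("Efficiency, dense krill: %.0fx\n", gain_high_kj / cost_kj);  /* ~593x */
    return 0;
}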
<urn:uuid:23abbf2e-4fcc-432c-9bb4-d0439c5b010d>
3.953125
627
Truncated
Science & Tech.
43.864393
#include <string.h>
int strncmp( const char *str1, const char *str2, size_t count );

The strncmp() function compares at most count characters of str1 and str2. The return value is as follows:

Return value     Explanation
less than 0      str1 is less than str2
equal to 0       str1 is equal to str2
greater than 0   str1 is greater than str2

If there are fewer than count characters in either string, then the comparison will stop after the first null terminator is encountered.

Related functions: strcmp(), strchr(), strncpy()
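A short usage sketch (illustrative, not part of the reference entry above):

/* strncmp() in practice: length-limited comparisons. */
#include <stdio.h>
#include <string.h>

int main(void) {
    /* Compare only the first 3 characters: "app" matches "app". */
    if (strncmp("apple", "application", 3) == 0)
        printf("First 3 characters match\n");

    /* Strings differ at the third character: 'p' < 'r', so the result is negative. */
    if (strncmp("apple", "apricot", 5) < 0)
        printf("\"apple\" sorts before \"apricot\"\n");
    return 0;
}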
<urn:uuid:f904eb95-fb24-46bd-8ddc-4050b4931541>
3.109375
132
Documentation
Software Dev.
76.825622
Earthquakes Recorded in Virginia: A Primer
In the theory of plate tectonics, the earth's outermost layer is composed of plates that move relative to each other. Most of the world's earthquakes occur at the plate boundaries. Since places like the California coast are on a boundary between two plates, they have many more earthquakes than places like Virginia, which is near the center of the North American plate (Figure 1a). Yet earthquakes still occur in Virginia (Figure 1b).
Figure 1: (a) Seismogram of the January 17, 1994 Northridge earthquake, magnitude 6.8. (b) Seismogram of the January 22, 1995 Pulaski earthquake, magnitude 2.9. Both events were recorded on a seismograph in Virginia.
Virginia has had over 160 earthquakes since 1977, of which 16% were felt. This equates to an average of roughly one earthquake every month, with two felt each year. Click here for a summary of the largest earthquakes in Virginia.
The largest earthquake to occur in Virginia is the 1897 magnitude 5.8 Giles County earthquake. This earthquake is the third largest in the eastern US in the last 200 years and was felt in twelve states. Click here for a discussion of the observed effects of this event.
Seismic activity (seismicity) has been known for several decades to be strongest in and around Giles County and in central Virginia. This led researchers at the VTSO to concentrate seismic monitoring stations in these two areas, as shown in Figure 2, which plots earthquakes (circles, scaled to magnitude) in and near Virginia from 1774 through 1994.
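The primer does not quantify how different the two Figure 1 events are, but the standard Gutenberg-Richter energy relation (log10 of radiated energy grows as 1.5 times magnitude; an outside assumption from seismology texts, not something the primer states) makes the gap concrete:

/* Radiated-energy ratio between the two Figure 1 events, using the
   standard relation E ~ 10^(1.5*M). The 1.5 factor is an assumption,
   not taken from the primer itself. */
#include <stdio.h>
#include <math.h>

int main(void) {
    double m_northridge = 6.8;  /* Figure 1a */
    double m_pulaski    = 2.9;  /* Figure 1b */
    double ratio = pow(10.0, 1.5 * (m_northridge - m_pulaski));
    printf("Energy ratio: ~%.0f\n", ratio);  /* 10^5.85, roughly 700,000x */
    return 0;
}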
<urn:uuid:e5368898-9615-4a6c-b40d-787f0da0a85d>
3.8125
364
Knowledge Article
Science & Tech.
53.855346
What happens if we get too close to a Black Hole?
The answer to this question might involve becoming a space traveler. Scientists have developed the theory that objects pulled into a black hole might move through a tunnel (sometimes called a worm hole) and end their journey in another spot in the universe. The trip would happen very, very quickly.
It is not possible to see a black hole. The term applies to a theoretical place in the universe that forms when a star's mass collapses. The mass of the star compresses into such a small space that nothing can exit the hole (not even light). The theory has its basis in Einstein's Theory of General Relativity. This phenomenon has never been observed, but scientists have predicted the existence of black holes, some using alternate theories to account for what they have observed. One of these theories is called magnetospheric eternally collapsing objects. This particular idea does not use what is called the spacetime singularity that may be at the center of the black hole.
In more recent years scientists have come closer to proving that black holes are indeed tied to general relativity. They also found that once the process of collapse started there would be no way of stopping it. Modern space scientists, including Stephen Hawking, have proposed that black holes can emit radiation, even while nothing else will escape. As far as the majority of astronomers and physicists are concerned, black holes do exist. Some scientists have even predicted that the matter collapsing into a black hole would have to appear somewhere in the universe.
Some scientists have tried to describe what our surroundings would look like if we approached a black hole. Light would bend in curious and unusual ways due to the strong gravitational pull. In common terms, things around us would look odd and strange. This same massive collapse and pull of gravity would take us in quickly. We would have no chance to escape. Parts of us would be smashed and other parts would be stretched. We wouldn't look anything like the human being that approached the black hole. We would die as a result of these changes in shape; our system would not be able to withstand the pressure and stress.
Scientists have also developed a theory that the mass and pull of black holes can vary, with some exerting more pressure and gravitational pull than others. According to this theory we might even survive our first contact with a black hole that had pulled in less mass. But we would eventually die from the fall into the black hole.
It is also fascinating to note that if we could survive and remain at the edge of the black hole as it formed, we could experience the collapse of a star. But when we entered the black hole we would become part of what has been termed a "singularity." In this case all would literally be one: the star's mass, light and our body would all become part of this single entity.
<urn:uuid:2b795922-7d0a-4cb8-b279-b42b25e7588a>
4.125
596
Knowledge Article
Science & Tech.
53.783957
Online Geometry Problem 697: Square, Circle, Sector, Segment, Tangent, Inscribed, Congruence. Level: High School, Honors Geometry, College.
The figure shows a square ABCD of area S and the inscribed circle with center O. Circles E and G are tangent to circle O and tangent to each other at the center O. Prove that the area of the yellow shaded region is equal to three quarters of S.
<urn:uuid:54106677-a25c-480a-ab81-80a8eb471e75>
3.40625
98
Tutorial
Science & Tech.
56.778409
The interpretation of the comparison of observed and model-predicted concentrations for both organic carbon and black carbon is more difficult because of both inaccuracies in the observations (Section 5.1.2) and the fact that most measured concentrations are only available on a campaign basis. In addition, the source strength and atmospheric removal processes of carbonaceous aerosols are poorly known. Most models were able to reproduce the observed concentrations of BC to within a factor of 10 (see Figure 5.10), and some models were consistently better than this. Both modelled and observed concentrations varied by a factor of about 1,000 between different sites, so agreement to within a factor of 10 demonstrates predictive capability. However, there are still large uncertainties remaining in modelling carbonaceous aerosols.
Figure 5.10: Observed and model-predicted concentrations of black carbon (in ng C m-3) at a number of locations. The models are listed in Table 5.8. Observations refer to those summarised by Liousse et al. (1996) and Cooke et al. (1999). Symbols refer to: circle, Liousse Atlantic; square, Liousse Pacific; diamond, Liousse Northern Hemisphere rural; plus, Liousse Southern Hemisphere rural; asterisk, Liousse Northern Hemisphere remote; cross, Liousse Southern Hemisphere remote; upward triangle, Cooke remote; left triangle, Cooke rural; downward triangle, Cooke urban.
Table 5.9 presents an overview of the comparison between observed and calculated surface mixing ratios. Table 5.9a gives the comparison in terms of absolute mass concentrations, while Table 5.9b gives the comparison in terms of average percentage differences. The average absolute error for sulphate surface concentrations is 26% (eleven models), and the agreement between modelled concentrations and observations is better for sulphate than for any other species. The largest difference with observed values is that of carbonaceous aerosols, with an average absolute error (BC: nine models, OC: eight models) of about 179%. This may be partly due to the large uncertainties in the estimated strength of biomass burning and biogenic sources. The average absolute errors for the dust (six models) and sea salt (five models) simulations are 70% and 46%, respectively.
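The chapter quotes "average absolute error" figures without spelling out the formula; one plausible reading (an assumption on my part, not the report's stated definition) is the mean relative deviation between model and observation:

/* One plausible reading of "average absolute error" in percent:
   the mean of |model - observed| / observed over all sites.
   The data below are made-up pairs for illustration only. */
#include <stdio.h>
#include <math.h>

int main(void) {
    double model[]    = { 120.0,  80.0, 450.0 };
    double observed[] = { 100.0, 100.0, 400.0 };
    int n = 3;

    double sum = 0.0;
    for (int i = 0; i < n; i++)
        sum += fabs(model[i] - observed[i]) / observed[i];

    printf("Average absolute error: %.0f%%\n", 100.0 * sum / n);  /* 18% here */
    return 0;
}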
<urn:uuid:bde4f10d-7df8-450f-a7b9-3a270d7bb6d4>
3.203125
479
Academic Writing
Science & Tech.
33.524106
WHETHER we are searching the cosmos or probing the subatomic realm, our most successful theories lead to the inescapable conclusion that our universe is just a speck in a vast sea of universes. Until recently many physicists were reluctant to accept the idea of this so-called multiverse. Recent progress in cosmology, string theory and quantum mechanics, though, has brought about a change of heart. "The multiverse is not some kind of optional thing, like can you supersize or not," says Raphael Bousso, a theoretical physicist at the University of California, Berkeley. Our own cosmological history, he says, tells us that "it's there and we need to deal with it". These days researchers like Bousso are treating multiverses as real, investigating them, testing them and seeing what they say about our universe. One of their main motivations is the need to explain why the physical laws underlying our universe ...
<urn:uuid:ec8164d6-1021-47e3-991c-62eacf336d6b>
2.71875
219
Truncated
Science & Tech.
42.849813
Image #3 of sequence. In sharp contrast, by July 1998, a dramatic recovery had taken place. There is a well-developed cold tongue and a dramatic bloom of phytoplankton along the equator. High chlorophyll concentrations had not previously been observed over such a large area.
Image ID: fish2198, NOAA's Fisheries Collection
Credit: Courtesy of F. Chavez; Published in Science Magazine, Vol 286, 10 December 1999, p. 212.
<urn:uuid:7b339379-d5c1-46c1-9b5f-590bc639ffc6>
2.828125
108
Truncated
Science & Tech.
48.345
Search our database of handpicked sites
Looking for a great physics site? We've tracked down the very best and checked them for accuracy. Just fill out the fields below and we'll do the rest.
You searched for
We found 5 results on physics.org and 13 results in our database of sites (12 are Websites, 0 are Videos, and 1 is an Experiment).
Search results on physics.org
Search results from our links database
- A description of the relevant work of a number of scientists who are famous for work relating to gases or vacuums.
- In its 100-year history, the electric vacuum cleaner has become an indispensable home appliance for most people, and it's obvious why. This site goes through the basic science behind the humble ...
- Video telling the story of a man who was accidentally exposed to a vacuum.
- Page explaining what a pressure vacuum is.
- The conduction of electricity in thin gases in vacuum tubes was the key to the discovery of the electron in 1897. Description of the discovery of cathode rays.
- An excellent description of how Thermoses (Vacuum Flasks) work using heat transfer principles. From HowStuffWorks.com. An extremely good, informative site.
- A brief explanation of why ion engines need a vacuum.
- Fun demonstration of air pressure with shaving foam and a vacuum pump.
- Description of electromagnetic waves, which move energy through empty space (a vacuum), stored as part of an electric and magnetic field.
- Ever thought about how a brick of vacuum-packed coffee goes soft as soon as it's opened? Understanding the behaviour of granular materials could help physicists to better understand not only coffee ...
Showing 1 - 10 of 13
<urn:uuid:a62de6e8-55c4-444a-9f53-6732bfce0078>
3.21875
349
Content Listing
Science & Tech.
58.122619
Nucleic acids are long chains of monomers (nucleotides) that function as storage molecules in a cell. Nucleotides are composed of sugar, a phosphate group, and a nitrogenous base. ATP, DNA and RNA are all examples of nucleic acids. Nucleic acids are one of the four basic kinds of organic molecules. Now they're used, as many of you know, to store genetic information, and that's the famous DNA and RNA: whether DNA is storing genetic information long term inside of the nucleus of one of your cells, or transferring that genetic information from the cell, from the nucleus that is, out to the ribosomes, and that would be in the form of messenger RNA. Now what a lot of people don't think about is that nucleic acids are actually also used as a means to transfer or add energy, and that's the molecule known as ATP. And that's a trick question that a lot of really sneaky Bio teachers like myself will sometimes like to sneak in there, cause people forget that ATP is actually a kind of nucleic acid, a nucleotide. And that leads into what are the monomers of nucleic acids? The nucleotides are those monomers, the building blocks that are used to build the longer polymers of nucleic acids. Now the basic structure of a nucleotide is that you'll have at the heart a five carbon sugar, sometimes called a pentose sugar: -ose meaning carbohydrate, pent meaning five. On one end of our pentose sugar we'll have a phosphate group, and this gives a strong negative charge to molecules like DNA and RNA. On the other end you'll have the one thing that makes one nucleotide different from the other, and that's a base that has the element nitrogen in it. Because this base has some nitrogen in it, they'll often call it a nitrogenous base. If you look at DNA, DNA uses one of four possible nitrogenous bases. Those are thymine, often abbreviated T; cytosine, often abbreviated, you got it, C; adenine, which has two rings in its nitrogenous base, abbreviated A; and guanine, another of the two-ringed nitrogenous bases. RNA is very similar. It will use guanine, adenine and cytosine. The one difference in the bases between RNA and DNA is that they'll have uracil in place of thymine. Now let's take a look at how you join these nucleotides together. And what happens is that the phosphate of one nucleotide here joins the sugar of the next nucleotide, forming a long strand of DNA nucleotides or RNA nucleotides, kind of like in a little conga line. Now DNA is very famous for having a structure known as the double helix, and that's because with DNA you'll get one strand over here and another strand over there. Now notice how on this strand here the phosphate is pointing upwards, and here it's pointing down. That's called anti-parallel, where the two strands are moving or are aligned in opposite directions. These little dashed lines here are things called hydrogen bonds, and they are what are holding this strand here to that strand there. Now, you commonly would draw DNA in a ladder form like this, kind of like, this looks like a ladder, alright. But you know that DNA forms a double helix; a helix is a corkscrew shape. Now we call it a double helix because there's one, two strands, and they get twisted up in this shape like that. So this is the structure of nucleic acids. Again, individually it's just two long chains of nucleotides joined together, but when wound up, it forms the nice long term stable structure known as the double helix.
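The transcript mentions the hydrogen bonds holding the two strands together; under the standard Watson-Crick pairing rules (A with T, G with C; a detail the lecture does not spell out, so treat it as an assumption here), you can compute one strand from the other. A small illustrative sketch:

/* Build the complementary DNA strand, assuming standard
   Watson-Crick base pairing (A<->T, G<->C). */
#include <stdio.h>

char complement(char base) {
    switch (base) {
        case 'A': return 'T';
        case 'T': return 'A';
        case 'G': return 'C';
        case 'C': return 'G';
        default:  return '?';  /* not a DNA base */
    }
}

int main(void) {
    const char *strand = "ATGCCGTA";
    printf("5' %s 3'\n", strand);
    printf("3' ");
    for (const char *p = strand; *p; p++)
        putchar(complement(*p));
    printf(" 5'\n");  /* the two strands are anti-parallel */
    return 0;
}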
<urn:uuid:d445b71e-8984-45c6-963e-653a522e18f6>
4.28125
782
Knowledge Article
Science & Tech.
47.831978
Elements, Compounds, and Mixtures
Any substance that contains only one kind of atom is known as an element. Because atoms cannot be created or destroyed in a chemical reaction, elements such as phosphorus (P4) or sulfur (S8) cannot be broken down into simpler substances by these reactions.
Example: Water decomposes into a mixture of hydrogen and oxygen when an electric current is passed through the liquid. Hydrogen and oxygen, on the other hand, cannot be decomposed into simpler substances. They are therefore the elementary, or simplest, chemical substances - elements.
Each element is represented by a unique symbol. The notation for each element can be found on the periodic table.
The elements can be divided into three categories that have characteristic properties: metals, nonmetals, and semimetals. Most elements are metals, which are found on the left and toward the bottom of the periodic table. A handful of nonmetals are clustered in the upper right corner of the periodic table. The semimetals can be found along the dividing line between the metals and the nonmetals.
Elements are made up of atoms, the smallest particle that has any of the properties of the element. John Dalton, in 1803, proposed a modern theory of the atom based on the following assumptions.
1. Matter is made up of atoms that are indivisible and indestructible.
2. All atoms of an element are identical.
3. Atoms of different elements have different weights and different chemical properties.
4. Atoms of different elements combine in simple whole numbers to form compounds.
5. Atoms cannot be created or destroyed. When a compound decomposes, the atoms are recovered unchanged.
Elements combine to form chemical compounds that are often divided into two categories. Metals often react with nonmetals to form ionic compounds. These compounds are composed of positive and negative ions formed by adding or subtracting electrons from neutral atoms and molecules. Nonmetals combine with each other to form covalent compounds, which exist as neutral molecules.
The shorthand notation for a compound describes the number of atoms of each element, which is indicated by a subscript written after the symbol for the element. By convention, no subscript is written when a molecule contains only one atom of an element. Thus, water is H2O and carbon dioxide is CO2.
Ionic and Covalent Compounds
Ionic compounds: composed of positive and negative ions (Na+Cl-); exist as solids, such as table salt (NaCl(s)); higher melting and boiling points; strong force of attraction between particles; dissociate into charged particles in water to give a solution that conducts electricity.
Covalent compounds: exist as neutral molecules, such as solids, liquids, and gases (C6H12O6(s), H2O(l)); lower melting and boiling points (i.e., often exist as a liquid or gas at room temperature); weaker force of attraction between molecules; remain as the same molecule in water and will not conduct electricity.
Determining if a Compound is Ionic or Covalent
Calculate the difference between the electronegativities of two elements in a compound and the average of their electronegativities, and find the intersection of these values on the figure shown below to help determine if the compound is ionic, covalent, or metallic.
Practice Problem 1: For each of the following compounds, predict whether you would expect it to be ionic or covalent.
(a) chromium(III) oxide, Cr2O3
(b) carbon tetrachloride, CCl4
(c) methanol, CH3OH
(d) strontium fluoride, SrF2
Click here to check your answer to Practice Problem 1
Practice Problem 2: Use the following data to propose a way of distinguishing between ionic and covalent compounds.
Melting Point (oC) | Boiling Point (oC)
Click here to check your answer to Practice Problem 2
A molecule is the smallest particle that has any of the properties of a compound. The formula for a molecule must be neutral. When writing the formula for an ionic compound, the charges on the ions must balance: the number of positive charges must equal the number of negative charges.
CaCl2: the balanced formula has 2 positive charges (1 calcium ion with a +2 charge) and 2 negative charges (2 chloride ions with a -1 charge).
Al2(SO4)3: the balanced formula has 6 positive charges (2 aluminum ions with a +3 charge) and 6 negative charges (3 sulfate ions with a -2 charge).
Mixtures vs. Compounds
The law of constant composition states that the ratio by mass of the elements in a chemical compound is always the same, regardless of the source of the compound. The law of constant composition can be used to distinguish between compounds and mixtures of elements: Compounds have a constant composition; mixtures do not. Water is always 88.8% O and 11.2% H by weight regardless of its source. Brass is an example of a mixture of two elements: copper and zinc. It can contain as little as 10%, or as much as 45%, zinc.
Another difference between compounds and mixtures of elements is the ease with which the elements can be separated. Mixtures, such as the atmosphere, contain two or more substances that are relatively easy to separate. The individual components of a mixture can be physically separated from each other. Chemical compounds are very different from mixtures: The elements in a chemical compound can only be separated by destroying the compound.
Some of the differences between chemical compounds and mixtures of elements are illustrated by the following example using raisin bran and Crispix cereals.
Raisin bran has the following characteristic properties of a mixture:
- The cereal does not have a constant composition; the ratio of raisins to bran flakes changes from sample to sample.
- It is easy to physically separate the two "elements," to pick out the raisins, for example, and eat them separately.
Crispix has some of the characteristic properties of a compound.
- The ratio of rice flakes to corn flakes is constant; it is 1:1 in every sample.
- There is no way to separate the "elements" without breaking the bonds that hold them together.
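The "Determining if a Compound is Ionic or Covalent" section above relies on a figure that is not reproduced here. A common textbook shortcut uses only the electronegativity difference, with thresholds of roughly 0.4 and 1.7; those cutoffs are a convention I am assuming, not necessarily the tutorial's own figure:

/* Rule-of-thumb bond classifier by electronegativity difference.
   The 0.4 and 1.7 thresholds are a common textbook convention and
   may not match the tutorial's figure exactly. */
#include <stdio.h>
#include <math.h>

const char *bond_type(double en1, double en2) {
    double diff = fabs(en1 - en2);
    if (diff > 1.7) return "ionic";
    if (diff > 0.4) return "polar covalent";
    return "nonpolar covalent";
}

int main(void) {
    /* Pauling electronegativities: Sr ~0.95, F ~3.98, C ~2.55, Cl ~3.16 */
    printf("Sr-F bond: %s\n", bond_type(0.95, 3.98));  /* ionic, as in SrF2 */
    printf("C-Cl bond: %s\n", bond_type(2.55, 3.16));  /* polar covalent, as in CCl4 */
    return 0;
}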
<urn:uuid:aebc9ee5-c805-4d7e-9918-b1d704d69b94>
3.75
1,392
Tutorial
Science & Tech.
39.471739
Date: September 17, 2010 Creator: Behrens, Carl E. Description: This report discusses the U.S. Landsat Mission, which has collected remotely sensed imagery of the Earth's surface for more than 35 years. The two satellites currently in orbit are operating beyond their designed life and may fail at any time. Most Landsat data is used by federal agencies. Efforts to commercialize Landsat operations have not been successful. This report discusses issues facing Congress regarding funding for new Landsat satellites. Contributing Partner: UNT Libraries Government Documents Department
<urn:uuid:05ca6f23-89da-4e4c-a9cc-c380c9647284>
3.296875
116
Structured Data
Science & Tech.
35.348599
As can be observed above, the effects of alcohol at each blood alcohol concentration indicate that the higher the concentration, the greater the risk of discomfort, unconsciousness and, at worst, death.
Living with the Monster: Near the end of a half-mile-long hallway connecting the four reactors of the Chornobyl Nuclear Power Plant, graph bars and squiggles flash on a monitor. Only a few yards away rises the concrete-and-steel sarcophagus sheathing the …
The acids in acid rain react chemically with any object they contact. Acids are corrosive chemicals that react with other chemicals by giving up hydrogen ions. The acidity of a substance comes from the abundance of free hydrogen ions when the substance is dissolved in water. Acidity is measured using a pH scale with units from 0 to 14. Acidic substances have pH numbers from 1 to 6—the lower the pH number, the stronger, or more corrosive, the substance. Some non-acidic substances, called bases or alkalis, are like acids in reverse—they readily accept the hydrogen ions that the acids offer. Bases have pH numbers from 8 to 14, with the higher values indicating increased alkalinity. Pure water has a neutral pH of 7—it is not acidic or basic. Rain, snow, or fog with a pH below 5.6 is considered acid rain.
When bases mix with acids, the bases lessen the strength of an acid. This buffering action regularly occurs in nature. Rain, snow, and fog formed in regions free of acid pollutants are slightly acidic, having a pH near 5.6. Alkaline chemicals in the environment, found in rocks, soils, lakes, and streams, regularly neutralize this precipitation. But when precipitation is highly acidic, with a pH below 5.6, naturally occurring acid buffers become depleted over time, and nature's ability to neutralize the acids is impaired.
Acid rain has been linked to widespread environmental damage, including soil and plant degradation, depleted life in lakes and streams, and erosion of human-made structures. In soil, acid rain dissolves and washes away nutrients needed by plants. It can also dissolve toxic substances, such as aluminum and mercury, which are naturally present in some soils, freeing these toxins to pollute water or to poison plants that absorb them. Some soils are quite alkaline and can neutralize acid deposition indefinitely; others, especially thin mountain soils derived from granite or gneiss, buffer acid only briefly.
The effects of acid rain on wildlife can be far-reaching. If a population of one plant or animal is adversely affected by acid rain, animals that feed on that organism may also suffer. Ultimately, an entire ecosystem may become endangered. Some species that live in water are very sensitive to acidity, some less so. Freshwater clams and mayfly young, for instance, begin dying when the water pH reaches 6.0. Frogs can generally survive more acidic water, but if their supply of mayflies is destroyed by acid rain, frog populations may also decline. Fish eggs of most species stop hatching at a pH of 5.0. Below a pH of 4.5, water is nearly sterile, unable to support any wildlife. Land animals dependent on aquatic organisms are also affected. Scientists have found that populations of snails living in or near water polluted by acid rain are declining in some regions. In the Netherlands, songbirds are finding fewer snails to eat. The eggs these birds lay have weakened shells because the birds are receiving less calcium from snail shells.
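Because the pH scale is logarithmic, each whole unit is a tenfold change in hydrogen-ion concentration. A quick sketch using the standard definition, pH as the negative base-10 log of the hydrogen-ion concentration (the formula itself is standard chemistry; the article above only describes the scale qualitatively):

/* pH from hydrogen-ion concentration (mol/L): pH = -log10[H+].
   Illustrates why pH 4 rain is 1000x more acidic than pure water. */
#include <stdio.h>
#include <math.h>

int main(void) {
    double h_water = 1.0e-7;  /* pure water, pH 7 */
    double h_rain  = 1.0e-4;  /* strongly acidic rain, pH 4 */

    printf("Pure water: pH %.1f\n", -log10(h_water));
    printf("Acid rain:  pH %.1f\n", -log10(h_rain));
    printf("Concentration ratio: %.0fx\n", h_rain / h_water);  /* 1000x */
    return 0;
}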
Most farm crops are less affected by acid rain than are forests. The deep soils of many farm regions, such as those in the Midwestern United States, can absorb and neutralize large amounts of acid. Mountain farms are more at risk—the thin soils in these higher elevations cannot neutralize so much acid. Farmers can prevent acid rain damage by monitoring the condition of the soil and, when necessary, adding crushed limestone to the soil to neutralize acid. If excessive amounts of nutrients have been leached out of the soil, farmers can replace them by adding nutrient-rich fertilizer.
Modern understanding of acids and bases began with the discovery in 1834 by the English physicist Michael Faraday that acids, bases, and salts are electrolytes. That is, when they are dissolved in water, they produce a solution that contains charged particles, or ions, and can conduct an electric current (see Ionization). In 1884 the Swedish chemist Svante Arrhenius (and later Wilhelm Ostwald, a German chemist) proposed that an acid be defined as a hydrogen-containing compound that, when dissolved in water, produces a concentration of hydrogen ions, or protons, greater than that of pure water. Similarly, Arrhenius proposed that a base be defined as a substance that, when dissolved in water, produces an excess of hydroxyl ions, OH-. The neutralization reaction then becomes:
H+ + OH- ⇄ H2O
A number of criticisms of the Arrhenius-Ostwald theory have been made. First, acids are restricted to hydrogen-containing species and bases to hydroxyl-containing species. Second, the theory applies to aqueous solutions exclusively, whereas many acid-base reactions are known to take place in the absence of water.
The first three demonstration plants of the Department of the Interior's program to develop methods for converting saline water to fresh water were completed during 1961. The plants at Freeport, Tex., and San Diego, Calif., use distillation processes; that at Webster, S.D., electro-dialysis. Plants are still to be completed at Roswell, N.Mex. (distillation process), and at Wrightsville Beach, N.C. (freezing process). The plants at Webster and Roswell are for converting local brackish water to potable water; the other three convert seawater. Meanwhile, Congress authorized $75 million for saline water research over the next six years, a big increase from previous government spending. Several private companies are working on conversion techniques, such as freezing and electro-dialysis, too.
The Interior Department's helium conservation program also got under way during 1961. Under the program, the government can buy up to $47.5 million worth of helium a year from private companies participating in the program. Contracts were signed with several gas-producing and pipeline firms, which will build plants to extract at low temperature the very small amounts of helium found in natural gas. In addition, Kerr-McGee Oil Industries is building a private plant in Arizona for recovering helium for sale on the market; this plant is not connected with the government program.
Chemical production has been growing more rapidly in many foreign countries—especially some European nations and Japan—than in the United States. As a result, overseas producers have been competing more aggressively with U.S. firms in their home markets, in the less-developed areas, and even in the United States itself.
At the same time, the rapid growth in demand overseas, lower labor and operating costs in many foreign lands, and the establishment of larger coherent market areas through such organizations as the European Common Market and the European Free Trade Association have led American firms to step up their international operations. Most major U.S. chemical firms—and many smaller ones, too—have organized international subsidiaries, built plants abroad, acquired foreign firms, or formed joint ventures overseas with foreign producers and investors. During 1960, according to the Department of Commerce, U.S. chemical companies invested nearly $250 million abroad, including $86 million in Europe and about $70 million in both Canada and Latin America.
<urn:uuid:65799970-915e-4b0c-b13d-7c671c65b500>
3.09375
1,575
Nonfiction Writing
Science & Tech.
42.486964
In mathematics, a plane is a flat, two-dimensional surface. A plane is the two-dimensional analogue of a point (zero dimensions), a line (one dimension) and a solid (three dimensions). Planes can arise as subspaces of some higher dimensional space, as with the walls of a room, or they may enjoy an independent existence in their own right, as in the setting of Euclidean geometry. When working exclusively in two-dimensional Euclidean space, the definite article is used, so, the plane refers to the whole space. Many fundamental tasks in mathematics, geometry, trigonometry, graph theory and graphing are performed in a two-dimensional space, or in other words, in the plane.
Euclidean geometry
Euclid set forth the first great landmark of mathematical thought, an axiomatic treatment of geometry. He selected a small core of undefined terms (called common notions) and postulates (or axioms) which he then used to prove various geometrical statements. Although the plane in its modern sense is not directly given a definition anywhere in the Elements, it may be thought of as part of the common notions. In his work Euclid never makes use of numbers to measure length, angle, or area. In this way the Euclidean plane is not quite the same as the Cartesian plane.
Planes embedded in 3-dimensional Euclidean space
This section is specifically concerned with planes embedded in three dimensions: specifically, in R3. In three-dimensional Euclidean space, we may exploit the following facts that do not hold in higher dimensions:
- Two planes are either parallel or they intersect in a line.
- A line is either parallel to a plane, intersects it at a single point, or is contained in the plane.
- Two lines perpendicular to the same plane must be parallel to each other.
- Two planes perpendicular to the same line must be parallel to each other.
Definition with a point and a normal vector
In a three-dimensional space, another important way of defining a plane is by specifying a point and a normal vector to the plane. Let r0 be the position vector of some known point in the plane, and let n = (a, b, c) be a nonzero vector normal to the plane. The idea is that a point P with position vector r is in the plane if and only if the vector drawn from r0 to P is perpendicular to n. Recalling that two vectors are perpendicular if and only if their dot product is zero, it follows that the desired plane can be expressed as the set of all points r such that
n · (r − r0) = 0.
(The dot here means a dot product, not scalar multiplication.) Expanded this becomes
a(x − x0) + b(y − y0) + c(z − z0) = 0,
which, with d = −(a x0 + b y0 + c z0), is the familiar equation for a plane:
a x + b y + c z + d = 0.
Note that this means that two non-equal points can be used to define a plane, so long as they are ordered and used according to an agreed convention: for example, the first point r0 sits on the plane and the normal vector is defined implicitly as the difference n = r1 − r0.
Defining a plane with a point and two vectors lying on it
Alternatively, a plane may be described parametrically as the set of all points of the form
r = r0 + s v + t w,
where s and t range over all real numbers, v and w are given vectors defining the plane, and r0 is the vector representing the position of an arbitrary (but fixed) point on the plane. The vectors v and w can be visualized as vectors starting at r0 and pointing in different directions along the plane. Note that v and w can be perpendicular, but cannot be parallel.
Defining a plane through three points
Let p1 = (x1, y1, z1), p2 = (x2, y2, z2), and p3 = (x3, y3, z3) be non-collinear points.
Method 1
The plane passing through p1, p2, and p3 can be defined as the set of all points (x, y, z) that satisfy the determinant equation
det[ x − x1, y − y1, z − z1 ; x2 − x1, y2 − y1, z2 − z1 ; x3 − x1, y3 − y1, z3 − z1 ] = 0.
Method 2
To describe the plane as an equation in the form ax + by + cz + d = 0, solve the following system of equations:
a x1 + b y1 + c z1 + d = 0
a x2 + b y2 + c z2 + d = 0
a x3 + b y3 + c z3 + d = 0
This system can be solved using Cramer's Rule and basic matrix manipulations. Let
D = det[ x1, y1, z1 ; x2, y2, z2 ; x3, y3, z3 ].
If D is non-zero (so for planes not through the origin) the values for a, b and c can be calculated as follows:
a = (−d/D) det[ 1, y1, z1 ; 1, y2, z2 ; 1, y3, z3 ]
b = (−d/D) det[ x1, 1, z1 ; x2, 1, z2 ; x3, 1, z3 ]
c = (−d/D) det[ x1, y1, 1 ; x2, y2, 1 ; x3, y3, 1 ]
These equations are parametric in d. Setting d equal to any non-zero number and substituting it into these equations will yield one solution set.
Method 3
This plane can also be described by the "point and a normal vector" prescription above. A suitable normal vector is given by the cross product
n = (p2 − p1) × (p3 − p1),
and the point r0 can be taken to be any of the given points p1, p2 or p3.
Distance from a point to a plane
For a plane ax + by + cz + d = 0 and a point p1 = (x1, y1, z1) not necessarily lying on the plane, the shortest distance from p1 to the plane is
D = |a x1 + b y1 + c z1 + d| / sqrt(a^2 + b^2 + c^2).
It follows that p1 lies in the plane if and only if D = 0. If sqrt(a^2 + b^2 + c^2) = 1, meaning that a, b, and c are normalized, then the equation becomes
D = |a x1 + b y1 + c z1 + d|.
Line of intersection between two planes
The line of intersection between two planes n1 · r = h1 and n2 · r = h2, where the normals n1 and n2 are normalized, is given by
r = (c1 n1 + c2 n2) + λ (n1 × n2),
where
c1 = (h1 − h2 (n1 · n2)) / (1 − (n1 · n2)^2),
c2 = (h2 − h1 (n1 · n2)) / (1 − (n1 · n2)^2),
and λ ranges over the real numbers.
This is found by noticing that the line must be perpendicular to both plane normals, and so parallel to their cross product (this cross product is zero if and only if the planes are parallel, and are therefore non-intersecting or entirely coincident). The remainder of the expression is arrived at by finding an arbitrary point on the line. To do so, consider that any point in space may be written as r = c1 n1 + c2 n2 + λ (n1 × n2), since {n1, n2, n1 × n2} is a basis. We wish to find a point which is on both planes (i.e. on their intersection), so insert this equation into each of the equations of the planes to get two simultaneous equations which can be solved for c1 and c2.
Dihedral angle
Given two intersecting planes described by a1 x + b1 y + c1 z + d1 = 0 and a2 x + b2 y + c2 z + d2 = 0, the dihedral angle between them is defined to be the angle α between their normal directions:
cos α = (n1 · n2) / (|n1| |n2|) = (a1 a2 + b1 b2 + c1 c2) / (sqrt(a1^2 + b1^2 + c1^2) · sqrt(a2^2 + b2^2 + c2^2)).
Planes in various areas of mathematics
In addition to its familiar geometric structure, with isomorphisms that are isometries with respect to the usual inner product, the plane may be viewed at various other levels of abstraction. Each level of abstraction corresponds to a specific category. At one extreme, all geometrical and metric concepts may be dropped to leave the topological plane, which may be thought of as an idealized homotopically trivial infinite rubber sheet, which retains a notion of proximity, but has no distances. The topological plane has a concept of a linear path, but no concept of a straight line. The topological plane, or its equivalent the open disc, is the basic topological neighborhood used to construct surfaces (or 2-manifolds) classified in low-dimensional topology. Isomorphisms of the topological plane are all continuous bijections. The topological plane is the natural context for the branch of graph theory that deals with planar graphs, and results such as the four color theorem.
The plane may also be viewed as an affine space, whose isomorphisms are combinations of translations and non-singular linear maps. From this viewpoint there are no distances, but collinearity and ratios of distances on any line are preserved.
Differential geometry views a plane as a 2-dimensional real manifold, a topological plane which is provided with a differential structure.
Again in this case, there is no notion of distance, but there is now a concept of smoothness of maps, for example a differentiable or smooth path (depending on the type of differential structure applied). The isomorphisms in this case are bijections with the chosen degree of differentiability.
In the opposite direction of abstraction, we may apply a compatible field structure to the geometric plane, giving rise to the complex plane and the major area of complex analysis. The complex field has only two isomorphisms that leave the real line fixed, the identity and conjugation.
In the same way as in the real case, the plane may also be viewed as the simplest, one-dimensional (over the complex numbers) complex manifold, sometimes called the complex line. However, this viewpoint contrasts sharply with the case of the plane as a 2-dimensional real manifold. The isomorphisms are all conformal bijections of the complex plane, but the only possibilities are maps that correspond to the composition of a multiplication by a complex number and a translation.
In addition, the Euclidean geometry (which has zero curvature everywhere) is not the only geometry that the plane may have. The plane may be given a spherical geometry by using the stereographic projection. This can be thought of as placing a sphere on the plane (just like a ball on the floor), removing the top point, and projecting the sphere onto the plane from this point. This is one of the projections that may be used in making a flat map of part of the Earth's surface. The resulting geometry has constant positive curvature.
Alternatively, the plane can also be given a metric which gives it constant negative curvature, giving the hyperbolic plane. The latter possibility finds an application in the theory of special relativity in the simplified case where there are two spatial dimensions and one time dimension. (The hyperbolic plane is a timelike hypersurface in three-dimensional Minkowski space.)
Topological and differential geometric notions
The one-point compactification of the plane is homeomorphic to a sphere (see stereographic projection); the open disk is homeomorphic to a sphere with the "north pole" missing; adding that point completes the (compact) sphere. The result of this compactification is a manifold referred to as the Riemann sphere or the complex projective line. The projection from the Euclidean plane to a sphere without a point is a diffeomorphism and even a conformal map.
See also
- Line-plane intersection
- Plane of rotation
- Point on plane closest to origin
- Projective plane
Notes
- Eves 1963, pg. 19
- Joyce, D. E. (1996), Euclid's Elements, Book I, Definition 7, Clark University, retrieved 8 August 2009
References
- Dawkins, Paul, "Equations of Planes", Calculus III
- Eves, Howard (1963), A Survey of Geometry I, Boston: Allyn and Bacon, Inc.
- Weisstein, Eric W., "Plane", MathWorld.
- "Easing the Difficulty of Arithmetic and Planar Geometry", an Arabic manuscript from the 15th century that serves as a tutorial about plane geometry and arithmetic
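To make the geometric formulas above concrete, here is a small C sketch of Method 3 (normal via cross product) combined with the point-to-plane distance formula. It is illustrative only; the three points and the test point are arbitrary choices, not examples from the article.

/* Plane through three points (Method 3: n = (p2-p1) x (p3-p1)),
   then the distance from a point to that plane. */
#include <stdio.h>
#include <math.h>

typedef struct { double x, y, z; } Vec3;

static Vec3 sub(Vec3 a, Vec3 b)   { return (Vec3){ a.x - b.x, a.y - b.y, a.z - b.z }; }
static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return (Vec3){ a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}

int main(void) {
    Vec3 p1 = {1, 0, 0}, p2 = {0, 1, 0}, p3 = {0, 0, 1};

    /* Normal vector and plane coefficients: ax + by + cz + d = 0 */
    Vec3 n = cross(sub(p2, p1), sub(p3, p1));
    double d = -dot(n, p1);
    printf("Plane: %gx + %gy + %gz + %g = 0\n", n.x, n.y, n.z, d);  /* x + y + z - 1 = 0 */

    /* Distance from the origin: |a*0 + b*0 + c*0 + d| / |n| */
    Vec3 q = {0, 0, 0};
    double dist = fabs(dot(n, q) + d) / sqrt(dot(n, n));
    printf("Distance from origin: %f\n", dist);  /* 1/sqrt(3), about 0.577 */
    return 0;
}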
<urn:uuid:85f3f18e-cf67-4dfa-a36a-4c8f1c969e72>
3.9375
2,234
Knowledge Article
Science & Tech.
39.360176
What do refrigerator magnets, ear buds and counterfeit currency have in common? This week's element is neodymium, the "twin" of praseodymium, which we first learned about last week. Neodymium has the symbol Nd and the atomic number 60. Like most metals, neodymium is a lustrous silvery white colour, and like its twin, it tarnishes rapidly in air so it must be stored under argon (as above) or oil. Like the other lanthanoids, it is a rare earth metal that is anything but rare. In fact, neodymium is exceedingly common -- almost as common as copper -- being the second most common of the rare earth elements in the Earth's crust, following cerium. Of course, this led me to wonder how this group of fifteen elements came to be known as "rare earth metals" when we know that most of them are not at all rare (although there are a... - Reclaiming rare earthsWed, 24 Oct 2012, 21:37:16 EDT - Rare earth elements in US not so rareWed, 17 Nov 2010, 18:05:07 EST - Mastery of rare-earth elements vital to America's securityTue, 16 Mar 2010, 17:47:32 EDT - Heavy metals accumulate more in some mushrooms than in othersFri, 30 Oct 2009, 10:49:50 EDT - Recycling: A new source of indispensible 'rare earth' materials mined mainly in ChinaWed, 29 Jun 2011, 18:53:22 EDT
<urn:uuid:a1ab3ac2-1f97-4217-b207-e7eea74487b6>
2.921875
326
Content Listing
Science & Tech.
59.045351