The weather is notoriously hard to predict. Ever since Michael Fish famously declared on national television in October 1987 that there was going to be no hurricane, the day before the worst storms since 1703, people have been wary of weather reports. But in fact our ability to forecast the weather has improved immeasurably in the past few decades: mathematical researchers have been working with meteorologists, oceanographers and physicists since the end of World War II on the problem.

There are many difficulties in weather prediction. When it's raining in your town, it is quite possible for it to be dry (or even sunny!) just a few miles away. No TV presenter can show that level of detail on a weather map, and whatever the presenter says, somebody will complain that it wasn't right. Another difficulty is that the weather is chaotic, which means that tiny changes in the atmosphere today can result in completely different weather patterns in a few days' time. This is known as the "Butterfly Effect": if a butterfly decides to flap its wings in Florida, then it could cause a hurricane in Spain a week later. This is one of the hallmarks of a chaotic system. The phenomenon of chaos is still not completely understood, and mathematicians work on it even today.

In 1963 the meteorologist Edward Lorenz, working at the Massachusetts Institute of Technology in the USA, invented and studied a simplified model of thermal convection, which can be seen as a very basic model of the weather. This model consists of only the three "differential equations" shown on the poster:

dx/dt = σ(y − x),  dy/dt = x(ρ − z) − y,  dz/dt = xy − βz.

It would take too long to explain here what the variables x, y and z describe physically, but a readable account of the derivation of these equations can be found in the book by Peitgen, Jürgens and Saupe. The Greek letters σ (sigma), ρ (rho) and β (beta) are parameters of the system, and Lorenz used the now classic values of 10, 28 and 8/3 respectively.
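The sensitive dependence on initial conditions that Lorenz discovered can be reproduced in a few lines. The following sketch (not part of the original poster; step size, integration time and initial conditions are illustrative) integrates the three equations with a simple fourth-order Runge-Kutta scheme using the classic parameter values, and compares two trajectories that start almost identically:

```python
# Lorenz system with the classic parameters sigma=10, rho=28, beta=8/3.
SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0

def lorenz(state):
    # Right-hand side: dx/dt, dy/dt, dz/dt.
    x, y, z = state
    return (SIGMA * (y - x), x * (RHO - z) - y, x * y - BETA * z)

def rk4_step(state, dt):
    # One fourth-order Runge-Kutta step.
    k1 = lorenz(state)
    k2 = lorenz(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
    k3 = lorenz(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
    k4 = lorenz(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(s + dt / 6.0 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

def evolve(state, dt=0.01, steps=2000):
    # Integrate from t=0 to t = dt*steps and return the final point.
    for _ in range(steps):
        state = rk4_step(state, dt)
    return state

# Two initial conditions differing by only 1e-8 in x:
a = evolve((1.0, 1.0, 1.0))
b = evolve((1.0 + 1e-8, 1.0, 1.0))
separation = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
print(separation)  # the tiny initial difference has grown enormously
```

Both trajectories stay on the bounded attractor, yet by t = 20 the initially negligible separation has been amplified by many orders of magnitude, which is exactly the unpredictability Lorenz described.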
To his surprise he found that the equations behaved in an unpredictable way: the smallest changes in the starting conditions led to very different evolution of the system after only a short time. Despite its simplicity, the system is chaotic. In spite of this chaotic nature there is a remarkable structure in the equations: it is possible to find a so-called "strange attractor", which is shown on the poster as the yellow spiralling set of points. Whatever initial conditions you use, the system of equations is attracted to this set; but the motion on the attractor is very unpredictable and continually mixes around. The Lorenz attractor (as it is called today) turned out to be a prototype for chaos in other dynamical systems. Similar chaotic attractors have been found in many areas of study, for example in mechanical, electronic and optical systems. So Lorenz had shown that even seemingly simple systems can have astonishingly complicated dynamics. And that means that the weather is at least as complicated.

The study of the exact nature of chaos in the Lorenz equations remains an active field of research. In order to bring out the structure of the system it helps to find the "stable manifold" of the system, which is the collection of special points which, when used as initial conditions, lead to the system ending up at the point x = y = z = 0 (which happens to be a point of unstable equilibrium of the system). The stable manifold is difficult to compute, but a new method has been found (see the links below) which has enabled us to show it on the poster as the blue surface. It is calculated by "growing" it in concentric rings, which can be seen on the poster in different shades of blue. The poster also illustrates the importance of visualisation tools in modern mathematics.

The study of chaos is important in many fields other than the weather: the movement of share prices in the Stock Exchange, and turbulence in fluids, for instance.
Much more work has yet to be done!

References:
- H.-O. Peitgen, H. Jürgens and D. Saupe: Chaos and Fractals, Springer Verlag, 1992.
- J. Gleick: Chaos, the Making of a New Science, Heinemann, 1987.
In physics, acceleration is defined as the rate of change of velocity, that is, the change of velocity with time. An object is said to undergo acceleration if it is changing its speed or direction or both. A device used for measuring acceleration is called an accelerometer. An object traveling in a straight line undergoes acceleration when its speed changes. An object traveling in uniform circular motion at a constant speed is also said to undergo acceleration because its direction is changing. The term "acceleration" generally refers to the change in instantaneous velocity. Given that velocity is a vector quantity, acceleration is also a vector quantity. This means that it is defined by properties of magnitude (size or measurability) and direction. In the strict mathematical sense, acceleration can have a positive or negative value. A negative value for acceleration is commonly called deceleration. The dimension for acceleration is length/time². In SI units, acceleration is measured in meters per second squared (m·s⁻²).

The instantaneous acceleration is defined as

a = dv/dt = d²x/dt²

Equivalently, velocity can be thought of as the integral of acceleration with respect to time. (Note, this can be a definite or indefinite integration.) Here:
- a is the acceleration vector (as acceleration is a vector, it must be described with both a magnitude and a direction)
- v is the velocity function
- x is the position function (also known as displacement or change in position)
- t is time
- d is Leibniz's notation for differentiation

When velocity is plotted against time on a velocity vs. time graph, the acceleration is given by the slope, or the derivative of the graph. If used with SI standard units (metres per second for velocity; seconds for time) this equation gives the units m/(s·s), or m/s² (read as "metres per second per second," or "metres per second squared").
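The derivative a = dv/dt can be approximated numerically from sampled velocities. A minimal sketch (the function name and sample data are invented for illustration) uses central differences:

```python
# Estimate acceleration a = dv/dt from velocity samples via central differences.
def acceleration(velocities, dt):
    """Return acceleration estimates (m/s^2) for velocities (m/s) sampled every dt seconds."""
    a = []
    for i in range(1, len(velocities) - 1):
        # Central difference: (v[i+1] - v[i-1]) / (2*dt)
        a.append((velocities[i + 1] - velocities[i - 1]) / (2 * dt))
    return a

# Uniformly accelerating object with v = 3*t, so a should be 3 m/s^2 everywhere:
dt = 0.5
v = [3 * (k * dt) for k in range(6)]  # 0.0, 1.5, 3.0, 4.5, 6.0, 7.5
print(acceleration(v, dt))            # -> [3.0, 3.0, 3.0, 3.0]
```

For a linear velocity profile the central difference is exact; for real, noisy data one would typically smooth the samples first.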
An average acceleration, or acceleration over time, ā can be defined as:

ā = (v − u) / t

where
- u is the initial velocity (m/s)
- v is the final velocity (m/s)
- t is the time interval (s) elapsed between the two velocity measurements (also written as "Δt")

Transverse acceleration (perpendicular to velocity), as with any acceleration which is not parallel to the direction of motion, causes change in direction. If it is constant in magnitude and changing in direction with the velocity, we get a circular motion. For this centripetal acceleration we have

a = v² / r

where v is the speed and r the radius of the circle; the acceleration is directed toward the center. One common unit of acceleration is g, one g (more specifically, gₙ or g₀) being the standard uniform acceleration of free fall, 9.80665 m/s², caused by the gravitational field of Earth at sea level at about 45.5° latitude. Jerk is the rate of change of an object's acceleration over time. As a result of its invariance under the Galilean transformations, acceleration is an absolute quantity in classical mechanics.

Relation to relativity

After defining his theory of special relativity, Albert Einstein realized that forces felt by objects undergoing constant proper acceleration are indistinguishable from those in a gravitational field, and thus defined general relativity, which also explained how gravity's effects could be limited by the speed of light. If you accelerate away from your friend, you could say (given your frame of reference) that it is your friend who is accelerating away from you, although only you feel any force. This is also the basis for the popular twin paradox, which asks why only one twin ages less when moving away from his sibling at near light-speed and then returning, since the aging twin can say that it is the other twin that was moving. General relativity solved the "why does only one object feel accelerated?" problem which had plagued philosophers and scientists since Newton's time (and caused Newton to endorse absolute space).
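The average-acceleration and centripetal-acceleration formulas can be applied in a few lines; the numbers in this sketch are illustrative, not taken from the article:

```python
def average_acceleration(u, v, t):
    # a_bar = (v - u) / t : change in velocity over the elapsed time (m/s^2)
    return (v - u) / t

def centripetal_acceleration(speed, radius):
    # a = v^2 / r, directed toward the center of the circle (m/s^2)
    return speed ** 2 / radius

G_N = 9.80665  # standard acceleration of free fall, m/s^2

# A car going from rest to 27 m/s in 9 s (illustrative numbers):
a_bar = average_acceleration(0.0, 27.0, 9.0)
print(a_bar, a_bar / G_N)  # 3.0 m/s^2, about 0.31 g

# Circular motion at 10 m/s on a 25 m radius curve:
print(centripetal_acceleration(10.0, 25.0))  # 4.0 m/s^2
```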
In special relativity, only inertial frames of reference (non-accelerated frames) can be used and are equivalent; general relativity considers all frames, even accelerated ones, to be equivalent. With changing velocity, accelerated objects exist in warped space (as do those that reside in a gravitational field). Therefore, frames of reference must include a description of their local spacetime curvature to qualify as complete. An accelerometer inherently measures its own motion (locomotion). It thus differs from a device based on remote sensing. Accelerometers can be used to measure vibration on cars, machines, buildings, process control systems and safety installations. They can also be used to measure seismic activity, inclination, machine vibration, dynamic distance and speed with or without the influence of gravity. One application for accelerometers is to measure gravity, wherein an accelerometer is specifically configured for use in gravimetry. Such a device is called a gravimeter. Accelerometers are being incorporated into more and more personal electronic devices such as mobile phones, media players, and handheld gaming devices. In particular, more and more smartphones are incorporating accelerometers for step counters, user interface control, and switching between portrait and landscape modes. Accelerometers are used along with gyroscopes in inertial guidance systems, as well as in many other scientific and engineering systems. One of the most common uses for micro electro-mechanical system (MEMS) accelerometers is in airbag deployment systems for modern automobiles. In this case, the accelerometers are used to detect the rapid negative acceleration of the vehicle to determine when a collision has occurred and the severity of the collision. 
Accelerometers are perhaps the simplest MEMS device possible, sometimes consisting of little more than a suspended cantilever beam or proof mass (also known as seismic mass) with some type of deflection sensing and circuitry. MEMS accelerometers are available in a wide variety of ranges up to thousands of gₙ. Single-axis, dual-axis, and three-axis models are available. The widespread use of accelerometers in the automotive industry has pushed their cost down dramatically. The Wii Remote for the Nintendo Wii console contains accelerometers for measuring movement and tilt to complement its pointer functionality. Within the last several years, Nike, Polar and other companies have produced and marketed sports watches for runners that include footpods, containing accelerometers to help determine the speed and distance for the runner wearing the unit. More recently, Apple Computer and Nike have combined the footpod with Apple's iPod nano to provide real-time audio feedback to the runner on his/her pace and distance. It is known as the Nike + iPod Sports kit. A small number of modern notebook computers feature accelerometers to automatically align the screen depending on the direction the device is held. This feature is only relevant in Tablet PCs and smartphones, including the iPhone. Some laptops' hard drives utilize an accelerometer to detect when falling occurs. When a low-g condition is detected, indicating a free-fall and an expected shock, the write current is turned off so that data on other tracks is not corrupted. When the free-fall and shock ends, the data can be rewritten to the desired track, thus negating the effects of the shock. Camcorders use accelerometers for image stabilization. Still cameras use accelerometers for anti-blur capturing. The camera holds off snapping the CCD "shutter" when the camera is moving. When the camera is still (if only for a millisecond, as could be the case for vibration), the CCD is "snapped."
Some digital cameras contain accelerometers to determine the orientation of the photo being taken, and some also use them for rotating the current picture when viewing. The Segway and balancing robots use accelerometers for balance.

New World Encyclopedia writers and editors rewrote and completed the Wikipedia article in accordance with New World Encyclopedia standards. This article abides by terms of the Creative Commons CC-by-sa 3.0 License (CC-by-sa), which may be used and disseminated with proper attribution.
Properties of Waves

Frequency and Period of a Wave

The nature of a wave was discussed in Lesson 1 of this unit. In that lesson, it was mentioned that a wave is created in a slinky by the periodic and repeating vibration of the first coil of the slinky. This vibration creates a disturbance that moves through the slinky and transports energy from the first coil to the last coil. A single back-and-forth vibration of the first coil of a slinky introduces a pulse into the slinky. But the act of continually vibrating the first coil with a back-and-forth motion in periodic fashion introduces a wave into the slinky.

Suppose that a hand holding the first coil of a slinky is moved back-and-forth two complete cycles in one second. The rate of the hand's motion would be 2 cycles/second. The first coil, being attached to the hand, in turn would vibrate at a rate of 2 cycles/second. The second coil, being attached to the first coil, would vibrate at a rate of 2 cycles/second. The third coil, being attached to the second coil, would vibrate at a rate of 2 cycles/second. In fact, every coil of the slinky would vibrate at this rate of 2 cycles/second. This rate of 2 cycles/second is referred to as the frequency of the wave. The frequency of a wave refers to how often the particles of the medium vibrate when a wave passes through the medium. Frequency is a part of our common, everyday language.
For example, it is not uncommon to hear a question like "How frequently do you mow the lawn during the summer months?" Of course the question is an inquiry about how often the lawn is mowed and the answer is usually given in the form of "1 time per week." In mathematical terms, the frequency is the number of complete vibrational cycles of a medium per a given amount of time. Given this definition, it is reasonable that the quantity frequency would have units of cycles/second, waves/second, vibrations/second, or something/second. Another unit for frequency is the Hertz (abbreviated Hz) where 1 Hz is equivalent to 1 cycle/second. If a coil of slinky makes 2 vibrational cycles in one second, then the frequency is 2 Hz. If a coil of slinky makes 3 vibrational cycles in one second, then the frequency is 3 Hz. And if a coil makes 8 vibrational cycles in 4 seconds, then the frequency is 2 Hz (8 cycles/4 s = 2 cycles/s). The quantity frequency is often confused with the quantity period. Period refers to the time that it takes to do something. When an event occurs repeatedly, then we say that the event is periodic and refer to the time for the event to repeat itself as the period. The period of a wave is the time for a particle on a medium to make one complete vibrational cycle. Period, being a time, is measured in units of time such as seconds, hours, days or years. The period of orbit for the Earth around the Sun is approximately 365 days; it takes 365 days for the Earth to complete a cycle. The period of a typical class at a high school might be 55 minutes; every 55 minutes a class cycle begins (50 minutes for class and 5 minutes for passing time means that a class begins every 55 minutes). The period for the minute hand on a clock is 3600 seconds (60 minutes); it takes the minute hand 3600 seconds to complete one cycle around the clock. Frequency and period are distinctly different, yet related, quantities. Frequency refers to how often something happens. 
Period refers to the time it takes something to happen. Frequency is a rate quantity. Period is a time quantity. Frequency is the cycles/second. Period is the seconds/cycle. As an example of the distinction and the relatedness of frequency and period, consider a woodpecker that drums upon a tree at a periodic rate. If the woodpecker drums upon a tree 2 times in one second, then the frequency is 2 Hz. Each drum must endure for one-half a second, so the period is 0.5 s. If the woodpecker drums upon a tree 4 times in one second, then the frequency is 4 Hz; each drum must endure for one-fourth a second, so the period is 0.25 s. If the woodpecker drums upon a tree 5 times in one second, then the frequency is 5 Hz; each drum must endure for one-fifth a second, so the period is 0.2 s. Do you observe the relationship? Mathematically, the period is the reciprocal of the frequency and vice versa. In equation form, this is expressed as follows:

period = 1 / frequency        frequency = 1 / period

The quantity frequency is also confused with the quantity speed. The speed of an object refers to how fast an object is moving and is usually expressed as the distance traveled per time of travel. For a wave, the speed is the distance traveled by a given point on the wave (such as a crest) in a given period of time. So while wave frequency refers to the number of cycles occurring per second, wave speed refers to the meters traveled per second. A wave can vibrate back and forth very frequently, yet have a small speed; and a wave can vibrate back and forth with a low frequency, yet have a high speed. Frequency and speed are distinctly different quantities. Wave speed will be discussed in more detail later in this lesson. Throughout this unit, internalize the meaning of terms such as period, frequency, and wavelength. Utilize the meaning of these terms to answer conceptual questions; avoid a formula fixation.

1. A wave is introduced into a thin wire held tight at each end.
It has an amplitude of 3.8 cm, a frequency of 51.2 Hz and a distance from a crest to the neighboring trough of 12.8 cm. Determine the period of such a wave.

2. Frieda the fly flaps its wings back and forth 121 times each second. The period of the wing flapping is ____ sec.

3. A tennis coach paces back and forth along the sideline 10 times in 2 minutes. The frequency of her pacing is ________ Hz.

4. Non-digital clocks (which are becoming more rare) have a second hand that rotates around in a regular and repeating fashion. The frequency of rotation of a second hand on a clock is _______ Hz.

5. Olive Udadi accompanies her father to the park for an afternoon of fun. While there, she hops on the swing and begins a motion characterized by a complete back-and-forth cycle every 2 seconds. The frequency of swing is _________.
a. 0.5 Hz  b. 1 Hz  c. 2 Hz

6. In problem #5, the period of swing is __________.
a. 0.5 second  b. 1 second  c. 2 seconds

7. A period of 5.0 seconds corresponds to a frequency of ________ Hertz.

8. A common physics lab involves the study of the oscillations of a pendulum. If a pendulum makes 33 complete back-and-forth cycles of vibration in 11 seconds, then its period is ______.

9. A child in a swing makes one complete back and forth motion in 3.2 seconds. This statement provides information about the child's

10. The period of the sound wave produced by a 440 Hertz tuning fork is ___________.

11. As the frequency of a wave increases, the period of the wave ___________.
c. remains the same
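The reciprocal relationship between period and frequency can be sketched in a few lines (the helper names are invented; the example values come from the woodpecker discussion and the tuning fork above):

```python
def period(frequency_hz):
    # T = 1/f : seconds per cycle
    return 1.0 / frequency_hz

def frequency(period_s):
    # f = 1/T : cycles per second (Hz)
    return 1.0 / period_s

# Woodpecker examples from the text:
print(period(2.0))  # 0.5 s
print(period(4.0))  # 0.25 s
print(period(5.0))  # 0.2 s

# A 440 Hz tuning fork:
print(period(440.0))  # about 0.00227 s
```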
The behavior of microorganisms presents a paradox that puzzles the confident logic of science: some microbes can cease performing a function that had been considered necessary for their survival, yet this seeming disparity causes them to thrive and proliferate. A progressive group of microbiologists explain that while these microbes seem to be discarding vital functions, they are really just finding others to get these essential tasks done for them. This particular adaptation encourages microorganisms to live in cooperative communities. This idea is called The Black Queen Hypothesis, named after the queen of spades in the card game Hearts, in which the popular strategy is to avoid picking this card. Accordingly, the loss of the microbe's ability to satisfy its own vital needs is instead an evolutionarily efficacious strategy for certain communal microbes. These microbes can live together more resourcefully if they get rid of certain behaviors and rely on others to help them meet their needs. Richard Losick of Harvard says that the Black Queen Hypothesis offers a new way of understanding how complex, inter-dependent communities of microorganisms thrive. Cooperative and even altruistic behavior abounds beyond the microbe world: bats feed hungry friends, honeybees commit suicide to defend the hive, birds raise offspring that aren't their own, and humans leap in front of traffic to save total strangers. The biological success of certain species, like ants, depends entirely on their ability to cooperate and form structured societies based on shared sacrifice. Cooperation may no longer be thought of as an ideal confined to the imaginations of dreamers: it may not be the exception, but rather the rule, or at least part of the rule. This hypothesis may spur a new evolutionary theory, or at least call for a rereading of the standard one, for it recognizes that a dog-eat-dog concept of nature is limited. After all, we are tuned to cooperate, not merely compete.
Image: "Working Together Teamwork Puzzle Concept" by lumaxart on Flickr, courtesy of a Creative Commons License.
Transient electromagnetics, or time-domain electromagnetics (TEM or TDEM), refers to a branch of geophysics, among other disciplines, that uses an electromagnetic impulse excitation to investigate the subsurface. TEM methods are generally sensitive to the electrical properties of the subsurface in geologic applications, but are also sensitive to magnetic properties in applications like UXO detection and characterization.

Two fundamental electromagnetic principles are required to derive the physics behind TEM surveys: Faraday's law of induction and Lenz's law. A loop of wire is generally energized by a direct current. At some time (t0) the current is cut off as quickly as possible. Faraday's law dictates that a nearly identical current is induced in the subsurface to preserve the magnetic field produced by the original current (eddy currents). Due to ohmic losses, the induced subsurface currents dissipate; this causes a change in the magnetic field, which induces subsequent eddy currents. The net result is a downward and outward diffusion of currents in the subsurface. These currents in turn produce a magnetic field, and at the surface the change of this magnetic field (flux) with time is measured. The way the currents diffuse in the subsurface is related to the conductivity distribution in the ground.

This is only the most basic view of the physical principles involved. When conductive bodies are present, the diffusion of the transients is changed, and transients are induced in the conductive bodies as well. A paper by McNeill explaining the basics of the method is freely available from the Geonics website: Geonics technical notes.
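The depth the diffusing currents have reached at a given time after shutoff can be estimated with the standard electromagnetic diffusion-length formula d ≈ √(2t/(μ₀σ)). This formula comes from textbook EM diffusion theory rather than from the overview above, and the conductivity in the example below is illustrative:

```python
import math

MU_0 = 4e-7 * math.pi  # vacuum permeability, H/m

def diffusion_depth(t, sigma):
    """Approximate depth (m) of the eddy-current diffusion front at time t (s)
    in a uniform half-space of conductivity sigma (S/m)."""
    return math.sqrt(2.0 * t / (MU_0 * sigma))

# 1 ms after current shutoff in a 0.01 S/m (100 ohm-m) half-space:
print(diffusion_depth(1e-3, 0.01))  # roughly 400 m
```

The square-root dependence on time is why TEM soundings probe progressively deeper at later measurement times, and why resistive ground is "seen" to greater depth than conductive ground.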
Lions finding so-called lion habitat not all that habitable.

A paper out today in the journal Biodiversity and Conservation by Nicholas School graduate Jason Riggio (now at the University of California, Davis) and colleagues uses an extensive dataset and high-resolution satellite imagery to demonstrate that much of what we thought was suitable lion habitat is not. If you care about lions, that is probably pretty depressing news, but it's also news we can use -- to design more effective strategies to save the world's big cats.

Not your father's savannah

The story starts with African savannahs because that's where lions make their homes. Savannahs are generally defined as tropical and subtropical grasslands with scattered trees. They typically experience a strong rainy season and a strong dry season and are often transitional zones between deserts and jungle. Savannahs are especially prevalent in Africa, covering much of the continent outside the Sahara and the wet tropical forests -- in all about 5.2 million square miles, or almost 50 percent of the total continent. Good for the lions, right? Not exactly. With the growth in human population, people have been increasingly encroaching on the savannahs at the expense of wildlife, including lions. Lion populations have been on the decline. In just a half century lion numbers have plummeted -- from around 100,000 in 1960 to about 33,000, according to the last Africa-wide survey in 2006. Have things changed much since 2006? Riggio and his co-authors set out to find out, and in so doing ran into an unexpected puzzle. Not a puzzle of numbers but one of geography.
The authors' field observations showed that savannahs which, according to the existing maps, were supposed to be prime lion habitat actually were not.* "Existing maps made from low-resolution satellite imagery show large areas of intact savannah woodlands [where lions should have been but were not]. Based on our fieldwork in Africa, we knew they were wrong," explained Riggio. "Using very high-resolution imagery we could tell that many of these areas are riddled with small fields and extensive, if small, human settlements that make it impossible for lions to survive." Overlaying population data on top of their own map built from high-resolution imagery -- population data, I should add, from both their work and more than 40 mainly country-specific reports since the last assessment -- Riggio et al. found that free-ranging lions inhabited only a quarter of the potential 5.2 million square miles of savannah. This is significant because the authors believe that, as a top predator, lions serve as a proxy for ecosystem biodiversity. Areas with lions would be expected to be relatively intact. Stated differently, this means that only 25 percent of the African savannah has not been disturbed, disrupted, and/or reshaped by the growing human population. And while their estimate of the total lion population -- between 32,000 and 35,000 lions -- isn't that different from previous estimates, the locations of the lion communities they found are significantly different. Because of savannah fragmentation, the lions are spatially much more constrained than previously thought and include very small communities that may not be viable. The authors found that today's lions are dispersed among 67 discrete areas, of which only 15 hold 500 or more lions. Of those, only 10 areas (holding about 24,000 lions in toto) are thought to have the potential to support lions for the long haul. The authors further specify that another 10,000 lions live in less viable habitats.
None of the lion areas with long-term potential (so-called strongholds) are located in West or Central Africa. The upshot of the research? Using an "updated geographical framework," Riggio et al. have created a map that they believe "contains our best estimates of lion areas -- places that, as best we can tell, likely have resident lion populations." If folks are serious about saving the lions, they will need to get serious about protecting those areas.

* In this study the authors defined a savannah as various biome areas (such as grasslands, dry woodlands, etc.) that receive between 11 and 60 inches of annual rainfall.
1. Explain Archimedes' trisection of the angle.
2. Explain the determination of the latitude.
3. Find all of the angles in a pentagram.
4. Show how the proportion of the golden mean arises in the pentagram.
5. Explain the construction of the proportion of the golden mean.
6. Explain the construction of the pentagram.
7. Show how to use a golden rectangle to construct a pentagram.
8. Explain Thales of Miletus' determination of the distance of a ship from the shore.
9. Prove that the deflection angle between the tangent to a circle and a chord to another point on the circle is half of the central angle determined by the chord.
10. Answer the feet in the mirror question.
SP01: Quantification of light pollution

Investigation of the sources and extent of the urban light dome of Berlin, and their temporal variation

Subproject 01 is carried out at the Institute for Space Sciences at the FU Berlin (FUB ISS) and at the Leibniz Institute of Freshwater Ecology and Inland Fisheries (IGB). The subproject investigates the light dome over Berlin, and its variations in time and space. This will be accomplished with both aerial and ground-based measurements, in multiple spectral channels.

When light travels through the sky, it is occasionally scattered or absorbed by atmospheric components. In the daytime, scattering from molecules makes the sky blue and the sunset red, scattering from water droplets produces rainbows, and scattering and absorption from aerosols allow us to see smog. In the nighttime, these same scattering processes return some of the light emitted upward from cities back towards the ground, producing the "sky glow" present over large cities at night. In addition to blocking the view of faint stars and the Milky Way, this sky glow is believed to influence urban ecosystems, and may disturb the circadian rhythms of the humans and animals living in the city.

By continually monitoring the brightness of the sky from the ground, we will observe how the sky glow changes in response to the changing weather and seasons, and will begin to track how the sky brightness changes from year to year. Using aerial photography and spectrography, we will also map out the spatial and spectral distributions and intensities of the sources of light on the ground. This map will make possible simulations of the sky glow of the city, and will provide a dataset for testing algorithms for future satellite-based missions. We have composed a preliminary high-resolution mosaic image of Berlin using about 3000 aerial photos taken at 10,000 ft on a clear night. Samples of the photos with reduced resolution are shown below.

Scientist: Dr. Christopher Kyba
Subproject leader: Prof. Dr. Jürgen Fischer
In biological systems information is propagated from one form to another by chemical reactions. An example is the translation of mRNA into protein by the ribosome. Under certain circumstances there are limits to the accuracy of this kind of process. In a one-step process with two possible outcomes the accuracy is bounded above in terms of the difference of the free energies of the two alternative reactions. In other words, it is bounded in terms of the ratio of the reaction constants. Putting in the numbers for some important biological processes shows that this bound is exceeded by a large factor. This led to a proposal by Hopfield (PNAS 71, 4135) of a way in which this accuracy can be achieved by using more complicated reactions with several steps. He called it kinetic proofreading. (There was other related work by Ninio (Biochimie 57, 587) at about the same time.) Later McKeithan (PNAS 92, 5042) applied this idea to the question of how the T cell receptor can discriminate so accurately between different antigens. This model was studied mathematically by Eduardo Sontag (IEEE Transactions on Automatic Control, 46, 1028), who related it to chemical reaction network theory (CRNT). Here I will take Sontag's work as the starting point for my description.

Let T be the concentration of T cell receptors not bound to a ligand and M the concentration of peptide-MHC complexes not bound to a receptor. When a peptide-MHC complex binds to a receptor this gives the basic form of the occupied receptor, and the concentration of these is denoted by C_0. The rate constant for this process is denoted by k_1. This basic form can be modified by phosphorylation at up to N sites, giving rise to quantities C_1, ..., C_N. There are successive phosphorylation reactions leading from C_{i-1} to C_i, and the corresponding rate constants are denoted by k_{p,i}. There are dissociation reactions where the peptide-MHC complex detaches from the receptor and the receptor is simultaneously completely dephosphorylated. The rate constants for these are denoted by k_{-1,i}. The total concentrations of T cell receptors and peptide-MHC complexes (both bound and free) are denoted by T_total and M_total respectively. They are conserved quantities and can be used to eliminate the variables T and M from the system if desired. Doing so gives the system for the variables C_0, ..., C_N at the beginning of Sontag's paper. In the terminology of CRNT this corresponds to restricting to a stoichiometric compatibility class.

It is elementary to calculate the stationary solutions of the original system, and there is exactly one in each stoichiometric compatibility class. In terms of CRNT the system is weakly reversible and of deficiency zero. General theory then implies that there is exactly one stationary solution in each stoichiometric compatibility class and that it is asymptotically stable. Sontag strengthens this result, proving that all solutions converge to the corresponding stationary solutions at late times.

Now I come back to the original motivation. For simplicity let us suppose that the rate constants k_{p,i} and k_{-1,i} are independent of i, with common values k_p and k_{-1}. Let alpha = k_p/(k_p + k_{-1}). Then it turns out, as computed by McKeithan, that the ratio of the fully phosphorylated complex to the total complex is alpha^N. This means that if N is not too small this ratio depends very sensitively on the value of the dissociation constant k_{-1}. If it is C_N which gives rise to further signalling within the cell, this gives a way of magnifying differences between the binding properties of ligands.
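This magnification is easy to see numerically. Below is a minimal sketch, with notation assumed for illustration: k_p is a site-independent phosphorylation rate constant, k_minus1 the dissociation rate constant, and N the number of phosphorylation sites, so the fully phosphorylated fraction is (k_p/(k_p + k_minus1))**N.

```python
def fully_phosphorylated_fraction(k_p, k_minus1, N):
    """C_N / C_total for N identical phosphorylation steps (McKeithan's formula)."""
    alpha = k_p / (k_p + k_minus1)
    return alpha ** N

# Two ligands whose dissociation rates differ by only a factor of 3:
weak = fully_phosphorylated_fraction(k_p=1.0, k_minus1=3.0, N=6)
strong = fully_phosphorylated_fraction(k_p=1.0, k_minus1=1.0, N=6)
print(strong / weak)  # 64.0: a 3-fold rate difference becomes a 64-fold output difference
```

With N = 6 proofreading steps, a modest difference in off-rate is amplified into a much larger difference in the signalling species, which is the point of the scheme.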
The TListBox Delphi control displays a collection of strings in a scrollable list. By setting the MultiSelect property to true, the user can select more than one item at a time.

How to Remove Selected ListBox Items

When MultiSelect is true, the user can select multiple items in the control, and the SelCount property indicates the number of selected items. To remove all the selected items from the list box you need to call the Delete method of the underlying TStrings object. Since Delete changes the ordinal position of the remaining items in the list, when deleting items with a for loop you need to start iterating from the end of the list. The Selected property tells you whether the item at a particular index is selected. Here's the code to delete multiple selected items in the list box:

//make sure ListBox1.MultiSelect = true
var
  ii : integer;
begin
  with ListBox1 do
  begin
    for ii := Items.Count - 1 downto 0 do
      if Selected[ii] then
        Items.Delete(ii);
  end;
end;
In a message dated 96-02-13 13:40:39 EST, Robert.J.Meyerson@uwrf.edu (Rob Meyerson) wrote:

>Most cladograms I have seen tend to be constructed in the speciation model,
>where the cladogram gets broader as one goes up the "tree" (like a maple).
>Since Gould's punk eq has shown quite convincingly that the tree of life is
>actually shaped like a pine, is it possible that most cladograms portray
>an old fashioned view of evolution?

This is an artifact of the process of speciation by cladogenesis. In vertebrates certainly, and probably in most animals, species diversify by branching apart, not by coming together. So a cladogram will always resemble a tree, with more branches at the top than lower down. But the cladogram doesn't tell you which branches became extinct and which survived, and it doesn't tell you which of the groups of organisms whose evolution it diagrams
Application Programming Interface (API): the common name used to describe the interface a programmer uses to access a library. Common 3D APIs include OpenGL, Direct3D, QuickDraw3D, Renderman, Glide, and dozens of lesser-known ones. The two most popular real-time 3D APIs these days are OpenGL and Direct3D. Glide enjoyed a dominant but brief popularity on the PC platform in the late 1990s.
The importance of benthic microbial production in the lakes of the McMurdo Dry Valleys, and how the benthic cyanobacterial mats are adapted to survive and grow in deep water (extreme low light but no freezing), was investigated. The quantity and quality of light penetrating the ice and the light attenuation properties of the water column were determined. Samples of benthic mat cores were collected for ... measurements of the response of photosynthesis and respiration to varying light intensities; analysis of chlorophyll a and biomass, to enable quantitative expression of results; and determination of areal concentrations of chlorophyll a, phycobilin pigments and ash-free dry mass. Additional cores were preserved for subsequent analysis of pigments, nutrients and carbon content. The distinct morphology of the mats, and the height and density of pinnacle formations, were measured. Samples were collected from a wide range of depths, up to 40 m. The location of photosynthetic activity within the mats and the attenuation of light through microbial mats were determined from samples from all depths, and the effects of light limitation on the rate of photosynthetic electron transport by the mats, and the mechanism whereby plants cope with excess light, were investigated. A series of experiments was also undertaken to determine the fine-scale structure of the photosynthetic component of the mats and to relate this to within-mat light attenuation. At each depth, five replicate mat cores were taken for invertebrate counting, to investigate their potential role in carbon cycling and the variables responsible for regulating the distribution of benthic meiofauna. All invertebrates were identified to the lowest taxonomic level possible.
Southern Sea Otter Recovery

There's good news and bad news regarding population levels for the Southern Sea Otter (Enhydra lutris nereis). These cute-as-a-button marine mammals once roamed the nearshore kelp forests of the entire West Coast. Their thick fur and general amenability toward the human population put them at odds with the fur hunters of their era. Population declines led the species to the edge of extinction, and in 1977, they were placed on the Endangered Species List.

Recent surveys by the United States Geological Survey (USGS) show population decreases for the population that lives along coastal California. The numbers are significant because scientists have used the number of 3,090 individuals, for three consecutive years, as the threshold for assuming that population levels are sufficiently large for delisting the species. Initial reports suggest, "breeding-age females are dying in higher than usual numbers from multiple causes, including infectious disease, toxin-exposure, heart failure, malnutrition and shark attacks". No hypotheses were offered to suggest whether these findings were man-made, pollution-related events or part of the natural life cycle of sea otter populations due to natural variability in ocean conditions. It could be, for example, that a combination of human and natural causes is depleting their food sources such as anemones and crabs, thus causing stress on the population. Once the food sources recover, so too will the sea otters. Experts at The Monterey Bay Aquarium's Sea Otter Research and Conservation program hypothesize that "Pathogens and parasites, possibly linked to coastal pollution, can weaken otter immune systems". Sea otter population levels may be a function of both types of explanations.

In any event, the good news for sea otters is that the doubling of the population since the surveys began in the 1980s provides hope that the remaining population might have a stronger base to reach the magic 3,090 level. To date, that level has never been reached.

© 2011. Patricia A. Michaels. All Rights Reserved.
I'm writing about an article called "Turning Up The Heat". Antarctica is starting to melt because of the weather and because of the temperature of the ocean and of the air. Scientists think that pollution may be causing the earth to warm up too quickly; in fact, studies are saying that the earth is warming up more quickly than they thought. The temperature is very sensitive to pollution. Ice sheets, glaciers, and polar ice caps are starting to melt, which can cause the ocean's water to rise and cause serious flooding and also violent droughts. Polar bears can also become extinct because the ice can melt, and polar bears can't live without cold weather. It would be a serious problem if we lost our polar bears because they are an important part of the earth.
If f(x)=0 has multiple roots, how is the N-R method affected? How can the N-R formula be modified to avoid this problem?

If you mean Newton-Raphson, then multiple roots is a common problem of any iterative method. Different starting values may give different solutions. Sketching the graph gives you a good idea of what starting values to try, but the shape of the graph will determine how good N-R is, and in some situations it doesn't work at all.

For simplicity's sake, say that f is a differentiable function. Select an interval [a, b] such that f(a)f(b) < 0, i.e. they have opposite signs, and f' has only one sign on [a, b]. Then by the intermediate value theorem there is a solution on this interval. Furthermore, since the function is either strictly increasing or strictly decreasing (but not both), there cannot be another solution on this interval. Thus, if the function satisfies these conditions for the Newton-Raphson algorithm then it will converge to that root.

Originally Posted by bobby77

Here is a list of steps.
1) Find an interval [a, b] such that f(a)f(b) < 0. (Guarantees existence of a root.)
2) Confirm that f' has only one sign on [a, b]. If not, try changing the interval from step #1 slightly. (Guarantees uniqueness of the root.)
3) Select any starting point on [a, b].
4) The algorithm will converge to the solution on that interval.
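A minimal sketch of that procedure, with the function and interval chosen purely for illustration:

```python
def newton_raphson(f, fprime, x0, tol=1e-12, max_iter=50):
    """Iterate x <- x - f(x)/f'(x) until the step size drops below tol."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("did not converge; try another starting value")

# f(x) = x**2 - 2 on [1, 2]: f(1) < 0 < f(2), and f'(x) = 2x > 0 there,
# so there is exactly one root on the interval (namely sqrt(2)).
root = newton_raphson(lambda x: x * x - 2, lambda x: 2 * x, x0=1.5)
print(round(root, 6))  # 1.414214
```

Starting instead from the interval [-2, -1], the same iteration converges to the other root, which illustrates why different starting values can give different solutions.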
Anonymous asks about the square of the electric field and the process used:

Can anyone explain how we can calculate the magnitude of the electric field at a point on the surface of a gold/silver particle? We can simulate the square of the electric field and we get a color mapping of it with the scale at the bottom. But how can we tell exactly which value on the scale the color shade at a given point corresponds to? Moreover, for the magnitude numbers shown below the scale, what are their units? What are those numbers taken in reference to?
Global surface air temperature in 1995: Return to pre-Pinatubo level
Geophysical Research Letters, Volume 23, Issue 13, pages 1665–1668, 15 June 1996
Copyright 1996 by the American Geophysical Union.
- Issue published online: 7 DEC 2012
- Article first published online: 7 DEC 2012
- Manuscript Accepted: 22 MAR 1996
- Manuscript Received: 15 DEC 1995

Global surface air temperature has increased about 0.5°C from the minimum of mid-1992, a year after the Mt. Pinatubo eruption. Both a land-based surface air temperature record and a land-marine temperature index place the meteorological year 1995 at approximately the same level as 1990, previously the warmest year in the period of instrumental data. As El Niño warming was small in 1995, the solar cycle near a minimum, and ozone depletion near record levels, the observed high temperature supports the contention of an underlying global warming trend. The pattern of Northern Hemisphere temperature change in recent decades appears to reflect a change of atmospheric dynamics.
REST stands for Representational State Transfer. (It is sometimes spelled "ReST".) It relies on a stateless, client-server, cacheable communications protocol -- and in virtually all cases, the HTTP protocol is used. REST is an architectural style for designing networked applications. The idea is that, rather than using complex mechanisms such as CORBA, RPC or SOAP to connect machines, simple HTTP is used to make calls between them.
- In many ways, the World Wide Web itself, based on HTTP, can be viewed as a REST-based architecture. RESTful applications use HTTP requests to post data (create and/or update), read data (e.g., make queries), and delete data. Thus, REST uses HTTP for all four CRUD (Create/Read/Update/Delete) operations. REST is a lightweight alternative to mechanisms like RPC (Remote Procedure Calls) and Web Services (SOAP, WSDL, et al.). Later, we will see how much simpler REST is.
- Despite being simple, REST is fully featured; there's basically nothing you can do in Web Services that can't be done with a RESTful architecture. REST is not a "standard". There will never be a W3C recommendation for REST, for example. And while there are REST programming frameworks, working with REST is so simple that you can often "roll your own" with standard library features in languages like Perl, Java, or C#.
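To make the CRUD-to-HTTP mapping concrete, here is a minimal in-process sketch. The `/notes` resource, the handler, and the in-memory store are all invented for illustration; a real RESTful service would expose the same mapping through an HTTP server:

```python
notes = {}    # in-memory store: id -> text
next_id = 1

def handle(method, path, body=None):
    """Dispatch a REST-style request; returns (status, payload)."""
    global next_id
    if method == "POST" and path == "/notes":      # Create
        notes[next_id] = body
        next_id += 1
        return 201, next_id - 1
    resource_id = int(path.rsplit("/", 1)[1])      # e.g. "/notes/1" -> 1
    if method == "GET":                            # Read
        return (200, notes[resource_id]) if resource_id in notes else (404, None)
    if method == "PUT":                            # Update
        notes[resource_id] = body
        return 200, resource_id
    if method == "DELETE":                         # Delete
        notes.pop(resource_id, None)
        return 204, None
    return 405, None

status, nid = handle("POST", "/notes", "buy milk")
print(status, handle("GET", "/notes/%d" % nid))  # 201 (200, 'buy milk')
```

The point is the uniformity: one resource, four verbs, and no per-operation endpoints like `/createNote` or `/deleteNote`.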
Time series of freeze-free season anomalies, shown as the number of days per year, for the Northeast region. The length of the freeze-free season is defined as the period between the last occurrence of 32°F in the spring and the first occurrence of 32°F in the fall. The dashed line is a linear fit. Based on daily COOP data from long-term stations in the National Climatic Data Center's Global Historical Climate Network data set. Only stations with less than 10% missing daily temperature data for the period 1895-2011 are used in this analysis. Freeze events are first identified for each individual station. Then, event dates for each year are averaged for 1x1 degree grid boxes. Finally, a regional average is determined by averaging the values for the individual grid boxes. There is an overall statistically significant upward trend.
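The per-station step described above can be sketched as follows. The day-of-year indexing and the mid-year split between "spring" and "fall" freezes are illustrative assumptions, not the data set's documented convention:

```python
def freeze_free_season(daily_tmin_f, split_day=181):
    """Days between the last spring freeze and the first fall freeze.

    daily_tmin_f: one year of daily minimum temperatures in deg F.
    split_day: day of year separating "spring" from "fall" freezes.
    """
    freezes = [day for day, t in enumerate(daily_tmin_f, start=1) if t <= 32.0]
    last_spring = max((d for d in freezes if d <= split_day), default=0)
    first_fall = min((d for d in freezes if d > split_day),
                     default=len(daily_tmin_f) + 1)
    return first_fall - last_spring

# Toy year: freezing through day 100, mild summer, freezing again from day 290.
year = [20.0] * 100 + [50.0] * 189 + [25.0] * 76
print(freeze_free_season(year))  # 190
```

The station values would then be averaged into 1x1 degree grid boxes, and the grid boxes averaged into the regional series, as described above.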
Internet Resources for Teaching About Mars This collection of digital resources includes web sites useful for integrating Mars data into commonly taught undergraduate geoscience courses and for teaching planetary geology in general. Please tell us about your favorite web sites, and we can add them to this collection. Results 1 - 3 of 3 matches Recent gullies on Mars and the source of liquid water part of SERC Print Resource Collection This article from the Journal of Geophysical Research evaluates two mechanisms for the generation of liquid water at the surface of Mars in relation to recently discovered gullies. The first involves ... Primary centers and secondary concentrations of tectonic activity through time in the western hemisphere of Mars part of SERC Print Resource Collection This article from the Journal of Geophysical Research discusses the timing of five main stages of radial and concentric structures formed around the Tharsis volcanic region on Mars. Geologic mapping ... Mars Exploration Rover Mission part of SERC Web Resource Collection
UNH Speakers Bureau
Solar Flares and Gamma-Ray Bursts
Gamma-rays from solar flares are produced by high-energy electrons and ions accelerated in the early phase of a flare. The ubiquitous gamma-ray bursts observed by the Swift satellite are produced by myriad mechanisms. They were first observed by Vela satellites in the late 1960s.
In 185 AD, astronomers recorded the appearance of a new star in the Nanmen asterism - a part of the sky identified with Alpha and Beta Centauri on modern star charts. The new star was visible for months and is thought to be the earliest recorded supernova. This composite image combines data from orbiting telescopes of the 21st century - XMM-Newton and Chandra in X-rays, and Spitzer and WISE in infrared - showing supernova remnant RCW 86, understood to be the remnant of that event. The false-color view shows interstellar gas heated by the expanding supernova shock wave at X-ray energies (blue and green) and interstellar dust radiating at cooler temperatures in infrared light (yellow and red). An abundance of the element iron and the lack of a neutron star or pulsar in the remnant suggest that the original supernova was Type Ia. Type Ia supernovae are thermonuclear explosions that destroy a white dwarf star as it accretes material from a companion in a binary star system. The size of the X-ray emitting shell and the infrared dust temperatures indicate that the remnant is expanding extremely rapidly into a remarkably low density bubble created before the explosion by the white dwarf system. Near the plane of our Milky Way Galaxy, RCW 86 is about 8,200 light-years away and has an estimated radius of 50 light-years.
questions about embryos
colby at biology.bu.edu
Mon May 5 12:24:38 EST 1997

Uncle Al Schwartz wrote:
> As 98+% of the human genome consists of

The human genome is not ninety-eight percent introns. Roughly three percent of our genome is coding DNA. The rest is various classes of repetitive DNA. (Britten and Kohne called them foldback DNA, highly repetitive DNA and middle-repetitive DNA.) The repetitive DNA may be localized or dispersed. Introns are non-coding regions of genes. I don't know what percent of the genome they take up, but since they are part of genes, I'll guess less than three percent. [I grabbed this info from Li and Graur's 1991 book (Molecular
Magnitude Estimation Methods

Some contributors are now starting to specify the magnitude estimation method that is used (this is not required here, but is for observations submitted to The International Comet Quarterly). The goal of making a magnitude estimate is to obtain the total integrated brightness of the comet's head or coma. This is done by comparing defocused stars of known brightness to the comet. Specifically, the average surface brightness of the comet is compared with the surface brightness of defocused stars. Here is a quick summary of the different methods:

The Sidgwick or In-Out Method: The in-focus comet is compared to the out-of-focus comparison stars. It is very important that the defocused stars be the same size as the comet. This is the most popular method and works very well for diffuse comets. Strongly condensed objects, such as C/1995 O1 (Hale-Bopp), are more difficult to estimate using this method because it is very difficult to determine the comet's "average" surface brightness.

The Bobrovnikoff or Out-Out Method: The comet and comparison stars are put out of focus together. Very easy to do. Works well for very strongly condensed (high DC) objects. Can result in a significant underestimate of brightness in very diffuse and/or large comets.

The Morris or Modified-Out Method*: This method was developed to bridge the gap between the Sidgwick Method (works well for really diffuse comets) and the Bobrovnikoff Method (best for strongly condensed comets). The comet is put slightly out of focus - just enough to "flatten" the brightness profile so that it is easier to determine the comet's average surface brightness. The average surface brightness of the comet is memorized, as is its out-of-focus diameter. The comparison stars are then defocused to the comet's out-of-focus diameter (somewhat larger than its in-focus diameter). This method is considered more difficult than the other two by some observers.
Note that when the comet is very condensed, this method "becomes" the Bobrovnikoff Method, and when the comet is very diffuse, it becomes the Sidgwick Method. Thus, the other two methods are subsets of this method. There are other methods, most notably the Beyer or "Way-Out" Method, but the ones given above are the methods recommended for making magnitude estimates today. Each method requires practice, particularly when comparison stars are not in the comet's field. For the record, the author of this page uses the Sidgwick Method for comets with DC < 3 and the Bobrovnikoff Method for comets with DC = 8 or 9. The Morris Method is used for all other DC values. Obviously, exceptions occur if there is a star in the comet's coma... then the Sidgwick Method is the obvious choice. Other observers have different preferences... it is not the intent to set a standard here.

- I used to call this method the Equal-Out method, because even though the comparison stars and comet are defocused, the defocused images are the same diameter (which isn't usually true with the Out-Out method). Daniel Green, Central Bureau of Astronomical Telegrams, pointed out that "equal-out" could be confusing... because the In-Out method has equal diameters and the Out-Out method is "defocused" equally. To avoid confusion, I have changed the name to "Modified-Out" method. Daniel Green has written an article published in the October 1996 International Comet Quarterly on the history of comet magnitude estimate methodology.
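The author's stated rule of thumb for choosing among the three methods can be written out directly (the function name and the star-in-coma override are just illustrative):

```python
def choose_method(dc, star_in_coma=False):
    """Pick a magnitude estimation method from the comet's degree of condensation (DC)."""
    if star_in_coma:
        return "Sidgwick"      # defocusing the comet would blend in the star
    if dc < 3:
        return "Sidgwick"      # diffuse comets
    if dc >= 8:                # DC = 8 or 9: strongly condensed comets
        return "Bobrovnikoff"
    return "Morris"            # everything in between

print(choose_method(2), choose_method(5), choose_method(9))  # Sidgwick Morris Bobrovnikoff
```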
Found results 10 - 12 of 12 programs matching keyword "observatory"

Join the Exploratorium's Dr. Paul Doherty as he visits a "sculpture to observe the stars" in northern New Mexico, where the Sangre de Cristo Mountains meet the eastern plains. There artist Charles Ross is creating an art installation that is also a star observatory. This major earthwork has two main elements: the Star Tunnel, which allows you to walk through the entire history of the earth's changing alignment to our North Star, Polaris; and the Solar Pyramid, where one can visually experience an hour of the earth's rotation.

We stayed up with Exploratorium scientist Ron Hipschman at the Lick Observatory in San Jose, California, for the best view we've had of Mars in a long, long time. At midnight on August 27, Earth and Mars passed closer to one another than they have in 60,000 years. Astronomers were on hand to tell us all about our nearest neighbor—its geography, orbit, and why both NASA and the European Space Agency have chosen this time to launch robotic missions to Mars.
If you live on the East coast of the United States, be sure to watch the skies tonight. NASA will be launching five rockets in five minutes from the Wallops Flight Facility in Virginia, on March 14, 2012. The launches will take place late at night. Each rocket will release a chemical tracer that will create a milky-white cloud that will glow. The glowing clouds will be visible to people on the ground, looking up at the sky, from South Carolina through New Jersey. All of these rockets are suborbital.

These unmanned rockets are part of the Anomalous Transport Rocket Experiment (ATREX) mission. The purpose of launching the rockets is to study the high-altitude jet stream located 60 to 65 miles above the surface of the earth. The winds in this upper jet stream can reach speeds of 200 to 300 mph. This is the same region where electrical turbulence often occurs. Those electrical currents can adversely affect radio communications and communications with satellites.

Two of the five rockets have instrumented payloads. They are carrying equipment that will measure the pressure and temperature in the atmosphere. The measurements will be taken when the wind speed is at its height.

One of the rockets is a Terrier Oriole, a two-stage rocket that uses a Terrier first-stage booster and an Oriole rocket motor for the second stage of its propulsion. The rocket has four fins that are placed to provide stability. Two of the rockets are Terrier-Improved Orions, two-stage spin-stabilized rockets that use either a Terrier MK 12 Mod 1 or a MK70 for the first stage and an improved Orion motor for the second stage. The remaining two rockets are Terrier-Improved Malemutes: high-performance, two-stage rockets used for payloads that weigh less than 400 pounds. The first-stage booster for this rocket is a Terrier MK 12 Mod 1.
The second stage propulsion unit is a Thiokol Malemute TU-758 rocket motor that has been specifically designed for high-altitude research rocket applications. I find it interesting that NASA selected March 14 to do this launch. March 14, or 3-14, is Pi Day, obviously, because Pi = 3.1415926535…. Those who live on the East coast can end their Pi Day celebrations by gazing up into the night sky, and watching for the glowing cloud produced by the rockets. It will make Pi Day of 2012 that much more memorable! Image: Kennedy Space Center (nasa) by BigStock
La Verne Magazine "Tradition & Change"
Filling Up With Hydrogen
by Jeanette M. Neyman

Imagine being in the middle of a large busy city on the United States west coast. The familiar roar of internal combustion engines can be heard all around, but there are almost no toxic emissions coming from all these cars, trucks, buses and planes. The vision is staggering: a society powered almost entirely by hydrogen -- the most abundant element in the universe. When the hydrogen is used as an energy source in a fuel cell, it generates no emissions other than water, which is recycled to make more hydrogen.

A mere fantasy? Perhaps not -- especially if Dr. Iraj Parchamazad, professor of chemistry and chair of the Chemistry Department, has anything to do with it. Making this vision a reality in the 21st century is the goal of researchers led by Dr. Parchamazad at the University of La Verne. A $600,000 private research grant will help the ULV chemistry department explore promising new hydrogen technology for production, storage and utilization as an alternative fuel. Substantial monetary research support has also come from the United States Department of Energy.

"Sooner or later we will find it, but it is like a war where nobody talks about it, because it is so revolutionary," Dr. Parchamazad says. "It will be like electricity was to the beginning of the 19th century."

Presently, Dr. Parchamazad and his team are in the process of applying for patents on their work. Sale of stock for the research enterprise is on the horizon. "The United States government, including the Department of Defense, Department of Transportation and Department of Energy support this emerging technology so much they consider it an issue of national security," he says. Several countries, including the United States, Canada, Germany and France, are putting tremendous resources into developing the technology.

Hydrogen is the chemist's analog to electricity. Like electricity, hydrogen does not occur naturally in a form ready to be used as a fuel; it must be generated or produced by consuming fuels or other forms of energy. These new hydrogen technologies would put nature's most basic element to work as a versatile energy carrier and a clean fuel. However, Dr. Parchamazad points out that one of the greatest obstacles to using hydrogen is safety, because it is extremely flammable.

Dan Herrig, a full-time research associate for the project, feels that although the technology is merely in the prototype stage, it has endless possibilities. "This is a once-in-a-lifetime opportunity for me to make a name for myself," he says.
Evolution of seal "Flippers" over time? Seals are aquatic mammals that evolved from terrestrial mammals with legs. Seals have appendages that are flipper-like (‘flippers’), which are adapted for movement in water. Explain how the flippers may have evolved from legs of a terrestrial ancestor. A) If you were to go back in time and examine a population of the ancestral species, would all the individuals look the same? Describe the variation in the character within the ancestral species that provided the ‘raw material’ for natural selection to occur. B) What type (or types) of selection pressure would have favored the evolution of the character of interest? C) How would the character of interest allow the organism that had it to reproduce more than an organism that lacked it? Re: Evolution of seal "Flippers" over time? i am a new member of this forum.
Buried landmines are difficult to find, especially if they contain no metal parts. Now researchers at the Technical University of Darmstadt in Germany say a technique for looking inside the human body could help (Journal of Physics D, vol 35, p 939). In nuclear magnetic resonance, molecules placed in a powerful magnetic field emit a characteristic pattern of radio waves. With the operator at a safe distance, an NMR-based landmine detector could determine the substance's chemical make-up and its position in the ground from the pattern of waves. The team says the technique could be particularly useful for identifying TNT, which is used in roughly half of all mines but is difficult to spot with other methods.
Many avenues for earthquake forecasting have been explored, from prior changes in animal behaviour to electromagnetic signals. Yet predicting exactly when an earthquake will happen remains impossible today. Still, there is a great deal we do know about the Earth's shaking in the future. When seismologists are asked whether earthquakes can be predicted, they tend to be quick to answer no. Sometimes even we geologists can forget that, in the ways that matter, earthquakes are all too predictable. We know where in the world they are likely to happen. For most of these zones, we have quite good estimates of the expected long-term rates of earthquakes. We know the largest ...
Scientific Name: Dermochelys coriacea Description: Unlike other sea turtles, the bony shell of the leatherback is not visible. Instead, it is covered by a leathery layer of black or brown skin, hence their name. The shell has seven ridges running from front to back. Size: Leatherbacks are the largest of the extant (living) turtle species. They grow to over two meters in length and can weigh up to 2000 pounds. Diet: Jellyfish make up the biggest portion of their diet, but they also eat seaweed, fish, crustaceans, and other marine invertebrates. Leatherbacks have downward pointing spines in their throat, which allows jellyfish to be swallowed, but prevents them from coming back up. Typical Lifespan: Leatherbacks reach maturity at approximately 13 to 14 years. Their average lifespan is unknown, but it’s thought to be at least 30 years. Habitat: Leatherbacks spend most of their lives at sea and sometimes look for prey in coastal waters. The females come on land to lay eggs. Range: Leatherbacks are found in tropical and temperate marine waters all over the world. This means they live off of both the east and west U.S. coasts and also in Puerto Rico, the Virgin Islands, and Hawaii. Life History and Reproduction: Adult leatherbacks have few natural predators, but their eggs and newborns are preyed upon by many animals including birds, raccoons, and crabs. Female leatherbacks return to the same nesting beach to lay their eggs. Temperature determines the gender of the offspring—if it’s warm in the nest, females will be born. Conversely, if temperatures are cooler, males develop. Once the eggs hatch, they’re on their own—the baby sea turtles must make it into the water and learn to fend for themselves without any care from their parents. Fun Fact: Leatherbacks have been documented diving to over 1200 meters! By contrast, scuba divers typically descend to only about 30 meters.
Additionally, the Pacific leatherback is the fastest aquatic reptile and can reach speeds of 22 miles per hour! Conservation Status: Federally listed as endangered. Their biggest threats all stem from mankind. Clutches of eggs are often illegally poached, and the offspring that do hatch sometimes become attracted to beach resort lighting and crawl away from the sea instead of towards it. Adults are victims of poaching, entanglement in fishing gear, and ingestion of plastic marine litter. Sources: University of Michigan Animal Diversity Web; The IUCN Red List of Threatened Species; North Florida Ecological Services Office
Indexes may also be used to enforce uniqueness of a column's value, or the uniqueness of the combined values of more than one column. CREATE UNIQUE INDEX name ON table (column [, ...]); Currently, only B-tree indexes can be declared unique. When an index is declared unique, multiple table rows with equal indexed values will not be allowed. NULL values are not considered equal. PostgreSQL automatically creates unique indexes when a table is declared with a unique constraint or a primary key, on the columns that make up the primary key or unique columns (a multicolumn index, if appropriate), to enforce that constraint. A unique index can be added to a table at any later time, to add a unique constraint. Note: The preferred way to add a unique constraint to a table is ALTER TABLE ... ADD CONSTRAINT. The use of indexes to enforce unique constraints could be considered an implementation detail that should not be accessed directly.
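The uniqueness rules above (duplicate indexed values rejected, NULLs not considered equal) are easy to try out. Here is a minimal sketch using SQLite through Python's sqlite3 module rather than a PostgreSQL server, since SQLite enforces unique indexes the same way; the table and index names are invented for the example.

```python
import sqlite3

# SQLite (via Python's stdlib) enforces unique indexes much like
# PostgreSQL: equal indexed values are rejected, but NULLs are not
# considered equal to one another.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, email TEXT)")
conn.execute("CREATE UNIQUE INDEX users_email_idx ON users (email)")

conn.execute("INSERT INTO users VALUES (1, 'a@example.com')")
try:
    # A second row with the same indexed value is not allowed.
    conn.execute("INSERT INTO users VALUES (2, 'a@example.com')")
except sqlite3.IntegrityError as e:
    print("duplicate rejected:", e)

# NULL values are not considered equal, so several NULLs may coexist.
conn.execute("INSERT INTO users VALUES (3, NULL)")
conn.execute("INSERT INTO users VALUES (4, NULL)")
print(conn.execute("SELECT COUNT(*) FROM users").fetchone()[0])  # 3
```

Against a real PostgreSQL server the equivalent statements behave the same, with the duplicate surfacing as a unique-violation error.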
By Emily Sohn When a scorpion attacks, its victim rarely has time to fight back. First, the clawed creature grabs and pins down its target — maybe a cricket, grasshopper or spider. Using a stinger in its rear end, the scorpion jabs the victim's flesh. Then, the true suffering begins. After an initial shock of severe pain, the prey might start to feel burning and tingling sensations. Shaking and trembling ensue, followed by paralysis. If the victim doesn't die from the venom, the scorpion will probably just eat it alive. Even people can die from the sting some scorpions deliver. But despite the fear that scorpions inspire among many people, scientists refuse to run away or even fight back. Instead, they are drawing inspiration from scorpions and other stinging animals. In the safety of their laboratories, researchers are squeezing venom out of poisonous creatures. They are dissecting the venom, decoding its secrets, and making chemical versions of their own. This is what venom researchers know: Understanding an attacker is the best way to steal its powers. With a growing list of discoveries, scientists are developing scorpion-inspired pesticides, cancer treatments, painkillers and more. "Look at the world from the viewpoint of a scorpion," says Raymond John St. Leger, an entomologist (insect expert) at the University of Maryland in College Park. "It wants toxins which can kill insects so it can eat those insects. Then it wants toxins for defense. In the case of something big and nasty like a child that comes to stomp on it, it wants to be able to defend itself. There are a lot of different chemicals there." Tapping into those chemicals, experts say, has enormous potential. "This is one of the major resources that is naturally available to us," St. Leger says. "It looks like [venom has] a very bright future, much of which we can't predict." A world of venom Many animals — spiders, snakes, bees, stingrays, centipedes, anemones, ants and snails, to name some —
make venom. Compared with the venom from other animals, scorpion venom is one of the simplest types to study. It looks like a thick milky soup, and is made of a variety of molecules, including hundreds of small proteins called peptides. Many of those peptides are toxic. That means that they have the power to harm the cells of unfortunate victims. They might cause paralysis, for example. Or they might kill cells. Venom toxins have to get inside cells to cause damage. There, they work in different ways to destroy cells or change the messages cells send to other cells. To extract venom from a scorpion, scientists often use a long tool to hold the animal at a distance. A mild electric shock then causes the animal to squirt a little venom, which the scientists collect in a little test tube. Venom expert Bora Inceoglu once studied a type of scorpion that was so aggressive, all Inceoglu had to do was grab the scorpion and the animal would squeeze out some venom. For protection, Inceoglu, who works at the University of California, Davis, wears only a pair of goggles. He doesn't worry too much, because venom is harmless if it gets on your skin. He has never been stung. "Initially, I was nervous, but then I got used to it," he says. "My wife was with me one time, and she got so scared she had to leave the room." A typical squirt holds between 5 microliters and 50 microliters of liquid. One microliter is a millionth of a liter, and 50 microliters is the size of a really tiny droplet. A whole lot of toxic peptides can fit into even just a tiny amount of venom. The venom in one scorpion can contain hundreds of different toxins. The deadly African fat-tailed scorpion (Parabuthus transvaalicus) has a toxin that can kill one type of beetle but not another. This variety and specificity could make scorpion venom a good source of natural pesticides. Once researchers have extracted venom, the real (and less scary) work begins.
Using basic scientific techniques, scientists can pull peptides apart within the droplets collected. Then researchers look at the peptides and test them on animals. The goal is to understand what the peptides look like, how they differ from each other and what they do. The research involves plenty of challenges. There are 1,300 scorpion species, and as many as 300 peptides in the venom of each one. Most of the peptides scientists know about so far are from one species. Inceoglu estimates that there are probably more than 100,000 different peptides in the world’s scorpions. So far, scientists have identified about 500 of them. That’s just half a percent of the total that exist. “That number is increasing,” Inceoglu says. “But we’re nowhere near the whole number.” Putting toxins to work As some scientists continue to identify and examine individual toxic peptides inside of venom, others are searching for ways to use those toxins to make the world better. In one of the most promising approaches, venom toxins are inspiring pesticides that protect agricultural crops from insects. Scorpions are already really good at killing insects, after all. Why not copy what they do? As an insect-fighter, venom has lots of potential. One reason is that each toxic peptide inside venom has had millions of years to target a specific type of insect. For example, some fat-tailed scorpions in Africa produce a well-studied toxic compound called AaIT. The toxin paralyzes some types of beetles but does nothing to others. It has no effect on humans. Other types of toxins might work only on grasshoppers or locusts. (The number and type of toxins determines how dangerous an animals’ venom is). That kind of focused attack would be hugely helpful to farmers. Traditional pesticides are made from strong chemicals that tend to harm all animals equally. As these chemicals fight the bad guys, they also hurt the good guys, such as helpful insect pollinators, innocent farm animals like cows, and people. 
It would be more useful to have venom-like strategies that killed only mosquitoes that carry diseases or worms that eat corn crops. Scorpion-inspired pesticides would also be safer and more environmentally friendly than chemicals because they would decompose. So, these pesticides wouldn’t build up in soil or water. They can’t get into the bodies of animals and people if they don’t get into water or food. Scientists have already deciphered what some of these specific toxins look like and how they work. Researchers have even created versions that act like the originals. The biggest challenge now is to make the venom-inspired pesticides do what they’re supposed to do. Spraying them onto plants won’t do: Insects can swallow venom without harm. Instead, researchers are experimenting with bacteria and fungi that can deliver toxins directly into the bodies of insects, just like scorpion stings do. “They’ve got great potential,” St. Leger says. “What we need now is delivery systems.” As crazy as it may sound, venom toxins might also help heal people. Some scorpion venom toxins affect only the cells of mammals. These compounds probably provide defense against predators, such as coyotes and squirrels. But they also work on humans. That’s bad news if the venom comes from the stinger of a scorpion. In the hands of a doctor, though, the poisons might do good. Toxic compounds that kill cells, for example, could be injected into tumors to fight cancer. Compounds that paralyze cells could fight pain. In fact, there is already a pain-killing drug available that was inspired by the venomous cone snail, a type of sea snail. Nature has been doing experiments for hundreds of millions of years, St. Leger says. Some species and strategies have gone extinct. Others, like scorpions, have developed powerful chemicals for staying alive. As venom research pushes forward, there is a lot to be learned from what already exists in the environment. 
“All we are left with is all of Nature’s successful experiments,” St. Leger says. “They teach us a lesson. They might show us that things we hadn’t thought of before are possible.”
Enceladus holds mysteries of its own. A bright white world with a relatively smooth face, it appears to have been repeatedly resurfaced by some kind of underground slurry or perhaps by ice volcanoes. In some places, once-deep crevasses have been largely filled in and craters have been cut neatly in half, leaving one side deep and raw and the other covered, as if by snowdrifts. The area of the Saturnian ring that follows in the wake of Enceladus is slightly thicker than the rest, as if the moon were pumping out some kind of frozen exhaust, leaving a plume in its wake like the smoke from a steamship. Other questions should be answered when Cassini flies by Hyperion, a tumbling moon that appears to have been knocked off its pins by a collision eons ago and has never regained its footing; and Tethys, a moon that bears such a massive impact scar that only the barest geological margin keeps it from shattering altogether. It is Titan, however, that will be the main attraction. One of the largest moons in the solar system — larger than Mercury or Pluto — Titan would be a perfectly good planet if it were orbiting the sun under its own steam. NASA scientists were keenly disappointed when the Voyager 1 spacecraft flew by Titan in 1980. The moon's dense, orange atmosphere completely concealed its surface from view, revealing not a clue about what was happening on the ground. Scientists speculate that there may be quite a bit happening. Rich in nitrogen as well as ethane, methane and other carbon-based gases, the Titanian air contains the raw chemical material believed to be needed to give rise to life — and just the kind that probably existed on the primordial Earth. Titan's frigid temperature — about -280°F — would surely have prevented life from emerging. Nonetheless, over time the candlelike heat of the distant sun may have slow-cooked some of the organic materials, forming more complex molecules.
What's more, if there is lightning in Titan's atmosphere, the random jolts could have shocked even bigger molecules into existence. The Cassini-Huygens mission will investigate Titan from many angles. Of the 59 flybys of the nine selected moons, 45 will be devoted to Titan — most at a distance of just 590 miles. Preliminary images received last weekend revealed a bright cloud pattern about the size of Arizona near the south pole and what appeared to be a massive impact crater. But there will be much more. Radar will pierce the Titanian cloud cover, mapping plains, mountains and perhaps even lakes of liquid ethane and methane — though early observations last weekend cast new doubt on the existence of the lakes. Spectrometers and other instruments will take the chemical measure of the moon's air, and cameras will again try to photograph Titan from outside in.
Living documents with XML events Most documents today are living documents in the sense that they are constantly updated and never finished. With the advent of HTML and Web browsers documents also became living in that they can interactively respond to events. Now this kind of life comes to XML documents with the newly standardized XML event handling... An event is the manifestation of some asynchronous happening associated with an element in an XML document, such as a mouse click on some text element, or an arithmetical error in the value of an attribute, or one of many other possibilities. In the DOM event model, an event is dispatched by passing it down the document tree in the capture phase to the element where the event occurred (called its target). Subsequently it then may be passed back up the tree again in the bubbling phase. In general an event can be responded to at any element in the path (an observer) in either phase by causing an action, and/or by stopping the event, and/or by cancelling the default action for the event at the place it is responded to. Where there are events, actions, handlers, and listeners cannot be far: an action is some way of responding to an event; a handler is some specification for such an action, for instance using scripting or some other method; a listener is a binding of such a handler to an event targeting some element in a document. HTML binds events to an element by encoding the event name in an attribute name, such that the value of the attribute is the action for that event at that element. This method has two main disadvantages: firstly it hardwires the events into the language, so that to add a new event, you have to make a change to the language, and secondly it forces you to mix the content of the document with the specifications of the scripting and event handling, rather than allowing you to separate them out. Therefore the XML approach is slightly different...
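The capture and bubbling phases described above can be sketched with a toy dispatcher. This is an illustrative simulation in Python, not a real DOM implementation: the dispatch function, the element names and the listeners dictionary are all invented for the example, and the DOM's separate at-target phase is folded into the two traversals.

```python
# Toy sketch of DOM-style event flow: an event travels down the
# ancestor chain to its target (capture phase), then back up again
# (bubbling phase), and any observer on the path may respond in
# either phase.

def dispatch(path, listeners):
    """path: element names, root first, target last.
    listeners: dict mapping (element, phase) -> handler."""
    log = []
    # Capture phase: from the root down to the target.
    for element in path:
        handler = listeners.get((element, "capture"))
        if handler:
            log.append(handler(element))
    # Bubbling phase: from the target back up to the root.
    for element in reversed(path):
        handler = listeners.get((element, "bubble"))
        if handler:
            log.append(handler(element))
    return log

listeners = {
    ("html", "capture"): lambda el: f"capture at {el}",
    ("button", "capture"): lambda el: f"capture at {el}",
    ("body", "bubble"): lambda el: f"bubble at {el}",
}
print(dispatch(["html", "body", "button"], listeners))
# ['capture at html', 'capture at button', 'bubble at body']
```

A real implementation would also let a handler stop propagation or cancel the default action, as the text describes.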
Produced by Michael Claßen Created: Nov 12, 2001 Revised: Nov 12, 2001
Inheritance diagram for wx.DataObject: A wx.DataObject represents data that can be copied to or from the clipboard, or dragged and dropped. The important thing about wx.DataObject is that this is a ‘smart’ piece of data unlike ‘dumb’ data containers such as memory buffers or files. Being ‘smart’ here means that the data object itself should know what data formats it supports and how to render itself in each of its supported formats. A supported format, incidentally, is exactly the format in which the data can be requested from a data object or from which the data object may be set. In the general case, an object may support different formats on ‘input’ and ‘output’, i.e. it may be able to render itself in a given format but not be created from data on this format or vice versa. Not surprisingly, being ‘smart’ comes at a price of added complexity. This is reasonable for the situations when you really need to support multiple formats, but may be annoying if you only want to do something simple like cut and paste text. To provide a solution for both cases, wxPython has two predefined classes which derive from wx.DataObject: wx.DataObjectSimple and wx.DataObjectComposite. wx.DataObjectSimple is the simplest wx.DataObject possible and only holds data in a single format (such as HTML or text) and wx.DataObjectComposite is the simplest way to implement a wx.DataObject that does support multiple formats because it achieves this by simply holding several wx.DataObjectSimple objects. 
So, you have several solutions when you need a wx.DataObject class (and you need one as soon as you want to transfer data via the clipboard or drag and drop):

- Use one of the built-in classes: you may use wx.TextDataObject, wx.BitmapDataObject or wx.FileDataObject in the simplest cases, when you only need to support one format and your data is either text, a bitmap or a list of files.
- Use wx.DataObjectSimple: deriving from wx.DataObjectSimple is the simplest solution for custom data - you will only support one format and so probably won't be able to communicate with other programs, but data transfer will work in your program (or between different copies of it).
- Use wx.DataObjectComposite: a simple but powerful solution which allows you to support any number of formats (either standard or custom, if you combine it with the previous solution).
- Use wx.DataObject: the solution for maximal flexibility and efficiency, but also the most difficult to implement.

Please note that the easiest way to use drag and drop and the clipboard with multiple formats is by using wx.DataObjectComposite, but it is not the most efficient one as each wx.DataObjectSimple would contain the whole data in its respective formats. Now imagine that you want to paste 200 pages of text in your proprietary format, as well as Word, RTF, HTML, Unicode and plain text to the clipboard: even today's computers are in trouble. For this case, you will have to derive from wx.DataObject directly and make it enumerate its formats and provide the data in the requested format on demand. Note that neither the GTK+ data transfer mechanisms for clipboard and drag and drop, nor OLE data transfer, copy any data until another application actually requests the data.
This is in contrast to the ‘feel’ offered to the user of a program who would normally think that the data resides in the clipboard after having pressed ‘Copy’ - in reality it is only declared to be available. You may also derive your own data object classes from wx.CustomDataObject for user-defined types. The format of user-defined data is given as a mime-type string literal, such as “application/word” or “image/png”. These strings are used as they are under Unix (so far only GTK+) to identify a format and are translated into their Windows equivalent under Win32 (using the OLE IDataObject for data exchange to and from the clipboard and for drag and drop). Note that the format string translation under Windows is not yet finished. wxPython note: At this time this class is not directly usable from wxPython. Derive a class from wx.PyDataObjectSimple instead. wx.BitmapDataObject, wx.CustomDataObject, wx.DataObjectComposite, wx.DataObjectSimple, wx.FileDataObject, wx.MetafileDataObject, wx.PyBitmapDataObject, wx.PyDataObjectSimple, wx.PyTextDataObject, wx.TextDataObject, wx.URLDataObject Copy all supported formats in the given direction to the array pointed to by formats. There is enough space for GetFormatCount (dir) formats in it. The method will write the data of the format format. Returns None on failure. Returns the data size of the given format format. Returns the number of available formats for rendering or setting the data. Returns the preferred format for either rendering the data (if dir is Get, its default value) or for setting it. Usually this will be the native format of the wx.DataObject. Returns True if this format is supported.
New in version 2.0. The module xml.sax.saxutils contains a number of classes and functions that are commonly useful when creating SAX applications, either in direct use, or as base classes. - escape(data[, entities]) Escape &, <, and > in a string of data. You can escape other strings of data by passing a dictionary as the optional entities parameter. The keys and values must all be strings; each key will be replaced with its corresponding value. - class XMLGenerator([out[, encoding]]) This class implements the ContentHandler interface by writing SAX events back into an XML document. In other words, using an XMLGenerator as the content handler will reproduce the original document being parsed. out should be a file-like object which will default to sys.stdout. encoding is the encoding of the output stream which defaults to - class XMLFilterBase(base) This class is designed to sit between an XMLReader and the client application's event handlers. By default, it does nothing but pass requests up to the reader and events on to the handlers unmodified, but subclasses can override specific methods to modify the event stream or the configuration requests as they pass through. - prepare_input_source(source[, base]) This function takes an input source and an optional base URL and returns a fully resolved InputSource object ready for reading. The input source can be given as a string, a file-like object, or an InputSource object; parsers will use this function to implement the polymorphic source argument to their
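A short sketch of the two most commonly used members, written for a modern Python 3 interpreter (the module and the signatures shown above are unchanged; the element and attribute names are invented for the example):

```python
from io import StringIO
from xml.sax.saxutils import escape, XMLGenerator

# escape() replaces the XML-special characters &, < and >.
print(escape("spam & eggs <here>"))  # spam &amp; eggs &lt;here&gt;

# Extra replacements go through the optional entities dictionary.
print(escape('say "hi"', {'"': "&quot;"}))  # say &quot;hi&quot;

# XMLGenerator writes SAX events back out as an XML document.
out = StringIO()
gen = XMLGenerator(out, encoding="utf-8")
gen.startDocument()
gen.startElement("greeting", {"lang": "en"})
gen.characters("hello")
gen.endElement("greeting")
gen.endDocument()
print(out.getvalue())
```

Feeding an XMLGenerator to a parser as its content handler reproduces the parsed document, as the description above says.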
2. Using RubyGems This chapter gives examples of the most common user operations performed with the gem command. See the gem Command Reference manual for details about particular gem commands. Versioning is a pretty basic concept in RubyGems. You might want to glance at the Specifying Versions chapter for a better understanding of how versions work with RubyGems. When you run gem query --remote # shortcut: gem q -R you will see a detailed list of all the gems on the remote server. Sample output (heavily abbreviated): *** REMOTE GEMS *** activerecord (0.8.4, 0.8.3, 0.8.2, 0.8.1, 0.8.0, 0.7.6, 0.7.5) Implements the ActiveRecord pattern for ORM. BlueCloth (0.0.4, 0.0.3, 0.0.2) BlueCloth is a Ruby implementation of Markdown, a text-to-HTML conversion tool for web writers. Markdown allows you to write using an easy-to-read, easy-to-write plain text format, then convert it to structurally valid XHTML (or HTML). captcha (0.1.2) Ruby/CAPTCHA is an implementation of the 'Completely Automated Public Turing Test to Tell Computers and Humans Apart'. cardinal (0.0.4) Ruby to Parrot compiler. cgikit (1.1.0) CGIKit is a componented-oriented web application framework like Apple Computers WebObjects. This framework services Model-View-Controller architecture programming by components based on a HTML file, a definition file and a Ruby source. progressbar (0.0.3) Ruby/ProgressBar is a text progress bar library for Ruby. It can indicate progress with percentage, a progress bar, and estimated remaining time. rake (0.4.0, 0.3.2) Ruby based make-like utility. The progressbar gem is a nice and simple utility that we will use to demonstrate further features. When you run gem query --remote --name-matches doom # shortcut: gem q -R -n doom you will see a detailed list of matching gems on the remote server. *** REMOTE GEMS *** ruby-doom (0.8, 0.0.7) Ruby-DOOM provides a scripting API for creating DOOM maps. It also provides higher-level APIs to make map creation easier.
When you run (as root, if appropriate and necessary) gem install --remote progressbar # shortcut: gem i -r progressbar the progressbar gem will be installed on your computer. Notice that you don't need to specify the version, but you can if you want to. It will default to the last version available. gem ins -r progressbar-0.0.3 gem ins -r progressbar --version '> 0.0.1' In both cases, the output is simply: Attempting remote installation of 'progressbar' Successfully installed progressbar, version 0.0.3 RubyGems allows you to have multiple versions of a library installed and choose in your code which version you wish to use. Useful extra options for installation are --gen-rdoc for generating the gem's RDoc API documentation, and --run-tests to run the gem's unit tests, if any. Note too that when you remotely install a gem, it will download and install any specified dependencies. Try installing copland and see that it prompts you to accept log4r as well (if it's not already installed). When you run gem specification progressbar # shortcut: gem spec progressbar you will see all the details of the ''progressbar'' gem. --- !ruby/object:Gem::Specification rubygems_version: "1.0" name: progressbar version: !ruby/object:Gem::Version version: 0.0.3 date: 2004-03-20 20:03:00.679937 +11:00 platform: summary: "Ruby/ProgressBar is a text progress bar library for Ruby. It can indicate progress with percentage, a progress bar, and estimated remaining time." require_paths: - lib files: - sample/test.rb - lib/progressbar.rb - docs/progressbar.en.rd - docs/progressbar.ja.rd - ChangeLog autorequire: progressbar author: Satoru Takabayashi email: email@example.com homepage: http://namazu.org/~satoru/ruby-progressbar/ Some interesting information includes the author's details, the version and description of the gem. There is also important technical information for RubyGems to use this gem properly.
This includes the list of files included, where to include files from, and what to require by default (more on this later). If we've finished with progressbar, we can uninstall it. gem uninstall progressbar Successfully uninstalled progressbar version 0.0.3 If there is more than one version of a gem installed, the gem command will ask you which version to delete. If there are other gems that depend upon the gem being uninstalled, and if there is no other way to satisfy that dependency, then the user will be given a warning and allowed to cancel the uninstall. gem query --local # shortcut: 'gem q -L' You've no doubt noticed the --local and --remote options on most of the command lines shown so far. If you don't specify either of these, then gem will (usually) try ''both'' a local and remote operation. For example: gem ins rake # Attempt local installation; go remote if necessary gem list -b ^C # List all local AND remote gems beginning with "C" You can run your own gem server. This means other people can (potentially) install gems ''from your computer''. And as a side-effect of that, you can view your installed gems through your web browser. Just run and point your browser to http://localhost:8808. You'll be able to view the documentation for each gem, as long as you asked for it to be generated when you installed it. If you want to always generate RDoc documentation and run unit tests for each gem you install, then you can specify these command-line options in a config file (.gemrc in your home directory). gem: --rdoc --test There are other things you can achieve with a config file (RDoc parameters, GEMPATH settings). See `gem help env` for the details. gem check --alien will report on any rogue (unmanaged) files in the RubyGems repository area. gem check --verify progressbar will check that the installed ''progressbar'' gem is valid against its own checksum.
Just look at the path on the sphere. Here it is in Google Earth: The path on your map is strongly curved because your map uses a projection with lots of distortion. (The distortion grows without bound towards the poles and this path is getting close to the north pole.) The distortion is necessary to explain the curvature of this geodesic on the map but the connection between them is subtle. More can be said that is at once useful, informative, and elegant. See whether you agree. The OP's map uses a Mercator projection. Its salient qualities are that it is Cylindrical: in particular, meridians are vertical lines on the map, Conformal: any angle at which two paths cross on the earth will be correctly rendered on the map, and Loxodromic: any route of constant bearing (on the earth) is rendered as a straight line segment on the map. These properties make it easy to read some critical information directly off the map. In this context I am most interested in the angles made by any path with each of the meridians it crosses. (These are the bearings measured from the north.) For instance, the path depicted in the question starts in Canada, around 54 degrees latitude, making an angle of about 30 degrees with its meridian. What we also need to know about a point at 54 degrees latitude is that it is closer to the earth's axis than points along the equator. In fact, it's cos(54) * R from the axis, where R is the earth's radius. (This is essentially the definition of the cosine. It helps to have some familiarity with cosines, so you understand how they behave, but you don't really need to know any other trigonometry at all. I promise. Well, one more thing: the sine of an angle is the cosine of its complement. E.g., sin(32 degrees) = cos(90-32) = cos(58).) Finally, note that the earth is rotationally symmetric about its axis. 
This lets us invoke Clairaut's beautiful Theorem (1743): On a path in any smooth surface of revolution, the product of the distance to the axis with the sine of the bearing is constant if and only if the path is locally geodesic. Thus, since we are starting off at latitude 54 degrees at an angle of 30 degrees, the product in the theorem equals cos(54) * R * sin(30) = 0.294 * R. How does this help? Well, consider what would happen if the path were to continue approximately straight on the map. Sooner or later it would rise to a latitude of 73 degrees. Using Clairaut's theorem we can solve for the bearing at this latitude: cos(73) * R * sin(bearing) = 0.294 * R; sin(bearing) = 0.294 / cos(73) = 1; bearing = 90 degrees. This says that by the time we reach a latitude of 73 degrees, we must be traveling due east! That is, the path, in order to be a geodesic, must curve so strongly that the initial bearing of 30 degrees (east of north) becomes 90 degrees (east of north). (Of course I found the value 73 degrees by solving the equation cos(latitude) = cos(latitude) * sin(90) = cos(54) * sin(30). To do this yourself you would have to know that (a) sin(90) = 1 (because sin(90) = cos(90-90) = cos(0) = 1) and (b) most calculators and spreadsheets have a function to solve cosines; it's called ArcCos or inverse cosine. I hope you don't view this little detail as breaking my earlier promise about no more trig...) After doing a few calculations like this you develop an intuition for what Clairaut's Theorem is saying. A path in a surface of revolution (like the earth) can be geodesic (locally shortest or "straight") only when (a) its bearing becomes more parallel to the meridians at points far from the axis and (b) its bearing gets more perpendicular to the meridians at points closer to the axis. Because there is a limit on how perpendicular one can get--90 degrees is it!--there is a limit to how close to the axis you can get.
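The arithmetic in Clairaut's relation is easy to script. Here is a short Python sketch (the function names and the unit sphere R = 1 are my own; only the formula is Clairaut's) that computes the conserved product for the example above, the maximum latitude the geodesic can reach, and the bearing at any intermediate latitude:

```python
import math

def clairaut_constant(lat_deg, bearing_deg, R=1.0):
    """cos(latitude) * R * sin(bearing): conserved along a geodesic
    on a surface of revolution (Clairaut, 1743)."""
    return math.cos(math.radians(lat_deg)) * R * math.sin(math.radians(bearing_deg))

def max_latitude(lat_deg, bearing_deg):
    """Highest latitude reachable: where the bearing turns due east (90 deg),
    i.e. cos(max_lat) = cos(lat) * sin(bearing)."""
    c = clairaut_constant(lat_deg, bearing_deg)
    return math.degrees(math.acos(c))

def bearing_at(lat_deg, start_lat_deg, start_bearing_deg):
    """Bearing the geodesic must have when it crosses latitude lat_deg."""
    c = clairaut_constant(start_lat_deg, start_bearing_deg)
    return math.degrees(math.asin(c / math.cos(math.radians(lat_deg))))

print(max_latitude(54, 30))    # about 72.9 degrees, as in the text
print(bearing_at(70, 54, 30))  # the initial 30-degree bearing has already swung east
```

Running this reproduces the numbers in the worked example: the geodesic tops out just below 73 degrees latitude, and well before that the bearing has rotated far from its initial 30 degrees.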
This constant adjustment of bearing (= angle to the meridian) and latitude (= distance to the axis) causes the apparent curvature of geodesics on most maps, especially on those using cylindrical projections, where the meridians and lines of latitude are rendered as vertical and horizontal lines, respectively. Here are some easy implications of Clairaut's Theorem. See whether you can prove them all:

1. The equator must be a geodesic.
2. All meridians are geodesics.
3. No line of latitude, other than the equator (and the poles, if you want to include them), can be a geodesic. Not even a small part of a line of latitude can be geodesic.
4. Loxodromes (aka rhumb lines), which are lines of constant bearing, cannot be geodesics unless they are meridians or the equator. Not even a small part of such a loxodrome can be geodesic. In other words, if you sail or fly in a fixed compass direction, then--with a few obvious exceptions--your path is constantly curving!

Point 4 says if you fly from the Canadian Rockies at an initial bearing of 30 degrees east of north, you must appear, relative to north, to be constantly turning (to the right) in order to fly straight; you will never go north of 73 degrees latitude; and if you continue far enough, you will make it to Poland and will be headed roughly 150 degrees east of north when you get there. Of course the details--73 degrees and Poland and 150 degrees--are obtained only from the quantitative statement of Clairaut's Theorem: you can't usually figure out that sort of thing just using your intuitive idea of geodesics. It is noteworthy that all these results hold on a general spheroid (a surface of revolution generated by an ellipse), not just on perfect spheres. With slight modifications they hold for tori (surfaces of bagels or truck tires) and many other interesting surfaces. (The sci fi author Larry Niven wrote a novel in which a small artificial torus-shaped world is featured.
The link includes an image from the novel's cover depicting part of this world.)
I was reading the manual, and I thought API stood for Application Programming Interface, not Protocol. AIP stands for Application Interface Protocol. Below is a Wikipedia reference from a simple Google search for API.

http://en.wikipedia.org/wiki/Applicatio ... _interface

Can someone explain how this is a protocol? I thought a protocol was (quote from Wikipedia):

"In computing, a protocol is a convention or standard that controls or enables the connection, communication, and data transfer between two computing endpoints. In its simplest form, a protocol can be defined as the rules governing the syntax, semantics, and synchronization of communication. Protocols may be implemented by hardware, software, or a combination of the two. At the lowest level, a protocol defines the behavior of a hardware connection."

For comparison, the API quotation from Wikipedia:

"An application programming interface (API) is a source code interface that an operating system or library provides to support requests for services to be made of it by computer programs. Advanced Programming Interface (API) is a near synonym with wider application that predates the current common usage. In the original term the concept is meant to represent any well defined interface between two separate programs. The main difference is that this older term does not inculcate a parent-child relationship and can therefore be applied to peer-to-peer situations more logically, e.g. internal kernel services which can present themselves as separate programs."

An explanation would be helpful.
Sandy’s Two-Fisted Attack: Water From Air And Sea Filed by KOSU News in Science. November 1, 2012 On Monday, Sandy brought heavy rain, winds and storm surges to the Northeast, causing widespread flooding and extensive damage to hundreds of communities, particularly in New Jersey and New York. But the drenching from all that water varied greatly by region. In areas south of Atlantic City, N.J., where the storm made landfall Monday night, the wind was pushing out toward the ocean. This prevented high storm tides along the Virginia, Maryland and Delaware coasts and in Chesapeake Bay. But the same arm of the storm that held the ocean at bay carried a lot of rain. Some parts of Maryland saw 12.5 inches of rain as Sandy passed through, according to the National Weather Service. That’s nearly a quarter of Maryland’s total rainfall in 2011 — about 51 inches — a large portion of which fell during Hurricane Irene. Delaware and New Jersey also recorded high rain levels matching Irene’s deluge. All the rain Maryland got has to go somewhere: down. As it flows downhill, it takes soil and debris with it, and eventually fills creeks and rivers to the brimming point. A lot of this runoff will eventually end up in the Potomac River; the National Oceanic and Atmospheric Administration has issued a flood warning for the upper Potomac and a coastal flood warning for the river’s tidal regions until Thursday. Further north, New York saw relatively little rainfall — just 3.5 inches over the course of the storm. But off New York’s coast, the swirling hurricane winds were pushing inland, piling water up against the shore. Combining with a high tide on Monday night, the storm surge broke records in New York and New Jersey. This tide map from Sandy Hook, N.J., shows the rising water as the storm approached. The tide heights were impressive — 12.5 feet higher than normal at King’s Point, N.Y., and 9 feet higher than normal in New Haven, Conn. So were the waves. 
There were swells towering almost 40 feet off Atlantic City, and 30 feet outside New York Harbor. The combination of the storm surge, the tides and the waves resulted in the destructive flooding we now see. [Copyright 2012 National Public Radio]
site also contains transmission schedules and transmitting frequencies for various United States and

Review questions:
- What is the purpose of a satellite atmospheric
- What is the main advantage of geostationary
- Which GOES satellite provides imagery over all of South America?
- What type of satellite is the NOAA 14?
- What are the main advantages of polar-orbiting
- What is the average swath width of a polar-
- Which organization is responsible for providing near real-time DMSP environmental imagery to
- Q14. Which geostationary satellite will provide imagery for Spain and Portugal?

LEARNING OBJECTIVES: Recognize the particular advantages of imagery from geostationary satellites and polar-orbiting satellites. Define spatial resolution, radiometer, electromagnetic wave, and albedo. Define the terms visual, infrared, near infrared, and water vapor as they relate to satellite imagery. Recognize the advantages of visual, infrared, and water vapor imagery.

The pictures or images available from environmental satellites vary, depending on the type of satellite and the type of sensor in use. Geostationary satellites continuously "look" at the same geographical area of the earth. However, the image area is centered on the satellite subpoint on the equator. At the subpoint, clouds are seen from directly overhead. Further away from the subpoint, clouds in the image are viewed from an angle, and feature distortion occurs. Cloud cover is often overestimated toward image edges because the sensor is actually viewing the clouds from the side. Near the horizon, the image is considered unusable due to distortion. Polar-orbiting satellites are in much lower orbits than geostationary satellites; therefore, the satellite can only see a limited portion of the earth as the satellite sensors scan from horizon to horizon. Because of the acute view angle near the horizon, the satellite image near the horizon is usually of little value and is usually not processed or displayed by receivers.

Satellite sensors designed to produce pictures or images of earth, its oceans, and its atmosphere are very different from the cameras used to take a photograph. They are more like a video camera, only much more specialized. These scanning sensors are called radiometers; instead of film, an electronic circuit sensitive only to a small range of electromagnetic wavelengths measures the amount of energy that is received. Satellites may carry several different image sensors, each of which is sensitive to only a small band of energy at a specific wavelength. The radiometer used by the TIROS-N and POES series satellites is known as the Advanced Very High Resolution Radiometer (AVHRR) and contains several types of detectors.

Satellite sensors scan across the surface of the earth in consecutive scan lines along a path normal to the direction of travel of the satellite. As the sensor moves through a scan line, it very rapidly measures energy levels for only a very small portion of the earth at a time. Each individual energy measurement composes a single picture element, or pixel, of the overall satellite image. The sensor then assigns each pixel one of 256 intensity levels (0 to 255). The size of the area (field of view) scanned by the sensor determines the spatial resolution of the overall image. Thus, the smaller the area scanned for each pixel, the higher the spatial resolution. Some sensors may scan an area as small as 0.5 km across (high resolution), while others scan areas as large as 16 km (low resolution). When composed into an image, smaller pixels allow the image to be much clearer and show greater detail. Clouds and land boundaries appear better defined. If objects are smaller than the sensor resolution, the sensor averages the brightness or temperature of the object with the background.
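The sub-pixel averaging just described is easy to illustrate. The short Python sketch below is my own (the function name and the brightness levels are invented, not from the manual); it area-weights an object's brightness with its background when the object covers only part of the sensor's field of view:

```python
def pixel_value(object_brightness, background_brightness, object_fraction):
    """Brightness a radiometer reports for one pixel when an object
    covers only `object_fraction` of the sensor's field of view."""
    if not 0.0 <= object_fraction <= 1.0:
        raise ValueError("object_fraction must be between 0 and 1")
    return (object_fraction * object_brightness
            + (1.0 - object_fraction) * background_brightness)

# A bright cloud (level 240) filling a quarter of a low-resolution pixel
# over a dark sea background (level 40):
print(pixel_value(240, 40, 0.25))  # 90.0 -- the cloud is smeared into the background
# The same cloud filling an entire high-resolution pixel:
print(pixel_value(240, 40, 1.0))   # 240.0 -- the cloud is fully resolved
```

This is why higher spatial resolution makes cloud and land boundaries appear better defined: small features are no longer diluted by their surroundings.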
Normally, the sensors aboard satellites are able to provide better resolution for visual imagery than for infrared imagery. DMSP satellites have very high-resolution capabilities in both visual and infrared.
|Annu. Rev. Astron. Astrophys. 1988. 26: Copyright © 1988 by . All rights reserved 2.2. The Idea of Geometrical Experiments How then are we to measure deviations from Euclidean predictions? What rules concerning the properties of measuring rods do we adopt, and by what rules do we assess whether an experiment has given a non-Euclidean result? Early on, Poincaré denied the reality of actual curved space by stating that in any measurement that appeared to give a non-Euclidean result, one is at liberty to redefine the properties of the measuring rods in such a way as to recover a Euclidean prediction. A particularly interesting example of this, involving nonuniformly heated metal measuring rods, is given by Robertson (1949). Poincaré's point has been variously debated (cf. Whittaker 1958, Reichenbach 1958) with the consensus opinion being that contrived (unreasonable) explanations of changes in the measuring rods, if they are required to save the Euclidean case, are less desirable than a real Riemann-Lobachevski geometry. The debate then changes to the meaning of contrived and unreasonable. Consider again the Fitzgerald contraction of fast-moving measuring rods, and ultimately the reality of the Lorentz transformation. The issue is now resolved in Einstein's (1905) favor in that his deeper interpretation of space-time is viewed as more reasonable than the Fitzgerald explanation, which is now viewed as contrived. In cosmology we are faced with similar problems. We cannot measure distances by placing rigid rods end to end. Rather, operational definitions of distance "by angular size," "by apparent luminosity," "by light travel time," or "by redshift" are perforce employed. Their use then requires a theory that connects the observables (luminosity, redshift, angular size) with the various notions of distances (McVittie 1974). One of the great initial surprises is that these distances differ from one another at large redshift, yet all have clear operational definitions. 
Which distance is "correct"? All are correct, of course, each consistent with its definition. Clearly, then, distance is a construct in the sense of Margenau (1950), operationally defined entirely by its method of measurement. The best that astronomers can do is to connect the observables by a theory and test predictions of that theory when the equations are written in terms of the observables alone. To this end, the concept of distance becomes of heuristic value only. It is simply an auxiliary parameter that must drop from the final predictive equations. But spatial curvature appears on a different footing. Although it too cannot be directly measured without a covering theory of "luminosity distance" or "redshift distance" to relate "volumes" to "distance," the curvature does enter as a primary parameter (not to be dropped from the equations) in the predictive relations between the observables (luminosity, angular diameter, and redshift). The curvature is zero if q0 = 1/2. The parameter q0 (or Ω0) enters into all the equations connecting redshift, luminosity, angular size, and number counts. In this sense, the curvature is measurable and therefore is "real," because it has observable effects on the m(z), θ(z), and N(z) relations. Direct experimental geometry is then a possibility, provided that we are willing to accept the equations that connect the q0 measure of curvature with angles, areas, volumes, and redshifts - equations derived from some adopted cosmology.
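The claim that the operational distances disagree at large redshift can be made concrete. The sketch below is my own illustration, not taken from the review: it uses the Einstein-de Sitter model (q0 = 1/2, Lambda = 0), for which each operational distance has a simple closed form, and evaluates them at z = 1:

```python
import math

C_OVER_H0 = 1.0  # work in Hubble lengths, c/H0

def comoving_distance(z):
    """Einstein-de Sitter (q0 = 1/2): D_C = 2(c/H0)(1 - 1/sqrt(1+z))."""
    return 2 * C_OVER_H0 * (1 - 1 / math.sqrt(1 + z))

def angular_diameter_distance(z):
    """Distance 'by angular size': D_A = D_C / (1+z)."""
    return comoving_distance(z) / (1 + z)

def luminosity_distance(z):
    """Distance 'by apparent luminosity': D_L = D_C * (1+z)."""
    return comoving_distance(z) * (1 + z)

def light_travel_distance(z):
    """Distance 'by light travel time': (2/3)(c/H0)(1 - (1+z)**-1.5)."""
    return (2 / 3) * C_OVER_H0 * (1 - (1 + z) ** -1.5)

z = 1.0
for f in (angular_diameter_distance, light_travel_distance,
          comoving_distance, luminosity_distance):
    print(f"{f.__name__}: {f(z):.3f} c/H0")
```

At z = 1 the four values span a factor of four (D_L/D_A = (1+z)^2 exactly), even though all four agree with cz/H0 as z approaches zero. Each is "correct" by its own operational definition, which is precisely the point of the passage above.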
Hubble Composite Image of the Star Forming Region in the Tarantula Nebula Image Credit: NASA, ESA, D. Lennon and E. Sabbi (ESA/STScI), J. Anderson, S. E. de Mink, R. van der Marel, T. Sohn, and N. Walborn (STScI), N. Bastian (Excellence Cluster, Munich), L. Bedin (INAF, Padua), E. Bressert (ESO), P. Crowther (University of Sheffield), A. de Koter (University of Amsterdam), C. Evans (UKATC/STFC, Edinburgh), A. Herrero (IAC, Tenerife), N. Langer (AifA, Bonn), I. Platais (JHU), and H. Sana (University of Amsterdam) Launched on 24 April 1990, the Hubble Space Telescope has provided many extraordinary images of the universe (Eagle Nebula, Antennae Galaxies, Asteroid Collision). In celebration of the 22nd Anniversary, astronomers have released this image of the Tarantula Nebula (30 Doradus, or NGC 2070) in the Large Magellanic Cloud. This is an intense star forming region containing several million stars ranging in age from several thousand years to 25 million years old. The image is approximately 650 light-years across.
Artificial molecule evolves in the lab
By NEW SCIENTIST
Added: Thu, 08 Jan 2009 00:00:00 UTC
UPDATE: Scientists develop first examples of RNA that replicates itself indefinitely
Thanks to Eric for the link.
A new molecule that performs the essential function of life – self-replication – could shed light on the origin of all living things. If that wasn't enough, the laboratory-born ribonucleic acid (RNA) strand evolves in a test tube to double itself ever more swiftly. "Obviously what we're trying to do is make a biology," says Gerald Joyce, a biochemist at the Scripps Research Institute in La Jolla, California. He hopes to imbue his team's molecule with all the fundamental properties of life: self-replication, evolution, and function.
ENDURANCE RUNNING AND ITS RELEVANCE TO SCAVENGING BY EARLY HOMININS
Article first published online: 25 OCT 2012
© 2012 The Author(s). Evolution © 2012 The Society for the Study of Evolution.
Volume 67, Issue 3, pages 861–867, March 2013
How to Cite: Ruxton, G. D. and Wilkinson, D. M. (2013), Endurance running and its relevance to scavenging by early hominins. Evolution, 67: 861–867. doi: 10.1111/j.1558-5646.2012.01815.x
- Issue published online: 5 MAR 2013
- Accepted manuscript online: 27 SEP 2012 11:00AM EST
- Received: February 27, 2012; Accepted: August 27, 2012
Keywords: human evolution; local enhancement; meat eating

It has been argued that endurance running ability may have been important in hominin evolution, giving hominins an enhanced ability to scavenge by allowing them to reach carcasses before other terrestrial vertebrate scavengers. This would have allowed them to exploit the carcass before eventually surrendering it on the arrival of potentially dangerous large terrestrial scavengers. Here, we use a simple spatial model to evaluate the ability of competitors to hominin scavengers to find carcasses. We argue that both hominin and nonhominin terrestrial scavengers would often first have been alerted to available carcasses by overflying aerial scavengers. Our model estimates that nonhominin scavengers will generally be able to reach the carcass within 30 min of detecting a plume of vultures above a nearby carcass. We argue that endurance running over periods greater than 30 min would not have provided a selective advantage to early hominins through increased scavenging opportunities. However, shorter-distance running may have been selected for, particularly if hominins could defend or usurp carcasses from other mammalian scavengers.
Why does the space shuttle returning to Earth cause two separate sonic booms?
By Kevin Pitts, April 19, 2012

I heard two talks at the American Physical Society meeting about the energy challenges facing our nation: one by Robert Rosner, former Director of Argonne National Laboratory, the second by Steve Koonin, former Undersecretary of Energy in the Obama administration. They posted lots of talks from the conference, but unfortunately they didn't post these. Much of what Koonin discussed arose from the U.S. Department of Energy's Quadrennial Technology Report (link). I can't do justice to all they discussed. It was fascinating, motivating and alarming. I'm just summarizing a couple of the takeaways. The challenges are multifaceted. You've probably heard of the technical challenges associated with energy. Can we develop renewable energy and reduce our dependence on foreign oil? Can we improve battery technology to make electric cars competitive with gasoline-driven engines? Can we find large-scale storage techniques so that our national energy system is no longer an "on demand" system? But beyond the technical challenges, there are economic, political and social issues, and all of these issues are global. First of all, worldwide demand for energy is going to continue to grow. The number of new cars sold in the United States has been decreasing for many years, with 5.5 million sold in 2009. China purchases more new cars than the U.S. does, and its rate is growing, with an expectation to reach 20 million new cars purchased in a few years! As countries improve their economic standing, they consume more energy. We are players in a world energy market. And if you subscribe to the idea that gas prices can be quickly reduced by more drilling, Koonin had some sobering statistics. The U.S. currently uses about 20 million barrels of oil per day. Optimistic scenarios say that the U.S. could improve oil production by a million barrels a day in 10 years.
If you look at history, the timescale for major changes in our energy usage (how long did it take us to stop using wood as our primary fuel? how long did it take us to stop using coal as our primary fuel?) shows that change happens over many decades, not a few years. Another example of this is improved automobile design. A new product or innovation in the auto industry takes the better part of 10 years to achieve full market penetration. Then it's another 15 years (the automobile lifespan) before it is fully adopted. So if we decided today that hybrids or electric vehicles were the *only* way to go, it would still take 25 years before everybody was driving one. Physics has played a role and can continue to play a role. More efficient energy sources could still be a game changer. New techniques to utilize nuclear power, new materials for solar power, innovative energy storage techniques – any of these could potentially be game changers. But what I learned last night was that, even if you have a game changer, it's not all rainbows and flowers. It will take time, effort and political will to benefit from it.
The other parts, other than the inverse square, were clear already before Newton, or at least were easy to guess. That the force of gravity is proportional to the mass of a small object responding to the field of another comes from Galileo's observation of the universal acceleration of free fall. If the acceleration is constant, the force is proportional to the mass. By Newton's third law, the force is equal and opposite on the two objects, so you can conclude that it should be proportional to the second mass too. The model which gives you this assumes that everything is made from some kind of universal atom, and this atom feels an inverse square attraction of some magnitude. If you sum over all the pairwise attractions in two bodies, you get an attraction which is proportional to the number of atoms in body one times the number of atoms in body two. So the only part that was not determined by simple considerations like this was the falloff rate. I should point out that if you look at two sources of a scalar field, and look at the force, it is always proportional to $g_1$ times $g_2$, where $g_1$ and $g_2$ are the propensity of each source to make a field by itself. Further, if you put two noninteracting sources next to each other, this $g$ is additive if the field is linear, essentially for the reasons described above--- the independent attractions are independent. So the proportionality to an additive body constant that you multiply over the two bodies is clear. That for gravity the $g$ is the mass was established by Galileo. Let's call the force law between the objects $F(m_1,m_2,r)$. We know that if we put the body $m_1$ in free fall, the acceleration doesn't depend on the mass, so $$ F(m_1,m_2,r) = m_1 G(m_2,r) $$ So that the mass will cancel in Newton's law to give a universal acceleration.
This gives you the relation $$ F(a m_1, m_2, r) = a F(m_1,m_2,r) $$ We know that if we put body 2 in free fall, the same cancellation happens, but we also know Newton's third law: $F(m_1,m_2,r) = F(m_2,m_1,r)$, so that $$ F(m_1, a m_2, r) = a F(m_1,m_2,r) $$ So you now write $$ F(m_1 \times 1, m_2 \times 1, r) = m_1 F(1, m_2 \times 1, r) = m_1 m_2 F(1,1,r) $$ And this tells you that the force is proportional to the masses times a function of r. The form of the function is undetermined. An independent argument for the scaling is that if you consider the object $m_1$ as composed of two nearby independent objects of mass $m_1/2$, then $$ F(m_1/2, m_2, r) + F(m_1/2, m_2, r) = F(m_1,m_2,r)$$ Then the same conclusion follows. These types of scaling arguments are second nature by now, and they are automatically done by matching units. So if you have a force per unit mass, the force between two massive particles must be per unit mass 1 and per unit mass 2. This general argument fails for direct three-body forces, where the force between 3 bodies is not decomposable as a sum of forces between the pairs of bodies individually. There are no macroscopic examples, since the pairwise additivity is true for linear fields, but the force between nucleons has a 3-body component.
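The atom-counting argument can be checked numerically. The sketch below is my own construction (the function names are invented): it models two bodies as collections of unit "atoms," all pairs attracting by an inverse square at a common separation r, and confirms that the total force scales as the product of the atom counts, with the r-dependence left entirely to the pair law:

```python
import math

def pair_force(r):
    """Inverse-square attraction between two unit 'atoms' a distance r apart."""
    return 1.0 / r**2

def body_force(n1, n2, r):
    """Total attraction between a body of n1 atoms and one of n2 atoms,
    in the far-field idealization where every atom pair sits at separation r."""
    return sum(pair_force(r) for _ in range(n1) for _ in range(n2))

r = 10.0
f_11 = body_force(1, 1, r)
# Doubling either body doubles the force: F(a m1, m2, r) = a F(m1, m2, r) ...
assert math.isclose(body_force(2, 3, r), 2 * body_force(1, 3, r))
assert math.isclose(body_force(2, 3, r), 3 * body_force(2, 1, r))
# ... so F(m1, m2, r) = m1 * m2 * F(1, 1, r):
assert math.isclose(body_force(4, 5, r), 4 * 5 * f_11)
print("pairwise additivity gives F proportional to m1 * m2")
```

Nothing in the check depends on the 1/r^2 in pair_force; replace it with any function of r alone and the assertions still pass, which is exactly the statement that these considerations fix the mass dependence but not the falloff rate.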
Large parts of the periodic table cannot be cooled by current laser-based methods. We investigate whether zero-energy fragmentation of laser-cooled fluorides is a potential source of ultracold fluorine atoms. We report new ab initio calculations on the lowest electronic states of the BeF diatomic molecule including spin–orbit coupling, the calculated minima for the valence electronic states being within 1 pm of the spectroscopic values. A four-colour cooling scheme based on the A2Π ← X2Σ+ transition is shown to be feasible for this molecule. Multi-Reference Configuration Interaction (MRCI) potentials of the lowest-energy Rydberg states are reported for the first time and found to be in good agreement with experimental data. A series of multi-pulse excitation schemes from a single rovibrational level of the cooled molecule are proposed to produce cold fluorine atoms.
Several factors need to be kept in mind when choosing indicators (and, where appropriate, targets) for SEA:
- should they be input or outcome indicators/targets?
- should social and economic indicators/targets be included?

State, pressure, response; input, outcome

Indicators and targets can be divided into three types:
- state indicators that describe the state of the environment, for instance ambient NOx levels;
- pressure indicators that describe human pressures on the environment, for instance emissions of NOx; and
- response indicators that describe responses to these pressures, for instance 'percentage of cars with catalytic converters' or 'bus frequency on route X'.

A range of organisations, notably the OECD, use these distinctions when describing their indicators. The distinctions between state, pressure and response indicators are not always so clear in practice, particularly for social and economic issues. However, they illustrate an important concept in SEA: the distinction between inputs and outcomes. Outcomes are end-states, for instance 'clean air' or 'healthy people'. Inputs are the things that authorities do to try to achieve desired outcomes, for instance 'pedestrianisation of the city centre' or 'provision of more playing fields'. Inputs are means by which outcomes are reached. Outcome = state; input = response.

An easy way of distinguishing between input and outcome is by asking "why is this being proposed?" until no more answers can be found:

Q: Why are you proposing a new power station?
A: To produce more electricity.
Q: Why do you need to produce more electricity?
A: To heat houses.
Q: Why do you need to heat houses?
A: To keep people healthy.
Q: Why do you need to keep people healthy? ...

(Keeping people healthy is the outcome; all the others are inputs.)

Whether input or outcome indicators are used in SEA may well be important. Below are two scenarios.
In each case, the city planners could present information on ambient NOx levels (state) or bus frequency (response). Which indicator should they use in each case? For both scenarios, both ambient NOx levels and bus frequency are needed. In Scenario 1, there are many buses but still high pollution levels. This may be because of topographic conditions, or because the buses or factories are very polluting. In Scenario 2, air quality is good despite low bus frequency, possibly for the opposite reasons. This suggests that, for any environmental issue, one should really look at multiple indicators for each issue, e.g. NOx emissions and the various responses to NOx emissions. In practice, however, this tends to be infeasible in terms of resources and funding.

A key role of SEA is to ensure that planners consider 1. the full range of desired outcomes, and 2. the full range of realistic inputs to achieve them. The topics listed in Unit 3 - air, water, soil etc. - suggest some of the outcomes that a strategic action may ultimately want to achieve. The same outcome can often be achieved through different means/inputs, which in turn may have very different environmental impacts. For instance, people can be kept warm (a social outcome) through more power stations or better insulation in people's houses, but power stations will have much greater impacts on air quality and the landscape than insulation will. Planners often focus very quickly on inputs (Park and Ride sites, a new road to bypass Town X, electric vehicles for all council employees) without checking whether other, less environmentally damaging, inputs exist. SEA should test whether there are other means of achieving the same outcomes. Planners also often make assumptions about the links between inputs and outcomes. It may be worthwhile, through the SEA process, to check that these links really are there: Will new buses really attract people out of their cars? Will they really have cleaner emissions than the cars they replace?
supercritical fluid state

In Table 1 most of the important chemical equilibrium separation methods are subdivided in terms of the two insoluble phases (gas, liquid, or solid). A supercritical fluid is a phase that occurs for a gas above a specific temperature and pressure (the critical point) such that the gas will no longer condense to a liquid regardless of how high the pressure is raised. It is a state intermediate between a gas and a...

Gaseous substances beyond the critical point become a supercritical fluid, a state that is more dense than a gas but less dense than a liquid. A supercritical fluid can thus dissolve (i.e., solvate) species better than a gas while being less viscous than a liquid. Supercritical-fluid chromatography is used to separate substances that are relatively nonpolar...

...at normal laboratory conditions with molecular weights below 1,000 are best separated with liquid-solid or liquid-liquid systems. Lower members of the molecular weight range are amenable to supercritical-fluid separations. Size-exclusion methods are involved at molecular weights above 1,000. Field-flow fractionation extends the size range to colloids and microscopic particles.

...temperature and pressure (374 °C [705.2 °F], 218 atmospheres). Above its critical temperature, the distinction between the liquid and gaseous states of water disappears: it becomes a supercritical fluid, the density of which can be varied from liquidlike to gaslike by varying its temperature and pressure. If the density of supercritical water is high enough, ionic solutes are...
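The classification above is a simple threshold test against the critical point. A minimal sketch, using the critical point of water quoted in the article (374 °C, 218 atmospheres); the function name and example conditions are illustrative:

```python
# Classify whether water is in the supercritical state, using the
# critical point quoted in the article: 374 deg C and 218 atmospheres.
T_CRIT_C = 374.0    # critical temperature of water, deg C
P_CRIT_ATM = 218.0  # critical pressure of water, atm

def is_supercritical(temp_c: float, pressure_atm: float) -> bool:
    """True when both temperature and pressure exceed the critical point."""
    return temp_c > T_CRIT_C and pressure_atm > P_CRIT_ATM

print(is_supercritical(400.0, 250.0))  # above the critical point -> True
print(is_supercritical(100.0, 1.0))    # a boiling kettle -> False
```

Note that both conditions must be exceeded: hot low-pressure steam and cold high-pressure liquid are both ordinary phases.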
boundary layer

boundary layer, in fluid mechanics, the thin layer of a flowing gas or liquid in contact with a surface, such as that of an airplane wing or of the inside of a pipe. The fluid in the boundary layer is subjected to shearing forces. A range of velocities exists across the boundary layer, from the free-stream maximum down to zero where the fluid is in contact with the surface. Boundary layers are thinner at the leading edge of an aircraft wing and thicker toward the trailing edge. The flow in such boundary layers is generally laminar at the leading or upstream portion and turbulent in the trailing or downstream portion. See also laminar flow; turbulent flow.
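The thickening from leading edge to trailing edge can be estimated with the standard flat-plate (Blasius) result for laminar flow, delta ≈ 5x/√Re_x with Re_x = Ux/ν. This formula and the airflow values below are textbook assumptions, not from the article:

```python
import math

# Laminar flat-plate boundary-layer thickness (Blasius estimate):
#   delta ~ 5 x / sqrt(Re_x),  Re_x = U x / nu.
# U and nu below are illustrative: ~50 m/s airflow, air at room temperature.
def bl_thickness(x_m: float, u_inf: float = 50.0, nu: float = 1.5e-5) -> float:
    re_x = u_inf * x_m / nu
    return 5.0 * x_m / math.sqrt(re_x)

# Thickness grows with distance from the leading edge, as the article notes.
print(bl_thickness(0.01), bl_thickness(0.5))
```

In reality the layer also transitions to turbulence downstream, where it grows faster than this laminar estimate.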
Analyzing bottlenecks in chemical reactions His goal is to determine with greater accuracy the nature of the bottlenecks, ultimately providing scientists with a far greater comprehension of important reactions, allowing the researchers to better control them. Fields that are expected to benefit from this research range from atmospheric chemistry to automotive engine design. When it comes to understanding the spectrum of chemical reactions, King says highly accurate data are needed for just a few key reactions. "Just as there are multiple routes that will get you from UB to Kleinhans Music Hall near downtown Buffalo, there are multiple ways to get from reactants to products," explains King. "Molecules may encounter minor traffic jams while approaching and leaving the bottleneck, but these are of little importance compared with the time it takes to get through the one particular elementary reaction that has a high-energy barrier: the bottleneck itself." Identifying accurate rates for these key reactions, which involve the breaking of a chemical bond and the formation of a new one, is critical. During a chemical reaction, King notes that molecules go through a transition state. "To predict the speed of the reaction, one needs to know the shape and energy of the molecule in the transition state," says King. That's easier said than done, however. Even using state-of-the-art experimental techniques, it is usually impossible to observe molecules going through the transition state since they exist in that state for only a fraction of an instant. However, by executing extremely complex calculations based on the theory of quantum mechanics, King says, super-computers allow computational chemists to predict the energy and structure of these important, but fleeting, intermediate molecules or molecular fragments. 
"As computational chemists, we treat molecules like mechanical systems that consist of particles, nuclei and electrons, in order to examine the mathematical relationships among them," explains King. "If you can ‘solve' those relationships, then you could, in principle, answer almost any question." He notes, however, that scientists never really solve these equations. "We always make mathematical approximations," he explains. "These approximations have gotten awfully good over the past few years, but we'd like to make them even better." With funding from the National Science Foundation, King and his colleagues at UB are developing a method that they hope will make those approximations from 10 to 100 times more accurate, a goal they hope to attain more quickly thanks to the power of the Dell cluster. As a first test of the new method, the UB team is studying a simple molecule composed of two carbon atoms, which has the distinction of being a stable molecule, so it can be studied in the laboratory; it also has many similarities to the fleeting intermediate species that occur in the very short-lived transition state that chemists long to study. "It has a lot in common with a typical transition state, so we picked it out as a nice test case," says King. "Yet even with such a simple molecule, this problem is too big to run on a single processor." King's team has found that it takes several days to complete one run using a dozen processors on one of the UB Center for Computational Research's parallel computers. "On the new Dell cluster, instead of taking several days, it takes just a few hours," King observes. In order to solve all of the equations related to the dissociation of the two carbon atoms, the UB team developed a mathematical expression called a "wave function" that contains a whopping 140 million terms. 
Their findings generated on the Dell cluster will be compared to the findings obtained by laboratory spectroscopic analyses, which measure the diatomic carbon molecule. This brings the UB team closer to developing a robust method that would allow scientists to quickly figure out the few terms that are critical to determining the reaction rate and the mechanisms for a particular reaction. As for how long it will be before the team develops such a method, King estimates it could take just a few years. "Between the calculations we're making using the supercomputers and the observations that have been made in the lab," he says, "we're getting fantastically good agreement."
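The link King describes between barrier height and reaction speed is commonly captured by the Arrhenius relation, k = A·exp(−Ea/RT). A minimal sketch; the prefactor and barrier values are illustrative assumptions, not numbers from King's work:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def arrhenius_rate(a_factor: float, ea_j_mol: float, temp_k: float) -> float:
    """Arrhenius estimate of a rate constant from a barrier height Ea."""
    return a_factor * math.exp(-ea_j_mol / (R * temp_k))

# Illustrative: a 100 kJ/mol barrier at 300 K vs. 1000 K. At the higher
# temperature, far more molecules make it over the bottleneck per second.
print(arrhenius_rate(1e13, 100e3, 300.0))
print(arrhenius_rate(1e13, 100e3, 1000.0))
```

This exponential sensitivity to Ea is why accurate transition-state energies matter so much: a small error in the barrier produces a large error in the predicted rate.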
New CO2 Sucker Could Help Clear the Air Researchers in California have produced a cheap plastic capable of removing large amounts of carbon dioxide (CO2) from the air. Down the road, the new material could enable the development of large-scale batteries and even form the basis of "artificial trees" that lower atmospheric concentrations of CO2 in an effort to stave off catastrophic climate change. These long-term goals attracted the researchers, led by George Olah, a chemist at the University of Southern California (USC) in Los Angeles. Olah, who won the 1994 Nobel Prize in chemistry, has long envisioned future society relying primarily on fuel made from methanol, a simple liquid alcohol. As easily recoverable fossil fuels become scarce in the decades to come, he suggests that society could harvest atmospheric CO2 and combine it with hydrogen stripped from water to generate a methanol fuel for myriad uses. Olah and his colleagues also work on making cheap, iron-based batteries that can store excess power generated by renewable energy sources and feed it into the electrical grid during times of peak demand. To function, the iron batteries grab oxygen from the air. But if even tiny amounts of CO2 get into the reaction, it kills the battery. In recent years, researchers have come up with good CO2 absorbers made from porous solids called zeolites and metal organic frameworks. But they're expensive. So Olah and his colleagues set out to find a cheaper alternative. They turned to polyethylenimine (PEI), a cheap polymer that is a decent CO2 absorber. But it only grabs CO2 at its surface. To boost PEI's surface area, the USC team dissolved the polymer in a methanol solvent and spread it atop a batch of fumed silica, a cheap, industrially produced porous solid made from microscopic droplets of glass fused together. When the solvent evaporated, it left solid PEI with a high surface area. 
When the researchers tested the new material's CO2-grabbing abilities, they found that in humid air—the kind present in most ambient conditions—each gram of the material sopped up an average of 1.72 millimoles of CO2. That's well above the 1.44 millimoles per gram absorbed by a recent rival made from aminosilica and among the highest levels of CO2 absorption from air ever tested, the team reported last month in the Journal of the American Chemical Society. Once saturated with CO2, the PEI-silica combo is easy to regenerate: the CO2 floats away after the polymer is heated to 85°C. Other commonly used solid CO2 absorbers must be heated to over 800°C to drive off the CO2.
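To put such a capacity in everyday units, assuming the figure is 1.72 mmol of CO2 per gram of sorbent (millimoles per gram is the unit such absorbents are typically reported in), the conversion via the molar mass of CO2 is simple arithmetic:

```python
# Convert a CO2 sorption capacity to grams of CO2 per kilogram of sorbent.
# Assumes a capacity of 1.72 mmol/g and the molar mass of CO2, 44.01 g/mol.
CAPACITY_MOL_PER_G = 1.72e-3   # mol CO2 per gram of sorbent
CO2_MOLAR_MASS = 44.01         # g/mol

grams_co2_per_kg = CAPACITY_MOL_PER_G * CO2_MOLAR_MASS * 1000.0
print(round(grams_co2_per_kg, 1))  # ~75.7 g of CO2 per kg of sorbent
```

So each kilogram of the material would hold roughly 75 g of CO2 before needing the mild 85°C regeneration step.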
Geographical Range: Malaysia (in southeastern Asia)
Scientific Name: Heteropteryx dilatata
Conservation Status: Not listed by IUCN

Talk about specialized camouflage! Depending on the gender, these fascinating insects resemble either leaves OR twigs. The body of a female Malaysian walkingstick looks just like a green leaf, while the smaller male does a great impersonation of a brown twig. Both have evolved these clever body shapes to escape detection by predators. When they sit very still in a bush or tree, they look just like the plant they're sitting on. If by chance they're attacked by a predator, the insects can still defend themselves by kicking out with their sharp, spiny legs. The Malaysian walkingstick is also known by the more exotic name "jungle nymph." It is the Asian cousin of the walkingsticks we find here in Missouri.
BIO_s_file — Section: OpenSSL (3)

NAME

BIO_s_file, BIO_new_file, BIO_new_fp, BIO_set_fp, BIO_get_fp, BIO_read_filename, BIO_write_filename, BIO_append_filename, BIO_rw_filename - FILE bio

SYNOPSIS

 BIO_METHOD *BIO_s_file(void);
 BIO *BIO_new_file(const char *filename, const char *mode);
 BIO *BIO_new_fp(FILE *stream, int flags);
 BIO_set_fp(BIO *b, FILE *fp, int flags);
 BIO_get_fp(BIO *b, FILE **fpp);
 int BIO_read_filename(BIO *b, char *name)
 int BIO_write_filename(BIO *b, char *name)
 int BIO_append_filename(BIO *b, char *name)
 int BIO_rw_filename(BIO *b, char *name)

DESCRIPTION

BIO_s_file() returns the BIO file method. As its name implies it is a wrapper round the stdio FILE structure, and it is a source/sink BIO.

Calls to BIO_read() and BIO_write() read and write data to the underlying stream. BIO_gets() and BIO_puts() are supported on file BIOs.

BIO_flush() on a file BIO calls the fflush() function on the wrapped stream. BIO_reset() attempts to change the file pointer to the start of file using fseek(stream, 0, 0). BIO_seek() sets the file pointer to position ofs from the start of file using fseek(stream, ofs, 0). BIO_eof() calls feof().

Setting the BIO_CLOSE flag calls fclose() on the stream when the BIO is freed.

BIO_new_file() creates a new file BIO with mode mode; the meaning of mode is the same as for the stdio function fopen(). The BIO_CLOSE flag is set on the returned BIO.

BIO_new_fp() creates a file BIO wrapping stream. Flags can be: BIO_CLOSE, BIO_NOCLOSE (the close flag), BIO_FP_TEXT (sets the underlying stream to text mode; default is binary, and this only has any effect under Win32).

BIO_set_fp() sets the fp of a file BIO to fp. flags has the same meaning as in BIO_new_fp(); it is a macro.

BIO_get_fp() retrieves the fp of a file BIO; it is a macro.

BIO_seek() is a macro that sets the position pointer to offset bytes from the start of file. BIO_tell() returns the value of the position pointer.
NOTES

When wrapping stdout, stdin or stderr, the underlying stream should not normally be closed, so the BIO_NOCLOSE flag should be set.

EXAMPLES

File BIO "hello world":

 BIO *bio_out;
 bio_out = BIO_new_fp(stdout, BIO_NOCLOSE);
 BIO_printf(bio_out, "Hello World\n");

Alternative technique:

 BIO *bio_out;
 bio_out = BIO_new(BIO_s_file());
 if (bio_out == NULL) /* Error ... */
 if (!BIO_set_fp(bio_out, stdout, BIO_NOCLOSE)) /* Error ... */
 BIO_printf(bio_out, "Hello World\n");

Write to a file:

 BIO *out;
 out = BIO_new_file("filename.txt", "w");
 if (!out) /* Error occurred */
 BIO_printf(out, "Hello World\n");
 BIO_free(out);

Alternative technique:

 BIO *out;
 out = BIO_new(BIO_s_file());
 if (out == NULL) /* Error ... */
 if (!BIO_write_filename(out, "filename.txt")) /* Error ... */
 BIO_printf(out, "Hello World\n");
 BIO_free(out);

RETURN VALUES

BIO_s_file() returns the file BIO method.

BIO_new_file() and BIO_new_fp() return a file BIO or NULL if an error occurred.

BIO_set_fp() and BIO_get_fp() return 1 for success or 0 for failure (although the current implementation never returns 0).

BIO_seek() returns the same value as the underlying fseek() function: 0 for success or -1 for failure.

BIO_tell() returns the current file position.

BUGS

BIO_reset() and BIO_seek() are implemented using fseek() on the underlying stream. The return value for fseek() is 0 for success or -1 if an error occurred; this differs from other types of BIO, which will typically return 1 for success and a non-positive value if an error occurred.

SEE ALSO

BIO_seek(3), BIO_tell(3), BIO_reset(3), BIO_flush(3), BIO_read(3), BIO_write(3), BIO_puts(3), BIO_gets(3), BIO_printf(3), BIO_set_close(3), BIO_get_close(3)
Molecular clouds are the sites of all star formation within our Galaxy, and thus the study of the physical and kinematic properties of the gas in these clouds is essential to understanding the process by which new stars are born. The molecular gas can be readily probed using millimeter- and submillimeter-wavelength spectroscopy. In addition to providing the distribution of molecular gas in this region, the high-resolution spectra provide information on the cloud kinematics. This image shows a region of the Taurus Molecular Cloud Complex in emission from the J=1-0 transitions of 12CO and 13CO. The 12CO emission is color coded, with red representing gas moving away, blue gas moving toward us, and green gas moving at the average velocity of the cloud. Photo Credit: Mark Heyer
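The color coding works because the line frequency is Doppler-shifted by the gas motion. A minimal sketch of the conversion; the 12CO J=1-0 rest frequency (115.2712 GHz) is a standard value, not stated in the caption, and the observed frequency below is illustrative:

```python
# Radial velocity from the Doppler shift of the 12CO J=1-0 line.
C_KM_S = 299_792.458   # speed of light, km/s
F_REST_GHZ = 115.2712  # 12CO J=1-0 rest frequency (standard value)

def radial_velocity_km_s(f_obs_ghz: float) -> float:
    """Positive = receding (redshifted, 'red'), negative = approaching ('blue')."""
    return C_KM_S * (F_REST_GHZ - f_obs_ghz) / F_REST_GHZ

# A cloud observed 2 MHz below the rest frequency is receding at ~5 km/s.
print(round(radial_velocity_km_s(115.2692), 1))
```

Mapping each spectral channel to such a velocity is what lets a single image encode both the gas distribution and its motion.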
A year and a half ago, NASA announced that one of its scientists, Felisa Wolfe-Simon, had found a bacterium that could use arsenic instead of phosphorus in its DNA. This revelation, published in Science, had enormous implications for our understanding of what's necessary for life—we've always thought phosphorus was essential and arsenic poisonous, and having that disproven might mean life could exist in environments where it had been thought impossible. Almost immediately, though, scientists and science journalists began to pick apart this paper. DISCOVER blogger Carl Zimmer rounded up the case against the finding in a Slate article shortly after the paper's publication. Ever since, he's kept track of the story's evolution—including experiments posted by microbiologist Rosie Redfield on her blog that provided evidence against the claim—here on his blog. All the way along, Wolfe-Simon refused to comment on Redfield's experiments, saying she would wait until they were published by a peer-reviewed journal. Now, Redfield's paper and one other paper finding no evidence of arsenic life have been published by Science, the same journal that published the original claim. The researchers found no evidence of arsenic being used in the bacterium's DNA. The authors note that it can survive at very low concentrations of phosphorus and can handle concentrations of arsenic an order of magnitude higher than other cells, which is neat, but not evidence that it can actually use arsenic—chemical tests showed that there was no arsenic in the bacterium's DNA. Wolfe-Simon, now open to talking about the work, said something very peculiar in corresponding with Alan Boyle of MSN: "There is nothing in the data of these new papers that contradicts our published data." It's hard to see how she can think that; however, she and her collaborators appear to be alone in that conviction.
Most of the world experiences drastic seasonal variation in the amount of food that is available throughout the year. In deep-sea habitats, as well as at the poles, a single or sometimes a few pulses of food provide nourishment for the entire year. Now you may wonder what that means to you. Why does it matter what happens in the deep, dark ocean or far away in a frozen wasteland? The answer is that these communities decide how much of the carbon that we are putting into the atmosphere stays in the ocean only to be released again, and how much is buried for geologic time periods (meaning largely beyond the age of humans). However, we know very little about how the biology of these habitats actually functions: what determines whether they break down and release the carbon and nitrogen, or bury it for, as far as humans are concerned, ever? Quite simply, that is the goal of this research. The poles provide access to communities that have similar seasonality to the deep sea but at a depth that allows manipulative research. Most biology does poorly with a rapid ascent of 4,000 m (12,000 ft), the average depth of the ocean; in the Antarctic, however, we can take a community that is adapted to a similarly long period of starvation from a mere 20 m (60 ft) below the sea surface and experiment with it to figure out how it 'functions.' To do this we head to the most southerly place on the planet where we can access the ocean through SCUBA: McMurdo Station. This site is not only far south but holds another important community that is an enigma: the densest habitat on earth. Spiophanes tcherniai is a species of polychaete that occurs in incredible densities. To be exact, in every square meter of sediment there are 150,000 to 180,000 individuals of this species, as well as a variety of other species that we call 'macrofauna.' The macrofauna are visible to the eye but amazing under a microscope, and all are, by definition, greater than 0.3 mm in size.
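The quoted density of 150,000 to 180,000 individuals per square meter is easier to picture after a quick unit conversion (pure arithmetic on the article's numbers):

```python
# Spiophanes tcherniai density quoted in the text, per square meter.
low, high = 150_000, 180_000

# 1 m^2 = 10,000 cm^2, so per square centimeter:
per_cm2_low = low / 10_000
per_cm2_high = high / 10_000
print(per_cm2_low, per_cm2_high)  # 15.0 to 18.0 worms in every cm^2
```

That is, every fingertip-sized patch of sediment holds over a dozen of these worms.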
It's a diverse community, with small shrimp-like animals, worms of every shape and color, not to mention clams and anemones. They co-occur with an incredible variety of bacteria, and what we really want to know with this research is whether the bacteria are competing with the animals or facilitating their persistence, allowing this incredible density in a veritable desert of food. Understanding the role of bacteria is key. Animals have a relatively simple diet of what they can digest. Easy-to-digest food (which we call labile) is usually fresh and not the most abundant form of carbon in the oceans. Refractory compounds are the dominant source of potential food but cannot be digested by animals without help. Cows, for example, get help from special bacteria in their rumen, giving them greater access to the food in grass. In the Antarctic, the shortage of food is really a shortage of labile food. Refractory compounds are present year-round in the sediment. Yet both bacteria and animals prefer the labile compounds, and that leads to competition for a limiting food resource. I should mention that the Antarctic is cold. The water is as cold as salt water can be without freezing, a balmy -1.8°C or 28°F. Prevailing knowledge would suggest that bacteria have enzymes which are not very good at those cold temperatures, and thus that the animals are better at digesting the labile compounds. Since the bacteria cannot digest the fresh food, the food stays fresh throughout the year until the next pulse of food. The other idea is that the bacteria and animals compete for the labile compounds, yet as the food turns refractory, the animals switch and start eating the bacteria. During this research we will test to see which of these actually occurs. Do the bacteria and animals both consume the same labile food source when it is present, or are the bacteria inferior competitors?
To do this we will collect many sediment cores (called microcosms) from the dense Spiophanes beds and keep them in the lab for 6 weeks. We will knock out the bacteria using a series of antibiotics and see how the macrofaunal communities differ with and without bacteria, as well as how much carbon gets buried versus how much is released. With this we hope to gain a better understanding of how the world around us works. This research is supported by the National Science Foundation, Office of Polar Programs. The contents of this website are not in any way representative of either the NSF or Oregon State University.
Pascal is an influential imperative and procedural programming language, designed in 1969 and published in 1970 by Niklaus Wirth as a small and efficient language. It was largely (but not exclusively) intended to teach students structured programming and data structuring. Pascal is a descendant of ALGOL, but it was implemented on a wide range of architectures, from PDP-11s and IBM PCs to CDC Cyber and IBM System/370 mainframes. Pascal probably reached critical mass around the time Borland released Turbo Pascal in 1983. Pascal is a purely procedural language and includes control statements introduced by reserved words such as if, while and for. However, Pascal also has many data-structuring facilities and other abstractions not included in ALGOL 60, such as type definitions, records, pointers, enumerations, and sets.
- The Pascal Language
- ISO 7185 Standard Pascal
- Free Pascal (open-source compiler for Pascal and Object Pascal)
- Free Pascal/Delphi Programming Books
Evidence of a Black Hole in the Center of the Milky Way: The picture below is a view of the very center of the Milky Way obtained in mid-2002 from a combination of three infrared wavebands. The compact objects are stars, and two yellow arrows mark the location of Sagittarius A*, the black hole candidate within the radio source Sagittarius A. We can see objects within one light year of the galactic center! In this image one light year equals 8 arc seconds. The next picture is another infrared image looking even more closely at the Milky Way's center. Black hole candidate Sagittarius A* is marked with a cross. The width of the picture is about 70 light days, or about 0.2 light years. The next amazing picture shows the observed orbit of a star called S2 around the black hole candidate Sagittarius A*. Star S2 is about 7 times the diameter of our Sun and has 15 times the mass of our Sun. The elliptical orbit (with an eccentricity of 0.87) of S2 has been observed since 1992, and about two thirds of its 15.2-year orbit has been traced out. The closest approach happened in early 2002 when S2 came within about 17 light hours of Sagittarius A*! 17 light hours is about 123 astronomical units, or about 1.5 solar system diameters! This is a close approach, but S2 was still about 64 times more distant from Sagittarius A* than the tidal disruption distance of 16 light minutes. From the observed orbit of S2 and other nearby masses the mass of Sagittarius A* can be calculated to be 2.6 million solar masses, with an uncertainty of 0.2 million solar masses. The conclusion is that Sagittarius A* is a 2.6 million solar mass object smaller than 1.5 solar system diameters! Sure does look like a black hole!
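A back-of-envelope check of the quoted mass uses Kepler's third law, which in solar units reads M ≈ a³/T² (a in AU, T in years). The semi-major axis is estimated here from the quoted perihelion distance and eccentricity; this is a rough sketch, not the published orbit fit:

```python
# Rough central-mass estimate from S2's orbit via Kepler's third law.
# In solar units: M [solar masses] = a^3 / T^2, with a in AU and T in years.
ECC = 0.87         # orbital eccentricity (from the text)
R_PERI_AU = 123.0  # closest approach, ~17 light hours (from the text)
PERIOD_YR = 15.2   # orbital period (from the text)

a_au = R_PERI_AU / (1.0 - ECC)      # semi-major axis from perihelion distance
mass_solar = a_au**3 / PERIOD_YR**2

print(f"a ~ {a_au:.0f} AU, M ~ {mass_solar:.2e} solar masses")
```

This crude estimate gives a few million solar masses, the same order as the quoted 2.6 million; the published value comes from fitting the full orbit, not just the perihelion and eccentricity.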
The particles category implements the facilities necessary to describe the physical properties of particles for the simulation of particle-matter interactions. All particles are based on the G4ParticleDefinition class, which describes basic properties such as mass, charge, etc., and also allows the particle to carry a list of processes to which it is sensitive. A first-level extension of this class defines the interface for particles that carry cuts information, for example range cut versus energy cut equivalence. A set of virtual, intermediate classes for leptons, bosons, mesons, baryons, etc., allows the implementation of concrete particle classes which define the actual particle properties and, in particular, implement the actual range versus energy cuts equivalence. All concrete particle classes are instantiated as singletons to ensure that all physics processes refer to the same particle properties. The object-oriented design of the 'particles' related classes is shown in the following class diagrams. The diagrams are described in the Booch notation. Figure 2.13 shows a general overview of the particle classes. Figure 2.14 shows classes related to the particle table. Figure 2.15 shows the classes related to the particle decay table.

Revision history:
- 27.06.05: section on design philosophy added (from Geant4 general paper) by D.H. Wright
- Dec. 2006: conversion from LaTeX to DocBook version by K. Amako
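The singleton requirement above guarantees that every physics process sees one shared definition per particle. A minimal sketch of that idea (Geant4 itself is C++; Python is used here only for illustration, and the class and attribute names are invented):

```python
# Sketch of the singleton idea behind concrete particle classes:
# one shared instance per particle name, so every process that asks
# for "e-" gets the very same definition object.
class ParticleDefinition:
    _instances: dict = {}

    def __new__(cls, name: str, mass_mev: float, charge: int):
        if name not in cls._instances:
            inst = super().__new__(cls)
            inst.name, inst.mass_mev, inst.charge = name, mass_mev, charge
            cls._instances[name] = inst
        return cls._instances[name]

e1 = ParticleDefinition("e-", 0.511, -1)
e2 = ParticleDefinition("e-", 0.511, -1)
print(e1 is e2)  # True: both names refer to the same definition
```

Sharing one instance means a change to a particle's properties (or its process list) is immediately visible everywhere, which is exactly why Geant4 enforces the pattern.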
REMAINDER(3)              Linux Programmer's Manual              REMAINDER(3)

NAME
       drem, dremf, dreml, remainder, remainderf, remainderl - floating-point remainder function

SYNOPSIS
       #include <math.h>

       /* The C99 versions */
       double remainder(double x, double y);
       float remainderf(float x, float y);
       long double remainderl(long double x, long double y);

       /* Obsolete synonyms */
       double drem(double x, double y);
       float dremf(float x, float y);
       long double dreml(long double x, long double y);

       Link with -lm.

   Feature Test Macro Requirements for glibc (see feature_test_macros(7)):

       remainder():
           _SVID_SOURCE || _BSD_SOURCE || _XOPEN_SOURCE >= 500 ||
           _XOPEN_SOURCE && _XOPEN_SOURCE_EXTENDED || _ISOC99_SOURCE ||
           _POSIX_C_SOURCE >= 200112L;
           or cc -std=c99
       remainderf(), remainderl():
           _BSD_SOURCE || _SVID_SOURCE || _XOPEN_SOURCE >= 600 ||
           _ISOC99_SOURCE || _POSIX_C_SOURCE >= 200112L;
           or cc -std=c99
       drem(), dremf(), dreml():
           _SVID_SOURCE || _BSD_SOURCE

DESCRIPTION
       The remainder() function computes the remainder of dividing x by y. The return value is x-n*y, where n is the value x / y, rounded to the nearest integer. If the absolute value of x-n*y is 0.5, n is chosen to be even.

       These functions are unaffected by the current rounding mode (see fenv(3)).

       The drem() function does precisely the same thing.

RETURN VALUE
       On success, these functions return the floating-point remainder, x-n*y. If the return value is 0, it has the sign of x.

       If x or y is a NaN, a NaN is returned.

       If x is an infinity, and y is not a NaN, a domain error occurs, and a NaN is returned.

       If y is zero, and x is not a NaN, a domain error occurs, and a NaN is returned.

ERRORS
       See math_error(7) for information on how to determine whether an error has occurred when calling these functions.

       The following errors can occur:

       Domain error: x is an infinity and y is not a NaN
              An invalid floating-point exception (FE_INVALID) is raised. These functions do not set errno for this case.

       Domain error: y is zero
              errno is set to EDOM. An invalid floating-point exception (FE_INVALID) is raised.

CONFORMING TO
       The functions remainder(), remainderf(), and remainderl() are specified in C99 and POSIX.1-2001. The function drem() is from 4.3BSD. The float and long double variants dremf() and dreml() exist on some systems, such as Tru64 and glibc2. Avoid the use of these functions in favor of remainder() etc.

BUGS
       The call

           remainder(nan(""), 0);

       returns a NaN, as expected, but wrongly causes a domain error; it should yield a silent NaN.

EXAMPLE
       The call "remainder(29.0, 3.0)" returns -1.

SEE ALSO
       div(3), fmod(3), remquo(3)

COLOPHON
       This page is part of release 3.27 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at http://www.kernel.org/doc/man-pages/.

2010-09-20                                                      REMAINDER(3)
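The EXAMPLE above is easy to check from Python, whose math.remainder() implements the same IEEE-style remainder (n rounded to nearest, ties to even):

```python
import math

# IEEE-style remainder: x - n*y with n = x/y rounded to the nearest integer.
# 29/3 = 9.67 -> n = 10, so remainder = 29 - 30 = -1, as the man page states.
print(math.remainder(29.0, 3.0))   # -1.0

# Halfway case: 3.5/1.0 is equidistant from 3 and 4, so the even n (4)
# is chosen, giving -0.5 rather than 0.5.
print(math.remainder(3.5, 1.0))    # -0.5
```

Note this differs from fmod(), which truncates n toward zero and would give 2.0 for fmod(29.0, 3.0).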
Orange Hue Before Hail

Name: Rance S.
Just curious as to why sometimes before it hails the surrounding air and sky exhibit an orangeish hue...

Orange, red, blue, green, and even purple!! Hail is an indicator of intense weather: the clash of hot moist air and cold dry air that becomes "inverted," that is, the less dense hot moist air near the surface of the earth with the dense, dry, cold air above. When this arrangement becomes unstable, turbulence results, and the turbulence can pick up dust and dirt and suck it up into the upper atmosphere. Depending upon conditions, sunlight is scattered off the suspended particles and can produce some intensely colored clouds. But head for the basement!!

The colors you see in the sky are the result of several influences, including time of day, angle of the sun, cloud height and thickness, and even your location relative to the storm. Some storm clouds exhibit a greenish cast before a hailstorm. So there is no good correlation between "sky color" and severe weather, as there are too many other variables that can account for those appearances.
Wendell Bechtold, meteorologist
Forecaster, National Weather Service Weather Forecast Office, St. Louis, MO

Update: June 2012
<urn:uuid:51992091-da69-4564-8de6-92d89ecad338>
3.21875
284
Knowledge Article
Science & Tech.
50.712247
A wave of recent online videos showing the bottom of a dropped Slinky hovering in midair has scientists taking a closer look at the toy many of us played with as kids. Grab a Slinky from the toy box and try this experiment at home. STEP 1: Hold the Slinky at the top, letting it hang fully extended in the air. STEP 2: Drop the Slinky and observe closely. If you look closely enough, you will observe that the bottom coils stay suspended in the air. As you hold the Slinky in the air, gravity pulls down. That downward pull is balanced by the upward pull of tension created by the coils above the coils at the bottom. When you let the Slinky go, the bottom stays suspended as the top collapses. As the Slinky collapses, the coils slam into each other, creating a wave traveling downward that eventually reaches the bottom coils, telling them to fall to the ground. Go to hookedonscience.org for more experiments that might get you "Hooked on Science."
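How long does the bottom hang there? A back-of-envelope estimate (an assumption for illustration, not from the article): treat the Slinky's center of mass as in free fall from the instant of release, with the bottom coil staying put until the collapsing top reaches it. The numbers below are hypothetical.

```python
import math

g = 9.81      # m/s^2
h_cm = 0.5    # m: center of mass of a 1 m hanging Slinky, taken halfway up
t = math.sqrt(2 * h_cm / g)   # time for the collapse to reach the bottom
print(round(t, 2))            # about 0.32 s of apparent levitation
```

A few tenths of a second is short, but it is long enough to see clearly in slow-motion video, which is why the effect looks so striking.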
<urn:uuid:2418870a-00d6-4175-b3b7-48a97c93ced3>
3.359375
204
Tutorial
Science & Tech.
67.234519
Phobos-Soil (Phobos-Grunt) is the first mission dedicated to the study of the Martian moon Phobos. It will deploy a spacecraft in an orbit around Mars that is quasi-synchronous with Phobos. The spacecraft will also carry a lander to perform in situ investigations of the Phobos regolith and a return capsule that will bring pristine samples back to Earth. The mission builds on Russia's long heritage in space science and robotic exploration. In its launch configuration, the spacecraft will be made up of four key parts. A Chinese orbiter, Yinghuo-1 (YH-1), will piggyback with the mission, to be delivered into orbit around Mars. A Russian Federal Space Agency mission, Phobos-Soil (Phobos-Grunt) was originally proposed by the Council on Space of the Russian Academy of Sciences. The Space Research Institute of the Russian Academy of Sciences (IKI RAS) leads science investigations. The Lavochkin Association is the prime contractor for the space segment. ESA is providing ground station support.
<urn:uuid:a6688d49-d5dc-4615-b4c2-dcc7f5ede901>
3.1875
223
Knowledge Article
Science & Tech.
41.597518
April 6, 2011 This band is the disk of our spiral galaxy. Since we are inside this disk, the band appears to encircle the Earth. The above spectacular picture of the Milky Way arch, however, goes where the unaided eye cannot. Photograph by Juan Carlos Casado. March 25, 2011 An expedition member walks on the cooled lava floor, turned red by the reflected glow of a lava lake, of a caldera in Nyiragongo volcano in the Democratic Republic of the Congo. “Down here you feel the volcano,” says photographer Carsten Peter. Photograph by Carsten Peter. March 12, 2011 The world’s “sleeping giants” can wake up much more quickly than thought, according to a new volcano model. Scientists believe the magma chambers—or reservoirs of molten rock—under dormant volcanoes are filled with sticky, viscous mush. March 10, 2011 Pu’u ‘O’o crater (seen in a file picture) is located in a remote section of Hawaii Volcanoes National Park, which has been temporarily closed to the public due to the recent activity. February 16, 2011 Passengers flying off on a Caribbean holiday were stunned when they spotted this massive volcanic eruption, which sent a huge plume of ash into the sky. Picture by Mary Jo Penkala. January 30, 2011 Horses are herded to safety away from volcanic ash clouds near Eyjafjallajokull, Iceland. Picture by Rakel Sigurda. January 28, 2011 A one-mile cordon has been established around a volcano on Mount Kirishima after it erupted, scattering rocks and ash across southern Japan and sending smoke billowing 5,000 ft into the air. January 25, 2011 Snow cover highlights the calderas and volcanic cones that form the northern and southern ends of Onekotan Island, part of the Russian Federation in the western Pacific Ocean. January 24, 2011 The super-volcano beneath Yellowstone National Park in Wyoming has been rising at a record rate since 2004. Could it erupt for the first time in 600,000 years, wiping out two-thirds of the U.S.?
<urn:uuid:3b06042f-332a-464c-8c5d-551f64d7311a>
2.71875
455
Content Listing
Science & Tech.
49.467511
The NB-36 made a number of flights in the 1950s carrying an operating nuclear reactor. The crew worked from a lead-shielded cockpit. As America’s nuclear renaissance matures, proponents of molten-salt reactors (MSRs) increasingly expect the focus to shift their way, to a much simpler design. That design translates into lower capital costs, speedier construction, and fewer potential operating risks. Together they point to the likelihood of less-grueling regulatory processes for approvals and operating licenses, if and when MSRs get that far. Not everyone is content to wait for the spotlight to shift. A small Canadian company, Ottawa Valley Research Associates Ltd. (OVRA), has filed for a broad MSR patent in the U.S. and internationally. The applications cover a redesign in reactor piping that addresses some long-standing technical problems. The applications were filed by David LeBlanc, principal of OVRA and a physics researcher at Carleton University in Ottawa, Ont., Canada. The patent applications are to protect OVRA’s intellectual property, or “IP,” arising from LeBlanc’s years of MSR research. “In almost any business case, it is the IP developed with the product itself that has the real value,” he noted. OVRA’s U.S. patent application was filed in May 2008. The international patents were filed for in November 2009. “There has been no opportunity like this since [John D.] Rockefeller gobbled up the entire oil industry” in the late 1800s. LeBlanc is squarely in the mainstream of nuclear innovation as laid out by the Generation IV International Forum (GIF). MSRs are one of GIF’s six chosen “Gen-4” technologies. Launched in 2001, GIF members include the U.S., the U.K., Canada, France, China, Japan, Russia and the European Atomic Energy Community (Euratom). Gen-4 goals are to improve nuclear safety, better resist proliferation, minimize waste and overuse of natural resources, and decrease costs overall.
“MSRs have a great deal to offer for all of these goals,” LeBlanc pointed out. Researchers once designed a molten salt reactor for use on nuclear-powered bombers; heat from the reactor would replace the combustion of fuel within the jet engines. MSRs use molten salts for coolant rather than water. Getting rid of water allows MSRs to operate at 500 °C to 1,000 °C (roughly 900 to 1,800 °F) for greater efficiency. Water that cools current reactors is about 275 °C to 315 °C (roughly 530 °F to 600 °F). Lower operating pressures mean no need for large pressure vessels or massive concrete and steel containment structures. Coolants in other low-pressure Gen-4 reactors include sodium and lead/bismuth. Other MSR advantages:
- Simpler fuel cycles, with the fuel dissolved in the fluid salt.
- Lower capital costs, possibly 25 to 50 percent less than for today’s reactors.
- “Burning” used fuel from conventional reactors, a solution to the problem of storing “spent” fuel rods.
- Near-zero long-lived radiotoxicity of MSR wastes—one-ten-thousandth that of current light-water reactors—which means no need for Yucca Mountain-type repositories.
In contrast to today’s ubiquitous light-water reactors, Gen-4s would be simpler to engineer, win speedier regulatory approval, get built sooner, and be simpler to operate. Many Gen-4s could also be built smaller, just a few hundred megawatts, making them feasible for large-scale, heat-driven industrial processes. Today’s power-generation units are well over 1,000 megawatts. With Gen-4, “suddenly, the almost forgotten MSR technology was a hot research topic,” LeBlanc said. But it will not be a free lunch. Gen-4 is long-term, internationally coordinated R&D with demonstrations in 2020-2030. Challenges include materials—high-temperature, corrosion-resistant metal alloys; ceramics; nuclear-grade graphites and composites; and nano-structured ferrites, according to the U.S. Department of Energy (DOE). For the past 30 years, U.S.
government backing and investment went to water-cooled reactors for generating electricity. Today U.S. utilities operate just over 100 reactors. They and the Navy’s aircraft carriers and submarines account for virtually all U.S. reactors. A small but growing effort continues at the DOE’s Oak Ridge National Laboratory in Tennessee. That focuses mainly on the cooling capabilities of molten salts for solid fuels, and the requisite plumbing, rather than true MSRs. In the past, said LeBlanc, that plumbing had been “a nightmare of interlaced fuel and blanket salts that caused Oak Ridge to abandon an otherwise promising two-fluid concept in 1968. It was an extreme engineering challenge.” He explained that his patent application covers a “surprisingly simple solution to the plumbing, a tube within a tube: a large blanket salt tube enveloping a long and narrow core.” Looming over everything nuclear in the U.S. is the protracted American regulatory process. “The robust, inherent and simple-to-understand safety of these reactors suggests that if given a rational regulatory overview,” LeBlanc said, “they may prove relatively simple to license.” The other five Gen-4s are very high temperature gas, sodium-cooled fast, supercritical water-cooled, gas-cooled fast and lead-cooled fast. Fast refers to a portion of the neutron spectrum. Improvements to existing reactors from 2000 onward are considered third generation. Today’s operating units, 1970-2030, are second generation. The first generation was 1950-1970 prototypes and demonstration units. [Adapted from “Too Good to Leave on the Shelf” by David LeBlanc, for Mechanical Engineering, May 2010.]
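The efficiency benefit of those higher outlet temperatures can be sketched with the ideal Carnot limit. This is an illustration only: the temperatures are the article's, but the 25 °C heat sink is an assumption, Carnot is not mentioned in the article, and real plant efficiencies fall well below these bounds.

```python
# Ideal Carnot upper bound on thermal efficiency: 1 - T_cold/T_hot,
# with temperatures in kelvin. Assumes a hypothetical 25 °C heat sink.
def carnot(t_hot_c, t_cold_c=25.0):
    t_hot = t_hot_c + 273.15
    t_cold = t_cold_c + 273.15
    return 1.0 - t_cold / t_hot

print(round(carnot(300.0), 2))   # ~0.48 bound near a water-cooled reactor's ~300 °C
print(round(carnot(700.0), 2))   # ~0.69 bound at a mid-range MSR temperature of 700 °C
```

The widening gap between the two bounds is one way to see why high-temperature coolants are attractive for both power generation and heat-driven industrial processes.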
<urn:uuid:412bec29-0ae9-4602-a9a4-44b98604f337>
2.75
1,388
Content Listing
Science & Tech.
52.546948
Our Sun, The Nearest Star The sun turns 700,000,000 tons of hydrogen into 695,000,000 tons of helium every second by nuclear fusion. The remaining 5,000,000 tons are converted into energy: 400,000,000,000,000,000,000,000,000 watts. Distances to Other Stars Distances are particularly easy to calculate if we use the parsec as our distance unit. In that case, the distance of a star in parsecs is: D = 1/p where D is the distance in pc and p is the parallax angle in seconds of arc. For example, Sirius has a parallax angle of 0.38 seconds of arc and thus its distance from the Earth is d = 1/0.38 = 2.6 pc = 8.6 LY. The nearest star (other than the Sun) is the alpha-Centauri system, which has a parallax of 0.76 seconds of arc, corresponding to a distance of 1.315 pc = 4.3 LY. Since no star is nearer than this, all stars have parallax angles of less than one second of arc. The Nearest Stars How far away is Polaris? Where can I find reliable distances to nearby stars? I am trying to find out the distance to the pole star. In one reference I found on the web, 300 ly was the figure given, while a second source gave 690 ly. My question for you is: what is the distance, and is there a generally accepted reference available online to get such information? From: "Ask An Astronomer" We now have an excellent way to find out the distances to (relatively) nearby stars: the Hipparcos catalog from a European space mission was released last summer. I checked the catalog for you and found that the distance to Polaris is 132 parsecs. There are 3.26 ly in one parsec, so the distance to Polaris is 430 ly. Polaris is a Cepheid star, a type of variable star. That makes it tricky to get the distance from the brightness of the star. The Hipparcos satellite made a trigonometric measurement which is more secure. However, the measurement still has an uncertainty of plus or minus 100 ly, which is larger than one would like; the reason is that Polaris is so far away. It is near the limit of what Hipparcos could reach.
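The D = 1/p rule is simple enough to check numerically. A small sketch reproducing the Sirius and alpha Centauri figures from the text, using the 3.26 light-years-per-parsec factor:

```python
# Parallax distance rule: distance in parsecs is 1/p for p in arcseconds.
def distance_from_parallax(p_arcsec):
    d_pc = 1.0 / p_arcsec
    return d_pc, d_pc * 3.26   # (parsecs, light-years)

d_pc, d_ly = distance_from_parallax(0.38)   # Sirius
print(round(d_pc, 1), round(d_ly, 1))       # 2.6 pc, 8.6 LY

d_pc, d_ly = distance_from_parallax(0.76)   # alpha Centauri
print(round(d_pc, 2), round(d_ly, 1))       # 1.32 pc, 4.3 LY
```

Note how the reciprocal makes small parallaxes correspond to large distances, which is why the measurement gets harder, and the uncertainty grows, for stars as distant as Polaris.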
<urn:uuid:931571ab-1049-48c0-ab03-68a84cffd068>
3.296875
495
Q&A Forum
Science & Tech.
72.896191
slickhead, also called smoothhead, any of several deep-sea fishes, family Alepocephalidae (order Salmoniformes), found in almost all oceans at depths up to 5,500 m (17,800 feet) or more. Slickheads are dark, soft, and herringlike; species vary greatly in structure, and a few possess light-producing organs. Some common features of the family are absence of a swim bladder, presence of a lateral line, and position of the dorsal fin far back and above the anal fin.
<urn:uuid:cf925338-c948-4bb2-8a8d-47fda3d7c6c6>
3.15625
140
Knowledge Article
Science & Tech.
51.915
"A single death is a tragedy, a million deaths is a statistic. " Sec. 1.2 Notes: Data display VARIABILITY. (Recall: data is a plural term... use it with the correct verb...or you will lose credibility.) The "pattern" of the variability is the distribution. You may display data 1. visually through graphs 2. by summarizing them numerically, or 3. by describing them verbally. Every math course strives to encourage mathematical communication through the same methods...words, pictures, and symbols. Statistics is no different. It is a good idea to use more than one descriptive method including tabular (frequency distribution, and relative and cumulative frequency distributions) graphical showing overall pattern, center, spread, shape (symmetric, left/right skewed) and Ponder this...."Who is baseball's greatest homerun hitter?" This section will give us the tools we need to analyze data on potential candidates and formulate a response. To describe a distribution with numbers we consider shape, center, and spread. These three characteristics vie a good description of the overall pattern. We already know about shape: symmetric or skewed. The most common measure of center is our ordinary arithmetic average (MEAN). To find the MEAN of a set of observations, add all the values together and divide by the number of observations where observations are names x1, x2, x3, ..., xn The S (capital Greek sigma) is short for "add 'em all up." The xi implies subscripts whose only use is to differentiate the observations. The bar over the x indicates the mean of all the x values (say "x-bar"). The mean is sensitive to the influence of a few extreme observations so use with a symmetric distribution is desirable. When the distribution is skewed the mean will be pulled toward the long tail. Thus, the MEAN IS NOT A RESISTANT MEASURE OF CENTER. The MEAN uses the actual value of each observation and will "chase" a single large observation upward. Another measure is needed . 
The median M is the midpoint of a distribution, the number such that half the observations are smaller and half are larger. You may find the median by hand for small data sets by arranging the values in order and finding the midpoint. If n is odd, the median M is the center observation in the ordered list. If n is even, the median is the average of the two center observations. Finding the median for large data sets should be left to the calculator or computer. The median is not affected by outliers; therefore, the MEDIAN IS A RESISTANT MEASURE OF CENTER. For a symmetric distribution, the MEAN and MEDIAN are close together. In a skewed distribution, the mean is farther out in the long tail than the median. Reports about home prices, incomes, and other strongly skewed distributions usually give the median. The mean and median measure CENTER in different ways and both are useful. Both measures can be found using the TI-83 calculator: enter the data into a list using STAT (Edit), press 2nd STAT >> Math, and select 3 for the mean (or whatever measure you desire). NOTE: using STAT (Calc) 1-Var Stats, we can find all the important stats for a specific data set at once. Using only one measure of the center of a distribution can be misleading. We are also interested in finding the spread or variability. Spread can be found by calculating the RANGE (subtract the minimum data point from the maximum data point). The description of spread can also be improved by considering QUARTILES. Q1 is the median of the lower half, Q2 is the median itself, Q3 is the median of the upper half. Q1 is larger than 25% of the data, Q2 is greater than 50% of the data, Q3 is greater than 75% of the data. The IQR (Interquartile Range) is the distance between the first and third quartiles: IQR = Q3 - Q1. If an observation falls between Q1 and Q3, then it is not unusually high or low. The IQR is the basis of a "rule of thumb" for identifying suspected OUTLIERS.
The rule is 1.5 times the IQR: call an observation an outlier if it falls more than 1.5 × IQR above Q3 or below Q1. The smallest and largest observations also tell about the distribution. Combining all five numbers, we get a good summary of center and spread. The FIVE NUMBER SUMMARY consists of the minimum data point, Q1, the median, Q3, and the maximum data point. This summary leads to a new type of graph: the BOX PLOT. Showing less detail than histograms or stem plots, box plots can be used for side-by-side comparisons of more than one distribution. These plots can be horizontal or vertical. Box plots give an indication of the symmetry or skewness of a distribution: in a symmetric distribution, Q1 and Q3 are equidistant from the median. Since a regular box plot conceals outliers, we will accept the use of the modified box plot, which plots outliers as isolated points and shows more detail. A modified box plot is a graph of the five number summary with outliers plotted individually (see page 47). The five number summary is NOT the most common numerical description of a distribution. Rather, a combination of the mean as a measure of center and the standard deviation as a measure of spread is commonly used. Variance (s²) is the average of the squared deviations from the mean, and the standard deviation (s) is the square root of the variance:
s² = [(x1 - x̄)² + (x2 - x̄)² + ... + (xn - x̄)²] / (n - 1)
(For sample data the sum is divided by n - 1 rather than n.) The standard deviation shows how spread out the data are about their mean. Some deviations will be positive and some will be negative. Curiously, the SUM of the deviations is ALWAYS = 0. Properties of s (standard deviation): s measures spread about the mean and should only be used when the mean is chosen as the measure of center; s = 0 only when there is NO SPREAD (all observations have the same value); s is NOT resistant to outliers. Soon we will learn that the standard deviation is the natural measure of spread for an important class of symmetric distributions, the NORMAL distributions.
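The five-number summary and the 1.5 × IQR fence above can be sketched directly. The data set is hypothetical, and the quartile convention is these notes' own "median of each half," written here for an even number of observations:

```python
import statistics

def five_number_summary(data):
    # Q1/Q3 as medians of the lower/upper halves (even n assumed).
    s = sorted(data)
    mid = len(s) // 2
    return (s[0], statistics.median(s[:mid]), statistics.median(s),
            statistics.median(s[mid:]), s[-1])

data = [1, 3, 4, 5, 5, 6, 7, 25]
mn, q1, med, q3, mx = five_number_summary(data)   # 1, 3.5, 5.0, 6.5, 25
iqr = q3 - q1                                     # 3.0
low, high = q1 - 1.5 * iqr, q3 + 1.5 * iqr        # fences at -1.0 and 11.0
outliers = [x for x in data if x < low or x > high]
print(outliers)   # [25]
```

Note that other software may interpolate quartiles slightly differently; the fences move a little, but a point as extreme as 25 is flagged either way.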
Logically then, an essential point is that the usefulness of a statistical procedure is tied to the shape of the distribution. What to choose, and when: the five-number summary is better for skewed distributions or those containing outliers; use the mean and standard deviation for relatively symmetric distributions. A GRAPH gives the best overall "picture" of a distribution; numerical measures of center and spread give specific facts about the distribution but don't describe its entire shape. ALWAYS... PLOT THE DATA. Sometimes a situation requires comparison of two or more distributions. The best method for comparison is back-to-back stem plots or side-by-side bar graphs. Sometimes it is necessary to convert units of measure... we use a linear transformation to accomplish this. The rule: to produce a new value from each x, we add a constant a (which shifts the data) and/or multiply by a positive constant (which rescales them). Adding a constant amount to each observation DOES NOT change the spread. It does increase the measures of center and the quartiles by the same amount. Multiplication increases the measures of center (mean and median) by the same multiple. The measures of spread (standard deviation and IQR) are also multiplied by that factor. Linear transformations DO NOT change the shape of a distribution.
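The shift-and-scale rules can be checked numerically; the data and constants below are hypothetical:

```python
import math
import statistics

x = [10, 20, 30, 40]
a, b = 5, 2                        # hypothetical shift and scale
x_new = [a + b * xi for xi in x]

# The center both shifts and scales; the spread only scales,
# because the added constant drops out of every deviation.
assert statistics.mean(x_new) == a + b * statistics.mean(x)
assert math.isclose(statistics.stdev(x_new), abs(b) * statistics.stdev(x))
print(statistics.mean(x_new))   # 55
```

This is exactly the pattern used when converting units, e.g. Celsius to Fahrenheit with a = 32 and b = 9/5.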
<urn:uuid:354396f9-6395-4864-bced-251bd5a6f8c2>
3.9375
1,617
Tutorial
Science & Tech.
58.517121
O'Dogherty, M.J., 1969. THE DETECTION OF FIRES BY SMOKE: PART 2. SLOWLY DEVELOPING WOOD CRIB FIRES. Fire Research Notes 793 Measurements have been made, at ceiling level, of the optical density per metre of smoke from slowly developing wood crib fires. Results are given for ceiling heights from 2.4 m to 7.0 m (8 ft to 23 ft) above the fire, and for horizontal distances up to 9.8 m (32 ft) across the ceiling. In these experiments, the change in potential across the open chamber of an ionisation smoke detector, and the output signal from an optical-scattering smoke detector were also continuously recorded. In addition, the response times of some proprietary smoke detectors were measured. The significance of the results in relation to a suitable sensitivity test for smoke detectors is discussed.
<urn:uuid:2df9c0ea-9bbe-4bc9-b7dc-c79bbcf51578>
2.734375
184
Academic Writing
Science & Tech.
55.673428
Over the past three decades, astronomers have discovered hundreds of dusty disks around stars, but only two — 49 CETI is one — have been found that also have large amounts of gas orbiting them. Young stars, about a million years old, have a disk of both dust and gas orbiting them, but the gas tends to dissipate within a few million years and almost always within about 10 million years. Yet 49 CETI, which is thought to be considerably older, is still being orbited by a tremendous quantity of gas in the form of carbon monoxide molecules, long after that gas should have dissipated. "We now believe that 49 CETI is 40 million years old, and the mystery is how in the world can there be this much gas around an otherwise ordinary star that is this old," said Benjamin Zuckerman, a UCLA professor of physics and astronomy and co-author of the research, which was recently published in the Astrophysical Journal. "This is the oldest star we know of with so much gas." Zuckerman and his co-author Inseok Song, a University of Georgia assistant professor of physics and astronomy, propose that the mysterious gas comes from a very massive disk-shaped region around 49 CETI that is similar to the sun's Kuiper Belt, which lies beyond the orbit of Neptune. The total mass of the various objects that make up the Kuiper Belt, including the dwarf planet Pluto, is about one-tenth the mass of the Earth. But back when the Earth was forming, astronomers say, the Kuiper Belt likely had a mass that was approximately 40 times larger than the Earth's; most of that initial mass has been lost in the last 4.5 billion years. By contrast, the Kuiper Belt analogue that orbits around 49 CETI now has a mass of about 400 Earth masses — 4,000 times the current mass of the Kuiper Belt. "Hundreds of trillions of comets orbit around 49 CETI and one other star whose age is about 30 million years. 
Imagine so many trillions of comets, each the size of the UCLA campus — approximately 1 mile in diameter — orbiting around 49 CETI and bashing into one another," Zuckerman said. "These young comets likely contain more carbon monoxide than typical comets in our solar system. When they collide, the carbon monoxide escapes as a gas. The gas seen around these two stars is the result of the incredible number of collisions among these comets. "We calculate that comets collide around these two stars about every six seconds," he said. "I was absolutely amazed when we calculated this rapid rate. I would not have dreamt it in a million years. We think these collisions have been occurring for 10 million years or so." Using a radio telescope in the Sierra Nevada mountains of southern Spain in 1995, Zuckerman and two colleagues discovered the gas that orbits 49 CETI, but the origin of the gas had remained unexplained for 17 years, until now. UCLA is California's largest university, with an enrollment of more than 40,000 undergraduate and graduate students. The UCLA College of Letters and Science and the university's 11 professional schools feature renowned faculty and offer 337 degree programs and majors. UCLA is a national and international leader in the breadth and quality of its academic, research, health care, cultural, continuing education and athletic programs. Six alumni and six faculty have been awarded the Nobel Prize. For more news, visit the UCLA Newsroom and follow us on Twitter. Stuart Wolpert | Source: EurekAlert! Further information: www.ucla.edu
<urn:uuid:d569e967-38bd-40c5-adb8-71c10807f92c>
3.859375
1,388
Content Listing
Science & Tech.
53.705134
Jan29-13, 09:09 PM #1
Combinations of n objects taken r at a time
An office furniture manufacturer that makes modular storage files offers its customers two choices for the base and four choices for the top, and the modular storage files come in five different heights. The customer may choose any combination of the five different-sized modules so that the finished file has a base, a top, and one, two, three, four, five, or six storage modules. How many choices does the customer have if the completed file has four storage modules, a top, and a base? The order in which the four modules are stacked is irrelevant. The best I could get was nCr = (n/r), so (2/1)x(4/1)x((5/4)+(5/3)+(5/2)+(5/1)), and it is wrong. My teacher spent half an hour on this one question and he has no clue how to do it. This is basically my last resort.
Jan29-13, 11:33 PM #2
It's like the ice-cream flavors problem - only the number of choices for top and bottom flavor are not the same as for the middle ones. You've done that one before, right? It would help to see your reasoning - but it looks like you did something to do with combinations out of 5 (though it looks like you got the formula mixed up), times the 2 bases, times the 4 tops. The answer cannot be correct because it gives you a fraction in the answer - and you cannot have a fraction of a combination. In fact, the method you tried will usually give you a fraction. (BTW: do you know the correct answer?) In the description: you have a base (out of 2) and a top (out of 4), so that's 8 right there... well done. The middle part has 4 modules, which the customer assembles out of 5 varieties. If the varieties are A, B, C, D, E, then one choice may be AABE, another may be ACCE. If order mattered, then AABE would be different from ABAE, etc. Is the way forward clearer now?
Note: it is easy to get mixed up about formulas, so try to look at what the formulas are trying to describe.
Tags: permutations, probability, statistics
Similar Threads for: Combinations of n objects taken r at a time
- Time Dilation with objects at low speeds (Special & General Relativity, 9)
- Signal vs Time objects (Introductory Physics Homework, 1)
- Formula for combinations with repeating objects (General Math, 2)
- Position Time, falling objects (Introductory Physics Homework, 3)
- Time it takes for small objects to come together under gravity (General Physics, 1)
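The reply is steering toward an unordered selection with repetition. As a sketch (assuming the standard reading of the problem; the thread itself never states the final answer), the count can be checked by brute-force enumeration against the stars-and-bars formula:

```python
from itertools import combinations_with_replacement
from math import comb

# Unordered choices of 4 modules from 5 varieties, repetition allowed,
# so AABE and ABAE count as the same stack.
middles = list(combinations_with_replacement("ABCDE", 4))
assert len(middles) == comb(5 + 4 - 1, 4)  # stars and bars: C(8, 4) = 70

# Times 2 bases and 4 tops.
total = 2 * 4 * len(middles)
print(total)  # 560
```

Enumerating and counting with a formula agreeing is a good habit for these problems: if they disagree, the formula (or the reading of the problem) is wrong.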
<urn:uuid:e36d5ba3-31e3-4597-b935-ba6fba5bd80d>
2.703125
614
Comment Section
Science & Tech.
61.943745
To clarify Phoenix’s light travel time: the spacecraft did not cruise straight to Mars but took a longer path following an elliptical orbit around the sun, with Mars at the aphelion. Because the distance from Mars to Earth was about 250 million kilometers during the mission, the one-way light travel time was a little less than 15 minutes.

“Thought Experiments,” by Joshua Knobe, describes the question of free will versus determinism. I think it’s impossible to determine (pun intended) whether we live in a deterministic or free-will world. I believe I have free will. But suppose I’m wrong. Then it’s determined that I will believe that I have free will. We can conduct the experiments that Knobe talks about, but doing so assumes free will. Otherwise, the outcomes are determined.
Berkeley Heights, N.J.

There is another way of thinking about morality than the one put forward by Knobe. Instead of it being essentially altruistic, noble and somehow emanating from inside us, we can think of it as focused largely on how we want others to behave toward us. If others behave morally, they create an environment that is generally beneficial to us. Our own “moral” behavior, however, is dependent on whether there are effective social sanctions that make it advantageous to behave in a particular way. From this perspective, it is easy to understand the relatively constrained behavior of people who are part of a religious or other mainstream group and the more fluid “morality” of those who are “open to experience.”
West Vancouver, B.C.

In “A Formula for Economic Calamity,” by David H. Freedman, David Colander of Middlebury College asserts that climate models often have no terms to account for the effects of clouds. This is not true. In my class on climate change problem solving, I use a 2005 paper by M. H. Zhang et al. that compares modeled clouds with observed ones from 10 climate models. There are many earlier and later references that document over three decades of ever more sophisticated inclusion of clouds in weather and climate models. The statement that clouds are not included is misinformation that has been propagated in political arguments used to discredit such models. There is an important difference between physical climate models and economic ones: namely, physics. The physics of climate change is simple classical physics in a stunningly complex, multiscale system, so it is possible to design experiments based on cause and effect. The uncertainty associated with future climate projections linked to economic possibilities of what people will do is far larger than the uncertainty associated with physical climate models.
Richard B. Rood
Department of Atmospheric, Oceanic and Space Sciences
University of Michigan

FREEDMAN REPLIES: Rood is right to point out that climate models are often designed to try to account for clouds. The statement in the article, which was attributed to an economist and not a climate scientist, was a vague oversimplification that suggested climate models frequently fail to account for clouds. In fact, the climate science literature is replete with papers that call out the challenges of accurately accounting for clouds in models. Surely if we have to err in gauging uncertainty in science, it’s better to err on the side of overestimating it. If only economists working in financial risk models had done just that.
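The light-travel-time figure in the first letter can be checked with one line of arithmetic. A quick Python sketch, using the rounded 250-million-kilometer distance quoted in the letter:

```python
c = 299_792_458           # speed of light, m/s
distance_m = 250e9        # ~250 million kilometers, in meters
minutes = distance_m / c / 60
print(round(minutes, 1))  # 13.9, indeed "a little less than 15 minutes"
```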
Classes:
- Boolean: the Boolean class wraps a value of the primitive type boolean in an object.
- Byte: the standard wrapper for byte values.
- Character: wraps a value of the primitive type char in an object.
- Class: instances of the class Class represent classes and interfaces in a running Java application.
- Double: wraps a value of the primitive type double in an object.
- Float: provides an object wrapper for float data values, and serves as a place for float-oriented operations.
- Integer: wraps a value of the primitive type int in an object.
- Long: wraps a value of the primitive type long in an object.
- Math: contains methods for performing basic numeric operations.
- Object: the root of the class hierarchy.
- Runtime: every Java application has a single instance of class Runtime that allows the application to interface with the environment in which the application is running.
- Short: the standard wrapper for short values.
- String: represents character strings.
- StringBuffer: implements a mutable sequence of characters.
- System: contains several useful class fields and methods.
- Thread: a thread of execution in a program.
- Throwable: the superclass of all errors and exceptions in the Java language.

Exceptions:
- ArithmeticException: thrown when an exceptional arithmetic condition has occurred.
- ArrayIndexOutOfBoundsException: thrown to indicate that an array has been accessed with an illegal index.
- ArrayStoreException: thrown to indicate that an attempt has been made to store the wrong type of object into an array of objects.
- ClassCastException: thrown to indicate that the code has attempted to cast an object to a subclass of which it is not an instance.
- ClassNotFoundException: thrown when an application tries to load in a class through its string name using the forName method in class Class, but no definition for the class with the specified name could be found.
- Exception: Exception and its subclasses are a form of Throwable that indicates conditions that a reasonable application might want to catch.
- IllegalAccessException: thrown when an application tries to load in a class, but the currently executing method does not have access to the definition of the specified class, because the class is not public and is in another package.
- IllegalArgumentException: thrown to indicate that a method has been passed an illegal or inappropriate argument.
- IllegalMonitorStateException: thrown to indicate that a thread has attempted to wait on an object's monitor or to notify other threads waiting on an object's monitor without owning the specified monitor.
- IllegalStateException: signals that a method has been invoked at an illegal or inappropriate time.
- IllegalThreadStateException: thrown to indicate that a thread is not in an appropriate state for the requested operation.
- IndexOutOfBoundsException: thrown to indicate that an index of some sort (such as to an array, to a string, or to a vector) is out of range.
- InstantiationException: thrown when an application tries to create an instance of a class using the newInstance method in class Class, but the specified class object cannot be instantiated because it is an interface or is an abstract class.
- InterruptedException: thrown when a thread is waiting, sleeping, or otherwise paused for a long time and another thread interrupts it.
- NegativeArraySizeException: thrown if an application tries to create an array with negative size.
- NullPointerException: thrown when an application attempts to use null in a case where an object is required.
- NumberFormatException: thrown to indicate that the application has attempted to convert a string to one of the numeric types, but that the string does not have the appropriate format.
- RuntimeException: the superclass of those exceptions that can be thrown during the normal operation of the Java Virtual Machine.
- SecurityException: thrown by the system to indicate a security violation.
- StringIndexOutOfBoundsException: thrown by the charAt method in class String and by other methods to indicate that an index is either negative or greater than or equal to the size of the string.
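As a minimal illustrative sketch using only standard java.lang APIs described above, the following boxes a primitive with Integer and catches NumberFormatException, a RuntimeException that a reasonable application might want to catch:

```java
public class WrapperDemo {
    public static void main(String[] args) {
        // The Integer class wraps a value of the primitive type int in an object.
        Integer boxed = Integer.valueOf(42);
        System.out.println(boxed + 1); // unboxes for int arithmetic: prints 43

        // Integer.parseInt throws NumberFormatException when the string
        // does not have the appropriate numeric format.
        try {
            Integer.parseInt("not a number");
        } catch (NumberFormatException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```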
Prof. Stephen A. Nelson
Ternary Phase Diagrams

Crystallization in Ternary Systems

I. Equilibrium Crystallization Where All Two-Component Systems Are Binary Eutectic Systems.

Figure 1 shows a three-dimensional representation of the three-component (ternary) system ABC. Note that composition is measured along the sides of the basal triangle and temperature (or pressure) is measured vertically. The top of the figure shows a surface with contours representing lines of constant temperature. These contours are called isotherms. Note that the eutectic points in each of the binary systems project into the ternary system as curves. These curves are called boundary curves, and any composition on one of these curves will crystallize the two phases on either side of the curve.

Figure 2 shows the same figure in two dimensions as seen from above. The boundary curves and isotherms are also shown projected onto the basal triangle. Note how the temperature decreases toward the center of the diagram.

In Figure 3 we trace the crystallization of composition X. Figure 3 is the same as Figure 2, with the isotherms left off for greater clarity.

Note that the final solid must consist of crystals A + B + C, since the initial composition is in the triangle ABC. At a temperature of about 980° the liquid of composition X would intersect the liquidus surface. At this point it would begin to precipitate crystals of C. As temperature is lowered, crystals of C would continue to precipitate, and the composition of the liquid would move along a straight line away from C. This is because C is precipitating and the liquid is becoming impoverished in C and enriched in the components A + B.

At a temperature of about 820°, point L in Figure 3, we can determine the relative proportions of crystals and liquid. With further cooling, the path of the liquid composition will intersect the boundary curve at point O. At the boundary curve, crystals of A will then begin to precipitate.
The liquid path will then follow the boundary curve towards point M. The bulk composition of the solid phases precipitated during this interval will be a mixture of A + C in the proportion shown by point P. At point M, the bulk composition of the solid phases so far precipitated through the cooling history lies at point N (the extension of the straight line from M through the initial composition X). At this time the percentages of solid and liquid are given by the lever rule:

%solid = (XM / NM) x 100
%liquid = (XN / NM) x 100

Note, however, that the solid at this point consists of crystals of A and crystals of C, so we must further break down the percentages of the solid. This is done as follows. The percentage of the solid that is A is given by the distance from C to N relative to the distance between A and C:

%A (of the solid) = (CN / AC) x 100

Similarly, the percentage of the solid consisting of crystals of C is:

%C (of the solid) = (AN / AC) x 100

We can now calculate the exact percentage of all phases present in composition X at a temperature of 660° (where the liquid composition is at point M):

%liquid = (XN / NM) x 100
%A = (XM / NM) x (CN / AC) x 100
%C = (XM / NM) x (AN / AC) x 100

Note also that we can determine the composition of all phases present in the system at this point. The composition of the liquid is given by reading the composition of point M off the basal triangle. Since it is a mixture of A, B, and C, it will have a composition expressed in terms of the percentages of A, B, and C. The compositions of the solids are 100% A and 100% C; i.e., they are pure solid phases (not mixtures). With further cooling, the liquid composition will move to the ternary eutectic, E, at a temperature of about 650°, at which point crystals of B will precipitate. The temperature will remain constant until all of the liquid is used up. The final crystalline product will consist of crystals of A + B + C in the proportions given by the initial composition X. Crystallization will proceed in an analogous manner for all other compositions in the ternary system.
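The lever-rule percentages described above are easy to compute. A small Python sketch, with hypothetical collinear points N, X, and M standing in for the total-solid, bulk, and liquid compositions (the coordinates are made up for illustration, not read from the actual figure):

```python
import math

def dist(p, q):
    """Euclidean distance between two composition points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def lever_percentages(bulk, liquid, solid):
    """Lever rule: the bulk composition X lies between the liquid (M)
    and total-solid (N) compositions, so
    %solid = XM/NM x 100 and %liquid = XN/NM x 100."""
    total = dist(liquid, solid)
    pct_solid = 100.0 * dist(bulk, liquid) / total
    pct_liquid = 100.0 * dist(bulk, solid) / total
    return pct_solid, pct_liquid

# Hypothetical collinear points: N --- X ------- M
N, X, M = (0.0, 0.0), (3.0, 0.0), (10.0, 0.0)
pct_solid, pct_liquid = lever_percentages(X, M, N)
print(pct_solid, pct_liquid)  # 70.0 30.0
```

The same two-point lever arithmetic, applied along the line A-C, splits the solid fraction into its A and C parts.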
To summarize, we can express the path of crystallization for composition X in an abbreviated form as follows:

II. Crystallization in Ternary Systems that Contain a Compound that Melts Congruently.

A ternary system that has a binary system with a compound that shows congruent melting (melts to a liquid of its own composition) is shown in Figure 5. Also shown is the binary system X-Y that contains the intermediate compound W. The result of the addition of this intermediate compound is essentially that the ternary system XYZ is divided into two smaller ternary systems represented by triangles WYZ and XWZ.

Crystallization in this system is illustrated in Figure 6, where the isotherms have been removed for simplicity.

We first note that any composition within the triangle WYZ must end up with crystals of W + Y + Z in the final crystalline product, compositions in the triangle XWZ will end up with crystals of X + W + Z, and compositions on the line WZ must end up with crystals of W + Z only.

Consider first crystallization of composition A in Figure 6. Crystallization begins at about 1160° with separation of crystals of Z. The composition of the liquid then changes along a straight line away from Z. When the temperature reaches about 680°, the liquid composition has intersected the boundary curve at point B.

At this time, crystals of W begin to separate, and with further lowering of temperature the liquid moves along the boundary curve, B-E1, precipitating crystals of Z + W. When the liquid reaches the ternary eutectic, E1, crystals of X begin to separate along with crystals of W and Z. The temperature remains constant at 640° until all of the liquid is used up, leaving a final product of crystals of X + W + Z in the proportions of the original composition, A. We can summarize this crystallization history in abbreviated form as follows:

Now consider the crystallization of composition M, which lies on the binary system W-Z.
Since this is a binary system, only phases W and Z will be found in the final crystalline product. Thus, crystallization will stop when the liquid composition reaches the point O, which is equivalent to the binary eutectic in the system W-Z.

Again, we can construct isothermal planes showing the phases present in any part of the system at any temperature of interest. Such an isothermal plane at 700° for the system XYZ is shown in Figure 7.

III. Crystallization in Ternary Systems Containing an Incongruently Melting Compound.

A. Equilibrium Crystallization

We will consider equilibrium crystallization of compositions P, Q, S, T and X, as all will behave somewhat differently.

B. Fractional Crystallization

We next consider what might happen under conditions of fractional crystallization in the system. As you recall, fractional crystallization occurs when a crystalline phase is somehow removed from the system and is thus prevented from reacting with the liquid to form different crystals. Consider now fractional crystallization of composition P in Figure 8. Under equilibrium conditions, composition P would follow the path discussed above, becoming completely solid at the ternary peritectic, R, with an assemblage of crystals of A + D + C.

We will consider fractional crystallization in steps for this example. That is, at various points we will imagine that all of the previously precipitated crystals are somehow removed. At 1090° composition P begins to crystallize, and crystals of A separate from the liquid. On cooling to point Q, we remove all of the previously precipitated crystals of A; our system now has composition Q, since we have removed the part of the system that has already crystallized as crystals of A. Cooling to point S, more crystals of A precipitate, and again are removed from the system. Note that at this point, the system has the composition S.
Further cooling of composition S, without removing any more crystals from the system, would result in the liquid composition following the equilibrium crystallization path of composition S, as discussed above. Note that composition S is in the triangle D-C-B, and would end up crystallizing D, C, and B, in contrast to the assemblage that would have crystallized from composition P (A, D, and C) if it had crystallized under equilibrium conditions. Thus fractional crystallization of composition P would result in not only a different final crystalline assemblage, but a final liquid composition which is very different from the final liquid composition that would result from equilibrium crystallization. If we continue our fractional crystallization of composition P, starting with the liquid (and system) having a composition of S, further cooling to point T on the boundary curve, results in further precipitation of A. If we remove all crystals of A, the system now has a composition T. Because no crystals are present to react with the liquid to form crystals of D, the liquid composition will not change along the boundary curve towards R, but instead will move directly across the field where only D is precipitated. When the temperature reaches 680°, at point V, crystals of B form and the liquid composition changes along the boundary curve toward E. At E, crystals of C form and our final assemblage consists of crystals of D + B + C. Note how fractional crystallization has allowed the liquid to become enriched in B, while under equilibrium conditions no crystals of B could have formed. The system discussed above is very similar to the behavior of the system Mg2SiO4 - SiO2 - CaAl2Si2O8, where A = Forsterite, B = Quartz, C = Anorthite, and D = Enstatite. 
Thus, knowledge of the crystallization behavior of such a system is in many ways analogous to what may happen in magmas, and shows how a basaltic magma, which would normally only crystallize forsterite, enstatite, and plagioclase, could change to a rhyolitic magma that would crystallize quartz.

IV. Ternary System with a Binary Solid Solution.

Figure 9 shows the ternary system Albite - Anorthite - Diopside. As you recall, albite and anorthite form a complete solid solution series (the plagioclase series). Anorthite and diopside form a eutectic system, as do albite and diopside. However, the eutectic in the system albite-diopside is very close to pure albite. The solid solution between albite and anorthite continues into the ternary system and is expressed by the boundary curve connecting the two binary eutectics. Note that this boundary curve forms a "temperature valley" in the ternary system.

We will now consider equilibrium crystallization of two compositions on the plagioclase side of the boundary curve. It must be noted that geometrical methods will not predict the exact path of equilibrium crystallization in systems of this type. We here give an approximate path consistent with experimental results.

First, consider crystallization of composition D in Figure 10, which is 27% albite, 46% anorthite and 28% diopside. The final product should consist of pure diopside and a plagioclase of composition 63% anorthite. At a temperature of about 1325° the liquid begins to crystallize, with separation of plagioclase of composition 99% anorthite. As the liquid is cooled, it moves along the curved path D-P, while continually reacting with the previously precipitated plagioclase to form a more albitic plagioclase.

By the time the liquid composition has reached point P, the plagioclase has the composition 98% anorthite. This is found by extending a line from the liquid composition through the initial composition (D) back to the base of the triangle.
With continued cooling, the liquid composition will eventually reach the boundary curve at point M, at which time the plagioclase has the composition S (90% anorthite). We now construct what is known as a three phase triangle. It is shown in the figure by the straight line connecting the three phases in equilibrium, Di (solid), liquid (point M) and plagioclase solid solution (Point S). With continued cooling, the liquid composition changes along the boundary curve towards pure albite. Meanwhile the plagioclase solid solution is continually made over to more albitic compositions. Crystallization ceases when the base of the three phase triangle intersects the original liquid composition, D. Such a three phase triangle with apices Di, I, F, is shown in the figure. It indicates that the last liquid has the composition I, and is in equilibrium with pure diopside and a plagioclase composition of 70% anorthite (point F). Crystallization of composition P will be similar to that of D. Note, however that the liquid composition will follow a different path and intersect the boundary curve at point L. The liquid composition will then change along the boundary curve until the base of the three phase triangle Di, H, G, intersects the initial composition P. The final assemblage will then consist of pure diopside and plagioclase of composition G (60% anorthite). We now consider what would happen if the solid material were removed somehow, and prevented from reacting with the liquid. This is the case of fractional crystallization. Suppose we start again with composition D and cool it to a point where the liquid has the composition P. If we now remove all of the plagioclase that has crystallized up to this point, our liquid, and thus our entire system (without the removed crystals) now has the composition P. 
As we saw in the example above, composition P follows a different path of crystallization than composition D, and will produce a plagioclase of more albitic composition than that produced from composition D. Thus, by continually removing plagioclase from contact with the liquid, it is possible, under perfect fractionation conditions, to produce an almost pure albitic plagioclase from a liquid which would give a very calcic plagioclase under perfect equilibrium conditions.

Examples of questions on this material that could be asked on an exam
Primer (molecular biology)

A primer is a strand of nucleic acid that serves as a starting point for DNA synthesis. It is required for DNA replication because the enzymes that catalyze this process, DNA polymerases, can only add new nucleotides to an existing strand of DNA. The polymerase starts replication at the 3'-end of the primer, and copies the opposite strand.

Many of the laboratory techniques of biochemistry and molecular biology that involve DNA polymerase, such as DNA sequencing and the polymerase chain reaction (PCR), require DNA primers. These primers are usually short, chemically synthesized oligonucleotides, with a length of about twenty bases. They are hybridized to a target DNA, which is then copied by the polymerase.

Mechanism in vivo

The template for the lagging strand is oriented in the 5'→3' direction relative to the movement of the replication fork, so a strand complementary to it would have to grow in the 3'→5' direction to be synthesized continuously. Because DNA polymerase III cannot synthesize in the 3'→5' direction, the lagging strand is instead synthesized in short segments known as Okazaki fragments. Along the lagging strand's template, primase builds RNA primers in short bursts. DNA polymerases are then able to use the free 3'-OH groups on the RNA primers to synthesize DNA in the 5'→3' direction. The RNA fragments are then removed by DNA polymerase I in prokaryotes or DNA polymerase δ in eukaryotes (different mechanisms are used in eukaryotes and prokaryotes), and new deoxyribonucleotides are added to fill the gaps where the RNA was present. DNA ligase then joins the deoxyribonucleotides together, completing the synthesis of the lagging strand.

Primer removal

In eukaryotic primer removal, DNA polymerase δ extends the Okazaki fragment in the 5'→3' direction and, when it encounters the RNA primer from the previous Okazaki fragment, displaces the 5′ end of the primer into a single-stranded RNA flap, which is removed by nuclease cleavage.
Cleavage of the RNA flaps involves either flap endonuclease 1 (FEN1) cleavage of short flaps, or coating of long flaps by the single-stranded DNA binding protein replication protein A (RPA) and sequential cleavage by Dna2 nuclease and FEN1. This mechanism is a potential explanation of how the HIV virus can transform its genome into double-stranded DNA from the RNA-DNA hybrid formed after reverse transcription of its RNA. However, the HIV-encoded reverse transcriptase has its own ribonuclease activity that degrades the viral RNA during the synthesis of cDNA, as well as DNA-dependent DNA polymerase activity that copies the sense cDNA strand into an antisense DNA to form a double-stranded DNA intermediate.

Uses of synthetic primers

In PCR, primers are used to determine the DNA fragment to be amplified by the PCR process. The length of primers is usually not more than 30 (usually 18–24) nucleotides, and they need to match the beginning and the end of the DNA fragment to be amplified. They direct replication towards each other – the extension of one primer by polymerase then becomes the template for the other, leading to an exponential increase in the target segment. It is worth noting that primers are not used only for DNA synthesis: viral polymerases, e.g. that of influenza, can use them for RNA synthesis.

PCR primer design

Pairs of primers should have similar melting temperatures, since annealing in a PCR occurs for both simultaneously. A primer with a Tm significantly higher than the reaction's annealing temperature may mishybridize and extend at an incorrect location along the DNA sequence, while one with a Tm significantly lower than the annealing temperature may fail to anneal and extend at all. Primer sequences need to be chosen to uniquely select for a region of DNA, avoiding the possibility of mishybridization to a similar sequence nearby. A commonly used method is a BLAST search, whereby all the possible regions to which a primer may bind can be seen.
Both the nucleotide sequence and the primer itself can be BLAST searched. The free NCBI tool Primer-BLAST integrates a primer design tool and BLAST search into one application, as do commercial software products such as ePrime and Beacon Designer. Computer simulations of theoretical PCR results (Electronic PCR) may be performed to assist in primer design. Mononucleotide repeats should be avoided, as loop formation can occur and contribute to mishybridization. Primers should not easily anneal with other primers in the mixture (either other copies of the same primer or the reverse-direction primer); this phenomenon can lead to the production of 'primer dimer' products contaminating the mixture. Primers should also not anneal strongly to themselves, as internal hairpins and loops could hinder annealing with the template DNA.

The reverse primer has to be the reverse complement of the given cDNA sequence. The reverse complement can be easily determined, e.g. with on-line calculators.

Degenerate primers

Sometimes degenerate primers are used. These are actually mixtures of similar, but not identical, primers. They may be convenient if the same gene is to be amplified from different organisms, as the genes themselves are probably similar but not identical. The other use for degenerate primers is when primer design is based on a protein sequence. As several different codons can code for one amino acid, it is often difficult to deduce which codon is used in a particular case. Therefore a primer sequence corresponding to the amino acid isoleucine might be "ATH", where A stands for adenine, T for thymine, and H for adenine, thymine, or cytosine, according to the genetic code for each codon, using the IUPAC symbols for degenerate bases. Use of degenerate primers can greatly reduce the specificity of the PCR amplification. The problem can be partly solved by using touchdown PCR. Degenerate primers are widely used and extremely useful in the field of microbial ecology.
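The reverse-complement and degenerate-primer bookkeeping described above can be sketched in a few lines of Python. The Wallace rule shown for Tm is only a rough rule of thumb, included for illustration; the IUPAC table is trimmed to the codes this example needs:

```python
from itertools import product

# IUPAC degenerate-base codes (subset sufficient for the "ATH" example)
IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T",
         "R": "AG", "Y": "CT", "H": "ACT", "N": "ACGT"}

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq):
    """Reverse complement of a DNA sequence, e.g. for the reverse primer."""
    return "".join(COMPLEMENT[b] for b in reversed(seq))

def expand_degenerate(primer):
    """All concrete sequences encoded by a degenerate primer."""
    return ["".join(p) for p in product(*(IUPAC[b] for b in primer))]

def wallace_tm(seq):
    """Wallace rule-of-thumb melting temperature: 2(A+T) + 4(G+C)."""
    at = seq.count("A") + seq.count("T")
    gc = seq.count("G") + seq.count("C")
    return 2 * at + 4 * gc

print(reverse_complement("ATGC"))  # GCAT
print(expand_degenerate("ATH"))    # ['ATA', 'ATC', 'ATT']
```

Real primer-design tools use nearest-neighbor thermodynamics rather than the Wallace rule, but the complementing, reversing, and IUPAC-expansion steps are the same; the expanded mixture is exactly what gets synthesized when degenerate primers are ordered.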
They allow for the amplification of genes from thus far uncultivated microorganisms or allow the recovery of genes from organisms where genomic information is not available. Usually, degenerate primers are designed by aligning gene sequences found in GenBank. Differences among sequences are accounted for by using IUPAC degeneracies for individual bases. PCR primers are then synthesized as a mixture of primers corresponding to all permutations.

See also
- Oligonucleotide synthesis — the methods by which primers are manufactured

References
- Rossi, Marie Louise. Distinguishing the Pathways of Primer Removal During Eukaryotic Okazaki Fragment Maturation. Thesis (PhD), School of Medicine and Dentistry, University of Rochester, 2009. Dr. Robert A. Bambara, faculty advisor.
- Doc Kaiser's Microbiology Home Page > IV. Viruses > F. Animal Virus Life Cycles > 3. The Life Cycle of HIV. Community College of Baltimore County. Updated January 2008.
- Stock, S. Patricia; Vanderberg, John; Glazer, Itamar; Boemare, Noel (2009). "1.6.2. Primers development and virus identification strategies". Insect Pathogens: Molecular Approaches and Techniques. CAB International. p. 22. ISBN 978-1-84593-478-1. "Specificity is influenced by the length of the primers and typically primers between 18–24 nucleotides are suitable for PCR."
- "Electronic PCR". NCBI - National Center for Biotechnology Information. Retrieved 13 March 2012.
- Peng, Ri-He; Xiong, Ai-Sheng; Liu, Jin-ge; Xu, Fang; Bin, Cai; Zhu, Hong; Yao, Quan-Hong. "Adenosine added on the primer 5′ end improved TA cloning efficiency of polymerase chain reaction products."
- Reverse Complement Calculator
Dr Lipton [see his videos on this topic: Video #1, Video #2] says our environment feeds signals to our senses, which feed signals to our brains, which filter these signals via our beliefs. The brain then sends signals to all cells, which react to them, causing certain genes to be activated. This activation creates RNA, which encodes a particular protein. The protein can then start the process over by signaling other cells. Dr Lipton can take a cell in a Petri dish and remove its DNA. The cell continues to live and function. This, he argues, shows that our DNA is not what predetermines our cells' actions; our environment does.
- Evolution seen in ‘synthetic DNA’ (talesfromthelou.wordpress.com)
- DNA and Life (avicenna2020.wordpress.com)
- DNA alternative created by scientists (biosingularity.com)
- Scientists Create Artificial DNA, Observe Darwinian Evolution In Them (techie-buzz.com)
Major Section: PROGRAMMING

List is the macro for building a list of objects. For example, (list 5 6 7) returns a list of length 3 whose elements are 5, 6, and 7, respectively. Also see list*. List is defined in Common Lisp. See any Common Lisp documentation for more information.
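A short illustrative sketch (standard Common Lisp evaluation, shown as comments) contrasting list with list*:

```lisp
(list 5 6 7)   ; => (5 6 7)    a proper three-element list
(list* 5 6 7)  ; => (5 6 . 7)  the last argument becomes the final cdr
```

In particular, (list* a b lst) is a common idiom for consing several items onto the front of an existing list.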
“So,” Herr Schrodinger says to us, “I’m looking for a function of x. It needs to be equal to the negative of its second derivative, up to a constant factor. When x is zero, the function itself should be zero. And when x equals L for some constant L greater than zero, the function also needs to be zero. Can you find me such a function?”

Sure we can. As you know, the derivative of a function is its rate of change. Its second derivative is the rate of change of the rate of change. There are functions which are equal to their derivative (and second derivative); e^x is one of them. But we’re looking for a function which is equal to the negative of its own second derivative. There are two of them, the sine and the cosine functions. And since the derivative of a sum is the sum of the derivatives, we know the answer to Schrodinger’s problem is:

f(x) = A sin(kx) + B cos(kx)

Differentiate that sucker twice, and you’ll sure enough get the negative of the very same function back, times an overall constant which happens to be the square of k. We have to have the constants A, B, and k so that we don’t miss any of the possibilities. As an example, if we wanted a function equal to its derivative, e^x works, but so does any constant times e^x. Those are perfectly legitimate functions in their own right, and we can’t ignore them.

Now we have the answer to the first part of Schrodinger’s question. But our function isn’t zero at x = 0, and it isn’t zero at x = L. Probably some of the ways we might select the constants A, B, and k might work, but certainly not all of them. We have to begin chopping away the ones that don’t work. First, we know that at x = 0, f(x) must equal zero too. But while sin(0) = 0 automatically, cos(0) is never zero no matter what k we select. The only way we can force f(0) = 0 is to chop that term entirely by setting B = 0. This leaves us with:

f(x) = A sin(kx)

Which is promising, but not equal to 0 when x = L. At least not unless we can figure out what particular values of k cause that condition to be true.
Now we know that the sine function is zero at lots of x values: at x = 0, pi, 2pi, 3pi, 4pi, and so on; that is, at n*pi for any integer n. But we want it to be equal to zero at x = L, so that means we have to pick k such that

sin(kL) = 0, i.e., kL = n*pi

Solve for k, plug into our function:

k = n*pi / L, so f(x) = A sin(n*pi*x / L)

And we’re done! Our function fits every condition Schrodinger set out for us. Notice that there’s an infinite number of possible n. Assuming we set L = 1, we can graph a few (the value of A is arbitrary; I’m just going to set it equal to 1 also). This is, in slight disguise, a very important problem in intro quantum mechanics – the wavefunction of a particle in a box. Each n corresponds to an energy level of the system. Most importantly, any sum of those solutions is itself a solution, which leads us to the superposition principle and the expansion postulate. And complete sets of orthogonal functions, which are super-cool and I might just make it a topic of a future Sunday Function.
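We can sanity-check the answer numerically. A short Python sketch (the choices L = 1, n = 3, and A = 1 are arbitrary; the derivation allows any of them):

```python
import math

L = 1.0              # box length; any L > 0 works
n = 3                # any positive integer mode number
k = n * math.pi / L

def f(x):
    # A = 1 for simplicity; any constant multiple is also a solution
    return math.sin(k * x)

# Boundary conditions: both values should be zero (up to float rounding)
print(f(0.0), f(L))

# f''(x) should equal -k^2 f(x); estimate f'' with a central difference
h = 1e-5
x = 0.3
second_deriv = (f(x + h) - 2 * f(x) + f(x - h)) / h**2
print(second_deriv, -k**2 * f(x))
```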
Heat waves exacerbated by climate change

The Climate Commission says the heatwave which has made Australia sweat this week has been made worse by climate change. Global warming's role in heatwaves is expected to increase in coming decades too.

Source: AM | Duration: 3min 38sec

ASHLEY HALL: Fire crews in Tasmania, Victoria and New South Wales have worked through the night to try to contain dozens of bushfires still burning out of control. And there's concern that a return to heat wave conditions today will bring many more blazes. The Federal Government's Climate Commission says we'd better get used to it - heatwaves are set to get hotter, longer, bigger and more frequent. The Climate Commission says the heatwave and bushfires which have dogged Australia this week have been exacerbated by global warming. The Commission is launching its findings in a new report today, called "Off the Charts: Extreme Australian Summer Heat." Simon Lauder reports.

SIMON LAUDER: This week has given the entire country a taste of what the experts call extreme heat conditions. Rex Spencer works at the car repair shop at Yulara in the red centre which is in the midst of a record hot spell.

REX SPENCER: I think we're all feeling it, that's for sure.

SIMON LAUDER: How do you cope with 52 degrees in the workshop?

REX SPENCER: We drink lots of water and I think we've bought the local shop out of icy poles actually this week.

SIMON LAUDER: Australia's average temperature has increased by 0.9 of a degree since 1910. But Rex Spencer doesn't believe that's got much to do with the current heatwave.

REX SPENCER: Oh, I'd be a bit sceptical to jump on that reasoning straight away, that's for sure.

SIMON LAUDER: He's not alone. When the Prime Minister Julia Gillard linked the heatwave with climate change this week, the acting opposition leader, Warren Truss, said that was utterly simplistic. But climate change experts have no doubt that climate change is a factor.

DAVID KAROLY: What we have been able to see is clear evidence of an increasing trend in hot extremes, reductions in cold extremes. And with the increases in hot extremes more frequent extreme fire danger days.

SIMON LAUDER: Professor David Karoly has written a report for the Climate Commission which attempts to answer questions about the link between heatwaves and climate change. The report says small changes in average temperature can have a significant impact on the frequency and nature of extreme weather events.

DAVID KAROLY: What it means for the Australian summer is an increased frequency of hot extremes, more hot days, more heat waves and more extreme bush fire days and that's exactly what we've been seeing typically over the last decade and we will see even more frequently in the future.

SIMON LAUDER: Is there a way to explain how this heat wave may have been affected by climate change?

DAVID KAROLY: Yeah, climate change will have worsened this heatwave both by extending its spatial extent and increasing its intensity. What climate change is doing is worsening the conditions associated with heat waves. So it makes them longer, it makes the intensity of the heat wave worse. And together they lead to more frequent extreme fire danger days.

SIMON LAUDER: The Climate Commission report says the number of record heat days across Australia has doubled since 1960. More temperature records are likely to be broken as hot conditions continue this summer. Professor Karoly says, based on current projections of greenhouse gas emissions and climate change, the long-term outlook is even more dire.

DAVID KAROLY: We are expecting in the next 50 years for two to three degrees more warming. In other words two or three times the warming we've seen already, leading to much greater increases in heat waves and extreme fire danger days. So we're expecting future climate change to lead to much greater increases in extremes in the next 30 to 50 years.
ASHLEY HALL: Scientific advisor to the Climate Commission, Professor David Karoly, speaking to Simon Lauder.
<urn:uuid:2748fb3e-711a-42f7-b72d-225e039c326a>
2.796875
874
Audio Transcript
Science & Tech.
48.115052
Information on WSDL (Web Services Description Language) WSDL (Web Services Description Language) provides an XML-based means by which the capabilities of a web service can be described. WSDL describes web services in terms of a set of end points that accept requests and issue responses. A WSDL document specifies the server providing the services, the binding of the abstract service definitions to concrete protocols, and the message formats used for requests and responses. WSDL is used with UDDI (Universal Description, Discovery and Integration, a registry through which web services can be published and discovered), and in conjunction with SOAP, HTTP and MIME. Other topics in our resources on XML related to WSDL include: Please contact us if you would like to nominate other terms for these glossaries. Please contact Argos Press Pty Ltd to request information on licensing this site's content (such as this glossary entry on WSDL (Web Services Description Language)). © Argos Press Pty Ltd, Canberra, 2003-2009. All rights reserved.
<urn:uuid:db4554fa-c2c2-4441-bff3-2fd8adfb9253>
2.6875
213
Documentation
Software Dev.
56.484659
What do Saturn's rings look like from the dark side? From Earth, we usually see Saturn's rings from the same side of the ring plane that the Sun illuminates -- one might call this the bright side. In the above picture, taken in August by the robot Cassini spacecraft now orbiting Saturn, the Sun is behind the camera but on the other side of the ring plane. Such a vantage point gives a breathtaking view of the most splendid ring system in the Solar System. Strangely, the rings have similarities to a photographic negative of a front view. For example, the dark band in the middle is actually the normally bright B-ring. The ring brightness as recorded from different angles indicates ring thickness and the density of ring particles. At the top left of the frame is Saturn's moon Tethys, which, although harder to find, contains much more mass than the entire ring system. Cassini Imaging Team
<urn:uuid:c457d164-099a-41c4-9a47-cda32f786e5a>
3.203125
218
Truncated
Science & Tech.
52.939802
In this illustration, two distant galaxies formed about 2 billion years after the Big Bang are caught in the afterglow of GRB 090323, a gamma-ray burst seen across the Universe. Shining through its own host galaxy and another nearby galaxy, the alignment of gamma-ray burst and galaxies was inferred from the afterglow spectrum following the burst's initial detection by the Fermi Gamma-ray Space Telescope in March of 2009. As seen by one of the European Southern Observatory's Very Large Telescope units, the spectrum of the burst's fading afterglow also offered a surprising result - the distant galaxies are richer in heavy elements than the Sun, with the highest abundances yet seen in the early Universe. Heavy elements that enrich mature galaxies in the local Universe were made in past generations of stars, so these young galaxies have experienced a prodigious rate of star formation and chemical evolution compared to our own Milky Way. In the illustration, the light from the burst site at the left passes successively through the galaxies to the right. Spectra illustrating dark absorption lines of the galaxies' elements imprinted on the afterglow light are shown as insets. Of course, astronomers on planet Earth would be about 12 billion light-years off the right edge of the frame. Illustration: L. Calçada - Research Team: Sandra Savaglio
<urn:uuid:a9178669-1c70-42f1-987c-259e7b9769b7>
4.15625
299
Knowledge Article
Science & Tech.
33.179201
Waste from nuclear power plants can be in solid, liquid, or gaseous forms. Some of the types and quantities of possible nuclear waste are noted below. Depleted Uranium (DU) is, according to the Military Toxics Project, the radioactive byproduct of the uranium enrichment process, and is "roughly 60% as radioactive as naturally occurring uranium and has a half-life of 4.5 billion years." The United States has in excess of 1.1 billion pounds of DU waste material. Low-Level Radioactive Waste (LLW) is any radioactive waste not classified as high-level waste, transuranic waste, or uranium mill tailings. LLW often contains small amounts of radioactivity dispersed in large amounts of material. It is generated by uranium enrichment processes, reactor operations, isotope production, medical procedures, and research and development activities. LLW is usually made up of rags, papers, filters, tools, equipment, discarded protective clothing, dirt, and construction rubble contaminated with radionuclides. Sewage sludge is what is left over after raw sewage has been treated at wastewater treatment plants. Water and many of the contaminants are removed from the raw sewage; bacteria are then left to do the job of reducing human waste, leaving a concentrated semisolid sludge cake. In the past, wastewater treatment plants paid for disposal of sludge in landfills or through incineration. Over one third of the 5.3 million metric tons of sewage sludge produced each year in the US is now dumped on farmland and forestland. Sludge isn't just "fertilizer": heavy metals, parasites (and other pathogens), and chemicals such as chlorine can all be contained in sewage sludge. But the 503 regulations don't include testing or treatment for radioactivity in sludge, which can originate from industry, the medical profession and labs. (Adapted from the Sierra Club website.) Few dispute that nuclear power plants do produce nuclear waste.
What is in dispute is how dangerous this waste is, whether it can be safely disposed of, and how this waste compares with the air pollution from fossil fuel power plants.
<urn:uuid:2481170e-0fe2-4602-8170-2ca5be842156>
3.875
484
Knowledge Article
Science & Tech.
38.668332
Originators: G. Wegner; Geňa Hahn, André Raspaud, and Weifan Wang (presented by Douglas West - REGS 2007) Definitions: A k-coloring of the vertices of a graph is an injective coloring if for every vertex v, the neighbors of v receive distinct colors. The injective chromatic number χi(G) is the least k such that G has an injective k-coloring. The square of a graph G is the graph G² with the same vertex set as G in which vertices are adjacent if their distance in G is at most 2. The maximum average degree mad(G) of a graph G is the maximum, over all subgraphs, of the average vertex degree. Background: An injective coloring need not be a proper coloring; indeed, the injective chromatic number of the Petersen graph is 5, with each color class inducing K2. Hahn-Kratochvíl-Širáň-Sotteau [HKSS] introduced χi(G) and noted for a graph with maximum degree d that d ≤ χi(G) ≤ χ(G²) ≤ d²-d+1. Indeed, χi(G) equals the chromatic number of the "common neighbor graph", the graph on vertex set V(G) where vertices are adjacent if they have a common neighbor in G. The upper bound d²-d+1 holds with equality if and only if G is the incidence graph of a projective plane of order d-1 [HKSS]. The lower bound holds with equality for a d-regular graph only if d divides the number of vertices [HKSS]. [HKSS] also discusses injective coloring of cartesian products, particularly hypercubes. The conjectured extremal values for χi(G) for planar graphs (in terms of the maximum degree d) are similar to those for χ(G²). Conjecture 1 (Wegner [W]): If G is planar with maximum degree d, then χ(G²)≤7 when d=3, χ(G²)≤d+5 when 4≤d≤7, and χ(G²)≤(3d/2)+1 when d≥8. Comments: For planar graphs with girth g and maximum degree d, Dvořák-Král'-Nejedlý-Škrekovski [DKNS] proved that χ(G²)≤d+1 when g≥7 and d is sufficiently large, and χ(G²)≤d+2 when g=6. Molloy-Salavatipour [MS] proved that χ(G²)≤(5d/3)+78. For d=3, Montassier-Raspaud [MR] proved that χ(G²)≤5 if g≥14 and χ(G²)≤6 if g≥10.
Conjecture 2 (Hahn-Raspaud-Wang [HRW]): If G is planar with maximum degree d, then χi(G)≤ ⌈3d/2⌉. Comments: Doyon-Hahn-Raspaud [DHR] proved that χi(G)≤ ⌈3d/2⌉ whenever G does not have K4 as a minor. Question 3: What bounds can be given on the injective chromatic number when mad(G) is bounded? Doyon-Hahn-Raspaud [DHR] proved bounds on χi(G) in terms of the maximum degree d when mad(G) is bounded: If mad(G)<14/5, then χi(G)≤ d+3, If mad(G)< 3, then χi(G)≤ d+4, and If mad(G)<10/3, then χi(G)≤ d+8. Since mad(G)<2g/(g-2) when G is a planar graph with girth g, these provide the upper bounds d+3, d+4, d+8 for planar graphs with girths 7,6,5, respectively, and maximum degree d. These bounds have been improved in various ways by Lužar, Škrekovski, and Tancer [LST]: If g≥19, then χi(G)≤ d. If g≥10, then χi(G)≤ d+1. If g≥5 and d is large, then χi(G)≤ d+4. If g≥7 and d=3, then χi(G)≤ 5. Problem 4: Find classes on which χi is computable in polynomial time. Comments: [HKSS] showed that computing χi is NP-hard in general. It is easy for trees and cycles and probably is not much harder for cacti. What about outerplanar graphs? [DHR] A. Doyon, G. Hahn, A. Raspaud; On the injective chromatic number of sparse graphs, preprint 2005. [DKNS] Z. Dvořák, D. Král', P. Nejedlý, R. Škrekovski; Coloring squares of planar graphs with girth six, European J. Combin. 29 (2008), 838--849. [HKSS] Hahn, Geňa; Kratochvîl, Jan; Širáň, Jozef; Sotteau, Dominique; On the injective chromatic number of graphs. Discrete Math. 256 (2002), no. 1-2, 179--192. [HRW] G. Hahn, A. Raspaud, W. Wang; On the injective coloring of K4-minor free graphs, preprint 2006. [LST] B. Lužar, R. Škrekovski, and M. Tancer; Injective colorings of planar graphs with few colors, preprint 2008: kam.mff.cuni.cz/~kamserie/serie/clanky/2006/s798.ps, to appear in Discrete Math. [MS] M. Molloy and M. R. Salavatipour; A bound on the chromatic number of the square of a planar graph, J. Combin. Th. (B) 94 (2005), 189-213. [MR] M. 
Montassier and A. Raspaud; A note on 2-facial coloring of plane graphs, Technical Report RR-1341-05, LaBRI (2005). [W] G. Wegner, Graphs with given diameter and a coloring problem, Technical Report, University of Dortmund (1977). Posted July 2008
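The equivalence noted in the background, that χi(G) equals the chromatic number of the common neighbor graph, is easy to experiment with. The sketch below (not from any of the cited papers) builds the common neighbor graph in Python and colors it greedily; greedy first-fit only gives an upper bound on χi in general, though on the Petersen graph with this vertex ordering it happens to achieve the optimal value 5, with each color class inducing K2 as described above.

```python
from itertools import combinations

def common_neighbor_graph(adj):
    """Vertices adjacent iff they share a neighbor in the original graph."""
    cn = {v: set() for v in adj}
    for v, nbrs in adj.items():
        # Any two neighbors of v have v as a common neighbor.
        for a, b in combinations(sorted(nbrs), 2):
            cn[a].add(b)
            cn[b].add(a)
    return cn

def greedy_coloring(adj):
    """Proper coloring by first-fit; an upper bound on the chromatic number."""
    color = {}
    for v in sorted(adj):
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color

# Petersen graph: outer 5-cycle 0..4, inner pentagram 5..9, spokes i -- i+5.
petersen = {i: set() for i in range(10)}
for i in range(5):
    for j in ((i + 1) % 5, i + 5):
        petersen[i].add(j); petersen[j].add(i)
    a, b = 5 + i, 5 + (i + 2) % 5
    petersen[a].add(b); petersen[b].add(a)

coloring = greedy_coloring(common_neighbor_graph(petersen))
print(len(set(coloring.values())))  # 5, matching chi_i(Petersen) = 5
```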
<urn:uuid:a32bc3f2-8647-4cbf-86c8-a359c33a7301>
2.859375
1,554
Academic Writing
Science & Tech.
95.123307
Rubber Bands and Paper Clip Hi all. I have a question. How do you show the difference between stress (pressure) and force? We were given 2 short elastic bands, 2 long elastic bands, and 2 paper clips. The idea that came up was to tie one elastic band to 1 weight (anything), suspend it, and measure the length of the elastic band. Then take 2 same-length elastic bands, tie them both to the weight so they suspend the same weight side by side, and measure the length of the elastic bands. Obviously, the parallel suspension will have a shorter length. This goes to show that the two elastic bands together have twice as much cross-sectional area as in the experiment with only 1 elastic band. I understand that pressure is the amount of force per unit area. However, I don't see how this connects to showing the difference between stress and force? How do I show it? If I can't with this experiment, what other experiment can I do? And the other experiment is to show the difference between strain and deflection. This one I blanked out on. No idea. All I know is that strain is change in length over the original length. Please help. Thanks.
<urn:uuid:810fa133-9133-4ad7-9b5a-2aa09214001c>
3.28125
258
Comment Section
Science & Tech.
72.714199
Let's start with the oldest-living animal of all, and one of the strangest in the entire animal kingdom: Turritopsis nutricula, otherwise known as the immortal jellyfish. You might think that this animal's common name is poetic, that perhaps it lives for a few hundred years, impressing generations of scientists--but you'd be wrong. Its name is literal: Turritopsis nutricula is biologically immortal. The immortal jellyfish can theoretically live forever, thanks to a process that is believed to be unique, called transdifferentiation. It has the ability, at any stage in its life, to completely transform back into a polyp, its earliest stage of life. You can imagine it like the mythical phoenix, an immortal bird which is repeatedly reborn as a chick. The immortal jellyfish doesn't die; it merely regenerates its cells in a younger stage, then ages naturally again. That doesn't mean all Turritopsis nutricula are immortal; the species is a small invertebrate in the ocean, and is susceptible to all of the nasty things that can befall such creatures, whether that's being eaten or succumbing to disease. But it is biologically capable of immortality. The New York Times Magazine ran a great article about the immortal jellyfish last year--highly recommended reading.
<urn:uuid:117dcba9-1af5-4e00-84d0-fb7fb2678d29>
2.96875
269
Personal Blog
Science & Tech.
33.24
4 March 2005 | EN Satellite image of tropical cyclone Marilyn approaching islands in the Caribbean, 1995 Following last year's tsunami, the international community has agreed to create a tsunami warning system for the Indian Ocean. To help push plans forward, members of UNESCO's Intergovernmental Oceanographic Commission (IOC) are meeting this week (3-8 March) in France. But however welcome this initiative is, it would be wrong to build a single-purpose warning system for a single ocean basin, warns Keith Alverson. Writing in Nature, Alverson, who is based at the IOC's Global Ocean Observing System, says a "quick technological fix" is not the solution. He says the warning system set up by countries following tsunamis in the Pacific Ocean in the 1960s was soon neglected and its equipment became outdated. Alverson says the world should instead create an integrated, global system to warn of and prepare for a range of ocean disasters, including cyclones and giant waves created by hurricanes. Integrating the warning system with local initiatives would help ensure it is maintained and continuously funded, he says. All SciDev.Net material is free to reproduce providing that the source and author are appropriately credited. For further details see Creative Commons.
<urn:uuid:eb2a495f-1ff7-4ba9-8729-44a8dcc406d9>
3.5
257
Truncated
Science & Tech.
34.761353
It's a significant finding in the search for signs of extraterrestrial life. According to astronomers using the National Science Foundation's Green Bank Telescope, evidence of prebiotic molecules has been discovered in interstellar space -- the first such evidence unearthed. The finding, according to experts, could increase the odds of discovering life outside of our own solar system.
<urn:uuid:77779015-4c19-4ad4-b453-1d42175ba279>
3.15625
122
Truncated
Science & Tech.
35.215431
So I mentioned batteries the other day and thought I should follow up with a bit more electrochemistry. Electroplating is simply the act of using an electrical current to deposit a layer of one thing (usually a metal) on top of another. This is a fairly common procedure used for many things, including making car parts and taps shiny by depositing a layer of chromium. In electroplating, the object to be covered is located at the cathode (the negative electrode) and must be electrically conductive. Another electrode is also needed, and in some cases it is made of the material to be deposited. Both electrodes are then placed in a solution containing dissolved metal salts (such as copper sulfate). The ions in this solution allow the flow of an electrical current, along with providing the metal necessary to coat the cathode. When this system is switched on, the dissolved metal ions move towards the cathode and begin to adhere to the surface. This happens because the positive charge on each metal ion is removed by the addition of electrons from the power source, turning the water-soluble ionic metal into the insoluble solid metal we all know and coating the cathode in the process. This continues until all the metal ions in the solution are used up or the current can no longer flow. In cases where the anode is made of metal, it may also begin to dissolve as it attempts to make up for the ion imbalance in the solution; thus it reduces in mass and its material can itself transfer to the cathode. Electroplating can also lead to fractals, as seen in the SEM image.
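As a rough companion to the description above, Faraday's law of electrolysis predicts how much metal a given current deposits. This sketch uses assumed example values (0.5 A for 30 minutes plating copper from a copper sulfate bath), not figures from the post:

```python
# Faraday's law sketch: mass deposited during electroplating.
FARADAY = 96485.0   # C/mol, charge carried by one mole of electrons
I = 0.5             # current, amperes (assumed example value)
t = 30 * 60         # plating time, seconds (assumed example value)
M_cu = 63.55        # molar mass of copper, g/mol
n = 2               # electrons needed to reduce each Cu2+ ion at the cathode

charge = I * t                  # total charge passed, coulombs
moles_e = charge / FARADAY      # moles of electrons delivered
mass = (moles_e / n) * M_cu     # grams of copper plated onto the cathode
print(f"{mass:.3f} g of copper deposited")  # about 0.296 g
```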
<urn:uuid:59f158ff-1440-4b19-a89c-d6a256a9b026>
3.75
325
Personal Blog
Science & Tech.
41.01928
Discover the cosmos! Each day a different image or photograph of our fascinating universe is featured, along with a brief explanation written by a professional astronomer. 2005 October 22 Explanation: How could a galaxy become shaped like a ring? The rim of the blue galaxy pictured on the right is an immense ring-like structure 150,000 light years in diameter composed of newly formed, extremely bright, massive stars. That galaxy, AM 0644-741, is known as a ring galaxy and was caused by an immense galaxy collision. When galaxies collide, they pass through each other -- their individual stars rarely come into contact. The ring-like shape is the result of the gravitational disruption caused by an entire small intruder galaxy passing through a large one. When this happens, interstellar gas and dust become condensed, causing a wave of star formation to move out from the impact point like a ripple across the surface of a pond. The intruder galaxy has since moved out of the frame of this image, which was taken by the Hubble Space Telescope and released to commemorate the anniversary of Hubble's launch in 1990. Ring galaxy AM 0644-741 lies about 300 million light years away. Authors & editors: NASA Web Site Statements, Warnings, and Disclaimers NASA Official: Jay Norris. Specific rights apply. A service of: EUD at NASA / GSFC & Michigan Tech. U.
<urn:uuid:84115b37-6835-481b-8be1-12915e9e3d0e>
4
279
Knowledge Article
Science & Tech.
45.367948
The proteomics of Chernobyl: The area immediately surrounding the Chernobyl nuclear plant has been deemed unfit for human habitation, but plants and animals don't pay attention to safety officials, and have happily taken advantage of the fact that they can go about their business undisturbed in the exclusion zone. Researchers have checked whether one plant that's thriving in the area (flax) has acquired any specific adaptations to deal with the radiation. The answer, apparently, is no. When the seed proteomes of plants grown in contaminated soil were examined, only 35 proteins showed significant changes, and these performed a broad range of functions. They're never too young to start them on irony: Well, maybe before they know how to speak would be too young, but kids seem to grow up fast when it comes to things like hyperbole and understatement. The authors recorded nine hours of family conversation among parents and kids aged about four to six. Children's responses to their parents "revealed some understanding of ironic language, particularly sarcasm and rhetorical questions." Kids as young as four were already able to use rhetorical questions themselves, and (no surprise here) showed a tendency towards hyperbole. Telling stories in the dismal science: Talk sales numbers and margins long enough, and someone will invariably mention the razors-and-blades analogy. Basically, you sell a product at a low profit or loss—the razor handle—in order to commit your buyer to a lifetime of high-profit blade use. The irony here is that, although this model may have been followed in innumerable markets, razors may not have actually been one of them. A paper in progress has looked into how one of the market's pioneers, Gillette, behaved, and found that it priced the handles to profit while they were patent-protected, kept getting new patents, and only matched their competitors' prices once a device was out of patent. Blade prices were held steady throughout the period.
"Gillette hadn't played razors-and-blades when it could have during the life of the 1904 patents and didn't seem well situated to do so after their expiration, but it was exactly at that point that Gillette played something like razors-and-blades and that was when it made the most money," the author concludes. "Razors-and-blades seems to have worked at the point where the theory suggests that it shouldn't have." It turns out we've had female viagra the whole time: And it's called a placebo. Sixteen women who complained of sexual dysfunction were randomized into a placebo group as part of a larger clinical trial and, after eight weeks of placebo, they reported a significant improvement in sexual function. Underlying the placebo effect in this case are age and the status of their sexual relationship, which both modulated the impact that the placebo had. The authors suggest that these factors may influence the women's interest in responding to treatment. Weird technology: Here are two items I'm not sure were really on the list of modern conveniences I'd most like to see. For the first, we have spray on clothing. You're too late to catch the demonstration of the material in action, which took place in London on Thursday, but I'm sure it was impressive. The somewhat unusual twist in this development is that, once the sprayed-on fabric dries, the newly formed garment can actually be removed, washed, and worn again. Next up, we have the prefab artificial ovary. There's not much to the fabrication, though, simply the creation of a honey-comb pattern made out of a gel that acts to host both oocytes and other ovarian cells, which self-organize into tissues that can produce mature oocytes about a third of the time. 
The authors suggest that this might be a good option for women who will have their ovaries removed or destroyed during medical procedures, but we'll probably need to be able to stick the artificial ovary in the freezer for a few years, since nobody's likely to be interested in getting pregnant right after a major medical incident. Viability after freezing has yet to be demonstrated.
<urn:uuid:f1bb5f90-9d01-4186-8abc-c9a18c7bd42d>
2.75
849
Listicle
Science & Tech.
39.853836
The most detailed discussion of how statistics are used is the Row Estimation Examples section of the documentation. Ultimately all information about how a query might be executed is turned into a series of costs via the various Cost Constants. So if a table is 1000 pages in size, and the statistics suggest 10% of it will be touched at random by a proposed query, that's 100 pages * 4.0 (random_page_cost) = 400 cost units for pulling in the data; then other constants are used to determine things like processing costs on the data in those pages. The query optimizer tries various ways of obtaining and combining individual components: different join types, different ways to access the table data, etc. It iterates through the possible plans from those combinations, then picks the one that has the cheapest total cost to execute. I wrote just over 50 pages on this subject for my book PostgreSQL 9.0 High Performance, which has the longest discussion of query execution available right now. There isn't too much there on how statistics are used beyond what's shown in the documentation though. Most of it covers all of the various query plan node elements you might run into.
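The page-cost arithmetic in that example can be written out directly. The constants below are PostgreSQL's default cost settings, and the table numbers are the ones used in the text:

```python
# Back-of-envelope version of the planner's I/O cost arithmetic.
random_page_cost = 4.0   # default planner constant for non-sequential page reads
seq_page_cost = 1.0      # default constant for sequential reads, for comparison

table_pages = 1000
selectivity = 0.10                      # statistics say 10% of pages are touched
pages_read = table_pages * selectivity  # 100 pages

io_cost_random = pages_read * random_page_cost  # 400 cost units, as in the text
io_cost_seq = table_pages * seq_page_cost       # a full sequential scan: 1000 units
# The planner then adds per-tuple CPU costs on top of these I/O estimates
# and picks whichever complete plan has the cheapest total.
```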
<urn:uuid:4e68f38f-ee0d-406b-8bed-e192b3225e8c>
2.75
239
Q&A Forum
Software Dev.
49.429451
[erlang-questions] String encoding and character set Wed Jan 17 01:59:51 CET 2007 My guess is that with a string format you can access the nth character of the message by its position, which can be very difficult to do with a list if the encoding supports different sizes for different characters (and sometimes the same character can have a different encoding depending on previous ones: contextual encoding) ... I guess the string type abstracts all that, but a list is enough for encodings like ASCII, UTF8. So two questions: (1) am I clear? (2) if yes, am I right? ;) On 1/17/07, Robert Virding <> wrote: > We do actually, in fact we have something much much better, a list. > Using a list you don't have to worry about encodings but can use the > unicode value directly in the string/list. This makes all processing > much easier. Then when you are done you can convert it to whatever > encoding you want. > I don't really understand why anyone would want to process data in an > unnecessarily complex format instead of a simple one. > dda wrote: > > String types – at least well-implemented ones – don't just store a > > string, but also encoding information. They are/should be geared > > towards pain-free manipulation of text data, and by text I mean things > > outside ASCII-land. Encodings-aware string manipulation functions > > don't function on bytes, but on characters, a quite different notion. > > We don't have this in Erlang. > erlang-questions mailing list
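The indexing issue raised at the top of the thread is easy to demonstrate outside Erlang as well. Here is a small Python sketch of the difference between a list of code points (the representation Virding advocates) and variable-width encoded bytes; the example string is chosen purely for illustration:

```python
# Indexing by character vs. by byte under a variable-width encoding.
s = "naïve"                      # 5 characters
utf8 = s.encode("utf-8")         # 6 bytes: the 'ï' takes two bytes in UTF-8

code_points = [ord(c) for c in s]   # like an Erlang string-as-list of code points
assert len(code_points) == 5        # the nth character is just code_points[n]
assert len(utf8) == 6               # byte offsets no longer match character positions
```

This is the trade-off the thread is circling: a list of code points makes positional access trivial, at the cost of converting to a concrete encoding only at the boundaries.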
<urn:uuid:f6b365c4-22cf-478b-a6a1-7313a6f5bb77>
2.9375
391
Comment Section
Software Dev.
61.843788
What is G.L.O.R.I.A.? The Global Observation Research Initiative in Alpine Environments (GLORIA) is a worldwide network of long-term research sites established to assess the impacts of climate change on sensitive native alpine communities. There are more than 60 GLORIA target regions in mountain ranges worldwide, including several relatively new regions in North America. Many alpine species face habitat fragmentation and loss, and even extinction, because they are adapted to cold temperatures and very limited in their geographic distribution. Alpine communities are also limited in the extent to which they can migrate to higher altitudes or latitudes, due to the island nature of mountain tops. Upward movement of alpine flora with recent warming has already been observed in the European Alps, and climate change models predict more rapid and larger climate change at high elevations.
<urn:uuid:25e5534e-98d9-4184-a598-95c1f6c20c2f>
3.59375
172
Knowledge Article
Science & Tech.
27.095183
by Andy Grover When the computer receives a packet, it is copied into a kernel buffer by the NIC, then copied by the CPU from the kernel buffer to its actual destination in the receiving process's address space. The same data is transferred over the memory bus THREE times, and the CPU must dumbly read and then write every single byte, even before the application sees it. RDMA (Remote Direct Memory Access) lets processes on different machines send data directly into each other's process spaces, resulting in greatly increased efficiency. But, using RDMA is very hard, compared to BSD sockets. This talk will introduce my work on making RDMA usable by mere mortals, from Python! 1st–4th June 2010
<urn:uuid:fdfdbb79-4117-4b6e-a47b-b01d091e68ca>
2.78125
154
Content Listing
Software Dev.
41.3624
Ea-5 The Electromagnetic Pump Published: Tuesday 14 March 2006 - Updated: Tuesday 29 March 2011 To demonstrate the magnetic force on an electric current (in a fluid) using a simple electromagnetic pump. - Electromagnetic pump - D.C. supply from lecture bench - T.V. camera and monitor - Desk lamp - Adjustable permanent magnet - Hook-up wire a) Principle of physics illustrated: an electric current experiences a force at right angles to both the current direction and the magnetic field. The force per unit volume is F = J × B, where J is the current density and B is the magnetic field. The mercury kept in the chemical cupboard is funnelled into the electromagnetic pump to the top of the smaller channel. Approximately 3-5 V D.C. is applied to the binding posts from the D.C. supply fitted to the lecture bench. (The switch in the basement for the D.C. supply must be turned to the correct position for operation.) It is advisable not to allow operation of the pump for longer than 30 seconds, as it draws quite a large current.
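The right-angle rule in the principle above is just the vector cross product. A quick numerical sketch, using assumed illustrative values for J and B rather than the demonstration's actual figures:

```python
# Sketch of the force density F = J x B (force per unit volume).
def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

J = (1.0e6, 0.0, 0.0)   # current density, A/m^2, along x (assumed value)
B = (0.0, 0.5, 0.0)     # magnetic field, tesla, along y (assumed value)

F = cross(J, B)         # force per unit volume, N/m^3
# F comes out along z: perpendicular to both J and B,
# which is the direction that drives the mercury along the channel.
```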
<urn:uuid:30742805-31bf-4ee1-afe9-576b49d89d19>
3.5625
238
Tutorial
Science & Tech.
61.766866