| text | id | score | tokens | format | topic | fr_ease | __index__ |
|---|---|---|---|---|---|---|---|
Some scientists say it's an exciting "start". We take a boat to see the results. At the moment, we're whizzing down a channel a few miles downstream from Caernarvon.
Denise Reed spends a lot of her life in these wetlands; she studies them for Louisiana State University. Reed would make a great scout leader: she's got no-nonsense hair, an infectious smile, and she forges through the grasses on this wetland like she's leading an expedition.
"We're going to go over to see some marsh over thereby those trees." Reed says. "And that's where we're going to see how the freshwater, the nutrients and the sediments coming out of the diversion structure are revitalizing the marsh. So we're gonna go see. It's right over on the other side there ... Look at all this wonderful green, you know, there's nice big growth on these plants."
Tracing the effects of the Caernarvon. Photo: William Brangham/NOW with Bill Moyers
Reed says if we had walked here before they started the Caernarvon project, it would have felt completely different. This wetland was sick back then, and when wetlands are sick, the soil gets all mushy and turns into open water. But now we're walking on solid ground.
"You look at those ponds over there in the distance," Reed explains, "you see how the grass is gradually moving in and filling in. You can see that just here, you can see that grass growing out into the middle of this area. This would have all been bare. What is land loss? Land loss is marsh turning to open water. Here we've got open water in ponds filling in and becoming marsh. A lot of people think it's hopeless down here in coastal Louisiana, but just coming down here and looking at this makes us believe that we can do this."
But these changes have disrupted some people's lives. The problem is, the minute you put your finger on a map and say, 'Let's tinker with nature here, let's mimic the old floods there,' chances are that you might flood somebody's backyard. Or you'll disrupt the bays and inlets where George Barisich does his fishing.
Next: A Plague of Killer Mussels | <urn:uuid:75e536bc-b651-4d8e-87c9-5ed446966253> | 3.09375 | 477 | Audio Transcript | Science & Tech. | 71.227569 | 700 |
Ever see a monarch butterfly?
They have bright orange and black wings, and every year they fly from Canada to Mexico and then back again. Each individual butterfly doesn’t make the trip, but females lay eggs along the way and their offspring continue on.
What a trip!
Some people think monarch butterflies are in danger because they eat milkweed plants, and milkweed plants are getting harder to find. The problem is that an insect called the milkweed stem weevil also likes to eat milkweed plants, and it eats a lot of them.
But an Agricultural Research Service (ARS) scientist made a discovery that could help save milkweed plants and monarchs.
The scientist, Charles Suh, was working on a new boll weevil trap when he made his discovery. Boll weevils are a problem for farmers because they attack cotton plants, so farmers in Texas asked Suh to find out why their boll weevil traps weren’t working.
Suh asked the trap manufacturer to make a trap with the exact mix of natural compounds that boll weevils use to sniff out each other. Suh placed the new traps in cotton fields and found that they didn’t catch any more boll weevils, but they did catch a lot of the milkweed stem weevils that eat milkweed plants.
With a little more work, the discovery could lead to traps that control milkweed stem weevils. That would mean enough milkweed plants for monarch butterflies to keep making those long distance trips.
By Dennis O'Brien, Agricultural Research Service, Information Staff | <urn:uuid:1bdcc472-c55c-4506-a05a-5c28629a7118> | 3.765625 | 329 | Knowledge Article | Science & Tech. | 59.046274 | 701 |
Path of the solar eclipse…click for animation. (Credit: A.T. Sinclair/NASA).
This year’s big ticket astronomical event occurs over a sparsely populated but beautiful track of our planet; we’re talking about July 11th’s total solar eclipse. Of course, it isn’t often that an eclipse doesn’t occur over the windswept Arctic or a war-torn banana republic… the Sun and sand of an island eclipse may just be the perfect combo. If you haven’t already made plans to catch one of the numerous cruises headed that way you may have to enjoy it vicariously with the rest of us via the Internet; this eclipse graces only a smattering of islands before making a brief landfall in South America across the Chilean-Argentine border at sunset. The path of solar totality will not grace our planet again until November 2012 in another South Pacific eclipse that intersects this month’s path! Its maximum length of 5 minutes and 20 seconds occurs over open ocean. Two very interesting sites for viewing include Easter Island and just off of the coast of French Polynesia and Tahiti; the more adventurous may want to head for the Cook Islands site of Mangaia, which lies right along the centerline. Weather prospects may favor the northern hump of the path, with a mean cloudiness of less than 50%… but for sheer beauty and landscape photo ops, Easter island will be your best bet. No doubt most of humanity will experience this one vicariously via the web; follow @Astroguyz via Twitter, as we’ll post where online to watch this extra-ordinary event in the days leading up to the eclipse!
The Astro-term for this week is Metonic Series. A metonic series of eclipses arises from the fact that the period of 19 tropical solar years is very nearly equal to 235 synodic months. This was first recognized by the astronomer Meton of Athens in the year 432 B.C. The error of difference is 2 hours per 19 years, and this accumulates to a full calendar day every 219 years. A metonic cycle of eclipses will share the same calendar date in groupings of 4 to 5 per series… for example, the first eclipse related to this month’s was on July 11th, 1953 and the last will be 19 years from now, on July 11, 2029. Do not confuse metonic series with saros cycle, which is independent of the solar calendar and based on a period of 223 synodic months. So what, you say? Well, metonic series not only factor into eclipses landing on the same date, but also play a role in calculating when the Moon will appear at the same phase in the same position again… metonic series even play in to trajectory calculations for lunar bound spacecraft, as well as serving as a basis for the Hebrew calendar and the computation of Easter! | <urn:uuid:d2170d2e-bd07-4f81-9e1d-163040edab7e> | 2.578125 | 601 | Personal Blog | Science & Tech. | 52.187678 | 702 |
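For the curious, here's a quick back-of-the-envelope check of that near-coincidence (my own sketch using rounded values for the tropical year and synodic month, not from the original post):

TROPICAL_YEAR = 365.24219   # days, approximate
SYNODIC_MONTH = 29.530589   # days, approximate

nineteen_years = 19 * TROPICAL_YEAR
lunations = 235 * SYNODIC_MONTH
gap_hours = (lunations - nineteen_years) * 24

print(f"19 tropical years:  {nineteen_years:.2f} days")
print(f"235 synodic months: {lunations:.2f} days")
print(f"mismatch: about {gap_hours:.1f} hours per 19-year cycle")
print(f"a full day of drift after roughly {19 * 24 / gap_hours:.0f} years")

Running this gives a mismatch of about 2.1 hours per cycle and roughly 219 years for a full day of drift, matching the figures quoted above.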
Conditions of Use
No two planets are alike. They all have their own uniqueness. Saturn has its rings, and Jupiter has a big storm on it called the Great Red Spot. Some of the planets are different colors too: Uranus and Neptune are blue. Planets can be red, blue, and white. Also, when you look up into the sky, you sometimes see a star one night and the next night it's not there. That's because it's a wandering star, better known as a planet. The most common one is Venus.
The planets are at different distances from the Sun. The closest one is Mercury. The furthest one is Neptune. Now you're probably wondering why it's not Pluto. Well, to tell you the truth, Pluto is not a planet anymore. I know, it's sad. Now let me give you a list so you know how close and far away the planets are.
Pluto- 39.5AU (Not a Planet)
The planets are all different sizes. For instance, Jupiter is the largest planet, and Mercury is the smallest. Now you're probably wondering why Pluto isn't the smallest planet. Well, to tell you the truth, Pluto is not a planet anymore. Scientists decided it doesn't meet the official definition of a planet, so now we just call Pluto a dwarf planet.
We used a lot of measurements when we were working with the planets, but the ones we used most were light-years and AUs. We used light-years for measuring the distance of the stars and how long it takes for the light of the stars to get to us, and we used AUs to tell the distance of the planets. An AU is how far a planet is from the Sun. Here are all the planets and their AUs.
| <urn:uuid:62bee2b6-af39-4622-83de-06d9d332cc72> | 3.203125 | 394 | Personal Blog | Science & Tech. | 76.043057 | 703 |
The figure tag is used to provide the structure for inserting a figure into a CNXML document. A figure may contain an image, multimedia object, or caption tag.
<title>The World's Cutest Dog</title>
<media id="dogpic" alt="A dog sitting on a bed">
  <image mime-type="image/jpeg" src="image1.jpg" />
</media>
<caption>Notice how cute the dog is just sitting there.</caption>
Results in this display:
Figure 1: Notice how cute the dog is just sitting there.
|The World's Cutest Dog|
Allows you to determine which way subfigure elements are
arranged. Has no effect if the figure has no subfigure children.
- horizontal - Subfigures appear side by side (default).
- vertical - Subfigures appear one on top of the other.
Defines the type of figure in order to give specialized control over numbering.
Figures of the same type are numbered in series (i.e., Figure 1, Figure 2...).
Type can be used in conjunction with label so that figures of each
user-defined type appear with their own label. Type can be any user-defined
value that reflects the purpose of the figure.
A unique identifier, whose value must begin with a letter and contain only letters,
numbers, hyphens, underscores, colons, and/or periods (no spaces).
may contain an optional
tag, followed by an optional title
Next, it must contain:
may contain an | <urn:uuid:8c5551fd-d2dc-4ef1-966a-c0bcfa8ee5d2> | 2.875 | 327 | Documentation | Software Dev. | 48.186005 | 704 |
There is a passage in On Intelligence about the differences between parallel processing in humans versus computers:
From the dawn of the industrial revolution, people have viewed the
brain as some sort of machine. They knew there weren't gears and cogs
in the head, but it was the best metaphor they had. Somehow
information entered the brain and the brain-machine determined how the
body should react. During the computer age, the brain has been viewed
as a particular type of machine, the programmable computer. And as we
saw in chapter 1, AI researchers have stuck with this view, arguing
that their lack of progress is only due to how small and slow
computers remain compared to the human brain. Today's computers may be
equivalent only to a cockroach brain, they say, but when we make
bigger and faster computers they will be as intelligent as humans.
There is a largely ignored problem with this brain-as-computer
analogy. Neurons are quite slow compared to the transistors in a
computer. A neuron collects inputs from its synapses, and combines
these inputs together to decide when to output a spike to other
neurons. A typical neuron can do this and reset itself in about five
milliseconds (5 ms), or around two hundred times per second. This may
seem fast, but a modern silicon-based computer can do one billion
operations in a second. This means a basic computer operation is five
million times faster than the basic operation in your brain! That is a
very, very big difference. So how is it possible that a brain could be
faster and more powerful than our fastest digital computers? "No
problem," say the brain-as-computer people. "The brain is a parallel
computer. It has billions of cells all computing at the same time.
This parallelism vastly multiplies the processing power of the brain."
I always felt this argument was a fallacy, and a simple thought
experiment shows why. It is called the "one hundred–step rule." A
human can perform significant tasks in much less time than a second.
For example, I could show you a photograph and ask you to determine if
there is cat in the image. Your job would be to push a button if there
is a cat, but not if you see a bear or a warthog or a turnip. This
task is difficult or impossible for a computer to perform today, yet a
human can do it reliably in half a second or less. But neurons are
slow, so in that half a second, the information entering your brain
can only traverse a chain one hundred neurons long. That is, the brain
"computes" solutions to problems like this in one hundred steps or
fewer, regardless of how many total neurons might be involved. From
the time light enters your eye to the time you press the button, a
chain no longer than one hundred neurons could be involved. A digital
computer attempting to solve the same problem would take billions of
steps. One hundred computer instructions are barely enough to move a
single character on the computer's display, let alone do something interesting.
But if I have many millions of neurons working together, isn't that
like a parallel computer? Not really. Brains operate in parallel and
parallel computers operate in parallel, but that's the only thing they
have in common. Parallel computers combine many fast computers to work
on large problems such as computing tomorrow's weather. To predict the
weather you have to compute the physical conditions at many points on
the planet. Each computer can work on a different location at the same
time. But even though there may be hundreds or even thousands of
computers working in parallel, the individual computers still need to
perform billions or trillions of steps to accomplish their task. The
largest conceivable parallel computer can't do anything useful in one
hundred steps, no matter how large or how fast.
Here is an analogy. Suppose I ask you to carry one hundred stone
blocks across a desert. You can carry one stone at a time and it takes
a million steps to cross the desert. You figure this will take a long
time to complete by yourself, so you recruit a hundred workers to do
it in parallel. The task now goes a hundred times faster, but it still
requires a minimum of a million steps to cross the desert. Hiring more
workers— even a thousand workers— wouldn't provide any additional
gain. No matter how many workers you hire, the problem cannot be
solved in less time than it takes to walk a million steps. The same is
true for parallel computers. After a point, adding more processors
doesn't make a difference. A computer, no matter how many processors
it might have and no matter how fast it runs, cannot "compute" the
answer to difficult problems in one hundred steps.
So how can a brain perform difficult tasks in one hundred steps that
the largest parallel computer imaginable can't solve in a million or a
billion steps? The answer is the brain doesn't "compute" the answers
to problems; it retrieves the answers from memory. In essence, the
answers were stored in memory a long time ago. It only takes a few
steps to retrieve something from memory. Slow neurons are not only
fast enough to do this, but they constitute the memory themselves. The
entire cortex is a memory system. It isn't a computer at all.
The point made here is that the computing paradigms (that is, the way the whole thing works) of the brain and the computer are completely different. The computer is a Turing machine, and the brain is something else, possibly a memory system if you think that Jeff Hawkins is right. Whatever it is, the brain is not a Turing machine.
To go back to your question:
Why can't human brains be used to do massive parallel processing in
the same way computers are doing today?
It has to do with the way the human brain works. If you assume that the brain will do any task in a parallel fashion, and the more neurons involved, the better the performance; then in order to maximize your performance you should use your whole brain. 1 task: 100% performance, 2 tasks: 50% performance, 3 tasks: 33% performance, and so on.
But if you add an "attention switching cost" to go from one task to another, then you are better off just focusing on one task where the switching cost is zero.
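To put rough numbers on that idea, here is a toy model (my own sketch with made-up values, not something from the book): naive sharing just divides the work across tasks, but a fixed switching cost per extra task eats into the total.

def useful_fraction(num_tasks, switch_cost=0.15):
    """Total fraction of 'brain time' spent on useful work (toy numbers).

    switch_cost is an invented per-task attention-switching overhead.
    """
    if num_tasks == 1:
        return 1.0                              # single focus, no switching
    return max(0.0, 1.0 - switch_cost * num_tasks)

for n in range(1, 8):
    total = useful_fraction(n)
    print(f"{n} task(s): total useful work {total:.2f}, per task {total / n:.2f}")

With these invented numbers the total useful work drops as soon as you add a second task, which is the point being made: the switching cost, not the parallelism, is what hurts.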
So you can multitask, but it won't be efficient. | <urn:uuid:4e2434c7-881c-47a7-922d-d412827921b3> | 3.625 | 1,378 | Q&A Forum | Science & Tech. | 53.529127 | 705 |
Date: January 1, 1959
Description: With the assumptions that Berthelot's equation of state accounts for molecular size and intermolecular force effects, and that changes in the vibrational heat capacities are given by a Planck term, expressions are developed for analyzing one-dimensional flows of a diatomic gas. The special cases of flow through normal and oblique shocks in free air at sea level are investigated. It is found that up to a Mach number of 10 the pressure ratio across a normal shock differs by less than 6 percent from its ideal-gas value; whereas at Mach numbers above 4 the temperature rise is considerably below, and hence the density rise is well above, that predicted assuming ideal-gas behavior. It is further shown that only the caloric imperfection in air has an appreciable effect on the pressures developed in the shock process considered. The effects of gaseous imperfections on oblique shock flows are studied from the standpoint of their influence on the lift and pressure drag of a flat plate operating at Mach numbers of 10 and 20. The influence is found to be small. (author).
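For reference (not part of the original abstract), Berthelot's equation of state, with the co-volume b standing in for molecular size and the temperature-dependent term a/(T v^2) for intermolecular forces, is commonly written as:

$$\left(p + \frac{a}{T\,v^{2}}\right)\left(v - b\right) = R\,T$$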
Contributing Partner: UNT Libraries Government Documents Department | <urn:uuid:ab30b57c-df72-4818-9b0c-4b1841f9a159> | 3.046875 | 229 | Academic Writing | Science & Tech. | 31.567098 | 706 |
Starting in Python 1.4, Python provides a special make file for building make files for building dynamically-linked extensions and custom interpreters. The make file make file builds a make file that reflects various system variables determined by configure when the Python interpreter was built, so people building modules don't have to resupply these settings. This vastly simplifies the process of building extensions and custom interpreters on Unix systems.
The make file make file is distributed as the file Misc/Makefile.pre.in in the Python source distribution. The first step in building extensions or custom interpreters is to copy this make file to a development directory containing extension module source.
The make file make file, Makefile.pre.in, uses metadata provided in a file named Setup. The format of the Setup file is the same as the Setup (or Setup.in) file provided in the Modules/ directory of the Python source distribution. The Setup file contains variable definitions (lines of the form NAME=value) and module description lines. It can also contain blank lines and comment lines that start with "#".
A module description line includes a module name, source files, options, variable references, and other input files, such as libraries or object files. Consider a simple example:

ExtensionClass ExtensionClass.c

This is the simplest form of a module definition line. It defines a module, ExtensionClass, which has a single source file, ExtensionClass.c.
This slightly more complex example uses an -I option to specify an include directory:

EC=/projects/ExtensionClass
cPersistence cPersistence.c -I$(EC)
This example also illustrates the format for variable references.
For systems that support dynamic linking, the Setup file should begin with the line:

*shared*

to indicate that the modules defined in Setup are to be built as dynamically linked modules. A line containing only "*static*" can be used to indicate that the subsequently listed modules should be statically linked.
Here is a complete Setup file for building a cPersistent module:
# Set-up file to build the cPersistence module.
# Note that the text should begin in the first column.
*shared*
# We need the path to the directory containing the ExtensionClass
# include file.
EC=/projects/ExtensionClass
cPersistence cPersistence.c -I$(EC)
After the Setup file has been created, Makefile.pre.in is run with the "boot" target to create a make file:
make -f Makefile.pre.in boot
This creates the file Makefile. To build the extensions, simply run the created make file:

make
It's not necessary to re-run Makefile.pre.in if the Setup file is changed. The make file automatically rebuilds itself if the Setup file changes. | <urn:uuid:56ffe86e-709b-4226-ac03-dc5ca9d362d9> | 3.5 | 564 | Documentation | Software Dev. | 37.10875 | 707 |
Crustal Deformation Data
The US Geological Survey maintains a variety of fault and volcano monitoring sites around the western United States. Instruments at these sites include strainmeters, tiltmeters, magnetometers, creepmeters, pore pressure monitors, as well as other environmental parameters such as temperature and barometric pressure.
The data are collected and monitored to help understand how, when, and why large earthquakes, fault slip and volcanic activity occur. The measurements provide a near real-time record of the related crustal deformation before, during and after events. The goal is to better understand these natural processes, and use these data to reduce the earthquake and volcanic hazards associated with them.
This web site provides data plots and data downloads for many instruments that are concentrated in areas where large earthquakes are likely to occur in California and areas of known volcanic activity (Long Valley, CA). In particular, the USGS has concentrated instrumentation efforts in the San Francisco Bay Area, near San Juan Bautista and Parkfield, and the Long Valley, CA and Southern California regions.
The plots and data on this site are generated automatically and are not reviewed. They should not be used for engineering, legal, or any other critical applications. | <urn:uuid:0b171463-4522-47dd-a5ae-5d070d0e93c5> | 3.3125 | 249 | Knowledge Article | Science & Tech. | 14.877727 | 708 |
Losing Steam in Our Battle to Predict and Prevent Invasive Species
Invasive species -- plants, animals, and microbes introduced to regions beyond their native range -- carry a global price tag of $1.4 trillion. They are responsible for the loss of natural resources and biodiversity, damages to infrastructure, and an uptick in infectious diseases.
Not all non-native species pose a threat. Scientists around the world have spent the last several decades teasing apart the conditions that set the stage for debilitating invaders, like giant hogweed, zebra mussels, or gray squirrels. A number of hypotheses have emerged to help predict how natural areas will respond to introduced plants, animals, and microbes.
An analysis of 371 invasion studies using six dominant invasion hypotheses has revealed their predictive power is weakening. The paper's authors -- Jonathan Jeschke, Lorena Gómez Aparicio, Sylvia Haider, Tina Heger, Christopher Lortie, Petr Pyšek, and David Strayer -- found empirical support for all six hypotheses declining, with recent studies showing the lowest levels of support. Hypotheses that were too broad or omitted ecosystem interactions fared among the worst, plants proved easier to predict than animals, and, contrary to popular belief, diverse ecosystems were not inherently resistant against invaders. The study was published in the open-access journal NeoBiota.
The paper's authors comment: "The observed decline effect means our confidence in making sound policy and management decisions based on the six analyzed hypotheses is lower today than it was in the past. Scientists were overly optimistic about the predictive power of these hypotheses. Given that invasive species are an expensive and ever growing problem, this is a situation that needs to be addressed."
Similar "decline effects" have been noted in other disciplines, among them pharmacological research, psychology, and animal behavior. The effect has been attributed to publication bias, inadequate sample sizes, and a tendency of early tests of hypotheses to pick study organisms or systems where positive results are expected.
Lead author Jonathan Jeschke, of Technische Universität München, concludes: "The decline effect is both worrying and fascinating. It's a phenomenon that should be investigated across disciplines, as medical and psychological researchers have shown its effects can be strong, and it can distort the predictive power of hypotheses."
The paper's authors offer four solutions to improve current hypotheses in invasion biology:
(1) Existing gaps in empirical tests of hypotheses should be filled. The study revealed crucial gaps in empirical studies, showing that most studies have focused on terrestrial plants but have ignored other organisms and aquatic habitats.
(2) Existing hypotheses should be specified for groups of organisms and habitats.
(3) Interactions of invasive species with their new ecosystems should be regularly considered. The study shows that hypotheses considering such interactions are better supported by empirical evidence than other hypotheses.
(4) Revised hypotheses should be rejected if they do not work. Those hypotheses that still lack empirical support after specification for groups of organisms and habitats (solution 2), consideration of invader-ecosystem interactions (solution 3), or another form of revision should be discarded. Scientists should not waste time and resources to continue working with these hypotheses. Instead, fresh ideas and novel hypotheses are needed to further our understanding of biological invasions -- something that is essential to effective management in today's rapidly changing world. | <urn:uuid:adfc38a0-3a0c-4aae-97c5-b8d26dc4f8a7> | 3.34375 | 688 | Truncated | Science & Tech. | 17.600873 | 709 |
Proceedings of the International Astronomical Union (2005), 2004:47-48. Cambridge University Press
Nowadays, more than one hundred extra-solar planets are known, and about a dozen multi-planetary systems have been discovered. Most of them have been detected by the radial velocity (RV) method. The recovery of orbital parameters from RV data leads to several problems. Usually RV data irregularly cover a short time interval, which is frequently shorter than the orbital period of the most distant planet. Moreover, the observations contain noise due to instabilities of the star, and the distribution of this noise is unknown. A precise determination of the dynamical state of a multi-planetary system is important for understanding its stability and evolution. In most cases observers determine the orbital parameters for multi-planetary systems by just fitting a sum of Keplerian orbits. The parameters obtained in such a way are in most cases the only accessible data about an extra-solar system, because the observers very rarely publish their observations. However, as has already been observed by many authors, the parameters from a multi-Keplerian fit cannot be interpreted as the osculating elements of the actual planetary orbits. Moreover, these parameters can be considered as Keplerian elements of relative, barycentric, or Jacobi orbits. One can find arguments that the interpretation of the parameters from a multi-Keplerian fit as elements of Keplerian orbits in Jacobi coordinates is the most proper one; see [Lee and Peale, 2002; Godziewski et al. 2003].
Our first aim was to determine how badly a multi-Keplerian fit determines the osculating orbits. To this end, we performed several numerical simulations. For a chosen planetary system with two planets we generated synthetic RV observations using the Newtonian three-body problem. Then we fitted the Keplerian model to these observations and compared the obtained Keplerian elements with the true osculating elements of the orbits. Next we changed the semi-major axis and the eccentricity of one planet and repeated all calculations. In this way we obtained maps of the differences between the true and the fitted Keplerian elements (relative, barycentric, and Jacobi) for a given system with two planets.
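As an illustration only (a minimal sketch with invented parameter values, not the authors' code, and with the synthetic data generated from the same Keplerian model rather than from a Newtonian integration), fitting a sum of Keplerian orbits to irregularly sampled RV data might look like this:

import numpy as np
from scipy.optimize import least_squares

def kepler_E(M, e, n_iter=50):
    """Solve Kepler's equation M = E - e*sin(E) by Newton iteration."""
    E = np.array(M, dtype=float)
    for _ in range(n_iter):
        E -= (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
    return E

def rv_one_planet(t, K, P, e, omega, T0):
    """Stellar radial velocity due to one planet on a Keplerian orbit."""
    M = 2.0 * np.pi * (t - T0) / P
    E = kepler_E(np.mod(M, 2.0 * np.pi), e)
    nu = 2.0 * np.arctan2(np.sqrt(1 + e) * np.sin(E / 2),
                          np.sqrt(1 - e) * np.cos(E / 2))
    return K * (np.cos(nu + omega) + e * np.cos(omega))

def rv_model(p, t):
    """Sum of two Keplerian signals plus a constant velocity offset."""
    K1, P1, e1, w1, T1, K2, P2, e2, w2, T2, gamma = p
    return (rv_one_planet(t, K1, P1, e1, w1, T1) +
            rv_one_planet(t, K2, P2, e2, w2, T2) + gamma)

rng = np.random.default_rng(0)
t_obs = np.sort(rng.uniform(0.0, 900.0, 80))              # irregular epochs, days
p_true = [55, 44.3, 0.27, 1.2, 5.0, 45, 221.0, 0.20, 2.9, 40.0, 3.0]  # invented
rv_obs = rv_model(p_true, t_obs) + rng.normal(0.0, 5.0, t_obs.size)   # add noise

p_guess = [50, 45, 0.2, 1.0, 0.0, 40, 220, 0.15, 3.0, 30.0, 0.0]
lower = [0, 1, 0.0, -2 * np.pi, -1000, 0, 1, 0.0, -2 * np.pi, -1000, -50]
upper = [500, 2000, 0.9, 2 * np.pi, 1000, 500, 2000, 0.9, 2 * np.pi, 1000, 50]
fit = least_squares(lambda p: rv_model(p, t_obs) - rv_obs, p_guess,
                    bounds=(lower, upper))
print(fit.x)    # fitted (K, P, e, omega, T0) per planet, plus the offset

In the experiment described above, the synthetic velocities would instead come from a Newtonian three-body integration, and the recovered elements would then be compared with the true osculating elements.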
The conclusions from these experiments are as follows. Even for quite a large separation of the planets (2 AU), multi-Keplerian fits are poor. The errors appear mainly in the positions of the planets in their orbits and can reach 60 degrees or more. The errors in the eccentricities and semi-major axes reach a few percent, but they can be larger for larger planet masses, or when the observations cover only a part of the period of the outer planet. Moreover, the errors are maximal for systems close to a mean-motion resonance. All the above conclusions do not depend on how we interpret the parameters of a Keplerian fit: relative, barycentric, and Jacobi elements are equally poor if we look at the overall results. | <urn:uuid:99b65116-b661-429a-b76e-06694fb16baa> | 2.6875 | 600 | Academic Writing | Science & Tech. | 27.195116 | 710 |
Provided by: freebsd-manpages_6.2-1_all
chooseproc, procrunnable, remrunqueue, setrunqueue - manage the queue of runnable processes
extern struct rq itqueues;
extern struct rq rtqueues;
extern struct rq queues;
extern struct rq idqueues;
struct thread *
choosethread(void);

int
procrunnable(void);

void
remrunqueue(struct thread *td);

void
setrunqueue(struct thread *td);
The run queue consists of four priority queues: itqueues for interrupt
threads, rtqueues for realtime priority processes, queues for time
sharing processes, and idqueues for idle priority processes. Each
priority queue consists of an array of NQS queue header structures. Each
queue header identifies a list of runnable processes of equal priority.
Each queue also has a single word that contains a bit mask identifying
non-empty queues to assist in selecting a process quickly. These are
named itqueuebits, rtqueuebits, queuebits, and idqueuebits. The run
queues are protected by the sched_lock mutex.
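A toy model (my own Python sketch, not the kernel code) of the structure just described: one queue per priority plus a bit mask of non-empty queues, so the most urgent runnable thread can be found quickly. It assumes, as in BSD, that a lower queue index means a more urgent priority, and it ignores locking entirely.

from collections import deque

NQS = 64  # number of priority queues (illustrative value)

class RunQueue:
    def __init__(self):
        self.queues = [deque() for _ in range(NQS)]
        self.queuebits = 0                       # bit i set => queues[i] non-empty

    def setrunqueue(self, priority, thread):
        """Add a runnable thread to the tail of its priority queue."""
        self.queues[priority].append(thread)
        self.queuebits |= 1 << priority

    def remrunqueue(self, priority, thread):
        """Remove a thread from its run queue; raises if it is not queued."""
        self.queues[priority].remove(thread)     # stands in for panic(9)
        if not self.queues[priority]:
            self.queuebits &= ~(1 << priority)

    def choosethread(self):
        """Return the most urgent runnable thread, or None (idle)."""
        if self.queuebits == 0:
            return None
        pri = (self.queuebits & -self.queuebits).bit_length() - 1  # lowest set bit
        thread = self.queues[pri].popleft()
        if not self.queues[pri]:
            self.queuebits &= ~(1 << pri)
        return thread

rq = RunQueue()
rq.setrunqueue(40, "td_interactive")
rq.setrunqueue(120, "td_batch")
print(rq.choosethread())   # "td_interactive": lower index wins in this sketch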
procrunnable() returns zero if there are no runnable processes other than
the idle process. If there is at least one runnable process other than
the idle process, it will return a non-zero value. Note that the
sched_lock mutex does not need to be held when this function is called.
There is a small race window where one CPU may place a process on the run
queue when there are currently no other runnable processes while another
CPU is calling this function. In that case the second CPU will simply
travel through the idle loop one additional time before noticing that
there is a runnable process. This works because idle CPUs are not halted
in SMP systems. If idle CPUs are halted in SMP systems, then this race
condition might have more serious repercussions in the losing case, and
procrunnable() may have to require that the sched_lock mutex be acquired.
choosethread() returns the highest priority runnable thread. If there
are no runnable threads, then the idle thread is returned. This function
is called by cpu_switch() and cpu_throw() to determine which thread to
switch to. choosethread() must be called with the sched_lock mutex held.
setrunqueue() adds the thread td to the tail of the appropriate queue in
the proper priority queue. The thread must be runnable, i.e. p_stat must
be set to SRUN. This function must be called with the sched_lock mutex held.
remrunqueue() removes thread td from its run queue. If td is not on a
run queue, then the kernel will panic(9). This function must be called
with the sched_lock mutex held.
cpu_switch(9), scheduler(9), sleepqueue(9) | <urn:uuid:965a1a82-1964-4e33-9104-d78e49933092> | 2.65625 | 635 | Documentation | Software Dev. | 57.721263 | 711 |
Since you are having this confusion, I think it helps to consider the concepts of zero, infinity and "undefined".
In the most basic sense, division is the opposite of multiplication. Thus, the fact that 2 x 3 = 6 implies that 6 / 3 = 2.
1 x 0 = 0. Applying the above logic, 0 / 0 = 1. However, 2 x 0 = 0, so 0 / 0 must also be 2. In fact, it looks as though 0 / 0 could be any number! This obviously makes no sense - we say that 0 / 0 is "undefined" because there isn't really an answer.
Likewise, 1 / 0 is not really infinity. Infinity isn't actually a number, it's more of a concept. If you think about how division is often described in schools, say, number of sweets shared between number of people, you see the confusion. If I go around some people giving them 0 sweets each, how many people do I need to go around until I have given away my 1 sweet? An infinite number? Kind of, because I can keep going around infinitely. However, I never actually give away that sweet. This is why people say that 1 / 0 "tends to" infinity - we can't really use infinity as a number, we can only imagine what we are getting closer to as we move in the direction of infinity. However, in this case, the number of sweets I have is never changing, so I'm not really getting closer to anywhere. Even this logic doesn't really work.
The long and short of it is that 1 / 0 doesn't really make sense as a calculation. When we do use the notion of infinity we tend to use positive infinity where it doesn't matter purely by convention. However, if you think about it too hard you start to get into philosophy and stuff, like "what actually is infinity?" and "wait, what is a number"?
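(A side note of my own, not from the original answer: this "pick a convention where it doesn't matter" idea is exactly what IEEE-754 floating-point arithmetic does, while exact arithmetic simply refuses.)

import numpy as np

# IEEE-754 floats adopt conventions: 1/0 -> inf, -1/0 -> -inf, 0/0 -> nan.
with np.errstate(divide="ignore", invalid="ignore"):
    print(np.float64(1.0) / np.float64(0.0))    # inf  (a convention, not a number)
    print(np.float64(-1.0) / np.float64(0.0))   # -inf
    print(np.float64(0.0) / np.float64(0.0))    # nan  ("undefined")

# Exact integer arithmetic in Python refuses entirely.
try:
    1 / 0
except ZeroDivisionError as err:
    print("ZeroDivisionError:", err)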
The cases where 1 / 0 does make sense involve different ways of using numbers, so they don't really count. For example, in the trivial ring there is only one number, which works like a 0 (add it to anything and you get that thing back) and like a 1 (multiply it by anything and you get that thing back), and this makes sense because you can only add it to or multiply it by itself to get itself. It's pretty boring actually, but in that case this one number - let's call it x - is both 0 and 1, so 1 / 0 = x / x = x, because everything equals x. As you can see, this is a bit of a cheat, because we don't even have enough numbers to have a notion of 1 / 0 in the way you're thinking of it. | <urn:uuid:f5ecf10f-b9a5-4afd-b6d2-50e952f5505b> | 3 | 565 | Q&A Forum | Science & Tech. | 71.001875 | 712 |
This shape would appear to be a rectangular prism.
The lateral area (area of every side except top and bottom) is given by the formula:
LA = ph (perimeter of the base multiplied by the height)
The surface area is then found by adding the LA to the areas of the Bases (top and bottom).
SA = LA + 2B
In your figure:
LA = 18 times 3 = 54 square cm
SA = 54 + 2(14) = 82 square cm
The volume of a right rectangular prism is given by the formula:
V = LWH (length times width times height)
In your case,
V = 7 X 2 X 3 = 42 cubic cm.
Well, on the off chance you can't understand masters' explanation, the easiest way to get the area is just to say: Total surface area = Sum of the areas of each face.
So you've six faces, all rectangles. The area of a rectangle is obtained by just multiplying the two sides.
So, say, the face at the top is 7x2 = 14.
And the face at the bottom will be the same = 14.
The face in front is 7x3=21
As is the one at the back = 21.
The other face you can see is 3x2 = 6.
And the corresponding one that you can't see =6 also.
So the sum of the six faces is 14+14+21+21+6+6 = 82.
The Surface Area is found by adding the areas of all the faces or more simply, if the base is "l" by "w" and the height is "h", then the surface area is given by:
SA = 2(lw + hl + hw)
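If it helps, here is the same arithmetic as a tiny Python sketch (using the 7 by 2 by 3 box from the question):

def rectangular_prism(l, w, h):
    """Surface area and volume of a right rectangular prism."""
    lateral = 2 * (l + w) * h           # perimeter of the base times the height
    surface = lateral + 2 * (l * w)     # add the top and bottom faces
    volume = l * w * h
    return surface, volume

print(rectangular_prism(7, 2, 3))       # (82, 42), matching the work above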
Another method I already gave you in my earlier post. | <urn:uuid:511ac3d6-f1e2-46a9-9f2e-fc151104aeed> | 3.625 | 385 | Q&A Forum | Science & Tech. | 81.671952 | 713 |
As many know, this is the 150th anniversary of the publication of On the Origin of Species. If I may be so bold, one of the things that distinguishes our thinking about evolution in the last 50 years from the first hundred might be the speed at which natural selection can operate. For a long time, we thought of evolution as taking a long time: millions of years would be needed to see the gradual accumulation of changes. We learned in the past few decades that we can see the effects of selection over the course of a few decades.
There are a few fast-changing situations that should press the fast-forward button on natural selection. Invasions are one. That's why they're invasions, not slow expansions. Boronow and Langkilde look at how the invasion of red fire ants is affecting fence lizards.
The ants (Solenopsis invicta) are nasty little buggers. A dozen will kill a fence lizard in less than a minute. You’d think that would apply some pretty strong selection on the lizards if they have any traits in the population that provide even a little defense against the ants.
To test whether natural selection has started acting on the fence lizards (Sceloporus undulatus), they collected lizards from two locations: one was invaded by the ants 70 years ago, and the other has not been invaded yet. Then, they allowed some angry ants to bite restrained lizards, and measured the animals' performance on several behavioural tasks, like biting, running, and so on. A control group of lizards were handled, but not bitten.
The bottom line?
There’s no effect.
The lizards from the region that had been putting up with ants for seven decades had the same behavioural responses to the ants as lizards from the region with no ants. No differences in the blood responses to venom, either, though the blood was affected by venom.
The authors suggest that the ant venom might have a “tipping point.” Less than a certain dose, and the lizard is fine. More than that dose, and you’ve got a scaly corpse. The range in between “fine” and “dead” could be minuscule, in which case, there may not be a lot of variation for natural selection to work on. Thus, if the lizards can keep the bites under the critical value, they suffer no fitness consequences.
Another issue is that the fence lizards do live with other fire ants, like Solenopsis xyloni. These have weaker venom, and they’re not as numerous as the red fire ants, but it might be that the fence lizards have already been pushed to have defenses against fire ants.
A third possibility is simply that there is no existing variation that gives some members of the population greater resistance than others. Seventy years, which is about 35 generations of lizards, is quite a while, but may not be long enough. Who knows when just the right mutation will give some lucky lizard – and its offspring – a selective advantage.
Boronow, K., & Langkilde, T. (2009). Sublethal effects of invasive fire ant venom on a native lizard Journal of Experimental Zoology Part A: Ecological Genetics and Physiology, 9999A DOI: 10.1002/jez.570
Lizard picture by J.N. Stewart on Flickr, used under a Creative Commons license.
Ant picture by AJC1 on Flickr, used under a Creative Commons license. | <urn:uuid:30101667-01b1-4e3c-895f-4681fab593c3> | 3.109375 | 756 | Personal Blog | Science & Tech. | 54.689741 | 714 |
Modelling Southern Ocean krill population dynamics: biological processes generating fluctuations in the South Georgia ecosystem
Murphy, Eugene J.; Reid, Keith. 2001 Modelling Southern Ocean krill population dynamics: biological processes generating fluctuations in the South Georgia ecosystem. Marine Ecology Progress Series, 217. 175-189. 10.3354/meps217175

Full text not available from this repository.
Variability is a key feature of the pelagic ecosystems of the Southern Ocean and an important aspect of the variation is fluctuation in the abundance of krill Euphausia superba Dana, the major prey item of many of the higher predators. Direct impacts of variability in the large-scale physical environment, such as changes in ocean circulation, have been suggested as the main factor generating the observed fluctuations. So far, however, there has been little quantitative assessment of the importance of krill population dynamics in the observed variation. Here, analyses of a model of krill population development and predator diet data from South Georgia have been used to examine seasonal changes in the population structure of krill. The krill population model was combined with a size-based selection function and used to generate expected length-frequency distributions in the predator diet through a summer season. Comparison of the model solutions with the predator diet data indicates that the model can reproduce the observed pattern of variation and emphasizes that adult population changes are a key aspect of the interannual fluctuations observed during some years. Low krill abundance was associated with reduced representation of the 3+ age group (3 to 4 yr old), whereas when krill were abundant the 3+ age class was the major age group present. The seasonal changes in the population structure in the predator diet involve a complex interaction of relative year class strength, timing of immigration, fluctuations in growth rates and dynamic predator-selective effects. Development of the model to examine the interactive effects of changing krill growth and mortality rates will be a valuable next step. The dominance of the changes in krill population age structure underlines the fact that to understand the variability of the South Georgia ecosystem we must identify the major factors generating variability in population dynamics throughout the Scotia Sea.
|Programmes:||BAS Programmes > Antarctic Science in the Global Context (2000-2005) > Dynamics and Management of Ocean Ecosystems|
|Additional Keywords:||Ecosystem, Krill, Ocean, Model, Population dynamics, Predators, Diet data, Interannual, Variability, Allochthonous, Southern Ocean|
|Date made live:||24 Oct 2012 12:49|
| <urn:uuid:dcaf410c-6541-4d4e-8f5a-7faf564bd31d> | 2.6875 | 526 | Academic Writing | Science & Tech. | 14.833083 | 715 |
Press Release 09-118
The Abyss: Deepest Part of the Oceans No Longer Hidden
Nereus is first undersea vehicle to enable routine scientific investigation of ocean depths worldwide
June 2, 2009
The Abyss is a dark, deep place, but it's no longer hidden. At least when Nereus is on the scene. Nereus is a new type of deep-sea robotic vehicle, called a hybrid remotely operated vehicle (HROV).
Nereus dove to 10,902 meters (6.8 miles) on May 31, 2009, in the Challenger Deep in the Mariana Trench in the western Pacific Ocean, reports a team of engineers and scientists aboard the research vessel Kilo Moana.
The dive makes Nereus the world's deepest-diving vehicle, and the first vehicle to explore the Mariana Trench since 1998.
"Much of the ocean's depths remain unexplored," said Julie Morris, director of the National Science Foundation (NSF)'s Division of Ocean Sciences, which funded the project. "Ocean scientists now have a unique tool to gather images, data and samples from everywhere in the oceans, rather than those parts shallower than 6,500 meters (4 miles). With its innovative technology, Nereus allows us to study and understand previously inaccessible ocean regions."
Nereus's unique hybrid-vehicle design makes it ideally suited to explore the ocean's last frontiers, marine scientists say. The unmanned vehicle is remotely operated by pilots aboard a surface ship via a lightweight, micro-thin, fiber-optic tether that allows Nereus to dive deep and be highly maneuverable. Nereus, however, can also be switched into a free-swimming, autonomous vehicle mode.
"Reaching such extreme depths is the pinnacle of technical challenges," said Andy Bowen, project manager and principal developer of Nereus at the Woods Hole Oceanographic Institution (WHOI). "The team is pleased that Nereus has been successful in reaching the very bottom of the ocean to return imagery and samples from such a hostile world. With a robot like Nereus we can now explore anywhere in the ocean. The trenches are virtually unexplored, and Nereus will enable new discoveries there. Nereus marks the start of a new era in ocean exploration."
Nereus (rhymes with "serious") is a mythical Greek god with a fish-tail and a man's torso. The vehicle was named in a nationwide contest open to high school and college students.
The Mariana Trench forms the boundary between two tectonic plates, where the Pacific Plate is subducted beneath the small Mariana Plate. It is part of the Pacific Ring of Fire, a 40,000-kilometer (25,000-mile) area where most of the world's volcanic eruptions and earthquakes occur. At 11,000 meters, its depth is about the height a commercial airliner flies.
To reach the trench, Nereus dove nearly twice as deep as research submarines are capable of, and had to withstand pressures 1,000 times that at Earth's surface--crushing forces similar to those on the surface of Venus, according to Dana Yoerger of WHOI and Louis Whitcomb of Johns Hopkins University, who developed the vehicle's navigation and control system and conducted successively deeper dives to test Nereus.
"We couldn't be prouder of the stunning accomplishments of this dedicated and talented team," said Susan Avery, president and director of WHOI. "With this engineering trial successfully behind us, we're eager for Nereus to become widely used to explore the most inaccessible reaches of the ocean. With no part of the deep seafloor beyond our reach, it's exciting to think of the discoveries that await."
Only two other vehicles have succeeded in reaching the Mariana Trench: the U.S. Navy-built bathyscaphe Trieste, which carried Jacques Piccard and Don Walsh there in 1960, and the Japanese-built robot Kaiko, which made three unmanned expeditions to the trench between 1995 and 1998.
Trieste was retired in 1966 and Kaiko was lost at sea in 2003.
The Nereus engineering team believed that a tethered robot using traditional technologies would be prohibitively expensive to build and operate. So they used unique technologies and innovative methods to strike a balance between size, weight, materials cost and functionality.
Building on previous experience developing tethered robots and autonomous underwater vehicles (AUVs), the team fused the two approaches together to develop a hybrid vehicle that could fly like an aircraft to survey and map broad areas, then be converted quickly into a remotely operated vehicle (ROV) that can hover like a helicopter near the seafloor to conduct experiments or to collect biological or rock samples.
The tethering system presented one of the greatest challenges in developing a cost-effective ROV capable of reaching these depths. Traditional robotic systems use a steel-reinforced cable made of copper to power the vehicle, and optical fibers to enable information to be passed between the ship and the vehicle. If such a cable were used to reach the Mariana Trench, it would snap under its own weight before it reached that depth.
To solve this challenge, the Nereus team adapted fiber-optic technology developed by the Navy's Space and Naval Warfare Systems Center Pacific to carry real-time video and other data between the Nereus and the surface crew. Similar in diameter to a human hair and with a breaking strength of only eight pounds, the tether is composed of glass fiber with a very thin protective jacket of plastic.
Nereus brings approximately 40 kilometers (25 miles) of cable in two canisters the size of large coffee cans that spool out the fiber as needed. By using this very slender tether instead of a large cable, the team was able to decrease the size, weight, complexity and cost of the vehicle.
Another weight-saving advance of the vehicle is its use of ceramic spheres for flotation, rather than the much heavier traditional syntactic foam used on vehicles like the submersible Alvin or the ROV Jason.
Each of Nereus's two hulls contains between 700 and 800 of the 9-centimeter (3.5-inch) hollow spheres that are precisely designed and fabricated to withstand crushing pressures.
WHOI engineers also developed a hydraulically operated, lightweight robotic manipulator arm that could operate under intense pressure.
With its tandem hull design, Nereus weighs nearly 3 tons in air and is about 4.25 meters (14 feet) long and approximately 2.3 meters (nearly 8 feet) wide. It is powered by more than 4,000 lithium-ion batteries. They are similar to those used in laptop computers and cell phones, but have been carefully tested to be used safely and reliably under the intense pressure of the depths.
"These and future discoveries by Nereus will be the result of its versatility and agility--it's like no other deep submergence vehicle," said Tim Shank, a biologist at WHOI who is aboard the expedition. "It allows vast areas to be explored with great effectiveness. Our true achievement is not just getting to the deepest point in the oceans, but unleashing a capability that now enables deep exploration, unencumbered by a heavy tether and surface ship, to investigate some of the richest geological and biological systems on Earth."
On May 31, the team took the vehicle to 10,902 meters, the deepest dive to date. Testing will continue over the next few days and the team will return to port on June 5. On this initial engineering cruise, Nereus's AUV mode was not tested.
On its dive to the Challenger Deep, Nereus spent more than 10 hours on the bottom, sending live video back to the ship through its fiber-optic tether and collecting biological and geological samples with its manipulator arm, and placed a marker on the seafloor signed by those onboard the surface ship.
"The samples collected by the vehicle include sediment from the tectonic plates that meet at the trench and, for the first time, rocks from deep exposures of the Earth's crust close to mantle depths south of the Challenger Deep," said geologist Patty Fryer of the University of Hawaii, also aboard the expedition. We will know the full story once shore-based analyses are completed back in the laboratory this summer. We can integrate them with the new mapping data to tell a story of plate collision in greater detail than ever before accomplished in the world's oceans."
Additional funds for Nereus were provided by the Office of Naval Research, the National Oceanic and Atmospheric Administration, the Russell Family Foundation and WHOI.
Cheryl Dybas, NSF (703) 292-7734 email@example.com
Stephanie Murphy, WHOI (508) 289-3340 firstname.lastname@example.org
Nereus Slideshow: http://www.whoi.com/page.do?pid=10076&tid=201&cid=33893&ct=362#
Nereus Animation: http://www.whoi.com/page.do?pid=10076&tid=1061&cid=48563&cl=33973
The National Science Foundation (NSF) is an independent federal agency that supports fundamental research and education across all fields of science and engineering. In fiscal year (FY) 2012, its budget was $7.0 billion. NSF funds reach all 50 states through grants to nearly 2,000 colleges, universities and other institutions. Each year, NSF receives about 50,000 competitive requests for funding, and makes about 11,500 new funding awards. NSF also awards about $593 million in professional and service contracts yearly.
Get News Updates by Email
Useful NSF Web Sites:
NSF Home Page: http://www.nsf.gov
NSF News: http://www.nsf.gov/news/
For the News Media: http://www.nsf.gov/news/newsroom.jsp
Science and Engineering Statistics: http://www.nsf.gov/statistics/
Awards Searches: http://www.nsf.gov/awardsearch/ | <urn:uuid:0b399bfa-1b64-4462-8f87-da1687cad0f8> | 3.5625 | 2,123 | News (Org.) | Science & Tech. | 48.383328 | 716 |
Sparks - St. Elmo's Fire
Instructor/speaker: Prof. Walter Lewin
Last time I mentioned to you that charge resides at the surface of solid conductors but that it's not uniformly distributed.
Perhaps you remember that, unless it happens to be a sphere.
And I want to pursue that today.
If I had a solid conductor which say had this shape and I'm going to convince you today that right here, the surface charge density will be higher than there.
Because the curvature is stronger than it is here.
And the way I want to approach that is as follows.
Suppose I have here a solid conductor A which has radius R of A and very very far away, maybe tens of meters away, I have a solid conductor B with radius R of B and they are connected through a conducting wire.
If they are connected through a conducting wire, then it's equipotential.
They all have the same potential.
I'm going to charge them up until I get a charge distribution QA here and I get QB there.
The potential of A is about the same that it would be if B were not there.
Because B is so far away that if I come with some charge from infinity in my pocket that the work that I have to do to reach A per unit charge is independent of whether B is there or not, because B is far away, tens of meters, if you can make it a mile if you want to.
And so the potential of A is then the charge on A divided by 4 pi epsilon 0 the radius of A.
But since it is an equipotential because it's all conducting, this must be also the potential of the sphere B, and that is the charge on B divided by 4 pi epsilon 0 R of B.
And so you see immediately that the Q, the charge on B, divided by the radius of B, is the charge on A divided by the radius on A.
And if the radius of B were for instance 5 times larger than the radius of A, there would be 5 times more charge on B than there would be on A.
But if B has a 5 times larger radius, then its surface area is 25 times larger and since surface charge density, sigma, is the charge on a sphere divided by the surface area of the sphere, it is now clear that if the radius of B is 5 times larger than A, it's true that the charge on B is 5 times the charge on A, but the surface charge density on B is now only one-fifth of the surface charge density of A because its area is 25 times larger and so you have this -- the highest surface charge density at A than you have at B.
5 times higher surface charge density here than there.
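Written out symbolically, the argument so far (a compact restatement, not part of the spoken lecture) is:

$$V_A = \frac{Q_A}{4\pi\varepsilon_0 R_A} = V_B = \frac{Q_B}{4\pi\varepsilon_0 R_B} \;\Rightarrow\; \frac{Q_A}{R_A} = \frac{Q_B}{R_B}, \qquad \sigma = \frac{Q}{4\pi R^2} \;\Rightarrow\; \frac{\sigma_A}{\sigma_B} = \frac{R_B}{R_A}$$

so with R_B = 5 R_A the surface charge density on A is 5 times that on B.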
And I hope that convinces you that if we have a solid conductor like this, even though it's not ideal as we have here with these two spheres far apart, that the surface charge density here will be larger than there because it has a smaller radius.
It's basically the same idea.
And so you expect the highest surface charge density where the curvature is the highest, smallest radius, and that means that also the electric field will be stronger there.
That follows immediately from Gauss's law.
If this is the surface of a conductor, any conductor, a solid conductor, where the E field is 0 inside of the conductor, and there is surface charge here, what I'm going to do is I'm going to make a Gaussian pillbox, this surface is parallel to the conductor, I go in the conductor, and this now is my Gaussian surface, let this area be capital A, and let's assume that it is positive charge so that the electric field lines come out of the surface like so, perpendicular to the surface.
Always perpendicular to equipotential, so now if I apply Gauss's law which tells me that the surface integral of the electric flux throughout this whole surface, well, there's only flux coming out of this surface here, I can bring that surface as close to the surface as I want to.
I can almost make it touch the conductor.
So everything comes out only through this surface, and so what comes out is the surface area A times the electric field E.
The A and E are in the same direction, because remember E is perpendicular to the surface of the equipotentials.
And so this is all there is for the surface integral, and that is all the charge inside, well the charge inside is of course the surface charge density times the area A, divided by epsilon 0, this is Gauss's law.
And so you find immediately that the electric field is sigma divided by epsilon 0.
So whenever you have a conductor if you know the local surface charge density you always know the local electric field.
And since the surface charge density is going to be the highest here, even though the whole thing is an equipotential, the electric field will also be higher here than it will be there.
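In compact form (again a restatement of the spoken derivation), Gauss's law applied to the pillbox with face area A gives:

$$\oint \vec{E}\cdot d\vec{A} = E\,A = \frac{\sigma A}{\varepsilon_0} \;\Rightarrow\; E = \frac{\sigma}{\varepsilon_0}$$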
I can demonstrate this to you in a very simple way.
I have here a cooking pan and the cooking pan, I used to boil lobsters in there, it's a large pan.
The cooking pan I'm going to charge up and the cooking pan here has a radius, whatever it is, maybe 20 centimeters, but look here at the handle, how very small this radius is, so you could put charge on there and I'm going to convince you that I can scoop off more charge here where the radius is small than I can scoop off here.
I have here a small flat spoon and I'm going to put the spoon here on the surface here and on the surface there and we're going to see from where we can scoop off the most charge.
Still charged from the previous lecture.
So here, we see the electroscope that we have seen before.
I'm going to charge this cooking pan with my favorite technique which is the electrophorus.
So we have the cat fur and we have the glass plate.
I'm going to rub this first with the cat fur, put it on, put my finger on, get a little shock, charge up the pan, put my finger on, get another shock, charge up the pan, and another one, charge up the pan, make sure that I get enough charge on there, rub the glass again, put it on top, put my finger on, charge, once more, and once more.
Let's assume we have enough charge on there now.
Here is my little spoon.
I touch here the outside here of the can -- of the pan.
And go to the electroscope and you see a little charge.
It's very clear.
What I want to show you now it's very qualitative is that when I touch here the handle, it's a very small radius, that I can take off more charge.
There we go.
That's all I wanted to show you.
So you've seen now in front of your own eyes for the first time that even though this is a conductor that means that it is an equipotential, that the surface charge density right -- right here is higher than the surface charge density here.
Only if it is a sphere of course for circle symmetry reasons will the charge be uniformly distributed.
If the electric field becomes too high we get what we call electric breakdown.
We get a discharge into the air.
And the reason for that is actually quite simple.
If I have an electron here and this is an electric field, the electron will start to accelerate in this direction.
The electron will collide with nitrogen and oxygen molecules in the air and if the electron has enough kinetic energy to ionize that molecule then one electron will become two electrons.
The original electron plus the electron from the ion.
And if these now start to accelerate in this electric field, and if they collide with the molecules, and if they make an ion, then each one will become two electrons, and so you get an avalanche.
And this avalanche is an electric breakdown and you get a spark.
When the ions that are formed become neutral again they produce light and that's what you see.
That's the light that you see in the spark.
And so sparks will occur typically at the -- at sharp points -- at areas where the curvature is strong, whereby the radius is very small, that's where the electric fields are the highest.
How strong should the electric field be?
Well, we can make a back of the envelope calculation.
If you take air of 1 atmosphere, dry air, at room temperature, then the -- the electron on average, on average, will have to travel about 1 micron, which is 10 to the -6 meters, between the collisions with the molecules, it's just a given.
Sometimes a little more, sometimes a little less.
Because it's a random process of course.
To ionize nitrogen, to ionize oxygen, takes energy.
To ionize an oxygen molecule takes twelve-and-a-half electron volts.
And to ionize nitrogen takes about 15 electron volts.
What is an electron volt?
Well, an electron volt is a teeny weeny little amount of energy.
It's 1.6 times 10 to the -19 joules.
Electron volt is actually a very nice unit of energy.
Because once you have an electron and it moves over a potential difference of one volt, it gains in kinetic energy, that's the definition of an electron volt, it gains 1 electron volt.
It's the charge of the electron, which is 1.6 times 10 to the -19 coulombs, multiplied by 1 volt.
And that gives you then the energy, 1 electron volt.
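As a quick check of that conversion (the 1.6 times 10 to the -19 coulombs is the electron charge quoted above):

    e = 1.6e-19             # electron charge in coulombs

    one_eV = e * 1.0        # falling through 1 volt: 1 eV = 1.6e-19 joules
    print(one_eV)           # 1.6e-19 J
    print(12.5 * one_eV)    # ionizing O2, about 2.0e-18 J
    print(15.0 * one_eV)    # ionizing N2, about 2.4e-18 J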
And so what it means then -- let's assume that this number is 10 electron volts.
We only want a back of the envelope calculation.
So we want the electron to move over a potential difference delta V which is roughly 10 volts and we want it to do that over a distance delta X which is 10 to the -6 meters, that's your 1 micron.
And if that happens you'll get this enough kinetic energy in the electron to cause an ion.
So what electric field is required for that, that is delta V, the potential difference, divided by the delta X, so that is 10 divided by 10 to the -6, so that's about 10 to the 7 volts per meter.
That's a very strong electric field.
In reality when we measure the electric fields near breakdown, it is more like 3 million volts per meter.
But it's still very close.
This was only a back of the envelope calculation.
So very roughly at 1 atmosphere air, room temperature, when the air is dry, we get electric breakdown at about 3 million volts per meter.
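The whole estimate fits in a few lines of Python; the rounded 10 electron volts and the 1 micron mean free path are the values used above.

    ionization_volts = 10.0     # ~10 eV per ionization, so the electron must fall through ~10 V
    mean_free_path = 1.0e-6     # ~1 micron between collisions at 1 atmosphere, dry air

    E_required = ionization_volts / mean_free_path   # delta V / delta x
    print(E_required)           # 1e7 V/m; the measured breakdown field is about 3e6 V/m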
When the ions neutralize you see light, that's why sparks can be seen.
They heat the air, they produce a little pressure wave, so you can also hear noise.
If you had two parallel plates and you would bring those plates closely together and suppose they had a potential difference of 300 volts, then you would reach an electric field of 3 million volts per meter when the distance D is about one tenth of a millimeter.
So that's when you expect spontaneous discharge between these two plates.
In practice however it probably will happen when the plates are further apart than one tenth of a millimeter.
And the reason for that is that there is no such thing as perfect plates.
The plates have imperfections.
That means there are always areas on the plate which are not flat, which are a little bit like what you see there, small radius, and that's of course where the electric field then will be larger and that's where the discharge will occur first.
However, if you touch the doorknob and you get a spark, you feel a spark, and you look at the spark and you see that when you're 3 millimeters away from the doorknob that the spark develops, you can be pretty sure that the potential difference between you and the door was of the order of 10000 volts, several thousand volts, at least.
Because over 3 millimeters it requires 10000 volts to get the 3 million volts per meter.
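Turning a spark length into a voltage, or a voltage into a breakdown gap, is the same one-line arithmetic; the 3 millimeter doorknob spark and the 300 volt plates are the examples just discussed.

    E_breakdown = 3.0e6                  # V/m, dry air at 1 atmosphere

    spark_length = 3.0e-3                # meters, the doorknob spark
    print(E_breakdown * spark_length)    # ~9000 V, i.e. of the order of 10000 volts

    V_plates = 300.0                     # volts between the parallel plates
    print(V_plates / E_breakdown)        # 1e-4 m, about one tenth of a millimeter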
When you comb your hair or when you take your shirt off you get little sparks, you can hear them and if it's dark you can see them, and you can be sure that at the sharp ends of this hair, of the fabric, that you have developed electric fields of the order of 3 million volts per meter.
And then you get the automatic breakdown.
Now of course high voltage alone doesn't necessarily kill you.
What -- what -- what matters is not so much the voltage to get killed but it's the current that goes through you.
And current is charge per unit time.
And so in SI units it would be coulombs per second.
For which we write a capital A which stands for Ampere, the man who did a tremendous amount of research in this area, a Frenchman.
And so if you touch the doorknob the instantaneous current may actually be quite high.
It may be an ampere even, but it may only last for 1 millisecond.
And so that's not going to kill you.
We all know that when you comb your hair that you don't die and you also know that when you take your shirt off even though you may hear the sparks that that's not lethal.
So maybe in a future lecture we can discuss in some more details what it does take to actually execute someone electrically which is very unpleasant but nevertheless we would have to evaluate how long the current should last, how strong the current should be and then also during which parts of the body the current would cause lethal reactions.
So I want to be a little bit more quantitative now uh and deepen our knowledge of the Van de Graaff.
Slowly we're going to understand how the Van de Graaff works.
And today I want to calculate with you how much charge we can put on the Van de Graaff and what the maximum potential is at the surface.
If we charge up the Van de Graaff, with charge Q, then the potential of the surface is an equipotential, is Q divided by 4 pi epsilon 0 R.
And the electric field right here at the surface would be Q divided by 4 pi epsilon 0 R squared.
So in this case of spherical symmetry we have that the potential V equals E times R.
But we know that E cannot exceed 3 million volts per meter.
And so that gives you now a limit on the potential that we can give the Van de Graaff.
So if you substitute in here 3 million volts per meter you can calculate what potential you can maximally reach for a given sphere with a given radius.
And if we here have the radius and we here have the voltage, then if the radius of the sphere were 3 millimeters then you could not exceed a voltage of 10 kilovolts.
If you did you would get this automatic electric breakdown.
You would get a spark.
If you have a sphere of 3 centimeters that would be 100 kilovolts and our Van de Graaff, which has a radius of 30 centimeters, would therefore be 1 million volts.
And you could not exceed that.
And in practice in fact this one doesn't even make it to 1 million volts.
The sphere is not perfect.
There are imperfections of the sphere.
There are areas which have so-to-speak sharp points and so we won't make it to 1 million volts.
We get a breakdown maybe at a few hundred thousand, maybe 300000 volts.
You can now also calculate what the maximum charge is on the Van de Graaff.
Because if the maximum potential is 300000 volts, you know the radius is .3 meters, so you can calculate now what the maximum charge is that you can put on the Van de Graaff using that equation, which will give you 10 microcoulombs.
And so the maximum potential for our Van de Graaff is of the order of 300000 volts.
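The Van de Graaff numbers can be collected in one small sketch; the 30 centimeter radius and the roughly 300 kilovolt practical limit are the values quoted above.

    import math

    eps0 = 8.854e-12
    E_breakdown = 3.0e6        # V/m
    R = 0.30                   # radius of our Van de Graaff in meters

    V_ideal = E_breakdown * R                      # limit for a perfect sphere
    print(V_ideal)                                 # 9e5 V, close to a million volts

    V_practical = 3.0e5                            # imperfections lower it to roughly 300 kV
    Q_max = 4 * math.pi * eps0 * R * V_practical   # from V = Q / (4 pi eps0 R)
    print(Q_max)                                   # ~1e-5 C, about 10 microcoulombs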
So this gives you now a feeling, a quantitative feeling, for numbers, for what the -- can I put this down?
So that gives you an idea of what our Van de Graaff can do, and later we will understand how the charge gets there.
But at least you have some feeling now for potentials, and for the charges that are involved.
If here's my Van de Graaff and I approach the Van de Graaff with a sphere which is connected to the earth and if this Van de Graaff had positive charge on it then the sphere will become negatively charged through induction and so you get field lines which go from the Van de Graaff to this object, always perpendicular to the equipotentials, so they go like this, and so the electric field here will probably be the strongest, and so the spark will then develop between this sphere and the Van de Graaff provided that you were close enough.
So that you do achieve an electric field close to this sphere of about 3 million volts per meter.
And I will show you that later, you will see more sparks today than you've ever seen before in your life, but I want you to appreciate a little bit more about the sparks about lightning before uh I demonstrate that.
So you get a little bit more out of it.
If I approach the Van de Graaff not with the sphere but I would walk to the Van de Graaff being very courageous like this, I'm also a pretty good conductor, I'm also connected with the earth, then the chances are that the spark would develop first between my nose and the Van de Graaff, because that is the smallest curve, the sharpest curvature, the smallest radius, or certainly my head, would be a good candidate for being hit first.
If I approach the Van de Graaff like this with my hand stretched, then chances are of course that the sparks will first develop between my fingertips.
Because it's a very small radius and they're very close to the Van de Graaff, and so that's where the discharge will occur.
So before we will enjoy some of this, you will enjoy it, I will enjoy it less, um I want to talk a little bit about lightning with you first.
Because what you're going to see in a way is a form of lightning.
There are 400000 thunderstorms every day on average on earth.
There are about 100 lightning flashes every second.
The top of a thundercloud becomes positive and the bottom becomes negative.
The physics of that is not so easy, and probably incomplete, and I will not go into the details of the physics, but it does have to do with the flow of water drops.
They become elongated, they can become charged because of friction, and they can break off, and they can transport charge.
I will simply give you some facts.
And so I will accept the fact that the cloud is going to be charged.
This is the cloud.
Positive at the top, negative at the bottom.
And here is the earth.
Because of induction, the earth of course will therefore become positively charged here, and so we're going to see field lines, electric field lines, which go from the earth to the cloud, always perpendicular to the equipotentials, something like this.
I'll give you some dimensions, uh this may be something like 5 kilometers, this vertical distance D is about 1 kilometer.
These are typical numbers, of course, it can vary enormously from thunderstorm to thunderstorm.
And this height is something typically like 10 kilometers.
And this allows us now to make some very interesting calculations to get some feeling for the potential difference between the cloud and the earth.
That's the first thing we can do.
If we make the simplifying assumption that the electric field is more or less constant here, it's like having two parallel plates, where the electric field is constant between them, then the potential difference delta V between the bottom of the cloud and the earth, is simply the electric field times the distance D.
So this becomes E times D.
But if the breakdown occurs at 3 million volts per meter -- by the way that's dry air, when it -- when there is a thunderstorm it's probably not so dry, but let's take the 3 million volts per meter, so we get 3 times 10 to the 6, that is for E, and the distance between the cloud and the earth let's take 1 kilometer.
So that's 10 to the 3rd meters, so we get of the order of 3 billion volts between the earth and the clouds.
And the values that are typically measured are several hundred million to 1 billion volts, so it is not all that different.
You expect that the potential is probably less than what we have calculated because clearly uh these are not flat surfaces, there are trees, here on the ground, there are buildings on the ground, which are like sharp points, where the electric field will be locally higher, and so you will get a discharge at these sharp points first.
And that means the potential difference between the cloud and the earth could then be less than the 3 billion that we have calculated here.
It's only a back of the envelope calculation.
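In numbers, with the assumed 1 kilometer cloud height and the dry-air breakdown field:

    E_breakdown = 3.0e6     # V/m (dry-air value; storm air is wetter, so this is rough)
    cloud_height = 1.0e3    # meters from the cloud base to the ground

    print(E_breakdown * cloud_height)   # 3e9 V; measured values are several hundred million to ~1e9 V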
The details of the physics of the discharge are very complicated.
But I want to share with you some facts without giving detailed explanations.
The start of the lightning begins when electrons begin to flow from the cloud to the earth.
They form a funnel, which is about 1 to 10 meters in diameter and we call that the step leader.
The step leader moves about 100 miles per second and so it comes down in about 5 milliseconds.
5 milliseconds from here to here and it takes about half a coulomb to the earth.
Half a coulomb, for about 5 milliseconds, that means the current is about 100 amperes.
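Both the quoted descent speed and the 100 ampere current follow from the same few numbers -- 1 kilometer of height, 5 milliseconds, half a coulomb:

    descent = 1.0e3           # meters, cloud base to ground
    t = 5.0e-3                # seconds for the step leader to come down
    charge = 0.5              # coulombs carried to the earth

    speed = descent / t
    print(speed)              # 2e5 m/s
    print(speed / 1609.34)    # ~124 miles per second, the "about 100 miles per second" above

    print(charge / t)         # 100 A average current in the step leader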
The step leader creates a channel of ionized air, full of ions and full of electrons, which is an extremely good conductor.
And with -- when this step leader reaches the ground there is this highly conductive channel and the electrons can now very quickly flow from this channel to the ground.
And that starts first right here at the surface of the earth.
That's where the electrons will first go to the earth.
And then successively electrons which are higher up in the channel will make it down to the earth.
And so you're going to see electrons going through the channel to the earth but first the electrons are closer to the earth than the electrons farther away and then even farther away.
And this is actually where most of the action occurs.
The current is now enormously high, 10000 to some 100000 amperes, and you heat the air, get a tremendous amount of light, the ions recombine and you get pressure, heat produces pressure, and there comes your thunder.
And so most of the action is not in the step leader but is in the second phenomenon, which we call the return stroke.
Which is from the earth to the cloud.
And the speed of that return stroke is about 10 to 20 percent of the speed of light.
During the return stroke about 5 coulombs is exchanged between the cloud and the earth, and 5 coulombs is a sizable fraction of the total charge that was on the cloud to start with.
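A rough consistency check on the return-stroke numbers, assuming the same roughly 1 kilometer channel; the 10 to 20 percent of the speed of light and the 5 coulombs are the figures just given.

    c = 3.0e8                          # speed of light in m/s
    channel = 1.0e3                    # meters, assumed channel length
    charge = 5.0                       # coulombs exchanged per return stroke

    transit = channel / (0.15 * c)     # time for the stroke front to run up the channel
    print(transit)                     # ~2e-5 s, a few tens of microseconds

    current = 1.0e5                    # amperes, upper end of the 10000 to 100000 A quoted earlier
    print(charge / current)            # 5e-5 s: 5 C at 100000 A takes ~50 microseconds,
                                       # the same order as the transit time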
After a return stroke, maybe 20 milliseconds later, this whole process can start again.
You can get a step leader.
And you can get the return stroke.
However, the step leader will now follow exactly the same path that was made before because that's where the air is ionized so that's where the conductivity is very high, so that's the easiest way to go.
And this process can recur 5, 10, maybe 15 times.
So what appears to you as one lightning bolt in fact could be 10 flashes back and forth between the cloud and the earth.
And the -- the real light is not in the step leader, that's very little light, but the real light is in the return strokes.
So 10 return strokes, which may be 20, 30, 40 milliseconds apart, appear to you and to me only as one flash, which would take place maybe in as little as a tenth of a second.
And during these 5 or 10 return strokes you exchange between the cloud and the earth maybe a total of 25 to 50 coulombs, and that of course will lower the potential difference.
And if the potential difference becomes too low then the process stops.
You have to wait now for the clouds to charge up again.
And then lightning will strike again.
And that can take anywhere from maybe 4, 5, 10, 20 seconds.
And then you get another lightning bolt.
The study of these -- of this process, of the step leader and of the return stroke, can be done with a camera, which is called the Boys camera.
Let me first explain to you in detail -- in principle how it works.
If this is the area on the film that is exposed by your lens suppose that I move the film at a very high speed to the left and suppose the step leader comes down and it sees some light from the step leader, then I may see on the film this.
And from here to here would then be the 5 milliseconds which it takes the step leader to go from the cloud to the earth.
Now the return stroke takes place with way higher speed and so I see a tremendous amount of light because there's a lot of light in the return stroke.
And of course this is very steep.
Because it goes 100 times faster up than the step leader came down.
And so you can measure these times and so you can get the speed of the return stroke.
And then later in time, maybe 30, 40 milliseconds later, on the film, you may see another return stroke.
And you may see another one.
And so you can see then how long the time was between the return strokes and you can also calculate their speeds.
With a real camera it's not really the film that is moving but it is the -- the lens that is moving, and the way these pictures are taken, and I will show you one, is if this is photographic plate, then it is the camera that moves over the plate with a um very high speed, about 3000 revolutions per minute, and so you would get these -- this information then not horizontally but you get it spread out over the film.
But you get the same information, you can calculate speeds and times.
During the past decade, new forms of lightning have been discovered which occur way above the clouds.
Way higher up.
Red colors have been seen.
Red sprites they are called.
And also blue jets.
The light is very faint and it occurs only for a very short amount of time.
It's very difficult to photograph.
I have not been able to get good slides for today.
However, I did see some pictures on the Web.
And when you log into the Web, when you visit the 8.02 Web page, which you should, then I give you directions on how to access slides and pictures of the red sprites and of the blue jets.
The physics of that is not very well understood.
It's being researched very heavily.
But it's way above the clouds.
There are also other forms of electric breakdown, of discharge.
They are different in the sense that it's not an individual spark.
But there is a continuous flow of -- of -- of charge.
It occurs always from very sharp points.
So there is a continuous current actually going on.
And some of that you may have seen but you may not remember when we used a carbon arc here.
We had two carbon arcs, two carbon rods, and we had a potential difference between them and we got a discharge between them which caused a tremendous amount of light, which we used for projection purposes.
So a carbon arc discharge is such a form of discharge whereby you have a continuous current.
It's not just sparks.
If you take grass or trees or brushes for that matter, with thunderstorm activity, they can go into this discharge at their sharp tips.
And we call this brush discharge, we call it St. Elmo's fire, it's all the same thing, it's also called corona discharge.
I normally call it corona discharge.
It produces light because the ions when they neutralize produce light.
Heat makes sound, pressure, and so you can hear this cracking noise of the corona discharges.
An airplane that flies or a car that drives, there is friction with the air, and any form of friction can charge things up.
And so it's not uncommon at night that you can see this corona discharge from the tip of the wings of an airplane.
I've also seen it from cars.
Corona discharge from cars.
Which charge themselves up simply by driving through the air.
The air flow would charge them up.
You can hear it, cracking, and you can see it sometimes if it's dark enough, you see some light.
In general it's bluish light.
Something completely on the side, going back to the lightning bolts, lightning bolts, the discharge, the moving electrons, can cause radio waves.
And these radio waves you can receive on your car radio.
And all of you have experienced this.
Driving around, lightning very far away, you can hear it on the radio.
So that's telling you that there is lightning going on somewhere.
After a thunderstorm, something that many of you may not have experienced because in the cities there is always -- always exhaust from cars, that spoils everything, but when you're out in the country after a thunderstorm there's a very special smell in the air.
I love it.
And that's ozone.
O2, O2 in lightning becomes O3.
And O3 has a wonderful smell, and you can really smell that.
It's very typical.
I hope that most of you sooner or later in life will have that experience.
Go to the country after a thunderstorm and you can really smell this ozone.
Let's now look at some slides.
The first slide that you will see is one very classic slide made by Gary Ladd, at Kitt Peak Observatory in Arizona, uh what I like about this is that uh these are the observatories, the telescopes, in the domes, and of course when you're an astronomer, this is the kind of weather that you can do without.
But nevertheless it happens.
Uh you see here return strokes, the light is definitely due to the return strokes, it's very bright.
These are step leaders that never made it to the earth, and if a step leader doesn't make it to the earth you don't get a return stroke and so the light as you can see here is much less.
And what you think here is only one bolt is probably at least 10, 5 to 10, maybe 15, flashes.
All right next slide please.
Here you see the result of a Boys camera exposure.
For those of you who are sitting in front you can recognize maybe the Empire State Building here.
And the Empire State Building is hit here by lightning at the very tip, that's the sharp edge, that's where you expect it to be hit.
This is not taken when the camera was rotating.
This is just the exposure the way you and I would see it.
Not moving camera but here you see the result of the rotating Boys camera.
And this is the same flash.
So here you see the return stroke, the -- the light from the step leader is too faint.
You can't see that.
So here is the return stroke and then this time separation may be 30 or 40 milliseconds, see another stroke, you see another one, and another one, so there's 6 here, looks like you see a double one here.
And so you have 6 or 7 of these return strokes.
And this is the way that you can study speeds and how much charge actually is exchanged between these uh between the clouds and in this case the Empire State Building.
Uh the next slide shows you a corona discharge in the laboratory; this is a high voltage supply with a very sharp tip -- tip here at the end, the sharp point, and here you see not individual sparks, you don't call this lightning but this is what you would call the St. Elmo's fire, the corona discharge is bluish light.
And in fact when you are close to this power supply you can also smell the ozone.
It also produces locally ozone.
And you can see it.
If you make it dark in the laboratory you can see some bluish light.
Uh when I was a graduate student I had to build power supplies, high voltage power supplies, and I remember when my soldering job was not a very good job, that means when I take the soldering iron off then I could draw a little sharp point in the solder, and that would then later cause me problems with corona discharge, that means I would have to redo the soldering so that the radius of the solder joint would become larger, so no sharp points.
That's enough for the slides right now.
Benjamin Franklin invented the lightning rod.
His idea was that through the lightning rod you would get a continuous discharge, corona discharge, between the cloud and the building.
And therefore you would keep the potential difference low.
And so there would be no danger of lightning.
And so he advised King George the Third to put these sharp points on the royal palace and on uh powder houses, ammunition, storage places for ammunition.
There was a lot of opposition against Franklin.
Uh they argued that uh a lightning rod will only attract lightning.
And that the effect of the discharge, lowering the potential difference, would be insignificant.
But nevertheless the King followed Franklin's advice and after the sharp rods, the lightning rods, were placed, there was a lightning bolt that hit one of the ammunition places at Purfleet, but there was very little damage.
And so we now know that on the one hand the discharge is indeed insignificant.
And so the opposition was correct.
And in fact you do attract lightning, unlike what Franklin had hoped for.
However, if your lightning rod is thick enough that it can handle the high current, which is 10000 or 100000 amperes, then the current will go through the lightning rod and therefore there will not be an explosion.
So it will not hit the building.
So it will be confined to the lightning rod.
And so it worked but for different reasons than Franklin had in mind, but he had the right intuition.
He was a very great scientist, and a great statesman.
And so his lightning rod survived up to today.
So now I want to return to the Van de Graaff and show you some of the things that we have discussed.
And the first thing that I would want to do is create some sparks.
I run the Van de Graaff and I will approach it with this small sphere, small radius, and as I come closer and closer, the electric field will build up here and then I would predict that if sparks fly over, that they would go between the Van de Graaff and this uh this sphere.
This sphere is grounded.
And so any current that will flow will flow not through Walter Lewin but will go through the ground, so there's no danger that anything will happen to me.
At least not yet.
You already hear some cracking noise.
That means there are already sparks flying around inside there.
It's very hard to avoid, there are always some sharp edges in there that we cannot remove.
This is not an ideal instrument.
But I still think I will be able to show you some lightning.
By coming closer.
There we go.
So what you think is only one spark may well be several like these return strokes, the way I described with lightning.
So what you're seeing here now is that the electric field locally has become larger than 3 million volts per meter and then you're going to this discharge phenomenon that we described, and that gives you then -- that gives you the lightning.
What I will do now is I would like you to experience -- although it may not be so fascinating for you -- to experience a corona discharge between a very sharp point that I have here, extremely sharp, and the Van de Graaff.
And the only way that I can convince you that there is indeed going to be a discharge between this point and the Van de Graaff is by approaching the Van de Graaff and this cracking noise that you hear now will disappear.
And the reason why it will disappear is that if I get a corona discharge between the tip and the Van de Graaff it will drain current, it will lower the potential and so that cracking noise will disappear.
So the sparks which are now flying over will not fly over anymore.
You will not be able to see the light.
It's -- there's too much light here.
Although I can probably see at the tip here this blue light.
So I'm going to approach the Van de Graaff now.
It's almost as if I had a lightning rod and I'm not worried at all because if any current starts flowing it goes through this rod, which is like a lightning rod to the earth.
So I'm not worried at all.
I just am very brave, very courageous, approaching the V- the Van de Graaff, and I want you to listen to that cracking noise.
That cracking noise will disappear when I'm going to be -- draw a current through this sharp point.
Oh, boy, there I go.
And the cracking stops.
And I can actually see here some glowing discharge, bluish.
Will be impossible for you to see.
I can come closer, I'm not worried.
And so I'm draining charge now off the Van de Graaff thereby lowering the potential of the Van de Graaff and so these crazy sparks that occur here can no longer occur.
But now they will.
Can you hear them?
And now you can't.
If I were crazy then I would develop a corona discharge between the Van de Graaff and myself.
One way I could do that is by approach it with my fingertips as I mentioned earlier, but that may be a little bit too dangerous because I may draw a spark, I may be hit by lightning, which is the last thing that I would want today.
However, a corona discharge using these tinsels may be less dangerous.
So I get a continuous flow of current which now unfortunately doesn't go through the lightning rod but now it goes straight through my body.
And I can assure you that I can feel that.
It's probably a very low current.
It may be only a few microamperes.
But it's not funny.
It's not pleasant.
But anything for my students, what the hell.
There we go.
Ya ya ya ya ya.
You see tinsels, I'm now in a corona discharge and I feel the current through my fingers, it's a continuous discharge now.
This is St. Elmo's fire -- you can't h- ah, there was lightning.
Boy, you got something for your 27000 dollars.
So you saw both corona discharge and you saw lightning.
Boy, you were luckier than the -- than the first class by the way.
Clearly lightning can be dangerous, lightning can cause a fire, it can ignite fumes, it can explode fumes, if you gas up your car just the flow of gasoline can charge up the nozzle, friction can charge things up, that's why the nozzle is always grounded, because a spark could cause a major explosion.
If you fill a balloon with hydrogen then the flow of hydrogen is friction, can charge up the balloon and a spark can then ignite the hydrogen.
And this has led to a classic tragic accident, it's a long time ago.
But it's so classic that I really have to show this to you.
Hitler was very proud of his large airships.
They're named after Graf Zeppelin the Germans called them the Zeppelins, we call them dirigibles or blimps.
And one of the largest ones that Hitler's Germany ever built was the Hindenburg, 803 feet long and 7 million cubic feet of hydrogen.
And the Germans couldn't fill their Zeppelins with helium because they didn't have helium.
And the Americans were not going to sell them helium, for very good reason.
And so they had to fill them with hydrogen.
And so the Hindenburg which was the name of this Zeppelin came over in May, 1937 and when it arrived at Lakehurst in New Jersey it started a gigantic fire.
It came over in 35 hours trans-Atlantic and you see here the explosion.
May 6 at 7:25 in the afternoon.
There were 45 passengers on board and 35 died in this fire.
The speculation was that this may have been sabotage.
It's still quite possible.
Although the official inquiry board concluded that it was St.
Elmo's fire, that as the uh ship moored on this mast here, that a spark flew over and that that caused the uh the explosion, the fire.
And it was the end of the dirigibles for Germany.
Napoleon, also not the nicest man on earth, uh had the suspicion when many of his soldiers got sick in Egypt that this was the result of marsh gas.
And they suspected that this bad air that they could smell when they were near marshes that that was the cause of the disease, bad air in French is mal air, and so they called the disease malaria.
And so the way that they tested the air to make sure that the soldiers wouldn't get malaria was to build a small gun which was like so, this was a conducting barrel.
And they would let some of this marsh gas in the gun and put a cork on here, close it off, and here was a sharp pin, this pin was completely insulated from the barrel, the conducting barrel, and then they would put some charge on here, so that the spark would fly over there.
This is really the precursor of the spark plug that we have in our cars.
It's no different.
And so if indeed there was then this marsh gas in there, there might be an explosion and that was a warning then that um there may be danger for the soldiers.
Well, this morning I was walking through the building and I was in Lobby 7 and I smelled some funny, it was a funny smell, and I was just wondering whether perhaps, who knows, at MIT anything can happen, whether uh there was some uh some uh gas there that shouldn't be there.
And so I brought my uh my special gun which is here, which is uh built after Napoleon and uh you see here this uh little sphere and I opened up the cork here and I let some of that air in, Building 7, and then I decided that we, you and I would do the test and see whether perhaps there was some uh some gas there that uh may cause some danger.
So I would have to cause a discharge then inside the -- the barrel here.
I can try to do that by combing my hair uh but that may not be sufficient amount of charge so I can always make sure that there will be a spark inside that gun and use this -- this disk.
Which has a little bit more charge on it.
So here is then this uh Lobby 7 gas inside.
Now of course there's one possibility that there was nothing wrong with the air, in which case you will see nothing.
And there is another possibility that the air wasn't kosher enough and that you may see here small bloop and since it's going to be very small at best you have to be very quiet otherwise you won't hear anything.
And so let's first try now with my comb.
I have my comb here.
To see whether I can generate a spark inside this barrel and that may not work because I'm not sure that I get enough charge on this comb.
No, that doesn't work at all.
Well, let's see whether we can use this instrument.
I sure hope that we won't get malaria.
See you tomorrow. | <urn:uuid:0b039c0b-d2e7-44ec-92d2-bb2f379367ca> | 3.390625 | 9,350 | Audio Transcript | Science & Tech. | 67.335169 | 717 |
On November 25, 1952, three months after returning from England, Pauling finally made a serious stab at a structure for DNA. The immediate spur was a Caltech biology seminar given by Robley Williams, a Berkeley professor who had done some amazing work with an electron microscope. Through a complicated technique he was able to get images of incredibly small biological structures. Pauling was spellbound. One of Williams's photos showed long, tangled strands of sodium ribonucleate, the salt of a form of nucleic acid, shaded so that three-dimensional details could be seen. To Pauling the strands appeared cylindrical. He guessed then, looking at these black-and-white slides in the darkened seminar room, that DNA was likely to be a helix. No other conformation would fit both Astbury's x-ray patterns of the molecule and the photos he was seeing. Even better, Williams was able to estimate the sizes of structures on his photos, and his work showed that each strand was about 15 angstroms across. Pauling was interested enough to ask him to repeat the figure, which Williams qualified by noting the difficulty he had in making precise measurements.

The next day, Pauling sat at his desk with a pencil, a sheaf of paper, and a slide rule. New data that summer from Alexander Todd's laboratory had confirmed the linkage points between the sugars and phosphates in DNA; other work showed where they connected to the bases. Pauling was already convinced from his earlier work that the various-sized bases had to be on the outside of the molecule; the phosphates, on the inside. Now he knew that the molecule was probably helical. These were his starting points for a preliminary look at DNA. He still lacked critical data - he had no decent x-ray images, for instance, and no firm structural data on the precise sizes and bonding angles of the base-sugar-phosphate building blocks of DNA - but he went with what he had.

It was a mistake. After a few pages of theorizing, using sketchy and sometimes incorrect data, Pauling became convinced - as Watson and Crick had been at first - that DNA was a three-stranded structure with the phosphates on the inside. Unfortunately, he had no Rosalind Franklin to set him right.
Assume you have a planet of mass $M$ and radius $R$, and a stationary spaceship at a distance $4R$ from the center of the planet. If a projectile of mass $m$ is launched from the spaceship with velocity $v$ and just grazes the planet's surface, what will be the locus of the projectile?
I guess on Earth we take the projectile's path to be parabolic because there is essentially no variation in the acceleration due to gravity. But in this case the acceleration due to gravity will change with distance from the planet.
So in the end, will the trajectory be parabolic, elliptical, or circular? Explain why, with full proof.
Orbiting Retrievable Far and Extreme Ultraviolet Spectrometer
Launch Date: November 20, 1996
Mission Project Home Page - http://nssdc.gsfc.nasa.gov/nmc/masterCatalog.do?sc=1996-065B
The ORFEUS-SPAS II mission followed the ORFEUS-SPAS I mission flown in 1993, motivated by improvements in instrument performance and the critical need for additional observation time. The purpose of the ORFEUS-SPAS II mission was to conduct investigations of celestial sources in the far and extreme ultraviolet spectral range, and to increase understanding of the evolution of stars, the structure of galaxies, and the nature of the interstellar medium. ORFEUS-SPAS II was one of a series of planned joint DARA (German Space Agency) /NASA missions. The name arises from the reusable Astro-Shuttle Pallet Satellite (Astro-SPAS), and the Orbiting Retrievable Far and Extreme Ultraviolet Spectrometers (ORFEUS) Telescope carried on Astro-SPAS.
ORFEUS-SPAS was a free-flying platform designed to be deployed and retrieved from the space shuttle. The Astro-SPAS carrier was powered by batteries, and data from the instruments were stored on tape. Absolute pointing was accurate to within a few arc seconds. ORFEUS-SPAS was 4.5 m long with a 2.5 m wide base, and it operated approximately 40 km from the shuttle.
ORFEUS-SPAS II carried the same three spectrometers, operating over the wavelength range 400 - 1250 Angstroms, as was carried on ORFEUS-SPAS I. The Tubingen Ultraviolet Echelle Spectrometer (TUES) and the Berkeley Extreme and Far-UV Spectrometer (BEFS) were housed on the primary instrument - the ORFEUS 1-m telescope. The Interstellar Medium Absorption Profile Spectrograph (IMAPS) was operated independently from ORFEUS.
The ORFEUS-SPAS II mission was flown in November-December 1996. The mission acquired spectra of numerous celestial objects during 14 days of observations. An overall efficiency of 62.5% was achieved for all instruments.
Last updated: April 16, 2012
- STS-80 info - http://science.ksc.nasa.gov/shuttle/missions/sts-80/mission-sts-80.html | <urn:uuid:9ee8ab98-958f-405d-a451-31117863a388> | 2.921875 | 516 | Structured Data | Science & Tech. | 48.8451 | 720 |
Genome-sequencing data indicates that sponges were preceded by ctenophores, complex marine predators also called comb jellies.
The scientists presented their findings at the annual meeting for the Society for Integrative and Comparative Biology, in San Francisco, California.
Although they are gelatinous like jellyfish, comb jellies form their own phylum (Ctenophora). The tree of life roots the comb jellies' lineage between the group containing jellyfish and sea anemones and the one containing animals with heads and rears, which includes slugs, flies, and humans. Ctenophores swim through the sea with iridescent cilia, and snare prey with sticky tentacles. They have nerves, muscles, tissue layers and light sensors, all of which sponges lack.
Developmental biologists from the University of Washington in Seattle sequenced the genome of the comb jelly Pleurobrachia bachei and they discovered that DNA sequences place them at the base of the animal tree of life. Another team presented results from the genome sequencing of the comb jelly Mnemiopsis leidyi and found that the phylum lands below or as close to the base as sponges on the tree of life.
It’s been thought that predator-prey interactions as well as sensory adaptations evolved long after the origin of sponges, states Billie Swalla, part of the University Washington team. The ancestor of all animals might look different from modern comb jellies and sponges.
Gene families, cell-signaling networks, and patterns of gene expression in comb jellies support the ancient origin. Ctenophores grow their nerves with unique sets of genes. It’s possible that they are descendants of Ediacaran organisms, which appeared in the fossil record before animals.
Comb jellies are the only animals that lack certain genes crucial to producing microRNA, states Andy Baxevanis, a comparative biologist at the US National Human Genome Research Institute in Bethesda, Maryland, and leader of the M. leidyi project. These short RNA chains help regulate gene expression. Sponges and comb jellies lack other gene families that all other animals possess.
If ctenophores evolved before sponges, the sponges probably lost some of their ancestors’ complexity. It’s also possible that sponges have a complexity that has yet to be defined. | <urn:uuid:0788be3c-ba50-4b6a-82f6-5035aa1dc9ce> | 3.875 | 506 | News Article | Science & Tech. | 31.777569 | 721 |
Mission Type: Flyby
Launch Vehicle: 8K78 (no. T103-16)
Launch Site: NIIP-5 / launch site 1
Spacecraft Mass: 893.5 kg
Spacecraft Instruments: 1) imaging system and 2) magnetometer
Spacecraft Dimensions: 3.3 m long and 1.0 m in diameter (4 m across with the solar panels and radiators deployed)
Deep Space Chronicle: A Chronology of Deep Space and Planetary Probes 1958-2000, Monographs in Aerospace History No. 24, by Asif A. Siddiqi
National Space Science Data Center, http://nssdc.gsfc.nasa.gov/
The second of three Soviet spacecraft intended for the 1962 Mars launch window, Mars 1 was the first spacecraft sent by any nation to fly past Mars.
Its primary mission was to photograph the surface. This time the upper stage successfully fired the probe toward Mars, but immediately after engine cutoff, controllers discovered that pressure in one of the nitrogen gas bottles for the spacecraft's attitude-control system had dropped to zero (due to incomplete closure of a valve).
On 6 and 7 November 1962, controllers used a backup gyroscope system to keep the solar panels constantly exposed to the Sun during the coast phase, although further midcourse corrections became impossible. Controllers maintained contact with the vehicle until 21 March 1963, when the probe was 106 million kilometers from Earth.
Mars 1 eventually silently flew by Mars at a distance of 197,000 kilometers on 19 June 1963. Prior to loss of contact, scientists were able to collect data on interplanetary space (on cosmic-ray intensity, Earth's magnetic fields, ionized gases from the Sun, and meteoroid impact densities) up to a distance of 1.24 AU. | <urn:uuid:53817c6b-e149-4531-b0dd-eea6dbc743b2> | 3.671875 | 369 | Knowledge Article | Science & Tech. | 55.919265 | 722 |
Seen at the Air Force Space and Missile Museum at Cape Canaveral Air Force Station, Florida — it's a model of the Dynasoar space plane:
Likely the most poorly-named program ever conceived, the Dynasoar (for dynamic soaring) was an early attempt at making a reusable manned space plane — essentially a mini-shuttle, and in some sense a follow-on to the X-15 experimental aircraft. First proposed in 1957, the U. S. Air Force saw this single-seat craft as their way into space — and assigned it a dizzying future array of tasks. Variants were discussed for reconnaissance, long-range weapons delivery, and even in-orbit warfare (the Soviets were planning similar vehicles at the time, so this was hardly unilateral thinking).
Ultimately, the program wound down in 1963, victim of an unclear mission, escalating costs, and a hostile political environment — just months away from completion of the first flight-worthy vehicle. In its early days, Dynasoar was hobbled by the Eisenhower administration’s desire to avoid military competition with NASA’s mission of manned orbital flight. Once the Kennedy administration was in place, Defense Secretary McNamara ultimately cancelled Dynasoar in favor of military use of the Gemini spacecraft that NASA was then developing (although it too would soon be cancelled). | <urn:uuid:608f1efc-7218-4646-849b-f4f9f32d2d66> | 3.25 | 280 | Personal Blog | Science & Tech. | 24.512033 | 723 |
There are multiple variations of the IF statement: the simple IF, IF...ELSE, IF with conditions combined by AND or OR, IF EXISTS, and nested IF...ELSE constructs.
The simple IF construct is used to evaluate a Boolean condition and execute an appropriate set of commands. For instance the following example determines whether the current day of the week is Friday:
IF DATEPART(dw, GETDATE()) = 6
BEGIN
    PRINT 'TGI Friday'
END
We can easily extend the same construct by adding an ELSE clause to take the alternative course of action and calculate the number of days before Friday:
IF DATEPART(dw, GETDATE()) = 6
BEGIN
    PRINT 'TGI Friday'
END
ELSE
BEGIN
    SELECT 'Sorry, Friday is ' + CONVERT(VARCHAR(1), 6 - DATEPART(dw, GETDATE())) + ' days away'
END
Alternatively we can determine the Boolean value of a condition based on the outcome of a SELECT statement:
IF (SELECT COUNT(*) FROM authors) > 20
    PRINT 'authors table contains more than 20 records'
Note: If you wish to execute multiple statements depending on the value of a condition, be sure to enclose those statements within a BEGIN...END block.
Sometimes you have to check for multiple conditions and control your program accordingly. You can do so by combining these conditions with AND and OR within the simple IF construct. The following example demonstrates two conditions combined with the AND operator:
IF (SELECT COUNT(*) FROM authors) > 20
   AND (SELECT COUNT(*) FROM authors WHERE state = 'ca') > 0
BEGIN
    PRINT 'authors table contains more than 20 records'
    SELECT CAST((SELECT COUNT(*) FROM authors WHERE state = 'ca') AS VARCHAR(2)) + ' of them are from california'
END
The previous query will return the number of authors in CA only if the authors table contains any authors from that state and only if the total number of authors is greater than 20. Therefore the AND operator returns TRUE only if both parts of the condition are true. You can also combine conditions with the OR operator, which checks both conditions and returns TRUE if one of them is TRUE or if both of them are true. The OR operator returns FALSE if both parts of the condition are FALSE. Although you can use the OR operator within a simple IF construct, it is not recommended to do so. If you have to check for multiple conditions combined with OR logic, it is recommended to use either nested IF statements or the CASE operator.
The IF EXISTS construct checks for the existence of a record fitting a specified criterion, or of any records in the specified table, and takes action accordingly. This is a great approach to take if you don't have to retrieve any values from a table. Since you cannot do any data retrieval with EXISTS, the SELECT statement within the EXISTS clause does not have to specify any column names - the "*" operator will suffice. In fact IF EXISTS will be more efficient than IF SELECT. The following example checks whether there are any authors from KS in the authors table:
IF EXISTS (SELECT * FROM authors WHERE state = 'ks')
    PRINT 'found author(s) from KS'
As we noted earlier, if you have to check multiple conditions you can use nested IF ELSE statements. These should be handled with care and appropriately indented to make the code readable. If you've used any other programming language you'll find T-SQL nested IFs very easy to get used to. The following example executes appropriate user stored procedures depending on the price of "abc" stocks:
DECLARE @stock_price SMALLMONEY

SELECT @stock_price = stock_price
FROM high_yield_stock
WHERE stock_symbol = 'abc'

IF @stock_price > 10.01
BEGIN
    EXEC usp_alert_for_high_prices
END
ELSE IF @stock_price BETWEEN 5.01 AND 10.00
BEGIN
    EXEC usp_notify_managers
END
ELSE
    EXEC usp_alert_for_price_drop
If the outer condition evaluates to TRUE, the inner conditions will not be checked. This is especially useful if the evaluation criteria require multi-table joins or searches which are resource intensive.
You can also negate a condition with NOT. The following example prints a message only if no sales were recorded in 1995:

IF NOT (SELECT COUNT(*) FROM sales WHERE ord_date BETWEEN '1/1/95' AND '12/31/95') > 0
BEGIN
    PRINT 'no sales have been recorded in 1995'
END
by Anne E. Egger, Ph.D.
We all see changes in the landscape around us, but your view of how fast things change is probably determined by where you live. If you live near the coast, you see daily, monthly, and yearly changes in the shape of the coastline. Deep in the interior of continents, change is less evident – rivers may flood and change course only every 100 years or so. If you live near an active fault zone or volcano, you experience infrequent but catastrophic events like earthquakes and eruptions.
Throughout human history, different groups of people have held to a wide variety of beliefs to explain these changes. Early Greeks ascribed earthquakes to the god Poseidon expressing his wrath, an explanation that accounted for their unpredictability. The Navajo view processes on the surface as interactions between opposite but complementary entities: the sky and the earth. Most 17th century European Christians believed that the earth was essentially unchanged from the time of creation. When naturalists found fossils of marine creatures high in the Alps, many devout believers interpreted the Old Testament literally and suggested that the perched fossils were a result of the biblical Noah’s flood.
In the mid-1700’s, a Scottish physician named James Hutton (see Biography link to the right) began to challenge the literal interpretation of the Bible by making detailed observations of rivers near his home. Every year, these rivers would flood, depositing a thin layer of sediment in the floodplain. It would take many millions of years, reasoned Hutton, to deposit a hundred meters of sediment in this fashion, not just the few weeks allowed by the Biblical flood. Hutton called this the principle of uniformitarianism: processes that occur today are the same ones that occurred in the past to create the landscape and rocks as we see them now. By comparison, the strict biblical interpretation, common at the time, suggested that the processes that had created the landscape were complete and no longer at work.
Figure 1: This image shows how James Hutton first envisioned the rock cycle.
Hutton argued that, in order for uniformitarianism to work over very long periods of time, earth materials had to be constantly recycled. If there were no recycling, mountains would erode (or continents would decay, in Hutton’s terms), the sediments would be transported to the sea, and eventually the surface of the earth would be perfectly flat and covered with a thin layer of water. Instead, those sediments once deposited in the sea must be frequently lifted back up to form new mountain ranges. Recycling was a radical departure from the prevailing notion of a largely unchanging earth. As shown in the diagram above, Hutton first conceived of the rock cycle as a process driven by earth’s internal heat engine. Heat caused sediments deposited in basins to be converted to rock, heat caused the uplift of mountain ranges, and heat contributed in part to the weathering of rock. While many of Hutton’s ideas about the rock cycle were either vague (such as “conversion to rock”) or inaccurate (such as heat causing decay), he made the important first step of putting diverse processes together into a simple, coherent theory.
Hutton’s ideas were not immediately embraced by the scientific community, largely because he was reluctant to publish. He was a far better thinker than writer – once he did get into print in 1788, few people were able to make sense of his highly technical and confusing writing (see the Classics link to the right to sample some of Hutton's writing). His ideas became far more accessible after his death with the publication of John Playfair’s “Illustrations of the Huttonian Theory of the Earth” (1802) and Charles Lyell’s “Principles of Geology” (1830). By that time, the scientific revolution in Europe had led to widespread acceptance of the once-radical concept that the earth was constantly changing.
A far more complete understanding of the rock cycle developed with the emergence of plate tectonics theory in the 1960’s (see our Plate Tectonics I module). Our modern concept of the rock cycle is fundamentally different from Hutton’s in a few important aspects: we now largely understand that plate tectonic activity determines how, where, and why uplift occurs, and we know that heat is generated in the interior of the earth through radioactive decay and moved out to the earth’s surface through convection. Together, uniformitarianism, plate tectonics, and the rock cycle provide a powerful lens for looking at the earth, allowing scientists to look back into earth history and make predictions about the future.
The rock cycle consists of a series of constant processes through which earth materials change from one form to another over time. As within the water cycle and the carbon cycle, some processes in the rock cycle occur over millions of years and others occur much more rapidly. There is no real beginning or end to the rock cycle, but it is convenient to begin exploring it with magma. You may want to open the rock cycle schematic below and follow along in the sketch.
Figure 2: A schematic sketch of the rock cycle. In this sketch, boxes represent earth materials and arrows represent the processes that transform those materials. The processes are named in bold next to the arrows. The two major sources of energy for the rock cycle are also shown; the sun provides energy for surface processes such as weathering, erosion, and transport, and the earth's internal heat provides energy for processes like subduction, melting, and metamorphism. The complexity of the diagram reflects a real complexity in the rock cycle. Notice that there are many possibilities at any step along the way.
Magma, or molten rock, forms only at certain locations within the earth, mostly along plate boundaries. (It is a common misconception that the entire interior of the earth is molten, but this is not the case. See our Earth Structure module for a more complete explanation.) When magma is allowed to cool, it crystallizes, much the same way that ice crystals develop when water is cooled. We see this process occurring at places like Iceland, where magma erupts out of a volcano and cools on the surface of the earth, forming a rock called basalt on the flanks of the volcano. But most magma never makes it to the surface and it cools within the earth’s crust. Deep in the crust below Iceland’s surface, the magma that doesn’t erupt cools to form gabbro. Rocks that form from cooled magma are called igneous rocks; intrusive igneous rocks if they cool below the surface (like gabbro), extrusive igneous rocks if they cool above (like basalt).
Figure 3: This picture shows a basaltic eruption of Pu'u O'o, on the flanks of the Kilauea volcano in Hawaii. The red material is molten lava, which turns black as it cools and crystallizes.
Rocks like basalt are immediately exposed to the atmosphere and weather. Rocks that form below the earth’s surface, like gabbro, must be uplifted and all of the overlying material must be removed through erosion in order for them to be exposed. In either case, as soon as rocks are exposed at the earth’s surface, the weathering process begins. Physical and chemical reactions caused by interaction with air, water, and biological organisms cause the rocks to break down. Once rocks are broken down, wind, moving water, and glaciers carry pieces of the rocks away through a process called erosion. Moving water is the most common agent of erosion – the muddy Mississippi, the Amazon, the Hudson, the Rio Grande, all of these rivers carry tons of sediment weathered and eroded from the mountains of their headwaters to the ocean every year. The sediment carried by these rivers is deposited and continually buried in floodplains and deltas. In fact, the U.S. Army Corps of Engineers is kept busy dredging the sediments out of the Mississippi in order to keep shipping lanes open.
Figure 4: Photograph from space of the Mississippi Delta. The brown color shows the river sediments and where they are being deposited in the Gulf of Mexico.
Under natural conditions, the pressure created by the weight of the younger deposits compacts the older, buried sediments. As groundwater moves through these sediments, minerals like calcite and silica precipitate out of the water and coat the sediment grains. These precipitants fill in the pore spaces between grains and act as cement, gluing individual grains together. The compaction and cementation of sediments creates sedimentary rocks like sandstone and shale, which are forming right now in places like the very bottom of the Mississippi delta. Because deposition of sediments often happens in seasonal or annual cycles, we often see layers preserved in sedimentary rocks when they are exposed. In order for us to see sedimentary rocks, however, they need to be uplifted and exposed by erosion. Most uplift happens along plate boundaries where two plates are moving towards each other and causing compression. As a result, we see sedimentary rocks that contain fossils of marine organisms (and therefore must have been deposited on the ocean floor) exposed high up in the Himalaya Mountains – this is where the Indian plate is running into the Eurasian plate.
Figure 5: The Grand Canyon is famous for its exposures of great thicknesses of sedimentary rocks.
If sedimentary rocks or intrusive igneous rocks are not brought to the earth’s surface by uplift and erosion, they may experience even deeper burial and be exposed to high temperatures and pressures. As a result, the rocks begin to change. Rocks that have changed below the earth’s surface due to exposure to heat, pressure, and hot fluids are called metamorphic rocks. Geologists often refer to metamorphic rocks as “cooked” because they change in much the same way that cake batter changes into a cake when heat is added. Cake batter and cake contain the same ingredients, but they have very different textures, just like sandstone, a sedimentary rock, and quartzite, its metamorphic equivalent. In sandstone, individual sand grains are easily visible and often can even be rubbed off; in quartzite, the edges of the sand grains are no longer visible, and it is a difficult rock to break with a hammer, much less rubbing pieces off with your hands.
Some of the processes within the rock cycle, like volcanic eruptions, happen very rapidly, while others happen very slowly, like the uplift of mountain ranges and weathering of igneous rocks. Importantly, there are multiple pathways through the rock cycle. Any kind of rock can be uplifted and exposed to weathering and erosion; any kind of rock can be buried and metamorphosed. As Hutton correctly theorized, these processes have been occurring for millions and billions of years to create the earth as we see it: a dynamic planet.
The rock cycle is not just theoretical; we can see all of these processes occurring at many different locations and at many different scales all over the world. As an example, the Cascade Range in North America illustrates many aspects of the rock cycle within a relatively small area, as shown in the diagram below.
Figure 6: Cross-section through the Cascade Range in Washington state. Image modified from the Cascade Volcano Observatory, USGS.
The Cascade Range in the northwestern United States is located near a convergent plate boundary, where the Juan de Fuca plate, which consists mostly of basalt saturated with ocean water is being subducted, or pulled underneath, the North American plate. As the plate descends deeper into the earth, heat and pressure increase and the basalt is metamorphosed into a very dense rock called eclogite. All of the ocean water that had been contained within the basalt is released into the overlying rocks, but it is no longer cold ocean water. It too has been heated and contains high concentrations of dissolved minerals, making it highly reactive, or volatile. These volatile fluids lower the melting temperature of the rocks, causing magma to form below the surface of the North American plate near the plate boundary. Some of that magma erupts out of volcanoes like Mt. St. Helens, cooling to form a rock called andesite, and some cools beneath the surface, forming a similar rock called diorite.
Storms coming off of the Pacific Ocean cause heavy rainfall in the Cascades, weathering and eroding the andesite. Small streams carry the weathered pieces of the andesite to large rivers like the Columbia and eventually to the Pacific Ocean, where the sediments are deposited. Continual deposition of sediments near the deep oceanic trench results in the formation of sedimentary rocks like sandstone. Eventually, some sandstone is carried down into the subduction zone, and the cycle begins again (see Experiment! link to the right).
The rock cycle is inextricably linked not only to plate tectonics, but to other earth cycles as well. Weathering, erosion, deposition, and cementation of sediments all require the presence of water, which moves in and out of contact with rocks through the hydrologic cycle; thus weathering happens much more slowly in a dry climate like the desert southwest than in the rainforest (see our The Hydrologic Cycle module for more information). Burial of organic sediments takes carbon out of the atmosphere, part of the long-term geological component of the carbon cycle (see our The Carbon Cycle module); many scientists today are exploring ways we might be able to take advantage of this process and bury additional carbon dioxide produced by the burning of fossil fuels (see News and Events link to the right). The uplift of mountain ranges dramatically affects global and local climate by blocking prevailing winds and inducing precipitation. The interactions between all of these cycles produce the wide variety of dynamic landscapes we see around the globe.
Anne E. Egger, Ph.D. "The Rock Cycle: Uniformitarianism and Recycling," Visionlearning Vol. EAS-2 (7), 2005. | <urn:uuid:1d8cdccb-098e-46d2-97a3-50a9d15430c5> | 4.0625 | 2,938 | Academic Writing | Science & Tech. | 39.073182 | 725 |
3-Tier Web Application Development
By Nannette Thacker
In web application development, three-tier architecture refers to separating the application process into three specific layers. What the user sees via a web browser is called the presentation tier and is content served from a web server. The middle tier performs the business logic processing that occurs, for example, when a user submits a form. The back end consists of the data tier which handles the database processing and access to the data. We'll take a simplistic look at each of these.
The Presentation Tier, or user interface, is the portion the user sees when they open a web page in the browser. It ranges from something as simple as reading this article to searching a catalog and purchasing a product using a shopping cart. It is what is presented to the user on the client side within their web browser.
In ASP.net and utilizing Visual Studio or Visual Web Developer, developers can separate the user interface from the business logic and data access layer with various tools.
ASP.net allows using MasterPages to set up the site's look and feel. In addition, when creating a WebForm that utilizes the MasterPage, you may place its code in a separate file, known as codebehind, thus keeping your business logic in a separate layer from the look and feel.
You may also set up the site design using Themes, Skins, and Cascading Style Sheets.
Business Logic or Application Tier
Data Access Tier
In ASP.net, the Data Access layer is where you define your typed datasets and tableadapters. It is where you define your queries or stored procedures. The business tier may then make use of this functionality. In your classes, rather than defining ad hoc queries, you may use a TableAdapter to access the Data Access Layer.
As an example of how this works, let's assume you are creating a web page that allows the user to enter information which you wish to then enter into a database. You first create a dataset and tableadapter that allows insert into the table, either by a query or stored procedure. This is your data access layer.
You then create a class, which retrieves the information from the form, checks for field validations and then uses the tableadapter to send the data to the database.
You create a web form, which can use a GridView control or other controls to allow the user to input the data into the web form. In the codebehind of the web form, you handle the submit button click event, and send the data from the form to your class, which sends the information to the database using the tableadapter.
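The same separation can be sketched in any language. The snippet below is a minimal, hypothetical illustration in Python rather than ASP.net, with all class and function names invented for the example; it only shows how the presentation, business logic, and data access layers hand work off to one another.

# Data access tier: the only layer that talks to the database.
class CustomerTableAdapter:
    def insert(self, name, email):
        # In a real application this would run a parameterized INSERT
        # statement or call a stored procedure.
        print(f"INSERT INTO customers (name, email) VALUES ({name!r}, {email!r})")

# Business logic tier: validation and rules, no SQL and no HTML.
class CustomerService:
    def __init__(self, adapter):
        self.adapter = adapter

    def register(self, name, email):
        if not name or "@" not in email:
            raise ValueError("invalid customer data")
        self.adapter.insert(name, email)

# Presentation tier: gathers input and calls the business layer.
def handle_submit(form):
    service = CustomerService(CustomerTableAdapter())
    service.register(form["name"], form["email"])

handle_submit({"name": "Ada", "email": "ada@example.com"})

Because the presentation function only talks to the service class, and the service class only talks to the adapter, any one layer can be swapped out without touching the other two.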
Used properly, a multi-tier architecture improves performance and scalability. If a web page needs an update or redesign, all of this may be handled by altering the CSS and HTML, without affecting the business or data logic. Any of the three tiers may be replaced or upgraded individually without affecting the other tiers. For instance, if you change the database on the back end, it wouldn't affect the presentation or business logic tiers, other than changing the database connection.
This is a simple introduction to the three-tier web architecture, but I hope it has helped you understand the layers of a multi-tier architecture.
May your dreams be in ASP.net! | <urn:uuid:b20d7c6a-30c9-48e5-bbf6-e63641a13898> | 3.125 | 684 | Personal Blog | Software Dev. | 49.558458 | 726 |
RNA, ribonucleic acid, is built up of a nitrogenous base, a ribose sugar, and a phosphate. The bases used are adenine (A), cytosine (C), guanine (G) and uracil (U).
The chemical structure of RNA
There are four major groups of RNA: messenger RNA (mRNA), ribosomal RNA (rRNA), transfer RNA (tRNA) and small, regulatory RNAs (sRNA). mRNA is transcribed from DNA by the enzyme RNA polymerase, and is then used as a template in translation. rRNAs are a major component of the ribosome, the translation machinery. They are divided into the 50S large subunit (23S and 5S) and small 30S (16S) in prokaryotes. The rRNAs decode the mRNA and interact with tRNAs. The tRNAs are attached to specific amino acids and carry them (with the help of elongation factor Tu) to the ribosome during translation. The sRNAs form a quite recently discovered group of regulatory RNAs that are thought to be of great importance especially during stress, when they bind specifically to their targets and as a consequence affect the expression of genes, either at the level of transcription or translation. | <urn:uuid:f44e2793-140c-4216-818a-97021b96f284> | 3.90625 | 271 | Knowledge Article | Science & Tech. | 39.421532 | 727 |
|How the Membrane Protein AmtB Transports Ammonia|
Membrane proteins provide molecular-sized entry and exit portals for the various substances that pass into and out of cells. While life scientists have solved the structures of protein channels for ions, uncharged solutes, and even water, up to now they have only been able to guess at the precise mechanisms by which gases (such as NH3, CO2, O2, NO, N2O, etc.) cross biological membranes. But, with the first high-resolution structure of a bacterial ammonia transporter (AmtB), determined by a team in the Stroud group from the University of California, San Francisco, it is now known that this family of transporters conducts ammonia by stripping off the proton from the ammonium (NH4+) cation and conducting the uncharged NH3 “gas.”
Progress in determining structures of membrane proteins of all kinds has been slowed by the difficulty of obtaining sufficiently robust crystals that diffract to high resolution. A common strategy is to grow crystals of proteins from multiple organisms in which the protein is known to have evolved from a common ancestor (orthologs) and select the one that gives the best diffraction data. The UCSF researchers cloned multiple orthologs of the integral membrane protein AmtB belonging to the Amt/MEP/Rh superfamily.
To define any preferred sites for ammonia or methyl ammonia (CH3NH2) and to clarify the mechanism for transport or conductance of these molecules, crystals were grown in the absence of any ammonium derivative and in the presence of ammonium sulfate or methyl ammonium sulfate.
Diffraction data from crystals of AmtB from the bacterium Escherichia coli were collected at ALS Beamline 8.3.1 with a CCD area detector. Phases were calculated from multiple-wavelength anomalous dispersion (MAD) data from a selenomethionine (SeMet)-substituted protein. After data processing (solvent flattening and phase extension to 2.0 Å), the model was refined to 1.35 Å, the highest-resolution structure of any membrane protein to date.
Overall, the structure shows that AmtB is a trimer, with each monomer containing a channel conducting ammonia. The monomer protein chain includes two structurally similar motifs of opposite polarity. Each motif spans the cell membrane between the periplasm (region between the cell wall and the membrane) and the cytoplasm (cell interior) five times.
Comparison of the structures with and without ammonia and with methyl ammonia enabled the team to identify a wider vestibule site at the periplasmic side of the membrane that recruits NH4+ and a narrower 20-Å-long hydrophobic channel midway through the membrane that lowers the dissociation constant of NH4+, thereby forming NH3, which is then stabilized by interactions with two conserved histidine side chains inside the channel. In a second vestibule at the cytoplasmic end of the channel, the NH3 returns to equilibrium as NH4+. An ammonia conduction assay was devised using stopped-flow kinetics and, together with the structural result, proved that it is only neutral NH3 that is conducted by the channel. This is the first time that the structure and mechanism of a “gas channel” has been determined.
Conductance of uncharged NH3, versus the NH4+ ion, solves several biological problems. Transport of only uncharged NH3 assures selectivity against all ions. NH4+ or any other ion would be unstable in the center of the hydrophobic bilayer, while NH3 is not. Passage of uncharged NH3 would not result in a net change of protons across the membrane nor would it change the membrane potential; thus neither energy nor any negative counterion to balance the charge is needed to accumulate ammonia.
The structure of AmtB and the mechanism of gas transport are common to other members of the superfamily in eukaryotic cells. For example, related Rh proteins in humans are thought to be critical players in systemic pH regulation in the kidney, in amino acid biosynthesis, and in the central nervous system.
Research conducted by S. Khademi, J. O’Connell III, J. Remis, Y. Robles-Colmenares, L.J.W. Miercke, and R.M. Stroud (University of California, San Francisco).
Research funding: National Institutes of General Medical Sciences. Operation of the ALS is supported by the U.S. Department of Energy, Office of Basic Energy Sciences.
Publication about this research: S. Khademi, J. O'Connell III, J. Remis, Y. Robles-Colmenares, L.J. Miercke, and R.M. Stroud, “Mechanism of ammonia transport by Amt/MEP/Rh: Structure of AmtB at 1.35 Å,” Science 305, 1587 (2004). | <urn:uuid:8f2b79b8-6ec8-43b1-bfb2-c21e1e8f0d36> | 2.859375 | 1,060 | Knowledge Article | Science & Tech. | 45.421513 | 728 |
Johannes Wilcke invented and then Alessandro Volta perfected the electrophorus over two hundred years ago. This device was quickly adopted by scientists throughout the world because it filled the need for a reliable and easy-to-use source of charge and voltage for experimental researches in electrostatics [Dibner, 1957]. Many old natural philosophy texts contain lithographs of the electrophorus.
A hand-held electrophorus can produce significant amounts of charge conveniently and repeatedly. It is operated by first frictionally charging a flat insulating plate called a "cake". In Volta's day, the cake was made of shellac/resin mixtures or a carnauba wax film deposited on glass. Nowadays, excellent substitutes are available. TeflonTM, though a bit expensive, is a good choice because it is an excellent insulator, charges readily, and is easy to clean and maintain. The electrophorus is ideal for generating energetic capacitive sparks required for vapor ignition demonstrations.
The basic operational steps for the electrophorus are depicted in the sequence of diagrams below. Note that the electrode, though making intimate contact with the tribocharged plate, actually charges by induction. No charge is removed from the charged cake and, in principle, the electrode can be charged any number of times by repeating the steps depicted. Click here to view a neat animation of the electrophorus charging process. Ainslie describes interesting experiments with an electrophorus that was charged in the springtime and then its charge monitored throughout the summer [Ainslie, 1982]. The apparent disappearance of the charge during humid weather and its reappearance in the fall must be attributed to changes in the humidity.
The energy for each capacitive spark drawn from the electrophorus is actually supplied by the action of lifting the electrode off the cake. This statement can be confirmed by investigating the strength of the sparks as a function of the height to which the electrode is lifted. Layton makes this point and further demonstrates with a small fluorescent tube the dependence of the electrostatic potential on the position of the electrode [Layton, 1991]. Lifting the electrode higher gives stronger sparks [Lapp, 1992].
The electrophorus works most reliably if the charged insulating plate rests atop a grounded plane, such as a metal sheet, foil, or conductive plastic. [See Bakken Museum booklet, pp. 78-80.] The ground plane limits the potential as the electrode is first lifted from the plate, thus preventing a premature brush discharge. In dry weather, powerful 3/4" (2 cm) sparks can be drawn easily from a 6" (15 cm) diameter, polished, nick-free aluminum electrode. Estimating the potential of the electrode at V = ~50 kV and the capacitance at C = ~20 pF, we get
Q = CV = ~1 microCoulomb
for the charge and
Ue = CV²/2 = ~30 millijoules
for the capacitive energy. This energy value easily exceeds the minimum ignition energy (MIE) of most flammable vapors.
Click here to learn about a new type of electrophorus invented by S. Kamachi. The web site of the world-famous Exploratorium in San Francisco describes a simple electrophorus made of aluminum pie plates and other inexpensive materials. Young scientists should check out this page. In addition, the library references below contain interesting information about the electrophorus and other electrostatics demonstrations. One example is the cylindrical electrophorus [Ainslie, 1980].
A simple leaf electroscope attachment, shown in the figure below, makes it very easy to reveal some of the important charging and charge redistribution phenomena of the electrophorus. This accessory is especially handy because it works even on warm, humid days when large, impressive sparks can not be coaxed out of the electrophorus. Refer to the electroscope page for details on how to make this convenient accessory.
The electroscope is operated in the same way as before, but now the electroscope reveals information about the charge and its distribution on the electrode. In particular, it should be noted that, as the electrophorus is lifted up, its charge does not change. The leaves of the electroscope spread apart because the constant charge on the electrode redistributes itself, with about half of the charge moving to the top surface. Another thing to notice is that the leaves, which spread to a wide angle when the electrode is first lifted, slowly come back together with time, indicating the leakage of electric charge, presumably due to corona discharge from the edges of the leaves.
Corona discharge accessory
Another simple accessory is a corona discharge point that can be attached to the electrophorus. The attachment is a metal rod of diameter 1/16" or greater with one end sharpened to a point. When the charged electrode is lifted, the electric field at the sharpened tip exceeds the corona limit and a local discharge starts, dissipating the charge on the electrophorus. If one listens closely as the electrode is lifted, a soft, varied-pitch buzzing noise lasting just a few seconds may be heard. This is the corona, and it stops after the voltage has been reduced below the corona threshold. Passive corona discharge points are used widely in manufacturing to dissipate unwanted static charge.
The corona discharge can be largely suppressed by covering the sharpened point with a small piece of antistatic plastic foam of the type used for packaging ESD-sensitive electronic components. The figure below shows how this scheme -- called resistive grading -- works to reduce or stop corona discharges.
D.S. Ainslie, "Inversion of electrostatic charges in a cylindrical electrophorus", Physics Teacher, vol. 18, No. 7, October, 1980, p. 530.
D.S. Ainslie, "Can an electrophorus lose its charge and then recharge itself?", Physics Teacher, vol. 20, No. 4, April, 1982, p. 254.
Bakken Library and Museum, Sparks and Shocks, Kendall/Hunt Publishing Co., Dubuque, IA, 1996, pp. 53-55.
B. Dibner, Early Electrical Machines, pub. #14, Burndy Library, Norwalk, CT, 1957, p. 50-53.
R.A. Ford, Homemade Lightning: creative experiments in electricity (2nd ed.), TAB Books (McGraw-Hill), New York 1996, chapter 10.
O.D. Jefimenko, "Long-lasting electrization and electrets," in Electrostatics and its Applications (A.D. Moore, ed.), Wiley-Interscience, New York, 1973, pp. 117-118.
D.R. Lapp, "Letters," Physics Teacher, November, 1992, p. 454.
W. Layton, "A different light on an old electrostatic demonstration," Physics Teacher, Vol. 29, No. 1, January, 1991, p. 50-51.
K.L. Ostlund and M.A. Dispezio, "Static electricity dynamically explored," Science Scope, February, 1996, pp. 12-16. | <urn:uuid:c009257d-7858-4c03-97a8-b9e17e05b736> | 3.640625 | 1,544 | Tutorial | Science & Tech. | 43.789977 | 729 |
Science Fair Project Encyclopedia
GABA A receptor
The receptor is a multimeric transmembrane receptor that sits in the membrane of its neuron. Once bound to its ligand, the protein receptor changes conformation within the membrane. This particular protein is configured in such a way as to allow certain ions to pass through its pore when the pore is open. The ligand GABA is the endogenous compound that tells this receptor to open, allowing chloride ions (Cl-) to pass down their electrochemical gradient. Because the chloride ion concentration is high outside of the cell, opening of the channel pore results in an influx of chloride into the cell, thus hyperpolarizing it.
Other ligands interact with the GABA(A) receptor to mimic GABA or to potentiate its response. Such other ligands include the benzodiazepines (increase pore opening frequency), barbiturates (increase pore opening duration), and certain steroids. Still other compounds interact with the GABA(A) receptor to attenuate the effects of GABA; such blocking agents are Flumazenil (a competitive benzodiazepine antagonist) and picrotoxin, which blocks the channel directly.
The phenotypic response to all of these interactions is seen in effects such as muscle relaxation, sedation, anticonvulsion, and anesthesia, based on the location of the cell in question, its intracellular second-messenger milieu, and the dose of the ligand at the receptor; the dosage issue is commonly related to the amount of exogenous drug that is delivered to the patient (e.g., anesthesia during surgery).
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License. | <urn:uuid:9474f750-6893-454a-97ac-d40a464838cf> | 3.75 | 365 | Knowledge Article | Science & Tech. | 25.149776 | 730 |
Telescopium, Indus, and Pavo - Downloadable article
Galaxies galore populate this trio of southern constellations.
March 3, 2009
This downloadable article is from an Astronomy magazine 45-article series called "Celestial Portraits." The collection highlights all 88 constellations in the sky and explains how to observe each constellation's deep-sky targets. The articles feature star charts, stunning pictures, and constellation mythology. We've put together 11 digital packages. Each one contains four Celestial Portraits articles for you to purchase and download.
"Telescopium, Indus, and Pavo" is one of four articles included in Celestial Portraits Package 4.
As the cooler air of autumn descends across the Northern Hemisphere, the splendors of the summer sky sink in the west. Sagittarius and the center of the Milky Way dip below the horizon by midevening, yielding to a rather sparse region where star patterns are difficult to trace and galaxies prevail. The southernmost of the constellations east of the Milky Way rank among the most obscure in the entire heavens. From the northern United States, only the top stars in Telescopium and Indus poke above the southern horizon, while Pavo remains completely hidden. Most of this area comes into view from the southern tier of states, though the vista improves markedly from locales even farther south.
A small triangle of modest stars south of Corona Australis forms the shape of Telescopium the Telescope. Only Alpha (α) Telescopii, a yellowish star located 250 light-years from Earth, shines brighter than magnitude 4.0. To read the complete article, purchase and download Celestial Portraits Package 4.
Deep-sky objects in Telescopium, Indus, and Pavo:
IC 4662, NGC 6684, NGC 6744, NGC 6752, NGC 6810, IC 4889, Dunlop 227, NGC 6868, NGC 6876, Abell 3716, NGC 7020, NGC 7041, NGC 7049, Theta Indi, NGC 7090, Y Pavonis, IC 5152 | <urn:uuid:8c98cbd5-d496-4c28-989a-afc0d6c2f9ea> | 2.953125 | 453 | Truncated | Science & Tech. | 49.291445 | 731 |
Solenoids produce magnetic fields that are relatively intense for
the amount of current they carry. To make a direct comparison,
consider a solenoid with 55 turns per centimeter, a radius of 1.25
cm, and a current of 0.170 A.
(a) Find the magnetic field at the center of the solenoid.
(b) What current must a long, straight wire carry to have the same
magnetic field as that found in part (a)? Let the distance from the
wire be the same as the radius of the solenoid, 1.25 cm.
Not sure how to do this one, i maybe use the turns to find force,
then use F=ILBsin() ??? any help is appreciated | <urn:uuid:41425225-25cf-43c5-9a50-529c0f09245d> | 3.484375 | 158 | Q&A Forum | Science & Tech. | 75.352751 | 732 |
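One way to work it (a sketch, not an official solution): no force equation is needed for part (a); the field inside a long solenoid is B = μ0·n·I, and part (b) just inverts the straight-wire formula B = μ0·I/(2πr). In Python:

import math

mu0 = 4 * math.pi * 1e-7   # T*m/A, permeability of free space
n = 55 * 100               # turns per meter (55 turns per cm)
I = 0.170                  # A, solenoid current
r = 1.25e-2                # m, solenoid radius (also the distance to the wire)

# (a) Field at the center of a long solenoid: B = mu0 * n * I
B = mu0 * n * I                      # about 1.17e-3 T

# (b) A long straight wire gives B = mu0 * I / (2*pi*r); solve for the current
I_wire = 2 * math.pi * r * B / mu0   # about 73 A

print(f"B = {B:.3e} T, I_wire = {I_wire:.1f} A")

The comparison makes the point of the problem: the wire would need roughly 400 times the solenoid's current to match its field at that distance.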
5 things about Friday's space events
NASA says objects traveling in different directions
At least 1,000 people have been injured in Russia as the result of a meteor exploding in the air. The energy of the detonation appears to be equivalent to about 300 kilotons of TNT, said Margaret Campbell-Brown of the department of physics and astronomy at the University of Western Ontario.
Meanwhile, an asteroid approached Earth but did not hit it, coming closest at about 2:25 p.m. ET.
You probably have some questions about both of those events, so here's a brief overview:
1. Are these events connected?
The meteor in Russia and the asteroid that passed by on Friday afternoon are "completely unrelated," according to NASA. The trajectory of the meteor differs substantially from that of asteroid 2012 DA14, NASA said. Estimates on the meteor's size are preliminary, but it appeared to be about one-third the size of 2012 DA14.
The term "asteroid" can also be used to describe the rock that exploded over Russia, according to the European Space Agency and NASA, although it was a relatively small one.
2. What's the difference between an asteroid and a meteorite and other space rocks?
According to NASA, here's how you tell what kind of object is falling from the sky:
Asteroids are relatively small, inactive rocky bodies that orbit the sun.
Comets are also relatively small and have ice on them that can vaporize in sunlight. This process forms an atmosphere and dust and gas; you might also see a "tail' of dust or gas.
Meteoroids are small particles from comets or asteroids, orbiting the sun.
Meteors are meteoroids that enter the Earth's atmosphere and vaporize, also known as shooting stars.
Meteorites are meteoroids that actually land on the Earth's surface. The pieces of the meteor that exploded in Russia are meteorites.
Generally meteoroids are smaller than grains of sand and vaporize on passage through the atmosphere. But there are also larger meteoroids.
Comets and asteroids are left over from when the solar system formed. There used to be more of them, but over time they've collided to form major planets, or they've got booted from the inner solar system to the Oort cloud or have been ejected from the solar system entirely.
3. Why didn't we see the Russian meteor coming?
Only one space rock that impacted the planet has ever been observed before it hit the Earth, Campbell-Brown said.
That's because objects that do hit the Earth tend to be smaller, and it's too hard to see them. The one sighting before impact happened in 2008, a day before a meteor exploded over Sudan.
Current estimates suggest that the Russian meteor was about 15 meters (49 feet) across, which is too small for telescopic surveys.
"Unfortunately the objects of this size have to be very close to Earth for us to be able to see them at all," Campbell-Brown said.
The asteroid that approached Earth today, which NASA has been tracking, is about 45 meters long, which is relatively small for an asteroid.
4. How does this compare to other Earth impacts?
The Earth picks up tons of meteoric debris every day, but big pieces are fairly uncommon, said David Dundee, astronomer at Tellus Science Museum in Cartersville, Georgia.
An object the size of the Russian meteor comes in about once every 50 years, but none has been recorded since 1908, when an asteroid exploded and leveled trees over an area of 820 square miles - about two-thirds the size of Rhode Island - in Tunguska, Russia.
"This is the largest event that we know of that's happened since Tunguska," Campbell-Brown said.
The Tunguska event did not leave a crater. If there are craters as a result of Friday's meteor, they would be very small, resulting from the debris from the midair explosion.
"It's unfortunate that this occurred over a populated area," Campbell-Brown said. Over a desert or ocean, it would have done very little damage.
This is much smaller than the event thought to have wiped out the dinosaur population, she said.
The meteor was moving through space at about 33,000 miles per hour. When it suddenly decelerated above Russia, the energy was converted into heat and sound, which resulted in a shock wave of energy and a sonic boom, Dundee said.
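As a rough order-of-magnitude check (not from NASA's analysis), the kinetic energy implied by the quoted size and speed can be estimated with an assumed density for a stony meteoroid. The density and diameter are guesses, so the result is only good to a factor of a few, but it comes out within a factor of about two of the 300-kiloton figure quoted earlier.

import math

diameter = 15.0          # m, size quoted above (an estimate)
rho = 3300.0             # kg/m^3, assumed density of a stony meteoroid
v = 33000 * 0.44704      # m/s (33,000 mph)

mass = rho * (4 / 3) * math.pi * (diameter / 2) ** 3   # about 5.8e6 kg
energy = 0.5 * mass * v ** 2                           # about 6e14 J
kilotons = energy / 4.184e12                           # about 150 kilotons of TNT

print(f"mass ~ {mass:.2e} kg, energy ~ {kilotons:.0f} kilotons of TNT")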
About three years ago, a woman in Cartersville, Georgia, discovered a baseball-sized meteorite in her home, which had flown straight through the roof. It is now at the Tellus Museum, Dundee said.
5. Why shouldn't you touch a meteorite?
As a meteor comes through the atmosphere, it gets very hot, but this thin hot layer quickly cools off. When you find it on the ground, a meteorite is generally acclimated to ambient temperature.
"We advise people not to touch things with their hands because we like to look for trace elements in the meteorites, and if you touch it in your hand, you've contaminated it," Campbell-Brown said.
Meteorites are probably not more radioactive than Earth rocks, and the minerals inside aren't toxic, she said. The biggest reason to not touch them is to preserve the scientific status.
Copyright 2013 by CNN NewSource. All rights reserved. This material may not be published, broadcast, rewritten or redistributed. | <urn:uuid:d6b23ced-0252-47b6-ac35-b7ca24e86aa8> | 3.46875 | 1,139 | Content Listing | Science & Tech. | 53.831926 | 733 |
GNU is a project to create a GPL-licensed operating system. It was started by Richard Stallman, the creator of EMACS. The GNU project is now overseen by the Free Software Foundation, which Richard Stallman founded.
GNU is a recursive acronym, and it stands for GNU's Not Unix. This, no doubt, is because of Richard Stallman's grounding in Lisp.
The GNU Project has created a great many software packages, including gcc, gdb, sed, glibc, make, awk, find, and a good deal more besides. These packages make up a major portion of every Linux distribution.
The development of the GNU operating system, the Hurd, continues. | <urn:uuid:0a2c6d37-67ce-4143-9f6b-60fe5ca7a85d> | 2.984375 | 140 | Knowledge Article | Software Dev. | 58.242236 | 734 |
From Physics Research Archive - Page 4
The Physics Classroom: Total Internal Reflection - Sep 16, 2010
The optical fiber in the photo above doesn't just guide the beam--the fiber produces the beam. Instead of a tube of helium and neon gas, or a piece of ruby, the "active medium" of this laser is added to the glass in the fiber. Since the mirrors are just the polished ends of the fiber, there is nothing to go out of alignment, and maintenance is easy.
Network Theory: A Key to Unraveling How Nature Works - Sep 1, 2010
You are looking at a network diagram that shows the interconnectedness of the world economy. To learn more about this network, visit Mapping the World Economy.
Making a supersonic jet in your kitchen - Aug 16, 2010
What exactly happens when an object makes a splash in water? The disk shown above was pulled into water in a reproducible way to investigate the splash.
The Real Sea Monsters: On the Hunt for Rogue Waves - Aug 1, 2010
This "rogue wave" broke over the deck of an oil tanker, and was much taller than the other waves on the ocean at the time. See Freak Waves, Rogue Waves for graphs of rogue waves building up in the ocean, and for the measurement of one that struck an oil platform in the North Sea.
From Soap Bubbles to Technology - Jul 16, 2010
The soap film you see here, made in between two metal rings, is called a catenoid, and it uses the minimum area to enclose a given volume. Click on the image to see another example of a "minimal surface" soap film.
About Dust - Jul 1, 2010
This satellite image shows a recent dust storm in China that was so large it spread out to neighboring countries. For more on this storm, see this Time magazine article and also About Dust.
Shock Diamonds and Mach Disks - Jun 16, 2010
When the speed of the gases in a jet or rocket exhaust exceeds the speed of sound, a dazzling pattern results called "shock diamonds" or "Mach disks," as shown in this photo of the SR-71 Blackbird. The diamonds are created by crisscrossing shock waves in the exhaust.
image credit: NASA, ESA, H. Bond (STScI), R. Ciardullo (Penn State), and the Hubble Heritage Team (AURA/STScI); image source; larger image
Stellar Evolution - Jun 1, 2010
When the Sun reaches the end of its life, its outer layers will drift into space, an intricate cloud illuminated by its hot, dense core, as in this false-color image of a planetary nebula and white dwarf. For more details, see this page on the death of solar-mass stars.
Perspectives on Plasmas - May 16, 2010
This is a ball of plasma, created by discharging electricity into a solution. See the image source for more on how the image was made.
Properties of Volcanic Ash - May 1, 2010
Why were so many European airports closed due to the volcano? The image above of one volcanic ash particle begins to tell us why: the extremely small particles, with their many voids, can travel great distances after eruption. Once inside a jet engine, they melt and then re-solidify. Read Properties of Volcanic Ash for more details. You can learn about the specific dangers of flying through volcanic ash here. | <urn:uuid:61a1c128-6043-43d5-acf4-3f35e7ec4ab0> | 2.703125 | 722 | Content Listing | Science & Tech. | 55.684473 | 735 |
Q A conducting rod of length l moves on two horizontal frictionless rails, as in the figure below. A constant force of magnitude 1.00 N moves the bar at a uniform speed of 2.00 m/s through a magnetic field vector B that is directed into the page. (a)What is the current in an 8 ohm resistor. (b)What is the rate of energy dissipation in the resistor? (c)What is the mechanical power delivered by the constant force? | <urn:uuid:f182ceb6-b0c4-4d01-9109-ac52c1427299> | 3.390625 | 100 | Q&A Forum | Science & Tech. | 76.298659 | 736 |
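One way to think it through (a sketch, not a definitive solution): because the bar moves at constant speed, the mechanical power put in by the force must equal the electrical power dissipated in the resistor, so parts (b) and (c) are both F·v, and part (a) follows from P = I²R. The values of B and the rod length l are not needed.

F = 1.00   # N, applied force
v = 2.00   # m/s, constant speed
R = 8.0    # ohms

P_mech = F * v           # (c) mechanical power delivered: 2.0 W
P_R = P_mech             # (b) power dissipated in the resistor at constant speed: 2.0 W
I = (P_R / R) ** 0.5     # (a) from P = I^2 * R: 0.5 A

print(f"I = {I:.2f} A, P_R = {P_R:.1f} W, P_mech = {P_mech:.1f} W")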
The National Oceanic and Atmospheric Administration's (NOAA) most recent State of the Climate Report, released on September 8, claims that the summer of 2010 was one of the warmest summers on record for the United States.

NOAA has been conducting the State of the Climate Report since 1895, taking factors into account such as storm patterns, precipitation and temperature. Results are compiled at NOAA's National Climatic Data Center in Asheville, North Carolina. Of the lower 48 states, only seven had normal temperatures through the months of June, July and August. Ten were classified as "above normal," 29 were "much above normal," and two were below normal.

During the summer of 2010, 10 states experienced their warmest summer ever. These states were Alabama, Georgia, South Carolina, Tennessee, North Carolina, Virginia, Maryland, Delaware, New Jersey and Rhode Island. The Southeast had their warmest summer ever while the Northeast had their fourth warmest and the Central states had their third warmest. The above-normal warmth occurred mostly on the eastern side of the country, setting temperature records in cities like Asheville, NC, Tallahassee, FL, Wilmington, DE, Trenton, NJ, Philadelphia and New York.

Precipitation trends were off as well. For the first five months of the year, the Upper Midwest received no rainfall. When the summer months hit, heavy rainfall swarmed the area. States like Minnesota, South Dakota, Nebraska, Illinois, Iowa and Michigan had one of their top-10 wettest summers this year, while Wisconsin experienced their wettest yet with 6.91 inches of rainfall above average. On the other hand, the Mid-Atlantic and Southeast experienced below average levels of precipitation due to a lack of tropical weather activity and a high pressure system.

As far as weather goes, Minnesota is set to break its record of 74 tornadoes from 2001, while wildfires have settled down in the Western states due to milder weather.

The State of the Climate Report for August can be seen here. | <urn:uuid:b37d3b04-d136-4010-ad58-68671b2cc3fc> | 3.296875 | 424 | News Article | Science & Tech. | 39.223957 | 737 |
In this two-week experiment, you will learn how to use an ion-exchange column and how to carry out an acid-base titration using an indicator. You will then apply these skills to determine the total concentration of cations in a sample of seawater. This week you will concentrate on understanding the chemistry of ion exchange and estimate the capacity of an ion exchange column from your observations.
1. Which ions are attached to the resin at the start of a given experiment?
2. Which ions are becoming attached to the resin and which are coming off, during each experiment?
3. How many equivalents of H+ will be replaced by the charging solution?
4. What will happen when the column reaches its capacity?
Trustees of Dartmouth College, Copyright 1997-2003 | <urn:uuid:fe8bd937-b24a-4a54-bb26-43a48a944a00> | 3.453125 | 178 | Tutorial | Science & Tech. | 52.824545 | 738 |
Zombie Fungus Rears Its Ugly Head
Photograph courtesy David Hughes
A stalk of the newfound fungus species Ophiocordyceps camponoti-balzani, grows out of a "zombie" ant's head in a Brazilian rain forest.
Originally thought to be a single species, called Ophiocordyceps unilateralis, the fungus is actually four distinct species—all of which can "mind control" ants—scientists announced Wednesday.
The fungus species can infect an ant, take over its brain, and then kill the insect once it moves to a location ideal for the fungi to grow and spread their spores.
(Related pictures: "'Zombie' Ants Controlled, Decapitated by Flies.")
All four known fungi species live in Brazil's Atlantic rain forest, which is rapidly changing due to climate change and deforestation, said study leader David Hughes, an entomologist at Penn State University.
Hughes and colleagues made the discovery after noticing a wide diversity of fungal growths emerging from ant victims, according to the March 2, 2011 study in the journal PLoS ONE.
"It is tempting to speculate that each species of fungus has its own ant species that it is best adapted to attack," Hughes said.
"This potentially means thousands of zombie fungi in tropical forests across the globe await discovery," he said. "We need to ramp up sampling—especially given the perilous state of the environment." | <urn:uuid:b8333d3b-fa01-4ee8-bea4-92597bdaaf27> | 3.375 | 299 | News Article | Science & Tech. | 34.781555 | 739 |
Graph the Asymptote of a Tangent Function
An asymptote is a line that helps give direction to a graph of a trigonometry function. This line isn’t part of the function’s graph; rather, it helps determine the shape of the curve by showing where the curve tends toward being a straight line — somewhere out there. Asymptotes are usually indicated with dashed lines to distinguish them from the actual function.
The asymptotes for the graph of the tangent function are vertical lines that occur regularly, each of them π, or 180 degrees, apart. They separate each piece of the tangent curve, or each complete cycle from the next.
The equations of the tangent's asymptotes are all of the form

x = (2n + 1)π/2

where n is an integer. Under that stipulation for n, the expression 2n + 1 always results in an odd number. By replacing n with various integers, you get lines such as x = -3π/2, x = -π/2, x = π/2, and x = 3π/2.

The reason that asymptotes always occur at these odd multiples of π/2 is that those points are where the cosine function is equal to 0 (and the tangent, being sine divided by cosine, is undefined there). As such, the domain of the tangent function includes all real numbers except the numbers that occur at these asymptotes.
The preceding figure shows what the asymptotes look like when graphed alone.
The first figure isn’t all that exciting, but it does show how many times the tangent function repeats its pattern. Now take a look at the second preceding figure, which shows one cycle of the tangent function on a graph. The tangent values go infinitely high as the angle measure approaches 90 degrees. The values go infinitely low as the angle measure approaches –90 degrees.
In the third figure, there is more of the tangent on a graph, asymptotes included, to give you a better idea of what’s going on.
As you can see, the tangent function repeats its values over and over. The main difference between this function and the sine and cosine functions is that the tangent has all these breaks between the cycles. As you move from left to right, the tangent appears to go up to positive infinity. It actually disappears at the top of the graph and then picks up again at the bottom, where the values come from negative infinity. Graphing calculators and other graphing utilities don’t usually show the graph disappearing at the top, so it’s up to you to know what’s actually happening, even though the picture may not look exactly that way.
One of the peculiarities of graphing calculators is that they try to connect the tangent function to make it continuous across the screen. For this reason, you’ll usually see some lines between the different parts of the curve. In a way, these lines are errors — they aren’t the asymptotes, although you may be tempted to think they are. The only way to get rid of those extra lines is to turn your calculator to the dot mode (as opposed to the connected mode). Most calculators have ways to set the settings (or mode) for things such as degrees and radians, dotted graphs and connected graphs, floating decimals and fixed decimals, and so on. The changes are usually easy to do — just see your calculator’s manual for specific instructions. The hard part is remembering what setting you’re in. | <urn:uuid:8dec08bf-06f6-42d9-b382-8778f2fa5ff3> | 4.1875 | 715 | Tutorial | Science & Tech. | 51.931447 | 740 |
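If you have Python handy, the same "dot mode" trick can be reproduced with matplotlib. The sketch below is not part of the original lesson; it simply replaces the huge jump values with NaN so the plotting library does not draw false vertical lines across the asymptotes.

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-3 * np.pi / 2, 3 * np.pi / 2, 2000)
y = np.tan(x)

# Mask points where the curve jumps between huge negative and huge positive
# values; inserting NaNs there tells matplotlib not to connect across the gap.
y[np.abs(y) > 10] = np.nan

plt.plot(x, y, label="y = tan(x)")

# Draw the asymptotes x = (2n + 1) * pi / 2 as dashed vertical lines.
for n in range(-2, 2):
    plt.axvline((2 * n + 1) * np.pi / 2, linestyle="--", color="gray")

plt.ylim(-10, 10)
plt.legend()
plt.show()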
Rhenium is a rare, silvery-white metallic element. Its atomic number is 75 and its symbol is Re. Rhenium was discovered in 1925 by a team of German scientists named Walter Noddack, Ida Tacke-Noddack, and Otto Berg. They discovered rhenium as a trace element in platinum ores and the mineral columbite. It is very dense. It has a melting temperature of 3,186 degrees Celsius (5,767 degrees Fahrenheit). It is not known to have any health benefit for animals or plants. Rhenium does not form minerals of its own, but it does occur as a trace element in columbite, tantalite and molybdenite. These minerals are the principal sources of columbium (commonly called niobium), tantalum and molybdenum metals.
Rhenium is a very rare element that is produced principally as a by-product of the processing of porphry copper-molybdenum ores. Because it is scarce, very little rhenium is actually processed and isolated each year as compared to the millions of tons of copper and millions of pounds of molybdenum that are extracted from these same porphry copper deposits. As a result, the processing of rhenium poses no environmental threat. The equipment that reduces sulfur dioxide in these processing plants also removes any rhenium that may escape through the smokestacks.
|Phase at Room Temp.||solid|
|Melting Point (K)||3453.2|
|Boiling Point (K)||5923|
|Heat of Fusion (kJ/mol)||33.054|
|Heat of Vaporization (kJ/mol)||707|
|Heat of Atomization (kJ/mol)||770|
|Thermal Conductivity (J/m sec K)||48|
|Electrical Conductivity (1/mohm cm)||51.813|
|Number of Isotopes||45 (2 natural)|
|Electron Affinity (kJ/mol)||14|
|First Ionization Energy (kJ/mol)||760|
|Second Ionization Energy (kJ/mol)||---|
|Third Ionization Energy (kJ/mol)||---|
|Atomic Volume (cm3/mol)||8.9|
|Ionic Radius2- (pm)||---|
|Ionic Radius1- (pm)||---|
|Atomic Radius (pm)||137|
|Ionic Radius1+ (pm)||---|
|Ionic Radius2+ (pm)||---|
|Ionic Radius3+ (pm)||---|
|Common Oxidation Numbers||+4|
|Other Oxid. Numbers||-3, -1, +1, +2, +3, +5, +6, +7|
|In Earth's Crust (mg/kg)||7.0x10-4|
|In Earth's Ocean (mg/L)||4.0x10-6|
|In Human Body (%)||0%|
|Regulatory / Health|
|OSHA Permissible Exposure Limit (PEL)||No limits|
|OSHA PEL Vacated 1989||No limits|
|NIOSH Recommended Exposure Limit (REL)||No limits|
University of Wisconsin General Chemistry
Mineral Information Institute
Jefferson Accelerator Laboratory
Rhenium was named after Rhenus, the Latin name for the Rhine River.
Rhenium is obtained almost exclusively as a by-product of the processing of a special type of copper deposit known as a porphyry copper deposit. Specifically, it is obtained from the processing of the mineral molybdenite (a molybdenum ore) that is found in porphyry copper deposits. A porphyry copper deposit is a valuable copper-rich deposit in which copper minerals occur throughout the rock. The copper in these deposits occurs as primary chalcopyrite (CuFeS2) or the important secondary copper mineral chalcocite (Cu2S).
The identified rhenium resources in the United States are estimated to total 5 million kilograms. These resources are found in the southwestern United States. The identified rhenium resources in the rest of the world are estimated to total 6 million kilograms. Countries producing rhenium include Armenia, Canada, Chile, Kazakhstan, Mexico, Peru, Russia, and Uzbekistan. Even though the United States has significant rhenium resources, the majority of the rhenium consumed in the U.S. is imported. Chile and Kazakhstan provide the majority of the imported rhenium. The rest is imported from Mexico and other nations.
Because of its very high melting point, rhenium is used to make high temperature alloys (an alloy is a mixture of metals) that are used in jet engine parts. It is also used to make strong alloys of nickel-based metals. Rhenium alloys are used to make a variety of equipment and equipment parts, such as temperature controls, heating elements, mass spectrographs, electrical contacts, electromagnets, and semiconductors. An alloy of rhenium and molybdenum is a superconductor of electricity at very low temperatures. These superalloys account for the majority of the rhenium use each year.
Rhenium is also used in the petroleum industry to make lead-free gasoline. In this application, rhenium compounds act as catalysts. (A catalyst is a chemical compound that takes part in a chemical reaction, and can often make the reaction proceed more quickly, but the chemical is not consumed in the chemical reaction.)
Substitutes and Alternative Sources
Substitutes for rhenium as a catalyst are being researched. Iridium and tin have been found to be a good catalyst for at least one reaction. Cobalt, tungsten, platinum and tantalum can be used in some of the other applications for rhenium.
- Common Minerals and Their Uses, Mineral Information Institute.
- More than 170 Mineral Photographs, Mineral Information Institute.
Disclaimer: This article is taken wholly from, or contains information that was originally published by, the Mineral Information Institute. Topic editors and authors for the Encyclopedia of Earth may have edited its content or added new information. The use of information from the Mineral Information Institute should not be construed as support for or endorsement by that organization for any new information added by EoE personnel, or for any editing of the original content. | <urn:uuid:c664e83e-de9f-4793-8ec6-412b4f0d26e6> | 4.21875 | 1,404 | Knowledge Article | Science & Tech. | 37.044087 | 741 |
Earth Observing Laboratory (EOL) Field Deployments
EOL supports the observing needs of research programs in the following categories:
- Climate Science
- Atmospheric Chemistry
- Atmospheric Physics
- Atmospheric Dynamics
Earth science, or geoscience, is an all encompassing term used for the sciences that relate to the Earth's processes; atmospheric, geological, geophysical, glacial, and oceanic. The atmosphere is one component of many that make up Earth's intricate system. Atmospheric science is a broad discipline, within which there are several more specific areas of study (ie - climate, ocean/air interactions, atmospheric chemistry, societal impacts).
The National Center for Atmospheric Research (NCAR) is charged with, among other things, providing the atmospheric science community with expertise, oversight and observing systems to carry out research projects on the field.
Field Project Categories:
Field projects are designed to develop a more complete understanding of the complex interactions between the Earth’s atmosphere, oceans, land, ice masses, and biosphere. The impact of human activities on the Earth’s physical, chemical, and biological processes is a major focus of our national center. The Earth Observing Laboratory (EOL) is tasked primarily with developing technologically advanced instrumentation and data acquisition systems, and overseeing scientific field campaigns that enable the collection of data for innovative scientific research. EOL field projects contribute directly to NCAR’s goal of improving society's understanding of the atmosphere and Earth's systems, specifically by investigating atmospheric processes and examining interactions between the atmosphere and other environmental components.
EOL's field projects can be categorized into 5 areas of study: severe weather, climate processes, atmospheric chemistry, atmospheric cycles, and ocean systems. Many of the projects can be included in multiple categories, due to the nature of the project. For example, the main goal of a study may be to examine severe weather, which is likely a subset of a natural atmospheric cycle or climate process, and could be exacerbated by air pollution. | <urn:uuid:5825321d-ea34-4af4-9788-29b027baba9f> | 3.109375 | 431 | About (Org.) | Science & Tech. | 5.048014 | 742 |
This module contains the interface to the heart process. heart sends periodic heartbeats to an external port program, which is also named heart. The purpose of the heart port program is to check that the Erlang runtime system it is supervising is still running. If the port program has not received any heartbeats within HEART_BEAT_TIMEOUT seconds (default is 60 seconds), the system can be rebooted. Also, if the system is equipped with a hardware watchdog timer and is running Solaris, the watchdog can be used to supervise the entire system.
An Erlang runtime system to be monitored by a heart program should be started with the command line flag -heart (see also erl(1)). The heart process is then started automatically:
% erl -heart ...
If the system should be rebooted because of missing heart-beats, or a terminated Erlang runtime system, the environment variable HEART_COMMAND has to be set before the system is started. If this variable is not set, a warning text will be printed but the system will not reboot. However, if the hardware watchdog is used, it will trigger a reboot HEART_BEAT_BOOT_DELAY seconds later nevertheless (default is 60).
To reboot on the WINDOWS platform HEART_COMMAND can be set to heart -shutdown (included in the Erlang delivery) or of course to any other suitable program which can activate a reboot.
The hardware watchdog will not be started under Solaris if the environment variable HW_WD_DISABLE is set.
The HEART_BEAT_TIMEOUT and HEART_BEAT_BOOT_DELAY environment variables can be used to configure the heart timeouts, they can be set in the operating system shell before Erlang is started or be specified at the command line:
% erl -heart -env HEART_BEAT_TIMEOUT 30 ...
The value (in seconds) must be in the range 10 < X <= 65535.
It should be noted that if the system clock is adjusted with more than HEART_BEAT_TIMEOUT seconds, heart will timeout and try to reboot the system. This can happen, for example, if the system clock is adjusted automatically by use of NTP (Network Time Protocol).
In the following descriptions, all functions fail with reason badarg if heart is not started.
Sets a temporary reboot command. This command is used if a HEART_COMMAND other than the one specified with the environment variable should be used in order to reboot the system. The new Erlang runtime system will (if it misbehaves) use the environment variable HEART_COMMAND to reboot.
Limitations: The length of the Cmd command string must be less than 2047 characters.
Clears the temporary boot command. If the system terminates, the normal HEART_COMMAND is used to reboot.
Get the temporary reboot command. If the command is cleared, the empty string will be returned. | <urn:uuid:79083817-8ee2-43d4-a20c-07d11ac9c2c3> | 2.71875 | 635 | Documentation | Software Dev. | 50.791185 | 743 |
Interrupt the execution of an expression and allow the inspection of the environment where browser was called from.
browser(text = "", condition = NULL, expr = TRUE, skipCalls = 0L)
- text: a text string that can be retrieved once the browser is invoked.
- condition: a condition that can be retrieved once the browser is invoked.
- expr: an expression, which if it evaluates to TRUE the debugger will be invoked; otherwise control is returned directly.
- skipCalls: how many previous calls to skip when reporting the calling context.
A call to browser can be included in the body of a function. When reached, this causes a pause in the execution of the current expression and allows access to the R interpreter.
The purpose of the text and condition arguments is to allow helper programs (e.g. external debuggers) to insert specific values here, so that the specific call to browser (perhaps its location in a source file) can be identified and special processing can be achieved. The values can be retrieved by calling browserText and browserCondition.
The purpose of the expr argument is to allow for the illusion of conditional debugging. It is an illusion, because execution is always paused at the call to browser, but control is only passed to the evaluator described below if expr evaluates to TRUE. In most cases it is going to be more efficient to use an if statement in the calling program, but in some cases using this argument will be simpler.
The skipCalls argument should be used when the browser() call is nested within another debugging function: it will look further up the call stack to report its location.
At the browser prompt the user can enter commands or R expressions, followed by a newline. The commands are
- c: (or just an empty line, by default) exit the browser and continue execution at the next statement.
- cont: synonym for c.
- n: enter the step-through debugger if the function is interpreted. This changes the meaning of c: see the documentation for debug. For byte-compiled functions n is equivalent to c.
- where: print a stack trace of all active function calls.
- Q: exit the browser and the current evaluation and return to the top-level prompt.
(Leading and trailing whitespace is ignored, except for an empty line).
Anything else entered at the browser prompt is interpreted as an R expression to be evaluated in the calling environment: in particular typing an object name will cause the object to be printed, and ls() lists the objects in the calling frame. (If you want to look at an object with a name such as n, print it explicitly.)
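As a small illustrative sketch (not part of the reproduced documentation; the function and values are invented), the expr argument can be used for conditional pausing:

# Pause only when a negative value is encountered.
f <- function(x) {
  total <- 0
  for (i in seq_along(x)) {
    browser(text = "negative input", expr = x[i] < 0)
    total <- total + x[i]
  }
  total
}

f(c(1, 2, -3))   # drops to the Browse[1]> prompt on the third element

Entering c at the prompt then continues execution as described above.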
The number of lines printed for the deparsed call can be limited by setting options(deparse.max.lines). Setting option browserNLdisabled to TRUE disables the use of an empty line as a synonym for c. If this is done, the user will be re-prompted for input until a valid command or an expression is entered.
This is a primitive function but does argument matching in the standard way.
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) The New S Language. Wadsworth & Brooks/Cole.
Chambers, J. M. (1998) Programming with Data. A Guide to the S Language. Springer.
Documentation reproduced from R 2.15.3. License: GPL-2. | <urn:uuid:a0f6a31d-26d9-465a-85af-fcef7ca88935> | 3.84375 | 690 | Documentation | Software Dev. | 58.462505 | 744 |
Engineering and Environmental Challenges: Technical Symposium on Earth Systems Engineering
The term Lupang Pangako means promised land—the sardonic name given to a garbage dump outside the city of Manila inhabited by almost 100,000 people. I visited Lupang Pangako about 15 years ago in a different life as a geologist, and the place really is hell on earth. As you drive through the Promised Land, you see stygian mists rising from the hillsides, the mountains of garbage, and if you look closely you see movement everywhere in the distance. You soon realize that the mountains are covered with people scavenging for their livelihoods.
You may remember that in July 2000 torrential typhoon rains caused a huge landslide in the Promised Land that buried more than 200 people under a mountain of garbage. To me, this horrific event provides a powerful indicator of how we should be thinking about the impacts of climate on people and about human adaptation. The problem was not whether the typhoon was an above-average or below-average event. It was not a problem whose root causes could be revealed through a better understanding of anthropogenic climate change. The problem was that 100,000 people were living in poverty so deep that they could survive only by culling garbage.
The results of humanity’s mistreatment of the environment fall disproportionately on poor people, on developing countries, and on tropical regions. Although these impacts are most severe in their chronic forms, they are most spectacular in their catastrophic versions, such as this landslide. As Figure 1 shows, the number of disasters has risen sharply throughout the world in the last 30 years, most markedly in the developing world. This trend does not reflect a changing climate; it reflects changing demographics—growing numbers of poor people living in urban areas, living in coastal regions, living on garbage dumps. Unlike changes in climate, this trend is something we can control. These are not natural disasters; these are intersections of natural phenomena and complex sociopolitical and socioeconomic processes.
The number of disasters will continue to rise because we know that demographic trends are pointing toward more urbanization and greater numbers of impoverished people moving from agrarian areas to cities—often to areas in harm’s way. Megacities like Jakarta and Manila that have nearly 10 million people apiece are subject to typhoons, volcanoes, earthquakes, landslides, epidemics, and floods, for example. Because generating more knowledge on climate dynamics cannot help us in the short term, it is worth talking not just about the behavior of the climate and our capacity to modify it by reducing greenhouse gas emissions, but also about the interactions of social systems with climate and the engineered systems that sustain human beings. These systems are not sensitive to emissions of carbon dioxide but are very sensitive to demographic and socioeconomic trends. We have much less control over the future behavior of the climate than we do over the behavior of human beings.
Given the complexity of these interdependent systems, the practical challenge is to learn to operate in ways that minimize our impact on the planet and maximize our resilience in the face of unpredictable events and the ever-changing | <urn:uuid:7084b5bd-239a-410e-b73d-f1d0e7c57a3d> | 3.453125 | 672 | Truncated | Science & Tech. | 26.027676 | 745 |
The Plasma Spray – Physical Vapor Deposition (PS-PVD) rig at NASA's Glenn Research Center uses new technology to create super thin ceramic coatings. Here, Bryan Harder, the lead for the PS-PVD, installs a sample in the rig. Image Credit: NASA
Turbines, or rotary engines that create power, have a multitude of uses. They are used in machines that perform work on Earth and are essential components of airplanes. Currently, most turbines are built using metallic based components, and these metal components require cooling to avoid reaching their thermal limits. New, more efficient engine technology requires components that can survive higher temperatures and reduced cooling.
Silicon based ceramic components show great potential for use in advanced, higher efficiency engines, as they are capable of withstanding higher temperatures and weigh less than metal components. However, when unprotected, these silicon based ceramic components react and erode in turbine engine environments due to the presence of water vapor.
New coating processing technology is being pioneered at NASA Glenn's Research Center in Cleveland. The technology is used to protect advanced silicon based ceramic engine components that are being developed for future engines. This coating processing technology will enable more complex and thinner coatings than are currently possible. This is important for coating turbine blades, which need to endure engine environments and stress conditions, while still remaining smooth to avoid the disruption of airflow. This coating processing technology, called Plasma Spray – Physical Vapor Deposition (PS-PVD), has the potential to radically improve the capabilities of ceramic composite turbine components.
"PS-PVD technology is really necessary for the integration of silicon-based ceramic airfoil components into turbine engines. The use of these silicon-based ceramics as engine airfoil components would increase engine operation temperature, which translates into higher efficiencies," says Bryan Harder, the lead for the PS-PVD Facility at Glenn.
Plasma Spray – Physical Vapor Deposition
The PS-PVD rig uses a system of vacuum pumps and a blower to remove air from the chamber, reducing the pressure to one Torr (1/760th of normal atmospheric pressure). Image Credit: NASA
It has been known for decades that enveloping metals and other substances, such as silicon based ceramic components, with a ceramic coating can protect them. But there is new, cutting-edge technology that can create ceramic coatings in an extremely precise, uniform fashion—the coatings can be controlled to a thickness of ten microns (a micron is one-millionth of a meter). This technology is made possible by Glenn's Plasma Spray – Physical Vapor Deposition (PS-PVD) Facility.
The Plasma Spray – Physical Vapor Deposition (PS-PVD) Coater was completed at Glenn in 2010. Created in collaboration with Sulzer Metco, the PS-PVD rig is one of only two such facilities in the U.S.A. and one of four in the entire world. The PS-PVD rig, which is currently a research and development facility, uses a state of the art processing method of creating thin ceramic coatings. Planning began for the facility in 2007, and construction began in 2008 (previously constructed infrastructure was reused and is now the base for the new rig).
The rig is nearing completion of its capabilities testing and assessment phase. A team of five, led by Bryan Harder, a materials research engineer, has put the rig through its paces. The rig will soon begin supporting the Supersonic Project within NASA's Aeronautics Research Mission Directorate at Glenn. Eventually, the rig could be of service to many other areas and projects within Glenn, other NASA centers and governmental entities, and private industry partners.
"When you have something that has broad capabilities like this, it really allows us to work with a lot of different areas, which is a great thing," says Bryan Harder.
Super Thin Ceramic Coatings
Ceramic powder is pumped into the PS-PVD rig. It will be transformed inside the chamber to become a thin, precise, accurate ceramic coating. Image Credit: NASA
The Plasma Spray-Physical Vapor Deposition (PS-PVD) rig creates thin, extremely precise ceramic coatings. These coatings are created on metal, ceramic, or other appropriate materials.
"To create these coatings, ceramic powder is injected into a very high power plasma flame under a vacuum. During operation, the plasma is approximately 7 feet long and 3 feet wide. The ceramic material is vaporized within the plasma, and condenses onto the target component," says Bryan Harder.
The coatings can be single or multilayer, and they protect the components from environmental and thermal impact. The extremely high heat and the vacuum within the chamber allow the ceramic coating to be precisely applied, creating durable, long-lasting, effective coatings.
"If you can reduce the thickness, and still provide an effective barrier layer — you can reduce the weight, you can reduce your cost. There are a lot of benefits that come from this technology," Harder says.
Inside the Chamber
Within the PS-PVD, an extremely hot plasma flame is created. The plasma can reach a temperature of 10,000 degrees Celsius—ten times hotter than a candle flame. Image Credit: NASA
Located at Glenn, the Plasma Spray – Physical Vapor Deposition (PS-PVD) is installed in a dedicated room. A large, blimp-shaped chamber is made of stainless steel. The exterior metal, which is welded to a second sheet of stainless steel beneath, has cool water pumped through it to keep the chamber from getting too warm.
Inside the chamber is a steel arm which holds a plate made of a nickel-based superalloy. This plate holds the component that will be coated. Several feet away from this plate is the torch, where the ceramic powder is injected into the plasma. Once the chamber is closed, a system of vacuum pumps and a blower remove air from the chamber, reducing the pressure to one Torr (1/760th of normal atmospheric pressure). Then, helium and argon gases are introduced to the torch. An arc is created between the anode and cathode inside the chamber, ionizing the gases and creating the high temperature plasma.
The plasma, which can grow to seven feet in length, can be observed through one of three portals on the side of the rig. Its steady, fierce, concentrated glow resembles a Lightsaber from the Star Wars movies. Once the vacuum and plasma are stable, the ceramic powder is introduced to the torch. The plasma immediately begins to change colors. Depending on which ceramic powder is introduced, the plasma dramatically erupts into oranges, yellows, aquas, purples and blues.
The gas stream moves at a speed of Mach 2 — a rate of more than 2,000 feet per second. As the ceramic powder and the plasma blast the arm and plate where the component being coated is attached, the plasma appears to envelop the component and splash around it. The plasma, which appeared like a Lightsaber, seems to morph into the effect of the undulating stream of magic that occurs when Harry Potter's wand meets with Lord Voldemort's wand, in the Harry Potter movies.
Inside the PS-PVD, ceramic powder is introduced into the plasma flame. The plasma vaporizes the ceramic powder, which then condenses to form the ceramic coating. Image Credit: NASA
The entire process is over in about five minutes. The plasma is extinguished and the exhaust system clears the chamber. The pressure is returned to normal atmospheric conditions, and then the chamber can be opened. The newly-coated component glows red hot and must cool down for an hour before it can be handled. The plasma within the chamber can reach a scorching 10,000 degrees Celsius — ten times hotter than a candle flame.
After the sample cools, it will be tested and evaluated to ensure the coating is an effective barrier. And then the sample — be it a small test button or an essential component of a supersonic aircraft — is ready to go. The front, sides and inside of the sample can be coated — a capability never previously available from vapor deposition techniques.
"The PS-PVD allows us to do things that you can't do anywhere else," Harder says.
This newly developed technology could have myriad applications, both within NASA and with potential industry partners. The potential applications are only beginning to be discovered — from membrane technology to fuel cells to ion conductors and beyond.
The rig is a game-changing technology; Glenn is maturing and developing a technology that doesn't exist elsewhere, while making direct contributions to the NASA mission.
"This is new ground," Bryan Harder says. "This was only developed in the last couple of years… and we don't even know the limits of what it [PS-PVD] is capable of."
-Tori Woods, SGT Inc. NASA's Glenn Research Center | <urn:uuid:f1431e61-97ca-4d5d-8075-51f535c02fe1> | 4.15625 | 1,834 | Knowledge Article | Science & Tech. | 41.229169 | 746 |
Scientists have kept a close watch on the dazzling northern lights on Earth and other planets in our solar system, but now they have the chance to explore the auroras of alien planets orbiting distant stars, a new study suggests.
Auroras on Earth occur when charged particles from the sun are funneled to the planet's poles and interact with the upper atmosphere, sparking spectacular light shows. Similar processes have been observed on other planets in the solar system, with Jupiter's auroras more than 100 times brighter than those on Earth, scientists said.
Now, scientists are finding evidence of aurora displays on exoplanets for the first time. Researchers used the Low-Frequency Array radio telescope based in The Netherlands to observe radio emissions most likely caused by powerful auroras from planets outside of our solar system.
"These results strongly suggest that auroras do occur on bodies outside our solar system, and the auroral radio emissions are powerful enough — 100,000 times brighter than Jupiter's — to be detectable across interstellar distances," study lead author Jonathan Nichols of the University of Leicester in England said in a statement.
Jupiter's auroras are caused by an interaction of charged particles shot from its volcanic moon, Io, and the rotation of the planet itself.
The gas giant turns on its axis once every 10 hours, dragging its magnetic field along for the ride, and effectively creating a whirl of electricity at each of the planet's poles.
Auroras akin to Earth's have been spotted on Saturn. But these newest findings show that auroras on exoplanets probably aren't formed from charged particles traveling on the solar wind. Instead, the auroras on the dim, "ultracool dwarf" stars and "failed stars" known as brown dwarfs that Nichols studied probably behave more like Jupiter's northern and southern lights.
By studying these radio emissions, scientists will gain more insight into the strength of a planet's magnetic field, how it interacts with its parent star, whether it has any moons and even the length of its day.
The new research is detailed in a recent issue of The Astrophysical Journal.
© 2013 Space.com. All rights reserved. More from Space.com. | <urn:uuid:95023968-accd-423e-8dc1-c2cdb09db880> | 3.734375 | 542 | News Article | Science & Tech. | 42.561272 | 747 |
initdb creates a new PostgreSQL database cluster (or database system). A database cluster is a collection of databases that are managed by a single server instance.
Creating a database system consists of creating the directories in which the database data will live, generating the shared catalog tables (tables that belong to the whole cluster rather than to any particular database), and creating the template1 database. When you create a new database, everything in the template1 database is copied. It contains catalog tables filled in for things like the built-in types.
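For example (the database name here is illustrative), a database created later with createdb or the SQL CREATE DATABASE command starts out as a copy of template1:

postgres$ createdb mydb    # equivalent to the SQL: CREATE DATABASE mydb TEMPLATE template1;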
initdb initializes the database cluster's default locale and character set encoding. Some locale categories are fixed for the lifetime of the cluster, so it is important to make the right choice when running initdb. Other locale categories can be changed later when the server is started. initdb will write those locale settings into the postgresql.conf configuration file so they are the default, but they can be changed by editing that file. To set the locale that initdb uses, see the description of the --locale option below.
database as it is created. initdb
determines the encoding for the template1 database, which will serve as the
default for all other databases. To alter the default encoding
initdb must be run as the user that will own the server process, because the server needs to have access to the files and directories that initdb creates. Since the server may not be run as root, you must not run initdb as root either. (It will in fact refuse to do so.)
Although initdb will attempt to create the specified data directory, often it won't have permission to do so, since the parent of the desired data directory is often a root-owned directory. To set up an arrangement like this, create an empty data directory as root, then use chown to hand over ownership of that directory to the database user account, then su to become the database user, and finally run initdb as the database user.
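A typical sequence (the paths and the postgres account name are only examples; -D names the data directory) looks like this:

root# mkdir /usr/local/pgsql/data
root# chown postgres /usr/local/pgsql/data
root# su postgres
postgres$ initdb -D /usr/local/pgsql/data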
This option specifies the directory where the database system should be stored. This is the only information required by initdb, but you can avoid writing it by setting the PGDATA environment variable, which can be convenient since the database server (postmaster) can find the database directory later by the same variable.
Selects the encoding of the template database. This will also be the default encoding of any database you create later, unless you override it there. To use the encoding feature, you must have enabled it at build time, at which time you also select the default for this option.
Sets the default locale for the database cluster. If this option is not specified, the locale is inherited from the environment that initdb runs in.
Like --locale, but only sets the locale in the specified category.
Selects the user name of the database superuser. This defaults to the name of the effective user running initdb. It is really not important what the superuser's name is, but one might choose to keep the customary name postgres, even if the operating system user's name is different.
Makes initdb prompt for a password to give the database superuser. If you don't plan on using password authentication, this is not important. Otherwise you won't be able to use password authentication until you have a password set up.
Other, less commonly used, parameters are also available:
Print debugging output from the bootstrap backend and a few other messages of lesser interest for the general public. The bootstrap backend is the program initdb uses to create the catalog tables. This option generates a tremendous amount of extremely boring output.
Specifies where initdb should find its input files to initialize the database system. This is normally not necessary. You will be told if you need to specify their location explicitly.
By default, when initdb determines that an error prevented it from completely creating the database system, it removes any files it may have created before discovering that it can't finish the job. This option inhibits tidying-up and is thus useful for debugging.
Specifies the directory where the database system is to
be stored; may be overridden using the | <urn:uuid:36d169ba-1580-4254-8473-ddb7aa3f534b> | 3.109375 | 866 | Documentation | Software Dev. | 35.48102 | 748 |
by Michael Mann and Gavin Schmidt
On this site we emphasize conclusions that are supported by “peer-reviewed” climate research. That is, research that has been published by one or more scientists in a scholarly scientific journal after review by one or more experts in the scientists’ same field (‘peers’) for accuracy and validity. What is so important about “Peer Review”? As Chris Mooney has lucidly put it:
[Peer Review] is an undisputed cornerstone of modern science. Central to the competitive clash of ideas that moves knowledge forward, peer review enjoys so much renown in the scientific community that studies lacking its imprimatur meet with automatic skepticism. Academic reputations hinge on an ability to get work through peer review and into leading journals; university presses employ peer review to decide which books they’re willing to publish; and federal agencies like the National Institutes of Health use peer review to weigh the merits of applications for federal research grants.
Put simply, peer review is supposed to weed out poor science. However, it is not foolproof — a deeply flawed paper can end up being published under a number of different potential circumstances: (i) the work is submitted to a journal outside the relevant field (e.g. a paper on paleoclimate submitted to a social science journal) where the reviewers are likely to be chosen from a pool of individuals lacking the expertise to properly review the paper, (ii) too few or too unqualified a set of reviewers are chosen by the editor, (iii) the reviewers or editor (or both) have agendas, and overlook flaws that invalidate the paper’s conclusions, and (iv) the journal may process and publish so many papers that individual manuscripts occasionally do not get the editorial attention they deserve.
Thus, while un-peer-reviewed claims should not be given much credence, just because a particular paper has passed through peer review does not absolutely insure that the conclusions are correct or scientifically valid. The “leaks” in the system outlined above unfortunately allow some less-than-ideal work to be published in peer-reviewed journals. This should therefore be a concern when the results of any one particular study are promoted over the conclusions of a larger body of past published work (especially if it is a new study that has not been fully absorbed or assessed by the community). Indeed, this is why scientific assessments such as the Arctic Climate Impact Assessment (ACIA), or the Intergovernmental Panel on Climate Change (IPCC) reports, and the independent reports by the National Academy of Sciences, are so important in giving a balanced overview of the state of knowledge in the scientific research community.
There have been several recent cases of putatively peer-reviewed studies in the scientific literature that produced unjustified or invalid conclusions. Curiously, many of these publications have been accompanied by heavy publicity campaigns, often declaring that this one paper completely refutes the scientific consensus. An excellent account of some of these examples is provided here by Dr. Stephen Schneider (Stanford University).
Perhaps the most publicized recent example was the publication of a study by astronomer Willie Soon of the Harvard University-affiliated Harvard-Smithsonian Center for Astrophysics and co-authors, claiming to demonstrate that 20th century global warmth was not unusual in comparison with conditions during Medieval times. Indeed, this study serves as a prime example of one of the "myths" that we have debunked elsewhere on this site. The study was summarily discredited in articles by teams of climate scientists (including several of the scientists here at RealClimate), in the American Geophysical Union (AGU) journal Eos and in Science. However, it took some time for the rebuttals to work their way through the slow process of scientific peer review. In the meantime the study was quickly seized upon by those seeking to sow doubt about the validity of the scientific consensus concerning the evidence for human-induced climate change (see news articles in the New York Times and the Wall Street Journal). The publication of the study had wider reverberations throughout the academic and scientific institutions connected with it. The association of the study with the "Harvard" name caused some notable unease among members of the Harvard University community (see here and here) and the reputation of the journal publishing the study was seriously tarnished in the process. The editor at Climate Research that handled the Soon et al paper, Dr. Chris de Freitas, has a controversial record of past editorial practices (see this 'sidebar' to an article in Scientific American by science journalist David Appell). In an unprecedented (to our knowledge) act of protest, chief editor Hans von Storch and 3 additional editors subsequently resigned from Climate Research in response to the fundamental documented failures of the editorial process at the journal. A detailed account of these events is provided by Chris Mooney in the Skeptical Inquirer and The American Prospect, by David Appell in Scientific American, and in a news brief in Nature. The journal's publisher himself (Otto Kinne) eventually stated that "[the conclusions drawn] cannot be concluded convincingly from the evidence provided in the paper".
Another journal which (quite oddly) also published the Soon et al study, "Energy and Environment", is not actually a scientific journal at all but a social science journal. The editor, Sonja Boehmer-Christiansen, in defending the publication of the Soon et al study, was quoted by science journalist Richard Monastersky in the Chronicle of Higher Education somewhat remarkably confessing "I'm following my political agenda — a bit, anyway. But isn't that the right of the editor?".
Shaviv and Veizer (2003) published a paper in the journal GSA Today, where the authors claimed to establish a correlation between cosmic ray flux (CRF) and temperature evolution over hundreds of millions of years, concluding that climate sensitivity to carbon dioxide was much smaller than currently accepted. The paper was accompanied by a press release entitled “Global Warming not a Man-made Phenomenon”, in which Shaviv was quoted as stating,“The operative significance of our research is that a significant reduction of the release of greenhouse gases will not significantly lower the global temperature, since only about a third of the warming over the past century should be attributed to man”. However, in the paper the authors actually stated that “our conclusion about the dominance of the CRF over climate variability is valid only on multimillion-year time scales”. Unsurprisingly, there was a public relations offensive using the seriously flawed conclusions expressed in the press release to once again try to cast doubt on the scientific consensus that humans are influencing climate. These claims were subsequently disputed in an article in Eos (Rahmstorf et al, 2004) by an international team of scientists and geologists (including some of us here at RealClimate), who suggested that Shaviv and Veizer’s analyses were based on unreliable and poorly replicated estimates, selective adjustments of the data (shifting the data, in one case by 40 million years) and drew untenable conclusions, particularly with regard to the influence of anthropogenic greenhouse gas concentrations on recent warming (see for example the exchange between the two sets of authors). However, by the time this came out the misleading conclusions had already been publicized widely.
Next, we discuss the first of three so-called “bombshell” papers that supposedly “knock the stuffing out of” the findings of the IPCC. Patrick Michaels and associates billed his own paper (McKitrick and Michaels, 2004) (co-authored by Ross McKitrick ), this way:
After four years of one of the most rigorous peer reviews ever, Canadian Ross McKitrick and another of us (Michaels) published a paper searching for “economic” signals in the temperature record. …The research showed that somewhere around one-half of the warming in the U.N. surface record was explained by economic factors, which can be changes in land use, quality of instrumentation, or upkeep of records.
It strikes us as odd, to say the least, that, after one of the "most rigorous peer reviews ever", nobody involved (neither editor, nor reviewers, nor authors) seems to have caught the egregious basic error that the authors mistakenly used degrees rather than the required radians in calculating the cosine functions used to spatially weight their estimates**. This mistake rendered every calculation in the paper incorrect, and the conclusions invalid — to our knowledge, however, the paper has not yet been retracted. Remarkably, there were still other independent and equally fundamental errors in the paper that would have rendered it entirely invalid anyway. To the journal's credit, they published a criticism of the paper by Benestad (2004) to this effect. It may come as no surprise that McKitrick and Michaels (2004) was published in Climate Research and was handled by none other than Chris de Freitas.
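To make the nature of that error concrete, here is a small, purely illustrative Python sketch (the latitudes and trend values are invented, and this is not the authors' code) showing how feeding degrees instead of radians into a cosine-of-latitude area weighting corrupts a spatial average:

import math

lats = [0.0, 30.0, 60.0]     # hypothetical grid-cell latitudes, in degrees
trends = [1.0, 2.0, 3.0]     # hypothetical temperature trends at those cells

def area_weighted_mean(values, latitudes, degrees_bug=False):
    # Correct weighting uses cos(latitude in radians); the buggy branch
    # passes the latitude in degrees straight to cos().
    weights = [math.cos(lat if degrees_bug else math.radians(lat)) for lat in latitudes]
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

print(area_weighted_mean(trends, lats))                    # about 1.79, a sensible average
print(area_weighted_mean(trends, lats, degrees_bug=True))  # about -7.7: cos of 60 taken as radians is negative

The point is only that the two results differ wildly; every weighted quantity computed with the wrong units is affected.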
The other two “bombshell” papers were published in the AGU journal Geophysical Research Letters (GRL) which publishes over 1500 papers per year. It can be conservatively estimated that they publish no more than 70% of the papers received, and thus probably process over 2000 papers per year. That gives each of the typically 8 or so editors of the journal almost a paper per day to evaluate. While GRL publishes many excellent papers and provides an important forum to the research community for rapid publication of important results, occasionally, poor papers slip through the net. These two papers were authored by Douglass and collaborators (Douglass et al, 2004a;2004b) the first with Fred Singer as a co-author and the second with both Singer and Michaels. Both papers*** argue that recent atmospheric temperatures have been cooling, rather than warming, based on the analysis of data over a selective (1979-1996) time interval that eliminates periods of significant warming both before and after, and using a controversial satellite-derived temperature record whose robustness has been called into question by other teams analysing the data. An excellent discussion of both papers is provided by Tim Lambert.
Another relevant GRL paper was the article by Legates and Davis (1997) which criticized the use of “centered correlations” common to numerous “Detection and Attribution” studies supporting the detection of human influence on recent climate change. They argued that correlations could increase while observed and simulated global means diverge. However, as pointed out in the chapter on Detection and Attribution in IPCC (2001)*, centered correlations were introduced for precisely this reason: to provide an indicator that was statistically independent of global mean temperature changes. As noted by the IPCC, “if both global mean changes and centered pattern correlations point towards the same explanation of observed temperature changes, it provides more compelling evidence than either of these indicators in isolation”. Again, a basic logical flaw in the authors’ criticism of past work was not caught in peer review.
Next, we consider the paper by Soon et al (2004) published in GRL which criticized the way temperature data series had been smoothed in the IPCC report and elsewhere. True to form, contrarians immediately sold the results as ‘invalidating’ the conclusions of the IPCC, with the lead author Willie Soon himself writing an opinion piece to this effect. Once again, a few short months later, a followup article was published by one of us (Mann, 2004) that invalidated the Soon et al (2004) conclusions, demonstrating (with links to supporting Matlab source codes and data) how (a) the authors had, in an undisclosed manner, inappropriately compared trends calculated over differing time intervals and (b) had not used standard, objective statistical criteria to determine how data series should be treated near the beginning and end of the data. It is unfortunate that a followup paper even had to be published, as the flaws in the original study were so severe as to have rendered the study of essentially no scientific value.
There are other examples of studies that have even been published in high quality venues that were heavily publicized at the time, but in retrospect were flawed (though not as egregiously as the examples above). For instance, Fan et al (1998), on the size of the carbon sink in the continental US, rebutted by Schimel et al. (2000). Or the solar-cycle length/climate correlation described by Friis-Christensen and Lassen (1991), whose seemingly impressive correlation for the latter half of the 20th Century disappears if you don't change the averaging scheme half way along (Laut, 2003; Damon and Laut, 2004).
The current thinking of scientists on climate change is based on thousands of studies (Google Scholar gives 19,000 scientific articles for the full search phrase “global climate change”). Any new study will be one small grain of evidence that adds to this big pile, and it will shift the thinking of scientists slightly. Science proceeds like this in a slow, incremental way. It is extremely unlikely that any new study will immediately overthrow all the past knowledge. So even if the conclusions of the Shaviv and Veizer (2003) study discussed earlier, for instance, had been correct, this would be one small piece of evidence pitted against hundreds of others which contradict it. Scientists would find the apparent contradiction interesting and worthy of further investigation, and would devote further study to isolating the source of the contradiction. They would not suddenly throw out all previous results. Yet, one often gets the impression that scientific progress consists of a series of revolutions where scientists discard all their past thinking each time a new result gets published. This is often because only a small handful of high-profile studies in a given field are known by the wider public and media, and thus unrealistic weight is attached to those studies. New results are often over-emphasised (sometimes by the authors, sometimes by lobby groups) to make them sound important enough to have news value. Thus “bombshells” usually end up being duds.
However, as demonstrated above, even when it initially breaks down, the process of peer-review does usually work in the end. But sometimes it can take a while. Observers would thus be well advised to be extremely skeptical of any claims in the media or elsewhere of some new “bombshell” or “revolution” that has not yet been fully vetted by the scientific community.
*Note added 1/21/05: It has come to our attention that Legates and Davis (1997) were similarly rebutted in a separate publication by Wigley et al (2000).
**Note added 1/21/05: McKitrick and Michaels have published an erratum correcting the degrees/radians error in CR 27, 265-268, which now shows that latitude correlates much better with temperature trends than any economic statistic.
***Note added 1.25.05: Chip Knappenberger correctly points out that the second Douglass et al paper doesn't actually make the claim that the atmosphere is cooling. We therefore withdraw that specific comment, but note that the comment concerning the selective use of data series and time periods stands.
Benestad, R.E., Are temperature trends affected by economic activity? Comment on McKitrick & Michaels., Climate Research, 27, 171-173, 2004.
Damon, P. E. and P. Laut, Pattern of Strange Errors Plagues Solar Activity and Terrestrial Climate Data, Eos, 85, p. 370. 2004
Douglass, D. H., Pearson, B.D., and S.F.Singer, Altitude dependence of atmospheric temperature trends: Climate models versus observation, Geophys. Res. Lett., 31, L13208, doi:10.1029/2004GL020103, 2004.
Douglass, D. H., Pearson, B.D., and S.F.Singer, Knappenberg, P.C., and P.J. Michaels, Disparity of tropospheric and surface temperature trends: New evidence, Geophys. Res. Lett., 31, L13207, doi:10.1029/2004GL020212, 2004, 2004.
Fan, S., Gloor, M., Mahlman, J., Pacala, S., Sarmiento, J., Takahashi, T., Tans, P. A Large Terrestrial Carbon Sink in North America Implied by Atmospheric and Oceanic Carbon Dioxide Data and Models, Science 282: 442-446, 1998.
Friis-Christensen, E., and K. Lassen, Length of the Solar Cycle: An indicator of Solar Activity Closely Associated with Climate, Science 254, 698-700, (1991).
Legates, D. R. and R. E. Davis, The continuing search for an anthropogenic climate change signal: limitations of correlation based approaches, Geophys. Res. Lett., 24, 2319-2322, 1997.
Laut, P., Solar activity and terrestrial climate: An analysis of some purported correlations, J.Atmos. Solar-Terr.Phys.,65, 801-812. 2003
Mann, M.E., On Smoothing Potentially Non-Stationary Climate Time Series, Geophys. Res. Lett., 31, 2319-2322, L07214, doi: 10.1029/2004GL019569, 2004.
McKitrick, R., and Michaels, P.J., A test of corrections for extraneous signals in gridded surface temperature data., Climate Research, 26, 159-173, 2004.
Rahmstorf, S., D. Archer, D.S. Ebel, O. Eugster, J. Jouzel, D. Maraun, G.A. Schmidt, J. Severinghaus, A.J. Weaver, and J. Zachos, Cosmic rays, carbon dioxide, and climate, Eos, 85, 38, 41, 2004.
Schimel, D., Melillo, J., Tian, H., McGuire, A. D., Kicklighter, D., Kittel, T., Rosenbloom, N., Running, S., Thornton, P., Ojima, D., Parton, W., Kelly, R., Sykes, M., Neilson, R. and Rizzo, B., Contribution of Increasing CO2 and Climate to Carbon Storage by Ecosystems in the United States, Science 287: 2004-2006, 2000
Shaviv, N., and J. Veizer, Celestial driver of Phanerozoic climate?, GSA Today, 13, 4-10, 2003.
Soon, W., D. R. Legates, and S. L. Baliunas, Estimation and representation of long-term (>40 year) trends of Northern-Hemisphere gridded surface temperature: A note of caution, Geophys. Res. Lett., 31, L03209, doi:10.1029/2003GL019141, 2004.
Soon, W., and S. Baliunas, Proxy climatic and environmental changes over the past 1000 years, Climate Research, 23, 89-110, 2003.
Soon, W., S. Baliunas, C, Idso, S. Idso and D.R. Legates, Reconstructing climatic and environmental changes of the past 1000 years, Energy and Environment, 14, 233-296, 2003.
Wigley, T.M.L, Santer, B.D and K.E. Taylor, K.E., Correlation approaches to detection, Geophys. Res. Lett.,, 27, 2973-2976, 2000. | <urn:uuid:e9441bbe-9143-46a0-a90b-44b8bb712b75> | 2.828125 | 4,120 | Nonfiction Writing | Science & Tech. | 46.781773 | 749 |
CR-39 is transparent in the visible spectrum and almost completely opaque in the ultraviolet range. It has high abrasion resistance, in fact the highest abrasion/scratch resistance of any uncoated optical plastic. CR-39 is about half the weight of glass, and its index of refraction is only slightly lower than that of crown glass, making it an advantageous material for eyeglass and sunglass lenses. A wide range of colors can be achieved by dyeing the surface or the bulk of the material. CR-39 is also resistant to most solvents and other chemicals, to gamma radiation, to aging, and to material fatigue. It can withstand the small hot sparks from welding. It can be used continuously at temperatures up to 100 °C and for up to one hour at 130 °C.
In the radiation detection application, raw CR-39 material is exposed to proton recoils caused by incident neutrons. The proton recoils cause tracks, which are enlarged by an etching process in a caustic solution of sodium hydroxide.
The enlarged tracks are counted under a microscope (commonly 200x), and the number of tracks is proportional to the amount of incident neutron radiation.
Effect of alpha-particle energies on CR-39 line-shape parameters using positron annihilation technique (polyallyl diglycol carbonate)
Jul 01, 2006; Polyally diglycol carbonate "CR-39" is widely used as etched track type particle detector. Doppler broadening positron... | <urn:uuid:7294914f-5629-4204-9346-748aee0f98cf> | 2.9375 | 314 | Knowledge Article | Science & Tech. | 45.809195 | 750 |
Termites' enzyme anomaly
26 March 2007
Japanese researchers have discovered a previously unknown method used by termites to digest cellulose. The discovery offers a novel source of enzymes to assist in the production of biofuels, they suggest.
Primitive groups of termites break down the normally indigestible cellulose with the aid of cellulase enzymes secreted by single-celled protozoans in the termite gut. Higher termites secrete their own cellulases directly from cells in their midgut. But Gaku Tokuda and Hirofumi Watanabe from the University of the Ryukyus, Okinawa predicted that these endogenous cellulases produced by higher termites are not sufficient to meet the insect's energy needs.
Termites digest the cellulose in wood, causing extensive damage.
© William Rafti of the William Rafti Institute.
Tokuda believes these findings will be of interest to the US Department of Energy, which is funding genomic analysis of different termite species in its search for more efficient methods of converting agricultural and forestry waste into usable energy. He points out that the termites could be a valuable source of industrial enzymes because they have adapted to live on diets involving a wide range of plant material as well as wood.
'The termite gut is the smallest bioreactor in the world,' said Tokuda. 'We have a lot of fundamental knowledge to learn from these micro bioreactors to establish efficient biomass conversion systems.'
But termite expert David Bignell from Queen Mary College, University of London, UK, is sceptical about claims of industrial applications of termite-based products.
'This argument has been used for the last 30 years to justify all kinds of termite research but I don't know of any commercial applications,' said Bignell. 'We have an extensive knowledge of microbial cellulases, and most interest from biotechnologists is focused there.'
Where the Japanese study is interesting, he said, is in highlighting an evolutionary paradox: why should the termites go to the trouble of developing their own cellulases when they can be supplied more efficiently by symbiotic bacteria? Bignell suspects that cellulases in the hindgut are there to supply the energy needs of symbiotic bacteria, not the host.
'In wood feeders the limiting factor for growth is nitrogen, not carbon, because wood contains very little nitrogen. So the bacterial community of the termite hindgut contains bacteria that fix nitrogen from the atmosphere, and nitrogen fixation is energetically expensive. Any cellulose processing going on there may just be supporting the N-fixing bacteria.'
References: G Tokuda and H Watanabe, Biol Lett, 2007, DOI: 10.1098/rsbl.2007.0073
| <urn:uuid:3e9db914-f55b-4cb0-95f9-a3c24eb0a813> | 3.1875 | 615 | News Article | Science & Tech. | 27.03829 | 751 |
The Voyager 1 has found its way into the far reaches of space, specifically to the edge beyond which scientists believe lies interstellar space. This area is within our solar bubble, and is referred to as a “magnetic highway for charged particles.” The findings were detailed earlier today at the American Geophysical Union, which took place in San Francisco.
The magnetic highway detailed in the announcement is explained thusly: the connection betwixt the sun's magnetic field lines and interstellar magnetic field lines lets high-energy particles from beyond our heliosphere "zoom in," while letting low-energy particles "zoom out."
One of the project's scientists, Edward Stone, offered this statement. "Although Voyager 1 still is inside the sun's environment, we now can taste what it's like on the outside because the particles are zipping in and out on this magnetic highway. We believe this is the last leg of our journey to interstellar space. Our best guess is it's likely just a few months to a couple years away. The new region isn't what we expected, but we've come to expect the unexpected from Voyager."
Voyager has been traipsing around the outer layer of the heliosphere for years, 5.5 of which were spent with stable solar wind. Over time – and rather suddenly – the solar wind decreased, eventually to zero. According to the scientists, if one were to look only at the charged-particle (solar wind) data, it would seem that Voyager is already beyond the heliosphere. Other data doesn't indicate this, however, and so for the time being, it's still a matter of patience. | <urn:uuid:72d41de0-6f22-4bce-a5fe-c004418d8494> | 3.328125 | 389 | News Article | Science & Tech. | 50.295576 | 752 |
Gone are the days when Fred Haise, of the Apollo 13 crew, could remark on how his urine looked like a golden string of glittering stars as they passed out of the evacuation chamber of the space capsule in which he'd just relieved himself. Or at least gone are the days when Bill Paxton, in the character of Haise, could say something like that in a movie and be accurate. These days our astronauts drink the water recycled from their pee — and find clever new uses for their poop too.
On the International Space Station, they use an automated system to extract water from waste for use once more. On the private Mars shot that will launch in 2018, Inspiration Mars, they will use a simpler, but perhaps more elegant, method: a technique called forward osmosis. Water likes to move, whenever possible, from where it is abundant to where it is scarce, in an effort to even things out. So when it's confronted with a solution with a high concentration of salt, which has a high osmotic potential, this ticks water off and it will move toward that higher concentration to balance things out. So when urine, which contains lots of water, of course, is put into a chamber separated from the salty solution by a membrane, the water will pass through the membrane, leaving behind all of the other stuff that makes up urine, leaving the water purified and the salty solution diluted and ready to, yick, drink. I'm sure that any remnant salty taste from the diluted solution will provide the astronauts plenty of reminders of where their water came from (their pee).
The forward osmosis method was tested on the last space shuttle flight, Atlantis in 2011. The problem is, the results showed that in microgravity osmosis worked at only about half the efficiency it does here on Earth, something the mission’s engineers will have to work out. The Inspiration Mars people recently announced that these bags of water and waste will be used in an entirely novel way, to line the space capsule to protect it from cosmic rays.
A quote from a mission member in a New Scientist article on the subject points out that nuclei block cosmic radiation and that water has about three times the amount of nuclei that are found in metals. And since the nuclei block rather than absorb radiated particles, the water won’t become irradiated, keeping it safe to drink. So lining the capsule with bags of water that the crew can also drink should be an efficient and elegant solution to both staving off cosmic radiation and dehydration among the crew.
But what’s more efficient than drinking the water extracted from your own urine? Nothing. When a bag of water is finished off by the husband/wife team that will serve as personnel on the mission, they will use said bag to deposit waste. The osmotic processes will then extract the water for reuse. In the meantime, that bag of waste will go back in the capsule lining to serve as a shield once more, this time filled with poop or pee. And one can only imagine how effective those are at reflecting cosmic rays. | <urn:uuid:623fbf29-1a7e-4207-b1a5-60e0c0a190ee> | 3.09375 | 629 | Personal Blog | Science & Tech. | 46.969385 | 753 |
Hydroinformatics is the rapidly developing field in which information technology is applied to address water-related issues such as flood estimation and rainfall-runoff modeling. This book is a thorough overview of all the latest developments in this increasingly vital discipline.
Hydroinformatics is an emerging subject that is expected to gather speed, momentum and critical mass throughout the forthcoming decades of the 21st century. This book provides a broad account of numerous advances in that field - a rapidly developing discipline covering the application of information and communication technologies, modelling and computational intelligence in aquatic environments. A systematic survey, classified according to the methods used (neural networks, fuzzy logic and evolutionary optimization, in particular) is offered, together with illustrated practical applications for solving various water-related issues. These include, but are not limited to, flood estimation, rainfall-runoff modelling, rehabilitation of urban water networks, estimation of ocean temperature profiles, etc. Particular attention is also given to certain aspects of the most recent technological progress in hydroinformatics including the development of protocols for model integration and of computer architectures for modern modelling systems.Invited contributions were obtained from leading international experts - including academics, hydrological practitioners and industrial professionals - such that this edited volume constitutes an authoritative source of reference material and is essential reading for active workers in this field. | <urn:uuid:c0a7683b-c6b4-4bb2-88f5-924ec7d772de> | 2.515625 | 268 | Product Page | Science & Tech. | -12.92008 | 754 |
Even though lead usage has declined due to environmental awareness and regulation, several human sources of lead continue to affect birds. Hunting ammunition and fishing gear are ingested by the birds, with toxic effects.
Homepage for the research on occurrence, movement, flux, fate, and effects of agricultural chemicals, such as pesticides, in 25 states by the Midcontinent Agricultural Chemical Research Project (MACRP) with links to study results and publications.
Multiple studies addressing urban water-quality issues, to describe biological, chemical, and physical characteristics of urban water resources over time, and relate those characteristics to natural processes and human activities
Primary homepage for the National Water Quality Assessment (NAWQA) Program studying water quality in river, aquifer and coastal water basins throughout the nation. Links to reports, data, models, maps and national synthesis studies.
Trace elements are inorganic chemicals occurring in small amounts in nature. This web site of the National Water Quality Assessment links to U.S. data, publications, news, and other sites on trace metals, metalloids and radionuclides in water. | <urn:uuid:dd2be1c2-1006-4338-8f37-7a45837fbcfa> | 3.09375 | 224 | Content Listing | Science & Tech. | 12.491013 | 755 |
Picture of red tide taken from the NOAA Research Vessel Ron Brown
Courtesy of NOAA
Robots Watch out for Poisonous Plankton!
News story originally written on January 30, 2003
Tiny plankton that live in the sea may look harmless but certain types are able to kill fish, poison seafood and even choke swimmers. Now robots have been developed to search the seas for the dangerous plankton!
Plankton spend most of their life floating in ocean water. They cannot swim like fish, but instead float wherever the currents take them. The harmful types of plankton are single-celled, microscopic creatures called algae that photosynthesize like plants.
Most types of algae are very important for life in the sea because they are food for animals like clams, fish and whales. However, a few types of algae have poisons within them that are harmful to other creatures. When the dangerous types of algae grow so fast that they darken the ocean water with a reddish cloud called a red tide, they are dangerous to animals that eat them. When people eat seafood that ate the poisonous algae, they get sick too.
Special underwater robots have been released into the Gulf of Mexico to look for dangerous algae. The robots are called autonomous underwater vehicles, or AUVs. They look like small airplanes that glide underwater. They carry sensors to detect algae and record salinity and temperature of the water so that scientists can study when the red tides form.
Researchers hope that with the information from their robots and satellite images, they will be able to warn people living near the coast if a giant cloud of algae is in the ocean near them.
| <urn:uuid:8d7ea1a9-b1b5-446c-9ad6-218523c3a391> | 3.3125 | 692 | Content Listing | Science & Tech. | 55.800592 | 756 |
The basic forces in nature
Contemporary Physics Education Project
The interactions in the Universe are governed by four forces (strong, weak, electromagnetic and gravitational).
Physicists are trying to find one theory that would describe all the forces in nature as a single law. So far they have succeeded in producing a single theory that describes the weak and electromagnetic forces (called the electroweak force). The strong and gravitational forces are not yet described by this theory.
Table courtesy of University of Guelph, Guelph, Ontario (Canada)
| <urn:uuid:f6b8e300-5420-4e46-9c29-9b2d695a94b5> | 3.484375 | 475 | Content Listing | Science & Tech. | 45.407259 | 757 |
First we have the monarch
butterflies. These incredible creatures spend their summer
days in the northern parts and then migrate (leaving for another
place) to the south for the winter. They travel thousands of
miles - some almost 2900 km from Canada to Mexico. Just
looking at a map, you can see how far Canada is from Mexico, but
these little butterflies fly all the way to protect themselves
from the cold winters. Traveling so much certainly does tire
them out, which is why some of them cannot make a return trip.
These butterflies have never been to these foreign places, but
they still make the trip successfully. How do they do it?
A map showing the
monarch butterfly migration.
They travel from Canada to Mexico.
Then there are the Green sea turtles
(or the scientific term is chelonia mydas) who swim for months and
months in the east direction to migrate from a sea in Brazil in South America
to an island called Ascension Island, which is about 3200 km away.
Apparently these turtles were hatched on this island, and after they
grow up in South America, they return to their birthplace, the
Ascension Island, to hatch their own eggs.
Another example is that of some crabs
that are willing to walk about 240 km from deep water to shallow
water, just to lay their eggs.
These creatures have some kind of inborn
compass or instinct that tells them where to go and at
what is the right time for migration. No one has taught them this, most
of them have never been to the new place, but they still manage to
migrate safely. Scientists till today, cannot give good, complete
explanations for these migrations. It probably is just one of
nature's powers and mysteries..... | <urn:uuid:dbcddb3e-9989-4206-871b-4ac752667ccb> | 3.65625 | 381 | Personal Blog | Science & Tech. | 61.48327 | 758 |
From Ajax Patterns
|Revision as of 14:54, 17 September 2009
WikiMartha (Talk | contribs)
Foundational Technology Patterns
← Previous diff
|Revision as of 14:02, 25 October 2009
220.127.116.11 (Talk | contribs)
Next diff →
|Line 6:||Line 6:|
|* [[Ajax App]] Create a rich application in a modern web browser.||* [[Ajax App]] Create a rich application in a modern web browser.|
|== Display Manipulation ==||== Display Manipulation ==|
Revision as of 14:02, 25 October 2009
Foundational Technology Patterns
These patterns are the building blocks of Ajax applications. They are more "reference patterns" than true "design patterns", at least from the perspective of a modern Ajax developer, who will take these technologies as a given. The bestessays patterns are included to introduce the types of technologies that are used, provide a common vocabulary used throughout the language, and facilitate a discussion of pros and cons.
- Ajax App Create a rich application in a modern web browser.
- Display Morphing Alter styles and values in the DOM to change display information such as replacing text and altering background colour.
- Page Rearrangement Restructure the DOM to change the page's structure - moving, adding, and removing elements.
- Web Service Expose server-side functionality with an HTTP API.
- XMLHttpRequest Call Use XMLHttpRequest objects for browser-server communication.
- IFrame Call Use IFrames for browser-server communication.
- HTTP Streaming Stream server data in the response of a long-lived HTTP connection.
- Lazy Inheritance An approach intended to simplify writing OOP and provides support of prototype-based classes hierarchies, automatic resolving and optimizing classes dependencies.
- Richer Plugin Make your application "more Ajax than Ajax" with a Richer Plugin.
Programming Patterns (25)
- RESTful Service Expose web services according to RESTful principles.
- RPC Service Deepak Expose web services as Remote Procedural Calls (RPCs).
- Ajax Stub Use an "Ajax Stub" framework which allows browser scripts to directly invoke server-side operations, without having to worry about the details of XMLHttpRequest and HTTP transfer.
- HTML Message Have the server generate HTML snippets to be displayed in the browser.
- Plain-Text Message Pass simple messages between server and browser in plain-text format.
- XML Message Pass messages between server and browser in XML format.
- UED Format Send message from the browser to the server using the UED Data exchange format.
- Call Tracking Accommodate busy user behaviour by allocating a new XMLHttpRequest object for each request. See Richard Schwartz's blog entry.Note: Pending some rewrite to take into account request-locking etc.
- Periodic Refresh The browser refreshes volatile information by periodically polling the server.
- Distributed Events Keep objects synchronised with an event mechanism.
- Cross-Domain Proxy Allow the browser to communicate with other domains by server-based mediation.
- Flash-enabled XHR A client-side proxy pattern for cross-domain Ajax, using invisible flash to bridge the domain communication gap.
- XML Data Island Retain XML responses as "XML Data Islands", nodes within the HTML DOM.
- Browser-Side XSLT Apply XSLT to convert XML Messages into XHTML.
- Browser-Side Templating Produce browser-side templates and call on a suitable browser-side framework to render them as HTML.
- Fat Client Create a rich, browser-based, client by performing remote calls only when there is no way to achieve the same effect in the browser.
- Browser-Side Cache Maintain a local cache of information.
- Guesstimate Instead of grabbing real data from the server, make a guesstimate that's good enough for most user's needs. ITunes Download Counter, GMail Storage Counter.
- Multi-Stage Download Quickly download the page structure with a standard request, then populate it with further requests.
- Predictive Fetch Anticipate likely user actions and pre-load the required data.
- Pseudo-Threading Use a timer and a worker queue to process jobs without the blocking application flow.
- Code Compression Compress code on the server, preferably not on the fly.
Code Generation and Reuse
- Cross-Browser Component Create cross-browser components, allowing programmers to reuse them without regard for browser compatibility.
Functionality and Usability Patterns (28)
All of these widget patterns will be familiar to end-users, having been available in desktop GUIs and some in non-AJAX DHTML too. They are included here to catalogue the interaction styles that are becoming common in AJAX applications and can benefit from XMLHttpRequest-driven interaction.
- Drilldown To let the user locate an item within a hierarchy, provide a dynamic drilldown.
- Microcontent Compose the page of "Microcontent" blocks - small chunks of content that can be edited in-page.
- Microlink Provide Microlinks that open up new content on the existing page rather than loading a new page.
- Popup Support quick tasks and lookups with transient Popups, blocks of content that appear "in front of" the standard content.
- Portlet Introduce "Portlets" - isolated blocks of content with independent conversational state.
- Live Command-Line In command-line interfaces, monitor the command being composed and dynamically modifying the interface to support the interaction.
- Live Form Validate and modify a form throughout the entire interaction, instead of waiting for an explicit submission.
- Live Search As the user refines their search query, continuously show all valid results.
- Data Grid Report on some data in a rich table, and support common querying functions.
- Progress Indicator Hint that processing is occurring.
- Rich Text Editor e.g. http://dojotoolkit.org/docs/rich_text.html
- Slider Provide a Slider to let the user choose a value within a range.
- Suggestion Suggest words or phrases which are likely to complete what the user's typing.
- Drag-And-Drop Provide a drag-and-drop mechanism to let users directly rearrange elements around the page.
- Sprite Augment the display with "sprites": small, flexible, blocks of content.
- Status Area Include a read-only status area to report on current and past activity.
- Virtual Workspace Provide a browser-side view into a server-side workspace, allowing users to navigate the entire workspace as if it were held locally.
- One-Second Spotlight When a page element undergoes a value change or some other significant event, dynamically manipulate its brightness for a second or so.Responded
- One-Second Mutation When a page element undergoes a value change or some other significant event, dynamically mutate its shape for a second or so.
- One-Second Motion Incrementally move an element from point-to-point, or temporarily displace it, to communicate an event has occurred.
- Blinkieblinkpattern When an element is blinking
- Highlight Highlight elements by rendering them in a consistent, attention-grabbing, format.
- Lazy Registration Accumulate bits of information about the user as they interact, with formal registration occurring later on.
- Direct Login Authenticate the user with an XMLHttpRequest Call instead of form-based submission, hashing in the browser for improved security.
- Host-Proof Hosting Server-side data is stored in encrypted form for increased security, with the browser decrypting it on the fly.
- Timeout Implement a timeout mechanism to track which clients are currently active.
- Heartbeat Have the browser periodically upload heartbeat messages to indicate the application is still loaded in the browser and the user is still active.
- Autosave Autosave un-validated forms to a staging table on the server to avoid users losing their work when their session expires if they get called away from their desk while filling out a long form.
- Unique URLs Use a URL-based scheme or write distinct URLs whenever the input will cause a fresh new browser state, one that does not depend on previous interaction.
Development Practices (8)
- DOM Inspection Use a DOM Inspection Tool to explore the dynamic DOM state.
- Traffic Sniffing Diagnose problems by sniffing Web Remoting traffic.
- Data Dictionary Visualize DOM tags in a table format, with a row for each attribute. (Contributed pattern)
- Simulation Service Develop the browser application against "fake" web services that simulate the actual services used in production.
- Service Test Build up automated tests of web services, using HTTP clients to interact with the server as the browser normally would.
- System Test Build automated tests to simulate user behaviour and verify the results. | <urn:uuid:d4fe2988-7fed-4572-bf1f-b9a2071aed13> | 2.734375 | 1,880 | Structured Data | Software Dev. | 34.996801 | 759 |
Welcome to http:/www.handsonuniverse.org/activities/Explorations/tactile-moonphases/
Try this instead: Link to alternate page with thumbnails linked to larger images.
SEE Project. http://analyzer.depaul.edu/SEE_Project/ These images are set for high contrast that suits the needs of individuals who are blind and visually impaired. Print then copy the images onto swellform paper, then process through a Swellform Graphics Machine (http://analyzer.depaul.edu/SEE_Project/bm030507.htm). The result will be tactile images to sense by touch rather than through sight. SEE Project is funded by NASA IDEAS.
You may notice that some images appear larger or smaller than others. The moon's orbit brings it sometimes as close as 55 Earth radii, and other times as far as 65 Earth radii. What difference does distance make in the apparent size of our Moon? Take a look at two full moon pictures taken on different dates.
These pictures are mosaics constructed from images taken with Univ. of Chicago Yerkes Observatory Rooftop Telescope - South (Meade 8 inch, F/6.3, SBIG ST8 CCD). Number of moon refers to day in cycle. Moon images were taken during a variety of cycles.
Questions or Comments? mailto:email@example.com?subject=Project SEE: Moon Phases
Links to jpg files, png files,
Link to fts files for display/manipulation with Hands-On Universe image processing software.
Explorations * Hands-On Universe * SEE Project | <urn:uuid:4887fdb1-88ee-4e89-94b6-35a801b59d0a> | 2.921875 | 350 | Tutorial | Science & Tech. | 54.619713 | 760 |
This advice changed my view of the world. Not only did I realize that being a teenager with a Y-chromosome can't be easy either, it also explained why my male classmates were suddenly developing interests in things like Special Relativity or Scanning Tunnel Microscopes (Nobel Prize '86). It made also sense they were usually very irritated if a girl attempted to join them: all that was just suppressed hormones, the poor guys*. It further revealed a deep connection between General Relativity and potatoes that hadn't previously occurred to me. Most disturbingly however, it labeled General Relativity as unsexy, a fact that has bothered me ever since.
Over the course of years I moreover had to notice that General Relativity is a subject of great mystery to many, it's a word that has entered the colloquial language as the incomprehensible and ununderstandably complicated result of a genius' brain. My physics teacher notably told me when getting tired of my questions that there are maybe three people in the world who understand General Relativity, thereby repeating (as I found out later) a rumor that was more than half a century old (see Wikipedia on the History of General Relativity).
Special and General Relativity is also the topic I receive the most questions about. The twin paradox for example still seems to confuse many people, and only a couple of days ago I was again confronted with a misunderstanding that I've encountered repeatedly, though its origin is unclear to me. The twin paradox is not a paradox, so the explanation seems to go, because it doesn't take into account General Relativity. That's plain wrong. The twin paradox is not a paradox because it doesn't take into account acceleration (unless your spacetime allows closed timelike loops you will have to accelerate one of the twins to get them to meet again which breaks the symmetry).
The problem is that for reasons I don't know many people seem to believe Special Relativity is about constant velocities only, possibly a consequence of bad introductionary textbook. That is not the case. Heck, you can describe acceleration even in Newtonian mechanics! To make that very clear:
- The difference between Special and General Relativity is that the former is in flat space, whereas the latter is in a 'general', curved space.
- Flat space does not mean the metric tensor is diagonal with the entries (-1,1,1,1), this is just the case in a very specific coordinate system. Flat space means the curvature tensor identically vanishes (which is independent of the coordinate system).
- Of course one can describe accelerated observers in Special Relativity.
That leads me now directly to the Equivalence Principle, the cornerstone of General Relativity. Googling 'Equivalence Principle' it is somehow depressing. Wikipedia isn't wrong, but too specific (the Equivalence Principle doesn't have anything to do with standing on the surface of the Earth). The second hit is a NASA website which I find mostly confusing (saying all objects react equally to gravity doesn't tell you anything about the relation of gravitational to inertial mass). The third and fourth hits get it right, the fifth is wrong (the locality is a crucial ingredient).
So here it is:
- The Equivalence Principle: Locally, the effects of gravitation (motion in a curved space) are the same as that of an accelerated observer in flat space.
That is what Einstein explains in his thought experiment with the elevator. If you are standing in the elevator (that is just a local patch, theoretically infinitesimally small) you can't tell whether you are pulled down because there is a planet underneath your feet, or because there is a flying pig pulling up the elevator. This website has two very nice mini-movies depicting the situation.
If you could make your elevator larger you could however eventually distinguish between flat and curved space because you could measure geodesic deviation, i.e. the curvature.
If you think of particles, the Equivalence Principle means that the inertial mass is equal to the gravitational mass, which has been measured with impressive precision. But the above formulation makes the mathematical consequences much clearer. To formulate your theory, you will have to introduce a tangential bundle on your curved manifold where you can deal with the 'local' quantities, and you will have to figure out how the cuts in this bundle (tensors) will transform under change of coordinates. If you want your theory to be independent of that choice of coordinates it will have to be formulated in tensor equations. Next thing to ask is then how to transport tensors from one point to the other, which leads you to a 'covariant' derivative.
The Equivalence Principle is thus a very central ingredient of General Relativity and despite its simplicity the base of a large mathematical apparatus, it's the kind of insight every theoretical physicist dreams of. It gives you a notion of a 'straightest line' in curved space (a geodesic) on which a testparticle moves. This curve most notably is independent of the mass of that particle: heavy and light things fall alike even in General Relativity (well, we already knew this to be the case in the Newtonian limit). For a very nice demonstration see the video on the NASA website. Please note that this holds for pointlike testparticles only, it is no loger true for extended or spinning objects, or for objects that significantly disturb the background.
The Equivalence Principle however is not sufficient to give you Einstein's field equations that describe how space is curved by its matter content. But that's a different story. It remains to be said all this is standard textbook knowledge and General Relativity is today not usually considered a large mystery. There are definitely more than 3 people who understand it. We have moved on quite a bit since 1905.
General Relativity is sexy.
Though I doubt there's more than three people in the world who really understand potatoes.
* In the more advanced stages of confusion they start referring to physical theories as women.
Josh, this one's for you. | <urn:uuid:a3a65a4a-b5d3-41fd-bd68-47cb1070e5fa> | 2.53125 | 1,259 | Personal Blog | Science & Tech. | 35.772112 | 761 |
- freezing a small piece of leaf tissue in liquid nitrogen (-196 degrees C !) and grinding it as finely as possible.
- adding a detergent to release the DNA from the cells of the leaf tissue.
- adding chloroform. The detergent and chloroform do not mix (like oil and water), but proteins and other things we do not want are drawn into the chloroform while the DNA is left in the detergent.
- the detergent layer is removed, and alcohol is added to it. This precipitates the DNA (i.e., makes it turn into a solid), and we can actually see it. I don’t have any pictures of Pseudopanax DNA, but precipitated DNA all looks much the same – see this link.
It is possible to extract DNA using household items (see this link).
In order to analyse the DNA further we have to make it go back into solution. The alcohol is tipped off, and a small amount of salt solution is added; the DNA ‘dissolves’ in this.
To test the quality and quantity of the extracted DNA, we run a small amount of the DNA solution on an agarose gel in a process called electrophoresis (see link).
In the gel above, each lane corresponds to a separate sample, except the right-most lane which is a ‘ladder’ for sizing the DNA samples. A negative charge was applied at the top and a positive charge at the bottom. DNA is negatively charged, so its moves towards a positive charge.
The bright blobs indicated by the green arrow indicate that we got high quality (the DNA is in big pieces, as it hasn’t moved very far) and quantity (a brighter stain indicates more DNA) for most of these samples, which is great!
The sample labelled 5957 (my collection number) is a bit weak, while we didn’t get anything for sample 5964.
These DNA extractions are all from samples of fierce lancewood (Pseudopanax ferox), except 5966 which is P. macintyrei.
The next step in assessing the relationships of these plants is to genetically ‘fingerprint’ them. | <urn:uuid:fe6fe460-9092-465a-84fb-171f676ce368> | 3.296875 | 467 | Personal Blog | Science & Tech. | 58.46689 | 762 |
On New Year’s day, Comet Tuttle will be closest to the Earth, a mere 25 million miles away, and also at its brightest. The comet will just be visible to the unaided eye, so you will need to be observing from a very dark site.
A gallery of images, and sky maps of when and where to look, can be found at SpaceWeather.com.
[Image of Comet Tuttle taken by Pete Lawrence]
Happy solstice to all our readers!
The winter solstice this year occurs at 6am, on 22 December, 2007.
That is the time when the Earth’s North pole is pointing directly away from the Sun (which is why it is so much colder in the Northern hemisphere).
For people living in the Southern hemisphere, the South pole is pointing towards the Sun, making it summertime ‘down-under’!
On the night of the 13 December, and the morning of the 14 December, the Geminid shooting star shower reaches its peak.
The Earth will be ploughing through a stream of debris left behind by asteroid Phaethon, and we see these fragments burn up as they hit the Earth’s atmosphere, causing the shooting stars.
And they are often big fragments! I myself saw a huge fireball in the UK during the Geminid shower of 1994.
More details can be found at the NASA science website.
Details of all the major annual meteor showers visible from the UK are available on the NMM website.
Comet Holmes now appears almost twice the diameter of the full Moon in the night sky.
To see the latest images, see the gallery at Spaceweather.com.
Because the comet is so large in the sky, it is spread out, making it appear much fainter in the night sky. But it is still visible to the unaided eye when well away from light pollution.
The best way to observe the comet now is with a pair of binoculars that are large (to collect a lot of light) but with low magnification (because the comet is so large in the sky).
The apparent size and brightness of Comet Holmes is regularly estimated by amateur astronomers world wide. A list of estimates is available at the IAC/ICQ/MPC website. Using averages of these estimates, I have plotted the apparent size of Comet Holmes against time (below).
In this graph, you can see the number of days along the bottom since 24 October, 2007 – the date when Comet Holmes suddenly increased in brightness.
Up the left hand side of the graph, I show the angular size of the comet – that is how big the comet appears to us in the night-time sky. The apparent size of the Full Moon, which is half a degree across (or 32 arc-minutes) is labelled for comparison.
Up the right hand side of the graph, I show the actual size of Comet Holmes in millions of km (assuming that the comet is at a fixed distance of 1.7 AU away – although the comet is moving away from us, it has not moved too much over the last 2 months).
Note how within days of the outburst in October, the comet was bigger than the separation of the Earth and the Moon, and within weeks it was physically bigger than the Sun!
Currently, it appears about 1 degree (60 arc-mins) across in the night sky – that’s twice the diameter of the full Moon. In physical size, the nucleus of the comet is now surrounded by a cloud of gaseous water that is over 2.5 times larger than the Sun.
What an amazing comet! | <urn:uuid:ad4b71d1-d528-4474-b6b5-5e55a81c8aed> | 3.59375 | 757 | Personal Blog | Science & Tech. | 64.333113 | 763 |
following INSERT INTO statement:
INSERT INTO TABLE2 SELECT * FROM TABLE1
Now suppose you do not want to copy all the rows, but only those rows that meet a specific criteria. Say you only want to copy those rows where COL1 is equal to "A." To do this you would just modify the above code to look like this:
INSERT INTO TABLE2 SELECT * FROM TABLE1 WHERE COL1 = 'A'
See how simple it is to copy data from one table to another using the INSERT INTO method? You can even copy a selected set of columns, if you desire, by identifying the specific columns you wish to copy and populate, like so:
INSERT INTO TABLE2 (COL1, COL2, COL3)
SELECT COL1, COL4, COL7 FROM TABLE1
The above command copies only data from columns COL1, COL4, and COL7 in TABLE1 to COL1, COL2, and COL3 in TABLE2. | <urn:uuid:ba2bdb2d-713f-4c58-931c-9526b5907a25> | 2.59375 | 205 | Q&A Forum | Software Dev. | 38.620079 | 764 |
Chemical Principles/Will It React? An Introduction to Chemical Equilibrium
And so, nothing that to our world appears,
Perishes completely, for nature ever
Upbuilds one thing from another's ruin;
Suffering nothing yet to come to birth
But by another's death.
Lucretius (95-55 B.C.)
The main question asked in Chapter 2 was "If a given set of substances will react to give a desired product, how much of each substance is needed?" Our basic assumptions were that matter cannot be arbitrarily created or destroyed, and that atoms going into a reaction must come out again as products.
In this chapter we ask a second question: "Will a reaction occur, eventually?" Is there a tendency or a drive for a given reaction to take place, and if we wait long enough will we find that reactants have been converted spontaneously into products? This question leads to the ideas of spontaneity and of chemical equilibrium. A third question, "Will a reaction occur in a reasonably short time?" involves chemical kinetics, which will be discussed in Chapter 22. For the moment, we will be satisfied if we can predict which way a chemical reaction will go by itself, ignoring the time factor.
Spontaneous Reactions
A chemical reaction that will occur on its own, given enough time, is said to be spontaneous. In the open air, and under the conditions inside an automobile engine, the combustion of gasoline is spontaneous:
C7H16 + 11 O2 → 7CO2 + 8 H2O
(The reaction is exothermic, or heat emitting. The enthalpy change, which was defined in Chapter 2, is large and negative: ΔH = -4812 kJ mole-1 of heptane at 298 K. The heat emitted causes the product gases to expand, and it is the pressure from these expanding gases that drives the car.) In contrast, the reverse reaction under the same conditions is not spontaneous:
7CO2 + 8H2O → C7H16 + 11 O2
No one seriously proposes that gasoline can be obtained spontaneously from a mixture of water vapor and carbon dioxide.
Explosions are examples of rapid, spontaneous reactions, but a reaction need not be as rapid as an explosion to be spontaneous. It is important to understand clearly the difference between rapidity and spontaneity. If you mix oxygen and hydrogen gases at room temperature, they will remain together without appreciable reaction for years. Yet the reaction to produce water is genuinely spontaneous:
2H2 + O2 → 2H2O
We know that this is true because we can trigger the reaction with a match, or with a catalyst of finely divided platinum metal.
The preceding sentence suggests why a chemist is interested in whether a reaction is spontaneous, that is, whether it has a natural tendency to occur. If a desirable chemical reaction is spontaneous but slow, it may be possible to speed up the process. Increasing the temperature will often do the trick, or a catalyst may work. We will discuss the functions of a catalyst in detail in Chapter 22. But in brief, we can say now that a catalyst is a substance that helps a naturally spontaneous reaction to go faster by providing an easier pathway for it. Gasoline will burn rapidly in air at a high enough temperature. The role of a spark plug in an automobile engine is to provide this initial temperature. The heat produced by the reaction maintains the high temperature needed to keep it going thereafter. Gasoline will combine with oxygen at room temperature if the proper catalyst is used, because the reaction is naturally spontaneous but slow. But no catalyst will ever make carbon dioxide and water recombine to produce gasoline and oxygen at room temperature and moderate pressures, and it would be a foolish chemist who spent time trying to find such a catalyst. In short, an understanding of spontaneous and nonspontaneous reactions helps a chemist to see the limits of what is possible. If a reaction is possible but not currently realizable, it may be worthwhile to look for ways to carry it out. If the process is inherently impossible, then it is time to study something else.
Equilibrium and the Equilibrium Constant
The speed with which a reaction takes place ordinarily depends on the concentrations of the reacting substances. This is common sense, since most reactions take place when molecules collide, and the more molecules there are per unit of volume, the more often collisions will occur.
The industrial fixation of atmospheric nitrogen is very important in the manufacture of agricultural fertilizers (and explosives). One of the steps in nitrogen fixation, in the presence of a catalyst, is
N2 + O2 → 2NO (4-1)
If this reaction took place by simple collision of one molecule of N2 and one molecule of O2, then we would expect the rate of collision (and hence the rate of reaction) to be proportional to the concentrations of N2 and O2:
Rate of NO production
R1 = k1[N2][O2] (4-2)
The proportionality constant k1 is called the forward-reaction rate constant, and the bracketed terms [N2][O2] represent concentrations in moles per liter. This rate constant, which we will discuss in more detail in Chapter 22, usually varies with temperature. Most reactions go faster at higher temperatures, so k1 is larger at higher temperatures. But k1 does not depend on the concentrations of nitrogen and oxygen gases present. All of the concentration dependence of the overall forward reaction rate, R1, is contained in the terms [N2] and [O2]. If this reaction began rapidly in a sealed tank with high starting concentrations of both gases, then as more N2 and O2 were consumed, the forward reaction would become progressively slower. The rate of reaction would decrease because the frequency of collision of molecules would diminish as fewer N2 and O2 molecules were left in the tank.
The reverse reaction can also occur. If this reaction took place by the collision of two molecules of NO to make one molecule of each starting gas,
2NO → N2 + O2 (4-3)
then the rate of reaction again would be proportional to the concentration of each of the colliding molecules. Since these molecules are of the same compound, NO, the rate would be proportional to the square of the NO concentration:
Rate of NO removal ∝ [NO][NO]
R2 = k2[NO]2 (4-4)
where R2 is the overall reverse reaction rate and k2 is the reverse-reaction rate constant. If little NO is present when the experiment begins, this reaction will occur at a negligible rate. But as more NO accumulates from the forward reaction, it is broken down faster and faster by the reverse reaction.
Thus as the forward rate, R1, decreases, the reverse rate, R2 , increases. Eventually the point will be reached at which the forward and reverse reactions exactly balance (4-5):
R1 = R2 [N2][O2]k1 = k2[NO]2
This is the condition of equilibrium. Had you been monitoring the concentrations of the three gases, N2 O2, and NO, you would have found that the composition of the reacting mixture had reached an equilibrium state and thereafter ceased to change with time. This does not mean that the individual reactions had stopped, only that they were proceeding at equal rates; that is, they had arrived at, and thereafter maintained, a condition of balance or equilibrium.
The condition of equilibrium can be illustrated by imagining two large fish tanks, connected by a channel (Figure 4-1). One tank initially contains 10 goldfish, and the other contains 10 guppies. If you watch the fish swimming aimlessly long enough, you will eventually find that approximately 5 of each type of fish are present in each tank. Each fish has the same chance of blundering through the channel into the other tank. But as long as there are more goldfish in the left tank (Figure 4-la), there is a greater probability that a goldfish will swim from left to right than the reverse. Similarly, as long as the number of guppies in the right tank exceeds that in the left, there will be a net flow of guppies to the left, even though there is nothing in the left tank to make the guppies prefer it. Thus the rate of flow of guppies is proportional to the concentration of guppies present. A similar statement can be made for the goldfish.
At equilibrium (Figure 4-1b), on an average there will be 5 guppies and 5 goldfish in each tank. But they will not always be the same 5 of each fish. If 1 guppy wanders from the left tank into the right, then it or a different guppy may wander back a little later. Thus at equilibrium we find that the fish have not stopped swimming, only that over a period of time the total number of guppies and goldfish in each tank remains constant. If we were to fill each tank with 9 goldfish and then throw in 1 guppy, we would see that, in its aimless swimming, it would spend half its time in one tank and half in the other (Figure 4-1 c).
In the NO reaction we considered, there will be a constant concentration of NO molecules at equilibrium, but they will not always be the same NO molecules. Individual NO molecules will react to re-form N2 and O2, and other reactant molecules will make more NO. As with the goldfish, only on a head-count or concentration basis have changes ceased at equilibrium.
The equilibrium condition for the NO-producing reaction, equation 4-1, can be rewritten in a more useful form:
Keq = k1/k2 = [NO]2 / [N2][O2] (4-6)
in which the ratio of forward and reverse rate constants is expressed as a simple constant, the equilibrium constant, Keq. This equilibrium constant will vary as the temperature varies, but it is independent of the concentrations of the reactants and products. It tells us the ratio of products to reactants at equilibrium, and is an extremely useful quantity for determining whether a desired reaction will take place spontaneously.
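For readers who want to watch this balance emerge numerically, the short Python sketch below integrates the two rates for the simple one-step collision mechanism assumed above. The rate constants k1 and k2 and the starting concentrations are invented purely for illustration; the point is that the forward and reverse rates become equal and that the concentration ratio settles at k1/k2.

# Approach to equilibrium for N2 + O2 <=> 2NO, assuming the simple
# one-step collision mechanism discussed above. k1 and k2 are
# hypothetical values chosen only for illustration.
k1 = 2.0e-3                    # forward rate constant (hypothetical)
k2 = 5.0e-4                    # reverse rate constant (hypothetical)
n2, o2, no = 0.80, 0.60, 0.0   # concentrations, mole liter^-1
dt = 1.0                       # time step, s
for _ in range(50_000):
    r1 = k1 * n2 * o2          # forward rate, R1 = k1[N2][O2]
    r2 = k2 * no * no          # reverse rate, R2 = k2[NO]^2
    net = (r1 - r2) * dt       # moles/liter of N2 consumed this step
    n2 -= net
    o2 -= net
    no += 2.0 * net            # two NO formed per N2 consumed
print("forward rate:", k1 * n2 * o2)   # equal to the reverse rate...
print("reverse rate:", k2 * no * no)   # ...once equilibrium is reached
print("[NO]2/[N2][O2] =", no**2 / (n2 * o2), "  k1/k2 =", k1 / k2)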
General Form of the Equilibrium Constant
We derived the equilibrium-constant expression for the NO reaction by assuming that we knew the way that the forward and reverse steps occurred at the molecular level. If the NO reaction proceeded by simple collision of two molecules, the derivation would be perfectly correct. The actual mechanism of this reaction is more complicated. But it is important, and fortunate for chemists, that we do not have to know the reaction mechanism to write the proper equilibrium constant. The equilibrium-constant expression can always be written from the balanced chemical equation, with no other information, even when the forward and reverse rate expressions are more complicated than the balanced equation would suggest. (We shall prove this in Chapter 16.) In our NO example, the forward reaction actually takes place by a series of complicated chain steps. The reverse reaction takes place by a complementary set of reactions, so that these complications cancel one another in the final ratio of concentrations that gives us the equilibrium constant. The details of the mechanism are "invisible" to the equilibrium-constant expression, and irrelevant to equilibrium calculations.
A general chemical reaction can be written as
aA + bB ⇌ cC + dD (4-7)
In this expression, A and B represent the reactants; C and D, the products. The letters a, b, c, and d represent the number of moles of each substance involved in the balanced reaction, and the double arrows indicate a state of equilibrium. Although only two reactants and two products are shown in the general reaction, the principle is extendable to any number. The correct equilibrium-constant expression for this reaction is
Keq = [C]^c[D]^d / [A]^a[B]^b (4-8)
It is the ratio of product concentrations to reactant concentrations, with each concentration term raised to a power given by the number of moles of that substance appearing in the balanced chemical equation. Because it is based on the quantities of reactants and products present at equilibrium, equation 4-8 is called the law of mass action.
Example 1. Give the equilibrium-constant expression for the reaction
The equilibrium constant is given by
Since all four substances have a coefficient of 1 in the balanced equation, their concentrations are all raised to the first power in the equilibrium-constant expression.
Example 2. What is the equilibrium-constant expression for the formation of water from hydrogen and oxygen gases? The reaction is
2H2 + O2 ⇌ 2H2O
and the equilibrium-constant expression is
Keq = [H2O]2 / [H2]2[O2]
Since two moles of hydrogen and water are involved in the chemical equation, their concentrations are squared in the Keq expression.
Example 3. Give the equilibrium-constant expression for the dissociation (breaking up) of water into hydrogen and oxygen. The reaction is
2H2O ⇌ 2H2 + O2
and the equilibrium-constant expression is
Keq = [H2]2[O2] / [H2O]2
An important general point emerges here. This reaction is the reverse of that of Example 2, and the equilibrium-constant expression is the inverse, or reciprocal, of the earlier one. If a balanced chemical reaction is reversed, then the equilibrium-constant expression must be inverted, since what once were reactants now are products, and vice versa.
Example 4. The dissociation of water can just as properly be written as
H2O ⇌ H2 + 1/2 O2
What then is the equilibrium-constant expression?
Keq = [H2][O2]1/2 / [H2O]
Notice that when the reaction from Example 3 is divided by 2, resulting in the Example 4 reaction, the equilibrium constant is the square root of the old value, or the old Keq to the one-half power. Similarly, if the reaction is doubled, the Keq must be squared. In general, it is perfectly proper to multiply all the coefficients of a balanced chemical reaction by any positive or negative number, n, and the equation will remain balanced. (Multiplying all the coefficients of an equation by -1 is formally the same as writing the equation in reverse. Write out a simple equation and prove to yourself that this is so.) But if all the coefficients of an equation are multiplied by n, then the new equilibrium-constant expression is the old one raised to the nth power. Hence, when working with equilibrium constants, one must keep the corresponding chemical reactions clearly in mind.
Example 5. The reaction for the formation or the breakdown of ammonia can be written in a number of ways:
(Each of these expressions might be appropriate, depending on whether you were focusing on nitrogen, ammonia, hydrogen, or the dissociation of ammonia.) What are the equilibrium-constant expressions for each formulation, and how are the equilibrium constants related?
Notice that there is nothing wrong with fractional powers in the equilibrium-constant expression.
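The bookkeeping behind Example 5 reduces to two rules stated above: reversing a reaction inverts Keq, and multiplying every coefficient by n raises Keq to the nth power. The tiny Python sketch below makes the arithmetic explicit, using a purely hypothetical Keq value.

# Rules for manipulating equilibrium constants, illustrated with a
# hypothetical Keq = 64 for some balanced reaction R.
K = 64.0
print(1.0 / K)    # reaction written in reverse: reciprocal, 0.015625
print(K ** 2)     # all coefficients doubled: Keq squared, 4096.0
print(K ** 0.5)   # all coefficients halved: square root, 8.0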
Using Equilibrium Constants
Equilibrium constants have two main purposes:
- 1. To help us tell whether a reaction will be spontaneous under specified conditions.
- 2. To enable us to calculate the concentration of reactants and products that will be present once equilibrium has been reached.
We can illustrate how equilibrium constants can be used to achieve these ends, and also the fact that an equilibrium constant is indeed constant, with real data from one of the most intensively studied of all reactions, that between hydrogen and iodine to yield hydrogen iodide:
H2(g) + I2(g) ⇌ 2HI(g) (4-9)
If we mix hydrogen and iodine in a sealed flask and observe the reaction, the gradual fading of the purple color of the iodine vapor tells us that iodine is being consumed. This reaction was studied first by the German chemist Max Bodenstein in 1893. Table 4-1 contains the data from Bodenstein's experiments. The experimental data are in the first three columns. In the fourth column, we have calculated the simple ratio of product and reactant concentrations, [HI]/[H2][I2], to see if it is constant. It clearly is not, for as the hydrogen concentration is decreased and the iodine concentration is increased, this ratio varies from 2.60 to less than 1. The law of mass action (Section 4-3) dictates that the equilibrium-constant expression should contain the square of the HI concentration, since the reaction involves 2 moles of HI for every mole of H2 and I2. The fifth column shows that the ratio [HI]2/[H2][I2] is constant within a mean deviation of approximately 3%.* Therefore, this ratio is the proper equilibrium-constant expression, and the average value of Keq for these six runs is 50.53.
The equilibrium constant can be used to determine whether a reaction under specified conditions will go spontaneously in the forward or in the reverse direction. The ratio of product concentration to reactant concentration, identical to the equilibrium constant in form but not necessarily at equilibrium conditions, is called the reaction quotient, Q:
Q = [C]^c[D]^d / [A]^a[B]^b (not necessarily at equilibrium) (4-10)
If there are too many reactant molecules present for equilibrium to exist, then the concentration terms in the denominator will make the reaction quotient, Q, smaller than Keq. The reaction will go forward spontaneously to make more product. However, if an experiment is set up so that the reaction quotient is greater than Keq, then too many product molecules are present for equilibrium and the reverse reaction will proceed spontaneously. Therefore, a comparison of the actual concentration ratio or reaction quotient with the equilibrium constant allows us to predict in which direction a reaction will go spontaneously under the given set of circumstances:
Q < Keq (forward reaction spontaneous)
Q > Keq (reverse reaction spontaneous)
Q = Keq (reactants and products at equilibrium) (4-11)
*These are Bodenstein's original numbers. Modern data can be much more accurate, with less deviation in Keq. The mean deviation is the average of the deviations of individual calculated Keq values from the average Keq.
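Relation 4-11 is easy to automate. The small Python function below (its name and tolerance are ours, not the text's) computes Q for the hydrogen-iodine reaction and compares it with Keq = 50.53; the three calls correspond to the situations worked out in the examples that follow.

# Compare the reaction quotient Q with Keq for H2 + I2 <=> 2HI at 448 C.
def direction(h2, i2, hi, keq=50.53, tol=0.02):
    """Concentrations in mole liter^-1; tol is a relative tolerance."""
    q = hi**2 / (h2 * i2)
    if abs(q - keq) <= tol * keq:
        return q, "at equilibrium (within the accuracy of the data)"
    if q < keq:
        return q, "forward reaction spontaneous: more HI will form"
    return q, "reverse reaction spontaneous: HI will dissociate"

print(direction(1.0e-2, 1.0e-2, 2.0e-3))   # Q = 0.040 -> forward
print(direction(1.0e-3, 1.0e-3, 2.0e-2))   # Q = 400   -> reverse
print(direction(1.0e-3, 1.0e-3, 7.1e-3))   # Q = 50.4  -> equilibrium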
Example 6. If 1.0 × 10-2 mole each of hydrogen and iodine gases are placed in a 1-liter flask at 448°C with 2.0 × 10-3 mole of HI, will more HI be produced?
The reaction quotient under these conditions is
Q = [HI]2 / [H2][I2] = (2.0 × 10-3)2 / (1.0 × 10-2)(1.0 × 10-2) = 0.040
This is smaller than the equilibrium value of 50.53 in Table 4-1, which tells us that excess reactants are present. Hence, equilibrium will not be reached until more HI has been formed.
Example 7. If only 1.0 × 10-3 mole each of H2 and I2 had been used, together with 2.0 × 10-3 mole of HI, would more HI have been produced spontaneously?
You can verify that the reaction quotient is Q = 4.0. Because this is less than Keq, the forward reaction is still spontaneous.
Example 8. If the conditions of Example 7 are changed so that the HI concentration is increased to 2.0 × 10-2 mole liter-1, what happens to the reaction?
The reaction quotient now is Q = 400. This is greater than Keq. There are now too many product molecules and too few reactant molecules for equilibrium to exist. Thus the reverse reaction occurs more rapidly than the forward reaction. Equilibrium is reached only by converting some of the HI to H2 and I2, so the reverse reaction is spontaneous.
Example 9. If the conditions of Example 7 are changed so that the HI concentration is 7.1 × 10-3 mole liter-1, in which direction is the reaction spontaneous?
Under these conditions,
Q = (7.1 × 10-3)2 / (1.0 × 10-3)(1.0 × 10-3) = 50.4
Since Q equals Keq within the limits of accuracy of the data, the system as described is at equilibrium, and neither the forward nor the backward reaction is spontaneous. (Both reactions are still taking place at the molecular level, of course, but they are balanced so their net effects cancel.)
The second use for equilibrium constants is to calculate the concentrations of reactants and products that will be present at equilibrium.
Example 10. If a 1-liter flask contains 1.0 × 10-3 mole each of H2 and I2 at 448°C, what amount of HI is present when the gas mixture is at equilibrium?
The Keq expression is treated as an ordinary algebraic equation, and solved for the HI concentration:
[HI] = (Keq[H2][I2])1/2 = (50.53 × 1.0 × 10-3 × 1.0 × 10-3)1/2 = 7.1 × 10-3 mole liter-1
You can verify that in Example 7 the HI concentration was less than this equilibrium value; in Example 8 it was more; and in Example 9 it was just this value.
Example 11. One-tenth of a mole, 0.10 mole, of hydrogen iodide is placed in an otherwise empty 5.0 liter flask at 448°C. When the contents have come to equilibrium, how much hydrogen and iodine will be in the flask?
From the stoichiometry of the reaction, the concentrations of H2 and I2 must be the same. For every mole of H2 and I2 formed, 2 moles of HI must decompose. Let y equal the number of moles of H2 or I2 per liter present at equilibrium. The initial concentration of HI before any dissociation has occurred is
[HI] = 0.10 mole / 5.0 liters = 0.020 mole liter-1
Begin by writing a balanced equation for the reaction, then make a table of concentrations at the start and at equilibrium:
H2 + I2 ⇌ 2HI
Start: [H2] = 0; [I2] = 0; [HI] = 0.020 mole liter-1
Equilibrium: [H2] = y; [I2] = y; [HI] = 0.020 - 2y
The HI concentration of 0.020 mole liter-1 has been decreased by 2y for every y moles of H2 and I2 that are formed. The equilibrium-constant expression is
Keq = [HI]2 / [H2][I2] = (0.020 - 2y)2 / y2 = 50.53
We immediately see that we can take a shortcut by taking the square root of both sides:
(0.020 - 2y) / y = (50.53)1/2 = 7.11     or     y = 0.0022 mole liter-1
For 5 liters, 5 × 0.0022 = 0.011 mole of H2 and of I2 will be present at equilibrium. Only (0.020 - 0.0044) × 5 = 0.080 mole of HI will be left in the 5-liter tank, and the fraction of HI dissociated at equilibrium is
0.0044 / 0.020 = 0.22, or 22%
Shortcuts such as taking the square root in the preceding example are not always possible, yet part of the skill of solving equilibrium problems lies in recognizing shortcuts when they occur and using them. The key is often a good intuition about what quantities are large and small relative to one another, and this intuition comes from thoughtful practice and understanding of the chemistry involved. You should remember that these are chemical problems, not mathematical ones.
In many cases a quadratic equation must be solved.
Example 12. If 0.00500 mole of hydrogen gas and 0.0100 mole of iodine gas are placed in a 5.00 liter tank at 448°C, how much HI will be present at equilibrium?
The initial concentrations of H2 and I2 are
[H2] = 0.00500 mole / 5.00 liters = 1.00 × 10-3 mole liter-1
[I2] = 0.0100 mole / 5.00 liters = 2.00 × 10-3 mole liter-1
This time, let the unknown variable y be the moles per liter of H2 or I2 that have reacted at equilibrium:
Equilibrium: [H2] = 1.00 × 10-3 - y; [I2] = 2.00 × 10-3 - y; [HI] = 2y
The equilibrium expression is
Keq = [HI]2 / [H2][I2] = (2y)2 / (1.00 × 10-3 - y)(2.00 × 10-3 - y) = 50.53
The square-root shortcut is now impossible because the starting concentrations of H2 and I2 are unequal. Instead we must reduce the equation to a quadratic expression:
46.53y2 - 0.1516y + 1.011 × 10-4 = 0
A general quadratic equation of the form ay2 + by + c = 0 can be solved by the quadratic formula,
y = [-b ± (b2 - 4ac)1/2] / 2a
Thus for this problem
y = [0.1516 ± (0.02298 - 0.01881)1/2] / (2 × 46.53) = (0.1516 ± 0.0646) / 93.06
y = 2.32 × 10-3 mole liter-1     or     y = 0.935 × 10-3 mole liter-1
The first solution is physically impossible since it shows more H2 reacting than was originally present. The second solution is the correct answer: y = 0.935 × 10-3 mole liter-1. Therefore, the equilibrium concentrations are
[H2] = 1.00 × 10-3 - 0.935 × 10-3 = 6.5 × 10-5 mole liter-1
[I2] = 2.00 × 10-3 - 0.935 × 10-3 = 1.07 × 10-3 mole liter-1
[HI] = 2 × 0.935 × 10-3 = 1.87 × 10-3 mole liter-1
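The same arithmetic is easy to check by machine. The Python sketch below rebuilds the quadratic from Keq and the starting concentrations of Example 12 and applies the quadratic formula; the variable names are ours, chosen only for this illustration.

# Verify Example 12: (2y)^2 = Keq (h0 - y)(i0 - y), rearranged to
# (Keq - 4) y^2 - Keq (h0 + i0) y + Keq h0 i0 = 0.
from math import sqrt

keq = 50.53
h2_0, i2_0 = 1.00e-3, 2.00e-3          # initial concentrations, mole/liter

a = keq - 4.0                          # coefficient of y^2
b = -keq * (h2_0 + i2_0)               # coefficient of y
c = keq * h2_0 * i2_0                  # constant term

roots = [(-b + s * sqrt(b*b - 4*a*c)) / (2*a) for s in (1.0, -1.0)]
print(roots)                           # ~2.32e-3 (impossible) and ~0.935e-3
y = min(roots)                         # the physically possible root

print("[H2] =", h2_0 - y)              # ~6.5e-5  mole/liter
print("[I2] =", i2_0 - y)              # ~1.07e-3 mole/liter
print("[HI] =", 2 * y)                 # ~1.87e-3 mole/liter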
Units and Equilibrium Constants
As we have seen, the square brackets around a chemical symbol, as in [N2], represent concentrations, usually but not exclusively in units of moles liter-1. Concentrations expressed as moles liter-1 are often given the special symbol c, as in cN2, and an equilibrium constant written in terms of concentrations measured in these units is denoted by Kc.
An equilibrium constant as we have defined it thus far may itself have units. In Example 1, Keq is unitless since the moles2 liter-2 of the numerator and denominator cancel. In Example 2, the units of Keq are moles-1 liter since concentration occurs to the second power in the numerator and to the third power in the denominator. In Example 3 the units of Keq are the inverse: moles liter-1. The units demanded by Example 4, moles1/2 liter-1/2, may seem strange but they are perfectly respectable.
Example 13. What are the units for the equilibrium constants in the four reactions of Example 5?
The question of units for Keq becomes important as soon as we realize that we can measure concentration in units other than moles liter-1. The partial pressure in atmospheres is a convenient unit when dealing with gas mixtures, and the equilibrium constant then is identified by Kp. Since the numerical values of Kp and Kc in general will be different, one must be sure what the units are when using a numerical constant.
Example 14. One step in the commercial synthesis of sulfuric acid is the reaction of sulfur dioxide and oxygen to make sulfur trioxide:
2SO2(g) + O2(g) ⇌ 2SO3(g)
At 1000 K, the equilibrium constant for this reaction is Kp = 3.50 atm-1. If the total pressure in the reaction chamber is 1.00 atm and the partial pressure of unused O2 at equilibrium is 0.10 atm, what is the ratio of concentrations of product (SO3) to reactant (SO2)?
Kp = (pSO3)2 / (pSO2)2(pO2) = 3.50 atm-1
(pSO3)2 / (pSO2)2 = 3.50 atm-1 × 0.10 atm = 0.35
pSO3 / pSO2 = 0.59
The equilibrium mixture has 0.59 mole of SO3 for every 1 mole of SO2.
The ideal gas law permits us to convert between atmospheres and moles liter-1, and between Kp and Kc:
PV = nRT (3-8)
P = (n/V)RT = cRT (4-12)
In the general chemical reaction written earlier,
aA + bB ⇌ cC + dD
Δn (read "delta n"), the increase in number of moles of gas during the reaction, is
Δn = c + d - a - b (4-13)
The equilibrium-constant expression in terms of partial pressures is
Kp = (pC)^c(pD)^d / (pA)^a(pB)^b (4-14)
With the ideal gas law applied to each gas component, we can convert this expression to Kc:
Kp = Kc(RT)Δn (4-15)
(Do not confuse the two uses of the symbol c in equation 4-15: one is for concentration in moles liter-1 and the other for the number of moles of substance C.)
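Equation 4-15 is the whole story of converting between the two sets of units, and it is compact enough to code directly. The small Python helpers below are a convenience of ours, not part of the text; R is taken as 0.08206 liter atm mole-1 K-1, so Kp carries atmosphere units and Kc carries mole-per-liter units.

# Convert between Kp and Kc using Kp = Kc(RT)^dn  (equation 4-15).
R = 0.08206   # liter atm mole^-1 K^-1

def kc_from_kp(kp, delta_n, temperature_k):
    return kp / (R * temperature_k) ** delta_n

def kp_from_kc(kc, delta_n, temperature_k):
    return kc * (R * temperature_k) ** delta_n

# The sulfur trioxide reaction of Example 14: 3 moles of gas -> 2, dn = -1.
print(kc_from_kp(3.50, -1, 1000.0))   # about 287 liter mole^-1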
Example 15. What is the numerical value of Kc for the reaction of Example 14?
Three moles of reactant gases are converted into only 2 moles of product, so Δn = -1. Hence at 1000 K,
Kc = Kp / (RT)Δn = Kp(RT) = 3.50 atm-1 × 0.0821 liter atm mole-1 K-1 × 1000 K = 287 liter mole-1
Although the numerical answers that result when different units are used may differ, the physical reality must be the same.
Example 16. What is the concentration of oxygen in Example 14, in moles liter-1? Solve Example 14 again using Kc from Example 15.
From the ideal gas law,
[O2] = pO2 / RT = 0.10 atm / (0.0821 liter atm mole-1 K-1 × 1000 K) = 1.22 × 10-3 mole liter-1
Then, with Kc = 287 liter mole-1,
[SO3]2 / [SO2]2 = Kc[O2] = 287 × 1.22 × 10-3 = 0.35     and     [SO3] / [SO2] = 0.59
This is the same ratio of SO3 to SO2 as was obtained when atmospheres were used. The choice is one of convenience.
Equilibrium Involving Gases with Liquids or Solids
All the examples considered so far have involved only one physical state, a gas, and are examples of homogeneous equilibria. Equilibria that involve two or more physical states (such as a gas with a liquid or a solid) are called heterogeneous equilibria. If one or more of the reactants or products are solids or liquids, how does this affect the form of the equilibrium constant?
The answer, in short, is that any pure solids or liquids that may be present at equilibrium have the same effect on the equilibrium no matter how much solid or liquid is present. The concentration of a pure solid or liquid can be considered constant, and for convenience all such constant terms are brought to the left side of the equation and incorporated into the equilibrium constant itself. As an example, limestone (calcium carbonate, CaCO3) breaks down into quicklime (calcium oxide, CaO) and carbon dioxide, CO2:
CaCO3(s) ⇌ CaO(s) + CO2(g)
The simple equilibrium-constant expression is
- K'eq = [CaO(s)][CO2(g)] / [CaCO3(s)]
As long as any solid limestone and quicklime are in contact with the gas, their effect on the equilibrium is unchanging. Hence the terms [CaCO3] and [CaO] remain constant and can be merged with K'eq:
- Keq = K'eq([CaCO3(s)] / [CaO(s)]) = [CO2(g)]
This form of the equilibrium-constant expression tells us that, at a given temperature, the concentration of carbon-dioxide gas above limestone and calcium oxide is a fixed quantity. (This is true only as long as both solid forms are present.) Measuring concentration in units of atmospheres, we get
- Kp = pCO2
with the experimental value 0.236 atm at 800°C.
We can see what this means experimentally by considering a cylinder to which CaCO3 and CaO have been added. The cylinder has a movable piston, as shown in Figure 4-2. If the piston is fixed at one position, then CaCO3 will decompose until the pressure of CO2 above the solids is 0.236 atm (if the temperature is 800°C). If you try to decrease the pressure by raising the piston, then more CaCO3 will decompose until the pressure again rises to 0.236 atm. Conversely, if you try to increase the pressure by lowering the piston, some of the CO2 gas will react with CaO and become CaCO3, decreasing the amount of CO2 gas present until the pressure once more is 0.236 atm. The only way to increase pCO2 is to raise the temperature, which increases the value of Kp itself to 1 atm at 894°C and to 1.04 atm at 900°C.
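A quick ideal-gas calculation shows what the piston experiment implies: whatever volume the piston allows, the CO2 pressure stays pinned at 0.236 atm (at 800°C), so the number of moles of gas simply scales with the volume, and the solid CaCO3 supplies or reabsorbs the difference. The Python sketch below is an illustration of ours; only the 0.236 atm value comes from the text.

# Moles of CO2 above CaCO3/CaO at 800 C for several piston positions.
R = 0.08206              # liter atm mole^-1 K^-1
T = 800.0 + 273.15       # K
p_co2 = 0.236            # atm, fixed by Kp as long as both solids remain
for volume in (1.0, 5.0, 10.0):               # liters
    n_co2 = p_co2 * volume / (R * T)          # ideal gas law, n = PV/RT
    print(volume, "L:", round(n_co2, 5), "mole CO2 at 0.236 atm")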
An even simpler example is the vaporization of a liquid such as water:
H2O(l) ⇌ H2O(g)
This process can be treated as a chemical reaction in a formal sense even though bonds within molecules are not made or broken. Imagine that the cylinder shown in Figure 4-2 is half-filled with water rather than with CaCO3 and CaO, and that the piston is initially brought down to the surface of the water. As the piston is raised, liquid will evaporate until the pressure of water vapor is a constant value that depends only on the temperature. This is the equilibrium vapor pressure of water at that temperature. At 25°C, the vapor pressure of water is 0.0313 atm. At 100°C, the vapor pressure reaches 1 atm and, as we shall see in Chapter 18, this is just the definition of the normal boiling point of water. The pressure of water vapor above the liquid in the cylinder does not depend on whether the water in the cylinder is 1 cm or 10 cm deep; the only requirement is that some water be present and capable of evaporating to make up any decrease in vapor pressure. Only when the piston is raised to the point where no more liquid exists can the pressure of water vapor fall below 0.0313 atm, if the cylinder is at 25°C. Similarly, if the piston is lowered, some of the vapor condenses, keeping the pressure at 0.0313 atm. Only when all vapor has condensed and the piston is resting on the surface of the liquid can the pressure inside the cylinder be raised above 0.0313 atm.
The formal equilibrium treatment of the evaporation of water would be
- K'eq = [H2O(g)] / [H2O(l)]
- [H2O(l)] = constant, as long as liquid is present
- Keq = K'eq[H2O(l)] = [H2O(g)]
In pressure units, the expression would be
- Kp = pH2O(g)
From a practical standpoint, what the preceding discussion means is that the concentration terms for pure solids and liquids are simply eliminated from the equilibrium-constant expression. (They are present, implicitly, in the Keq.)
Example 17. If the hydrogen iodide reaction previously discussed in this chapter is carried out at room temperature, then iodine is present as deep purple crystals rather than as vapor. What then is the form of the equilibrium-constant expression, and does the equilibrium depend on the amount of iodine crystals present?
The reaction is
H2(g) + I2(s) ⇌ 2HI(g)
and the equilibrium-constant expression is:
Keq = [HI]2 / [H2]
As long as some I2(s) crystals are present, the quantity is immaterial as far as equilibrium is concerned.
Example 18. Tin(IV) oxide reacts with carbon monoxide to form metallic tin and CO2 by the reaction
SnO2(s) + 2CO(g) ⇌ Sn(s) + 2CO2(g)
What is the equilibrium-constant expression?
Keq = [CO2]2 / [CO]2
Example 19. What is the equilibrium-constant expression for the following reaction leading to liquid water?
2H2(g) + O2(g) ⇌ 2H2O(l)
What would the expression be if the product were water vapor?
If the product is H2O(l), the equilibrium-constant expression is
Keq = 1 / [H2]2[O2]
If the product is H2O(g), the equilibrium-constant expression is
Keq = [H2O(g)]2 / [H2]2[O2]
The preceding example shows that as long as liquid water is present the gas-phase concentration is fixed at the vapor pressure of water at that temperature. Hence the water contribution, being constant, can be lumped into Keq.
Factors Affecting Equilibrium: Le Chatelier's Principle
Equilibrium represents a balance between two opposing reactions. How sensitive is this balance to changes in the conditions of a reaction? What can be done to change the equilibrium state? These are very practical questions if, for example, one is trying to increase the yield of a useful product in a reaction.
Under specified conditions, the equilibrium-constant expression tells us the ratio of product to reactants when the forward and backward reactions are in balance. This equilibrium constant is not affected by changes in concentration of reactants or products. However, if products can be withdrawn continuously, then the reacting system can be kept constantly off-balance, or short of equilibrium. More reactants will be used and a continuous stream of new products will be formed. This method is useful when one product of the reaction can escape as a gas, be condensed or frozen out of a gas phase as a liquid or solid, be washed out of the gas mixture by a spray of a liquid in which it is especially soluble, or be precipitated from a gas or solution.
For example, when solid lime (CaO) and coke (C) are heated in an electric furnace to make calcium carbide (CaC2),
the reaction, which at 2000-3000°C has an equilibrium constant of close to 1.00, is tipped toward calcium carbide formation by the continuous removal of carbon monoxide gas. In the industrial manufacture of titanium dioxide for pigments, TiCl4 and O2 react as gases:
The product separates from the reacting gases as a fine powder of solid Ti02 , and the reaction is thus kept moving in the forward direction. When ethyl acetate or other esters used as solvents and flavorings are synthesized from carboxylic acids and alcohols,
the reaction is kept constantly off-balance by removing the water as fast as it is formed. This can be done by using a drying agent such as Drierite (CaS04), by running the reaction in benzene and boiling off a constant-boiling benzene-water mixture, or by running the reaction in a solvent in which the water is completely immiscible and separates as droplets in a second phase. A final example: Since ammonia is far more soluble in water than either hydrogen or nitrogen is, the yield of ammonia in the reaction
can be raised to well over 90% by washing the ammonia out of the equilibrium mixture of gases with a stream of water, and recycling the nitrogen and hydrogen.
All the preceding methods will upset an equilibrium (in our examples, in favor of desired products) without altering the equilibrium constant. A chemist can often enhance yields of desired products by increasing the equilibrium constant so that the ratio of products to reactants at equilibrium is larger. The equilibrium constant is usually temperature dependent. In general, both forward and reverse reactions are speeded up by increasing the temperature, because the molecules move faster and collide more often. If the increase in the rate of the forward reaction is greater than that of the reverse, then Keq. increases with temperature and more products are formed at equilibrium. If the reverse reaction is favored, then Keq. decreases. Thus Keq for the hydrogen- iodine reaction at 448°C is 50.53, but at 425°C it is 54.4, and at 357°C it increases to 66.9. Production of HI is favored to some extent by an increase in temperature, but its dissociation to hydrogen and iodine is favored much more.
The hydrogen iodide-producing reaction is exothermic or heat emitting:
(If you check this figure against Appendix 3, remember that this reaction involves gaseous iodine, not solid.) If the external temperature of this reaction is lowered, the equilibrium is shifted in favor of the heat-emitting or forward reaction; conversely, if the temperature is increased, the reverse reaction, producing H2 and I2 is favored. The equilibrium shifts so as to counteract to some extent the effect of adding heat externally (raising the temperature) or removing it (lowering the temperature).
The temperature dependence of the equilibrium point is one example of a more general principle, known as Le Chatelier's principle: If an external stress is applied to a system at chemical equilibrium, then the equilibrium point will change in such a way as to counteract the effects of that stress. If the forward half of an equilibrium reaction is exothermic, then Keq will decrease as the temperature increases; if it is endothermic, Keq will increase. Only for a heat-absorbing reaction can the equilibrium yield of products be improved by increasing the temperature. A good way to remember this is to write the reaction explicitly with a heat term:
Then it is clear that adding heat, just like adding HI, shifts the reaction to the left. (see Figure 4-3.)
Le Chatelier's principle is true for other kinds of stress, such as pressure changes. The equilibrium constant, Keq, is not altered by a pressure change at constant temperature. However, the relative amounts of reactants and products will change in a way that can be predicted from Le Chatelier's principle.
The hydrogen- iodine reaction involves an equal number (2) of moles of reactants and product. Therefore, if we double the pressure at constant temperature, the volume of the mixture of gases will be halved. All concentrations in moles liter-1 will be doubled, but their ratio will be the same. In Example 12, doubling the concentrations of the reactants and product does not change the equilibrium constant:
- Keq =
- = 50.51
Thus the hydrogen- iodine equilibrium is not sensitive to pressure changes. Notice that in this case Keq does not have units, since the concentration units in the numerator and denominator cancel.
In contrast, the dissociation of ammonia is affected by changes in pressure because the number of moles (2) of reactant does not equal the total number of moles (4) of products:
The equilibrium constant for this reaction at 25°C is
- Keq = 2.5 10-9 mole2 liter -2
One set of equilibrium conditions is
- N2 = 3.28 10-3 mole liter-1
- H2 = 2.05 10-3 mole liter-1
- NH3 = 0.106 mole liter-1
(Can you verify that these concentrations satisfy the equilibrium condition?) If we now double the pressure at constant temperature, thereby halving the volume and doubling each concentration,
- N2 = 6.56 10-3 mole liter-1
- H2 = 4.10 10-3 mole liter-1
- NH3 = 0.212 mole liter-1
the ratio of products to reactants, the reaction quotient, is no longer equal to Keq:
- Q = 1.0 10-8 mole2 liter-2
Since Q is greater than Keq, too many product molecules are present for equilibrium. The reverse reaction will run spontaneously, thereby forming more NH3 and decreasing the amounts of H2 and N2. Consequently, part of the increased pressure is offset when the reaction shifts in the direction that lowers the total number of moles of gas present. In general, a reaction that reduces the number of moles of gas will be favored by an increase in pressure, and one that produces more gas will be disfavored. (See Figure 4-4.)
|If the hydrogen iodide reaction were run at a temperature at which the iodine was a solid, would an increase in pressure shift the equilibrium reaction toward more HI, or less? What would be the effect of pressure on Keq?|
Since the reaction of 2 moles of gaseous HI now yields 1 mole of gaseous H2 and 1 mole of solid I2 the stress of increased pressure is relieved by dissociating HI to H2 and I2. However, Keq will be unchanged by the pressure increase.
What effect does a catalyst have on a reaction at equilibrium? None. A catalyst cannot change the value of Keq, but it can increase the speed with which equilibrium is reached. This is the main function of a catalyst. It can take the reaction only to the same equilibrium state that would be reached eventually without the catalyst.
Catalysts are useful, nevertheless. Many desirable reactions, although spontaneous, occur at extremely slow rates under ordinary conditions. In automobile engines, the main smog-producing reaction involving oxides of nitrogen is
(Once NO is present, it reacts readily with more oxygen to make brown N02.) At the high temperature of an automobile engine, Keq for this reaction is so large that appreciable amounts of NO are formed. However, at 25°C, Keq= 10-30. (Using only the previous two bits of information and Le Chatelier's principle, predict whether the reaction as written is endothermic or exothermic. Check your answer using data from Appendix 3.) The amount of NO present in the atmosphere at equilibrium at 25°C should be negligible. NO should decompose spontaneously to N2 and O2 as the exhaust gases cool. But any Southern Californian can verify that this is not what happens. Both NO and N02 are indeed present, because the gases of the atmosphere are not at equilibrium.
The rate of decomposition of NO is extremely slow, although the reaction is spontaneous. One approach to the smog problem has been to search for a catalyst for the reaction
that could be housed in an exhaust system and could break down NO in the exhaust gases as they cool. Finding a catalyst is possible; a practical problem arises from the gradual poisoning of the catalyst by gasoline additives, such as lead compounds. This is the reason why new cars with catalytic converters only use lead-free gasoline.
A proof of the assertion that a catalyst cannot change the equilibrium constant is illustrated in Figure 4-5. If a catalyst could shift the equilibrium point of a reacting gas mixture and produce a volume change, then this expansion and contraction could be harnessed by mechanical means and made to do work. We would have a true perpetual-motion machine that would deliver power without an energy source. From common sense and experience we know this to be impossible. This "common sense" is stated scientifically as the first law of thermodynamics, which will be discussed in Chapter 15. A mathematician would call this a proof by contradiction: If we assume that a catalyst can alter Keq, then we must assume the existence of a perpetual-motion machine. However, a perpetual-motion machine cannot exist; therefore our initial assumption was wrong, and we must conclude that a catalyst cannot alter Keq.
In summary, Keq is a function of temperature, but it is not a function of reactant or product concentrations, total pressure, or the presence or absence of catalysts. The relative amounts of substances at equilibrium can be changed by applying an external stress to the equilibrium mixture of reactants and products, and the change is one that will relieve this stress. This last statement, Le Chatelier's principle, enables us to predict what will happen to a reaction when external factors are changed, without having to make exact calculations.
A spontaneous reaction is one that will take place, given enough time, without outside assistance. Some spontaneous reactions are rapid, but time is not an element in the definition of spontaneity. A reaction can be almost infinitely slow and still be spontaneous.
The net reaction that we observe is the result of competition between forward and reverse steps. If the forward process is faster, then products accumulate, and we say that the reaction is spontaneous in the forward direction. If the reverse process is faster, then reactants accumulate, and we say that the reverse reaction is the spontaneous one. If both forward and reverse processes take place at the same rate, then no net change is observed in any of the reaction components. This is the condition of chemical equilibrium.
The ratio of products to reactants, each concentration term being raised to a power corresponding to the coefficient of that substance in the balanced chemical equation, is called the equilibrium constant, Keq. (See equation 4-8.) It can be used to predict whether a given reaction under specified conditions will be spontaneous, and to calculate the concentrations of reactants and products at equilibrium. The reaction quotient, Q, has a form that is identical with that of the equilibrium constant, Keq, but Q applies under nonequilibrium conditions as well. For a given set of conditions, if Q is smaller than Keq, the forward reaction is spontaneous; if Q is greater than Keq, the reverse reaction is spontaneous; and if Q = Keq, the system is at equilibrium.
The equilibrium constant can be used with any convenient set of concentration units: moles liter-1 , pressure in atmospheres, or others. Its numerical value will depend on the units of concentration, so one must be careful to match the proper values of Keq and units when solving problems. If gas concentrations are expressed in moles liter-1, the equilibrium constant is designated by Kc; if in atmospheres, by Kp. Just as partial pressure of the jth component of a gas mixture is related to moles per liter by pj = cjRT, so Kp and Kc are related by Kp = Kc(RT)Δn, in which Δn is the net change in number of moles of gas during the reaction.
When some of the reactants or products are pure solids or liquids, they act as infinite reservoirs of material as long as some solid or liquid is left. Their effect on equilibrium depends only on their presence, not on how much of the solid or liquid is present. Their effective concentrations are constant, and can be incorporated into Keq. In practice, this simply means omitting concentration terms for pure solids and liquids from the equilibrium-constant expression. Evaporation of a liquid can be treated formally as a chemical reaction with the liquid as reactant and vapor as product. These conventions for writing concentration terms for a liquid permit us to write the equilibrium constant for evaporation as Kp = pj where pj is the equilibrium vapor pressure of substance j.
Le Chatelier's principle states that if stress is applied to a system at equilibrium the amounts of reactants and products will shift in such a manner as to minimize the stress. This means that for a heat-absorbing, or endothermic, reaction, Keq increases as the temperature is increased, since carrying out more of the reaction is a way of absorbing some of the added heat. Similarly, cooling increases Keq for a heat-emitting or exothermic reaction. Although the equilibrium constant Keq is independent of pressure, and changing the total pressure on a reacting system does not alter Keq directly, an increase in pressure does cause the reaction to shift in the direction that decreases the total number of moles of gas present.
A catalyst has no effect at all on Keq or the conditions of equilibrium. All that a catalyst can do is to make the system reach equilibrium faster than it would have done otherwise. Catalysts can make inherently spontaneous but slow reactions into rapid reactions, but they cannot make nonspontaneous reactions take place of their own accord. | <urn:uuid:0380ba0a-eb55-47d7-9ade-aefc9151e100> | 4.25 | 9,948 | Knowledge Article | Science & Tech. | 47.625577 | 765 |
Splash (fluid mechanics)
In fluid mechanics, a splash is a sudden disturbance to the otherwise quiescent free surface of a liquid (usually water). The disturbance is typically caused by a solid object suddenly hitting the surface, although splashes can occur in which moving liquid supplies the energy. This use of the word is onomatopoeic.
Splashes are characterized by transient ballistic flow, and are governed by the Reynolds number and the Weber number. In an image of a brick splashing into water, one can identify freely moving airborne water droplets, a phenomenon typical of high Reynolds number flows; the intricate non-spherical shapes of the droplets show that the Weber number is high. Also seen are entrained bubbles in the body of the water, and an expanding ring of disturbance propagating away from the impact site.
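The two dimensionless numbers mentioned above are easy to estimate for an everyday splash. The short Python sketch below uses illustrative values, a millimetre-scale water drop hitting at a few metres per second; the drop size, impact speed and fluid properties are assumptions rather than figures from this article:

    # Rough estimate of Reynolds and Weber numbers for a small water drop impact.
    rho = 1000.0     # water density, kg/m^3
    mu = 1.0e-3      # water dynamic viscosity, Pa*s
    sigma = 0.072    # water-air surface tension, N/m
    D = 3.4e-3       # drop diameter, m (illustrative)
    U = 3.0          # impact speed, m/s (illustrative)

    Re = rho * U * D / mu         # inertia vs. viscosity
    We = rho * U**2 * D / sigma   # inertia vs. surface tension
    print(f"Re = {Re:.0f}, We = {We:.0f}")   # roughly Re ~ 1e4, We ~ 4e2

Both numbers come out much larger than 1, which is why high-speed images of splashes show freely flying droplets and strongly distorted, non-spherical drop shapes.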
Physicist Lei Xu and coworkers at the University of Chicago discovered that the splash due to the impact of a small drop of ethanol onto a dry solid surface could be suppressed by reducing the pressure below a specific threshold. For drops of diameter 3.4 mm falling through air, this pressure was about 20 kilopascals, or 0.2 atmosphere.
Splash plate
A plate made of a hard material on which a stream of liquid is designed to fall is called a "splash plate". It may serve to protect the ground from erosion by falling water, such as beneath an artificial waterfall or water outlet in soft ground. Splash plates are also part of spray nozzles, such as in irrigation sprinkler systems.
See also
- Harold Eugene Edgerton, whose Milk Drop Coronet is arguably the most famous photograph of a splash
- Slosh, another free-surface phenomenon
- Lei Xu et al., "Drop splashing on a dry smooth surface", Phys. Rev. Lett. (2005)
your.data <- data.frame(Symbol = c("IDEA","PFC","RPL","SOBHA"))
new.variable <- as.vector(your.data$Symbol) # this will create a character vector
VitoshKa suggested using the following code.
new.variable.v <- your.data$Symbol # this will retain the factor nature of the vector
What you want depends on what you need. If you are using this vector for further analysis or plotting, retaining the factor nature of the vector is a sensible solution.
How these two methods differ:
new.variable     # plain character vector: [1] "IDEA" "PFC" "RPL" "SOBHA"
new.variable.v   # still a factor: labels IDEA PFC RPL SOBHA, underlying integer codes 1 2 3 4
Fungi use spores to survive and spread to new sources of food.
Spores are not seeds. A seed contains a small form of a plant, plus some food to help it get started, wrapped in a hard shell. A seed is made of many cells.
Most spores are single cells protected by a cell wall. Some spores are made of several cells, but the largest spore is still smaller than the smallest seed (made by orchids).
There is no microscopic fungus inside a spore. A spore contains all the chemicals needed to make its kind of fungus. When conditions are right, the spore starts to grow and creates a web-like mycelium, the fungus individual.
Last update: 25 Oct 96. © 1996, Robert Fogel, Ivins, UT 84738. | <urn:uuid:8f083ebf-dfb3-490d-99d6-2e6459a84c58> | 3.78125 | 174 | Knowledge Article | Science & Tech. | 70.042138 | 768 |
It was also a place where owners released fish and other animals 'back to nature'. Some of these released animals were not native to our region. Over time, a community of both native and invasive species is created within the longkang, eventually forming a longkang habitat...
Recently, while walking along a road in Pulau Ubin with KS, RY, JL and IV, we chanced upon a longkang which was teeming with life. With one glance, we saw animals from two phyla and about five classes, mainly from the subphylum Vertebrata, which is under the phylum Chordata.
Apparently, like other longkangs, there were some invasive species, for example a tortoise (class Reptilia), which we could not photograph because of reflections off the water. This fish (identification unknown) may not be native either.
Schools of what looked like small halfbeaks were also seen (picture below).
As this longkang is in close proximity to a mangrove habitat, some of the mangrove species were also seen, mainly the gobies.
A tree-climbing crab (Episesarma sp.) was spotted by KS at the edge of the longkang (picture above). Some small mud lobster mounds (no picture) and burrows (picture below) were seen around the longkang as well. Species from the subphylum Crustacea (see 'Spiders at our backyard') seem to have a foothold here as well.
Unable to resist the temptation, I decided to enter the longkang (picture below) to 'be one with the habitat' while the rest remained on the road to watch from a distance (picture further below). There, I tasted the water to confirm that it was fresh.
However, time passed quite quickly and we had to move on. Reluctantly, I had to leave the longkang, bringing nothing but pictures and an experience which not many urban dwellers have in our air-conditioned nation....
Note: scientifically, there is no such term as longkang habitat.
longkang ==> drain
Vertebrate: chordates which have a backbone, or vertebral column, that forms the skeletal axis of the body.
Chordates: Deuterostome animals that, at some time in their lives, have a cartilaginous, dorsal skeletal structure called a notochord; a dorsal, tubular, nerve cord; pharyngeal gill grooves; and a postanal tail.
Also featured in:
Solomon, Berg and Martin. (2008) Biology, 8th Edition. Thomson Brooks/Cole.
Peter K L Ng and N Sivasothi. (1999) "A Guide to the Mangroves of Singapore II: Animal Diversity". Singapore Science Centre. | <urn:uuid:3c283a33-37c2-461a-849e-f638a2408d32> | 2.84375 | 614 | Personal Blog | Science & Tech. | 51.700766 | 769 |
We got hit by one 1,200 years ago.
It came from two colliding neutron stars a few thousand light-years away, and scientists were only now able to pick it up because of a spike of carbon-14 in tree rings.
What did it do around the year 775 AD? Pretty much nothing. The estimated two-second blast really had zero effect on the earth, since the most high-tech things on the planet at the time were the castle and the crossbow. Had that blast happened today we would be in some serious trouble, since it would short out power grids and knock out all of our satellites. If the blast had happened from, say, 100 light-years away, we would have been a crispy cinder.
These gamma ray bursts were the result of the creation of a black hole from the collision of the neutron stars. So you'll have to excuse science for taking a while to figure this mystery out, since there's no visible evidence. Had it been a supernova, people would have seen it in the 700s because it would have been so bright it would have been visible during the day. Had it been a solar flare, it would have been the largest flare ever recorded. The black hole theory pretty much settles everything.
Except, when is this going to happen again?
(Buy this awesome book on space by Neil deGrasse Tyson - the guy that killed Pluto.) | <urn:uuid:b7693f17-084a-4de8-97c7-e3b5d8861d95> | 3.375 | 286 | Personal Blog | Science & Tech. | 67.524745 | 770 |
Wed July 25, 2012
Massive Ice Melt In Greenland Worries Scientists
A pair of NASA satellite images taken just four days apart tells a potentially worrying story of melting ice in the polar summer.
The first, snapped from orbit on July 8, shows about 40 percent of the Greenland ice sheet shaded in pink or red to illustrate probable or confirmed surface melting. The second photo, taken on July 12, shows nearly the entire land mass — 97 percent — blotched in a red hue.
In a typical year, only about half of the Greenland ice sheet undergoes this kind of melting before it later refreezes. But the rapidity and extent of the July change are what caught scientists off guard, said Thomas Mote, a professor at the University of Georgia, who helped confirm the data from three satellites.
"Several of us were looking at the data with multiple different instruments and we began talking to each other when we realized we were seeing something quite unusual," he says.
Scientists note that besides covering a large area, the melting is happening at the top of the ice cap, where temperatures are coldest. They blame a massive heat dome parked over the island that has set up perfect conditions for melting high-altitude snow and ice.
Alarming? "I wouldn't use that word," says Mote. "We know from looking at ice cores that melt at the highest levels of elevation in Greenland has occurred in the past — not in our lifetimes, and not since the era of satellites, but it certainly has occurred."
The last time it happened was well over a century ago, in 1889, according to ice core records.
But the Greenland melt roughly coincides with a giant chunk of ice described as "twice the size of Manhattan," breaking off the Petermann Glacier in northern Greenland.
It's all part of a bad year for the Arctic, helped along by North America's record-breaking heat wave, says Mark Serreze, a senior research scientist at the National Snow and Ice Data Center at the University of Colorado Boulder.
The heat has shrunk and thinned ice not just in Greenland but across the region. "The Greenland ice sheet is part of a larger picture," he says.
"We've always known that it is the Arctic where we're going to be seeing the effects of climate change first and it is the Arctic where these changes are going to be most pronounced," Serreze says.
"The events unfolding over the past 30 years, of which 2012 is really just an exclamation point, are telling us that we've got it figured out." | <urn:uuid:bd137a12-1c9d-4e02-b3d4-ebdf7b88e2b2> | 3.65625 | 543 | News Article | Science & Tech. | 49.623293 | 771 |
Provided by: libacl1-dev_2.2.49-2_i386
acl_from_text - create an ACL from text
Linux Access Control Lists library (libacl, -lacl).
acl_t acl_from_text(const char *buf_p);
The acl_from_text() function converts the text form of the ACL referred
to by buf_p into the internal form of an ACL and returns a pointer to the
working storage that contains the ACL. The acl_from_text() function
accepts as input the long text form and short text form of an ACL as
described in acl(5).
This function may cause memory to be allocated. The caller should free
any releasable memory, when the new ACL is no longer required, by calling
acl_free(3) with the (void*)acl_t returned by acl_from_text() as an argument.
On success, this function returns a pointer to the working storage. On
error, a value of (acl_t)NULL is returned, and errno is set appropriately.
If any of the following conditions occur, the acl_from_text() function
returns a value of (acl_t)NULL and sets errno to the corresponding value:
[EINVAL] The argument buf_p cannot be translated into an ACL.
[ENOMEM] The acl_t to be returned requires more memory than is
allowed by the hardware or system-imposed memory management constraints.
IEEE Std 1003.1e draft 17 (“POSIX.1e”, abandoned)
acl_free(3), acl_get_entry(3), acl(5)
Derived from the FreeBSD manual pages written by Robert N M Watson
〈rwatson@FreeBSD.org〉, and adapted for Linux by Andreas Gruenbacher | <urn:uuid:03a84388-95b2-4fa4-bbe5-c06456a6ef6e> | 2.609375 | 401 | Documentation | Software Dev. | 55.928397 | 772 |
Constructing an Open Box: An open box with a square base is required to have a volume of 10 cubic feet.
a) Express the amount A of material used to make such a box as a function of the length x of a side of the square base. My answer: A(x) = x^2 + 4x(10/x^2)
b)How much material is required for a base 1 foot by 1 foot?
c)How much material is required for a base 2 feet by 2 feet?
d) Graph A=A(x). For what value of x is A smallest?
I cannot figure out b,c,d...thank you | <urn:uuid:2d56224e-e6b6-4dad-bb65-a20c6e83e07e> | 3.578125 | 140 | Q&A Forum | Science & Tech. | 96.726656 | 773 |
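One way to work parts (b)-(d) from the area function in (a), sketched here on the assumption that the box has height h = 10/x^2 so that the volume constraint x^2 * h = 10 holds:

    A(x) = x^2 + 4x(10/x^2) = x^2 + 40/x,   x > 0
    (b)  A(1) = 1 + 40 = 41 square feet
    (c)  A(2) = 4 + 20 = 24 square feet
    (d)  A'(x) = 2x - 40/x^2 = 0  =>  x^3 = 20  =>  x = 20^(1/3) ≈ 2.71 ft,
         giving A ≈ 22.1 square feet at the minimum.

On the graph of A(x), the curve drops steeply for small x (the 40/x term dominates), reaches its minimum near x ≈ 2.7, and then rises again as the x^2 term takes over.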
Martin Harwit has argued that we cannot have made more than ten per cent of the crucial discoveries in Astronomy. He uses what John Barrow aptly calls `the proof-readers argument'. If two independent readers look at a manuscript then it is possible to estimate, by comparing their different results, how many errors there must be in total, including those not identified. In an analogous way two independent astronomical channels (say optical and X-ray) can be used to examine the Universe and a comparison of their separate key discoveries will yield an estimate of the numbers still to be found.
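In its simplest form the proof-readers' estimate works like capture-recapture: if reader A finds a errors, reader B finds b, and c errors are found by both, then (assuming the readers work independently and every error is equally likely to be caught) the total number of errors is estimated as

    N ≈ a b / c ,

so the fraction already found is (a + b - c)/N. Applied to astronomy, the two 'readers' are two independent observing channels and the 'errors' are the important phenomena; a small overlap between their discovery lists implies that most phenomena remain undetected. This is a standard statistical sketch of the argument rather than Harwit's own derivation.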
In any case, with so little data to work on, it shouldn't be too difficult to devise a plausible theory to account for them. It is, however, sobering to compare the cosmological situation with the history of other sciences.
Take geology. Men were living on the earth for millions of years, and quarrying rock, digging mines and canals and puzzling over its fossils for thousands of years, before unexpected palaeomagnetic patterns revealed for certain the key idea of Continental Drift.
In stellar physics two thousand years elapsed between Hipparchus's speculations and Bessel's first measurement of a stellar distance. Seventy years later the statistical patterns in the H-R diagram led to our understanding of stellar structure.
However the closest comparison comes from my own field of galaxy astronomy which is, as an observational science, almost exactly contemporary with cosmology. Although we now have good spectra and images of thousands of galaxies, the list of fundamental things we don't know about them (Table 3) is far more striking than the list of things we do.
1. How our knowledge is warped by Selection Effects.
2. What they are mostly made of (Dark Matter?).
3. How they formed - and when.
4. How much internal extinction they suffer from.
5. What controls their global star-formation rates.
6. What parts their nuclei and halos play.
7. If there are genuine correlations among their global properties.
8. How they keep their gas/star balances.
Of course these are only arguments by analogy. The optimistic cosmologist can always counter-argue [I don't know how] that the Universe in the large is a great deal simpler than its constituent parts.
Several major themes have emerged from the preceding discussion:
1. In most or all galaxies, globular clusters are distinctly more metal-poor, by [Fe/H] ~ -0.5, than the spheroid-population field stars.
2. Both the average and range of cluster metallicity increase with galaxy size. These correlations parallel the ones for the metallicities of the galaxies themselves, and support the view that similar enrichment processes generated both types of halo subsystems.
3. Globular clusters in all galaxies have similar, though not identical, luminosity distributions. For distance scale purposes, the calibrations of GCLFs are not yet adequate for use as high-precision (± 0.2-mag) standard candles (see the note following this list). The present data are, however, sufficient to exert strong theoretical constraints favoring a universal cluster formation process insensitive to metallicity and with only modest later influences from dynamical evolution in most of the halo.
4. In giant ellipticals, GCSs are often (but not always) more spatially extended than the halo light. A few have been shown to have higher velocity dispersions as well, and thus to form a dynamically different halo population than the spheroid stars.
5. In most large galaxies, the inner ~ 1-2 kpc of their spheroids have probably been almost totally depopulated of globular clusters through many dynamical mechanisms. At larger Rgc, the effectiveness of these mechanisms falls off rapidly, leaving only gradual erosive processes. It is not yet clear if these processes act much differently in practice for disk galaxies as opposed to ellipticals.
6. In today's universe, few globular clusters are being formed (that is, the formation of dense clusters with a characteristic mass of ~10^5-10^6 solar masses is extremely rare). Though there is no reason to believe that the processes of star cluster formation 15 Gy ago were fundamentally different than today, the prevailing physical conditions of the protocluster gas then clearly favored more massive objects. The formation of globular clusters was an early, but secondary, process (that is, clearly associated with their surrounding protogalaxies).
7. Arguments based on GCS spatial distributions, metallicity distributions, and dynamics suggest that the high-SN giant ellipticals such as in Virgo and Fornax did not form by mergers of disk galaxies or dwarfs. The merger scenario is, however, more viable for E galaxies in sparse environments (smaller groups and the field) and perhaps for some cD galaxies.
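A note on the ±0.2-mag figure in item 3 (a standard propagation-of-error aside, not a result from this review): the distance modulus is

    m - M = 5 log10( d / 10 pc ),

so an uncertainty Δμ in the GCLF turnover magnitude translates into a fractional distance error of Δd/d = (ln 10 / 5) Δμ ≈ 0.46 Δμ. A calibration good to ±0.2 mag therefore corresponds to roughly a 10% uncertainty in distance.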
How else may we use GCSs to understand the formation of galaxies? On the observational side, we need to fill in the many areas that are presently sketched out only with broad brush strokes:
1. The metallicity distribution of GCs has proven to be an effective touchstone of interpretive models. We need to accumulate spectra of clusters in a wider variety of galaxies, combined with multicolor photometry in metallicity-sensitive indices.
2. Luminosity function work has just begun to be exploited. For example, the luminosity distributions of clusters within Rgc ~ 3 kpc in giant galaxies should carry the strongest imprint of dynamical evolution; direct observations could be straightforwardly made in many galaxies. Larger samples of clusters taken from the rich Virgo and Fornax systems will reveal fine structure in the GCLF and provide essential constraints on eventual theoretical modelling. And deep photometry of clusters in additional near-field galaxies should finally tell us how accurate GCLFs can be as distance indicators. In addition, the brightest globulars in giant E galaxies may prove to be useful long-range standard candles.
3. It is possible that the globular clusters in central giant ellipticals such as M87 are the oldest visible objects in the universe. High-S/N spectra of them, compared with integrated spectra of Milky Way globulars and fitted with population synthesis codes, may lead to useful age determinations relative to the Milky Way halo and to stronger limits on the Hubble time.
4. Modern spectroscopic and imaging techniques are finally putting a complete and accurate understanding of the important M31 cluster system within reach.
5. The advent of large-format CCD arrays will enable us to study the large-scale structures of globular cluster systems far more quantitatively and accurately.
6. Comprehensive radial velocity surveys of GCSs can place unique limits on the large-scale mass distribution of galaxies, and on the orbital characteristics of the halo clusters. For GCSs at or beyond Virgo-like distances, the velocity measurements do press the limits of current technology, but this field will be a rich mine for the new generation of 8-meter-class telescopes to explore.
On the theoretical side, recommendations for future work may be easy to prescribe but will be hard to execute. A formation model specific enough to predict an initial cluster mass spectrum as a function of density and metallicity would be a major achievement. The dynamical evolution of GCSs within galaxies of different types also needs to be modelled more comprehensively, with the eventual goal of predicting the full evolution of the GCLF as a function of parent galaxy type and galactocentric distance.
Because they are virtually the only remaining witnesses to the long-vanished first epoch of galaxy formation, globular clusters stand among the most powerful cosmological probes that we have. Although many intriguing new results and questions have emerged from the observational work of the past decade, we have also confirmed that globular cluster systems resemble each other rather closely. Thus by extending our study of these remarkable objects, we are uncovering a common theme in the earliest history of the galaxies.
It is a pleasure to give credit for projects, conversations, and ideas generated together over the years to many colleagues and friends, including David Hanes, Gretchen Harris, Hugh Harris, Jim Hesser, Chris Pritchet, Sidney van den Bergh, Malcolm Smith, Michael Fall, and Richard Larson. The healthy state of our field today owes a great deal especially to the vision of René Racine, who in the 1970s first set in motion much of the work discussed above. I am pleased to acknowledge the hospitality of Kitt Peak National Observatory, and D. and M. Gehret of the Orinda IMAC, as well as financial support from the Natural Sciences and Engineering Research Council of Canada. | <urn:uuid:ea01d3a4-98c3-4dcc-8f00-1451785f0e3d> | 2.671875 | 1,386 | Academic Writing | Science & Tech. | 35.371988 | 775 |
- Dense and accessible patches of prey, as opposed to just more food options, are better for marine animals.
- How different species are able to determine where the best patches of food sources are located is uncertain.
Marine animal populations thrive when presented with dense and accessible patches of prey, as opposed to just more of it, according to new research.
It turns out that sheer abundance of food is less important than what scientists sometimes call "patchiness" — the spatial distribution of a food source. Marine animals, from birds to dolphins, are able to home in on dense patches of food, making more efficient use of precious energy at mealtime.
"Patchiness is not only ubiquitous in marine systems, it ultimately dictates the behavior of many animals and their relationships to the environment," Kelly Benoit-Bird, an Oregon State University oceanographer, said in a statement.
Benoit-Bird is the lead author of a study published this week in the journal Biology Letters. The research used sound waves to pinpoint the distribution of krill and other anchors of the food chain in waters near Hawaii.
The scientists found that the tiny crustaceans weren't uniformly distributed, but instead congregated in patches. This explained why two colonies of fur seals and seabirds were faring poorly but a third was healthy, the researchers said.
"The amount of food near the third colony was not abundant," Benoit-Bird said, "but what was there was sufficiently dense, and at the right depth. That made it more accessible for predation than the krill near the other two colonies."
The team also found that a type of bird that feeds on krill, called the thick-billed murre, was able to target the densest swarms of the tiny organisms. Murres dove to an astonishing 650 feet (200 meters) below the ocean's surface in search of their prey.
"The murres are amazingly good at diving right down to the best patches," Benoit-Bird said. It's not clear how the birds identify these feasts lurking deep beneath the surface of the ocean, she added.
The team used sound waves not only to identify the gatherings of krill but to track murres, dolphins, squid and other animals. Time and again, they found that by locating the densest clouds of phytoplankton, tiny ocean plants that are themselves a food source for krill, it was possible to figure out where these larger animals would gather.
Although the concept of "patchiness" is not new, Benoit-Bird said, it may play a larger role in the health of ocean ecosystems than thought.
"Now we need more research to determine how different species are able to determine where the best patches are," she said. | <urn:uuid:fef69cb6-b245-4ba4-af7e-f0e9cf2166f9> | 3.96875 | 567 | News Article | Science & Tech. | 44.05315 | 776 |
Liquid crystals, the state of matter that makes possible the flat screen technology now commonly used in televisions and computers, may have some new technological tricks in store.
Writing today (May 3, 2012) in the journal Nature, an international team of researchers led by University of Wisconsin-Madison Professor of Chemical and Biological Engineering Juan J. de Pablo reports the results of a computational study that shows liquid crystals, manipulated at the smallest scale, can unexpectedly induce the molecules they interact with to self-organize in ways that could lead to entirely new classes of materials with new properties.
"From an applied perspective, once we get to very small scales, it becomes incredibly difficult to pattern the structure of materials. But here we show it is possible to use liquid crystals to spontaneously create nanoscale morphologies we didn't know existed," says de Pablo of computer simulations that portray liquid crystals self-organizing at the molecular scale in ways that could lead to remarkable new materials with scores of technological applications.
As their name implies, liquid crystals exhibit the order of a solid crystal but flow like a liquid. Used in combination with polarizers, optical filters and electric fields, liquid crystals underlie the pixels that make sharp pictures on thin computer or television displays. Liquid crystal displays alone are a multibillion dollar industry. The technology has also been used to make ultrasensitive thermometers and has even been deployed in lasers, among other applications.
The new study modeled the behavior of thousands of rod-shaped liquid crystal molecules packed into nano-sized liquid droplets. It showed that the confined molecules self organize as the droplets are cooled. "At elevated temperatures, the droplets are disordered and the liquid is isotropic," de Pablo explains. "As you cool them down, they become ordered and form a liquid crystal phase. The liquid crystallinity within the droplets, surprisingly, induces water and other molecules at the interface of the droplets, known as surfactants, to organize into ordered nanodomains. This is a behavior that was not known."
In the absence of a liquid crystal, the molecules at the interface of the droplet adopt a homogeneous distribution. In the presence of a liquid crystal, however, they form an ordered nanostructure. "You have two things going on at the same time: confinement of the liquid crystals and an interplay of their structure with the interface of the droplet," notes de Pablo. "As you lower the temperature the liquid crystal starts to become organized and imprints that order into the surfactant itself, causing it to self assemble."
It was well known that interfaces influence the order or morphology of liquid crystals. The new study shows the opposite to be true as well.
"Now you can think of forming these ordered nanophases, controlling them through droplet size or surfactant concentration, and then decorating them to build up structures and create new classes of materials," says de Pablo.
As an example, de Pablo suggested that surfactants coupled to DNA molecules could be added to the surface of liquid crystal droplets, which could then assemble through the hybridization of DNA. Such nanoscale engineering, he notes, could also form the basis for liquid crystal based detection of toxins, biological molecules, or viruses. A virus or protein binding to the droplet would change the way the surfactants and the liquid crystals within the droplet are organized, triggering an optical signal. Such a technology would have important uses in biosecurity, health care and biology research settings.
A constant shower of subatomic particles rains down from space. A hundred years ago, this "cosmic radiation" was discovered by the Austrian physicist Victor Franz Hess. Among other things, the discovery laid the foundation for a whole new field of research: high energy physics - which recently gave us, for instance, the first experimental evidence for the Higgs boson. An anniversary conference looks at the past milestones of cosmic ray research and at future experiments.
When Hess landed his hydrogen balloon at Bad Saarow in the German state of Brandenburg at lunchtime on 7 August 1912, he had on board a discovery with far-reaching consequences, though he surely wasn't fully aware of it at the time. On his seventh balloon voyage of that year, equipped with three ionization measuring instruments, Hess had just identified the existence of a pervasive radiation at 5300 metres altitude above the Schwieloch Lake in the southeast of Brandenburg. Only later did it become evident that this so-called cosmic radiation was comprised mostly of energetic, electrically charged atomic nuclei. The discovery of cosmic rays won Hess the Nobel Prize in Physics 24 years later.
"The detection of the cosmic radiation was the discovery of a century and brought us completely new insights into the cosmos," says Prof. Christian Stegmann, head of the DESY institute at Zeuthen near Berlin. "Furthermore it became a cornerstone of early particle physics. Before the development of particle accelerators, cosmic ray research led to the discovery of many important elementary particles, among them the anti-particle of the electron - the positron - as well as the muon and the pion."
DESY, the University of Potsdam, and the Max Planck Institute for the History of Science in Berlin jointly organise a symposium on the 100th anniversary of the discovery of cosmic rays. From 6 to 8 August 2012, scientists from all over the world will meet in Bad Saarow, where Hess landed his balloon, to present and discuss the development of various sub-areas ranging from the historic beginnings up to ideas for new projects.
Along with physics nobelist Prof. James Cronin, one of the designers of today's largest cosmic ray observatory "Pierre Auger" in Argentina, and the 14th Astronomer Royal Prof. Sir Arnold Wolfendale, Hess' grandsons William and Arthur Breisky have also registered for the conference. A memorial stone will be unveiled, participants may book balloon flights, and electroscopes that were then used all over the world to carry out ionisation measurements will be on display.
"The advent of a Centenary is a time for both looking back at the development of the subject and forwards: 'where do we go from here?'," says Sir Wolfendale. "Cosmic ray research has led to new areas of research, including 'the new astronomies' and the future for them is bright, indeed. Neutrino Astronomy is on the verge of starting and gamma ray astronomy has begun in earnest."
Physicists expect to gain new insights from gamma-ray astronomy into the nature of cosmic particle accelerators, which are a million times stronger than the best accelerators on Earth. Single protons from the cosmic radiation may have as much energy as a powerfully hit tennis ball, but due to their electric charge, the fast particles are deflected by numerous magnetic fields as they travel through the cosmos. This means that one cannot retrace their point of origin from their direction of flight when they hit the earth.
Therefore, a hundred years since their discovery, the mystery of the origin of cosmic rays is far from being solved. "The universe is full of natural particle accelerators, as for example in supernova explosions, in binary star systems, or in active galactic nuclei. So far, only 150 of these objects are known to us, and we have just an initial physical understanding of these fascinating systems," says Stegmann.
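As an aside, the tennis-ball comparison above is easy to check with a one-line unit conversion. The Python sketch below assumes a hard-hit ball of about 57 g travelling at about 200 km/h; those particular numbers are illustrative rather than taken from the article:

    # Kinetic energy of a hard-hit tennis ball, expressed in electron-volts.
    m = 0.057          # kg, regulation tennis ball (assumed)
    v = 200 / 3.6      # m/s, roughly a 200 km/h shot (assumed)
    E_joules = 0.5 * m * v**2
    E_eV = E_joules / 1.602e-19
    print(f"{E_joules:.0f} J  ~  {E_eV:.1e} eV")   # ~90 J, i.e. ~5e20 eV

That is indeed in the range of the most energetic cosmic-ray particles ever recorded, which carry on the order of 10^20 electron-volts each.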
In contrast to what the name might suggest, cosmic radiation is mostly comprised of particles, but a small fraction is indeed gamma radiation, which is not deflected on its way through space and thus points directly to its source. As physicists expect the sources of cosmic gamma radiation to be the same as for cosmic particles, they are on the hunt for cosmic particle accelerators with specialised gamma ray observatories.
Observatories like H.E.S.S. in Namibia, named in honour of the discoverer of cosmic radiation, MAGIC on the Canary Island La Palma, and VERITAS in the United States, with DESY participation, have detected more than a hundred high-energy cosmic gamma radiation sources by now. The planned Cherenkov Telescope Array CTA, for which DESY is currently building a first prototype telescope, will follow this path of discovery. "The Cherenkov Telescope Array will observe thousands of these accelerators with unprecedented sensitivity," Stegmann points out.
Similar to gamma rays, cosmic neutrinos also open a window to the universe's particle accelerators. Neutrinos are lightweight, electrically neutral elementary particles, which are also not deflected by magnetic fields. This means that the incident path of a neutrino points back directly to its origin. With the participation of DESY, the world's largest neutrino telescope, IceCube in Antarctica, was finished in December 2010 and has just begun to look for cosmic neutrinos.
"On either route we expect fascinating insights into the natural particle accelerators in the universe, that will throw new light onto the remaining mysteries of cosmic rays," stresses Stegmann.
More information: Conference website: www.desy.de/2012vhess | <urn:uuid:7e267347-4bdc-4c2c-ae17-bf5256f4a9aa> | 3.625 | 1,176 | News Article | Science & Tech. | 32.882184 | 778 |
A team of researchers including scientists from the University of Florida has shown insect colonies follow some of the same biological "rules" as individuals, a finding that suggests insect societies operate like a single "superorganism" in terms of their physiology and life cycle.
For more than a century, biologists have marveled at the highly cooperative nature of ants, bees and other social insects that work together to determine the survival and growth of a colony.
The social interactions are much like cells working together in a single body, hence the term "superorganism" — an organism comprised of many organisms, according to James Gillooly, Ph.D., an assistant professor in the department of biology at UF's College of Liberal Arts and Sciences.
Now, researchers from UF, the University of Oklahoma and the Albert Einstein College of Medicine have taken the same mathematical models that predict lifespan, growth and reproduction in individual organisms and used them to predict these features in whole colonies.
By analyzing data from 168 different social insect species including ants, termites, bees and wasps, the authors found that the lifespan, growth rates and rates of reproduction of whole colonies when considered as superorganisms were nearly indistinguishable from individual organisms.
The findings will be published online this week in the Proceedings of the National Academy of Sciences Early Edition.
"This PNAS paper regarding the energetic basis of colonial living in social insects is notable for its originality and also for its importance," said Edward O. Wilson, a professor of biology at Harvard University and co-author of the book "The Super-Organism," who was not involved in the research. "The research certainly adds a new perspective to our study of how insect societies are organized and to what degree they are organized."
The study may also help scientists understand how social systems have arisen through natural selection — the process by which evolution occurs. The evolution of social systems of insects in particular, where sterile workers live only to help the queen reproduce, has long been a mystery, Gillooly said.
"In life, two of the major evolutionary innovations have been how cells came together to function as a single organism, and how individuals joined together to function as a society," said Gillooly, who is a member of the UF Genetics Institute. "Relatively speaking, we understand a considerable amount about how the size of multicellular organisms affects the life cycle of individuals based on metabolic theory, but now we are showing this same theoretical framework helps predict the life cycle of whole societies of organisms."
Researchers note that insect societies make up a large fraction of the total biomass on Earth, and say the finding may have implications for human societies.
"Certainly one of the reasons folks have been interested in social insects and the consequences of living in groups is that it tells us about our own species," said study co-author Michael Kaspari, Ph.D., a presidential professor of zoology, ecology and evolutionary biology at the University of Oklahoma and the Smithsonian Tropical Research Institute. "There is currently a vigorous debate on how sociality evolved. We suggest that any theory of sociality be consistent with the amazing convergence in the way nonsocial and social organisms use energy."
There are two different questions at work here, that you've kind of mashed together. The first question is "What is the speed at which a change in the electric field propagates?" The answer to that is the speed of light. In QED terms, the electromagnetic interaction that we see as the electric field is mediated by photons, so any change in an established field (say, due to shifting the position of the charge creating the field) won't be felt by a distant object until enough time has passed for a photon from the source to make it to the observation point.
The second question is "What is the speed of propagation of electric current?" This speed is slower than the speed of light, but still on about that order of magnitude-- the exact value depends a little on the arrangement of wires and so on, but you won't be far off if you assume that electrical signals propagate down a cable at the speed of light.
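As a rough illustration of that second point (the cable length and velocity factor below are assumed, illustrative values):

    # Time for an electrical signal to travel down a cable, assuming it
    # propagates at about 70% of the speed of light (a typical velocity
    # factor for common cables; the exact figure depends on the cable).
    c = 3.0e8               # speed of light, m/s
    velocity_factor = 0.7   # assumed
    length = 1.0            # metres of cable (illustrative)
    t = length / (velocity_factor * c)
    print(f"{t * 1e9:.1f} ns to traverse {length} m")   # about 5 ns per metre

A signal therefore crosses a benchtop circuit in a few nanoseconds, which is consistent with the settling times mentioned below.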
This relates to electric field in that the charge moving through a circuit to light a light bulb has to be driven by some electric field, so you can reasonably ask how that field is established, and how much time it takes. Qualitatively, the necessary field is established by excess charge on the surface of the wires, with the surface charge being generally positive near the positive terminal of a battery and generally negative near the negative terminal, and dropping off smoothly from one to the other so that the electric field is more or less piecewise constant (that is, the field is the same everywhere inside a wire, and the field is the same everywhere inside a resistor, but the two field values are not the same).
When the circuit is first connected, there is a rapid redistribution of the charge on the surface of the wires which establishes the surface charge gradients that drive the steady-state current that will eventually do whatever it is you want it to do. The time required to establish the gradients and settle in to the steady-state condition is very fast, most likely on the order of nanoseconds for a normal circuit.
There's a good discussion of the business of how, exactly, charges get moved around to drive a current in the textbook that we use for our introductory classes, Matter and Interactions, by Chabay and Sherwood. It doesn't go into enough detail to let you calculate the relevant times directly, but it lays out the basic science pretty well.
(It's a textbook for a first-year introductory physics class, so it sweeps a lot of condensed matter physics under the metaphorical rug-- there's no discussion of band structure or surface modes, or any of that. It's fairly solid conceptually, though, at least according to colleagues who know more about those fields than I do.) | <urn:uuid:49279033-9e98-43e4-ba87-afe02bc68b49> | 3.515625 | 558 | Q&A Forum | Science & Tech. | 35.886967 | 780 |
In Jena, Graph is an interface. It abstracts anything that looks like RDF - storage options, inference, other legacy data sources.
The main operations are adding, deleting and finding triples. In addition, there are a number of getters to access handlers of various features (query, statistics, reification, bulk update, event manager). Having handlers, rather than directly including all the operations for each feature, reduces the size of the interface and makes it easier to provide default implementations of each feature.
Implementing a graph rarely needs to directly implement the interface.
More usually, an implementation starts by inheriting from the class GraphBase.
A minimal (read-only) implementation just needs to implement graphBaseFind.
Wrapping legacy data often only makes sense as a read-only graph. To provide update operations, just implement the methods performAdd and performDelete, which are the methods called from the base implementations of add and delete.
Then, for testing with JUnit, inherit from AbstractGraphTest (override tests that don't make sense in a particular circumstance) and provide the getGraph operation to generate a graph instance to test.
Where the graph level is minimal and symmetric (e.g. literal as subjects, inclusion of named variables) for easy implementation, the RDF API enforces the RDF conditions and provides a wide variety of convenience operations so writing a program can be succinct, not requiring the application writer to write unnecessary boilerplate code sequences. The ontology API does the same for OWL. If you look at the javadoc, you'll see the APIs are large but the system level interface is small.
A graph is turned into a Model by calling ModelFactory.createModelForGraph(Graph). All the key application APIs are interface-based, although it's rarely needed to do anything other than use the standard Model-Graph bridge.
Data access to the graph all goes via find. All the read operations of application APIs, directly or indirectly, come down to calling Graph.find or a graph query handler. And the default graph query handler works by calling Graph.find, so once find is implemented everything (read-only) works. ARQ's query API, which includes a SPARQL implementation, is included. It may not be the most efficient way, but importantly all functionality is available, so the graph implementer can quickly get a first implementation up and running, then decide where and when to spend further development time - or whether that's needed at all.
An example of this is a prototype Jena-Mulgara bridge (work in progress as of Jan'08). This maps the Graph API to a Mulgara session object, which can be a local Mulgara database or a remote Mulgara server. The prototype is a single class together with a set of factory operations for more convenient creation of a bridge graph wrapped in all Jena's APIs.
Implementing graph nodes, for IRIs and for literals is straight forward. Mulgara uses JRDF to represent these nodes and to represent triples. Mapping to and from Jena versions of the same is just the change in naming.
Blank nodes are more interesting. A blank node in Jena has an internal label (which is not a URI in disguise). When working at the lowest level of Graph, the code is manipulating things at a concrete, syntactic level.
A blank node in Mulgara has an internal id but it can change. It really is the internal node index as I found out by creating a blank node with id=1 and found it turned into rdf:type which was what was really at node slot 1. Paul has been (patiently!) explaining this to me on a Mulgara mailing list. The session interface is an interface onto the RDF data, not an interface to extend the graph details to the client. Both approaches are valid - it's just different levels of abstraction.
If the Jena application is careful about blank nodes (not assuming they are stable across transactions, and not deleting all triples involving some blank node, then creating triples involving that blank node) then it all works out. The most important case of reading data within a transaction is safe. Bulk loading is better down via the native Mulgara interfaces anyway. The Jena-Mulgara bridge enables a Jena application to access a Mulgara server through the same interfaces as any other RDF data. | <urn:uuid:4357bba8-8f33-427a-854e-0afa0f5e1dea> | 2.765625 | 909 | Personal Blog | Software Dev. | 41.344553 | 781 |
Text::UnicodeBox::Text - Objects to describe text rendering
This module is part of the low level interface to Text::UnicodeBox; you probably don't need to use it directly.
The string representation of the text.
How many characters wide the text represents when rendered on the screen.
The following methods are exportable by name or by the tag ':all'
Given the passed text, figures out a smart value for the length field and returns a new instance.
my $text = BOX_STRING('Test');
$text->align_and_pad(8);
# is the same as
# $text->align_and_pad( width => 8, pad => 1, pad_char => ' ', align => 'left' );
$text->value eq ' Test ';
Modify the value of this object to pad and align the text according to the specification. Pass any of the following parameters:
Defaults to the object's length. Specifies how wide a space the string is to fit in. It doesn't make sense for this value to be smaller than the width of the string. If you pass only one parameter to align_and_pad, this is the parameter it's assigned to.
If the string looks like a number, the alignment defaults to 'right'; otherwise, 'left'.
How much padding on the right and left
What character to use for padding
Returns the value of this object.
Return array of objects of this string split into new strings on the newline character
Provides the count of
Return the length of the longest line in
my @segments = $obj->split( max_width => 100, break_words => 1 );
Return array of objects of this string split at the max width given. If break_words => 1, break anywhere, otherwise only break on the space character.
Copyright (c) 2012 Eric Waters and Shutterstock Images (http://shutterstock.com). All rights reserved. This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
The full text of the license can be found in the LICENSE file included with this module.
Eric Waters <email@example.com> | <urn:uuid:d2d14240-d7e4-41a9-a9a0-1d4a9f3f76ad> | 3.234375 | 474 | Documentation | Software Dev. | 59.493246 | 782 |
Once the Sample Return Capsule is
recovered at the Utah Test and Training Range (UTTR),
its contents will be placed in the capable hands of
the Stardust Curation Team - who are based at the Johnson
Space Center (JSC).
This team will then go about the business of carefully
transporting the aerogel containing grains from Comet
Wild 2 and interstellar dust to their special facility
at JSC for examination. The samples gathered by Stardust
are expected to consist of approximately 1000 cometary
dust particles measuring less than 100 µm each, and
an additional 100 interstellar dust grains primarily
of sub-micron size. The expected total mass of the sample
will probably be 1 mg, less than a thimbleful.
For the Stardust Mission, both comet coma samples and
the interstellar grains must be captured at high velocity
with minimal heating and other effects of physical alteration.
Particle collection at this speed has been extensively
demonstrated in laboratory tests, Shuttle flights and
on the MIR Space Station. Researchers have additionally
shown that comet dust collection can be accomplished
with minimal amounts of sample alteration.
The JSC team has developed exacting techniques for the
removal and analysis of captured grains from the silica
aerogel used as a capture medium. They will continue
to improve and practice these techniques before the
comet samples are placed into their hands in 2006.
For additional information visit the JSC Stardust Curation
Last updated February 15, 2006 | <urn:uuid:972956fc-e530-4def-be54-32fbac3a54c7> | 2.859375 | 318 | Knowledge Article | Science & Tech. | 29.633385 | 783 |
News > Scientists reconstruct Red Sea parting
Researchers at the US National Center for Atmospheric Research (NCAR) have produced a computer simulation that demonstrates how the parting of the Red Sea described in the Book of Exodus could have been caused by strong winds.
The study, which is part of a larger project looking into the impact of winds on water depths, was published in the open-access journal Plos One. In it, researchers produced a reconstruction of the likely geography of the Nile Delta during the Old Testament period, which has changed considerably over the intervening centuries. The researchers have identified a stretch of the Nile where a strong east wind could conceivably have pushed the river back at a bend, opening up a walkway across the exposed mud flats and allowing the Israelites to flee the approaching Egyptians.
"The simulations match fairly closely with the account in Exodus," Carl Drews of the NCAR told the BBC. "The parting of the waters can be understood through fluid dynamics. The wind moves the water in a way that's in accordance with physical laws, creating a safe passage with water on two sides and then abruptly allowing the water to rush back in."
With the burning bush also potentially linked to freak environmental conditions, it remains to be seen how much else of the bible story can be explained by meteorology. | <urn:uuid:61945547-f2cd-4b80-b43e-e7d82c52b051> | 4.09375 | 264 | News Article | Science & Tech. | 36.408533 | 784 |
Texas skunks risk life and limb during mating season.
By Sheryl Smith-Rodgers
Alas, pity the poor skunk. Like snakes, spiders and vultures, this much-maligned creature receives little positive publicity and has next to no admirers. To top off its dismal — and foul-smelling — reputation, a skunk’s love life is rife with risks.
Come February — the start of breeding season — these shy, cat-sized creatures hit the road. Literally. In their after-dark quests to find mates, males often venture onto highways and rarely make it across alive.
“We see more numbers of roadkill skunks in February and March than other times of the year,” says Robert Dowler, a biologist with Angelo State University. “Preliminary data suggests that roadkill rates of skunks may double in parts of Texas during mating season.”
Last February, Dowler counted more than 50 dead skunks along the road on a 300-mile trip to Oklahoma. “That’s roughly one dead skunk every six miles,” he estimates.
Closer to home these days, Dowler and a team of graduate students are wrapping up a three-year study on skunks — striped, western spotted and hog-nosed — living in and around San Angelo State Park. (The two other North American species — eastern spotted and hooded — also live in Texas.)
Once completed, the study will reveal more about the secretive lives of skunks: what they eat (typically grubs, insects and sometimes, mice and eggs), how they interact, where they den, how far they roam, and what parasites afflict them.
In the field, university researchers successfully monitored striped and western spotted skunks using radio collars, remote cameras and analysis of tracks. “We found spotted skunks in thick brush and mesquite,” Dowler reports. “Striped skunks were there, too, and also in open fields.”
The hog-nosed species, however, stayed clear of traps. “They’re almost impossible to capture,” Dowler says. “We found them commonly as roadkill, but they wouldn’t go in our live traps. We tried for more than two years without success, using baits that included cat food, eggs, fruit and even a lure called Liquid Grub. Nothing worked.”
The males who do successfully cross the road likely mate, then move on to find more available females. Litters of four to seven blind kits are born in May or June. The young skunks remain in the burrow for about six weeks, and then venture out (usually single file) with their mother on nighttime hunts. By summer’s end, they’re on their own.
Unlike their relatives, western spotted males romance the ladies in September and October. After breeding, females keep fertilized embryos dormant — a process called delayed implantation — for several months until the embryos are implanted in the uterine wall, and development continues.
Data collected from the university will be used by the Texas Parks and Wildlife Department, which is funding the project. “We want to develop management actions that will help maintain skunk populations,” says John Young, a TPWD mammalogist. “Not much is known about them because people don’t want to handle them, for obvious reasons.” | <urn:uuid:0b270762-a177-4b5f-aeb6-89b9b3836a67> | 2.734375 | 720 | News Article | Science & Tech. | 50.297789 | 785 |
Authors: J. Marvin Herndon
Ours is a time of unparalleled richness in astronomical observations, but understanding seems to be absent throughout broad areas of astrophysics. Among some groups of astrophysicists there appears to be measured degrees of consensus, as indicated by the prevalence of so-called "standard models", but in science consensus is nonsense; science is a logical process, not a democratic process, and logical connections in many instances seem to be lacking. So the question astrophysicists should ask is this: "What's wrong with astrophysics?" Finding out what's wrong is not only the necessary precursor to righting what's wrong, but will open the way to new advances in astrophysics. Toward that end, one may question the basic assumptions upon which astrophysics is founded, as well as question the approaches astrophysicists currently employ. Here I describe one methodology and provide specific examples, the details of which are set forth elsewhere [1-3]. In doing so, I place into a logical sequence seemingly unrelated astronomical observations, including certain Hubble Space Telescope images, so that causal relationships become evident and understanding becomes possible; as a consequence, profound new implications follow, for example bearing on the origin of diverse galactic structures and the origin of the heavy elements.
Comments: recovered from sciprint.org
[v1] 2 Apr 2008
Add your own feedback and questions here: | <urn:uuid:c5e63dcc-4dd5-4354-acf2-fbb2aaac06fc> | 2.53125 | 294 | Truncated | Science & Tech. | 22.152994 | 786 |
NOAA: Sixth Warmest February in Combined Global Surface Temperature, Fifth Warmest December-February
Last month’s combined global land and ocean surface temperature made it the sixth warmest February ever recorded. Additionally, the December 2009 – February 2010 period was the fifth warmest on record averaged for any similar three-month Northern Hemisphere winter-Southern Hemisphere summer season, according to scientists at NOAA’s National Climatic Data Center in Asheville, N.C.
Based on records going back to 1880, the monthly NCDC analysis is part of the suite of climate services NOAA provides to businesses, communities and governments so they may make informed decisions to safeguard their social and economic well-being.
Separately, the average global ocean surface temperature for both February and the December-February season was second warmest on record, behind 1998. The global land surface temperature for February 2010 tied with 1992 as the 14th warmest on record, while December-February period was the 13th warmest on record.
Global Highlights – February
•The combined global land and ocean surface temperature for February 2010 was the sixth warmest on record, at 1.08 degrees F (0.60 degrees C) above the 20th century average of 53.9 degrees F (12.1 degrees C).
•The global land surface temperature for February 2010 was 1.35 degrees F (0.75 degrees C) above the 20th century average of 37.8 degrees F (3.2 degrees C)—tying with 1992 as the 14th warmest February on record.
•Anomalously cool conditions were widespread across the contiguous United States, Mexico, Europe and Russia. Overall, the United Kingdom had its coolest February since 1991, and the Irish Republic, its coolest February since 1986.
•Warmer-than-average temperatures enveloped much of the rest of the world’s land areas, with the warmest temperature anomalies occurring across Alaska, Canada and across the Middle East and northern Africa.
•The February worldwide ocean temperature was the second warmest, behind 1998, on record. The temperature anomaly was 0.97 degrees F (0.54 degrees C) above the 20th century average of 60.6 degrees F (15.9 degrees C).
•A moderate-to-strong El Niño continued in February. Sea surface temperatures across parts of the equatorial Pacific Ocean were more than 2.7 degrees F (1.5 degrees C) above average during the month. According to NOAA’s Climate Prediction Center, El Niño is expected to continue at least through the Northern Hemisphere spring 2010.
Global Highlights – December 2009 – February 2010
•The combined global land and ocean average surface temperature for December-February was 54.8 degrees F (12.7 degrees C), which is the fifth warmest on record and 1.03 degrees F (0.57 degrees C) above the 20th century average of 53.8 degrees F (12.1 degrees C).
•The worldwide land surface temperature for December-February was 1.15 degrees F (0.64 degrees C) above the 20th century average of 37.8 degrees F (3.2 degrees C) – the 13th warmest on record. (Cool temperatures enveloped much of Europe, Russia, Mexico, central and southeastern contiguous U.S., southern Chile, southern Argentina and parts of northern Australia.)
•The United Kingdom had its coolest Northern Hemisphere winter since 1978-1979. The Irish Republic experienced its coolest winter since 1962-1963. Conversely, much of Australia was engulfed by warmer-than-average conditions. The warmth was concentrated in Western Australia, resulting in the warmest December-February period on record.
•The worldwide ocean surface temperature was 0.97 degrees F (0.54 degrees C) above the 20th century average of 60.5 degrees F (15.8 degrees C) and the second warmest December-February on record, behind 1998.
•Arctic sea ice covered an average of 5.6 million square miles (14.6 million square kilometers) during February. This is 6.8 percent below the 1979-2000 average extent and the fourth lowest February extent since records began in 1979. This was also the 12th consecutive February with below-average Arctic sea ice extent. February Arctic sea ice extent has decreased by 2.9 percent per decade since 1979.
•Antarctic sea ice extent in February was 7.3 percent above the 1979-2000 average, resulting in the eighth largest February extent on record. February Antarctic sea ice extent has increased by 3.1 percent per decade over the same period.
•Northern Hemisphere snow cover extent during February was the third largest on record, behind 1978 and 1972. North American snow cover for February was also the third largest extent since satellite records began in 1967—behind 1978 and 1979. Northern Hemisphere December-February snow cover during December 2009 -February 2010 was the second largest extent, behind 1978. North American snow cover for December-February 2010 was the largest extent on record.
Scientists, researchers, and leaders in government and industry use NOAA’s monthly reports to help track trends and other changes in the world’s climate. This climate service has a wide range of practical uses, from helping farmers know what and when to plant, to guiding resource managers with critical decisions about water, energy and other vital assets.
Background information on winter snowstorms in the United States and links to climate change is available online.
NOAA has also posted a Q & A feature regarding the monitoring stations that track these measurements.
NOAA understands and predicts changes in the Earth’s environment, from the depths of the oceans to surface of the sun, and conserves and manages our coastal and marine resources. | <urn:uuid:85634396-e951-4b7e-8833-e0ee3022596e> | 3.03125 | 1,176 | News Article | Science & Tech. | 54.325381 | 787 |
How is global warming responsible for the death of corals?
Submitted by: Ng Jing Yi
Global warming has increased the temperature of our tropical oceans by about a degree over the last hundred years. This has increased the chance that corals will undergo something called coral bleaching, which is where the plant-like symbionts inside corals (also called zooxanthellae) leave their tissues. The symbionts are important to corals because they give them energy (trapped from our Sun) which they use to grow and maintain themselves. When corals bleach and lose their symbionts, they are more susceptible to disease and death.
Since 1979, there have been six episodes of mass coral bleaching across the planet. None were reported before 1979. They have all been driven by small but stressful temperature rises, often only 1-2°C above the long-term summer maxima. In some episodes, such as the one that happened in 1998, over 16% of the world’s corals died. Given that corals build the habitat in which over one million species live, this is a very worrying impact of global warming on the planet’s tropical oceans.
Answer by: Prof.Ove Hoegh-Guldberg, Director, Centre for Marine Studies, University of Queensland, Australia. Deputy Director, ARC Centre for Excellence in Coral Reef Studies; BLOG: www.climateshifts.org | <urn:uuid:196a1ef3-9e84-4062-a135-bb8365d2cf9d> | 3.703125 | 296 | Q&A Forum | Science & Tech. | 47.624244 | 788 |
When All Data Are Not Created Equally
Tidewater areas can be difficult places to acquire consistent-quality seismic data, because different sources have to be used across exposed land surfaces than across shallow-water areas.
Typically, explosives are used in shot holes in the onshore portion of a tidewater prospect, whereas environmental regulations may require that an air-gun source be used in shallow-water areas.
These two seismic sources produce different basic wavelets – and profiles produced with explosives and air guns rarely tie in an optimal manner at common image coordinates without using wavelet-shaping algorithms to create equivalent reflection character across targeted intervals.
An example of using an explosive source and an air-gun source across a Louisiana tidewater area is documented as figures 1 and 2. This shallow-water test line was recorded twice because, at this location, explosive sources were allowed.
For one profile, the source was a 30-pound (13.6-kilogram) charge positioned at a depth of 135 feet (41 meters) at each source station.
For the second data acquisition along the same profile, the source was an array of four air guns with a combined volume of 920 in3, and eight air-gun pops were summed at each source station.
Considerable processing effort was expended to make the final reflection character identical on each test line. The data illustrated as figure 1 show the results of the data processing.
The frequency content of the two profiles is approximately the same, but wavelet character is not identical at the junction point (station 165). In this instance, the interpreter responsible for this prospect decided that the reflection character expressed by the explosive source was preferred rather than the wavelet response shown by the air-gun source.
The challenge was that in neighboring tideland areas, regulations required that an air-gun source be used in water-covered areas – shot-hole explosives could not be used in shallow water as they had been across this initial test site, and a method had to be developed that would allow air-gun-source data to be used in conjunction with explosive-source data acquired across adjacent exposed-land areas.
Said another way, the problem was to create a basic wavelet in air-gun-generated data that was equivalent to the basic wavelet embedded in explosive-source data.
This type of problem has to be solved by data-processing procedures, not by data-acquisition techniques.
An approach used by many data processors to ensure that equivalent basic wavelets exist in two seismic profiles acquired with different sources is to calculate numerical cross-equalization operators that convert the phase and frequency spectra of source A to be equivalent to the phase and frequency spectra of source B.
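One common way to build such an operator is a least-squares (Wiener-type) matching filter designed in the frequency domain. The following is a rough sketch only, where a and b stand for wavelets estimated from the air-gun and explosive-source traces at the tie point; the names and the stabilization constant are illustrative, not taken from the article.

import numpy as np

def matching_filter(a, b, eps=1e-3):
    """Design an operator w such that convolving a with w approximates b.

    Built in the frequency domain as W = B conj(A) / (|A|^2 + water level),
    where the small "water level" stabilizes the spectral division.
    """
    A = np.fft.rfft(a)
    B = np.fft.rfft(b)
    W = B * np.conj(A) / (np.abs(A) ** 2 + eps * np.max(np.abs(A) ** 2))
    return np.fft.irfft(W, n=len(a))

# The operator is then convolved with every trace of the air-gun profile, e.g.:
# shaped_trace = np.convolve(trace, w, mode="same")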
This technique was applied to the tidewater seismic data illustrated on figure 1 by using data from the image trace at station 153 to calculate cross-equalization operators that converted the phase/frequency spectra of the air-gun data to the spectra of the explosive-source data.
The result is exhibited as figure 2.
The wavelet character of the profiles now agrees better at the tie point so that common horizons, sequence boundaries, and facies character can be interpreted on both profiles with greater confidence.
The example discussed here is from a tidewater area where operating and environmental constraints forced different sources to be used on land-based and water-based seismic lines.
The concept of numerical equalization of the basic wavelets embedded in any grid of intersecting 2-D (or 3-D) data, however, applies to a variety of onshore and offshore areas where people have access to overlapping legacy seismic data that have been acquired by different companies at different times and with different energy sources. | <urn:uuid:6a86ef07-f85d-4bed-97d1-e8313081a771> | 2.9375 | 759 | Academic Writing | Science & Tech. | 18.732304 | 789 |
Science Fair Project Encyclopedia
Very nice diagrams of refraction (with the red lines). Very good at explaining the phenomenon.
I think that a rainbow is visible only when the sun is at a low altitude- mornings and late afternoon/ evenings. Isn't there some specific angle for this? KRS 15:33, 1 Feb 2004 (UTC)
- I added: Hence there is no rainbow if the sun is at a higher altitude than 42°: the rainbow would be below the horizon. --Patrick 23:30, 1 Feb 2004 (UTC)
Nevertheless, it is not true, as sometimes one can look below the horizon. For example, if you are looking down from a mountain, or - as mentioned in the article! - from an aeroplane.
I've deleted the incorrect reference to glories from the aeroplane comment. Glory is a different optical phenomenon from rainbow and it is incorrect to state that a full-circle rainbow is a glory. This error needs to be removed from the page Glory_(rainbow) and I've put that on my task list, but I'm not sure how to fix the problem that the error is incorporated into the page title. Advice welcome. --Richard Jones 13:45, 20 Mar 2004 (UTC)
Added: Even more rarely is a triple rainbow seen and a few observers have reported seeing quadruple rainbows in which a dim outermost arc had a rippling and pulsating appearance. - Sounds fantastic, but I saw this, and I was not the only one - Leonard G. 03:50, 25 Aug 2004 (UTC)
The article does a clumsy job of explaining what is special about the 42° or the 52° angle. The picture led me to correctly see that light can be refracted, internally reflected, and refracted again over a large range of angles; it's just that 42° is where the greatest intensity of the refracted light emerges. The page http://www.phy.ntnu.edu.tw/java/Rainbow/rainbow.html has a much better explanation for the angle. 220.127.116.11 21:46, 30 Aug 2004 (UTC)
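For reference, the standard minimum-deviation argument behind the 42° figure (using n ≈ 1.33 for water) runs as follows. A ray entering at incidence angle $i$ and refracting at angle $r$ (with $\sin i = n \sin r$) that undergoes one internal reflection is deviated by
$$D(i) = 2i - 4r + 180^\circ.$$
Setting $dD/di = 0$ gives $\cos^2 i = (n^2 - 1)/3$, so $i \approx 59.6^\circ$, $r \approx 40.4^\circ$, and $D_{\min} \approx 137.5^\circ$. Rays pile up near this minimum deviation, so the bright bow appears about $180^\circ - 137.5^\circ \approx 42^\circ$ from the antisolar point.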
I'm not clear on this section: In a very few cases, a moonbow, or night-time rainbow, can be seen on strongly-moonlit nights. As human visual perception for colour in low light is poor, moonbows are perceived to be white. In Hawaii, we see moonbows all the time, and it's possible to make out many colors. So, what does the editor (or author) mean by "in a very few cases"? --Viriditas 12:00, 29 Oct 2004 (UTC)
The article states: Even more rarely is a triple rainbow seen and a few observers have reported seeing quadruple rainbows... These things are not rare in Hawaii. I've seen triple rainbows many times and a quadruple rainbow only twice. --Viriditas 12:32, 29 Oct 2004 (UTC)
- More importantly we could use a scientific explanation of how they are possible. I've seen a 3+ rainbow and know that the additional bows cannot be explained using Descartes' internal reflections in a rain drop. -- Solipsist 08:32, 24 Nov 2004 (UTC)
The main mnemonic described in the article is 'Richard of York...', given the subject am I right in thinking that this is only commonly used in the UK?
Another editor has also added 'Roy G. Biv' saying it is more common. I haven't heard this one, is it common in the US? -- Solipsist 08:43, 24 Nov 2004 (UTC)
Total internal reflection?
The article states that light is reflected from the back of the drop under total internal reflection. I find this statement rather dubious at best. A quick derivation from Snell's law shows that the minimum angle for total internal reflection in water (using nw = 1.33) is 48.7 degrees. That would imply that the angle at the back of the droplet is greater than 90 degrees, which by inspection is not the case.
Since light would therefore leave the back of the drop refracted, would it not be impossible to see a rainbow between the observer and the sun, if the appropriate areas of the sky were unobscured? Kenneth Charles
Edit: I did some research. Light is indeed passed out the back of a droplet, but due to the fact that there is no distinct peak of emission from this spectra, it does not form a visible rainbow. However, the statement that light is totally internally reflected inside a raindrop is wrong and should be removed.
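For reference, the critical-angle figure quoted above follows directly from Snell's law at the water-air boundary:
$$\sin\theta_c = \frac{1}{n_w} = \frac{1}{1.33} \;\Rightarrow\; \theta_c \approx 48.8^\circ,$$
and the ray responsible for the primary bow meets the back surface of the drop at an internal angle of only about $40^\circ$, below $\theta_c$, so the reflection there is partial rather than total.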
The contents of this article is licensed from www.wikipedia.org under the GNU Free Documentation License. Click here to see the transparent copy and copyright details | <urn:uuid:be7c449a-7a83-4c19-9cde-e1a313a1a2b7> | 3.28125 | 997 | Comment Section | Science & Tech. | 68.455178 | 790 |
This Month in Physics History
November 1887: Michelson and Morley report their failure to detect the luminiferous ether
Albert Abraham Michelson was born in Strelno, Germany in 1852. When he was two years old his family moved to the US, and he grew up in the rough mining towns of Murphy’s Camp, California and Virginia City, Nevada. As a youngster, he showed some aptitude for science, and at age 16 he obtained a special appointment to the U.S. Naval Academy from President U.S. Grant.
As a student at the Naval Academy, he excelled at optics and other sciences, and clearly had an aptitude for precision instruments and measurements. He graduated in 1873, and then became an instructor of physics and chemistry at the Naval Academy. In 1877, while conducting a classroom demonstration of Foucault’s measurement of the speed of light, he realized he could make significant improvements on the method. Within the next two years, Michelson managed to measure the speed of light with much greater precision than ever before. The measurement brought him some recognition as a scientist, and settled him on pursuing a career in physics research. He then headed to Europe to study for the next two years.
Working in Berlin, he invented the device known as the Michelson interferometer. He realized he could use the setup to detect the Earth’s velocity through the ether. The basic design is simple and elegant. A beam of light is split and sent down two perpendicular paths. Then, after bouncing off mirrors, the two beams are recombined, producing an interference pattern. If the Earth was indeed traveling through the ether, the speed of light would differ depending on its direction with respect to the Earth’s motion through the ether, and Michelson’s interferometer would pick up a slight shift in the interference fringes. However, these early efforts found no evidence of the Earth’s movement with respect to the ether. Michelson was disappointed by the result and considered the experiment a failure. Nonetheless, he continued his effort to detect the ether when he returned to the United States.
In 1882 Michelson took a position at the Case School of Applied Science in Cleveland, Ohio. There he teamed up with chemist Edward Morley, who helped make some improvements in the experiments Michelson had begun in Berlin. The new apparatus was similar in basic design to his previous ones, but much more sensitive. It used extra mirrors to allow the light beams to bounce back and forth, creating a much longer path length. Michelson and Morley conducted the experiments in a basement lab, and to minimize vibrations, the setup rested atop a huge stone block, which floated in a pool of mercury that allowed the entire apparatus to rotate.
Even with this exquisitely sensitive design, Michelson and Morley couldn’t detect evidence of motion through the ether. They reported their null result in November 1887 in the American Journal of Science, in a paper titled “On the Relative Motion of the Earth and the Luminiferous Ether.” (The paper is online at www.aip.org/history/gap/Michelson/Michelson.html.)
Though disappointing to Michelson and Morley, the experiment revolutionized physics. Some scientists initially tried to explain the results while keeping the ether concept. For instance, George FitzGerald and Hendrik Lorentz independently proposed that moving objects contract along their direction of motion, making the speed of light appear the same for all observers. Then in 1905 Albert Einstein, with his groundbreaking theory of special relativity, abandoned the ether and explained the Michelson-Morley result, though it is uncertain whether Einstein was actually influenced by their experiment.
Michelson and Morley nonetheless both continued to believe that light must be a vibration in the ether, though Michelson did acknowledge the importance of Einstein’s work on relativity.
Although it couldn’t detect the non-existent ether, the Michelson interferometer proved useful for other measurements. Michelson used his interferometer to measure the length of the international standard meter in terms of wavelengths of cadmium light, and in 1920 he was the first to measure the angular diameter of a distant star, also using an interferometer. In 1901 Michelson was the second president of the APS, and he became the first American to win the Nobel Prize in 1907, for his precision optical instruments and measurements made with them. In 1889 Michelson moved to Clark University in Worcester, Massachusetts, and then in 1892 to the University of Chicago. He returned to his work refining measurements of the speed of light, and continued making more and more precise measurements right up to his death in 1931.
©1995 - 2013, AMERICAN PHYSICAL SOCIETY
APS encourages the redistribution of the materials included in this newspaper provided that attribution to the source is noted and the materials are not truncated or changed.
Contributing Editor: Jennifer Ouellette
Staff Writer: Ernie Tretkoff
Art Director and Special Publications Manager: Kerry G. Johnson
Publication Designer and Production: Nancy Bennett-Karasik | <urn:uuid:0a58269b-cea8-4a0a-899f-17cfb5879699> | 3.8125 | 1,061 | Knowledge Article | Science & Tech. | 39.142476 | 791 |
Copyright © 2001–2008 jsd
I set up some spreadsheets to solve Laplace’s equation, with more-or-less any boundary conditions you want.
The spreadsheet becomes, essentially, a 2D cellular automaton that directly emulates the physics.
This version handles objects in a D=2 universe in rectangular coordinates. In flatland, i.e. D=2, the Z direction simply does not exist. Alas many people are unfamiliar with the laws of physics in flatland. Therefore it might be better to think of this as a D=3 universe in which all D=3 objects are infinitely tall and translationally invariant along the Z axis. In this case, the Z direction exists, but is uninteresting, and the essential physics is the same as the D=2 case. (This is not the same as considering a thin flat “D=2” object embedded in the D=3 universe!) In any case, each cell represents an area dx∧dy in the XY plane. The spreadsheet to handle this case can be found in reference 1.
Occupying a large area near the upper left of the spreadsheet is a grid that I call the potential grid. You can set boundary conditions for the problem by choosing cells that you want to represent electrodes, and specifying the potential on these electrodes. For example, reference 1 contains three electrodes:
Within the universe, cells that are not electrodes are called vacuum cells. They contain a formula that will be used to calculate their potential, in accordance with Laplace’s equation, subject to the specified boundary conditions. If you want to “erase” part of an electrode, you should use the copy-and-paste function to fill those cells with the vacuum formula.
Just to the right of the "potential" grid there is second grid that I call the |field| grid because it calculates and displays the magnitude of the electric field at each point. Farther right is a third grid that calculates the charge density (charge per unit volume). If you add up all the cells in a given area, you get a charge per unit length. This means length in the Z direction; it is the charge per unit length of the object rooted in the given area and extending infinitely far perpendicular to the screen.
Principle of operation: Consider a cross-shaped group of 5 elements somewhere on the spreadsheet, and label them as shown in figure 1.
Now the discrete approximation to the second derivative in the horizontal direction is b+c−2w, and in the vertical direction it is a+d−2w. The Laplacian vanishes if w=(a+b+c+d)/4, i.e. if the central element is equal to the average of its four neighbors. Recall we are assuming (d/dz) is zero. This leads to an algorithm that says that for each cell in the vacuum, we want to equal the average of its four neighbors. So the basic step of the algorithm is to run through the grid and just set each cell to the average of the neighboring cells. That does not immediately solve the problem, because whenever we change a cell it requires us to change all the neighbors. However, each basic step brings us closer to a good solution, so we just repeat the basic step several times. This is called the relaxation algorithm.
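Here is a minimal sketch of the same basic step outside the spreadsheet (not the spreadsheet formula itself), assuming a NumPy array V holding the potentials and a boolean array is_electrode marking the cells whose values are held fixed; both names are made up for the example. np.roll wraps around the edges, which corresponds to the periodic boundary conditions discussed below.

import numpy as np

def relax_step(V, is_electrode):
    """One relaxation pass: set every vacuum cell to the average of its
    four neighbors, leaving electrode cells at their fixed values."""
    avg = 0.25 * (np.roll(V, 1, axis=0) + np.roll(V, -1, axis=0) +
                  np.roll(V, 1, axis=1) + np.roll(V, -1, axis=1))
    return np.where(is_electrode, V, avg)

def solve(V, is_electrode, tol=1e-6, max_iter=20000):
    """Repeat the basic step until the grid stops changing appreciably."""
    for _ in range(max_iter):
        V_new = relax_step(V, is_electrode)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new
    return V

Because this updates every cell from the previous pass at once (a Jacobi-style sweep), it also avoids the direction-dependent propagation that the spreadsheet's cell-evaluation order introduces, as discussed below.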
Another way to motivate the same algorithm is to consider the electrostatic field energy. It depends on the square of the electric field, i.e. the square of the first derivatives of the potential. This energy is minimized when the central cell is equal to the average of its four neighbors. Therefore each step of the update algorithm lowers the local energy.
Tangential remark: You can say that the field energy serves as a Lyapunov function for the relaxation algorithm ... but if this doesn’t mean anything to you, don’t worry about it.
Reference 1 has 841 cells arranged as a 29x29 grid. For a grid of this size, the relaxation algorithm converges in a few seconds. That’s fast enough that it’s not boring, but slow enough that you can observe the propagation of changes if you fiddle with the boundary conditions.
There is a cell just above the top right of the potential grid, labeled object potential. If you change the value of this cell, you can watch how the charge distribution responds.
While the algorithm is running, i.e. after you have changed something but before the algorithm has converged to a solution, the grid contains an approximate solution that doesn't exactly satisfy Laplace's equation. That is, during this phase, there will be nonzero charge in the “vacuum”. This is unavoidable, because the spreadsheet strictly enforces local conservation of charge, as discussed in section 2.2. That means there is no way for the objects to acquire the correct charge unless charge flows through the vacuum somehow. The algorithm gradually moves all this charge to the boundaries. The “Manual recalculation” mode (using the “F9” key) may help you observe this, as discussed in section 5. Excel evaluates cells in a sequence that it chooses. The sequence defies simple description, and it has nothing to do with the physics. (Remember, this is an electrostatic problem; there is no physically-significant timescale.) Unfortunately, this sequencing means charge propagates quickly in certain directions across the grid, and slowly in the opposite directions. If you were writing in a computer language that gave you more control than Excel does, you could get rid of this unphysical asymmetry by evaluating things in checkerboard-sequence (all the black squares, then all the white squares) or in randomized order.
As mentioned above, just outside the edge of the potential grid is a layer of cells that implement the boundary conditions. In this example, they implement Born/von-Kármán periodic boundary conditions. That is, given a universe of N rows by M columns, row N+1 is constrained to equal row 1, and column M+1 is constrained to equal column 1. You can think of this as a torus, where the top edge of the N×M grid joins the bottom, and the left edge joins the right. Equivalently, you can imagine tiling an infinite region with copies of the N×M grid, subject to the constraint that corresponding cells have the same value in every tile.
Below the potential grid is a graph with many traces; each trace shows the potential as a function of x, while different traces show different y values (rows). Clicking on one of the traces highlights the corresponding row. This may help you locate extremal values.
Below the field grid is a similar graph with many traces.
You can make the universe bigger by adding more rows and columns if you like; use the "fill across" and "fill down" features to propagate the vacuum formula into the new cells. Beware: you must fill from a vacuum cell that is not adjacent to the newly-added cells or the results will be incorrect.
You could extend this calculation to D=3, removing any assumption of translational symmetry. One possible brute-force solution would be to make a spreadsheet with 29 different 29x29 grids and put the appropriately-generalized formula in them. On the other hand, when the problem gets this complicated, you’re probably better off using a more sophisticated programming language, such as C++.
Reference 2 is similar to reference 1, but has several additional features. For one thing, it uses a fancier formula in the vacuum cells. It uses a technique called “over-relaxation” to improve the speed of convergence. This is described at e.g. reference 3.
Basically the idea is to figure out how big a step the simple relaxation algorithm would have taken, and take a step larger than that by a factor of gamma, in hopes of moving more quickly towards the final result. Gamma=1 corresponds to the plain old relaxation algorithm, with no over-relaxation. Values between 1 and 2 make sense. (If gamma were set greater than 2, the electrostatic energy would increase at every step, so the algorithm would not converge.) The value of gamma is controlled by a cell near the top right of the potential grid.
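Continuing the NumPy sketch from earlier (same illustrative array and function names), the over-relaxed update just scales the step that plain relaxation would take:

def over_relax_step(V, is_electrode, gamma=1.7):
    """One over-relaxation pass: move gamma times as far as plain relaxation would."""
    avg = 0.25 * (np.roll(V, 1, axis=0) + np.roll(V, -1, axis=0) +
                  np.roll(V, 1, axis=1) + np.roll(V, -1, axis=1))
    V_new = V + gamma * (avg - V)   # gamma = 1 reproduces the plain algorithm
    return np.where(is_electrode, V, V_new)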
More generally, reference 4 describes a fancy fortran program for doing calculations of this sort. If you’re interested in such things, take a look there.
Reference 2 has another cute little feature, the “gate” cell at the lower right of the potential grid. Setting it to zero sets the vacuum potential to zero everywhere. Setting it back to a nonzero value allows the potentials to be recalculated. This is convenient if you just want to watch how the solution propagates. It is also invaluable for recovering from the following situation: If you enter an invalid expression into a cell in or near the vacuum, the spreadsheet will be unable to calculate the neighboring cell values, and the problem will spread from cell to cell like a disease.
As mentioned above, all the potential grids in reference 1 and reference 2 implement periodic boundary conditions – also known as Born/von-Kármán boundary conditions.
Periodic boundary conditions are not the only possible choice. Another option to have a hall of mirrors. That is, imagine that just to the left of the model universe there is a mirror-image copy of itself. Then impose periodic boundary conditions on the pair (with the appropriate double-length period). Do the same in the vertical direction. You can turn on this feature in the advanced spreadsheet by putting a nonzero value in the cell labeled “hall of mirrors” near the lower-right corner of the potential grid.
The hall-of-mirrors condition has an interesting property: it causes the directional derivative of the potential, in the direction perpendicular to the edge, to be zero at the edge of the universe.
For some applications, for instance if you are trying to model the “self-capacitance” of some object, the hall-of-mirrors boundary condition may approximate the desired physics better than periodic boundary conditions would.
In reference 2, over on the lower right below the main charge-density grid, there is a pair of smallish grids labeled “Charge Conservation”. They serve to illustrate the principle of global charge neutrality and local charge conservation. The pair consists of a potential grid and the corresponding charge-density grid. In this potential grid, you can put an arbitrary arrangement of values in the cells. No matter what you do, no matter how weird the potential-arrangement is, the total charge (i.e. the sum over the charge-density grid) comes out zero, provided you don’t mess with the periodic boundary conditions.
It is easy to see why this must be so: We calculate the charge by convolving the operator (a+b+c+d−4w) with the potential grid. Every nonzero potential cell contributes to the convolution grid five times: once as a, once as b, once as c, once as d, and once (weighted by -4) as w. If you add those five contributions, you get zero every time. (There may be small discrepancies due to roundoff errors, which we ignore.)
The cells in this little grid are just numbers. We do not run the relaxation algorithm on them. This should make it clear that the global charge neutrality, in this model system, has nothing to do with the relaxation algorithm. You could use potential values from the relaxation algorithm, or from some other algorithm, or from a random-number generator, and the total charge in the universe would still be zero. No algorithm can change this zero.
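That claim is easy to verify numerically. Here is a small self-contained check in the same spirit, using the discrete Laplacian with periodic (torus) boundaries; the names are again illustrative.

import numpy as np

def charge_density(V):
    """Discrete Laplacian (a + b + c + d - 4w) with periodic boundaries;
    up to a constant factor this is the charge implied by the potential grid V."""
    return (np.roll(V, 1, axis=0) + np.roll(V, -1, axis=0) +
            np.roll(V, 1, axis=1) + np.roll(V, -1, axis=1) - 4.0 * V)

V = np.random.default_rng(0).normal(size=(29, 29))  # any arrangement of potentials at all
print(charge_density(V).sum())   # ~1e-13, i.e. zero up to roundoff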
This zero can be seen as a manifestation of Gauss’s law. We can consider the edge of the universe to be a Gaussian pillbox. The periodic boundary condition ensures that whatever field lines leave the top of the universe re-enter the bottom of the universe. Therefore there is no net flux flowing into the universe. (In the example, the field happens to be zero at the edge, making it extra-obvious that there is no net flux.) Since there is no net flux, the net charge on the interior must be zero. The validity of Gauss’s law depends on the structure of the operator (a+b+c+d−4w) and not much else. Its applicability depends on the boundary condition for the universe itself.
Global charge neutrality automatically implies global conservation of charge. Global conservation is vaguely interesting, but it is important in physics, however, to have a local conservation law. Here’s why: Suppose some charge unaccountably disappeared from my lab. It would give me little comfort to be told that it reappeared in some unknowable distant part of the universe; I would be unable to distinguish non-local conservation from from non-conservation. Fortunately, our model system does have a local conservation law. If you increase the potential in any one cell, it causes an increase in the charge-density in the corresponding cell — but this increase is exactly counterbalanced by a decrease in the four neighboring cells (not in some goofy distant cells). Again, this depends on the structure of the Laplacian, not on the update algorithm.
Just below the aforementioned pair of grids is yet another pair of smallish grids, labeled “Gauge Invariance”.
As in most of the other grids, I have imposed Born/von-Kármán periodic boundary conditions. As before, this exhibits global charge neutrality and local charge conservation.
This grid is set up to make it easy to demonstrate the concept of gauge invariance. That is, if you add a uniform constant to all cells, the distribution of charge is unchanged and the electric field is unchanged. Note the contrast:
If you change the potential of only one cell, the fields are changed, and the distribution of charge is changed; only the total charge is unchanged.
If you change the potential of all cells equally, nothing changes in the field distribution or the charge distribution.
It is amusing to first prove the gauge invariance property for the N×M universe with periodic boundary conditions, and then let N and M become exceedingly large. That provides a nice way to describe a large universe without requiring any funny boundary conditions that might require gauge-non-invariant amounts of charge “at the edge of the universe”. The torus has no edges.
The Laplacian we have been using – (a+b+c+d−4w) – is manifestly gauge invariant because adding a constant to all cells of a solution produces another solution to Laplace’s equation, and both solutions have the same charge distribution.
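The same little numerical check from above shows this directly: shifting every cell of the potential by the same constant leaves the charge grid untouched.

rho = charge_density(V)
rho_shifted = charge_density(V + 7.3)          # add any constant everywhere
print(np.max(np.abs(rho_shifted - rho)))       # ~1e-13, i.e. unchanged up to roundoff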
In section 2.2, we concluded that the structure of the Laplacian guaranteed local conservation of charge. Here we just concluded that the same structure guarantees gauge invariance. These are two quite different conclusions, but they are profoundly related. Gauge symmetry necessarily implies conservation of charge. Actually this is just the tip of the iceberg; there is a deep and beautiful result called Noether’s theorem that says for every continuous symmetry, there is a corresponding conservation-of-something law. Examples include:
If you want to estimate how much capacitance of your system would have in a very large universe, it is nice to compare the various boundary-value options: grounded enclosure, ordinary periodic boundary conditions, or hall-of-mirrors boundary conditions. In the limit where the boundary becomes very far away from the other circuit elements, the latter two converge to the same limit. The grounded enclosure option does not generally converge to the same limit, as is obvious from the following:
Using any of the aforementioned spreadsheets, if you have just one object and nothing else, the capacitance of the object is always zero. Gauge invariance guarantees it. That is, you can put any potential you like on the singleton object, and it won’t develop any charge. The only way to produce a charge is to have two (or more) objects with different potentials. (The enclosure, if present, counts as an object like any other. There is nothing special about the enclosure.)
I created another very-similar spreadsheet that solves Laplace’s equation in D=3 for objects with rotational symmetry about the Z axis. It can be found in reference 5.
Unlike the previous versions, it does not assume translational invariance along the Z axis, so you can calculate the behavior of objects shaped like pears, or bowls, et cetera. Each cell represents an area dr∧dz in polar coordinates. Note that I am avoiding the word “cylindrical” because mathematicians use the word to describe anything with translational invariance, while physicists use the same word to describe anything with rotational invariance. Sigh.
This is the same as the previous spreadsheet, but it uses the formula for the Laplacian in polar coordinates as discussed at e.g. reference 6.
The potential grid represents a slice through the axis of symmetry. Rotational symmetry implies that any such slice has reflection symmetry. If you fill in the left half-plane of the potential grid (with your chosen objects and other boundary conditions), the spreadsheet formulas will mirror it in the right half-plane. It is not necessary or desirable for you to manually change anything in the right half-plane.
Similar remarks apply to the symmetry of the charge-density grid; it represents a slice through the axis of symmetry.
In D=3 with rotational symmetry about the Z axis, the Laplacian is (d/dr)² + (1/r)(d/dr) + (d/dz)²; we know the phi-derivative is zero.
(You can contrast this with the previous cases, namely D=3 with translational symmetry in the Z direction, where the Laplacian was (d/dx)² + (d/dy)²; we knew the Z-derivative was zero.)
In the cells of the spreadsheet, I have simplified the formula by observing that (1/r)(d/dr) is equal to (1/x)(d/dx) on the slice of interest, by cancellation of a factor of sign(x).
In this spreadsheet there is a fourth grid, just to the right of the grid that shows the charge per unit volume. It shows the charge per unit area (dr∧dz) in a ring. You can find the total charge on an object by summing the numbers in this grid. There is no point in summing the numbers in the charge-per-unit-volume grid; that doesn’t make sense for several reasons, including dimensional analysis.
To improve the accuracy, I use a smart estimate of the quantity (1/r)(d/dr). In particular, I take the arithmetic mean of the left-hand difference (w−b)/x1 and the right-hand difference (c−w)/x2; this accounts for an important nonlinearity because the radius is different in the two denominators.
Validity checks: I verified that a region with a log(r) potential produces zero charge density, with high accuracy. I also checked that the field calculation and charge calculation are automatically gauge invariant, because of the structure of the Lapacian operator.
I implemented periodic boundary conditions in the Z direction, and this is the default behavior. I also implemented hall-of-mirrors boundary conditions, which you can optionally use instead.
In the R direction, there is only one choice: the perpendicular component of the electric field vanishes on this boundary. This is reminiscent of the hall-of-mirrors boundary condition, but there is no physical interpretation in terms of tiling the universe. Instead, this can be viewed as surrounding the region of interest, at each Z level, with an annulus extending to infinity. The potential on this annulus depends on Z but is independent of R. This means that outside the region of interest, there will be zero charge, although there will be nonzero fields. These fields seem a bit unphysical. To make these fields go away, you can arrange that the potential at the large-R boundary is independent of Z. To achieve this, it suffices to arrange that one of your electrodes fully encloses all the other electrodes.
You can use these spreadsheets to calculate capacitance.
We start by assigning suitable voltages to objects on the potential-grid and observing the induced charges. We find the total charge on each object by summing the cells of the charge-grid occupied by each object.
Then we hold N−1 of the objects at constant potential and wiggle the voltage on the remaining one. We observe what happens to the charge on each and every object by turning the crank on Laplace’s equation. This gives us numerical values for the matrix elements Cij.
For details on what a capacitance matrix is, and how to calculate its matrix elements, see reference 7.
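To make the bookkeeping concrete, here is a sketch of the procedure. It assumes some function solve_fn that "turns the crank on Laplace's equation" (for example, one built from the relaxation sketch earlier), a list of boolean masks marking which grid cells belong to each electrode, and the charge_density function from the earlier sketch; all of these names are invented for the example.

import numpy as np

def capacitance_matrix(electrode_masks, solve_fn, dv=1.0):
    """Estimate C[i][j] = dQ_i / dV_j by wiggling one electrode voltage at a time.

    electrode_masks : list of boolean grids, one per electrode
    solve_fn        : maps a list of electrode voltages to a solved potential grid
    """
    n = len(electrode_masks)
    V0 = solve_fn([0.0] * n)
    Q0 = [charge_density(V0)[m].sum() for m in electrode_masks]
    C = np.zeros((n, n))
    for j in range(n):
        volts = [0.0] * n
        volts[j] = dv
        Vj = solve_fn(volts)
        for i, m in enumerate(electrode_masks):
            C[i, j] = (charge_density(Vj)[m].sum() - Q0[i]) / dv
    return C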
The following remarks apply to all three versions of the spreadsheet.
I’ve got “iteration-mode” turned on.
Suggestion: If you are going to seriously play with this, you will want to make use of the spreadsheet’s recalculation controls. You might want to delay recalculation if you are making numerous changes to the grid, but thereafter you will want automatic recalculation:
At any time you can invoke the manual "recalculate now" function with the F9 key.
The following small features probably require having the “1997” (or later) version of excel. They are known to work with version 9 (the one that comes with office 2000):
Copyright © 2001–2008 jsd | <urn:uuid:ff04d2e1-0ceb-4898-ad7a-57be53abefec> | 3.328125 | 4,562 | Tutorial | Science & Tech. | 48.512402 | 792 |
It is very common to use the ORDER BY clause with a column name to specify the sort order of the columns returned in a SELECT statement.
order by columnname asc/desc
I usually use the same method, but did you know that we can use a column index (an integer representing the position of the column) instead of specifying the column name or alias in a SQL Server ORDER BY expression?
order by Column_Index_Number asc/desc
We can use either form, because both queries return the same results.
Let's check both the syntax and the results.
Here I am going to query the AdventureWorks database.
Example :- Getting the top 10 rows from the Employee table, ordered by EmployeeID in ascending order
select top 10 * from HumanResources.Employee order by EmployeeID asc
output will be as given below :-
When using the column index instead of the column name for the same query:
Note :- The column index can be 1, 2, 3, etc., depending on the position of the column you want to sort by
select top 10 * from HumanResources.Employee order by 1 asc
then the output will be the same, as given below :-
Reference Link :- Matt Berseth Articles
Note :- The ORDER BY clause is the only clause where we can use the ordinal position [column numbers], because it is based on the column(s) specified in the SELECT clause.
It is generally recommended to use the column name instead of the column number,
but in some cases using the column number can be useful, for example in dynamic SQL where the column names are not known in advance.
for getting practical way you can follow this | <urn:uuid:61309e61-d2c1-4fdb-8f00-40270a95b3e6> | 2.640625 | 329 | Q&A Forum | Software Dev. | 28.568846 | 793 |
Histogram of the raw rainfall (mm) amount for running 3-month periods in chronological
order from 1955 through 1996. The seasonal cycle of the quartile boundaries (25 %ile: lower light line; 50 %ile [i.e., median]: dark line; and 75 %ile:
upper light line) are plotted with the actual rainfall amounts for the given period/year (vertical bars).
The year labels shown on the horizontal axis are placed at the center of the calendar years rather than at Dec-Jan-Feb, with the latter
denoted by tick marks.
The ENSO status of each boreal winter is shown underneath the main panels of the histogram. Boreal winters that are split between two rows of the
histogram (i.e., 1968-69 and 1982-83) have their ENSO status indicated in both rows. | <urn:uuid:54416580-72fc-46d4-8c33-021e084cae99> | 2.703125 | 181 | Structured Data | Science & Tech. | 68.527735 | 794 |
Common Lisp the Language, 2nd Edition
Several kinds of numbers are defined in Common Lisp. They are divided into integers; ratios; floating-point numbers, with names provided for up to four different floating-point representations; and complex numbers.
X3J13 voted in March 1989 (REAL-NUMBER-TYPE) to add the type real.
The number data type encompasses all kinds of numbers. For convenience, there are names for some subclasses of numbers as well. Integers and ratios are of type rational. Rational numbers and floating-point numbers are of type real. Real numbers and complex numbers are of type number.
Although the names of these types were chosen with the terminology of mathematics in mind, the correspondences are not always exact. Integers and ratios model the corresponding mathematical concepts directly. Numbers of type float may be used to approximate real numbers, both rational and irrational. The real type includes all Common Lisp numbers that represent mathematical real numbers, though there are mathematical real numbers (irrational numbers) that do not have an exact Common Lisp representation. Only real numbers may be ordered using the <, >, <=, and >= functions.
A translation of an algorithm written in Fortran or Pascal that uses real data usually will use some appropriate precision of Common Lisp's float type. Some algorithms may gain accuracy or flexibility by using Common Lisp's rational or real type instead. | <urn:uuid:bcd01b03-abb2-4458-a8b2-0122562fb6ac> | 3.828125 | 283 | Documentation | Software Dev. | 36.567112 | 795 |
Anti de-Sitter space
Bubbles, filaments, voids and sheets
Condensed matter system
Cosmic Microwave Background
Deep field survey
Degrees of freedom
Grand unification theory
Heisenberg uncertainty principle
Hubble's law and constant
Intercommuting and loop production
Laws of thermodynamics
Nematic liquid crystal
Quantum Field Theory
Speed of light
Strong and Electroweak forces
Surface of last scattering
The Great Attractor
Theory of Everything
When dealing with geometries that describe the Universe, we do not work with conventional three-dimensional Euclidean geometry; we have to adapt it to represent a four-dimensional spacetime. This results in what is known as a Lorentzian manifold. Within this geometry, we deal with three types of space: de-Sitter space, anti de-Sitter space and Minkowski space. They are the analogues of spherical, hyperbolic and Euclidean space with regard to four-dimensional spacetime.
This is a type of hypothetical particle of zero electrical charge that has come out of the framework of Quantum Chromodynamics. It is hypothesised that these were created during the very early Universe. They have little mass and do not easily interact with normal matter. No experimental evidence for them exists as of yet, but they are one of the possible contenders for dark matter.
A baryon is a category of subatomic particle which is composed of three quarks. This is opposed to a meson, which is composed of one quark and one antiquark. Baryons include protons and neutrons and make up the majority of the mass of visible matter in the Universe (i.e. the mass of the Universe that is not Dark Matter or Dark Energy). They participate in the Strong Nuclear force.
About thirteen billion years ago, the Universe began in a gigantic explosion. Every particle started rushing apart from every other particle in an early super-dense phase. The fact that galaxies are receding from us in all directions is a consequence of this initial explosion. Projecting galaxy trajectories backwards in time means that they converge to a high-density state.
This is one of the possible ends to the Universe as we know it. Cosmic expansion pushes the contents of the Universe apart, while gravitation brings matter together. Depending on the density of the Universe, one of these may overcome the other, or alternatively the Universe may be of critical density, which would result in a "flat" Universe. If the Universe has a density higher than this critical density, then gravitation will eventually overcome the forces working to expand the Universe, and the matter in the Universe would start to converge on other matter, until all the matter in the Universe converges into a singularity.
Given that we now know that the expansion of the Universe is accelerating, it now seems unlikely that this situation will arise.
A black hole is a region of spacetime from which nothing can escape, even light.
To see why this happens, imagine throwing a tennis ball into the air. The harder you throw the tennis ball, the faster it is travelling when it leaves your hand and the higher the ball will go before turning back. If you were able to throw it hard enough, it would never return; the gravitational attraction will not be able to pull it back down. The velocity the ball must have to escape is known as the escape velocity.
As a body is crushed into a smaller and smaller volume, the gravitational attraction it exerts increases, and the escape velocity required to overcome this gets bigger. Things have to be thrown harder and harder to escape. Eventually, a point is reached when even light, which travels at 186 thousand miles a second, is not travelling fast enough to escape. At this point, nothing can get out as nothing can travel faster than light. This is a black hole.
Black hole formation starts when a large star has burnt all its fuel, exploding into a supernova. What remains after the supernova collapses down into a neutron star, which is extremely dense. If the neutron star is too large, its gravity overwhelms its internal pressure and the star collapses to form a black hole.
A blackbody is a theoretical construct that absorbs all radiation that strikes it. No known material absorbs all radiation - some is always reflected off of it. Such a body would therefore appear completely black to all types of radiation spectrography.
Blackbody radiation is radiation emitted from the said theoretical construct, a perfect emission of radiation with 100% efficiency. At a certain temperature, for example, the blackbody would radiate the maximum amount of energy possible for that temperature. It must emit this radiation across all possible wavelengths and frequencies and must also absorb all possible wavelengths and frequencies, which means that it emits at least some radiation at every wavelength.
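As a rough illustration (the constants and the 5800 K temperature are assumed here, not taken from the glossary), the following sketch evaluates Planck's law for the spectral radiance of a blackbody, B(λ, T) = 2hc²/λ⁵ × 1/(exp(hc/λkT) − 1), showing that the emission is spread over all wavelengths and peaks at a wavelength set by the temperature:

```python
# A minimal sketch of Planck's law for blackbody spectral radiance.
import math

H = 6.626e-34      # Planck's constant, J s
C = 2.998e8        # speed of light, m/s
K_B = 1.381e-23    # Boltzmann constant, J/K

def planck_radiance(wavelength_m: float, temperature_k: float) -> float:
    """Spectral radiance, W per m^2 per steradian per metre of wavelength."""
    exponent = H * C / (wavelength_m * K_B * temperature_k)
    return (2.0 * H * C**2) / (wavelength_m**5 * (math.exp(exponent) - 1.0))

# A 5800 K blackbody (roughly the temperature of the Sun's surface) radiates
# most strongly near 500 nm, in the visible part of the spectrum.
for nm in (300, 500, 1000, 2000):
    print(nm, "nm:", f"{planck_radiance(nm * 1e-9, 5800):.3e}")
```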
Named after the Indian physicist Satyendra Nath Bose, these are particles with integer spin, i.e. 0, 1, 2, etc. (as opposed to fermions, which possess half-integer spin). There are two categories of fundamental boson (bosons not composed of a combination of other particles): gauge bosons, which mediate the fundamental forces of nature; and scalar bosons, which are constituents of a scalar field, and include the elusive Higgs boson. Bosons can also be created from other particles whose spin totals an integer, for example, any meson.
Brane inflation uses fundamental objects of string theory, called branes. In this theory, the Universe is a three dimensional slice (a brane) in a higher dimensional space (the bulk), which may also contain other branes. These slices of spacetime have mass and can attract each other by gravity, so two almost parallel branes separated by some distance will start moving towards each other. In brane inflation, the closer the two branes get to each other, the more the branes expand, giving rise to inflation.
The process ends with the violent collision of the branes, leading to the copious production of radiation and relativistic particles. Hence, the new brane resulting from the collision is filled with a hot plasma, which is the starting point of the standard Big Bang model. There is another prediction in the model: the collision is also accompanied by the production of cosmic strings.
These are all types of large-scale structure formed from galactic distribution in the Universe. Galaxies form clusters and superclusters which arrange into sheets and filaments through the Universe. Between these sheets of galaxies, there is very low galaxy density, which leads to voids. These fill approximately 90% of space.
Bubble nucleation is a form of first-order phase transition. A phase transition occurs when temperatures and densities change such that matter changes its form and properties, as happened in the very early Universe, during the Big Bang. A simple analogy is water, which melts from ice to liquid, and then boils to gas as temperatures increase. For physicists, it is important to note that as the temperature increases, the symmetry of the matter increases. Thinking this through, we know that gas is more symmetric than water, which in turn is more symmetric than ice. It is through this phase transitioning from higher to lower temperatures that we obtain the matter particles with which we are familiar today, i.e. protons, neutrons, photons etc..
First-order phase transitioning, or bubble nucleation, occurs through the formation of bubbles of the new phase in the middle of the old phase. These bubbles then expand and collide until the old phase disappears completely and the phase transition is complete.
A manifold is a generalisation of a surface or space of N dimensions, which allows physicists to analyse that surface or space without reference to N+1 dimensions. When you look at any point on a manifold, the local area resembles traditional Euclidean space with N dimensions. Imagine you are sitting next to a piece of paper with pencil in hand. If you draw across the paper, you have drawn a line, which is itself one dimensional, but sits within the two dimensional plane in which you have drawn it (the X and Y axes). This line you have drawn can be represented by a manifold, which means that to analyse it you do not have to reference the two dimensional plane in which it is drawn. When dealing with cosmology, the kinds of manifold which are dealt with are much more complicated.
Calabi-Yau manifolds are very specific types of manifold. Within string theory, it is predicted that there are more dimensions than we currently experience, with a total of 10 dimensions predicted (M-Theory predicts a total of 11). It is hypothesised that these six extra dimensions could take the form of a Calabi-Yau manifold, which would be so small that we cannot yet observe it.
A cepheid is a type of variable star, that is, a star whose luminosity changes over time. They are formed when normal stars get very old, and swell to become red giants. These red giants eventually change and start to pulsate as they die. These are cepheids. They are very large, luminous and yellow, and they expand and contract, growing and fading in luminosity in periods typically on the order of 1 to 70 days.
The European Organization for Nuclear Research, headquartered in Geneva, was established in 1954 and is a research body dedicated to high-energy physics. It counts over 2,000 people amongst its permanent staff and at any one time has thousands of visiting scientists and scientists with whom it works in collaboration. CERN's Geneva headquarters are also the location of the Large Hadron Collider - the world's largest particle accelerator. CERN's discoveries are too numerous to list here, but some of them include the first creation of antihydrogen, the discovery of the W and Z bosons, the direct discovery of CP violation and, potentially, the discovery of the Higgs boson (currently there has been a discovery of a boson with a mass of ≈125 GeV/c^2 to a 4.9 sigma significance that may be the Higgs boson).
Similar to eternal inflation, this theory holds that what we consider to be our Universe is only part of an infinite multiverse. Within this multiverse, there are different areas of space undergoing expansion. Our Universe is one of these areas of space. In eternal inflation, these inflationary areas can decay to lower-energy phases and cease to undergo inflation. This is in contrast to chaotic inflation, which sees inflationary areas undergo positive feedback, inflating forever.
This refers to physics which does not take into account quantum physics. In different contexts, it can apply to different theories. Classical Newtonian physics, which treats light and matter as continuous waves or particles rather than quanta, is considered classical physics. Within the world of quantum mechanics, the theories of general and special relativity can also be considered classical, on the basis that they do not specifically form part of the quantum paradigm.
Unrelated to any visual representation of colour, such as we witness in rainbows and the like, colour is a type of charge associated with the Strong Nuclear force, one of the fundamental forces, in quarks and gluons. It is similar to, but not the same as the electrical charge that particles may exhibit. There are three values of colour, these are Blue, Green and Red. Colour is a degree of freedom that allows quarks to exist together to form hadrons, such as protons or neutrons, in otherwise identical quantum states. This is necessary as otherwise they would be in violation of the Pauli exclusion principle, which states that no two identical fermions may occupy identical quantum states simultaneously.
Comoving distance is the distance between two objects as it appears if the expansion of the Universe is factored out. At the present time it is equal to the proper distance, which is the actual distance between the two objects; the proper distance changes over time due to the expansion of the Universe, whereas the comoving distance does not. The comoving horizon is therefore the comoving distance to the edge of what we can, in principle, see at any given time.
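A minimal sketch of the relation between the two distances (the 100 Mpc figure is purely illustrative, not from the glossary): the proper distance is the comoving distance multiplied by the scale factor a(t), with a = 1 today by convention.

```python
# Proper distance grows with the scale factor; comoving distance stays fixed.
def proper_distance(comoving_distance_mpc: float, scale_factor: float) -> float:
    return comoving_distance_mpc * scale_factor

d_comoving = 100.0  # Mpc, unchanged as the Universe expands
for a in (0.5, 1.0, 2.0):  # past, today, future
    print(f"a = {a}: proper distance = {proper_distance(d_comoving, a)} Mpc")
```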
Condensed matter systems deal with, as the name suggests, condensed matter. This includes matter in the liquid, solid, superconducting phases. Condensed matter systems can be used to study the effects of phase transitions on matter.
Around 370,000 years after the Big Bang, the temperature of the Universe dropped sufficiently for electrons and protons to combine into hydrogen atoms: p + e = H. From this time onwards, radiation was effectively unable to interact with the background gas, so it has propagated freely ever since, while constantly losing energy as its wavelength is being stretched by the expansion of the Universe. Originally, the radiation temperature was about 3000 degrees Kelvin (i.e. approximately 2700 degrees Celsius, 5000 degrees Fahrenheit), whereas today it has fallen to only about 3K.
Observers detecting this radiation today are able to see the Universe at a very early stage. Photons in the CMB have been travelling towards us for over ten billion years, and have covered a distance of about a million, billion, billion miles. The CMB was discovered in 1964.
These are one-dimensional (that is, line-like) objects which form when an axial or cylindrical symmetry is broken. Strings can be associated with grand unified particle physics models, or they can form at the electroweak scale.
They are very thin and may stretch across the visible Universe. A typical GUT (Grand Unified Theory) string has a thickness more than a trillion times smaller than the radius of a hydrogen atom. Still, a 10 km length of one such string will weigh as much as the Earth itself!
Originally proposed by Einstein as a modification to General Relativity to produce a Universe which would neither expand nor contract. He later famously called it his greatest mistake after Hubble's redshift observations showed that other galaxies are moving away from us. Different values of the constant can be used to describe scenarios in which the Universe contracts or expands. Given that we now know that the expansion of the Universe is accelerating, physicists are now looking to the cosmological constant as a possible explanation. Specifically, the cosmological constant may be related to the dark energy that pervades our Universe, working against gravity to expand the Universe.
This states that the Universe appears the same in every direction from every point in space. It asserts that our position in the Universe - on the very largest scales - is in no sense preferred. There is considerable observational evidence for this assertion, including the measured distributions of galaxies and faint radio sources, though the best evidence comes from the near-perfect uniformity of the cosmic microwave background radiation. This means that any observer anywhere in the Universe will enjoy much the same view as we do, including the observation that galaxies are moving away from them. It should be noted that this does not mean local structures are identical everywhere; the particular stars and galaxies seen by different observers will differ. Rather, the physical laws governing these observable phenomena are the same everywhere, and the large-scale properties of the Universe are the same for all observers.
Cosmology is the study of the large scale Universe, its origins, evolution, laws and its eventual fate. Whereas astronomy is concerned with objects within the Universe, cosmologists are more concerned by the Universe as a whole.
The UK's national cosmology supercomputer. It is housed within the Department of Applied Mathematics and Theoretical Physics, here in Cambridge. Having recently undergone its ninth iteration, it is the most powerful shared memory system in Europe. It is available for use for both academic and non-academic users, and is part of the STFC's high-performance computing DiRAC facility.
This refers to the density of our Universe. Matter density in the Universe plays a critical role in understanding what will happen to the Universe in the future; specifically, whether it will continue expanding until perhaps the Universe grows so cold that life is unsustainable, or matter is literally ripped apart, or whether gravitation will eventually overcome the expansion and the Universe will collapse in some Big Crunch. A Universe with less than critical density will continue expanding forever, whereas a supercritical Universe will eventually collapse.
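As a rough illustration (the Hubble constant value is the approximate figure quoted later in this glossary; the other constants are standard values assumed here), the critical density follows from the formula ρ_c = 3H₀²/(8πG):

```python
# A minimal sketch: critical density separating eternal expansion from recollapse.
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
MPC_IN_M = 3.086e22    # one megaparsec in metres

H0 = 70e3 / MPC_IN_M                      # 70 km/s/Mpc converted to 1/s
rho_crit = 3 * H0**2 / (8 * math.pi * G)  # kg per cubic metre

print(f"critical density ~ {rho_crit:.1e} kg/m^3")
# roughly 9e-27 kg/m^3, equivalent to only a few hydrogen atoms per cubic metre.
```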
In the late 1990s, it was discovered that the expansion of the Universe is accelerating. It was expected that gravity's influence, pulling matter together, would slow down the expansion of the Universe. Therefore either whatever is responsible for this acceleration is not normal matter, or gravity must get weaker on large scales. This unknown physical phenomenon responsible for the acceleration is known as dark energy. To completely explain the acceleration, there cannot be only a little dark energy in the Universe; there must be a lot of it. In fact, it must make up about 74% of the mass-energy content of the Universe. Dark matter must make up about 22% of the Universe, with only 4% of the Universe being the matter that is currently known to us.
There is strong evidence that the Universe consists primarily of dark (non-luminous) matter, also that this matter is of an exotic, non-baryonic form. Baryons are made up of three Quarks, a type of elementary particle. Baryons include protons and neutrons.
A deep field survey is a galaxy survey which looks deeper into the sky than the average galaxy survey. Because electromagnetic waves have a speed limit (the speed of light), the further away from us that we look, the further back in time we are looking, as the waves that are currently reaching us will have been radiated billions of years ago. Hubble Ultra-Deep Field, the deepest image of the Universe that we currently have, shows us a time period corresponding to roughly 400-800 million years after the Big Bang.
A degree of freedom, in physics (as opposed to mathematics, which can have different meanings), is a parameter that can help define the state of an object to differentiate it from others. A simple example would be charge amongst particles. Some particles are charged, some are not, and as a result they behave differently. At a quantum level this becomes important because certain particles with the same values for their varying degrees of freedom (i.e. spin, charge etc.) cannot exist in the same place at the same time.
An object which generates a magnetic field emanates that field from two opposite poles; an example of this would be a bar magnet, which has a north and a south pole. An isolated pole on its own would be a magnetic monopole. The magnet itself, having two of these poles, is a dipole. This is similar to an electric field, in which the field emanates from positive and negative charges. Whereas in electricity, negative and positive charges can be easily isolated in the form of electrons and positrons, magnetic monopole particles have yet to be discovered. For example, when you break up a bar magnet, you do not isolate the two poles; you simply have two bar magnets half the size of the previous one.
These are two-dimensional objects that form when a discrete symmetry is broken at a phase transition. A network of domain walls effectively partitions the Universe into various 'cells'. Domain walls have some rather peculiar properties. For example, the gravitational field of a domain wall is repulsive rather than attractive.
When the source of a wave moves away from us, we observe a change of frequency of that wave. An example would be an ambulance or fire-truck - we hear a lower pitch in its siren once it has passed us by. This is the Doppler-shift. It is not, however, limited to sound waves, but any kind of waves, including electromagnetic.
(b. 1879 d. 1955), was a German theoretical physicist who spent much of his career at the Kaiser Wilhelm Institute for Physics and Princeton University. He is regarded as one of the greatest physicists of the 20th century, and indeed, one of the most academically brilliant minds of all time. He was awarded the Nobel Prize in Physics in 1921 for his work on the photoelectric effect, in which he described light as arriving in discrete packets, known as quanta. This was in direct conflict with previous, classical descriptions of physics, which treated light purely as a wave. His theories are now the basis of modern physics. These theories, whilst too numerous to list here, include special relativity, which describes how observers in relative motion can measure time and distance differently while the laws of physics remain the same for all of them, as well as the energy-mass equivalence relationship, E = mc^2, and general relativity, which generalises special relativity to include gravity and extends Newton's law of gravitation by describing gravity as a geometric property of spacetime. When Hitler came to power in 1933, Einstein was on a trip to America and did not return to Germany, instead opting to become an American citizen. His warning to President Roosevelt about German research into nuclear weapons led to the eventual development of the atomic bomb, a weapon he later denounced and crusaded against. Such was Einstein's genius that upon his death his brain was removed for future study.
An elementary particle carrying a negative elementary electric charge (that is, the most fundamental electric charge, particles do not carry charge smaller than this). A fermion with spin 1/2. It is a lepton and therefore is a constituent of matter, but does not participate in the Strong Nuclear force. It does interact with Electromagnetism, Gravitation and the Weak Nuclear force.
Energy unit equal to approximately 1.6 x 10^-19 Joules. It is the amount of energy gained by the charge of one electron as it moves across a one volt electric potential difference.
A period in time. In cosmology it is used to refer to different time periods in the chronology of the Universe. These include the Planck epoch; the Grand Unification epoch; the Electroweak epoch; the Quark epoch; the Hadron epoch; the Lepton epoch; and the Photon epoch (all of the epochs prior to the Photon epoch occurred within the first 10 seconds of time!). Time periods after this include Nucleosynthesis, Recombination and Reionization.
This is the speed required for any object to break free of another object's gravitational field. For the Earth, this is approximately 7 miles (about 11 kilometres) per second. Mathematically, it is described as the velocity at which the escaping object's kinetic energy and gravitational potential energy sum to zero. As the gravitational force exerted by an object on another object increases as the distance between the two decreases, the further away the escaping object is, the lower the escape velocity. For black holes, at the distance known as the event horizon, the escape velocity reaches the speed of light, and therefore nothing can escape from within it.
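As a rough check of the Earth figure (the mass and radius values are standard assumed values, not from the glossary), the escape velocity follows from v = sqrt(2GM/r):

```python
# A minimal sketch: escape velocity at the Earth's surface.
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24     # mass of the Earth, kg
R_EARTH = 6.371e6      # radius of the Earth, m

def escape_velocity(mass_kg: float, radius_m: float) -> float:
    return math.sqrt(2 * G * mass_kg / radius_m)

v = escape_velocity(M_EARTH, R_EARTH)
print(f"{v/1000:.1f} km/s, or about {v/1609.34:.1f} miles per second")  # ~11.2 km/s, ~7 mi/s
```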
Eternal inflation refers to a series of models in which at least one region of the Universe is undergoing inflation at any one point in time. Due to the exponential increase in volume during these periods of inflation, it is theorised that at any given point the majority of the volume of the Universe is still expanding. This creates a multiverse, whereby each expanding area of the Universe appears to be its own Universe, with its beginning period of expansion equivalent to the Big Bang. In eternal inflation it is possible for these expanding areas of space to decay into a lower energy phase, resulting in inflation ceasing.
Named after Euclid, a Greek mathematician of the third century BC. It is a system of geometry based around the geometry of the three dimensions that we are all taught at school; x, y and z. Points within the system can be described by a set of Cartesian coordinates. It is described by a system of postulates, or premises, for example, the parallel postulate, which states that "if a straight line falling on two straight lines makes the interior angles on the same side less than two right angles, the two straight lines, if produced indefinitely, meet on that side on which are the angles less than the two right angles".
In contrast to this is non-Euclidean geometry, which deals with curved space.
The event horizon is the boundary that marks the point where the escape velocity of a black hole reaches the speed of light. Once the event horizon has been crossed, nothing can escape from the black hole's gravitational pull, not even light.
Exotic particles are those made up of theorised particles not currently part of the standard model. An example of this would be the heavier partners of the current set of particles that make up the standard model, that are described within the theory of supersymmetry.
Full title: the Fermi National Accelerator Laboratory. Located near Chicago, IL., it is a United States Department of Energy laboratory focussed on high-energy physics. Until 2011, it housed the Tevatron particle accelerator, which was, until the opening of the Large Hadron Collider at CERN, the largest in the world. In 1995, work done at the Tevatron led to the discovery of the Top Quark, one of the six different flavours of quark, and the most massive of them all.
These are particles with half-integer spin. This is opposed to bosons, which have full integer spin. Only one fermion can occupy the same quantum state and space at any given time; this is known as the Pauli exclusion principle, and it does not apply to the other class of particles, bosons. Elementary fermions (those not composed of other particles) are constituents of visible matter in the Universe, and include electrons and quarks. Particles composed of fundamental fermions, however, can have full integer spin and therefore can be classed as bosons.
A ferromagnet is an object which exhibits the property of ferromagnetism. Ferromagnetism is the strongest type of magnetism, and as such ferromagnets are the magnets that the average reader will be familiar with. They are the ones used in physics classes at school, they are the ones used to pick up scrap metal, they are the magnets on your fridge. Ferromagnetism is the only type of magnetism that has the strength to produce a force that can be felt. A ferromagnet can be defined as a material that can exhibit a net magnetic moment in the absence of an external magnetic field.
(b. 1918-d.1988), was a physicist who spent most of his life working at the California Institute of Technology (Caltech). Also worked on the Manhattan Project at Los Alamos National Laboratory, where he helped develop the atomic bomb. Won the Nobel Prize in Physics in 1965 for his work in Quantum Electrodynamics (QED). Developed the path integral formulation that we use today, and developed an illustrative representation scheme for the behaviour of subatomic particles which has become known as Feynman diagrams. Caltech has a named Chair of Physics in his honour.
Outside of his life in physics, he was also a member of the panel that investigated the Space Shuttle Challenger disaster, and wrote two popular science books: "Surely you're joking, Mr. Feynman!" and "What do you care what other people think?".
There are four fundamental forces in nature. They are Electromagnetism, the Weak Nuclear Force, the Strong Nuclear Force and Gravitation.
The Weak Nuclear Force is associated with radioactivity in unstable nuclei, specifically the decay of a neutron into a proton in the form of Beta radiation. The gauge bosons that mediate the force are the W and Z bosons. This interaction can cause quarks to change flavours.
The Strong Nuclear Force binds together quarks to form nucleons; in turn, it also acts to bind these nucleons together, forming atomic nuclei. The force is mediated by an exchange of gluons, which are a type of gauge boson. The charge associated with this force, analogous to the electric charge associated with electromagnetism, is the Colour charge, of which there are three varieties; Red, Green and Blue. The mathematical theory describing the elementary particles interacting with this force, Quarks and Gluons, is known as Quantum Chromodynamics (QCD). At atomic levels, it is by far the strongest of all forces, but only interacts on a scale on the order of 10^-15 m, and therefore, whilst incredibly important for the formation of matter, does not play any observable role in day to day life.
Electromagnetism is a force associated with the electric charge carried by certain particles. Along with gravitation, it is one of the two fundamental forces that have a major noticeable effect on day to day human life. It manifests as two different fields, electric fields and magnetic fields, although they are aspects of the same force and therefore interact with each other through electromagnetic induction. The gauge boson that mediates this force is the photon, which is also the quantum (discrete packet) of light and other forms of electromagnetic radiation, such as infra-red radiation (most thermal radiation), X-rays, Ultraviolet radiation etc..
Gravitation is a force of attraction between two massive bodies. Objects on Earth are attracted to the Earth via gravitation, which is why, when an apple falls from a tree, it falls down towards the Earth instead of in any other direction. Gravitation also gives weight to objects, weight being the mass of an object multiplied by the gravitational acceleration acting upon it due to another object. Gravitation on a Universal scale is described by Einstein's theory of General Relativity, where it is described as a result of curved spacetime. Classically, it has been described by Newton's law of gravitation, which is an accurate approximation up to a certain level of detail. Gravitation is mediated by the still-hypothetical gauge boson, the Graviton. On a quantum level, there is no sufficient theory that can explain the force, although string and M-Theory are potential candidates. Explaining gravity on a quantum level is one of the major challenges in present-day physics.
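As a rough illustration of the Newtonian description (the Earth values and the apple mass are assumed here, not from the glossary), Newton's law F = GmM/r² recovers the familiar surface gravity of the Earth, and hence the weight of an object:

```python
# A minimal sketch: surface gravity and weight from Newton's law of gravitation.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24     # mass of the Earth, kg
R_EARTH = 6.371e6      # radius of the Earth, m

g = G * M_EARTH / R_EARTH**2   # gravitational acceleration at the surface, ~9.8 m/s^2
apple_mass = 0.1               # kg
weight = apple_mass * g        # weight = mass x gravitational acceleration

print(f"g = {g:.2f} m/s^2, weight of a 100 g apple = {weight:.2f} N")
```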
This refers to so called “theories of everything”, which try to link the four known fundamental forces; electromagnetism, the weak nuclear force, the strong nuclear force, and gravitation. Fundamental theory can be thought of as beyond the standard model. This label encompasses theories such as superstring theory and M-Theory.
A galaxy is a group of gravitationally bound stars, solar systems, stellar remnants (such as neutron stars), interstellar gas and dust, and mysterious dark matter. There are many different categories of galaxy, including spirals, ellipticals, irregulars and lenticulars. Our own galaxy, the Milky Way, consists of around 200 billion stars, although galaxies can have as few as ten million and as many as one hundred trillion.
Galaxies form large-scale structures within the Universe due to their gravitational attraction, these include groups, clusters, filaments, voids, bubbles and superclusters.
These are groups of galaxies attracted together by gravity. Typically, they contain somewhere in the range of 10^12 to 10^15 solar masses (i.e. a million million to a million billion suns). Regular clusters normally have a concentrated central core and well-defined spherical structure; irregular clusters have no defined centre. Our Local Group consists of approximately 54 galaxies, including the nearby spiral galaxy Andromeda, the largest member of the group.
This is a mapping/imaging of a section of the sky to measure redshifts of objects within that section. Comparing redshifts between different electromagnetic sources allows us to build up a three dimensional map of the sky, which allows us to gain insight into the large-scale structures within the Universe.
A gauge group is a set of gauge transformations which affect a system in similar ways. A gauge transformation is a transformation that acts on redundant degrees of freedom within a system, that is, it affects a property that does not really have any physical significance at the level at which the system operates.
A gauge transformation which is globally symmetric affects all points of space in the same way. An example of this would be a transformation of voltage that states that Voltage1 = Voltage2 + C (a constant). If we substitute the left hand side of the equation with the right in classical equations dealing with electromagnetism, there is no difference in the outcome, and therefore this will hold across any difference in voltage.
If we impose a local symmetry on the gauge transformation, also known as gauge invariance, then these transformations become very significant. This is because the transformation holds true, but the transformation is now a function of the position in space and time.
Through introducing these conditions of gauge invariance into quantum equations, one can extrapolate that for particles that interact with fundamental forces, such as the electron, which carries electrical charge and is acted upon by electromagnetism, there is an underlying field which is also undergoing a gauge transformation. In the case of the electron, it is the electromagnetic field, which physicists were already aware of, however, gauge invariance has postulated the gluon field which is the basis for quantum chromodynamics, the theory which explains the strong nuclear force.
This is the modern geometric description of gravity. It says that the gravitational force is related to the curvature of spacetime itself, i.e. to its geometry. To this end, it generalises Einstein's theory of special relativity, and links it to Newton's laws of gravity. Unlike for non-gravitational physics, spacetime is not just an arena in which physical processes take place, but it is a dynamical field. The gravitational field at a fixed time can be described by the geometry of the three spatial dimensions at that time.
These are gauge bosons which mediate the Strong Nuclear force, one of the fundamental forces. Like the photons which mediate the Electromagnetic force, gluons have no rest mass and so travel at the speed of light. Unlike photons, however, which are themselves electrically neutral, gluons carry the charge associated with the Strong Nuclear force, known as Colour. There are 8 different colour states of gluon. Gluons are confined within hadrons, particles made up of quarks (which have a colour charge), and are limited in interaction to a distance of approximately 10^-15 metres.
See Grand Unified Theory.
In the aftermath of the Big Bang, the Universe was extremely hot and extremely dense. At these energies, the laws of nature that we know were changed. The fundamental forces that we see in nature were unified - the Universe was in a state of Grand Unification - and it is only as the Universe expanded and cooled that Gravitation, Electromagnetism and the Strong and Weak nuclear forces all ceased to be as one. Electroweak theory describes the unification of the Weak Nuclear force and Electromagnetism. A Grand Unified Theory will marry up Electroweak theory with the Strong Nuclear force, bringing us closer to a unification of the four fundamental forces.
Gravitational waves are propagating disturbances in spacetime. The effect of a passing gravitational wave is to periodically stretch and compress space in the two directions perpendicular to the direction of propagation. The expected strain on the Earth due to these disturbances, which can be caused by black holes merging, is very small, making detection extremely difficult.
This is an as yet undiscovered particle that is believed to mediate the force of gravitation. Much like the photon, which mediates the electromagnetic force, and the gluon, which mediates the Strong Nuclear force, it has no mass and therefore travels at the speed of light. It has a spin quantum number of 2, and is the only massless particle with that spin number. It has zero electrical charge. Experimentally, the graviton is incredibly difficult to observe, and is beyond the reach of current physics. The detection of gravitational waves may lead to some further information about gravitons, but these have not yet been detected. Theories of quantum gravity are one of the largest standing issues in cosmology, and there are currently few mathematically consistent theories that can explain it. One of these theories is M-Theory, which we believe to be the best explanation at this point in time.
This is a type of blackbody radiation emitted by black holes. This radiation is a form of energy and, because of the energy-mass equivalence relationship E = mc^2, the loss of energy through this radiation also leads to a loss of mass for the black hole. This not only means that black holes are not truly black (in the sense that they do have emissions), but it also means that should they not take in more mass than they radiate, they will eventually radiate away into nothing. This process is known as black hole evaporation.
The Heisenberg uncertainty principle states that the more precisely determined one of two properties, position and momentum, of a particle is, the less precisely determined is the other. This results from the wave nature of matter and is independent of who is observing the momentum or position, and so this principle cannot be overridden by improvements in technology. This is in contrast to the similar observer effect, which states that in affecting the observation of a particle, say by bombarding an electron with gamma-rays, you are changing its momentum, and therefore the observer cannot obtain accurate knowledge of both at any one time.
The uncertainty principle is also in sharp contrast to classical wave mechanics, which says that precise simultaneous values can be assigned to different physical quantities.
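A minimal worked example (the confinement scale and constants are assumed here for illustration, not taken from the text): the relation Δx·Δp ≥ ħ/2 means that confining an electron to atomic size forces a large spread in its momentum, and hence in its velocity.

```python
# A minimal sketch of the position-momentum uncertainty relation.
HBAR = 1.055e-34        # reduced Planck constant, J s
M_ELECTRON = 9.109e-31  # electron mass, kg

delta_x = 1e-10                      # position uncertainty ~ the size of an atom, m
delta_p_min = HBAR / (2 * delta_x)   # smallest momentum uncertainty allowed
delta_v_min = delta_p_min / M_ELECTRON

print(f"delta_p >= {delta_p_min:.2e} kg m/s  (delta_v >= {delta_v_min:.2e} m/s)")
```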
This is a principle which operates within certain string theories and theories of quantum gravity. It was proposed in 1993 by G. 't Hooft. It consists of two basic assertions:
Assertion 1 The first assertion of the Holographic Principle is that all of the information contained in some region of space can be represented as a 'Hologram' - a theory which 'lives' on the boundary of that region. For example, if the region of space in question is a room, then the holographic principle asserts that all of the physics which takes place in the room can be represented by a theory which is defined on the walls of the room.
Assertion 2 The second assertion of the Holographic Principle is that the theory on the boundary of the region of space in question should contain at most one degree of freedom per Planck area.
Within M-theory, the holographic principle suggests we are the shadows on the wall. The 'room' is some larger, five-dimensional spacetime and our four-dimensional world is just the boundary of this larger space. If we try to move away from the wall, we are moving into an extra dimension of space - a fifth dimension.
(b.1889-d.1953) was one of the main figures of astronomy in the 20th century. Using the Hooker 100 inch telescope at Mount Wilson Observatory in California, he discovered that galaxies are receding away from us and from each other via the changes in frequency that they exhibit - the shifting of the frequency of their electromagnetic emissions towards the red end of the spectrum. This realisation was crucial as evidence for an expanding Universe, which, if reversed, supports the notion of a Big Bang at the beginning of the Universe. Famously not awarded the Nobel Prize on the basis that at the time, research in astronomy was not eligible for the Nobel Prize in Physics.
Hubble's law states that all objects in deep space (i.e. galaxies) are receding away from us and from each other (as can be seen from the fact that they are Doppler-shifted), and that the velocity of this recession is proportional to their distance from the Earth and other astral bodies. It is summarised mathematically by the equation v = H0 x D, where v is the recession velocity, H0 is the Hubble constant and D is the distance of the body from us. H0 has an approximate value of 70 km s^-1 Mpc^-1 (kilometres per second, per megaparsec), but there is disagreement over its precise value.
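A minimal sketch of the law in use (the distances are chosen purely for illustration; the H0 value is the approximate figure quoted above):

```python
# Recession velocity from Hubble's law, v = H0 * D.
H0 = 70.0  # km/s per Mpc

for distance_mpc in (1, 10, 100, 1000):
    velocity_km_s = H0 * distance_mpc
    print(f"D = {distance_mpc:>4} Mpc  ->  recession velocity ~ {velocity_km_s:,.0f} km/s")
```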
According to the theory of inflation, the early Universe expanded exponentially fast for a fraction of a second after the Big Bang. A simple model for the expansion of the Universe is to consider the inflation of a balloon. A person at any point on the balloon might consider himself or herself to be at the centre of the expansion, as all neighbouring points are getting further away.
During inflation the Universe expanded by a factor of about e^60, or roughly 10^26. This number is a one followed by 26 zeros. It transcends normal political/economic discussions of inflation.
This is a hypothetical particle and scalar field associated with the inflation of the Universe that occurred moments after the Big Bang. It is theorized that this occurred because of a phase transition which allowed the inflaton field to release potential energy as matter and radiation as it moved to a lower energy state. This energy acted as a repulsive force inflating the Universe.
We calculate the probability that a system in state A will end up in state B by using path integrals. This mathematical framework uses contributions from all the probabilities of the paths that the system could take in order to end up in state B to give an answer.
So, when we are using path integrals to try and calculate what the very early Universe looks like, we look at the geometry, which, as we know from general relativity is related to gravity and therefore the distribution of matter, and we try to work back towards times when the Universe was at a quantum level. For mathematical reasons, we work not in three dimensions of space and one of time, but four dimensions of spacetime, or four dimensions of geometry.
Path integrals work well on large scale systems, but less well on a quantum level - an approximation has to be used. This is known as the semiclassical approximation, because its validity lies somewhere between that of classical and quantum physics.
In the semiclassical approximation one argues that most of the four dimensional geometries occurring in the path integral will give very small contributions to the path integral and hence these can be neglected. The path integral can be calculated by just considering a few geometries that give a particularly large contribution. These are known as instantons. Instantons don't exist for all choices of boundary three geometry; however those three geometries that do admit the existence of instantons are more probable than those that don't. Therefore attention is usually restricted to three geometries close to these.
Remember that the path integral is a sum over geometries with four spatial dimensions. Therefore an instanton has four spatial dimensions and a boundary that matches the three geometry, or the geometry of the Universe at a given time, whose probability we wish to compute. Typical instantons resemble (four dimensional) surfaces of spheres with the three geometry slicing the sphere in half. They can be used to calculate the quantum process of Universe creation, which cannot be described using classical general relativity. They only usually exist for small three geometries, corresponding to the creation of a small Universe. Note that the concept of time does not arise in this process. Universe creation is not something that takes place inside some bigger spacetime arena - the instanton describes the spontaneous appearance of a Universe from literally nothing. Once the Universe exists, quantum cosmology can be approximated by general relativity so time appears.
These are properties exhibited by cosmic strings. Intercommuting refers to a process whereby strings exchange ends whenever they meet. A loop is produced whenever a string intercommutes with itself. Although cosmic strings have not been detected, this process of intercommuting can be seen in certain liquid crystals.
An interferometer is a machine that uses a process of wave interference to learn about the waves in question. That is, the waves are superimposed upon themselves to discover their properties.
Kaluza-Klein Theory is a theory that seeks to unify two of the four fundamental interactions; gravitation and electromagnetism. A similar theory, Electroweak Theory, already unifies the weak nuclear force and electromagnetism. Its proposals extend general relativity into five-dimensional spacetime.
The SI (or base) unit for temperature measurement. Kelvin and Celsius have the same magnitude of scale, so a temperature in Celsius can be converted to Kelvin by adding 273.15 to the number (and Kelvin to Celsius by subtracting 273.15). Whereas the Celsius scale was created by dividing the difference in temperature between water freezing and boiling by one hundred and labelling the freezing point of water as 0, 0 Kelvin is the point described by Lord Kelvin (after whom the unit is named) as "infinite cold", or absolute zero.
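A minimal sketch of the conversion (the 3000 K and 2.7 K figures echo the CMB entry earlier in this glossary):

```python
# Kelvin/Celsius conversion by the fixed offset of 273.15.
def kelvin_to_celsius(t_k: float) -> float:
    return t_k - 273.15

def celsius_to_kelvin(t_c: float) -> float:
    return t_c + 273.15

print(kelvin_to_celsius(3000))   # ~2726.85 C, the temperature at recombination
print(kelvin_to_celsius(2.7))    # ~-270.45 C, the CMB temperature today
print(celsius_to_kelvin(0))      # 273.15 K, the freezing point of water
```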
This is the mechanism by which cosmic topological defects form during a phase transition.
Causal effects in the early Universe can only propagate at the speed of light. This means that at a time t, regions of the Universe separated by more than a distance d=ct can know nothing about each other. In a symmetry breaking phase transition, different regions of the Universe will choose to fall into different minima in the set of possible states. Topological defects are precisely the 'boundaries' between these regions with different choices of minima, and their formation is therefore an inevitable consequence of the fact that different regions cannot agree on their choices.
These are laws which define the fundamental physical properties which characterize thermodynamic systems. These are temperature, energy and entropy (a property whose increase drives systems towards equilibrium). They are:
The zeroth law: If two systems are in thermal equilibrium with a third, they must be in thermal equilibrium with each other also.
The first law: Heat and work are forms of energy transfer. This is the law of the conservation of energy. Internal energy in a closed system may change if heat or work are transferred in or out of the system.
The second law: The entropy of any isolated system not in thermal equilibrium almost always increases. That is, an isolated system will work towards thermal equilibrium.
The third law: The entropy of a system approaches a constant value as the temperature approaches zero.
This is not, despite the name, a measure of time, but rather a measure of length. It is the length that light will travel in a vacuum in a year, that is 365.25 days. Its exact value is 9,460,730,472,580,800 metres, but is approximately given by 9.4607 x 10^15 m. This is calculated by multiplying the number of days (365.25) by the number of seconds in each day (86,400) and then multiplying that by the speed of light in a vacuum, which is 299,792,458 metres per second.
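A minimal check of the arithmetic described above:

```python
# days x seconds-per-day x speed of light gives the length of a light year in metres.
DAYS = 365.25
SECONDS_PER_DAY = 86_400
SPEED_OF_LIGHT = 299_792_458  # m/s

light_year_m = DAYS * SECONDS_PER_DAY * SPEED_OF_LIGHT
print(f"{light_year_m:,.0f} m")   # 9,460,730,472,580,800 m, i.e. ~9.4607 x 10^15 m
```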
In a mathematical function, the highest and lowest values of that function, over the domain of said function, are defined as the maximum and minimum points respectively. A local maximum or minimum value is defined by taking the highest or lowest value of the function over only part of the domain. An example of a function with several local maxima and minima would be a graph of sin(x), which attains its maximum value of 1 and minimum value of -1 at infinitely many points, each of which is a local maximum or minimum.
An object's (in our context, an astronomical object) brightness as measured by the flux, or intensity of electromagnetic radiation, that the object gives out.
During the radiation era, shortly after the Big Bang, the Universe consisted of free moving protons, neutrons and electrons and other particles, including helium ions. All radiation was absorbed by these free electrons, making the Universe opaque. When the Universe had expanded sufficiently, the radiation could no longer interact with the electrons, causing the Universe to become transparent. This process is called decoupling, and it marked the beginning of the matter era. Electrons, now no longer absorbing radiation, instead joined with ions to form neutral atoms. Through gravity, these atoms clumped together, eventually forming stars, galaxies and other stellar bodies.
These are zero-dimensional (point-like) objects which form when a spherical symmetry is broken. Monopoles are predicted to be supermassive and carry magnetic charge. The existence of monopoles is an inevitable prediction of grand unified theories (GUTs); this is one of the puzzles of the standard cosmology.
We have five consistent String Theories that can describe both the forces and the matter in our Universe. We do not, however, have the tools to explore these theories over all possible values of their parameters. Over the past few years, however, we have been able to explore these theories more thoroughly, and we now believe that these five string theories are all different aspects of the same underlying theory: M-Theory. M-Theory goes beyond String Theory, in that it predicts not ten, but eleven dimensions of spacetime. The theory could have a membrane, rather than a string, as its fundamental object, which would look like a string when curled up in the eleventh dimension. It is for this reason that the M in M-Theory originally referred to a Membrane. Nowadays, however, the M doesn't specifically refer to anything, and can stand for Mystery, or "Mother of all", because M-Theory is still largely unknown.
Vast clouds of interstellar dust, hydrogen, helium and ionized gas. As the mass of a nebula grows due to the slight gravitational attraction of dust particles towards each other, the mass compacts enough to form stars. Other material within the nebula, such as dust, can clump together to form planets and other planetary objects. Originally, any large astronomical object was referred to as a nebula - other galaxies, in particular.
A liquid crystal is a phase of matter which exhibits properties somewhere between those exhibited by a liquid and a solid crystal. When viewed at high resolution, they can appear to be textured, as the molecules may be free to flow around in a limited manner, provided that they stay within a crystal-like structure. Liquid crystals are used extensively in televisions and computer screens.
The nematic phase of a liquid crystal is temperature dependent. When in this phase, calamitic (rod-like) molecules align themselves individually, roughly parallel to each other along their long axis, in a similar way to cigarettes in a packet. The result of this is that the molecules are free-flowing within this directional order. In this phase, the crystals can show signs of intercommuting and loop production, which are properties expected to be exhibited by cosmic strings.
A neutron star is formed from the collapse of a larger star which has undergone supernova. These stars, as the name suggests, are composed mostly of neutrons. Neutron stars are extremely hot. They typically have masses between about 1 and 2 solar masses (1 solar mass is approximately 2 x 10^30 kg, which is about 333,000 times the mass of the Earth), despite having radii on the order of 10^5 times smaller than the Sun's, which makes them extremely dense. The more compact a neutron star is, the more likely it is to form a black hole. This occurs when the star's density becomes so great that the gravitational force it exerts on itself is greater than its internal pressure, causing a collapse into a black hole.
This was developed in 1983 by Stephen Hawking and James Hartle. Describes a situation whereby the Universe can spontaneously come into existence from literally nothing. Once the Universe exists, quantum cosmology can be approximated by general relativity so that time appears.
A particle is a nucleon if it is a particle that forms an atomic nucleus. There are two nucleons: protons and neutrons.
These are complicated manifolds which, like Calabi-Yau Manifolds, may be the space in which the six extra dimensions proposed by certain string theories are found.
The study of the Universe up to around 10^-11 seconds after the Big Bang. During this time, the electroweak and strong forces were unified in a grand unified phase, which quickly changed to separate the strong and electroweak forces. Further on in time the electroweak interaction separated to become electromagnetism and the weak nuclear force. It is possible to reach the relevant temperature regimes in particle accelerators, allowing us to experimentally test theories. Speculation, however, is still required within this time period.
A mathematical approach to non-gravitational quantum theory, introduced by Richard Feynman of Caltech. In the path integral approach, the probability that a system in an initial state A will evolve to a final state B is given by adding up a contribution from every possible history of the system that starts in A and ends in B. For this reason a path integral is often referred to as a 'sum over histories'. For large systems, contributions from similar histories cancel each other in the sum and only one history is important. This history is the history that classical physics would predict. For example, consider a system whose starting state is a ball at the top of a non-symmetrical hill. The probability that the ball ends up at the bottom of the steeper side is given by summing the contributions of all paths the ball could take, including paths down the other side of the hill.
For mathematical reasons, path integrals are formulated in a background with four spatial dimensions rather than three spatial dimensions and one time dimension. There is a procedure known as 'analytic continuation' which is used to convert results expressed in terms of four spatial dimensions into results expressed in terms of three spatial dimensions and one time dimension. This effectively converts one of the spatial dimensions into the time dimension. This spatial dimension is sometimes referred to as 'imaginary' time because it involves the use of so-called imaginary numbers.
The path integral formulation of quantum gravity has many mathematical problems. It is also not clear how it relates to more modern attempts at constructing a theory of quantum gravity such as string/M-theory. However it can be used to correctly calculate quantities that can be calculated independently in other ways e.g. black hole temperatures and entropies.
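A toy illustration of the 'sum over histories' idea described above (this is not the full quantum formalism, and all values are assumed for illustration): for a free particle travelling between two fixed points via one intermediate point, the discretised action S = Σ (m/2)(Δx/Δt)²Δt is smallest for the straight-line history, which is the path classical physics predicts to dominate.

```python
# A toy sketch: among many candidate histories, the classical (straight-line)
# path has the least action.
M = 1.0    # particle mass (arbitrary units)
DT = 0.5   # two time steps of 0.5 each, from t=0 to t=1

def action(x_mid: float) -> float:
    """Action of the two-segment path 0 -> x_mid -> 1."""
    s = 0.0
    for x_start, x_end in ((0.0, x_mid), (x_mid, 1.0)):
        velocity = (x_end - x_start) / DT
        s += 0.5 * M * velocity**2 * DT
    return s

candidates = [i / 10 for i in range(-5, 16)]   # intermediate points from -0.5 to 1.5
best = min(candidates, key=action)
print(f"intermediate point with least action: {best}")   # 0.5, i.e. the straight line
```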
A phase transition is the change in properties and form of matter due to temperature changes. For example, water changes from solid ice to liquid water to gaseous steam or vapour. As temperature drops and phase transitions occur, the symmetry of the resulting matter is reduced - again, vapour is more symmetric than water, which is more symmetric than ice. In terms of cosmology, when a phase transition in the early Universe occurs, topological defects are formed. Some of the symmetries that were broken in the early Universe led to the four fundamental forces becoming discrete forces. At higher temperatures, they reunite in a unified state.
The photon is an elementary particle. It is a gauge boson, in that it mediates one of the fundamental forces; in the case of the photon, it is the electromagnetic force. As mediators of the electromagnetic force, they allow us to see things through the visible light part of the electromagnetic spectrum, and are therefore often interchanged with "light". As they have no rest mass, they are able to travel at the fastest possible speed, known as the "speed of light" (299,792,458 metres per second in a perfect vacuum). Their spin is 1 and they carry no electrical charge.
This is simply the Planck length squared. Given that the Planck length is a fundamental unit of length, so too is the Planck area a fundamental unit of area.
This is the constant that sets the size of energy quanta (discrete packets of energy) in quantum mechanics. It is the proportionality constant between the energy of a photon and the frequency of the associated electromagnetic wave, as denoted in the Planck-Einstein equation which links the two: E = hν, where ν is frequency, h is Planck's constant and E is energy. Its value is approximately 6.626 x 10^-34 J s.
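A minimal sketch using the Planck-Einstein relation described above (the 500 nm wavelength is just an illustrative choice for green light; the constants are standard assumed values):

```python
# Photon energy from its frequency, E = h * nu, with nu = c / wavelength.
H = 6.626e-34              # Planck's constant, J s
C = 2.998e8                # speed of light, m/s
EV_IN_JOULES = 1.602e-19   # one electron volt in Joules (see the electron volt entry)

wavelength = 500e-9            # m
frequency = C / wavelength     # Hz
energy_joules = H * frequency  # E = h * nu
print(f"{energy_joules:.2e} J  =  {energy_joules / EV_IN_JOULES:.2f} eV")  # ~2.5 eV
```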
This is the earliest period of time, from the beginning of time to 10^-43 seconds after the beginning of time. During this period, the fundamental forces of nature were all unified due to the unimaginable temperature of the Universe, and it is believed that gravity was as strong as the other forces (it is now by far the weakest of the forces).
A very, very small unit of length. Its value is approximately 1.6162 x 10^-35 m. It is a base unit within the Planck unit system and it is calculated using the speed of light, c, Planck's constant, h, and the gravitational constant, G. Specifically, it is given by the square root of ħG/c^3, where ħ is the reduced Planck's constant, or Planck's constant divided by 2π. It is the shortest measurable length in existence. To discuss length on a scale shorter than this would be meaningless because it is a physical impossibility to measure below this length. A theory that could describe physical laws at this level would be of great use in the search for a theory of everything.
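A minimal check of the formula quoted above (the constants are standard assumed values): the Planck length is built only from ħ, G and c.

```python
# Planck length = sqrt(hbar * G / c^3).
import math

HBAR = 1.0546e-34   # reduced Planck constant, J s
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8         # speed of light, m/s

planck_length = math.sqrt(HBAR * G / C**3)
print(f"{planck_length:.3e} m")   # ~1.616e-35 m
```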
This is the energy that exists in a body due to its position within a system. Forces act upon the body to restore it to a lower energy state or configuration; the difference between these energy states is the potential energy. When the force acts upon the body, the energy held within the body is converted into some other form of energy. This happens because the law of conservation of energy states that energy cannot be created or destroyed.
An example of potential energy being converted into other energy would be in someone skydiving. The position of the person (the body in the system) in the system (the Earth), i.e. being high up in the air in a plane, gives the person gravitational potential energy. Once they leap from the plane, this gravitational potential is turned into kinetic energy as the person falls toward Earth. Once they have landed, their position, at the surface of the Earth, means that they have lower amounts of gravitational potential energy, and they have been restored to a lower energy state.
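As a rough worked example (the mass and altitude are purely illustrative): near the Earth's surface the gravitational potential energy is approximately U = mgh, so a 70 kg skydiver leaving a plane at 4,000 m starts with roughly

U ≈ 70 kg × 9.8 m/s² × 4,000 m ≈ 2.7×10⁶ J,

all of which is progressively converted into kinetic energy (and, through air resistance, heat) during the fall.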
This is the theory that explains the Strong Nuclear force that is mediated by gluons between different quarks. The charge of this force is known as colour. The force, which occurs due to an exchange of these gluons, does not weaken over distance, as say gravity does, but rather remains constant, on the order of several thousand Newtons. This means that at no point does any quark separate from another one, and so quarks can only be observed on a hadron level. This property is called confinement. Another property within QCD is asymptotic freedom. This results in a very weak interaction between quarks and gluons during extremely high energy reactions.
This is the study of cosmology at temperature regimes where all four fundamental forces were unified. This unification, it is theorised, occurred from the Big Bang to some 10⁻⁴³ seconds after the Big Bang. Due to the temperatures involved, all quantum cosmology is theoretical and highly speculative.
Quantum field theory is a framework that allows for the extension of quantum mechanics, which deals with individual particles, to field systems operating relativistically. Quantum field theories have been used to describe how three of the four fundamental forces act, each being mediated by an exchange of particles called bosons. The photon and the gluon, for example, are exchanged between electrons and quarks in the case of electromagnetism and the strong nuclear force respectively.
In quantum field theory, these natural fields pervade an area of space. The particles that mediate these fields - the gauge bosons associated with the field (like the aforementioned photon with electromagnetism) - are quanta of these fields, that is, ripples in the field carrying small amounts of energy. Other particles that act within the field, for example the electron within the electromagnetic field, are thought of in a similar manner, albeit as different ripples and excitations. These fields are of variable range. The colour field within quantum chromodynamics, for example, acts over the range between quarks within a nucleon. Other fields, such as the electromagnetic field, are infinite in scope and range.
The search for a theory of quantum gravity is the search for a theory that can explain the effects of the fundamental force of gravity, as described by general relativity, at the quantum level, and marry this up with quantum mechanics, the framework that explains the other fundamental forces: the strong nuclear, weak nuclear and electromagnetic forces. Examples of candidate theories of quantum gravity include string theory, loop quantum gravity and M-theory.
This phase transition occurred approximately one millionth of a second after the Big Bang. This was when the quark-gluon plasma underwent a phase transition, resulting in quarks binding together into hadronic matter, i.e. nucleons.
Quintessence is a theory of dark energy, proposed to explain the acceleration of the Universe's expansion. It is a dynamical field, resulting in an attractive or repulsive force depending on the ratio of its kinetic energy to its potential energy. As a repulsive force, it overcomes gravity's attraction over large scales, resulting in an accelerated expansion. Quintessence is hypothesised to have become repulsive approximately 10 billion years ago.
This refers to the period of time from just after the Big Bang to approximately 300,000 years after its beginning. During this time, the Universe consisted of free-moving protons, neutrons, electrons and other particles. All radiation was scattered by these free electrons, making the Universe opaque. Protons and neutrons were combining to form deuterium, a heavy isotope of hydrogen, and then helium; however, the temperature of the Universe was so high that these existed as free ions in the plasma that filled the Universe. Only when the Universe had expanded sufficiently did the electrons stop scattering the radiation and instead join with the ions to form neutral atoms. This marks the beginning of the matter era, in which we still live.
Recombination was a period, approximately 300,000 years after the Big Bang, when electrons and protons bound together to form atoms of hydrogen. Before then, the Universe was still too hot for atoms of hydrogen to form. Only after the Universe had expanded, and therefore cooled, sufficiently did the formation of hydrogen become possible.
When the source of a wave moves away from us, we observe a change of frequency of that wave. An example would be an ambulance or fire-truck - we hear a lower pitch in its siren once it has passed us by. This is the Doppler Effect. It is not, however, limited to sound waves, but any kind of waves, including electromagnetic.
This means that as an electromagnetic wave source moves away from us, the frequency of the wave decreases. As frequency and wavelength are inversely related - when one goes up the other goes down - the wavelength increases. This shifts the wavelength towards the red end of the spectrum (strictly this is a statement about the visible part of the electromagnetic spectrum; the shifted wavelength may well lie outside the visible range, but the term is used regardless).
This is redshift, and it is something we detect from far away galaxies and other electromagnetic sources. This leads us to the conclusion that the Universe is expanding.
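For reference, redshift is usually quantified by the dimensionless number z, defined as

z = (λ_observed − λ_emitted) / λ_emitted,

so a positive z means the received wavelength is longer than the emitted one; for nearby sources, cz gives an approximate recession velocity.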
These associate a scalar value (a single number, or a physical quantity described by a single number) with every point in a space within the field. Examples of scalar fields include pressure distributions, temperature variations, and the Newtonian gravitational potential.
This is a point in spacetime where the curvature of spacetime becomes infinite. It is an area of extremely high density into which matter or light is attracted. Singularities can be found both at the centre of black holes and on their own. Inside a singularity, the laws of physics are distorted to the point that they are no longer applicable.
Spacetime is the concept of space and time being part of the same continuum. We use the typical three dimensions that are everyday and commonplace - the x, y and z dimensions used in geometry - and ascribe a fourth dimension of time. This allows us to map out any event that takes place in the Universe by a set of coordinates: three of space to give us the location, and one of time to tell us when the event occurred. This merging of time and space is important and must be accounted for, because relativity tells us that the observed rate at which time passes changes with an object's velocity relative to the observer. Gravitational fields can also change the passage of time. On quantum scales, therefore, it is important to account for time within theoretical frameworks, whereas in classical physics this is unnecessary. The structure of spacetime is detailed in Einstein's theory of special relativity.
This theory lays out the structure of spacetime. It draws on the principle of relativity as laid out by Galileo, which states that there is no absolute state of rest, and that all motion is relative to other motion. Two principles are laid out in the theory: that the laws of physics are the same for observers whose motion is uniform relative to each other, and that the speed of light in a vacuum is the same for all observers, regardless of any relative motion. A consequence is that observers moving at different relative velocities can measure different values for quantities such as time intervals and lengths, even though the underlying laws are the same for all of them. Effects of these principles can be seen in various ways. One of the most interesting is time dilation: a clock sitting stationary in front of you will tick faster than a clock which is moving away from you. This has been shown to be true for astronauts, who come back from space very slightly younger than they would have been had they remained on Earth. Another well-known consequence of the theory is the energy-mass equivalence relationship, as defined by the equation E=mc², probably the most famous equation of all time. This states that energy and mass are interchangeable and are related by a function of the speed of light in a vacuum, c. The speed of light in a vacuum, c, is shown not to be just the speed that photons travel at; it is a key physical constant that is related to the nature of space and time. Special relativity shows us that any object with rest mass cannot travel at the speed of light.
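The size of the time dilation effect is set by the Lorentz factor (a standard result of special relativity, included here only as an illustration):

γ = 1 / √(1 − v²/c²),

so a clock moving at speed v ticks slower by the factor γ. At everyday speeds γ is indistinguishable from 1, while at v = 0.9c it is roughly 2.3, meaning the moving clock is seen to tick less than half as fast as a stationary one.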
The speed at which photons, or indeed any particle with zero rest mass, travel in a vacuum. (Since energy and mass are equivalent, as shown by the equation E=mc², a particle that is travelling has kinetic energy and therefore more effective mass than the same particle at rest.) Its value is 299,792,458 metres per second (m s⁻¹). As explained in the theory of special relativity, the speed of light is the fastest that any form of energy or information can travel in the Universe.
An intrinsic quantum property of particles that is defined by a spin number, which can be either a whole integer (0, 1, 2, 3, etc.) or a half integer (1/2, 3/2, 5/2, etc.), and can be positive or negative. Every particle carries a spin number; the Higgs boson is the only elementary particle so far observed with spin 0, although other spin-0 particles, such as the inflaton, have been hypothesised. To an extent, it is easy to make an analogy between quantum spin and the classical rotational spin that we encounter in everyday life, for example with a spinning top.
Particles that are electrically charged, such as electrons or positrons, will generate a magnetic field through their spin, as movement of an electric charge will automatically generate magnetic fields. This analogy, however, only takes us so far.
Different spin quantum numbers give us an idea of the symmetry of these particles. A particle with zero spin looks exactly the same from all sides. A particle with spin will look different if rotated, but will regain its symmetry if it is rotated a certain number of times. In this instance, an analogy with a deck of cards is useful. Consider any face card: these are symmetrical every time you spin them half way around, or 180 degrees. Consider now the Ace of Spades. This card, if placed with the point of the spade facing up as you look at it, will require a full 360 degree rotation until it looks the same again. A particle with spin 1 acts like the Ace of Spades, requiring a full rotation, whereas a spin 2 particle is symmetrical under 180 degree rotations. A half-spin particle requires two full rotations to be symmetrical. This kind of rotational symmetry does not have an analogue in the macroscopic world.
Crucially, whether a particle has half or whole integer spin tells us how it behaves. Particles with half integer spin, or fermions, obey a set of statistics known as Fermi-Dirac statistics. Particles with whole integer spin, or bosons, obey a set called Bose-Einstein statistics. One of the key differences between these two sets of statistics is that particles which obey Fermi-Dirac statistics are subject to the Pauli exclusion principle, which states that such particles may not occupy the same quantum state as each other. Crucially, this means that you cannot make fermions of the same quantum state occupy the same space. This is why fermions are the particles which make up the matter of the Universe. They include quarks, which combine to make protons and neutrons, and leptons, a set of particles that includes electrons. Bosons, which do not obey Fermi-Dirac statistics and are consequently not subject to the Pauli exclusion principle, fulfil other roles: some mediate the fundamental forces of nature (these are the gauge bosons), and the Higgs boson gives rise to mass in other particles.
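For reference, the two sets of statistics differ only by a sign in the average occupation number of a state of energy E (a standard result, quoted here as an illustration):

n(E) = 1 / (e^((E − μ)/kT) + 1)  for Fermi-Dirac,
n(E) = 1 / (e^((E − μ)/kT) − 1)  for Bose-Einstein,

where μ is the chemical potential, k is Boltzmann's constant and T is the temperature. The "+1" in the Fermi-Dirac form is what caps the occupation of any state at one fermion, which is the Pauli exclusion principle in statistical form.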
Also known as the ΛCDM or Lambda-CDM model, this is the best and most widely used model to explain the expansion of the Universe, origins of the cosmic microwave background, nucleosynthesis of light elements and the formation of galaxies and large-scale structure.
This is a set of mathematical tools that allow us to study thermodynamical properties, such as work, heat and entropy, of a large number of particles, allowing us to look at both atomic level and macroscopic level detail of the system. This allows us to explain thermodynamics in ways that apply to both classical and quantum physics, and allows us to extrapolate macroscopic predictions from microscopic properties.
In the standard model of particle physics, particles are considered to be points moving through space, tracing out a line called the World Line. To take into account the different interactions observed in nature, one has to provide particles with more degrees of freedom than only their position and velocity. These include mass, electric charge, colour (which is the “charge” associated with the strong interaction) and spin. This model was designed within a framework known as Quantum Field Theory (QFT), which allows us to build theories consistent with both quantum mechanics and the special theory of relativity. These theories describe with great success three of the four known interactions in nature: electromagnetism, the strong and weak nuclear forces. Unfortunately, gravity, as described by Einstein’s General Relativity, does not fit into this scheme.
String Theory replaces these different particle types with a single fundamental building block: a "string". These can be closed, like loops, or open, like a hair. As the string moves through time it traces out a tube or a sheet (depending on whether it is closed or open). The string is free to vibrate, and different vibrational modes of the string represent the different particle types, as different modes are seen as different masses or spins.
One mode of vibration, or ‘note’, makes the string appear as an electron, another as a photon. There is even a mode describing the graviton, the particle carrying the force of gravity. This means we can make sense of the interaction of gravitons in a way we could not in QFT. It is this ability of String Theory to create a valid model that includes all four fundamental interactions that has dubbed it to be a ‘Theory of Everything’.
The problem is that there are five different versions of String Theory. This is why we now look to M-Theory, which has a place for all five theories, as the strongest candidate for our 'Theory of Everything'. As a point of note, string theory predicts that spacetime has ten dimensions. Although we only observe three dimensions of space and one of time, we can assume that six of these dimensions are curled up very tightly, so that we may never be aware of their existence. Having these so-called compact dimensions is very beneficial, as we can suggest that degrees of freedom, such as the electric charge of an electron, simply arise as motion in the extra compact dimensions.
There are four fundamental forces in nature. They are Electromagnetism, the Weak Nuclear Force, the Strong Nuclear Force and Gravitation. The Weak Nuclear Force is associated with radioactivity in unstable nuclei, specifically the decay of a neutron into a proton. When the temperature is hot enough, such as that of the Universe shortly after the Big Bang, Electromagnetism and the Weak Nuclear Force will merge to form the Electroweak Force.
The Strong Nuclear Force binds together neutrons and protons inside nuclei. The mathematical theory describing the elementary particles in this theory, Quarks and Gluons, is known as Quantum Chromodynamics (QCD).
Theories that unify the strong nuclear force with Electroweak Theory are known as Grand Unified Theories, or GUTs.
A supercluster is a vast grouping of smaller galaxy clusters and groups; superclusters are some of the largest structures in the Universe. They can span from several hundred million light years to over one billion light years. Superclusters can contain galaxy bubbles, sheets, voids and filaments, which are smaller structures within the supercluster. Nearly all galaxies are found within superclusters, and in between superclusters there are usually large voids. Our own supercluster, called the Virgo Supercluster, contains the Local Group, the Virgo cluster and some 100 other galactic groups and clusters. Its diameter is approximately 100 million light years.
Supergravity is a theory which follows on from supersymmetry. It is theorised that, in the same way that photons mediate electromagnetism, gluons the strong nuclear force and the W and Z bosons the weak nuclear force, so too does the as-yet undiscovered graviton mediate the gravitational force. In supergravity, the graviton has a heavier superpartner whose spin differs by 1/2. So far, as with supersymmetry, there has been no observational evidence for supergravity.
This is a very powerful stellar explosion that can quite often outshine an entire galaxy. A star undergoes a supernova either when a very old massive star undergoes sudden gravitational collapse, releasing vast quantities of gravitational energy, or through the reignition of nuclear fusion in the core of a degenerate star (such as a white dwarf). The explosion releases huge quantities of the star's matter, resulting in a supernova remnant. Certain types of supernova have luminosities of known magnitude, such that they can be used as 'standard candles': we can work out how far away the object is by comparing its known luminosity with its observed brightness.
String Theory states that all particles are representations of different vibrations on a fundamental building block; a string. As a theory, it is able to describe the interactions of the particle that mediates gravitation: the graviton. In this way, and by being able to describe all other particles and interactions thereof, it is able to unite the four fundamental forces in nature, and is therefore a ‘Theory of Everything’.
The original String Theory only described particles with integer spins, called bosons. These include the particles that mediate the fundamental forces, such as the photon, the gluon and the graviton. The other class of particle, which have half integer spin, called fermions, were not described. These are the particles that constitute matter as we know it, such as quarks and electrons.
By introducing supersymmetry to bosonic string theory, we obtain a new theory that describes both the forces and the matter that make up the Universe. This is Superstring Theory. There are three different Superstring Theories that have no mathematical inconsistencies. In two of them, the fundamental object is a closed string, whilst in the third, the string is open.
By mixing the best aspects of bosonic String Theory and Superstring Theory, we can create two other consistent theories of strings, known as the Heterotic String Theories.
Supersymmetry is a theory which postulates that for every elementary particle there is a more massive "superpartner" whose spin differs by 1/2. The theory was introduced to solve mathematical difficulties related to quantum field theory and the reconciling of general relativity with quantum field theory. These difficulties arise because the Higgs boson, the boson whose interaction with other particles gives them mass, appears to gain large amounts of mass through interactions with itself. Solving these inconsistencies would give physicists a way to marry quantum mechanics and gravity at the smallest scales.
These superpartners are a possible candidate for dark matter. No superpartners have yet been detected, and no evidence exists as yet to support supersymmetry. This is because in order to observe particles of this mass we need to use incredible amounts of energy, which so far we have been unable to generate. It is hoped that the Large Hadron Collider at CERN might detect evidence of supersymmetric particles.
This is the set of points in space where decoupling occurred, approximately 380,000 years after the Big Bang, at the right distance so that we are now seeing these photons reach us as part of the Cosmic Microwave Background relic radiation.
This occurs when a system in some state of symmetry moves into a different configuration, resulting in the loss of that symmetry. Consider a ball on a hill. The ball is symmetrical. The hill is also symmetrical. If the ball is on top of the hill, the ball-and-hill system is symmetrical. If the ball rolls down the hill, the ball and the hill are individually still symmetrical, but the system of the ball and the hill is now asymmetrical. This is symmetry breaking.
In a cosmological context, this happened as the Universe cooled down after the Big Bang. As this occurred, elementary particles changed state in what is known as a phase transition. As this occurred, symmetry that previously was exhibited by these particles was broken. These symmetries are associated with different fundamental forces. This is why some particles are acted upon by these forces, and others not. These symmetries are restored at higher temperatures, however.
These are a type of topological defect that is hypothesised to form when large symmetries are broken. They are unstable and prone to collapse. Unlike certain other topological defects, such as magnetic monopoles, these are delocalized and occur over large areas. No evidence has been found of them as yet.
This is a gravitational anomaly located in the Centaurus Supercluster. It is a localized concentration of mass of unknown origin that is equivalent to tens of thousands of galaxies. Its mass is so large that (as the name suggests) its gravitational attraction is altering the motion of galaxies and galaxy clusters in a region hundreds of millions of light years across.
In the aftermath of the Big Bang, the Universe was extremely hot and extremely dense. At these energies, the laws of nature that we know were changed. The fundamental forces that we see in nature were unified - it is only as the Universe expanded and cooled that Gravitation, Electromagnetism and the Strong and Weak nuclear forces all ceased to be as one. Electroweak theory describes the unification of the Weak Nuclear force and Electromagnetism. A Theory of Everything will marry up all the fundamental forces.
The issue with this is that whilst quantum chromodynamics and electroweak theory describe the strong and weak nuclear forces and electromagnetism on a well understood quantum basis, there is no consistent theory for describing gravity on such a basis. M-Theory, and the associated String Theories behind it, are being explored as possible candidates.
These are configurations of matter that form during matter phase transitions and symmetry breakings, such as occurred during the very early Universe. They are configurations of matter in the old, symmetrical phase that remain stable in the new phase where the symmetry that was previously held is now broken. Examples of these defects include monopoles, cosmic strings, domain walls and textures.
Within quantum field theory, particles may move from higher to lower energy states, such as occurred in the very early Universe as the Universe was expanding and thus cooling. These lower energy states, or vacuum states, may be different whilst possessing the same amount of energy. This means these states are degenerate. The particle, therefore, has a chance of falling into any of these degenerate vacuum states, unless there is something outside the system described here which will cause one state to be preferred over the other. This set of vacuum states is called the vacuum manifold.
The world line of an object is the path it traces out as it travels through spacetime. It differs from a trajectory or orbit due to the inclusion of time as a dimension in addition to the three dimensions of space.
A Yang-Mills theory is any quantum theory that is symmetric under a non-abelian gauge group. A gauge group is abelian if its gauge transformations are commutative, that is, it does not matter in which order you apply the transformations, you will get the same result (a simple analogy is ordinary multiplication such as 1 × 2 × 2: it does not matter in which order you perform the operations, the result will always be 4). In a non-abelian group, the order does matter. An example of this kind of theory is Quantum Chromodynamics, which deals with the strong nuclear force.
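To make "non-abelian" concrete, a standard textbook illustration (not taken from this glossary) is a pair of matrices that fail to commute:

A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \quad B = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}, \quad AB = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \neq \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} = BA.

Gauge transformations in a Yang-Mills theory such as QCD are represented by matrices in exactly this sense, which is why the order in which they are applied matters.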
This has proven to be a bad week for NASA's robotic explorers on Mars. NASA has several spacecraft on the surface of Mars performing various missions, including looking for water and evidence of ice on the red planet.
Yesterday, NASA announced that it had lost communications with the Phoenix lander and had no expectation of the lander surviving the inhospitable Martian winter. Although the lander has now been declared dead by NASA, the Phoenix mission was a success and lasted longer than originally planned.
Today, NASA announced that the Spirit rover is also in jeopardy of failing. A lack of sunlight hitting Spirit's solar panels is causing serious concern at NASA. According to scientists on the mission, Spirit produced only 89 watt-hours of energy last weekend, about half the energy the rover needs for full performance.
The reason for the drop in power production is a massive dust storm that deposited Martian dust on the solar panels and prevented sunlight from reaching them. Spirit's mission began in 2003, when it was sent to the red planet to search for clues about past water on the planet's surface.
To help conserve power and prevent Spirit from running its batteries dry, NASA instructed the rover to turn off several heaters designed to keep scientific instruments warm. The rover was also ordered to stop communicating with Earth until Thursday.
NASA says that if it doesn't hear from Spirit on Thursday it will be extremely concerned. Scientists hope Spirit will make it, as the dust storms over its position have abated. It is not known at this time whether the storm damaged any of the rover's instruments, or whether the rover will be able to move again given the dust on its panels.
Induced Seismicity Potential in Energy Technologies (2012)
Board on Earth Sciences and Resources
Each report is produced by a committee of experts selected by the Academy to address a particular statement of task and is subject to a rigorous, independent peer review; while the reports represent views of the committee, they also are endorsed by the Academy. Learn more on our expert consensus reports.
In the past several years, some energy technologies that inject or extract fluid from the Earth, such as oil and gas development and geothermal energy development, have been found or suspected to cause seismic events, drawing heightened public attention. Although only a very small fraction of injection and extraction activities among the hundreds of thousands of energy development sites in the United States have induced seismicity at levels noticeable to the public, understanding the potential for inducing felt seismic events and for limiting their occurrence and impacts is desirable for state and federal agencies, industry, and the public at large. To better understand, limit, and respond to induced seismic events, work is needed to build robust prediction models, to assess potential hazards, and to help relevant agencies coordinate to address them.
- Research has provided a better understanding of the factors that induce seismicity. Although existing faults and fractures are generally stable, changes in subsurface pore pressure, for example due to the injection or extraction of fluid from Earth's subsurface, may change the crustal stresses acting on a nearby fault and induce a seismic event. Net fluid balance appears to have the most direct correlation to the magnitude of induced seismic events, thus, energy technology projects that maintain a balance between the amount of fluid injected and the amount withdrawn may induce fewer felt seismic events than technologies that do not maintain balance.
- Although the general mechanisms that create induced seismic events are well understood, scientists are currently unable to accurately predict the magnitude or occurrence of such events due to the lack of comprehensive data on the complex natural rock systems at particular energy development sites. Predictions of induced seismicity at specific energy development sites will continue to rely on both theoretical modeling, and data and observations from measurements made in the field.
- Of all the energy-related injection and extraction activities conducted in the United States, only a very small fraction have induced seismicity at levels noticeable to the public (that is, above magnitude 2.0). Different energy technologies typically use different injection rates and pressures, fluid volumes, and injection duration—factors that affect the likelihood and magnitude of an induced earthquake.
- Geothermal energy—the use of heat from the Earth as an energy source—usually attempts to maintain a balance between fluid volumes extracted for energy production and those replaced by injection, which reduces the potential for induced seismicity. However, site-specific characteristics can make a difference. For example, the high-pressure hydraulic fracturing undertaken to produce geothermal energy from hot, dry rocks has caused seismic events that are large enough to be felt.
- Conventional oil and gas development extracts oil, gas, and water from pore spaces in rocks in subsurface reservoirs. Incidences of felt induced seismicity from conventional oil and gas development appear to be very rare.
- Shale formations may contain oil, gas, and/or liquids. Shales have very low permeability that prevent these fluids from easily flowing into a well bore, and so wells may be drilled horizontally and hydraulically fractured to allow hydrocarbons to flow up the well bore. Hydraulic fracturing to date has been confirmed as the cause for small, felt seismic events at one location in the world. The process of hydraulic fracturing a well as presently implemented for shale gas recovery does not pose a high risk for inducing felt seismic events.
- Tens of thousands of waste water disposal wells have been drilled in the United States to dispose of the water generated by geothermal and oil and gas production operations, including shale gas production. Water injection for disposal has been suspected or determined a likely cause for induced seismicity at approximately 8 sites in the past several decades. However, the long-term effects of increasing the number of waste water disposal wells on the potential for induced seismicity are unknown, and wells used only for waste water disposal usually do not undergo detailed geologic review prior to injection, in contrast to wells for enhanced oil recovery and secondary recovery.
- Capturing carbon dioxide and developing means to store it underground could, if technically successful and economical, help reduce carbon dioxide emissions to the atmosphere. However, carbon capture and storage differs from other energy technologies because it involves the continuous injection of very large volumes of carbon dioxide under high pressure, and is intended for long term storage with no fluid withdrawal. The large net volumes of carbon dioxide that would help reduce global carbon dioxide emissions to the atmosphere may have potential for inducing larger felt seismic events due to increases in pore pressure over time; potential effects of large-scale carbon capture storage projects require further research.
- Understanding hazard and risk related to induced seismicity is critical to any discussion of the option, but currently, there are no standard methods to implement risk assessments for induced seismicity. The types of information and data required to provide a robust risk assessment include net pore pressures and stresses; information on faults; data on background seismicity; and gross statistics of induced seismicity and fluid injection or extraction.
- Four federal agencies (the U.S. Environmental Protection Agency, the Bureau of Land Management, the U.S. Department of Agriculture Forest Service, and the U.S. Geological Survey) and several different state agencies have regulatory oversight, research roles, and responsibilities relating to different parts of the underground injection activities associated with energy technologies, but there are currently no mechanisms in place for the efficient coordination of governmental agency responses to induced seismic events.
2D games using Silverlight - Collision detection implementation
This article shows how to implement collision detection in a Microsoft Silverlight game application. This is the second article in a series which will show how to create a complete working game (a clone of the classic Arkanoid game).
The first article in this series, 2D games using Silverlight - implementing the game loop, introduced the possibility of writing 2D games using Silverlight, an approach that has benefits when targeting all of the platforms: Windows Phone 7, Windows Phone 8, Windows 8 and Internet browsers. The original article covered the creation of the game skeleton for a simple Arkanoid clone called JailBreaker, and included the game loop and the controllable game loop.
This article extends the game skeleton by adding collision detection.
Collision detection implementation strategy
There are numerous strategies and algorithms for collision detection: broad-phase detection, narrow-phase detection, using dot products to estimate distances, and so on. However, it does not make sense to reinvent the wheel: for more complicated games there are open source physics engines, and for simple games like this one there are often platform APIs that can help with collision detection. For this example we use the collision detection functionality built into the platform, along with a good understanding of the simple collision detection required for this game.
Considering the game environment:
- There are few components with complex, unpredictable movement - essentially just the club and the ball. All other elements either stand still or fall in a predictable manner - they do not collide. As a result we only need to consider a single object, the ball, when determining whether a collision has occurred.
- We only need to consider collisions in three directions along the active object's movement: forward, forward diagonal up, and forward diagonal down. This limitation allows us to minimize the number of tests required.
Collision detection in the platform:
- There is a helper class, VisualTreeHelper, in the platform that allows hit testing on the component tree (all visible components in a Silverlight application form a component tree hierarchy)
- There is support for object animation based on Storyboard that is suitable for steadily moving objects
- Game movement is driven by the screen refresh rate, which is not constant across devices. To make object speed the same on different devices and under different conditions, we need to measure the refresh rate (i.e. the time elapsed since the last frame) before calculating the object's next position (see the sketch after this list).
- During object movement, collision detection is performed against the object's next position, because the calculations are made in a hook into the screen refresh loop.
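A minimal sketch of how the frame-time measurement might look (this is an illustration, not the actual JailBreaker source; the field names, the ball element and the speed values are assumptions):

private DateTime _lastFrameTime = DateTime.Now;
private double speedXPixelsPerSecond = 200;   // assumed speed values
private double speedYPixelsPerSecond = 150;

public MainPage()
{
    InitializeComponent();
    // hook into the screen refresh loop - raised once per rendered frame
    CompositionTarget.Rendering += OnRendering;
}

private void OnRendering(object sender, EventArgs e)
{
    DateTime now = DateTime.Now;
    double elapsedSeconds = (now - _lastFrameTime).TotalSeconds;
    _lastFrameTime = now;

    // scale the per-frame step by the measured frame time so that the speed
    // (in pixels per second) is the same whatever the device's refresh rate
    double stepX = speedXPixelsPerSecond * elapsedSeconds;
    double stepY = speedYPixelsPerSecond * elapsedSeconds;

    // 'ball' is assumed to be an element defined in the page's XAML
    Canvas.SetLeft(ball, Canvas.GetLeft(ball) + stepX);
    Canvas.SetTop(ball, Canvas.GetTop(ball) + stepY);
}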
New module in the project: PhysicalBody
The new class PhysicalBody is attached to a UI element statically created in XAML. It handles the element:
- position update
- object collision assessment
- checking the container's extents, to keep the element's position inside the container
It also accepts gameOver() and hitScores() method delegates from the container class.
The position update is a simple increment of the current position based on the current body speed. Note that the body speed defines the body's direction according to the coordinate system: a positive value means left to right along the X axis and top to bottom along the Y axis, and a negative value means the opposite direction.
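A minimal sketch of what that update might look like inside PhysicalBody (the velocityX and velocityY names match the collision code below; the rest is an assumption, not the actual JailBreaker source):

// 'body' is the UIElement this PhysicalBody instance is attached to
private double velocityX;   // pixels per frame; positive = left to right
private double velocityY;   // pixels per frame; positive = top to bottom

private void updatePosition()
{
    // simple increment of the current position by the current speed
    Canvas.SetLeft(body, Canvas.GetLeft(body) + velocityX);
    Canvas.SetTop(body, Canvas.GetTop(body) + velocityY);
}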
Object collision assessment
This is the crucial part of the article. The collision detection is implemented in a single function and follows the strategy declared in the preceding sections.
Let us examine the code:
private void assessCollision()
{
    // get the bounding box and offset it one step in the direction of movement
    Canvas canvas = (Canvas)body.Parent;

    // (1) convert control coordinates to screen (host) coordinates
    var transform = canvas.TransformToVisual(Application.Current.RootVisual);
    var origin = transform.Transform(new Point(Canvas.GetLeft(body) + velocityX, Canvas.GetTop(body) + velocityY));
    var boundingBox = new Rect(origin, new Size(body.ActualWidth, body.ActualHeight));

    // (2) check whether any elements lie in the body's way,
    //     in three directions: forward, forward-diagonal-up and forward-diagonal-down
    // (3) use the VisualTreeHelper class to walk the element hierarchy tree
    // (4) filter out all objects except Shapes (done in isCollided)
    if (isCollided(VisualTreeHelper.FindElementsInHostCoordinates(new Point(boundingBox.Right, boundingBox.Bottom), canvas)) ||
        isCollided(VisualTreeHelper.FindElementsInHostCoordinates(new Point(boundingBox.Right, boundingBox.Y), canvas)) ||
        isCollided(VisualTreeHelper.FindElementsInHostCoordinates(new Point(boundingBox.X, boundingBox.Bottom), canvas)))
    {
        // a collision has been detected - react here (bounce the ball, remove the brick, update the score, ...)
    }
}
For more details, please check the source code.
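The isCollided helper is not shown in the excerpt above. A minimal sketch of how it might be implemented, assuming it simply scans the hit-test results for Shape-derived elements other than the moving body itself (an illustration, not the actual JailBreaker code):

private bool isCollided(IEnumerable<UIElement> hits)
{
    foreach (UIElement element in hits)
    {
        // (4) only Shape-derived elements (rectangles, ellipses, paths, ...) count as obstacles
        if (element is Shape && element != body)
        {
            return true;
        }
    }
    return false;
}

The real implementation will typically also record which element was hit, so that the caller can remove a brick, reverse the ball's velocity, or call the hitScores() delegate.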
This approach shows how easy it is to implement collision detection in a Silverlight game project using only the platform's built-in facilities. A working example is available in the source code, which can be built and run on a Windows Phone 7.x device. Also, since the Windows platform does not allow side-loading of installation binaries, the same code has, for reference, been deployed to a web site and can be run embedded in a browser.