Collect User Input in a Bash Shell Script
Feb 02, 2011, 12:04
[ Thanks to Andrew Weber for this link. ]
"The read command is designed to collect input from the user and make it available to the script. read is a shell builtin that stores one line of user input in one or more variables, and it is valuable because it is a primary way to feed information into a shell script. Lines are read from standard input and split using the $IFS variable, which stands for internal field separator. The first word is assigned to the first variable, the second word to the second variable, and so on.
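As a quick illustration of the splitting behaviour described above (the prompts and variable names are invented for the example):

```shell
#!/bin/bash
# read stores one line of standard input in the named variables,
# splitting the line on the characters in $IFS (whitespace by default).
read -p "Enter your first and last name: " first last
echo "First: $first"
echo "Last:  $last"

# When the line has more words than variables, the extra words are
# all appended to the final variable.
read -p "Enter three or more words: " one two rest
echo "one=$one two=$two rest=$rest"
```

Because the split is controlled by $IFS, setting, say, IFS=',' before the read would split the line on commas instead of whitespace.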
The Deep Impact mission, which last year fired a massive projectile at the nucleus of comet Tempel 1, has inspired a proposal for a similar mission to Jupiter's moon Europa.
Evidence that Europa has a subsurface ocean, which would make it the only known "water world" besides Earth, emerged between 1996 and 2003 from NASA's Galileo mission. While this makes Europa a prime target in the search for alien life, any plan to reach its ocean faces the daunting challenge of how to drill or melt down through the 20 kilometres or more of ice lying above it.
To begin to tackle the problem, a team led by researchers at Johns Hopkins University in Baltimore, Maryland, is proposing to smash a spacecraft into the ice and analyse the plume of ejected material. While the impact would come nowhere near breaking through to the ocean below, the energy released would throw ...
When you think power, you probably DON’T think algae! But, believe it or not, algae—and its 50% oil content—is being considered as the next JET fuel. Experimentation is currently underway to convert the green sludge into renewable energy, Scientific American reports.
And this video shows how it’s being done. Via National Geographic News:
Now, there is a DOWNSIDE to algae too (and not just Soylent Green): scientists have determined that harmful algae blooms are creating “dead zones” in marine ecosystems.
Researchers BLAME nitrogen and phosphorus runoff from agriculture and burning of fossil fuels; more from National Geographic News.
Feb 18, 2013
For decades, the strange substance called dark matter has teased physicists, challenging conventional notions of the cosmos.
Today, though, scientists believe that with the help of multi-billion-dollar tools, they are closer than ever to piercing the mystery — and the first clues may be unveiled just weeks from now.
“We are so excited because we believe we are on the threshold of a major discovery,” said Michael Turner, director of the Kavli Institute for Cosmological Physics at the University of Chicago, at an annual conference of the American Association for the Advancement of Science (AAAS).
Dark matter throws down the gauntlet to the so-called Standard Model of physics.
This article was posted: Monday, February 18, 2013 at 11:53 am
Making the Rounds
February 13, 2013
Early on Jan. 31, 2013, SDO observed a visual phenomenon that most of us do not recall ever seeing before: a ring-shaped prominence that lay flat above the Sun's surface. Plasma streaming along the magnetic field lines appears to go in both directions at the same time along the field lines. Before long, the prominence became unstable and erupted in a large swirl, with most of the material falling back into the Sun. You never know what you are going to see next.
Topics: Plasma physics, Space plasmas, Physics, Sun, Magnetic field, Plasma, Magnetism, Astrophysics, light sources
The 2010 Summer season has been extremely warm across the southeastern United States, with the State of Alabama experiencing one of the warmest summers on record. Statistics from this summer can be found here:
Central Alabama Summer Heat Statistics
NOAA Global Temperature Statistics
The Fall season officially begins at 10:09pm CDT on September 22, 2010. This is also termed the "Autumnal Equinox," on which the daylight and nighttime hours are equal. After that time, nighttime hours will be longer than daylight hours until December 21st, which is termed the "Winter Solstice."
The start of the fall season also helps us to look forward to cooler temperatures, especially with the scorching summer that we have gone through. Below are the average, earliest, and latest dates of the first frost, first freeze, and first significant freeze (28 Degrees or Below) for Birmingham, Anniston, Tuscaloosa, and Montgomery from 1970-2009.
An unusual event playing out high in the atmosphere above the Arctic Circle is setting the stage for what could be weeks upon weeks of frigid cold across wide swaths of the U.S., having already helped to bring cold and snowy weather to parts of Europe.
This phenomenon, known as a “sudden stratospheric warming event,” started on Jan. 6, but is something that is just beginning to have an effect on weather patterns across North America and Europe.
While the physics behind sudden stratospheric warming events are complicated, their implications are not: such events are often harbingers of colder weather in North America and Eurasia. The ongoing event favors colder and possibly stormier weather for as long as four to eight weeks after the event, meaning that after a mild start to the winter, the rest of this month and February could bring the coldest weather of the winter season to parts of the U.S., along with a heightened chance of snow.
Sudden stratospheric warming events take place in about half of all Northern Hemisphere winters, and they have been occurring with increasing frequency during the past decade, possibly related to the loss of Arctic sea ice due to global warming. Arctic sea ice declined to its smallest extent on record in September 2012.
An Arctic cold front was sliding south from Canada on Friday, getting ready to clear customs at the border on Saturday and Sunday, bringing an icy chill to areas from the Plains states through the Mid-Atlantic by early next week, including what promises to be a chilly second inauguration for President Obama. Temperatures in Washington on Monday are expected to hover in the low 30s, only a touch milder than Obama’s first inauguration, when the temperature was 28°F.
Reinforcing shots of cold air are likely to affect the Upper Midwest, Great Plains and into the East throughout February, with some milder periods sandwiched in between.
Sudden stratospheric warming events occur when large atmospheric waves, known as Rossby waves, extend beyond the troposphere where most weather occurs, and into the stratosphere. This vertical transport of energy can set a complex process into motion that leads to the breakdown of the high altitude cold low pressure area that typically spins above the North Pole during the winter, which is known as the polar vortex.
The polar vortex plays a major role in determining how much Arctic air spills southward toward the mid-latitudes. When there is a strong polar vortex, cold air tends to stay bottled up in the Arctic. However, when the vortex weakens or is disrupted, like a spinning top that suddenly starts wobbling, it can cause polar air masses to surge south, while the Arctic experiences milder-than-average temperatures.
During the ongoing stratospheric warming event, the polar vortex split in two, allowing polar air to spill out from the Arctic, as if a refrigerator door were suddenly opened.
May 7, 2009
During the past decade, numerous discoveries have been made that have confirmed the hypothesis that birds evolved from dinosaurs. These fossils have given paleontologists important insight into how adaptations like feathers evolved, but one of the most hotly debated topics in paleobiology is how birds began to fly. Some scientists prefer a “ground up” model in which feathered dinosaurs began jumping into the air, but others think a “trees down” hypothesis (where feathered dinosaurs would have started gliding first) is more plausible. There once was another hypothesis, however, involving bird ancestors that lived along an ancient shoreline.
In 1920, the zoologist Horatio Hackett Newman published his textbook Vertebrate Zoology, and in it he proposed a unique idea for the origin of birds. Newman thought that the reptilian ancestors of birds had the beginnings of feathers in elongated scales, and that if these bird ancestors jumped off cliffs to dive after fish, these scales could have aided them in aiming their strikes. If they could flap their arms, so much the better, and so flying birds would have evolved from these divers. Flightless birds like penguins, by contrast, would have evolved from similar reptiles that used their arms to flap underwater.
To bolster his case, Newman even supposed that the earliest known bird, Archaeopteryx, was adapted to climbing rocky cliffs at the shore and had teeth adapted for catching fish. He did not have proof for his views, but there did not seem to be much evidence that directly refuted them. At the time he proposed this hypothesis, there were very few fossils to test his ideas.
Unfortunately for Newman, his hypothesis was not well accepted at the time and was soon relegated to the scientific dust bin. New evidence has also failed to throw support to his ideas, but this is not to say that we should ignore what Newman wrote. His hypothesis is important to understanding how scientists form ideas based upon the evidence available. Swimming proto-birds might seem a bit silly to us now, but it is an interesting tidbit of science history.
What is in a plume?
CIOERT’s partner, UNCW, has created a web page describing expected forms of oil, gas and chemical dispersants that may travel over Florida reefs.
How to detect a plume?
Oil and gas in the water take many forms, from dissolved gases to large blobs (mousse) at the surface. Most mapping is done via satellites and visual aerial imagery; it is more difficult to map subsurface plumes. An EPA directive prescribes a subsurface monitoring plan for BP, with parameters that need to be measured to detect and monitor subsurface oil/gas and dispersants, and related technologies for sampling from the sea surface down to 550 meters, including:
- Fluorometers: EPA document appendix includes a good discussion of hydrocarbon fluorescence. Many companies make sensors to detect different organic compounds, including crude oil. During the CIOERT expedition, the submersible payloads will include a fluorometric sensor package for measuring organic matter, chlorophyll, and oil.
- ADCP: Acoustic Doppler Current Profiler, similar to what NOAA is using to assess the leak rate at the well site; the latest generation of current meters gives a three-dimensional instantaneous view of current speed and direction.
- LISST particle analysis: Laser In Situ Scattering and Transmissometry (LISST) measures volume concentrations and size spectra of particles using laser diffraction, measuring the intensity of scattered laser light at different angles.
- Dissolved Oxygen: A DO sensor will be used during the CIOERT expedition, but the Spill Command Center has a protocol for determining dissolved oxygen, which involves doing Winkler titrations in the lab on water samples.
- CTD (Conductivity, Temperature, and Depth): Standard information that supports all other data collections. Salinity, derived from conductivity data, and temperature provide important clues as to the water mass sampled.
- Water sampling: The research submersible can collect water samples using various devices. A rosette of water bottles, a rack of bottles that can be tripped from the surface to collect samples at various depths, will also be deployed from the ship.
- PAH analysis: One type of hydrocarbon found in oil, polycyclic aromatic hydrocarbons are known carcinogens and detected using gas chromatography on lab samples.
- Rototox toxicity testing: A toxicity assay on sediments or water samples using rotifers. Rotifers are sensitive small invertebrates that occur in the Gulf of Mexico and are important to the food chain. They feed on bacteria and other small pieces of organic matter and, in turn, are fed upon by crustaceans and other organisms. Rototox® is a commercially-available procedure and is specified for the BP dispersant monitoring directive because it is a rapid test that can be performed remotely on a ship. The test exposes rotifers to water collected at different distances from the oil release location. Toxicity is determined by comparing the survival of the rotifers exposed to the offshore samples to survival in clean water.
- Spectrometry: There are a variety of underwater sensors that use spectrometry to detect a variety of elements in seawater, including hydrocarbon gases and fluids. CIOERT partners FAU and SRI International both have mass spectrometers for this purpose. SRI’s unit will be deployed by the research submersible during the expedition.
In summer 1992, the south side of the Alaskan Peninsula and the Kodiak Archipelago were surveyed for harbor porpoise (Phocoena phocoena) from the NOAA ship John N. Cobb using line transect methodology. The shipboard platform was 28.36 m (93 ft) in length with a bridge height of 4.27 m (14 ft). To obtain data on seasonal movements of porpoise, a limited number of transects were ... made in areas of known porpoise concentrations. When a sighting was made, the following variables were recorded: time, angle, number of reticles to the sighting, radar distance (nm) to the shoreline at the same angle of the sighting, the species, the count of the porpoises seen (best, high, and low), and the animal(s)’ direction of travel. Search effort was recorded by noting the time and location of the ship at the beginning and end of each transect.
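Sighting records like these are typically reduced to perpendicular distances from the trackline for line-transect density estimation. A minimal sketch of that standard reduction follows; the function name and example values are illustrative and not taken from the survey:

```python
import math

def perpendicular_distance(radial_nm: float, angle_deg: float) -> float:
    """Standard line-transect reduction: the perpendicular distance of a
    sighting from the trackline is x = r * sin(theta), where r is the
    radial distance to the animal and theta is the sighting angle off
    the ship's heading."""
    return radial_nm * math.sin(math.radians(angle_deg))

# A sighting 0.5 nm away at 30 degrees off the bow lies about 0.25 nm
# from the trackline.
print(round(perpendicular_distance(0.5, 30.0), 3))
```

The distribution of these perpendicular distances is what line-transect methods fit in order to estimate density, which is why the angle and distance of every sighting are recorded.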
Although harbor porpoises, Phocoena phocoena, are known to occur throughout Alaska waters (Fiscus et al. 1976; Leatherwood and Reeves 1978; Lawry et al. 1982; Leatherwood et al. 1983), few estimates of abundance exist. In 1991, the National Marine Mammal Laboratory initiated a 3-yr study on Alaska harbor porpoises. The objectives of this program were to: 1) obtain minimum ... population estimates in coastal waters using line transect methodology, with a coefficient of variation for density estimates of less than 30% for each survey area; and 2) establish a baseline for detecting changes in abundance through time, for analysis of trends.
SUPPLEMENTAL INFORMATION: Note: This dataset and associated effort data were updated on Sept 8, 2005. Those who downloaded them before that date should renew them. The ability of the observers to sight porpoise was sensitive to environmental conditions. As expected, when conditions deteriorated, the number of porpoises observed declined.
CURRENTNESS REFERENCE: ground condition
SPATIAL REFERENCE INFORMATION -
Horizontal Datum Name: D_WGS_1984
Ellipsoid Name: WGS_1984
Semi-major Axis: 6378137.000000
Denominator of Flattening Ratio: 298.257224
I'm reading Goldstein's Classical Mechanics, the part on "Scattering" in the "Central Force" chapter.
In relation to the figure below, he says that the angular momentum, $l$, is given by $$l=mv_0s$$ where $v_0$ is the incident speed of the particle and $s$ is the perpendicular distance from the center of force to the incident line of travel (as shown in the figure).
However, given that $l=|\vec{r}\times\vec{p}|=r\,mv_0\sin(\theta)$, and $l$ is measured about the center of the sphere shown, it seems as if he has concluded that the perpendicular drawn from the center of the sphere to the particle's line of travel is of length $s$, which generally need not be true. To put it simply, how did he get the above expression?
Henri Poincaré (1854-1912)
Born in 1854 in Nancy, France, Henri Poincaré was a visionary mathematician and philosopher whose innovative ideas continue to be used daily by engineers and scientists today.
After initial studies in mathematics and engineering, Poincaré was appointed as Professor of Mathematics at the University of Paris in 1881 and soon became well known for his ground-breaking research. His theoretical work focused on resolving problems such as number theory, differential equations (used to calculate how variables change in a dynamic system) and algebraic topology (equations which describe shapes and spaces). He also considered many topics in applied science such as celestial mechanics, fluid dynamics, optics, electricity and telegraphy. In particular, his work on celestial mechanics laid the foundations for Einstein’s subsequent theory of special relativity, while his mathematical work on dynamics led to the development of chaos theory.
As a result of his intense mathematical research, Poincaré became interested in philosophy and psychology, particularly in relation to human thought processes and problem-solving. He was also keen to share his passion for science and mathematics with general audiences and wrote a number of popular science books such as The Value of Science (1905) and Science and Method (1908).
Poincaré received numerous awards and honours throughout his life, both from national and international organisations. An asteroid and a crater on the Moon are both named in recognition of his research.
He died in Paris in 1912.
The Great Dane would leave a legacy that few would even dream of rivaling. Niels Bohr was a quiet and shy man, whose contribution to 20th century physics is so fundamental that, without it, the whole edifice would only be half built. His brilliant insights into physical problems not only gave solutions, but also taught the world a new way of thinking of a physical problem – think of only what can be explained and what can be observed, leaving all your philosophical baggage behind. Bohr, the great, celebrates his birthday today and Google honours him with a doodle.
Bohr’s great insight
If you asked me to judge the doodle, I would just give it passing grades; Google could’ve easily been more imaginative. The doodle shows an atom, as given by Bohr’s atomic model (well, not quite, but more on that in a bit), and a photon (particle of light) being emitted with exactly the right frequency. This frequency times Planck’s constant gives the energy of the photon. It turns out that the transition of an electron from a higher energy level to a lower one involves the emission of a photon with energy equal to exactly the difference between the levels, and this was Bohr’s great insight.
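Written out in symbols, the relation the doodle illustrates is the following; the hydrogen energy-level formula is standard textbook material rather than something stated in the post:

```latex
% Photon emitted in a transition from level i down to level f:
h\nu = E_i - E_f
% Bohr's energy levels for the hydrogen atom:
E_n = -\frac{13.6\ \mathrm{eV}}{n^2}, \qquad n = 1, 2, 3, \dots
```

For example, the $n=3 \to n=2$ transition gives $h\nu = 13.6\left(\tfrac{1}{4}-\tfrac{1}{9}\right)\ \mathrm{eV} \approx 1.9\ \mathrm{eV}$, the red H-alpha line.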
So what problem was I referring to about the Bohr’s Atomic Model? Well, Bohr’s Atomic Model doesn’t involve elliptic orbits*. It just involves electrons going around the nucleus in circular orbits. So just a bit mistaken there! (*See corrigendum below)
Bohr happened to be on the Manhattan Project too. His escape out of Nazi occupied Denmark is the stuff of folklore. Apparently, he received a permit letter, allowing him to leave Copenhagen when Denmark was being captured by the Nazi power in the 2nd World War. This special permit came probably from Heisenberg who was holding a high position in the department of science in Nazi establishment. Niels Bohr was then smuggled out of Denmark and given a safe haven in America.
More than just a scientist
So Bohr! More than being a brilliant physicist, he was also a pioneer. He was one of the founding fathers of CERN, 1954. Where we would be without the Great Dane. And of course, the Copenhagen Institute which housed the great minds of the 20th century at a time was co-founded by Niels Bohr.
An atom wouldn’t be the same without you! Happy Birthday Niels Bohr.
Corrigendum from the author: A good friend of mine and a regular reader of the blog, Joe Phillips Ninan, research scholar at TIFR, Mumbai, wrote to me pointing out that my conclusion of the orbits not being circular is incorrect. He says that they are circles oriented in 3D planes and thus just look like ellipses, but are not actually ellipses. This is what he wrote to me in an email:
I carefully checked the pixel position of the nucleus in the Google Doodle image. They lie exactly at the center of line connecting the diameter. Hence they are not elliptical orbits with nucleus at one of the foci. They have drawn only perfect circles, oriented in different planes in 3D.
Good point, Joe, and it’s gracefully accepted. The hasty error on my part is regretted.
by Denyse O'Leary
Well, isn't that the key epigenetics question - what we really want to know.
From "Why Does the Same Mutation Kill One Person but Not Another?" (ScienceDaily, Dec. 7, 2011), we learn:
The vast majority of genetic disorders (schizophrenia or breast cancer, for example) have different effects in different people. Moreover, an individual carrying certain mutations can develop a disease, whereas another one with the same mutations may not. This holds true even when comparing two identical twins who have identical genomes. But why does the same mutation have different effects in different individuals?

Some researchers propose:
"In the last decade we have learned by studying very simple organisms such as bacteria that gene expression -- the extent to which a gene is turned on or off -- varies greatly among individuals, even in the absence of genetic and environmental variation. Two cells are not completely identical, and sometimes these differences have their origin in random or stochastic processes. The results of our study show that this type of variation can be an important influence on the phenotype of animals, and that its measurement can help to reliably predict the chance of developing an abnormal phenotype such as a disease."

This team's own research looked at the worm C. elegans, the space shuttle blowup survivor. C. elegans is too simple to feature many complicating factors.
The work suggests that, even if we completely understand all of the genes important for a particular human disease, we may never be able to predict what will happen to each person from their genome sequence alone. Rather, to develop personalised and predictive medicine it will also be necessary to consider the varying extent to which genes are turned on or off in each person.

Goodbye, "genetics is destiny."
There is a sense in which no one can tell you why your brother died and you didn't. Perhaps some day they can point to a gene abnormality that affected him fatally and you minimally - and offer a credible explanation of the cascade of outcomes. But that's it. Some of what we need to know can only be addressed by philosophy, not science.
Article on everything2 describing the topology of the Asteroids universe as a torus (donut shape).
…So the topological part is this: when you fly up off the top edge of the screen, you magically appear at the same position on the bottom of the screen, and vice-versa. The same is true of the left and right edges. So consider this: from the pilot’s perspective, he or she is flying around in a 2-dimensional universe with no edge, ie: where every spot the ship is in looks locally like two-dimensional Euclidean space. Mathematicians call this sort of thing a manifold, specifically a 2-manifold. I’m going to represent it like this, as it is represented on the game screen:
The edges ‘a’ and ‘b’ are labelled to indicate that the top and bottom are the same location in space (a), as are the left and right (b). In fact (when you think about it) the four corners are actually the same point! If you were to try to connect this up as a real physical surface (this is called an embedding), you could think about it as a sheet of paper where you first glued edge a-top to a-bottom (giving you a rolled-up paper tube), and then bent the resulting tube around, gluing b-left to b-right. You would end up with…wait for it…a donut! Or, in topological jargon, a torus. So when you are playing Asteroids, you are actually playing it on a torus, mathematically speaking. (The advantage to this explanation is that in a bar, there’s always a napkin around that you can use to demonstrate. Sometimes there are even videogames.)…
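The wrap-around rule described above is just modular arithmetic on the ship's coordinates. A minimal sketch, with illustrative screen dimensions:

```python
WIDTH, HEIGHT = 800, 600  # illustrative screen size in pixels

def wrap(x, y):
    """Map a position onto the torus: leaving one edge re-enters at the
    same offset on the opposite edge (top/bottom glued, left/right glued)."""
    return x % WIDTH, y % HEIGHT

# Flying 10 px past the right edge and 20 px above the top edge:
print(wrap(810, -20))  # -> (10, 580)
```

Because `%` here always returns a value in `[0, WIDTH)` (Python's modulo follows the sign of the divisor), a single expression handles crossings in either direction.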
Read the full article here: http://www.everything2.org/node/746760
Tag: "chloroplast" at biology news
Transgenic plants remove more selenium from polluted soil than wild plants, new tests show
...eveloped by plant geneticists -- such as modifying chloroplast DNA rather than nuclear DNA -- will eventually reduce the need for such constant monitoring. Since chloroplast DNA is maternally inherited, there is little risk of pollen transfer, said Terry. Terry said it's w...
MSU researchers receive $4 million grant to uncover gene functions
... in advances in human health and agriculture. The chloroplast gives green plants their color and carries out photosynthesis and produces oxygen. It can be thought of as the world's life-support system. It is an attractive target for biotechnology because it produces many different molecules important to agricul...
Effective, safe anthrax vaccine can be grown in tobacco plants
... his colleagues injected the vaccine gene into the chloroplast genome of tobacco cells, partly because those plan...logy company called Chlorogen to apply his work in chloroplast genetic engineering. In 2004, he won UCF's Pegasus Professor Award, the top honor given to a faculty...
Defense peptide found in primates may block some human HIV transmissions
...he retrocyclin gene would be incorporated into the chloroplast genome of tobacco cells before the plants grow. Daniell has developed a similar approach to growing anthrax vaccine in tobacco plants. An inexpensive way to produce the drug with only a small amount of tobacco would help to make it accessible in ar...
...uantities of glycosylate recombinant proteins. The chloroplast is a cell organ with great capacity for storing pr... Once the presence of this type of protein in the chloroplast was ascertained, the scientists asked themselves if it were the cell organ itself that was glycosyla...
Deep in the ocean, a clam that acts like a plant
... get their energy and carbon via photosynthesis by chloroplast symbionts, this clam gets its energy via chemosynthesis," said Jonathan Eisen, a professor at the UC Davis Genome Center and an author on the paper. The actual work of photosynthesis in green plants is done by chloroplasts, descended from primitiv...
Discovery of 'master switch' for the communication process between chloroplast and nuclei of plants
...ls of stress due to lack of water or salinity from chloroplast to nucleus. They know that chloroplasts -- the cellu...ory’s lab. Many of the nuclear genes that encode chloroplast proteins are regulated by a "master switch" in response to environmental conditions. This "master sw...
What is Scribble?
Scribble is a language for describing application-level protocols among communicating systems. A protocol represents an agreement on how participating systems interact with each other. Without a protocol, it is hard to have a meaningful interaction: participants simply cannot communicate effectively, since they do not know when to expect the other parties to send their data, or whether the other party is ready to receive a datum it is sending. In fact, it is not even clear what kinds of data are to be used for each interaction. It is too costly to carry out communications based on guesswork and with inevitable communication mismatches (synchronisation bugs). Simply put, it is not feasible as an engineering practice.
Scribble presents a stratified description language:
- The bottom layer is the type layer, in which we describe the bare skeleton of conversation structures as types for interactions (known in the literature as session types).
- The assertion layer allows elaboration of a type-layer description using assertions.
- Finally the third layer, protocol document layer, allows description of multiple protocols and constraints over them.
Each layer offers distinct behavioural assurance for validated programs.
How can it be used?
The development and validation of programs against protocol descriptions could proceed as follows:
- A programmer specifies a set of protocols to be used in her application.
- She can verify that those protocols are valid, free from livelocks and deadlocks.
- She develops her application referring to those protocols, potentially using communication constructs available in the chosen programming language.
- She validates her programs against protocols using a protocol checker, which detects lack of conformance.
- At the execution time, a local monitor can validate messages with respect to given protocols, optionally blocking invalid messages from being delivered. | <urn:uuid:8f1f5b18-47f3-476d-a5e5-85b2a7608598> | 3.28125 | 360 | Documentation | Software Dev. | 24.845956 |
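The final step above can be sketched in a few lines of Python. This is a toy illustration invented for this text, not the Scribble toolchain: the protocol is flattened to an expected sequence of (sender, receiver, label) messages, and a local monitor rejects anything that deviates from that sequence.

```python
# Toy conversation monitor (illustrative only; not real Scribble tooling).
# A protocol is modelled as the expected sequence of messages; the monitor
# checks each observed message against the next expected one.

class ProtocolViolation(Exception):
    pass

class Monitor:
    def __init__(self, protocol):
        self.protocol = list(protocol)  # [(sender, receiver, label), ...]
        self.position = 0

    def observe(self, sender, receiver, label):
        """Validate one message; raise if it deviates from the protocol."""
        if self.position >= len(self.protocol):
            raise ProtocolViolation("protocol already complete")
        expected = self.protocol[self.position]
        if (sender, receiver, label) != expected:
            raise ProtocolViolation(f"expected {expected}, got {(sender, receiver, label)}")
        self.position += 1

    def complete(self):
        return self.position == len(self.protocol)

# A two-party greeting protocol: Client sends Hello, Server replies Ack.
greeting = [("Client", "Server", "Hello"), ("Server", "Client", "Ack")]

m = Monitor(greeting)
m.observe("Client", "Server", "Hello")
m.observe("Server", "Client", "Ack")
print(m.complete())  # True
```

A real monitor would be generated from a validated protocol description rather than written by hand, and could buffer or drop invalid messages instead of raising.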
Photo courtesy Agricultural Biotechnology Council
What is biotechnology?
Biotechnology is the name that has been given to a very wide range of agricultural, industrial and medical technologies that make use of living organisms (e.g., microbes, plants or animals) or parts of living organisms (e.g., isolated cells or proteins) to provide new products and services.
Biotechnology's origins lie in the ancient crafts of brewing, baking and the production of fermented foods such as yoghurt and cheese. It was not until 1859 that microbes were identified as the cause of both desirable and undesirable changes in food. Louis Pasteur provided a scientific understanding of these natural processes, which helped to improve the reliability of traditional fermentations and ensure the safe preservation of food and drink.
Pasteur thought that microbes were always needed to bring about the changes which occur during fermentation. Towards the end of the last century, however, it was realised that non-living extracts from, for example, yeast cells could also cause changes that are normally associated with the activities of whole organisms. These extracts were named 'enzymes' (literally, 'in yeast'). We now know that all living things produce enzymes - proteins that are responsible for many of the processes of life.
During the 1940s, methods of growing microbes in large fermenter vessels were developed for the production of penicillin and other antibiotics used in medicine. Today this fermenter technology permits the commercial production of a wide range of products. These include enzymes for food and drink production processes, vitamins, amino acids and other useful chemicals.
Brewers have always maintained their own strains of yeast for beer production and similarly the producers of enzymes and other fermentation products nurture specially-selected strains of production organisms. These strains have inherited (i.e., genetic) characteristics that improve their performance. The traditional method of developing new strains involves laborious testing of populations of microbes to detect naturally-occurring genetic variants with useful properties.
In 1973, two scientists, Stanley Cohen and Herbert Boyer, managed for the first time to make very specific changes to the genetic make-up (i.e., the DNA) of microbes by means of 'genetic engineering' (also called genetic modification). The techniques developed using microbes have since been applied to plants and animals, and in a limited way they have also been applied to humans in an attempt to alleviate the symptoms of inherited illness.
Although the term biotechnology refers to a much older and broader technology than genetic engineering, the techniques of genetic engineering are of such importance that the two terms have become virtually synonymous, particularly in the USA.
You can read more about the history of biotechnology in Robert Bud's authoritative study 'The uses of life. A history of biotechnology' (1994) Cambridge University Press. ISBN: 0 521 47699 2 [Paperback].
What is plant biotechnology?
Traditional plant breeding is a relatively slow and labour-intensive process: if two parental plants are crossed, the seeds from them must be collected, planted and the resulting plants cultivated before the results of the cross can be seen. Furthermore, plant breeders must work with whole sets of inherited characteristics. Consequently a cross to introduce a desirable characteristic is likely to introduce one or more undesirable characteristics as well; and these must then be painstakingly 'bred out'. The techniques of biotechnology (including genetic modification) can be used to speed up the process and improve the precision of plant breeding compared with conventional methods such as random genetic changes introduced by radiation.
What are the main current applications of plant genetic modification?
The majority of current plant biotechnology is directed towards the improvement of food plants; the remaining work is concerned with non-food crops such as cotton, tobacco, ornamental plants and pharmaceuticals. The initial emphasis has generally been on the improvement of qualities of value to the farmer. Most of this work has been initiated and funded by the seed industry. The second and third generations of genetically-modified food plants will bring benefits that more directly affect commercial food processors and consumers. Many thousands of field trials of genetically-modified plants have been carried out world-wide.
Although several different modified crops are grown in the USA and elsewhere, none have so far been approved for commercial production in the UK. This means that, in Britain, all food derived from GM crops is imported. Only a handful of GM-derived products have been approved for food use in the EU: processed soya derivatives such as lecithin; oil from oil seed rape; processed tomato purée and maize. No fresh GM products (such as tomatoes, potatoes or unprocessed soya beans) have been approved for human consumption in the EU. The only GM crop currently grown to any extent (and then only in limited amounts) in the EU is maize, which is produced for animal feed.
What are the main techniques of plant biotechnology?
Plant tissue culture is the cultivation of plant cells or tissues on specially-formulated nutrient media. In appropriate conditions, an entire plant can be regenerated from each single cell, permitting the rapid production of many identical plants. Tissue culture is an essential tool in modern plant breeding. Since it was first developed in the early 1960s, plant tissue culture has become the basis of a major industry, providing high-value plants for nurseries. Where a crop (e.g., banana) does not produce seeds, plants derived from tissue culture are sometimes planted directly in farmers' fields. However this is rarely done for seed-bearing species - more often tissue-cultured plants are used to produce the seeds from which crops are subsequently grown. Over the years, several attempts have been made to cultivate plant cells in fermenter vessels with the aim of producing valuable products such as medicines and natural food flavourings. To date, success in this area has been limited. Cell culture is also used for the conservation of those plant varieties that cannot be maintained in a normal seed bank.
Genetic engineering is the controlled modification of genetic material (DNA) by artificial means. It relies upon scientists' ability to isolate specific stretches of DNA using specialised enzymes which cut the DNA at precise locations. Selected DNA fragments can then be transferred into plant cells. This can be done in several ways.
The best-established gene-transfer method for plants uses a soil bacterium as a go-between. This organism, Agrobacterium, has a natural ability to alter the genetic material of plant cells so that outgrowths (or galls) are formed on the plant. Biologists have adapted the mechanism used by Agrobacterium so that desirable genetic information, rather than that which promotes the formation of galls, is transferred into plants. The Agrobacterium method has been used successfully with a wide variety of plants and has proved particularly useful for the modification of tree species which, because they are large and slow-growing, are difficult to alter by conventional breeding. However, the most important cereal crops are not affected by Agrobacterium, so other mechanisms have to be used for them.
Ballistic impregnation is an unlikely-sounding method that has achieved some success with cereals and other crops. It involves sticking the DNA to be introduced into the plant onto minute gold or tungsten particles, then firing these (like bullets) into the plant tissue. A proportion of the plant cells treated in this way take up the DNA from the metal pellets. Whole plants are then re-grown from the cells by tissue culture.
Electroporation works best with plant tissues that have no cell walls (such as the tubes which develop from pollen grains). Micro- to millisecond pulses of a strong electric field cause minute pores to appear momentarily in the plant cells, allowing DNA to enter from a surrounding solution.
A more recent yet similar method uses microscopic crystals to puncture holes in the plant cells, again allowing DNA to enter them. Another novel method involves the direct injection of DNA into chloroplasts, which have their own DNA. Chloroplast DNA is usually found only in the female parts of plants, and not in pollen. This means that plants modified using this technique cannot transfer their introduced genes through pollen.
With all current transfer techniques, only a small proportion of the treated cells successfully incorporate the novel DNA. Therefore, so-called marker genes are usually linked to the DNA fragments before their transfer. These marker genes can then be detected easily, enabling scientists to see whether transfer of the desired DNA has taken place. To date, the main markers used have been genes which allow the plant to grow in the presence of a specific antibiotic or herbicide. Other marker systems are being developed.
Antisense technology is used to 'neutralise' the action of specific undesirable genes (such as those involved in the excessive softening of fruit). The same technique can be utilised to combat the activity of plant viruses, providing a means of controlling viral infection. Antisense technology lies behind many of the current applications of plant biotechnology.
Although genetic engineering receives much attention, the application of genetic mapping to plant breeding is of equal significance. By determining the location and likely action of many plant genes, conventional plant breeding is being conducted with greater precision, as it becomes possible to detect quickly and exactly those plants which carry desirable characteristics.
Renewable Energy - Pros and Cons
During the 1960s and 1970s people began to fear that the main source of energy, the fossil fuels, would run out. Fossil fuels are finite, or non-renewable: once they are used they cannot be replaced. In recent years, people realised that there were enough fossil fuels to last for several hundred years. A greater problem was the damage that was being done to our atmosphere. This has led to interest in renewable energy sources that do very little damage to the environment. These are some of the main renewable energy sources:
Hydroelectric Power (HEP)
Dams are built to control fast flowing rivers so that the water can be used to turn turbines to generate electricity. At times when the energy is not needed, the water can be pumped back up to the storage reservoir.
| Advantages of Hydroelectricity | Disadvantages of Hydroelectricity |
| --- | --- |
| Abundant, clean, and safe | Can have a significant environmental impact |
| Easily stored in reservoirs | People can lose their homes |
| Offers recreational benefits like boating, fishing, etc | Can be used only where there is a water supply |
Barrages can be built across estuaries to use tidal flows to generate electricity.
| Advantages of Tidal Power | Disadvantages of Tidal Power |
| --- | --- |
| Abundant, clean, and safe | Not commercially viable at present |
| | Shipping could be disrupted |
The sun's warmth can be used to heat water and buildings. Solar cells can convert sunlight into electricity.
| Advantages of Solar Power | Disadvantages of Solar Power |
| --- | --- |
| No water or air pollution | Reliability depends on sunlight |
| | Not really cost effective at present |
| | Storage and back-up are necessary |
Biomass is the oldest of the renewable energy sources and, in Ireland, its main use is as wood fuel. Another source of Biomass energy comes from the production of biogas. Municipal solid waste, agricultural waste and sewage sludge break down to produce Methane. This methane can be collected in tanks and burned to produce heat.
| Advantages of Biomass | Disadvantages of Biomass |
| --- | --- |
| Can be used to burn waste products | Burning biomass can result in air pollution |
| | May not be cost effective |
Water is pumped through hot rocks under the ground. The hot water can be used to heat buildings and any steam produced can be used to generate electricity. Low temperature geothermal energy, found in Ireland, can be tapped using heat pump technology.
| Advantages of Geothermal Energy | Disadvantages of Geothermal Energy |
| --- | --- |
| An unlimited supply of energy | Best supplies limited to certain areas of the world |
| Produces no air or water pollution | Start-up costs are expensive |
| | Corrosion of pipes can be a problem |
Wind
Tall wind turbines on wind farms can use the power of the wind to generate electricity. Eleven wind farms are now operational in Ireland. These have a combined capacity of 68 MW - enough electricity for over 44,000 homes.
| Advantages of Wind energy | Disadvantages of Wind energy |
| --- | --- |
| Produces no water or air pollution | The wind farms can have a significant visual impact |
| Farmers can receive an income from any electricity generated and the land can have other uses | Wind farms need a lot of land |
| Wind farms are relatively cheap to build | |
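As a quick sanity check on the wind figures quoted above (68 MW serving over 44,000 homes), the implied average supply per home can be worked out. Note that installed capacity overstates a wind farm's average output (the wind does not blow all the time), so this is an upper bound, not a measured figure:

```python
# Implied average electricity supply per home, from the quoted figures.
capacity_watts = 68e6   # 68 MW of installed wind capacity in Ireland
homes = 44_000          # homes the text says this can supply

watts_per_home = capacity_watts / homes
print(round(watts_per_home))  # 1545 -> roughly 1.5 kW per home at full output
```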
Questions on Renewable Energy
1 The renewable energy sources can vary from area to area. This depends on the type of climate found in the area or the underlying geology or the shape of the land. What types of energy could be used in the following areas:
(a) A wet mountainous region with few people.
(b) A flat windswept region.
(c) A dry, hot region without much vegetation.
(d) A flat agricultural region beside a major city.
2 Study the tables below. What is the most important renewable energy source in Ireland? Why do you think this source is so important? Be sure to answer fully, using information from the tables.
3 What types of renewable energy could be used in the following countries:
(a) Mali in West Africa
(b) Iceland or Japan (Both are on Plate margins)
(c) The Orkney Islands, north of Scotland
4 Look at the advantages and disadvantages of the renewable energy sources and try to decide which have the most benefits and least problems. Why do you think these sources of energy are not used much more than they are?
5 Using the Internet, your textbooks, newspapers and any other sources of information you may have, make a study of renewable energy sources. Are there any in your local area? Many areas have old water wheels or even windmills. What were these used for? Are there any plans to develop renewable energy sources in your area? | <urn:uuid:8daf9b8e-d271-4da4-87fa-3c465ec53cd8> | 3.203125 | 1,023 | Tutorial | Science & Tech. | 36.775565 |
Massive Smash-Up at Vega
This artist's concept illustrates how massive objects, perhaps as large as the planet Pluto, smashed together to create the dust ring around the nearby star Vega. New observations from NASA's Spitzer Space Telescope indicate the collision took place within the last one million years. Astronomers think that embryonic planets smashed together, shattered into pieces, and repeatedly crashed into other fragments to create ever finer debris.
In the image, a collision is seen between massive objects that measured up to 2,000 kilometers (about 1,200 miles) in diameter. Scientists say the big collision initiated subsequent collisions that created dust particles around the star that were a few microns in size. Vega's intense light blew these fine particles to larger distances from the star, and also warmed them to emit heat radiation that can be detected by Spitzer's infrared detectors. | <urn:uuid:82577950-2996-4f93-aeaf-a4fd84278b17> | 3.84375 | 174 | Knowledge Article | Science & Tech. | 32 |
There are several cases where Python statements are illegal when used in conjunction with nested scopes that contain free variables.
If a variable is referenced in an enclosing scope, it is illegal to delete the name. An error will be reported at compile time.
If the wild card form of import -- "import *" -- is used in a function and the function contains or is a nested block with free variables, the compiler will raise a SyntaxError.
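The wildcard-import restriction can be observed directly by asking the compiler. Note that modern Python 3 tightened the rule further, so `import *` is rejected inside any function, with or without free variables:

```python
# Compiling a function body that contains a wildcard import fails at
# compile time; no code is ever executed.
src = """
def f():
    from os import *
    return getcwd()
"""

try:
    compile(src, "<demo>", "exec")
    outcome = "compiled"
except SyntaxError:
    outcome = "SyntaxError"

print(outcome)  # SyntaxError
```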
If exec is used in a function and the function contains or is a nested block with free variables, the compiler will raise a SyntaxError unless the exec explicitly specifies the local namespace for the exec. (In other words, "exec obj" would be illegal, but "exec obj in ns" would be legal.)
The builtin functions eval() and input() can not access free variables unless the variables are also referenced by the program text of the block that contains the call to eval() or input().
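The eval() rule is easy to demonstrate, and the behaviour below still holds in modern Python 3: a name from an enclosing scope is only visible to eval() inside a nested function if the nested function's own text references it, making it a free variable:

```python
def outer_without_ref():
    x = 42
    def inner():
        # 'x' is never referenced here, so it is not a free variable
        # of inner() and eval() cannot see it.
        return eval("x")
    return inner()

def outer_with_ref():
    x = 42
    def inner():
        x  # a bare reference is enough to make x a free variable
        return eval("x")
    return inner()

try:
    outer_without_ref()
    print("visible")
except NameError:
    print("NameError")   # this branch is taken

print(outer_with_ref())  # 42
```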
Compatibility note: The compiler for Python 2.1 will issue warnings for uses of nested functions that will behave differently with nested scopes. The warnings will not be issued if nested scopes are enabled via a future statement. If a name is bound in a function scope and the function contains a nested function scope that uses the name, the compiler will issue a warning. The name resolution rules will result in different bindings under Python 2.1 than under Python 2.2. The warning indicates that the program may not run correctly with all versions of Python.
An Earth-approaching asteroid discovered in 2002 will be passing through the neighborhood Sunday night (July 22). 2002 AM31, a leftover fragment of rock from the solar system’s youth, will fly safely by Earth at a distance of 3.2 million miles (13.7 times the distance to the moon) around 8 p.m. (CDT) July 22.
With a diameter estimated between 1,115 and 2,600 feet, 2002 AM31 is bigger than many close-approaching asteroids. Pity it won't be very bright – only 14th magnitude – but savvy amateur astronomers with 10-inch or larger telescopes can track it in the northern sky as it creeps through the constellation Cepheus. To get a list of its hour-by-hour positions that you can plot on a star atlas, click HERE and then click the “Generate Ephemeris” button.
If you don’t have the equipment, no worries. The SLOOH Space Camera will broadcast the asteroid flyby live beginning 6:30 p.m. (CDT) Sunday. Because of its relatively small size and distance, 2002 AM31 will look like a “star” moving across a field of background stars. I’ve watched these webcasts before and they’re a lot of fun. You not only feel like you’re “right there” in real time, but you’ll learn a lot about asteroids from the accompanying commentary. | <urn:uuid:4164aafc-5556-4b19-b4bb-aebc1348941a> | 3.265625 | 310 | Personal Blog | Science & Tech. | 74.033929 |
Thomas, J.A., Telfer, M.C., Roy, D.B., Preston, C.D., Greenwood, J. J. D., Asher, J., Fox, R., Clarke, R. T. and Lawton, J.H., 2004. Comparative Losses of British Butterflies, Birds, and Plants and the Global Extinction Crisis. Science, 303 (5665), pp. 1879-1881.
Full text not available from this repository.
Official URL: http://web.ebscohost.com/ehost/detail?vid=6&hid=10...
There is growing concern about increased population, regional, and global extinctions of species. A key question is whether extinction rates for one group of organisms are representative of other taxa. We present a comparison at the national scale of population and regional extinctions of birds, butterflies, and vascular plants from Britain in recent decades. Butterflies experienced the greatest net losses, disappearing on average from 13% of their previously occupied 10-kilometer squares. If insects elsewhere in the world are similarly sensitive, the known global extinction rates of vertebrate and plant species have an unrecorded parallel among the invertebrates, strengthening the hypothesis that the natural world is experiencing the sixth major extinction event in its history.
|Subjects:||Geography and Environmental Studies; Science > Biology and Botany|
|Group:||School of Applied Sciences > Centre for Conservation, Ecology and Environmental Change|
|Deposited By:||INVALID USER|
|Deposited On:||27 Nov 2008 18:26|
|Last Modified:||07 Mar 2013 14:58|
XPath is a language for finding information in documents. XPath is generally used to navigate through elements and attributes in an XML document but can also be used as a powerful query language.
Why does XPath matter? XPath is a major element in the W3C's XSLT standard - and XQuery and XPointer are both built on XPath expressions. Combined with AJAX, XPath can be used to reshape / augment your HTML code using the DOM on the fly.
Have you ever wondered why there is no built-in support for XPath? Actually, there is: Mozilla has pretty solid support for DOM Level 3 XPath through the document.evaluate method:
document.evaluate(expression, contextNode, resolver, type, result);
By default, this method returns an iterator, which can be worked through as follows:

var item;
while ((item = iterator.iterateNext())) {
    // do something with item
}

The iterator will return null once all items are exhausted. By modifying the type parameter, you can make the method return other types, such as string, boolean, number, and a snapshot. A snapshot is somewhat like an iterator, except that the DOM is free to change while the snapshot still exists. If you try to do the same with an iterator, it will throw an exception.
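Outside the browser, the same query style is widely available. As a rough illustration (using Python's standard library, which implements only a limited XPath subset, not the full DOM Level 3 API described above; the document and element names here are invented for the demo):

```python
import xml.etree.ElementTree as ET

# A small document to query.
doc = ET.fromstring(
    "<library>"
    "<book lang='en'><title>Dubliners</title></book>"
    "<book lang='fr'><title>Candide</title></book>"
    "</library>"
)

# XPath-style query: titles of all English-language books.
titles = [t.text for t in doc.iterfind("./book[@lang='en']/title")]
print(titles)  # ['Dubliners']
```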
In Internet Explorer, XPath can be used in the following two cases:

1) As a call to an Msxml.DOMDocument object, created using the new ActiveXObject statement.
The astrophysically important wavelength region below ~ 1200 Å is still relatively unexplored, at least at low redshift where restframe observations must be obtained from space. Prior to the launch of FUSE (Moos et al. 2000), far-ultraviolet (far-UV) studies were limited to bright objects. The earliest spectral data for bright stars were obtained by Copernicus (Rogerson et al. 1973) and ORFEUS (Grewing et al. 1991), and with the UV spectrometers on the Voyager 1 and 2 spacecraft (Longo et al. 1989). Voyager 2 also succeeded in recording a far-UV spectrum of M33 (Keel 1998). HUT (Davidsen 1993) was the first instrument sensitive enough to collect spectra of faint galaxies below Ly-α. The instrument was flown on two missions and generated a rich database of far-UV spectra of actively star-forming and starburst galaxies. Subsequently, FUSE, with its superior resolution and sensitivity, fully opened the far-UV window to starburst galaxies. Most of this review deals with results obtained with FUSE and, to a smaller degree, with HUT.
Short Summaries of Articles about Mathematics in the Popular Press
"Error-Correcting Code Keeps Quantum Computers on Track," by by Barry Cipra.Science, 12 April 1996, page 199.
Today quantum computers are only a theoretical possibility, but they have the potential to surpass traditional computers by performing vast numbers of computations simultaneously. One difficulty researchers have found is that the error-correcting schemes developed for traditional computers would not work on quantum computers. This article describes the work of Peter Shor, who has discovered a new breed of error-correcting scheme that, at least in theory, would work on a quantum computer.
Invasive comb jellyfish (M leidyi)
The American comb jellyfish (Mnemiopsis leidyi) invaded the Black and Caspian Seas in large numbers in the 1980s. Its presence and distribution led to major changes in the marine ecosystem and resulted in economic losses, due to a decline in fish and shellfish stocks.

In 2006 this species was detected in several locations in the southern North Sea, supposedly transferred from the North American east coast through ships' ballast water.

The potential spread of M. leidyi in the "2-Seas area" (the southern North Sea and the Channel) is a major concern because of the presence of important spawning and nursery grounds and migration routes for many commercial fish and shellfish.

The impact of the interaction of M. leidyi with potential prey and predators must be closely monitored to avoid repetition of the events seen in the 1980s.
The MEMO project
This project - the acronym of which stands for "Mnemiopsis Ecology, Modelling and Observation" - aims to study this invasive comb jelly in the southern North Sea and the Channel.

It is being funded by the EU's Interreg IVa - 2-Seas programme. A budget of €3.5 million has been allocated to five scientific partners.

The ultimate goal is to raise awareness about the potential risk of M. leidyi on marine ecosystems and professional activities in the 2-Seas region, and to identify possible measures to counter this threat.
The project is split into three activities:
- Development of standard procedures for the identification, monitoring and modelling of potential habitats and population dynamics of M. leidyi (ILVO to lead)
- Study of the physiology, feeding behaviour and potential prey and predators of the species through experiments and mathematical models (IFREMER to lead)
- Evaluation of the potential environmental and socio-economic costs of the impact of this species using an ecosystem-based approach (Cefas to lead).
During the three years of the project, which began on 1 January 2011, the partners also hope to improve and standardise monitoring capabilities among themselves. Through their cross-border co-operation, they will exchange expertise and knowledge on taxonomy, databases, data analysis and modelling techniques.

For more about the MEMO project visit www.ilvo.vlaanderen.be/memo.
- The American comb jellyfish (Mnemiopsis leidyi, a member of the phylum Ctenophora) measures up to 12cm, although in the 2-Seas area it has been observed to be around
- It is a voracious animal that feeds on fish eggs, larvae and plankton. It appears to need little energy.
- It is a hermaphrodite with a reproductive cycle of about two
- It has been found to survive in the North Sea during cold winters.

For more information about this species visit:
Weather and climate are different. Weather varies tremendously from day to day, week to week, season to season. Climate, on the other hand is average weather over a period of years; it can be thought of as the boundary conditions on the variability of weather. We might get an extreme cold snap, or a heatwave at a particular location, but our knowledge of the local climate tells us that these things are unusual, temporary phenomena, and sooner or later things will return to normal. Forecasting the weather is therefore very different from forecasting changes in the climate. One is an initial value problem, and the other is a boundary value problem. Let me explain.
Good weather forecasts depend upon an accurate knowledge of the current state of the weather system. You gather as much data as you can about current temperatures, winds, clouds, etc., feed it all into a simulation model and then run it forward to see what happens. This is hard because the weather is an incredibly complex system. The amount of information needed is huge: both the data and the models are incomplete and error-prone. Despite this, weather forecasting has come a long way over the past few decades. Through a daily process of generating forecasts, comparing them with what happened, and thinking about how to reduce errors, we have incredibly accurate 1- and 3-day temperature forecasts. Accurate forecasts of rain, snow, and so on for a specific location are a little harder because of the chance that the rainfall will be in a slightly different place (e.g. a few kilometers away) or at a slightly different time than the model forecasts, even if the overall amount of precipitation is right. Hence, daily forecasts give fairly precise temperatures, but put probabilistic values on things like rain (Probability of Precipitation, PoP), based on knowledge of the uncertainty factors in the forecast. The probabilities are known because we have a huge body of previous forecasts to compare with.
The limit on useful weather forecasts seems to be about one week. There are inaccuracies and missing information in the inputs, and the models are only approximations of the real physical processes. Hence, the whole process is error prone. At first these errors tend to be localized, which means the forecast for the short term (a few days) might be wrong in places, but is good enough in most of the region we’re interested in to be useful. But the longer we run the simulation for, the more these errors multiply, until they dominate the computation. At this point, running the simulation for longer is useless. 1-day forecasts are much more accurate than 3-day forecasts, which are better than 5-day forecasts, and beyond that it’s not much better than guessing. However, steady improvements mean that 3-day forecasts are now as accurate as 2-day forecasts were a decade ago. Weather forecasting centres are very serious about reviewing the accuracy of their forecasts, and set themselves annual targets for accuracy improvements.
A number of things help in this process of steadily improving forecasting accuracy. Improvements to the models help, as we get better and better at simulating physical processes in the atmosphere and oceans. Advances in high performance computing help too – faster supercomputers mean we can run the models at a higher resolution, which means we get more detail about where exactly energy (heat) and mass (winds, waves) are moving. But all of these improvements are dwarfed by the improvements we get from better data gathering. If we had more accurate data on current conditions, and could get it into the models faster, we could get big improvements in the forecast quality. In other words, weather forecasting is an “initial value” problem. The biggest uncertainty is knowledge of the initial conditions.
One result of this is that weather forecasting centres (like the UK Met Office) can get an instant boost to forecasting accuracy whenever they upgrade to a faster supercomputer. This is because the weather forecast needs to be delivered to a customer (e.g. a newspaper or TV station) by a fixed deadline. If the models can be made to run faster, the start of the run can be delayed, giving the meteorologists more time to collect newer data on current conditions, and more time to process this data to correct for errors, and so on. For this reason, the national weather forecasting services around the world operate many of the world’s fastest supercomputers.
Hence weather forecasters are strongly biased towards data collection as the most important problem to tackle. They tend to regard computer models as useful, but of secondary importance to data gathering. Of course, I’m generalizing – developing the models is also a part of meteorology, and some meteorologists devote themselves to modeling, coming up with new numerical algorithms, faster implementations, and better ways of capturing the physics. It’s quite a specialized subfield.
Climate science has the opposite problem. Using pretty much the same model as for numerical weather prediction, climate scientists will run the model for years, decades or even centuries of simulation time. After the first few days of simulation, the similarity to any actual weather conditions disappears. But over the long term, day-to-day and season-to-season variability in the weather is constrained by the overall climate. We sometimes describe climate as “average weather over a long period”, but in reality it is the other way round – the climate constrains what kinds of weather we get.
For understanding climate, we no longer need to worry about the initial values; instead we have to worry about the boundary values. These are the conditions that constrain the climate over the long term: the amount of energy received from the sun, the amount of energy radiated back into space from the earth, the amount of energy absorbed or emitted from oceans and land surfaces, and so on. If we get these boundary conditions right, we can simulate the earth’s climate for centuries, no matter what the initial conditions are. The weather itself is a chaotic system, but it operates within boundaries that keep the long term averages stable. Of course, a particularly weird choice of initial conditions will make the model behave strangely for a while, at the start of a simulation. But if the boundary conditions are right, eventually the simulation will settle down into a stable climate. (This effect is well known in chaos theory: the butterfly effect expresses the idea that the system is very sensitive to initial conditions, and attractors are what cause a chaotic system to exhibit a stable pattern over the long term)
To handle this potential for initial instability, climate modellers create “spin-up” runs: pick some starting state, run the model for say 30 years of simulation, until it has settled down to a stable climate, and then use the state at the end of the spin-up run as the starting point for science experiments. In other words, the starting state for a climate model doesn’t have to match real weather conditions at all; it just has to be a plausible state within the bounds of the particular climate conditions we’re simulating.
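Both ideas, sensitivity to initial conditions and stability of long-run statistics after a spin-up, show up in the classic Lorenz-63 toy system. This is a standard illustration of chaos, not the model the forecasting centres actually run:

```python
# Lorenz-63, integrated with a crude Euler scheme; a standard toy
# chaotic system, not an actual weather or climate model.
def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def run(state, n_steps):
    trajectory = []
    for _ in range(n_steps):
        state = lorenz_step(state)
        trajectory.append(state)
    return trajectory

a = run((1.0, 1.0, 1.0), 5000)
b = run((1.000001, 1.0, 1.0), 5000)  # perturb the initial x by 1e-6

# "Weather" is an initial value problem: the trajectories decorrelate.
print(abs(a[-1][0] - b[-1][0]))      # typically order 1, not order 1e-6

# "Climate" is a boundary value problem: discard a spin-up period and
# the long-run statistics of the two runs agree closely.
spin_up = 1000
mean_a = sum(s[2] for s in a[spin_up:]) / (len(a) - spin_up)
mean_b = sum(s[2] for s in b[spin_up:]) / (len(b) - spin_up)
print(mean_a, mean_b)                # both near the attractor's mean z
```

The "weather" (the exact trajectory) is unpredictable after a tiny perturbation, but the "climate" (time-averaged statistics over the attractor, after discarding the spin-up) is essentially unchanged.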
To explore the role of these boundary values on climate, we need to know whether a particular combination of boundary conditions keep the climate stable, or tend to change it. Conditions that tend to change it are known as forcings. But the impact of these forcings can be complicated to assess because of feedbacks. Feedbacks are responses to the forcings that then tend to amplify or diminish the change. For example, increasing the input of solar energy to the earth would be a forcing. If this then led to more evaporation from the oceans, causing increased cloud cover, this could be a feedback, because clouds have a number of effects: they reflect more sunlight back into space (because they are whiter than the land and ocean surfaces they cover) and they trap more of the surface heat (because water vapour is a strong greenhouse gas). The first of these is a negative feedback (it reduces the surface warming from increased solar input) and the second is a positive feedback (it increases the surface warming by trapping heat). To determine the overall effect, we need to set the boundary conditions to match what we know from observational data (e.g. from detailed measurements of solar input, measurements of greenhouse gases, etc). Then we run the model and see what happens.
Observational data is again important, but this time for making sure we get the boundary values right, rather than the initial values. Which means we need different kinds of data too – in particular, longer term trends rather than instantaneous snapshots. But this time, errors in the data are dwarfed by errors in the model. If the algorithms are off even by a tiny amount, the simulation will drift over a long climate run, and it stops resembling the earth’s actual climate. For example, a tiny error in calculating where the mass of air leaving one grid square goes could mean we lose a tiny bit of mass on each time step. For a weather forecast, the error is so small we can ignore it. But over a century long climate run, we might end up with no atmosphere left! So a basic test for climate models is that they conserve mass and energy over each timestep.
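The mass-conservation point is easy to quantify. Suppose the numerics leak a tiny fixed fraction of the atmosphere's mass every timestep (the leak rate and timestep below are made up purely for illustration):

```python
# How a tiny per-timestep mass leak compounds over a run. The leak
# rate and timestep are illustrative; real model errors differ in
# both form and size.
leak_per_step = 1e-6      # fraction of atmospheric mass lost each step
steps_per_day = 48        # e.g. a 30-minute timestep

def mass_left(days):
    """Fraction of the original atmospheric mass remaining."""
    return (1 - leak_per_step) ** (steps_per_day * days)

print(mass_left(5))          # 5-day forecast: loss is negligible
print(mass_left(100 * 365))  # century run: most of the atmosphere is gone
```

The same defect that is invisible in a forecast wrecks a century-long run, which is why conservation checks are a basic climate-model test.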
Climate models have also improved in accuracy steadily over the last few decades. We can now use the known forcings over the last century to obtain a simulation that tracks the temperature record amazingly well. These simulations demonstrate the point nicely. They don’t correspond to any actual weather, but show patterns in both small and large scale weather systems that mimic what the planet’s weather systems actually do over the year (look at August – see the daily bursts of rainfall in the Amazon, the gulf stream sending rain to the UK all summer long, and the cyclones forming off the coast of Japan by the middle of the month). And these patterns aren’t programmed into the model – it is all driven by sets of equations derived from the basic physics. This isn’t a weather forecast, because on any given day, the actual weather won’t look anything like this. But it is an accurate simulation of typical weather over time (i.e. climate). And, as was the case with weather forecasts, some bits are better than others – for example the Indian monsoons tend to be less well-captured than the North Atlantic Oscillation.
At first sight, numerical weather prediction and climate models look very similar. They model the same phenomena (e.g. how energy moves around the planet via airflows in the atmosphere and currents in the ocean), using the same computational techniques (e.g., three dimensional models of fluid flow on a rotating sphere). And quite often they use the same program code. But the problems are completely different: one is an initial value problem, and one is a boundary value problem.
Which also partly explains why a small minority of (mostly older, mostly male) meteorologists end up being climate change denialists. They fail to understand the difference in the two problems, and think that climate scientists are misusing the models. They know that the initial value problem puts serious limits on our ability to predict the weather, and assume the same limit must prevent the models being used for studying climate. Their experience tells them that weaknesses in our ability to get detailed, accurate, and up-to-date data about current conditions is the limiting factor for weather forecasting, and they assume this limitation must be true of climate simulations too.
Ultimately, such people tend to suffer from “senior scientist” syndrome: a lifetime of immersion in their field gives them tremendous expertise in that field, which in turn causes them to over-estimate how well their expertise transfers to a related field. They can become so heavily invested in a particular scientific paradigm that they fail to understand that a different approach is needed for different problem types. This isn’t the same as the Dunning-Kruger effect, because the people I’m talking about aren’t incompetent. So perhaps we need a new name. I’m going to call it the Dyson-effect, after one of its worst sufferers.
I should clarify that I’m certainly not stating that meteorologists in general suffer from this problem (the vast majority quite clearly don’t), nor am I claiming this is the only reason why a meteorologist might be skeptical of climate research. Nor am I claiming that any specific meteorologists (or physicists such as Dyson) don’t understand the difference between initial value and boundary value problems. However, I do think that some scientists’ ideological beliefs tend to bias them to be dismissive of climate science because they don’t like the societal implications, and the Dyson-effect disinclines them to finding out what climate science actually does.
I am, however, arguing that if more people understood this distinction between the two types of problem, we could get past silly soundbites about “we can’t even forecast the weather…” and “climate models are garbage in garbage out”, and have a serious conversation about how climate science works.
Update: Zeke has a more detailed post on the role of parameterizations in climate models.
Facts about Praseodymium
Facts about Praseodymium - Element included on the Periodic Table
Facts about the Definition of the Element Praseodymium
The Element Praseodymium is defined as...
A soft, silvery, malleable, ductile rare-earth element that develops a characteristic green tarnish in air. It occurs naturally with other rare earths in monazite and is used to color glass and ceramics yellow, as a core material for carbon arcs, and in metallic alloys.
Interesting Facts about the Origin and Meaning of the element name Praseodymium
What are the origins of the word Praseodymium?
The name originates from the Greek words 'prasios' meaning green and 'didymos' meaning twin.
Facts about the Classification of the Element Praseodymium
Praseodymium is classified as an element in the Lanthanide series, one of the "Rare Earth Elements", which can be located in Group 3 of the Periodic Table in the 6th and 7th periods. The Rare Earth Elements are divided into the Lanthanide and Actinide series. The elements in the Lanthanide series closely resemble lanthanum, and one another, in their chemical and physical properties. Their compounds are used as catalysts in the production of petroleum and synthetic products.
Brief Facts about the Discovery and History of the Element Praseodymium
Praseodymium was discovered by the Austrian chemist Baron Carl Auer von Welsbach in 1885
Occurrence of the element Praseodymium in Nature
Found in the rare earth minerals monazite and bastnasite
Common Uses of Praseodymium
Coloring glass and ceramics yellow, as a core material for carbon arcs, and in metallic alloys (see the definition above).
The Properties of the Element Praseodymium
Name of Element : Praseodymium
Symbol of Element : Pr
Atomic Number of Praseodymium : 59
Atomic Mass: 140.90765 amu
Melting Point: 935.0 °C - 1208.15 K
Boiling Point: 3127.0 °C - 3400.15 K
Number of Protons/Electrons in Praseodymium : 59
Number of Neutrons in Praseodymium : 82
Crystal Structure: Hexagonal
Density @ 293 K: 6.77 g/cm3
Color of Praseodymium : silvery
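The listed values are internally consistent, which a couple of lines of arithmetic can confirm:

```python
# Quick consistency checks on the property values listed above.
atomic_number = 59          # protons (and electrons, in the neutral atom)
atomic_mass = 140.90765     # amu; the only stable isotope is Pr-141

# Neutron count = mass number minus atomic number.
neutrons = round(atomic_mass) - atomic_number
print(neutrons)             # 82, matching the table

def c_to_k(celsius):
    """Convert a Celsius temperature to kelvin."""
    return celsius + 273.15

print(c_to_k(935.0))        # 1208.15 K, the listed melting point
```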
The element Praseodymium and the Periodic Table
Find out more facts about Praseodymium on the Periodic Table which arranges every chemical element according to its atomic number, as based on the periodic law, so that chemical elements with similar properties are in the same column. Our Periodic Table is simple to use - just click on the symbol for Praseodymium for additional facts and info and for an instant comparison of the Atomic Weight, Melting Point, Boiling Point and Mass - G/cc of Praseodymium with any other element. An invaluable source for more interesting facts and information about the Praseodymium element and as a Chemistry reference guide.
Facts and Info about the element Praseodymium - IUPAC and the Modern Standardised Periodic Table
The Standardised Periodic Table in use today was agreed by the International Union of Pure and Applied Chemistry, IUPAC, in 1985 which includes the Praseodymium element. The famous Russian scientist, Dmitri Mendeleev, perceived the correct classification method of "the periodic table" for the 65 elements which were known in his time. Praseodymium was discovered by the Austrian chemist Baron Carl Auer von Welsbach in 1885. The Standardised Periodic Table now recognises more periods and elements than Dmitri Mendeleev knew in his day but still all fitting into his concept of the "Periodic Table" in which Praseodymium is just one element that can be found.
Planet Earth - Autumn 2007
NERC's award-winning free magazine, Planet Earth, is aimed at non-specialists with an interest in environmental science.
This issue is no longer in print.
* Unless specified, all articles are less than 1MB in size.
Leader Setting priorities.
News Summer floods, Antarctic monkeys and the origin of bling.
Next generation science for Planet Earth NERC launches its new strategy.
Solar power The sun and Earth's climate.
Whale fall Dead whales provide a haven for life.
(Cover story) On being the wrong size Why big animals are in big trouble.
Fowl play Dominant males have lower sperm quality.
Releasing the strain Forecasting tsunami size.
Transatlantic tsunamis Could a large wave travel across an ocean?
Wildflower power How farmers can increase hay yields.
Science in the city Scientists team up with insurers.
The laws of nature Legal pitfalls facing researchers.
Survival of the fastest Male-killing drives rapid evolution.
The answer is blowing in the wind Explaining the UK's recent warm winters.
Seeing red by accident? On evolutionary bottlenecks.
Megacities: megapollution Tracking air pollution above Mexico City.
Seeing great sorrows Letters from missionaries link disease to climate.
Weathering: minerals, mud and microbes Weathering on a nano-scale. | <urn:uuid:a4910063-d7a4-4e4f-9557-dd5d3a3e7a75> | 2.953125 | 287 | Content Listing | Science & Tech. | 46.324365 |
Post #1 - Feb 24, 2013, 03:53 PM
Clarification of forces involved in electron shielding
In my chemistry class we just started doing stuff with ionization energy, atomic radius, etc., and I've heard the phrase "electron shielding" tossed around a lot. When I tried to look it up online, most places use "shield" as the verb describing this process, which is not very helpful. The most specific explanation I have gotten so far is that shielding has to do with the electrons in inner shells repelling those in the outer shells, diminishing the attraction of the nucleus.
My question is: Why is it only the inner shells that "shield" the valence electrons? Wouldn't the electrons in the valence shell "shield" each other much more because they are much closer (although this doesn't seem to be the case looking at a graph of atomic radius wrt atomic number, which shows that the atomic radius increases much more when a new shell is added than when another valence electron in the same shell is added)?
Thanks a lot, BritKnight.
Post #2 - Feb 24, 2013, 05:55 PM
To get effective shielding, the electrons should be "between" the nucleus and the orbital where you calculate the shielding. Inner electrons are better. Outer electrons have part of their orbitals outside, where they do not contribute to the shielding.
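One standard way to put numbers on this is Slater's rules, an empirical scheme in which each same-shell electron screens only about 0.35 of a nuclear charge, while each electron one shell down screens about 0.85 and deeper electrons screen essentially fully. (Slater's rules are not mentioned in the thread; the sketch below is simplified and ignores the special 1s case and d/f electrons.)

```python
# Simplified Slater's rules for an s- or p-valence electron.
# Coefficients: 0.35 per same-shell electron, 0.85 per (n-1)-shell
# electron, 1.00 per deeper electron. This omits the 1s special case
# and the rules for d/f electrons.
def screening(n_same, n_minus_1, n_deeper):
    """Screening constant S felt by one s/p electron in shell n."""
    return 0.35 * n_same + 0.85 * n_minus_1 + 1.00 * n_deeper

# Sodium (Z = 11): the valence 3s electron, config 1s2 2s2 2p6 3s1.
z = 11
s = screening(n_same=0, n_minus_1=8, n_deeper=2)
print(z - s)   # effective nuclear charge Z_eff, about 2.2
```

So even though valence electrons sit near each other, each one blocks far less of the nuclear charge than an inner electron does, because it is not between the nucleus and the electron being screened.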
Source: Origins, A NOVA Presentation: "Where are the Aliens?"
Scientists have been looking for extra-solar planets for decades, but only recently, with better equipment and improved techniques, have they finally unveiled new and unusual planets. Since 1995, over 155 planets have been discovered orbiting stars other than our Sun. In this video segment adapted from NOVA, two of the most successful planet-hunters discuss the search for extra-solar planets.
Looking for a planet outside our solar system is difficult, but not impossible. For years, scientists failed to find any extra-solar planets. However, since the discovery and confirmation of one such planet in 1995, the daunting task has become more possible. Once astronomers found a successful technique, they continued to discover extra-solar planets at an increasing rate.
The discovery of the first extra-solar planet confused the scientific community. A massive planet, roughly the size of Jupiter, seemed to be orbiting its star in just four days. Such a short orbit indicates that the planet is very close to the star, but according to the most accepted theory of solar system formation, such a massive planet should have a fairly large orbit. Our solar system neatly fits the formation theory, with smaller rocky planets near the Sun while larger gaseous planets reside in the cooler distant regions. It seemed impossible to have a Jupiter-sized planet orbiting so closely to its star. As the observations were confirmed, astronomers questioned their understanding of solar system formation. They have since realized that the formation theory may still be correct, but that massive planets may be able to migrate inwards some time after the initial formation.
Most of the discovered extra-solar planets have been massive — ranging from the size of Neptune to ten times the size of Jupiter. Rather than indicating that large planets are more common than small ones, these findings may just be a result of massive planets being easier to find. Our instruments are not yet advanced enough to detect smaller, Earth-like planets, but new technology is being developed to improve the search.
Current research techniques rely on indirect evidence of planets. Indirect searches rely on observations of effects caused by the planet. The majority of planets have been found with the radial velocity, or Doppler, technique, which looks for a back-and-forth shift in the star's spectra due to the gravitational pull of the planet as it orbits the star. Other methods of detection include: the astrometric technique, which looks for a side-to-side wobble against the background stars; the gravitational lensing technique, which looks for a change in the positions of background stars; and the transit technique, which looks for the periodic dimming of the star by the crossing of a planet.
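For the radial velocity technique, the size of the star's back-and-forth wobble follows from Newtonian gravity. The sketch below uses the standard semi-amplitude formula for a circular, edge-on orbit, with round illustrative values rather than a real catalogued system:

```python
import math

# Radial-velocity semi-amplitude K for a circular, edge-on orbit:
#   K = (2*pi*G / P)**(1/3) * m_planet / (M_star + m_planet)**(2/3)
# The systems below are hypothetical round-number examples.
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30    # kg
M_JUP = 1.898e27    # kg

def rv_semi_amplitude(period_days, m_planet, m_star):
    """Stellar wobble speed in m/s induced by the orbiting planet."""
    p = period_days * 86400.0
    return (2 * math.pi * G / p) ** (1 / 3) * m_planet / (m_star + m_planet) ** (2 / 3)

# A Jupiter-mass planet on a 4-day orbit of a Sun-like star:
print(rv_semi_amplitude(4.0, M_JUP, M_SUN))            # roughly 130 m/s

# The same planet on a Jupiter-like 12-year orbit:
print(rv_semi_amplitude(12 * 365.25, M_JUP, M_SUN))    # roughly 12 m/s
```

The few-day orbit produces a wobble about ten times larger than a Jupiter-like orbit, which is one reason close-in "hot Jupiters" were the first planets found.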
Learn in this NOVA classroom activity how planetary spectra can be used to search for life on other worlds.
Biowaste to ethanol could soon power cars.
Converting a vehicle to run primarily on ethanol costs just a couple of hundred dollars. But ethanol won’t make much of a dent in gas use as long as the source of ethanol in the United States remains corn grain, which requires a lot of energy and land in order to grow. A much better alternative is cellulosic materials such as wood chips and switchgrass, which are both cheap to grow and require fewer natural resources. (See “Biomass: Hope and Hype.”) In an effort to reduce the processing costs of these materials, researchers are genetically engineering organisms that can devour grasses and waste biomass, digest the complex sugars, and then transform the resulting simple sugars into alcohol. (See “Better Biofuels” and “Redesigning Life to Make Ethanol.”) Already, advances in parts of this process have led to planned cellulosic-ethanol plants. (See “Making Ethanol from Wood Chips.”)
The plug-in hybrid-vehicle era begins.
For years, hobbyists and a few companies have been adding bigger battery packs to hybrid vehicles, which have both battery power and an internal combustion engine, and plugging them into electrical outlets. This allows the cars, which typically rely on the electric power only for short bursts or to assist the onboard gasoline engine, to run on electricity alone for short trips. The idea of the “plug-in hybrid” has now caught the attention of government officials and researchers, who note that gas consumption would plummet if drivers could rely almost exclusively on electricity for average daily driving of about 33 miles. The gasoline engine would be available to boost performance and make it possible to use the car for long trips. Now the major car companies are taking notice and are finally developing plug-in hybrids. (See “GM’s Plug-In Hybrid.”) Meanwhile, researchers are beginning to anticipate benefits from plug-ins beyond gasoline conservation: millions of plug-in vehicles could serve as massive energy storage to stabilize the electric grid and make renewable energy sources more feasible. (See “How Plug-In Hybrids Will Save the Grid.”) Battery costs still need to drop before such cars will approach the price of conventional hybrids or gas-only vehicles. But better batteries are already becoming available.
Massive recalls spark interest in better batteries.
The safety-related recall of millions of lithium-ion laptop and cell-phone batteries made by Sony and Sharp put batteries in the spotlight this year. Just in time, a new type of lithium-ion battery that uses materials inherently much safer than those involved in the battery recall started appearing in professional power tools. In addition to being safer, the new batteries are more powerful, have longer useful lifetimes, and are potentially less expensive than those utilized in laptops and cell phones today. All of this could make them attractive for use in mass-produced plug-in hybrids. (See “More Powerful Hybrid Batteries.”) Meanwhile, a number of materials-science advances promise to as much as double the storage capacity of batteries and make them more long-lived. (See “3M’s Higher-Capacity Lithium-Ion Batteries” and “Making Electric Vehicles Practical.”) | <urn:uuid:404a4999-f68b-4a2a-960f-84a9154546aa> | 3.578125 | 692 | Content Listing | Science & Tech. | 35.107939 |
A thunderstorm, also known as an electrical storm, a lightning storm, a thundershower or simply a storm, is a form of turbulent weather characterized by the presence of lightning and its acoustic effect on the Earth's atmosphere, known as thunder. The meteorologically assigned cloud type associated with the thunderstorm is the cumulonimbus. Thunderstorms are usually accompanied by strong winds, heavy rain and sometimes snow, sleet, hail, or no precipitation at all. Those that cause hail to fall are called hailstorms. Thunderstorms may line up in a series or rainband, known as a squall line. Strong or severe thunderstorms may rotate; these are known as supercells. While most thunderstorms move with the mean wind flow through the layer of the troposphere that they occupy, vertical wind shear causes a deviation in their course at a right angle to the wind shear direction.
In this amazing slow-motion video, the folks from ZT Research used a high-resolution camera to capture a full lightning bolt from inception to it striking the ground. NASA's APOD offers a scientific explanation of the phenomenon: "The above lightning bolt starts with many simultaneously created ionized channels branching out from a negatively charged pool [...]
Well, this picture is 3 years old or so, but I haven’t seen it, so I figured there’s a chance some of you guys might have missed it too. Taken in Perth, when some locals were out on the beach to see the fireworks, the picture also captures spectacular lightning, but the really amazing thing [...] | <urn:uuid:01b5a97a-7fb1-4720-849c-b401cd65e747> | 3.015625 | 320 | Content Listing | Science & Tech. | 49.272646 |
Hanny's Voorwerp, Dutch for Hanny's object, is an astronomical object of unknown nature. It was discovered in 2007 by Dutch school teacher Hanny van Arkel, while she was participating as an amateur volunteer in the Galaxy Zoo project. Photographically, it appears as a bright blob close to spiral galaxy IC 2497 in the constellation Leo Minor.
The object, now referred to as a "voorwerp" (a Dutch word for "object"), is about the size of our Milky Way galaxy and has a huge central hole over 16,000 light years across. In the image, the voorwerp is colored green, a false color conventionally used to represent the presence of several luminous emission lines of glowing oxygen. It has been shown to be at the same distance from Earth as the adjacent galaxy, both about 650 million light-years away.
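Those two round numbers fix how large the voorwerp appears on the sky, via the small-angle approximation. The sizes and distances are the article's own approximate figures, so the result is only a rough estimate:

```python
import math

# Small-angle estimate of the voorwerp's apparent size on the sky,
# using the article's round numbers.
size_ly = 100_000        # "about the size of our Milky Way galaxy"
distance_ly = 650e6      # "about 650 million light-years away"

angle_rad = size_ly / distance_ly               # small-angle approximation
angle_arcsec = math.degrees(angle_rad) * 3600   # radians -> arcseconds
print(angle_arcsec)      # a few tens of arcseconds across
```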
Star birth is occurring in the region of the object that faces IC 2497. Radio observations indicate that this is due to an outflow of gas arising from the IC 2497's core which is interacting with a small region of Hanny's Voorwerp to collapse and form stars. The youngest stars are several million years old.
A picture of the object appeared on the Astronomy Picture of the Day website in data taken by Dan Smith (Liverpool John Moores University), Peter Herbert (University of Hertfordshire) and Chris Lintott (University of Oxford) on the 2.5 metre Isaac Newton Telescope.
One hypothesis suggests that it consists of remnants of a small galaxy showing the impact of radiation from a bright quasar event that occurred in the center of IC 2497 about 100,000 years before the state we observe today. The quasar event is thought to have stimulated the bright emission that characterizes the voorwerp. The quasar may have switched off in the last 200,000 years, and is not visible in the available images.
One possible explanation for the missing light-source is that illumination from the assumed quasar was a transient phenomenon. In this case, its effects on the voorwerp would be still visible because of the distance of several tens of thousands of light years between the voorwerp and the quasar in the nearby galaxy: the voorwerp would show a "light echo" or "ghost image," of events that are older than those currently seen in the galaxy.
On 17 June 2010, a group of researchers at the European Very Long Baseline Interferometry Network (EVN) and the UK’s Multi-Element Radio Linked Interferometer Network (MERLIN), proposed another related explanation. They hypothesized that the light comes from two sources: a supermassive black hole at the center of IC 2497, and light produced by an interaction of an energetic jet from that black hole and the gas surrounding IC 2497.
The voorwerp and the neighboring galaxy are the object of active astrophysical research. Observations of IC 2497 with the XMM-Newton and Suzaku X-ray space telescopes to probe the current activity of the supermassive black hole have been arranged.
See also: Pea galaxy, another class of objects discovered by Galaxy Zoo participants.
Browse High School Constructions
- Drawing An Ellipse [11/24/1997]
How do you draw an ellipse with only a straight edge and a compass?
- Drawing Diagrams [08/02/1998]
I'm having trouble drawing a good geometry diagram.
- Drawing or Constructing an Ellipse or Oval [02/22/2006]
I know you can draw an ellipse using a string and two tacks. How do I determine the length of the string and the location of the tacks to draw an ellipse of a particular size?
- Find the Center of a Circle Using Compass and Straightedge [10/15/2003]
How can I find the center of a circle?
- Folding a Circle to Get an Ellipse [01/08/2001]
How can I prove that taking a point on a circle, folding it to an interior point, and repeating this process creates an envelope of folds that forms an ellipse?
- How Did Socrates Teach the Boy to Double the Area of a Square? [06/15/2010]
Reading Plato's Meno leaves a student confused about how the ancient Greeks scaled squares. Doctor Rick walks through this story of Socrates and his method, emphasizing that they would have approached this puzzle -- as well as the Pythagorean Theorem -- geometrically.
- The Importance of Geometry Constructions [12/29/1998]
Why are geometry constructions important? What do we learn from them? Where have they appeared in math history?
- Impossibility of Constructing a Regular Nine-Sided Polygon [04/07/1998]
Can you construct a regular 9-sided polygon with just a compass and straightedge?
- Impossible Constructions [01/14/1998]
What are the three ancient impossible construction problems of Euclidean geometry?
- Impossible Constructions? [04/08/1997]
My geometry teacher told us there are 3 impossible problems or constructions - what are they?
- Inconstructible Regular Polygon [02/22/2002]
I've been trying to find a proof that a regular polygon with n sides is inconstructible if n is not a Fermat prime number.
- Inscribing a Regular Pentagon within a Circle [04/15/1999]
What are the reasons for the steps in inscribing a regular pentagon within a circle with only the help of a compass and a straightedge?
- Inscribing a Square in a Triangle [10/13/2000]
How do you inscribe a square in a scalene triangle?
- Line with Small Compass and Straightedge [10/16/1996]
Construct a line segment joining two points farther apart than either a compass or the straightedge can span.
- Nine-Sided Polygon [06/11/1997]
Can you construct a regular 9-sided polygon inside a circle using only a compass and straight-edge?
- Octagon Construction Using Compass Only [02/22/2002]
Construct the vertices of a regular octagon using just a compass. The only thing you know about the octagon is the circumradius.
- A Point in the Triangle [02/12/1999]
Finding the point P in the plane of triangle ABC where PA + PB + PC is minimal.
- Precision in Measurement: Perfect Protractor? [10/16/2001]
Given that protractors are expected to be accurate to the degree, and in
some instances the minute or second, how are angles accurately
constructed and marked?
- Proving Quadrilateral is a Parallelogram [11/30/2001]
We are having a problem with the idea of a quadrilateral having one pair
of opposite sides congruent and one pair of opposite angles congruent.
- Regular Pentagon Construction Proof [11/23/2001]
What is the proof of the construction of a regular pentagon?
- Rotate the Square [09/19/2002]
Which points on the half-circles are B and D?
- Sin 20 and Transcendental Numbers [6/29/1995]
What is the significance of sin 20 in geometry?
- Squaring the Circle [12/22/1997]
Can you construct a square at all with the same area as a circle, using a compass and straightedge?
- Squaring the Circle [3/16/1996]
Where did the phrase "squaring the circle" come from? We found it in
literature and wonder about its origins and what it means.
- Straightedge and Compass Constructions [12/14/1998]
Can you help me with these constructions, using only a straightedge and a
compass? A 30-60-90 triangle, and the three medians of a scalene triangle.
- Triangle Construction [03/11/2002]
Let ABC be a triangle with sides a, b, c. Let r be the radius of the
incircle and R the radius of the circumcircle. Knowing a, R, and r,
construct the triangle using only ruler and compass.
- Triangle Construction [09/09/2001]
Given a triangle ABC and point D somewhere on the triangle (not a
midpoint or vertex), construct a line that bisects the area.
- Triangle Construction Given an Angle, the Inradius, and the Semiperimeter [03/26/2002]
Given an angle, alpha, the inradius (r), and the semi-perimeter (s), construct the triangle.
- Triangle Construction Given Medians [12/12/2001]
Given median lengths 5, 6, and 7, construct a triangle.
- Trisecting a Line [11/03/1997]
How would you trisect a line using a compass and a straight edge?
- Trisecting a Line [01/25/1998]
Is it possible to trisect a line? (Using propositions 1-34, Book 1 of Euclid's Elements.)
- Trisecting a Line [01/30/1998]
How do I trisect a line using only a straightedge and compass?
- Trisecting a Line Segment [08/13/1999]
How can I measure one-third of a line of an unknown length using a
compass and a straightedge?
- Trisecting an Angle [11/21/1996]
Is there a proof that you can't trisect an angle?
- Trisecting an Angle [06/15/1999]
I've come up with a method of approximately trisecting any angle. Can you
tell me how accurate it is?
- Trisecting an Angle [06/17/2000]
I believe I have a simple straightedge and compass construction that
trisects any angle except a right angle, but have not been able to write a proof.
- Trisecting an Angle [4/16/1996]
I can bisect an angle easily but I can't trisect it perfectly. Would you
please send me instructions?
- Trisecting an Angle: Proof [6/3/1996]
Is there a proof for how to trisect an angle?
- Trisecting an Angle Using Compass and Straightedge [04/29/2004]
A student claims he can trisect an arbitrary angle with no measuring
and only a straightedge and a compass, using Geometer's Sketchpad to
prove his method is correct. Doctor Math talks about why a
construction alone is not enough to prove the method.
- Trisecting an Angle Using the Conchoid of Nicomedes [08/16/2002]
Is it possible that I could have trisected an angle using the conchoid of Nicomedes?
NGC 2903. - This galaxy has been observed by Burbidge et al. (1960a), who note a "hump" on the rotation curve along the northeast part of the major axis between 20" and 60" from the center. The only bright emission feature in this area is embedded in a broad dust lane which lies well inside of the luminous arm. Velocities in this arm show that it is approaching with a line-of-sight velocity of about 200 km s^-1, while the interarm HII region is approaching with a velocity of only about 75 km s^-1.
There can be little doubt that the eastern half of the galaxy is the near side; therefore, this galaxy is a fine example of a very open, two-arm spiral with trailing arms.
This galaxy is one of the group showing "hot spots" in the nucleus. The suggested position of the nucleus is identified by a cross in the sketch. Other bright nuclear emission regions are also identified.
Musical Instruments, Quality Characteristics
When the same musical note is played on a guitar and a
violin, what makes the violin and the guitar sound different?
Stringed instruments sound different for many reasons. What you hear depends
on how and of what the string is made, how the string is vibrating, how long
it vibrates, and how long you can hear it vibrate.
Perhaps the most important difference is in how different strings vibrate.
There is a concept in string physics called "harmonics". When a string
vibrates, its wavelength and frequency of vibration strongly affect the
sound it makes. However, there can be additional vibrations in the string at
the same time called "harmonics" that "stack on top" of the main vibration.
In other words, stringed instruments are actually making multiple different
vibrations, and therefore multiple sounds, at once. The combination of these
different vibrations, or harmonics, gives different instruments the different
sound textures that you can hear.
The harmonics that a string makes depends on a lot of factors, including how
the string is made, how it is made to vibrate (picked, plucked, bowed,
etc.), and if it is being touched while it is vibrating. Different harmonics
last for different amounts of time -- they do not all dissipate at the same
rate. You cannot hear all the harmonics equally, either; an instrument with a
resonating chamber (such as a violin or an acoustic guitar) will sound
different than similar instruments without a resonating chamber because they
make you able to hear different harmonics for different amounts of time.
Along the same lines, an electric guitar sounds different than an acoustic
guitar because of how harmonics are being amplified and heard. With an
electric guitar the pickups sense magnetic field changes, while in an
acoustic guitar the resonating chamber works directly on sound waves. How
you play the instrument affects sound too. The sound an instrument makes can
be divided into four parts: the attack, decay, sustain, and release, all of
which are affected by string type, playing method, amplification, and
resonance. Other factors such as the overall loudness (amplitude), and the
time over which the overall sound changes (envelope) also influence what you hear.
For more information, and a little more physics, this is a good resource:
There are plenty more similar web pages out there to give you all the detail you want.
Hope this helps,
p.s. My brother, who is an accomplished musician and electrical engineer,
contributed to this answer -- thanks, bro!
Each musical instrument emits many frequencies when it plays a certain note,
that is, a certain pitch.
The number of and relative intensity of these frequencies determine the
characteristic sound that allows one to distinguish one instrument from
another. Incidentally, the same is true of the voice, which is a very
complex musical instrument. You can find much more by doing a search on a
topic such as "the physics of sound".
One good resource is:
Another is Tom Rossing's book, Physics of Sound. There are some Scientific
American articles on this subject, too.
A good conceptual physics source that is easy to read and will help you is:
Paul Hewitt "Conceptual Physics" Addison-Wesley
Once you find a few "hits" on "the physics of sound" or "the physics of musical
instruments" or "the physics of the voice" you will find explanations at any degree
of sophistication you choose.
The difference you hear in musical instruments is due to overtones that they
produce. A tone from an electronic tuner or a tuning fork is almost a "pure"
tone. If you could look at the wave on an oscilloscope, it would appear
smooth. Instruments create sounds with something that vibrates and something that
lets the sound resonate. In the case of the violin and the guitar, the strings
may be made of different materials, and the instruments themselves are constructed
of different types of woods. In wind instruments a reed or some type of mouthpiece
is employed. The size and shape of the resonating cavity also play a part. All of
these contribute to the sound "color" or timbre of the instrument.
What happens at the sound wave level is that the wave itself is no longer smooth.
It has jagged edges and looks like a mountain range. (Again, I am referring to
how it would look on an oscilloscope.) The wood vibrates along with the strings, as
well as the air in the resonating cavity, causing the wave pattern to alter. The
pitch remains the same; the frequency of the wave is unchanged, but the timbre of
the instrument shines through. No two instruments produce exactly the same timbre.
The exception would be an electronic instrument. An A on an electric keyboard would
sound the same as an A produced by another keyboard of the same make and model.
Even our voices have a distinct tone quality. That is what enables us to figure
out who is speaking in another room even if we cannot see the speaker.
I hope this enlightens you a bit. Take Physics when you can for the rest of the story.
When a note is played on a musical instrument, a sound wave travels out
into the air. Eventually the wave hits your ear. The overall
frequency, or rate of vibration, tells your ear which note is played.
Sound waves for a low E vibrate at the same rate for all instruments.
This rate of vibration is also called "fundamental frequency" and
"pitch". What is different for each instrument is the pattern of little
wiggles within the wave. This pattern is known as the "quality". This
pattern depends greatly on the shape of the instrument and the actual
source of the sound (a string, vibrating lips, a thin piece of wood, etc.).
Dr. Ken Mellendorf
Illinois Central College
The note played on a musical instrument is the base frequency being
resonated. (In other words, the lowest frequency) Simple tones are not
usually musically appealing though, so musical instruments are designed to
resonate a number of harmonic frequencies. (2x the base frequency, 3x the
base frequency, 4x the....) Depending on the size and shape of the
resonating chamber, the strength of each of these frequencies will change,
thus giving a violin a distinctly different sound from a guitar.
Another difference is the method of creating the sound. While a violin and
a guitar both rely on strings to create their sound, drawing a bow across a
violin creates a steady note, while a guitar tends to be 'plucked', creating
a much stronger note that quickly begins to fade.
Update: June 2012 | <urn:uuid:b34d7b8b-91e0-40c4-bf36-30c7ec788184> | 3.515625 | 1,452 | Q&A Forum | Science & Tech. | 48.334166 |
Taxonomy: Class Blastocystea
Animal: Blastocystis hominis 5 05.jpg
Blastocystis hominis granular form. Cationized ferritin dispersed in central vacuole. Avacuolar forms change to multi-vacuolar forms and then central vacuolar forms and then granular forms (all 5-20 microns with 1-4 nuclei) when cultured or passed in faeces. These forms were thought to be possibly the typical and only forms of B.hominis but they are not the forms in the colon. (EM by Deb Stenzel).
How to cool polar molecules
By Hamish Johnston
Talks at the APS are very hit and miss — especially for someone like me who wants a gentle introduction to a field rather than a full-on blitz of data and equations.
However, some talks are pure gold…it was definitely worth getting up early to hear Silke Ospelkaus’s 8 am lecture on how to create a gas of ultracold polar molecules.
Physicists have already perfected cooling atomic gases to very low temperatures using lasers — leading to a renaissance in the study of quantum systems.
Polar molecules are attractive because unlike ultracold atoms, they interact via long-range forces and therefore could be used to investigate a broader range of quantum phenomena.
But molecules pose an additional challenge because they have rotational and vibrational energy, which must also be removed.
One could try to cool the molecules directly — or cool individual atoms and then combine them to make molecules — but both of these approaches have their problems.
According to Ospelkaus — who is at JILA in Boulder, Colorado — there is a better way. Her team began with "Feshbach molecules", which are made by taking ultracold potassium and rubidium atoms and binding pairs together very weakly by applying an external magnetic field.
Although the molecules are ultracold, the separation between atoms is great, which means that they have a tiny dipole moment.
The next step is to gently coax the Feshbach molecules into the ground state of potassium-rubidium, which has a much higher dipole moment. This is tricky because there is very little overlap between the states. To get around this problem, Ospelkaus and crew shunted the Feshbach molecules into a third state that overlaps the two.
Easy right? Except that transition requires a 125 THz laser — and such things don’t exist!
Undaunted, Ospelkaus used the “beating” of two lasers to obtain light at the right frequency.
So after all that, did they manage to create a “quantum degenerate” gas?
Not quite: the team managed to get the molecules as cold as 400 nK, whereas the onset of degeneracy is at about 100 nK.
But now that they have a nearly degenerate gas of polar molecules Ospelkaus believes that it could be cooled further by applying electric fields.
…who said this sort of work was complicated? | <urn:uuid:713832c7-b078-46a7-8e1b-e86358020b24> | 3 | 521 | Nonfiction Writing | Science & Tech. | 37.496408 |
float.h - floating types
[CX] The functionality described on this reference page is aligned with the ISO C standard. Any conflict between the requirements described here and the ISO C standard is unintentional. This volume of IEEE Std 1003.1-2001 defers to the ISO C standard.
The characteristics of floating types are defined in terms of a model that describes a representation of floating-point numbers and values that provide information about an implementation's floating-point arithmetic.
The following parameters are used to define the model for each floating-point type:
- s: Sign (±1).
- b: Base or radix of exponent representation (an integer > 1).
- e: Exponent (an integer between a minimum emin and a maximum emax).
- p: Precision (the number of base-b digits in the significand).
- f_k: Non-negative integers less than b (the significand digits).
A floating-point number x is defined by the following model:

    x = s * b^e * sum(k=1..p) f_k * b^(-k),  with emin <= e <= emax
In addition to normalized floating-point numbers (f1 > 0 if x != 0), floating types may be able to contain other kinds of floating-point numbers, such as subnormal floating-point numbers (x != 0, e = emin, f1 = 0) and unnormalized floating-point numbers (x != 0, e > emin, f1 = 0), and values that are not floating-point numbers, such as infinities and NaNs. A NaN is an encoding signifying Not-a-Number. A quiet NaN propagates through almost every arithmetic operation without raising a floating-point exception; a signaling NaN generally raises a floating-point exception when occurring as an arithmetic operand.
The accuracy of the floating-point operations ( '+', '-', '*', '/' ) and of the library functions in <math.h> and <complex.h> that return floating-point results is implementation-defined. The implementation may state that the accuracy is unknown.
All integer values in the <float.h> header, except FLT_ROUNDS, shall be constant expressions suitable for use in #if preprocessing directives; all floating values shall be constant expressions. All except DECIMAL_DIG, FLT_EVAL_METHOD, FLT_RADIX, and FLT_ROUNDS have separate names for all three floating-point types. The floating-point model representation is provided for all values except FLT_EVAL_METHOD and FLT_ROUNDS.
The rounding mode for floating-point addition is characterized by the implementation-defined value of FLT_ROUNDS:
- 0: Toward zero.
- 1: To nearest.
- 2: Toward positive infinity.
- 3: Toward negative infinity.
All other values for FLT_ROUNDS characterize implementation-defined rounding behavior.
The values of operations with floating operands and values subject to the usual arithmetic conversions and of floating constants are evaluated to a format whose range and precision may be greater than required by the type. The use of evaluation formats is characterized by the implementation-defined value of FLT_EVAL_METHOD:
- 0: Evaluate all operations and constants just to the range and precision of the type.
- 1: Evaluate operations and constants of type float and double to the range and precision of the double type; evaluate long double operations and constants to the range and precision of the long double type.
- 2: Evaluate all operations and constants to the range and precision of the long double type.
All other negative values for FLT_EVAL_METHOD characterize implementation-defined behavior.
The values given in the following list shall be defined as constant expressions with implementation-defined values that are greater or equal in magnitude (absolute value) to those shown, with the same sign.
FLT_RADIX: Radix of exponent representation, b.
FLT_MANT_DIG, DBL_MANT_DIG, LDBL_MANT_DIG: Number of base-FLT_RADIX digits in the floating-point significand, p.
DECIMAL_DIG: Number of decimal digits, n, such that any floating-point number in the widest supported floating type with pmax radix b digits can be rounded to a floating-point number with n decimal digits and back again without change to the value.
FLT_DIG, DBL_DIG, LDBL_DIG: Number of decimal digits, q, such that any floating-point number with q decimal digits can be rounded into a floating-point number with p radix b digits and back again without change to the q decimal digits.
FLT_MIN_EXP, DBL_MIN_EXP, LDBL_MIN_EXP: Minimum negative integer such that FLT_RADIX raised to that power minus 1 is a normalized floating-point number, emin.
FLT_MIN_10_EXP, DBL_MIN_10_EXP, LDBL_MIN_10_EXP: Minimum negative integer such that 10 raised to that power is in the range of normalized floating-point numbers.
FLT_MAX_EXP, DBL_MAX_EXP, LDBL_MAX_EXP: Maximum integer such that FLT_RADIX raised to that power minus 1 is a representable finite floating-point number, emax.
FLT_MAX_10_EXP, DBL_MAX_10_EXP, LDBL_MAX_10_EXP: Maximum integer such that 10 raised to that power is in the range of representable finite floating-point numbers.
The values given in the following list shall be defined as constant expressions with implementation-defined values that are greater than or equal to those shown:
FLT_MAX, DBL_MAX, LDBL_MAX: Maximum representable finite floating-point number.
The values given in the following list shall be defined as constant expressions with implementation-defined (positive) values that are less than or equal to those shown:
FLT_EPSILON, DBL_EPSILON, LDBL_EPSILON: The difference between 1 and the least value greater than 1 that is representable in the given floating-point type, b^(1-p).
FLT_MIN, DBL_MIN, LDBL_MIN: Minimum normalized positive floating-point number, b^(emin - 1).
First released in Issue 4. Derived from the ISO C standard.
The description of the operations with floating-point values is updated for alignment with the ISO/IEC 9899:1999 standard. | <urn:uuid:de58604f-3a8f-4ab9-91f9-d69eb47ed5a7> | 3.375 | 1,174 | Documentation | Software Dev. | 39.782707 |
The central dogma of molecular biology, DNA -> RNA -> Protein, shows the direction of the flow of information: how the cells use the information stored in our DNA to make the necessary proteins. But the situation in most eukaryotes is a little more complex than that simple statement. In most eukaryotes, a gene sequence in the DNA is interrupted by non-coding information. Hence, to make a protein, a cell first has to transcribe the gene (make an RNA copy of the gene, called pre-mRNA) and then modify the pre-mRNA by removing the non-coding sequences (introns) and joining the coding sequences (exons) together. The modified mRNA is then exported from the nucleus (where it was made) to the cytoplasm, where the ribosome uses it as a template to make the protein. In simple English, the gene for making a protein A looks like this: "HEREabhjhdyfrhUSEndcbldfhdfmMEd ldshhglgmcFORdbfhdflhfnmc PROTEIN A". The task of the cell is to remove the gibberish and make a readable text out of the given instruction: HERE USE ME FOR PROTEIN A. The cell then sends this information to the ribosome (the protein factory) to make the protein.
Pre-mRNA splicing is the process in which the intronic sequences are removed within a large RNA-protein complex called spliceosome.
Why is splicing important? A spliceosome can remove the non-coding introns present in a given transcript in varying combinations in response to cellular cues, a process called alternative splicing. The recent completion of a draft of the human genome indicated that more than 59% of human genes seem to be alternatively spliced (Hastings and Krainer, 2001), and thus we can have more complexity (make a larger number of proteins) without increasing the number of genes present. For example, the Dscam gene in flies has 38,000 alternatively spliced isoforms from four variable exon clusters!
More importantly, it is estimated that aberrant splicing causes about 15% of genetic diseases in humans (Philips and Cooper, 2000). Thus, the spliceosome plays a critical role in generating the right template for making a protein, and any abnormality in this process would be deleterious to the organism.
What do we know about this process? From genetic and biochemical experiments in the humble budding yeast, scientists have been able to understand how this process occurs. Because both the mechanism of splicing and the splicing machinery are highly conserved throughout eukaryotes, knowledge of yeast splicing gives us insights into the basic process in humans.
The spliceosome is one of the largest molecular machines in the cell and is composed of five small nuclear RNAs (called U1, U2, U4, U5 and U6 snRNAs) and over 100 different proteins (Stevens and Abelson, 2002). Under standard in vitro (i.e. in a test tube) assay conditions, the spliceosome assembles in a stepwise manner through the addition of the U1 -> U2 -> U4/U6.U5 snRNP particles (the small nuclear RNA along with its associated proteins, represented by a colored blob in the picture) on the pre-mRNA (See Figure). This assembly is an expensive process for the cell, as each step consumes energy. But it also allows the apparatus to check each step and hence allows for a greater control over the overall process. Remember, a single mistake here would result in a protein that either does not function or functions abnormally. That to a cell would be hazardous, and hence the cells err on the side of caution. After the assembly of the spliceosome, it undergoes structural rearrangements, resulting in the loss of U1 and U4 snRNAs, to become catalytically active (Brow D. A, 2002). Then, it proceeds to remove the intron by two transesterification reactions.
The resultant message is released from the spliceosome along with the intron. The spliced RNA is exported to the cytoplasm for translation into the protein, and the intron is degraded by enzymes in the cell. The spliceosome is disassembled and the components (proteins and the snRNAs) recycled for another round of splicing.
Though much is known about the overall process, there is little insight into what triggers the activation. What informs the spliceosome that everything is set in place and that it should go ahead and splice? How does the cell control the ATP-driven helicases that remodel the spliceosome at each step? Or what cues the cell about an abnormal spliceosome, and how does it take a stalled spliceosome apart?
Next time I will try and address the role splicing plays in Humans. How does a cell choose which exon to keep? How do DNA elements present in the gene (ISEs) affect choice of exon? Does the rate at which the transcript is made affect exon choice? So keep your eyes out for Splicing -part deux.
Brow D. A, Annu Rev Genet., 2002, Jun 11; 36:333-60.
Hastings and Krainer, Curr Opin Cell Biol., 2001, Jun; 13(3):302-9
Philips and Cooper, Cell Mol Life Sci., 2000, Feb;57(2):235-49
Stevens and Abelson , Methods Enzymol. 2002;351:200-20.
This is used to describe a class method. This is a function which takes
an extra argument as its first argument, for the ‘this’ pointer.
If the ‘#’ is immediately followed by another ‘#’, the second one will be followed by the return type and a semicolon. The class and argument types are not specified, and must be determined by demangling the name of the method if it is available.
Otherwise, the single ‘#’ is followed by the class type, a comma,
the return type, a comma, and zero or more parameter types separated by
commas. The list of arguments is terminated by a semicolon. In the
debugging output generated by gcc, a final argument type of void
indicates a method which does not take a variable number of arguments.
If the final argument type of void does not appear, the method
was declared with an ellipsis.
Note that although such a type will normally be used to describe fields in structures, unions, or classes, for at least some versions of the compiler it can also be used in other contexts. | <urn:uuid:3886fadf-0ab3-4fd2-9bad-b02bf0d619b1> | 2.953125 | 232 | Documentation | Software Dev. | 41.053589 |
Octopuses Carry Coconuts as Instant Shelters
Octopuses have been discovered tip-toeing with coconut-shell halves suctioned to their undersides, then reassembling the halves and disappearing inside for protection or deception, a new study says. “We were blown away,” said biologist Mark Norman of the discovery of the octopus behavior off Indonesia. The coconut-carrying behavior makes the veined octopus the newest member of the elite club of tool-using animals—and the first member without a backbone, researchers say.
A team led by biologist Julian Finn of Museum Victoria in Melbourne, Australia, was observing 20 veined octopuses (Amphioctopus marginatus) on a regular basis. The researchers noticed that the animals were frequently using their approximately 6-inch-long (15-centimeter-long) tentacles to carry coconut shells bigger than their roughly 3-inch-wide (8-centimeter-wide) bodies. An octopus would dig up the two halves of a coconut shell, then use them as protective shielding when stopping in exposed areas or when resting in sediment.
To carry the shells, a veined octopus has to stick its arms out and over the edges of the coconut and walk around as if on stilts—making the octopus, while in motion, more vulnerable to predators—study leader Finn explained.
“An octopus without shells can swim away much faster by jet propulsion,” he said. “But on endless mud seafloor, where are you fleeing to?” In other words, a coconut-carrying octopus may be slow, but it’s always got somewhere to hide.
Source: Matt Kaplan for National Geographic News | <urn:uuid:0dd49b18-2caf-41a3-b1b3-1e3a51d10207> | 3.71875 | 356 | Truncated | Science & Tech. | 32.353128 |
Jupiter Distant Encounter
Ulysses's orbit brought it into the vicinity of Jupiter during 2003 and 2004. During this encounter it made a unique path that reached high Jovian latitudes. Ulysses investigators made detailed observations in order to take advantage of this unique opportunity. Details of the encounter are given below.
The table below lists a number of investigations that have been identified as valuable research topics that were carried out during the Jovian encounter. These included not only Ulysses investigators but other research groups interested in Jupiter and its environment.
|Ulysses URAP Observing Goals - Jupiter Distant Encounter|
|#|Observing Goal|Non-Ulysses Collaborators|Observing Interval(s)|Priority|Comments|
|1|Compare Ulysses-Galileo observations of Jovian emissions|Galileo radio investigation|Present - 9/2003|high|Galileo end of mission in 9/2003; Ulysses latitude variation important|
|2|Categorize Jovian emissions in latitude-longitude space|-|Present - 12/2004|high|Need at least 1 month observing per 5 deg lat. interval (jovicentric)|
|3|Measure scattering of kHz waves in IPM|-|Present - 2/2004|medium|Using time profiles of QP bursts|
|4|Conduct joint studies of Jovian quasi-periodic (QP) bursts (J_lat 30-60 deg)|CXO, HST, etc.|10/2002 - 2/2003; 1/2004 - 3/2004 (rel. long. Earth & Jupiter similar)|very high|Planning w/ H. Waite (coauthor of Chandra/QP burst Science article)|
|5|Study solar wind control of Jovian auroral emissions|Ground-based IR, HST, Int'l Jupiter Watch|10/2002 - 2/2003; 1/2004 - 4/2004 (same as #4)|very high|In 2004, Ulysses location permits good projection of SW conditions at Jup.|
|6|Study the low freq. elliptically-polarized source (aka BHM)|-|1/2004 - 3/2004|high|1024 bps required for polarization; need to be <1 AU from Jupiter|
|7|Conduct joint Ulysses-Cassini observations of SKR|Cassini radio investigation|7/2004 - ??|medium|Cassini arrives at Saturn 7/2004|
Ulysses Trajectory Plots
Shown below are plots of the Ulysses-Jupiter range,
the jovicentric and heliographic latitudes of Ulysses,
the relative heliolongitudes,
and the position of Ulysses in Jovian local time.
Below is a plot of the trajectory Ulysses will follow from 2002 to 2005. As can be seen in the plot, Ulysses reaches about 75 deg. North jovigraphic latitude in late 2003 and remains there for several months. Ulysses is the first spacecraft to allow a long period of observations at these high Jovian latitudes and thus presents an opportunity to make unique new discoveries about the Jovian environment.
The plot below shows the orbital path of Ulysses in ecliptic coordinates during January 2003 to September 2004.
Climate Literacy Essential Principle 7
*7a. Melting of ice sheets and glaciers, combined with the thermal expansion of seawater as the oceans warm, is causing sea level to rise. Seawater is beginning to move onto low-lying land and to contaminate coastal fresh water sources and beginning to submerge coastal facilities and barrier islands. Sea-level rise increases the risk of damage to homes and buildings from storm surges such as those that accompany hurricanes.
*7b. Climate plays an important role in the global distribution of freshwater resources. Changing precipitation patterns and temperature conditions will alter the distribution and availability of freshwater resources, reducing reliable access to water for many people and their crops. Winter snowpack and mountain glaciers that provide water for human use are declining as a result of global warming.
*7c. Incidents of extreme weather are projected to increase as a result of climate change. Many locations will see a substantial increase in the number of heat waves they experience per year and a likely decrease in episodes of severe cold. Precipitation events are expected to become less frequent but more intense in many areas, and droughts will be more frequent and severe in areas where average precipitation is projected to decrease.2
*7d. The chemistry of ocean water is changed by absorption of carbon dioxide from the atmosphere. Increasing carbon dioxide levels in the atmosphere is causing ocean water to become more acidic, threatening the survival of shell-building marine species and the entire food web of which they are a part.
*7e. Ecosystems on land and in the ocean have been and will continue to be disturbed by climate change. Animals, plants, bacteria, and viruses will migrate to new areas with favorable climate conditions. Infectious diseases and certain species will be able to invade areas that they did not previously inhabit.
*7f. Human health and mortality rates will be affected to different degrees in specific regions of the world as a result of climate change. Although cold-related deaths are predicted to decrease, other risks are predicted to rise. The incidence and geographical range of climate-sensitive infectious diseases—such as malaria, dengue fever, and tick-borne diseases—will increase. Drought-reduced crop yields, degraded air and water quality, and increased hazards in coastal and low-lying areas will contribute to unhealthy conditions, particularly for the most vulnerable populations. [2]
[2] Based on IPCC, 2007: The Physical Science Basis, Contribution of Working Group I.
I think it is called overloading. I'm not too sure what the first one is doing, but the second one is creating a new operator for class A with the symbol =. This is done so that users of the class can use that symbol in place of some more tedious operation.
Just to add on to what Smac said: those are called the copy constructor and the assignment operator, respectively. (The copy constructor is always around even if you don't define it, but the compiler-generated one doesn't handle dynamic memory allocation well.)
Normally if you have one you are going to need the other, seeing as they both handle assigning values to an object.
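To make the two concrete, here is a minimal sketch in C++ (the class name A comes from the thread; the char-buffer member and the text() accessor are made up for illustration). It shows why a class that manages dynamic memory needs both members defined by hand:

```cpp
#include <cstring>

// A minimal sketch of the two special members being discussed.
class A {
    char* data;  // dynamically allocated, so the compiler-generated
                 // copy would only duplicate the pointer (shallow copy)
public:
    explicit A(const char* s) : data(new char[std::strlen(s) + 1]) {
        std::strcpy(data, s);
    }

    // Copy constructor: used by "A b = a;" or "A b(a);"
    A(const A& other) : data(new char[std::strlen(other.data) + 1]) {
        std::strcpy(data, other.data);
    }

    // Assignment operator: used by "b = a;" when both objects already exist
    A& operator=(const A& other) {
        if (this != &other) {  // guard against self-assignment
            char* fresh = new char[std::strlen(other.data) + 1];
            std::strcpy(fresh, other.data);
            delete[] data;
            data = fresh;
        }
        return *this;
    }

    ~A() { delete[] data; }  // the destructor completes the "rule of three"

    const char* text() const { return data; }
};
```

Classes like this are why the "rule of three" exists: if you need any one of the copy constructor, the assignment operator, or the destructor, you almost certainly need all three.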
If you are reading a book on C++, why are you asking what it is? The book should contain an explanation of it. Your question seems very strange.
Or you should read another book on C++ for beginners.
Often in nature, the things with the least-impressive appearance prove to be the most significant. Take the case of the tiny Anopheles mosquito. At a fraction of an inch in length, it appears far less threatening to humans than, say, a great white shark or a grizzly bear. But appearances can be deceiving. Perhaps no other organism on Earth has had a greater impact on human civilization than the Anopheles—the mosquito responsible for the spread of malaria and the deaths of more than one million people each year.
Isolated temporary ponds certainly do not look very impressive either. In fact most people are probably unaware these naturally occurring wetlands are even referred to as anything other than “puddles.” But what many look upon as nothing more than mosquito breeding grounds, biologists and ecologists view as valuable resources for the study of biological diversity.
Dr. David Chalcraft, assistant professor of biology at East Carolina University, is an expert in the ecology of temporary ponds. These ponds are water-filled depressions in forests or fields independent of existing lakes, rivers, or streams. They are temporary because they eventually dry out, a key characteristic because it prevents the establishment of predatory fish species. Fish tend to be voracious predators, and their presence in temporary ponds would hinder the pond's ability to spawn life. Temporary ponds are flush with diverse species of amphibians, aquatic insects, and plants. Their isolation from larger waterways makes them essentially closed ecosystems and easier to study than larger wetlands like rivers, lakes, or oceans.
Chalcraft is currently operating a laboratory at West Research Campus comprising hundreds of artificial isolated temporary ponds. He uses the artificial ponds to learn what factors control the biological diversity in an ecosystem and the consequences of changing that biodiversity within the system. The artificial ponds give Chalcraft complete control over the ecosystems and allow him to study specific scenarios based on things such as the introduction of predatory species into the ponds, an increase or decrease in plant life within the ponds, or the effects of commonly occurring pollutants such as pesticides or fertilizers.
“One of the goals of my research is to see how a change in the biological diversity of amphibians in those ponds influences a variety of ecosystem functions that operate in those ponds,” said Chalcraft. “Some of these functions include the rate at which plants produce energy for food webs, rates of decomposition, and rates of energy flow between aquatic and terrestrial ecosystems.”
Chalcraft has based a large portion of his research on the amphibian populations that thrive in temporary ponds. Amphibians, he says, represent a diverse group of organisms and they make excellent research subjects because they are readily available in nature. Also, their small size makes them amenable to experimental research without being so small as to require special tools to study them. He is quick to point out the importance of amphibians not only to the ponds they call home, but also to the ecology of the planet as a whole.
“Amphibians are often really good bioindicators in the environment. They are susceptible to toxins and pesticides that people may be putting out in the environment intentionally or unintentionally. Because amphibians are particularly sensitive to environmental pollutants, we can actually see how these pollutants may be influencing the environment on a short time scale and potentially how these pollutants may come back and have some negative impact on humans as well,” he said.
The Columns Strike Back
Table rows tend to make table columns look rather stupid. They do all the work, as the table is built row by row, leaving the columns feeling quite rejected.
Luckily for those eager columns though, the col tags have come to their rescue.
These tags allow you to define the table columns and style them as desired, which is particularly useful if you want certain columns aligned or colored differently, as, without this, you would need to target individual cells.
<table> <colgroup> <col> <col class="alternative"> <col> </colgroup> <tr> <td>This</td> <td>That</td> <td>The other</td> </tr> <tr> <td>Ladybird</td> <td>Locust</td> <td>Lunch</td> </tr> </table>
In this example the styles of the CSS class “alternative” will be applied to the second column, or the second cell in every row.
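For completeness, the stylesheet rule might look something like this (the class name comes from the example above; the colors are made up). One caveat worth knowing: in CSS-conformant browsers only background, border, width and visibility reliably apply to col and colgroup, so coloring a column this way works, but text alignment generally has to be set on the cells themselves:

```
col.alternative {
    background: #ffffcc;
    border: 1px solid #999966;
}
```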
You can also use the span attribute in a similar way to colspan. Using it with the colgroup tag will define the number of columns that the column group spans: for example, <colgroup span="2"></colgroup> would group the first two columns. Using span in the col tag is usually more useful, and could, for example, be applied to the above example like this:
<table> <colgroup> <col> <col span="2" class="alternative"> </colgroup> <!-- and so on -->
This would apply “alternative” to the last two columns.
A brief and easy accessibility consideration is to apply a caption to the table. The caption element defines the caption and should be used straight after the opening table tag:

<table> <caption>Locust mating habits</caption> <!-- etc. -->
Headers and Footers
thead, tfoot and tbody allow you to separate the table into header, footer and body, which can be handy when dealing with larger tables. The thead element needs to come first, and tfoot can, in fact, come before a tbody (and you can have more than one tbody, if it takes your fancy), although browsers will render the tfoot element at the bottom of the table.
<table> <thead> <tr> <td>Header 1</td> <td>Header 2</td> <td>Header 3</td> </tr> </thead> <tfoot> <tr> <td>Footer 1</td> <td>Footer 2</td> <td>Footer 3</td> </tr> </tfoot> <tbody> <tr> <td>Cell 1</td> <td>Cell 2</td> <td>Cell 3</td> </tr> <!-- etc. --> </tbody> </table>
Earlier this week, I visited GE’s Global Research Center in Niskayuna, New York, near Albany. Cool place, the home base for about 1,900 scientists, and one of four GE research centers around the world. The others are in Bangalore, Munich and Shanghai.
I wrote a column for FORTUNE’s website about GE’s venture investments (GE brings good things to startups), about which I’ll blog a little more next week. But for today, a look at how GE’s research into nanotechnology, which is the study of matter on a molecular and atomic scale, could help drive the wind turbine industry. GE’s goal, as the video below shows, is to come up with nano-coatings on wind blades or aircraft engines that repel water. This technology is inspired, in part, by lotus plant leaves that are able to repel water–an example of biomimicry, which studies nature’s best ideas and uses them to solve human problems.
Materials that do a great job of repelling water are called superhydrophobic. An example would be nanopants–spill a soda on them, and the liquid would roll right off. Check out this video to see how it works–the water droplets below really, really don’t like the nanocoating. My only critique: GE should have set this video to music.
You can read a blogpost from GE engineer Joseph Vinciquerra about superhydrophobic technology, “Creating anti-icing surfaces,” on GE’s global research blog.
State of the Climate
The State of the Climate is a collection of monthly summaries recapping climate-related occurrences on both a global and national scale.
- Global Analysis — a summary of global temperatures and precipitation, placing the data into a historical perspective
- Upper Air — tropospheric and stratospheric temperatures, with data placed into historical perspective
- Global Snow & Ice — a global view of snow and ice, placing the data into a historical perspective
- Global Hazards — weather-related hazards and disasters around the world
- El Niño/Southern Oscillation — atmospheric and oceanic conditions related to ENSO
- National Overview — a summary of national and regional temperatures and precipitation, placing the data into a historical perspective
- Drought — drought in the U.S.
- Wildfires — a summary of wildland fires in the U.S. and related weather and climate conditions
- Hurricanes & Tropical Storms — hurricanes and tropical storms that affect the U.S. and its territories
- National Snow & Ice — snow and ice in the U.S.
- Tornadoes — a summary of tornadic activity in the U.S.
- Synoptic Discussion — a summary of synoptic activity in the U.S.
National Summary Information - May 2013
Contiguous U.S. cooler and slightly wetter than average during spring
Spring temperatures coolest since 1996 despite May being warmer than average. Nation experiences floods in the Midwest, two EF-5 tornadoes in the Plains, and drought expansion in the West.
The average temperature for the contiguous U.S. during the spring season (March-May) was 50.5°F, 0.5°F below the 20th century average, making it the 38th coolest spring on record. The May temperature for the contiguous U.S. was 61.0°F, 0.9°F above the 20th century average and the 40th warmest May on record.
The total spring precipitation averaged across the contiguous U.S. was 7.92 inches, 0.21 inch above the 20th century average. May contributed 3.34 inches of rain to the spring precipitation total, and was the 17th wettest May on record. Drought conditions continued to improve for parts of the Plains, but worsened in the West.
Significant climate events for May and Spring 2013.
Click image to enlarge, or click here for the National Overview.
Note: The May Monthly Climate Report for the United States has several pages of supplemental information and data regarding some of the weather/climate events from the month and spring season.
U.S. climate highlights: May
- Most of the northern U.S. had above-average May precipitation. Iowa had its wettest May on record with 8.84 inches of precipitation, 4.77 inches above average. Montana and North Dakota each had one of their ten wettest Mays. The above-average precipitation contributed to flooding along several major rivers in the region including the Mississippi River and the Illinois River.
- Alaska was cooler and wetter than average during May. The statewide temperature was 5.8°F below the 1971-2000 average, the 20th coolest May on record. The statewide precipitation total was 25.1 percent above average, making it the 14th wettest May.
- According to the June 4 U.S. Drought Monitor Report, 44.1 percent of the contiguous U.S. was experiencing moderate-to-exceptional drought, smaller than the 46.9 percent at the beginning of May. Drought continued to improve for parts of the Great Plains, but worsened in the West. Several months of warm and dry conditions in California led to nearly the entire state being in drought by early June.
- Despite a below-average preliminary tornado count during May for the contiguous U.S., several large and powerful tornadoes hit populated areas resulting in significant damage and loss of life. Two EF-5 tornadoes, the highest strength rating given to a tornado, were confirmed near Oklahoma City. The EF-5 tornado that hit Moore, Oklahoma, on May 20th destroyed thousands of homes and businesses in and around the city and was blamed for over 20 fatalities. According to preliminary analysis, the EF-5 near El Reno, Oklahoma, on May 31st had a path width of approximately 2.6 miles, the widest tornado ever observed in the United States. These two events were only the 7th and 8th EF-5 tornadoes confirmed in Oklahoma in the 64-year period of record.
- Spring was cooler than average for a large portion of the contiguous United States east of the Rockies. Fourteen states, from North Dakota to Georgia, had spring temperatures that ranked among the ten coldest.
- This was the first season since Spring 2011 not classified as "warmer than normal", or in the warmest one-third of the historical distribution.
- New England and the West were both warmer than average. California had its seventh warmest spring on record with a seasonal temperature 3.5°F above average.
- Spring brought both wet and dry precipitation extremes to the United States. Iowa had its wettest spring on record with 17.61 inches of precipitation, 8.63 inches above the seasonal average. Wetter-than-average conditions were observed in the Northern Plains and Midwest, where North Dakota, Minnesota, Wisconsin, Illinois, and Michigan each had one of their ten wettest spring seasons.
- Below-average precipitation was observed in the Mid-Atlantic, Southern Plains, and West. New Mexico had its second driest spring with 0.66 inch of precipitation, 1.72 inches below average. California had its eighth driest spring, with 2.34 inches of precipitation, 3.33 inches below average.
- The above-average precipitation and below-average temperatures in the north-central United States were associated with a spring snow cover extent that was above average. According to data from the Rutgers Global Snow Lab, the spring snow cover extent was the eighth largest on record and the largest since 1984.
- The U.S. Climate Extremes Index (USCEI), an index that tracks the highest and lowest 10 percent of extremes in temperature, precipitation and drought across the contiguous U.S., was 1.4 times its average during spring. The above-average USCEI was driven by extremes in below-average temperatures, extremes in 1-day precipitation totals, and the spatial extent of drought.
- The year-to-date national temperature of 43.6°F was 0.2°F above the 20th century average. Below-average temperatures were observed for much of the central United States, from the Rockies to the Mid-Atlantic. The Northeast and parts of the West had above-average year-to-date temperatures.
- The January-May precipitation total for the contiguous U.S. was 12.28 inches, 0.33 inch above average. North Dakota, Minnesota, Wisconsin, Missouri, and Mississippi each had a top ten wet 5-month period; Iowa, Illinois, and Michigan were record wet during January-May.
- The West, Southern Plains, and Northeast were drier than average. Oregon, Nevada, and Idaho each had a top ten dry year-to-date period, while California had its driest January-May on record with 4.09 inches of precipitation, 9.87 inches below average.
A tiny wind turbine in a test tank at Teddington, Middlesex, could hold the key to wind power's most difficult public relations problem: making this form of renewable energy environmentally acceptable to all.
The model turbine, which stands around 3 metres high, was developed as the first part of a three-stage research programme called FLOAT. The first stage cost £750 000, half of which was financed by the Department of Trade and Industry.
Floating on a hollow concrete hull, the turbine would be moored to anchors by polyester ropes designed to keep it stable even in hurricane-force winds. An undersea cable would feed the power ashore and into the National Grid.
Tests to assess the project's environmental impact, design requirements and economics have now been completed successfully, yet its future still hangs in the balance. Between £2 million and £3 million is needed to finance the second stage of the ...
Developing a comprehensive picture of where the earth's coral reefs are located and monitoring their health status is an enormous task, but one necessary to the development of sustainable resource management strategies, practices, and policies. Below is a list of mapping and monitoring techniques currently employed:
Satellite-based sensors: Satellite imagery can be used for low-cost, albeit coarse-scale mapping of coral reefs, and as such is probably the most effective way to build a comprehensive picture of where the world's reefs are located. Satellite data can also provide information on sea-surface temperatures, wave height and direction, and primary production in upper waters. They may also be useful for distinguishing living from dead coral in very shallow waters. Military agencies have the most comprehensive satellite data, often at a much finer resolution; however, these data are rarely available for public use.
Aerial photography and sensors: Photos and data from overflights of reefs can provide a more detailed picture of reef location, and can yield bathymetric data to depths of several tens of meters. However, aerial surveys and the analysis of their products are far more costly than those derived from satellite information and are difficult or impossible to conduct legally in many countries because of security concerns. These data can determine living from dead coral, but only within very shallow water. Costs have been reduced by using ultralight aircraft, balloons, kites, and other devices. With improvements in computer technology, it will be possible to survey reefs with remote-controlled aircraft, further cutting costs.
Ship and boat-based sensors: Research vessels carry a range of sensors useful for detailed mapping of coral reefs. Various types of sonar can be used to produce three-dimensional images of coral and distinguish between different types of bottom substrate. Passive acoustic analysis, along with sonar in some instances, can distinguish between live and dead reefs. Research vessels play a vital role in surveying and mapping coral reef habitats. However, they are costly to operate (generally ranging around US$10,000 per day). One way to reduce costs, and better utilize existing research vessel fleets, is to conduct reef surveys during the course of other oceanographic and fishery investigations.
Submersibles: Manned and unmanned submarines play an essential role in assessing coral reefs in waters below a 30-meter depth-beyond the practical working limits for scuba diving. Although the technological capacity available for exploring the world's oceans is highly developed, there are very few submersibles in the world that are available for undersea research. Promising new technologies are coming on line for conducting transect surveys, distinguishing live from dead coral cover using laserline sensing devices, and conducting rapid, large area assessments at various depths including shallower waters accessible by scuba divers.
Diving surveys: Scuba-diving scientists are the main source of information on reefs in shallower waters today (down to 30 meters in depth). However, the specific objectives, taxa of focus, and sampling approaches severely limit the comparability of the data among regions and over time. In addition, scuba-based assessments and monitoring are limited by the number of scientists available for this work and the small area that can be covered by one individual. Survey protocols are being developed so that recreational divers and others can help gather data, often on a volunteer basis. This offers tremendous potential for gathering new information on reefs, since there are several million scuba divers in the world and several times as many people proficient at skin diving with mask or goggles. Similarly, residents of coastal communities can be recruited to evaluate their reefs through participatory resource mapping. This low-tech approach is particularly relevant in developing countries, where few can afford expensive scuba equipment. Here, villagers are trained to gather general information on the coverage of various ecosystems, supplemented with descriptions of simple factors such as hard coral cover, and then transfer the data to a map using a simple compass. Work on this type of approach is underway through various programs, such as the Coastal Resource Management Program in the Philippines.
Source: D. Bryant et al. Reefs at Risk: A map-based indicator of threats to the world's coral reefs. (Washington DC: World Resources Institute, 1998) Introduction by IOC.
var thetime=new Date();
As you can see, we are not assigning the new variable a direct value. Instead, the code above defines the variable as a new instance of the date object. This means we can use this variable in order to access the method functions of the date object. What are the method functions? Well, here a list of some of them:
Method Function | What the Function Does
----------------|-----------------------------------------------------------
getHours()      | Returns the current number of hours into the day: (0-23)
getMinutes()    | Returns the current number of minutes into the hour: (0-59)
getSeconds()    | Returns the current number of seconds into the minute: (0-59)
getDay()        | Returns the number of days into the week: (0-6)
getMonth()      | Returns the number of months into the year: (0-11)
getYear()       | Returns the number of years since 1900; prefer getFullYear(), which returns the full four-digit year
These are not all of the method functions, but they are enough to create a decent clock. Our example clock for this tutorial will only use the first three, but the rest may be useful to you to customize your clock.
Now, suppose we want to get the number of hours into the day. We could do this by defining a variable and giving it the value of the getHours() method function. However, the following line will give you an error:

var nhours=getHours();
Why? Well, remember that the getHours() function is a method function of the date object. In order to get that value for our variable, we have to use our instance of the date object we created earlier:
var thetime=new Date();
var nhours=thetime.getHours();
This is part of object-oriented programming. Once we have created an object, we are able to access its member functions with what is called "the dot operator". By placing the dot between our object name (thetime) and its member function, we are able to execute the member function. In this case, the function simply returns a value (the number of hours into the day). We are using this value as the value of our new variable, nhours.
We have used the dot operator in the past, when we used the document.write() function. The write() function is a member function of the document object, and the function writes text to the screen.
An advantage of using objects is that objects can hold multiple values, where variables can only keep one value at a time. An object can also use any member functions it may have, like we did above. So, our date object named thetime can use all of the member functions. Now we can start getting the rest of the values for our clock. Let's get the hours, minutes, and seconds:
var thetime=new Date();
var nhours=thetime.getHours();
var nmins=thetime.getMinutes();
var nsecs=thetime.getSeconds();
See how we were able to use our object named thetime to grab three different values? It's almost fun, then again...yes, a bit more complicated than having a pre-defined object to use.
With these values and a bit of extra coding, we can create a clock like the one below:
Not bad, but now we must get into the extra coding we need to get the clock running smoothly. Remember, thetime.getHours() returns the number of hours into the day. This value is a number between 0 and 23. What can we do if we do not want a 24 hour clock, if we don't want 11:24 P.M. to be 23:24? To get around that, we need some code that will change any number over 12 back to 1,2,3,4, and so on. We can do this by subtracting 12 from our variable nhours when it has a value greater than 12:
if (nhours>=13) nhours-=12;

The -= operator used here is shorthand for:

nhours=nhours-12;

Also, if the hour is 0, we know that is 12 A.M., so we need to make the zero into a 12:

if (nhours==0) nhours=12;
Now, when the number of hours is 14, the script changes it to 14-12=2. Just what we needed.
We don't have the same problem for the minutes and seconds, but they present a problem of their own. Suppose the number of minutes into the hour is 32. This value is fine, the script will display a 32. However, what if the value is 2? The script would display something like this:

11:2

What? Yes, we need a way to get a zero in there so the clock is readable. We want it to say 11:02. So now we need something to add in a zero before the 2 if the number of minutes (nmins) is less than ten. Let's try this:

if (nmins<=9) nmins="0"+nmins;

This code will add in the character 0 in front of the 2 if the number of minutes is less than ten. Another little problem out of the way, but we will also need to do this for the seconds:

if (nsecs<=9) nsecs="0"+nsecs;
Had enough? Me too, but this clock still has another value we need to get. The clock displays A.M. or P.M. for morning and evening hours. This isn't too terrible. We can define another variable, let's call it AorP. We need the value to be "P.M." if the hour is 12 or greater, and "A.M." otherwise. Here we go:

if (nhours>=12) AorP=" P.M.";
else AorP=" A.M.";
One problem though. You need to place this section of code before our code that changes the clock to a twelve hour system. Otherwise, the hours will never get past 12, and it will always read A.M. Why? This is because when we change the clock to the twelve hour system, we change the value of nhours so that it is 0-12 in all cases. So, we must get our AorP value before we change the time system.
Now, I'll give you the code for the script:
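The original listing did not survive here, so what follows is a reconstruction pieced together from the snippets above (the startclock, clockform, and clockspot names come from the surrounding text; treat this as a sketch rather than the author's exact code):

```javascript
function startclock() {
  // Grab the current time from a new date object
  var thetime = new Date();
  var nhours = thetime.getHours();
  var nmins = thetime.getMinutes();
  var nsecs = thetime.getSeconds();

  // Decide A.M./P.M. BEFORE converting to the 12-hour system
  var AorP = " A.M.";
  if (nhours >= 12) AorP = " P.M.";

  // Convert from the 24-hour to the 12-hour system
  if (nhours >= 13) nhours -= 12;
  if (nhours == 0) nhours = 12;

  // Pad single-digit minutes and seconds with a leading zero
  if (nmins <= 9) nmins = "0" + nmins;
  if (nsecs <= 9) nsecs = "0" + nsecs;

  // Write the result into the form field, then refresh in one second
  document.clockform.clockspot.value =
    nhours + ":" + nmins + ":" + nsecs + AorP;
  setTimeout(startclock, 1000);
}
```

Paired with the clockform form shown further down and an onLoad call on the body tag, this keeps the displayed time ticking every second.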
You may have noticed we are accessing a form value and changing it in this line:

document.clockform.clockspot.value=nhours+":"+nmins+":"+nsecs+AorP;
So, you now know we need a form in the body of the document. Here is the one that goes with the script:
<FORM name="clockform"> Current Time: <INPUT TYPE="text" name="clockspot" size="15"> </FORM>
Also, you can see that we use the setTimeout() function to run our startclock() function again after one second. In this way, the clock will refresh every second so that you get a constantly running clock with the seconds ticking away.
Of course, how do we get the thing started to begin with? The browser won't just run the startclock() function on its own. It needs some kind of event to happen. Well, how about when the page loads? Excellent choice, let's use the onLoad event. The onLoad command is used as an attribute in the body tag, much like defining a text color:

<BODY TEXT="red">

I'm not sure I could handle that text color for long, but now we can see how the onLoad command will be put to work:

<BODY onLoad="startclock()">
Now, you can go make use of this clock if you need one. Otherwise, you have gotten a rather unorthodox introduction to objects and object-oriented programming. Well, maybe I'll add a more orthodox tutorial on the subject sometime, if I can stop saying the word orthodox. You know, I think the last half of this paragraph has been extremely unorthodox. Oops, I think I had better get going now..
Well, that does it for now, lets go to the next section:
JS Clock 2.
By: John Pollock
Often, singleton pattern classes are used to broker concurrent access to a shared resource; you will see this in quite a few logger implementations. The pattern is particularly useful if you have an "expensive" resource that might not be needed during the course of the application. Since the singleton supports "lazy initialization," you only pay the cost for the resource if you actually need it. Also, supporting polymorphism for an object can be useful.
Here are some differences between singletons and classes with static methods and variables:
1. Singletons can implement interfaces and inherit from other classes.
2. Singletons can be lazily loaded, created only when actually needed. That's very handy if the initialization includes expensive resource loading or database connections.
3. Singletons can be extended into a factory. The object management behind the scenes is abstracted away, so the code is more maintainable.
Of course, don't use this as license to go "Singleton" wild. As with most design patterns, the pattern's pros and cons must be carefully considered before implementation.
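As a concrete illustration of points 1 and 2, here is a minimal lazily initialized singleton in Java. The Logger/FileLogger names are invented for the example, not taken from any real API:

```java
// Because the singleton is an ordinary object, it can implement an
// interface and be used polymorphically -- unlike a class of statics.
interface Logger {
    void log(String message);
}

final class FileLogger implements Logger {
    // Not created until someone actually asks for it (lazy loading)
    private static FileLogger instance;

    private FileLogger() {
        // imagine expensive setup here: opening files, connections, ...
    }

    // synchronized so two threads cannot both create an instance
    public static synchronized FileLogger getInstance() {
        if (instance == null) {
            instance = new FileLogger();
        }
        return instance;
    }

    @Override
    public void log(String message) {
        System.out.println("[file] " + message);
    }
}
```

Callers can then pass the instance around as a Logger, substitute other Logger implementations in tests, and so on, which a static-only class cannot offer.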
Ranjan Maheshwari, on 05 October 2012 - 11:31 PM, said:
Why would one ever require one and only one instance? The same purpose can be achieved using classes with static member variables and static methods. As far as I can understand, the difference between the two is polymorphism.
It would be really helpful if someone could provide an example from a real-life scenario, or from any Java API, where singleton objects need to participate in polymorphism.
Synonymous Platform Names:
Student Nitric Oxide Explorer
LEO > Low Earth Orbit > Polar Sun-Synchronous
Related Data Sets
There are no related records to this platform.
SNOE ("snowy") was a small scientific satellite that measured the effects of energy from the sun and from the magnetosphere on the density of nitric oxide in the Earth's upper atmosphere. The ... spacecraft and its instruments were designed and built at LASP, the Laboratory for Atmospheric and Space Physics at the University of Colorado, Boulder. SNOE was launched on February 26, 1998, and was operated from the mission operations center at the LASP Space Technology Research building. SNOE re-entered the Earth's atmosphere on Dec. 13, 2003, completing a very successful mission. This site contains a description of the SNOE mission, spacecraft drawings and images, and provides access to the scientific data and publications. An archive of launch activities and development personnel have been retained.
Information provided by http://lasp.colorado.edu/snoe/
Vandenberg Air Force Base, USA
University of Colorado
Goddard’s First Homegrown Satellite, Explorer 10, Was Launched 50 Years Ago Today: We Talk to the Father of Explorer 10, James Heppner, About the ‘Opportunity Years’ at the Dawn of NASA
This photo from the early 1960s shows Goddard employees Earl Angulo (at left) and Ron Browning examining an Explorer 10 model attached to a test fixture. They were responsible for the mechanical engineering and testing of the satellite.
Fifty years ago today, Goddard’s first homegrown scientific satellite roared off the pad at Cape Canaveral on a Thor-Delta rocket. Although key components came from outside the gates, Explorer 10 was the first satellite to be designed, assembled, tested, and flown from Goddard Space Flight Center.
James Heppner, a young space physicist (barely 30 then) and one of NASA’s early employees, conceived of the mission that came to be called Explorer 10. Heppner functioned as a sort of one-man band — Project Manager, Project Scientist, and Principal Investigator for the magnetometer instruments on the satellite.
Before NASA was founded, Heppner worked for the Naval Research Laboratory (NRL) on the Potomac River in Washington, D.C. It was there he developed methods to measure Earth’s magnetic field. At NRL he used sounding rockets to study charged particles and magnetic fields high in Earth’s atmosphere. His earlier research in Alaska focused on the aurora and its effects on radio wave propagation, and was the basis for his Caltech PhD thesis.
Heppner calls these times the “opportunity years,” a period when methods and technology for measuring magnetic fields and space plasma — the bread and butter of space physics — were being invented. He was at the right place at precisely the right time.
In late 1958, as Heppner and many of his colleagues were being "handed over" to the nation's new aerospace agency, he had already helped create a magnetometer for the Vanguard program. Vanguard, an NRL project, was created to loft the first civilian scientific payloads into space for the International Geophysical Year of 1957-58. Heppner's proton magnetometer went into space aboard Vanguard 3 on September 18, 1959.
At the time of the transition to NASA, Heppner today recalls, he conceived of a satellite to measure the magnetic field of the moon. The mission, then called P-14, would accomplish its goal by extreme measures:
“I originally proposed Explorer 10 when NASA was formed,” explains Heppner, 83, who spoke with me recently from his home in New Market, Maryland. “And the intent was to try to hit the moon and measure the moon’s magnetic field on the way in.”
The original plan was deferred. The truth is, hitting the moon — even intentionally — was no simple trick in those days. It wasn’t clear the Thor-Delta launch system would accomplish the task, and even tracking a spacecraft to the moon was straining the technical capabilities of the time.
“With time we realized that the odds of hitting the moon would be extremely low, from the vehicle performance and ability to track, things like that,” Heppner explains. “I was told that with the odds of hitting the moon being so low, it would be embarrassing to even try. So I was essentially directed by NASA headquarters to make sure that the trajectory was such that it couldn’t be interpreted as an attempt to hit the moon.”
The new mission goal was to measure magnetism and plasma particles in space from outside of Earth's protective magnetic bubble, or magnetosphere. This had been attempted previously, but not with great success. To do it required launching P-14/Explorer 10 into a highly elliptical orbit that would take it a great distance from Earth, dozens of times the planet's radius.
The satellite weighed approximately the same as a space physicist: 79 kilograms, or 178 pounds. "It was very light," Heppner says. "We were trying to get distance." An engineering model hangs in the Smithsonian if you care to look at the real thing.
For the record, here is the complete entry in the NASA/National Space Science Data Center mission database:
“Explorer 10 was a cylindrical, battery-powered spacecraft instrumented with two fluxgate magnetometers and one rubidium vapor magnetometer extending from the main spacecraft body, and a Faraday cup plasma probe. The mission objective was to investigate the magnetic fields and plasma as the spacecraft passed through the earth’s magnetosphere and into cislunar space. The satellite was launched into a highly elliptical orbit. It was spin stabilized with a spin period of 0.548 s. The direction of its spin vector was 71 deg right ascension and minus 15 deg declination. Because of the limited lifetime of the spacecraft batteries, the only useful data were transmitted in real time for 52 h on the ascending portion of the first orbit. The distance from the earth when the last bit of useful information was transmitted was 42.3 earth radii, and the local time at this point was 2200 h. All transmission ceased several hours later. “
Rubidium vapor magnetometers could measure extremely weak magnetic fields, and were a totally new technology, Heppner says. They were invented at a company called Varian Associates in Palo Alto, California. The Faraday cup plasma instrument, which measured particles streaming off the sun’s “solar wind,” came courtesy of a team of scientists at MIT led by the pioneering X-ray astronomer and plasma physicist Bruno Rossi.
Finally the big day came on March 25, 1961. The launch managers for the Thor-Delta rocket worked in “the block house” at the Cape, while Heppner and his colleagues were encamped in a machine shop, peering at oscilloscopes to assess the health of their satellite and staying in contact with the blockhouse, and the other scientists and engineers, by telephone.
Explorer 10, as was typical in those days, was powered by an expendable battery. The craft radioed back data for 52 hours as it swooped through and outside of the magnetosphere, travelling 42.3 Earth radii — about 167,466 miles — before the battery dimmed and the craft shut down. (For comparison, consider that the average distance from Earth to the moon is 238,857 miles.)
After launch, tracking stations recorded data on tapes and sent them to the scientists. Heppner published a number of scientific papers from the data. He headed the Goddard Magnetic Fields Group, and worked on many major missions over the succeeding years.
The next big missions for Heppner after Explorer 10 were the Orbiting Geophysical Observatories, which grew substantially in mass and capability. He retired from the civil service in 1989, but continued to work as a contractor until 1996.
How were those days different from the later, larger, more complex place NASA has become? What was it like in the opportunity years?
“It was a very busy period in the sense that the technology was developing,” Heppner explains. “The early satellites weren’t very sophisticated because everything was new.”
But things moved fast. Heppner summed it up best in a chapter he wrote for a 1997 book, Discovery of the Magnetosphere.
“Opportunities for new endeavors were plentiful and the time between conception and results was unbelievably short when viewed in the light of today’s space programs.”
OH AND DID I MENTION? All opinions and opinionlike objects in this blog are mine alone and NOT those of NASA or Goddard Space Flight Center. And while we’re at it, links to websites posted on this blog do not imply endorsement of those websites by NASA. | <urn:uuid:972bf9f7-5fdd-41f6-9d0a-b682b8fa38dc> | 3.5 | 1,651 | Personal Blog | Science & Tech. | 47.322058 |
agostic: The manner of interaction (termed according to the Greek 'to hold or clasp to oneself as a shield') of a coordinatively unsaturated metal atom with a ligand, when the metal atom draws the ligand towards itself. An important type of agostic interaction is C–H–metal coordination, providing for the activation of the C–H bond in transition metal complexes.
IUPAC. Compendium of Chemical Terminology, 2nd ed. (the "Gold Book"). Compiled by A. D. McNaught and A. Wilkinson. Blackwell Scientific Publications, Oxford (1997). XML on-line corrected version: http://goldbook.iupac.org (2006-) created by M. Nic, J. Jirat, B. Kosata; updates compiled by A. Jenkins. ISBN 0-9678550-9-8. doi:10.1351/goldbook | <urn:uuid:8dac6743-5a23-4a54-ab3a-68588562ea58> | 2.8125 | 191 | Knowledge Article | Science & Tech. | 68.632985 |
Earth System Science
All physical, chemical, and biological components (give examples) of the Earth are intertwined and therefore must be studied simultaneously to be understood from a process-oriented point of view
Life history, therefore, is best understood in the context of a changing and evolving Earth System.
The geologic record provides an important and singularly unique insight into how evolution and extinction events and cycles take place on our planet.
The study of complexity arising from the interplay of biological, physical, and chemical systems across multiple spatial (microns to thousands of kilometers) and temporal (nanoseconds to eons) scales. Research on the individual components of complex systems provides only limited information about the behavior of these systems as a whole.
Key Unifying Themes
1. The Earth is a unique evolving system requiring an earth System Science approach
2. Plate Tectonics (Harry Hess, 1960's) is a unifying theme that explains and provides a dynamic context for earth system processes (Fig. P-6). Made up of core, mantle, crust concentric layers (Fig. 1.19). Crust composed of large discrete pieces, called plates. Mantle convection the driving force (Fig. 1.18)
3. The Earth is very old, ~4.6 billion years, and thus the present and future earth system is the product of a long and complex history
4. Internal and external earth processes interact at the earth's surface, which influence life. Physical environments have influenced the development of life, and life in turn has emerged to influence the development of the Earth's physical environments
Conceptual Approaches -
Uniformitarianism (James Hutton, Charles Lyell) actualism - "the present is the key to the past" a long gradual Earth history (give examples)
Catastrophism (Abraham Werner, Georges Cuvier) Earth history is punctuated by sudden large episodic events that control the overall development of geology and life on Earth (give examples)
Reality is that both types of processes have and do occur
Types of Rocks
Igneous crystalline rock formed from the cooling of molten rocks (intrusive magma, extrusive volcanics)
Sedimentary rocks formed from grains created by the erosion of pre-existing rocks, biotic skeletons, or chemical precipitates.
Tied closely to the Water Cycle (Fig. 1.21)
Stratification and bedding
Nicolaus Steno's Three Laws for Sedimentary Rocks
1. Principle of Superposition
2. Principle of Original Horizontality
3. Principle of Lateral Continuity
Metamorphic crystalline rock formed by the physical and chemical alteration of pre-existing rocks under elevated temperatures and pressures (Fig. 1.7)
Global Dating and Geologic Time
Relative Age Dating
fossils and their stratigraphic distribution, fossil succession (consistent and predictable appearance and disappearance through time, William "Strata" Smith)
event markers Ir anomalies, volcanic ash beds, oceanic anoxic events
Absolute Age Dating
Radioactive decay of naturally occurring elements and their isotopes
Layers (lake sediments) and rings (trees)
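As a sketch of the arithmetic behind absolute dating by radioactive decay (the numbers below are illustrative, not from the course notes): if a sample retains a fraction N/N0 of its original parent isotope, its age is t = ln(N0/N) / lambda, where lambda = ln 2 / half-life.

```java
// Illustrative radiometric-dating arithmetic (not from the notes above).
public class RadiometricDating {
    // Age from the surviving parent-isotope fraction and the half-life.
    static double age(double survivingFraction, double halfLifeYears) {
        double lambda = Math.log(2) / halfLifeYears;    // decay constant
        return Math.log(1.0 / survivingFraction) / lambda;
    }

    public static void main(String[] args) {
        // Potassium-40 has a half-life of ~1.25 billion years.
        // A rock retaining 25% of its original K-40 is two half-lives old.
        System.out.printf("age = %.2f billion years%n",
                age(0.25, 1.25e9) / 1e9);   // ~2.50
    }
}
```

Retaining half the parent isotope gives exactly one half-life, a quarter gives two, and so on, which is why the method works over time spans comparable to the isotope's half-life.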
The Geologic Time Scale Fig. 1.13 p. 13, Fig.
The oldest rocks are ~ 3.8 billion years old | <urn:uuid:b284577a-0058-4941-8def-0846e381f97d> | 3.5625 | 702 | Academic Writing | Science & Tech. | 32.072639 |
Pub. date: 2008 | Online Pub. Date: April 25, 2008 | DOI: 10.4135/9781412963893 | Print ISBN: 9781412958783 | Online ISBN: 9781412963893| Publisher:SAGE Publications, Inc.About this encyclopedia
PERU'S 0.5 PERCENT contribution of greenhouse gases to the world's atmosphere is small, compared to the impact on Peru expected as a result of climate change. Peru is ranked as the fourth country most impacted by climate change. El Nino has regularly affected the 386,102 sq. mi. (one million sq. km.) of Peruvian territory, with droughts in the Andean south, and floods in the northern Pacific coast. These impacts, however, are small compared to the impacts of climate change. The 125 mi. (200 km.)-long White Mountain Range, the world's largest ice-covered tropical range and Peru's main concentration of ice, has been losing volume in the last 50 years. The glaciers are melting, leading to glacier reduction, the formation or increase of glacial lakes, and changes in ecosystem composition. The glaciers of White Range Park have retreated 82 ft. (25 m.) in Glacier ... | <urn:uuid:839f7399-2475-459c-b35f-71f5e2497b4a> | 3.765625 | 252 | Truncated | Science & Tech. | 69.411591 |
An infinite series is an expression such as

a1 + a2 + a3 + a4 + ...

where the dots imply that an infinite number of terms will be added.

To find the sum of an infinite series, we examine the partial sums. The sum of the series will be the limit of the partial sums (that is, the number that the partial sums are approaching, if, in fact, they do approach one).

If the limit exists, the series is said to converge. If the limit does not exist, the series is said to diverge.

The sum of an infinite series is the limit of the partial sums.
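A short numerical sketch of this idea (my own example, not from the text): the partial sums of the geometric series 1/2 + 1/4 + 1/8 + ... approach 1, so the series converges and its sum is 1.

```java
// Partial sums of the geometric series 1/2 + 1/4 + 1/8 + ...
// The partial sums approach 1, so the series converges to 1.
public class PartialSums {
    public static void main(String[] args) {
        double partialSum = 0.0;
        for (int n = 1; n <= 20; n++) {
            partialSum += Math.pow(0.5, n);   // add the n-th term
            if (n % 5 == 0) {
                System.out.printf("S_%d = %.6f%n", n, partialSum);
            }
        }
    }
}
```

After 20 terms the partial sum differs from the limit 1 by less than one millionth, which is the numerical signature of convergence.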
On the graphing calculator: | <urn:uuid:962923d6-5ff3-4113-8a9b-52858d463e5d> | 2.984375 | 127 | Tutorial | Science & Tech. | 58.79 |
Math in the News
Scanning the Brain for Impending Error
Federico Cirett, a computer science doctoral student at the University of Arizona, is studying brain wave activity to predict when students will make mistakes.
Cirett observed university students who spoke English as a second language while they took the math portion of the SAT. The students wore headsets developed to monitor high-stress and fatigue in military personnel.
“Measuring the activity, Cirett was able to detect with 80 percent accuracy whether a student would answer a question incorrectly about 20 seconds after they began the question,” wrote La Monica Everett-Haynes for UA News.
"If we can detect when they are going to fail, maybe we can change the text or switch the question to give them another one at a different level of difficulty, but also to keep them engaged," Cirett said. "Brain wave data is the nearest thing we have to really know when the students are having problems."
Read the full article from UA News
Browse News Archives
Search News Archives | <urn:uuid:ac7d2149-c3b0-4ce6-8128-a498125e413b> | 2.765625 | 218 | Content Listing | Science & Tech. | 32.510345 |
Hosted by The Math Forum
Problem of the Week 1135
A "vampire number" is an integer with 2n digits that is the product of two n-digit numbers — the fangs — whose digits, when combined, form a permutation of the original digits (thus, multiplicity counts). The smallest vampire numbers are 1260 = 21 × 60, 1395 = 15 × 93, and 1435 = 35 × 41.
Technical point: The fangs cannot each end in a 0. So 126000 is not a vampire number.
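The definition above, including the technical point about trailing zeros, can be checked by brute force (helper names below are mine):

```java
import java.util.Arrays;

// Sketch of a vampire-number checker for the definition above.
public class Vampire {
    static char[] sortedDigits(String s) {
        char[] d = s.toCharArray();
        Arrays.sort(d);
        return d;
    }

    // True if n is a 2k-digit product of two k-digit fangs whose combined
    // digits are a permutation of n's digits (fangs may not both end in 0).
    static boolean isVampire(long n) {
        String digits = Long.toString(n);
        if (digits.length() % 2 != 0) return false;
        int k = digits.length() / 2;
        long lo = (long) Math.pow(10, k - 1);
        long hi = (long) Math.pow(10, k) - 1;
        for (long a = lo; a * a <= n; a++) {         // a <= b, so a <= sqrt(n)
            if (n % a != 0) continue;
            long b = n / a;
            if (b < lo || b > hi) continue;          // b must also have k digits
            if (a % 10 == 0 && b % 10 == 0) continue; // both fangs end in 0
            if (Arrays.equals(sortedDigits(digits), sortedDigits("" + a + b)))
                return true;
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(isVampire(1260));    // true:  21 x 60
        System.out.println(isVampire(126000));  // false: only 210 x 600, both end in 0
    }
}
```

The trailing-zero test is what rejects 126000: its only digit-permuting factorization into 3-digit fangs is 210 × 600, and both fangs end in 0.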
You might have missed it, but October 5, 2010, was a vampire day because
When is the next vampire day?
What is the first double vampire number? In other words, what is the first number having two different factorizations into fangs?
Source: Ed Pegg's blog.
© Copyright 2010 Stan Wagon. Reproduced with permission.
Home || The Math Library || Quick Reference || Search || Help | <urn:uuid:29e7a888-fb8b-4367-80da-3c18c9f575e2> | 2.75 | 179 | Content Listing | Science & Tech. | 60.824237 |
How to: Access Settings Events
Settings events allow you to write code in response to changes in application- or user-scoped settings. Settings events include the following:
The SettingChanging event is raised before a setting's value is changed.
The PropertyChanged event is raised after a setting's value is changed.
The SettingsLoaded event is raised after the setting values are loaded.
The SettingsSaving event is raised before the setting values are saved.
For information on how to program using these events, see Accessing Application Settings (Visual Basic).
Settings events can be accessed from the Settings pane of the Project Designer.
To access settings events
Select a project in Solution Explorer, and then on the Project menu, click Properties.
Select the Settings pane.
Click the View code button to open the Settings.vb or Settings.cs file in the Code Editor. This file defines methods that handle the events raised when user settings are changed, loaded, or saved. | <urn:uuid:0f104e9a-013d-47cd-89b5-bdb1b7001b19> | 3.1875 | 201 | Documentation | Software Dev. | 44.574796 |
4.5. Comparing clusters
In comparing the different clusters, it must be taken into account down to which limit sources can be detected. Low-luminosity low-mass X-ray binaries with a neutron star tend to be more luminous than cataclysmic variables, which in turn tend to be more luminous than magnetically active binaries. This ordering is reflected in the numbers of currently known cataclysmic variables and magnetically active binaries listed in Table 4 as a function of the detection limit.
Another number that is important is the estimated number of close encounters between stars in the globular cluster. Pooley et al. (2003) show that the number of X-ray sources detected in a globular cluster above an observational threshold of Lx ≳ 4 × 10^30 erg s^-1 (0.5-6 keV) scales quite well with this number, as shown in Figure 13. Heinke et al. (2003d) find that the number of cataclysmic variables alone (at Lx ≳ 10^31 erg s^-1) possibly increases slower with central density than predicted by proportionality to the number of close encounters.
Figure 13. Number N of X-ray sources with Lx ≳ 4 × 10^30 erg s^-1 (0.5-6 keV) detected in globular clusters, as a function of the collision number Γ. Γ is a measure of the number of close encounters between stars in a cluster (see Eqs. 5, 6). The luminosity limit implies that most sources are cataclysmic variables. In general N scales quite well with Γ, indicating that cataclysmic variables in globular clusters are formed via close encounters between a white dwarf and another star or a binary. Arrows indicate lower limits. NGC6397 doesn't follow the general trend. From Pooley et al. (2003).
An exception to this scaling is NGC6397. This cluster has a higher number of neutron star binaries and cataclysmic variables than expected on the basis of its rather low collision number. Remarkably, the number of magnetically active binaries in this cluster is not very high, and this is reflected in a relatively flat X-ray luminosity function (Pooley et al. 2002b). If it is true, as argued by Pooley et al. (2003), that the high number of neutron star binaries and cataclysmic variables in NGC6397 is due to its being shocked and stripped in multiple passages near the galactic centre, it has to be explained why these mechanisms are more efficient in removing magnetically active binaries than in removing cataclysmic variables and binaries with neutron stars. | <urn:uuid:f2b9ef43-0e01-4d73-a548-f0aa48733bc3> | 2.953125 | 535 | Academic Writing | Science & Tech. | 54.191485 |
To encrypt data, enter the data ("plaintext") and an encryption key to the encryption portion of the algorithm. To decrypt the "ciphertext," a proper decryption key is used at the decryption portion of the algorithm. Those keys, which contains simply a string of numbers, are called public key and private key, respectively. For example, suppose Alice intends to send e-mail to Bob. Through a public-key directory, she finds his public key. Then, she encrypts her message using the key and send it to Bob. This public key, however, will not decrypt the ciphertext. Knowledge of Bob's public key will not help an eavesdropper. In order for Bob to decrypt his ciphertext, he must use his private key. If Bob wants to respond to Alice, he encrypts his message using her public key.
The challenge of public-key cryptography is developing a system in which it is impossible to determine the private key. This is accomplished through the use of a one-way function. With a one-way function, it is relatively easy to compute a result given some input values. However, it is extremely difficult, nearly impossible, to determine the original values if you start with the result. In mathematical terms, given x, computing f(x) is easy, but given f(x), computing x is nearly impossible. The one-way function used in RSA is multiplication of prime numbers. It is easy to multiply two big prime numbers, but for most very large primes, it is extremely time-consuming to factor their product. Public-key cryptography uses this function by building a cryptosystem which uses two large primes to build the private key and the product of those primes to build the public key.
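The key construction can be sketched with deliberately tiny primes (these are common textbook numbers, far too small for real security, and not the values used by the model on this page):

```java
import java.math.BigInteger;

// Toy RSA with tiny textbook primes -- illustration only.
// Real keys use primes hundreds of digits long.
public class ToyRSA {
    public static void main(String[] args) {
        BigInteger p = BigInteger.valueOf(61);
        BigInteger q = BigInteger.valueOf(53);
        BigInteger n = p.multiply(q);                          // 3233, public modulus
        BigInteger phi = p.subtract(BigInteger.ONE)
                          .multiply(q.subtract(BigInteger.ONE)); // 3120
        BigInteger e = BigInteger.valueOf(17);                 // public exponent
        BigInteger d = e.modInverse(phi);                      // 2753, private exponent

        BigInteger message = BigInteger.valueOf(65);
        BigInteger cipher = message.modPow(e, n);              // encrypt with public key
        BigInteger plain  = cipher.modPow(d, n);               // decrypt with private key

        System.out.println("cipher = " + cipher);              // 2790
        System.out.println("plain  = " + plain);               // 65
    }
}
```

Anyone who knows n = 3233 and e = 17 can encrypt, but recovering d requires factoring n into 61 × 53; with tiny primes that is trivial, and with very large primes it is the one-way function described above.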
Remember, the main purpose of this model is understanding the RSA algorithm, not actual encryption. A lot of simplification has been made, while the mathematics and the algorithm stay the same. So, ENJOY!
Now, proceed to: key generation page, encryption page, or decryption page.
For more information about RSA algorithm, check out RSA homepage.
This is page is created on June 12, 1996.
Last updated on Wed Dec 31 19:00:00 1969. | <urn:uuid:f4ad4c44-8f51-4d27-b5f7-5fd819e4c420> | 4.3125 | 461 | Tutorial | Science & Tech. | 51.689 |
Symmetry breaking during flapping generates lift
New phenomenon in nanodisk magnetic vortices
(Phys.org) -- The phenomenon in ferromagnetic nanodisks of magnetic vortices (hurricanes of magnetism only a few atoms across) has generated intense interest in the high-tech community because ...
Tying light in knots
(PhysOrg.com) -- The remarkable feat of tying light in knots has been achieved by a team of physicists working at the universities of Bristol, Glasgow and Southampton, UK, reports a paper in Nature Physics this w ...
Scientists discover rigid structure in centre of turbulence
Pioneering mathematical engineers have discovered for the first time a rigid structure which exists within the centre of turbulence, leading to hope that its chaotic movement could be controlled in the future.
Magnetic vortex antennas for wireless data transmission
Three-dimensional magnetic vortices were discovered by scientists from the Helmholtz-Zentrum Dresden-Rossendorf together with colleagues from the Paul Scherrer Institute within the scope of an international ...
Computer simulation shows the sun's "heartbeat" is magnetic
Sonic lasso catches cells
(Phys.org) —Academics have demonstrated for the first time that a "sonic lasso" can be used to grip microscopic objects, such as cells, and move them about.
Mathematical butterflies provide insight into how insects fly
Researchers have developed sophisticated numerical simulations of a butterfly's forward flight.
Study shows hovering hummingbirds generate two trails of vortices under their wings, challenging one-vortex consensus
As of today, the Wikipedia entry for the hummingbird explains that the bird's flight generates in its wake a single trail of vortices that helps the bird hover. But after conducting experiments with hummi ...
Aviation industry dons 'shark skins' to save fuel
In its never-ending quest to develop more aerodynamic, more fuel-efficient aircraft, the aviation industry believes the ocean's oldest predator, the shark, could hold the key to cutting energy consumption.
Vortex pinning could lead to superconducting breakthroughs
A team of researchers from Russia, Spain, Belgium, the U.K. and the U.S. Department of Energy's (DOE) Argonne National Laboratory announced findings last week that may represent a breakthrough in applications ...
Beautiful physics: Tying knots in light
New research published today seeks to push the discovery that light can be tied in knots to the next level.
Earth's magnetosphere behaves like a sieve
ESA's quartet of satellites studying Earth's magnetosphere, Cluster, has discovered that our protective magnetic bubble lets the solar wind in under a wider range of conditions than previously believed.
Modeling the bizarre: Quantum superfluids
(PhysOrg.com) -- More than 100 years since superconductivity was discovered, a comprehensive description for the behavior of a broad class of fundamental physical systems that exhibit the bizarre properties ... | <urn:uuid:8a4152a3-6dd6-4a7a-95f8-9d877363be90> | 2.71875 | 605 | Content Listing | Science & Tech. | 33.833721 |
Kate Stafford, an oceanographer at the Applied Physics Laboratory at the University of Washington, writes from Alaska, where she is participating in a visual census of bowhead whales.
We have been seeing a lot of polar bears lately. The first three weeks I was up here, I saw three. One was napping on a large ice floe that was being pushed south by the current. The other two were off an abandoned trail deep into the ice. The best evidence of bear presence was the tramping down of our trail markers; nature may abhor a vacuum, but polar bears can’t stand survey flags.
In the past few days, seeing bears from the perch has been a common occurrence, and during some watches the number of bears sighted has surpassed that of whales. Everyone stays alert while traveling to and from the perch to prevent any unexpected encounters with bears. The lead has been open, but we’ve had low winds and currents, so new ice has formed over much of the previously open water. This new ice provides good resting habitat for ringed seals (a preferred food of polar bears) and is a bridge of sorts by which bears can pass from the pack ice to the fast ice in search of food.
While these frequent visitors have caused us to increase vigilance while on watch, they have also provided a rare opportunity (for most of us) to observe polar bears in the wild. Many of the bears pass by the perch on their way somewhere else, usually along the lead edge. Some decide to nap nearby and drape themselves over a convenient chunk of ice. A female with two older cubs spent an hour or two on the new ice in front of the perch chasing, eating and playing with eiders that were sitting in nearby ponds of open water. A few days later another female, with smaller cubs, nursed them out on the edge of the new ice. We had one bear pass by the perch twice in just over an hour, both times heading south. Just when and where did he circle back? It is such a privilege to watch these large predators go about their day in their natural habitat, and it is impossible not to consider that this habitat is changing, rapidly, and wonder how well they will adapt to these changes.
Jason Herreman, a biologist at the North Slope Borough Department of Wildlife Management who helps run the whale census, is also looking at polar bears’ use of bone piles north of Barrow, a project that involves collecting small hair samples on a snare set up near areas used frequently by bears. Genetic material from the hair (which is snagged painlessly on barbed wire) can be used to look at how many bears rely on bone piles, what time of year the activity occurs, and whether the same individuals return again and again. It also provides the genetic identification of individuals to include in future population estimates of bears using capture-recapture methods similar to the photo identification project for bowhead whales.
In addition to the hair snare, Jason has a motion-sensor camera set up at the site to photograph the animals and help monitor the effectiveness of the snare. This project is a collaboration between the North Slope Borough and the United States Geological Survey in an effort to use noninvasive techniques to monitor polar bear populations. There are plans to expand this project to other areas. | <urn:uuid:4b19ffc0-e364-41c6-9595-58ab8ecc4b08> | 3.0625 | 688 | Personal Blog | Science & Tech. | 44.901913 |
Image credits: Left, from top: Kakapo; Crown Copyright, DoC; kauri, Alexander Turnbull Library. Top right: an immature North Island saddleback, Geoff Moon.

Illustration credits: John Gerrard Keulemans (1842-1912), Huia (male and female) Heteralocha acutirostris, and Jack-bird Creadion cinereus. Permission of the Alexander Turnbull Library, National Library of New Zealand, Te Puna Matauranga o Aotearoa must be obtained before any re-use of these images.
Before mammals were brought to New Zealand, the saddleback was one of the most common birds in native forests on both main islands. But by 1900 they were only found on offshore islands.

As saddlebacks mainly inhabit the middle and lower layers of the forest, roost in tree holes near the ground, and probe on the ground through litter for weta, grubs and other insects, they are more vulnerable to mammal predators. Saddlebacks also eat the fruit of forest trees such as kawakawa and coprosma, so habitat loss was another factor in their extinction on the mainland.
North Island saddleback (Tieke) Philesturnus carunculatus rufusater

The comeback of the North Island saddleback is one of the early success stories of New Zealand bird protection. In 1964 it was found only on Hen Island in the Hen and Chicken Islands north of Whangarei Harbour. Some of them were moved to adjoining Whatapuke Island, in the first native bird translocation in New Zealand by the NZ Wildlife Service.

Another early translocation moved saddlebacks to predator-free Cuvier Island in the Hauraki Gulf north of the Coromandel Peninsula, where they have thrived. Cuvier has since been an aviary supplying saddlebacks for other islands. They are now on nine northern islands including restored Tiritiri Matangi Island, and protected Little Barrier Island.

Saddlebacks are very vocal, especially the male, which has a repertoire of melodious calls used during mating and in territorial disputes. They are a medium-sized bird of 25 cm, weighing 70-80 g, with both adults having a similar appearance. The female has smaller orange wattles and weighs about 10 g less than the male. The young saddleback shown above has undeveloped wattles.
South Island saddleback Philesturnus carunculatus carunculatus

At the turn of the 20th century, the South Island subspecies was also extinct on the mainland, and limited to Big South Cape Island, Pukeweka Island, and Solomon Island, which are near Stewart Island.

In 1964, ship rats got ashore on Big South Cape Island from a wrecked boat, and quickly spread to Pukeweka and Solomon Islands. This resulted in New Zealand's worst ecological disaster in modern times, and the fastest extinction of three species, with the loss of the Stewart Island snipe, Stead's bush wren and the greater short-tailed bat.

The NZ Wildlife Service (now the Department of Conservation) rescued the last 36 South Island saddlebacks from extinction by moving them to an island free of predators. The subspecies is now on eleven offshore islands and the population has grown to about 650 birds.
South Island saddlebacks younger than 15 months (called "jack birds") have dark brown plumage, as shown in the illustration above. The chestnut colored saddle forms on its back after the second time it moults. Juvenile North Island birds get their "saddleback" marking before leaving the nest. The North Island race is slightly different, with a distinct narrow pale margin on the front edge of the saddle.
International Threatened & Endangered Listing
2000 IUCN Red List of Threatened Species
Huia Heteralocha acutirostris - Extinct
North Island kokako Callaeas cinerea - Endangered
Saddleback Philesturnus carunculatus - Lower risk, near threatened | <urn:uuid:5c323ef3-9e08-4807-897b-32cb5ebeb2d0> | 3.734375 | 890 | Knowledge Article | Science & Tech. | 34.383698
The File class in the Java IO API gives you access to the underlying file system. Using the File class you can, among other things, check if a file or directory exists, read a file's length, rename or move a file, delete a file, check if a path points to a file or a directory, and list the files in a directory. This text will tell you more about how.

The File class only gives you access to the file and file system meta data. If you need to read or write the content of files, you should do so using either FileInputStream, FileOutputStream or RandomAccessFile.
Before you can do anything with the file system or the File class, you must obtain a File instance. Here is how that is done:

File file = new File("c:\\data\\input-file.txt");

Simple, right? The File class also has a few other constructors you can use to instantiate File instances in different ways.
Once you have instantiated a File object, you can check if the corresponding file actually
exists already. The File class constructor will not fail if the file does not already exist.
You might want to create it now, right?

To check if the file exists, call the exists() method. Here is a simple example:
File file = new File("c:\\data\\input-file.txt");
boolean fileExists = file.exists();
To read the length of a file in bytes, call the length() method. Here is a simple example:

File file = new File("c:\\data\\input-file.txt");
long length = file.length();
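The exists() and length() calls can be combined into one runnable sketch. To keep it self-contained, it uses a temporary file created with File.createTempFile instead of assuming that c:\data exists (the class name ExistsAndLength is made up for this demo):

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

public class ExistsAndLength {
    public static void main(String[] args) throws IOException {
        // A fresh, empty temporary file - so the example does not depend on c:\data existing.
        File file = File.createTempFile("demo", ".txt");

        System.out.println("exists: " + file.exists()); // true - createTempFile just made it
        System.out.println("length: " + file.length()); // 0 - nothing written yet

        // Write three bytes, then read the length again.
        try (FileOutputStream out = new FileOutputStream(file)) {
            out.write(new byte[] {1, 2, 3});
        }
        System.out.println("length: " + file.length()); // 3

        file.delete(); // clean up
    }
}
```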
To rename (or move) a file, call the renameTo() method on the File class. Here is
a simple example:

File file = new File("c:\\data\\input-file.txt");
boolean success = file.renameTo(new File("c:\\data\\new-file.txt"));
As briefly mentioned earlier, the renameTo() method can also be used to move a file to a different
directory. The new file name passed to the renameTo() method does not have to be in the same directory
as the file was already residing in.
The renameTo() method returns a boolean (true or false), indicating whether the renaming
was successful. Renaming or moving a file may fail for various reasons, like the file being open
or wrong file permissions.
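Below is a self-contained sketch of moving a file with renameTo(). It works entirely inside the system temp directory instead of assuming c:\data exists, and it checks the returned boolean rather than assuming the move succeeded (the class name RenameToDemo and the directory and file names are made up for this demo):

```java
import java.io.File;
import java.io.IOException;

public class RenameToDemo {
    public static void main(String[] args) throws IOException {
        File source = File.createTempFile("rename-demo", ".txt");

        // A target directory inside the same temp directory, so the move stays
        // on one file system (renameTo() may fail across file systems).
        File targetDir = new File(source.getParentFile(), "rename-demo-dir");
        targetDir.mkdir();
        File target = new File(targetDir, "moved.txt");

        boolean success = source.renameTo(target);
        System.out.println("moved: " + success);
        System.out.println("source still exists: " + source.exists()); // false if the move succeeded

        // Clean up.
        target.delete();
        targetDir.delete();
    }
}
```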
To delete a file, call the delete() method. Here is a simple example:

File file = new File("c:\\data\\input-file.txt");
boolean success = file.delete();
The delete() method returns a boolean (true or false), indicating whether the deletion
was successful. Deleting a file may fail for various reasons, like the file being open
or wrong file permissions.
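A minimal runnable sketch of delete(), again using a temporary file so no fixed path is assumed (the class name DeleteDemo is made up for this demo):

```java
import java.io.File;
import java.io.IOException;

public class DeleteDemo {
    public static void main(String[] args) throws IOException {
        File file = File.createTempFile("delete-demo", ".txt");
        System.out.println("before: " + file.exists()); // true

        boolean success = file.delete();
        System.out.println("deleted: " + success);
        System.out.println("after: " + file.exists()); // false if the deletion succeeded
    }
}
```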
A File object can point to either a file or a directory. You can check if a
File object points to a file or a directory by calling its isDirectory()
method. This method returns true if the File points to a directory, and
false if the File points to a file. Here is a simple example:

File file = new File("c:\\data");
boolean isDirectory = file.isDirectory();
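A runnable sketch of isDirectory(): a temporary file should report false, while its parent directory (the system temp directory) should report true (the class name IsDirectoryDemo is made up for this demo):

```java
import java.io.File;
import java.io.IOException;

public class IsDirectoryDemo {
    public static void main(String[] args) throws IOException {
        File file = File.createTempFile("dir-demo", ".txt");
        File dir = file.getParentFile(); // the system temp directory

        System.out.println(file.isDirectory()); // false - points to a file
        System.out.println(dir.isDirectory());  // true - points to a directory

        file.delete(); // clean up
    }
}
```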
You can obtain a list of all the files in a directory by calling either the list() or the
listFiles() method. The list() method returns an array of Strings with
the file and/or directory names of the directory the File object points to. The
listFiles() method returns an array of File objects representing the files and/or
directories in the directory the File points to.

Here is a simple example:

File file = new File("c:\\data");
String[] fileNames = file.list();
File[] files = file.listFiles();
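The sketch below builds a small scratch directory with two files in it and then lists it both ways, so the difference between list() and listFiles() is visible (the class name ListDemo and the file names are made up for this demo):

```java
import java.io.File;
import java.io.IOException;

public class ListDemo {
    public static void main(String[] args) throws IOException {
        // Build a scratch directory with two files in it.
        File dir = new File(System.getProperty("java.io.tmpdir"), "list-demo-dir");
        dir.mkdir();
        new File(dir, "a.txt").createNewFile();
        new File(dir, "b.txt").createNewFile();

        // list() returns plain names; listFiles() returns full File objects.
        String[] names = dir.list();
        File[] files = dir.listFiles();

        for (String name : names) {
            System.out.println(name);
        }
        for (File file : files) {
            System.out.println(file.getAbsolutePath());
        }

        // Clean up.
        for (File file : files) {
            file.delete();
        }
        dir.delete();
    }
}
```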
Science Fair Project Encyclopedia
If you're looking for the revolutionary communist Weather Underground Organization, see Weathermen
Weather forecasting is the science of making predictions about general and specific weather phenomena for a given area based on observations of such weather related factors as atmospheric pressure, wind speed and direction, precipitation, cloud cover, temperature, humidity, frontal movements, etc.
Meteorologists use several tools to help them forecast the weather for an area. These fall under two categories: tools for collecting data and tools for coordinating and interpreting data.
- Tools for collecting data include instruments such as thermometers, barometers, hygrometers, rain gauges, anemometers, wind socks and vanes, Doppler radar and satellite imagery (such as the GOES weather satellite).
- Tools for coordinating and interpreting data include weather maps and computer models in the form of Numerical Weather Predictions.
In a typical weather-forecasting system, recently collected data are fed into a computer model in a process called assimilation. This ensures that the computer model holds the current weather conditions as accurately as possible before using it to predict how the weather may change over the next few days.
Weather forecasting involves processing a lot of data, but interpretation can be difficult because of the chaotic nature of the factors that affect the weather. These factors can follow generally recognized trends, but meteorologists understand that many things can affect these trends. With the advent of computer models and satellite imagery, weather forecasting has improved greatly. Since lives and livelihoods depend on accurate weather forecasting, these improvements have helped not only the understanding of weather, but how it affects living and nonliving things on Earth.
The chaotic nature of the atmosphere imposes a limit on the predictability of the weather. The predictability limit is estimated to be about two weeks. Predictions beyond this limit are necessarily statistical rather than deterministic. Current operational weather prediction has not yet reached this predictability limit.
Below is a sample Hurricane Warning issued by the Cape Cod Hurricane Center:
EXTREMELY DANGEROUS HURRICANE ISABEL UPDATED
As of the 5PM advisory, Hurricane Isabel has been updated to a category 5 on the Saffir-Simpson Intensity Scale. Isabel's position is at 21.6N and 55.3W with maximum sustained winds at 155 KTS (160 MPH.)
The exact track of Isabel is still very uncertain past the 5th day. The GFDL has Isabel heading in the Long Island direction. Most of the models agree on a slow but steady WNW to NW turn over the next 3 days. All interests on the Eastern Seaboard should monitor this extremely dangerous threat. It is still too early to determine the exact track of Isabel. If Isabel does indeed head up the Eastern Seaboard I am expecting the possible strike area will be from Cape Hatteras to Block Island.
Repeating the 5PM advisory, Hurricane Isabel is at 21.6N and 55.3W with maximum sustained winds at 155 KTS.
Forecaster Bryant Cape Cod, Mass.
Historically, the two men most credited with the birth of forecasting as a science were Francis Beaufort (remembered chiefly for the Beaufort scale) and his protegé Robert Fitzroy (developer of the Fitzroy Barometer). Both were influential men in British Naval and Governmental circles, and though ridiculed in the press at the time, their work gained scientific credence, was accepted by the British Navy and formed the basis for all of today's weather forecasting knowledge.
Television weather reporters have sometimes used gimmicks to attract viewers. One trend that started in the 1970s was "backyard" weather where the forecaster would stand in an outdoor setup while making predictions. WNEP-TV in Scranton, Pennsylvania has been doing this since 1978.
The contents of this article is licensed from www.wikipedia.org under the GNU Free Documentation License.
This is part of the Regional Summary series at www.appinsys.com/GlobalWarming
The Baltic Sea area has very few long-term stations in the GHCN database – and most are urban. The Baltic Sea area is interesting since it is a sea in the northern hemisphere with the countries to the north reaching the Arctic Circle. According to the CO2 theory, this mid-to-high latitude location is the area where the warming should be most pronounced. However, the area is not exhibiting warming that exceeds the 1930s, and thus does not match the models.
The following figure shows all of the available long-term temperature stations in the area.
The following figure shows the available long-term sea level stations in the area.
A recent article (Hansson, D. and A. Omstedt. 2007: “Modelling the Baltic Sea Ocean Climate on Centennial Time Scale: Temperature and Sea Ice”, Climate Dynamics, DOI 10.1007/s00382-007-0321-2) [http://www.springerlink.com/content/38218106v2gwq380/] provides a reconstruction of sea temperatures and sea ice. The authors state: “It appears that the late twentieth century warming in the Baltic Sea region cannot be determined to be unprecedented over the past 500 years, as the mid-eighteenth century warming is of comparable magnitude.”
The following figure is Figure 2 from their paper, showing reconstructed water temperatures of the Baltic Sea from 1500 to 2001. They state: “From 1935 to the present no statistically significant water temperature trend can be determined.”
The following figure is from Figure 3 of their paper, showing reconstructed and modeled maximum ice extent of the Baltic Sea. They state: “The 1730s is the decade with the least ice, followed by the 1740s and the 1930s. … Bearing in mind that the LIA, with a generally colder climate, ended in the 1870s,” any reduction in sea ice in the century that followed would be expected.
The following figure shows Finland along with rectangles representing the four 5x5 degree grids covering most of the country. The upper graph shows the annual temperature anomaly data for the four 5x5 degree grids from the Hadley Climatic Research Unit, which provides data to the IPCC. Recent temperatures are similar to the 1930s. The lower graph shows the annual mean temperature data from the NOAA GHCN database for Helsinki.
A recent presentation at a tree ring conference presented work on Finland summer temperatures, conducted by three scientists from the Finnish Forest Research Institute and the University of Helsinki (Timonen, M., Mielikäinen, K., and Helama, S. 2008. Climate variation (cycles and trends) and climate predicting from tree-rings. Presentation at TRACE 2008: Tree Rings in Archaeology, Climatology and Ecology, April 27-30, Zakopane, Poland) [http://www.worldclimatereport.com/index.php/2008/06/20/finnish-finish-global-warming/#more-329]
The following figure shows reconstructed summer temperature data for the last 1,300 years (gray) with smoothed data at the decadal, multi-decadal, centennial, and multi-centennial time frames. The authors wrote “The warmest and coldest reconstructed 250-year periods occurred AD 931-1180 and AD 1601-1850. These periods overlap with the Medieval Warm Period (MWP) and the Little Ice Age (LIA). The coldest and warmest of all reconstructed 100-year periods occurred AD 1587-1686 and AD 1895-1994, respectively.” | <urn:uuid:a2ee94a6-5424-49f6-b084-ad49566b5e45> | 3.78125 | 774 | Knowledge Article | Science & Tech. | 52.26736 |
C IS USUALLY FIRST
The programming language C was originally developed by Dennis Ritchie of Bell Laboratories and was designed to run on a PDP-11 with a UNIX operating system. Although it was originally intended to run under UNIX, there has been a great interest in running it under the MS-DOS operating system on the IBM PC and compatibles. It is an excellent language for this environment because of the simplicity of expression, the compactness of the code, and the wide range of applicability. Also, due to the simplicity and ease of writing a C compiler, it is usually the first high level language available on any new computer, including microcomputers, minicomputers, and mainframes.
C is not the best beginning language because it is somewhat cryptic in nature. It allows the programmer a wide range of operations from high level down to a very low level, approaching the level of assembly language. There seems to be no limit to the flexibility available. One experienced C programmer made the statement, "You can program anything in C", and the statement is well supported by my own experience with the language. Along with the resulting freedom however, you take on a great deal of responsibility because it is very easy to write a program that destroys itself due to the silly little errors that a good Pascal compiler will flag and call a fatal error. In C, you are very much on your own as you will soon find.
I ASSUME YOU KNOW NOTHING ABOUT C
In order to successfully complete this tutorial, you will not need any prior knowledge of the C programming language. I will begin with the most basic concepts of C and take you up to the highest level of C programming including the usually intimidating concepts of pointers, structures, and dynamic allocation. To fully understand these concepts, it will take a good bit of time and work on your part because they are not particularly easy to grasp, but they are very powerful tools. Enough said about that, you will see their power when we get there, just don't allow yourself to worry about them yet.
Programming in C is a tremendous asset in those areas where you may want to use Assembly Language but would rather keep it a "simple to write" and "easy to maintain" program. It has been said that a program written in C will pay a premium of a 20 to 50% increase in runtime because no high level language is as compact or as fast as Assembly Language. However, the time saved in coding can be tremendous, making it the most desirable language for many programming chores. In addition, since most programs spend 90 percent of their operating time in only 10 percent or less of the code, it is possible to write a program in C, then rewrite a small portion of the code in Assembly Language and approach the execution speed of the same program if it were written entirely in Assembly Language.
Even though the C language enjoys a good record when programs are transported from one implementation to another, there are differences in compilers that you will find anytime you try to use another compiler. Most of the differences become apparent when you use nonstandard extensions such as calls to the DOS BIOS when using MS-DOS, but even these differences can be minimized by careful choice of programming constructs.
Throughout this tutorial, every attempt will be made to indicate to you what constructs are available in every C compiler because they are part of the ANSI-C standard, the accepted standard of C programming.
WHAT IS THE ANSI-C STANDARD?
When it became evident that the C programming language was becoming a very popular language available on a wide range of computers, a group of concerned individuals met to propose a standard set of rules for the use of the C programming language. The group represented all sectors of the software industry and after many meetings, and many preliminary drafts, they finally wrote an acceptable standard for the C language. It has been accepted by the American National Standards Institute (ANSI), and by the International Standards Organization (ISO). It is not forced upon any group or user, but since it is so widely accepted, it would be economic suicide for any compiler writer to refuse to conform to the standard.
YOU MAY NEED A LITTLE HELP
Modern C compilers are very capable systems, but due to the tremendous versatility of a C compiler, it could be very difficult for you to learn how to use it effectively. If you are a complete novice to programming, you will probably find the installation instructions somewhat confusing. You may be able to find a colleague or friend that is knowledgeable about computers to aid you in setting up your compiler for initial use.
This tutorial cannot cover all aspects of programming in C, simply because there is too much to cover, but it will instruct you in all you need for the majority of your programming in C, and it will introduce essentially all of the C language. You will receive instruction in all of the programming constructs in C, but what must be omitted are methods of programming, since these can only be learned by experience. More importantly, it will teach you the vocabulary of C so that you can go on to more advanced techniques using the programming language C. A diligent effort on your part to study the material presented in this tutorial will result in a solid base of knowledge of the C programming language. You will then be able to intelligently read technical articles or other textbooks on C and greatly expand your knowledge of this modern and very popular programming language.
HOW TO USE THIS TUTORIAL
This tutorial is written in such a way that the student should sit before his computer and study each example program by displaying it on the monitor and reading the text which corresponds to that program. Following his study of each program, he should then compile and execute it and observe the results of execution with his compiler. This enables the student to gain experience using his compiler while he is learning the C programming language. It is strongly recommended that the student study each example program in the given sequence then write the programs suggested at the end of each chapter in order to gain experience in writing C programs.
THIS IS WRITTEN PRIMARILY FOR MS-DOS
This tutorial is written primarily for use on an IBM-PC or compatible computer but can be used with any ANSI standard compiler since it conforms so closely to the ANSI standard. In fact, a computer is not even required to study this material since the result of execution of each example program is given in comments at the end of each program.
RECOMMENDED READING AND REFERENCE MATERIAL
"The C Programming Language - Second Edition", Brian W. Kernigan & Dennis M. Ritchie, Prentice Hall, 1988
This is the definitive text of the C programming language and is required reading for every serious C programmer. Although the first edition was terse and difficult to read, the second edition is easier to read and extremely useful as both a learning resource and a reference guide.
Any ANSI-C textbook
Each student should possess a copy of a book that includes a definition of the entire ANSI-C specification and library. Go to a good bookstore and browse for one.
Return to Table of Contents
Advance to Chapter 1 | <urn:uuid:c5e35e64-dd5b-4e52-9a15-550563f10019> | 3.234375 | 1,464 | Tutorial | Software Dev. | 36.120043 |
About Chlamydomonas reinhardtii
Chlamydomonas reinhardtii is a unicellular green alga from the phylum Chlorophyta, which diverged from land plants over a billion years ago. C. reinhardtii is a model species for studying a broad range of fundamental biological processes including the evolution of chloroplast-based photosynthesis and the structure of eukaryotic flagella. Chlamydomonas reinhardtii is haploid, and has a nuclear genome comprising 17 chromosomes with a total size of ~120 Mbp, a 203 kbp chloroplast genome and a ~16 kbp mitochondrial genome.
What can I find? Protein-coding and non-coding genes, splice variants, cDNA and protein sequences, non-coding RNAs.
This species currently has no variation database. However you can process your own variants using the Variant Effect Predictor:
Gramene/Ensembl Genomes Annotation
Additional annotations generated by the Gramene/Ensembl Genomes projects include:
- The standard set of Gramene analyses detailed here. | <urn:uuid:240ad128-338d-4a9a-a109-ad319ceffce6> | 3 | 239 | Knowledge Article | Science & Tech. | 20.345625 |
Such is the case at UC Santa Barbara, where theoretical physicists at the Kavli Institute for Theoretical Physics (KITP) cover the range of questions in physics.
Recently, theoretical physicists at KITP have made important strides in studying a concept in quantum physics called quantum entanglement, in which electron spins are entangled with each other. Using computers to calculate the extreme version of quantum entanglement –– how the spin of every electron in certain electronic materials could be entangled with another electron’s spin –– the research team found a way to predict this characteristic. Future applications of the research are expected to benefit fields such as information technology.
How to synthesize a new kind of yeast cell — or person
September 19, 2011 by Amara D. Angelica
Scientists, in theory, could one day create whole new lifeforms, going way beyond simple cloning, new research at Johns Hopkins University School of Medicine suggests.
The scientists have now replaced the DNA in a yeast chromosome with computer-designed, synthetically produced DNA (structurally distinct from its original DNA), producing a healthy yeast cell.
The researchers have also reported a method for changing the structure of the synthetic DNA, a process called “scrambling,” which could be applied to other organisms in addition to yeast.
Hey, I’m all for ways to create better beer. Can they make it non-fattening?
How to create better beer:
1. Produce “semi-synthetic” DNA based on a computer-generated blueprint for the sequence of nucleotides (the building blocks of DNA).
2. Use this new semi-synthetic DNA to replace the DNA in a chromosome arm of a yeast cell without impacting its health.
3. Next, scramble the synthesized DNA by adding a chemical to the yeast culture that causes major changes to gene-sized blocks of nucleotides in the synthesized DNA. By scrambling, some genes are lost and the order of other genes is shuffled.
4. Repeat this entire process in various yeast cultures to produce a multitude of modified yeast arms — just as shuffling and randomly removing cards from multiple decks would produce a multitude of different decks.
5. Synthesize all 16 yeast chromosomes to order to give the organism desired traits. (So far, only about one percent of the DNA in a yeast cell has been synthesized and scrambled through this research.)
6. Test your new beer. Repeat the process until you find one you like … or you’re pie-eyed. Biology is fun.
But why yeast? Because it’s used in many industrial fermentation processes, including the production of vaccines and biofuels. So being able to more efficiently confer desired traits on this organism may lead to the production of new vaccines and more efficient biofuels. Also, yeast is a eukaryote — its cells contain complex internal structures, such as a nucleus enclosed by a membrane. Because of these similarities between yeast cells and human cells, insights into cellular processes in yeast may yield insights into basic processes in human cells.
According to the National Science Foundation, this achievement represents a significant advancement for the field of synthetic biology — an emerging field in biology addressing the design and construction of new biological functions and systems not found in nature. Although researchers at the J. Craig Venter Institute have previously synthesized bacterial chromosomes, yeast chromosomes are larger and more complicated, and so more difficult to synthesize.
Ref.: Jessica S. Dymond, et al., Synthetic chromosome arms function in yeast and generate phenotypic diversity by design, Nature, 2011; [DOI:10.1038/nature10403] | <urn:uuid:9225ec42-e225-460b-9414-6d56f2dd81cd> | 3.234375 | 629 | Personal Blog | Science & Tech. | 46.097034 |
Try This: Space of Air
Does air take up space?
Balloon (minimum 9 inches)
Glass bottle with a small mouth
Pot of boiling water
Pot of ice water
Place the mouth of the balloon over the mouth of the bottle. It should hang limply at the side of the bottle.
Make sure the balloon makes a good seal around the top of the bottle and gently place the bottle into the pot of boiling water. Be careful not to stand too close to the boiling water. Observe the changes in the balloon.
Remove the bottle from the hot water, remove the balloon, then replace it over the mouth of the bottle. The bottle now contains very hot air.
Place the bottle into the pot of ice water and observe the changes in the balloon.
Remove the bottle from the water and let it sit at room temperature for 10 minutes.
Remove the balloon from the top of the bottle.
Place the funnel in the mouth of the bottle and tape the mouth of the bottle to the funnel so that no air can escape.
Pour water into the funnel and watch what it does.
Air definitely takes up space! When you first put the balloon on the bottle, you “captured” the air that was in the bottle. It didn't inflate the balloon because it fit nicely into the bottle. When you heated it up, however, the air expanded and took up even more room. The only place it could go was into the balloon, so the balloon inflated. When you removed the bottle from the hot water and placed it into the ice water, the air cooled and contracted. Not only did it not inflate the balloon, it pulled the balloon down into the bottle. When you returned the bottle to its original temperature, the balloon should have returned to its original size, shape, and location.
The funnel experiment shows that air takes up room and can't easily be squeezed. When you sealed the top of the bottle, you gave the air nowhere to go. So when you poured the water into the funnel, it wasn't heavy enough to compress the air in the bottle and it remained in the funnel, apparently defying gravity.
Can you think of other examples of air expanding or contracting that you might encounter?
QUESTION How can you use the sun to tell time?
EXPERIMENT OVERVIEW In this experiment you'll get to build your own sundial. With it, you can keep time the way ancient civilizations did. As the sun rises and sets, it makes shadows of different lengths and angles. You'll use the location of the sun's shadow on your sundial to tell you exactly what time it is.
SCIENCE CONCEPT The sun doesn't actually move around the earth; it only seems that way. Instead, the earth rotates on its axis, so at any one time about half the people on Earth can see the sun and the other half cannot. This is how we get night and day. What a sundial does is track the location of the shadow that the sun makes, and it uses that location to determine the time of day. You have to know a few things in order for your sundial to work. For example, you need to know where true north is, and you need to know where the sun's shadow will be at certain times of day. Once you have set up your sundial, you should find it to be pretty accurate!
Sturdy paper plate
Poke a hole in the middle of the paper plate large enough for the pencil to fit through.
Stick the pencil through the plate. Make sure the bottom of the plate is facing up.
Place the end of the pencil in a lump of clay below the plate to anchor it down.
Use the compass to locate true north and place your sundial in an open space with the pencil pointing slightly to the north. (This method works for anyone who lives in the Northern Hemisphere. If you live in the Southern Hemisphere, you will point the pencil to the south.)
At 8:00 in the morning, mark on the sundial the location of the pencil's shadow. Label it “8:00 A.M.” Repeat this step every two hours until sunset. Your sundial is ready!
QUESTIONS FOR THE SCIENTIST
Are the markings evenly spaced?____________________________
Do you think it matters what time of the year you build or use your sundial? What happens when the days get longer or shorter?_______________________
At what time of day does the shadow of the sun point true north? Is it this way all year round?______________________________________________
FOLLOW-UP Research some of the civilizations that used sundials and think about these questions:
What were some of the variations they built?
Were any of them like yours?
Why do you think people stopped using sundials?
Look around your town to see of you can find any sundials. Check the accuracy of any you find. | <urn:uuid:0845e953-2b51-4158-8309-a416cd60b4f6> | 3.4375 | 1,075 | Tutorial | Science & Tech. | 69.3978 |
When waves move through a medium, such as the ocean, whose physical
characteristics change with position, those changes can reflect the waves
so that they become trapped. A region that traps waves in this way is
called a wave guide. Equatorial Kelvin waves are non-dispersive
because they travel within a wave guide.
Curassanthura bermudensis Wägele & Brandt, 1985
Curassanthura bermudensis: lateral & dorsal view, after Wägele & Brandt, 1985
Taxonomic Characterization: Curassanthura bermudensis is a blind, unpigmented paranthurid. The body is 13 times longer than wide. The cephalothorax is longer than wide, with a pronounced rostral point. The mouthparts are of the stinging/sucking type, covered dorsally by the labrum. The palp of the mandible is 3-segmented. Maxilla 1 is lanceolate. The basipodite of the maxilliped is slender. The pereopods are very slender, with only the proximally broadening propodus of pereopod stout. The propodal palm of the subchelate pereopod is convex. The carpus of pereopods 2 and 3 is small, and triangular in the lateral view, while the carpus of pereopods 4-6 is long, cylindrical, and longer than the merus. The carpi each have 1 sensory spine, and the basipodites bear 3 long scolopidial feather-like setae. Each dactylus bears a claw with 3 notches. Pereopod 7 is not well developed. Palp 1 is operculiform. The endopodite of the uropod is considerably shorter than the sympodite, with the apex rounded, bearing 2 long feather-like and 7 simple setae. The telson is proximally widest, tapering to a narrow apex. The single proximomedial statocyst is very large, with the telsonic apex bearing 4 pairs of setae (Wägele & Brandt, 1985).
Disposition of Specimens: Specimens are located in the Zoological Museum of Amsterdam, immature adult holotype (ZMA Is. 105.284).
Ecological Classification: Stygobitic
Size: Total body length of specimen is 3 mm.
Number of Species in Genus: Three, all stygobitic
Species Range: Known only from Church Cave in Bermuda.
Closest Related Species: Curassanthura bermudensis differs from C. halma in having a more slender antennae 1, and the first peduncular article has 4 feather-like setae, instead of none.
Habitat: Anchialine limestone caves
Ecology: C. bermudensis was collected from Church Cave in Bermuda, washed from coarse sediments on the shore of the large cave pool in a collapse cave (semi-dark), clean with some wood debris, salinity (surface) 15.54 ppt. This cave is isolated from the sea with a very slow replacement time.
Life History: One immature adult was collected.
Evolutionary Origins: Curassanthura is a strictly stygobiont genus of marine paranthurid ancestors, with no relatives in the deep sea. Its natural active dispersion is only conceivable by using "land bridges" between islands; to postulate the very improbable accidental passive dispersion is not satisfactory, especially when dealing with cave animals.
Conservation Status: This species is listed as critically endangered (IUCN, 1996).
Please email us your comments and questions. | <urn:uuid:3ed01698-81f6-48a5-af88-6ad8795649de> | 2.921875 | 703 | Knowledge Article | Science & Tech. | 32.364368 |
The desert soil is alive! Well, the soil itself isn’t really living, but life occurs throughout the soil of the Mojave Desert, so it’s important to always stay on designated trails and roads when you are in the desert.
Small microorganisms called cyanobacteria, which are from the same family as blue-green algae, actually live on the surface of bare soil in the desert. For most of the Mojave Desert, the soil is usually characterized by rough dark patches as shown in the photo, but these cyanobacteria, with the aid of different types of lichens, mosses, and other colonies of microorganisms, can sometimes produce colorful soil crusts. In both cases, the soils are called cryptobiotic crust.
Cryptobiotic crust is very important to the health of the desert—a great sign that barren land is actually growing and thriving. In fact, cryptobiotic crust helps produce nutrients and organic material that are recycled back into the soil, and this supports vegetation in the desert. This is great news for all the desert animals, like desert tortoises, that feast on plants as their main source of nutrition. The organic structure of cryptobiotic soil can also help native seeds to germinate (sprout), again an important feature for plant eaters like desert tortoises.
It takes a very long time for cryptobiotic soil to form, and it is also very sensitive to changes in its environment, so when it is disturbed, it does not have an easy time recovering. Some estimates indicate that it takes 250 years for damaged desert habitat to recover! When people use the desert for recreation, they have the opportunity to see and experience some of the most amazing scenery in the world. But if they are not careful, or they purposefully hike or drive off designated trails, cryptobiotic soil can be devastated.
When you step on cryptobiotic soil or drive over it, you kill millions of organisms that support the plant life that desert tortoises eat. If the soil is destroyed, then plants cannot grow, and tortoises will have nothing to eat. So if you know anyone who drives or hikes off trail and they tell you it’s okay because they are always careful not to run over tortoises or their burrows, you can now tell them it’s not okay because they are destroying cryptobiotic soils that allow plants to grow to feed the tortoises that they are being so careful to avoid!
As you can see, cryptobiotic soil is very important to the Mojave Desert ecosystem, and we should make every effort to avoid walking on or touching the soil. The next time you are out on a desert hike or driving down an old desert road, please stay on the designated routes to avoid harming the living soil below you.
Daniel Essary is a research associate at the San Diego Zoo’s Desert Tortoise Conservation Center in Las Vegas. Read his previous post, A Desert Tortoise Isn’t Just Any Old Tortoise.
Fish farms have their proponents and their critics. But whether you're of the view that they provide an important source of protein or you think that fish farms breed diseases, there is one fact that's not under dispute: they have to be moved around every so often. That is because conventional fish farms are set up in sheltered waters but have to be moved once disease accumulates. When that happens, the cages are relocated using massive, carbon-spewing towboats which haul the cages from one site to the next location.
Off the shores of Puerto Rico, a test project is underway by researchers with MIT. Scientists with the university's Sea Grant's Offshore Aquaculture Engineering Center are testing a different kind of fish cage: one that can propel itself and not require the use of a massive energy-intensive operation to drag it through the water.
The spherical fish cage, developed by Ocean Farm Technologies, Inc. of Searsmont, Maine, is fully submerged and able to move itself using slow-moving propellers. The 62-foot diameter mesh sphere bobs along in the ocean with electric powered propellers. Initial tests have not shown great results: while the cage maneuvers well, its momentum and direction were unpredictable. But the future could show improvement if researchers can successfully outfit these self-propelled fish farms with solar cells or wave-motion apparatuses to get them moving without the use of grid electricity.
Taking Derivatives of Parametric Systems
Just as we are able to differentiate functions of x, we are able to differentiate x and y, which are functions of t. Consider:
We would find the derivative of x with respect to t, and the derivative of y with respect to t:
In general, we say that if x = x(t) and y = y(t), then x' = dx/dt and y' = dy/dt.
It's that simple.
This process works for any number of variables.
Slope of Parametric Equations
In the above process, x' has told us only the rate at which x is changing, not the rate for y, and vice versa. Neither is the slope.
In order to find the slope, we need something of the form dy/dx.
We can discover a way to do this by simple algebraic manipulation: dy/dx = (dy/dt) / (dx/dt) = y'/x', provided x' ≠ 0.
So, for the example in section 1, the slope at any time t:
In order to find a vertical tangent line, set the horizontal change, or x', equal to 0 and solve.
In order to find a horizontal tangent line, set the vertical change, or y', equal to 0 and solve.
If there is a time when both x' and y' are 0, that point is called a singular point.
Concavity of Parametric Equations
Solving for the second derivative of a parametric equation can be more complex than it may seem at first glance. When you take the derivative of dy/dx with respect to t, you are left with d/dt(dy/dx):
By multiplying this expression by dt/dx (that is, by 1/x'), we are able to solve for the second derivative of the parametric equation:
Thus, the concavity of a parametric equation can be described as: d²y/dx² = [d/dt(dy/dx)] / (dx/dt).
So for the example in sections 1 and 2, the concavity at any time t:
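The slope and concavity rules above are easy to check numerically. The sketch below uses a hypothetical example curve x = t², y = t³ (the tutorial's own example is not reproduced here) and approximates each derivative with a central difference:

```python
def deriv(f, t, h=1e-6):
    """Central-difference approximation to f'(t)."""
    return (f(t + h) - f(t - h)) / (2 * h)

# Hypothetical example curve (not the tutorial's): x = t^2, y = t^3
x = lambda t: t ** 2
y = lambda t: t ** 3

def slope(t):
    """dy/dx = (dy/dt) / (dx/dt)."""
    return deriv(y, t) / deriv(x, t)

def concavity(t):
    """d²y/dx² = d/dt(dy/dx) divided by dx/dt."""
    return deriv(slope, t) / deriv(x, t)

# Analytically dy/dx = 3t/2 and d²y/dx² = 3/(4t); check at t = 2:
print(slope(2.0))      # close to 3.0
print(concavity(2.0))  # close to 0.375
```

Note that for this curve both x' = 2t and y' = 3t² vanish at t = 0, so the origin is a singular point in the sense defined above.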
(For previous posts in this series, see here.)
In my series on the logic of science, I recounted how philosopher of science Pierre Duhem had pointed out as far back as 1906 that the theories of science are all connected to each other and changes in one area will have unavoidable effects on others that should be discernible. In this case, if neutrinos in the OPERA experiment did in fact travel faster than the speed of light, then we should be able to look at some other effects that should occur and see if they are observed.
One of them is the ‘Cherenkov effect’. This effect says that when something travels faster than the speed of light, it should emit a certain kind of radiation that is analogous to the shock waves that are produced when something travels faster than the speed of sound. This is known as the ‘sonic boom’ that we can hear when jet planes break the speed of sound. It also occurs when bullets are fired at speeds greater than the speed of sound but because bullets are so small the sonic boom is too weak for us to hear it.
The Cherenkov effect is well known and has been studied and confirmed. How can this be if it requires something to travel faster than the speed of light? Recall that the speed of light barrier in Einstein’s theory is that in a vacuum. When light travels through any medium (glass, water, atmosphere), it is slowed down by the interactions of the medium with the light particles. Other particles such as electrons are also slowed down by the medium, though not necessarily to the same extent, in which case it can be possible for some particles in a medium to travel faster than the speed of light in that same medium. If they do so, they should emit the light equivalent of the sonic boom and this is called Cherenkov radiation. The spectrum of light emitted lies mainly in the ultraviolet region and its overlap with the visible spectrum produces a characteristic blue glow. One can see this in the cooling water that surrounds nuclear reactors, as in the image on the right, and in this video of a pulse of radiation being sent into the cooling liquid.
In a paper, Andrew Cohen and Sheldon Glashow calculate that high energy, faster-than-light neutrinos as produced in the OPERA experiment would lose much of their energy due to Cherenkov radiation, mainly by the production of electron-positron pairs, on their way from CERN to Gran Sasso. But that does not seem to have happened, according to a different experiment at Gran Sasso (known as ICARUS) that works with the same neutrino source as the OPERA experiment.
Another concern involving consistency is with the supernova SN1987A that was observed in 1987. It turned out that a cluster of 24 neutrinos were detected in three different detectors on the Earth about three hours before the supernova was observed, i.e. before the light signals reached Earth. That difference was not put down to the neutrinos traveling faster than the speed of light but to the fact that the neutrinos, while created at the same time as the light, escaped from the exploding star three hours before the light did due to their low interactivity with matter, and so had a head start on the journey to Earth, even though they traveled in free space at the same speed as light. The measured time difference was consistent with our understanding of the processes involved in a supernova.
If the neutrinos had speeds greater than that of light by even the small amount given by the OPERA experiment, then because of the huge distance of the supernova from Earth (about 168,000 light years), the supernova neutrinos should have reached Earth about 4.7 years before we saw the supernova. If neutrinos in the OPERA experiment had in fact, been traveling faster than the speed of light, why had they not done so in other situations, such as the 1987 supernova?
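The several-year lead time is one line of arithmetic. The numbers below are illustrative assumptions: a fractional speed excess of about 2.5 × 10⁻⁵ (roughly what OPERA reported) and the ~168,000 light-year distance quoted above; the exact figure depends on the values adopted.

```python
# Assumed inputs (see above): illustrative, not authoritative.
speed_excess = 2.5e-5    # approximate (v - c)/c from OPERA
distance_ly = 168_000    # distance to SN 1987A in light-years

# Light takes distance_ly years to arrive; a neutrino faster by this
# fraction arrives earlier by roughly distance * excess years.
lead_years = distance_ly * speed_excess
print(round(lead_years, 1))  # -> 4.2
```

This is the same order as the roughly 4.7 years quoted above; the small discrepancy comes only from the rounded inputs.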
The working model of science is that things behave in a law-like, repeatable manner and not idiosyncratically. If we observe something in one situation, we expect to see it happening again in similar situations. If a deviation from law-like behavior is observed, we assume that this is due to the existence of another, deeper, hitherto unknown law whose effect only became apparent because of some conditions that had been incorrectly assumed to be unimportant.
In this case, one could postulate that since the OPERA neutrinos have a thousand times as much energy as the supernova neutrinos, faster-than-light speeds only arise for such high-energy neutrinos. Of course, such a new explanation requires new corroborative evidence and so the discussion will go on as explanations and evidence play out their dialectical relationship until a consensus emerges. That is how science works.
Next: Science and public relations
Back to Project Listings
Objective: To introduce students to basic concepts of DC Electricity
Atoms, the fundamental building blocks of matter, are made of three kinds of particles: electrons, protons, and neutrons.
The figure above shows a particularly simple atom (an "isotope" of hydrogen, called deuterium, which just happens to have one particle of each type). All atoms except hydrogen atoms have more than one electron, proton, and usually one or more neutrons.
Electrons orbit around the nucleus, which lies at the center and contains the protons and neutrons.
The electron stays in orbit because it is negatively charged, and is therefore attracted to the positively charged proton in the nucleus. This is an example of the Law of electrical charges:
Like charges repel, opposite charges attract.
Most atoms, when in their normal state as components of matter, have the same number of electrons as protons, so that the total, or net charge of the atom is zero. That is, the atom is neutral. The number of protons (which is equal to the number of electrons if the atom is neutral) is called the atomic number of the atom, and this determines what element (hydrogen, helium, iron, etc), that the atom is classified as.
Ions are atoms to which some electrons have been added or taken away, so that the atom has a net charge (positive if electrons have been removed, negative if electrons have been added). Ions that have opposite net charges attract, and those with net charges of the same sign repel.
Atoms of the same atomic number can have different numbers of neutrons. These are the different isotopes of an element. Hydrogen has three isotopes: no neutrons - called "protium" (this is the hydrogen isotope most commonly found), one neutron - called deuterium, and two neutrons - called tritium.
Although he didn't know about electrons and protons, the designations of positive and negative were made by Benjamin Franklin over two hundred years ago! Later on people realized they corresponded to the charges of protons and electrons, respectively.
In some materials, particularly metals, the electrons farthest from the nucleus are not bound to a particular atom - they can move freely from one atom to another. Electricity is the flow of these free electrons in a wire:
Such a flow of electrons is called a current.
What makes these free electrons move? Suppose we put something that has a net positive charge at the one end of the wire (say, at the right end of the wire pictured above). Let's also suppose that we put something with a net negative charge at the other end (the left end in the wire above). Then the electrons in the wire will be attracted to the positive end and repelled by the negative end. Hence, they will flow from left to right. That's electricity!
Batteries are devices which can do exactly what is described just above - make a current flow by creating a positive charge at one end of a wire and a negative charge at the other. A battery has two terminals (wire contacts), called positive and negative, corresponding to the net charges created at the terminals. The symbol for a battery is the following:
When a wire is connected across these terminals, forming a closed circuit, the positive and negative charges created by the battery cause a current to flow:
Note that the electrons flow from the negative terminal to the positive terminal. Eventually, when enough electrons have flowed, the battery will become drained, and the current will cease.
Even though the electrons flow from the negative to the positive terminals, it is conventional to say that the current flows from positive to negative:
Why is this? This is simply because people can't see electrons, and so they guessed wrong when they settled on a convention. But this is not really a problem, because even today we still don't see electrons in most applications, so it doesn't really matter for most purposes which direction the electrons actually go.
How do we measure current? Current is measured by literally counting the number of electrons that pass a given point in the wire. Any point will do - it doesn't matter which one, because the current will be the same at each point of the wire, unless the wire branches off into a more complicated circuit.
Because there are billions of billions (much more than billions and billions! Carl Sagan would be impressed!) of electrons in even a very little piece of wire, we need to have a unit of measurement that will make it easy to count so many.
The basic unit for counting electrons (that is, charge) is the "coulomb" (pronounced "cool lum"):
1 coulomb ≈ 6.24 x 10^18 electrons = 6,240,000,000,000,000,000
= over 6 billion billion electrons!
To measure current, we pick one point along the wire and count the electrons that go by, like watching things go by on an assembly line. If 1 coulomb of electrons goes by each second, then we say that the current is 1 "ampere" (pronounced "am - peer"), or 1 amp for short. If 2 coulombs per second go by, we say the current is 2 amps, and so on:
1 ampere = 1 coulomb per second
It is traditional to represent the current with the symbol I, as in I = 1 amperes, or I = 15 amperes, etc.
Some batteries try to push electrons through the wire more strongly than others. How strongly the battery pushes is a measure of its voltage, symbolized with the letter V (as in the diagrams above). You can think of voltage like pressure: the higher the voltage, the higher the pressure is to push electrons through the wire. The lower the voltage, the lower the pressure.
Voltage is measured in volts. For example, common voltages for batteries are 1.5 volts, 6 volts, 9 volts, and 12 volts. Car batteries are typically 12 volts. An electrical outlet in the United States has a voltage of 110 volts (pretty high!).
The voltage of a battery is related to the amount of energy that the battery can deliver. A voltage of V = 1 volt means that the battery will deliver 1 "Joule" of energy for each coulomb of charge that flows through the circuit. A voltage of V = 2 means that the battery will deliver 2 Joules of energy for each coulomb. A Joule is the basic unit of energy in the metric International system of units - it's about the amount of energy it takes to lift one pound 9 inches. How high the voltage of the battery is depends in detail on the internal construction of the battery, which is an "electro-chemical" energy storage device.
How are amperes (current) and voltage (electrical pressure), related to one another? For a given voltage, some wires let more current flow than others. A wire that doesn't let very much current flow is said to have high resistance. Resistance is symbolized with the letter R.
To simplify matters, we usually assume that the wire itself is an ideal wire with no resistance, and we represent the resistance as a localized component of the circuit symbolized with a broken line:
In fact the resistor can now represent the sum total of the wire's resistance, including that contributed by any additional components in the circuit that have resistance. There are devices called "resistors" whose function is simply to provide additional resistance.
The resistance is related to how much current I we get from a given applied voltage V by "Ohm's Law" in the following way:
Ohm's Law: I = V / R
If I is measured in amperes, and V in volts, then we say that R has units of "ohms" (pronounced "olms"). More specifically, if we have a 1 volt battery, and a wire with a resistance of 1 ohm, then the current that results when the wire is placed across the battery's terminals is given by
I = V / R = 1 volt / 1 ohm = 1 ampere.
Likewise, if we have a 2 volt battery, and a wire with a resistance of 3 ohms, then the current that results when the wire is placed across the battery's terminals is given by
I = V / R = 2 volt / 3 ohm = 2/3 amperes.
Thus, knowing the voltage and the resistance, we can now predict the current. Likewise, we can also now turn the problem around and calculate, say, the resistance by measuring the voltage and current with a voltmeter (voltmeters are capable of measuring both voltage and current - but not generally at the same time!).
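The two worked examples, and the turned-around problem, can be wrapped in tiny helper functions (a sketch; the function names are mine):

```python
def current(voltage, resistance_ohms):
    """Ohm's law: I = V / R (volts / ohms -> amperes)."""
    return voltage / resistance_ohms

print(current(1, 1))  # 1 volt across 1 ohm  -> 1.0 ampere
print(current(2, 3))  # 2 volts across 3 ohms -> 0.666... amperes

def resistance(voltage, current_amps):
    """The problem turned around: R = V / I."""
    return voltage / current_amps

print(resistance(2, 2 / 3))  # ~3.0 ohms (recovers the second example)
```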
What happens to the energy delivered to the wire? For the case of a simple piece of wire plus some additional devices with are purely resistive, the energy is completely converted into heat energy in the wire (as shown in the figure above), which escapes into space.
As discussed above, the voltage of a battery is related to the amount of energy that the battery can deliver. A voltage of V = 1 volt means that the battery will deliver 1 Joule of energy for each coulomb of charge that flows through the circuit, a voltage of V = 2 yields two Joules per coulomb, and so forth.
In many practical applications, we want to know the rate at which energy is delivered, not how much energy is delivered per coulomb of charge. For example, you might need to insure that a circuit you design will deliver 2 joules per second to make a toy car go fast enough.
The rate at which energy is delivered is called power. Power is thus defined:
Power = Energy / Time.
In the metric International units, the unit of power corresponding to 1 joule per second is called a watt:
1 watt = 1 joule per second.
Because the voltage V tells us the number of joules per coulomb, and the current I tells us the number of coulombs per second, all we have to do to get the power is to multiply them:
Power = number of watts = number of joules/second
= joules/coulomb x coulombs/second = I V,
Power Formula: P = I V
Thus, suppose we had a circuit with a battery voltage of 2 volts and a current of 3 amps. Then the power delivered to the resistor would be
P = I V = (3 amps) (2 volts) = 6 watts.
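As a sketch, the power formula also combines with Ohm's law: substituting I = V / R into P = I V gives P = V² / R (function names are mine):

```python
def power(current_amps, volts):
    """P = I * V, in watts."""
    return current_amps * volts

print(power(3, 2))  # the worked example -> 6 watts

def power_from_vr(volts, ohms):
    """Combining with Ohm's law (I = V / R): P = V**2 / R."""
    return volts ** 2 / ohms

print(power_from_vr(2, 3))  # 2 volts across 3 ohms -> about 1.33 watts
```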
You are now ready to start doing some simple projects with electricity!
What is a method?
In a programming language, a method is something that you do to an object.
For example, the object is ‘door‘ and you want to apply an action ‘open‘.
In Python you can write this as: door.open().
door = object
open = method
( ) = argument
The argument list is where you pass parameters to the method.
For example, you might want to open the door only halfway rather than fully, or repaint it blue or red.
The syntax will be:
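As a sketch, here is a hypothetical Door class (the class, its methods, and its parameters are all invented for illustration) whose methods take arguments:

```python
class Door:
    """A toy object to illustrate methods and arguments."""

    def __init__(self):
        self.opening = 0       # percent open; 0 means closed
        self.colour = "white"

    def open(self, amount=100):
        """Open the door; `amount` says how far, as a percentage."""
        self.opening = amount

    def paint(self, colour):
        """Repaint the door in the given colour."""
        self.colour = colour

door = Door()
door.open(50)        # open the door only halfway
door.paint("blue")   # change the paint colour
print(door.opening, door.colour)  # -> 50 blue
```

Built-in objects work the same way: "hello".upper() calls the string method upper with an empty argument list.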
In Python, you can use built-in methods or define your own.
Let me explain in more detail.
WHAT IS WIND ENERGY? Wind is a form of solar energy, caused by the uneven warming of the earth's surface. This is why air masses have different temperatures and pressures, and are constantly moving to find a balance. The greater the difference in pressure, the swifter the air moves and the stronger the wind. People have used wind energy for thousands of years, using it to pump water, grind flour, press olives, and even to explore the world in wind-driven sailing ships. Wind farms use turbines to generate electricity, converting the kinetic energy of the wind into mechanical energy. The wind's force causes the long blades of the turbine to rotate. This rotation starts a generator, which produces low-voltage electric energy.
BENEFITS OF WIND ENERGY: Wind power is a renewable energy source that requires no fuel to operate and does not produce any emissions that are harmful to the environment. Wind turbines are made of plastic and metallic materials, so they don't have any radioactive or chemical impact either. Wind farms take up much less space than conventional power plants, and they also don't produce noise pollution. However, electricity produced from windmills generally costs more than that produced from traditional sources like natural gas and coal. At best, wind farms produce electricity at an efficiency rate of 30 percent, compared to a 70 percent efficiency rate from natural gas and coal. Wind energy is also unreliable. Electricity can't be stored: it must be produced on demand, yet wind is inherently unpredictable. The new turbine blades are designed to increase reliability and efficiency as well as reduce maintenance costs.
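To put the 30 percent figure in concrete terms, here is a quick sketch; the 2 MW rating is an assumed, illustrative turbine size:

```python
HOURS_PER_YEAR = 365 * 24  # 8760

def annual_energy_mwh(rated_mw, capacity_factor):
    """Energy delivered in a year at a given average utilisation."""
    return rated_mw * HOURS_PER_YEAR * capacity_factor

# An assumed 2 MW turbine running at the 30% figure quoted above:
print(annual_energy_mwh(2, 0.30))  # -> 5256.0 MWh
```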
The American Society of Mechanical Engineers, the Institute of Electrical and Electronics Engineers, Inc.-USA, and the American Geophysical Union contributed to the information contained in the TV portion of this report.
Mid-19th century tool for converting between different standards of the inch
An inch is an Imperial unit of length. Sweden also briefly had a "decimal inch" based on the metric system: see below for more.
According to some sources, the inch was originally defined informally as the distance between the tip of the thumb and the first joint of the thumb. Another source says that the inch was at one time defined in terms of the yard, supposedly defined as the distance between Henry I of England's nose and his thumb. There are twelve inches in a foot, and three feet in a yard.
The word for "inch" is similar to the word for "thumb" in some languages. In French, pouce means both inch and thumb, as does pollice in Italian; Spanish has pulgada (inch) and pulgar (thumb); Swedish has tum (inch) and tumme (thumb). In Dutch the word is exactly the same for both: duim.
Historically, the inch has referred to several slightly different units of length, used in different parts of the world. There was little uniformity; different countries, and even different cities within the same country, used their own standard length. Today there are two units called the "inch" still in use, both being largely confined to the United States. Other countries, which previously had their own separate definitions of the inch, have converted to using the metric system instead. When the inch being referred to is not specified, it almost always means the international inch.
The international inch is defined in terms of the metric system of units to be exactly 25.4 mm. This definition was agreed upon by the U.S. and the British Commonwealth in 1958. Prior to that, the U.S. and Canada each had their own, slightly different definition of the inch in terms of metric units, while the UK and other Commonwealth countries defined the inch in terms of the Imperial Standard Yard. The definition adopted was the Canadian one. A metric inch was also used in some Soviet clones of Western computers: the clones were slightly scaled copies, and hence Soviet parts did not match Western ones exactly.
U.S. survey foot
However, the US continued to use its previous national definition of the length units for surveying purposes. The US survey foot is defined so that 1 metre is exactly 39.37 inches (inches, however, are rarely if ever used in surveying). The international inch is exactly 0.999998 times the old US definition of an inch; 1 survey inch equals 25.4/0.999998, or approximately 25.4000508 mm. Whilst the two units differ by only two parts per million, that difference amounts to many metres when the units are used for measurements on the scale of thousands of kilometres.
As a result the U.S. survey acre is 4046.8726 m², compared with the international acre which is 4046.8564 m², a difference equal to about a sheet of A6 paper or 1/4 of a U.S. letter size paper.
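Both the metres-scale drift and the two acre values follow directly from the definitions, as this sketch shows (the variable names are mine):

```python
FOOT_INTL = 0.3048         # international foot in metres (exact)
FOOT_SURVEY = 1200 / 3937  # U.S. survey foot in metres (exact ratio)

# A 1000 km distance expressed in feet, then re-read in the other unit:
feet = 1_000_000 / FOOT_INTL
print(round(feet * FOOT_SURVEY - 1_000_000, 3))  # -> 2.0 metres of drift

# The two acres (43,560 square feet each):
SQFT = 43_560
print(round(SQFT * FOOT_INTL ** 2, 4))    # -> 4046.8564 m^2
print(round(SQFT * FOOT_SURVEY ** 2, 4))  # -> 4046.8726 m^2
```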
The thou or mil is a unit sometimes used in engineering, equivalent to one-thousandth of an international inch, and thus defined to be 25.4 µm. Use of the thou is now generally deprecated in favour of SI units. When "thou" means the measurement, its "th" is pronounced as in "thousand" — IPA /θaʊ/ — and not as in "that" or the pronoun "thou" — IPA /ðaʊ/.
The inch unit may be denoted by a double prime (ex. 30″ = 30 inches), often approximated by a quotation mark. Similarly, feet can be denoted by a prime (often approximated by an apostrophe), so 6′2″ means 6 feet 2 inches.
In the 19th century, Sweden devised a stepwise path into the metric world. First, in 1855–1863 the existing "working inch" was changed into a "decimal inch" of 1/10 foot, approximately 0.03 metres. Proponents argued that a decimal system simplifies calculations, but having two different inch measures turned out to be so complicated that in 1878–1889 it was agreed to introduce the metric units.
On covering a figure with diamonds
In the American Mathematical Monthly (May 1989, pp.429–431), under the title “The Problem of the Calissons”, Guy David and Carlos Tomei present the theorem given below. Before formulating the theorem, we introduce some terminology.
We consider a regular triangularization of the Euclidean plane, the grid lines of which cut up the plane into equilateral triangles with sides of length 1; in the following, “triangle” refers to such an equilateral triangle. Note that triangles occur in two orientations and that we can colour them accordingly, e.g.
A pair of differently coloured triangles that share a side is referred to as a “diamond”, and a moment’s reflection tells us that each diamond has one of three possible orientations, viz.
A “figure” is a finite set of triangles; a “covering”of a figure is a partitioning of the figure’s triangles into diamonds.
The theorem presented by David and Tomei states that in any covering of a regular hexagon with sides of length n (and comprising 6n² triangles), the diamonds occur in the three orientations in equal numbers.
* * *
Because of the rotational symmetry of the hexagon, the above theorem is an immediate consequence of the following one.
Theorem Consider a figure that admits one or more coverings. Each covering yields a triple of frequencies, each counting the diamonds in one of the orientations. All coverings of the figure considered yield the same triple.
With the aid of some colleagues, we found for the latter theorem an argument so sweet that it was submitted to the AMM and accepted for publication. But then I withdrew it when I learned that undergraduate students at the Pontifícia Universidade Católica do Rio de Janeiro knew the following, much shorter proof.
Consider a covering of the figure. Draw in each diamond a vector from the centre of its white triangle to the centre of its black triangle. Denote the three possible vectors by ξ,η,ζ and the triple of their frequencies by x,y,z . By adding all these vectors in the covering of the figure, we get the vector equation
x∙ξ + y∙η + z∙ζ = (the sum of the black centres) – (the sum of the white centres) .
Moreover we have the scalar equation
x + y + z = half the number of triangles in the figure .
These three independent linear equations in x,y,z have right-hand sides wholly determined by the figure, and hence determine the triple of frequencies independent of the covering.
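The theorem can be sanity-checked on the smallest hexagon. The sketch below uses my own encoding, not the authors': the side-1 hexagon is six triangles ("slices") meeting at its centre, a covering is a perfect matching of the 6-cycle of adjacent slices, and a diamond's orientation is its shared spoke angle modulo 180°.

```python
from itertools import combinations

# Slices 0..5 around the centre; slices k and k+1 share the spoke at
# angle (k+1)*60 degrees.  Spokes 180 degrees apart give diamonds of
# the same orientation, so the orientation class is the angle mod 180.
edges = [(k, (k + 1) % 6) for k in range(6)]

def orientation(edge):
    k = edge[0]
    return ((k + 1) * 60) % 180

# A covering = three cycle edges that together use every slice once.
coverings = [m for m in combinations(edges, 3)
             if len({v for e in m for v in e}) == 6]

triples = [sorted(orientation(e) for e in m) for m in coverings]
print(len(coverings), triples)  # -> 2 coverings, both [0, 60, 120]
```

Both coverings yield one diamond of each orientation, as the theorem predicts.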
I thought the above, as yet anonymous argument too beautiful not to be publicly recorded.
Austin, 9 December 1989
prof.dr. Edsger W.Dijkstra
Department of Computer Sciences
The University of Texas at Austin
Austin, TX 78712-1188
A sungrazing comet is a comet that passes extremely close to the Sun at perihelion – sometimes within a few thousand kilometres of the Sun's surface. While small sungrazers can be completely evaporated during such a close approach to the Sun, larger sungrazers can survive many perihelion passages. However, the strong evaporation and tidal forces they experience often lead to their fragmentation.
The Kreutz Sungrazers
The most famous sungrazers are the Kreutz Sungrazers, which all originate from one giant comet that broke up into many smaller comets during its first passage through the inner Solar System. An extremely bright comet seen by Aristotle and Ephorus in 371 BC is a possible candidate for this parent comet.
The Great Comets of 1843 and 1882, Comet Ikeya–Seki in 1965 and C/2011 W3 (Lovejoy) in 2011 were all fragments of the original comet. Each of these three was briefly bright enough to be visible in the daytime sky, next to the Sun, outshining even the full moon.
Since the launch of the SOHO satellite in 1995, hundreds of tiny Kreutz Sungrazers have been discovered, all of which have either plunged into the Sun or been destroyed completely during their perihelion passage, with the exception of C/2011 W3 (Lovejoy). The Kreutz family of comets is apparently much larger than previously suspected.
Other sungrazers
About 83% of the sungrazers observed with SOHO are members of the Kreutz group. The other 17% contains some sporadic sungrazers, but three other related groups of comets have been identified among them: the Kracht, Marsden and Meyer groups. The Marsden and Kracht groups both appear to be related to Comet 96P/Machholz. These comets have also been linked to several meteor streams, including the Daytime Arietids, the delta Aquariids, and the Quadrantids. Linked comet orbits suggest that both the Marsden and Kracht groups have a short period, on the order of five years, but the Meyer group may have intermediate- or long-period orbits. The Meyer group comets are typically small, faint, and never have tails. The Great Comet of 1680 was a sungrazer and, while used by Newton to verify Kepler's equations on orbital motion, it was not a member of any larger group. However, comet C/2012 S1 (ISON) has orbital elements similar to the Great Comet of 1680 and could be a second member of the group.
Origin of sungrazing comets
Studies show that for comets with high orbital inclinations and perihelion distances of less than about 2 astronomical units, the cumulative effect of gravitational perturbations over many orbits is adequate to reduce the perihelion distance to very small values. One study has suggested that Comet Hale–Bopp has about a 15% chance of eventually becoming a sungrazer.
- cometography.com, C/1979 Q1 – SOLWIND 1
- Complete list of SOHO comets
- J. Bortle (2012-09-24). "the orbital elements' distinct and surprising similarity to those of the Great Comet of 1680". comets-ml · Comets Mailing List. Retrieved 2012-10-05.
- Bailey, M. E.; Emel'yanenko, V. V.; Hahn, G.; Harris, N. W.; Hughes, K. A.; Muinonen, K. (1996). "Orbital evolution of Comet 1995 O1 Hale-Bopp". Monthly Notices of the Royal Astronomical Society 281 (3): 916–924. Bibcode:1996MNRAS.281..916B.
- Bailey, M. E.; Chambers, J. E.; Hahn, G. (1992). "Origin of sungrazers – A frequent cometary end-state". Astronomy and Astrophysics 257 (1): 315–322. Bibcode:1992A&A...257..315B.
- Ohtsuka, K.; Nakano, S.; Yoshikawa, M. (2003). "On the Association among Periodic Comet 96P/Machholz, Arietids, the Marsden Comet Group, and the Kracht Comet Group". Publications of the Astronomical Society of Japan 55: 321–324. Bibcode:2003PASJ...55..321O.
- SOHO sungrazers information
- Cometography sungrazers page
- Sun approaching comets
- Mass Loss, Destruction and Detection of Sun-grazing and -impacting Cometary Nuclei (arXiv:1107.1857 : 10 Jul 2011) | <urn:uuid:c4321242-98c1-403c-8200-ea03132de728> | 3.625 | 994 | Knowledge Article | Science & Tech. | 60.262249 |
Students can learn about the conditions necessary for hurricanes to form and sustain themselves. The discussion is accompanied by remote imagery and an animation showing how wind shear can retard the development of a hurricane. Links to more detailed information are embedded in the text.
Intended for grade levels:
Type of resource:
QuickTime is required to view the animation.
Cost / Copyright:
Copyright 1997 by the University of Illinois Board of Trustees (except in the case of photos and other resources which are specifically identified). The names "Weather World 2010" and "WW2010" are trademarks of the University of Illinois. Non-commercial use must be accompanied by appropriate credit visible with respect to the used item. "Image/Text/Data from the University of Illinois WW2010 Project." Web utilizations should, if possible, provide a link to our server nearby.
DLESE Catalog ID: NASA-Edmall-557
Resource contact / Creator / Publisher:
Publisher: The University of Illinois at Urbana-Champaign
Department of Atmospheric Sciences | <urn:uuid:68e2cf66-2896-4b10-897c-cec4e0d726b5> | 3.359375 | 212 | Structured Data | Science & Tech. | 23.564212 |
First-of-Its-Kind Map Details the Height of the Globe's Forests
ScienceDaily (July 21, 2010) — Using NASA satellite data, scientists have produced a first-of-its-kind map that details the height of the world's forests. Although there are other local- and regional-scale forest canopy maps, the new map is the first that spans the entire globe based on one uniform method.
The work -- based on data collected by NASA's ICESat, Terra, and Aqua satellites -- should help scientists build an inventory of how much carbon the world's forests store and how fast that carbon cycles through ecosystems and back into the atmosphere. Michael Lefsky of Colorado State University described his results in the journal Geophysical Research Letters.
The new map shows the world's tallest forests clustered in the Pacific Northwest of North America and portions of Southeast Asia, while shorter forests are found in broad swaths across northern Canada and Eurasia. The map depicts average height over 5-square-kilometer (1.9 square mile) regions, not the maximum heights that any one tree or small patch of trees might attain.
Temperate conifer forests -- which are extremely moist and contain massive trees such as Douglas fir, western hemlock, redwoods, and sequoias -- have the tallest canopies, soaring easily above 40 meters (131 feet). In contrast, boreal forests dominated by spruce, fir, pine, and larch have canopies typically less than 20 meters (66 feet). Relatively undisturbed areas in tropical rain forests reach about 25 meters (82 feet), roughly the same height as the oaks, beeches, and birches of temperate broadleaf forests common in Europe and much of the United States.
Scientific interest in the new map goes far beyond curiosities about tree height. The map has implications for an ongoing effort to estimate the amount of carbon tied up in Earth's forests and for explaining what sops up 2 billion tons of "missing" carbon each year.
Humans release about 7 billion tons of carbon annually, mostly in the form of carbon dioxide. Of that, 3 billion tons end up in the atmosphere and 2 billion tons in the ocean. It's unclear where the last two billion tons of carbon go, though scientists suspect forests capture and store much of it as biomass through photosynthesis.
There are hints that young forests absorb more carbon than older ones, as do wetter ones, and that large amounts of carbon end up in certain types of soil. But ecologists have only begun to pin down the details as they try to figure out whether the planet can continue to soak up so much of our annual carbon emissions and whether it will continue to do so as climate changes.
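The budget described in the preceding paragraphs is simple arithmetic; a minimal sketch using the approximate annual figures quoted in the article:

```python
# Approximate annual carbon budget (billions of tons), as quoted above.
emitted = 7.0      # total human carbon emissions per year
atmosphere = 3.0   # ends up in the atmosphere
ocean = 2.0        # absorbed by the ocean

# The remainder is the "missing" carbon, thought to be captured
# largely by forests as biomass through photosynthesis.
missing = emitted - atmosphere - ocean
print(missing)  # 2.0 billion tons
```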
Article continues: http://www.sciencedaily.com/releases/2010/07/100720162306.htm | <urn:uuid:6deb6827-1868-4c55-8279-ea7c2c373be7> | 3.515625 | 586 | Truncated | Science & Tech. | 49.037534 |
A celestial speck that astronomers thought was a meteoroid is in fact Jupiter's 17th moon: its smallest, and the first found for 26 years. University of Arizona astronomers used the 36-inch telescope at Kitt Peak to track the object for a month last year. They estimate that the object orbits Jupiter once every two years at an average distance of 24 million kilometres. The new moon may measure only 5 kilometres across, and it won't get a permanent name and number unless researchers spot it re-emerging from the Sun's glare in September.
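The quoted orbit figures are mutually consistent: Kepler's third law with Jupiter's standard gravitational parameter (GM ≈ 1.267e17 m^3/s^2, a value not given in the article) yields a period close to two years for a 24-million-kilometre orbit:

```python
import math

GM_JUPITER = 1.26687e17  # m^3/s^2, standard value (not from the article)
a = 24e9                 # average orbital distance in metres (24 million km)

# Kepler's third law: T = 2*pi*sqrt(a^3 / GM)
T_seconds = 2 * math.pi * math.sqrt(a**3 / GM_JUPITER)
T_years = T_seconds / (365.25 * 86400)
print(round(T_years, 2))  # ~2.1 years, matching "once every two years"
```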
Look closely to see these tiny skeleton shrimp clinging to bryozoans, hydroids or algae. Their body shape and color help the shrimp to blend into their background. Their bodies are long, cylindrical and range from pale brown and green to rose. Some species can quickly change color to blend into their backgrounds.
Skeleton shrimp look like, and sometimes are called, "praying mantises of the sea." They have two pairs of legs attached to the front end of their bodies, with three pairs of legs at the back end. The front legs form powerful "claws" for defense, grooming and capturing food. The rear legs have strong claws that grasp and hold on to algae or other surfaces. They use their antennae for filter feeding and swimming.
Diet: diatoms (microscopic plants), detritus, filtered food particles, amphipods
Size: to 1.5 inches (4 cm) long
Habitat: low intertidal zone and subtidal waters in bays,
Skeleton shrimp are abundant and live in many habitats, including the deep sea. They play an important role in the ecosystem by eating up detritus and other food particles.
Shrimp, sea anemones and surf perch prey on skeleton shrimp. The females of some skeleton shrimp species kill the male after mating.
Skeleton shrimp use their front legs for locomotion. To move, they grasp first with those front legs and then with their back legs, in inchworm fashion. They swim by rapidly bending and straightening their bodies.
To grow, skeleton shrimp shed their old exoskeletons and form new, larger ones. They can mate only when the female is between new, hardened exoskeletons. After mating, the female deposits her eggs in a brood pouch formed from leaflike projections on the middle part of her body. Skeleton shrimp hatch directly into juvenile adults.
Source: Monterey Bay Aquarium:
Online Field Guide http://www.mbayaq.org/efc/living_species/default.asp?hOri=1&inhab=521 | <urn:uuid:daf184fe-c00d-4cf2-91b8-eb867492fe3c> | 3.25 | 430 | Knowledge Article | Science & Tech. | 59.617054 |
Friday, September 14, 2012
The Sun rotates more rapidly at its equator than near its poles. The magnetic fields near sunspots reverse polarity from one eleven-year sunspot cycle to the next.
Changing Speed Polar Fields
In this model, electric current passes through both poles of the star and then flows out in long tubes emanating from it. A secondary leakage current flows on or just below the Sun's surface, back toward the equator from each of the poles.
It is highly likely that such a current system has already been discovered. Stanford University recently announced: “Scientists using the joint European Space Agency (ESA)/NASA Solar and Heliospheric Observatory (SOHO) spacecraft have discovered ‘jet streams’ or ‘rivers’ of hot, electrically charged gas (plasma) flowing beneath the surface of the Sun. They also found features similar to ‘trade winds’ that transport gas beneath the Sun’s fiery surface.” Rivers of plasma are electric currents. Currents cause magnetic fields.
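The step from plasma "rivers" to magnetic fields is Ampère's law: a line current of magnitude I produces a field B = μ0·I/(2πr) at distance r. As a hedged illustration (the current and distance below are arbitrary round numbers, not measured solar values):

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, T*m/A

def field_from_line_current(current_amps, distance_m):
    """Magnetic field magnitude around a long straight current (Ampere's law)."""
    return MU0 * current_amps / (2 * math.pi * distance_m)

# Arbitrary illustrative values: a 10^6 A current filament,
# field sampled 100 km away.
B = field_from_line_current(1e6, 1e5)
print(B)  # about 2e-06 tesla, i.e. 2 microtesla
```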
A diagram of the Electric Sun.
Illustration from Don Scott’s book, The Electric Sky.
Regardless of the direction of the main driving current coming into the Sun, the eleven-year reversal of the magnetic loops can be explained by the change of the speeds of the polar fields. If the main magnetic field starts to weaken in speed, the secondary (surface) current will reverse direction. Consequently the magnetic polarity of the loops will also reverse.
Low Sunspot Activity During Reversal
If a filament is flowing southward from near the Sun’s north pole and it is on or just beneath the Sun’s surface, a looping magnetic field will emerge to the east of the current creating a north magnetic pole there. In the Sun’s southern hemisphere, the secondary surface current is flowing northward toward the solar equator. The resulting magnetic field will emerge (north magnetic pole) to the west of the current and return down to the surface (forming a south magnetic pole) to the east of the current.
The change of sunspot’s polarity implies changes in the speeds of the polar magnetic fields of the Sun. We observe such change relative to a fixed value of equatorial speed of 25.75 days. We obtain N-S and S-N polarized sunspots on different hemispheres of the Sun, by calculated polar field speeds of 37.176 and 37.4075 days respectively.
To conserve the natural law of changing polarity of sunspots at each new cycle, we conclude that the polar speeds must also undergo change. If we assume that the average equatorial speed of the next cycle is also 25.75 days, then the polar speed of 37.176 days of the previous cycle will have to decrease to 37.4075 days and vice versa; the plus then changes into minus at a speed of 37.2915 days. The sunspot activity is then almost zero. This normally happens at the end of a cycle.
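The crossover value quoted above is, to within rounding, just the midpoint of the two polar rotation periods; a quick check:

```python
# Polar field rotation periods quoted in the text, in days.
previous = 37.176
next_cycle = 37.4075

# The crossover value at which the polarity flips is the midpoint
# of the two periods.
crossover = (previous + next_cycle) / 2
print(crossover)  # ~37.292 days, close to the quoted 37.2915
```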
I expect the magnetic fields are slowly changing over into each other at this moment. In other words, in 2012 this phenomenon will happen not at the end of a cycle, but right in the middle: a dramatic switch in the magnetic field of the Sun. The result is a 'Killer Flare'. It is this flare that will destroy our civilisation in 2012.
Lowest Sunspot Activity Since Beginning of Measurements!
The counting of the 11-year sunspot cycle was started in 1755 and the 23rd solar cycle has been completed. Recently the 24th solar cycle has started. Patrick Geryl has been predicting very low sunspot activity since August 2010 after cracking a further part of the Maya sunspot code. An update gives us the recent values:
The current solar cycle (solar cycle 24) has confounded many observers, perhaps even the NOAA Solar Cycle Prediction Panel who had come to a consensus that the solar cycle presently underway would peak sometime during early 2013.
In fact, the current prediction model for this month of January, 2011, has its value way down!
Sunspot Index Graphics
The monthly (blue) and monthly smoothed (red) sunspot numbers for the latest five cycles:
The Ten Centimetre Solar Radio Flux
Radio flux very low!
The radio emission from the sun at a wavelength of 10.7 centimetres (often called "the 10 cm flux") has been found to correlate well with the sunspot number. Sunspot number is defined from counts of the number of individual sunspots as well as the number of sunspot groups and must be reduced to a standard scale taking into account the differences in equipment and techniques between observatories. On the other hand, the radio flux at 10.7 centimetres can be measured relatively easily and quickly and has replaced the sunspot number as an index of solar activity for many purposes.
Sunspot activity lower than the low of 1798! That was, until now, the lowest since the beginning of measurements:
How to Survive 2012
June 16, 2004 ANN ARBOR, Mich. -- As our planet's population swells and increases demands on natural resources, ecological scientists should work with other experts to make sure our basic survival needs are met.
That's the conclusion of a group of researchers who have been pondering ways for ecological science to tackle the big scientific challenges of the future. The group, which includes University of Michigan ecologist Mercedes Pascual, summarized its position in the May 28 issue of Science.
While it's important to continue studying rare and rapidly shrinking undisturbed ecosystems, ecological research needs to reflect the reality that Earth will be overpopulated and increasingly affected by human activities for the foreseeable future, the scientists assert. They propose a new research agenda centered on finding ways to maintain the benefits that natural ecosystems provide humans, such as clean drinking water, stabilized soil and the buffering of infectious disease outbreaks. Specific research projects might look, for instance, at how to protect habitats to make sure that the most important benefits to people are not compromised.
Restoration has been a focus of ecological research for some time, but restoring an altered ecosystem to its original state may not always be possible, the researchers note. In some cases, the best alternative may be "designed ecological solutions" that combine ecological approaches with technological innovations.
In the Netherlands, for example, groundwater has been extracted from under coastal dunes for many years to provide drinking water for cities, but over-extraction has caused environmental damage. The designed ecological solution was to build artificial lakes that were filled with river water piped into the dune subsoil.
Developing such approaches will require ecologists to work with other experts, such as wastewater engineers, with whom they've had little contact in the past. Social scientists may also get involved, studying the tension between human needs and ecosystem needs.
Particularly pressing are solutions for problems related to three issues: urbanization, the degradation of fresh water, and the movement of materials between ecosystems, according to the authors, who are members of the Ecological Visions committee of the Ecological Society of America. The Science article summarizes the committee's full report.
Mercedes Pascual -- http://www.eeb.lsa.umich.edu/eebfacultydetails.asp?ID=60
Science -- http://www.sciencemag.org/
Ecological Society of America -- http://www.esa.org/
Ecological Visions Project -- http://www.esa.org/ecovisions/ev_projects/about_project.php
Superstring theory is a theory in theoretical physics that seeks to reconcile quantum mechanics with the theories of relativity, especially in the case of gravitation. The attempt to unite these two disparate disciplines of physics has been a sort of Holy Grail that eluded even Einstein and still baffles scientists to this day. To deal with this, a series of theories called string theories eventually arose in an attempt to achieve a comprehensive explanation of the universe's various interactions and why they occur. However, there were simply too many string theories to settle on a conclusive picture. To achieve one, many top physicists have attempted to create a superstring theory.
To understand what it all means you need to know what question string theory seeks to explain. The problem is that gravitation is a fundamental force that has two separate explanations depending on whether you use relativity or quantum mechanics. In quantum mechanics the explanation is that gravity may be the result of a fundamental particle that has not been discovered yet. This theoretical particle is known as a graviton. However the explanation of gravity is quite different when you use relativity. According to Einstein’s Theories of Relativity, gravity is actually a curve in the fabric of space time created by the mass of objects.
The goal of superstring theory was to create a framework in which the two conflicting explanations can coexist. The first step was the idea that fundamental particles are not the most basic building blocks of matter. Instead, all matter is made up of strings: vibrating pieces of energy. This is all within the framework of a universe that has 10 to 11 dimensions. The first four dimensions are length, width, height, and time. These are the four dimensions that we experience every day. However, two more are derivatives of time and the remainder deal with space. So in essence all dimensions are expressions of the simple strings that are the basic building blocks.
The main problem facing superstring theory is that strings have been proposed to act in at least five different ways, and these different theories seemingly conflict with one another. One attempt at uniting them is M-theory. However, many of these theories are still only conjecture. The biggest problem is that superstring theory remains theoretical: little physical evidence has been found to support it.
We’ve also recorded an entire episode of Astronomy Cast all about the String Theory. Listen here, Episode 31: String Theory, Time Travel, White Holes, Warp Speed, Multiple Dimensions and Before the Big Bang. | <urn:uuid:08baa1ae-08e7-48b1-8b61-62ae8fdfa752> | 3.78125 | 515 | Truncated | Science & Tech. | 42.606222 |
Scientists in the US have demonstrated a new technique for generating photons for use in optical quantum information processing: using a laser to excite a single photon from a cloud of rubidium gas. The technique, developed at the Georgia Institute of Technology, exploits the properties of an atom in which one or more electrons have been excited to near-ionisation energy levels, the so-called Rydberg state.
Qubits and Pieces
News from the frontline of the weird and wonderful world of quantum computing. From the theoretical musings of solid state physicists to breakthroughs you might actually see in a data centre in your lifetime, we'll be keeping an eye on stuff that matters in materials science, including graphene, condensed matter, diamonds and so on. And last, but by no means least, we'll be tracking the spin on spintronics. Just don't mention room temperature.
Lucy Sherriff is a journalist, science geek and general liker of all things techie and clever. In a previous life she put her physics degree to moderately good use by writing about science for that other tech website, The Register. After a bit of a break, it seemed like a good time to start blogging about weird quantum stuff for ZDNet. And so here we are.
Researchers at the Max Planck Institute of Quantum Optics (MPQ) are claiming a world first with a demonstration of a quantum switching network. The Institute reports data being exchanged successfully "with high efficiency and fidelity" between two quantum nodes installed in two separate labs, connected by a 60-metre long optical fibre.
Diamonds are forever in the movies, and now they are making a stab at eternity in quantum computing. An international group of scientists has built a working quantum computer inside a diamond and, for the first time, has included protection against decoherence.
Researchers at Ruhr-Universitat Bochum report the creation of electron qubits in semiconductors. So far, the team says, electron qubits have all been created in a vacuum, so this development really does look like a next step on the oft-mentioned road to quantum computing.
As if its list of properties was not already impressive enough, materials scientists working with sophisticated computer models at Stanford University have added another useful trick to graphene's repertoire: they have made it piezoelectric. "We thought the piezoelectric effect would be present, but relatively small."
Graphene may be a wonder material, poised to revolutionise the electronics industry; with applications far beyond the humble CPU. But it isn’t magic.
Scientists at UCLA have put a Lightscribe DVD optical drive to work in their graphene research, and have used them to produce a graphene-based electrochemical supercapacitor that could make itself very useful in a world ever more dependent on battery power. In a paper published in the March 16 edition of the journal Science, the researchers explain that electrochemical capacitors have attracted a lot of interest because they can be charged and discharged much faster than traditional batteries.
Researchers at Berkeley Lab have discovered that they can control the Curie temperature, and hence the magnetism of the semiconductor gallium manganese arsenide (GaMnAs). The breakthrough settles a long running controversy over the usefulness of the material in the emerging field of spintronics.
IBM researchers will announce at the annual meeting of the American Physical Society in Boston today that they have established three new records for error correction in quantum computing. In a paper submitted on Feb 23rd for the conference, the researchers report a 95 per cent success rate with a two-qubit CNOT operation.
It has been a good week for quantum computing. Scientists in Australia announced that they have successfully built a single atom transistor, and researchers writing in Nature, have demonstrated an error correction technique that could make quantum computers more reliable. | <urn:uuid:5f91c5db-3a8d-420b-b269-ad9a186e8e4a> | 3.328125 | 793 | Content Listing | Science & Tech. | 31.277816 |
Assassin bugs (Reduviidae) belong to the Hemipteran order, sometimes referred to as “true bugs.” Hemipterans also include aphids, leafhoppers, and cicadas. Like all Hemipterans, assassin bugs feed using a specialized proboscis, called a rostrum. However, unlike their vegetarian, sap-sucking cousins, assassins use their rostrum for extracting fluids from living prey.
Though most assassin bugs feed on other insects and arachnids, some species prey on mammals such as bats (video) and humans. In fact, certain species of assassins are the primary vector of Chagas disease in humans. Assassin bugs are evolutionarily specialized for their predatory lifestyles in an astonishing number of ways:
- Stylets and digestive venom: Integrated with the rostrum, assassin bugs have specialized serrated stylets for tearing into crevices in animal tissue. Once inside, the assassin injects a digestive saliva that breaks down the unlucky prey's innards into a nutrient-rich slurry, which the assassin then sucks back up through the stylet.
- Raptorial forelegs: Thread-legged assassin bugs (Emesinae) have beefed-up forelegs equipped with sharp spines for grabbing, impaling, or pinning their prey. Liken this adaptation to the forelegs of praying mantids and “spearer” mantis shrimp.
- Sticky hairs: Assassin bugs of the genus Zelus also use their forelegs to capture prey. However, instead of sharp spines, they use fine hairs coated in a glue like substance. It is not known if the bugs produce their own glue, or if they obtain it from plant sap. Regardless, they use it to ambush and immobilize prey as they begin to liquidate their insides.
- Lure signals: Feather-legged assassin bugs (Holoptilinae) lure ants to their doom with visual signals and pheromones produced in a special organ, called a trichome. Read more at Myrmician.
The specialized foreleg of thread-legged assassin bugs (Redei, 2007; artour_a).
Zelus longipes has fine hairs (electron micrograph, right panel) on its forelegs which are coated with a viscous, glue-like material. This is used to immobilize prey. (Photo by: Chuck Ulmer; SEM image adapted from Werner and Reid, 2001)
Wow, I really got sidetracked in that introduction. I had no idea how awesome assassin bugs were. Every bit of research I completed for this post led me to another exciting factoid.
Regardless, I need to get to the point of this post, which is a new paper about some disturbingly sinister predatory tactics in the assassin bug species Stenolemus bituberus. This assassin bug has its work cut out for it, as it predominantly stalks some truly dangerous quarry: arachnids. The assassin bug faces the challenge of obtaining an advantageous position on the spider, from which it can launch a swift, fatal strike. The researchers found that these sneaky assassins use more than one technique to outsmart and turn the tables on their cunning prey.
The Australian based researchers placed assassin bugs on the webs of five species of spider. Through tedious observation they discovered that S. bituberus uses two contrasting methods to get the drop on web-building spiders. Both methods involve manipulation of the spider’s own web.
First, in a stalking behavior, the assassin sneaks up on the spider. In order to accomplish this, the bug walks over the web with an irregular pattern of footsteps. The spider does not notice arrhythmic motions, and the assassin is able to get within striking distance. This technique is also used by web-invading jumping spiders (and is useful when you want to avoid drawing the attention of colossal sandworms while crossing the deserts of Arrakis). In addition, the assassin bugs also make use of natural “smokescreens” such as strong gusts of wind on the webs in order to advance on the unwitting spiders.
The researchers also noted a second predatory behavior in which the assassin bugs bait and lure the spiders. They accomplish this by plucking the web in such a way as to mimic the struggles of a helplessly trapped insect. When the spider comes over to inspect and process its captive, it instead gets a carapace-full of rostrum, as the assassin bug pounces on it. Also, considering the ant-luring techniques of the feather-legged assassins described above, one must wonder if chemical attractants are involved in this case as well. Watch a video of the luring and striking behavior, here.
As an interesting aside, the researchers also noted that the assassin bugs habitually tapped the spiders with their antennae just prior to the strike. This behavior is seen in other predatory arthropods, however its purpose is not clear. It is possible that the assassin bug is getting last-second distance, orientation, and identity information about the spider before launching its attack. Another possibility is that the assassin bug is hypnotizing the spider, habituating it to stimuli, so that it is less likely to respond violently when the assassin strikes for real.
Damn, these bugs are awesome.
Via, New Scientist. | <urn:uuid:cfdc6a54-0fc5-486a-92bc-4f8615784fac> | 3.359375 | 1,119 | Personal Blog | Science & Tech. | 39.328582 |
Document: Diagram of seismic wave movement
There are two types of waves that can be used to study the Earth's interior:
- P-waves are Primary or Compressional waves. These faster waves travel at 5 km/s at the top of the crust, 8 km/s at the top of the mantle, and 14 km/s at the bottom of the mantle. They can travel through liquids and so can pass through Earth's liquid outer core.
- S-waves are Shear or Transverse waves. Slower than P-waves, they cannot travel through liquids and so do not pass through the Earth's core.
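One practical consequence of the speed difference is the S-minus-P arrival lag that seismologists use to estimate distance to an earthquake. A minimal sketch, assuming the top-of-mantle P speed of 8 km/s from the list above and the common Poisson-solid approximation vs ≈ vp/√3 (the S-wave speed is not given in this document):

```python
import math

vp = 8.0                 # P-wave speed at the top of the mantle, km/s (from the text)
vs = vp / math.sqrt(3)   # assumed S-wave speed (Poisson-solid approximation)

def s_minus_p_lag(distance_km):
    """Seconds between P and S arrivals for a straight-line path."""
    return distance_km / vs - distance_km / vp

# For a station 100 km from the source, the S wave lags by roughly 9 seconds.
lag = s_minus_p_lag(100.0)
print(round(lag, 1))
```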
- File type:
- File size: 25 kB
- © Australian Museum | <urn:uuid:2a0a22b8-1f00-445f-90a4-d8cc203549e1> | 3.984375 | 150 | Truncated | Science & Tech. | 73.026324 |