Who can we blame for all this?
Molecular manufacturing enthusiasts trace the idea to 1959 and Nobel laureate Richard Feynman's famous lecture "There's Plenty of Room at the Bottom". In it, he suggested that it ought to be possible to rearrange atoms "the way we want…all the way down." Far enough down, he said, "all of our devices can be mass produced so that they are absolutely perfect copies of one another." (This idea raises the possibility of hardware-sharing wars far worse than today's copyright battles.) "The principles of physics, as far as I can see, do not speak against the possibility of maneuvering things atom by atom." The problem: our fingers are too big.
The next step didn't come until the 1980s: Eric Drexler, in his popular 1986 book Engines of Creation, posited the idea of general-purpose molecular assemblers and also the problem of "grey goo" which, Phoenix says ruefully, "is still haunting the industry". If a really small manufacturing machine escaped and fed off the biosphere, sucking up chemicals it wasn't originally designed for, it could turn the world into an amorphous grey mass. Drexler was thinking in biological terms: bacteria are very inventive, and an invasive species becomes more so if it has no predators. Drexler went on to write the more technical Nanosystems in 1992, though "it was ignored outside of the community".
The word nanotechnology was co-opted to describe nanometer-scale polymer science and other areas. Meanwhile, thinkers like the science fiction writer Vernor Vinge and Ray Kurzweil surmised that humans and artificial intelligence would merge to become something beyond our current comprehension, on the other side of a moment Vinge dubbed the Singularity.
By the mid-1990s people were talking about nanomedicine. Still, in 1997, when James Von Ehr, CEO and founder of Zyvex, used some of the $100 million he made from selling a company to Macromedia to found a nanotechnology company, a professor he consulted burst out laughing. But by 2000 nanotechnology, in its less far-out meaning, was becoming mainstream. The US government allocated $1 billion a year for research under the National Nanotechnology Initiative, none of it for molecular manufacturing. (The EU also has a plan.) And then a sort of disaster struck: in 2000 Sun Microsystems' Bill Joy published "Why the Future Doesn't Need Us" in Wired, in which he suggested that molecular manufacturing could destroy the world and should not be invented. It's only now, says Phoenix, that the influence of that article is passing enough for people to be able to admit again that they're interested in researching this field.
Building utopia, atom by atom
Let's say molecular manufacturing is going to happen. When? And with what consequences? Phoenix thinks we could have nanofactories by 2022, leading, over the next five to seven years, to a brain-machine interface and, given the raw materials, planet-scale engineering. Sooner, he thinks, is better: if it's delayed until after 2025 the related technologies could be so powerful the whole thing will hit like a tidal wave.
Brian Wang, a futurist and member of the CRN taskforce, has a more detailed set of economic projections, as Moore's Law accelerates and extends outside computing and China's economy passes that of the US (which he dates to 2018, plus or minus three years). Wang puts the development of molecular manufacturing at 2015, despite roadblocks in the form of energy (which he thinks will take decades to solve) and conquering space (still hard). But a 1 kg nanofactory could, if supplied with enough feedstock and energy, make 4,000 tons of nanofactories and 8,000 tons of products in a single day – making it possible to replace or upgrade more than our current production capability in weeks to months. It will bring with it long-term acceleration of economic growth: wealth for all.
The Precautionary Principle
I don't know who this Marchant is, but I must say that I agree with him on that point. If we were doing biological weapons research like we do GM crop "research", then we'd all be dead.
I fail to understand how, in the name of Heaven, anyone was able to get a license to do open-field GM crops before all lab-environment tests were concluded. Well, actually I understand very well how this happened: Monsanto poured a few billion dollars into the right ears and all went the way they wanted.
Thanks to that disastrous decision, we now have GM crops cross-pollinated with non-GM crops, and God only knows what is going to happen tomorrow.
This GM stuff should have been grown under vast, environmentally-sealed white domes, just like in the X-Files film. Any precaution-taking at this point is closing the barn door after the horse has bolted.
Call to the helpdesk…
Embarrassed user: "Errrr, I think I've just dissolved London…"
Nanotechnology helpdesk: "OK, had you made a backup?"
Imagine the potential size of a PEBKAC involving nanomanufacturing... Before you know it you could have several tons of self-replicating nanomachines wreaking havoc in the area...
Each of the two atomic bomb blasts was thought to have been approximately equivalent to the blast produced by 20,000 tons of TNT (The Manhattan Engineer District, 1946).
Figures 4 and 5 are photos depicting the “mushroom clouds” that formed as the bombs were dropped on Hiroshima and Nagasaki respectively.
According to the Atomic Bomb Museum (2006), there were three main forms of energy released as a result of the nuclear bombs dropped over Hiroshima and Nagasaki:
1. Fireball (heat)
2. Shock wave and air blast (accounted for 50% of the energy)
3. Radiation
Figure 6 is a graphical representation of the energy released from the bomb explosions.
Directly beneath the point where the bomb detonated (the hypocenter), it has been estimated that the ground temperature reached approximately 7,000 degrees F (Atomic Bomb Museum, 2006).
The explosions created areas of extremely high pressure which resulted in winds in excess of 980 mph at the hypocenters. The pressures created were approximately 8,600 pounds per square foot. From the hypocenters out to approximately 1/3 of a mile, most substantial concrete buildings were obliterated. Even a mile from the hypocenter, all brick buildings were destroyed as the wind velocity in these areas reached 190 mph and pressure was approximately 1,180 pounds per square foot (Atomic Bomb Museum, 2006).
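Pounds per square foot is an unusual pressure unit today; a small conversion sketch (my own arithmetic, not from the Atomic Bomb Museum source) puts the quoted figures in more familiar units:

```python
# Convert the quoted overpressures from pounds per square foot (psf)
# to pounds per square inch and kilopascals.
# Conversion factors: 1 psf = 1/144 psi = 47.8803 Pa.
PSF_TO_PSI = 1.0 / 144.0
PSF_TO_PA = 47.8803

def convert(psf):
    """Return the pressure as a (psi, kPa) pair."""
    return psf * PSF_TO_PSI, psf * PSF_TO_PA / 1000.0

hypocenter_psi, hypocenter_kpa = convert(8600)  # ~59.7 psi, ~412 kPa
one_mile_psi, one_mile_kpa = convert(1180)      # ~8.2 psi, ~56.5 kPa
```

For scale, ordinary atmospheric pressure is about 2,116 psf (14.7 psi), so the pressure at the hypocenter was roughly four atmospheres of overpressure.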
Alpha, beta, gamma, and neutron rays were generated by the nuclear bombs, with the gamma and neutron rays doing the most immediate damage and causing most of the early radiation deaths.
Within about 1/16 mile in all directions from the hypocenter, most people died within a few hours.
Those located within 1/2 mile of the hypocenter died within 30 days (Atomic Bomb Museum, 2006).
Global names are used to denote value variables, value constructors (constant or non-constant), type constructors, and record labels. Internally, a global name consists of two parts: the name of the defining module (the module name), and the name of the global inside that module (the local name). The two parts of the name must be valid identifiers. Externally, global names have the following syntax:
global-name: ident | ident __ ident

The form ident __ ident is called a qualified name. The first identifier is the module name, the second identifier is the local name. The form ident is called an unqualified name. The identifier is the local name; the module name is omitted. The compiler infers this module name following the completion rules given below, therefore transforming the unqualified name into a full global name.
To complete an unqualified identifier, the compiler checks a list of modules, the opened modules, to see if they define a global with the same local name as the unqualified identifier. When one is found, the identifier is completed into the full name of that global. That is, the compiler takes as module name the name of an opened module that defines a global with the same local name as the unqualified identifier. If several modules satisfy this condition, the one that comes first in the list of opened modules is selected.
The list of opened modules always includes the module currently being compiled (checked first). (In the case of a toplevel-based implementation, this is the module where all toplevel definitions are entered.) It also includes a number of standard library modules that provide the initial environment (checked last). In addition, the #open and #close directives can be used to add or remove modules from that list. The modules added with #open are checked after the module currently being compiled, but before the initial standard library modules.
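As an illustrative sketch of completion (my example, not from the manual; it assumes the standard library's list module defines a map function), the same call can be written with an unqualified or a qualified name:

```caml
#open "list";;

(* Unqualified name: the compiler completes map into list__map,     *)
(* since list is the first opened module defining a map global.     *)
map (function x -> x * x) [1; 2; 3];;

(* The same call written with an explicit qualified name. *)
list__map (function x -> x * x) [1; 2; 3];;
```

If another opened module also defined map, the earlier module in the list of opened modules would win, as described above.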
variable: global-name | prefix operator
cconstr: global-name | [] | ()
ncconstr: global-name | prefix ::
typeconstr: global-name
label: global-name
Depending on the context, global names can stand for global variables (variable), constant value constructors (cconstr), non-constant value constructors (ncconstr), type constructors (typeconstr), or record labels (label). For variables and value constructors, special names built with the prefix keyword and an operator name are recognized. The tokens [] and () are also recognized as built-in constant constructors (the empty list and the unit value).
The syntax of the language restricts labels and type constructors to appear in certain positions, where no other kind of global names are accepted. Hence labels and type constructors have their own name spaces. Value constructors and value variables live in the same name space: a global name in value position is interpreted as a value constructor if it appears in the scope of a type declaration defining that constructor; otherwise, the global name is taken to be a value variable. For value constructors, the type declaration determines whether a constructor is constant or not.
The Acoustic Doppler Current Profiler (ADCP) measures the speed and direction of ocean currents using the principle of “Doppler shift”. Anyone who has ever heard a train whistle is familiar with the Doppler effect. When the train is traveling towards you, the whistle’s pitch is higher. When it is moving away from you, the pitch is lower. The change in pitch is proportional to the speed of the train. The ADCP exploits the Doppler effect by emitting a sequence of high frequency pulses of sound that scatter off of moving particles in the water. Depending on whether the particles are moving toward or away from the sound source, the frequency, or pitch, of the return signal bounced back to the ADCP is either higher or lower. Particles moving away from the instrument produce a lower frequency return and vice versa. Since the particles move at the same speed as the water that carries them, the frequency shift is proportional to the speed of the water, or current. The ADCP has 4 acoustic transducers that emit and receive acoustical pulses from 4 different directions. Current direction is computed by using trigonometric relations to convert the return signal from the 4 transducers to ‘earth’ coordinates (north-south, east-west and up-down). Because the emitted sound extends from the ship down to the bottom of the ocean, the ADCP measures the current at many different depths simultaneously. This way, it is possible to determine the speed and direction of the current from the surface of the ocean to the bottom.
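The Doppler relation described above can be sketched in a few lines (an illustration of the physics only, not the instrument's actual signal processing; the 1500 m/s sound speed is an assumed typical value for seawater):

```python
# Radial water velocity from the Doppler shift of a backscattered pulse.
# The factor of 2 appears because the sound travels out to the particles
# and back to the transducer.
SPEED_OF_SOUND_WATER = 1500.0  # m/s, a typical value for seawater

def radial_velocity(f_emitted_hz, f_returned_hz):
    """Positive result: scatterers (and water) moving toward the transducer."""
    doppler_shift = f_returned_hz - f_emitted_hz
    return SPEED_OF_SOUND_WATER * doppler_shift / (2.0 * f_emitted_hz)

# A 75 kHz pulse returning 100 Hz higher implies ~1 m/s toward the ADCP.
v = radial_velocity(75_000.0, 75_100.0)
```

Combining such radial velocities from the four transducer beams, via the trigonometric conversion described above, yields the full current vector.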
A diver deploying a bottom-mounted acoustic Doppler current profiler to investigate the hydrodynamics of coral reef systems.
Measuring currents is a fundamental practice of physical oceanographers. By determining how ocean waters move, scientists can determine how organisms, nutrients and other biological and chemical constituents are transported throughout the ocean. Ocean waters have varied temperatures and in places like the warm Gulf Stream the movement of water means the movement of heat. Heat transport in the ocean is a critical component of the global heat budget and, therefore, it contributes to global climate change. Because of its high-resolution and ability to sample deep within the ocean interior, the ADCP is an efficient tool for sampling a large section of the ocean in a limited amount of time.
On large research vessels the ADCP is permanently mounted on the bottom of the ship’s outer hull. A typical unit (75 kHz) is powerful enough to sample waters as deep as 700 m (about 2,300 ft). During operation, the ADCP is sending out and receiving several acoustic pulses every second. An on-board computer processes the returned signal, and a real-time display of the magnitude and direction of the current throughout the water column is produced on the computer monitor. This way, scientists can observe the changing ocean current structure nearly continuously while the ship is in motion. The data are stored on CDs so that scientists can conduct a thorough analysis when they return to their laboratory after the cruise. ADCP technology is very robust and the system requires little technical support or training to operate.
SIAM named the following as the most important algorithms of the 20th century:
1946: The Metropolis Algorithm for Monte Carlo. Through the use of random processes, this algorithm offers an efficient way to stumble toward answers to problems that are too complicated to solve exactly.
1947: Simplex Method for Linear Programming. An elegant solution to a common problem in planning and decision-making.
1950: Krylov Subspace Iteration Method. A technique for rapidly solving the linear equations that abound in scientific computation.
1951: The Decompositional Approach to Matrix Computations. A suite of techniques for numerical linear algebra.
1957: The Fortran Optimizing Compiler. Turns high-level code into efficient computer-readable code.
1959: QR Algorithm for Computing Eigenvalues. Another crucial matrix operation made swift and practical.
1962: Quicksort Algorithms for Sorting. For the efficient handling of large databases.
1965: Fast Fourier Transform. Perhaps the most ubiquitous algorithm in use today, it breaks down waveforms (like sound) into periodic components.
1977: Integer Relation Detection. A fast method for spotting simple equations satisfied by collections of seemingly unrelated numbers.
1987: Fast Multipole Method. A breakthrough in dealing with the complexity of n-body calculations, applied in problems ranging from celestial mechanics to protein folding.
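As a concrete taste of one entry above, here is a minimal quicksort (my sketch, favoring clarity; Hoare's practical versions partition in place rather than building new lists):

```python
# Functional quicksort: pick a pivot, partition the rest into the
# elements smaller than it and the elements at least as large, and
# recurse on each side.
def quicksort(items):
    if len(items) <= 1:
        return list(items)
    pivot, *rest = items
    smaller = [x for x in rest if x < pivot]
    larger = [x for x in rest if x >= pivot]
    return quicksort(smaller) + [pivot] + quicksort(larger)

print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))  # → [1, 1, 2, 3, 4, 5, 6, 9]
```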
Personally I would replace Integer Relation Detection with PageRank.
Lua is an extension programming language designed to support general procedural programming with data description facilities. It also offers good support for object-oriented programming, functional programming, and data-driven programming. Lua is intended to be used as a powerful, light-weight scripting language for any program that needs one. Lua is implemented as a library, written in clean C (that is, in the common subset of ANSI C and C++).
Being an extension language, Lua has no notion of a “main” program: it only works embedded in a host client, called the embedding program or simply the host. This host program can invoke functions to execute a piece of Lua code, can write and read Lua variables, and can register C functions to be called by Lua code. Through the use of C functions, Lua can be augmented to cope with a wide range of different domains, thus creating customized programming languages sharing a syntactical framework. The Lua distribution includes a sample host program called lua, which uses the Lua library to offer a complete, stand-alone Lua interpreter.
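A minimal sketch of such a host program (my illustration, not the bundled lua interpreter; it must be linked against the Lua library, e.g. with -llua, to build, and uses only the standard C API):

```c
/* Minimal host: create a Lua state, run a chunk of Lua code,
 * and read a Lua global variable back from C. */
#include <stdio.h>
#include <lua.h>
#include <lauxlib.h>
#include <lualib.h>

int main(void) {
    lua_State *L = luaL_newstate();   /* create a new Lua state */
    luaL_openlibs(L);                 /* load the standard libraries */

    if (luaL_dostring(L, "answer = 6 * 7") != 0) {
        fprintf(stderr, "Lua error: %s\n", lua_tostring(L, -1));
        lua_close(L);
        return 1;
    }
    lua_getglobal(L, "answer");       /* push the Lua global onto the stack */
    printf("answer = %d\n", (int)lua_tointeger(L, -1));

    lua_close(L);
    return 0;
}
```

Registering C functions for Lua to call goes the other way, via lua_register or the luaL_Reg tables described in the reference manual.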
The primary web site for Lua is www.lua.org; the reference manual is available from that site. Roberto Ierusalimschy, the primary designer of Lua, has written the definitive book about Lua, Programming in Lua; an earlier but still useful version of the book is available at http://www.lua.org/pil/. Another book, Lua Programming Gems, edited by Luiz Henrique de Figueiredo, Waldemar Celes, and Roberto Ierusalimschy, provides a useful collection of articles on programming with Lua.
There are several other web sites of interest to those who program with Lua. Lua-users provides a wiki with community-maintained information and resources. LuaForge hosts projects written in Lua. The Kepler Project provides an open-source web development platform written in Lua. Metalua gives Lua a macro system similar to Lisp. Lua has an active mailing list that is one of the focal points of the Lua community.
strchr - string scanning operation
char *strchr(const char *s, int c);
[CX] The functionality described on this reference page is aligned with the ISO C standard. Any conflict between the requirements described here and the ISO C standard is unintentional. This volume of IEEE Std 1003.1-2001 defers to the ISO C standard.
The strchr() function shall locate the first occurrence of c (converted to a char) in the string pointed to by s. The terminating null byte is considered to be part of the string.
Upon completion, strchr() shall return a pointer to the byte, or a null pointer if the byte was not found.
No errors are defined.
strrchr(), the Base Definitions volume of IEEE Std 1003.1-2001, <string.h>
First released in Issue 1. Derived from Issue 1 of the SVID.
Extensions beyond the ISO C standard are marked.
Secrecy and competition to achieve breakthroughs have been part of scientific culture for centuries, but the latest Internet advances are forcing a tortured openness throughout the halls of science and raising questions about how research will be done in the future.
The openness at the technological and cultural heart of the Internet is fast becoming an irreplaceable tool for many scientists, especially biologists, chemists and physicists — allowing them to forgo the long wait to publish in a print journal and instead to blog about early findings and even post their data and lab notes online. The result: Science is moving way faster and more people are part of the dialogue.
The open science approach forces researchers to grapple with the question of whether they can still get sufficient credit for their ideas, said physicist Sabine Hossenfelder, co-organizer of a conference on the topic set to begin Sept. 8 at the Perimeter Institute in Ontario, Canada.
[BTW: I Will Be Attending This Unique Conference Science in the 21st Century: Science, Society, and Information Technology [http://tinyurl.com/6ll8fb] / Look For Conference-Related Postings on the _Scholarship 2.0_ Blog [http://scholarship20.blogspot.com/] within the next two weeks]
Open science is a shorthand for technological tools, many of which are Web-based, that help scientists communicate about their findings. At its most radical, the ethos could be described as "no insider information." Information available to researchers, as far as possible, is made available to absolutely everyone.
Beyond email, teleconferencing and search engines, there are many examples: blogs where scientists can correspond casually about their work long before it is published in a journal; social networks that are scientist friendly such as Laboratree and Ologeez; GoogleDocs and wikis which make it easy for people to collaborate via the Web on single documents; a site called Connotea that allows scientists to share bookmarks for research papers; sites like Arxiv, where physicists post their "pre-print" research papers before they are published in a print journal; OpenWetWare which allows scientists to post and share new innovations in lab techniques; the Journal of Visualized Experiments, an open-access site where you can see videos of how research teams do their work; GenBank, an online searchable database for DNA sequences; Science Commons, a non-profit project at MIT to make research more efficient via the Web, such as enabling easy online ordering of lab materials referenced in journal articles; virtual conferences; online open-access (and free) journals like Public Library of Science (PLoS); and open-source software that can often be downloaded free off Web sites.
[BTW: Several Of These Innovations Have Been Profiled In My SciTechNet(sm) Blog [http://scitechnet.blogspot.com/] and/or The Scholarship 2.0 Blog [http://scholarship20.blogspot.com/]]
The upshot: Science is no longer under lock and key, trickling out as it used to at the discretion of laconic professors and tense PR offices. For some scientists, secrets no longer serve them. But not everyone agrees.
Just a few decades ago, as a scientist, here is how you did your work: You toiled in obscurity and relative solitude.
However, today, more and more scientists, as well as researchers in the humanities, operate like transparent, networked cyborgs. Background research is mostly done online, not in the library. Some data and preliminary research might be posted online via a blog or open notebook. Early write-ups of the work might be announced to the public, or at least discussed online with peers. And these early write-ups might also be posted to an online publication that is not peer-reviewed in the strict sense.
"In areas like my own subfields of theoretical physics," said MIT physicist David Kaiser, "the only constraint [on how rapidly one generates research papers] is, 'Did you have more coffee that day?' We aren't usually held up trying to get an instrument to work, or slogging through complicated data analysis." Most people think faster is better, but there are other issues.
Is It A Good Thing?
There is "no question" that all efforts to make science more open are positive for the progress of science, says open science proponent and chemist Jean-Claude Bradley at Drexel University in Philadelphia, who posts his lab notebook online and started a blog in 2005 called UsefulChemistry where he and his colleagues regularly discuss chemistry problems as well as Web 2.0 tools and the technical and philosophical issues they raise.

His online notebook and blog definitely make it easier to communicate with colleagues, he said. Such sharing also makes it easier for others to "replicate" scientists' work — try it themselves and convince themselves that you are right. And this replication issue is one of the principles behind scientific research. Anyone who has written down a recipe for a friend knows that we all tend to spell things out more clearly when sharing them than we would if we were just taking notes for ourselves in our own shorthand.
Open science also has the potential to prevent discrimination in access to information. Arxiv, the site for posting pre-print physics papers, was started in 1991 by Cornell physicist Paul Ginsparg, then at Los Alamos National Laboratory, to help provide equal access to prepublication information to graduate students, postdocs and researchers in developing countries.
[BTW: Paul Ginsparg will be one of several Major Players attending/presenting at The Conference [http://science21stcentury.org/abstracts.html]]
And open science benefits the public, Bradley said. He tries to keep his posts fairly accessible (although this is not the case for all open notebooks and open science blogs).
"It's not clear to me that professional scientists or people in academic institutions have a monopoly on good ideas," he said. "There are very smart people outside of academia, for example hobbyists or people in industry who could contribute, and having more contributors can only help. The same applies to interdisciplinary and cross-disciplinary approaches."
Drawbacks of Open Science
One of the biggest fears of nearly all researchers is that someone else hears what you're doing and beats you to publication. That means you wasted a lot of time (and most researchers work extremely long hours, so loss of productivity is especially painful and can also harm one's chances for getting a job or promotion or funding for the next research project). Once you publicly reveal your thoughts, data or experimental results, some say, you lose control over ownership of that information. This topic is covered by an area of law called intellectual property, as well as patent law, and there can be significant money to be fought over when it comes to patents.
Hossenfelder, the conference organizer, says she knows of several examples in which scientists have had an idea for something, talked about it openly and then somebody else has published the fleshed-out idea first without giving any credit beyond an acknowledgment to the original idea-holder. Acknowledgments don't advance careers.
However there are solutions to this, she said. For instance, the prominent scientific journal Nature encourages authors to include brief summaries of which author contributed what to a project. Some say that online posts provide a time-stamped record of when an experiment was documented. Those stamps can easily be arbitrarily altered after the fact, but it might also be possible to "lock" posts at a certain date after which they could not be changed without some sign-off permission to break the lock, Hossenfelder said. [snip]
Fear of Losing Peer Review
Another drawback of open science can be that results go public before they should. In science, experimental results are frequently proven wrong by subsequent work. Yet even peer review cannot ensure against this, nor can it prevent outright fraud, as proven by a 2005 case involving a South Korean scientist who claimed to have achieved the first cloning of a human embryo. A later examination of his work showed he had fabricated his results.
"The social system of science has become so complicated, unregulated and dispersed in terms of geography and disciplines, so peer review has been elevated to a principle that unifies a fragmented field," Biagioli said.
And today, Arxiv, one of the most frequently cited examples of open science, has no peer review for individual papers, but it has begun to add in some constraints on allowable authors. The site used to allow anyone with email addresses associated with academic institutions to post their papers. Now, authors of research papers who post in Arxiv are vetted before they can post for the first time. In some ways, things are tightening up when it comes to openness in physics, Kaiser said. In any case, the function of print journals, in physics at least, is changing.
"Ease of sharing everything prior to peer review is flourishing, and in my opinion very few physicists are reading journals for information these days," Kaiser said. "Journals have largely lost their information function."
For The Good Of Truth, Humanity, Economies?
Another argument in favor of open science is sort of a big picture issue for humanity, scientific truth and economies, Neylon said.
"Making things more open leads to more innovation and more economic activity, and so the technology that underlies the Web makes it possible to share in a way that was never really possible before, while at same time it also means that kinds of models and results generated are much more rich," he said.
This is the open source approach to software development, as opposed to commercial closed source approaches, Neylon said. The internals are protected by developers and lawyers, but the platform is available for the public to build on in very creative ways.
"Science was always about mashing up, taking one result and applying it to your [work] in a different way," Neylon said. "The question is 'Can we make that work as effectively for sample data and analysis as it does for a map and a set of addresses for a coffee shop?' That is the vision."
Thanks to Sabine Hossenfelder for the heads-up!
Fantastic shapes lurk in clouds of glowing hydrogen gas in NGC 6188. The emission nebula is found near the edge of a large molecular cloud, unseen at visible wavelengths, in the southern constellation Ara, about 4,000 light-years away. The massive young stars of the embedded Ara OB1 association were formed in that region only a few million years ago, sculpting the dark shapes and powering the nebular glow with stellar winds and intense ultraviolet radiation. The recent star formation itself was likely triggered by winds and supernova explosions, from previous generations of massive stars, that swept up and compressed the molecular gas. Joining NGC 6188 on this cosmic canvas is the rare emission nebula NGC 6164, also created by one of the region's massive O-type stars. Similar in appearance to many planetary nebulae, NGC 6164's striking, symmetric gaseous shroud and faint halo surround its bright central star at the upper right. The field of view spans about two full Moons, corresponding to 70 light-years at the estimated distance of NGC 6188.
Alaska’s Ice Drain 2004-07
Alaska has only about 5% of the total ice Greenland has — but from 2004 to 2007, Alaska shed an average of 80 billion metric tons per year, under half the amount that Greenland did. Put another way, relative to its own stock, Alaska is losing ice about eight times faster than Greenland.
This trend especially concerns scientists because meltwater and ice emptying into the oceans raise global sea level. Currently, sea level is rising at about 1.25 inches per decade, and researchers estimate that, globally, glaciers and ice caps — with Alaska the biggest single contributor — account for perhaps over 20% of this rate.
Why is Alaska losing ice? It appears linked in several ways to climate warming, which is strongest in the Arctic. First, surface melt of ice has been increasing. Second, much of the meltwater drains to the base of glaciers and then lubricates the glaciers’ flow toward the sea. And finally, where the glaciers plunge into the ocean, warmer water appears to be eroding the glacial tongues that help hold flow back.
How do we know Alaska has been losing ice? The best evidence comes from NASA’s GRACE satellite mission. GRACE has provided a direct measure of mass change through time, through its unique “scale in the sky” capabilities.
The calculations behind this graphic assume it is possible to pack 100 metric tons of ice per 60 feet of freight train length (the 60 feet composed of one car plus the space separating it on one side from the next car).
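That arithmetic can be reproduced directly (a sketch using only the assumptions stated above; the mileage figure is my own calculation, not from the graphic):

```python
# Length of a freight train needed to carry one year of Alaska's ice loss,
# assuming 100 metric tons of ice per 60 feet of train (car plus gap).
ICE_LOSS_TONS = 80e9   # Alaska's average annual loss, 2004-07
TONS_PER_CAR = 100.0
FEET_PER_CAR = 60.0
FEET_PER_MILE = 5280.0

cars = ICE_LOSS_TONS / TONS_PER_CAR
train_miles = cars * FEET_PER_CAR / FEET_PER_MILE
print(f"{train_miles:,.0f} miles of train")  # roughly 9.1 million miles
```

For scale, that is tens of times the distance from the Earth to the Moon, every year.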
GRACE: Launched in March 2002, NASA’s Gravity Recovery and Climate Experiment mission (GRACE) deploys two satellites that orbit the Earth in tandem. The pair measure the distance separating each other to an accuracy of 1% of the width of a human hair — and they orbit as far apart as Washington, DC and Philadelphia. Because each satellite accelerates or decelerates depending on the mass of the area beneath it (for example, a massive mountain range vs. flat lowlands), and because one satellite trails the other at some distance, the record of the shifting distance between them can be read like a giant planetary scale. And since they orbit over the same areas every ten days, the GRACE satellites provide a detailed record of mass changes in time, even tracking the seasonal accumulation and melting of Arctic snow.
See related graphic: Alaska’s Ice Loss.
THE BASIC FACTS and INFORMATION ABOUT SOLAR ENERGY
WHAT IS IT?
The sun shines brightly every day, bathing the world with light and heat. Besides these benefits, sunlight also carries energy, which can be captured and used for a variety of purposes. Before we go over practical solar energy facts, let's first quickly revisit the origin and properties of sunlight. Nuclear reactions within the Sun produce, among other things, electromagnetic radiation, which is emitted into space. This radiation has a dual nature, behaving as both particles and waves, and it transports energy through empty space. This is what is casually called solar energy. It can be converted directly into three usable forms of energy: electrical, thermal, and chemical. This page will deal only with the conversion of sunlight into electricity.
HOW MUCH SUNLIGHT DO WE GET?
Scientists estimate that the Sun radiates approximately 63 MW of power from each square meter of its surface. The intensity drops with the square of the distance from the Sun, since the sphere over which the emission spreads keeps expanding. The average radiation density at the top of Earth's atmosphere is about 46,000 times less. For reference, there is some dispute about its exact value: according to the National Institute of Standards and Technology (NIST), the currently accepted value is 1366 watt/sq.m, while NASA's SORCE measurements give 1361.
If we take into account the fact that only half the Earth is lit at one time, and divide the total irradiance intercepted by the Earth by its full surface area, simple math gives 1/4 of the above quantity, or approximately 341 watts/sq. meter. This is just a hypothetical average; only about half of this amount actually reaches our planet's surface.
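The "simple math" above can be sketched in a couple of lines: the Earth intercepts sunlight over its cross-sectional disk (area πR²) but spreads it over the full sphere (area 4πR²), hence the factor of four (a minimal sketch; the function name is mine):

```python
SOLAR_CONSTANT = 1366.0  # W/m^2 at the top of the atmosphere (NIST value quoted above)

def mean_toa_flux(solar_constant=SOLAR_CONSTANT):
    """Average flux at the top of the atmosphere: disk area / sphere area = 1/4."""
    return solar_constant / 4.0

print(mean_toa_flux())  # 341.5 W/m^2, matching the ~341 W figure above
```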
What I am going to tell you may sound a bit technical, but it is worth knowing if you are considering buying a home PV system and want to estimate how much electricity it can produce for you. Sunlight, of course, does not fall on the entire planet evenly; its actual intensity varies with time and latitude. Not surprisingly, the irradiance is greatest near the equator. Numerically, on a clear day at noon at sea level, it is close to 1000 W per square meter. The solar industry casually refers to this value as "standard sun", and it is customarily used in rating photovoltaic panels, although away from the equator you may rarely get this much sunlight. Note that "standard sun" is just the potential peak irradiance at noon; it is not the actual condition under which your photovoltaic system will be working during an entire day. So an important fact to remember is that the claimed wattage of a PV module is just the potential power it may generate under certain ideal conditions.

As the sun moves across the sky, the irradiance per unit area drops because the same wattage is spread over a larger area (see the solar energy diagram to the right, which illustrates why we need to tilt the panels). If we denote the density of radiant energy falling on a perpendicular surface by E, then the density on a horizontal surface will be E×cosZ, where Z is the zenith angle. The zenith angle, by definition, is the angle between a vertical line and the sun's position in the sky; it is a function of latitude, time of year, and time of day. Note that E<1000 W/m2 and cosZ<1.

Since the incident radiation varies due to many factors, a more useful characteristic of sunlight is the so-called net insolation (short for INcident SOLar radiATION). It is a measure of the total amount of solar energy received over an entire day by a unit area of a given surface. Insolation is usually expressed in kilowatt-hours per square meter or in equivalent "sun-hours" per day; both measures are numerically the same because one sun-hour is 1 kWh/m2. Since daily insolation varies with weather conditions, geographical location, and time of year, it is often averaged over a certain period, usually a month or a year. Various organizations maintain databases and maps of mean yearly insolation for most locations worldwide. In the US this value typically varies from 4-5 hours in the Northeast to 5-7 hours in the Southwest. If, for example, a particular area has a yearly average of 5 sun-hours per day, it receives a mean energy of 5 kWh/sq.m per day. Spread over a 24-hour period, this yields 5000/24 ≈ 208 W/sq.m. How much of this power can be transformed to electricity? The answer depends on the efficiency of the solar panels you are using. The best residential-grade models under optimal conditions have efficiency η<21%. Hence, in our example they would produce about 1 kWh/sq.m daily. If you could accumulate this energy in batteries and use it evenly over an entire day, you would get some 1000/24 ≈ 41.7 watt/sq.m (see our calculator for a detailed analysis).
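The worked example above (5 sun-hours per day, a 20%-efficient panel) can be reproduced with a short sketch; the function names are mine, not from any calculator referenced in the text:

```python
import math

def horizontal_irradiance(e_perp_w_m2, zenith_deg):
    """E x cos(Z): flux on a horizontal surface for zenith angle Z (degrees)."""
    return e_perp_w_m2 * math.cos(math.radians(zenith_deg))

def daily_pv_energy_kwh_m2(sun_hours, efficiency):
    """Daily DC energy per square meter: insolation (sun-hours = kWh/m2) x efficiency."""
    return sun_hours * efficiency

def average_power_w_m2(daily_kwh_m2):
    """Daily energy spread evenly over 24 hours, expressed in W/m2."""
    return daily_kwh_m2 * 1000.0 / 24.0

daily = daily_pv_energy_kwh_m2(5.0, 0.20)          # 1.0 kWh/m2 per day
print(daily)                                        # 1.0
print(round(average_power_w_m2(daily), 1))          # 41.7 W/m2
print(round(horizontal_irradiance(1000.0, 60.0)))   # 500 W/m2 at Z = 60 degrees
```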
PRACTICAL NUMBERS ON SOLAR ELECTRICITY
Here is a quick summary of the facts
and definitions related to solar electricity generation.
- Irradiance (average power measured at the top of Earth’s atmosphere, perpendicular to the sun’s rays): 1,366 watts per square meter;
- Peak solar power flux at noon on a perpendicular surface: 1 kW/sq.m;
- Insolation (average net daily sunlight per square meter) at optimum panel tilt: 4 to 7 kWh, depending on your location;
- Total electric energy produced by a PV array over a day: Epv=Insolation×Efficiency/100 kWh/m2, where panel's efficiency is from 6% to 21%.
For example, for the cells with 20% efficiency, the daily yield would be 800 to 1400 watt-hours per sq.m.
- Solar power generated by PV panels averaged over a day: Ppv=Epv/24. For the most efficient models this yields 33 to 58 watt/m2.
Note that the above numbers are related just to DC output of PV panels. In a complete system there will be additional energy losses of 3 to 10% in the inverter and another 3-5% in the wiring. So, the resulting net amount of solar energy will be 6 to 15% lower.
- Net solar electricity generated over a day by a PV system per each kW of nameplate DC power: ≈0.8×Insolation, where 0.8 is a factor that accounts for losses in the inverter and wires. For example, in an area with insolation of 5 kWh/m2, a 5,000-watt system will produce 5×5×0.8=20 kWh over a day, which is just 20/24≈833 watts on average.
Note that 1 ft^2 = 0.0929 m^2, so the numbers per square foot are roughly 10 times smaller than the respective numbers per square meter. To convert watts to kilowatts, divide by 1,000.
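The bulleted formulas above can be checked with a quick sketch reproducing the 5 kW example (function names are illustrative):

```python
DERATE = 0.8  # combined inverter + wiring loss factor from the text

def daily_system_kwh(nameplate_kw, sun_hours, derate=DERATE):
    """Net AC energy per day: nameplate DC power x insolation x loss factor."""
    return nameplate_kw * sun_hours * derate

def average_watts(daily_kwh):
    """Daily energy spread evenly over 24 hours, in watts."""
    return daily_kwh * 1000.0 / 24.0

energy = daily_system_kwh(5.0, 5.0)          # 20.0 kWh for the 5 kW example
print(energy, round(average_watts(energy)))  # 20.0 833
```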
QUICK REFERENCE INFORMATION ABOUT SUN
The Sun, like probably all stars, consists mainly of hydrogen and helium. High pressure and temperature in the Sun's core cause hydrogen atoms to break up. Their nuclei then combine, forming helium nuclei. This process is called nuclear fusion. In this reaction, the resulting atoms have less internal energy than the starting particles. Since energy is conserved, the balance is released in the form of heat, photons, and other particles. The photons are constantly intercepted, absorbed, and re-emitted by surrounding molecules. Eventually they reach the surface and are emitted into outer space. This emission is referred to as radiated solar energy.
In conclusion, below is a brief summary of information about the Sun
Diameter: 1,392,000 km (863,040 miles);
Outside temperature: ~5,700 °C;
Average Earth-Sun Distance: 150 million km (93 million miles);
Content by mass: 74% Hydrogen, 25% Helium, 1% other;
Luminosity (total amount of power radiated in all directions): 3.85×10^26 watts (~385 billion billion megawatts);
Radiated power density at Sun's surface: 63,300 kW/m2
References and additional information:
US solar energy resource
Earth radiation balance
and photovoltaic basics
Kepler in Brief
A Nutshell Description of the Kepler Mission
The Kepler Mission is a NASA Discovery Program mission for detecting potentially life-supporting planets around other stars. All of the extrasolar planets detected so far by other projects are giant planets, mostly the size of Jupiter and bigger. Kepler is poised to find planets 30 to 600 times less massive than Jupiter.
How does Kepler find planets? By a method known as the transit method of planet finding. When we see a planet pass in front of its parent star, it blocks a small fraction of the light from that star. When that happens, we say that the planet is transiting the star. If we see repeated transits at regular times, we have discovered a planet! From the brightness change we can tell the planet's size. From the time between transits, we can tell the size of the planet's orbit and estimate the planet's temperature. These qualities determine the possibilities for life on the planet.
The Kepler satellite carries a 0.95-meter-diameter telescope, a photometer with a field of view a bit over 10 degrees square (an area of sky about the size of two open hands). It is designed to continuously and simultaneously monitor the brightnesses of 100,000 stars brighter than 14th magnitude in the constellations Cygnus and Lyra.
To detect an Earth-size planet, the photometer must be able to sense a drop in brightness of only 1/100 of a percent. This is akin to sensing the drop in brightness of a car's headlight when a fruit fly moves in front of it! The photometer must be space-based to achieve this precision.
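The 1/100-of-a-percent figure follows from simple geometry: the transit depth is the ratio of the planet's and star's disk areas. A hedged sketch (the radius values are standard figures, not from this page):

```python
R_EARTH_KM = 6371.0    # mean Earth radius
R_SUN_KM = 696_000.0   # solar radius (half the 1,392,000 km diameter)

def transit_depth_percent(r_planet_km, r_star_km):
    """Fractional dip in stellar brightness during a transit:
    (planet radius / star radius)^2, expressed as a percent."""
    return (r_planet_km / r_star_km) ** 2 * 100.0

# An Earth-size planet crossing a Sun-size star dims it by ~0.0084%,
# i.e. roughly 1/100 of a percent, as the text says.
print(round(transit_depth_percent(R_EARTH_KM, R_SUN_KM), 4))
```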
Kepler was launched in March 2009.
Pumping sulphur particles into the atmosphere to mimic the cooling effect of a large volcanic eruption has been proposed as a last-ditch solution to combating climate change - but doing so would cause problems of its own, including potentially catastrophic drought, say researchers.
Sulphur "sunshades" are just one example of a "geo-engineering" solution to climate change. Such solutions involve artificially modifying our climate to counteract the effects of human greenhouse gas emission. Other examples include space mirrors and iron fertilisation of the ocean (see also Sunshade for the planet.
Recent research has suggested that sulphur sunshades could rapidly cool the climate back down to pre-industrial temperatures (see Solar shield could be quick fix for global warming).
However, a study, led by Ken Caldeira of the Carnegie Institution of Washington in the US, warned that failing to correctly deploy or maintain such a scheme would result in sudden warming - which would be worse than the long-term warming that had been avoided because of its swiftness.
Now, Kevin Trenberth and Aiguo Dai of the National Center for Atmospheric Research in Colorado, US, have shown that - even if correctly deployed - a sulphur sunshade could have deleterious effects on the environment by reducing rainfall.
Sulphur sunshades are inspired by the cooling effects of large volcanic eruptions, which blast sulphate particles into the stratosphere. The particles reflect part of the Sun's radiation back into space, reducing the amount of heat that reaches the Earth. In 1991, the eruption of Mount Pinatubo in the Philippines cooled Earth by a few tenths of a degree for several years.
To study the effects that sulphur sunshades might have on rainfall, Trenberth and Dai looked at trends in precipitation and continental run-off from 1950 to 2004 to try to detect the impact of the eruptions of Mount Agung in Indonesia 1963, El Chichón in Mexico in 1982, and Pinatubo in 1991.
The researchers had to account for the effects of El Niño, which tends to decrease rain over land, and increase it over the oceans. After this, a marked decrease in rainfall and run-off in the year after the Pinatubo eruption was clear (see graph, right).
However, the Agung and El Chichón eruptions did not produce a detectable signal in the precipitation records. Pinatubo is thought to have pumped significantly more particles into the atmosphere than Agung and El Chichón, releasing aerosols that increased the optical density of the atmosphere by about 10 times more than each of the other two. "We think those two were not strong enough to have an effect on precipitation," says Dai.
Dai and Trenberth say their results suggest that artificially putting large amounts of sulphate particles into the atmosphere in order to decrease solar radiation could have catastrophic effects on the planet's water cycle. "Creating a risk of widespread drought and reduced freshwater resources does not seem like an appropriate fix," they say.
They note that the negative effects experienced after Pinatubo erupted were harshest in the tropics.
Journal reference: Geophysical Research Letters (DOI:10.1029/2007GL030524).
Have your say
Scary or what ?
Thu Aug 02 11:19:47 BST 2007 by bron
As climate change is a nature process and the theory of man made global warming a religion more than a science I think any deliberate interference by man would be ludicrous in the extreme.
Scary or what ?
Thu Aug 02 13:02:14 BST 2007 by Michael Marshall
At New Scientist we report on the evidence, and as a result our coverage reflects the overwhelming scientific consensus that climate change is happening because of anthropogenic emissions of greenhouse gases. We will continue to cover all important new studies relating to the issue in our Special Report on Climate Change (http://environment.newscientist.com/channel/earth/climate-change). We have also addressed 26 common climate myths in our feature Climate Change: A Guide for the Perplexed ((long URL - click here). Michael Marshall, online editorial assistant
Scary or what ?
Fri Aug 03 04:49:35 BST 2007 by The Respected Doofinator
'By Michael Marshall Thu Aug 02 13:02:14 BST 2007 At New Scientist we report on the evidence, and as a result our coverage reflects the overwhelming scientific consensus that climate change is happening because of anthropogenic emissions of greenhouse gases. ' Ummm. No. There is no 'overwhelming scientific consensus'. This is a crock. Show me the evidence of 'overwhelming scientific consensus'. The Respected Doofinator
Scary or what ?
Sun Aug 05 21:58:56 BST 2007 by MvL
Humans are almost certainly the cause, or one of the big causes, of this global warming. Even if we weren't, we'd need to find a way to stop it and generate cooling. Massive warming trends have happened in the past without human intervention (CO2 emissions), and that means they'll happen again in the future without us too, again, even if we are causing the current warming, which I believe the evidence suggests we are. Whether caused by us or not, we need to do something about it. That said, let's seriously consider human geo-engineering strategies. The alternative is to give up - some science suggests that even if we halted all CO2 emissions now, we're still in for potentially catastrophic warming, and halting them all now is totally unrealistic and ridiculous, even if it is desirable. We've managed, since the advent of the industrial revolution, to significantly increase global CO2 levels in the atmosphere, just as an unintentional side effect of our energy production methods. Let's not, with full intention, alter the atmosphere and the global temperature. Whatever we can do as an unintentional side effect, we can certainly undo with intent and ingenuity, and good engineering. Or, it's worth trying, since we might have nothing to lose, and everything to gain.
Scary Or What ?
Fri Dec 28 19:32:20 GMT 2007 by Erik
Wow, what an idiot!
So what you are saying is that even if we aren't causing global warming, you think we should screw with the natural cycles of the earth by taking extreme measures that could very well have negative effects?
You probably call yourself an environmentalist, I bet.
Truly mind-boggling logic!
A lot of information can be gained from stellar spectra, e.g. stellar ages, velocities, evolutionary stage, and abundances, as well as the composition of the environment the stars were born in. High-quality spectra allow us to derive stellar abundances for about 2/3 of the elements in the periodic table. By comparing abundances of elements belonging to different groups in the periodic table, we can gain information about long-gone supernovae, maybe even the first ones. These supernovae enriched the gas we observe today in later generations of stars. To date we still do not fully understand how the heavy elements (Z > 37) are formed. Two main production channels are known to create the majority of the heavy elements through neutron captures, namely the rapid neutron-capture (r-)process and the slow neutron-capture (s-)process. Recently we have seen that each of these production channels seems to branch into two, a main and a weak process. The nature of the weak s-process is fairly well known, while the weak r-process remains a puzzle. Silver and palladium, as well as other elements in the range 40 < Z < 50, are thought to form via the weak r-process. Hence, Pd and Ag may carry key information on this process. By studying elements (Sr, Y, Zr, Ba, Eu) with well-known formation processes, we can compare their abundances to those of Ag and Pd, and thereby learn about the differences and similarities of the various formation processes. Here, I will outline the procedure astronomers go through to derive stellar abundances, the approximations involved, and the problems we face when doing so.
|Nov14-12, 02:57 AM||#1|
Flux trapping effect.
Magnetic suspension and levitation are caused by the flux-trapping effect in superconductors. How does this flux get "trapped"?
Another quick question-
We couple a magnet to a superconductor by bringing it very close, until the two begin attracting and repelling. My question is: if the magnetic field strength of this magnet is greater than the critical field of the superconductor, the superconductor will lose its properties and become normal, instead of demonstrating magnetic levitation and suspension, right?
|Nov14-12, 07:36 PM||#2|
This phenomenon is called the Meissner effect. Wikipedia offers a great explanation of the subject. If you are looking for a mathematical explanation, you should look up the "London equations"; I believe they explain the expulsion of flux mathematically.
|Nov15-12, 04:22 AM||#3|
The flux lines are trapped around impurities, grain boundaries, etc. The math is quite complicated, but can be found in standard textbooks (see e.g. Tinkham).
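On the mathematical side, the London equations predict that an applied field decays inside a superconductor over the London penetration depth λ_L = √(m / (μ0·n_s·e²)). A rough sketch, with an illustrative carrier density that is an assumption of mine, not a value from this thread:

```python
import math

# Physical constants (SI)
M_E = 9.109e-31            # electron mass, kg
E_CHARGE = 1.602e-19       # elementary charge, C
MU_0 = 4 * math.pi * 1e-7  # vacuum permeability, H/m

def london_depth_nm(n_s):
    """London penetration depth lambda_L = sqrt(m / (mu0 * n_s * e^2)), in nm.
    n_s is the superconducting carrier density in m^-3."""
    lam_m = math.sqrt(M_E / (MU_0 * n_s * E_CHARGE ** 2))
    return lam_m * 1e9

# For an assumed n_s ~ 1e28 m^-3, the field dies off over a few tens of nm.
print(round(london_depth_nm(1e28)))  # ~53 nm
```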
The Trouble with Gerrold: Self-validating code
January 15, 2013 —
At the beginning, it was called “microcomputing.” Enthusiasts were delighted at the idea that computing could finally be freed from the “priesthood.” Magazines like “Creative Computing,” “Byte,” and “Kilobaud” foresaw a future when programming would be a skill as ubiquitous as reading, writing and long division.
Right. Nbdy duz lng div anymor & we all rite lk ths now.
The future refuses to cooperate with our predictions and forecasts. But aside from that, the early days of microcomputing were very exciting, because you could watch the first stages of evolution at work. There wasn’t a lot of software at the beginning. You had to write your own. So you went to the magazines to learn, and later on, CompuServe.
A useful article might explain why Quick Sort was better than Bubble Sort, comparing sort times, explaining the algorithm, and finally providing a sample code listing that you could adapt to your own use. Another article might do the same for hash tables. A third would walk you through string handling.
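A sketch of the comparison such an article would walk through, with bubble sort's quadratic passes set against quicksort's divide-and-conquer (names and sizes here are illustrative, not from any of the magazines mentioned):

```python
import random
import time

def bubble_sort(a):
    """O(n^2): repeatedly swap adjacent out-of-order pairs."""
    a = list(a)
    for i in range(len(a) - 1, 0, -1):
        for j in range(i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

def quick_sort(a):
    """O(n log n) on average: partition around a pivot, recurse on both sides."""
    if len(a) <= 1:
        return list(a)
    pivot = a[len(a) // 2]
    less = [x for x in a if x < pivot]
    equal = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return quick_sort(less) + equal + quick_sort(greater)

data = [random.randint(0, 9999) for _ in range(1000)]
for fn in (bubble_sort, quick_sort):
    t0 = time.perf_counter()
    fn(data)
    print(f"{fn.__name__}: {time.perf_counter() - t0:.3f}s")
```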
A lot of those early tutorials were linked to simple games like Hammurabi and Tic-Tac-Toe, so after you finished entering the code (learning as you went), you could play the game—and as you learned, you could add your own modifications. Eventually, “Creative Computing” showed how to write Colossal Cave Adventure in BASIC and store it all on two floppy disks, and that was the beginning of the text adventure explosion.
In those days, every manufacturer had their own implementation of BASIC, so listings often had to be translated. That meant becoming familiar with a lot of different dialects. When Turbo Pascal arrived, it unified a lot of software development, and because it compiled directly to a COM file, it was faster than interpreted BASIC, a very important advantage when you’re running at only 2MHz.
BASIC was notorious for resulting in spaghetti code. Turbo Pascal made it possible to write structured code. Although Pascal was originally intended as a learning language, Turbo Pascal was both an editor and a compiler. It was a powerful breakthrough for both hobbyists and professional programmers.
WHY DOES A PINE TREE PRODUCE TURPENTINE?
Q. Why would a tree living in a habitat that catches fire every few years produce turpentine, a highly flammable substance?
That question was asked as I was building a fire at home with a piece of fat lighter, wood from the stump of a long-dead pine tree. Fat lighter, also known as fatwood, catches fire immediately and burns longer and hotter than the driest wood. With a piece no bigger than a cell phone, you can start a fire without paper. Before you strike a match to fat lighter, smell it. Good fat lighter is permeated with turpentine. The turpentine neither harms nor aids the tree while it is alive, becoming of value to it only after the tree dies. Does that sound like a riddle worthy of the Sphinx? The answer to the apparent conundrum lies in the natural world's extraordinary ability to adapt and evolve.
Turpentine, a substance characteristic of pine trees and other conifers, is composed of a mixture of resins and volatile oils. The by-products have been used in a wide variety of applications including caulking for wooden ships, solvent for paint and varnish, and as an ingredient in insecticides, cleaning agents, and shoe polish. Turpentine products have even been used for medicinal purposes. A great turpentine industry was once centered in the South, where pine trees, especially longleaf and slash pine, were tapped for turpentine, the way sugar maples are tapped for sap to produce maple syrup.
The turpentine industry took advantage of a pine tree's natural response to injury. If the bark is broken, the tree begins to ooze sticky, yellowish sap that eventually dries and seals the wound with a layer of resin. The material is resistant to most wood-eating insects that might further damage the tree. The liquid can be distilled to produce turpentine.
But longleaf pines also have a characteristic that makes turpentine production seem counterintuitive. They live in what is known as a fire climax community. This means that, historically, trees and other plants that persisted in a longleaf pine community had to survive natural, periodic fires that swept through the forests, primarily as a result of summer lightning strikes. Some ecologists criticize forest management programs that prescribe controlled burns during winter because natural fires would usually have occurred in summer. Presumably, plants and animals in regions that experienced frequent fires evolved to tolerate warm weather fires.
Longleaf pine is a species well-adapted to survive fires at intervals of less than ten years. Young longleaf seedlings, in the so-called grass stage, can be burned back to the ground and then, unharmed, resprout the same season. A larger, more mature tree is also immune to a fast-burning forest fire because its thick bark is resistant to fire (and has no turpentine in it).
But why would a pine tree that, under former natural conditions, was sure to be subjected to numerous fires during its lifetime be saturated with readily flammable turpentine? An ecologically harmonious answer is that the turpentine is advantageous to the tree after it, or any part of it, dies.
Here's how. A pine tree dies, and within a few months or years, after the tree's bark has fallen off, a fire sweeps through the area. The dead tree, especially the stump of fat lighter, burns to the ground. So do any dead needles or limbs that were already on the ground. Nutrients bound inside the dead tree are returned to the soil and once again become available for other pine trees.
But animals and plants, including pine trees, are not altruistic, so why would this be of advantage to the tree? The simplest explanation is that most of the nearby trees would be descendants of the burned tree. The tree would be returning the nutrients to its own kin. In addition, adding fuel to the periodic fires would eliminate other trees that were not fire-tolerant species and that might otherwise compete with the pine trees.
There you have it. Pine trees have worked out an efficient and effective mechanism to deal with periodic fires over evolutionary time. The riddle of the turpentine-saturated pine tree is solved.

If you have an environmental question or comment, email
Science subject and location tags
Articles, documents and multimedia from ABC Science
Wednesday, 24 April 2013
Ask an Expert Where does the solar system end, and has Voyager 1 crossed the edge yet?
Monday, 22 April 2013
All systems go An Antares rocket, one of two launchers developed with NASA backing to fly cargo capsules to the International Space Station, has blasted off on its debut mission, successfully depositing a dummy spacecraft into orbit.
Friday, 12 April 2013
Lost in space Australia's first space policy has received a mixed response with some saying it lacks inspiration and long-term vision, while others claim it ensures future access to vitally needed space resources.
Thursday, 21 March 2013
Despite a debate between NASA and astronomers over whether the Voyager 1 spacecraft has left the solar system, one thing is sure - it is where no man has gone before.
Wednesday, 6 March 2013
StarStuff Podcast Monster flows from the Milky Way could hold a clue to solving the mystery of dark matter. Also; scientists spot the birth of a new planet; and two comets streak across southern skies.
Friday, 1 March 2013
Radiation ring For more than four weeks last year, a previously unknown third radiation belt circled Earth before it was annihilated - along with the entire outer belt - by a shock wave, a pair of NASA probes show.
Wednesday, 13 February 2013
StarStuff Podcast Get ready for an asteroid half the size of a football field to skim past Earth. Also; asteroid collision and dinosaur extinction much closer than previously thought; and have scientists discovered the last warnings signs of a supernova?
Wednesday, 6 February 2013
StarStuff Podcast Time to rewrite the text books on planetary evolution. Also; new CSIRO observations support Big Bang theory, and South Korea launches its first rocket into space.
Wednesday, 30 January 2013
StarStuff Podcast Astronomers discover why the Sun's atmosphere is hotter than its surface. Also; biggest asteroid ever detected about to fly close to Earth. And new theory explores origins of mysterious red-glowing objects in deep space.
Tuesday, 15 January 2013
Jumping into hyperspace onboard the Millennium Falcon won't result in cascade of streaking stars, according to a study.
Monday, 17 December 2012
Mission's end A pair of robotic space probes circling the Moon to reveal what is inside will plunge into the lunar surface tomorrow.
Wednesday, 12 December 2012
North Korea has successfully launched a rocket into orbit, the same day the US Air Force's secretive X-37B spacecraft returned to space.
Wednesday, 12 December 2012
StarStuff Podcast Historic letters reveal Einstein and Schrodinger discussed the concept of dark energy long before it was discovered. Also; new theory on why some stars explode, and and North Korea successfully launches its first rocket into space.
Thursday, 6 December 2012
Asteroids and comets colliding with the Moon not only pitted its surface but also severely fractured its crust, say NASA researchers.
Tuesday, 4 December 2012
NASA's Mars rover Curiosity has found traces of compounds containing carbon, an essential building block for life, scientists say.
lugworm
lugworm, (genus Arenicola), any of several marine worms (class Polychaeta, phylum Annelida) that burrow deep into the sandy sea bottom or intertidal areas and are often quite large. Fishermen use them as bait. Adult lugworms of the coast of Europe (e.g., A. marina) attain lengths of about 23 cm (9 inches). The lugworm of the coasts of North America (A. cristata) ranges in length from 7.5 to 30 cm.
The body is segmented, or ringed. The head end is dark red; behind it the body is fatter and lighter in colour. Toward the tail the body becomes thinner and yellowish red. The middle of the body has bristles and about 12 pairs of feathery gills.
Lugworms feed on decayed organic matter and ingest sand along with the food particles. At low tide their coiled casts (masses of excrement) may often be seen piled above their burrows. Their burrows may extend as deep as 60 cm (2 feet). The animals are hermaphroditic; i.e., functional reproductive organs of both sexes occur in the same individual. The eggs of one individual, however, are fertilized by the sperm of another.
Common Lisp the Language, 2nd Edition
The Common Lisp facility for generating pseudo-random numbers has been carefully defined to make its use reasonably portable. While two implementations may produce different series of pseudo-random numbers, the distribution of values should be relatively independent of such machine-dependent aspects as word size.
random number &optional state
(random n) accepts a positive number n and returns a number of the same kind between zero (inclusive) and n (exclusive). The number n may be an integer or a floating-point number. An approximately uniform choice distribution is used. If n is an integer, each of the possible results occurs with (approximate) probability 1/n. (The qualifier ``approximate'' is used because of implementation considerations; in practice, the deviation from uniformity should be quite small.)
The argument state must be an object of type random-state; it defaults to the value of the variable *random-state*. This object is used to maintain the state of the pseudo-random-number generator and is altered as a side effect of the random operation.
To produce random floating-point numbers in the half-open range [A, B), accepted practice (as determined by a look through the Collected Algorithms from the ACM, particularly algorithms 133, 266, 294, and 370) is to compute X * (B - A) + A, where X is a floating-point number uniformly distributed over [0.0, 1.0) and computed by calculating a random integer N in the range [0, M) (typically by a multiplicative-congruential or linear-congruential method mod M) and then setting X = N/M. If one takes M = 2^f, where f is the length of the significand of a floating-point number (and it is in fact common to choose M to be a power of 2), then this method is equivalent to the following assembly-language-level procedure. Assume the representation has no hidden bit. Take a floating-point 0.5, and clobber its entire significand with random bits. Normalize the result if necessary.
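The X = N/M recipe is easy to state in a modern language. The following sketch uses Python rather than Lisp, with f = 53 (the IEEE double-precision significand length) as an assumption about the host float format:

```python
import random

def uniform_half_open(a, b, f=53, rng=random):
    """Uniform float on [a, b) via X = N/M with M = 2**f,
    then X * (B - A) + A, as described above."""
    m = 2 ** f
    n = rng.randrange(m)   # random integer N in [0, M)
    x = n / m              # X uniformly distributed over [0.0, 1.0)
    return x * (b - a) + a

val = uniform_half_open(2.0, 5.0)
assert 2.0 <= val < 5.0
```

Because N < M, X never reaches 1.0, so the result stays strictly below B.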
For example, on the DEC PDP-10, assume that accumulator T is completely random (all 36 bits are random). Then the code sequence
LSH T,-9   ;Clear high 9 bits; low 27 are random
FSC T,128. ;Install exponent and normalize
will produce in T a random floating-point number uniformly distributed over [0.0, 1.0). (Instead of the LSH instruction, one could do
TLZ T,777000 ;That's 777000 octal
but if the 36 random bits came from a congruential random-number generator, the high-order bits tend to be ``more random'' than the low-order ones, and so the LSH would be better for uniform distribution. Ideally all the bits would be the result of high-quality randomness.)
With a hidden-bit representation, normalization is not a problem, but dealing with the hidden bit is. The method can be adapted as follows. Take a floating-point 1.0 and clobber the explicit significand bits with random bits; this produces a random floating-point number in the range [1.0, 2.0). Then simply subtract 1.0. In effect, we let the hidden bit creep in and then subtract it away again.
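The hidden-bit adaptation translates directly to IEEE-754 doubles (52 explicit significand bits); this Python sketch is an illustration of the technique, not part of the original text:

```python
import random
import struct

def random_unit_float(bits):
    """Clobber the 52 explicit significand bits of 1.0 with random bits,
    giving a float in [1.0, 2.0), then subtract 1.0."""
    one = struct.unpack('<Q', struct.pack('<d', 1.0))[0]   # bit pattern of 1.0
    patched = one | (bits & ((1 << 52) - 1))               # install random significand
    return struct.unpack('<d', struct.pack('<Q', patched))[0] - 1.0

x = random_unit_float(random.getrandbits(64))
assert 0.0 <= x < 1.0
```

In effect the hidden bit creeps in as the leading 1 of 1.0 and is subtracted away again, exactly as the text describes.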
For example, on the DEC VAX, assume that register T is completely random (but a little less random than on the PDP-10, as it has only 32 random bits). Then the code sequence
INSV #^X81,#7,#9,T ;Install correct sign bit and exponent
SUBF #^F1.0,T      ;Subtract 1.0
will produce in T a random floating-point number uniformly distributed over [0.0, 1.0). Again, if the low-order bits are not random enough, then the instruction
should be performed first.
Implementors may wish to consult reference for a discussion of some efficient methods of generating pseudo-random numbers.
*random-state*
This variable holds a data structure, an object of type random-state, that encodes the internal state of the random-number generator that random uses by default. The nature of this data structure is implementation-dependent. It may be printed out and successfully read back in, but may or may not function correctly as a random-number state object in another implementation. A call to random will perform a side effect on this data structure. Lambda-binding this variable to a different random-number state object will correctly save and restore the old state object.
make-random-state &optional state
This function returns a new object of type random-state, suitable for use as the value of the variable *random-state*. If state is nil or omitted, make-random-state returns a copy of the current random-number state object (the value of the variable *random-state*). If state is a state object, a copy of that state object is returned. If state is t, then a new state object is returned that has been ``randomly'' initialized by some means (such as by a time-of-day clock).
To handle the common situation of executing the same program many times in a reproducible manner, where that program uses random, the following procedure may be used:
It is also possible to make copies of a random-state object directly without going through the print/read process, simply by using the make-random-state function to copy the object; this allows the same sequence of random numbers to be generated many times within a single program.
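As an aside (not part of the original text), the same save-and-replay idiom exists in other languages; in Python's `random` module, the generator state plays the role of the random-state object:

```python
import random

rng = random.Random(42)
saved = rng.getstate()               # copy the current generator state
seq1 = [rng.random() for _ in range(3)]
rng.setstate(saved)                  # restore the saved state
seq2 = [rng.random() for _ in range(3)]
assert seq1 == seq2                  # the same sequence is replayed
```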
A random-state object might be printed as #S(RANDOM-STATE DATA #(14 49 98436589 786345 8734658324 ...)), where the components are of course completely implementation-dependent.
random-state-p object
random-state-p is true if its argument is a random-state object, and otherwise is false.
(random-state-p x) == (typep x 'random-state)
Lightning in Canada
Did you know that lightning flashes occur in Canada about 2.34 million times a year, including about once every three seconds during the summer months? Did you also know that each year in Canada, lightning strikes kill up to 10 people, seriously injure up to 164 others, and ignite some 4,000 forest fires?
Clearly this is a form of severe weather which affects Canadians often, and that is why Environment Canada is working hard not only to better predict when and where lightning will strike, but to keep you informed about lightning and its dangers.
In this section, you will find out where Environment Canada scientists have identified lightning strike hot spots across the country. You can find out where strikes are currently happening, in near real-time in your local area and across the country. You can also check out important lightning safety tips, learn about the Canadian Lightning Detection Network, and much more!
Visit these helpful links to find out more about lightning in Canada!
- Lightning safety – what you need to know to stay safe
- Science of lightning – how does it work?
- Canadian lightning statistics – what are the risks?
- Canada’s Lightning Detection Network
- Lightning and Forest Fires
- Learn more….stories, test your knowledge, and strange but true
How Science Works
Scientific Knowledge - Uses - Limitations.
Scientific knowledge can be used to develop technology.
The results of improved technology
can be more useful to some people than to others.
For example, developing genetic changes to crops
to make them grow in hot dry conditions might be more useful
to people living nearer the equator than the poles.
Improved technology can be expensive and some people
might not benefit because they do not have enough money.
For example, radiotherapy used to treat cancer
may only be available in richer countries.
Scientific knowledge has limitations.
There are some questions which cannot be answered
because the evidence is incomplete or inconclusive.
This type of question can be answered by
science in the future after doing further research.
There are some questions which can never be answered
by science. Science can tell us how to do things but not
whether it is good or right to do things. People look to
philosophy, politics or religion to answer these questions.
Copyright © 2012 Dr. Colin France. All Rights Reserved.
1. A glass flask whose volume is exactly 1000cm^3 at 0 degrees Celsius is completely filled with mercury at this temperature. When the flask and mercury are heated to 100 degrees Celsius, 15.2cm^3 of mercury overflow. If the coefficient of volume expansion of mercury is 18 x 1...
variable separable (separation of variables) 1.) (2x+y)dy + (2x+y+6)dx=0 2.) (5t+1)t ds + (25t-1)s dt=0. I don't have the solution to this problem; I think it is a "repeated expression". Can someone help me on this? thanks a lot. :)
1.) (2x+y)dy + (2x+y+6)dx=0 2.)(5t+1)t ds + (25t-1)sdt=0 thanks. :)
thanks for your help. :)
1.) Water flows from a 2 cm diameter pipe, at a speed of 0.35m/sec. How long will it take to fill a 10 liter container? 2.) A horizontal segment of pipe tapers from a cross-sectional area of 50 cm^2 to 0.5 cm? The pressure at the larger end of the pipe is 1.2x 10^5 Pa and the ...
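For the first question, here is a quick computed check (a sketch only, assuming the 2 cm figure is the pipe's inner diameter): volumetric flow is Q = v * A, and the fill time is t = V / Q.

```python
import math

v = 0.35                        # flow speed, m/s
area = math.pi * (0.02 / 2)**2  # cross-section of a 2 cm diameter pipe, m^2
flow = v * area                 # volumetric flow rate, m^3/s
t = 0.010 / flow                # 10 litres = 0.010 m^3
print(round(t))                 # roughly 91 seconds
```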
can someone help me on this? thanks. A viewing window 30 cm in diameter is installed 3 m below the surface of an aquarium tank filled with sea water. The force the window must withstand is approximately: a. 22N b. 218N c. 2140N d. 8562N thanks
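A quick computed check of the aquarium-window question (assumed values, not from the post: seawater density 1025 kg/m^3 and g = 9.8 m/s^2; the gauge pressure is P = rho * g * h on a 30 cm disc):

```python
import math

rho, g, h = 1025.0, 9.8, 3.0      # assumed seawater density, gravity, depth
pressure = rho * g * h            # hydrostatic gauge pressure, Pa
area = math.pi * (0.30 / 2)**2    # 30 cm diameter window, m^2
force = pressure * area           # net force from the water side, N
print(round(force))               # about 2130 N, closest to choice (c) 2140 N
```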
that is the only information I have copied.
Can someone help me on here? thanks. :) A U-shaped tube is partly filled w/ water and partly filled w/ liquid that does not mix w/ water. Both sides of the tube are open to the atmosphere. What is the density of the liquid?
There are four types/purposes of Communication 1. Exposition 2. Narration 3. Description 4. Argumentation can someone give me a definition and an example of Exposition? thanks. :)
“Predator” bacteria (green) surround “prey” bacteria (red) in this petri dish version of the Serengeti. Rather than eating their prey, however, predator cells release a chemical that activates a suicide gene in the prey. Prey cells also release a chemical, but one that promotes survival of the predators. Researchers genetically programmed the cells to “communicate” with each other in this way and function as a synthetic ecosystem. The artificial system acts as an experimental model and can help us understand behaviors in more complex, natural ecosystems. July 9, 2008
Courtesy of Hao Song, Duke University.
Full Story: (http://publications.nigms.nih.gov/computinglife/predator.htm)
Solar Energy is Energy from the Sun
Solar Energy Can Be Used for Heat and Electricity
Solar thermal energy:
There are many applications for the direct use of solar thermal energy: space heating and cooling, water heating, crop drying and solar cooking. It is a technology which is well understood and widely used in many countries throughout the world. Most solar thermal technologies have been in existence in one form or another for centuries and have a well established manufacturing base in most sun-rich developed countries.
The most common use for solar thermal technology is for domestic water heating.
[Table: solar water heater annual production (litres of water) and annual running cost ($)]
There are two basic types of solar thermal power station. The first is the 'Power Tower' design which uses thousands of sun-tracking reflectors or heliostats to direct and concentrate solar radiation onto a boiler located atop a tower. The temperature in the boiler rises to 500 to 700 °C, and the steam raised can be used to drive a turbine, which in turn drives an electricity-producing generator.
The second type is the distributed collector system. This system uses a series of specially designed 'Trough' collectors which have an absorber tube running along their length. Large arrays of these collectors are coupled to provide high temperature water for driving a steam turbine. Such power stations can produce many megawatts (MW) of electricity, but are confined to areas where there is ample solar insolation.
There are other uses of solar thermal energy, such as solar cooking, crop drying, space heating, space cooling and day-lighting.
Photovoltaic modules or panels are made of semiconductors that allow sunlight to be converted directly into electricity. These modules can provide you with a safe, reliable, maintenance-free and environmentally friendly source of power for a very long time. Most modules on the market today come with warranties exceeding 20 years, and will perform for much longer.
How it works:
PV cells convert sunlight directly into electricity without creating any air or water pollution. PV cells are made of at least two layers of semiconductor material. One layer has a positive charge, the other negative. When light enters the cell, some of the photons from the light are absorbed by the semiconductor atoms, freeing electrons from the cell’s negative layer to flow through an external circuit and back into the positive layer. This flow of electrons produces electric current.
Basic solar cell construction:
Individual PV cells are interconnected together in a sealed, weatherproof package called a module. When two modules are wired together in series, their voltage is doubled while the current stays constant. When two modules are wired in parallel, their current is doubled while the voltage stays constant. To achieve the desired voltage and current, modules are wired in series and parallel into what is called a PV array. The flexibility of the modular PV system allows designers to create solar power systems that can meet a wide variety of electrical needs, no matter how large or small.
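The series/parallel rules above reduce to simple arithmetic. This sketch uses made-up module ratings (12 V, 5 A) purely for illustration:

```python
def pv_array(module_v, module_i, n_series, n_parallel):
    """Series wiring adds voltages; parallel wiring adds currents."""
    return module_v * n_series, module_i * n_parallel

# Hypothetical 12 V / 5 A modules: strings of 3 in series, 2 strings in parallel
v, i = pv_array(12.0, 5.0, 3, 2)
assert (v, i) == (36.0, 10.0)   # array power 360 W = 6 modules x 60 W each
```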
Photovoltaic cells, modules and arrays
Using solar energy produces no air or water pollution and no greenhouse gases, but does have some indirect impacts on the environment. For example, there are some toxic materials and chemicals, and various solvents and alcohols that are used in the manufacturing process of photovoltaic cells (PV), which convert sunlight into electricity. Small amounts of these waste materials are produced.
In addition, large solar thermal power plants can harm desert ecosystems if not properly managed. Birds and insects can be killed if they fly into a concentrated beam of sunlight, such as that created by a “solar power tower.” Some solar thermal systems use potentially hazardous fluids (to transfer heat) that require proper handling and disposal.
Concentrating solar systems may require water for regular cleaning of the concentrators and receivers and for cooling the turbine-generator. Using water from underground wells may affect the ecosystem in some arid locations.
The 'Pioneer anomaly' - the mystifying observation that NASA's two Pioneer spacecraft have drifted far off their expected paths - cannot be explained by tinkering with the law of gravity, a new study concludes.
The study's author suggests an unknown, but conventional, force is instead acting on the spacecraft. But others say even more radical changes to the laws of physics could explain the phenomenon.
Launched in the early 1970s, NASA's Pioneer 10 and 11 spacecraft are drifting out of the solar system in opposite directions, gradually slowing down as the Sun's gravity pulls back on them.
But they are slowing down slightly more than expected and no one knows why. Some physicists say the law of gravity itself needs revising, so that gravity retains more strength in the outer solar system. But there has been disagreement about whether such modifications would accurately predict the orbits of the outer planets.
Now, Kjell Tangen, a physicist at the firm DNV in Hovik, Norway, says tweaking the law of gravity in a variety of ways cannot explain the anomaly - while also getting the orbits of the outer planets right. After modifying gravity in ways that would match the Pioneer anomaly, he inevitably got wrong answers for the motion of Uranus and Pluto.
That suggests conventional physics - such as drag due to dust grains in space, or the emission of heat from small nuclear generators on board, known as RTGs, in some directions more than others - probably causes the anomaly, Tangen says. But he admits a definitive cause remains elusive. "It is easier to be conclusive about what cannot be the cause," he told New Scientist.
Myles Standish, who calculates solar system motions at NASA's Jet Propulsion Laboratory (JPL) in Pasadena, California, says most scientists suspect the asymmetrical radiation of heat from the spacecraft is to blame.
But he also acknowledges that the orbits of Uranus, Neptune and Pluto have not been measured as precisely as those of the inner planets, suggesting the new study by Tangen cannot rule out modified gravity as a cause. "The measurements are not able to support any definite conclusions," he told New Scientist.
Other scientists say the effect could be explained by even more extreme changes to Einstein's general theory of relativity, since Tangen did not alter one of its central tenets - the equivalence principle.
The principle says that all objects respond to gravity in the same way regardless of their mass, composition or the paths they took to their present location. Among other things, it explains why a feather and a bowling ball fall at the same rate in a vacuum.
If you allow violations of the equivalence principle, modifying the laws of physics can explain the Pioneer anomaly without messing up the orbits of the outer planets, says Robert Sanders of the University of Groningen in the Netherlands.
A theory called modified inertia, proposed by Mordehai Milgrom of the Weizmann Institute of Science in Rehovot, Israel, does just this. It says the way objects accelerate under gravity depends on their past trajectories - a breach of the equivalence principle. In this scenario, the Pioneer spacecraft, whose trajectories are taking them out of the solar system, experience an anomaly, while the outer planets, whose orbits keep them bound to the Sun, do not.
Re-flying the mission
If confirmed, Tangen's conclusions would be very significant, Sanders says. "Either the Pioneer anomaly isn't real - that is, it's another physical effect that they haven't taken into account, or whatever modification of gravity it is doesn't obey the equivalence principle," he told New Scientist.
That confirmation could come relatively quickly. Slava Turyshev of NASA has been compiling additional data from the Pioneer 10 and 11 spacecraft that had been unavailable because they were in archaic file formats and storage media.
The data holds information about the spacecraft's internal behaviour, including the heat released by the RTGs. This can be compared to the tracking data to see whether the Pioneer anomaly matches the changes in heat radiated throughout the spacecraft's lifetime.
The analysis is "going reasonably well", Turyshev told New Scientist. "We should be able to tell more on the anomaly in a year or so."
Journal reference: Physical Review D (in press)
Have your say
Tue Nov 13 17:18:37 GMT 2007 by Jaap Kannegieter
Gravity is unknown, and secondary E-M effects can't be ruled out. Polarized virtual particles near real particles are the source of Newtonian gravity (1/r2), but non-Newtonian at extreme distances (1/r, responsible for the Pioneer anomaly and the flat rotation curve of galaxies).
Tue Nov 27 15:18:27 GMT 2007 by Simon E. Bode
This is an interesting concept: E-M interaction. I had always thought (before I was instructed differently) that the galactic rotation WAS as a rigid body --> you say held by E-M effects. The galaxy behaves as much more than a collection of independent stars - our own Solar Wind is an indication that there is more to the galaxy than stars and a vacuum.
Pioneer Anomaly Gravitational?
Tue Nov 27 15:14:08 GMT 2007 by Simon E. Bode
Surely on a basic level, accepting that gravitational theory in the outer solar system is no different than near Earth, the slight anomaly is due to an unexplained amount of extra matter in the dust disk around the Sun? Within a dust shell, or a wide dust disk, there would be no net gravitational effect on planets or probes, yet if the probe is inside the shell or disk the acceleration anomaly depends on the mass distribution (and significant masses of gas and dust - which is odd if the solar system is not young). Surely it's not more complicated than that?
Pioneer Anomaly Gravitational?
Wed Dec 12 17:55:13 GMT 2007 by Richard D. Saam
The Pioneer deceleration anomaly may be interpreted as due to transfer of momentum from momentum space (density ~6E-30 g/cm^3), independent of frame and proportional to c^2, at an extremely cold 8.11E-16 K; an effect on all objects according to their area/mass ratio (the effect is greater on smaller particles), which accounts for the flat galactic rotation curves.
Tue Dec 25 05:43:29 GMT 2007 by Gw Fourmyle
Well, pilgrims--you've a few 'anomalies', hardly 1. Start with the galaxy chains, loops around huge voids--sure, put an 'inflationary-theory' band-aid on it. Yeah. I suggest an elegant solution--as -entropy has been produced on earth, and +entropy, i.e. 'S' is rather obvious at reaching far beyond its initial thermodynamic concept--seems to me S is the 'property-of-mass', and 'G'? a myth. Sure, mass curves space/time, i contend that is an 'S' function. 'disorder-with-respect-to mass'. Realizing Dr. E. Was doing his work, on a massive planet, in a massive stellar system, one would expect these 'anomalies', as probes exit, or universes of infinite mass will bear infinite S---fortunately infinite -S must be equal. Extrapolating calculations made in a burning house to All houses? here i see massive 'error-of-reasoning'. The stunning aspect is if S may 'disorder' space/time? why not living-systems? the simple experiment, for most, is simply looking in a mirror, reading the news, or fighting cancer---too long, physics has only dealt in 'dead-stuff'---there is more---
When an electron collides with an atom or ion, there is a small probability that the electron kicks out another electron, leaving the ion in the next highest charge state (charge q increased by +1). This is called electron-impact ionization and is the dominant process by which atoms and ions become more highly charged. The reaction equation is:
e- + A(q) → e- + e- + A(q+1).
From energy conservation, it is clear that the initial energy of the incident electron must be larger than the ionization potential of the electron being removed.
Theorists have found electron impact ionization cross sections difficult to calculate from first principles, even for relatively simple systems like hydrogen-like (one electron) and helium-like (two electron) ions. There has been recent progress in developing a phenomenological theory by Dr. Yong-Ki Kim here at NIST.
A simple empirical formula for calculating electron impact ionization cross sections was developed by W. Lotz over 25 years ago. It is not very accurate, but it does give experimentalists a useful qualitative picture.
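In its simplest form (keeping only the leading term, with the single constant a = 4.5e-14 cm^2 eV^2 and the exponential correction factors dropped; this is an approximation for illustration, not Lotz's full fit), the formula sums over subshells with q_i electrons of binding energy P_i:

```python
import math

def lotz_cross_section(energy_ev, subshells):
    """Simplified Lotz formula: sigma = a * sum(q_i * ln(E/P_i) / (E * P_i)),
    summed over subshells given as (q_i electrons, binding energy P_i in eV).
    Only subshells with E > P_i contribute (energy-conservation threshold)."""
    a = 4.5e-14  # cm^2 eV^2, the single fit constant of the simplified form
    return sum(a * q * math.log(energy_ev / p) / (energy_ev * p)
               for q, p in subshells if energy_ev > p)

# One 1s electron bound by 13.6 eV (hydrogen-like ground state)
sigma = lotz_cross_section(100.0, [(1, 13.6)])
assert 0.0 < sigma < 1e-15   # about 6.6e-17 cm^2 at 100 eV
```

As the text notes, the incident energy must exceed the ionization potential; below threshold, the cross section is zero.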
Radiative recombination is a process which takes place when a positively charged ion captures an electron to one of its bound orbits with a simultaneous emission of a photon:
e- + A(q+) → A(q-1) + photon
In the dielectronic recombination process the energy which becomes available during the capture process is carried away by the promotion of a bound electron to another bound orbit:
e- + A(q+) → A(q-1)** → A(q-1) + photon
In the second step of the dielectronic recombination process a photon is emitted characteristic to the doubly excited state (**) of the q-1 times ionized ion. The dielectronic recombination is a resonant process, because of the discrete energy nature of the bound electron orbits.
Both radiative and the dielectronic recombination are important capture processes which play a dominant role in determining the charge state balance of highly ionized astrophysical and laboratory plasmas.
In a recent investigation carried out with our EBIT , scandium-like and titanium-like barium ions were created, trapped, and excited. X-ray peaks arising from both radiative recombination and dielectronic recombination were studied simultaneously. In the DR process a 2p electron was promoted to the 3d orbital. One of the M-shell electrons of the recombined ion subsequently decayed radiatively to the 2p vacancy, and emitted an x-ray of energy almost twice the incident kinetic energy of the projectile electron. Comparison with theoretical estimates showed a favorable agreement with the data. The theoretical calculations were carried out by the theory group of the University of Connecticut (McLaughlin and Hahn).
The measurement of excited-state lifetimes is complementary to measuring transition wavelengths as a way of studying atomic structure. Although the lifetimes are determined by the same wavefunctions as the energy levels the measurements of the atomic decays carry different information since they are sensitive to the long-range behavior of the wavefunctions. The knowledge of the lifetimes also has important practical applications. They are critical in the density diagnostics of laboratory and astrophysical plasmas.
The principle of measuring lifetimes with an EBIT lies in the periodic fast switching of different voltages in the machine. Since the ions are created and excited with the same beam of electrons, by changing the electron beam energy one can selectively exclude certain levels from being excited. This can simply be done by setting the electron beam energy below the excitation threshold of the level to be excluded. Without further excitation the time dependence of the emitted photon signal carries the information about the lifetime of the level. After a certain period of time (determined by the lifetime of the level) the electron beam energy is set to be above the excitation threshold to repopulate the level and repeat the sequence. An alternative method for measuring lifetimes with an EBIT is to switch off the electron beam completely, take data, and turn the beam back again to re-excite the ions in the trap. While the electron beam is off, the ions remain trapped by the magnetic field. The lifetime range that can be measured with an EBIT is determined by the capabilities for the fast switching of voltages. In principle the 10 ns to 10 ms lifetime range can be addressed by this method. Since this lifetime range is only partially covered by other methods the EBIT is a unique tool for measuring the lifetime of long living metastable levels.
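The decay-curve idea behind both switching schemes can be illustrated with a toy calculation (not from the original text): with excitation off, the photon signal falls as N(t) = N0 * exp(-t/tau), so two samples of the signal determine the lifetime.

```python
import math

def lifetime_from_decay(t1, n1, t2, n2):
    """Solve N(t) = N0 * exp(-t/tau) for tau, given signal samples
    n1 at time t1 and n2 at time t2 (seconds)."""
    return (t2 - t1) / math.log(n1 / n2)

# Synthetic data: a 1 ms lifetime sampled at t = 0 and t = 2 ms
tau = lifetime_from_decay(0.0, 1000.0, 2.0e-3, 1000.0 * math.exp(-2.0))
assert abs(tau - 1.0e-3) < 1e-9
```

In practice many samples are fitted, but the principle is the same: the time dependence of the emitted photon signal carries the lifetime.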
In a recent experiment we have measured the lifetime of a visible-light-emitting metastable level. The transition takes place within the ground state configuration of titanium-like ions. The measured lifetimes fall into the millisecond range.
Ether is a class of organic compounds which contain an ether group — an oxygen atom connected to two (substituted) alkyl or aryl groups — of general formula R–O–R'. A typical example is the solvent and anesthetic diethyl ether, commonly referred to simply as "ether" (ethoxyethane, CH3-CH2-O-CH2-CH3).
Ether molecules cannot form hydrogen bonds with each other, resulting in relatively low boiling points compared with those of the analogous alcohols. However, the differences in the boiling points of the ethers and their isomeric alcohols become smaller as the carbon chains become longer, as the hydrophobic nature of the carbon chain becomes more predominant over the presence of hydrogen bonding.
Ethers are slightly polar, as the C-O-C bond angle in the functional group is about 110 degrees and the C-O dipoles do not cancel out. Ethers are more polar than alkenes but not as polar as alcohols, esters or amides of comparable structure. However, the presence of two lone pairs of electrons on the oxygen atom makes hydrogen bonding with water molecules possible, causing the solubility of alcohols (for instance, butan-1-ol) and ethers (ethoxyethane) to be quite similar.
Cyclic ethers such as tetrahydrofuran and 1,4-dioxane are totally miscible in water because of the more exposed oxygen atom for hydrogen bonding as compared to aliphatic ethers.
Ethers can act as Lewis bases. For instance, diethyl ether forms a complex with boron compounds, such as boron trifluoride diethyl etherate (BF3.OEt2). Ethers also coordinate to magnesium in Grignard reagents (RMgBr).
In the IUPAC nomenclature system, ethers are named using the general formula "alkoxyalkane", for example CH3-O-CH2CH3 is methoxyethane. If the ether is part of a more complex molecule, it is described as an alkoxy substituent, so -OCH3 would be considered a "methoxy-" group. The simpler alkyl radical is written in front, so in CH3-O-CH2CH3 the methyl group (as methoxy, CH3-O-) is named before the ethane chain. The nomenclature of describing the two alkyl groups and appending "ether", e.g. "ethyl methyl ether" in the example above, is a trivial usage.
Ethers are not to be confused with the following classes of compounds with the same general structure R-O-R.
- Aromatic compounds like furan where the oxygen is part of the aromatic system.
- Compounds where one of the carbon atoms next to the oxygen is connected to oxygen, nitrogen, or sulfur:
Primary, secondary, and tertiary ethers
The terms "primary ether", "secondary ether", and "tertiary ether" are occasionally used and refer to the carbon atom next to the ether oxygen. In a primary ether this carbon is connected to only one other carbon, as in diethyl ether. An example of a secondary ether is diisopropyl ether, and that of a tertiary ether is di-tert-butyl ether.
Dimethyl ether, a primary, a secondary, and a tertiary ether.
Polyethers are compounds with more than one ether group. While the term generally refers to polymers like polyethylene glycol and polypropylene glycol, low-molecular-weight compounds such as the crown ethers may sometimes be included.
Ethers can be prepared in the laboratory in several different ways.
- R-OH + R-OH → R-O-R + H2O
- This direct reaction requires drastic conditions (heating to 140 degrees Celsius and an acid catalyst, usually concentrated sulfuric acid). Effective for making symmetrical ethers, but not as useful for synthesising asymmetrical ethers because the reaction will yield a mixture of ethers, making it usually not applicable:
- 3R-OH + 3R'-OH → R-O-R + R'-O-R + R'-O-R' + 3H2O
- Conditions must also be controlled to avoid overheating to 170 degrees, which will cause intramolecular dehydration, a reaction that yields alkenes. In addition, the alcohol must be in excess.
- R-CH2-CH2(OH) → R-CH=CH2 + H2O
- Such conditions can destroy the delicate structures of some functional groups. There exist several milder methods to produce ethers.
- R-O- + R-X → R-O-R + X-
- This reaction is called the Williamson ether synthesis. It involves treatment of a parent alcohol with a strong base to form the alkoxide anion followed by addition of an appropriate aliphatic compound bearing a suitable leaving group (R-X). Suitable leaving groups (X) include iodide, bromide, or sulfonates. This method does not work if R is aromatic like in bromobenzene (Br-C6H5), however, if the leaving group is separated by at least one carbon from the benzene, the reaction should proceed (as in Br-CH2-C6H5). Likewise, this method only gives the best yields for primary carbons, as secondary and tertiary carbons will undergo E2 elimination on exposure to the basic alkoxide anion used in the reaction due to steric hindrance from the large alkyl groups. Aryl ethers can be prepared in the Ullmann condensation.
- Nucleophilic displacement of alkyl halides by phenoxides
- An alkyl halide (R-X) cannot react directly with the alcohol itself. However, phenols can be used in place of the alcohol, while the alkyl halide is retained. Since phenols are acidic, they readily react with a strong base like sodium hydroxide to form phenoxide ions. The phenoxide ion then substitutes the -X group in the alkyl halide, forming an ether with an aryl group attached to it in a reaction with an SN2 mechanism.
- HO-C6H5 + OH- → O--C6H5 + H2O
- O--C6H5 + R-X → R-O-C6H5 + X-
- R2C=CR2 + R-OH → R2CH-CR2-O-R
- Acid catalysis is required for this reaction. Often, mercury trifluoroacetate (Hg(OCOCF3)2) is used as a catalyst, giving an ether with Markovnikov regiochemistry. Tetrahydropyranyl ethers are used as protective groups for alcohols.
Epoxides, which are cyclic ethers with three-membered rings, can be prepared:
- By the oxidation of alkenes with a peroxyacid such as m-CPBA.
- By base-mediated intramolecular nucleophilic substitution of a halohydrin.
Ethers in general are of very low chemical reactivity. Their characteristic organic reactions are:
- Ethers are hydrolyzed only under drastic conditions, such as heating with boron tribromide or boiling in hydrobromic acid. Mineral acids containing a halogen, such as hydrochloric acid, will cleave ethers, but only very slowly. Hydrobromic acid and hydroiodic acid are the only two that do so at an appreciable rate. Certain aryl ethers can be cleaved by aluminium chloride.
- Epoxides, or cyclic ethers in three-membered rings, are highly susceptible to nucleophilic attack and are reactive in this fashion.
- Primary and secondary ethers with a CH group next to the ether oxygen easily form highly explosive organic peroxides (e.g. diethyl ether peroxide) in the presence of oxygen, light, and metal and aldehyde impurities. For this reason ethers like diethyl ether and THF are usually avoided as solvents in industrial processes.
Sedimentary rock covers 70% of the Earth. Erosion is constantly changing the face of the Earth. Weathering agents (wind, water, and ice) break rock into smaller pieces that flow down waterways until they settle to the bottom permanently. These sediments (pebbles, sand, clay, and gravel) pile up and form new layers. After hundreds or thousands of years these layers become pressed together to form sedimentary rock.
Sedimentary rock can form in two different ways. When layer after layer of sediment builds up, it puts pressure on the lower layers, which then form into a solid piece of rock. The other way is called cementing: certain minerals in the water interact to form a bond between rocks, a process similar to making modern cement. Any animal carcasses or organisms that are caught in the layers of sediment will eventually turn into fossils. Sedimentary rock is the source of quite a few of our dinosaur findings.
There are four common types of sedimentary rock: sandstone, limestone, shale, and conglomerate. Each is formed in a different way from different materials. Sandstone is formed when grains of sand are pressed together. Sandstone may be the most common type of rock on the planet. Limestone is formed by the tiny pieces of shell that have been cemented together over the years. Conglomerate rock consists of sand and pebbles that have been cemented together. Shale forms under still waters like those found in bogs or swamps. The mud and clay at the bottom is pressed together to form it.
Sedimentary rock has the following general characteristics:
- it is classified by texture and composition
- it often contains fossils
- occasionally reacts with acid
- has layers that can be flat or curved
- it is usually composed of material that is cemented or pressed together
- a great variety of color
- particle size varies
- there are pores between pieces
- can have cross bedding, worm holes, mud cracks, and raindrop impressions
This is only meant to be a brief introduction to sedimentary rock. There are many more in-depth articles and entire books that have been written on the subject. Here is a link to a very interesting introduction to rocks. Here on Universe Today there is a great article on how sedimentary rocks show very old signs of life. Astronomy Cast has a good episode on the Earth's formation.
The getpass module provides two functions:
Prompt the user for a password without echoing. The user is prompted using the string prompt, which defaults to 'Password: '. On Unix, the prompt is written to the file-like object stream. stream defaults to the controlling terminal (/dev/tty) or, if that is unavailable, to sys.stderr (this argument is ignored on Windows).
If echo-free input is unavailable, getpass() falls back to printing a warning message to stream, reading from sys.stdin, and issuing a GetPassWarning.
Availability: Macintosh, Unix, Windows.
If you call getpass from within IDLE, the input may be done in the terminal you launched IDLE from rather than the IDLE window itself.
Return the “login name” of the user. Availability: Unix, Windows.
This function checks the environment variables LOGNAME, USER, LNAME and USERNAME, in order, and returns the value of the first one which is set to a non-empty string. If none are set, the login name from the password database is returned on systems which support the pwd module, otherwise, an exception is raised.
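A minimal usage sketch of both functions; in a non-interactive session the getpass() fallback path may raise (e.g. EOFError on a closed stdin), so the sketch guards it:

```python
import getpass

# getuser() consults LOGNAME, USER, LNAME and USERNAME in order,
# then falls back to the pwd database where available.
print("login name:", getpass.getuser())

# getpass() prompts without echoing; guard the non-interactive fallback.
try:
    password = getpass.getpass(prompt="Password: ")
except Exception:
    password = None
```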
The short answer: it could take a while!
Temperatures will climb into the mid 40s over the weekend and there should be plenty of sunshine. Without a doubt, that will help. However, with so much snow packed in, it won't all melt immediately.
Two primary factors control the melting of snow, according to researchers at Shuswap Lake in Canada:
- Air temperature
- Intensity of the sun
Secondary factors are:
- Wind, wind speed and wind temperature
- Rain, rain water temperature and quantity
- Heat absorption properties of the ground (i.e. rocks, vegetation, loose soil)
- Angle of the sun in relation to the snow surface
- Snow density and consistency
Snow on the ground melts from top to bottom. Heat converts the snow particles into water and gravity pulls the water to the ground. Ignoring topics like energy and the temperature of the converted water, the process is as follows.
The top layer of the snow pack absorbs the heat energy, which causes the snow crystals to break down. At first, surrounding snow crystals are able to bind the fine water drops. As the drops grow, the gravitational force gets stronger than the adhesive force and the drops start to flow to the ground. This process is actually far more complicated, but that is not of importance here. The warmer water drops cause some pre-melting in the upper layers but eventually leak through the snow with no major effect on its consistency. The air temperature should be at least 5 degrees Celsius to initiate that process. Heat absorption from direct sunlight is usually much larger than from the surrounding air.
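A common first-order way to turn the primary factor (air temperature) into a melt estimate is the degree-day method. The sketch below is illustrative, not from the article; the melt factor is an assumed round number, and real values vary with sun angle, wind, and snow density:

```python
def degree_day_melt(mean_temp_c, melt_factor=3.0, base_temp_c=0.0):
    """Estimate daily snowmelt in mm of water equivalent.

    melt_factor: assumed mm of melt per degree-day above base_temp_c.
    """
    return melt_factor * max(mean_temp_c - base_temp_c, 0.0)

# A week of daily mean temperatures in the mid 40s F (roughly 4-8 C):
week = [6.5, 7.0, 7.5, 5.0, 4.0, 7.0, 8.0]
total = sum(degree_day_melt(t) for t in week)
print(f"estimated melt: {total:.0f} mm water equivalent")
```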
Estimates of streambed water flux are needed for the interpretation of streambed chemistry and reactions. Continuous temperature and head monitoring in stream reaches within four agricultural watersheds (Leary Weber Ditch, IN; Maple Creek, NE; DR2 Drain, WA; and Merced River, CA) allowed heat to be used as a tracer to study the temporal and spatial variability of fluxes through the streambed. Synoptic methods (seepage meter and differential discharge measurements) were compared with estimates obtained by using heat as a tracer. Water flux was estimated by modeling one-dimensional vertical flow of water and heat using the model VS2DH. Flux was influenced by physical heterogeneity of the stream channel and temporal variability in stream and ground-water levels. During most of the study period (April–December 2004), flux was upward through the streambeds. At the IN, NE, and CA sites, high-stage events resulted in rapid reversal of flow direction inducing short-term surface-water flow into the streambed. During late summer at the IN site, regional ground-water levels dropped, leading to surface-water loss to ground water that resulted in drying of the ditch. Synoptic measurements of flux generally supported the model flux estimates. Water flow through the streambed was roughly an order of magnitude larger in the humid basins (IN and NE) than in the arid basins (WA and CA). Downward flux, in response to sudden high streamflows, and seasonal variability in flux were most pronounced in the humid basins and in high conductivity zones in the streambed.
12.1.4 Writer Implementations
Three implementations of the writer object interface are provided as
examples by this module. Most applications will need to derive new
writer classes from the NullWriter class.
- class NullWriter()
A writer which only provides the interface definition; no actions are
taken on any methods. This should be the base class for all writers
which do not need to inherit any implementation methods.
- class AbstractWriter()
A writer which can be used in debugging formatters, but not much
else. Each method simply announces itself by printing its name and
arguments on standard output.
- class DumbWriter([file[, maxcol]])
Simple writer class which writes output on the file object passed in
as file or, if file is omitted, on standard output. The
output is simply word-wrapped to the number of columns specified by
maxcol. This class is suitable for reflowing a sequence of paragraphs.
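The recommended pattern of deriving a writer from NullWriter can be sketched without importing the long-deprecated formatter module itself; the stub base class below merely mirrors part of the documented interface for illustration:

```python
class NullWriter:
    """Stub mirroring formatter.NullWriter: every method is a no-op."""
    def new_font(self, font): pass
    def new_margin(self, margin, level): pass
    def send_line_break(self): pass
    def send_paragraph(self, blankline): pass
    def send_flowing_data(self, data): pass
    def send_literal_data(self, data): pass

class CountingWriter(NullWriter):
    """Derived writer that only counts characters of flowing data."""
    def __init__(self):
        self.chars = 0
    def send_flowing_data(self, data):
        self.chars += len(data)

w = CountingWriter()
w.send_flowing_data("Hello, ")
w.send_flowing_data("world")
w.send_line_break()          # inherited no-op
print(w.chars)               # → 12
```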
Low Earth orbit
A low Earth orbit (LEO) is generally defined as an orbit below an altitude of approximately 2,000 kilometers (1,200 mi). Given the rapid orbital decay of objects below approximately 200 kilometers (120 mi), the commonly accepted definition for LEO is between 160 kilometers (99 mi) (with a period of about 88 minutes) and 2,000 kilometers (1,200 mi) (with a period of about 127 minutes) above the Earth's surface. With the exception of the lunar flights of the Apollo program, all human spaceflights have taken place in LEO (or were suborbital). The altitude record for a human spaceflight in LEO was Gemini 11 with an apogee of 1,374.1 kilometers (853.8 mi). All manned space stations to date, as well as the majority of artificial satellites, have been in LEO.
Orbital characteristics
Objects in LEO encounter atmospheric drag in the form of gases in the thermosphere (approximately 80–500 km up) or exosphere (approximately 500 km and up), depending on orbit height. LEO is an orbit around Earth above the atmosphere and below the inner Van Allen radiation belt. Altitudes of less than about 300 km are usually impractical because of the larger atmospheric drag.
Equatorial low Earth orbits (ELEO) are a subset of LEO. These orbits, with low inclination to the Equator, allow rapid revisit times and have the lowest delta-v requirement of any orbit. Orbits with a high inclination angle are usually called polar orbits.
Higher orbits include medium Earth orbit (MEO), sometimes called intermediate circular orbit (ICO), and further above, geostationary orbit (GEO). Orbits higher than low orbit can lead to early failure of electronic components due to intense radiation and charge accumulation.
Human use
While a majority of artificial satellites are placed in LEO, making one complete revolution around the Earth in about 90 minutes, many communication satellites require geostationary orbits, and move at the same angular velocity as the Earth. Since it requires less energy to place a satellite into a LEO and the LEO satellite needs less powerful amplifiers for successful transmission, LEO is still used for many communication applications. Because these LEO orbits are not geostationary, a network (or "constellation") of satellites is required to provide continuous coverage. Lower orbits also aid remote sensing satellites because of the added detail that can be gained. Remote sensing satellites can also take advantage of sun-synchronous LEO orbits at an altitude of about 800 km (500 mi) and near polar inclination. ENVISAT is one example of an Earth observation satellite that makes use of this particular type of LEO.
Although the Earth's pull due to gravity in LEO is not much less than on the surface of the Earth, people and objects in orbit experience weightlessness because the acceleration of gravity is cancelled by the centrifugal acceleration induced by the orbital speed.
The speed needed to achieve a stable low Earth orbit is about 7.8 km/s, but it reduces with increased orbital altitude. The delta-v needed to achieve low Earth orbit starts around 9.4 km/s: atmospheric and gravity drag associated with launch typically add 1.5–2.0 km/s to the launch-vehicle delta-v required to reach the normal LEO orbital velocity of around 7.8 km/s (28,080 km/h). The drag here is low enough that it could theoretically be overcome by radiation pressure on solar sails, a proposed propulsion system for interplanetary travel.
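These figures follow from the relations V²R = GM and 4π²R³ = T²GM given in the notes. A quick numerical check (Earth's mean radius and gravitational parameter below are standard values):

```python
import math

GM = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6     # Earth's mean radius, m

def orbit(alt_km):
    """Return (speed in km/s, period in minutes) for a circular orbit."""
    r = R_EARTH + alt_km * 1e3
    v = math.sqrt(GM / r)          # from V^2 R = GM
    t = 2 * math.pi * r / v        # equivalently 4 pi^2 R^3 = T^2 GM
    return v / 1e3, t / 60

# Matches the article: ~88 min at 160 km and ~127 min at 2000 km.
for alt in (160, 400, 2000):
    v, t = orbit(alt)
    print(f"{alt:>5} km: {v:.2f} km/s, {t:.0f} min")
```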
A low Earth orbit is the simplest and most cost-effective option for satellite placement, and it provides high bandwidth and low latency (the time delay between sending and receiving).
Space debris
The LEO environment is becoming congested with space debris. This has caused growing concern in recent years, since collisions at orbital velocities can be damaging or even dangerous. They can, of course, produce even more space debris in the process, something known as the Kessler Syndrome. The Joint Space Operations Center, part of United States Strategic Command (formerly the United States Space Command), currently tracks more than 8,500 objects larger than 10 cm in LEO. However, a limited Arecibo Observatory study suggested there could be approximately one million objects larger than 2 millimeters, which are too small to be visible from Earth.
Earth-monitoring satellites use LEO because, being closer, they can see the surface of the Earth more clearly. They are also able to traverse the surface of the Earth.
Communications satellites - some communications satellites including the Iridium phone system use LEO.
See also
- List of orbits
- Escape velocity
- Medium Earth orbit (MEO)
- High Earth orbit (HEO)
- Highly Elliptical Orbit (HEO)
- Specific orbital energy examples
- International Space Station
- Atmospheric reentry
- Satellite phone
- Suborbital spaceflight
- Orbital periods and speeds are calculated using the relations 4π²R³ = T²GM and V²R = GM, where R = radius of orbit in metres, T = orbital period in seconds, V = orbital speed in m/s, G = gravitational constant ≈ 6.673×10−11 Nm²/kg², M = mass of Earth ≈ 5.98×1024 kg.
- Approximately 8.6 times when the moon is nearest (363 104 km ÷ 42 164 km) to 9.6 times when the moon is farthest (405 696 km ÷ 42 164 km).
- "IADC Space Debris Mitigation Guidelines" (PDF). Inter-Agency Space Debris Coordination Committee. 15 October 2002.
- "NASA Safety Standard 1740.14, Guidelines and Assessment Procedures for Limiting Orbital Debris" (PDF). Office of Safety and Mission Assurance. 1 August 1995.
- "Higher Altitude Improves Station's Fuel Economy". NASA. Retrieved 2013-02-12.
- Fact Sheet: Joint Space Operations Center
- archive of astronomy: space junk
- ISS laser broom, project Orion
common name: goatweed butterfly, goatweed emperor, goatweed leafwing
scientific name: Anaea andria Scudder (Insecta: Lepidoptera: Nymphalidae: Charaxinae)
The goatweed butterfly is an attractive, fascinating and widespread species that is not often observed by the general public because of its cryptic coloration and somewhat spotty distribution within its range. Both larvae and adults are cryptically colored. Adults play dead when handled. This species provides dramatic examples of adaptive coloration and behavior to escape predators in both the larval and adult stages.
Figure 1. Summer form of adult female goatweed butterfly, Anaea andria Scudder. Photograph by Jerry F. Butler, University of Florida.
The goatweed butterfly is widely distributed throughout the southern Midwest and South ranging from West Virginia to Kansas and south to Texas and Central Florida.
The wingspread of goatweed butterflies is 6.0 to 7.6 cm, with males being slightly smaller than females. The upper surfaces of the wings of adult goatweed butterflies exhibit sexual dimorphism in both shape and color. The wings of males are more or less uniformly orange brown with a dark margin. The wings of females have an irregular lighter submarginal band with broad darker margins. The apex of the forewing is hooked (falcate) and each hind wing bears a short, pointed, backward-projecting tail. Both sexes exhibit marked seasonal dimorphism in wing shape. In the summer forms, the forewing apex is less hooked and the hindwing tail is shorter than in the winter form. They also exhibit seasonal color dimorphism. Summer males are slightly less orange with a narrower marginal band. Summer females are lighter in color than winter females. The undersides of the wings mimic dead leaves and are similar in both sexes.
Both the appearance of the adult seasonal forms and reproductive diapause in the winter forms are controlled by responses of the larvae to photoperiod (daylength). Larvae exposed to short photoperiods during late summer and early fall produce winter form adults that are in reproductive diapause.
Figure 2. Summer form of male goatweed butterfly, Anaea andria Scudder. Photograph by Jerry F. Butler, University of Florida.
Figure 3. Resting goatweed butterfly, Anaea andria Scudder. Photograph by Jerry F. Butler, University of Florida.
Eggs are spherical and greenish-cream in color. Full-grown larvae are approximately 3.8 cm in length and are grey-green with many minute tubercles covering both the head and body. The head also has a small number of larger orange tubercles. The color and tuberculation of the larvae match the surface texture and appearance of twigs of some common host plants. Pupae are light green with darker green lines simulating a leaf-like texture. There is a small heavily sclerotized black anal ring just below (anterior to) the cremaster.
Figure 4. Egg of goatweed butterfly, Anaea andria Scudder. Photograph by Jerry F. Butler, University of Florida.
Figure 5. Full grown caterpillar of goatweed butterfly, Anaea andria Scudder. Photograph by Jerry F. Butler, University of Florida.
Figure 6. Pupa of goatweed butterfly, Anaea andria Scudder. Photograph by Jerry F. Butler, University of Florida.
The goatweed butterfly has two flights per year in the North, with possibly three or four flights in parts of the South. Their flight is swift and erratic. Overwintering adults mate in the spring. Males wait for females in clearings or on ridge tops. Adults feed on sap flows, decaying fruits, and dung. Larval hosts for the goatweed butterfly are various species of plants in the genus Croton (Euphorbiaceae). A commonly used host-plant species in central Florida is silver croton, Croton argyranthemus Michx., a common inhabitant of longleaf pine (Pinus palustris Mill.) high pine communities. Goatweed butterflies are also found in other habitats, including open wooded areas, swamps, prairie groves and along streams.
First and second instar larvae eat the leaf blade away from the midrib and rest at the tip. They attach fecal pellets with silk to their backs and to the base of the leaf midrib - probably to repel ants and other predators. Older larvae fold and silk the sides of leaves together and hide inside with their heavily sclerotized heads blocking the entrance to the leaf roll.
Figure 7. Silver croton, Croton argyranthemus Michx., host for goatweed butterfly, Anaea andria Scudder. Photograph by Donald W. Hall, University of Florida.
Figure 8. Second instar larva of goatweed butterfly, Anaea andria Scudder, resting at tip of leaf midrib. Photograph by Jerry F. Butler, University of Florida.
Figure 9. Goatweed butterfly larvae (one on stem, one in leaf roll), Anaea andria Scudder. Photograph by Jerry F. Butler, University of Florida.
The pine sandhill and scrub habitats that support silver croton are rapidly diminishing in Florida because of development. It is expected that goatweed butterfly populations will continue to decline locally as a result of this urban encroachment.
- Daniels JC. 2000. Butterflies 1: Butterflies of the Southeast. UF/IFAS. Card Set. SP 273
- Harris L Jr. 1972. Butterflies of Georgia. University of Oklahoma Press. Norman, OK.
- Heitzman JR, Heitzman JE. 1987. Butterflies and Moths of Missouri. Missouri Department of Conservation. Jefferson City, MO.
- Iftner DC, Shuey JA, Calhoun JV. 1992. Butterflies and Skippers of Ohio. Ohio Biological Survey Bulletin New Series Vol. 9 No. 1.
- Medley JC, Fasulo TR. (2002). Florida Butterfly Tutorials. University of Florida/IFAS. CD-ROM. SW 155.
- Miller JY. 1992. The Common Names of North American Butterflies. Smithsonian Institution Press. Washington, D.C.
- Oppler PA, Krizek GO. 1984. Butterflies East of the Great Plains. The Johns Hopkins University Press. Baltimore, MD.
- Riley TJ. 1988. Effect of larval photoperiod on incidence of adult seasonal forms in Anaea andria (Lepidoptera: Nymphalidae). Journal of the Kansas Entomological Society 61: 224-227.
- Riley TJ. 1988. Effect of larval photoperiod on mating and reproductive diapause in seasonal forms of Anaea andria (Nymphalidae). Journal of the Lepidopterists' Society 42: 263-268.
- Scott JA. 1986. The Butterflies of North America. Stanford University Press. Stanford, CA.
- Schull EM. 1987. The Butterflies of Indiana. Indiana Academy of Science. Indianapolis, IN.
<language> (UML) A non-proprietary, third generation modelling language. The Unified Modeling Language is an open method used to specify, visualise, construct and document the artifacts of an object-oriented software-intensive system under development. The UML represents a compilation of "best engineering practices" which have proven successful in modelling large, complex systems.
UML succeeds the concepts of Booch, OMT and OOSE by fusing them into a single, common and widely usable modelling language. UML aims to be a standard modelling language which can model concurrent and distributed systems.
UML is not an industry standard, but is taking shape under the auspices of the Object Management Group (OMG). OMG has called for information on object-oriented methodologies that might create a rigorous software modelling language. Many industry leaders have responded in earnest to help create the standard.
See also: STP, IDE.
OMG UML Home.
Rational UML Resource Center.
The two posts on changing the seasons (here and here) resulted in a lot of interesting information in the comments and it seems like there is quite a geographical variation in how the seasons are demarcated, with the US possibly being an outlier in using the solstices.
Reader ahcuah is a kindred soul and has kindly sent me the data he collected over a full year of the daytime high and low temperatures. He lives fairly close to Cleveland and so the data is similar to what I would have gotten. In general, the shape of the graph and the locations of peaks and the valleys should be the same over the entire northern hemisphere (I think) and inverted for the southern, so the pattern he gets is of far greater general utility than just for his location.
The recorded temperatures are only to the nearest whole number, and this causes a problem for figuring out what the coldest or warmest days are, since there are many days around those two points that have the same temperature. For example, the coldest daytime high stays at 34°F from January 14 to January 24. One could make a reasonable estimate that the coldest day is at the midpoint, which would be January 19th, but ahcuah knows that there is a better way to check this.
He did a Fourier analysis of the entire set of data and found that it converged pretty rapidly, with six terms in the series being sufficient. Using these Fourier components, he was able to recalculate the daily temperatures to greater precision and found that the coldest day is actually January 19, agreeing with the rough estimate. The coldest recorded nighttime low stays at 19°F from January 10 through February 4, with January 22/23 being the midpoint. The Fourier analysis says that the coldest night is January 22, again agreeing with the rough estimate.
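The same trick is easy to sketch with NumPy: fit a truncated Fourier series to a year of daily highs by least squares, then read the minimum off the smooth curve. The data below are synthetic stand-ins (built with a mid-January minimum plus noise), not ahcuah's measurements:

```python
import numpy as np

days = np.arange(365)
rng = np.random.default_rng(0)
# Synthetic daily highs: annual cycle with its minimum at day 19 (Jan 19).
temps = 57 - 23 * np.cos(2 * np.pi * (days - 19) / 365) + rng.normal(0, 2, 365)

# Design matrix: constant term plus six harmonics (cos/sin pairs).
cols = [np.ones_like(days, dtype=float)]
for k in range(1, 7):
    cols.append(np.cos(2 * np.pi * k * days / 365))
    cols.append(np.sin(2 * np.pi * k * days / 365))
A = np.column_stack(cols)

coeffs, *_ = np.linalg.lstsq(A, temps, rcond=None)
smooth = A @ coeffs
print("coldest day of year:", int(np.argmin(smooth)))  # near day 19
```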
While we can understand why the coldest days occur in mid-January, a month after the shortest amount of daylight hours given by the winter solstice (the reason being that it takes some time for the Earth to cool down), no obvious explanation comes to my mind as to why the nighttime low lags behind the coldest daytime high by three days.
Similarly, the hottest recorded daytime high stays at 84°F from July 16 through July 27 and the Fourier analysis pins the hottest day at July 19. The highest recorded nighttime low stays at 64°F from July 20 to July 28, with the Fourier analysis giving the peak on July 24, lagging by five days. The hottest days occur about a month after the longest day at the summer solstice, again due to the time lag for the Earth to heat up.
In addition to the interesting puzzle of why there is a three-day lag between the daytime and nighttime peaks, there is also the issue of why the nighttime peak in winter is broader (26 days) than the daytime one (11 days) but in summer is narrower (9 days vs. 12 days).
The quantities Rm, Ra, Cm, Vm, etc. that appear in the diagram and equation are given in ohms, farads, or volts, and will depend on the size of the compartment. In order to specify parameters that are independent of the compartment dimensions, specific units are used. For a cylindrical compartment, the membrane resistance is inversely proportional to the area of the cylinder, so we define a specific membrane resistance RM, which has units of ohms·m².
The membrane capacitance is proportional to the area, so it is expressed in terms of a specific membrane capacitance CM, with units of farads/m². Compartments are connected to each other through their axial resistances Ra. The axial resistance of a cylindrical compartment is proportional to its length and inversely proportional to its cross-sectional area. Therefore, we define the specific axial resistance RA to have units of ohms·m.
For a piece of dendrite or a compartment of length l and diameter d, the membrane area is π l d and the cross-sectional area is π d²/4, so we then have

Rm = RM / (π l d),   Cm = CM · π l d,   Ra = 4 RA l / (π d²).
WARNING: Many treatments of the passive properties of neural tissue use the symbols Rm, Ra, and Cm for the specific resistances and capacitance, instead of this notation with RM, RA, and CM. Also, many textbooks and journal papers define the resistance and capacitance in terms of that for a unit length of cable having a specified diameter.
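The conversions from specific quantities to compartment quantities can be wrapped in a small helper. The proportionalities (membrane resistance inversely proportional to the lateral area, capacitance proportional to it, axial resistance proportional to length over cross-section) come from the text; the numerical parameter values below are assumed, textbook-style examples:

```python
import math

def compartment(RM, CM, RA, l, d):
    """Compartment Rm, Cm, Ra from specific constants (SI units).

    RM: specific membrane resistance, ohm * m^2
    CM: specific membrane capacitance, F / m^2
    RA: specific axial resistance, ohm * m
    l, d: compartment length and diameter, m
    """
    area = math.pi * l * d           # lateral surface of the cylinder
    cross = math.pi * d * d / 4.0    # cross-sectional area
    Rm = RM / area                   # inversely proportional to area
    Cm = CM * area                   # proportional to area
    Ra = RA * l / cross              # proportional to l / cross-section
    return Rm, Cm, Ra

# Assumed example: RM = 1 ohm*m^2, CM = 0.01 F/m^2, RA = 1 ohm*m,
# a 100 um long, 2 um diameter compartment.
Rm, Cm, Ra = compartment(1.0, 0.01, 1.0, 100e-6, 2e-6)
print(f"Rm = {Rm:.3g} ohm, Cm = {Cm:.3g} F, Ra = {Ra:.3g} ohm")
```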
Given that p is prime, when is 8p+1 square?
Let 8p + 1 = k². As the LHS is odd, k must be odd; let k = 2m + 1.
Therefore 8p + 1 = 4m² + 4m + 1, leading to 2p = m² + m = m(m + 1).
As m and m + 1 are consecutive integers, one of them must be even, so we can divide through by 2 and write the prime p as a product of two integers: either p = (m/2)(m + 1) or p = m(m + 1)/2. For the product to be prime, the smaller factor must equal 1, which happens only for m = 2 (m = 1 gives p = 1, which is not prime); that is, p = 3.
Hence 8p + 1 can only be square when p = 3.
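The conclusion is quick to verify by brute force (a simple sketch):

```python
def is_prime(n):
    if n < 2:
        return False
    f = 2
    while f * f <= n:
        if n % f == 0:
            return False
        f += 1
    return True

def is_square(n):
    r = int(n ** 0.5)
    return r * r == n

# Primes p below 10000 for which 8p + 1 is a perfect square:
hits = [p for p in range(2, 10000) if is_prime(p) and is_square(8 * p + 1)]
print(hits)  # → [3]
```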
Triangle Search: When is 8p+1 a triangle number?
Invaders or endemics? Molecular phylogenetics, biogeography and systematics of Dreissena in the Balkans
Article first published online: 5 JUN 2007
Volume 52, Issue 8, pages 1525–1536, August 2007
How to Cite
ALBRECHT, C., SCHULTHEIß, R., KEVREKIDIS, T., STREIT, B. and WILKE, T. (2007), Invaders or endemics? Molecular phylogenetics, biogeography and systematics of Dreissena in the Balkans. Freshwater Biology, 52: 1525–1536. doi: 10.1111/j.1365-2427.2007.01784.x
- Issue published online: 5 JUN 2007
- (Manuscript accepted 28 March 2007)
- ancient lakes;
- Balkan Peninsula;
- invasive species
1. Zebra mussels and their relatives (Dreissena spp.) have been well studied in eastern, central and western Europe as well as in North America, because of their invasiveness and economic importance. Much less is known about the biology and biogeography of indigenous (endemic) taxa of Dreissena in the Balkans. A better knowledge of these taxa could help us (i) understand the factors triggering invasiveness in some taxa and (ii) identify other potentially invasive species.
2. Using a phylogenetic approach (2108 base pairs from three gene fragments), Dreissena spp. from natural lakes in the Balkans were studied to test whether invasive Dreissena populations occur in such lakes on the Balkan Peninsula, whether Dreissena stankovici really is endemic to the ancient Lakes Ohrid and Prespa, and to infer the phylogenetic and biogeographical relationships of Balkan dreissenids.
3. No invasive species of Dreissena, such as Dreissena polymorpha, were recorded. The supposedly ‘endemic’D. stankovici is not restricted to the ancient Lakes Ohrid and Prespa, but is the most widespread and dominant species in the west-central Balkans. Its southern sister taxon, Dreissena blanci, occurs sympatrically with D. stankovici in Lakes Prespa, Mikri Prespa and Pamvotis. Both species are classified into the subgenus Dreissena (Carinodreissena) of which the subgenus Dreissena (Dreissena) (which includes the invasive D. polymorpha) is the sister taxon. Dreissena blanci and D. stankovici are considered to represent distinct species.
4. On a global scale, the two Balkan species have small ranges. An early Pliocene time frame for the divergence of the subgenera Carinodreissena and Dreissena is discussed, as well as potential colonization routes of the most recent common ancestor of Carinodreissena spp.
5. The ambiguous taxonomy of dreissenids in the Balkans is addressed. As nominal D. blanci presbensis from Lake Prespa has nomenclatural priority over D. stankovici, the correct name for the latter taxon should be Dreissena presbensis.
Basically, how do you find out what your worst or best case could be, and any other "edge" cases you might have, BEFORE hitting them? And how do you prepare your code for them?
migrated from stackoverflow.com May 1 '11 at 7:47
Based on the content of the algorithm you can identify what data structures/types/constructs are used. Then, you try to understand the (possible) weak points of those and try to come up with an execution plan that will make it run in those cases.
For example, the algorithm takes a string and an integer as input and does some sorting of the characters of the string.
Here we have:
String with some known special cases:
Integer with known special cases:
Sort algorithm that could fail in the following boundary cases:
Then, take all these cases and create a long list trying to understand how they overlap. Ex:
Now create test cases for them :)
Short summary: break the algorithm into basic blocks for which you know the boundary cases and then reassemble them, creating global boundary cases.
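The steps above can be sketched as code. The algorithm and its special-case lists below are hypothetical stand-ins (a routine taking a string and an integer, as in the example):

```python
from itertools import product

# Per-input special cases; these lists are illustrative guesses, not a
# complete catalogue.
string_cases = ["", "a", "a" * 10_000]       # empty, minimal, very long
int_cases = [0, 1, -1, 2**31 - 1, -2**31]    # zero, units, 32-bit extremes

# "Reassemble" the basic blocks: overlap every special case of one input
# with every special case of the other to get global boundary cases.
boundary_cases = list(product(string_cases, int_cases))
print(len(boundary_cases))  # 15 combinations to turn into test cases
```

Each pair then becomes one test case against the real routine.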
I don't think there is any algorithm to determine edge conditions... just experience.
Example: for a byte parameter you would want to test numbers like 0, 127, 128, 255, 256, -1, anything that can cause trouble.
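That suggestion can be turned directly into a check. The validator below is a hypothetical stand-in for whatever code actually consumes the byte parameter (assuming an unsigned byte, 0..255):

```python
def is_valid_byte(n):
    # an unsigned byte holds exactly the values 0..255
    return 0 <= n <= 255

# the trouble values suggested above
candidates = [0, 127, 128, 255, 256, -1]
rejected = [n for n in candidates if not is_valid_byte(n)]
print(rejected)  # [256, -1]
```

The in-range values (0, 127, 128, 255) exercise both edges; 256 and -1 probe just past them.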
An "edge" has two meanings, and both are relevant when it comes to edge cases. An edge is either an area where a small change in the input leads to a large change in the output, or the end of a range.
So, to identify the edge cases of an algorithm, I first look at the input domain. Its edge values could lead to edge cases of the algorithm.
Secondly, I look at the output domain, and look back at the input values that might create them. This is less commonly a problem with algorithms, but it helps find problems in algorithms that are designed to generate output which spans a given output domain. E.g. a random-number generator should be able to generate all intended output values.
Finally, I check the algorithm to see if there are input cases which are similar, yet lead to dissimilar outputs. Finding these edge cases is the hardest, because it involves both domains and a pair of inputs.
This is a very general question so all I can do is throw out some general, vague ideas :)
- Examine boundary cases. For example, if you're parsing a string, what happens if the string is empty or null? If you're counting from x to y, what happens at x and y?
Part of the skill of using algorithms is knowing their weaknesses and pathological cases. Victor's answer gives some good tips, but in general I would advise that you need to study the topic in more depth to get a feel for this; I don't think you can follow rules of thumb to answer this question fully. E.g. see Cormen, or Skiena (Skiena in particular has a very good section on where to use algorithms and what works well in certain cases; Cormen goes into more theory I think).
Before the Berlin Wall fell, before the Soviet
Union imploded, we feared mutually assured destruction. From the 1950s
until some 40 years later, the threat of nuclear war loomed large, inspiring
nightmares among children and adults alike. In fact, for years, being
a survivalist implied having a bomb shelter.
But with the fall of communism, that all changed. The United States
was the only superpower, and the threat of nuclear war diminished. But
the threat of a nuclear attack or accident did not.
While we may not have to fear thousands of nuclear warheads raining
down on our centers of population and industry, the threat of a "suitcase"
nuclear bomb carried into place by suicidal terrorists is more real
than ever. Our intelligence agencies tell us that Al Qaeda and other
terrorist organizations are looking to buy or build bombs, and the old
Soviet system has left thousands of trained scientists with no way to
earn a living. Countries like Iraq and North Korea have nuclear weapon
development programs. Even "friendly" countries like Pakistan
have nuclear arsenals that may, through a coup or even an election,
one day fall into control of hands that are not friendly to the U.S.
The threat of a suicide attack on a nuclear power plant is causing
folks to question their geographical location. And the possibility of
a rogue nation lobbing a few missiles at us has our president intent
on spending billions on a high-tech umbrella to keep the country safe.
The bad news is that we must again consider how to protect ourselves
from a nuclear disaster. The good news is that we can probably worry
less about blast shelters that protect us from the overpressure of a
20 megaton bomb and focus more on protecting ourselves from the fallout
caused by a smaller bomb, an attack on a nuclear plant, or a "dirty
bomb" that relies on conventional explosives to spread radiation.
Traditionally, people think of dying in the blast when a nuclear warhead
goes off, but there are other dangers, too. Don't get me wrong -- the
blast itself will certainly kill you if you are close to it. Death and
serious injury will also be caused by the thermal effects of the bomb,
which can give third degree burns six to eight miles away and first
degree burns to someone 10 to 12 miles away from a one megaton blast.
More death will be caused by the bomb's radiation and even more by the
high dose of radiation carried downwind as nuclear fallout.
To protect yourself from the radiation and fallout, you need a fallout
shelter. To protect yourself from the bomb's blast, you need a blast
shelter. Blast shelters are usually buried deeper than fallout shelters,
have hardened doors and blast valves, and are designed to withstand the
overpressure and negative pressure associated with a nuclear blast. If you
live at or near a place that could be ground zero because it is of strategic
importance, you are better off with a blast shelter. For most of us,
however, a fallout shelter will do.
Fallout shelters are designed to provide a secure location -- often
underground or at least partially underground -- where you can avoid
all or most of the radiation from fallout -- tiny irradiated particles
that rain down from the sky after a nuclear explosion. Fallout shelters
rely on earth, sand, cement, brick, cement block or other dense material
to block the radiation until it lessens through radioactive decay. A shelter
should, at the minimum, allow only 1/40th of the radiation to get through,
and designs that block all but 1/100th or 1/250th are superior.
The better the shielding, the safer you will be.
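As a rough sketch of how that shielding multiplies up: assuming fallout gamma radiation is halved by roughly every 3.6 inches of packed earth (a commonly cited rule of thumb, treated here as an assumption rather than a spec), the protection factor grows exponentially with thickness:

```python
import math

HALVING_INCHES = 3.6  # assumed halving thickness for fallout gamma in packed earth

def protection_factor(earth_inches):
    # each halving thickness cuts the radiation in two
    return 2 ** (earth_inches / HALVING_INCHES)

def inches_for_factor(pf):
    # thickness needed to reach a given protection factor
    return math.log2(pf) * HALVING_INCHES

print(round(inches_for_factor(40), 1))  # ~19.2 in of earth for the 1/40 minimum
print(round(protection_factor(48)))     # 4 ft of earth: a factor of roughly 10,000
```

which is why four feet of earth is so much better than the bare minimum.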
Four feet of earth is considered the minimum amount of shielding you
should have if you are in a hot fallout zone. More shielding is, of
course, better. You should expect to stay in the shelter a minimum of
two weeks and plan on sleeping in it for longer. Again, the longer you
are prepared to stay in the shelter, the safer you will be.
Although expedient shelters can be created by digging a deep trench
and covering it up with dirt (See Nuclear
War Survival Skills for expedient shelters) or by hiding in basements
and subway tunnels, true shelters are far superior because they are
designed to provide at least the bare minimum required to live there
the weeks or months required for the local radiation level to drop,
Mistletoes and butterflies
Mistletoes as food plants for Butterflies
The use of mistletoes as food plants by butterflies has generated a whole field of research which spans several continents. In Africa, for example, caterpillars of the genus Mylothris feed almost exclusively on plants of the order Santalales, and within the Santalales 65-80% of feeding records are on species of Loranthaceae. In Indonesia-Australia the situation is similar, with caterpillars of the genera Delias and Ogyris feeding on Santalales, and in Delias 77% of feeding records are on loranths. Other examples include the Catasticta group of genera, feeding on mistletoes in the neotropics, and the Hesperocharis group in South America.
It has been suggested that this predilection in butterflies for mistletoes as food plants is a derived state with multiple origins. It is thought that the original food plants may have been the host trees, and that perhaps through defoliation of the host, or egg-laying near mistletoe plants, there was a shift to the parasite as food plant. Given that mistletoe leaves may have higher nitrogen and mineral nutrient levels and fewer toxins than the host’s, natural selection may have favoured butterflies which preferentially lay their eggs on mistletoes.
Examples of butterflies from these genera and the food plants recorded for their caterpillar stages can be seen at the link below.
Visit another website on Australian caterpillars and mistletoes by Don Herbison-Evans and Stella Crossley.
The syntax looks like this:
BEGIN TRY
    -- Your code goes here
END TRY
BEGIN CATCH
    -- Your error condition code goes here
END CATCH
What happens when a TRY/CATCH block is implemented? Let's go into the details. Whatever code you write inside the try block will get executed by default. When an error occurs in the code inside the try block, execution stops and jumps to the catch block, which runs the code you had written for handling the error, just like the traditional method of error handling. One thing we need to notice here is that there are different error severity levels: 0 - 10 are warnings, 11 - 19 are trappable errors and 20 - 25 are terminal errors.
SQL Server 2005 also dished out some additional functions to make our job easier. Let's check out some of the important ones and what they return.
- ERROR_NUMBER() – Shows you the error number. You can find all error numbers and descriptions in the sysmessages table.
- ERROR_MESSAGE() – Shows the readable description of the error.
- ERROR_LINE() – Returns the line number from which the error was thrown.
- ERROR_SEVERITY() – Will show up error levels.
- ERROR_STATE() – Shows the state of the error; for example, 1 for system errors.
- ERROR_PROCEDURE() – Gives you the name of the procedure that caused the error.
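A minimal sketch pulling these functions together (assumes SQL Server 2005 or later; the divide by zero is just a convenient way to force an error):

```sql
BEGIN TRY
    SELECT 1 / 0;  -- forces a divide-by-zero error
END TRY
BEGIN CATCH
    SELECT ERROR_NUMBER()    AS ErrorNumber,
           ERROR_MESSAGE()   AS ErrorMessage,
           ERROR_LINE()      AS ErrorLine,
           ERROR_SEVERITY()  AS ErrorSeverity,
           ERROR_STATE()     AS ErrorState,
           ERROR_PROCEDURE() AS ErrorProcedure;  -- NULL outside a procedure
END CATCH
```

Running the batch returns one row describing the error instead of terminating the batch.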
With all these I hope error handling got much easier. Enjoy implementing these.
Perched on a cliff in San Diego, California, reminiscent of Mary Shelley’s novel in which a scientist, Dr. Frankenstein, creates artificial life, Dr. Gerald Joyce can be considered his modern day counterpart. Dr. Joyce is also trying to create life, but in a thimble-size test tube, rather than from human body parts.
Although these are bizarre things, scientists and sci-fi fans have pondered this possibility for ages. Some people even think we will someday encounter an alien from another world, but Dr. Joyce is close to creating an alien life in the test tube.
It’s debatable among scientists as to what constitutes life, but some may say Dr. Joyce already created a life form in 2007.
In 2007, Dr. Joyce and his assistant, a graduate student, Tracey Lincoln, who is now a researcher at the University of Massachusetts Medical School, created a "molecule" in a test tube that could reproduce and evolve by itself. According to Dr. Joyce, it "swapped jerry-built genes" in a test tube as long as it was fed the right "engineered ingredients".
This is astounding, and also humorous, in that Jerry is a nickname for Gerald, which is Dr. Joyce's first name. Perhaps, Jerry will be the name of the very first unanimously accepted artificial life form, if created by Dr. Joyce. After all, Dr. Frankenstein named his creation Frankenstein.
Fast Ball Physics
Is there a way to calculate the speed of my son's pitch with
a stop watch? He throws from 60 feet 6 inches.
Not likely unless you have a fast finger! Let's say your son throws a 60
mph fastball -- that's equal to
88 ft/sec. Since the distance from mound to plate is 60.5 ft, that is an
elapsed time of ~0.7 sec. It is estimated that humans have a reaction time
of ~0.1 sec., so the measurement would not be very accurate.
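A small sketch of why: propagate that ~0.1 sec reaction-time error through speed = distance / time for a true 60 mph pitch (numbers follow the answer above):

```python
DIST_FT = 60.5            # mound to plate
FPS_TO_MPH = 3600 / 5280  # 1 ft/sec is about 0.68 mph

def mph_from_time(seconds):
    # speed inferred from a stopwatch reading over the full 60.5 ft
    return DIST_FT / seconds * FPS_TO_MPH

true_t = 0.6875  # the ~0.7 sec flight time of a 60 mph pitch
print(round(mph_from_time(true_t)))        # 60
print(round(mph_from_time(true_t - 0.1)))  # 70  (watch stopped 0.1 s short)
print(round(mph_from_time(true_t + 0.1)))  # 52  (watch stopped 0.1 s long)
```

So hand timing turns a true 60 mph pitch into anything from roughly 52 to 70 mph.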
Update: June 2012
How does a fridge work?
Recently voted the most significant invention in the history of food and drink, the fridge has transformed our lives. We can keep food fresher for longer, medicines unspoilt and have that refreshing cool drink on hot days. But how does the fridge work?
The fridge has four main components - the inside space where stuff needs to be cooled, the coolant, the radiator and the motor.
The coolant is where most of the action takes place. The coolant starts off as a liquid, absorbs heat from the inside space of the fridge and turns into a gas. The gas then travels to the radiator at the back via the motor. The motor compresses the gas, turning it back into a liquid and causing the coolant to give off heat, which is transferred to the environment by the radiator. The coolant liquid then starts the cycle again.
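One consequence of the cycle worth noting: by conservation of energy, the heat the radiator dumps into the room equals the heat absorbed inside the fridge plus the work the motor puts in. A tiny illustration (the joule figures are made up):

```python
def radiator_heat(heat_absorbed_inside_j, compressor_work_j):
    # energy balance over one full coolant cycle
    return heat_absorbed_inside_j + compressor_work_j

print(radiator_heat(100.0, 40.0))  # 140.0 J given off at the back of the fridge
```

This is also why a fridge warms the room it stands in.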
The following links are external
23. January 2013Matter and Material User Experiments Research Using Synchrotron Light
Interview with Thomas Huthwelker
The Paul Scherrer Institut makes its research facilities available to scientists from all over the world. To ensure these scientists are exposed to optimal conditions when they arrive is the hard work of many PSI staff. An interview with one of these scientists provides a glimpse behind the scenes. This interview is taken from the latest issue of the PSI Magazine Fenster zur Forschung
17. October 2012Media Releases Biology User Experiments Research Using Synchrotron Light
Until recently, it was not obvious whether the earliest vertebrates (animals with a backbone) which had jawbones already possessed teeth or not. Now, an international research team has shown that the jaws of the prehistoric fish Compagopiscis already had teeth. This means that teeth appeared at the same evolutionary time as jaws – or at least shortly afterwards. The leaders of this project were scientists from the University of Bristol, England, who carried out their decisive experiments at the SLS at PSI.
16. October 2012Media Releases Research Using Synchrotron Light Environment User Experiments
Experiments performed at the Paul Scherrer Institute (PSI) investigate processes inside volcanic materials that determine whether a volcano will erupt violently or mildly. In the experiments, scientists heated small pieces of volcanic material similarly to conditions present at the beginning of a volcanic eruption. They used X-rays from the SLS to observe, in real time, what happens to the rock as it goes from the solid to the molten state.
16. February 2012Media Releases Biology User Experiments Research Using Synchrotron Light
Like a shredder, the immunoproteasome cuts down proteins into peptides that are subsequently presented on the cellular surface. The immune system can distinguish between self and non-self peptides and selectively kills cells that present non-self peptides at their surface. In autoimmune diseases, this mechanism is deregulated. However, inhibition of the immunoproteasome may alleviate disease symptoms and progression. With the help of measurements taken at the Paul Scherer Institute, scientists have now succeeded in determining the first structure of an immunoproteasome.
23. December 2011Media Releases Biology Research Using Synchrotron Light User Experiments
Single-celled organisms that lived more than half a billion years ago, and whose fossils were found in China, are probably the immediate precursors of the earliest animals. The amoeba-like single-celled organisms divided into two, four, eight and so on cells, just as animal (and human) embryos do today. The researchers believe that these organisms correspond to one of the first steps from single-celled to multicellular life in the evolution of true animals.
This news release is only available in German.
11. November 2011Media Releases Biology Medical Science Research Using Synchrotron Light User Experiments
Researchers at the University of Basel and the Paul Scherrer Institute have been able to show at the nanoscale how caries affects human teeth. Their study opens up new perspectives for treating tooth damage for which, today, the only remaining option is the drill. The research results were published in the journal «Nanomedicine».
This news release is only available in German.
15. September 2011Media Releases Research Using Neutrons Biology User Experiments
An international research team has now demonstrated in experiments at the Paul Scherrer Institute that the soil in the vicinity of roots contains more water than the soil further away. Apparently, plants create a small water reserve that helps to tide them over through short periods of drought. These results were obtained from experiments carried out with the benefit of neutron tomography.
18. August 2011Media Releases Biology Research Using Synchrotron Light User Experiments
Reorganisation of the brain and sense organs could be the key to the evolutionary success of vertebrates, one of the great puzzles in evolutionary biology, according to a paper by an international team of researchers, published today in Nature. The study claims to have solved this scientific riddle by studying the brain of a 400 million year old fossilized jawless fish – an evolutionary intermediate between the living jawless and jawed vertebrates.
18. January 2011Media Releases Biology User Experiments Research Using Synchrotron Light
Ribosomes are the protein factories of the living cell and themselves very complex biomolecules. Now, a French research group has for the first time determined the structure of the ribosome in a eukaryotic cell – a complex cell containing a cell nucleus. An important part of the experiments was performed with synchrotron light at the Swiss Light Source SLS of the Paul Scherrer Institute.
28. June 2010Media Releases Biology Research Using Synchrotron Light User Experiments
A central feature of any living organism is that food reacts with oxygen and, in the process, energy is released and made available for a variety of reactions within the organism. Using investigations performed at the Swiss Light Source, SLS, researchers have now been able to explain a crucial part of this process at a molecular level.
7. October 2009
Winner of Nobel Prize in Chemistry is long-term user of Swiss Light Source at the Paul Scherrer InstituteMedia Releases Research Using Synchrotron Light User Experiments Biology
The Paul Scherrer Institute congratulates Professor Venkatraman Ramakrishnan on the Nobel Prize in Chemistry. Ramakrishnan is a long-term user of the Swiss Light Source SLS at the Paul Scherrer Institut in Switzerland. He used this facility for his prize winning studies on the structure of the ribosome.
21. November 2007Biology User Experiments Research Using Synchrotron Light
High-resolution phase-contrast X-ray images of fossil seeds
The emergence of flowering plants is regarded as a major botanical mystery. In the 22nd November edition of the scientific magazine “Nature”, an international research team with participation from the Paul Scherrer Institute (PSI) publishes results that shed fresh light on this controversial question. New three-dimensional non-destructive imaging procedures have been used to carry out investigations into fossilised plant seeds. As a result, it has been possible to confirm an earlier scientific theory, which had previously been cast into doubt by molecular genetic analyses.
What is XPCE?
XPCE is a toolkit for developing graphical applications in Prolog and other interactive and dynamically typed languages. XPCE follows a rather unique approach to developing GUI applications, which we will try to summarise using the points below.
- Add object layer to Prolog
XPCE's kernel is an object-oriented engine that allows for the
definition of methods in multiple languages. The built-in graphics
are defined in C for speed as well as to define the
platform-independence layer. Applications, as well as some
application-oriented libraries, are defined as XPCE classes with
their methods defined in Prolog.
Prolog-defined methods can receive arguments in native Prolog data, native Prolog data may be associated with XPCE instance-variables and XPCE errors are (selectively) mapped to Prolog exceptions. These features make XPCE a natural extension to your Prolog program.
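As an illustration of defining an XPCE class in Prolog, here is a minimal sketch; the class, method and greeting are our own invented example (pce_begin_class/2 and the :-> method syntax are the standard XPCE class-definition constructs):

```prolog
:- pce_begin_class(greeting_dialog, dialog).

greet(D, Name:name) :->
    "Show a greeting for Name"::
    send(D, report, inform, 'Hello, %s!', Name).

:- pce_end_class.
```

A session might then run `?- new(D, greeting_dialog('Demo')), send(D, open), send(D, greet, world).` to open the dialog and display the greeting.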
- High level of abstraction
- XPCE's graphical layer provides a high abstraction level, hiding details on event-handling, redraw-management and layout management from the application programmer, while still providing access to the primitives to deal with exceptional cases.
- Exploit rapid Prolog development cycle
- Your XPCE classes are defined in Prolog and the methods run naturally in Prolog. This implies you can easily cross the border between your application and the GUI-code inside the tracer. It also implies you can modify source-code and recompile while your application is running.
- Platform independent programs
- XPCE/Prolog code is fully platform-independent, making it feasible to develop on your platform of choice and deliver on the platform of choice of your users. As SWI-Prolog saved-states are machine-independent, applications can be delivered as a saved-state. Such states can be executed transparently using the development-environment to facilitate debugging or the runtime emulator for better speed and space-efficiency.
Links about motivation and impressions
- Why using XPCE for graphics in Prolog?
- Why no GUI-Builder?
- Some code fragments to get an impression
- Some screen dumps of applications
- The design of the XPCE/Prolog interface (Publication in Workshop on Logic Programming Environments, 2002)
For starters as well as for more experienced users who want to know how particular tasks are tackled using XPCE/Prolog, there is the XPCE UserGuide. The manual is also available as an HTML tar-archive and can be viewed online.
The reference documentation is available using a hypertext system defined in XPCE/Prolog. This tool exploits the XPCE-class descriptions as well as associated hypertext cards to provide various viewpoints and search mechanisms for browsing the reference material. The manual tools are started using the Prolog command manpce/0:
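For completeness, the invocation looks like this at the Prolog top level:

```prolog
?- manpce.
```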
Finally, the development tools and libraries form a rich set of examples. Just browse through them and then use the Visual Hierarchy Tool to locate the relevant source-code.
On Unix installations, the manpages xpce.1 and xpce-client.1 provide documentation on the command-line options of these commands.
Convex hull, developed at the Research Imaging Center (RIC), uses algorithms to "warp" images of human brains and make them ready for scientific comparison. The task is much like fitting a grapefruit inside an orange. No two brains have the exact same size, shape and structure. Identifying the functions of different parts of the brain requires accurate cross-comparison between images taken from hundreds of people.
The outgrowth of work in the '80s by the center's director, Peter T. Fox, MD, convex hull has been honed into computer algorithms by medical physicist J. Hunter Downs III, PhD, an RIC instructor. The brain research community is debating the convex hull method as a possible standard in building an atlas of the brain's functions. Accepting a standard would speed the brain mapping project by letting researchers worldwide compare and share like data.
"I'm a toolmaker," said Dr. Downs, who studied computer science as an undergraduate at The University of Texas at San Antonio and completed his doctorate in medical physics in 1994 at the Health Science Center. "This is a tool designed so the neuroscientists can talk coherently about how the brain functions."
The convex hull concept also has industrial applications where points must be plotted on a curve. For example, engineers who design auto bodies use its principles.
Dr. Fox pioneered the idea for brain imaging in 1985 at Washington University when he took what is called the "bounding box" method a step further. The box describes the boundary drawn around any selected part of the brain image; the image inside then is enlarged or reduced for comparison with like images from other subjects. Dr. Fox began work that would expand the scope beyond warping, or "normalizing," length, width and height. He began exploring a way to normalize curvature as well.
Arriving at the Health Science Center, Dr. Fox described the concept to Dr. Downs and Jack L. Lancaster, PhD, a physicist, professor of radiology and head of the RIC's computer software development team. "I didn't know the mathematical term or the concept of convex hull, but Hunter and Jack sure did. They said, 'That's a convex hull.'"
Here is how it works:
"Use the orange and the grapefruit for an example," Dr. Downs said. "The bounding box is able to stretch the dimensions of the orange to match the size of the grapefruit, but neither the grapefruit nor the orange is quite circular so we need to make them the same shape.
"Convex hull keeps the whole brain image in a three-dimensional space. It is conceived as a way to scale outward from some central point by using the ratios of that distance from the central point to the convex hull surface, which is the outside of the brain," he said.
"For example, you want to scale the orange to be the same size as the grapefruit so you can determine where the seeds lie relative to the different shapes. You take the outer surface of the orange and the central point of the orange and find the lengths in every direction. You do the same thing with the grapefruit. Then you use the ratios of those lengths between the orange and the grapefruit to stretch out the orange to the grapefruit," he said.
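A toy version of that ratio-based scaling, assuming perfectly spherical "orange" and "grapefruit" shapes (in the real method the center-to-hull distance is measured separately along thousands of directions, so the scale factor varies by direction):

```python
import math

def radial_scale(point, center_a, radius_a, center_b, radius_b):
    # vector from A's central point to the point of interest
    v = [p - c for p, c in zip(point, center_a)]
    r = math.sqrt(sum(x * x for x in v))
    if r == 0.0:
        return list(center_b)  # the central point maps to the central point
    # fraction of A's center-to-surface distance, reapplied to B's
    new_r = (r / radius_a) * radius_b
    return [c + x * (new_r / r) for x, c in zip(v, center_b)]

# a "seed" halfway out in a radius-2 orange lands halfway out in a radius-3 grapefruit
print(radial_scale([1.0, 0.0, 0.0], [0.0, 0.0, 0.0], 2.0, [0.0, 0.0, 0.0], 3.0))
# [1.5, 0.0, 0.0]
```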
Dr. Fox had conceived of several measurements, but computer applications of the convex hull allow for thousands that touch virtually any point on the curved surface.
Convex hull is being used on images of the whole brain and is accurate to within 2 millimeters. The average brain is 180 millimeters long. Dr. Fox and his staff now are developing ways to selectively measure smaller portions of the brain using the same methodology. He expects to sharpen accuracy fourfold to within .04-millimeter of where any given brain function takes place.
The Diffuse Infrared Background Experiment (DIRBE) aboard the Cosmic Background Explorer (COBE) has provided the first full-sky linear polarization survey in the near-infrared, at 1.25 $\mu$m, 2.2 $\mu$m and 3.5 $\mu$m. A set of 41 weekly averaged maps include the linear Stokes parameters Q and U and their uncertainties for sky pixels measuring 0.35$\deg$ on a side. These data are best applied to studies of diffuse radiation, principally the zodiacal light. Polarizations of point sources should be derived from the DIRBE time-ordered data file, which includes calibrated intensities for all the polarization and total intensity channels, at a time resolution of 0.125 sec. This presentation describes the DIRBE polarization data products, with emphasis on the advantages and limitations of DIRBE polarimetry for studying the polarization of the diffuse sky and of point sources.
$^*$ The NASA/Goddard Space Flight Center (GSFC) is responsible for the design, development, and operation of the COBE mission. Scientific guidance is provided by the COBE Science Working Group. GSFC is also responsible for the production of the mission data sets. The data products are available from the National Space Science Data Center.
Today's sky searches
An asteroid-seeker talks about the search process and Clyde Tombaugh's impact on astronomy.
December 21, 2005
Lowell Observatory in Flagstaff, Arizona, became a household name after Clyde Tombaugh discovered Pluto from there in 1930. The observatory has remained in the forefront of astronomical findings ever since. Astronomy talked with Brian Skiff, a research assistant there, about modern-day astronomical searches and his thoughts on Tombaugh's legacy.
...and fine-tune it.
Apparently, it's been known for a while now that airborne bacterial streams could prompt precipitation in the form of rain or snow. The single-celled bodies which make up these streams serve as "nucleators" that water vapor can coalesce and freeze around. However, how ubiquitous these streams were, and how important they might be to cloud formation, has only recently become apparent:
"Atmospheric scientists haven't previously recognized that these particles are so widely distributed," [Louisiana State University microbiologist Brent Christner] said.
The findings raise the question of how climate change and human activities will affect bacterial balances in the sky. More immediately, they're a starting point for research on bacterial contributions to cloud formation and precipitation.
Since the impact of feedback loops involving clouds on global weather patterns is the largest source of uncertainty in current predictions of climate change, the new research should eventually allow for greater precision in these forecasts.
More cool cloud stuff here.
Brief Summary
Biology
With its life cycle wholly dependent on infrequent showers of rain, the Raso lark has a particularly precarious existence (2). During periods of drought, one of the few sources of food and water for this species is provided by the small subterranean bulbs of the nutsedges, Cyperus bulbosus or Cyperus cadamosti. In order to reach the bulbs, the Raso lark excavates shallow burrows in the sandy soil using its strong bill (2) (4). Males appear to consume more bulbs than the females, and the largest dominant males usually form territories containing several productive burrows, which they aggressively defend (4) (6). In contrast, females are more reliant on surface food sources such as grass seeds and insects (4). While the difference in bill size between the sexes has previously been thought to account for the differences in feeding behaviour (2) (3) (4), more recent research has shown that this may not be the case (6). Despite its smaller bill size, the female appears to be equally efficient at digging for bulbs, and, as such, its lower bulb intake appears to be due to competitive exclusion from burrowing sites by males. This added obstacle to finding food in Raso Island's harsh conditions may explain why the Raso lark's adult population is mostly composed of males. Not only are the females more likely to starve during droughts, but the increased time spent foraging relative to males means that less time is spent keeping a look out for potential threats, making losses due to predation more common (6). Raso lark breeding coincides with the onset of rain showers, with the males courting the females by quietly singing, raising the crest and hopping up and down on the spot with the wings held open. After mating, both sexes collect nesting material such as dried grass, which the female then uses to line a small, three-centimetre deep scrape in the soil, while the male defends the nesting site from intruders.
A clutch of up to three eggs may be produced over a period of several days, with individual eggs sometimes laid over a day apart. These are incubated by the female in short, ten-minute stints, interspersed with preening and feeding breaks (4). Few chicks appear to survive to fledging, as eggs and newly hatched chicks appear to be heavily predated by the Cape Verde giant gecko (Tarentola gigas) (2) (3). | <urn:uuid:369ce165-8d6d-47d9-a225-971d2fb4226f> | 3.90625 | 513 | Knowledge Article | Science & Tech. | 39.514 |
An affiliated website was created specifically for the 2009 National Climate Assessment so that the report would be more accessible to a variety of interested readers.
Visit the "Executive Summary" page on the 2009 National Climate Assessment website to find more information on the following key findings from the report:
Click here to download the "Executive Summary" chapter from the report
- Global warming is unequivocal and primarily human-induced.
- Climate changes are underway in the United States and are projected to grow.
- Widespread climate-related impacts are occurring now and are expected to increase.
- Climate change will stress water resources.
- Crop and livestock production will be increasingly challenged.
- Coastal areas are at increasing risk from sea-level rise and storm surge.
- Threats to human health will increase.
- Climate change will interact with many social and environmental stresses.
- Thresholds will be crossed, leading to large changes in climate and ecosystems.
- Future climate change and its impacts depend on choices made today. | <urn:uuid:aa6198de-dd4e-484f-98d0-6003afcf7913> | 3.3125 | 208 | Content Listing | Science & Tech. | 24.986364 |
Kreisimpressionen--nicht nur (!) mathematisch (Circle Impressions--Not Only (!) Mathematical)
A lesson plan in which students discover the equation of a circle and use it to describe an everyday object containing a circle or circles. The project also involves work with TI-89 calculators and Derive, as well as PowerPoint presentations. Work from the author's class is documented on the site as an example and an invitation for others to send in their work as well. The showcase is available in English and French; the whole site is available in German.
Levels: High School (9-12)
Resource Types: Graphics, Lesson Plans and Activities, General Software Miscellaneous
Math Topics: Equations, Analytic Geometry
Math Ed Topics: Computers
© 1994-2013 Drexel University. All rights reserved.
The Math Forum is a research and educational enterprise of the Drexel University School of Education. | <urn:uuid:be1f332d-acd3-461d-8c00-fc5d2e379cb4> | 3.625 | 211 | Content Listing | Science & Tech. | 36.545625 |
A father (m = 94 kg) and son (m = 48 kg) are standing facing each other on a frozen pond. The son pushes on the father and finds himself moving backward at 3 m/s after they have separated. How fast will the father be moving?
Which equation is used to solve this? Can you set up the problem for me? | <urn:uuid:bccf3f51-523c-4247-8f1d-49c99d008a72> | 2.875 | 73 | Q&A Forum | Science & Tech. | 90.315 |
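Conservation of momentum is the governing relation here: the pair starts at rest, so the total momentum stays zero after the push. A minimal sketch using the values from the problem:

```python
# Total momentum is zero before and after the push:
#   m_father * v_father + m_son * v_son = 0
m_father = 94.0   # kg
m_son = 48.0      # kg
v_son = 3.0       # m/s, backward

# Solve for the father's velocity; the sign shows he moves opposite the son.
v_father = -(m_son * v_son) / m_father
print(round(abs(v_father), 2))  # 1.53 m/s
```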
m is the real parameter.
Determine the number and sign of the real solutions of the equation:
By two methods: graphical and algebraic.
Intersect the graph of with parallel lines to x-axis with equation
If then the line doesn't intersect the graph.
If the line is tangent to the graph and the equation has a double solution
If the line intersects the graph in 2 points and the equation has two negative solutions.
If then the equation has the solutions
If then the equation has one negative solution and one positive solution. | <urn:uuid:c019545b-effe-4f90-bd37-46344f14795f> | 3.09375 | 113 | Tutorial | Science & Tech. | 55.085 |
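The equations themselves were lost in extraction, but the case analysis above is the standard discriminant argument. As a hypothetical stand-in (my choice, not the original equation), classify the real solutions of x^2 + 2x + m = 0, mirroring the graphical method of cutting the parabola y = x^2 + 2x with the horizontal line y = -m:

```python
def classify(m):
    # Discriminant b^2 - 4ac with a = 1, b = 2, c = m.
    disc = 4 - 4 * m
    if disc < 0:
        return "no real solution: the line misses the graph"
    if disc == 0:
        return "double solution: the line is tangent to the graph"
    return "two real solutions: the line cuts the graph twice"

print(classify(2))   # line below the vertex: no real solution
print(classify(1))   # tangent at the vertex: double solution x = -1
print(classify(0))   # two real solutions: x = 0 and x = -2
```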
Address of Overloaded Functions
Use of a function name without arguments returns the address of that function. For example:
int Func( int i, int j );
int Func( long l );
...
int (*pFunc)( int, int ) = Func;
In the preceding example, the first version of Func is selected, and its address is copied into pFunc.
The compiler determines which version of the function to select by finding a function with an argument list that exactly matches that of the target. The arguments in the overloaded function declarations are matched against one of the following:
An object being initialized (as shown in the preceding example)
The left side of an assignment statement
A formal argument to a function
A formal argument to a user-defined operator
A function return type
If no exact match is found, the expression that takes the address of the function is ambiguous and an error is generated.
Note that although a nonmember function, Func, was used in the preceding example, the same rules are applied when taking the address of overloaded member functions. | <urn:uuid:0c1b6309-9f49-41a2-9789-f0f00a3bbc07> | 3.828125 | 225 | Documentation | Software Dev. | 42.786336 |
Carbon semiconductors using sheets of pure graphene are being developed at Brown University, which recently proposed a method for removing defects from graphene semiconductors. Look for the switch from silicon to carbon semiconductors over the next 20 years. R.C.J.
Professor Vivek Shenoy (right) and graduate student Akbar Bagri (left) are exploring how to perfect the atomic configuration of graphene oxides, proposing that oxygen defects in graphene sheets be located and treated with hydrogen, which combines to form water vapor which leaves the lattice healed. The usual method of removing defects in silicon semiconductors is annealing--slow heating--but Brown demonstrated simulations showing that their hydrogen treatment works better for graphene.
Full Text: http://bit.ly/NextGenLog-cSP5 | <urn:uuid:8f7c1879-f31c-4215-936f-6db2fa329123> | 3.421875 | 163 | Truncated | Science & Tech. | 33.200741 |
Forensic entomologists, of course.
These are the strong-stomached folks who study the arthropod fauna that colonizes dead flesh. Their knowledge of insect taxonomy, ecology, and development can be used to provide estimates of the time and conditions of death. Or zombification, in the present case.
Hypothetically, suppose a zombie shuffles along to my house at horrifying rate of 1 km/hr.
On arrival, I note that the zombie is infested with final instar larvae of the blow fly Phormia regina. Under our current warm summer weather conditions, it takes at least 5 days for the maggots to reach that developmental stage since momma fly visited the zombie. So we can presume that the zombie has been decomposing for at least that long.
Let’s see. 5 days is 120 hours. Shuffling at 1km/hour would put the origin of that zombie no further than 120 kilometers away.
If we collect maggots and crunch the numbers from enough zombies, we should be able to triangulate in on the location of the zombie epicenter. The zombie control forces will then be able to deploy where they will be most effective.
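That triangulation can be sketched in code (all coordinates and helper names are hypothetical): each zombie's maggot age bounds how far it could have shuffled, and the epicenter must lie inside every zombie's maximum-travel circle.

```python
import math

SHUFFLE_SPEED_KMH = 1.0

def max_origin_distance(maggot_age_hours):
    """Upper bound, in km, on how far a zombie could have shuffled."""
    return maggot_age_hours * SHUFFLE_SPEED_KMH

def feasible_epicenter(candidate, zombies):
    """True if candidate (x, y) lies within every zombie's travel circle."""
    return all(
        math.hypot(candidate[0] - x, candidate[1] - y)
        <= max_origin_distance(age)
        for x, y, age in zombies
    )

# Three zombies found at (x, y) km, each carrying ~120-hour-old maggots:
zombies = [(0, 0, 120), (150, 0, 120), (75, 100, 120)]
print(feasible_epicenter((75, 40), zombies))    # True
print(feasible_epicenter((300, 300), zombies))  # False
```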
And as usual, entomologists will have saved civilization.
On a more personal note, though, I guess I should have run instead of doing the math. | <urn:uuid:09ce061f-3011-4fb7-9152-ccbad93729f5> | 2.8125 | 286 | Personal Blog | Science & Tech. | 53.989847 |
New research finds that one species of moth is capable of actively jamming the sonar used by moth-hunting bats. This biological equivalent of the 'electronic countermeasures' used by the military to jam radar and sonar signals is described in the journal Science. In the study, researchers used ultrasonic recording and high-speed infrared video to examine the relationship of bats and a tiger moth known as Bertholdia trigona. They found that sudden bursts of ultrasound emitted by the tiger moth appears to actually jam the bat's sonar, not just startle the bat or warn of a bad-tasting moth. We'll find out more.
Produced by Flora Lichtman, Correspondent and Managing Editor, Video | <urn:uuid:704e81f7-eea7-41b6-bd0a-1bca51045ff5> | 3.203125 | 149 | Truncated | Science & Tech. | 38.566522 |
Comment: 17:35 - 18:25 (00:50)
Source: Annenberg/CPB Resources - Earth Revealed - 9. Earthquakes
Keywords: Parkfield, "Tom Daley", "San Andreas Fault", "fault structure", "Vibra-seis truck", wave, energy, velocity, "subsurface structure", earthquake
Our transcription: One of the most fundamental aspects of the Parkfield Experiment focuses on the structure of the San Andreas Fault itself.
To learn more about this structure, geophysicists have set up the Vibra-seis Project.
At the heart of this effort is a specially equipped truck that shakes the ground, triggering waves of seismic energy.
Radiating into the earth, the seismic waves move at different velocities through different rock types.
Analysis of the velocity changes makes it possible to unravel the intricacies of the subsurface geologic structure.
As the waves penetrate the Earth, they are reflected and refracted off the various rock layers, and by measuring first the direct wave, which travels directly from the source to receivers, we can get the velocity of the rocks.
Then by looking at the later arriving reflected and refracted scattered waves, we can see possibly where the structure changes and where the layering beneath the Earth is.
Geology School Keywords | <urn:uuid:d65ef146-c8e5-4936-ab67-cbd5d26d0828> | 3.53125 | 278 | Knowledge Article | Science & Tech. | 34.662608 |
Science Fair Project Encyclopedia
A person who emits noise (with the voice or otherwise) either loudly or a lot of the time can be described as loud. Whether this is an insult or a compliment is a matter of personal preference: some people self-describe as "loud" while many others consider "loud" people to be intensely irritating.
Units used to measure loudness:
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License. Click here to see the transparent copy and copyright details | <urn:uuid:f58ac637-9f2f-43aa-8dec-6fbd0e92d3d5> | 3.09375 | 111 | Knowledge Article | Science & Tech. | 37.628674 |
Black Holes in Distant Galaxy Point to Wild Youth
Chandra's image of the lenticular (an elliptical-type galaxy with a disk of old stars) galaxy NGC 1553 reveals diffuse hot gas dotted with many point-like sources. As in the elliptical galaxies, NGC 4649 and NGC 4697, the point-like sources are due to black holes and neutron stars in binary star systems where material pulled off a normal star is heated and emits X-radiation as it falls toward its black hole or neutron star companion.
Black holes and neutron stars are the end state of the brightest and most massive stars. Chandra's detection of numerous neutron stars and black holes in this and other elliptical galaxies shows that these galaxies once contained many very bright, massive stars, in marked contrast to the present population of low-mass faint stars that now dominate elliptical galaxies.
The bright central source in NGC 1553 is probably due to a supermassive black hole in the nucleus of the galaxy. The nature of the spiral feature curling out from either side of this source is not known. It could be caused by shock waves from a pair of bubbles of high energy particles that were ejected from the vicinity of the supermassive black hole. | <urn:uuid:b0a54609-eeda-4870-a7a7-5c16cae731e4> | 3.703125 | 254 | Knowledge Article | Science & Tech. | 42.099296 |
On the 56th Independence Day, August 15, 2003, India's Prime Minister Atal Bihari Vajpayee announced: "Our country is now ready to fly high in the field of science. I am pleased to announce that India will send her own spacecraft to the moon by 2008. It is being named Chandrayaan-1." In Sanskrit (the language of ancient India), "Chandrayaan" means "Moon Craft".
The Moon has always fascinated Indians since ancient times, and now 21st-century India is ready to land on the Moon! Chandrayaan-1 is the first mission towards that dream.
In Chandrayaan-1, the lunar craft would be launched using a Polar Satellite Launch Vehicle (PSLV), weighing 1304 kg at launch and 590 kg in lunar orbit. The lunar craft would orbit the Moon 100 km from its surface.
ISRO invited international space organizations to participate in the project by providing suitable scientific payloads (instruments for experiments). ISRO selected:
3 payloads (C1XS, SIR-2, SARA) from ESA (European Space Agency),
1 (RADOM) from BAS (Bulgarian Academy of Sciences), and
2 (MiniSAR, M3) from NASA (National Aeronautics and Space Administration). | <urn:uuid:3fd81051-2d09-4321-ae85-4ec40b719c58> | 3.28125 | 261 | Knowledge Article | Science & Tech. | 55.646214 |
Thanks for giving a reply to my question. There is another method with the name parseInt; it has two arguments: parseInt(String, int). I cannot understand the meaning of the int argument in this method. Please help me in this regard. Again, thanks to all who gave me a reply. Ahmer Arman
Ahmer, the int argument is the radix that the String integer is in, in other words its base. A String such as "12349" would be base 10, so the parseInt call would be parseInt("12349", 10). A String that is a hex representation could be something like "AF", so parseInt would be parseInt("AF", 16). Basically you're telling the method that the String you're passing is in a different base than base 10. Hope that helps.
------------------ Dave Sun Certified Programmer for the Java 2 Platform
Well, since any number of classes COULD have a parseInt method, I am going to use my ESP and guess that you are talking about the one in the Integer class. That method is just saying that if you feed in a number in String format, and you specify the "base" that the number is displayed in (that would be the second parameter), the method will feed you back the value of that number in base 10. Of course the String can only use digits that are valid in the base that you say it is in, so if you say it is in base 5, then the String should only have the digits 0, 1, 2, 3 and 4, or else it will throw a NumberFormatException. The API gives a whole list of examples: | <urn:uuid:4fd8701a-4239-47d2-a2cb-a228fa5b4707> | 3.484375 | 382 | Comment Section | Software Dev. | 67.376485 |
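A small runnable illustration of the radix argument described above (the class name is hypothetical):

```java
public class ParseIntDemo {
    public static void main(String[] args) {
        // Radix 10: ordinary decimal parsing.
        System.out.println(Integer.parseInt("12349", 10)); // 12349

        // Radix 16: "AF" read as hexadecimal is 175 in base 10.
        System.out.println(Integer.parseInt("AF", 16));    // 175

        // Radix 5: only digits 0-4 are valid; "14" in base 5 is 9.
        System.out.println(Integer.parseInt("14", 5));     // 9

        // A digit outside the stated base throws NumberFormatException.
        try {
            Integer.parseInt("15", 5); // '5' is not a base-5 digit
        } catch (NumberFormatException e) {
            System.out.println("NumberFormatException, as expected");
        }
    }
}
```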
Elasmobranchii (sharks and rays) > Carcharhiniformes
(Ground sharks) > Pseudotriakidae
Etymology: Gollum: taken from a character in J. R. R. Tolkien's work "The Lord of the Rings".
Environment / Climate / Range
Marine; bathydemersal; depth range 120 - 660 m (Ref. 26346), usually 400 - 600 m (Ref. 13566). Deep-water; - 46°S
Length at first maturity / Size / Weight / Age
Maturity: Lm ?, range 70 - ? cm
Max length : 107 cm TL male/unsexed; (Ref. 6893); 109.2 cm TL (female)
Morphology | Morphometrics
Southwest Pacific: occurs off New Zealand and on rises between New Zealand and the east coast of Australia, New Caledonia, and Fiji just south of the Western Central Pacific. Placement in Proscyllidae provisional, probably will be relocated in Pseudotriakidae.
An uncommon to common deep-water bottom-dwelling shark found on the outermost continental shelf and upper slope of New Zealand and on adjacent seamounts and submarine banks (Ref. 13566). Seems adapted to waters of about 10°C and 34.8 ppt salinity (Ref. 6893). Feeds on a wide variety of fishes, cephalopods, and other invertebrates (Ref. 13566). Probably in schools (Ref. 13566). Females grow slightly larger than males (Ref. 13566). Ovoviviparous, embryos feeding on yolk sac and other ova produced by the mother, uterine milk is consumed additionally (Ref. 50449). Two young are born per litter (Ref. 13566).
Compagno, L.J.V., 1984. FAO Species Catalogue. Vol. 4. Sharks of the world. An annotated and illustrated catalogue of shark species known to date. Part 2 - Carcharhiniformes. FAO Fish. Synop. 125(4/2):251-655. Rome: FAO.
IUCN Red List Status (Ref. 90363)
Fisheries: of no interest
Estimates of some properties based on empirical models
Phylogenetic diversity index (Ref. 82805) = 0.8125 [Uniqueness, from 0.5 = low to 2.0 = high].
Bayesian length-weight: a = 0.00367 (-0.21058 - 0.21793), b = 3.12 (3.00 - 3.23), based on LWR estimates for this family-BS (Ref. 93245).
Trophic level (Ref. 69278): 4.2 ±0.7 se; based on diet studies.
Resilience (Ref. 69278): Very Low, minimum population doubling time more than 14 years (Fec = 2).
Vulnerability (Ref. 59153): Moderate vulnerability (40 of 100). | <urn:uuid:f745c6cf-8178-43de-aa3a-1557289231fc> | 3.046875 | 706 | Knowledge Article | Science & Tech. | 62.305976 |
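The Bayesian length-weight entry above encodes FishBase's standard relation W = a * L^b (W in grams, L in cm). A quick sketch using the record's point estimates; plugging in the max recorded male length as L is my own illustrative choice:

```python
# FishBase length-weight relation: W = a * L**b, with W in g and L in cm.
a, b = 0.00367, 3.12      # point estimates from the record above
L_max = 107.0             # max recorded male total length, cm

W = a * L_max ** b        # estimated weight in grams
print(round(W / 1000, 1)) # rough estimated weight in kg (about 7.9)
```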
What is Genomics
Genomics is the study of the genomes of organisms. Its main task is to determine the entire DNA sequence, that is, the composition of the atoms that make up the DNA and the chemical bonds between them. Knowledge of the DNA sequence has become an important part of biological research, but it is also of vital importance in other disciplines, including medicine, biotechnology and forensics.
Genomics should not be confused with genetics, the study of the functions of single genes, which also shows great potential in medicine and molecular biology. The field of genomics is interested in the genome as a whole and investigates a single gene only insofar as it matters to the genome as a structure. Genomics can therefore also be defined as the study of the complete genetic material of an organism.
The history of genomics dates back to the 1970s, when scientists determined the DNA sequences of simple organisms. The greatest breakthrough in the field occurred in the mid-1990s, when scientists sequenced the entire genome of Haemophilus influenzae, a free-living organism which, however, does not cause influenza. The bacterium was thought to be the cause of flu until 1933, when it was proven that influenza is caused by a virus. In 2001, scientists sequenced most of the human genome. Since then, genomes have been sequenced with relative ease. By the end of 2011, scientists had sequenced the genomes of over 2,700 viruses, more than 1,200 bacteria and archaea, and 36 eukaryotes, about 50 percent of which are fungi.
Scientists get a great deal of highly useful information from the sequenced DNA of organisms. Most importantly, sequences allow scientists to determine the relationships between genes and different sections of DNA, which in turn lets them determine which areas could offer benefits to science, as well as make the knowledge useful for medical applications.
Genomic research projects over the last few decades gave rise to several research areas in the study of genomes. The main genomics research areas include:
Human genomics. As its name suggests, human genomics is focused on studying the human genome sequence. Human DNA was sequenced by the Human Genome Project, an international scientific research project, in 2001, but the human genome sequence was proclaimed complete only in 2007.
Bacteriophage genomics. This refers to the study of the genomes of bacteriophages, viruses which infect bacteria and are considered a possible alternative for treating illnesses caused by antibiotic-resistant bacteria.
Metagenomics. It is a study of metagenomes or genetic material which is obtained from environmental samples rather than from cultivated cultures. Metagenomics has revolutionized the understanding of microbial world and shown that the traditional cultivation techniques have missed the majority of microbial diversity.
Cyanobacteria genomics. This field of genomic research concentrates on the study of cyanobacteria, a phylum of bacteria which obtain energy through photosynthesis.
Pharmacogenomics. This branch of genomics studies the impact of genetic variation on a drug’s efficacy and toxicity, and plays an important role in optimization of drug therapy. | <urn:uuid:02e0632b-5d9b-4fd1-bcc9-9508137b4093> | 3.65625 | 644 | Knowledge Article | Science & Tech. | 29.941527 |
Science Needs and New Technology for Increasing Soil Carbon Sequestration
F. B. Metting and R. Cesar Izaurralde
Abstract
Fossil fuel use and land use change that began over 200 years ago are driving the rapid increase in atmospheric content of CO2 and other greenhouse gases that may be impacting climatic change (Houghton et al., 1996). Enhanced terrestrial uptake of CO2 over the next 50 to 100 years has been suggested as a way to reclaim the 150 or more Pg carbon (C) lost to the atmosphere from vegetation and soil since 1850 as a consequence of land use change (Batjes, 1999; Lal et al., 1998a; Houghton, 1995), thus effectively "buying time" for the development and implementation of new longer term technical solutions, such as C-free fuels. The ultimate potential for terrestrial C sequestration is not known, however, because we lack adequate understanding of (1) the biogeochemical mechanisms responsible for C fluxes and storage potential on the molecular, landscape, regional, and global scales, and (2) the complex genetic and physiological processes controlling key biological and ecological phenomena. Specifically, the structure and dynamics of the belowground component of terrestrial carbon pools, which accounts for two-thirds of global terrestrial organic C stocks, is poorly understood. Focusing primarily on forests, croplands and grasslands, the purpose of this chapter is to consider innovative technology for enhancing C sequestration in terrestrial ecosystems and address the scientific issues related to better understanding of soil C sequestration potential through appropriate and effective approaches to ecosystem management. | <urn:uuid:f2fec39c-079f-4ead-9427-1a1aa75de057> | 3.171875 | 323 | Academic Writing | Science & Tech. | 25.4576 |
Tangent Circles Theorem: Common Tangents & Concurrent Point.
School, SAT Prep, College
The common tangents of three mutually tangential circles A, B, and C, taken in pairs, are concurrent in the point P. P is the incenter of triangle ABC (the center of the incircle).
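The theorem can be checked numerically: for circles centered at the triangle's vertices with radii s - a, s - b, s - c (so each pair is externally tangent), each common tangent is perpendicular to the line of centers at the tangency point and should pass through the incenter. A sketch (the triangle coordinates are an arbitrary choice of mine):

```python
import math

def incenter(A, B, C):
    # Incenter as the side-length-weighted average of the vertices.
    a, b, c = math.dist(B, C), math.dist(C, A), math.dist(A, B)
    p = a + b + c
    return ((a * A[0] + b * B[0] + c * C[0]) / p,
            (a * A[1] + b * B[1] + c * C[1]) / p)

A, B, C = (0.0, 0.0), (4.0, 0.0), (1.0, 3.0)
a, b, c = math.dist(B, C), math.dist(C, A), math.dist(A, B)
s = (a + b + c) / 2
rA, rB, rC = s - a, s - b, s - c   # radii of three mutually tangent circles
I = incenter(A, B, C)

# Each common tangent is perpendicular to the line of centers P->Q at the
# tangency point T; the incenter I lies on it iff (I - T) . (Q - P) == 0.
dots = []
for P, Q, r in [(A, B, rA), (B, C, rB), (C, A, rC)]:
    d = math.dist(P, Q)
    T = (P[0] + r * (Q[0] - P[0]) / d, P[1] + r * (Q[1] - P[1]) / d)
    dots.append((I[0] - T[0]) * (Q[0] - P[0]) + (I[1] - T[1]) * (Q[1] - P[1]))

print(all(abs(dot) < 1e-9 for dot in dots))  # True: tangents concur at I
```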
Dynamic Geometry: You can alter the figure above dynamically in order to test and prove (or disprove) conjectures and gain mathematical insight that is less readily available with static drawings by hand.
This page uses the TracenPoche dynamic geometry software and requires Adobe Flash Player 7 or higher. TracenPoche is a project of Sesamath, an association of French teachers of mathematics.
Instructions to explore the figure:
Animation. Click the red to start/stop the animation.
Manipulate. Drag points A and C to change the figure.
Step-by-step construction. Press P and click the left mouse on any free area to show the step-by-step bar and start the construction. Hide the step-by-step bar by using the combination P + left mouse click again. | <urn:uuid:b67ff227-781a-4b72-aeca-d67ae8b22b3a> | 3.390625 | 250 | Tutorial | Science & Tech. | 53.173636 |
How much energy does our global population of nearly 7 billion use every year? According to the US Energy Information Administration (EIA), total primary energy consumption came to 493 quadrillion — that’s 493,000,000,000,000,000 — BTUs in 2008, the most recent year for which figures are available. (One BTU, or British thermal unit, is the amount of energy you’d need to heat up one pound of water by one degree Fahrenheit.)
That counts energy from all sources: oil, coal, gas, nuclear and renewable.
The world’s largest overall energy consumer in 2008? No surprise there, it’s the US, devouring its way through 100.6 quadrillion BTUs. China was closing in rapidly, though: it consumed 85 quadrillion BTUs in 2008 and will doubtless go even higher when new figures become available.
The country with the greatest per-person primary energy consumption, on the other hand, seems an unlikely one: the British Virgin Islands. According to the EIA, every person on the islands in 2008 consumed an average of 3,316 million BTUs, compared to just 330 million BTUs per capita in the US and 64.6 million BTUs per capita in China.
That figure is extreme, even by island standards. Energy statistics for small island nations are typically high, in large part because they’re usually not connected to a regional electricity grid and often depend on imported oil for generating power. (Before the 2008 economic crisis, the British Virgin Islands also ranked near the top globally in offshore finance.) | <urn:uuid:2c615582-3d77-46d4-b03d-6b5bbb8a6d75> | 3.25 | 332 | Knowledge Article | Science & Tech. | 50.861732 |
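The per-capita figures follow directly from total consumption divided by population. A sketch reproducing the US number (the 2008 population value is my approximation, not from the article):

```python
QUAD_BTU = 1e15                  # one quadrillion BTU

us_total_btu = 100.6 * QUAD_BTU  # US primary energy consumption, 2008
us_population = 305e6            # approximate 2008 US population (assumption)

per_capita_million_btu = us_total_btu / us_population / 1e6
print(round(per_capita_million_btu))  # ~330, matching the cited EIA figure
```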
A slurry of rocks and mud sounded like a freight train when it ripped through a popular Mount Rainier hiking destination in 2001 and scared some television viewers who believed their homes were in the path.
As it turned out, the debris flow at Comet Falls proved less dangerous than initially believed, but it gave scientists insights into a phenomenon that continues to mystify.
Such a debris flow likely added damage to Mount Rainier National Park when a flood – sparked by nearly 18 inches of rain in two days – shut it down in November 2006. Experts are concerned that the level of flood danger is increasing as sediment builds in glacier-fed waters like the Nisqually River.
Scientists suspect that climate change – specifically, shrinking glaciers that leave unstable rock behind – is adding to the risk of debris flows that help clog river channels downstream.
This summer, a team of researchers is gathering information at Mount Rainier that could help provide answers. One of the leading scientists is Gordon Grant, a U.S. Forest Service hydrologist and Oregon State University professor of geosciences.
“Geological record documents debris flows for as long as the mountains have been around,” Grant said. “But given well-documented glacier retreat here and elsewhere, now is a good time to ask whether glacial retreat is changing the risk.”
Among the scientists’ questions: Have debris flows become more frequent? Does this add to the dangers around the Nisqually River and Mount Rainier’s other glacier-fed rivers, making them more likely to jump their banks?
Consider this: At Longmire, the river is nearly 30 feet higher than most of the national park compound, including the popular National Park Inn, ranger housing, maintenance shops and other historically important buildings.
SEDIMENT BUILDING UP
Glaciers on Mount Rainier and elsewhere are shrinking, and glaciologists have blamed climate change. Debris flows typically begin where glaciers run out.
Moreover, sediment has been building – up to a foot per decade – on the bottoms of rivers such as the Nisqually, increasing the likelihood of future floods.
The process, which river experts call aggradation, is typical of glacier-fed rivers of this type.
But Scott Beason, a Mount Rainier National Park ranger who measured the river beds before and after the flood and wrote his master’s thesis about them, suspects that climate change has added to the risk of debris flows which help clog the channels.
During the 2006 flood, Beason used a real-time Internet connection to monitor a U.S. Geological Survey gauge in the Nisqually River. He noticed that water line didn’t rise smoothly. Instead the picture was interrupted by spikes, small jags representing pulses of debris, rather than water, which boosted the flow.
“The power of water to sculpt the land is just amazing,” Beason said. “It doesn’t take much to start a debris flow. … They’re just a completely different kind of monster than a river.”
BIRTH OF A DEBRIS FLOW
In 2001, scientist Carolyn Driedger and fellow U.S. Geological Survey volcano experts went up in a helicopter to check on reports of the debris flow at Comet Falls.
Driedger and her colleagues got a look the following day, when a second mass cut loose. “It was one of the most spectacular things I’ve ever seen in my whole life,” Driedger said.
A 6-foot-wide stream of melt water slipped out of one glacier basin and into another, then became a muddy slurry that bulked up as it enveloped a mass of rocky refuse and charged downhill.
Before that helicopter flight, few scientists had observed the birth of such flows.
The Comet Falls incident took place in summer and was unrelated to flooding. Even so, Grant suspects a link between debris flows and floods is common to volcanoes throughout the Pacific Northwest. “It’s a coupled phenomenon,” he said, meaning that volcanic debris flows are frequently associated with floods.
One theory is that climate change has increased the incidence of extreme storms. Scientists plan to look for a correlation between bad weather and debris flows, Grant said.
Also, Grant said his team plans to take a closer look at how debris flows come about. “We don’t really know the mechanism by which they begin,” he said. “A key issue is how they bulk up.”
Read more at newstribune.com. | <urn:uuid:10cd7f70-6614-4856-99cf-620d525967fb> | 3.515625 | 959 | Truncated | Science & Tech. | 54.351531 |
Stars produce energy through nuclear fusion, producing heavier elements from lighter ones. The heat generated from these reactions prevents gravitational collapse of the star. Over time, the star builds up a central core which consists of elements which the temperature at the center of the star is not sufficient to fuse. For main-sequence stars with a mass below approximately 8 solar masses, the mass of this core will remain below the Chandrasekhar limit, and they will eventually lose mass (as planetary nebulae) until only the core, which becomes a white dwarf, remains. Stars with higher mass will develop a degenerate core whose mass will grow until it exceeds the limit. At this point the star will explode in a core-collapse supernova, leaving behind either a neutron star or a black hole.
Computed values for the limit vary depending on the approximations used, the nuclear composition of the mass, and the temperature. Chandrasekhar (eqs. 36, 58, and 43) gives a value of approximately 1.4 solar masses for μe = 2.
Electron degeneracy pressure is a quantum-mechanical effect arising from the Pauli exclusion principle. Since electrons are fermions, no two electrons can be in the same state, so not all electrons can be in the minimum-energy level. Rather, electrons must occupy a band of energy levels. Compression of the electron gas increases the number of electrons in a given volume and raises the maximum energy level in the occupied band. Therefore, the energy of the electrons will increase upon compression, so pressure must be exerted on the electron gas to compress it. This is the origin of electron degeneracy pressure.
In the nonrelativistic case, electron degeneracy pressure gives rise to an equation of state of the form P = K1·ρ^(5/3). Solving the hydrostatic equation leads to a model white dwarf which is a polytrope of index 3/2, and therefore has radius inversely proportional to the cube root of its mass, and volume inversely proportional to its mass.
As the mass of a model white dwarf increases, the typical energies to which degeneracy pressure forces the electrons are no longer negligible relative to their rest masses. The velocities of the electrons approach the speed of light, and special relativity must be taken into account. In the strongly relativistic limit, we find that the equation of state takes the form P = K2·ρ^(4/3). This will yield a polytrope of index 3, which will have a total mass, Mlimit say, depending only on K2.
For a fully relativistic treatment, the equation of state used will interpolate between the equations P = K1·ρ^(5/3) for small ρ and P = K2·ρ^(4/3) for large ρ. When this is done, the model radius still decreases with mass, but becomes zero at Mlimit. This is the Chandrasekhar limit. The curves of radius against mass for the non-relativistic and relativistic models are shown in the graph. They are colored blue and green, respectively. μe has been set equal to 2. Radius is measured in standard solar radii or kilometers, and mass in standard solar masses.
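The limiting mass can be evaluated from the standard closed-form textbook expression (consistent with, but not quoted in, this article): M_limit = (ω3 · sqrt(3π)/2) · (ħc/G)^(3/2) / (μe · mH)^2, where ω3 ≈ 2.018 is the relevant Lane-Emden constant for a polytrope of index 3. A numerical sketch:

```python
import math

hbar = 1.054571817e-34  # J s
c    = 2.99792458e8     # m/s
G    = 6.67430e-11      # m^3 kg^-1 s^-2
m_H  = 1.6726219e-27    # kg, hydrogen (proton) mass
mu_e = 2.0              # electrons per nucleon (He/C/O composition)
omega3 = 2.01824        # Lane-Emden constant for a polytrope of index 3

M_limit = (omega3 * math.sqrt(3 * math.pi) / 2) \
          * (hbar * c / G) ** 1.5 / (mu_e * m_H) ** 2
M_sun = 1.989e30        # kg

print(round(M_limit / M_sun, 2))  # ~1.43 solar masses with these constants
```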
A more accurate value of the limit than that given by this simple model requires adjusting for various factors, including electrostatic interactions between the electrons and nuclei and effects caused by nonzero temperature. Lieb and Yau have given a rigorous derivation of the limit from a relativistic many-particle Schrödinger equation.
In 1926, the British physicist Ralph H. Fowler observed that the relationship between the density, energy, and temperature of white dwarfs could be explained by viewing them as a gas of nonrelativistic, non-interacting electrons and nuclei which obeyed Fermi–Dirac statistics. This Fermi gas model was then used by the British physicist E. C. Stoner in 1929 to calculate the relationship between the mass, radius, and density of white dwarfs, assuming them to be homogeneous spheres. Wilhelm Anderson applied a relativistic correction to this model, giving rise to a maximum possible mass of approximately 1.37 · 10^30 kg. In 1930, Stoner derived the internal energy–density equation of state for a Fermi gas, and was then able to treat the mass–radius relationship in a fully relativistic manner, giving a limiting mass of approximately 2.19 · 10^30 kg (for μe = 2.5). Stoner went on to derive the pressure–density equation of state, which he published in 1932. These equations of state had also been published previously by the Russian physicist Yakov Frenkel in 1928, together with some other remarks on the physics of degenerate matter. Frenkel's work, however, was ignored by the astronomical and astrophysical community.
A series of papers published between 1931 and 1935 had its beginning on a trip from India to England in 1930, during which the Indian physicist Subrahmanyan Chandrasekhar worked on the calculation of the statistics of a degenerate Fermi gas. In these papers, Chandrasekhar solved the hydrostatic equation together with the nonrelativistic Fermi gas equation of state, and also treated the case of a relativistic Fermi gas, giving rise to the value of the limit shown above. Chandrasekhar reviews this work in his Nobel Prize lecture. This value was also computed in 1932 by the Soviet physicist Lev Davidovich Landau, who, however, did not apply it to white dwarfs.
Chandrasekhar's work on the limit aroused controversy, owing to the opposition of the British astrophysicist Arthur Stanley Eddington. Eddington was aware that the existence of black holes was theoretically possible, and also realized that the existence of the limit made their formation possible. However, he was unwilling to accept that this could happen. After a talk by Chandrasekhar on the limit in 1935, he replied:
The star has to go on radiating and radiating and contracting and contracting until, I suppose, it gets down to a few km. radius, when gravity becomes strong enough to hold in the radiation, and the star can at last find peace. … I think there should be a law of Nature to prevent a star from behaving in this absurd way!

Eddington's proposed solution to the perceived problem was to modify relativistic mechanics so as to make the law P = K₁ρ^(5/3) universally applicable, even for large ρ. Although Bohr, Fowler, Pauli, and other physicists agreed with Chandrasekhar's analysis, at the time, owing to Eddington's status, they were unwilling to publicly support Chandrasekhar (pp. 110–111). Through the rest of his life, Eddington held to his position in his writings, including his work on his fundamental theory. The drama associated with this disagreement is one of the main themes of Empire of the Stars, Arthur I. Miller's biography of Chandrasekhar. In Miller's view:
Chandra's discovery might well have transformed and accelerated developments in both physics and astrophysics in the 1930s. Instead, Eddington's heavy-handed intervention lent weighty support to the conservative community of astrophysicists, who steadfastly refused even to consider the idea that stars might collapse to nothing. As a result, Chandra's work was almost forgotten. (p. 150)
The core of a star is kept from collapsing by the heat generated by the fusion of nuclei of lighter elements into heavier ones. At various points in a star's life, the nuclei required for this process will be exhausted, and the core will collapse, causing it to become denser and hotter. A critical situation arises when iron accumulates in the core, since iron nuclei are incapable of generating further energy through fusion. If the core becomes sufficiently dense, electron degeneracy pressure will play a significant part in stabilizing it against gravitational collapse.
If a main-sequence star is not too massive (less than approximately 8 solar masses), it will eventually shed enough mass to form a white dwarf having mass below the Chandrasekhar limit, which will consist of the former core of the star. For more massive stars, electron degeneracy pressure will not keep the iron core from collapsing to very great density, leading to the formation of a neutron star, black hole, or, speculatively, a quark star. (For very massive, low-metallicity stars, it is also possible that instabilities will destroy the star completely.) During the collapse, neutrons are formed by the capture of electrons by protons, leading to the emission of neutrinos (pp. 1046–1047). The decrease in gravitational potential energy of the collapsing core releases a large amount of energy, on the order of 10^46 joules (100 foes). Most of this energy is carried away by the emitted neutrinos. This process is believed to be responsible for supernovae of types Ib, Ic, and II.
Type Ia supernovae derive their energy from runaway fusion of the nuclei in the interior of a white dwarf. This fate may befall carbon–oxygen white dwarfs that accrete matter from a companion giant star, leading to a steadily increasing mass. It is believed that, as the white dwarf's mass approaches the Chandrasekhar limit, its central density increases, and, as a result of compressional heating, its temperature also increases. This results in an increasing rate of fusion reactions, eventually igniting a thermonuclear flame which causes the supernova (§5.1.2).
Strong indications of the reliability of Chandrasekhar's formula are:
- Jarred Figlar-Barnes
- Elma, WA
- United States
Pykrete a glacier???
If you have ever seen the show Mythbusters, you would know of a material called Pykrete. It is 10–15% sawdust mixed with 85–90% water. When frozen, this material takes far longer to melt than regular ice, and it is also much tougher. What I saw was a possible short-term solution to a long-term problem: melting glaciers. What if you could stop a large glacier from melting using this same idea? You would have to disperse the material (sand, sawdust) on the glacier in winter, preferably before a large snow storm. America's fastest growing glacier, Crater Glacier in Mount St. Helens' crater, is well on its way to being the lower 48's largest glacier, and even though none of the ice pre-dates 1980, at its thickest point it is over 600 feet deep. It is advancing at a rate of 50 ft a year and thickening 15 feet per year. Most of the glacier is below the average height for glaciers in Washington State, so why is it growing? Rock slides and ash. They are acting just like sawdust in Pykrete, insulating the ice and keeping it from melting. Many rock glaciers can also form below the normal height of glaciers because of this property. So, could this work? Does anyone have better solutions or ideas?
Closing Statement from Jarred Figlar-Barnes
If anything, a study should be done that factors in the cost-effectiveness, effectiveness, and environmental impact of this idea. Hopefully next time I post more people will find it interesting.
10.20.11 - Using data from the Herschel Space Observatory, astronomers have detected for the first time cold water vapor enveloping a dusty disk around a young star.
10.05.11 - Astronomers have found a new cosmic source for the same kind of water that appeared on Earth billions of years ago and created the oceans.
09.21.11 - Chalk up one more feat for Saturn's intriguing moon Enceladus.
09.13.11 - New findings from the Herschel Space Observatory paint a more tranquil picture of galaxy growth than previously thought.
08.01.11 - The Herschel Space Observatory has provided the first confirmed finding of oxygen molecules in space.
07.19.11 - New observations from the Herschel Space Observatory show a bizarre, twisted ring of dense gas at the center of our Milky Way galaxy.
07.07.11 - New observations from the infrared Herschel Space Observatory reveal that an exploding star expelled the equivalent of between 160,000 and 230,000 Earth masses of fresh dust.
04.13.11 - The Herschel Space Observatory has found evidence that tangled filaments in space may be shaped by sonic booms.
02.16.11 - The Herschel Space Observatory has revealed how much dark matter it takes to form a new galaxy bursting with stars.
01.05.11 - Cool, dusty arms spiral around an explosive center in this new image of the Andromeda galaxy from the Herschel and XMM-Newton telescopes.
11.04.10 - It turns out the Herschel Space Observatory has a trick up its sleeve. The telescope, a European Space Agency mission with important NASA contributions, has proven to be excellent at finding magnified, faraway galaxies.
09.01.10 - The Herschel infrared space observatory has discovered that ultraviolet starlight is the key ingredient for making water in space.
05.06.10 - The first scientific results from the Herschel infrared space observatory are revealing previously hidden details of star formation.
04.12.10 - The Herschel Space Observatory has uncovered a cosmic garden of budding stars, each expected to grow to 10 times the mass of our sun.
12.17.09 - The Herschel Space Observatory, a European Space Agency mission with important NASA/JPL participation, has revealed a surprising amount of activity in the Eagle nebula.
10.02.09 - A new image from the Herschel Observatory shows off the observatory's talents for seeing multiple wavelengths of light.
07.10.09 - All three of Herschel's instruments have now opened their eyes and collected their first astronomy data.
06.26.09 - The Herschel Space Observatory has snapped its first picture since blasting into space on May 14, 2009.
06.15.09 - The Herschel observatory has flipped its lid -- the cover protecting the telescope's instruments was successfully removed on June 14, 2009, at 2:54 a.m. Pacific Time.
05.14.09 - The Herschel and Planck spacecraft successfully blasted into space on May 14 from the Guiana Space Centre in French Guiana.
Why are insects so energy-efficient while flying? Is it because of their light weight and aerodynamics or due to very efficient biochemical transformations (food->energy)?
Insect flight muscle is capable of achieving the highest metabolic rate of all animal tissues, and this tissue may be considered an exquisite example of biochemical adaptation.
Locusts, for example, may (almost instantaneously) increase their oxygen consumption up to 70-fold when starting to fly. In humans, exercise can increase O2 consumption a maximum of 20-fold, and for birds in flight the figure is about 10-fold (Wegener, 1996; Sacktor, 1976).
As Wegener (1996) has put it (in his definitive paper):
Flight is powered by ATP hydrolysis, and these impressive metabolic rates are achieved by very effective control of ATP hydrolysis and regeneration.
Wegener, G. (1996) Flying insects: model systems exercise physiology Experientia May 15;52(5):404-12. (See here)
Sacktor B. (1976) Biochemical adaptations for flight in the insect. Biochem Soc Symp. 1976;(41):111-31. (See here)
The smaller an animal is, the easier it becomes for it to fly. That is because surface area increases with the square of the animal's diameter, whereas mass increases with the cube. So the larger a thing is, the more mass per unit of surface area it has.
And since insects tend to be small they tend to be good at flying.
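The square-cube scaling above can be sketched numerically. The sizes below are illustrative assumptions, not real animal measurements:

```python
# Square-cube law: surface area scales like d^2, mass like d^3 (constant
# density assumed), so mass per unit surface area grows linearly with size d.

def mass_per_area(d):
    area = d ** 2       # surface area ~ d^2 (up to a shape constant)
    mass = d ** 3       # mass ~ d^3
    return mass / area  # ~ d

insect, bird = 0.01, 0.10  # characteristic sizes in metres (assumed)
# A creature 10x larger carries roughly 10x the mass per unit of surface area.
print(mass_per_area(bird) / mass_per_area(insect))
```

The printed ratio equals the size ratio, which is why small size alone already favors flight.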
As for any other reason, I don't think insects are any more energy-efficient than, say, birds.
It's traditional to think of the real numbers as divided up into three types of number: the rational numbers, the algebraic numbers, and the leftovers known collectively as the transcendental numbers. If you're not familiar with those terms I'll be explaining them below. We can represent this as follows:
I'm using unicode for blackboard bold R and Q there. I hope your browser supports them. ℝ is the set of real numbers, ℚ is the set of rational numbers, and A is the algebraic numbers.
But there are many different kinds of transcendental number in existence, so the subject of this article will be three more levels of classification we could insert between the set of algebraic numbers and the set of real numbers in the sequence above.
But first, let's revisit ℚ and A.
The Rational Numbers, ℚ
The rational numbers are fairly straightforward to deal with. They are nothing more than the ratios of integers. Given two such ratios we can easily compare them to see whether they are equal. For example, we're taught at a young age how to tell if 2/3 is the same as 4/6.
However, it didn't take long for the Ancient Greeks to realise that they had some equations they could solve approximately using rationals, but couldn't solve exactly. The best known example is solving x² = 2.
Assume x is rational, so that x = p/q, with p and q integers sharing no common factor. We have p² = 2q², and so p must be divisible by 2. But then p² is divisible by 4, and so q is divisible by 2. This means that we can cancel 2 from the top and bottom of p/q. That contradicts our assumption about cancellation. So the equation has no rational solution.
If we expect x² = 2 to have a solution we must work with numbers that can't be represented as fractions.
Notice how the rational numbers come equipped with a notation to describe them. We can just write one integer over another, with a horizontal line between them. Given such a description we can immediately tell if it's valid (it's only invalid if the denominator is 0) and whether it equals some other rational number.
One interesting property of the rational numbers is that they are *dense* in the real numbers. This is just another way of saying that any real number can be approximated as well as we like by a rational number. For example, we can approximate √2 to within 1/1000 by the rational number 141421/100000. This is an important property that will come up again later.
We also know that there are only countably many rational numbers because there are only countably many pairs of integers.
The Real Algebraic Numbers, A
The real algebraic numbers are all the real numbers that we can obtain by finding roots of polynomial equations whose coefficients are rational numbers. For example, the real solutions to x⁵ − 2x² + ½ = 0 are all algebraic numbers. Among other things, this includes the solution to x² = 2 that wasn't contained in the rationals. The algebraic real numbers also include the solutions to "insoluble" algebraic equations. (By 'algebraic' equations, I just mean polynomial equations.) Even though Galois proved we can't write expressions for the solutions to all algebraic equations using the four arithmetic operations and nth roots, these solutions still exist in the real numbers, and they are all, by definition, algebraic numbers.
We can describe an algebraic number by writing down the equation it solves, and additionally providing some description to say which root of the equation we're interested in. Unfortunately, given any algebraic number there is always an infinite number of algebraic equations that it satisfies. So like with the rationals there is some redundancy. The good news, however, is that given two such descriptions of algebraic numbers there is an algorithm to tell whether or not they describe equal numbers.
Even though there are vastly more algebraic numbers than rational numbers, they still only form a countable set. As mentioned in the previous paragraph, every algebraic number can be described by a finite string of symbols, and there are only countably many such strings.
The Computable Real Numbers, C
Now I want to insert a set between A and ℝ. These are the computable real numbers. But first think back to what I said about the rational numbers.
Pick a real number x. Given any ε we can always find a rational number that comes within ε of x. We can define a function f on the rationals with the property that f(ε) is rational and within ε of x for any rational ε. So f gives rational approximations to x to any desired accuracy. Such a function uniquely specifies x. In fact x is the limit of f(ε) as ε tends to zero. The computable real numbers are the real numbers specified in this way by *computable* functions. Notice how there are no infinities involved here. A computable real number is represented by nothing more than a finite string of symbols forming a computer program that processes finite integers. We can find out what the real number is to any desired accuracy in a finite time.
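Such a function f is easy to write down for √2. The sketch below (function name and method are mine) represents √2 as a program that, given any rational ε, returns a rational within ε of it, using exact rational arithmetic throughout:

```python
from fractions import Fraction

def sqrt2(eps: Fraction) -> Fraction:
    """Return a rational within eps of sqrt(2), by bisection."""
    lo, hi = Fraction(1), Fraction(2)   # sqrt(2) lies in [1, 2]
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if mid * mid < 2:
            lo = mid                    # invariant: lo^2 < 2
        else:
            hi = mid                    # invariant: hi^2 >= 2
    return lo                           # within eps of sqrt(2)

print(sqrt2(Fraction(1, 1000)))         # a rational within 1/1000 of sqrt(2)
```

Everything here is finite: a finite program manipulating finite integers, yet it pins down an irrational number to any desired accuracy.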
Because we can write computer programs to find roots of algebraic equations to any desired accuracy we know that the algebraic numbers are contained in the computables. So now we have:
ℚ ⊂ A ⊂ C ⊂ ℝ
Where I'm using C to represent the computables.
Almost any number we ever want to work with is computable. Whether it's √2, or π, or the value of some humongous integral that's appeared in your engineering problem, chances are there's an algorithm somewhere that can approximate it as accurately as you like, given enough CPU time and RAM.
Note that every computable is described by a finite string of symbols, and so the computables form a countable set.
There are some problems with computable numbers. Given two representations of computable numbers we have no way of telling whether or not they are equal. It's not just that this is hard to do. There simply is no algorithm to prove that the numbers are equal. The problem is that the two computer programs may give exactly the same approximations to every degree of accuracy down to a certain ε. But for higher accuracy they may give different results. We have no way of knowing in advance at what value of ε they'll start differing. And anyway, there simply is no algorithm for telling if two computer programs will always generate the same results. So paradoxically, testing the equality of two computable numbers is itself uncomputable.
Actually, the situation is much worse. We can't even tell if we have a valid representation of a computable real. In order to do that, we need to know that our computer program to generate approximations terminates. But solving that would solve the halting problem.
By the way, despite these issues this representation of computable real numbers as functions leads to practical ways to do arbitrary precision arithmetic on a computer in a way that gives us guaranteed bounds on the accuracy of our results.
From here we could go two ways. We could try to rein in our class of number to something more manageable. Or we could try to push further and try to find ways to represent real numbers that aren't even computable. Let's do the former first.
The Periods, P
The periods are an interesting class of real number that doesn't seem to be well known. The definition may be old, but it was only recently that they were being promoted in mathematical circles as an interesting thing to study. So I thought I'd contribute to their promotion.
Consider the number π. It arises straightforwardly in geometric problems. An example is computing the area of a circle. Back in the 18th century Lambert proved that it was irrational and in the 19th century Lindemann proved that it was transcendental. But in a sense the transcendental numbers are simply a rubbish heap into which the leftover numbers have been discarded. Can we get some kind of handle on at least some of these numbers in a way that puts π back on the map?
The real number π is the area of a unit circle. So it is the area of the region of the plane given by the equation x² + y² < 1.
Here's a similar construction of another real number: take the region of the plane given by 1 < x < 2, y > 0, and xy < 1.
The area of that region is log(2). (Natural logarithm of course!) We can construct the logarithm of any positive rational number in a similar way. These are all transcendental numbers. But in a way they are nice transcendental numbers. They arise from considering areas of the plane described by straightforward algebraic inequalities. (As in the previous section, I'm using 'algebraic inequalities' to mean 'polynomial inequalities'.)
That suggests a class of real number: those numbers that can be represented as the volume of a region defined by a bunch of algebraic inequalities with rational coefficients. These are known as the periods. I'm talking about generalised volumes in n-dimensions, not just areas in the plane. Clearly π and log(2) are periods.
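As a quick numeric sanity check (illustrative only; the helper function and grid method are mine, and not part of the definition of a period), the areas of two such algebraically defined regions can be estimated directly:

```python
import math

def area(inside, xlo, xhi, ylo, yhi, n=400):
    """Midpoint-grid estimate of the area of {(x, y) : inside(x, y)}."""
    dx, dy = (xhi - xlo) / n, (yhi - ylo) / n
    hits = sum(inside(xlo + (i + 0.5) * dx, ylo + (j + 0.5) * dy)
               for i in range(n) for j in range(n))
    return hits * dx * dy

pi_est   = area(lambda x, y: x * x + y * y < 1, -1, 1, -1, 1)  # unit disc
log2_est = area(lambda x, y: x * y < 1, 1, 2, 0, 1)            # 1 < x < 2, 0 < y, xy < 1

print(pi_est, log2_est)   # close to 3.14159... and 0.69314...
```

The estimates converge to π and log(2) as the grid is refined, which is exactly what the period representation asserts about those regions.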
But does this representation solve the problem we had with computable numbers where we were unable to guarantee we could check the equality of two numbers? Now it gets interesting. The answer is: we don't know. It is conjectured that if we have the same number represented in two different ways as a period that we can transform one representation into another simply by using a small set of elementary operations. It is also conjectured that there is a terminating algorithm for finding such a sequence of operations, or if one doesn't exist, demonstrating this.
One obvious question now is this: are there any computable reals that aren't periods? A recent paper, Periods and elementary real numbers claims to exhibit one by means of a kind of diagonalisation argument. But in general it's hard to prove that a number isn't a period. I don't believe there is a proof yet that e = 2.718... isn't a period, but mathematicians expect that it isn't.
I think the name 'period' comes from the fact that the period of a pendulum of rational length, in a gravitational field of rational strength, is a period. Computing this requires elliptic functions, and these often result in periods when applied to rational numbers.
Although mathematics is about numbers, few mathematical publications talk much about specific real numbers. Curiously, when they do, they often talk about numbers that are periods. For example there have been many recent papers on the values of the Riemann zeta function for interesting arguments. These are often periods.
It has been suggested that the study of periods is actually the study of algebraic geometry in disguise. There is certainly no end to the interesting mathematics we can do using only periods.
We now have:
ℚ ⊂ A ⊂ P ⊂ C ⊂ ℝ
In the section on computable numbers I mentioned that we could go the other way and try to find a bigger class of real numbers that could be represented by finite strings of symbols. We could simply go "all the way" and consider those real numbers that can be defined, by any means possible, using the symbols of mathematics. We can try to pin this down a bit better. We'll work with the language of set theory. In this language we can write strings of symbols like S = "x > 0 and x² = 2". Such a string uniquely defines a real number if when we glue the string "there exists a unique x such that" onto the beginning of it we get a true proposition. We can now represent the number √2 as the string S.
It looks like we now have:
ℚ ⊂ A ⊂ P ⊂ C ⊂ D ⊂ ℝ
with D the set of definables.
Except there's a problem. The set D is not definable! The problem is this: we can talk about strings of symbols representing real numbers. To talk about these in set theory we'd encode these strings of symbols as sets so that we can apply the language of set theory. In order to use our attempted definition of definable we need some way to say when a string of symbols represents a true proposition. But to define a set of definables we need to talk about true propositions in the language of set theory. Gödel showed us how to talk about the provability of a proposition within set theory. He did this by showing that provability is about mechanical operations we can perform on strings. But there's nothing analogous for talking about the truth of propositions. In fact, Tarski showed us this is impossible. So while we can talk about all kinds of individual numbers as being definable, we can't construct the set of definable numbers.
You may be interested to see an example of a definable number that isn't computable. Probably the most publicised example is Chaitin's constant. It represents the probability that a randomly generated string of symbols is a computer program that terminates in a finite time. We can't actually compute this number because it requires us to solve the halting problem. Nonetheless, it's perfectly well defined.
You can find the original paper on periods by Zagier and Kontsevich here. I first found out about periods from the book Mathematics Unlimited.
Update: Jared asked if I intended the real algebraic numbers. That's what I said in the text, but I confusingly wrote ℚ for this set. But that usually means the *complex* algebraic numbers. So I've changed notation and now use A for the real algebraic numbers.
Similarly, the periods are usually defined to be complex numbers whose real and imaginary parts are given by algebraically specified volumes. I was trying to avoid mention of the complex numbers to keep prerequisites to a minimum and the essential ideas here work without mention of the complex numbers.
See Also: Nullable<T> Members
The Nullable<T> value type represents a value of a given type T or an indication that the instance contains no value. Such a nullable type is useful in a variety of situations, such as in denoting nullable columns in a database table or optional attributes in an XML element. The runtime transforms Nullable<T> instances without values into true nulls when performing a box operation; instances with values are transformed into boxed T's containing the Nullable<T>'s Value.
An instance of Nullable<T> has two properties, Nullable<T>.HasValue and Nullable<T>.Value. Nullable<T>.HasValue is used to determine whether the current instance currently has a value. It returns true or false, and never throws an exception. Nullable<T>.Value returns the current value of the instance, provided it has one (i.e., Nullable<T>.HasValue is true); otherwise, it throws an exception.
In addition to the above properties, there is a pair of methods, both overloads of Nullable<T>.GetValueOrDefault. The version taking no arguments returns the instance's current value, if it has one; otherwise, it returns the default value of type T. The version taking an argument of type T returns the instance's current value, if it has one; otherwise, it returns the default value argument passed to it.
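A short illustrative sketch of these members (the variable and its values are mine):

```csharp
using System;

class Example
{
    static void Main()
    {
        int? n = null;                               // shorthand for Nullable<int>

        Console.WriteLine(n.HasValue);               // False
        Console.WriteLine(n.GetValueOrDefault());    // 0  (default value of int)
        Console.WriteLine(n.GetValueOrDefault(42));  // 42 (supplied default)

        n = 7;
        Console.WriteLine(n.HasValue);               // True
        Console.WriteLine(n.Value);                  // 7  (HasValue is true, so no exception)
    }
}
```

Reading Value while HasValue is false would throw instead, which is why the GetValueOrDefault overloads are the safer choice when a fallback value is acceptable.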
Applying Nullable<T>.HasValue to an instance that has the default initial value causes false to be returned.
Greenhouse gases in the atmosphere play a critical role in shaping the global climate, and human activities have significantly modified the concentrations of many of these gases. A key area of scientific research in understanding the effects of human activities on global climate is the identification and quantification of these greenhouse gas flows.
Greenhouse gases cycle through the oceans and the biosphere over time periods that can range from a few days to millions of years. Carbon, for example, may be stored deep within ocean sediments for many millions of years or it might be cycled back into the atmosphere in a matter of hours. Scientists are trying to understand the various sources and reservoirs - or sinks - of each of the greenhouse gases in order to create better models of how human actions may affect natural processes.
A source is any process or activity through which a greenhouse gas is released into the atmosphere. Both natural processes and human activities release greenhouse gases. A sink is a reservoir that takes up a chemical element or compound from another part of its natural cycle.
Carbon Dioxide
With carbon dioxide, it is important to distinguish between natural and man-made (anthropogenic) sources. One of the largest sources of atmospheric carbon dioxide is plant and animal decay, as microorganisms break down the dead material, releasing carbon dioxide into the air as part of the process. Other naturally occurring sources include forest fires and volcanoes.
Burning fossil fuels is a primary source of greenhouse gases caused by man; as the chemical energy in a hydrocarbon-rich fossil fuel is converted into heat, carbon dioxide is produced as a byproduct. Forest clearing - or deforestation - and the burning of solid waste, wood, and wood products are also sources of atmospheric carbon dioxide.
Just as trees and vegetation are sources of atmospheric carbon dioxide when they decay, they are a sink for carbon dioxide as they grow. During photosynthesis, trees and vegetation absorb CO2 from the air and emit oxygen. Humans can also add to this carbon sink through such efforts as reforestation.
The carbon cycle is one of the Earth's major biogeochemical cycles; vast amounts of carbon continuously cycle between the Earth's atmosphere, oceans, and land surfaces in both short- and long-term cycles. The carbon exchange in the world's oceans takes place on a very large scale, but it is often thought to be a very rapid process, absorbing and releasing CO2 in short-term cycles with little long-term storage. However, scientists are now beginning to believe that much of the 'extra' carbon dioxide released into the atmosphere through human activities is being absorbed by the oceans, raising the possibility that we could increase the "ocean sink" through a method called ocean fertilization.
Methane
Another important greenhouse gas is methane, which has both natural and human sources. Natural sources of methane include wetlands, gas hydrates, permafrost, termites, oceans, freshwater bodies, non-wetland soils, and other sources such as wildfires. Human activities that produce methane include fossil fuel production and transport, livestock and manure management, rice cultivation, and waste management (i.e., landfills and the burning of biomass). The Intergovernmental Panel on Climate Change (IPCC) estimates that 60% of total global methane emissions are related to human activities.
Methane emission levels from specific sources can vary significantly, depending on factors such as climate, industrial and agricultural production characteristics, energy types and usage, and waste management practices. For example, both temperature and moisture have a significant effect on the anaerobic digestion process - a key biological process causing methane emissions in both human and natural sources.
Methane in many soils can be consumed - oxidizing to carbon dioxide - by methane-oxidizing bacteria (methanotrophs). Although this process simply exchanges one greenhouse gas for another, methane is much more powerful than carbon dioxide as a greenhouse gas. Hydroxyl radicals are often counted as methane sinks, but - technically - they do not result in methane storage or removal from the atmosphere. These radicals initiate a series of chemical reactions by which methane becomes one of several non-greenhouse compounds that are then removed from the atmosphere through precipitation or another means. Humans can also capture and utilize methane, thereby affecting overall emission levels through the use of technology in industry (such as coal mining) or waste management (landfills).
Nitrous Oxide After carbon dioxide and methane, nitrous oxide is the third most important greenhouse gas. In nature, it is emitted from soils and the oceans; anthropogenic sources of nitrous oxide include the cultivation of soil, the production and use of fertilizers, and the burning of fossil fuels and other organic material. Nitrous oxide is not stored in significant amounts through natural processes or actively taken out of the atmosphere.
Halocarbons Humans are entirely responsible for emissions of halocarbons, a class of greenhouse gases, many of which are synthetic chemicals used as alternatives to ozone-depleting substances such as CFCs. However, while these halocarbons do not deplete the ozone layer, they are potent greenhouse gases. Sources of these gases include electrical transmission and distribution systems, semiconductor manufacturing, and aluminum and magnesium production. Like nitrous oxide, halocarbons are not stored in significant amounts through natural processes or actively taken out of the atmosphere.
U.S. Emissions Inventory 2006 This report presents estimates by the United States government of U.S. anthropogenic greenhouse gas emissions and sinks for the years 1990 through 2004.
Wikipedia: Carbon Dioxide Sink An extensive array of information on carbon dioxide sinks, including natural sinks, enhancing natural sinks, and artificial sequestration techniques.
Carbon Sinks and Sources In this activity, teachers conduct a classroom discussion aimed at identifying carbon sources and sinks, followed by student interaction to determine whether they are a source or a sink.
Carbon Dioxide Sources and Sinks Middle school students can learn about carbon dioxide's sources and sinks in this interactive lab activity. With the help of an instructor, students can test automobile exhaust, breath, and even plants for the presence of carbon dioxide. | <urn:uuid:7465680b-3c5b-43e2-a947-b8fdf029b567> | 3.890625 | 1,240 | Knowledge Article | Science & Tech. | 21.924955 |
Thread is one of the most important classes in Java, and multithreading is one of its most widely used features, but there is no clear way to stop a thread in Java. The Thread class originally provided a stop() method, but Java deprecated it for safety reasons. By default a thread stops when execution of its run() method finishes, either normally or due to an exception. In this article we will see how to stop a thread in Java by using a boolean state variable, or flag. Using a flag to stop a thread is a very popular way of stopping threads, and it is also safe, because it doesn't do anything special other than helping the run() method finish itself.
How to Stop Thread in Java
As I said earlier, a thread in Java will stop once its run() method finishes. Another important point is that you cannot restart a thread whose run() method has already finished; you will get an IllegalThreadStateException. Here is sample code for stopping a thread in Java:
Sample Code to Stop Thread in Java
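A minimal sketch of the flag-based pattern described above (the class and method names here are illustrative):

```java
public class Runner implements Runnable {
    // volatile guarantees that a write to bExit by one thread is
    // immediately visible to the thread executing run()
    private volatile boolean bExit = false;

    public void requestExit() { bExit = true; }

    @Override
    public void run() {
        while (!bExit) {
            // do one unit of work, then re-check the flag
        }
        // run() returns here, so the thread stops normally
    }

    public static void main(String[] args) throws InterruptedException {
        Runner r = new Runner();
        Thread t = new Thread(r);
        t.start();
        r.requestExit();  // ask the thread to stop
        t.join();         // wait for run() to finish
        System.out.println("thread stopped: " + !t.isAlive());
    }
}
```

Calling requestExit() does nothing forceful; it simply lets run() fall out of its loop and finish on its own, which is exactly why this approach is safe.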
Should we make bExit Volatile
Since every thread can cache values in its own local memory in Java, it is good practice to make bExit volatile, because we may alter the value of bExit from any thread, and making it volatile guarantees that the Runner thread will also see any update made to bExit by another thread.
That's all on how to stop a thread in Java. Let me know if you find any other way of stopping threads in Java without using the deprecated stop() method.
Related Java Multi-threading Post: | <urn:uuid:1e0f9f40-7f9e-4644-91c3-76e227d01034> | 3.109375 | 297 | Tutorial | Software Dev. | 40.448934 |
1. Create a map and add a shapefile to it from a location which contains spaces in the path - the layer will render correctly.
2. Leave the map open and close uDig.
3. Restart uDig - the map loads up but the shapefile layer does not render.
Here's the error log...
!ENTRY net.refractions.udig.project 1 0 2009-05-19 17:00:03.250
!MESSAGE Layer: coastline could not find a GeoResource with id:file:/C:/Documents and Settings/Dave S-B/My Documents/Clients/FDSL/Projects/Target/Data/FixedShapefiles/coastline.shp#coastline
...however, that file url is correct. Everything works ok when there are no spaces (and ok in uDig 1.1). | <urn:uuid:df965cf5-a4ed-46c5-a7d3-0a5a8885bf77> | 2.8125 | 186 | Comment Section | Software Dev. | 83.580718 |
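One way to see the likely root cause is that the GeoResource id in the log above is a raw path pasted into a file: URL; spaces are not legal in a URI and must be percent-encoded, which java.io.File.toURI() does automatically. A small sketch (the path below is illustrative, and the diagnosis is a hypothesis consistent with the log, not confirmed):

```java
import java.io.File;
import java.net.URI;

public class SpaceInUrl {
    public static void main(String[] args) {
        // a path with spaces, like the one in the error log
        File f = new File("C:/Documents and Settings/coastline.shp");

        // toURI() percent-encodes the spaces, producing a valid URI
        URI uri = f.toURI();
        System.out.println(uri);  // spaces appear as %20
    }
}
```

If uDig stores the unencoded form on shutdown and later compares it against an encoded URI (or vice versa), the lookup would fail exactly as described.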
Cloud cover fraction
As additional service to interpret the SO2 vertical column densities,
the web pages that show images of the SO2 data also show an image
of the cloud cover fraction for the same region, taken from the same
instrument. Note that currently there is no cloud screening performed
on the data: all SO2 slant column retrieval results are shown as-is.
The same cloud fraction is also used to determine the air-mass factor (AMF)
when taking the AMF from look-up tables made with a
radiative transfer model in order to compute the SO2 vertical column
density. Note that the AMF and thus the vertical column cannot be calculated
if the cloud cover fraction is missing.
For the daily data at orbit (like the image shown here) and at grid
coordinates the cloud cover fraction is presented in the same way and available in the
accompanying data files. Monthly averages of the cloud fraction are not
made: there is no cloud screening method applied to the SO2 data, so a
monthly average cloud fraction would not provide any useful additional
information.
The cloud fraction is a number between zero (clear-sky: no clouds) and one
(fully clouded, also known as overcast). For the Volcanic & Air Quality
SO2 Services the cloud data is taken from existing data products:
For SCIAMACHY data, the cloud
fraction is taken from the data files generated by the FRESCO algorithm,
which derives the cloud data from the Oxygen A-band (between 758-775 nm).
See this page at the TEMIS website
for more information and data.
The FRESCO cloud data is either in "normal mode" or in "snow/ice mode". In
the latter case, a cloud fraction of one is assumed with the cloud at ground
level when determining the AMF for the SO2 vertical column. In the
datafiles, however, the cloud fraction in the "snow/ice mode" is set to
'-1', and it is shown in a separate colour in the cloud fraction images
(indicated simply as "snow" in the colour bar).
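As a small illustration of how these values might be handled when reading the data files (the function below is a sketch, not part of the TEMIS software):

```python
# Sketch of interpreting the cloud fraction field as described above:
# values in [0, 1] are fractional cover; -1 flags FRESCO "snow/ice mode".
def interpret_cloud_fraction(cf):
    if cf == -1:
        return "snow/ice mode"
    if 0.0 <= cf <= 1.0:
        return "clear-sky" if cf == 0 else (
            "overcast" if cf == 1 else f"{cf:.0%} cloud cover")
    raise ValueError("cloud fraction missing or invalid")

print(interpret_cloud_fraction(0.4))   # → 40% cloud cover
print(interpret_cloud_fraction(-1))    # → snow/ice mode
```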
Note that the FRESCO cloud cover data retrieval, strictly speaking, is based
on sunlight reflected by a combination of clouds and (reflecting) aerosols:
such aerosols are effectively treated as clouds. This means that the FRESCO
data product contains a (small) aerosol contribution.
For OMI data, the cloud fraction is taken from the level-2 data product
provided by NASA/NOAA.
The "quality flags" of the ground pixel in that data product gives the
presence of snow or ice. The following situations are taken to represent the
"snow/ice mode": permanent ice, dry snow, and a sea ice concentration of
more than 50% -- all other cases are considered to be "normal mode"
The GOME-2 SO2 data are generated in near-real time at DLR and that data
product contains cloud information based on the OCRA/ROCINN method,
developed at DLR. The maps on the web pages show the cloud cover fraction as
determined by OCRA. | <urn:uuid:c56978a7-22a8-4f7f-b8b8-0728e8b774a8> | 2.90625 | 681 | Knowledge Article | Science & Tech. | 39.015134 |
Limits of Functions
Okay, we know what it is for a net to have a limit, and then we used that to define continuity in terms of nets. Continuity just says that the function’s value is exactly what it takes to preserve convergence of nets.
But what if we have a bunch of nets and no function value? Like, if there's a hole in our domain — as there is at 0 for the function f(x) = x/x — we certainly shouldn't penalize this function just on a technicality of how we presented it. Well, there may be a hole in the domain, but we still have sequences in the domain that converge to where that hole is. So let's take a domain D, a function f defined on D, and a point p. In particular, we're interested in what happens when p is in the closure of D, but not in D itself.
Now we look at all sequences which converge to p. There's at least one of them because p is in the closure of D, but there may be quite a few. Each one of these sequences has an opinion on what the value of f should be at p. If they all agree, then we can define the limit of the function to be the common limit along any one of these sequences. In the case of f(x) = x/x we see that at every point other than 0 our function takes the value 1. Thus on any sequence converging to 0 (but never taking the value 0) the function gives the constant sequence 1. Since they all agree, we can define the limit to be 1.
If a function has a limit at a hole in its domain, we can use that limit to patch up the hole. That is, if our point p is in the closure of D but not in D itself, and if our function has a limit L at p, then we can extend our function to p by setting f(p) = L. Just like we by default set the domain of a function to be wherever it makes sense, we will just assume that the domain has been extended to whatever boundary points the function takes a limit at.
On the other hand, we can also describe limits in terms of neighborhoods instead of sequences. Here we end up with formulas that look like those we saw when we defined continuity in metric spaces. A function f has a limit L at the point p if for every ε > 0 there is a δ > 0 so that 0 < |x − p| < δ implies |f(x) − L| < ε. Going back and forth from this definition to the one in terms of sequences behaves just the same as going back and forth between net and neighborhood definitions of continuity.
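In symbols (a reconstruction, since the post's original notation did not survive), the neighborhood definition reads:

```latex
\lim_{x \to p} f(x) = L
\quad\iff\quad
\forall \varepsilon > 0 \;\, \exists \delta > 0 :\;
0 < |x - p| < \delta \;\Longrightarrow\; |f(x) - L| < \varepsilon
```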
To a certain extent we're starting to see a little more clearly the distinct feels of the two different approaches. Using nets tells us about approaching a point in various systematic ways, and having a limit at a point tells us that we can understand the function at that point by understanding any system along which we can approach it. We can even replace the limiting point by the convergent net and say that the net is the point, as we did when first defining the real numbers. Using neighborhoods, on the other hand, feels more like giving error tolerances. A limit is the value the function is trying to get to, and if we're willing to live with being wrong by ε, there's a way to pick a δ for how wrong our input can be and still come at least that close to the target.
SQLite is an in-process software library that implements a self-contained, serverless, zero-configuration, transactional SQL database engine. The code for SQLite is in the public domain and is thus free to use for any purpose, commercial or private.
SQLite is an embedded SQL database engine. Unlike most other SQL databases, SQLite does not have a separate server process. SQLite reads and writes directly to ordinary disk files. A complete SQL database with multiple tables, indices, triggers, and views is contained in a single disk file.
The SQLite project is developed and maintained by the SQLite Consortium.
- Easy to port to other systems.
- Transactions are atomic, consistent, isolated, and durable (ACID), even after system crashes and power failures.
- Zero-configuration - no setup or administration needed.
- Implements most of SQL92.
- A complete database is stored in a single cross-platform disk file.
- Supports terabyte-sized databases and gigabyte-sized strings and blobs.
- Faster than popular client/server database engines for most common operations.
- Self-contained: no external dependencies.
- Small code footprint: less than 275KB fully configured or less than 200KB with optional features omitted.
- Well-commented ANSI-C source code with over 99% statement test coverage. Available as a single source code file. | <urn:uuid:395e8b98-274f-48e2-81b7-ffaaf870775e> | 3.1875 | 292 | Knowledge Article | Software Dev. | 32.022538 |
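As a quick illustration of the serverless, single-file model, here is a sketch using Python's built-in sqlite3 module (one of many language bindings; not part of SQLite itself):

```python
# The entire database lives in one ordinary disk file; here an in-memory
# database is used for brevity, but a path like "app.db" works the same way.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("Ada",))
conn.commit()

rows = conn.execute("SELECT id, name FROM users").fetchall()
print(rows)  # → [(1, 'Ada')]
conn.close()
```

No server process was started and nothing was configured; the engine ran inside the calling process, exactly as described above.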
Making Starbirth Easier
Outcome: A new model of star formation which uses the advances in understanding of magnetic reconnection has been tested with numerical simulations.
Transformative: The traditional paradigm of star formation based on the ambipolar diffusion process faces severe problems explaining observational data. The new model can explain new observations and opens the way to better models of star formation.
Scientific problem: All stars, including our Sun, were born from clouds of gas as gravitational forces collected matter spread over light-years into relatively small, dense, hot, luminous objects. The gas from which stars are born exhibits chaotic turbulent motion, and it carries magnetic fields that counteract the gravitational collapse. For years the problem of removing magnetic field from star-forming regions has been the focus of intense studies by astrophysicists. Magnetic fields couple with the gas in molecular clouds through their interactions with the minute fraction of ions present in the gas and, traditionally, it is assumed that the slippage between the ions and gas atoms, which is called ambipolar diffusion, is responsible for the loss of magnetic field.
Breakthrough: The paper challenges the above accepted paradigm and identifies a process of magnetic reconnection in turbulent media as the major process of magnetic field removal from collapsing molecular clouds. This finding is based on the model of turbulent reconnection by Lazarian & Vishniac 1999. Simulations within the paper exhibit fast removal of magnetic flux without ambipolar diffusion.
Significance: Star formation is one of the most fundamental problems of astrophysics, and one of great interest to the general public. Simulations in the paper explain observational data that cannot be explained with the existing paradigm of ambipolar diffusion.
International collaboration: This work is the result of an international collaboration. A student from Brazil came to Madison to work with the PI to test numerically the PI's idea of a new scenario of star formation.
...a linear fashion—that is, the bonds from the carbon atom to the two substituent atoms are situated at an angle that is less than 180°—in both the triplet and the singlet states. The bond angle for the singlet state, however, is predicted to be larger than that for the triplet state. These predictions are fully supported by experiments. The simplest carbene, methylene, has been...
...define the corners of an equilateral triangle, a geometry that requires the C−C−C angles to be 60°. This 60° angle is much smaller than the normal tetrahedral bond angle of 109.5° and imposes considerable strain (called angle strain) on cyclopropane. Cyclopropane is further destabilized by the torsional strain that results from having three eclipsed...
The angle between electron pairs in a tetrahedral arrangement is 109.5°. However, although H2O is indeed angular and NH3 is trigonal pyramidal, the angles between the bonds are 104° and 107°, respectively. In a sense, such close agreement is quite satisfactory for so simple an approach, but clearly there is more to explain. To account for variations in bond...
potential energy curve
The data obtained from such a procedure can be used to construct a molecular potential energy curve, a graph that shows how the energy of the molecule varies as bond lengths and bond angles are changed. A typical curve for a diatomic molecule, in which only the internuclear distance is variable, is shown in Figure 10. The energy minimum of this curve corresponds to the observed bond length of...
structure and classification of alcohols
Alkyl groups are generally bulkier than hydrogen atoms, however, so the R−O−H bond angle in alcohols is generally larger than the 104.5° H−O−H bond angle in water. For example, the 108.9° bond angle in methanol shows the effect of the...
"24.22 Researchers are proposing the development of a ‘‘nano- channel reactor’’ for steam reforming of methane (CH4) to fuel- cell hydrogen gas to power microscale devices. Gas phase diffusion in nanochannel A = CH4, B= H2O A+B 20 mol% CH 4 -----------------------------> Nanochannel ( NA/NB) = 0.25 300 degrees C, 0.5 atm, 200Nm diameter As each channel diameter is so small, the gas flow is likely to be very small within a given channel. Hence, gas diffusion pro- cesses may play a role in the operation of this device, particularly during the mixing and heating steps. We are specifically inter- ested in evaluating the effective diffusion coefficient of methane gas (species A, MA 1?4 16g/g: mol) in water vapor (species B, MB 1?4 18 g/g: mol) at 3008C and 0.5 atm total system pressure. The diameter of the channel is 200nm (1x 10^9 nm = 10m) A feed gas containing 20 mol % CH4 in water vapor is fed to the nanochannel with a flux ratio NA/NB = 0:25. What is effective diffusion coefficient of CH4 in the nanochannel at the feed gas conditions? Is Knudsen diffusion important? | <urn:uuid:85e520f7-c2bc-4b48-8103-7f417e688ce9> | 2.828125 | 287 | Q&A Forum | Science & Tech. | 75.254056 |
Comparing algorithms of dissimilar performance order, such as O(nlogn) for STL sort and O(kn) for Radix-Sort, should provide insight about when one algorithm is expected to be better than another. The comparison is easier to see if the orders are put in a similar form, such as O(log2(n)*n) and O(k*n). Both formulas have an n term in common, and thus the comparison is between log2(n) and k . Another way to see this is to divide the two formulas. In this case, n cancels out and log2(n)/k ratio remains. If log2(n) is bigger than k, the ratio will be bigger than 1. If k is bigger, the ratio will be less than 1.
Table 8 demonstrates this relationship. It shows that log2(n) increases by one when n doubles -- which follows from one of the rules of logarithms, where log( m*n ) = log(m) + log(n). In this case, log2( 2*n ) = log2(2) + log2(n) = 1 + log2(n). The second portion of the table shows that for log2(n)to double, n has to square, which follows from another rule of logarithms, where m*log(n) = log(nm), where m=2 in this case.
In Table 8 one of the entries is log2(4.3*109)=32 -- for example, n = 4.3E+9, or 4.3 billion. For this case, when the array contains 4.3 billion elements, then O(log(n)*n ) = 32*4.3*109, which is the estimate of the number of comparisons for the STL sort algorithm.
For Binary-Radix Sort, the order is O(k*n), where k is the number of bits in each element. If the elements are 32-bits each, then O(k*n) = 32*4.3*109 operations for an array with 4.3 billion elements. In this case, the estimates for STL sort and Binary-Radix Sort are equal. For the case of 8-bit elements, Binary-Radix Sort estimate is 8*4.3*109, whereas STL estimate stays at 32*4.3*109, predicting Binary-Radix Sort to be faster. For the case of 64-bit elements, the Binary-Radix-Sort estimate is 64*4.3*109, whereas STL estimate stays at 32*4.3*109, predicting STL sort to be faster.
These examples demonstrate that STL sort performance is estimated only from the number of elements in the array and not their size, whereas Binary-Radix Sort is estimated from the number of bits of each element and the number of elements. The order formulas predict that Binary-Radix Sort should be faster than STL sort (actually any optimal comparison sort) when the elements are made of a few bits, but that STL sort should be faster when the element size is large.
Table 9 shows measured performance comparisons. Intel IPP does not implement 8-bit and 64-bit signed integer sorting in its non-Radix and Radix sorting functions.
Table 9 provides a lot of information, with several notable results:
- All algorithms performed consistently with their unsigned counterparts (when existed), where signed and unsigned performance was nearly equal.
- IPP Radix Sort is significantly faster than all other algorithms for 16 and 32-bit signed -- 8X and about 3X faster respectively than the closest competitor.
- Order predictions hold accurately for Hybrid Binary-Radix Sort for 8, 16 and 32-bit cases, where the performance decreases by 2X as the number of bits (k) increases by 2X. However, for 64-bit case the performance does not decrease by 2X as predicted, but increases by about 16%.
- Order prediction is not accurate for STL sort, expecting the performance not to depend on the size of array elements, but only on the number of elements. However, STL sort performance decreases by 2X as the number of bits increases by 2X, for 8, 16 and 32-bit cases, but not for 64-bit.
- Order prediction for STL sort to beat Binary-Radix Sort did not hold for up to 64-bits.
At first it may seem that Binary-Radix sort is a good candidate for multicore and multithreading. It splits the data set into two portions and then splits those further recursively. However, the split of the array is data dependent. In other words, the split will not always be even, which leads to uneven load balancing. This may be one of the reasons that Intel did not multithread its Radix-Sort implementation.
In-place Hybrid Binary-Radix Sort (MSD-style) algorithm was developed and improved through several performance optimizations. Insertion Sort was used at the bottom of the recursion tree. Over 40% performance improvement was achieved from the initial implementation, by removing redundant operations. The implementation was extended to handle unsigned and signed integers from 8-bits to 64-bits. A data-type-aware generic interface, overloaded functions, was used to encapsulate unique unsigned and signed implementations under a common interface. The resulting algorithm compared favorably with STL sort implementation, outperforming it by at least 15% for random input data and 32 & 64-bit increasing and decreasing data sets.
Comparison to Intel's IPP sort (in-place) and Radix sort (not in-place) was also performed, where IPP's Radix Sort was found to perform 20X, 8X and about 3X faster for 8-bit, 16-bit and 32-bit unsigned respectively. Hybrid Binary-Radix Sort outperformed IPP's sort for 16 and 32-bit data types, but lagged by 20X for 8-bit unsigned data type.
Inconsistencies in using algorithm order for performance prediction were found for STL sort for most data sizes, whereas Hybrid Binary-Radix Sort was predicted consistently except for 64-bit data size.
A further hybrid in-place algorithm can be evolved by combining the best attributes of Intel's IPP algorithms with in-place Hybrid Binary-Radix Sort. For instance, the high-performance IPP 8-bit in-place sort can be integrated under the generic interface developed. For the 8-bit unsigned overloaded function implementation, Intel's IPP sort function would be called. The combined algorithm would retain its generic interface, but yet would have data type specific optimized implementations. This leads to "generic data type adaptive" algorithms, which would retain a generic interface and adapt to perform optimally for each data type. Purely generic algorithms miss this opportunity, as well as the 20X performance improvement shown for 8-bit data types.
Floating-point support would be a nice extension. Intel IPP sorting routines support floating-point: single and double-precision. Creating a more sophisticated generic implementation, such as STL's, would allow custom classes to be sorted. The use of iterators would reduce the number of items passed to each level of recursion from currently 4 down to 3 (first and last iterator, and bitMask), possibly improving performance.
Intel Integrated Performance Primitives for Intel Architecture, Reference Manual, Volume 1: Signal Processing, August 2008, pp. 5-57 - 5-61.
Jim Vaught of Arxan Defense Systems -- personal discussion.
V. J. Duvanenko, Algorithm Improvement through Performance Measurement: Part 1, Dr. Dobb's
Scott Miller of Arxan Defense Systems -- personal discussion. | <urn:uuid:b424f421-cb96-470d-9aed-cb536f314523> | 3.5 | 1,606 | Knowledge Article | Software Dev. | 58.502918 |
ANSI Common Lisp 3 Evaluation and Compilation 3.2 Compilation
3.2.3 File CompilationThe function compile-file performs compilation of forms in a file following the rules specified in Section 3.2.2 Compilation Semantics, and produces an output file that can be loaded by using load.
Normally, the top level forms appearing in a file compiled with compile-file are evaluated only when the resulting compiled file is loaded, and not when the file is compiled. However, it is typically the case that some forms in the file need to be evaluated at compile time so the remainder of the file can be read and compiled correctly.
The eval-when special form can be used to control whether a top level form is evaluated at compile time, load time, or both. It is possible to specify any of three situations with eval-when, denoted by the symbols :compile-toplevel, :load-toplevel, and :execute. For top level eval-when forms, :compile-toplevel specifies that the compiler must evaluate the body at compile time, and :load-toplevel specifies that the compiler must arrange to evaluate the body at load time. For non-top level eval-when forms, :execute specifies that the body must be executed in the run-time environment.
The behavior of this form can be more precisely understood in terms of a model of how compile-file processes forms in a file to be compiled. There are two processing modes, called "not-compile-time" and "compile-time-too".
Successive forms are read from the file by compile-file and processed in not-compile-time mode; in this mode, compile-file arranges for forms to be evaluated only at load time and not at compile time. When compile-file is in compile-time-too mode, forms are evaluated both at compile time and load time. | <urn:uuid:d048b1d4-29e0-4e20-88df-1ac5106ce4e2> | 2.90625 | 399 | Documentation | Software Dev. | 39.51136 |
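A typical use of eval-when in a file given to compile-file might look like this sketch (the macro and function names are illustrative, not from the specification):

```lisp
;;; In a file processed by COMPILE-FILE:

;; Evaluated at compile time AND arranged for evaluation at load time,
;; so the macro is available both for compiling the rest of this file
;; and for any code loaded later.
(eval-when (:compile-toplevel :load-toplevel :execute)
  (defmacro square (x) `(* ,x ,x)))

;; This top level form can now be compiled, because SQUARE was defined
;; in the compile-time environment by the EVAL-WHEN above.
(defun area-of-square (side)
  (square side))
```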
Nature provides a free lunch, but only if we control our appetites. ~William Ruckelshaus Business Week, 18 June 1990
People's choices impact ecosystems. Development is an obvious example of an impact. But there are many others.
Here are a few actions of people that impact wetlands: runoff of lawn fertilizers & pesticides, winter salting of roads, sidewalks & parking lots, automatic dishwasher detergent, clothes washing detergent, grey water systems, discarded cigarette butts, medications in the potable water supply, soil erosion, mine drainage, careless dumping, weed and insect control programs, dioxins, waterway dredging, mercury toxins, flood control measures, glycol runoff, lead shot for hunting and lead sinkers for fishing, oil & chemical spills, West Nile Virus control programs, feeding wild animals - A Vicious Cycle, releasing alien species into the wild, and new, current & old dams in waterways. General Ecology website
Select one of these actions. (If there is another one that is of particular local interest, you may investigate it)
Describe the action.
Explain the issue including its ecological impact. Show several sides of the argument.
Identify alternatives that achieve the goals of people, while minimizing or perhaps eliminating the negative impact.
Decide: Should the change be made?
How - by law, by regulation or by individual personal choice?
Extend your efforts - Make a VE video about the issue.
(VE Rating - Very educational making a strong use of data and critical thinking skills.)
Excellent examples: Requiem for the Honeybee from Charles Greene CSPAN Student Cam project
Down to the Last DROP from Madison Richards CSPAN Student Cam project
Extend your thinking:
Competition Conundrums - Two wetland birds are presenting challenges.
Investigate one of the issues. What are some solutions? Double Crested Cormorants - OR - Canada Goose
Wetlands - Dragonfly TV
Wetlands: Habitat / Mammals / Birds / Aquatic insects / Plants & trees / Amphibians
Just Ducky - crossword puzzle
about ducks / Name that Duck practice / Wetland Vocabulary Exercise / Wetland food web
Eagles Status Evaluation / Competition Conundrum / Lentic ecosystem or Lotic ecosystem?
Wetland or frog song activity / Wetland Poem Project / Water & Watershed Studies / Water Wars
Map PA Waters / Make a schematic representation
Bats are our Buddies / Bats at the Beach Activity / Firefly Watch - fun project / Monitor Wetland
Map Wetlands in your Community / Environmental Careers / Pennsylvania HS Envirothon
West Eugene Wetlands Case Web Site investigate a conflict about a wetlands / School Habitat Garden Project
Posted 9/2008 by Cynthia J. O'Hora Updated 3/2009
Save a tree - use a digital answer format - Highlight the text. Copy it. Paste it in a word processing document. Save the document in your folder. Answer on the word processing document in an easily read, contrasting color or font. (No yellow, avoid fancy fonts like: Symbols, , ). Save frequently as you work. Enter your name and the date in a document header. Submit the assignment via a class dropbox or an email attachment. Bad things happen. Save a copy of your document in your computer.
Proof your responses. It is funny how speling errors and typeos sneak in to the bets work. Make your own printer paper answer sheet
Pennsylvania Science Anchors
S.A.3. Systems, Models, and Patterns
S4.B.3.2 Biological Sciences Describe, explain, and predict change in natural or human-made systems and the possible effects of those changes on the environment.
S4.B.3.3 Biological Sciences Identify or describe human reliance on the environment at the individual or the community level.
Science NetLinks Benchmark 5 - The Living Environment - How living things function and interact. A. Diversity of Life
"One of the most general distinctions among organisms is between plants, which use sunlight to make their own food, and animals, which consume energy-rich foods. Animals and plants have a great variety of body plans and internal structures that contribute to their being able to make or find food and reproduce. All organisms, including the human species, are part of and depend on two main interconnected global food webs."
D. Interdependence of Life - " In all environments freshwater, marine, forest, desert, grassland, mountain, and others organisms with similar needs may compete with one another for resources, including food, space, water, air, and shelter.
Aligned with Pennsylvania Academic Standards: Reading, Writing, Science & Technology, Ecology & Environment, Mathematics, Geography, Career.
Aligned with National Academic Standards: Technology, Science, Geography. | <urn:uuid:1a3dd8ad-32cf-42d5-9155-629f33e812dc> | 3.515625 | 1,044 | Tutorial | Science & Tech. | 34.258958 |
Biofuels, once seen as a useful way of combating climate change, could actually increase greenhouse gas emissions, say two major new studies.
And it may take tens or hundreds of years to pay back the "carbon debt" accrued by growing biofuels in the first place, say researchers. The calculations join a growing list of studies questioning whether switching to biofuels really will help combat climate change.
Biofuel production has accelerated over the last 5 years, spurred in part by a US drive to produce corn-derived ethanol as an alternative to petrol.
The idea makes intuitive environmental sense - plants take up carbon dioxide as they grow, so biofuels should help reduce greenhouse gas emissions - but the full environmental cost of biofuels is only now becoming clear.
Extra emissions are created from the production of fertiliser needed to grow corn, for example, leading some researchers to predict that the energy released by burning ethanol is only 25% greater than that used to grow and process the fuel.
The new studies examine a different part of biofuel equation, and both suggest that the emissions associated with the crops may be even worse than that.
One analysis looks at land that is switched to biofuel crop production. Carbon will be released when forests are felled or bush cleared, and longer-term emissions created by dead roots decaying.
This creates what Joseph Fargione of The Nature Conservancy and colleagues call a "carbon debt". Emissions savings generated by the biofuels will help pay back this debt, but in some cases this can take centuries, suggests their analysis.
If 10,000 square metres of Brazilian rainforest is cleared to make way for soya beans - which are used to make biodiesel - over 700,000 kilograms of carbon dioxide are released.
The saving generated by the resulting biodiesel will not cancel that out for around 300 years, says Fargione. In the case of peat land rainforest in Indonesia, which is being cleared to grow palm oil, the debt will take over 400 years to repay, he says.
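The payback arithmetic behind these figures can be sketched from the article's own numbers (the annual saving is inferred here, not stated in the article):

```latex
% ~700,000 kg CO2 released per hectare (10,000 m^2) cleared;
% a ~300-year payback implies the biodiesel from that hectare avoids roughly
\frac{700{,}000\ \text{kg}\ \mathrm{CO_2}}{300\ \text{yr}}
\approx 2{,}300\ \text{kg}\ \mathrm{CO_2}\ \text{per hectare per year}
```

Any annual saving much below that figure would stretch the payback well past the quoted 300 years.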
The carbon debts associated with US corn are measured in tens rather than hundreds of years. But the second study suggests that producing corn for fuel rather than food could have dramatic knock-on effects elsewhere.
Corn is used to feed cattle and demand for meat is high, so switching land to biofuel production is likely to prompt farmers in Brazil and elsewhere to clear forests and other lands to create new cropland to grow the missing corn.
When the carbon released by those clearances is taken into account, corn ethanol produces nearly twice as much carbon as petrol.
"The implications of these changes in land use have not been appreciated up until now," says Alex Farrell, at the University of California, Berkeley, US.
Farrell adds that biofuels could still prove useful in the fight against climate change, but using different approaches - such as focusing on crops for both food and fuel, or new technology for generating biofuels from food waste.
Have your say
Thu Feb 07 20:04:17 GMT 2008 by Jd
The full assessment of the impact of biofuels also needs to take into account the reduced frequency of oil spills, environmental damage prevented by not drilling for oil, and the impact of channeling less money to the governments that control oil production.
Of course, the real step forward will be production of biofuels from waste materials rather than primary crops. Everything before that step should also be viewed as a stage of development.
Thu Feb 07 22:58:47 GMT 2008 by Brendan
Jd you are absolutely right! Nothing is a true solution unless the complete lifecycle is considered...
The CSIRO claim they can produce a 'bio-oil' capable of being refined into ethanol, petrol, or diesel from organic waste (household, garden, office, or industrial matter - even landfill!). They are also targeting some of the bigger problems with the lifecycle of the product:
"One of the big issues with making biofuel is that to move the bio-materials around tends to be expensive, it also produces a lot of greenhouse gases...
...What we envisage doing is having small, regional processing facilities close to the source of the bio-material, to convert it into the crude oil, and the oil can then be shipped long distances because it's certainly much easier to move a liquid around than it is to move solid materials." (Quote from here: http://www.abc.net.au/news/stories/2008/02/04/2154292.htm)
or the CSIRO media release: http://www.csiro.au/news/ValuableFuel.html
Personally I feel that the focus of a lot of this sort of research (and the government/business policies driving it) seems to be about finding a 'replacement' which will allow us to continue to live and consume oil at our current, ever-increasing levels...
Research like this gives me hope for the future - there are people out there trying to close the loop :)
What About Oil?
Thu Feb 07 20:22:25 GMT 2008 by Cooley
Well, it is not surprising to me that biofuels produced from feedstocks grown in pristine lands are likely worse than gasoline. What surprises me is these "leading researchers" did not demonstrate the same interest in going upstream on oil. If you are looking ten miles upstream on biofuels and two miles upstream on oil, yes, you might find more problems. Let's pop the hood on oil now guys.
What About Oil?
Thu Feb 07 23:20:15 GMT 2008 by Raven
That's a very salient point, Cooley. How can a proper comparison be made between biofuels and fossil fuels if there haven't been thorough environmental audits done on oil in the first place? Has the substantial effect of oil spills been factored into the oil equation? Any such alcohol spill would appear to disperse far more readily than oil. Also, why are comparisons based on biofuels derived from corn when we know the trend is towards cellulose conversion to alcohol and bio-crude? Is there a fad among researchers for bashing biofuels? And who's doing the bankrolling on this research?
What About Oil?
Mon Feb 11 01:40:35 GMT 2008 by Dann
Oil seepage occurs naturally, whether or not humans are pumping it up to the surface. Earthquakes can open up fissures in the rock overlying oil deposits, causing oil and natural gas to make their way naturally to the surface. This has been happening for millions of years, and there are micro-organisms that are capable of consuming crude oil.
Ethanol, on the other hand, is deadly to micro-organisms (and just about everything else), so a large-scale ethanol spill would in fact be worse for the environment than an oil spill. Whereas oil floats on the surface of water, I imagine alcohol-based biofuels would actually mix with water. An ocean would probably dilute it enough for the effects to be negligible (eventually), but a lake or wetland could be devastated by a biofuel spill.
What About Oil?
Mon Feb 11 06:05:20 GMT 2008 by Raven
Dann, with all respect, I think you are way off the mark on the ecological toxicity of crude oil vs ethanol.
For a start, ethanol is non-residual, whereas crude oil takes a long time to break down and disperse. Sure, a few naturally occurring microorganisms may eat crude oil, but that doesn't mean they are present in the numbers required to get rid of an oil spill quickly.
Ethanol, by contrast, is volatile, so a portion of it readily evaporates into the atmosphere. It also dilutes very readily with water, so the concentration (and toxicity) can be reduced very quickly, especially with regard to a sea spill, whereas oil simply is not water-soluble and does not dilute and disperse in that way.
Also, with land spills, just pouring water will rapidly dilute the harmful ecological effects of ethanol.
Wait until the first ethanol spill, and I'm sure you will find that it's not as bad as an oil spill.
Corn Is Bad For Cattle
Thu Feb 07 20:55:08 GMT 2008 by Ted Krasnicki
Corn is bad for cows, because the latter are ruminants, not grain eaters. The result is poorer nutrition, poorer quality milk, and much higher incidences of disease, forcing greater use of antibiotics. If there is any use for corn, it would be in biofuels. But feeding corn is great for profits.
It is clear that the purpose of science has now become to make profits for industry and commerce, rather than for the benefit of the common good.
Indeed, if people really wanted to do something about cleaning up this world of pollution and fast, it would be to drastically curtail energy consumption, which means lowering standards of living in the high energy consuming nations, the so-called industrialised nations. But if you ask me, the industrialised nations are living way beyond what this planet can safely support.
Corn Is Bad For Cattle
Fri Feb 08 06:20:40 GMT 2008 by Glenn
Science has never been about serving the common good. It is about understanding how the world around us works.
Applied science seeks to take those scientific principles to solve problems faced by society.
What if the Nazis had won; Newton had abandoned science; electric motors had pre-dated steam engines; Darwin had not sailed on the Beagle; Charles II had no interest in science and a young Einstein had been ignored?
EIGHTEENTH-CENTURY scientists thought of electricity and magnetism as substances, "imponderable fluids" whose particles were too small and subtle to be detected by ordinary instruments. In their eyes the two fluids were utterly separate and distinct. It was no more possible to transform electricity into magnetism than turn water into wine (without divine assistance, anyway).
English chemist and physicist Michael Faraday saw it differently. Early in the 1820s, Hans Christian Ørsted and André-Marie Ampère had shown that an electric current moving through a wire generated a magnetic field around the wire. Building on their work, Faraday showed in 1831 that the reverse was also true: moving a wire through ...
Researchers first imagined the buckyball in 1970 and created it in the lab in 1985. But despite its relatively recent discovery here on Earth, this conglomeration of carbon is common in space. In 2010, researchers announced they had detected buckyballs 6500 light-years away. Jan Cami, an astronomer at the University of Western Ontario and researcher at the SETI Institute in Mountain View, Calif., led the team that used the Spitzer Space Telescope to look at their dust cloud home, a planetary nebula named Tc1. He says scientists can find these distant buckyballs by looking at how light from a nearby star changes frequency as it interacts with their jiggling bonds, and they may even use what they know about the buckyballs' behavior to learn about the other weird molecules in the dust cloud.
Buckyballs have turned out to be common in space. "When we discovered it, we weren't quite sure if we were looking at something unique," Cami says. "Quite quickly after our paper came out, researchers found it in many other places." These places include planetary nebulae, lower-radiation nebulae called proto-planetary nebulae, other galaxies, evolved stars, and carbon-rich stars called R Coronae Borealis stars. He estimates that, in objects like planetary nebulae, the particular molecule makes up about half of 1 percent of all of the carbon present.
This section describes how to move a column in the JTable component. Moving is a simple operation that relocates a column from one position to another. To move a column to another position in a JTable, use the moveColumn() method; column indices count from '0', which denotes the first column of the JTable. The moveColumn() method takes the indices of the columns.
Description of program:
This program helps you move a column in a JTable. It creates a table through the JTable class having '4' rows and '3' columns, with a column header and a yellow background color. It then moves a column to a specified position with the help of the moveColumn() method. In the first table, the first column is Name, the second is Course and the third is Subject. After moving, Course is in the first position, Name is in the second position and Subject does not leave its position, so you can see how the first column moves from the first position to the second.
Here is the code of program:
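The original code listing did not survive in this copy. Below is a minimal sketch of the program described above; the class name, row data and the use of a JScrollPane are illustrative assumptions, not the original listing:

```java
import java.awt.Color;
import java.awt.GraphicsEnvironment;
import javax.swing.JFrame;
import javax.swing.JScrollPane;
import javax.swing.JTable;

public class MoveColumnExample {
    public static void main(String[] args) {
        // 4 rows and 3 columns, as in the tutorial (values are illustrative).
        Object[][] data = {
            {"Amit",   "MCA", "Java"},
            {"Sanjay", "MCA", "C"},
            {"Rakesh", "BCA", "C++"},
            {"Vinod",  "BCA", "Perl"}
        };
        Object[] headers = {"Name", "Course", "Subject"};

        JTable table = new JTable(data, headers);
        table.setBackground(Color.YELLOW);

        // Move the column at view index 0 ("Name") to view index 1.
        // The header order becomes: Course, Name, Subject.
        table.moveColumn(0, 1);

        System.out.println(table.getColumnName(0) + ", "
                + table.getColumnName(1) + ", "
                + table.getColumnName(2)); // prints "Course, Name, Subject"

        // Display the table only when a screen is available.
        if (!GraphicsEnvironment.isHeadless()) {
            JFrame frame = new JFrame("Move Column Example");
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.add(new JScrollPane(table)); // JScrollPane makes the header visible
            frame.setSize(300, 150);
            frame.setVisible(true);
        }
    }
}
```

Compile and run with `javac MoveColumnExample.java && java MoveColumnExample`; the println confirms the new column order even in a headless environment.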
Output of program:
After moving a column:
This image of electric blue noctilucent clouds was taken by astronaut Don Pettit while he was aboard the ISS.
Courtesy of Don Pettit and NASA TV
History of Observation of Noctilucent Clouds
Have you ever seen a noctilucent or "night-shining" cloud? Don't worry if you haven't - they are a fairly recent "discovery" in the world of science. Here's what we know!
The first reports of these eerie clouds came in the summer of 1885, from observers in northern Europe and Russia. The first photos of the clouds were taken in the late 1880s.
In the early 1900s, many scientists were trying to figure out what made these clouds form in the Earth's atmosphere. To find out, scientists started looking for the clouds more regularly - first in Europe in the late 1950s, and in North America in the 1960s. The first rocket was launched into a noctilucent cloud in 1962.
More recent observations, made from the ground and from satellites orbiting the Earth, found that noctilucent clouds are mainly made of water ice. Exactly how they form will be researched by the AIM satellite mission, to be launched in 2006.
Crews aboard the International Space Station see noctilucent clouds while orbiting the Earth. You can be an observer of noctilucent clouds too and share that information with others on the Internet!
What is SDL_perl?
SDL_Perl is a perl interface to the Simple DirectMedia Layer. It is composed of both an XS wrapper to the SDL libraries and a series of Perl modules that export SDL functionality in an object-oriented fashion.
One of the biggest benefits of using SDL is that it allows portable media applications to be written without having to be concerned with specific implementations of media libraries for each target platform. Bringing Perl into the picture takes the portability one step further, allowing media-rich applications to be written in a high-level language that can be targeted to a number of platforms. While programming using SDL requires knowledge of C and access to a C compiler, using SDL_perl does not. This greatly decreases the amount of time it takes to get something up on the screen and working.
Using SDL does not require relatively heavyweight graphical environments and libraries, like GNOME or KDE. Programming using SDL could be considered a little more bare-bones and "to the metal" than using a full-fledged GUI toolkit. There are no built in widgets or controls, but rather a plain drawing canvas and multimedia functionality. But it does provide a lot of room for tinkering and general hacking. While it may take a few more lines of code to create a simple GUI widget like a button, you are in no way restricted to what the toolkit designers consider to be the best way to have something work, look, or function. You may find that many tasks that programming games or captive graphical user interfaces require are easier and more straight forward with SDL_perl than using a GUI toolkit.
Using SDL_perl will give you some insight into the SDL library, but because SDL_perl tries to encapsulate and group much of the SDL API into a series of objects and do things in a more object-oriented, perl-ish fashion, it is not a one-to-one mapping. You don't need to know how SDL works or how to use it in C in order to use SDL_perl, but it does have some of its own idiosyncrasies.
In order to install and use SDL_perl, the SDL libraries need to be installed first. In addition to the base SDL library, contributors have written various kinds of extension libraries that are designed to provide extra functionality to SDL programs. Many distributions are including them as part of the standard installation, or they may be part of an "extras" repository. Source tarballs, source RPMs, and binary RPMs are usually available for most of these components on their respective pages. If your distribution provides packages for these and SDL_perl, you can skip this section and move right on to the next one.
Of course, you'll need Perl installed, it's good to have at least Perl 5.8.5.
To build SDL_perl, the target system will need a C compiler (the same one used to build perl proper), plus the Module::Build and Test::Simple modules. Test::Simple comes with standard Perl installations, and Module::Build can be downloaded from CPAN. Your distribution may provide a packaged installation. Be sure to install any related -devel packages also, which is where the various headers necessary for compilation are located.
This tutorial will make use of the following SDL components:
- SDL — the base, core SDL functionality
- SDL_image — adds support for additional image file formats
- SDL_gfx — provides image manipulation functions and some drawing primitives
- SDL_ttf — TrueType font support
- SDL_mixer — audio file mixing
The following SDL libraries should also be installed (in order to have complete SDL_perl functionality), but their use with SDL_perl won't be covered in this tutorial. Should your application require this kind of functionality, there are many Perl modules on CPAN which may be of use (I personally find Perl's IO::Socket family of modules to be more than sufficient for most network programming tasks).
- SDL_net — portable abstraction for network communication
- If you have OpenGL development files installed, OpenGL support will be included also.
At the time of this writing, the latest version of SDL_perl is 2.1.3 and can be downloaded from CPAN at http://search.cpan.org/dist/SDL_Perl/.
SDL_perl uses the newer Module::Build system rather than the traditional ExtUtils::MakeMaker for building and installation. After untarring the SDL_perl distribution, cd into the resulting directory and execute the Build.PL script:
$ perl ./Build.PL
Checking whether your kit is complete...
Looks good
Creating new 'Build' script for 'SDL_Perl' version '2.1.3'
This will generate a file named Build, which does the compilation and installation heavy lifting. Build by executing the build script:
$ ./Build
It will run through various perl-module-specific build tasks and will compile the XS code into an object file. A few warnings might be issued; if the build finishes without fatal errors, these warnings can be ignored.
Once the build is finished, the test scripts should be run, and then the modules and support files need to be installed into the standard Perl module directories on your system. To do this, give the "test" and "install" arguments to the Build script (installation will most likely need to be run as root, depending on how/where your Perl is installed):
$ ./Build test
$ sudo ./Build install
Also described as new by Gerstäcker, 1862: 502.
Currently subspecies of Dorylus nigricans: Emery, 1895l: 710; Stitz, 1911b: 375; Santschi, 1935b: 264.
Material in copal: DuBois, 1998A: 137.
Molecular Biology and Genetics
Barcode data: Dorylus molestus
There are 2 barcode sequences available from BOLD and GenBank, covering the barcode region Cytochrome oxidase subunit 1 (COI or COX1). See the BOLD taxonomy browser for more complete information about these specimens and other sequences.
Statistics of barcoding coverage: Dorylus molestus
Public Records: 2
Specimens with Barcodes: 2
Species With Barcodes: 1
Males, or drones, of all Dorylus species are so-called "sausage flies" and are among the largest ant morphs. Some Dorylus molestus queens are the largest known extant ants. Queens typically grow to 5.2 centimetres (2.0 in) but can reach 8 centimetres (3.1 in).
The size of D. molestus queens allows the species to hold the world record in egg laying. Workers (sterile females in the presence of the only living queen) range from 0.3–1.1 centimetres (0.12–0.43 in).
Huge and specialised soldier morphs (permanently sterile females) provide protection during migration raids.
D. molestus is an East African surface-swarm-raiding army ant. The species is important to its ecology; it supports myrmecophile fauna, especially East African birds that attend its raids and depend on the ants' presence in their habitat. Its predatory habits contribute to (mainly) arthropod biodiversity. The ants attack all animals that are unable to flee and smaller animals that react too slowly, including other Dorylus groups. They invade the nests of other social species, such as termites.
D. molestus builds temporary surface bivouac nests, which are regularly attacked by a smaller subterranean army ant of the same genus, Dorylus (Typhlopone) fulvus badius. A colony engages in foraging raids and also in straight migration raids. Colonies that lose their sole queen sometimes fuse with other colonies or produce (haploid) males before dying out.
The species is capable of surviving in environments other than forests, but it is not yet known whether it can survive easily without even a small nearby forest.
This ant species inspired early swarm intelligence studies.
grade levels: 9-12
achievement standards for this lesson
Earthquakes release energy in two different forms that travel through the earth differently and carry different amounts of energy. Earthquake energy is measured by an instrument known as a seismograph, which records the back-and-forth and side-to-side motion of the earth's surface as seismic waves pass by. Seismic body waves move through rock in a compression-relaxation mode, analogous to sound waves traveling through air. These waves travel fastest through rock. Earthquake magnitude determined from measurement of body wave motion (MB) is based on the maximum observed ratio of wave amplitude to period for body waves arriving in the first 5 seconds of a record (U.S. Geological Survey practice). Seismic surface wave energy is carried in a side-to-side motion and arrives slightly later than body waves. Large earthquakes tend to generate more surface wave amplitude relative to body wave amplitude; the opposite is true for small events. Thus, earthquake magnitude determined from surface wave energy (MS) depends on the amount of total energy released at the source, or epicenter. Local magnitude (ML) is the commonly quoted earthquake magnitude originally defined by Richter. Because this measure of magnitude was originally defined from the amplitude measured on a specific type of seismograph at a specific distance (100 km) from an earthquake and for a specific region (California), empirical calibration curves must be used to convert seismograph information at an arbitrary distance in any given region to that expected at 100 kilometers from a California quake, for different types of seismographs. For a given event, MB, MS, and ML are generally different. Approximate relationships between these three scales of earthquake magnitude have been deduced as follows:

MB = 1.7 + 0.8 ML - 0.01 ML^2
MB = 0.56 MS + 2.9

The importance of determining magnitude is that it permits earthquakes to be classified on the basis of the elastic energy released at the epicenter. The relationship between magnitude and energy released (E) is:

log E = 12.24 + 1.44 MS

Thus, a one-unit increase in magnitude implies a 30-fold increase in total elastic energy release!
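The 30-fold figure follows directly from the magnitude-energy relation; a quick check of the arithmetic:

```latex
\log E(M_S + 1) - \log E(M_S) = 1.44\,[(M_S + 1) - M_S] = 1.44
\quad\Rightarrow\quad
\frac{E(M_S + 1)}{E(M_S)} = 10^{1.44} \approx 27.5 \approx 30
```

The constant 12.24 cancels in the ratio, so the factor is the same at every magnitude.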
Print the map, by clicking below, which shows Idaho earthquake magnitudes measured by seismographs between 1935 and 1993. Turn off the appropriate layers to show earthquakes of greater than magnitude 3.5 but less than 4.5 (expressed as body wave magnitude, MB); of greater than magnitude 4.5 and less than 5; and of greater than magnitude 5. Magnitudes greater than 6 are shown as large symbols. Note that because body wave magnitude is reported, the maximum values for the largest events may be smaller than commonly reported as MS. Print Table 1 & Figure 7, then complete the activities below.
Table 1 lists Idaho earthquakes with surface wave magnitudes greater than 5 that have occurred between 1884 and 1994. In Table 2, enter the number of earthquakes having magnitudes greater than or equal to each value listed; plot these values against magnitude in a graph like that shown in Figure 7. As an example, the first and last two values have already been entered in Table 2 and plotted in Figure 7.
These are links to access the handouts and printable materials:
geol3ho.pdf | table1 | Figure 7 | CAD map6
1. Using the map, which county and which geologic province have had the most earthquakes?
2. Excluding areas outside the state, where do most felt earthquakes (M>4) occur in Idaho?
3. Considering only earthquakes with magnitudes greater than 4.5, where do these occur relative to Holocene fault activity?
Lesson Plan provided by Vita
Idaho Achievement Standards (as of 7/2001) met by completing this activity:
"Frankenstorm" is the most popular name for the upcoming super-storm due to hit the eastern seaboard with floods tomorrow. As Dot Earth's Andrew Revkin points out, this nickname makes it sound like the storm was created by humans and our climate-changing ways. It's also a reference to the fact that this storm is actually the result of three storms being sewn together to create a monster, as it were.
But is such a storm really the result of human-authored climate change, or just a natural "perfect storm"?
Over on Dot Earth, Revkin writes:
But what is the role, if any, of greenhouse-driven global warming in this kind of rare system?
It's easy to say, as some climatologists have, that "climate change is present in every single meteorological event." . . . Some climate scientists are telling me this event is precisely what you'd expect following a summer in which much of the Arctic Ocean was open water.
But there remains far too much natural variability in the frequency and potency of rare and powerful storms - on time scales from decades to centuries – to go beyond pointing to this event being consistent with what's projected on a human-heated planet.
While the echo of Frankenstein in that Twitter moniker [#Frankenstorm] can imply this is a human-created meteorological monster, it's just not that simple.
The rest of Revkin's article is a thoughtful, fascinating explanation of all the possible causes for this storm. It's worth reading this afternoon, as the east coast battens down the hatches and prepares for floods. Read it on Dot Earth. | <urn:uuid:fc24f168-e287-4cd5-bb37-8cdf6e3f807f> | 2.828125 | 338 | Truncated | Science & Tech. | 54.998049 |