text large_stringlengths 148 17k | id large_stringlengths 47 47 | score float64 2.69 5.31 | tokens int64 36 7.79k | format large_stringclasses 13 values | topic large_stringclasses 2 values | fr_ease float64 20 157 |
|---|---|---|---|---|---|---|
Static Sub DeleteTextures ( N As Integer, Textures As Integer() )
Delete named textures.
N: Specifies the number of textures to be deleted.
Textures: Specifies an array of textures to be deleted.
DeleteTextures deletes the textures named by the elements of the array Textures.
After a texture is deleted, it has no contents or dimensionality,
and its name is free for reuse (for example by Gl.GenTextures).
If a texture that is currently bound is deleted, the binding reverts
to 0 (the default texture).
DeleteTextures silently ignores 0's and names that do not correspond to
existing textures.
Gl.INVALID_VALUE is generated if N is negative.
See original documentation on OpenGL website | <urn:uuid:76c65e38-c481-4d85-b3f3-3b7edcec7a87> | 2.953125 | 176 | Documentation | Software Dev. | 42.005128 |
Only 1 species: Symbion pandora
This animal was discovered in 1993 and was so unique it was assigned to an entirely new phylum. It lives on the mouthparts of the Norway lobster Nephrops norvegicus. The organism has a very complex life cycle, with a number of well-defined sessile and free-swimming stages with different morphologies. The largest and best-known phase is the feeding stage. At this stage it is 350 µm long and is attached by an adhesive disc to the lips of the lobster. It feeds on scraps of the lobster's food using a mouth surrounded by a ring of cilia. It excretes via an anus next to the mouth ring. The feeding stage continually produces inner buds which replace the feeding structures. Both asexual and sexual reproduction can occur.
| <urn:uuid:cfea4f99-e9d9-4410-b7cc-746c147a5f83> | 3.265625 | 170 | Knowledge Article | Science & Tech. | 58.854275 |
(Submitted April 09, 1997)
I'm a student of physics from Canada, and I was wondering how I could find out
about quasars on a very detailed level.
Since you know about black holes, I assume you know that quasars are a
subset of a class of galaxies called active galactic nuclei (AGN) that
are probably powered by a supermassive black hole. If you haven't
already, check out "Active Galaxies" under "Advanced High-Energy
Astrophysics" at our Learning Center. If you are mostly interested in
the physics of accretion onto a black hole, the standard text is
"Accretion Processes in Astrophysics" by Frank, King and Raine.
On the other hand, if you are more interested in AGN in general, the
basic textbook is "The Astrophysics of Gaseous Nebulae and Active
Galactic Nuclei" by Osterbrock (the emphasis here is on optical spectra
but it contains a lot of the physics of photoionization which is
important in AGN).
Quasars are AGN that are very luminous and radio-bright, and we think
that in general they are radio-bright because we are seeing synchrotron
emission from a jet of relativistic particles coming from the AGN. In
radio-quiet AGN, either the jet is not present or it is directed away
from us (since the particles are relativistic, the emission is beamed
along the direction of the jet). There are some nearby radio galaxies
that may be low-luminosity descendants of quasars, that show radio jets
and evidence of many high energy particles (see books below). Blazars
are an extreme case of quasars where we think we are looking directly
into the jet.
A couple of books and articles on Radio Galaxies and Jets are:
Chapter 13 of "Galactic and Extragalactic Radio Astronomy",
edited by G.L. Verschuur and K.I. Kellermann, 1988, Springer-Verlag.
"Beams and Jets in Astrophysics", by P.A. Hughes,
c. 1991 Cambridge University Press
"Extragalactic Radio Jets" in Ann. Reviews of Astrophysics, 1984, 22:319-58
by A.H. Bridle and R.A. Perley
Andy Ptak and Jonathan Keohane
-- for Imagine the Universe! | <urn:uuid:d590a009-1f57-4857-ae9c-e869349c16f6> | 3.328125 | 534 | Q&A Forum | Science & Tech. | 49.874348 |
The airplane above is traveling in a straight line at a constant speed. Its speed
is the distance traveled divided by the time interval. If we know how far
it is traveling in a given time we can determine its speed. We can
calculate this speed if we have two images of the plane separated in time.
A timer is started when the first image of the plane is taken.
Two seconds later the next image is taken, after the plane has traveled 400 meters. Thus, the airplane's speed is 200 meters per second. Notice the distance is measured from the same spot on the plane, in this example from the nose.
In the above example the plane traveled in a straight line during the time we tracked it. What happens if the object we are tracking does not travel in a straight line? The average speed we calculate is incorrect. To visualize this consider tracking a car moving over a curved road. Again we start our clock when we take our first image.
The second image is taken 15 minutes later. If the distance is measured from the front of the car in the first image to the front of the car in the second image, it travels a distance of 4 miles. Thus its apparent speed is 4/15 miles per minute or 16 miles per hour.
Of course the car did not travel in a straight line! To calculate the speed of the
car we have to know the path the car traveled.
For a car this is easy, since a car typically drives
along the road. So, we can calculate the distance the car traveled by measuring
the length of the roadway between the two car images. The length of road the car traveled in 15 minutes is
8 miles, so the true average speed of the car is 8/15 miles per minute, or 32 miles per hour.
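The calculations above can be sketched in a few lines of Python (an illustration added here, not part of the original lesson; the helper name `average_speed` is mine):

```python
def average_speed(distance, time):
    # Average speed is the distance traveled divided by the time interval.
    return distance / time

# The airplane: 400 meters in 2 seconds.
plane_speed = average_speed(400, 2)      # 200.0 meters per second

# The car: both images are 15 minutes (0.25 hours) apart.
hours = 15 / 60
apparent_mph = average_speed(4, hours)   # straight-line distance: 16.0 mph
true_mph = average_speed(8, hours)       # distance along the road: 32.0 mph
```

The same division gives an "apparent" or a "true" speed depending only on which distance is fed in.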
A problem we have in tracking clouds is that we do not know the path they take. In fact,
if we can determine the path they take we can determine the wind direction. Watch the image
of the car above and imagine that you did not have the roadway to help determine the car's path. A
good way to determine the path of the car is to track two images of the car that are close in time.
In tracking clouds we do not have a road to help us determine their path. So we view two images of a cloud that are close in time. Time intervals of more than 15 minutes are not good for tracking clouds; short time intervals (1 to 5 minutes) are best. This is what is done when using satellite images to track clouds. In tracking clouds we have additional
problems besides not knowing the actual path. View the following animation of a cloud and determine what additional problems we encounter when tracking clouds to determine wind speed and direction.
Continue with tracking clouds using weather satellites. | <urn:uuid:9bca1848-63e6-43c7-95db-7af33b17bf5b> | 3.78125 | 575 | Tutorial | Science & Tech. | 63.426039 |
A formula to accompany the Birthday Problem
Let's look at the probabilities a step at a time.
Running this through a computer gives the chart below. Notice that
a probability of over .5 is obtained after only 23 people!
- For one person, there are 365 distinct birthdays.
- For two people, there are 364 different ways that
the second could have a birthday without matching the first.
- If there is no match after two people, the third person
has 363 different birthdays that do not match the other two.
So, the probability of a match is 1 - (365)(364)(363)/(365)(365)(365).
- This leads to the following formula for calculating the probability of a match
with N birthdays is 1 - (365)(364)(363)...(365 - N + 1)/(365)^N.
Notice that the probability is above .9
before the sample size reaches even 45.
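The formula above can be checked directly; here is a short Python sketch (the function name is mine, not from the original page):

```python
from math import prod

def p_match(n, days=365):
    # 1 - (365)(364)...(365 - n + 1) / 365^n
    p_no_match = prod(days - k for k in range(n)) / days ** n
    return 1 - p_no_match
```

Evaluating it reproduces both observations: p_match(23) is just over 0.5, and p_match(45) is above 0.9.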
Also, take a look at Lionel Mordecai's MathCAD programs.
The algorithms are in an RTF file. It includes a nice graph of the output. He uses
the programs in his statistics class.
Return to the Introduction.
Send comments to George Reese | <urn:uuid:aca32285-53dd-42e1-b19a-0d7edaf601fd> | 3.203125 | 255 | Tutorial | Science & Tech. | 75.163135 |
13 Physical Constants
This chapter describes the physical constants, such as the speed of light, c, and gravitational constant, G, provided by the Science Collection. The values are available in different unit systems, including the standard MKSA system (meters, kilograms, seconds, amperes) and the CGSM system (centimeters, grams, seconds, gauss), which is commonly used in Astronomy.
The constants described in this chapter are defined in the constants sub-collection of the Science Collection. All of the modules in the constants sub-collection can be made available using the form:
The individual modules in the constants sub-collection can also be made available using any of the following forms:
(require (planet williams/science/constants/cgs-constants))
(require (planet williams/science/constants/cgms-constants))
(require (planet williams/science/constants/mksa-constants))
(require (planet williams/science/constants/mks-constants))
(require (planet williams/science/constants/num-constants))
13.1 Fundamental Constants
The speed of light in vacuum, c.
The permeability of free space, μ0.
The permittivity of free space, ε0.
Planck’s constant, h.
Planck’s constant divided by 2π, ћ.
Avogadro’s number, Na.
The molar charge of 1 Faraday.
Boltzmann’s constant, k.
The molar gas constant, R0.
The standard gas volume, V0.
The Stefan-Boltzmann radiation constant, σ.
The magnetic field of 1 Gauss.
13.2 Astronomy and Astrophysics
The length of 1 astronomical unit (mean earth-sun distance), au.
The gravitational constant, G.
The distance of 1 light-year, ly.
The distance of 1 parsec, pc.
The standard gravitational acceleration on Earth, g.
The mass of the Sun.
13.3 Atomic and Nuclear Physics
The charge of the electron, e.
The energy of 1 electron volt, eV.
The unified atomic mass, amu.
The mass of the electron, me.
The mass of the muon, m_μ.
The mass of the proton, mp.
The mass of the neutron, mn.
The electromagnetic fine structure constant, α.
The Rydberg constant, Ry, in units of energy. This is related to the Rydberg inverse wavelength R∞ by Ry = hcR∞.
The Bohr radius, a0.
The length of 1 angstrom.
The area of 1 barn.
The Bohr magneton, μB.
The nuclear magneton, μN.
The absolute value of the magnetic moment of the electron, μe. The physical magnetic moment of the electron is negative.
The absolute value of the magnetic moment of the proton, μp.
The Thomson cross section, σT.
The electric dipole moment of 1 Debye, D.
13.4 Measurements of Time
The number of seconds in 1 minute.
The number of seconds in 1 hour.
The number of seconds in 1 day.
The number of seconds in 1 week.
13.5 Imperial Units
The length of 1 inch.
The length of 1 foot.
The length of 1 yard.
The length of 1 mile.
The length of 1 mil (1/1000 of an inch).
13.6 Speed and Nautical Units
The speed of 1 kilometer per hour.
The speed of 1 mile per hour.
The length of 1 nautical mile.
The length of 1 fathom.
The speed of 1 knot.
13.7 Printers Units
The length of 1 printer’s point (1/72 inch).
The length of 1 TeX point (1/72.27 inch).
13.8 Volume, Area and Length
The length of 1 micron.
The area of 1 hectare.
The area of 1 acre.
The volume of 1 liter.
The volume of 1 US gallon.
The volume of 1 Canadian gallon.
The volume of 1 UK gallon.
The volume of 1 quart.
The volume of 1 pint.
13.9 Mass and Weight
The mass of 1 pound.
The mass of 1 ounce.
The mass of 1 ton.
The mass of 1 metric ton (1000 kg).
The mass of 1 UK ton.
The mass of 1 troy ounce.
The mass of 1 carat.
The force of 1 gram weight.
The force of 1 pound weight.
The force of 1 kilopound weight.
The force of 1 poundal.
13.10 Thermal Energy and Power
The energy of 1 calorie.
The energy of 1 British Thermal Unit, btu.
The energy of 1 therm.
The power of 1 horsepower.
13.11 Pressure
The pressure of 1 bar.
The pressure of 1 standard atmosphere.
The pressure of 1 torr.
The pressure of 1 meter of mercury.
The pressure of 1 inch of mercury.
The pressure of 1 inch of water.
The pressure of 1 pound per square inch.
13.12 Viscosity
The dynamic viscosity of 1 poise.
The kinematic viscosity of 1 stokes.
13.13 Light and Illumination
The luminance of 1 stilb.
The luminous flux of 1 lumen.
The illuminance of 1 lux.
The illuminance of 1 phot.
The illuminance of 1 footcandle.
The luminance of 1 lambert.
The luminance of 1 footlambert.
13.14 Radioactivity
The activity of 1 curie.
The exposure of 1 roentgen.
The absorbed dose of 1 rad.
13.15 Force and Energy
The SI unit of force, 1 Newton.
The force of 1 dyne = 10^-5 Newton.
The SI unit of energy, 1 Joule.
The energy of 1 erg = 10^-7 Joule.
13.16 Prefixes
The constants are dimensionless scaling factors.
13.17 Physical Constants Example
The following program demonstrates the use of the physical constants in a calculation. In this case, the goal is to calculate the range of light travel times from Earth to Mars.
The required data is the average distance of each planet from the Sun in astronomical units (the eccentricities and inclinations of the orbits will be neglected for the purpose of this calculation). The average radius of the orbit of Mars is 1.52 astronomical units and for the orbit of Earth it is 1 astronomical unit (by definition). These values are combined with the MKSA values for the constants for the speed of light (m/s) and the length of an astronomical unit (m) to produce a result for the shortest and longest light travel time in seconds. The figures are converted into minutes before being displayed.
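Since the Scheme source is not reproduced here, the following Python sketch performs the same calculation; the MKSA values for c and the astronomical unit are standard CODATA/GSL values assumed here, not read from the Science Collection:

```python
c = 2.99792458e8         # speed of light in vacuum, m/s
au = 1.49597870691e11    # length of 1 astronomical unit, m

r_earth = 1.00 * au      # mean orbital radius of Earth
r_mars = 1.52 * au       # mean orbital radius of Mars

t_min = (r_mars - r_earth) / c / 60   # closest approach, in minutes
t_max = (r_mars + r_earth) / c / 60   # opposite sides of the Sun, in minutes

print("Light travel time from Earth to Mars:")
print(f"min = {t_min:.1f} minutes")
print(f"max = {t_max:.1f} minutes")
```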
Here is the output from the program.
Light travel time from Earth to Mars:
min = 4.3 minutes
max = 21.0 minutes | <urn:uuid:a9438fee-ebf6-48cd-b6c2-6768a413f211> | 3.546875 | 1,538 | Documentation | Science & Tech. | 77.35625 |
Meteorology: Understanding the Atmosphere Ackerman and Knox
Smoke From Central American Fires
This loop of daily (06 - 15 May 1998) GOES-8 0.65 micron visible imagery shows a widespread smoke pall over the Gulf of Mexico and the surrounding regions (note the hazy appearance of the smoke over much of the Gulf and the adjacent Gulf Coast states). This pall formed as smoke from biomass burning across southern Mexico and parts of Central America drifted northward and northeastward over a period of several days.
Persistent south/southwesterly flow has transported the smoke over the Gulf of Mexico, impacting the Gulf Coast states. Reduced visibility (as low as 2-3 miles) has been reported in Florida, Louisiana and Texas. A public health alert was issued on 12 May 1998 for portions of Texas due to the large amount of smoke particulate matter which has been transported into the region.
(this 10-image Java animation sequence will take a minute or two to load...) | <urn:uuid:bec0eb5a-9d88-4132-b15e-40d3fa57520f> | 3.359375 | 202 | Knowledge Article | Science & Tech. | 47.565 |
C was made to make writing a compiler easy. It does a LOT of stuff based on that one principle. Pointers only exist to make writing a compiler easier, as do header files. Many of the things carried over to C++ are based on compatibility with these features implemented to make compiler writing easier.
It's a good idea actually. When C was created, C and Unix were kind of a pair. C ported Unix, Unix ran C. In this way, C and Unix could quickly spread from platform to platform whereas an OS based on assembly had to be completely re-written to be ported.
The concept of specifying an interface in one file and the implementation in another isn't a bad idea at all, but that's not what C header files are. They are simply a way to limit the number of passes a compiler has to make through your source code and allow some limited abstraction of the contract between files so they can communicate.
These items, pointers, header files, etc... don't really offer any advantage over another system. By putting more effort into the compiler, you can compile a reference object as easily as a pointer to the exact same object code. This is what C++ does now.
C is a great, simple language. It had a very limited feature set, and you could write a compiler without much effort. Porting it is generally trivial! I'm not trying to say it's a bad language or anything, it's just that C's primary goals when it was created may leave remnants in the language that are more or less unnecessary now, but are going to be kept around for compatibility.
It seems like some people don't really believe that C was written to port Unix, so here: (from)
The first version of UNIX was written
in assembler language, but Thompson's
intention was that it would be written
in a high-level language.
Thompson first tried in 1971 to use
Fortran on the PDP-7, but gave up
after the first day. Then he wrote a
very simple language he called B,
which he got going on the PDP-7. It
worked, but there were problems.
First, because the implementation was
interpreted, it was always going to be
slow. Second, the basic notions of B,
which was based on the word-oriented
BCPL, just were not right for a
byte-oriented machine like the new
PDP-11.
Ritchie used the PDP-11 to add types
to B, which for a while was called NB
for "New B," and then he started to
write a compiler for it. "So that the
first phase of C was really these two
phases in short succession of, first,
some language changes from B, really,
adding the type structure without too
much change in the syntax; and doing
the compiler," Ritchie said.
"The second phase was slower," he said
of rewriting UNIX in C. Thompson
started in the summer of 1972 but had
two problems: figuring out how to run
the basic co-routines, that is, how to
switch control from one process to
another; and the difficulty in getting
the proper data structure, since the
original version of C did not have
structures.
"The combination of the things caused
Ken to give up over the summer,"
Ritchie said. "Over the year, I added
structures and probably made the
compiler code somewhat better --
better code -- and so over the next
summer, that was when we made the
concerted effort and actually did redo
the whole operating system in C."
Here is a perfect example of what I mean. From the comments:
Pointers only exist to make writing a compiler easier? No. Pointers exist because they're the simplest possible abstraction over the idea of indirection. – Adam Rosenfield (an hour ago)
You are right. In order to implement indirection, pointers are the simplest possible abstraction to implement. In no way are they the simplest possible to comprehend or use. Arrays are much easier.
The problem? To implement arrays as efficiently as pointers you have to pretty much add a HUGE pile of code to your compiler.
There is no reason they couldn't have designed C without pointers, but with code like this:
while(dest[i] = src[i]) i++;
it will take a lot of effort (on the compiler's part) to factor out the explicit i+src and i+dest additions and make it create the same code that this would make:
while(*(dest++) = *(src++))
Factoring out that variable "i" after the fact is HARD. New compilers can do it, but back then it just wasn't possible, and the OS running on that crappy hardware needed little optimizations like that.
Now few systems need that kind of optimization (I work on one of the slowest platforms around--cable set-top boxes, and most of our stuff is in Java) and in the rare case where you might need it, the new C compilers should be smart enough to make that kind of conversion on their own. | <urn:uuid:5c73ef6d-41a5-448c-86a8-79d535332fef> | 3.0625 | 1,066 | Q&A Forum | Software Dev. | 58.854183 |
Many hopes are placed on the hydrogen economy, but for all those hopes, there sure aren't a lot of good ways to get hydrogen without burning a bunch of fossil fuels. But research being done at the University of Minnesota has efficiently created hydrogen from non-volatile sources such as Soybean Oil and dissolved glucose. This new process does away with the need to convert raw bio-mass to something volatile (ethanol / methane / etc) in order to produce hydrogen.
Small droplets of the non-volatile biomass are sprayed onto a super-hot metal catalyst that converts the carbohydrates (glucose) to carbon monoxide and hydrogen extremely rapidly. The breakup of the molecules produces a lot of heat (think of burning vegetation) which actually keeps the catalyst hot so it requires no external heating. The hydrogen can then be captured for fuel, and the CO can either be captured and used as a mixing agent for the combustion of the hydrogen, or converted to CO2 (which is the CO2 that the vegetation fixed in the first place, so it is a carbon neutral process.)
If this can be scaled up and cost effectively utilized, it could easily be the technology that makes the hydrogen economy an actual possibility. And if it could be adapted to convert cellulosic biomass, like agricultural and yard waste, into hydrogen and CO2, then we would have a technology that would change the face of the world forever.
| <urn:uuid:5a54d779-81ed-41eb-9119-e53b724f1f24> | 3.921875 | 294 | Personal Blog | Science & Tech. | 29.085856 |
Head First C# Code: Chapter 3, Objects Get Oriented
Every program you write solves a problem.
When you're building a program, it's always a good idea to start by thinking about what problem your program's supposed to solve. That's why objects are really useful. They let you structure your code based on the problem it's solving, so that you can spend your time thinking about the problem you need to work on rather than getting bogged down in the mechanics of writing code. When you use objects right, you end up with code that's intuitive to write, and easy to read and change. | <urn:uuid:d99350c8-2c30-4deb-8e94-51a878e8ff55> | 2.828125 | 125 | Truncated | Software Dev. | 64.405248 |
Compiled by: Mats Lindegarth, Sweden
1. Extract from OSPAR Case Reports:
Initial List of Threatened and/or Declining Species & Habitats in the OSPAR Maritime Area (OSPAR 2006)
OSPAR definition for habitat mapping
“Maerl” is a collective term for several species of calcified red seaweed (e.g. Phymatolithon calcareum, Lithothamnion glaciale, Lithothamnion corallioides and Lithophyllum fasciculatum) which live unattached on sediments. In favourable conditions, these species can form extensive beds, typically 30% cover or more, mostly in coarse clean sediments of gravels and clean sands or muddy mixed sediments, which occur either on the open coast or in tide-swept channels of marine inlets, where it grows as unattached nodules or ‘rhodoliths’. Maerl beds have been recorded from a variety of depths, ranging from the lower shore to 30m depth. As maerl requires light to photosynthesize, depth is determined by water turbidity. In fully marine conditions the dominant species is typically P. calcareum, whilst under variable salinity conditions such as sealochs, beds of L. glaciale may develop. Maerl beds have been recorded off the southern and western coasts of the British Isles, north to Shetland, in France and other western European waters.
OSPAR management considerations
The main management measure which would assist the conservation of this habitat is protection from physical damage. This would require halting direct extraction from maerl beds. A recently concluded four year EU project on maerl in Europe has recommended a presumption of protection of all maerl beds as they are effectively non-renewable resources. Other proposals from this work include the prohibition on the use of towed gear on maerl grounds, moratoria on the issue of further permits for the siting of aquaculture units above maerl grounds.
2. Additional HELCOM information
2.1. Description of the habitat
Maerl beds are benthic habitats consisting of unattached particles (maximum diameter ≈5cm) of calcareous red algae of the genera Lithothamnion and Phymatolithon in gravel and sand. Areas where maerl occur are generally well ventilated with low levels of turbidity at depths of 17-22 m.
2.2. Distribution (past and present)
Known areas where maerl beds occur are on offshore banks in the Kattegat (e.g. Lilla Middelgrund and Fladen). The presence of dead maerl at some offshore banks indicates that the habitat must have been more widespread in the past. Maerl beds occur patchily but regularly under similar environmental conditions in full marine areas.
2.3. Importance (sub-regional, Baltic-wide, global)
Because of their restricted distribution, maerl beds of the Kattegat are considered to be of Baltic-wide importance in the HELCOM area. Animals associated with maerl beds and their surroundings include many rare decapod crustaceans, such as Corystes cassivelaunus and Thia scutellata, and echinoderms, such as Ophiothrix fragilis and Ophiocomina nigra.
2.4. Status of threat/decline
The status of threat and/or decline is poorly known. Increased pressure from offshore developments (e.g. wind-farm development) may affect the distribution of the habitat. From a Baltic perspective the biotope is rare and must therefore be considered “potentially endangered”. Nomination of maerl beds to be placed on the OSPAR list cited sensitivity, ecological significance and decline. Information was also provided on threat (for further information see OSPAR 2006).
2.5. Threat/decline factors
Extraction, offshore wind-farms, destructive fishing methods and eutrophication causing increased turbidity.
2.6. Options for improvement
A biotope inventory and a compilation of existing data will increase the knowledge about the distribution of the habitat in the Kattegat. This will allow detailed mapping and therefore more appropriate assessments of potential environmental impacts to the habitat with respect to extraction, offshore installations and fishing. As recommended within OSPAR (OSPAR 2006), protection from any physical damage would be the most important management measure.
Maerl beds on offshore banks are part of the European Habitats Directive Annex I Habitats “1110 (Sandbanks which are slightly covered by sea water all the time)”. As such they are considered to be of community interest which requires the designation of special areas of conservation on a European scale.
HELCOM (1998). Red List of Marine and Coastal Biotopes and Biotope Complexes of the Baltic Sea, Belt Sea and Kattegat - Including a comprehensive description and classification system for all Baltic Marine and Coastal Biotopes. HELCOM-Baltic Sea Environ–ment Proceedings 75, Helsinki Commission. 115 pp.
Jackson, A. (2006). Phymatolithon calcareum maerl beds with hydroids and echinoderms in deeper infralittoral clean gravel or coarse sand. Marine Life Information Network: Biology and Sensitivity Key Information Sub-programme [on-line]. Plymouth: Marine Biological Association of the United Kingdom. [cited 13/06/2007]. Available from: http://www.marlin.ac.uk
Naturvårdsverket (2006). Inventeringar av marina naturtyper på utsjöbankar. Rapport 5567. ISBN 91-620-5576-3. 69 pp. Available at www.naturvardsverket.se (in Swedish).
OSPAR (2006). OSPAR Case Reports for the Initial List of Threatened and/or Declining Species and Habitats in the OSPAR Maritime Area. Publication no. 276, pp 125-128. (http://www.ospar.org/eng/html/welcome.html). | <urn:uuid:7fa60b74-df24-4a41-ab2b-4e1e806b503a> | 3.484375 | 1,295 | Knowledge Article | Science & Tech. | 37.828504 |
Write a Webserver in 100 Lines of Code or Less
by Jonathan Johnson
Network programming can be cumbersome for even the most advanced developers. REALbasic, a rapid application development (RAD) environment and language, simplifies networking yet provides the power that developers expect from any modern object-oriented language. I'll show you how this powerful environment enables professional developers and parttime hobbyists alike to quickly create a full-featured webserver.
Because you may be new to REALbasic's unique features and language, it's a good idea to start with something simple. The goal of writing a webserver in REALbasic is not to replace Apache. Instead, this server will demonstrate how to use REALbasic to handle multiple connections using an object-oriented design.
Let's start with some preliminary information about how networking in REALbasic works, as well as a quick walk-through of the HTTP protocol.
REALbasic Networking Classes
REALbasic includes several networking classes that make development easier, including the TCPSocket, UDPSocket, IPCSocket, and ServerSocket, and they are all event-based. This means that you can implement events in your code that will run when certain things happen, such as when an error occurs or when data is received. This saves you the trouble of having to constantly check the states of sockets.
HTTP is a simple protocol that is used to request information (generally files). The HTTP protocol works by having the client, such as a browser, send a request to the server. The server will evaluate the request and reply with either an error or the information requested.
The syntax for a request is:
(GET | POST) + Space + RESOURCE_PATH + Space + HTTPVersion + CRLF
As with most text-based protocols, the ending of a line is a carriage-return and a line-feed concatenated together. This is called a CRLF. An example request that a browser would send for the root file on the server would be:
GET /index.html HTTP/1.1
The server then writes a response. Its syntax is:
HTTPVersion + Space + StatusCode + Space + ResponseStatus + CRLF + CRLF
The server can send more data after the end of the two CRLFs, if desired. That is where, for example, the page requested would be written. After all the data is sent, the connection is closed by the server.
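As an illustration of the request and response syntax just described (in Python rather than REALbasic; the helper names are mine):

```python
CRLF = "\r\n"

def parse_request_line(line):
    # '(GET | POST) + Space + RESOURCE_PATH + Space + HTTPVersion + CRLF'
    method, path, version = line.strip().split(" ")
    return method, path, version

def response_header(status_code, response_status, version="HTTP/1.1"):
    # 'HTTPVersion + Space + StatusCode + Space + ResponseStatus + CRLF + CRLF'
    return f"{version} {status_code} {response_status}{CRLF}{CRLF}"
```

For example, parse_request_line("GET /index.html HTTP/1.1\r\n") yields ("GET", "/index.html", "HTTP/1.1"), and response_header(404, "Not Found") builds the header for a missing resource.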
Building the Server
REALbasic provides two classes which will be utilized for the majority of the work. The ServerSocket class automatically handles accepting connections without dropping any, even if they are done simultaneously. The TCPSocket provides an easy mechanism for communicating via TCP. Together, they allow for extremely easy creation of servers.
Let's get started by launching REALbasic 5.5. (Get a 10-day free trial here.) Once open, you will be presented with a few windows. To the left is a "Controls" palette, which contains commonly used GUI controls. To the right is the Properties window, which allows you to visually set properties on objects such as windows, controls, or classes. The window that contains the three items, Window1, MenuBar1, and App, is the project window.
To utilize the ServerSocket and TCPSocket, they need to be subclassed. When you subclass another class, you gain access to events that the superclass exposes. For example, the TCPSocket itself doesn't actually know what to do with the data once it has been received. So, you subclass the TCPSocket and implement the DataAvailable event so that you can do something when data is received.
First, create a new class named HTTPServerSocket whose superclass is ServerSocket. Do this by choosing "New Class" from the File menu. The project window will now have a new item named Class1. Click on it, and notice how the properties window updates to reflect what item is selected. Rename the class to HTTPServerSocket, and select "ServerSocket" from the Super popup menu.
While we're here, go ahead and add another class. Rename it to HTTPConnection, and set its superclass to TCPSocket. You can click on the superclass field to the left of the popup arrow, and REALbasic allows you to type in the class name. Start typing TCP, and notice how autocomplete kicks in. Press tab to let autocomplete finish the rest of the name for you.
Now, you have the two classes that are needed to make the HTTP server. The HTTPServerSocket class is responsible for creating HTTPConnections, and the HTTPConnection class is responsible for handling the HTTP communications.
This is a good time to save the project. Remember where you saved the project at because at the end of this article you will need to put another file beside it.
Double click on the HTTPServerSocket class. This will open a code editor window. On the left is the code browser, which helps navigate between methods, events, properties, and constants. All we need to do in the HTTPServerSocket is implement an event. Expand the "Events" section, and select "AddSocket."
The AddSocket event is fired when the ServerSocket needs more sockets in its pool. The way servers work is that there is generally a "pool" or collection of sockets that are sitting around. When a connection occurs, the server will hand off the connection to one of the sockets in the pool. The ServerSocket takes care of all the tricky TCP code required to create a server that doesn't drop connections for us. All you need to do is add this code to the AddSocket event:
return new HTTPConnection
That's all that is needed. Close that code editor, and double click on the HTTPConnection class we created earlier. Here is what will happen once the server receives a request:
Each of these tasks will be handled by different methods that we will create. The first task is to determine what resource is being requested. The server will have a string that contains the entire request, and this method will extract a string that represents the path to the resource. The next task will be to take that string and locate that file. The server will then either report an error or send the file. When done sending all the data, the server will close the connection.
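The tutorial implements these steps in REALbasic, but the same pipeline is easy to sketch in Python for reference. The function names below mirror the REALbasic methods described later in this article; the exact header fields are illustrative assumptions, not anything from the REALbasic framework:

```python
# Python sketch of the HTTPConnection pipeline (the real tutorial code
# is REALbasic; the names here just mirror the methods described below).

def get_path_for_request(request):
    """Extract the resource path from the first line of an HTTP request."""
    request_line = request.split("\r\n", 1)[0]   # e.g. "GET /index.html HTTP/1.0"
    parts = request_line.split(" ")
    return parts[1] if len(parts) >= 2 else "/"

def write_response_header(status_code, reason):
    """Every response starts with a similar header; build it in one place."""
    return "HTTP/1.0 %d %s\r\nConnection: close\r\n\r\n" % (status_code, reason)

print(get_path_for_request("GET /index.html HTTP/1.0\r\nHost: example\r\n\r\n"))
# -> /index.html
print(write_response_header(404, "Not Found").splitlines()[0])
# -> HTTP/1.0 404 Not Found
```

The division of labor is the same as in the REALbasic version: one routine extracts the path, one builds the shared header, and the caller decides whether to send a file or an error.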
To accomplish these tasks, three methods will be created. Add a method to the HTTPConnection class by choosing "Edit->New Method..." Name the first method GetPathForRequest; it takes the raw request data as a string and returns the requested path as a string.
The next method you'll need to create is one that takes the path returned from GetPathForRequest and returns the actual file to us. In REALbasic, files are dealt with through the FolderItem class rather than string paths. This is a huge benefit because not only is this same object used across all platforms, but it is so easy to use. Create a method named GetFolderItemForPath that takes "path as String" and returns a FolderItem. The method's scope can be private because the method only pertains to this class.
The last method you need to create is just a convenience method. No matter what type of response the server receives, the response will always have a similar header. To help eliminate repeated code, define one last method named WriteResponseHeader that takes the parameters "statusCode as Integer, response as String" with no return value. This method can also be private.
In the code browser, expand the Events section. You will see Connected, DataAvailable, Error, SendComplete, and SendProgress. With HTTP, there isn't anything the server needs to do in the Connected event. The client will be sending a request, so the server needs to wait until data is available. That makes the DataAvailable event very important: it fires whenever data has been received, and it is where most of the logic will reside. The Error event fires when there is a socket-related error, but for this example, socket-related errors will be ignored. Finally, SendComplete and SendProgress can be used to keep the send buffer full without eating up too much memory.
Since this server is going to be memory-efficient, you need to keep one variable around. Variables that are on the class level and not created inside a method are called properties. The property that is needed is to store a reference to an already open file so that we can write more data from the file when needed. In REALbasic, there is a class called BinaryStream that is used to read and manipulate files. Although there are other classes for accessing files, the BinaryStream class will do what is needed. Create a property by choosing "Edit->New Property..." In the Declaration field, type "stream as BinaryStream" and change its scope to Private. Click OK when done.
Pages: 1, 2 | <urn:uuid:f1e257ea-d754-453e-8914-d0b2145c1ae5> | 3.0625 | 1,822 | Tutorial | Software Dev. | 52.843757 |
Tension in an elevator cable
An elevator has a mass of 1400kg. What is the tension in the supporting cable when the elevator traveling down at 10 m/s is brought to rest in a distance of 40 m. Assume a constant acceleration.
m = 1400 kg mass of the elevator,
v = 10 m/s initial speed of the elevator,
D = 40 m distance required to stop the elevator.
g = 9.81 m/s² gravitational acceleration, which as usual is assumed to be known.
T = ? magnitude of tension in the cable while bringing the elevator to rest.
To find T we must calculate:
a = ? acceleration while stopping the elevator,
t = ? time required to stop elevator.
It is convenient to draw a free-body diagram, as in Figure below.
T is the tension in the cable of the elevator, and Fg = mg is the gravity force. The resultant force F = T + Fg is the force producing the acceleration (deceleration in this case) of our elevator.
This can be written in the form of the equation

ma = T − mg

if we choose the upward direction as positive. Solving for tension gives

T = ma + mg (1)
For further calculations we can drop the vector notation as all the forces are acting along one line. To calculate the magnitude of the tension T, we must find the magnitude a of the acceleration. It can be found from kinematics equations
a = v/t (2)
D = vt − (1/2)at² (3)
Equation (2) is based on the fact that elevator final speed is zero. Equation (3) is a standard formula for distance traveled in motion with constant acceleration (negative in this case as directed opposite to the initial speed).
Solving the equations (2) and (3) with respect to the acceleration a, we find

a = v²/(2D).
The magnitude of the tension T can be found from formula (1) taken without the vector notation (magnitudes only!):

T = m(g + a).
Substituting numbers given in the problem we get
T = 15484 N. | <urn:uuid:6b2889d5-84a7-43fe-b0fa-6f22440dbaa6> | 3.09375 | 417 | Tutorial | Science & Tech. | 58.629037 |
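The arithmetic can be verified with a few lines of code, using the values straight from the problem statement:

```python
# Check of the elevator problem: a = v^2 / (2D), then T = m(g + a).
m = 1400.0   # kg, mass of the elevator
v = 10.0     # m/s, initial downward speed
D = 40.0     # m, stopping distance
g = 9.81     # m/s^2, gravitational acceleration

a = v**2 / (2 * D)   # deceleration magnitude: 1.25 m/s^2
T = m * (g + a)      # tension: about 15484 N

print(a, T)
```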
Let's Talk About: The Bright stuff
Share with others:
Pick a star in the sky and ask yourself how bright it is. Brighter than the one next to it, perhaps? And why is it so bright? If it sounds simple, think again. It's a puzzle that astronomers say our eyes alone cannot solve. In order to truly understand the bright stuff, we need physics at the ready.
The measure of a star's brightness is referred to as its magnitude. Astronomers arrange magnitudes on a scale in which dim stars are signified by high numbers and bright stars by low numbers. A star with a magnitude of less than zero is considered exceptionally bright.
Think of our sun. Nothing in the sky is brighter than the light from our parent star. That is, if you're talking about its apparent magnitude, which is the brightness of a star as viewed from Earth. Because the sun is so close to us, it outshines every other celestial object, with an apparent magnitude of −26.7.
But while the sun is king by apparent magnitude, by absolute magnitude it's not. Absolute magnitude refers to the brightness of an object as it would be viewed 10 parsecs (about 32.6 light-years) from Earth. Consider the star Rigel in the constellation Orion. It may appear as a minor sparkle in the sky, but if all stars were the same distance from Earth, Rigel would outshine them all.
We can further understand a star's brightness by determining its luminosity, or the amount of energy it radiates per second at all wavelengths -- not just visible light. But if physical contact with stars is impossible, how do we measure this inherent property of theirs? The answer lies in clever math. Because absolute magnitude provides a measurement of brightness at a standard distance, astronomers can insert its value into a formula and, thereby, convert it to units of luminosity. Thus, we have crucial criteria for classifying stars.
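The conversions the article alludes to can be sketched in a few lines. The Sun's absolute magnitude (about 4.83) is a textbook value assumed here, not a number from the article:

```python
import math

M_SUN = 4.83  # absolute visual magnitude of the Sun (approximate textbook value)

def luminosity_ratio(abs_magnitude):
    """Luminosity relative to the Sun; 5 magnitudes = a factor of 100."""
    return 10 ** ((M_SUN - abs_magnitude) / 2.5)

def apparent_magnitude(abs_magnitude, distance_pc):
    """Apparent magnitude at a given distance: m = M + 5*log10(d / 10 pc)."""
    return abs_magnitude + 5 * math.log10(distance_pc / 10.0)

print(luminosity_ratio(M_SUN))      # 1.0 (the Sun itself)
print(luminosity_ratio(M_SUN - 5))  # 100.0 (5 magnitudes brighter)
```

Note how the distance formula makes the article's point explicit: at the standard 10-parsec distance, apparent and absolute magnitude coincide.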
First Published July 5, 2012 12:00 am | <urn:uuid:339eb60a-5165-4b12-8940-aecbe67461a0> | 4.15625 | 410 | Truncated | Science & Tech. | 62.072304 |
Thank you for your question. I always love a good science competition. First off, let me refer you to a Science Buddies project: "Veggie Power! Making Batteries from Fruits and Vegetables" at this link:http://www.sciencebuddies.org/science-f ... background
It provides a lot of good information for an experiment of this sort. Rather than just provide the answer of which fruit/veggie will provide the most voltage, I have a feeling that the point of the exercise assigned is to conduct some research and experiment to optimize/maximize voltage output. With that, lets discuss some of the processes involved here.
In this application, electricity is produced from a chemical reaction between two dissimilar metal electrodes immersed in an electrolyte solution. The electrolyte, in this case, is provided by the fruit/vegetable cell. You can narrow down good fruit/veggie candidates by researching ones that are high in electrolytes, such as Potassium. You can then test the candidates to find which ones are the most efficient at producing electricity.
Also, it's not just the fruit/veggie cell that contributes to the voltage output. You must also consider the type and placement of your electrodes. A) They must be two different metals, i.e. an anode and a cathode, (I'll leave the research as to which two types work best up to you) or the reaction will not work. B) Electrode orientation and distance from one other will play a factor in how efficient the circuit is (i.e. parallel, end to end, touching or not, etc...).
Some additional questions to get you thinking: Does the size of the fruit/veggie make a difference? What happens if you cut the fruit/veggie up into smaller individual cells (you said you could use up to 10 individual fruits/veggies, but you didn't say you couldn't cut them up and have 20 or more cells)? Does the circuit produce more voltage if the fruit/veggie cells are connected in series or in parallel? Does cell temperature make a difference in voltage output? Does freshness of the cell make a difference? How long before you deplete the power potential of an individual cell? I'm sure there are other questions, but these should get you started.
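To get you started on the series-versus-parallel question, here is a back-of-the-envelope sketch. The per-cell voltage and current are rough assumptions for a copper/zinc fruit cell, not measured values; your experiment is what pins them down:

```python
CELL_VOLTAGE = 0.9    # volts per fruit/veggie cell (assumed)
CELL_CURRENT = 0.001  # amps per cell (assumed)

def series(n_cells):
    """Series wiring: voltages add, current stays that of a single cell."""
    return n_cells * CELL_VOLTAGE, CELL_CURRENT

def parallel(n_cells):
    """Parallel wiring: currents add, voltage stays that of a single cell."""
    return CELL_VOLTAGE, n_cells * CELL_CURRENT

print(series(10))    # (9.0, 0.001) -> maximizes voltage
print(parallel(10))  # (0.9, 0.01)  -> maximizes current
```

Since the competition is judged on voltage, the sketch suggests wiring your cells in series, but verify it at the bench.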
Please post back with additional questions or comments. I look forward to hearing how you did in the competition.
I hope this helps.
“Education never ends. It is a series of lessons, with the greatest for the last.”
~ Sir Arthur Conan Doyle (Sherlock Holmes) | <urn:uuid:4b52a7d9-3622-42d7-ba80-6dd8c99605d7> | 3.5625 | 539 | Comment Section | Science & Tech. | 57.537137 |
Editor's Note: This story is part of the Feature "Nuclear Fuel Recycling: More Trouble Than It's Worth" from the May 2008 Issue of Scientific American.
A Nuclear Renaissance?
After decades of declining interest, nuclear energy is poised for a comeback, driven by a combination of factors.
The quantity of spent fuel so far accumulated by the U.S. nuclear industry (about 58,000 metric tons) now very nearly equals the capacity of the cooling pools used to hold such material at the reactor sites. By midcentury, the amount will roughly double.
Pros & Cons
In theory, reprocessing spent fuel and recycling it in reactors reduces the quantity of uranium mined and leaves more of the waste in forms that remain radioactive for only a few centuries rather than many millennia. But in practice, this approach is problematic because it is expensive, reduces waste only marginally (unless an extremely costly and complex recycling infrastructure is built), and increases the risk that the plutonium in the spent fuel will be used to make nuclear weapons.
Progress on the proposed U.S. nuclear repository at Yucca Mountain in Nevada remains slow. At best, its construction will not be authorized until 2011, and the project will not be completed until 2016. The U.S. nuclear industry thus will not begin storing spent fuel there until 2017—or even later, if work is delayed by scientific controversies, legal challenges or funding shortfalls. | <urn:uuid:21098da3-7eff-449e-aef2-7042455854f8> | 3.34375 | 285 | Truncated | Science & Tech. | 47.831769 |
Female Green Lynx Spiders are large green spiders with a total length measuring 11-22mm, while the males measure 8-15mm. The two sexes are similar in color and pattern. The carapace has red markings; the abdomen has several white and red chevron-shaped markings. The yellow legs have black spots.
This species occurs in the southern United States, Mexico, Central America and the West Indies. In San Diego, it is common on the coastal side of the mountains. The Green Lynx Spider is found most commonly on shrubby vegetation in gardens, on wild buckwheat flower clusters, and in meadows of tall wildflowers and grasses.
Peucetia usually mature in mid-summer. Mating occurs while hanging from a strand of silk. Females protect the egg sac and young until the spiderlings can fend for themselves. Females have been observed spitting venom from their fangs.
Related and Similar Species
Peucetia longipalpus is a smaller, similar species that is best identified by inspection of the genitalia.
Brady, A.R. 1964. The lynx spiders of North America north of Mexico (Araneae, Oxyopidae). Bull. Mus. Comp. Zool. 131(13): 432-518. | <urn:uuid:b3bdfca3-1dce-40bb-8fca-7c3277bc2700> | 2.984375 | 267 | Knowledge Article | Science & Tech. | 60.48371 |
WebReference.com - Excerpt from Inside XSLT, Chapter 2, Part 1 (1/4)
Trees and Nodes
When you're working with XSLT, you no longer think in terms of documents, but rather in terms of trees. A tree represents the data in a document as a set of nodes-elements, attributes, comments, and so on are all treated as nodes-in a hierarchy, and in XSLT, the tree structure follows the W3C XPath recommendation (www.w3.org/TR/xpath). In this chapter, I'll go through what's happening conceptually with trees and nodes, and in Chapters 3 and 4, I'll give a formal introduction to XPath and how it relates to XSLT. You use XPath expressions to locate data in XML documents, and those expressions are written in terms of trees and nodes.
In fact, the XSLT recommendation does not require conforming XSLT processors to have anything to do with documents; formally, XSLT transformations accept a source tree as input, and produce a result tree as output. Most XSLT processors do, however, add support so that you can work with documents.
From the XSLT point of view, then, documents are trees built of nodes; XSLT recognizes seven types of nodes:
- The root node. This is the very start of the document. This node represents the entire document to the XSLT processor. Important: Don't get the root node mixed up with the root element, which is also called the document element (more on this later in this chapter).
- Attribute node. Holds the value of an attribute after entity references have been expanded and surrounding whitespace has been trimmed.
- Comment node. Holds the text of a comment, not including the <!-- and --> markers.
- Element node. Consists of the part of the document bounded by a start and matching end tag, or a single empty-element tag.
- Namespace node. Represents a namespace declaration-and note that it is added to each element to which it applies.
- Processing instruction node. Holds the text of the processing instruction, which does not include the <? and ?> markers. The XML declaration, <?xml version="1.0"?>, by the way, is not a processing instruction, even though it looks like one. XSLT processors strip it out automatically.
- Text node. Text nodes hold sequences of characters-that is, PCDATA text. Text nodes are normalized by default in XSLT, which means that adjacent text nodes are merged.
As you'll see in Chapter 7, you use XPath expressions to work with trees and nodes. An XPath expression returns a single node that matches the expression; or, if more than one node matches the expression, the expression returns a node set. XPath was designed to enable you to navigate through trees, and understanding XPath is a large part of understanding XSLT.
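These node types can be explored with Python's standard-library DOM parser. Note that the DOM node taxonomy is close to, but not identical to, XPath's (DOM exposes a document node rather than XPath's root node, and treats namespaces differently), so this is only an illustrative sketch:

```python
from xml.dom import minidom

doc = minidom.parseString(
    '<?xml version="1.0"?>'
    '<book id="b1"><!-- a comment --><title>Inside XSLT</title></book>'
)

# Map a few DOM node-type constants to readable names.
NAMES = {minidom.Node.ELEMENT_NODE: "element",
         minidom.Node.TEXT_NODE: "text",
         minidom.Node.COMMENT_NODE: "comment"}

def walk(node, depth=0):
    """Print each node's type and name, indented by tree depth."""
    print("  " * depth + NAMES.get(node.nodeType, str(node.nodeType)),
          node.nodeName)
    for child in node.childNodes:
        walk(child, depth + 1)

walk(doc.documentElement)
# element book
#   comment #comment
#   element title
#     text #text
```

Attributes, as in XPath, are not children of their element: the id attribute above never appears in the walk, but is reachable via doc.documentElement.getAttribute("id").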
Created: September 12, 2001
Revised: September 12, 2001 | <urn:uuid:46daba4f-8423-4cba-b4d9-22b56b8e351a> | 3.625 | 631 | Truncated | Software Dev. | 62.911289 |
The troposphere is the lowest layer of Earth's atmosphere. We live in the troposphere. Weather happens in this layer. Most clouds are found in the troposphere. The next layer up is the stratosphere.
Original artwork by Windows to the Universe staff (Randy Russell).
The troposphere is the lowest layer of Earth's atmosphere. The troposphere starts at ground level. The top of the troposphere is about 11 km up (that's 7 miles or 36,000 feet). The layer above the troposphere is called the stratosphere.
Most of the air (about 3/4ths) in our atmosphere is in the troposphere. Almost all weather happens in the troposphere. Most clouds are found in this lowest layer, too. The jet stream is near the top of the troposphere. This "river of air" zooms along at 400 km/hr (250 mph)!
The troposphere is warmest near the ground. Sunlight heats the ground (or oceans!). The ground heats the air that is closest to it... at the bottom of the troposphere. The higher up you go in the troposphere, the colder it gets. That's why there is often snow in tall mountains, even in the summer. The temperature at the top of the troposphere is around -55° C (-64° F)! Brrrrrrrrr!
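The steady cooling with height can be sketched with the standard-atmosphere lapse rate of about 6.5 °C per kilometer and a 15 °C sea-level average; both numbers are textbook values assumed here, not taken from this article:

```python
SEA_LEVEL_TEMP_C = 15.0      # average sea-level temperature (assumed)
LAPSE_RATE_C_PER_KM = 6.5    # standard-atmosphere lapse rate (assumed)

def troposphere_temp_c(height_km):
    """Approximate air temperature at a given height in the troposphere."""
    return SEA_LEVEL_TEMP_C - LAPSE_RATE_C_PER_KM * height_km

print(troposphere_temp_c(0))   # 15.0  (ground level)
print(troposphere_temp_c(11))  # -56.5 (near the tropopause; about -55 C, as above)
```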
Air also gets 'thinner' as you go higher up. That's why mountain climbers sometimes need bottled oxygen to breathe.
The "border" between the troposphere and the stratosphere above it has a special name. It is called the tropopause. The height of the tropopause actually depends on whether it is day or night, summer or winter, or whether you are near the equator or one of the poles. At the equator, the tropopause is about 20 km (12 miles or 65,000 feet) above sea level. In winter near the poles the tropopause is much lower. It is about 7 km (4 miles or 23,000 feet) high.
Appendix A. The ratio of Curvature Radiation (CR) to ICS power
The energy loss of a particle through CR process is:
where γ is the Lorentz factor of the particles, e is the electric charge of the particles, c the speed of light, and ρ is the radius of curvature of the magnetic field. The curvature radius is set by the dipole field geometry; R_L is the radius of the light cylinder, and for the last open field lines the curvature radius is:
The energy loss of a particle through ICS is:
and σ is the total cross-section of ICS, n is the photon number density near the surface, and E is the energy of the outgoing photons near the surface and in the estimate below. Near the surface of the neutron star, the photon number density of the low frequency wave can be written as:
where E is the electric field in the gap. For the inner gap sparking, one expects that the low frequency wave with electric field has the value of E. In the limit (h is the thickness of the gap, is the radius of the gap), we have (RS75):
Where B is the magnetic field near the surface of the neutron star. Substitute into Eq. (A3), we have:
Thus the ratio of the energy loss of these two processes near the surface of the neutron star is:
If we use (RS75), the ratio can be increased by some factor, but is still very small. This means that near the surface of the neutron star the efficiency of CR compared with that of ICS is very low. Above the surface and
At a distance r, from the equations above, we have:
where (see below). Even if the typical radiation height is at in Eq. (A11), is still small. This means that the efficiency of the ICS process is higher than that of the CR process, but, as estimated below, the incoherent radiation of the ICS process is also inadequate to explain pulsar radiation.
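For a feel for the numbers involved, the standard single-particle curvature power P_CR = 2e²cγ⁴/(3ρ²) (CGS units) can be evaluated directly; the Lorentz factor and curvature radius below are illustrative choices, not values taken from this paper:

```python
# Order-of-magnitude evaluation of P_CR = 2 e^2 c gamma^4 / (3 rho^2), CGS.
e = 4.8032e-10     # electron charge, statcoulomb
c = 2.9979e10      # speed of light, cm/s
gamma = 1.0e6      # Lorentz factor (illustrative assumption)
rho = 1.0e8        # curvature radius in cm (illustrative assumption)

P_cr = 2 * e**2 * c * gamma**4 / (3 * rho**2)
print(P_cr)        # roughly 0.46 erg/s per particle
```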
Appendix B. Average number densities of the outflow
The energy flux carried by relativistic positrons into the magnetosphere above the two polar gaps is (RS75):
where the current form in the magnetosphere is taken as (Sutherland, 1979):
Here . The current associated with the corotating plasma of charge density is (Goldreich and Julian 1969):
If the magnetosphere is charge separated, then the number density of the particles of charge is
The second current corresponds to streaming of charges along the magnetic field lines: and k must be constant along a given field line. This current only exist on the open field lines. The exact division between the open and closed field lines cannot be determined precisely. We may use the vacuum dipole field geometry to locate approximately the division on the neutron star and use the Eq. (B4).
If the potential in the inner gap has the maximum value (RS75):
Thus, . This means that the current flowing from the gaps has enough braking torque on the spinning star to carry away all the rotational energy (Sutherland 1979). This is reasonable, because . Even in a charge-separated magnetosphere, n0 can be much larger than the GJ density , especially near the stellar surface, where the balance between the gravitational force and the kinetic energy of the particles allows a thin atmosphere to exist. We will estimate the luminosity of the ICS process using Eq. (B4).
Appendix C. Luminosity in incoherent ICS processes
The incoherent luminosity of the ICS process near the surface of the neutron star is:
where is a constant, is characteristic time over which the particle radiates at some desirable frequency.
From Eq. (6) for constant and , we have: . As , we can get from Eq. (11) and we can get from (generally , since ), thus, . As an estimation, we take , , so second.
Substitute Eqs. (A4)(B4) and (C2) to Eq. (C1), we can get the luminosity as follows:
where is in order of 1, is the Thomson cross section.
Here the index "R" denotes parameters in the electron rest frame. Here we indicate by 1 (1') the linear polarization of the incoming (outgoing) photon parallel to the plane built up by the magnetic field and the incoming (outgoing) photon, and by 2 (2') the linear polarization orthogonal to this plane. , the cyclotron frequency. In our case , so , and
where is the angle between the direction of the incoming photon and the magnetic field. As ,
At , using Eq. (A10) and (C7), if the radiation region is at and , the luminosity in Eq. (C3) will be down 9 magnitudes. The luminosity observed for radio pulsars can be written as (Sutherland,1979):
The range of is from to , which means that incoherent ICS radiation is inadequate in explaining pulsar radiation.
© European Southern Observatory (ESO) 1998
Online publication: April 15, 1998 | <urn:uuid:8ea1b34f-b205-45c4-acdb-d2afe01759d0> | 3.046875 | 1,087 | Academic Writing | Science & Tech. | 51.85739 |
If you looked at the penis of a Drosophila fly under a microscope (for reasons best known only to yourself), you’d see an array of wince-inducing hooks and spines. These spines are present in all Drosophila and they’re so varied that a trained biologist could use them to identify the species of the owner.
What’s the purpose of these spines? Are they intended to actually wound the female during mating? Do they help the male fly to scrape out the sperm of his rivals? Do they actually pierce the walls of the female’s genital tract, allowing the male to bypass any barriers to his sperm, as other insects do? All of these explanations have been put forward, and it seems that all of them are wrong.
The spines are nothing more than biological Velcro. During sex, they help the male fly to clasp onto his mate from the inside so he can’t be easily dislodged. We know this thanks to Michal Polak and Arash Rashed, who shaved male flies with a laser to see if their sexual performance would be affected. | <urn:uuid:f145ef9c-f6b4-465e-98b3-a5351a6b6f8b> | 2.75 | 236 | Personal Blog | Science & Tech. | 58.466265 |
The C++ strings library includes support for two general types of strings:
- std::basic_string - a templated class designed to manipulate strings of any character type.
- Null-terminated strings - arrays of characters terminated by a special null character.
The templated class std::basic_string generalizes how sequences of characters are manipulated and stored. String creation, manipulation, and destruction are all handled by a convenient set of class methods and related functions.
Several specializations of std::basic_string are provided for commonly-used types:
Defined in header <string>:
std::string (std::basic_string<char>)
std::wstring (std::basic_string<wchar_t>)
std::u16string (std::basic_string<char16_t>)
std::u32string (std::basic_string<char32_t>)
Null-terminated strings
Null-terminated strings are arrays of characters that are terminated by a special null character. C++ provides functions to create, inspect, and modify null-terminated strings.
There are three types of null-terminated strings: byte strings, multibyte strings, and wide strings.
Additional support
Defined in header <string> | <urn:uuid:9fe6d9f4-1ff9-4711-9750-92854dd0243c> | 3.609375 | 185 | Knowledge Article | Software Dev. | 20.348571 |
I use python 2.6
>>> hex(-199703103) '-0xbe73a3f' >>> hex(199703103) '0xbe73a3f'
Positive and negative values are the same?
When I use calc, the value is
Python's integers can grow arbitrarily large. In order to compute the raw two's-complement the way you want it, you would need to specify the desired bit width. Your example shows
This prints out:
Because Python integers are arbitrarily large, you have to mask the values to limit conversion to the number of bits you want for your 2s complement representation.
Python displays the simple case of | <urn:uuid:2172bf77-8c82-4473-8798-0f14c42c0a24> | 3.140625 | 141 | Q&A Forum | Software Dev. | 54.553 |
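A common fix for getting the fixed-width bit pattern the question is after: mask the value to the desired bit width before calling hex. A sketch assuming a 32-bit two's-complement representation:

```python
def twos_complement_hex(value, bits=32):
    """Hex of `value` viewed as an unsigned, `bits`-wide two's-complement
    bit pattern (Python ints are arbitrary-precision, so the width must
    be chosen explicitly)."""
    mask = (1 << bits) - 1
    return hex(value & mask)

print(hex(-199703103))                  # -0xbe73a3f (sign + magnitude)
print(twos_complement_hex(-199703103))  # 0xf418c5c1 (32-bit pattern)
print(twos_complement_hex(199703103))   # 0xbe73a3f
```

On Python 3 the outputs are as shown; on the asker's Python 2.6 the masked value is a long, so hex appends an "L" suffix ('0xf418c5c1L').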
Supernova RemnantsThe unprecedented broad bandwidth and excellent spectral resolution of ASCA has had a major impact on all areas of research into the nature of supernova remnants (SNRs). Recent results from ASCA have increased the number of neutron star/SNR associations, debunking the decade-long mystery of the paucity of such associations. Additionally, ASCA has uncovered localized regions of non-thermal X-ray emission inside of SNRs which are not associated with the synchrotron nebulae but are produced by previously unrecognized mechanisms. ASCA maps of SNRs in prominent X-ray emission lines and selected continuum bands show variation in temperature, ionization, chemical composition, and, indeed, even the nature of the underlying emission mechanism. ASCA's increased sensitivity has allowed for a systematic study of the remnants in the LMC leading to the discovery of new ejecta-dominated remnants and an independent measurement of the gas-phase abundances of the LMC. Highly absorbed Galactic SNRs, which were weak and nondescript in previous soft X-ray observations, turn out to be remarkable objects with booming emission lines when observed using ASCA. These and other discoveries are leading to new insights into the nature of the ejecta of young remnants, the physics of supernova-induced shock waves, and the discovery and study of pulsar-powered synchrotron nebulae.
The Connection between Neutron Stars and Supernova RemnantsNeutron Stars (NS) are thought to originate in SN explosions, however the relatively few known associations of NS with SNR had led some to doubt the underlying ideas of NS formation. ASCA has dramatically improved this situation by increasing the number of known associations. Among the current generation of X-ray instruments, only the ASCA instruments possess the capability to separate the soft, mostly thermal emission from the interstellar material swept up by the SN shock from the hard, non-thermal emission from the synchrotron nebula around a pulsar. If the compact object cannot be detected directly, ASCA allows us to infer its existence by detecting its interaction with the surrounding medium. Since this emission is not subject to the beaming effects of the pulsar emission, it is significantly more likely to be detected.
Perhaps, the best example of the contribution of ASCA to this field is the case of the SNR G11.2-0.3. Vasisht et al. (Fig. 1;1996 ApJ 456, L59) used ASCA to detect a plerion in this remnant, which was only hinted at by previous X-ray observations, and Torii et al. (1997 ApJ 489, L145) detected a 65 ms pulsar with ASCA, which had not been detected in the radio. G11.2-0.3 was suggested to be the remnant of the historical SN of A.D. 386 (Clark & Stephenson 1977); the rapid pulsation and the temperature of the shock derived from the ASCA data are both consistent with a remnant age of ~1600 yr. This is only the second association of a pulsar with a historical supernova after the Crab and its pulsar.
The detection of a synchrotron nebula around an as yet undetected pulsar also lends support to the standard picture of NS birth and SNR evolution. Harrus et al. (1996 ApJ 464L, 161) detected the X-ray synchrotron nebula around the known radio pulsar PSR B1853+01 in the SNR W44. In contrast, Slane et al. (1997 ApJ, 485, 221) report the detection of a plerion in CTA-1 which was previously unknown in the radio. Harrus, Hughes, & Slane (1998 ApJ, 499, 273) report the detection of the synchrotron nebula in MSH11-62 which was known in the radio. Other remnants for which ASCA has detected a localized region of non-thermal emission are Kes 75 (Helfand 1994 New Horizons meeting), G292.0+1.8 (Torii et al. IAU proceedings), G327.1-1.1 (Sun et al. in preparation), and MSH15-56 (Plucinsky et al. 1998 Elba Proceedings). ASCA also detected the plerionic component in the SNR N157B in the LMC which prompted a followup observation by RXTE, which detected a 16 ms pulsar !
Figure 1. G11.2-0.3 in four energy bands in the adjacent panels. The upper left is the ASCA band from 0.5 to 3.3 keV, the upper right is the ASCA band from 3.3 to 9.0 keV, and the lower left covers the entire ASCA band from 0.5 to 9.0 keV. The lower right panel is the Einstein HRI image. The ASCA data demonstrate clearly the existence of the X-ray plerion. The image is from Vasisht et al. 1996 (ApJ 456, L59).
Non-thermal Emission in SNRsASCA has detected localized regions of non-thermal X-ray emission which cannot be explained by synchrotron emission from a pulsar nebula. ASCA observations solved the long-standing mystery of the spectrum of SN1006 by localizing the non-thermal emission to the bright rims of the remnant (Koyama et al. 1995 Nature 378, 255), which has been interpreted as synchrotron emission from electrons with energies up to 100 TeV accelerated in the remnant blast wave (Reynolds 1996 ApJ 459, L13). ASCA discovered an extended region of hard X-ray emission in the SNR IC 443 (Keohane et al. 1997, ApJ 484, 350). It is coincident with a region of strong interaction between the remnant shock, and a dense molecular cloud. The authors speculate that the hard X-ray emission arises from TeV electrons whose population has been enhanced by virtue of shock-cloud collisions. If this is correct, then ASCA has unveiled a second means by which supernova remnants create high energy cosmic rays.
Perhaps the most revolutionary observations will be those of the enigmatic remnant G347.5-0.5. This remnant was first detected as a bright source in the ROSAT all-sky survey and was resolved into a shell-type SNR. The bright NW shell was caught serendipitously in an ASCA galactic plane survey pointing and the spectrum was revealed to be non-thermal (Koyama et al. 1997 PASJ 49, L7). ASCA observations of the entire remnant show that the outer shell and the interior of the remnant also have a non-thermal spectrum (Slane et al. 1998 ApJ in preparation). There is no evidence of any thermal X-ray emission from any part of this remnant; this is a puzzling yet exciting result! The shell-type X-ray morphology, completely non-thermal spectrum, and relatively large size are difficult to explain by the mechanisms observed in other remnants; nevertheless, these data confirm the power of ASCA's imaging and spectral capabilities.
SNR Surveys

Two different types of surveys have been initiated in the last several years to utilize ASCA's unique capabilities to provide moderate-resolution spectra of heavily absorbed objects. First, a followup of ROSAT all-sky survey sources which are believed to be extended and which are coincident with known radio SNRs has been started. Three of the first five targets have been detected in X-rays with ASCA. G337.2-0.7 exhibited booming lines of Si, S, Ar and Ca with supersolar abundances, indicating that it is possibly a young, ejecta-dominated remnant. G309.2-0.6 also has strong Si and S lines in addition to a strong Fe line. G7.7-3.7 has a thermal spectrum with nearly solar abundances. The second survey project is aimed at detecting small-diameter radio remnants with ASCA. The first three targets have been observed and two of the remnants have been detected. G340.6+0.3 is clearly detected and shows line emission. G328.4-0.2 is detected but shows a complex spectrum with a hard tail. Both of these surveys have produced promising results in their first years and should increase the number of known X-ray remnants.
LMC Remnants

ASCA has also made a systematic study of the SNRs in the LMC. One of the early results of this project was the discovery of new ejecta-dominated remnants of Type Ia SNe (Hughes et al. 1995 ApJ 444, L81). This work showed how it was possible to determine the type of the SN explosion from a comparison of the ASCA X-ray spectra of the remnant with the nucleosynthetic yields expected from Type Ia and II SNe. One surprising conclusion was that roughly one-half of the SNRs produced in the LMC within the last ~1500 yrs came from Type Ia SNe. The fraction expected based on extragalactic patrols is more like 10%-20%. Hughes, Hayashi, & Koyama (1998 ApJ accepted) used the X-ray spectral information provided by ASCA, in conjunction with a self-consistent nonequilibrium ionization model assuming a Sedov solution for the dynamical evolution, to deduce the ages, ambient interstellar densities, initial explosion energies, and metal abundances for seven middle-aged remnants. For the remnants for which the ionization timescale age and the Sedov dynamical age agree, the derived mean explosion energy is 1.1+/-0.5x10^51 ergs, in excellent agreement with the canonical value. For the remnants N63A, N132D, and N49B, the ionization timescale ages are significantly less than the Sedov dynamical ages and the explosion energies are rather large. Hughes, Hayashi, & Koyama suggest that both of these discrepancies can be resolved by invoking a scenario in which the progenitor was a massive star which had blown out a cavity. They have also provided a new and independent determination of the gas phase abundances in the LMC by using the X-ray spectra to determine the abundances of the astrophysically common elements O, Ne, Mg, Si, S, and Fe, to be 0.2-0.4 times solar. The X-ray-derived values are consistent with those from optical studies (e.g.
Russell & Dopita 1992 ApJ 384, 508), but the X-ray data provide significantly more accurate measurements of the important species Mg and Si (for which few good emission lines in the optical band exist). Since the ISM contains the integrated sum of material lost by stars in winds and SNe over the galaxy's life, the chemical composition is one of the principle probes of the galaxy's star formation history.
Thermal X-ray Emission

The spectral capability of the SIS has been used to perform detailed modeling of the spectra of young Galactic remnants, and thereby learn new insights about the origin of the X-ray emission. Borkowski et al. (1996, ApJ 466, 866) performed a careful study of the Fe K lines from the core-collapse remnant Cas A, and concluded that their strength is accounted for only if a substantial amount of interstellar dust is present. In contrast, when Hwang, Hughes, & Petre (1998, ApJ 497, 833) performed the same analysis on the spectrum of the Type Ia remnant Tycho, they were able to place severe constraints on the amount of dust present. They also find that multiple emission components, presumably from ejecta and the blast wave, are required to explain the relative strengths of the Fe K and L lines. Hwang & Gotthelf (1997, ApJ 475, 665) produced a set of spatially filtered, narrow-band maps of Tycho. Although each map has an overall morphology similar to the broad-band map, each shows a set of distinctive features. Overall they find the emission morphology is consistent with a spherical shell, and not with a torus, and that some radial mixing of ejecta has occurred. Vink, Kaastra & Bleeker (1997, A&A 328, 628) find a dramatic temperature gradient across RCW 86. They also find a relative lack of line emission, which they suggest is the result of an electron distribution with a supra-thermal tail.
Discovery of Young X-ray Pulsars with ASCA

The study of pulsars in SNRs is critical to our understanding of the evolution of young neutron stars. It allows us to probe these fascinating objects, for which the only available laboratory is an astrophysical one.
For example, ASCA has nearly doubled the number of known Crab-like pulsars with the discovery of a 65 ms pulsar in the young plerionic SNR G11.2-0.3 (Torii et al. 1997 ApJ 489, L145) and a 16 ms pulsar in the LMC SNR N157B (Marshall et al. 1998 ApJ 499, L179). The latter pulsar is located near the famous 50 ms LMC pulsar and is the most rapidly rotating pulsar associated with a SNR yet discovered. The properties of these pulsars are consistent with the canonical picture of a young pulsar born as a rapidly rotating (~10 ms) NS and powered by the spin-down energy of its magnetic dipole field (~10^12 G).
Several other pulsars detected by ASCA are considered candidate SNR pulsars, due to their properties and proximity to young SNRs. These include the 69 ms pulsar discovered near the SNR RCW 103 (Torii et al. 1998 ApJ 494, L207) and the X-ray emission from the 63 ms radio pulsar PSR J1105-6107 (Gotthelf & Kaspi 1998 ApJ 497, L29). The elusive NS candidate in the center of RCW 103, re-discovered by ASCA, may well be a pulsar with unseen pulses due to unfavorable beaming geometry (Fig. 2; Gotthelf et al. 1997 ApJ 497, L29).
Figure 2. The ASCA SIS image of the SNR RCW 103 in two spectral bands. Below 1.5 keV (Bottom) the flux from RCW 103 is predominately from the soft thermal emission of a shocked plasma, typical of a young SNR. Above 3 keV (Top) the intriguing central point source in RCW 103, 1E161348-505, is evident un-obscured by the nebula emission. Due North of the central source is the serendipitous ASCA point source, the 69 ms pulsar PSR J1617-5055. The images were produced using data from both SIS cameras and have been exposure corrected and smoothed.
ASCA is also revealing a new class of slowly rotating NS candidates associated with SNR. These have profound implications for the theory of NS evolution. Perhaps the best example is the discovery using ASCA of 12 sec pulsations from the central object in the young SNR Kes 73 (Vasisht & Gotthelf 1997 ApJ 486, L129). Despite numerous previous observations, this pulsar had eluded detection by the Einstein and ROSAT observatories, which lacked the broad spectral band imaging capabilities of ASCA.
ASCA also discovered AX 1845-0258, a highly absorbed 7 sec pulsar in the distant Milky Way (Gotthelf & Vasisht 1998 NA 3, L293), and the 11 sec pulsar located in Scorpius (Sugizaki et al. 1998 PASJ 49, L25). The characteristics of these pulsars are similar to those of the ``anomalous X-ray pulsars'' (Mereghetti & Stella 1995 ApJ 442, L17; van Paradijs et al. 1995 A&A 299, L41), typified by the well studied 7 sec pulsar in CTB 109 (Gregory & Fahlman 1980; Corbet et al. 1995 ApJ 433, 786). The spin periods for these objects lie in the range of 5-12 sec and their ASCA spectra are unusually steep (kT < 0.6 keV or Gamma > 3) for a rotation- or accretion-powered pulsar. Their luminosities are typically around ~10^35 ergs/s and seem to be steady over many years. An accretion origin is unlikely, as they lack an observed counterpart, show no indication of binary motion, and do not display the flux variability typical of accreting systems.
ASCA is playing a key role in increasing our understanding of the evolution of young NSs. By detecting new anomalous X-ray pulsars, and subsequently monitoring their pulse and flux histories, ASCA has shown that the standard paradigms of young pulsar evolution may no longer be valid. For example, a follow-up ASCA observation of Kes 73 confirms the remarkable spin-down rate of its pulsar and shows that the measured luminosity cannot simply be powered by rotational energy losses due to spin-down (Gotthelf et al. 1998 in preparation). The inferred magnetic field for a rotating magnetic dipole is well above the quantum limit of 4x10^14 G. The Kes 73 pulsar is likely the first example of a ``magnetar'' (Thompson & Duncan 1995 MNRAS 275, 255), a NS with an enormous magnetic field. The pulsar was likely spun down rapidly by magnetic field decay, or possibly born as a slow rotator. In either case, the ASCA data require us to consider alternative NS evolution scenarios in direct competition with the standard theory.
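As a rough illustration of how such fields are inferred (a sketch: the formula is the standard vacuum-dipole estimate, and the P and Pdot values below are typical literature numbers for the Kes 73 pulsar, assumed here rather than quoted from this article):

```python
import math

# Standard dipole spin-down estimates. Input values are illustrative
# literature numbers for the Kes 73 pulsar (1E 1841-045), assumed here.
P = 11.77        # spin period, seconds
Pdot = 4.1e-11   # period derivative, s/s

# Surface dipole field in gauss: B ~ 3.2e19 * sqrt(P * Pdot)
B = 3.2e19 * math.sqrt(P * Pdot)

# Characteristic (spin-down) age in years: tau = P / (2 * Pdot)
tau_yr = P / (2.0 * Pdot) / 3.156e7

print("B = %.1e G" % B)
print("tau = %.0f yr" % tau_yr)
```

A field of order 10^14-10^15 G and a characteristic age of a few thousand years are both consistent with the magnetar interpretation and the youth of the remnant described above.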
Last modified: Tuesday, 26-Jun-2001 14:22:36 EDT
This paper was presented at the International Workshop on Monsoon Asia Agricultural Greenhouse Gas Emissions.
Quantifying greenhouse gas emissions from soils: Scientific basis and modeling approach
Article first published online: 31 JUL 2007
Soil Science & Plant Nutrition
Volume 53, Issue 4, pages 344–352, August 2007
How to Cite
LI, C. (2007), Quantifying greenhouse gas emissions from soils: Scientific basis and modeling approach. Soil Science & Plant Nutrition, 53: 344–352. doi: 10.1111/j.1747-0765.2007.00133.x
- Issue published online: 31 JUL 2007
- Article first published online: 31 JUL 2007
- Received 25 September 2006. Accepted for publication 16 January 2007.
Keywords:
- biogeochemical model;
- greenhouse gas;
Global climate change is one of the most important issues of contemporary environmental safety. A scientific consensus is forming that the emissions of greenhouse gases, including carbon dioxide, nitrous oxide and methane, from anthropogenic activities may play a key role in elevating the global temperatures. Quantifying soil greenhouse gas emissions is an essential task for understanding the atmospheric impacts of anthropogenic activities in terrestrial ecosystems. In most soils, production or consumption of the three major greenhouse gases is regulated by interactions among soil redox potential, carbon source and electron acceptors. Two classical formulas, the Nernst equation and the Michaelis–Menten equation, describe the microorganism-mediated redox reactions from aspects of thermodynamics and reaction kinetics, respectively. The two equations are functions of a series of environmental factors (e.g. temperature, moisture, pH, Eh) that are regulated by a few ecological drivers, such as climate, soil properties, vegetation and anthropogenic activity. Given the complexity of greenhouse gas production in soils, process-based models are required to interpret, integrate and predict the intricate relationships among the gas emissions, the environmental factors and the ecological drivers. This paper reviews the scientific basis underlying the modeling of greenhouse gas emissions from terrestrial soils. A case study is reported to demonstrate how a biogeochemical model can be used to predict the impacts of alternative management practices on greenhouse gas emissions from rice paddies. | <urn:uuid:f75a4a93-4f5c-419d-a32c-658a865e774b> | 2.9375 | 451 | Academic Writing | Science & Tech. | 25.828348 |
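For reference, the two classical formulas named above can be written in their conventional textbook forms (a sketch using standard symbol definitions; the notation is not taken from the paper itself):

```latex
% Nernst equation: redox potential of a half-reaction at temperature T
E_h = E^0 - \frac{RT}{nF}\ln\frac{[\text{reductant}]}{[\text{oxidant}]}

% Michaelis--Menten equation: rate of an enzyme-mediated reaction
v = \frac{V_{\max}\,[S]}{K_m + [S]}
```

Here E^0 is the standard potential, n the number of electrons transferred, F the Faraday constant, V_max the maximum reaction rate, and K_m the substrate concentration at which the rate is half-maximal.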
Mission Type: Orbiter
Launch Vehicle: Delta II 7925
Launch Site: Cape Canaveral, Fla., USA
NASA Center: Jet Propulsion Laboratory
Spacecraft Mass: 729.7 kilograms (1,608.7 pounds) total, composed of 331.8-kilogram (731.5-pound) dry spacecraft, 353.4 kilograms (779.1 pounds) of propellant and 44.5 kilograms (98.1 pounds) of science instruments
Spacecraft Instruments: 1) Thermal emission imaging system
2) Gamma ray spectrometer including a neutron spectrometer and the high-energy neutron detector
3) Martian radiation environment experiment
Spacecraft Dimensions: Main structure 2.2 meters (7.2 feet) long, 1.7 meters (5.6 feet) tall and 2.6 meters (8.5 feet) wide; wingspan of solar array 5.7-meter (18.7-feet) tip to tip
Spacecraft Power: Solar Panels
Maximum Power: 750 W
Total Cost: $297 million for primary science mission ($165 million spacecraft development and science instruments; $53 million launch and $79 million mission operations and science processing)
2001 Mars Odyssey Arrival Press Kit, October 2001
Mars Odyssey is an orbiter carrying science experiments designed to make global observations of Mars to improve our understanding of the planet's climate and geologic history, including the search for water and evidence of life-sustaining environments.
One of the chief scientific goals that 2001 Mars Odyssey will focus on is mapping the chemical elements and minerals that make up the Martian surface. As on Earth, the elements, minerals and rocks that form the planet chronicle its history. And while neither elements (the building blocks of minerals) nor minerals (the building blocks of rocks) can convey the entire story of a planet's evolution, both contribute significant pieces to the puzzle. These factors have profound implications for understanding the evolution of Mars' climate and the role of water on the planet, the potential origin and evidence of life, and the possibilities that may exist for future human exploration.
Other major goals of the Odyssey mission are to:
- Determine the abundance of hydrogen, most likely in the form of water ice, in the shallow subsurface
- Globally map the elements that make up the surface
- Acquire high-resolution thermal infrared images of surface minerals
- Provide information about the structure of the Martian surface
- Record the radiation environment in low Mars orbit as it relates to radiation-related risk to human exploration
Odyssey also served as a communication relay for landers such as the Mars Exploration Rovers Spirit and Opportunity.
The orbiter carries three science payloads comprised of six individual instruments: a thermal infrared imaging system, made up of visible and infrared sensors; a gamma ray spectrometer, which also contains a neutron spectrometer and high-energy neutron detector; and a radiation environment experiment. | <urn:uuid:4ee7e7df-c09d-46ec-8c91-1c812d9a737f> | 3.203125 | 605 | Knowledge Article | Science & Tech. | 37.741527 |
Twisted contains a large number of examples. One in particular, the "evolution of Finger" tutorial, contains a thorough explanation of how an asynchronous program grows from a very small kernel up to a complex system with lots of moving parts. Another one that might be of interest to you is the tutorial about simply writing servers.
The key thing to keep in mind about Twisted, or even other asynchronous networking libraries (such as asyncore, MINA, or ACE), is that your code only gets invoked when something happens. The part that I've heard most often sounds like "voodoo" is the management of callbacks: for example, Deferred. If you're used to writing code that runs in a straight line, and only calls functions which return immediately with results, the idea of waiting for something to call you back might be confusing. But there's nothing magical, no "voodoo" about callbacks. At the lowest level, the reactor is just sitting around and waiting for one of a small number of things to happen:
- Data arrives on a connection (it will call dataReceived on a Protocol)
- Time has passed (it will call a function registered with callLater)
- A connection has been accepted (it will call buildProtocol on a factory registered with a listenTCP or connectTCP call)
- A connection has been dropped (it will call connectionLost on the appropriate Protocol)
Every asynchronous program starts by hooking up a few of these events and then kicking off the reactor to wait for them to happen. Of course, events that happen lead to more events that get hooked up or disconnected, and so your program goes on its merry way. Beyond that, there's nothing about the structure of an asynchronous program that is especially interesting or special; event handlers and callbacks are just objects, and your code is run in the usual way.
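The "callbacks are just objects" point can be made concrete with a toy Deferred-like object (an illustration only, far simpler than Twisted's real Deferred):

```python
# A toy, stripped-down stand-in for Twisted's Deferred: an object that
# queues up plain function objects and runs them when a result appears.
class ToyDeferred:
    def __init__(self):
        self.callbacks = []
        self.fired = False
        self.result = None

    def addCallback(self, fn):
        if self.fired:
            self.result = fn(self.result)   # result already here: run now
        else:
            self.callbacks.append(fn)       # otherwise queue for later
        return self

    def callback(self, result):
        # The "something happened" moment: run every queued callback,
        # passing each one the previous callback's return value.
        self.fired = True
        self.result = result
        for fn in self.callbacks:
            self.result = fn(self.result)

results = []
d = ToyDeferred()
d.addCallback(lambda r: r * 2)   # ordinary function objects...
d.addCallback(results.append)    # ...waiting for the result to exist
d.callback(21)                   # later, "data arrives": callbacks fire
```

After callback(21) fires, results holds the doubled value; nothing more mysterious than a list of functions invoked in order.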
Here's a simple "event-driven engine" that shows you just how simple this process is.
class SimplestReactor(object):
    def __init__(self):
        self.events = []
        self.stopped = False

    def do(self, something):
        self.events.append(something)

    def run(self):
        while not self.stopped:
            thisTurn = self.events.pop(0)
            thisTurn()

    def stop(self):
        self.stopped = True

reactor = SimplestReactor()

def thing1():
    print 'Doing thing 1'
    reactor.do(thing2)
    reactor.do(thing3)

def thing2():
    print 'Doing thing 2'

def thing3():
    print 'Doing thing 3: and stopping'
    reactor.stop()

reactor.do(thing1)
reactor.run()
At the core of libraries like Twisted, the function in the main loop is not sleep, but an operating system call like poll(), as exposed by a module like the Python select module. I say "like" select, because this is an API that varies a lot between platforms, and almost every GUI toolkit has its own version. Twisted currently provides an abstract interface to 14 different variations on this theme. The common thing that such an API provides is a way to say "Here is a list of events that I'm waiting for. Go to sleep until one of them happens, then wake up and tell me which one of them it was."
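A minimal, stdlib-only sketch of that idea (plain select on ordinary sockets, not Twisted's abstraction; the loopback socket setup is purely illustrative):

```python
import select
import socket

# A listening socket and a client that connects to it.
server = socket.socket()
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(('127.0.0.1', 0))   # port 0: let the OS pick a free port
server.listen(1)

client = socket.socket()
client.connect(server.getsockname())

# "Go to sleep until one of these events happens, then tell me which."
# The pending connection makes the listening socket readable.
readable, _, _ = select.select([server], [], [], 5.0)
conn, _ = readable[0].accept()
conn.sendall(b'hi')

# Now the client socket becomes readable: data has arrived.
readable, _, _ = select.select([client], [], [], 5.0)
data = readable[0].recv(2)

for s in (conn, client, server):
    s.close()
```

An event loop is just this call wrapped in a while loop, dispatching to the appropriate handler (dataReceived, buildProtocol, and so on) depending on which socket woke it up.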
The rate of growth dP/dt of a population of bacteria is proportional to the square root of t (that is, dP/dt = k√t), where P is the population size and t is the time in days (0 ≤ t ≤ 10).
The initial size of the population is 500. After 1 day the population has grown to 600. Estimate the population after 7 days. | <urn:uuid:e0ecb034-5e37-468f-bb84-d016d11de826> | 2.984375 | 74 | Q&A Forum | Science & Tech. | 81.404358 |
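One way to work the estimate (a sketch assuming the model dP/dt = k√t stated above):

```python
# dP/dt = k * sqrt(t)  =>  P(t) = P0 + (2k/3) * t**1.5
P0 = 500.0                     # initial population, P(0) = 500

# Fit k from P(1) = 600:  600 = 500 + (2k/3) * 1**1.5  =>  k = 150
k = (600.0 - P0) * 3.0 / 2.0

def P(t):
    return P0 + (2.0 * k / 3.0) * t ** 1.5

P7 = P(7)
print(round(P7))   # roughly 2352 bacteria after 7 days
```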
EAC Focus - Ruth Gonzalez
Ruth Gonzalez is an expert in seismic imaging methods. Working with seismic data collected on the surface of the earth, she develops algorithms that perform calculations to produce images of the underlying geology. Since 1980, she has applied this technology to help discover oil and gas reservoirs for Exxon. "We use sound energy to help us determine exactly what the earth looks like beneath its surface, much like doctors use ultrasounds to take a peek at an unborn child, or x-rays of your body to help in diagnosing medical problems," she explains of the seismic process. "Our wave equations describe how waves propagate through different materials such as rocks, water, and air. This means that mathematics allows us to see several miles into the earth without invasive processes."
According to Gonzalez, a typical 3D seismic dataset contains about 400 gigabytes of data. "We make simplifying assumptions, use massively parallel computers like the Cray T3D with 256 processors, and rely on high-speed disk and tape to compute accurate solutions within a reasonable time frame. These images are used by Exxon geologists and geophysicists to predict the right places to drill." Gonzalez points out that the advances in parallel computing have made these solutions tractable and routine today. "Computing times, even with vector machines, were estimated to be years compared to the weeks that they actually take on parallel machines," she says. "This allows us to explore for hydrocarbons in complex areas that were previously thought to be too risky."
Gonzalez became interested in the application of mathematics to real-world problems as a college student at the University of Texas at Austin. She received her B.A. and M.A. in mathematics in 1975 and 1979 and was a researcher at Applied Research Laboratories at the university from 1976 to 1980. There, she worked on wave propagation problems in underwater acoustics that were of interest to the U.S. Navy. "This was my first encounter with real-life problems using applied mathematics," she says. " I developed mathematical models that allowed other scientists to best position submarines so that they could remain undetected by potential attackers."
Gonzalez joined Exxon Production Research Company (EPR) in 1980 and began working toward her Ph.D. in applied mathematics at Rice University in 1981. She received her degree in 1986. Although her doctoral research focused on the computational mathematics in simulating fluid flow through oil and gas reservoirs, she continued to focus on seismic imaging at Exxon. She worked at another Exxon company, Exxon Exploration, from 1992 to 1996 and is now back at EPR. Throughout her career she has been a consultant and mentor for researchers at various Exxon companies, teaching them how to use seismic imaging technologies.
Gonzalez is currently developing new methods that take into account the irregular distribution of surface seismic data. This will honor both the kinematics and amplitude information of the recorded seismic waves. "Current theories are true for uniform distribution of surface data," she explains, "but we are rarely able to collect such data in the real world."
Gonzalez is a member of the Society of Exploration Geophysicists and has been a member of the CRPC External Advisory Committee since 1995. She was a member of the external advisory committee for the CRPC's Geoscience Parallel Computation Project (GPCP) from 1993 to 1995, focusing on both reservoir simulation and seismic imaging. She is co-author of the paper, "Interdisciplinary Approach to Maximize Benefits of Prestack Depth Migration," which was selected as one of the best five papers for the Rio '95 Conference in Rio de Janeiro, Brazil.
Hubble Finds Farthest Protocluster of Galaxies
updated: Jan 10, 2012, 9:06 AM
NASA's Hubble Space Telescope has uncovered a cluster of galaxies in the initial stages of construction - the most distant such grouping ever observed in the early universe. An astrophysicist at UC Santa Barbara contributed to the discovery.
In a random sky survey made in near-infrared light, Hubble spied five tiny galaxies clustered together 13.1 billion light-years away. Among the brightest galaxies at that epoch, they are very young, living just 600 million years after the universe's birth in the Big Bang.
Galaxy clusters are the largest structures in the universe, comprising hundreds to thousands of galaxies bound together by gravity. The developing cluster, or protocluster, presumably will grow into one of today's massive galactic cities, comparable to the nearby Virgo cluster, a collection of more than 2,000 galaxies.
"Just a couple of years ago, a discovery like this one would have seemed impossible," said Tommaso Treu, team member and a professor of physics at UCSB. "Now, we are not only finding galaxies as close as ever before to the Bang Bang, but we are actually finding entire structures of them!"
The study's leader, Michele Trenti, added: "These galaxies formed during the earliest stages of galaxy assembly, when galaxies had just started to cluster together. The result confirms our theoretical understanding of the buildup of galaxy clusters. And, Hubble is just powerful enough to find the first examples of them at this distance." Trenti is affiliated with the University of Colorado at Boulder and Britain's Institute of Astronomy at the University of Cambridge.
Trenti presents his results today at the American Astronomical Society meeting in Austin, Texas. The study is published in the Jan. 10 issue of The Astrophysical Journal.
Most galaxies in the universe live in groups and clusters, and astronomers have probed many mature galactic cities in detail, as far away as 11 billion light-years. But finding clusters in the early phases of construction has been challenging because they are rare, dim, and widely scattered across the sky.
Last year, a group of astronomers uncovered one distant developing cluster. Led by Peter L. Capak of NASA's Spitzer Science Center at the California Institute of Technology in Pasadena, the astronomers discovered a galactic grouping 12.6 billion light-years away with a variety of telescopes, including Hubble. Spectroscopic observations were made with the W.M. Keck Observatory in Hawaii to confirm the cluster's distance by measuring how much its light has been stretched by the expansion of space.
Trenti's team used Hubble's sharp-eyed Wide Field Camera 3 to hunt for the elusive catch. "We need to look in many different areas, because the odds of finding something this rare are very small," said Trenti. "It's like playing a game of Battleship: The search is hit and miss. Typically, a region has nothing, but if we hit the right spot, we can find multiple galaxies."
Because these distant, fledgling clusters are so dim, the team hunted for the systems' brightest galaxies. These brilliant light bulbs act as billboards, advertising cluster construction zones. Galaxies at early epochs don't live alone. From simulations, the astronomers expect galaxies to be clustered together. Because brightness correlates with mass, the most luminous galaxies pinpoint the location of developing clusters. These powerful light beacons live in deep wells of dark matter, which form the underlying structure in which galaxy clusters form. The team expects many fainter galaxies not seen in these observations to inhabit the same neighborhood.
The five bright galaxies spotted by Hubble are about one-half to one-tenth the size of our Milky Way, yet are comparable in brightness. The galaxies are bright and massive because they are fed lots of gas through mergers with other galaxies. The team's simulations show that the galaxies will eventually merge and form the brightest central galaxy in the cluster, a giant elliptical galaxy similar to the Virgo Cluster's M87.
The observations demonstrate the progressive buildup of galaxies and provide further support for the hierarchical model of galaxy assembly, in which small objects accrete mass, or merge, to form bigger objects over a smooth and steady but dramatic process of collision and agglomeration. The process is similar to that of streams merging into tributaries, then into rivers, and then to bays.
Hubble looked in near-infrared light because ultraviolet and visible light from distant objects have been stretched into near-infrared wavelengths by the expansion of space in these extremely distant galaxies. The observations are part of the Brightest of Reionizing Galaxies (BoRG) survey, which is using Hubble's Wide Field Camera 3 to search for the brightest galaxies around 13 billion years ago, when light from the first stars burned off a fog of cold hydrogen in a process called reionization.
The team estimated the distance to the newly spied galaxies based on their colors, but the astronomers plan to follow up with spectroscopic observations to confirm their distance. Treu is leading the spectroscopic follow-up of these protocluster galaxies using the 10-meter W.M. Keck Telescope.
The first paper on the spectroscopic follow-up led by Treu will be published in the Feb. 1 issue of The Astrophysical Journal. Treu plans to extend the spectroscopic follow-up with MOSFIRE, a next generation instrument about to be commissioned on the Keck Telescope. "These protocluster galaxies are the ideal targets for MOSFIRE, an instrument capable of measuring infrared spectra for several objects at once," said Treu. "In combination with the Hubble images, Keck spectra will provide more accurate measurements of the distance and motion of these galaxies, as well as the way in which stars form and reionize the universe around this time."
Mutational Buildup Indicates Living Populations Are Young
Mutations are copy errors that degrade the information contained in DNA, which carries vital instructions for the maintenance and reproduction of cellular life. All creatures experience mutations. Very rarely, a mutational error can help some individuals in a population survive in an unusual environment. A mutation can cause disease or death, but the vast majority of mutations have no effect on the organism. Since they are too subtle to cause a difference in any trait, no process can detect them. Even "natural selection" is incapable of detecting them. Therefore, these nearly neutral mutations build up relentlessly.
Eventually, the accumulating mutations will damage vital systems and cause "mutational meltdown," which leads to extinction. The buildup of mutations is accelerated by small population sizes, making recovery difficult or impossible. For example, conservationists must carefully breed pandas, giant salamanders, Tasmanian devils, Bengal tigers, and many more endangered creatures with others of their kinds that have the fewest mutations. But this only delays the inevitable.
It has been calculated that with 100 mutations per 20-year generation, the human genome would not last much longer than 500 generations. Because mutational buildup acts as a clock ticking down toward extinction, and because so many animals have yet to reach that point, this is strong evidence that life on earth is only thousands, not millions, of years old. | <urn:uuid:de1567b4-ed3b-44ca-9c27-eccf6ae67cd2> | 3.765625 | 281 | Knowledge Article | Science & Tech. | 22.710843 |
A fungus is a member of a large group of eukaryotic organisms that includes microorganisms such as yeasts and molds, as well as the more familiar mushrooms. These organisms are classified as a kingdom, Fungi, which is separate from plants, animals, and bacteria. This fungal group is distinct from the structurally similar slime molds and water molds. The discipline of biology devoted to the study of fungi is known as "mycology", which is often regarded as a branch of botany, even though genetic studies have shown that fungi are more closely related to animals than to plants. Abundant worldwide, most fungi are inconspicuous because of the small size of their structures and their cryptic lifestyles (the ability of an organism to avoid observation by other organisms), living in soil, on dead matter, and as symbionts of plants, animals, or other fungi. Symbiosis is a close and often long-term interaction between different biological species. Since the 1940s, fungi have been used for the production of antibiotics, and, more recently, various enzymes produced by fungi have been used industrially and in detergents. Fungi are also used as biological pesticides to control weeds, plant diseases and insect pests.
How many species of fungi have been identified so far?
There are about 75,000 scientifically identified species of fungi, and scientists believe there may be as many as a million fungal species yet to be identified. Because differing species of fungi can look outwardly identical, classifying them accurately is difficult, and usually requires the application of molecular tools such as DNA sequencing. Since DNA sequencing is still expensive, even for fungi, whose genomes are far shorter than those of mammals, it will likely be many decades before the majority of fungi are classified with certainty.
What is the History of classification of fungi?
Ever since the pioneering 18th and 19th century taxonomical works of Carl Linnaeus, a Swedish botanist, Christian Hendrik Persoon, a mycologist and Elias Magnus Fries, a botanist, fungi have been classified according to their morphology (e.g., characteristics such as spore color or microscopic features) or physiology. Fungal classification at the phyla level is complicated, and is constantly being reshuffled. Fungi were first misclassified as plants, but subsequent investigations found they actually have more in common with animals. Like plants and animals, fungi are eukaryotes. General fungi include molds — which grow in strands called hyphae, mushrooms — fruiting bodies of fungal colonies, and yeasts — the name for any single-celled fungi. But, these are broad terms, and molds, yeasts, and mushrooms can be found across several taxonomic categories of fungi.
How are fungi classified?
The 75,000 identified species of organisms commonly classified together as fungi are customarily divided into eight phyla, or divisions:
Chytridiomycota, or chytrids: This is the most ancient form of fungi, with about 1,000 identified species. Chytrids produce spores with flagella (zoospores), and parasitize amphibians, maize, alfalfa (a plant widely used as animal feed), potatoes, and other vulnerable organisms. Being mainly aquatic, they are most representative of the fungi that lived throughout the Paleozoic era. A spore is a unit of asexual reproduction adapted to spending a long period of time in unfavorable conditions before developing into an offspring.
Blastocladiomycota: This is the second phylum of fungi, only created as a distinct category in 2007. Like the chytrids, they use zoospores to reproduce, and they parasitize all major eukaryotic groups.
Neocallimastigomycota: This third phylum comprises anaerobic fungi that dwell mainly in the stomachs of ruminants. Their name contains the Greek suffix for whips, “mastix”, referring to their numerous flagella. The second and third phyla were both initially misclassified as chytrids.
Zygomycota: The fourth phylum of fungi is the more familiar Zygomycota, named for the hardy spherical spores they produce. Zygomycota includes black bread mold and other molds, among the fungi most often seen by humans. Most are soil-living saprobes that feed on dead animal or plant remains. Some are parasites of plants or insects. They reproduce sexually and form tough zygospores; there is no distinguishable male or female. Over 600 species have been described. One notable member is Pilobolus, a fungus capable of ejecting its spores several meters through the air.
Glomeromycota: This is the fifth phylum of fungi, known as the arbuscular mycorrhizal (AM) fungi. Typically, that term means “tree fungi.” These fungi can be found in large numbers in the roots of more than 80% of families of vascular plants. This relationship is symbiotic and ancient, extending back at least 460 million years, to the beginning of plant life on land.
Ascomycota: This is the sixth phylum of fungi, known as the “sac fungi”. These fungi form distinctive spherical sacs to hold their spores, and the phylum has the most species of all the fungi. Examples include Penicillium, edible morels and truffles, baker’s yeast, lichens, powdery mildews, the black and blue-green molds and many others. Many members of this phylum are plant-pathogenic. There are over 50,000 species. Their life cycle is a complex combination of sexual and asexual reproduction.
Basidiomycota, or the club fungi: This is the seventh phylum of fungi. This group contains most common mushrooms. It is distinguished by the presence of a spore-producing structure called the ‘basidium’, housed in what is usually known as a ‘cap’. Along with Ascomycota, the club fungi are known as the ‘Higher Fungi’. The group includes the gill fungi (most mushrooms) and the pore fungi (e.g., the bracket fungi, which grow shelf-like on trees, and an edible type called tuckahoe). It also includes the fungi that cause smut and rust in plants.
Deuteromycota: This phylum encompasses a copious assortment of fungi that do not fit tidily into other divisions; they have in common an apparent lack of sexual reproductive features. Also called “Fungi Imperfecti”, these fungi cause diseases of plants and animals (e.g., athlete's foot and ringworm), and some produce penicillin. A number of the fungi classified as deuteromycetes have been found to be asexual stages of species in other groups, and some classification schemes consider the deuteromycetes a class under Ascomycota.
World's First Supercavitating Boat?
"For decades, researchers have been trying to build boats, submarines, and torpedoes that make use of supercavitation — a bubble layer around the hull that drastically reduces friction and enables super-fast travel.
Now a company in New Hampshire called Juliet Marine Systems has built and tested such a craft, and says it is the world's fastest underwater vehicle.
The ship, called the 'Ghost,' looks like two supercavitating torpedoes with a command module on top, and can carry 18 people plus weapons and supplies.
The company is in talks with the U.S. Navy to build a version of the ship that can guard the fleet against swarm attacks by small boats. The question is how well it really works, and whether it can be used reliably and effectively on the high seas."
Imagine a boat that moves through the water differently from any other boat in existence. It uses “supercavitation”—the creation of a gaseous bubble layer around the hull to reduce friction underwater—to reach very high speeds at relatively low fuel cost.
Its speed and shape mean it can evade detection by sonar or ship radar. It can outrun torpedoes. Its fuel efficiency means it has greater range and can run longer missions than conventional boats and helicopters.
Now imagine that this vessel has already been built and tested. It “flies” through the water more or less the way it was designed to—like a high-tech torpedo, except part of the craft is above water—and it can be maneuvered like a fighter plane. “It’s almost as much an aircraft as it is a boat,” says its inventor, Gregory Sancoff, the founder and CEO of Juliet Marine Systems, a private company in Portsmouth, NH.
OK, so here’s how it works, according to a patent filing (see diagram, below). The main compartment of the Ghost vessel, which houses the cockpit and controls, sits above the water in between two torpedo-shaped pontoons or “foils,” which are submerged and create all the buoyancy and propulsion for the craft.
The angle of the struts that connect the foils to the command module is adjustable—so the craft can ride high in choppy seas and at high speeds (so waves don’t hit the middle part), and low in calm water and at lower speeds.
“We’re basically riding on two supercavitating torpedoes. And we’ve put a boat on top of it,” Sancoff says.
At the front of each foil is a special propeller system that pulls the craft forward. The propellers are powered by a modified gas turbine—a jet engine—housed in each foil; the air intake and exhaust ports for the engines are in the struts.
As the ship moves through the water, the motion of the propellers creates a thin layer of bubbly water vapor that surrounds each foil from front to back, helped along by the presence of “air trap fins” that keep the vapor in contact with the hull (and keep liquid away from the hull). The vapor is what constitutes the supercavitation, so the foils can glide effortlessly through the bubbles.
“The key is the propulsion. You have to have a lot of power at the right location in this vessel,” Sancoff says. Exactly how this is done is a trade secret. But the propulsion system, which he says generates 30 percent more thrust than any other propeller-based system, essentially “boils water underwater and generates steam vapor.” (I take this to mean the pressure directly behind the propeller blades is so low that the liquid water there “boils” off and becomes a gas—hence the bubbles.)
In any case, the overall design makes the craft go fast, but Sancoff isn’t making any public claims yet about exactly how fast. “We don’t talk about speed, how many weapons [it can carry], or how far we can go,” he says. Yet its rumored speed is at least 80-100 knots—over 100 mph. That’s not going to challenge the top speedboat records—there have been hydroplane efforts (riding on the water surface) that have exceeded 200 mph (174 knots) and even 300 mph (261 knots), some with fatal results—but the Ghost is faster than any previous underwater vehicle, Sancoff says.
What’s more, he says, the Ghost provides a much smoother ride than what Navy SEALs are used to; many of them blow out their backs from the bumpiness of their boats, he says. “Our boat does not have impact from the waves. We cut through the wave,” Sancoff says. “That is critical science.”
To steer itself through the water and maintain stability, the Ghost uses four movable flaps on the front of each foil and four on the back of each foil, for a total of 16 flaps. (The flaps reach through the thin bubble layer into the surrounding water.) The struts are adjusted to keep the command module out of the water, and the foils stay submerged, so waves at the water surface should only hit the struts, which have a small cross-section.
“It’s computer controlled, like a modern F-18,” Sancoff says. “We’re boring what looks like two wormholes underwater, and we’re flying through foam.” Sancoff himself has been test-driving the ship over the past couple of years. “I have been learning an entirely new craft since then. It’s a totally new experience,” he says. “Just because you drive Grandpa’s boat, you’re not going to drive this one. It’s more like a helicopter.”
As for the craft’s audio profile, Sancoff is proud of its “silent propulsion” system that includes a sophisticated muffler system for the engines. You can’t hear it from 50 feet away, he says.
- Full Article Source
DVD - the Physics of Crystals, Pyramids and Tetrahedrons
This is a wonderful dual DVD set, lasting two hours, which presents one man's lifelong study of pyramids, crystals and their effects. Several of his original and very creative experiments are explained and diagrammed for experimenters. These experiments include:
1) transmutation of zinc to lower elements using a tetrahedron,
2) energy extraction from a pyramid,
3) determining mathematic ratios of nature in a simple experiment,
4) accelerating the growth of food,
5) increasing the abundance of food,
6) how crystals amplify, focus and defocus energy,
7) using crystals to assist natural healing,
8) how the universe uses spirals and vortexes to produce free energy and MORE...
- Two DVDs - More Info and check out this Youtube Clip
High Voltage & Free Energy Devices Handbook
This wonderfully informative ebook provides many simple experiments you can do, including hydrogen generation and electrostatic repulsion as well as the keys to EV Gray's Fuelless Engine. One of the most comprehensive compilations of information yet detailing the effects of high voltage repulsion as a driving force. Ed Gray's engine produced in excess of 300HP and he claimed to be able to 'split the positive' energy of electricity to produce a self-running motor/generator for use as an engine. Schematics and tons of photos of the original machines and more! Excellent gift for your technical friends or for that budding scientist! If you are an experimenter or know someone who investigates such matters, this would make an excellent addition to your library or as an unforgettable gift. The downloadable HVFE eBook pdf file is almost 11MB in size and contains many experiments, photos, diagrams and technical details. Buy a copy and learn all about hydrogen generation, its uses and how to produce electrostatic repulsion. - 121 pages
- More Info
KeelyNet BBS Files w/bonus PDF of 'Keely and his Discoveries'
Finally, I've gotten around to compiling all the files (almost 1,000 - about 20MB and lots of work doing it) from the original KeelyNet BBS into a form you can easily navigate and read using your browser, ideally Firefox but it does work with IE. Most of these files are extremely targeted, interesting and informative, I had forgotten just how much but now you can have the complete organized, categorized set, not just sprinklings from around the web. They will keep you reading for weeks if not longer and give you clues and insights into many subjects and new ideas for investigation and research. IN ADDITION, I am including as a bonus gift, the book (in PDF form) that started it all for me, 'Keely and his Discoveries - Aerial Navigation' which includes the analysis of Keely's discoveries by Dr. Daniel G. Brinton. This 407 page eBook alone is worth the price of the KeelyNet BBS CD but it will give you some degree of understanding about what all Keely accomplished which is just now being rediscovered, but of course, without recognizing Keely as the original discoverer. Chapters include; Vibratory Sympathetic and Polar Flows, Vibratory Physics, Latent Force in Interstitial Spaces and much more. To give some idea of how Keely's discoveries are being slowly rediscovered in modern times, check out this Keely History. These two excellent bodies of information will be sent to you on CD. If alternative science intrigues and fascinates you, this CD is what you've been looking for...
- More Info
'The Evolution of Matter' and 'The Evolution of Forces' on CD
Years ago, I had been told by several people, that the US government frequently removes books they deem dangerous or 'sensitive' from libraries. Some are replaced with sections removed or rewritten so as to 'contain' information that should not be available to the public despite the authors intent. A key example was during the Manhattan Project when the US was trying to finalize research into atomic bombs. They removed any books that dealt with the subject and two of them were by Dr. Gustave Le Bon since they dealt with both energy and matter including radioactivity. I had been looking for these two books for many years and fortunately stumbled across two copies for which I paid about $40.00 each. I couldn't put down the books once I started reading them. Such a wealth of original discoveries, many not known or remembered today. / Page 88 - Without the ether there could be neither gravity, nor light, nor electricity, nor heat, nor anything, in a word, of which we have knowledge. The universe would be silent and dead, or would reveal itself in a form which we cannot even foresee. If one could construct a glass chamber from which the ether were to be entirely eliminated, heat and light could not pass through it. It would be absolutely dark, and probably gravitation would no longer act on the bodies within it. They would then have lost their weight. / Page 96-97 - A material vortex may be formed by any fluid, liquid or gaseous, turning round an axis, and by the fact of its rotation it describes spirals. The study of these vortices has been the object of important researches by different scholars, notably by Bjerkness and Weyher. They have shown that by them can be produced all the attractions and repulsions recognized in electricity, the deviations of the magnetic needle by currents, etc. These vortices are produced by the rapid rotation of a central rod furnished with pallets, or, more simply, of a sphere. 
Round this sphere gaseous currents are established, dissymetrical with regard to its equatorial plane, and the result is the attraction or repulsion of bodies brought near to it, according to the position given to them. It is even possible, as Weyher has proved, to compel these bodies to turn round the sphere as do the satellites of a planet without touching it. / Page 149 - "The problem of sending a pencil of parallel Hertzian waves to a distance possesses more than a theoretical interest. It is allowable to say that its solution would change the course of our civilization by rendering war impossible. The first physicist who realizes this discovery will be able to avail himself of the presence of an enemy's ironclads gathered together in a harbour to blow them up in a few minutes, from a distance of several kilometres, simply by directing on them a sheaf of electric radiations. On reaching the metal wires with which these vessels are nowadays honeycombed, this will excite an atmosphere of sparks which will at once explode the shells and torpedoes stored in their holds. With the same reflector, giving a pencil of parallel radiations, it would not be much more difficult to cause the explosion of the stores of powder and shells contained in a fortress, or in the artillery sparks of an army corps, and finally the metal cartridges of the soldiers. Science, which at first rendered wars so deadly, would then at length have rendered them impossible, and the relations between nations would have to be established on new bases."
- More Info
Hypnosis CD - 3 eBooks with How To Techniques and Many Cases
If you have a few minutes, you might want to read my page on hypnosis and all the amazing things associated with its application. Included is an experience I had when I hypnotized a neighbor kid when I was about 14, as well as the hypnotic gaze of snakes, the discovery of 'eyebeams' which can be detected electronically, the Italian hypnotist robber who was caught on tape with his eyes glowing as cashiers handed over their money and remembered nothing, glamour and clouding the mind of others, several methods of trance induction and many odd cases, animal catatonia, healing, psychic phenomena, party/stage stunts, including my favorite of negative hallucination where you make your subject NOT see something...much more...if nothing else, it might be a hoot to read.
- More Info
14 Ways to Save Money on Fuel Costs
This eBook is the result of years of research into various methods to increase mileage, reduce pollution and most importantly, reduce overall fuel costs. It starts out with the simplest methods and offers progressively more detailed technologies that have been shown to reduce fuel costs. As a bonus to readers, I have salted the pages with free interesting BONUS items that correlate to the relevant page. Just filling up with one tank of gas using this or other methods explained here will pay for this eBook. Of course, many more methods are out there but I provided only the ones which I think are practical and can be studied by the average person who is looking for a way to immediately reduce their fuel costs. I am currently using two of the easier methods in my own vehicle which normally gets 18-22 mpg and now gets between 28 and 32 mpg depending on driving conditions. A tank of gas for my 1996 Ford Ranger costs about $45.00 here so I am saving around $15-$20 PER TANK, without hurting my engine and with 'greener' emissions due to a cleaner burn! The techniques provided in this ebook begin with simple things you can do NOW to improve your mileage and lower your gas costs. - eBook Download / More Info
The Physics of the Primary State of Matter
The Physics of the Primary State of Matter - published in the 1930s, Karl Schappeller described his Prime Mover, a 10-inch steel sphere with quarter-inch copper tubing coils. These were filled with a material not named specifically, but which is said to have hardened under the influence of direct current and a magnetic field [electro-rheological fluid]. With such polarization, it might be guessed to act like a dielectric capacitor and as a diode...
- More Info
$5 Alt Science MP3s to listen while working/driving/jogging
No time to sit back and watch videos? Here are 15 interesting presentations you can download for just $5 each and listen to while driving, working, jogging, etc. An easy way to learn some fascinating new things that you will find of use. Easy, cheap and simple, better than eBooks or Videos. Roughly 50MB per MP3.
- More Info
15 New Alternative Science DVDs & 15 MP3s
An assortment of alternative science videos that provide many insights and inside information from various experimenters. Also MP3s extracted from these DVDs that you can listen to while working or driving. Reference links for these lectures and workshops by Bill Beaty of Amateur Science on the Dark Side of Amateur Science, Peter Lindemann on the World of Free Energy, Norman Wootan on the History of the EV Gray motor, Dan Davidson on Shape Power and Gravity Wave Phenomena, Lee Crock on a Method for Stimulating Energy, Doug Konzen on the Konzen Pulse Motor, George Wiseman on the Water Torch and Jerry Decker on Aether, ZPE and Dielectric Nano Arrays. Your purchase of these products helps support KeelyNet, thanks!
- More Info | <urn:uuid:d02e171b-cc78-423d-9278-205cae15eeaf> | 2.84375 | 3,602 | Content Listing | Science & Tech. | 49.521505 |
Joined: 16 Mar 2004
|Posted: Wed Jan 07, 2009 11:34 am Post subject: Renewable Energy:Small Solution for Big Electricity Problem
A Small Solution For a Big Electricity Problem
Rice University faces an uphill battle trying to help the renewable power generation industry take on one of its toughest challenges.
The problem is not a lack of big ideas, but money.
Scientists at the Houston-based university are trying to create a new type of wire made from carbon manipulated at the molecular level so that it can carry massive amounts of electricity to populated areas from windy and sunny areas.
If successful, the new technology could greatly enhance the U.S. power transmission system's limited capacity, which hampers the expanded use of wind and solar energy.
Many experts see both wind and solar as the most promising alternative-energy sources. But there aren't many electrical transmission facilities connecting areas that can produce massive solar-and-wind generated electricity with urban centers. And even where there are facilities available, sending the energy is a challenge because transmission systems throughout the country are strained.
Nano-Tubes, Big Conundrum
Rice scientists said they have in their hands a revolutionary solution to create a highly efficient power grid.
They created, in the laboratory, fibers made of "carbon nano-tubes," or molecules of pure carbon, that are 100 times stronger than steel and transmit electricity more efficiently than copper, the material from which most electrical wires are made. But they need at least three to five more years and $25 million for a large-scale investigation into these fibers' properties.
"What we need is a person to fund a five-year effort at $5 million a year with the possibility that it might not work," said Wade Adams, director of Rice University's institute for nanoscale science and technology. "We have people who may sign a check for $500,000 but not for $25 million."
But some said the problem with Rice's nano-carbon tube wires is that they are still far from hitting the market as a viable product.
"The idea of a superconducting wire is thought of as the Holy Grail and has been for decades," said Josh Wolfe, an analyst at Lux Research Inc., which specializes in "disruptive" technologies. "What can be probed in a lab does not a product make."
Rice is not alone in its search to develop a superconducting power wire. At least three companies in the U.S. - American Superconductor Corp. (AMSC), Metal Oxide Technologies Inc. and SuperPower Inc. - are already manufacturing wires that are much more efficient than conventional copper-made cables but are still more expensive to produce. These companies use second-generation high-temperature superconductors, rather than Rice's "carbon nano-tubes." The high-temperature superconductors, which are basically ceramic materials encased in metals, were discovered in 1986 by IBM researchers.
In May, American Superconductor started a project with Consolidated Edison Inc. (ED) of New York, a company that provides electric service to approximately 3.2 million customers, to use superconducting wires under the streets of Manhattan.
"Without a doubt we think we are going after a billion-dollar opportunity," said Jason Fredette, American Superconductor's spokesman.
Metal Oxide Technologies is also producing wires using technology developed by the University of Houston using second-generation high-temperature superconductors.
Adams said Rice's project, called the Rice University Armchair Quantum Wire, is trying to provide the power industry with a new material that works regardless of temperature and will have even greater conductivity than wires made from high-temperature superconductors.
The project currently operates with a budget of $1.5 million, most of which comes from the Air Force.
Funding allocated from the National Aeronautics and Space Administration fell through at the same time as the death of the project's founder, professor Richard Smalley, who was awarded the Nobel Prize in Chemistry in 1996 for the discovery of a new type of carbon. Adams said Rice has asked several major oil companies, including BP PLC (BP), Royal Dutch Shell (RDSA) and Exxon Mobil Corp. (XOM), to invest a small part of the billions of dollars they have in profits in a technology that they could own and that may transform the future energy business.
"But some of them said, 'Come back and talk a year from now to see what progress you are making,'" Adams said. ExxonMobil declined to comment on Rice's project, but a spokesman said the company has a long tradition of supporting university research around the world. Shell said it couldn't confirm that the company had been contacted by Rice University.
Nano-Tubes, Big Costs
Rice's Smalley founded Carbon Nanotechnologies Inc. (CNT.Xx) in 2000 to commercialize carbon nano-tubes. The company merged in 2007 with Unidym.
While the cost of the carbon nanotubes material has decreased significantly, prices average about $150 a gram. Experts, however, predict the cost to fall to $10 a gram in the next few years, making nano-tubes more economically feasible.
"If you ask me if we will ever see a super-conducting wire that does what Rich Smalley envisioned, I think it is possible," Wolfe said. "But I couldn't give you a timeframe or a probability of when that will happen, and those are the two things that anybody that is credible has to do."
CPS Energy, which is a part owner of Texas' grid, said Rice's project is "too futuristic."
"We contribute and support new research, but this project is too futuristic," said CPS Energy's spokesman Rolando Romero.
But companies that sell electricity and renewable energy services, such as ConEdison Solutions, said they will be open to invest in new technology.
"We support any effort to improve transmission of renewable energy," said Christine Nevin, a company spokeswoman.
Source:Dow Jones Newswire via http://www.cattlenetwork.com/content.asp?contentid=202661 | <urn:uuid:b4679a8d-3a93-4a0f-9f68-6aa8f38d1d00> | 2.828125 | 1,288 | Comment Section | Science & Tech. | 38.732854 |
Because escape responses are so critical, the nerve fibers that control them in many invertebrates tend to be especially large. The tail-flick escape response of crayfish, which is often successful, is mediated by such “giant” nerve fibers. But even those giant fibers are no match for a shrew’s myelinated fibers. And shrews have a second advantage as well: they are warm-blooded, and thus their nervous system is always at the optimum temperature for peak performance. The combination of those two attributes makes shrews formidable predators, at least from the perspective of a crayfish. If escape fails and a battle ensues, a shrew quickly prevails.
The shrew’s brain is ultimately responsible for its sensory abilities, so we have sought to understand how the animal’s brain is organized and how that might contribute to the shrew’s skill as a predator. In all mammals, an outer six-layered sheet of tissue called the neocortex is the final processing station for visual, tactile, and auditory information. To investigate how the cortex is organized into different subdivisions for each of those functions, we can flatten it out, section it on a microtome, and stain it for anatomical markers that reveal the different areas. Along with recordings of brain activity, this technique enables us to map the size, shape, and location of brain regions devoted to the different senses and body parts.
In water shrews, a remarkable 85 percent of sensory cortex is devoted to processing information from touch. Vision and hearing take up only 8.5 percent and 6.5 percent of sensory cortex, respectively. Within the touch region of cortex, most of the area (about 70 percent) is devoted to processing sensory information from the whiskers, leaving only 30 percent for the trunk and limbs. That is an astounding mismatch between the size of body parts and the size of their representation in the neocortex—a phenomenon called cortical magnification. But it makes sense if one considers the importance of the whiskers, rather than their relative size. A similar “rule of thumb” governs body maps in human brains, where much of the touch region is devoted to the hands and lips, leaving only a meager area representing the trunk and legs.
The mammalian brain does not develop in isolation; rather, it is shaped by information from the body. A number of studies in different species suggest that inputs from the different senses compete for brain territory during development. We can get a clue to this process in shrews by peeking into the nest and observing the young. At early stages of development, just when the maps in the neocortex are being laid down, the skin housing the whiskers is swollen and vascular—standing out from the rest of the face. This reflects the enormous metabolic resources being devoted to whisker development. Thousands of touch receptors have been generated in the skin surrounding the nascent whiskers, and a massive cable of nerve fibers is already connecting them to the brain and sending signals to the developing neocortex. In developing water shrews, important inputs from the whiskers essentially carve out their large share of space in the neocortex. When the shrews finally emerge from the nest, at the age of three weeks, they are well-equipped with a keen sense of touch, and a week later they start diving for food on their own. | <urn:uuid:f4c0c209-67d2-4a62-8a1b-c7e2513f68b4> | 3.953125 | 685 | Nonfiction Writing | Science & Tech. | 41.717308 |
SQUEEZING human proteins out of plants could soon be easier than tapping latex from rubber trees, say biotechnologists who have learnt to make the roots of tobacco plants express human genes.
Many human proteins, such as antibodies, are already produced in cultures of plant cells. But these cultures require an expensive sterile medium and it is difficult to extract the proteins from inside the morass of cells.
Ilya Raskin at Rutgers University in New Jersey wondered if it might be cheaper and more practical to make plants secrete the proteins from their roots as they grow. Plant roots secrete many chemicals to alter the soil around them to make it to their liking and for self-defence. "They can only survive if they launch a continuous chemical attack on their environment," says Raskin.
He and his colleagues took genes from humans, jellyfish and bacteria and added a sequence for a protein ...
To continue reading this article, subscribe to receive access to all of newscientist.com, including 20 years of archive content. | <urn:uuid:9672d83c-6c9a-4941-854a-63093bd47e0d> | 3.75 | 213 | Truncated | Science & Tech. | 48.077416 |
The Large Hadron Collider is the largest and most complex scientific instrument ever built and the highest energy particle accelerator in the world.
The accelerator is located 100 m underground and runs through both French and Swiss territory (27 km circumference).
September 10th, 2008 marked the culmination of 20 years of work by over 8,000 scientists and thousands of engineers, technicians and support staff from over 80 different countries.
Some critics say that this could create a black hole and suck up the entire world, but many say that even if a black hole is created it will vanish within a millionth of a second.
for more info follow these links.
(I think this is the best footage/documentary from the LHC) http://www.youtube.com/watch?v=_fJ6PMfnz2E
This video was made by Chris Mann. The link http://lhc-first-beam.web.cern.ch/lhc-first-beam/Welcome.html is the CERN (European Organization for Nuclear Research) LHC first beam page.
Hope this video has been useful.
Please subscribe, leave a comment or rate; I would love to see your feedback! Thanks
In other words, the 1st parameter of the loop declares and initialises a variable (fl) to 10. The loop then evaluates the 2nd parameter, which is a logical check: it basically says "if fl is less than or equal to 150". If this statement is true, the loop executes its body (sum = sum + fl) and then proceeds to the 3rd and final parameter, which is an increment.
This code can also be written as:
var sum = 0;
for (fl=10; fl<=150; fl+= 5)
sum += fl;
document.write (sum);
This code doesn't really make much sense, but it may be part of a larger file that we can't see. | <urn:uuid:25b653b4-04c9-44c8-867b-a67027148a11> | 3.203125 | 169 | Comment Section | Software Dev. | 79.39228 |
Polar Bears Need Love
…And a plan.
A team of leading polar bear ecologists called for nations to make plans now for dealing with ongoing—and soon to be critical—threats to the survival of polar bears. The biggest problem for this iconic Arctic species is the mind-blowingly fast disappearance of sea ice. Their paper appears in Conservation Letters.
For instance, last month (January 2013) the average coverage of sea ice in the Arctic Ocean was 409,000 square miles (1.06 million square kilometers) below the 1979 to 2000 January average. That’s the sixth-lowest January extent in the satellite record, says the National Snow & Ice Data Center. Plus 2012 saw the lowest ever summer coverage of sea ice in the Arctic.
1. What progress has been made recently (last 15 years) in new propulsion system research? Rockets and their principle, that's 1,500 years old, at least! True that the Chinese were using them mainly for fireworks and the Europeans for weapons (what else? ...), still to my knowledge nothing really new was applied to the present day. There was some talk in the '70s about an atomic engine called "NERVA", then in the '80s some scientists forgot themselves and talked about research involving an ion engine... and denied everything next day; I bet their wrists were kind of red. The present system sounds impressive at Earth scale only. At 20 km per second (meaning 72,000 km/hr), speed enough to get around the world in 2,000 seconds (36 minutes), it still takes 3 years to go to Mars and back. That's crawling, man! Decency (lower level) begins at 100 km/sec, and with 200 km/sec, well, you can hope to start in May and be back by Thanksgiving, or Christmas at the outside, without too many micro-g side effects.
2. Earth radioactivity has been monitored from space for years. (This is not a question). IS SUCH INFORMATION AVAILABLE ONLINE? IF YES, WHERE? (This IS a question). I'm talking about a map display with constant readings of the local radioactivity, especially artificial sites, like power plants, research facilities, weaponry.
If not as a map, is the information available in text, like listing the abnormal only and the respective places ? | <urn:uuid:3d2e6056-cdfe-45de-a497-523e845bac82> | 2.765625 | 323 | Comment Section | Science & Tech. | 69.613214 |
For this year’s microgravity project we will be working with the Jet Propulsion Laboratory (JPL) and Cbana Labs on the development of a volatile organic compound (VOC) monitor that can be used in space. A VOC monitor is a device that measures the air quality of the environment and helps detect hazardous gases. One way to do this is with the use of a micro flame ionization detector (microFID). I hate to use Wikipedia as a reference, but they actually have a pretty good article explaining more about FIDs if you want to learn more: http://en.wikipedia.org/wiki/Flame_ionization_detector. Cbana Labs currently has a functional device for operation on Earth but would like to extend the usage of such a device to a space shuttle or the International Space Station. The problem with the current design is that the method used to introduce samples into the device relies on gravity. So our task will be to redesign the current technology to work in a zero-g or microgravity environment. Another main consideration to be tested is whether the microFID actually functions even with the samples properly fed into it. The problem is that flames behave very differently in zero gravity, which might affect the results. Here’s a cool demonstration video to see what I mean: http://www.youtube.com/watch?v=Q58-la_yAB4
Brief Summary
Biology
The kingfisher feeds mainly on fish and invertebrates, which it catches by perching on a convenient branch or other structure overhanging the water and plunging in when suitable prey comes within striking distance (2). If a suitable perch is not present, individuals may hover over the water whilst searching for prey (2). During the breeding season, pairs perform a display flight whilst calling. The nest consists of a tunnel in a riverbank or amongst the roots of a tree; both sexes help to excavate the tunnel, which terminates in a rounded chamber. In April or May, 6-7 whitish eggs are laid on the bare earth, but after some time regurgitated fish bones form a lining to the nest chamber. Both parents incubate the eggs for 19-21 days. The young fledge after around 23-27 days; before this time they may eagerly approach the entrance of the tunnel when waiting to be fed (4).
Pipelining is sending multiple requests over a socket and receiving the responses later, in the same order. This is faster than sending one request, waiting for the response, then sending the next request, and so on. This implementation returns a promise (future) response for each request that when invoked waits for the response if not already arrived. Multiple threads can send on the same pipeline (and get promises back); it will pipeline each thread's request right away without waiting.
A pipeline closes itself when a read or write causes an error, so you can detect a broken pipeline by checking isClosed. It also closes itself when garbage collected, or you can close it explicitly.
Create a new Pipeline on the given connection. You should close the pipeline when finished, which will also close the connection. If the pipeline is not closed but eventually garbage collected, it will be closed along with the connection.
Send message to destination; the destination must not respond (otherwise future calls will get these responses instead of their own).
Throws IOError and closes the pipeline if the send fails.
Send message to destination and return a promise of the response from that one message only. The destination must reply to the message (otherwise promises will have the wrong responses in them). Throws IOError and closes the pipeline if the send fails; likewise for the promised response.
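As a sketch of how such a pipeline can be built (in Python here, purely for illustration; the class and method names are invented and are not this library's API), one thread sends requests without waiting while a reader thread resolves promised responses in send order:

```python
import socket
import threading
from concurrent.futures import Future

class Pipeline:
    """Minimal pipelining sketch: each request returns a Future that
    resolves when its (in-order) response arrives."""
    def __init__(self, sock):
        self.sock = sock
        self.pending = []              # futures, in send order
        self.lock = threading.Lock()   # keeps send order == response order
        threading.Thread(target=self._read_loop, daemon=True).start()

    def request(self, line: bytes) -> Future:
        fut = Future()
        with self.lock:                # enqueue promise, then send, atomically
            self.pending.append(fut)
            self.sock.sendall(line + b"\n")
        return fut

    def _read_loop(self):
        buf = b""
        while True:
            data = self.sock.recv(4096)
            if not data:
                break
            buf += data
            while b"\n" in buf:        # one response line -> oldest promise
                line, buf = buf.split(b"\n", 1)
                self.pending.pop(0).set_result(line)

# Demo: an echo "server" on the other end of a socketpair.
a, b = socket.socketpair()

def echo_server(s):
    buf = b""
    while True:
        d = s.recv(4096)
        if not d:
            return
        buf += d
        while b"\n" in buf:
            line, buf = buf.split(b"\n", 1)
            s.sendall(b"echo:" + line + b"\n")

threading.Thread(target=echo_server, args=(b,), daemon=True).start()

p = Pipeline(a)
futs = [p.request(x) for x in (b"one", b"two", b"three")]  # pipelined sends
print([f.result(timeout=5) for f in futs])
```

Because the promise is enqueued under the same lock as the send, responses always pair with the right request even when several threads share the pipeline.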
1.3. Population system
Population system (or a life-system) is a population with its effective environment.
The term "life-system" was introduced by Clark et al. (1967). Later, Berryman (1981) suggested another term, "population system", which is definitely better. A short review of the life-system methodology was published by Sharov (1992).
Major components of a population system
Temporal and spatial structure of a population system
Dynamics of Population Systems
Factor-Effect concept: Environmental factors affect population density. When factors change, population density changes accordingly.
Advantage: Causal explanation of population change
Disadvantage: It is useful for one-step instant effects but becomes confusing if the effect is simultaneously a factor that causes something else, or if there are time delays in the effect.
Factor-Process concept: Environmental factors do not affect population density directly, instead they affect the rate of various ecological processes (mortality, reproduction, etc.) which finally result in a change of population density. Processes can change the value of factors; thus, feedback loops become possible.
In the 1950s and 1960s there was a discussion about population regulation between two schools in population ecology. An agreement could not be reached because these schools used different concepts of population dynamics. Andrewartha and Birch (1954) used the factor-effect concept whereas Nicholson (1957) used the factor-process concept.
The factor-process concept works not only in population ecology but in any kind of dynamic systems, e.g. in economic systems. Forrester (1961) formalized the factor-process concept and applied it to industrial dynamics. Later this formalism became very popular in ecology and is widely used especially in ecosystem modeling.
The model of Forrester is based on tank-pipe analogy. The system is considered as a set of tanks connected by pipes with vents which can regulate the "flow" of liquid from one tank to another. The flow of "liquid" between tanks is considered as "material flow". However there is also "information flow" that regulates the vents. Vents are equivalent to processes; and the amount of liquid in a tank is a variable or a factor because it can affect processes via information flow.
The figure above is the Forrester diagram for an insect population system with 4 stages: eggs, larvae, pupae, and adults. Transition between these states (development) is regulated by temperature. Influx of eggs depends on the number of adults. Mortality occurs in all stages of development. Larval and pupal mortality is density-dependent.
Diagrams of Forrester can be easily transformed into differential equation models. Each process becomes a term in a differential equation that determines the dynamics of the variable. For example, the number of larvae in the figure above is affected by 3 processes: egg development ED(T), larval development LD(T), and larval mortality LM(N). Egg and larval development rates are functions of temperature T, whereas larval mortality is a function of larval numbers N. The equation for larval dynamics is the following:

dN/dt = ED(T) - LD(T) - LM(N)
Here the term ED(T) is positive because egg development increases the number of larvae ("liquid" influx). Terms LD(T) and LM(N) are negative because larval development (molting into pupae) and mortality reduce the numbers of larvae.
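The larval-dynamics equation dN/dt = ED(T) - LD(T) - LM(N) can be integrated numerically. The sketch below uses simple Euler steps; the functional forms and coefficients for the three processes are illustrative assumptions, not taken from the text:

```python
# Euler integration of the larval-dynamics equation
#   dN/dt = ED(T) - LD(T) - LM(N)
# The functional forms below are assumed for illustration only.
def ED(T):            # egg development: influx of new larvae, temperature-driven
    return 0.5 * T

def LD(T):            # larval development: molting into pupae
    return 0.2 * T

def LM(N):            # density-dependent larval mortality
    return 0.01 * N * N

def simulate(N0, T, dt=0.01, steps=1000):
    """Integrate larval numbers N forward in time from N0 at temperature T."""
    N = N0
    for _ in range(steps):
        N += dt * (ED(T) - LD(T) - LM(N))
    return N

print(round(simulate(N0=10.0, T=20.0), 2))
```

With these forms the population approaches the equilibrium where influx balances losses, i.e. 0.3·T = 0.01·N², so N tends toward √(30·T).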
Limitations of Forrester's model:
Factors and processes
The dynamics of a system is always viewed as a sequence of states. A state is an abstraction, like an arrow hanging in the air, but it helps to understand the dynamics. Because systems are built from components, the state can be represented as a combination of the states of all its components. For example, the state of a predator-prey system can be characterized by the density of prey (component #1) and the density of predators (component #2). As a result, the state of the system can be considered as a point in a 2-dimensional space with coordinates that correspond to system components. This space is usually called the phase space.
Components of the system will be called factors because they affect system dynamics. The state of the component is the value of the factor. Factors are considered with as many details as necessary for understanding system's dynamics.
Examples of factors:
Examples of processes
Factors affect the rate of processes as shown below:
On the other hand, processes change the values of factors:
A process may be affected by multiple factors. For example, mortality caused by predation may depend on the prey density, predator density, number of refuges, temperature (if it changes the activity of organisms), etc.
The value of a factor may change due to multiple processes. For example, the number of organisms on a specific stage changes due to development (entering and exiting this stage), dispersal, and mortality due to predation, parasitism, and infection.
Thus, there is no one-to-one correspondence between factors and processes. Life-tables show the rate of various mortality processes, but they do not show the effect of factors on these processes. For example, parasitism may be mostly determined by weather; viral infection may be determined by host plant chemistry. But life-tables do not show the effect of either weather or host plant chemistry.
It is dangerous to judge on the role of factors (e.g., biotic vs. abiotic) from life-tables. For example, a life-table may show that 90% mortality of an insect is caused by parasitism. This may lead to an erroneous conclusion that parasites rather than weather are most important in the change of population density. It may appear that the synchrony between life cycles of the host and parasite depends primarily on weather.
Life tables show various mortality processes in the population but they do not indicate the role of factors. To analyze the role of factors, it is necessary to vary these factors experimentally and examine how they affect various mortality processes. These experiments may show, for example, that the rate of parasitism depends on the density of parasites, density of hosts, and temperature. To understand population dynamics, it is necessary to know both the effect of factors on the rate of various processes, and the effect of processes on various ecological factors (e.g., on population density). This information is integrated via modeling.
References
Berryman, A. A. (1981) Population systems: a general introduction. New York: Plenum Press.
Clark, L. R., P. W. Geier, R. D. Hughes, and R. F. Morris (1967) The ecology of insect populations. London: Methuen.
Sharov, A. A. (1992) Life-system approach: a system paradigm in population ecology. Oikos 63: 485-494.
The Dawn spacecraft launched 27 September 2007. After completing an initial check-out phase in December 2007, Dawn began its interplanetary cruise phase which included a gravity assist from Mars in February 2009 that put the spacecraft on a trajectory to rendezvous with the asteroid 4 Vesta in July 2011. After observing Vesta for a year, the spacecraft will depart in July 2012 and spend over 2.5 years travelling to the dwarf planet Ceres (or 1 Ceres, the asteroid). Dawn will rendezvous with Ceres in February 2015 and will spend 6 months taking measurements before departing in July 2015 (end of mission).
Dawn's primary science goal is to gain an understanding of the conditions and processes occurring when the solar system was only about 10 million years old. It will measure the size, shape, mass, volume and spin rate of the two protoplanets, Vesta and Ceres, to determine their internal structure, density and homogeneity. Dawn will also investigate their thermal history by measuring the elemental and mineral abundances. It will image their surfaces to determine their bombardment and tectonic history, and use gravity, spin and magnetic data to limit the size of any metallic core.
This tool enables people to select Dawn Vesta images by looking at a map of Vesta and selecting the area in which they are interested. They can also search the data by various parameters and download data in a selection of formats. Eric Palmer originally made this tool for the Dawn Science Team and has now adapted it for SBN.
Use the Small Bodies Data Ferret to find other datasets for this mission/target. | <urn:uuid:db244010-b65e-48b7-babf-f3fd0e7e1116> | 3.140625 | 322 | Knowledge Article | Science & Tech. | 43.323386 |
It is known that many present-day species of bee maintain a symbiotic (help one another) relationship with various species of bacteria of the genus Bacillus within their gut (normal gut flora for some bees, just as E. coli bacteria are normal members of our gut flora). It is also known that Bacillus is one genus of bacteria which can form spores - a kind of "suspended animation" special life-form of the microorganism - which allows survival for many, many years. In fact, the conditions (temperature and time) required for sterilization of surgical instruments, etc., inside an autoclave (pressure-cooker) have been determined by finding under what conditions spores are killed (please see Better use a pressure cooker! If spores are killed, then everything else is also killed). To date, the best-tested, most well-documented, longest spore survivor is about 70 years (from a sealed tube prepared by the great scientist, Louis Pasteur, which was opened in 1956). Further, at least 25-million-year-old Bacillus DNA had already been identified within the gut of a bee trapped in amber. Therefore, these scientists reasoned that the bee might have Bacillus spores inside its "stomach," which might be recoverable, and which might regenerate into what are called vegetative (living, dividing) cells, if the spores were placed into nutritious, healthy growth conditions. Consequently, these investigators began the search for these ancient bacteria.
All of the manipulations to obtain the bacterium from the bee trapped inside the amber were performed under sterile conditions. To begin with, while working inside (arms and hands) a special box (called a laminar-flow hood... all of the air passes through a special filter, and always blows toward the person using the hood) these investigators chemically-sterilized all of the amber surfaces, then cracked the amber with sterilized tools to expose the bee (Proplebia dominicana, "....an extinct species of neotropical bee found in 25- to 40-million-year-old amber from the Dominican Republic"). The gut contents of the bee were removed, and added to a nutrient-containing liquid called trypticase soy broth. This broth is very nutritious, (bacteria love it) and even weak organisms can be coaxed to recover when placed within. After incubating the broth for a period of time, living, growing bacteria were identified within the culture! Now, the effort turned to testing all of the equipment, solutions, and anything else present within the laminar-flow hood during the preparation of the culture, for possible contamination by present-day bacteria. Nothing was found. Therefore, these scientists concluded that the origin of the bacteria growing within the broth culture was the gut of the ancient bee. Next came the effort to identify the kind of bacterium which had been isolated.
The methodology involved use of small pieces of DNA called primers, whose nucleotide sequences were known to coincide with certain regions of present-day Bacillus ribosomal RNA (rRNA) genes [please see: What the Heck is a Gene?]. Ribosomal RNA within a cell is unique to every species of life, and can therefore be used to identify an organism, and even the species of organism. When added to test DNA, these primers will associate with the test DNA through base-pairing, if the sequences of the primer and one strand of the test DNA are complementary. This association allows the technique called PCR [please see: What the Heck is PCR?] to be used, which results in the amplification (huge increase in the number of copies) of the gene with which the primer has associated. Thus, PCR allows generation of enough DNA from an incredibly small amount of original test DNA to be tested in all sorts of ways. By using this technique, the 25- to 40-million-year-old rRNA gene (in the now-living organism!) was identified as Bacillus DNA, and most closely resembled that of B. sphaericus! Biochemical studies on the isolate, along with morphological characteristics, also place the organism within the B. sphaericus grouping.
Of course, this information has caused quite a stir in the scientific community, and there are efforts underway by other investigators to independently attempt a similar isolation of the organism (or any organism for that matter) from other bees trapped in amber those many years ago. Only through this type of validation will there be agreement among scientists that these investigators have indeed successfully isolated a living remnant of our ancient past.
"Hello, World!" in Python With Tkinter
Most every modern application has a graphic user interface (GUI). For Python, Tkinter is the most commonly available graphic interface toolkit simply because it comes with every installation of Python. This introduction shows how to say "Hello, World!" with Tkinter.
An Introduction to Programming wxPython
Only so much can happen at the command line. Most every modern application has a graphic user interface (GUI). For Python, wxPython is the most mature, cross-platform graphic interface available. This introduction shows how to say "Hello, World!" with a GUI.
Game Programming Tutorial List
If you simply cannot wait for About.com's tutorial on developing games in Python, you might try one of these tutorials. They cover the development of simple geometric games through to more complicated, arcade-style games.
How To Create A HTML Calendar In Python Dynamically
Whether you want to develop a web-based diary or just want a calendar for your website, a dynamically created calendar in HTML is a very useful item to have. Creating one is a snap with Python's calendar module.
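As a taste of what such a tutorial covers, the standard library can already emit a month as an HTML table (a minimal sketch, not the tutorial's own code):

```python
import calendar

# Render one month as an HTML <table> using the stdlib calendar module.
cal = calendar.HTMLCalendar(firstweekday=calendar.SUNDAY)
html = cal.formatmonth(2025, 1)   # HTML table for January 2025

print(html[:60])
```

The returned string can be dropped straight into a web page; CSS classes such as "month" and the weekday abbreviations are added by HTMLCalendar for styling.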
HTML and CSS Preamble For Creating a Calendar in Python
This page contains the HTML and CSS needed for the tutorial on creating calendars in Python.
Building an RSS Reader With Python
An RSS Reader is a straightforward program, and building one ensures that one knows the basics of the language. It also teaches the basics of Python web programming and XML handling. If you are looking for a first or second project to finish after you have learned Python, follow these step-by-step tutorials to build a web-based, customisable RSS Reader.
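The XML-handling half of such a project can be sketched with nothing but the standard library (a minimal illustration, not the tutorial's code; the inline feed stands in for a downloaded document):

```python
import xml.etree.ElementTree as ET

# A tiny inline feed stands in for a downloaded RSS document.
RSS = """<rss version="2.0"><channel>
  <title>Example Feed</title>
  <item><title>First post</title><link>http://example.com/1</link></item>
  <item><title>Second post</title><link>http://example.com/2</link></item>
</channel></rss>"""

def item_titles(rss_text):
    """Return the title of every <item> in an RSS 2.0 feed string."""
    root = ET.fromstring(rss_text)
    return [item.findtext("title") for item in root.iter("item")]

print(item_titles(RSS))   # → ['First post', 'Second post']
```

A real reader would fetch the feed over HTTP and render the titles and links, but the parsing core stays this small.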
Processing Natural Languages with Python
In this tutorial, the co-authors of "Introduction to Natural Language Processing" discuss the features and power of the Natural Language Toolkit (NLTK) for Python. The NLTK can be found on Sourceforge.
Python, Swig, and C/C++
If you write a lot of C or C++ code and need a wrapper to hold all of it together, David Beazley has written a software development tool that you may find helpful. The Simplified Wrapper and Interface Generator (SWIG) connects your C or C++ program with Python so that the code from the one can be used semi-natively by the other. This link leads to a tutorial on how to use SWIG.
Programming a Nokia S60 With Python
If you would like to make your cell phone do more than Nokia intended, you will probably need to program it yourself. This tutorial tells you how to do it. Nokia also has a wiki explicitly for programmers who write applications for the Nokia S60. If you run into trouble, you might try the Nokia discussion forums.
Grid Computing With Python
If you need a computing system that does the work of 50 computers, you should consider grid computing. The US Department of Energy's Argonne National Laboratory has developed a Grid computing package in Python. It is available at http://www.accessgrid.org/. | <urn:uuid:a5732513-6708-4b65-8ce2-934a070c721f> | 2.8125 | 635 | Content Listing | Software Dev. | 52.774675 |
Images and animations courtesy NASA/Goddard Space Flight Center Scientific Visualization Studio.
animation (4 MB MPEG)
As Hurricane Ignacio was churning in the waters near Baja California, the TRMM and GOES satellites captured this image. The accompanying visualization zooms down to the storm and peels away the clouds to reveal the underlying rain structure. Greens represent areas where rain is falling at a rate of 1 inch per hour, while the red areas are indicative of rain rates of 2 inches per hour.
This image originally appeared on the Earth Observatory. Click here to view the full, original record. | <urn:uuid:3a416617-186d-4558-8e07-b966d5330b90> | 2.765625 | 125 | Truncated | Science & Tech. | 37.978409 |
XML was designed to transport and store data.
To learn more about XML, read our XML tutorial.
XML 1.0 became a W3C Recommendation 10. February 1998.
XML 1.0 (SE) became a W3C Recommendation 6. October 2000. The second edition is not a new version, but an update and a "bug-fix".
XML 1.0 (Third Edition) became a W3C Recommendation 4. February 2004. The third edition is likewise not a new version, but an update and a "bug-fix".
XML 1.1 was released as a Working Draft 13. December 2001, and became a Candidate Recommendation 15. October 2002. XML 1.1 allows almost any Unicode characters to be used in names.
In XML, element names are defined by the developer. This often results in a conflict when trying to mix XML documents from different XML applications.
XML Namespaces provide a method to avoid element name conflicts.
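For instance, two vocabularies may both define a <table> element; binding each prefix to a URI keeps them distinct. A short illustrative fragment (the URIs are placeholders):

```xml
<!-- Both vocabularies use a <table> element; prefixes bound to URIs keep them apart -->
<root xmlns:h="http://www.w3.org/TR/html4/"
      xmlns:f="http://www.example.org/furniture">
  <h:table>
    <h:tr><h:td>Apples</h:td></h:tr>
  </h:table>
  <f:table>
    <f:name>Coffee table</f:name>
  </f:table>
</root>
```

An XML parser sees h:table and f:table as two different element types because each prefix expands to its namespace URI.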
XLink allows you to insert links into XML documents.
XPointer allows the links to address into specific parts of an XML document.
XML Base is a standard for defining a default reference to external XML resources (similar to <base> in HTML).
XInclude is a mechanism for merging XML documents using elements, attributes, and URI references.
|Specification||Draft / Proposal||Recommendation|
|XML 1.0||10. Feb 1998|
|XML 1.0 (2.Ed)||06. Oct 2000|
|XML 1.0 (3.Ed)||04. Feb 2004|
|XML 1.0 (5.Ed)||26. Nov 2008|
|XML 1.1||04. Feb 2004|
|XML 1.1 (2.Ed)||16. Aug 2006|
|XML 1.0 Namespaces||14. Jan 1999|
|XML 1.0 Namespaces (2.Ed)||16. Aug 2006|
|XML 1.1 Namespaces||04. Feb 2004|
|XML 1.1 Namespaces (2.Ed)||16. Aug 2006|
|XML Infoset||24. Oct. 2001|
|XML Infoset (2.Ed)||04. Feb. 2004|
|XML Base||27. Jun 2001|
|XML Base (2.Ed)||28. Jan 2009|
|XLink 1.0||27. Jun 2001|
|XPointer Framework||25. Mar 2003|
|XPointer element() scheme||25. Mar 2003|
|XPointer xmlns() scheme||25. Mar 2003|
|XInclude 1.0||20. Dec 2004|
|XInclude 1.0 (2.Ed)||15. Nov 2006|
|XML Processing Model||05. Apr 2004|
|XMLHttpRequest Object||06. Dec 2012|
Take a Trip to a Zoo of Insects
Many insects are known by common names that may be based on behavior or refer to some obvious characteristic of the insect. Some of these names are suggestive of other animals — animals that are common in zoos.
There are insects called water scorpions. They are among a group of insects that live in water but must come to the surface for air. Water scorpions possess an appendage at the rear that functions as a tube through which air is drawn. They are also predators and catch their prey with grasping front legs. Grasping front legs and a rear appendage are characteristics of real scorpions, thus the name.
Another insect is named after the scorpion. The scorpion fly gets its name because of an abdomen that resembles that of a scorpion. It actually curls the tip of its abdomen over its back, but unlike its namesake, the scorpion fly can't sting!
There are also lions and tigers in the insect world, including a group of moths known as tiger moths. These moths get their name because many have stripes and are sometimes orange colored like real tigers. And there is the tiger swallowtail butterfly that just happens to be yellow and black striped.
The tiger also lends its name to a group of beetles. The tiger beetles are vicious predators. They run down and tear to shreds their prey, sort of like real tigers.
Insect lions include the aphis-lion and the ant lion. These insects are of the order Neuroptera and are named after the predatory lion because the immatures of both feed on other insects as predators. The aphis-lion is an immature lacewing and it, like the adult, feeds on aphids. The ant lion is the larva of a lacewing-like winged insect; before it becomes an adult it lives in funnel-shaped pits in the sand, where it captures and devours insects unfortunate enough to fall into the trap.
There are even insect pachyderms, elephants and rhinoceros. Not surprisingly, these insects include some of the largest in North America and the world. Elephant beetle males don't have horns on their heads, but they do have horn-like structures extending from just behind the head. Rhinoceros beetle males have a single horn that protrudes upright from the head, just like a real rhino.
Some insects have bird names. Swallows are well-known birds, and the swallowtail butterfly gets its name from the long, swallow-like extension of each hind wing. One is called the zebra swallowtail because it is black with white stripes. Hawk moths get their name because they have long, narrow wings and fly very fast. In the grasshopper family is an insect known as the grouse locust, probably because it is mottled brown and blends in with the ground on which it sits.
There are all kinds of insects, and many of them remind us of other animals, such as elephants and lions and tigers, oh my! | <urn:uuid:1b0e2188-bbb4-4390-976c-6874fca15538> | 3.3125 | 620 | Knowledge Article | Science & Tech. | 55.577831 |
Object and class locks
As described above, two memory areas in the Java virtual machine contain data shared by all threads: the heap, which holds objects and their instance variables, and the method area, which holds class variables.
If multiple threads need to use the same objects or class variables concurrently, their access to the data must be properly managed. Otherwise, the program will have unpredictable behavior.
To coordinate shared data access among multiple threads, the Java virtual machine associates a lock with each object and class. A lock is like a privilege that only one thread can "possess" at any one time. If a thread wants to lock a particular object or class, it asks the JVM. At some point after the thread asks the JVM for a lock -- maybe very soon, maybe later, possibly never -- the JVM gives the lock to the thread. When the thread no longer needs the lock, it returns it to the JVM. If another thread has requested the same lock, the JVM passes the lock to that thread.
Class locks are actually implemented as object locks. When the JVM loads a class file, it creates an instance of class java.lang.Class. When you lock a class, you are actually locking that class's Class object.
Threads need not obtain a lock to access instance or class variables. If a thread does obtain a lock, however, no other thread can access the locked data until the thread that owns the lock releases it.
The JVM uses locks in conjunction with monitors. A monitor is basically a guardian in that it watches over a sequence of code, making sure only one thread at a time executes the code.
Each monitor is associated with an object reference. When a thread arrives at the first instruction in a block of code that is under the watchful eye of a monitor, the thread must obtain a lock on the referenced object. The thread is not allowed to execute the code until it obtains the lock. Once it has obtained the lock, the thread enters the block of protected code.
When the thread leaves the block, no matter how it leaves the block, it releases the lock on the associated object.
A single thread is allowed to lock the same object multiple times. For each object, the JVM maintains a count of the number of times the object has been locked. An unlocked object has a count of zero. When a thread acquires the lock for the first time, the count is incremented to one. Each time the thread acquires a lock on the same object, a count is incremented. Each time the thread releases the lock, the count is decremented. When the count reaches zero, the lock is released and made available to other threads. | <urn:uuid:42523e1b-0bbb-4f5c-94f6-4e59c8c6c73e> | 3.875 | 543 | Documentation | Software Dev. | 61.932318 |
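As a sketch of the monitor behavior described above (a hypothetical example, not from the original text), the following class uses synchronized methods, which lock the receiver object. Note that increment() calls another synchronized method on the same object; because the JVM counts lock acquisitions per thread, this reentrant locking does not deadlock:

```java
public class LockDemo {
    private int count = 0;

    // Entering a synchronized method acquires the lock on 'this';
    // re-entering from the same thread just bumps the JVM's lock count.
    public synchronized void increment() {
        count++;
        helper();            // reentrant: same thread, same lock, no deadlock
    }

    private synchronized void helper() {
        count++;
    }

    public synchronized int get() { return count; }

    public static void main(String[] args) throws InterruptedException {
        LockDemo d = new LockDemo();
        Thread t1 = new Thread(() -> { for (int i = 0; i < 1000; i++) d.increment(); });
        Thread t2 = new Thread(() -> { for (int i = 0; i < 1000; i++) d.increment(); });
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(d.get());   // always 4000: each increment() adds 2
    }
}
```

Because every increment() holds the object's lock across both additions, the two threads cannot interleave inside the critical sections, so the final count is deterministic.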
IMAGE: Red, yellow-green, and green pea aphids. Photo courtesy of Charles Hedgcock, R.B.P., via NPR.
Over at the Agricultural Biodiversity Weblog, an Edible Geography favourite, Jeremy links to a fascinating post about transgenic pea aphids. The pea aphid has been the focus of quite a bit of biological excitement lately as the findings of the Aphid Genome Project are gradually released: the tiny insects can change their shape in response to food supply and environmental conditions, with the option to become wingless, winged, sexual, asexual, or “morphs that are specialized to resist desiccation or to defend the colony.”
The latest and greatest aphid super power was discovered by researcher Nancy Moran, who wanted to understand exactly how pea aphids can change colour, from green to red via a sickly yellow. Scientists already knew that the red colour came from carotenoids, a class of organic pigments that includes such micro-nutrients as lutein, lycopene, and beta-carotene, and that is responsible for the yellowness of lemons, the orangeness of carrots, and the redness of tomatoes.
As computational biologist Iddo Freidberg points out at Byte Size Biology, carotenoids are also responsible for the pinkness of flamingos and salmon, as well as the orange glow prized by sunless tanners. To achieve such colouration, animals and humans must eat foods that contain the pigment, since it is a well-known fact, according to Moran, that “animals do not make carotenoids.”
IMAGE: Flamingo by Flickr user Art Goeren; salmon; and an orange-toned Chelsy Davy.
Or at least, that was the accepted wisdom until Nancy Moran decided to look for carotenoid-synthesising genes in her aphid DNA. To her amazement, she found them, which makes pea aphids the only animals on earth known to be able to produce their own carotenoids.
Further detective work showed that rather than eat carotenoid-rich foods or host an in-house carotenoid-factory run by symbiotic bacteria, ancestral pea aphids actually went the whole hog and stole an entire sequence of DNA coded to synthesise carotenoids from some pathogenic fungi, spliced it into their own genome, and passed their new carotenoid-producing super power down the generations. The process is called lateral gene transfer, and according to Moran, “although gene transfers between microorganisms are common, finding a functional fungus gene as part of an animal’s DNA is a first.”
IMAGE: Red and green pea aphids. Photo courtesy of Charles Hedgcock, R.B.P., via NPR.
Pea aphids use their carotenoids to spread risk: apparently “red aphids are more susceptible to parasitic wasps, whereas green aphids are more susceptible to predators such as lady-bird beetles.” Humans, on the other hand, require carotenoids for health: deficiencies of Vitamin A and other carotenoids can cause blindness and a weakened immune system.
To that end, researchers have spent a considerable amount of time and energy splicing carotenoid-synthesising genes (in this case, from a daffodil and a bacterium called Erwinia uredovora, which causes soft rot diseases in fruit) into a staple food, such as rice, to create biofortified transgenic foods designed to reduce deficiency-related diseases in developing countries. Interestingly, the kind of highly controversial lateral gene transfer that creates Golden Rice™ (or, for that matter, Monsanto’s Roundup Ready soybeans) is an artificial analogue of the process pea aphid forefathers undertook when they lifted a genetic sequence from fungi and incorporated it into their own DNA.
IMAGE: Normal rice (left) compared to Golden Rice™ (right), via.
But what if we cut out the middle man, followed in the pea aphids’ footprints, and spliced some carotenoid-synthesising genes into our own DNA? The thought has clearly occurred to Nancy Moran. As she told Science Daily,
Animals have a lot of requirements that reflect ancestral gene loss. This is why we require so many amino acids and vitamins in the diet. Until now it has been thought that there is simply no way to regain these lost capabilities. But this case in aphids shows that it is indeed possible to acquire the capacity to make needed compounds.
The implications are quite breath-taking: perhaps, then, we could deliberately engineer a new race of biofortified transgenic humanoids capable of self-synthesising their own micro-nutritional requirements.
Doubtless, this would result in the same sorts of unintended, surreal, and occasionally disastrous consequences that these kinds of ambitious human interventions into complex systems always seem to cause, but maybe it’s more appropriate to experiment on ourselves directly, rather than on the plants we and other animals consume?
In any case, the scenario makes a wonderful thought-experiment: trying to imagine the consequences of this transgenic future makes the centrality of food production, preparation, and consumption to every aspect of human existence dramatically clear. | <urn:uuid:ddf6fe1f-2ebf-4988-9960-a82460f8ddc3> | 3.578125 | 1,132 | Personal Blog | Science & Tech. | 28.613536 |
Family within the order Proboscidea, named by Gray 1821.
The family of elephants developed during the middle Miocene (16 million years ago) with its ancestor Primelephas. The first mammoths (genus Mammuthus) appeared about 3 million years ago in Africa. 120,000 years ago they started to migrate to northern Europe and adapt to a colder climate, while in Africa the mammoths developed into the genus Elephas, which spread also to Asia and Europe, among them the "Forest elephant" Elephas antiquus.
Taxonomy (the surviving recent elephants)
Up until now, the presently living elephants have been divided into two species: Asian and African. The DNA evidence, reported in the August 24, 2004 issue of the journal Science, provides a definitive answer to the long-debated controversy. The finding has implications for both international law and conservation strategies. Recent DNA tests have shown that the forest elephant is not a subspecies of the African elephant but a true species.
Future taxonomy (?):
The family elephantidae
- Proboscidea (order)
FW: RE [Haskell-cafe] Monad Description For Imperative Programmer
wagner.andrew at gmail.com
Wed Aug 1 11:31:45 EDT 2007
> "an IO monad is a delayed action that will be executed as soon as that action is needed for further evaluation of the program."
I'm not sure I like this, as it seems to confuse the issue. An expert
should correct me if I'm wrong, but monads in and of themselves don't
depend on laziness. Rather, *everything* in Haskell is lazy, unless
you explicitly force evaluation.
As for the rest of your email, I don't necessarily disagree, but don't
find it particularly helpful either. Which may, of course, be a
personal thing. For me, I think the key to monads is to really
understand 2 things about them:
1.) They are simply type constructor classes
2.) Monads are about sequencing
For point 2, think about the two main things you have to define to
create an instance of a monad:
(>>=) :: m a -> (a -> m b) -> m b
That is, you have two monadic actions, m a and m b, and bind says how
to take the result of the first and fit it into the second -- in sequence.
(>>) :: m a -> m b -> m b
Again, we have 2 monadic actions that we're composing, in sequence.
This time, we're just discarding the result of the first.
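Not Haskell, but the shape of those two operations survives translation. A rough Python sketch of a Maybe-like type (names invented for illustration) shows `bind` threading one action's result into the next, and a `(>>)`-style `then` sequencing while discarding the first result:

```python
class Maybe:
    def __init__(self, value, ok=True):
        self.value, self.ok = value, ok

    def bind(self, f):            # like (>>=): feed this result into the next action
        return f(self.value) if self.ok else self

    def then(self, mb):           # like (>>): sequence, discarding this result
        return mb if self.ok else self

NOTHING = Maybe(None, ok=False)

def halve(n):
    return Maybe(n // 2) if n % 2 == 0 else NOTHING

# Roughly:  Just 12 >>= halve >>= halve  ==>  Just 3
r = Maybe(12).bind(halve).bind(halve)
assert (r.ok, r.value) == (True, 3)

# An odd intermediate result short-circuits the rest of the sequence.
assert not Maybe(6).bind(halve).bind(halve).ok   # halve 6 -> 3, 3 is odd -> Nothing

# then() keeps only the second action's result.
assert Maybe(1).then(Maybe(5)).value == 5
```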
More information about the Haskell-Cafe | <urn:uuid:08e527dc-bab2-4841-b27f-15a5b11f5c66> | 2.828125 | 334 | Comment Section | Software Dev. | 69.950909 |
The Hawaii Natural Energy Institute (HNEI) has carried out research and development (R&D) on biological hydrogen production since the early 1990s. Initially, this project investigated the genetics of cyanobacterial (blue green algae) hydrogenases. A new R&D phase was initiated in 1996 to develop a microalgal indirect biophotolysis process, in which water is converted in separate stages into oxygen and hydrogen (H2). The organism chosen for initial work on this project was a strain of Spirulina (Arthrospira platensis) already being commercially grown in Hawaii and used in the prior biohydrogen research at HNEI. Laboratory work confirmed that Spirulina produces H2 by dark fermentations, but not in the light.
The major part of the research carried out under this project from 1996 to 2000 was the operation and engineering studies of the photobioreactors. While this initial work demonstrated the ability to produce Spirulina in the reactors, an indirect biophotolysis process using cyanobacteria in the photobioreactors was not demonstrated.
Proposals for future biohydrogen research at HNEI aim to maximize the yield of H2 from endogenous substrates by dark fermentations in microalgae or by bacteria using exogenous waste substrates. Such processes could produce H2 fuel in small-scale amounts at acceptable costs in the near term, and larger quantities in the long term. | <urn:uuid:7c81db18-2fff-4edf-ba32-2ab6c0018f50> | 2.9375 | 296 | Knowledge Article | Science & Tech. | 23.461638 |
Climate—The mean state of the atmosphere over a long period of time. This period is generally at least 30 years, to produce a statistically significant sample and to avoid shorter term variability. The principal elements of climate are the same as the principal elements of weather (IPCC 2007a).
Climate change—Alteration in the mean state of the atmosphere over a long period of time that can be detected statistically. Climate change consists of trends in the mean and the variability of climate that persist for an extended period, typically decades or longer. Climate change may be due to natural internal processes, natural external forcings, or persistent anthropogenic changes in the composition of the atmosphere or in land use (IPCC 2007a).
Climate variability—Variations in the mean state of the atmosphere on all spatial and temporal scales beyond the scales of individual weather events. Climate variability may be due to natural internal processes within the climate system (internal variability) or to variations in natural or anthropogenic external forcing (external variability). Major forms of interdecadal variability, which are often cyclical, include the El Niño–Southern Oscillation and the Pacific Decadal Oscillation (IPCC 2007a).
Global warming—A long-term increase in the average surface temperature of the world that constitutes the major form of climate change.
Radiative forcing—Change in the net radiation flow at the top of the troposphere (about 6–10 km [4–6 mi] altitude), with positive values indicating increased heat toward the surface of Earth (IPCC 2007a). Changes in greenhouse gases, ozone, other atmospheric constituents, airplane contrails, the reflectivity of the earth, and the output of the sun contribute to radiative forcing. Radiative forcing from 1750 to 2005 was +1.6 watts per square meter [+0.8, −1.0 watts per square meter], causing the warming detected around the world (IPCC 2007a). Greenhouse gas emissions from power plants, motor vehicles, deforestation, and other human activities have caused 93% of the radiative forcing (IPCC 2007a).
Weather—The state of the atmosphere at a point in time. Principal elements of weather include temperature, precipitation, wind, air pressure, and humidity (U.S. National Weather Service, http://www.weather.gov/glossary). | <urn:uuid:629f783b-d94e-4309-a70e-ecec34e00671> | 3.671875 | 477 | Knowledge Article | Science & Tech. | 30.511394 |
It takes millions of years for a star to form but the video above captures part of the process in action. Astronomer Patrick Hartigan and his team from Rice University in Houston, Texas stitched together images from NASA's Hubble Space Telescope taken over a period of 14 years. They reveal jets of gas being ejected from three young stars.
The jets may appear sluggish but that's because of our frame of reference. They are 10,000 times longer than the distance between the Earth and the sun and move at 150 kilometres per second.
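Those two figures give a feel for the timescale. A back-of-envelope check using the stated numbers: at 150 km/s, a jet 10,000 times the Earth-sun distance takes roughly three centuries to travel its own length, so 14 years of Hubble imaging catches only a few percent of the motion.

```python
AU = 1.496e11                    # metres in one astronomical unit
length = 1e4 * AU                # jet ~10,000 times the Earth-sun distance
speed = 150e3                    # 150 kilometres per second, in m/s

years = length / speed / (365.25 * 24 * 3600)
assert 300 < years < 340         # about three centuries to cross the jet's length

frac = 14 / years                # fraction of that motion seen in 14 years
assert 0.03 < frac < 0.06        # only a few percent
```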
The time-lapse starts with a wide-angle view of a jet, named HH 47, bursting out of a star that is hidden in a cloud of gas. The jet creates a bow shockwave that appears on the right as a white blob. The next clip shows the shockwave in close-up.
The next two sequences focus on the HH34 jet and highlight bright regions where material is smashing together and shockwaves are colliding. The final two clips reveal the gas within the HH1 jet moving at different speeds and the shockwave at the top of the HH2 jet grazing the edge of a dense cloud of gas.
According to Hartigan, it's the first time that jets have been observed interacting with their surroundings, which reveals how young stars influence their environment.
"We can now compare observations of the jets with those produced by computer simulations and laboratory experiments to see what aspects of the interactions we understand and what parts we don't," he says.
Journal reference: "Fluid dynamics of stellar jets in real time: Third epoch Hubble Space Telescope images of HH 1, HH 34, and HH 47", Hartigan, P. et al., The Astrophysical Journal, vol 736, p 29, (2011) | <urn:uuid:0e24eb60-96ed-43b4-a40a-650fb2ae2f48> | 3.875 | 367 | Truncated | Science & Tech. | 59.813687 |
Elasmobranch Reproduction and Mating System Evolution
The collapse of shark populations has resulted from the inability of shark recruitment to keep up with the intense exploitation these animals are experiencing worldwide. Sharks (and elasmobranchs in general) are particularly vulnerable to overexploitation due to their reproductive characteristics (e.g., low fecundity and late age at maturity) which are more similar to that of mammals than teleost fishes. Despite the fact that reproductive capacity is such a pivotal aspect of fisheries management, there is little information available on mating systems, reproductive mechanisms, and genetic basis of parentage in sharks, and elasmobranchs in general. To help increase our understanding of reproduction in elasmobranchs, GHRI scientists are employing both field observational methods and genetic profiling (a type of DNA fingerprinting) to collect basic biological information on reproduction.
Current GHRI research in this area includes studying the genetic basis of mating behavior in the bonnethead shark, Sphyrna tiburo, the scalloped hammerhead, Sphyrna lewini, and the blue shark, Prionace glauca. Because mating behavior of sharks is difficult to observe directly in the wild, it is largely unknown whether the few observed cases of polyandrous mating (i.e., where females mate with multiple males) is the rule or the exception in nature. To answer this question, GHRI researchers are developing and using a type of inherited genetic marker (known as microsatellites) to assess the incidence of multiple paternity in shark litters. Results thus far indicate that multiple paternity is common in some species, but infrequent in others. This type of basic information has direct bearing on both the long-term genetic biodiversity consequences of overfishing one gender, as occurs in some fisheries, as well as on our understanding of the evolution of mating systems in an ancient lineage of vertebrates. GHRI is also studying reproductive parameters and mating behavior in the southern stingray (see Stingray Conservation and Ecology page).
For a description of recent GHRI research on elasmobranch mating see:
Chapman, D.D., M.J. Corcoran, G.M. Harvey, S. Malan and M.S. Shivji. 2003. Mating behavior of southern stingrays, Dasyatis americana (Dasyatidae). Environmental Biology of Fishes 68 (3): 241-245.
Chapman, D.D., P.A. Prodohl, J. Gelsleichter, C.A. Manire and M.S. Shivji. 2004. Predominance of genetic monogamy by females in a hammerhead shark, Sphyrna tiburo: Implications for shark conservation. Molecular Ecology 13: 1965-1974.
The Guy Harvey Research Institute
Nova Southeastern University
8000 N Ocean Drive
Dania Beach, FL 33004
The Mission of the Oceanographic Center is to carry out innovative, basic and applied research and to provide high-quality graduate and undergraduate education in a broad range of marine science and related disciplines. | <urn:uuid:9ae2c00a-aac1-4b2c-9896-dfb4120bdaa0> | 3.890625 | 641 | Knowledge Article | Science & Tech. | 31.095067 |
Introduction
This is not a lesson like the others in Radioactivity and Atomic Physics Explained but it fits in well with the lesson on nuclear power. It is a very sophisticated simulation of a pressurised water reactor (PWR), which is the most common type of nuclear power reactor in the US but not in Europe, though the principles are very similar.
Using the tour
We have been increasingly using Flash animations for illustrating Physics content. This page provides access to those animations which may be of general interest. The animations will appear in a separate window. The animations are sorted by category, and the file size of each animation is included in the listing. Also included is the minimum version of the Flash player that is required; the player is available free from http://get.adobe.com/flashplayer/ . | <urn:uuid:40361a0d-9b62-4630-8c10-59500acf2807> | 3.5 | 171 | Truncated | Science & Tech. | 40.7825 |
Visit The Physics Classroom's Flickr Galleries and enjoy a photo overview of the topic of refraction and lenses.
What About Astigmatism?
Learn about astigmatism at Microscopy University.
LTU Physlet: Optics Model for a Far-Sighted Eye (with corrective lenses)
Use this Java applet to explore the correction for a farsighted eye. And click here to explore the nearsighted eye.
Use this Java applet to demonstrate the problem of farsightedness and its correction. And click here to explore the nearsighted eye.
Farsightedness and its Correction
The human eye's ability to accommodate allows it to view focused images of both nearby and distant objects. As mentioned earlier in Lesson 6, the lens of the eye assumes a large curvature (short focal length) to bring nearby objects into focus and a flatter shape (long focal length) to bring a distant object into focus. Unfortunately, the eye's inability to provide a wide variance in focal length leads to a variety of vision defects. Most often, the defect occurs at one end of the spectrum - either the inability to assume a short focal length and focus on nearby objects or the inability to assume a long focal length and thus focus on distant objects.
Farsightedness or hyperopia is the inability of the eye to focus on nearby objects. The farsighted eye has no difficulty viewing distant objects. But the ability to view nearby objects requires a different lens shape - a shape that the farsighted eye is unable to assume. Subsequently, the farsighted eye is unable to focus on nearby objects. The problem most frequently arises during latter stages in life, as a result of the weakening of the ciliary muscles and/or the decreased flexibility of the lens. These two potential causes lead to the result that the lens of the eye can no longer assume the high curvature that is required to view nearby objects. The lens' power to refract light has diminished and the images of nearby objects are focused at a location behind the retina. On the retinal surface, where the light-detecting nerve cells are located, the image is not focused. These nerve cells thus detect a blurry image of nearby objects.
The cure for the farsighted eye centers around assisting the lens in refracting the light. Since the lens can no longer assume the convex and highly curved shape that is required to view nearby objects, it needs some help. Thus, the farsighted eye is assisted by the use of a converging lens. This converging lens will refract light before it enters the eye and subsequently decreases the image distance. By beginning the refraction process prior to light reaching the eye, the image of nearby objects is once again focused upon the retinal surface.
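The qualitative cure can be made quantitative with the thin-lens equation, 1/f = 1/d_o + 1/d_i. A sketch with assumed numbers (not from the text): suppose a farsighted eye's near point has receded to 100 cm; the corrective lens must take an object at the normal 25 cm reading distance and form a virtual image back at 100 cm, where the eye can focus it.

```python
d_o = 25.0       # object at the normal reading distance (cm) -- assumed
d_i = -100.0     # virtual image at the receded near point (cm); negative = same side

f = 1 / (1/d_o + 1/d_i)         # thin-lens equation: 1/f = 1/d_o + 1/d_i
power = 100 / f                 # lens power in dioptres (f in cm -> 100/f)

assert f > 0                    # positive f: a converging lens, as the text says
assert abs(f - 33.3) < 0.1      # focal length ~ +33 cm
assert abs(power - 3.0) < 0.01  # a prescription of about +3.00 D
```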
While farsightedness most often occurs among adults, occasionally younger people will suffer from this vision defect. When farsightedness occurs among youth, the cause is seldom related to the inability of the lens to assume a short focal length. In this case, the problem is more closely related to an eyeball that is shortened. Because the eyeball is shortened, the retina lies closer than usual to the cornea and lens. As a result, the image of nearby objects is formed beyond the retina. The traditional correction for such a problem is the same as for adults - the use of a converging lens. | <urn:uuid:23f786a0-f381-444e-ab08-482285a9fc7d> | 3.953125 | 706 | Knowledge Article | Science & Tech. | 41.882169 |
Nucleosynthesis is the process of creating new atomic nuclei from preexisting nucleons (protons and neutrons).
The primordial preexisting nucleons were formed from the quark-gluon plasma of the Big Bang as it cooled below roughly two trillion degrees, the quark-hadron transition temperature.
This first process may be called nucleogenesis, the genesis of nucleons in the universe.
The subsequent nucleosynthesis of the elements (including all carbon, all oxygen, etc.) occurs primarily in stars either by nuclear fusion or nuclear fission.
For more information about the topic Nucleosynthesis, read the full article at Wikipedia.org.
| <urn:uuid:01060fad-8967-457d-8913-34dc751bb494> | 3.765625 | 160 | Knowledge Article | Science & Tech. | 32.681667 |
The most famous is probably Saturn's moon Mimas.
Image Credit JPL/NASA/Caltech
Death Star anyone? Mimas is also cool because that massive crater (the laser cannon) would have almost shattered the entire moon.
Credit: NASA, ESA, P. Kalas, J. Graham, E. Chiang, E. Kite (University of California, Berkeley), M. Clampin (NASA Goddard Space Flight Center), M. Fitzgerald (Lawrence Livermore National Laboratory), and K. Stapelfeldt and J. Krist (NASA Jet Propulsion Laboratory)
Fomalhaut is a fairly bright star in the night sky. This picture is cool not only because it looks like the eye of Sauron, but also because of the science behind it. The star is at the center, and the ring around it is similar to the Kuiper belt in our solar system. Within that ring, scientists saw, for the first time, a planet orbiting its parent star. For more see NASA
Tinker Bell (or a hummingbird) has been found in the collision of three galaxies. When galaxies collide they can throw stars and gas across space. This leads to arguably the most beautiful pictures in astronomy.
Image Credits: NASA/CXC/CfA/P. Slane et al.
This last image has a really cool resemblance to a human hand. What you are seeing is the result of a pulsar only 12 miles across. The intense magnetic field and rapid rotation of this massive object has created this beautiful nebula.
I wrote this post for two reasons. First is to look at some wonderful astronomy photos. Second is that we often hear of random patterns like these that people see a shape in and want it to be accepted as real. Often the pictures are not as pretty and maybe not even as close of a match. Without more evidence you shouldn't accept that those are anything more than random shapes like these. | <urn:uuid:357112d5-8c31-48b1-8c92-611930d438e7> | 2.921875 | 402 | Personal Blog | Science & Tech. | 59.039369 |
Originally posted by: ga14
How do you determine if a function is periodic or not without graphing it?
For example, cos^2(2pit) is periodic, as is sin^3(2t). But e^(-2t)cos(2pit) is nonperiodic, as is the discrete signal x[n]=cos(2n).
Does this have to do with Fourier series somehow? Is there an easier way to determine it? Thanks for any help.
I just came up with a little something that you might find interesting. I thought about it while I was working on some math (before my quiz tomorrow).
In the case y = a sin b (where a and b are functions
of x, not just constants), the function y is periodic if the derivatives of the two functions are, respectively, zero (so a is constant) and a nonzero constant (so b is linear in x). In your aforementioned example, the derivative of 2pit with respect to t would indeed be a constant (2pi), but, using the chain rule, the derivative of e^(-2t) is -2e^(-2t), which still varies with t, no matter how many times you take the derivative. (In fact, each time you take the derivative, you just scale the function by another factor of -2.)
...so that's one way to tell if a trigonometric function is periodic, I suppose. | <urn:uuid:7f135483-3287-4cd8-90f5-6f927d976b3f> | 2.984375 | 292 | Comment Section | Science & Tech. | 66.370388 |
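A low-tech complement to the derivative test: numerically check f(t + T) = f(t) for a candidate period T. A sketch using the poster's own examples - cos^2(2 pi t) should pass with T = 1/2, while the decaying e^(-2t)cos(2 pi t) fails for any T. (For the discrete case x[n] = cos(2n), periodicity would require 2N to be a multiple of 2 pi, i.e. N = pi k, impossible for integers - which is why it is aperiodic.)

```python
import math

def looks_periodic(f, T, samples=200):
    """Crude numerical check that f(t + T) == f(t) at many sample points."""
    return all(abs(f(t + T) - f(t)) < 1e-9
               for t in (k * 0.037 for k in range(samples)))

f1 = lambda t: math.cos(2 * math.pi * t) ** 2          # periodic, period 1/2
f2 = lambda t: math.exp(-2 * t) * math.cos(2 * math.pi * t)  # decaying envelope

assert looks_periodic(f1, 0.5)
assert not looks_periodic(f2, 1.0)   # never repeats: amplitude shrinks by e^-2
```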
LBA REGIONAL DERIVED SOIL PROPERTIES, 0.5-DEG (ISRIC-WISE)
Entry ID: lba_isric_wise
Abstract: The data set consists of a subset of the ISRIC-WISE global data set of derived soil properties for the study area of the Large Scale Biosphere-Atmosphere Experiment in Amazonia (LBA) in South America (i.e., longitude 85 to 30 degrees W, latitude 25 degrees S to 10 degrees N). The World Inventory of Soil Emission Potentials (WISE) database currently contains data for over 4300 soil profiles ... collected mostly between 1950 and 1995. This database has been used to generate a series of uniform data sets of derived soil properties for each of the 106 soil units considered in the Soil Map of the World (FAO-UNESCO, 1974). These data sets were then linked to a 1/2 degree longitude by 1/2 degree latitude version of the edited and digital Soil Map of the World (FAO, 1995) to generate GIS raster image files for the following variables:
- Total available water capacity (mm water per 1 m soil depth)
- Soil organic carbon density (kg C/m**2 for 0-30 cm depth range)
- Soil organic carbon density (kg C/m**2 for 0-100 cm depth range)
- Soil carbonate carbon density (kg C/m**2 for 0-100 cm depth range)
- Soil pH (0-30 cm depth range)
- Soil pH (30-100 cm depth range)
LBA was designed to create the new knowledge needed to understand the climatological, ecological, biogeochemical, and hydrological functioning of Amazonia; the impact of land use change on these functions; and the interactions between Amazonia and the Earth system. LBA was a cooperative international research initiative led by Brazil and NASA was a lead sponsor for several experiments.
Data Set Citation
Dataset Originator/Creator: BATJES, N.H.
Dataset Title: LBA REGIONAL DERIVED SOIL PROPERTIES, 0.5-DEG (ISRIC-WISE)
Dataset Release Date: 2004
Dataset Release Place: Oak Ridge, Tennessee, U.S.A.
Dataset Publisher: Oak Ridge National Laboratory Distributed Active Archive Center
Data Presentation Form: Online Files
Dataset DOI: doi:10.3334/ORNLDAAC/701
Online Resource: http://mercury.ornl.gov/ornldaac/send/query?term2=701&term2attribut...
Start Date: 1950-01-01
Stop Date: 1995-12-31
ISO Topic Category
Role: DIF AUTHOR
Email: shannon.spencer at unh.edu
Complex Systems Research Center Institute for the Study of Earth, Oceans, and Space Morse Hall University of New Hampshire
Province or State: New Hampshire
Postal Code: 03824
Creation and Review Dates
DIF Creation Date: 2001-07-11
Last DIF Revision Date: 2007-09-17 | <urn:uuid:b3a1ad9f-9031-4a8f-8286-01ea03c968ae> | 2.796875 | 677 | Content Listing | Science & Tech. | 52.121731 |
Multiple Arrivals in Wave Propagation
Level set methods and Fast Marching Methods are designed to track a propagating interface, and find the first arrival of the interface as it passes a point. One way to think about this is to imagine that the front's boundary is a propagating flame that separates two regions: a burnt part behind the front and an unburnt part in front of the flame. Once a piece of ground is burnt, it stays burnt: this corresponds to an entropy condition which ensures that the motion is irreversible, and that the first arrival time when the disturbance hits is what is measured.
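The "burnt stays burnt" entropy condition is exactly what makes one-pass, Dijkstra-style marching possible: once a grid point's first-arrival time is accepted, it never changes. A heavily simplified sketch (plain graph Dijkstra on a grid with axis-aligned steps, not a true upwind fast marching discretization):

```python
import heapq

def first_arrival(speed, src):
    """First-arrival times on a grid; speed[i][j] is the local front speed."""
    n, m = len(speed), len(speed[0])
    T = [[float("inf")] * m for _ in range(n)]
    T[src[0]][src[1]] = 0.0
    heap = [(0.0, src)]
    burnt = set()                          # accepted points: 'burnt stays burnt'
    while heap:
        t, (i, j) = heapq.heappop(heap)
        if (i, j) in burnt:
            continue
        burnt.add((i, j))
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if 0 <= a < n and 0 <= b < m:
                cand = t + 1.0 / speed[a][b]   # travel time for one grid step
                if cand < T[a][b]:
                    T[a][b] = cand
                    heapq.heappush(heap, (cand, (a, b)))
    return T

# Uniform unit speed: arrival time equals Manhattan distance from the source.
T = first_arrival([[1.0] * 5 for _ in range(5)], (0, 0))
assert T[0][3] == 3.0 and T[2][2] == 4.0
```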
However, there are many situations, like propagating sound waves, in which later arrivals are also important. In the drawings below, a wave crosses over itself as it moves, creating both initial and later arrivals:
A good example of the important of these later arrivals is in geophysical imaging, in which one tries to predict what lies beneath the earth's surface by sending waves into the ground and recording their reflection. In this case, the first returning wave might not contain all the information, and later arrivals might contain more energy which can be used to more accurately predict what lies beneath.
Standard Approach
Suppose you want to compute the arrival of all waves starting at a source point, as in the figure on the left:
A New, Evolving Interface Approach
Instead, we can transform this into a boundary value problem, and use something like the fast marching method, only this time, we will add an extra dimension to the problem. Let's start by thinking about a two dimensional problem. At each red spot in the domain, we can imagine three variables: the two x and y coordinates, and an additional coordinate theta corresponding to the takeoff angle from that point. Then we can imagine an arrow, leaving that point, passing and bending through the medium (the same way that light bends as it passes through different media) and landing somewhere on the boundary: the place and direction it lands are called escape coordinates.
These escape coordinates satisfy a static equation (though with an extra dimension), which we can solve using a variant of fast marching methods and ordered upwind methods. Start a surface at the boundary (in three dimensional phase space): for every point on this boundary, we know the escape position and angle, since it is the same as the starting point (it's already on the exit!). Then, we can systematically march inwards, reaching back to the known escape values, and eventually cover the entire phase space cube: this is what is shown below:
Results
Below is one result from this technique: waves propagate from the top point through a region that contains a slowness disk in the center: you can easily see the waves propagate around the slow part, and double back on themselves, creating multiple arrivals.
Details
We developed a fast, general computational technique for computing the phase-space solution of static Hamilton-Jacobi equations. Starting with the Liouville formulation of the characteristic equations, we derived ``Escape Equations'' which are static, time-independent Eulerian PDEs. They represent all arrivals to the given boundary from all possible starting configurations. The solution is numerically constructed through a `one-pass' formulation, building on ideas from semi-Lagrangian methods, Dijkstra-like methods for the Eikonal equation, and Ordered Upwind Methods. To compute all possible trajectories corresponding to all possible boundary conditions, the technique is of computational order O(N \log N), where N is the total number of points in the computational phase-space domain; any particular set of boundary conditions is then extracted through rapid post-processing. The technique can be applied to the problem of computing first, multiple, and most energetic arrivals to the Eikonal equation.
Writing a Windows PowerShell Formatting File
"Writing a Windows PowerShell Formatting File" is for command developers who are writing cmdlets or functions that output objects to the command line. Formatting files define how Windows PowerShell displays those objects at the command line. This documentation provides an overview of formatting files, an explanation of the concepts that you should understand when writing these files, examples of XML used in these files, and a reference section for the XML elements.
In This Section
- Formatting File Overview
- Describes what a format file is and the general components of a formatting file, including common features that can be defined in the file, the different types of format views that can be defined for .NET Framework objects, and a simplified example of the XML used to define a table view.
- Formatting File Concepts
- Includes information that you might need to know when creating your own formatting files, such as the different types of views that you can define and special components of those views.
- Examples of Formatting Files
- Provides XML examples of several formatting files, including examples of a table view, a list view, and a wide view, as well as examples that show how to define features such as selection sets, selection conditions, and common controls.
- Format Schema XML Reference
- Includes reference topics for the XML elements used in a formatting file. | <urn:uuid:5ff13c2f-e38d-4947-bac8-65bc3bc05bf3> | 3.046875 | 278 | Documentation | Software Dev. | 27.260132 |
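To give a flavour of what these topics cover, the skeleton of a table view looks roughly like this (a hedged sketch: the type name `MyModule.Task`, its properties, and the file name are hypothetical):

```xml
<!-- MyModule.format.ps1xml (hypothetical): a two-column table view -->
<Configuration>
  <ViewDefinitions>
    <View>
      <Name>TaskTable</Name>
      <ViewSelectedBy>
        <TypeName>MyModule.Task</TypeName>
      </ViewSelectedBy>
      <TableControl>
        <TableHeaders>
          <TableColumnHeader>
            <Label>Name</Label>
            <Width>20</Width>
          </TableColumnHeader>
          <TableColumnHeader>
            <Label>Status</Label>
          </TableColumnHeader>
        </TableHeaders>
        <TableRowEntries>
          <TableRowEntry>
            <TableColumnItems>
              <TableColumnItem>
                <PropertyName>Name</PropertyName>
              </TableColumnItem>
              <TableColumnItem>
                <PropertyName>Status</PropertyName>
              </TableColumnItem>
            </TableColumnItems>
          </TableRowEntry>
        </TableRowEntries>
      </TableControl>
    </View>
  </ViewDefinitions>
</Configuration>
```

A file like this is loaded into a session with `Update-FormatData`; the Format Schema XML Reference documents each of these elements.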
|Dominant Cropland Irrigation Water Source, 1997|
|Source: Natural Resources Conservation Service|
This shaded polygon map shows the dominant water source for irrigated cropland for each 8-digit hydrologic unit. Water sources are categorized by 1) Well, 2) Pond, Lake, or Reservoir, 3) Stream, Ditch, or Canal, 4) Lagoon or Wastewater (not tailwater), and 5) a combination of water sources. The dominant water source is defined as the water source that is the most common in that hydrologic unit. Areas with 95% or more Federal area are shown as gray. Areas without irrigated cropland are left white.
Cautions for this Product:
In many areas shown on this map as irrigated, non-irrigated cropland may be more common than any type of irrigated cropland. The total amount of irrigated cropland may be small. Irrigation of land uses other than cropland is not included. Data are not collected on Federal land. Data are not available for Alaska and the Pacific Basin. Data for Puerto Rico and the U.S. Virgin Islands are shown by 6-digit hydrologic unit. For Further Information: | <urn:uuid:9528b4e2-70c2-4178-9272-a558664b9388> | 3.59375 | 263 | Knowledge Article | Science & Tech. | 43.699105
Two trees 20 metres and 30 metres long, lean across a passageway between two vertical walls. They cross at a point 8 metres above the ground. What is the distance between the foot of the trees?
Three triangles ABC, CBD and ABD (where D is a point on AC) are all
isosceles. Find all the angles. Prove that the ratio of AB to BC is
equal to the golden ratio.
The largest square which fits into a circle is ABCD and EFGH is a square with G and H on the line CD and E and F on the circumference of the circle. Show that AB = 5EF.
Similarly, the largest equilateral triangle which fits into a circle is LMN and PQR is an equilateral triangle with P and Q on the line LM and R on the circumference of the circle. Show that LM = 3PQ.
This dynamic image is drawn using GeoGebra, free software that is very easy to use. You can download your
own copy of GeoGebra from the GeoGebra website,
together with a good help manual and tutorials
for beginners. You may be surprised at how easy it is to draw the dynamic diagram above for yourself.
Doing mathematics often involves observing and explaining properties of `invariance', that
is, what remains the same when the rest of the pattern changes according to certain rules that can
be described in mathematical terms. NRICH dynamic mathematics problems allow you to alter the diagrams
and change some properties, so that you can observe what remains invariant. This may lead you to a
conjecture that you can prove. Proving the result in the case of The Eyeball Theorem uses only similar | <urn:uuid:11da5587-7026-4229-8ba5-be33afdc8778> | 3.359375 | 344 | Tutorial | Science & Tech. | 62.938129 |
How many people died? It's one of the first questions asked in a war or violent conflict, but it's one of the hardest to answer. In the chaos of war many deaths go unrecorded and all sides have an interest in distorting the figures. The best we can do is come up with estimates, but the trouble is that different statistical methods for doing this can produce vastly different results. So how do we know how different methods compare?
Plants are amazingly good at something that is still flummoxing
us humans in our quest for sustainable energy sources: turning sunlight
into energy in an efficient way. Around 100 billion tons of biomass
are produced annually through photosynthesis. The question is, how
exactly do plants do it?
Yesterday's refusal by the UK government to posthumously pardon Alan Turing makes sad news for maths, computer science and the fight against discrimination. But even if symbolic gestures are, symbolically, being rebuffed, at least Turing's most important legacy — the scientific one — is going stronger than ever. An example is this week's announcement that scientists have devised a biological computer, based on an idea first described by Turing in the 1930s.
How does Olympic success correlate with a nation's GNP? How does the location of the Olympics affect the chance of record breaking? How can simple statistics help us understand the likelihood of winning streaks and the chance that an innocent athlete will fail a drugs test? What events should an ambitious nation target as the "easiest" in which to win Olympic medals. John D. Barrow will explore these question and more in a free public lecture at Gresham College in London tomorrow, 17th January 2012.
In the corner of the garden between the Centre of Mathematical Sciences and the Isaac Newton Institute in Cambridge, sits a reminder of our ongoing quest to understand gravity: an apple tree that was taken as a cutting from the tree at Newton's birthplace, the tree that is said to have inspired his theory of gravity. Newton's theory was extended to the cosmological scales by Einstein's theory of general relativity – but can supergravity explain how gravity works in the quantum world? | <urn:uuid:719c2e3b-2849-49ab-944d-ec8477d76b9f> | 2.921875 | 441 | Content Listing | Science & Tech. | 47.00393 |
It is exciting to see China enter the space race and for the
US to propose new ventures to the moon and Mars. Yet there
is a lack of vision behind human space projects which has
prevented sustained support and progress.
We have a motivating purpose behind a sustained drive into
space: the purpose of space research should be to secure human
destiny by moving off Earth, beyond our solar system, and into the galaxy.
It's clear large portions of life on earth have been destroyed
by meteorite strikes and possibly even gamma ray bursts from dying
stars. Our sun, too, will die taking our solar system and
all life with it.
Life has always bounced back. Humans may not. Humans are at
the top of the food chain which means we will be the most impacted
by an earth or solar system wide calamity.
What will save humanity is our intelligence. Our intelligence,
however, must be applied.
It sounds silly to worry about a dying sun that is so far
away. In millions of years surely we'll have the technology
to save ourselves. But there are no guarantees. Nothing happens
without having a goal, creating a plan, and taking action.
A man and a woman can stand side by side for millions of
years and still not have any children. Having children
requires taking a certain kind of action. We can't rely
on just luck or time.
It sounds less silly to worry about meteorites and gamma
ray bursts because they have already happened and will happen
again. That humanity has not already been destroyed is
random chance in a lottery we have only recently
understood we are playing.
Adrian L. Melott, a University of Kansas astronomer, said:
"You can expect a dangerous gamma ray burst every few hundred
million years. It could happen tomorrow or it could be
millions of years."
We are not yet taking steps to save ourselves.
As humans we need to take the next logical steps
and make sure we can survive problems we know will happen.
It would be a shame to have come this far for nothing.
So, what does this all mean?
* We should come up with a list of potential threats to humanity.
* We should come up with plans to meet those threats.
* NASA and other space agencies should be retasked with this goal in mind.
* The UN should facilitate the space efforts of all countries into one earth wide effort with the common goal of securing human destiny.
This proposal is not meant to be thousands of pages on potential threats
and solution plans. There are many smart people who can tackle these
problems. Rather, this proposal is meant to provide simple
and clear reasons for why space exploration is necessary and
what our priorities should be in the future.
Some useful links:
* Extinction Level Events (http://en.wikipedia.org/wiki/Extinction_event)
* Theory: Sun Radiation Caused Extinction (http://www.newsmax.com/archives/articles/2004/1/7/231029.shtml)
* BBC Mass Extinctions Introduction (http://www.bbc.co.uk/education/darwin/exfiles/massintro.htm) | <urn:uuid:017942a4-705d-4a2b-aa04-76b12817240b> | 3.0625 | 682 | Personal Blog | Science & Tech. | 60.637775 |
Skid-to-turn is a way of turning an aeronautical vehicle such as an aircraft or missile. In skid-to-turn, the vehicle does not roll to a preferred angle. Instead, commands to the control surfaces are mixed to produce the maneuver in the desired direction. This is distinct from the coordinated turn used by aircraft pilots. For instance, a vehicle flying horizontally may be turned in the horizontal plane by applying rudder controls to place the body at a sideslip angle (the rotation of the vehicle centerline away from the relative wind) relative to the airflow. This sideslip flow then produces a force in the horizontal plane that turns the vehicle's velocity vector. The benefit of the skid-to-turn maneuver is that it can be performed much more quickly than a coordinated turn, which is useful when trying to correct small errors. The disadvantage occurs if the vehicle has greater maneuverability in one body plane than another: in that case the turns are less efficient and either consume greater thrust or cause a greater loss of aircraft specific energy (the combined kinetic and potential energy of the vehicle) than coordinated turns. | <urn:uuid:274bae68-a324-45d9-b77b-707e9c8ef865> | 3.625 | 317 | Q&A Forum | Science & Tech. | 41.277278
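The turning mechanism can be summarised with a standard small-angle flight-dynamics approximation (an illustrative addition, not part of the original article; the symbols are the conventional ones):

$$Y = \bar{q}\, S\, C_{Y_\beta}\, \beta, \qquad \dot{\chi} \approx \frac{Y}{mV},$$

where $Y$ is the side force produced by sideslip angle $\beta$, $\bar{q}$ is the dynamic pressure, $S$ the reference area, $C_{Y_\beta}$ the side-force derivative, $\chi$ the heading angle of the velocity vector, $m$ the vehicle mass, and $V$ its speed. Because the force appears as soon as $\beta$ does, with no need to first roll the vehicle, the skid-to-turn response is faster than a coordinated (bank-to-turn) response.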
VARIATION IN DNA CONTENT AMONGST DIFFERENT SPECIES
afc at gnv.ifas.ufl.edu
Mon Jan 31 09:33:18 EST 1994
In article <1994Jan28.190047.6170 at iitmax.iit.edu>, garfinkl at iitmax.iit.edu (Mark D. Garfinkel) writes:
> afc at gnv.ifas.ufl.edu (Andrew Cockburn) writes:
> *The readers of this thread would do well to review Britten &
> Kohne's 1968 Science paper for the definition & initial methodology on
> genomic DNA complexity measurements, as well as the long series of Cot
> curve experiments by Britten & Davidson & their colleagues.
Shameless plug: When I started working on mosquitoes, I wanted to know
what their interspersion patterns were like. Instead of doing Cot curves
(of which I have done too many), we simply hybridized total genomic DNA
to a few hundred random clones from a genomic library. In a week or
so we had determined interspersion patterns for ten species. We
published this in Arch. Insect Biochem.Physiol. 10:105-113 (1989).
>>What I would like to see is an explanation of how a small genome
>>can evolve from a large genome full of repeats. I believe that this
>>happens in insects fairly frequently. Allen Spradling suggested to me
>>that this might be tied up with the mechanism of polytene
>>chromosome formation, which involves deletion of repetitive DNA.
> *Not being privy to the exchange you had with him, I don't know
> what Allan had in mind. I wonder how the mechanisms at work in
> underreplicating or eliminating portions of the genome in *somatic*
> polytene tissues could bear on the behavior of *germline* chromosomes.
> Please elaborate for us.
Spradling's point was based on experiments he reported in _Cell_ a few
years ago. It is the convential wisdom that polyteny in Drosophila includes
underreplication of heterochromatin. His results indicate that
this is false; rather heterochromatin is specifically deleted *after*
being replicated. BTW, polytenes are known from many groups besides
insects, and in most of these other cases it is not assumed that under-
replication is responsible for elimination of heterochromatin. (This
is based on a conversation from several years ago and the subsequent
reading of his paper. I know nothing about the polytene literature.)
My argument is that there does exist a mechanism for specifically
eliminating heterochromatin and perhaps other repeats from the genome.
Although this is expressed in somatic tissue, it does not take too
great a leap of imagination to think that this might occur occasionally
in the germline, at least on an evolutionary time scale.
Some such mechanism must be acting, or genomes would have expanded to
infinity by now. We know of lots of mechanisms for expanding genomes,
and if there were no counteracting mechanism, genome size would ratchet up. | <urn:uuid:0b15f6a4-804e-4673-be8e-8cd63a708df1> | 2.703125 | 693 | Comment Section | Science & Tech. | 45.779586
User events such as a mouse click are received by the browser process, then marshaled to WebKit where the click event is hit tested through a page's DOM, checking for event handlers along the way. On touch-capable devices, a finger drag can be used to scroll the page, but a Touch event handler on the page may also optionally override this default behavior by calling preventDefault. Because there is no way to determine programmatically if an event handler will prevent this default scrolling behavior, if a Touch event occurs where there's an event handler, we have to first marshal the event to WebKit to run through its event handler. The WebKit thread is often slow to respond, particularly during page load, which can result in very long delays between a Touch intended to scroll the page, and the scroll actually occurring.
The original solution for this problem was for WebKit to inform the embedding application of when a page had at least one Touch event handler registered. If there were no Touch event handlers registered, we wouldn't send the events to WebKit, and would scroll immediately. This document describes a more flexible solution where regions of the page where Touch event handlers are active are used by the compositor to avoid waiting for WebKit hit testing for as much of a page as is possible.
Remove latency for touch scrolling wherever possible without changing any behavior for pages that use touch event handlers.
In WebKit, we hook into the creation of Touch event handlers, and add DOM Nodes with any type of Touch event handler to a counted map in WebCore::Document. After a layout occurs, or when a Touch event handler is added or removed, code in WebCore::ScrollingCoordinator iterates across all DOM Nodes to generate a vector of rectangles where Touch events need to be marshaled to WebKit. Due to out-of-flow children, this involves walking through all the child renderers associated with each Node being tracked. If a Touch event handler is registered on the DOMWindow or Document node, we avoid the potentially expensive process of walking the renderers, and simply use the view's bounds, as they're guaranteed to be inclusive. Note: registering a Touch event handler on the DOMWindow or Document node will defeat any benefit provided by this project!
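The aggregation logic just described can be sketched as follows (a simplified stand-in, not the actual WebCore::ScrollingCoordinator code; the types and function name are invented for illustration):

```cpp
#include <cassert>
#include <vector>

// Simplified stand-in for a layout rectangle.
struct Rect { int x, y, w, h; };

// Build the list of rects where touch events must be sent to WebKit.
// renderer_rects_per_node holds, for each DOM node with a touch handler,
// the bounds of all its renderers (including out-of-flow children).
std::vector<Rect> ComputeTouchHandlerRects(
    bool handler_on_window_or_document,
    const Rect& view_bounds,
    const std::vector<std::vector<Rect>>& renderer_rects_per_node) {
  // A handler on the DOMWindow or Document covers the whole view, so we
  // can skip the expensive renderer walk -- but then every touch goes to
  // WebKit, defeating the optimization.
  if (handler_on_window_or_document)
    return {view_bounds};
  std::vector<Rect> rects;
  for (const auto& node_rects : renderer_rects_per_node)
    for (const Rect& r : node_rects)
      rects.push_back(r);
  return rects;
}
```

Recomputing this after every layout, and whenever a handler is added or removed, keeps the compositor's view of the handler regions current.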
Currently, these rects are generated in the outermost Document's coordinate system. This is to enable us to share code with the iOS implementation of the same functionality (see this bug for background). Placing these rects in the coordinate system of their enclosing cc::Layer would allow us to compute tighter bounds when there are transforms, as well as avoid needing to recompute these rects when rects in a sub-frame are scrolled, since the compositor is already aware of the scroll.
The hit testing is currently done just for touchStart events, since the point at which these events hit determines where the next train of events will be sent until we receive another touchStart (due to a different gesture starting, or another finger being pressed on the screen). On the compositor, we first check whether the touch event falls on any layer, and then we walk the layer hierarchy checking the touchEventHandlerRegion on every layer. Currently, because of the reasons mentioned above about using the outermost coordinate system, this search ends on the main scrolling layer, and if there is a hit, the compositor forwards this touch event to the renderer, where it is sent to WebKit to be processed as usual. If no touchEventHandlerRegion was hit, the compositor sends an ACK with NO_CONSUMER_EXISTS.
As far as the browser side is concerned, only the ACKs it receives for the outgoing touch events matter in determining the current state. Currently there are four states that the ACK can take. INPUT_EVENT_STATE_ACK_UNKNOWN is the initial default state of the touch_event_queue and might not be used on all platforms (e.g. Android). When a touchStart event arrives, the touch event queue on the browser side always sends it through IPC to the compositor. The touch event queue then waits for the ACK for that touchStart to make a decision about the rest of the touch events in the queue.
If it receives NO_CONSUMER_EXISTS, it stops sending touch events to the compositor until the next touchStart arrives, and sends them directly to the platform-specific gesture detector. This is mostly the case for regular browsing, and it helps the gesture detector take over after a single touch event gets ACKed back from the compositor, making it possible for the gesture to be generated fast enough not to cause any visible lag.
If it receives either NOT_CONSUMED or CONSUMED, this means there was a hit in the touchEventHandlerRegion and we should continue sending the touchMoves and touchEnd following this event to the compositor( which will send them to the renderer without doing any hit testing). If the ACK was CONSUMED, then the touchEventHandler had called preventDefault and neither this particular touch event nor the rest of the touch events until the next touchStart should be sent to the gesture detector. If the ACK was NOT_CONSUMED, this might mean either the touchEventHandlerRegion was too conservative and when the touchStart was hit tested in WebKit it didn't hit any touchEventHandlers or the touchEventHandler didn't preventDefault or process that particular touch event. In this case the touch_event_queue still forwards this event to the gesture_detector. | <urn:uuid:bf2a969a-3cf5-48ea-847e-167e6dd1a09c> | 2.703125 | 1,148 | Documentation | Software Dev. | 33.719478 |
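The browser-side decision logic described in the last three paragraphs can be condensed into a small sketch (not Chromium's actual touch_event_queue implementation; the enum and function names are invented for illustration):

```cpp
#include <cassert>

// Possible ACK results for a forwarded touchStart (simplified).
enum class AckState {
  kUnknown,           // initial default state; may be unused on some platforms
  kNoConsumerExists,  // no touchEventHandlerRegion was hit
  kNotConsumed,       // a region was hit, but preventDefault was not called
  kConsumed           // the handler called preventDefault
};

// Where the touchMoves/touchEnd following the touchStart are routed.
enum class Route {
  kRenderer,          // keep forwarding to the compositor/renderer only
  kGestureDetector,   // bypass the renderer; feed the gesture detector
  kBoth               // forward to the renderer and the gesture detector
};

// Decide routing for the rest of the touch sequence, given the ACK
// received for its touchStart.
Route RouteAfterTouchStart(AckState ack) {
  switch (ack) {
    case AckState::kNoConsumerExists:
      // No handler region hit: stop sending events to the compositor
      // until the next touchStart; scroll via the gesture detector.
      return Route::kGestureDetector;
    case AckState::kConsumed:
      // preventDefault was called: the page owns the whole sequence,
      // so nothing goes to the gesture detector.
      return Route::kRenderer;
    case AckState::kNotConsumed:
      // The region was conservative, or the handler ignored the event:
      // keep the renderer informed, but also drive the gesture detector.
      return Route::kBoth;
    default:
      // Unknown: keep forwarding until an ACK resolves the state.
      return Route::kRenderer;
  }
}
```

The key property is that a single touchStart ACK pins the routing for the whole sequence, so the per-event hit-testing cost is paid at most once per gesture.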
Bats of Ouray National Wildlife Refuge, Utah
Ouray National Wildlife Refuge (NWR) is located in the northeast corner of Utah along the Green River and is part of the Upper Colorado River System and the Colorado Plateau. The Colorado Plateau is home to 19 species of bats, some of which are quite rare. Of those 19 species, a few have a more southern range and would not be expected to be found at Ouray NWR, but it is unknown what species occur at Ouray NWR or their relative abundance.
The assumption is that Ouray NWR provides excellent habitat for bats, since the riparian habitat consists of a healthy population of cottonwoods with plenty of older, large trees and snags that would provide foraging and roosting habitat for bats. The more than 4,000 acres of wetland habitat, along with the associated insect population resulting from the wetland habitat, would provide ideal foraging habitat for bats. The overall objective of this project is to conduct a baseline inventory of bat species occurring on the refuge using mist nets and passive acoustic monitoring.
- The Author is Laura E. Ellison.
Citation: Ellison, L.E., 2011, Bats of Ouray National Wildlife Refuge: U.S. Geological Survey Open-File Report 2011–1032, 51 p. | <urn:uuid:5fcca6f9-0332-4178-9272-a558664b9388> | 3.421875 | 270 | Knowledge Article | Science & Tech. | 45.19922 |
ANSI Common Lisp 24 System Construction 24.1 System Construction Concepts
To load a file is to treat its contents as code
and execute that code.
The file may contain source code or compiled code.
A file containing source code is called a source file.
Loading a source file is accomplished essentially
by sequentially reading the forms in the file,
evaluating each immediately after it is read.
A file containing compiled code is called a compiled file.
Loading a compiled file is similar to loading a source file,
except that the file does not contain text but rather an
implementation-dependent representation of pre-digested expressions
created by the compiler. Often, a compiled file can be loaded
more quickly than a source file.
See Section 3.2 Compilation.
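As a brief illustration (the file name is hypothetical, and the compiled file's pathname type is implementation-dependent):

```lisp
;; Loading a source file reads and evaluates each form in turn:
(load "utils.lisp")

;; COMPILE-FILE produces a compiled file from the source file;
;; COMPILE-FILE-PATHNAME reports the pathname it will use:
(compile-file "utils.lisp")

;; Loading the compiled file often proceeds more quickly:
(load (compile-file-pathname "utils.lisp"))
```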
The way in which a source file is distinguished from a compiled file | <urn:uuid:aea92957-7a1f-40b5-ba10-6cace369d796> | 3.4375 | 178 | Documentation | Software Dev. | 42.5235 |
In this video segment, participants discuss some of the vocabulary of geometry. Watch this segment after you have completed Problems D1-D3 and compare your thinking with that of the onscreen participants.
What are some of the difficulties the participants ran into while trying to define a point and a line? How are your descriptions similar to or different from those of the onscreen participants?
If you are using a VCR, you can find this segment on the session video approximately 20 minutes and 48 seconds after the Annenberg Media logo. | <urn:uuid:1ecb0712-ca13-4075-af6d-92b74682713f> | 3.3125 | 107 | Truncated | Science & Tech. | 44.263099 |
Wetland Changes Affect South Florida Freezes
Orange and other citrus crops are being squeezed by stronger freezes in South Florida, due to changes in wetlands.
Image to right: Landsat Satellite Looks at Florida -- This image mosaic of South Florida was acquired by the Landsat-5 satellite during the late-1980's. The large blue circular area left of the center is Lake Okeechobee. The dark colored area just to the south of the lake is an agricultural area. To the southeast and east are water conservation areas. The Everglades wetlands are to the south. Click on image to enlarge. Credit: EarthSatellite Corporation/NASA/USGS
Scientists using satellite data, records of land-cover changes, computer models, and weather records found a link between the loss of wetlands and more severe freezes in some agricultural areas of south Florida. In other areas of the state, land use changes resulted in slightly warmer conditions.
NASA and U.S. Geological Survey (USGS) funded scientists in several agencies used Landsat satellite data to look at changes to wetlands in south Florida, particularly south and west of Lake Okeechobee. They studied three freeze events and reproduced those conditions in a computer climate model, using weather and land-cover change records. The freezes they studied took place on December 26, 1983, December 25, 1989, and January 19, 1997.
Image to left: Citrus Crops in Florida -- The number of citrus trees by county, and key areas of winter vegetable cultivation during 2000. Click on image to enlarge. Credit: FDACS (2002)/Colorado State Univ.
Curtis H. Marshall and Roger A. Pielke Sr. of Colorado State University (CSU), Fort Collins, Colo. and Louis T. Steyaert of the USGS and NASA's Goddard Space Flight Center, Greenbelt, Md. authored the study.
They found a strong connection between areas that were changed from wetlands to agriculture during the 20th century, and those that experienced colder minimum and subfreezing temperatures over a longer time, in the current land-use scenario. Water typically doesn't cool as quickly as the land at night, which may explain why when wetlands are converted to croplands the area freezes more quickly and more severely.
Image to right: Commercial Citrus Production Areas -- The State's citrus belt is divided into five production areas. Click on image to enlarge. Credit: USDA
"The conversion of the wetlands to agriculture itself could have resulted in or enhanced the severity of recent freezes in some of the agricultural lands of south Florida," said Curtis Marshall.
The study focused on "radiation freeze events" which occur at night, frequently under calm wind conditions and when there is little or no cloud cover. At night, much of the warmth absorbed by the land during the day escapes into the atmosphere, cooling the ground.
This study of wetland changes is important to Florida and the rest of the U.S., particularly Florida, which ranked first among states in 2002 for cash receipts from citrus crops. The state was also either first or second for several fresh fruit and vegetable crops, including avocados, bell peppers, strawberries, sweet corn, tomatoes, and watermelon.
Image to left: Evaluating Freeze Damage -- To evaluate fruit for freeze damage several cuts are made. The first cut removes the cap or rind at the stem end. The second cut is made 1/4" down from the beginning of the pulp. The third cut is 1/4" inch further down. The fourth and final cut is made through the center. This orange has wavy and open segments indicating burst juice cells. The effects of drying are seen at each cut. Click on image to enlarge. Credit: USDA
Over the last 150 years, the citrus industry has been planting and moving further southward, changing more wetlands into agricultural lands. Ironically, as the industry moves further south to avoid freezes, the land changes create the conditions the industry is trying to avoid.
The scientists analyzed land-cover changes over the past 100 years in Florida based on information from the USGS, which reconstructed what the vegetation looked like before 1900. They also used early 1990s Landsat-5 satellite images of land cover. They input land-cover and weather record data into a computer model and re-created the conditions for the three freezes.
Image to right: Checking for Freeze Damage -- Weather conditions may affect the fruit droppage rates. Of all the varieties, the honey tangerines have had the highest droppage rates. Click image to enlarge. Credit: USDA
In all three cases, the most densely cultivated areas were colder and also experienced subfreezing conditions for a longer period of time. Those areas include south and southwest of Lake Okeechobee and other key agricultural areas in the Kissimmee River valley.
They concluded that the conversion of wetlands may be enough to make the freeze events stronger in south Florida. Meanwhile, the researchers also found that an open-pit mine surrounded by urban areas east of Tampa exhibited warming. Warmer areas were also seen over coastal areas of west-central and east-central Florida.
NASA Goddard Space Flight Center | <urn:uuid:3cda0754-58d9-4ffb-a5a5-43e561e7bb8a> | 3.71875 | 1,059 | Knowledge Article | Science & Tech. | 46.271548 |
Evolution and Structure of the Internet by Romualdo Pastor-Satorras and Alessandro Vespignani, Cambridge University Press, £40, ISBN 0521826985
GREATEST information resource ever? That has to be the internet. Lacking any central management, yet supremely efficient and dependable, this "network of networks" is also a marvel of self-organisation. What are its fundamental laws? In Evolution and Structure of the Internet, physicists Romualdo Pastor-Satorras and Alessandro Vespignani survey an explosion of recent work by physicists who have discovered that the internet and world wide web share deep secrets with other complex networks ranging from food webs to social networks. This book illustrates again how the ideas of physics seem to apply to almost everything. | <urn:uuid:0ed5bfe5-3371-4bc2-9686-cfc3473b1595> | 2.703125 | 188 | Truncated | Science & Tech. | 26.773542
Spider Pictures, Pictures of Arachnids, Facts, and Information
Pictures of spiders and other arachnids, including scorpions, with detailed information.
The arachnid family includes over 100,000 known species, including spiders, scorpions, harvestmen, ticks, solifugae, and mites. Most of those species are terrestrial, but a few live in marine environments. Nearly all arachnids have eight legs, whereas insects only have six. The difference in the number of legs is the most common way that people tell insects and arachnids apart, but you can also distinguish them by remembering that arachnids never have antennae or wings.
Arachnids are mostly carnivorous and feed on pre-digested bodies of small animals and insects. Some arachnids use venom to kill their prey. Unlike other animals, which digest their food with acids once they are in their stomach, arachnids spit their digestive juices on their prey to break them down before they ingest them. Other arachnids, such as ticks and mites, are parasitic and feed off of the blood of other animals.
Some arachnids, such as spiders, lay eggs after mating. Conversely, scorpions and a few other species give birth to live young.
The most famous arachnids, spiders, make an amazing silk that they can use to make webs to catch their prey, spin their prey to restrain it, climb, make burrows, and hold sperm for short amounts of time. Some species of spider can even use threads of their silk to help them glide through the air!
Opiliones, also known as daddy longlegs, do not make silk or have venom. They gained their nickname from their very long legs. Some species have a leg span of 6 inches! They are completely harmless to humans; they feed on plants and scavenge decaying animals and fecal matter. | <urn:uuid:bdb04956-45d2-4a7d-8f76-6fd2e7acbc06> | 3.1875 | 398 | Knowledge Article | Science & Tech. | 48.052206
When studying animal behaviors, biologists generally have to wait for something to happen — either for an animal to do something, or for one animal to provoke another. Alternatively, they could use robot animals, tempting them with sexy fembots, mama-bots or tasty prey.
Predators of the threatened Mojave ground squirrel include badgers, coyotes, snakes, falcons, hawks, and U.S. military aerial strikes. That's because the squirrel makes its home in a section of California's Mojave Desert also used by the Air Force as a practice area. But the military has to make sure not to accidentally bomb the squirrels, them being threatened and all, and expends a lot of time and money trying to find them so as to avoid that.
Five amazing, clean technologies that will set us free, in this month's energy-focused issue. Also: how to build a better bomb detector, the robotic toys that are raising your children, a human catapult, the world's smallest arcade, and much more. | <urn:uuid:8534d993-11aa-4e3f-8379-67015f801540> | 2.71875 | 210 | Content Listing | Science & Tech. | 45.415094 |
Charles Kuen Kao (1933-)
Charles Kuen Kao is a pioneer in the development of fibre optics for use in telecommunications.
Kao was born in Shanghai in 1933. His father, a judge in the Court for International Law, had received an excellent education in both China and the United States. Kao was tutored at home with his brother before attending school in Shanghai. In the aftermath of Japanese invasion, Kao’s family fled to Hong Kong, where he enrolled at St Joseph’s College. Graduating with a perfect academic record, Kao was eligible to apply to the University of Hong Kong. However, enduring the disruption and disarray caused by the war persuaded Kao to study in Britain. He thus became an undergraduate in electrical engineering at Woolwich Polytechnic.
On completing his degree, Kao joined Standard Telephones and Cables, a British subsidiary of International Telephone and Telegraph Co. (ITT) in North Woolwich. As a trainee, Kao showed particular promise in the field of microwave research. He was offered the opportunity to transfer to Standard Telecommunications Laboratories (STL) in Harlow. At STL, Kao and his co-workers conducted pioneering work on fibre optics and their potential as a telecommunications medium. As well as conducting and developing scientific research, including making precise measurements of the attenuation of light in glass and other materials, Kao played a leading role in the engineering and commercial realisation of optical communication.
In 1970 Kao joined the Chinese University of Hong Kong, where he initiated new programmes of study in electronics for both undergraduates and graduates. By the 1980s optical fibres were being laid across the world in vast quantities, and the industry had evolved into a giant. To maintain the pace of research and development within this field, ITT was keen to employ Kao as its first Executive Scientist.
Later in his career, alongside prestigious academic posts, for example at Imperial College London, Kao directed and founded a number of telecommunications companies. In 2009 he was awarded the Nobel Prize in Physics for his contributions to the study of the transmission of light in optical fibres and for fibre communication. | <urn:uuid:a52163b8-f0a0-41f5-bfb5-c6f3eef60b58> | 2.703125 | 441 | Knowledge Article | Science & Tech. | 32.770562 |
Writing in this week's Science, University of Witwatersrand palaeontologist Professor Lyn Wadley and her colleagues describe an excavation they have carried out in a cave site called Sibudu in South Africa's KwaZulu Natal Province.
The team has uncovered successive layers of sedge and other plant materials, including grasses, dating from 77,000 years ago, arranged on the floor of the cave and covering an area between one and three metres across. Also within the oldest deposits are thin layers of leaves from the Cryptocarya woodii tree, also known as the Cape Laurel.
Trees of this species are well known to practitioners of traditional medicine, and chemical analysis of the leaves has confirmed that they contain a range of insecticidal compounds, including alpha-pyrones.
It's likely, therefore, that the ancient Middle Stone Age inhabitants of this shelter were aware of the beneficial mosquito-repelling qualities of these leaves and protected themselves by using them within their bedding.
Moreover, these early humans were also pioneers of infection control, it seems, because from about 73,000 years ago there's evidence in the cave that, rather than make their beds, the inhabitants regularly burned them. This would have had the effect of helping to rid the environment of parasites and other insect pests.
The finding is extremely important because, whilst there is robust evidence of the activities of stone age peoples out in the field, their domestic arrangements were much less well understood. Maybe they even pre-empted the inability of the teenager to make a bed, which is partly why they torched theirs...
Bedrock of course. Bored chemist, Sun, 11th Dec 2011
I live near the site, it can get cold in winter, enough that you will want some sort of blanket, either dry grasses or hides. SeanB, Mon, 12th Dec 2011
what if Wilma was so fat she couldnt rise & Fred kept feeding her cause he loved her so............from the beginnings CZARCAR, Thu, 15th Dec 2011
It certainly seems reasonable that our ancestors would have the need for bedding. A cave floor is far from a comfortable place to sleep without some sort of cushioning and insulation against a hard cold surface. Not that I speak from experience, I hasten to add.
77.000 year old beds are very futuristic from Fred'n'Wilma's point of view Nizzle, Thu, 22nd Dec 2011 | <urn:uuid:11884d03-d283-447c-922c-27fc2b6412b5> | 3.359375 | 505 | Comment Section | Science & Tech. | 53.897727 |
[Picture credit: Suvendra Dutta]
The closest void in our galactic neighborhood is called the 'local void'. It is empty except for a lonely dwarf galaxy whose velocity indicates it's trying to move out of the void. That behavior is what one would expect to happen to underdense regions during structure formation. According to estimates following from ΛCDM simulations, one would also expect there to be about ten dwarf galaxies in this void. So where are they? Why is the void so empty?
However, what is usually computed in structure formation simulations is not the distribution of visible matter but that of dark matter, and our usual matter follows the dark matter's structures. Thus, what we actually know is that there are too many dark matter dwarf haloes in the simulations as compared to data. It might thus be that the problem is not one with the void, but that dark matter haloes simply failed to form galaxies. ΛCDM also predicts too many dwarf dark matter haloes as compared with the observed dwarf galaxies. The solution to that puzzle might thus be on the cosmological side - in case there's something about structure formation we haven't yet got quite right - or on the astrophysical side - in case there's something about galaxy formation we haven't appropriately incorporated.
Tinker and Conroy recently extrapolated the halo occupation distribution (the relation between dark matter halos and galaxies) into regimes in which observational data is lacking in order to model the distribution of dwarf galaxies. In doing so, they claim to be able to model the emptiness of voids, which would mean the explanation is on the astrophysical side. Tikhonov and Klypin however point out that to explain the void structures, small haloes with circular velocity Vc > 20 km/s should not host galaxies, which however they do: they include a table with properties of observed isolated dwarf galaxies with circular velocities of about 20 km/s. Tikhonov and Klypin conclude
"We would like to emphasize that the disagreement with the theory is staggering. The observed spectrum of void sizes disagrees at many sigma level from the theoretical void spectrum if haloes with Vc > 20 km/s host galaxies brighter than MB = −12."
There is no bottomline to this post, I'm just trying to summarize some stuff I recently read. I'm still not entirely sure what to make out of the void problem, any comments are welcome.
A. Tikhonov and A. Klypin "The emptiness of voids: yet another over-abundance problem for the LCDM model" arXiv:0807.0924v1 [astro-ph]
P. J. E. Peebles, "Galaxies as a cosmological test" arXiv:0712.2757v1 [astro-ph]
Jeremy L. Tinker and Charlie Conroy "The Void Phenomenon Explained" arXiv:0804.2475v2 [astro-ph]
Strigari at al "Redefining the Missing Satellites Problem" arXiv:0704.1817v2 [astro-ph] | <urn:uuid:00397da6-0ef9-4ef7-8a8b-bd959481b43c> | 2.859375 | 661 | Personal Blog | Science & Tech. | 49.919915 |
I know that outside a nucleus, neutrons are unstable, with a half-life of about 15 minutes. But when they are together with protons inside a nucleus, they are stable. How does that happen?
I got this from wikipedia:
When bound inside of a nucleus, the instability of a single neutron to beta decay is balanced against the instability that would be acquired by the nucleus as a whole if an additional proton were to participate in repulsive interactions with the other protons that are already present in the nucleus. As such, although free neutrons are unstable, bound neutrons are not necessarily so. The same reasoning explains why protons, which are stable in empty space, may transform into neutrons when bound inside of a nucleus.
But I don't think I get what that really means. What happens inside the nucleus that makes neutrons stable?
Is it the same thing that happens inside a neutron star's core? Because, neutrons seem to be stable in there too. | <urn:uuid:36cc5814-fbf1-41fb-b7b1-3209c5aa748c> | 3.625 | 202 | Q&A Forum | Science & Tech. | 49.053362 |
This recent map of sea-surface temperature anomalies shows that weak El Nino conditions have developed in the tropical Pacific.
Forecasts by the International Research Institute for Climate and Society and other institutions show that a weak El Niño has developed in the equatorial Pacific, and is likely to continue evolving with warmer-than-normal conditions persisting there until early 2010. What exactly is this important climate phenomenon and why should society care about it? Who will be most affected? We address these questions as well as clear up some common misconceptions about El Niño, La Niña, and everything in between!
First, the basics.
El Niño refers to the occasional warming of the eastern and central Pacific Ocean around the equator (the yellow and orange areas in the image). The warmer water tends to get only 1 to 3 degrees Celsius above average sea-surface temperatures for that area, although in the very strong El Niño of 1997-98, it reached 5 degrees or more above average in some locations. La Niña is the climatological counterpart to El Niño-- a yin to its yang, so to speak. A La Niña is defined by cooler-than-normal sea-surface temperatures across much of the equatorial eastern and central Pacific. El Niño and La Niña episodes each tend to last roughly a year, although occasionally they may last 18 months or longer.
The Pacific is the largest ocean on the planet, so a significant change from its average conditions can have consequences for temperature, rainfall and vegetation in faraway places. In normal years, trade winds push warm water, and its associated heavier rainfall, westward toward Indonesia. But during an El Niño, which occurs on average once every three-to-five years, the winds peter out and can even reverse direction, pushing the rains toward South America instead. This is why we typically associate El Niño with drought in Indonesia and Australia and flooding in Peru. These changing climate conditions, combined with other factors, can have serious impacts on society, such as reduced crop harvests, wildfires, or loss of life and property in floods. There is also evidence that El Niño conditions increase the risk of certain vector-borne diseases, such as malaria, in places where they don't occur every year and where disease control is limited.
During either an El Niño or a La Niña, we also observe changes in atmospheric pressure, wind and rainfall patterns in different parts of the Pacific, and beyond. An El Niño is associated with high pressure in the western Pacific, whereas a La Niña is associated with high pressure in the eastern Pacific. The 'seesawing' of high pressure that occurs as conditions move from El Niño to La Niña is known as the Southern Oscillation. The oft-used term El Niño-Southern Oscillation, or ENSO, reminds us that El Niño and La Niña episodes reflect changes not just to the ocean, but to the atmosphere as well.
ENSO is one of the main sources of year-to-year variability in weather and climate on Earth and has significant socioeconomic implications for many regions around the world. The development of a new El Niño episode in recent months offers an opportunity to clear up some common misconceptions about the climate phenomenon:
Misconception: El Niño periods cause more disasters than normal periods. On a worldwide basis, this isn't necessarily the case. But ENSO conditions do allow climate scientists to produce more accurate seasonal forecasts and help them better predict extreme drought or rainfall in several regions around the globe. (Read a 2005 paper on the topic here.)
On a regional level, however, we've seen that El Niño and La Niña exert fairly consistent influences on the climate of some regions. For example, El Niño conditions typically cause more rain to fall in Peru, and less rain to fall in Indonesia and Southern Africa. These conditions, combined with socioeconomic factors, can make a country or region more vulnerable to impacts.
"On the other hand, because El Niño enhances our ability to predict the climate conditions expected in these same regions, one can take advantage of that improved predictability to help societies improve preparedness, issue early warnings and reduce possible negative impacts," says Walter Baethgen who runs IRI's Latin America and the Carribbean regional program.
Misconception: El Niño and La Niña significantly affect the climate in most regions of the globe. Actually, they significantly affect only about 25% of the world's land surface during any particular season, and less than 50% of land surface during the entire time that ENSO conditions persist.
Misconception: Regions that are affected by El Niño and La Niña see impacts during the entire 8 to 12 months that the climate conditions last. No. Most regions will only see impacts during one specific season, which may start months after the ENSO event first develops. For example, the current El Niño may cause the southern U.S. to get wetter-than-normal conditions in the December to March season, but Kenyans may see wetter-than-normal conditions between October and December.
Misconception: El Niño episodes lead to adverse impacts only. Fires in southeast Asia, droughts in eastern Australia, and flooding in Peru often accompany El Niño events. Much of the media coverage on El Niño has focused on the more extreme and negative consequences typically associated with the phenomenon. To be sure, the impacts can wreak havoc in developing and developed countries alike, but El Niño events are also associated with reduced frequency of Atlantic hurricanes, warmer winter temperatures in the northern half of the U.S., which reduce heating costs, and plentiful spring/summer rainfall in southeastern Brazil, central Argentina and Uruguay, which leads to above-average summer crop yields.
Misconception: We should worry more during El Niño episodes than La Niña episodes. Not necessarily. They each come with their own set of features and risks. In general, El Niño is associated with increased likelihood of drought throughout much of the tropical land areas, whereas La Niña is associated with increased risk of drought throughout much of the mid-latitudes (see maps here and here). El Niño may have gained more attention in the scientific community, and thus the public, because it substantially alters the temperature and circulation patterns in the tropical Pacific. La Niña, on the other hand, tends to amplify normal conditions in that part of the world: the relatively cold temperatures in the eastern equatorial Pacific become colder, the relatively warm temperatures become even warmer, and the low-level winds blowing from east to west along the equatorial Pacific strengthen.
Misconception: The stronger the El Niño/La Niña, the stronger the impacts, and vice versa. Current forecasts show a weak-to-moderate El Niño has formed and will remain through the rest of the year. Does this mean we should expect weak-to-moderate impacts? Not necessarily. The important point to remember is that ENSO shifts the odds of some regions receiving less or more rainfall than they usually do, but it doesn't guarantee this will happen. For example, scientists expected the very strong El Niño of 1997/98 (which triggered wildfires in Indonesia and flooding and crop loss in Kenya) to also increase the chances of below-normal summer rainfall in India and South Africa, but this didn't happen. On the other hand, India did experience strong rainfall deficiencies in 2002, during a much weaker El Niño.
Misconception: El Niño and La Niña events are directly responsible for specific storms or other weather events. We usually can't pin a single event on an El Niño or La Niña, just like we can't blame global climate changes for any single hurricane. ENSO events typically affect the frequency or strength of weather events. When looked at over the course of a season, regions experience increased or decreased rainfall, for example.
Misconception: El Niño and La Niña are closely related to global warming. El Niño and La Niña are a normal part of the earth's climate and have likely been occurring for millions of years. Global climate change may affect ENSO cycles, but the research is still ongoing.
About the IRI: The IRI works on the development and implementation of strategies to manage climate related risks and opportunities. Building on a multidisciplinary core of expertise, IRI partners with research institutions and local stakeholders to best understand needs, risks and possibilities. The IRI supports sustainable development by bringing the best science to bear on managing climate risks in sectors such as agriculture, food security, water resources, and health. By providing practical advancements that enable better management of climate related risks and opportunities in the present, we are creating solutions that will increase adaptability to long term climate change.
The IRI was established as a cooperative agreement between NOAA's Climate Program Office and Columbia University. It is part of The Earth Institute, and is located at the Lamont Campus.
Follow IRI on Twitter: @climatesociety Media contact:
Telephone: 845.680.4476 or 845.680.4468 | <urn:uuid:356de354-8503-46db-b329-f812cddce324> | 4.28125 | 1,826 | Knowledge Article | Science & Tech. | 32.271615 |
Date: 1993 - 1999
Where do asteroids come from?
I have looked in several astronomy texts, and they pretty much agree that the asteroids are material that never quite got together to form a planet. Many astronomers believe that the early solar system was a swirling cloud of dust and gas. Sometimes particles of dust would collide and stick together; eventually, through many such chance collisions, some of these clumps would be big enough to gravitationally attract other nearby bits of dust, getting bigger yet. These clumps are called planetesimals, and the process I've described is called accretion.

You can imagine that eventually some of the planetesimals would get big enough to absorb most of the other stuff in their orbit, finally leaving perhaps only one (maybe some smaller ones survive as moons). But in the case of the asteroids, Jupiter's gravity prevented this accumulation from going to completion, leaving several small bodies instead of one big one. That's the theory, anyway.
Click here to return to the Astronomy Archives
Update: June 2012 | <urn:uuid:7f9b2606-565b-4bf1-9048-1be7b3f891d7> | 4.125 | 233 | Knowledge Article | Science & Tech. | 37.864733 |
There are examples all around us of using Perl for code generation:
- CGI: Perl is probably the number one tool used in generating HTML code for CGI.
- SQL: Perl is often used to generate SQL code for database interaction.
- The B::.... modules (need I say more?).
- eval (its mere existence....)
Those are just some very common and familiar examples. They're so familiar and seem in many cases so effortless that we forget Perl is generating code. But honestly, if Perl is "The Glue of the Internet", it is that because it excels at gluing things together, and often includes outputting code that some other entity can utilize.
"If I had my life to do over again, I'd be a plumber." -- Albert Einstein
| ] || ] || | <urn:uuid:8cc261d6-1c5b-43d9-865b-8595563bf795> | 2.90625 | 438 | Content Listing | Software Dev. | 73.372328 |
- Purpose: Show that an object's acceleration, under the influence of gravity, is unaffected by the object's velocity.
- Cock mechanism; place pin in hole to keep cocked; place balls on ends; release pin to eject balls.
- One ball drops vertically, with zero initial velocity; the other ball is launched horizontally. Balls hit floor simultaneously.
- Located in L02, section B3 | <urn:uuid:a10333f1-efb0-4764-8f03-ed3a5682eb8f> | 3.140625 | 86 | Content Listing | Science & Tech. | 38.962328 |
Wave on Circular Loop
- This demonstration has been used to explain how sound can be used to break a wine glass, or to help illustrate principles behind the Bohr model of the hydrogen atom.
- There are two configurations of loops from which to choose: parallel to the floor or perpendicular to the floor.
- When changing accessories to the oscillator be careful to lock the piston (as the oscillator is quite delicate).
- Mechanical oscillator, wire loop, and
function generator located in L02, section C2, in "Chladni plates" | <urn:uuid:2a36e45e-25e6-4d31-b3e9-eb5ef08d8281> | 3.5 | 122 | Tutorial | Science & Tech. | 30.180194 |
by Staff Writers
Manoa HI (SPX) Aug 31, 2011
Since the beginning of the Industrial Revolution, the concentration of carbon dioxide in the atmosphere has been rising due to the burning of fossil fuels. Increased absorption of this carbon by the oceans is lowering the seawater pH (the scale which measures how acidic or basic a substance is) and aragonite saturation state in a process known as ocean acidification.
Aragonite is the mineral form of calcium carbonate that is laid down by corals to build their hard skeleton. Researchers wanted to know how the declining saturation state of this important mineral would impact living coral populations.
Much of the previous research has been centered on the relationship between coral growth and aragonite levels in the surface waters of the sea. Numerous studies have shown a direct correlation between increased acidification, aragonite saturation, and declining coral growth, but the process is not well understood.
Various experiments designed to evaluate the relative importance of this process have led to opposing conclusions. A recent reanalysis conducted by Dr. Paul Jokiel from the Hawai'i Institute of Marine Biology (HIMB), suggests that the primary effect of ocean acidification on coral growth is to interfere with the transfer of hydrogen ions between the water column and the coral tissue.
Jokiel re-evaluated the relevant data in order to synthesize some of the conflicting results from previous ocean acidification studies. As a result, Jokiel came up with the "proton flux hypothesis" which offers an explanation for the reduction in calcification of corals caused by ocean acidification.
In the past, scientists have focused on processes at the coral tissues. The alternative provided by Jokiel's "proton flux hypothesis" is that calcification of coral skeletons is dependent on the passage of hydrogen ions between the water column and the coral tissue.
This process ultimately disrupts corals' ability to create an aragonite skeleton. Lowered calcification rates are problematic for our coral reefs because they produce weakened coral skeletons, leaving corals susceptible to breakage and reducing the protection reefs provide.
Dr. Jokiel is excited about this work; he states that "this hypothesis provides new insights into the importance of ocean acidification and temperature on coral reefs. The model is a radical departure from previous thought, but is consistent with existing observations and warrants testing in future studies".
This hypothesis does not change the general conclusions that increased ocean acidification is lowering coral growth throughout the world, but rather describes the mechanism involved.
|The content herein, unless otherwise known to be public domain, are Copyright 1995-2011 - Space Media Network. AFP and UPI Wire Stories are copyright Agence France-Presse and United Press International. ESA Portal Reports are copyright European Space Agency. All NASA sourced material is public domain. Additional copyrights may apply in whole or part to other bona fide parties. Advertising does not imply endorsement,agreement or approval of any opinions, statements or information provided by Space Media Network on any Web page published or hosted by Space Media Network. Privacy Statement| | <urn:uuid:393d6671-3dd4-4def-9bb3-c4af63315966> | 3.09375 | 758 | Truncated | Science & Tech. | 26.82656 |
Similar organisms generally have similar genome sizes. Given this, would two species of yeast have the same number of genes and chromosomes?
Edit: Fixed with thanks to @daniel-standage
This question appears to start from the premise that different species of yeast are closely related, but they aren't. Saccharomyces cerevisiae and Schizosaccharomyces pombe, both Ascomycetes, are thought to have diverged at least 300 million years ago (cf. the mammalian divergence from other vertebrates, about 200 million years ago).
S. cerevisiae has a genome size of 12.1 Mb, with 5821 protein coding genes (plus another 786 dubious ORFs) spread over 16 chromosomes.
S. pombe has a genome size of 12.6 Mb with 5124 protein coding genes on just 3 chromosomes.
Filamentous fungi have larger genomes and gene numbers. From memory, Aspergillus nidulans has 8 chromosomes and over 9,000 protein coding genes. So, the similarity in gene numbers in the two yeasts probably reflects their broadly similar lifestyles. | <urn:uuid:a29d6672-a66f-471e-911f-050b7367ba55> | 3.15625 | 235 | Q&A Forum | Science & Tech. | 48.606333 |
Terabyte online databases, consisting of billions of records, are becoming common as the price of online storage decreases. These databases are often represented and manipulated using the SQL relational model. A relational database consists of relations (files in COBOL terminology) that in turn contain tuples (records in COBOL terminology). All the tuples in a relation have the same set of attributes (fields in COBOL terminology).
Relations are created, updated, and queried by writing SQL statements. These statements are syntactic sugar for a simple set of operators chosen from the relational algebra. Select-project, here called scan, is the simplest and most common operator – it produces a row-and-column subset of a relational table. A scan of relation R using predicate P and attribute list L produces a relational data stream as output. The scan reads each tuple, t, of R and applies the predicate P to it. If P(t) is true, the scan discards any attributes of t not in L and inserts the resulting tuple in the scan output stream. Expressed in SQL, a scan of a telephone book relation to find the phone numbers of all people named Smith would be written:
SELECT telephone_number /* the output attribute(s) */
FROM telephone_book /* the input relation */
WHERE last_name = ‘Smith’; /* the predicate */
A scan’s output stream can be sent to another relational operator, returned to an application, displayed on a terminal, or printed in a report. Therein lies the beauty and utility of the relational model. The uniformity of the data and operators allows them to be arbitrarily composed into dataflow graphs. The output of a scan may be sent to a sort operator that will reorder the tuples based on an attribute sort criterion, optionally eliminating duplicates. SQL defines several aggregate operators to summarize attributes into a single value, for example, taking the sum, min, or max of an attribute, or counting the number of distinct values of the attribute. The insert operator adds tuples from a stream to an existing relation. The update and delete operators alter and delete tuples in a relation matching a scan stream.
The relational model defines several operators to combine and compare two or more relations. It provides the usual set operators union, intersection, difference, and some more exotic ones like join and division. Discussion here will focus on the equi-join operator (here called join). The join operator composes two relations, A and B, on some attribute to produce a third relation. For each tuple, ta, in A, the join finds all tuples, tb, in B with attribute value equal to that of ta. For each matching pair of tuples, the join operator inserts into the output stream a tuple built by concatenating the pair. Codd, in a classic paper, showed that the relational data model can represent any form of data, and that these operators are complete. Today, SQL applications are typically a combination of conventional programs and SQL statements. The programs interact with clients, perform data display, and provide high-level direction of the SQL dataflow. The SQL data model was originally proposed to improve programmer productivity by offering a non-procedural database language. Data independence was an additional benefit; since the programs do not specify how the query is to be executed, SQL programs continue to operate as the logical and physical database schema evolves.
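The equi-join just described can be illustrated with a small hash-join sketch in Python. This is a generic illustration, not the algorithm of any particular system, and the relations and attribute names are made up:

```python
def hash_equi_join(a, b, key_a, key_b):
    """Equi-join relations a and b (lists of dict tuples) on the given attributes.

    Build a hash table on one input (b), then probe it with each tuple of a,
    emitting the concatenation of every matching pair.
    """
    table = {}
    for tb in b:
        table.setdefault(tb[key_b], []).append(tb)
    out = []
    for ta in a:
        for tb in table.get(ta[key_a], []):
            out.append({**ta, **tb})   # concatenate the matching pair
    return out

employees = [{"name": "Ann", "dept_id": 1}, {"name": "Bob", "dept_id": 2}]
depts = [{"dept_id": 1, "dept": "Sales"}, {"dept_id": 2, "dept": "R&D"}]
joined = hash_equi_join(employees, depts, "dept_id", "dept_id")
```

Each output tuple carries the attributes of both inputs, exactly as the operator definition above requires.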
Parallelism is an unanticipated benefit of the relational model. Since relational queries are really just relational operators applied to very large collections of data, they offer many opportunities for parallelism. Since the queries are presented in a non-procedural language, they offer considerable latitude in executing the queries. Relational queries can be executed as a dataflow graph. As mentioned in the introduction, these graphs can use both pipelined parallelism and partitioned parallelism. If one operator sends its output to another, the two operators can execute in parallel, giving a potential speedup of two.
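The dataflow composition of operators described above can be mimicked with Python generators, where each operator lazily consumes the tuple stream of its producer. A toy sketch (the operator and relation names are illustrative, not from any real engine):

```python
def scan(relation, predicate, attrs):
    """Select-project: emit the chosen attributes of each tuple matching predicate."""
    for t in relation:
        if predicate(t):
            yield {a: t[a] for a in attrs}

def distinct(stream):
    """Eliminate duplicate tuples from a stream."""
    seen = set()
    for t in stream:
        key = tuple(sorted(t.items()))
        if key not in seen:
            seen.add(key)
            yield t

telephone_book = [
    {"last_name": "Smith", "telephone_number": "555-0100"},
    {"last_name": "Jones", "telephone_number": "555-0101"},
    {"last_name": "Smith", "telephone_number": "555-0100"},
]
# SELECT DISTINCT telephone_number FROM telephone_book WHERE last_name = 'Smith'
result = list(distinct(scan(telephone_book,
                            lambda t: t["last_name"] == "Smith",
                            ["telephone_number"])))
```

The generators form a two-stage pipeline: `distinct` pulls tuples from `scan` one at a time, so the downstream operator starts work before the upstream one has finished.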
The benefits of pipeline parallelism are limited because of three factors: (1) Relational pipelines are rarely very long – a chain of length ten is unusual. (2) Some relational operators donot emit their first output until they have consumed all their inputs. Aggregate and sort operators have this property. One cannot pipeline these operators. (3) Often, the execution cost of one operator is much greater than the others (this is an example of skew). In such cases, the speedup obtained by pipelining will be very limited. Partitioned execution offers much better opportunities for speedup and scaleup. By taking the large relational operators and partitioning their inputs and outputs, it is possible to use divide-and-conquer to turn one big job into many independent little ones. This is an ideal situation for speedup and scaleup. Partitioned data is the key to partitioned execution. | <urn:uuid:a1d6e4be-8b7f-4d6e-81f9-c8c5a07dc2c0> | 3.34375 | 1,001 | Knowledge Article | Software Dev. | 26.189338 |
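The divide-and-conquer idea behind partitioned execution, described in the passage above, can be sketched by hash-partitioning a relation on an attribute and running the same scan on each partition independently. In a real system each partition would go to its own processor and disk; the sequential sketch below (with made-up data) only shows the decomposition:

```python
def hash_partition(relation, key, n):
    """Split a relation into n partitions by hashing the partitioning attribute."""
    parts = [[] for _ in range(n)]
    for t in relation:
        parts[hash(t[key]) % n].append(t)
    return parts

def count_matching(parts, predicate):
    """Run the same scan-and-count on each partition; combine by summing.

    Each inner sum is independent, so the partitions could be processed
    on separate processors and the partial counts merged at the end.
    """
    return sum(sum(1 for t in p if predicate(t)) for p in parts)

rows = [{"last_name": "Smith"} if i % 3 == 0 else {"last_name": "Jones"}
        for i in range(30)]
parts = hash_partition(rows, "last_name", 4)
total = count_matching(parts, lambda t: t["last_name"] == "Smith")
```

Because hashing sends every tuple with the same attribute value to the same partition, operators like join and duplicate elimination can also run per-partition without cross-talk.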
We know that the sun's output can vary by about 1 Watt per square meter during a solar cycle. So try computing the temperature with 1368 instead of 1367. Try again with 1366. How much difference does that make? I get about 0.05 K (0.09 F). That's a pretty small number. We'd be hard-pressed to get a thermometer to record it.
How about the albedo? I find about a 1 K change (0.9 K, more precisely) for a 0.01 change in albedo. Such a change in albedo, a decrease or an increase of 0.01, is plausible for the earth, though 0.10 would be a wild value, not expected by anybody I know of. Quick sanity check ... why is the albedo so much more important than the solar term? Albedo multiplies the solar constant. The natural variation in solar output is 1 Watt per square meter, while the 0.01 change in albedo means a 13.67 Watt per square meter change in energy entering the climate system, so we expect the albedo effect to be much larger. (project: why is it 19 times more important rather than just 13.67?)
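These sensitivities can be reproduced from the zero-dimensional energy balance the passage is using, absorbed S(1-a)/4 equal to emitted sigma*T^4. A quick sketch with the standard values (S = 1367 W/m^2, a = 0.30, sigma = 5.67e-8):

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def blackbody_temp(solar_constant, albedo):
    """Effective radiating temperature of a zero-dimensional planet:
    absorbed S(1 - a)/4 balances emitted sigma * T**4."""
    return (solar_constant * (1 - albedo) / (4 * SIGMA)) ** 0.25

T0 = blackbody_temp(1367.0, 0.30)   # about 255 K, the familiar result
dT_solar = blackbody_temp(1368.0, 0.30) - blackbody_temp(1366.0, 0.30)
dT_albedo = blackbody_temp(1367.0, 0.29) - blackbody_temp(1367.0, 0.31)
# dT_solar is ~0.09 K for the 2 W/m^2 swing (about 0.05 K per W/m^2);
# dT_albedo is ~1.8 K for the 0.02 swing (about 0.9 K per 0.01 of albedo).
```

The ratio dT_albedo per 0.01 to dT_solar per W/m^2 comes out near 19, which is the "project" question: the extra factor beyond 13.67 comes from the (1 - a) term in the balance.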
The mean distance from the earth to the sun does change -- on a long enough time scale. Changes in the earth's orbital eccentricity (how circular vs. how oval-shaped) can give changes in average distance of something like 0.001 AU. This translates to about 0.18 K (0.32 F) variations in earth's temperature to space. (These are Milankovitch variations in eccentricity, with the fastest time scale of change being 100,000 years; some are over 2 million years).
If we take those sizes of temperature change and divide each by the time scale on which it occurs, we get a sense of which is most important for thinking about climate change on our time scale of interest. The sun's output changes along a solar cycle of about 11 years. The earth's albedo changes with the seasonal cycle, so 1 year. And the eccentricity is 100,000 years. Looking at degrees per year, we then have:
- Albedo 1 degree per year
- Solar cycle 0.005 degrees per year
- Eccentricity 0.000002 degrees per year
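A quick check of the degrees-per-year list, dividing each temperature change quoted earlier by its time scale (values copied from the text; nothing here is new data):

```python
# (temperature change in K, time scale in years), as given in the text
forcings = {
    "albedo (seasonal cycle)": (1.0, 1.0),
    "solar cycle": (0.05, 11.0),
    "orbital eccentricity": (0.18, 100_000.0),
}

for name, (delta_t, years) in forcings.items():
    print(f"{name}: {delta_t / years:.6f} K per year")
```

The solar-cycle figure comes out near 0.005 and the eccentricity figure near 0.000002, matching the list.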
Variations in solar output and the earth's orbit are not part of the climate system itself, and the orbit can be predicted to very high accuracy for a very long time into the future. In terms of understanding the climate system, both the solar and orbit factors are good -- given these non-climate terms, we can compute a climate. If we know the albedo.
But what is albedo? It's the bouncing of energy from the sun back to space. Now what does the bouncing? Well, everything in the climate system -- clouds, gas molecules, ocean surface, trees, grass, desert, dirt, glaciers, snow cover, ice packs, ... The gas molecule term depends little on the climate, and, if I remember correctly, would bounce about 15% of the solar input even if the earth's surface were a perfect solar absorber (perfectly black). But it isn't; even the ocean, which is about the darkest part of the earth's surface, reflects at least 6% of the sun's energy (that reaches it), and that figure increases as the sun gets low in the sky.
If none of these things changed their albedo as the climate got colder or warmer, we'd only have an annoyance -- can't give a climate figure without looking at the system. But the extent of deserts, ice sheets, sea ice, ... does change depending on climate temperatures. That 'about 0.30' albedo is correct for recent times. It may not be correct for an ice-age earth or an even-warmer-than-present earth. Consequently, while we can use this model to understand some points about the climate system, we can't use it to predict full climate responses. We're not surprised, since we're much warmer at the surface than the blackbody temperature. But this is an additional reason we're going to want a more complex model down the road.
On the other hand, it does help us understand the system -- we know now that albedo is very important, and how much it and solar input could be expected, on a simple basis, to affect climate.
We also, it turns out, have a tool for identifying unreliable sources. I was surprised to see it, but Steve Milloy at junkscience(dot-com)/Greenhouse assumes that if there were no clouds, the earth's albedo would be zero. Even knowing nothing about the details of albedo, you know this has to be wrong: if the earth's albedo, aside from clouds, were zero, you couldn't see the earth except for the clouds. As any photograph of the earth from space reminds us, you can indeed see the earth. The passage where he makes the error says:
We should note that devoid of atmosphere Earth would actually be a less-cold -1 °C (272 K) because the first calculation strangely includes 31% reflection of solar radiation by clouds
He also seems to be calculating the solar constant rather than taking an observed value, and using odd values in his calculation.
Project: what would be better values for each figure, and how would using them affect his results? | <urn:uuid:2bd84287-7cb9-4f2e-90e6-a711b6f59166> | 3.875 | 1,112 | Personal Blog | Science & Tech. | 68.997707 |
The Perciformes is the largest order of vertebrates and includes around 40% of all bony fish. Their name means "perch-like." They are ray-finned fish, and the order contains over 7,000 species in many different sizes and colors, found in both freshwater and marine environments. The first documented species appeared in the Late Cretaceous. Perciform fish typically have dorsal and anal fins divided into anterior spiny and posterior soft-rayed portions, which may be partially or completely separated.
Author: Rigo N
Photo Credit: http://en.wikipedia.org/wiki/Image:Malabar_grouper_melb_aquarium.jpg http://en.wikipedia.org/wiki/Perch http://en.wikipedia.org/wiki/Parrotfish http://en.wikipedia.org/wiki/Image:Astronotus_ocellatus.jpg | <urn:uuid:517822ef-9ee6-4e21-9010-b8462e6b7565> | 3.546875 | 193 | Knowledge Article | Science & Tech. | 40.792599 |
Major Section: PROGRAMMING
(= x y) is logically equivalent to (equal x y).

= has a guard requiring both of its arguments to be numbers. Generally, = is executed more efficiently than equal.

For a discussion of the various ways to test against 0, See zero-test-idioms.

= is a Common Lisp function. See any Common Lisp documentation for more information.
Scientific Investigations Report 2012–5187
Groundwater is essential for water supply and plays a critical role in maintaining the environmental health of freshwater and estuarine ecosystems in the Atlantic Coastal basins of New Jersey. The unconfined Kirkwood-Cohansey aquifer system and the confined Atlantic City 800-foot sand are major sources of groundwater in the area, and each faces different water-supply concerns. The U.S. Geological Survey (USGS), in cooperation with the New Jersey Department of Environmental Protection (NJDEP), conducted a study to simulate the effects of withdrawals in the Kirkwood-Cohansey aquifer system, the Atlantic City 800-foot sand, and the Rio Grande water-bearing zone and to evaluate potential scenarios. The study area encompasses Atlantic County and parts of Burlington, Camden, Gloucester, Ocean, Cape May, and Cumberland Counties. The major hydrogeologic units affecting water supply in the study area are the surficial Kirkwood-Cohansey aquifer system; a thick diatomaceous clay confining unit in the upper part of the Kirkwood Formation; the Rio Grande water-bearing zone; and the Atlantic City 800-foot sand of the Kirkwood Formation.
Hydrogeologic data from 18 aquifer tests and specific-capacity data from 230 wells were analyzed to provide estimates of the horizontal hydraulic conductivity of the aquifers. Groundwater withdrawals are greatest from the Kirkwood-Cohansey aquifer system, and 65 percent of that water is used for public supply. Groundwater withdrawals from the Atlantic City 800-foot sand are about half those from the Kirkwood-Cohansey aquifer system; 95 percent of the withdrawals from the Atlantic City 800-foot sand are used for public supply. Data from six streamgaging stations and 51 low-flow partial-record sites were used to estimate base flow in the area. Base flow ranges from 60 to 92 percent of streamflow.
A groundwater flow model of the Kirkwood-Cohansey aquifer system, the Rio Grande water-bearing zone, and the Atlantic City 800-foot sand was developed and calibrated using water-level data from 148 wells and base-flow data from 22 gaging or low-flow partial record stations. The Kirkwood-Cohansey aquifer system within the Great Egg Harbor River and the Mullica River Basins was simulated on a monthly basis from 1998 through 2006. An existing regional model of the New Jersey Coastal Plain was revised to provide boundary conditions for the Great Egg Harbor and Mullica River Basin model (referred to as the Great Egg-Mullica model). In the Great Egg-Mullica model, monthly groundwater recharge rates used in the model ranged from 10–15 inches per year in 2001 to 20–25 inches per year in 2005. The mean-absolute error for 10 of the 14 long-term hydrographs used in model calibration was less than 5 ft. Groundwater flow budgets for the Great Egg-Mullica model calibration periods, May 2005 and September 2006, and for the entire model calibration period 1998 to 2006, showed that nearly 70 percent of the water entering the Atlantic City 800-foot sand came from the horizontal connection with the Kirkwood-Cohansey aquifer system in updip areas.
The groundwater flow model was used to simulate scenarios under three possible conditions: average 1998 to 2006 withdrawals (Average scenario), full-allocation withdrawals (Full Allocation scenario), and projected 2050-demand withdrawals (2050 Demand scenario). Withdrawals in the Full Allocation scenario are nearly twice those in the Average scenario, primarily because of the potential for large agricultural withdrawals if all allocations are used. Withdrawals in the 2050 Demand scenario are about 50 percent greater than those in the Average scenario, primarily because of expected increases in withdrawals for public supply. Monthly base-flow depletion criteria were determined using the Low-Flow Margin method, currently under consideration by NJDEP, to estimate available water on an annual basis at the Hydrologic Unit Code 11 (HUC11) level and to determine whether a water-supply deficit exists. Simulations of various groundwater-withdrawal scenarios were made using the calibrated model, and results were compared with baseline conditions (no withdrawals) to determine where and when base-flow deficits may be occurring and may be expected to occur in the future. Scenarios were simulated to assess base-flow depletion that could occur from different groundwater-withdrawal situations. In the Average scenario, deficits occurred in 7 of the 14 subbasins. In the Full Allocation scenario, deficits occurred in 11 of the 14 subbasins. In the 2050 Demand scenario, deficits occurred in 9 of the 14 subbasins. The largest deficits occurred in the Absecon Creek subbasin because the base-flow depletion criterion for this subbasin is small, owing to the surface-water diversions already occurring there, and because existing groundwater withdrawals in the subbasin have resulted in base-flow depletion under current (1998–2006) conditions.
Three adjusted scenarios, variations of the Average, Full Allocation, and 2050 Demand scenarios, were simulated; for the adjusted scenarios, the withdrawals were modified in stages with the intent to successively eliminate or minimize the base-flow deficits. Modifications included shifting withdrawals to a deeper part of the Kirkwood-Cohansey aquifer system, implementing seasonal conjunctive use of shallow and deep aquifers, and specifying reductions in withdrawals within a HUC11 subbasin in deficit. The adjusted scenarios are intended to show the relative effectiveness of each of the three approaches in reducing the deficits. Most of the deficits under the Average, Full Allocation, and 2050 Demand scenarios were eliminated by reductions in withdrawals or allocations. Shifting withdrawals to a deeper part of the Kirkwood-Cohansey aquifer system or seasonal conjunctive use did not eliminate deficits for any subbasin. Reductions in withdrawals accounted for more than 95 percent of the total reduction of deficits in all but one subbasin.
First posted November 9, 2012
Pope, D.A., Carleton, G.B., Buxton, D.E., Walker, R.L., Shourds, J.L., and Reilly, P.A., 2012, Simulated effects of alternative withdrawal strategies on groundwater flow in the unconfined Kirkwood-Cohansey aquifer system, the Rio Grande water-bearing zone, and the Atlantic City 800-foot sand in the Great Egg Harbor and Mullica River Basins, New Jersey: U.S. Geological Survey Scientific Investigations Report 2012–5187, 139 p., available only at http://pubs.usgs.gov/sir/2012/5187.
Groundwater Flow Model
Simulation and Results of Withdrawal Scenarios
Summary and Conclusions
Appendix 1. Simulated base flow, base-flow depletion, available water, and deficits under Average, Full Allocation, and 2050 Demand scenarios by Hydrologic Unit Code 11 subbasins in southeastern New Jersey
Appendix 2. Results of simulations of Average, Full Allocation, and 2050 Demand scenarios–basic scenarios–from the Great Egg-Mullica Model
Appendix 3. Results of simulations of Average, Full Allocation, and 2050 Demand scenarios with Tier 1–3 adjustments from the Great Egg-Mullica Model | <urn:uuid:f5dcb04d-a6e2-438d-a73a-cac1b2c43a70> | 2.78125 | 1,547 | Academic Writing | Science & Tech. | 32.018995 |
Ever wondered what exotic life forms may be lurking in the dark, hidden corners of your home? Scientists wonder too. Studies have shown that our modern plumbing systems provide sanctuary to a menagerie of microbes. A new pilot project plans to enlist the help of homeowners to catalogue the life growing in their water heaters. The research may give clues to how microbes evolve in the wild.
Forty years ago, scientists discovered certain heat-loving microbes, or thermophiles, living in domestic water heaters. These same bugs are found in hot springs, like those in Yellowstone and Iceland.
"We are asking people to sample the hot spring in their own basement," says Christopher House from Penn State University.
House and his colleagues will be doing follow-up testing on these samples to get a sense of the biodiversity in the water heater environment. With financial support from the NASA Astrobiology Institute, they hope to collect a wide geographic sampling from across the country.
"Are the thermophiles in New York the same as in Alaska or Hawaii?" House wonders.
Last Updated: 22 July 2011 | <urn:uuid:597fd5c8-e3d1-4f6b-b277-62e96c88a5f0> | 3.171875 | 226 | Truncated | Science & Tech. | 45.099274 |
By Seth Borenstein – It’s one thing to make an object invisible, like Harry Potter’s mythical cloak. But scientists have made an entire event impossible to see. They have invented a time masker.
The scientists created a lens not just of light, but of time. Their method splits light, speeding up one part and slowing down another. This creates a gap, and that gap is where an event is masked.
“You kind of create a hole in time where an event takes place,” said study co-author Alexander Gaeta, director of Cornell‘s School of Applied and Engineering Physics. “You just don’t know that anything ever happened.” more> http://is.gd/ILe9RK
The Solar Maximum Mission (SMM) spacecraft was launched on February 14, 1980, near the height of the solar cycle, to enable the solar physics community to examine, in more physically meaningful detail than ever before, the most violent aspect of solar activity: flares. SMM recorded its final data in November, 1989.
(From NASA's Solar Maximum Mission: A Look at a New Sun)
Energy Range: 25 - 500 keV.
Energy Range: 10 - 140 MeV for Gamma Rays and neutrons above 20 MeV, and 10 - 140 keV for hard X-rays.
Wavelength Range: 1170 - 1800 Ångstroms in second order, and up to 3600 Ångstroms in first order. | <urn:uuid:6fd6aee6-ba74-4423-ae2a-5a82eaf48246> | 3.15625 | 156 | Knowledge Article | Science & Tech. | 63.051429 |
Unsolved problems in mathematics
From Uncyclopedia, the content-free encyclopedia
Unsolved mathematical problems are those which have either exceeded the intellect of every living mathematician so far in history, or are just plain impossible, or which no one has really cared to bother with much, as of yet.
The problem of the unsolved problems
Finding solutions to unsolved problems is becoming an increasing problem, due to the decreasing numeracy of an increasingly calculator-dependent population, who have lost any number sense. Kids these days can't even recognise that the equation below is actually a limerick from 1980 by Leigh Mercer:

(12 + 144 + 20 + 3 × √4) / 7 + (5 × 11) = 9² + 0
The Millennium Prize Problems are too much for most of the population, who wouldn't even recognize imaginary numbers such as twiddly-two or eleventy-eight. The unsolved problems in math which should challenge the general population ought to be of a humbler nature, and less intellectually dense. What follows are some examples.
Unsolved Math Problems for the Common Man
Examples of unsolvable mathematical problems include, but are by no means limited to, the following:
The unequal equality problem
The solution to 1 = 2 has driven many a mathematician to drink, especially when he or she is asked to solve for the unknown. Most people take the easy way out and say that the statement 1 = 2 is false. But that's just lazy.
The two or more unknowns problem
Solving a single equation with more than one unknown has also caused many a mathematician to weep openly.
The problem of the square root of bugger all times six
There needs to be a more elegant solution to the problem than just "bugger all". An alternate solution is needed, because that would imply that "bugger all" is its own square root.
The redefinition of the numerical properties of the number zero
While we are on the topic, the properties of zero need to be cleaned up. According to the proceedings of the 2008 AMS Conference, zero is at the heart of computer failure, affecting everything from recording instruments to guided missile systems. This is due to the unruly nature ascribed to zero by humans. When we say that division by zero lacks definition, it is because we have not bothered. Mankind invented the number system; it is a construct of our own perceptions and ideas. Surely we can invent a zero that does not cause so much mayhem when you divide by it accidentally in a computer program. But for now mankind is the zero's bitch. Zero pwns mankind and the world, and the madness has got to stop.
The Inconvenience of Indeterminate Forms
Indeterminate forms are problems facing us that have no actual defined value. Division by zero, as shown above, leads to undefined behaviour. But there are other problems which the common man can understand, and hopefully offer a solution to. L'Hôpital's Rule was made to take care of many cases of undefined behaviour in algebra by cancelling like terms in a rational function, and so on. But there are many other cases where indeterminate forms can't be avoided. The rewards for solving most of these problems are immense, and can result in fame, fortune, and a place in history.
The problem of 0/0
What about the problem of evaluating 0/0? If you cancel them out, do you really get 1? If you could, we could definitely get something for nothing and our economic problems would vanish the world over.
The problem of ∞/∞
What about ∞/∞? Since the properties of infinity are such that ∞ + ∞ = ∞, we are given to believe that ∞/∞ is indeterminate. What utter hogwash! If that's true, that means we can't cancel the infinities to make 1! What number behaves like that? It looks like we need to tame infinity also.
The problem of 0^0
Your math teacher told you that anything to the power zero equals 1, but what about 0^0? If it can equal 1, then you can get something for nothing again. So this remains an expression which is without definition. This is also up for grabs. If you can define it so that my calculator doesn't give me an error, there is either a Fields Medal or a Nobel Prize in it for you, and a place in history. Get to work!
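For what it's worth, here is how one programming language (Python, chosen purely for illustration) currently arbitrates these forms: integer 0**0 is defined as 1, 0/0 is an outright error, and ∞/∞ yields nan ("not a number").

```python
import math

inf = float("inf")

print(0 ** 0)          # Python defines integer 0**0 to be 1
print(math.pow(0, 0))  # 1.0: the C-library pow follows the same convention
print(inf / inf)       # nan: the indeterminate form has no value

try:
    0 / 0              # 0/0 is not even nan here; it is an outright error
except ZeroDivisionError as err:
    print("0/0 raises:", err)
```

So calculators and languages mostly dodge these questions rather than answer them; the Fields Medal remains unclaimed.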
Opaque story problems
A bus drives along its daily route. At the first stop, two guys get on. At the next stop, three guys get on and one guy gets off. At the third stop, three guys get off and four guys get on. At the fourth stop, eight people get on and six guys get off. What is the name of the driver?
Now, they say there is no solution to this kind of math problem. But that is only because no one has worked at it enough yet.
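While the driver's name resists analysis, the passenger arithmetic at least yields to brute force (a sketch, with the boarding and alighting numbers taken from the problem above):

```python
# (boarded, alighted) at each of the four stops, per the problem statement
stops = [(2, 0), (3, 1), (4, 3), (8, 6)]

passengers = 0
for boarded, alighted in stops:
    passengers += boarded - alighted

print(passengers)  # 7 people aboard after the fourth stop; the driver remains anonymous
```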
Billy's unfinished math problems
Billy is a kid in grade eight who had a homework assignment to hand in to his teacher today and he didn't finish it. There is a whole sheet of unsolved math problems that were left blank because he wanted to play FIFA on his PS/3 with his buddies and watch Avatar on Blu-Ray for the rest of the evening. Billy is a lazy bum who text messages his friends while in math class and frequently asks to "go to the bathroom" (he has no medical problems). He is mathematically illiterate due to a litany of his own avoidance behavior. Sucks to be Billy. | <urn:uuid:398d270b-1d3a-4b67-9fff-be2acbfb3276> | 2.90625 | 1,140 | Content Listing | Science & Tech. | 52.762776 |