An absolutely historic weather event took place on Tuesday, October 26, 2010, in the Midwest and especially in the state of Minnesota. An incredibly intense low pressure system tracked due north from southern Minnesota into northern Minnesota, intensifying as it moved, so that by Tuesday afternoon it reached a pressure of 28.20 inches of mercury (955 millibars). This was the lowest pressure ever observed for the interior U.S. The previous record was established over Cleveland, Ohio, back on January 16, 1978, when the pressure reached 28.28 inches of mercury (958 millibars).
To put Tuesday’s storm into context, many hurricanes never reach a pressure this low. There have been instances of slightly lower recorded pressures, but those occurred in Northeast U.S. coastal areas with the help of energy from the Atlantic Ocean. Again, for the interior U.S. we have never seen a pressure as low as that observed over northern Minnesota on Tuesday. There has been some talk the past couple of days that the storm which caused “The Wreck of the Edmund Fitzgerald” over Lake Superior back in November of 1975 was the most powerful storm ever. That could not be further from the truth: the lowest pressure observed with that storm was only 980 millibars.
Needless to say, the recent “bomb” of a storm (the word is often used to describe rapidly intensifying low pressure systems) produced incredibly strong winds across the Midwest on Tuesday and Wednesday. Westerly winds gusted to over 60 mph at times in the Corn Belt, resulting in some damage.
Up in North Dakota, not only did the winds gust to near hurricane force (70 mph) at times, but there was also heavy, wet snow that reduced visibility to near zero (blizzard conditions). By Thursday, October 28, the low pressure system, by then centered over Ontario, Canada, had weakened considerably and the winds had diminished to only about 15-20 mph across the nation’s midsection.
|Variegated Fritillary Butterfly - Euptoieta claudia|
Family: Nymphalidae (brushfoot butterflies) / Subfamily Heliconiinae
Live adult butterflies photographed at Alpharetta, GA and Corpus Christi, Texas USA.
The Variegated Fritillary is widespread and common in the southern U.S. It is found mainly in grasslands, farmland, roadsides, mountain meadows, and other open areas; everywhere but dense forests. It frequently strays northward in such numbers as to become common all the way to Canada. Its flight is low and direct, with very little swooping or diving, and it is an avid visitor of flowers and other liquid food sources.
Life cycle: Eggs are cream-colored and ribbed, laid on various host plants including violets and pansies (Viola), flax (Linum), passionflower, stonecrop, moonseed, and plantain. Caterpillar to 32 mm (1¼"), white with red banding and black spines; the red head bears two long black spines. Pupa (chrysalis) 19 mm (¾"), pale shiny blue with black, yellow and orange marks and gold bumps. Adult butterflies can overwinter only in the south. Flies spring to fall, with 2-3 broods. Range: resident from Arizona to Florida and the southern plains; emigrates to Southern California, southern British Columbia, the Northwest Territories and Quebec.
Subfamily Heliconiinae - Heliconians and Fritillaries can be divided into 45-50 genera and were sometimes treated as a separate family (Heliconiidae) within the Papilionoidea.
Larvae of the Variegated Fritillary eat a more varied diet of plants than almost any other butterfly. Both the caterpillar and the chrysalis are considered among the most beautiful of any North American butterfly. This species shares characteristics of the true fritillaries (Speyeria), which feed only on violets, and the longwing fritillaries (Agraulis), whose larvae eat only passion flowers; the Variegated Fritillary caterpillar thrives on both types of plants.
|Helpful: You can hear the pronunciation of many scientific and taxonomic terms at howjsay.com|
Learn to identify many of the American Midwest's common species through descriptions and large diagnostic photos of live, wild specimens.
log(y) = b* log(x)
b represents the percentage change in y that is associated with a 1% change in x. But this transformation is not always a good idea.
I frequently see papers that examine the effect of temperature (or control for it because they care about some other factor) and use log(temperature) as an independent variable. This is a bad idea because a 1% change in temperature is an ambiguous value.
Imagine an author estimates
log(Y) = b*log(temperature)
and obtains the estimate b = 1. The author reports that a 1% change in temperature leads to a 1% change in Y. I have seen this done many times.
Now an American reader wants to apply this estimate to some hypothetical scenario where the temperature changes from 75 Fahrenheit (F) to 80 F. She computes the change in the independent variable D:
DAmerican = log(80)-log(75) = 0.065
and concludes that because temperature is changing 6.5%, then Y also changes 6.5% (since 0.065*b = 0.065*1 = 0.065).
But now imagine that a Canadian reader wants to do the same thing. Canadians use the metric system, so they measure temperature in Celsius (C) rather than Fahrenheit. Because 80F = 26.67C and 75F = 23.89C, the Canadian computes
DCanadian = log(26.67)-log(23.89) = 0.110
and concludes that Y increases 11%.
Finally, a physicist tries to compute the same change in Y, but physicists use Kelvin (K) and 80F = 299.82K and 75F = 297.04K, so she uses
Dphysicist = log(299.82) - log(297.04) = 0.009
and concludes that Y increases by a measly 0.9%.
What happened? Usually we like the log transformation because it makes units irrelevant. But here a change of units dramatically changed the prediction of this model, causing it to range from 0.9% to 11%!
The answer is that the log transformation is a bad idea when the value x = 0 is not anchored to a unique [physical] interpretation. When we change from Fahrenheit to Celsius to Kelvin, we change the meaning of "zero temperature" since 0 F does not equal 0 C which does not equal 0 K. This causes a 1% change in F to not have the same meaning as a 1% change in C or K. The log transformation is robust to a rescaling of units but not to a recentering of units.
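The unit dependence is easy to reproduce. A minimal sketch using the numbers from the example (temperatures in Fahrenheit, Celsius, and Kelvin; the rainfall values are arbitrary illustration values):

```python
import math

def log_diff(x0, x1):
    """Log-difference, often read as an approximate percent change."""
    return math.log(x1) - math.log(x0)

# The same physical warming, 75 F -> 80 F, expressed in three units:
d_f = log_diff(75.0, 80.0)        # Fahrenheit: ~0.065
d_c = log_diff(23.89, 26.67)      # Celsius:    ~0.110
d_k = log_diff(297.04, 299.82)    # Kelvin:     ~0.009

# Rescaling (not recentering) units is harmless: the same rainfall change
# measured in millimeters and in inches gives the same log-difference.
d_mm = log_diff(10.0, 12.0)
d_in = log_diff(10.0 / 25.4, 12.0 / 25.4)
```

Because Fahrenheit, Celsius, and Kelvin place zero at different physical temperatures, the three temperature log-differences disagree, while the rainfall pair agrees to floating-point precision.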
For comparison, log(rainfall) is an okay measure to use as an independent variable, since zero rainfall is always the same, regardless of whether one uses inches, millimeters or Smoots to measure rainfall.
device_write writes data_count bytes from the buffer data to device. The number of bytes actually written is returned in bytes_written.

If mode is D_NOWAIT, the function returns without waiting for I/O completion. Otherwise mode should be 0. recnum is the record number to be written; its meaning is device specific.
The function returns D_SUCCESS if some data was successfully written and D_NO_SUCH_DEVICE if device does not denote a device port or the device is dead or not completely open.

The device_write_inband function works as the device_write function, except that the data is sent “in-line” in the request IPC message (see Memory).
device_write_request is the asynchronous form of the device_write call; it performs the write request. The meaning of the parameters is as in device_write. Additionally, the caller has to supply a reply port to which the ds_device_write_reply message is sent by the kernel when the write has been performed. The return value of the write operation is stored in return_code.
As neither function receives a reply message, only message transmission errors apply; if no error occurs, the call returns successfully.

The ds_device_write_reply_inband function works as the ds_device_write_reply function, except that the data is sent “in-line” in the request IPC message (see Memory).
Marshall Space Flight Center, Alabama, and the Arnold Engineering and Development Center (Siebold et al., 1993). These medium-sized particles, which have lower characteristic ejection velocities and smaller area-to-mass ratios than the smaller particles, may also be longer-lived than the small particles and could pose a long-term hazard to other Earth-orbiting space objects.
Fragmentation debris—the single largest element of the cataloged Earth-orbiting space object population—consists of space objects created during breakups and the products of deterioration. Breakups are typically destructive events that generate numerous smaller objects with a wide range of initial velocities. Breakups may be accidental (e.g., due to a propulsion system malfunction) or the result of intentional actions (e.g., space weapons tests). They may be caused by internal explosions or by an unplanned or deliberate collision with another orbiting object.
Since 1961, more than 120 known breakups have resulted in approximately 8,100 cataloged items of fragmentation debris, more than 3,100 of which remain in orbit. Fragmentation debris thus currently makes up more than 40 percent of the U.S. space object catalog (and undoubtedly represents an even larger fraction of uncataloged objects). The most intense breakup on record was the 1987 breakup of the Soviet Kosmos 1813, which generated approximately 850 fragments detectable from the Earth. The fragmentation debris released from a breakup will be ejected at a variety of initial velocities. As a result of their varying velocities, the fragments will spread out into a toroidal cloud that will eventually expand until it is bounded only by the limits of the maximum inclinations and altitudes of the debris. This process is illustrated in Figure 1-5. The rate at which the toroidal cloud evolves depends on both the original spacecraft's orbital characteristics and the velocity imparted to the fragments; in general, the greater the spread of the initial velocities of the fragments, the faster the evolution will occur.
In contrast, debris fragments that are the product of deterioration usually separate at low relative velocity from a spacecraft or rocket body that remains essentially intact. Products of deterioration large enough to be detected from Earth are occasionally seen—probably such items as thermal blankets, protective shields, or solar panels. Most such deterioration is believed to be the result of harsh environmental factors, such as atomic oxygen, radiation, and thermal cycling. During 1993 the still-functional COBE (Cosmic Background Explorer) spacecraft released at least 40 objects detectable from Earth—possibly debonded thermal blanket segments—in a nine-month period, perhaps as a result of thermal shock.
Search our database of handpicked sites
Looking for a great physics site? We've tracked down the very best and checked them for accuracy. Just fill out the fields below and we'll do the rest.
You searched for
We found 10 results on physics.org and 55 results in our database of sites (55 are websites, 0 are videos, and 0 are experiments)
Search results on physics.org
Search results from our links database
Explore the moon's surface with this map: a mosaic of landing site images and a tour of the Apollo moon landings.
Multimedia guide to the moon, including a phase calculator which lets you see what the moon will look like at any time of year. Visually attractive site.
Does exactly what the title suggests provides an image of the moon as it would appear on your chosen date (1800-2199 AD). The depiction of lunar surface features suffers geometric distortion but the ...
Find out how the rates of rotation of the Earth and Moon cause an effect whereby it seems the Moon doesn't spin.
Q & A site. Explains visibility of Moon (and Venus) in daytime sky.
Count craters and help explore the moon in this great citizen science project.
A useful page from NASA all about the moon, its origin, movements, craters, interior and more.
A very good fact file on the moon covering lots of different aspects and finishing with a short quiz.
A page about lunar exploration and the Apollo moon landings.
Relive the first moon landing from NASA's point of view including images and recordings of what people were saying.
|Sep25-06, 08:57 AM||#1|
(ceramics) random walk approach to gases, liquids, or solids...
For the random walk approach to gases, liquids, or solids, why isn't there a gradient? The atoms don't jump by themselves, right? They should have to feel forces to jump...
|Sep25-06, 10:27 AM||#2|
And there is a gradient - the temperature gradient or concentration gradient. One can observe a concentration gradient by taking a drop of ink and dropping it in a liquid like water, and watching the ink disperse.
In the case of solids, the atoms are more or less fixed in position - that's what makes a solid solid. In liquids, the atoms/molecules are subject to interatomic/intermolecular forces, but the individual atoms/molecules can migrate. In gases, there is distance between the atoms/molecules and the interatomic/intermolecular forces are very weak, if present at all.
Now in solids, there can be diffusion, but is very slow - orders of magnitude less than in liquids and gases. Hydrogen can diffuse in many metals. There is self-diffusion of atoms in a solid.
Think of the process of precipitation hardening of a metal.
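To make the random-walk picture concrete, here is a minimal sketch of an unbiased 1-D walk (step counts and walker counts are arbitrary illustration values):

```python
import random
import statistics

def random_walk_1d(steps, rng):
    """Final position of one particle taking unit steps left/right at random."""
    x = 0
    for _ in range(steps):
        x += rng.choice((-1, 1))
    return x

rng = random.Random(42)
walkers = 2000
results = {}
for steps in (10, 100, 1000):
    finals = [random_walk_1d(steps, rng) for _ in range(walkers)]
    mean_disp = statistics.fmean(finals)           # stays near 0: no net drift
    msd = statistics.fmean(x * x for x in finals)  # grows roughly linearly with steps
    results[steps] = (mean_disp, msd)
```

Even with no force pushing any individual particle in a particular direction, the spread (mean squared displacement) grows with time; that spreading is diffusion. A concentration gradient sets the direction of the net flux, not the jumping itself.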
|Sep27-06, 08:24 AM||#3|
Ok, I got it! Thank you very much!
Population size, foaling, deaths, age structure, sex ratio, age-specific survival rates, and more over a 14 year time span. This information will help land and wildlife managers find the best maintenance and conservation strategies.
Program to keep common species common by identifying those species and plant communities that are not adequately represented in existing conservation lands. Links to projects, applications, status maps, and a searchable database.
Describes the value of molecular biology genetic tools in enhancing the delineation of the genetic diversity and the effects of environmental degradation on living species. Links to research, which differentiated two species of sage-grouse.
Trapping education manual for the beginning or inexperienced trapper intended to provide information on North Dakota's predators and furbearing animals and the basics on how to trap them using good trapping skills and sound fur management.
Retrieval system to locate websites on publications and compilations on biological resources. Searches can be made by type, such as checklists, distribution, and regional overviews, by taxon, and by geography, including global, U.S., and Canada.
Main page for accessing links for information and data on the San Francisco Bay estuary and its watershed with links to highlights, water, biology, wetlands, hazards, digital maps, geologic mapping, winds, bathymetry and overview of the Bay.
Doppler radar imaging is used for tracking the migration and behavior of bird populations. With the U.S. Fish and Wildlife Service and other agencies, USGS uses this technology to assist decision makers balance natural and industrial concerns
Clearinghouse for the description and availability of multiple geospatial datasets relating to Alaska from many federal, state and local cooperating agencies under the coordination of the Alaska Geographic Data Committee.
Links to volcanism, volcanic history, volcanic rocks, and general geology by state, by region, national parks and national monuments and a brief introduction to volcanism around the U.S. entitled: Windows into the past.
Homepage for the Dept. of the Interior's Initiative coordinated by the USGS, for amphibian (frogs, toads, salamanders and newts) monitoring, research, and conservation. Links to National Atlas for Amphibian Distribution, photos, and interactive map server.
They're abundant in this area, but hard to count reliably. We outline a procedure for estimating the population sizes so that we can determine whether they're increasing or dwindling. We must both listen for their calls and visually confirm them.
About Structured Query Language
- SQL was first implemented in IBM's System R in the late 1970s.
- SQL is the de facto standard query language for creating and manipulating data in relational databases.
- There are some differences, but much of SQL is standard across MS Access, Oracle, Sybase, MySQL, and PostgreSQL.
- SQL is either specified by a command-line tool or is embedded into a general-purpose programming language such as Cobol, "C" or Pascal, or accessible via modules as in Java, PHP and Perl.
- In Perl, we use the DBI interface.
"DBI is a database access Application Programming Interface (API) for the Perl Language. The DBI API Specification defines a set of functions, variables and conventions that provide a consistent database interface independent of the actual database being used."
--Tim Bunce, the architect and author of DBI
- SQL is a standardized language monitored by the American National Standards Institute (ANSI) as well as by the International Organization for Standardization (ISO).
- ANSI 1986 - SQL 1 standard (revised in 1989)
- ANSI 1992 - SQL 2 standard (sometimes called SQL-92)
- ANSI and ISO SQL:1999, also known as SQL 3, added some object-oriented concepts
- ANSI and ISO SQL:2003, introduced XML-related features
- ANSI and ISO SQL:2006
- ANSI and ISO SQL:2008
- SQL has two major parts:
- Data Definition Language (DDL) is used to create (define) data structures such as tables and indexes.
- Data Manipulation Language (DML) is used to store, retrieve and update data in tables.
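As an illustration of the two parts, here is a small sketch using Python's built-in sqlite3 module (the table and column names are made up for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# DDL: define a data structure (a table)
cur.execute("CREATE TABLE students (id INTEGER PRIMARY KEY, name TEXT, gpa REAL)")

# DML: store, update and retrieve data
cur.executemany("INSERT INTO students (name, gpa) VALUES (?, ?)",
                [("Ada", 3.9), ("Grace", 3.8)])
cur.execute("UPDATE students SET gpa = 4.0 WHERE name = ?", ("Ada",))
rows = cur.execute("SELECT name, gpa FROM students ORDER BY name").fetchall()
# rows == [('Ada', 4.0), ('Grace', 3.8)]
conn.close()
```

The CREATE TABLE statement is DDL; the INSERT, UPDATE and SELECT statements are DML.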
Some of the material on this page has been extracted from a Database Management Systems Course, Baruch College, City University of New York. Copyright, 1997, 1998 Richard Holowczak.
Solid State #1 by Sanjay Sharma
(a) Amorphous solids (b) Crystalline solids
4. Solid angle
5. Interfacial angle
6. Zone and Zone-axis
(i) AB type structures
The AB2 or A2B types of ionic crystals contain ions in the ratio 1:2 or 2:1 respectively. For example, CaF2 is popularly said to have the fluorite structure, with other examples such as SrF2, BaF2, PbF2 and BaCl2. The coordination number of Ca++ in CaF2 is 8 while that of F- is 4. On the contrary, Na2O has the antifluorite structure, i.e., the positions of the cations are occupied by anions and vice versa.
The size of the unit cell and the arrangement of atoms in a crystal are determined with the help of measurements of the diffraction of X-rays by the crystal. When a beam of monochromatic X-rays strikes the planes of atoms in a crystal at a certain angle θ, it is reflected. The intensity of the reflected beam will be maximum if nλ = 2d sin θ (Bragg's equation), where n is the order of reflection, λ the wavelength and d the interplanar spacing.
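A quick numerical sketch of the Bragg condition (the wavelength and spacing below are assumed illustrative values, not taken from the text):

```python
import math

# Bragg's condition: n * wavelength = 2 * d * sin(theta)
wavelength = 1.54  # angstroms (roughly Cu K-alpha X-rays; assumed value)
d_spacing = 2.00   # angstroms, interplanar spacing (assumed value)
n = 1              # first-order reflection

theta = math.asin(n * wavelength / (2 * d_spacing))
theta_degrees = math.degrees(theta)  # the angle at which reflection is maximum
```

Solving for θ this way gives the glancing angle at which the reflected beams from successive planes interfere constructively.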
In actual practice it is difficult to grow a perfect crystal. Even single crystals grown with all care are found to contain imperfections (defects).
In an ionic crystal, the electrons are mostly concentrated around the electronegative component. Some of these electrons have a tendency toward thermal release, i.e., they may leave their positions as the temperature increases. These thermally released electrons become mobile, resulting in increased conductivity of the solid.
When an electron is thermally removed from its position, the electron-deficient site thus formed is called a HOLE. Holes also impart electrical conductivity, but their movement is opposite in direction to that of the electrons. The electrons and holes in solids give rise to electronic imperfections.
The defects discussed above are called point defects and can be categorised into the following 3 types -
(A) Stoichiometric defects
(B) Non - stoichiometric defects
(C) Impurity defects.
If imperfections in a crystal are such that the ratio between the cations and the anions remains the same as described by its molecular formula, the defect is called a STOICHIOMETRIC DEFECT. These can be further categorised into -
(a) Schottky defect (b) Frenkel defect.
In an ionic crystal of A+B- type, if equal numbers of cations and anions are missing from their lattice sites, the defect is called a SCHOTTKY DEFECT. In this defect electrical neutrality is maintained because similar numbers of cations and anions disappear (lattice vacancies).
The schottky defect is shown by highly ionic compounds having -
(i) High co-ordination number
(ii) Small difference in the size of cations and anions
For Example : NaCl, KCl, KBr, CsCl etc.
This type of defect (the FRENKEL DEFECT) is seen in those crystals where the difference in the size of cations and anions is very large and their coordination number is low, for example AgCl, AgBr, ZnS etc. Due to such a defect the density of the solid remains unchanged.
When the ratio of cations and anions due to imperfection differs from that indicated by the molecular formula, the defects are called NON-STOICHIOMETRIC DEFECTS. These defects result in either an excess of metal atoms or an excess of non-metal atoms.
The metal excess may occur in either of the following two ways -
(i) Due to a negative ion missing from its lattice site, leaving a hole which is occupied by an electron. The electrons thus trapped in the anion vacancies are called F-centres (F = Farbe, the German word for colour), as these are responsible for imparting colour to the crystals.
This defect is similar to the Frenkel defect, e.g. FeO, FeS, NiO etc.
Another common method of introducing defects in ionic solids is by adding impurity ions having a different charge than the host ions. These foreign atoms are present at lattice sites in substitutional solids and at vacant interstitial sites in interstitial solids.
f + c = e + 2
where, f = number of faces
e = number of edges
c = number of corners (solid angles).
2. Coordination Number
3. Crystallographic axes
4. Standard or unit plane
5. Axial ratio.
by Dave Hennen
CNN Senior Meteorologist
In 2004, a 9.1-magnitude quake occurred in a region similar to today's large quake, generating a tsunami that killed more than 200,000 people. Then last year a similar, magnitude-9.0 quake also generated a killer tsunami, killing over 15,000. Today's quake was strong enough to generate a killer tsunami, but some key differences in the quakes probably saved thousands of lives.
Likely the biggest difference today, was the type of quake that occurred. Both the 9.1 quake in 2004 and the Japan quake were considered “thrust” quakes. In both cases the sea floor was violently thrust upward. This pushed a large amount of water towards the surface that produced the widespread tsunami. Think of dropping a large rock into a pond. A wave spreads outward from where the rock is thrown. The same happens at the epicenter of the quake as a tsunami is generated.
Today’s quake was a different type of quake altogether. Instead of the plates being thrust upward, today's quake was categorized as "strike-slip". In this case the plates move horizontally, not vertically, so not as much water was displaced. These types of faults can still be deadly: the Haiti quake in 2010 killed thousands. That 7.0-magnitude quake was centered over a highly populated area near Port-au-Prince, so it was collapsing buildings, not a tsunami, that caused most of the fatalities.
The other main factor was the size of the quake. A 9.1 magnitude quake may seem very similar to today’s 8.6 magnitude but the scale is logarithmic which means the devastating 9.1 quake in 2004 was 3.2 times larger, or more than 5 times stronger than today.
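The arithmetic behind that comparison can be sketched briefly, using the usual amplitude and energy relations for the magnitude scale:

```python
# Each whole magnitude unit corresponds to a 10x change in recorded
# amplitude and roughly a 10**1.5 (about 32x) change in released energy.
def amplitude_ratio(m_big, m_small):
    return 10 ** (m_big - m_small)

def energy_ratio(m_big, m_small):
    return 10 ** (1.5 * (m_big - m_small))

amp = amplitude_ratio(9.1, 8.6)  # ~3.2 times larger
eng = energy_ratio(9.1, 8.6)     # ~5.6 times more energy released
```

So a half-unit difference in magnitude means roughly 3.2 times the amplitude and more than 5 times the energy, matching the figures above.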
Still, quakes of magnitude 8.0 or higher are rare. According to the USGS, nearly 3 million quakes occur annually across the globe, but since 1900 only 89 quakes have had a magnitude greater than 8.0, and the fact that today's initial shock of 8.6 was followed by an 8.2 aftershock makes today even more unprecedented.
This was derived from a graphical image; the image will be used more directly in a subsequent version of this document.
From the top down:
- Your App Here (Python)
- A Python application makes a Tkinter call.
- Tkinter (Python Module)
- This call (say, for example, creating a button widget), is
implemented in the Tkinter module, which is written in
Python. This Python function will parse the commands and the
arguments and convert them into a form that makes them look as if they
had come from a Tk script instead of a Python script.
- tkinter (C)
- These commands and their arguments will be passed to a C function
in the tkinter - note the lowercase - extension module.
- Tk Widgets (C and Tcl)
- This C function is able to make calls into other C modules,
including the C functions that make up the Tk library. Tk is
implemented in C and some Tcl. The Tcl part of the Tk widgets is used
to bind certain default behaviors to widgets, and is executed once at
the point where the Python Tkinter module is
imported. (The user never sees this stage).
- Tk (C)
- The Tk part of the Tk Widgets implement the final mapping to ...
- Xlib (C)
- the Xlib library to draw graphics on the screen.
The CPython interpreter scans the command line and the environment for various settings.
CPython implementation detail: Other implementations’ command line schemes may differ. See Alternate Implementations for further resources.
When invoking Python, you may specify any of these options:
python [-bBdEhiOsSuvVWx?] [-c command | -m module-name | script | - ] [args]
The most common use case is, of course, a simple invocation of a script:
The interpreter interface resembles that of the UNIX shell, but provides some additional methods of invocation:
In non-interactive mode, the entire input is parsed before it is executed.
An interface option terminates the list of options consumed by the interpreter; all consecutive arguments will end up in sys.argv – note that the first element, subscript zero (sys.argv[0]), is a string reflecting the program’s source.
-c <command>
Execute the Python code in command. command can be one or more statements separated by newlines, with significant leading whitespace as in normal module code.
If this option is given, the first element of sys.argv will be "-c" and the current directory will be added to the start of sys.path (allowing modules in that directory to be imported as top level modules).
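A quick illustration of both points (assumes a python3 executable on PATH):

```shell
python3 -c 'import sys; print(sys.argv)' alpha beta
# prints: ['-c', 'alpha', 'beta']
```

The command text is consumed by the interpreter, sys.argv[0] becomes "-c", and the remaining arguments are passed through to the program.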
-m <module-name>
Since the argument is a module name, you must not give a file extension (.py). The module-name should be a valid Python module name, but the implementation may not always enforce this (e.g. it may allow you to use a name that includes a hyphen).
Package names are also permitted. When a package name is supplied instead of a normal module, the interpreter will execute <pkg>.__main__ as the main module. This behaviour is deliberately similar to the handling of directories and zipfiles that are passed to the interpreter as the script argument.
This option cannot be used with built-in modules and extension modules written in C, since they do not have Python module files. However, it can still be used for precompiled modules, even if the original source file is not available.
If this option is given, the first element of sys.argv will be the full path to the module file (while the module file is being located, the first element will be set to "-m"). As with the -c option, the current directory will be added to the start of sys.path.
Many standard library modules contain code that is invoked on their execution as a script. An example is the timeit module:
python -mtimeit -s 'setup here' 'benchmarked code here' python -mtimeit -h # for details
runpy.run_module() Equivalent functionality directly available to Python code
PEP 338 – Executing modules as scripts
Changed in version 3.1: Supply the package name to run a __main__ submodule.
<script>
Execute the Python code contained in script, which must be a filesystem path (absolute or relative) referring to either a Python file, a directory containing a __main__.py file, or a zipfile containing a __main__.py file.
If this option is given, the first element of sys.argv will be the script name as given on the command line.
-b
Issue a warning when comparing str and bytes. Issue an error when the option is given twice (-bb).
-B
If given, Python won’t try to write .pyc or .pyo files on the import of source modules. See also PYTHONDONTWRITEBYTECODE.
-d
Turn on parser debugging output (for wizards only, depending on compilation options). See also PYTHONDEBUG.
-i
When a script is passed as first argument or the -c option is used, enter interactive mode after executing the script or the command, even when sys.stdin does not appear to be a terminal. The PYTHONSTARTUP file is not read.
This can be useful to inspect global variables or a stack trace when a script raises an exception. See also PYTHONINSPECT.
-q
Don’t display the copyright and version messages even in interactive mode.

New in version 3.2.
-s
Don’t add the user site directory to sys.path.
-u
Force the binary layer of the stdin, stdout and stderr streams (which is available as their buffer attribute) to be unbuffered. The text I/O layer will still be line-buffered.
See also PYTHONUNBUFFERED.
-v
Print a message each time a module is initialized, showing the place (filename or built-in module) from which it is loaded. When given twice (-vv), print a message for each file that is checked for when searching for a module. Also provides information on module cleanup at exit. See also PYTHONVERBOSE.
-W arg
Warning control. Python’s warning machinery by default prints warning messages to sys.stderr. A typical warning message has the following form:
file:line: category: message
By default, each warning is printed once for each source line where it occurs. This option controls how often warnings are printed.
Multiple -W options may be given; when a warning matches more than one option, the action for the last matching option is performed. Invalid -W options are ignored (though, a warning message is printed about invalid options when the first warning is issued).
Warnings can also be controlled from within a Python program using the warnings module.
The simplest form of argument is one of the following action strings (or a unique abbreviation): default, error, ignore, always, module, or once.
The full form of argument is:
action:message:category:module:line
Here, action is as explained above but only applies to messages that match the remaining fields. Empty fields match all values; trailing empty fields may be omitted. The message field matches the start of the warning message printed; this match is case-insensitive. The category field matches the warning category. This must be a class name; the match tests whether the actual warning category of the message is a subclass of the specified warning category. The full class name must be given. The module field matches the (fully-qualified) module name; this match is case-sensitive. The line field matches the line number, where zero matches all line numbers and is thus equivalent to an omitted line number.
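The same filtering is also available programmatically: the fields of the full form map onto the arguments of warnings.filterwarnings(). A minimal sketch (the warning message text is invented):

```python
import warnings

# The full form action:message:category:module:line corresponds to the
# arguments of warnings.filterwarnings(action, message, category, module, lineno).
with warnings.catch_warnings():
    # Turn every DeprecationWarning into an error (empty fields match everything):
    warnings.filterwarnings("error", category=DeprecationWarning)
    try:
        warnings.warn("old API", DeprecationWarning)
        raised = False
    except DeprecationWarning:
        raised = True

print(raised)  # True: the warning was promoted to an exception
```

This is equivalent in effect to starting the interpreter with -W "error::DeprecationWarning".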
Skip the first line of the source, allowing use of non-Unix forms of #!cmd. This is intended for a DOS specific hack only.
The line numbers in error messages will be off by one.
These environment variables influence Python’s behavior.
Change the location of the standard Python libraries. By default, the libraries are searched in prefix/lib/pythonversion and exec_prefix/lib/pythonversion, where prefix and exec_prefix are installation-dependent directories, both defaulting to /usr/local.
Augment the default search path for module files. The format is the same as the shell’s PATH: one or more directory pathnames separated by os.pathsep (e.g. colons on Unix or semicolons on Windows). Non-existent directories are silently ignored.
In addition to normal directories, individual PYTHONPATH entries may refer to zipfiles containing pure Python modules (in either source or compiled form). Extension modules cannot be imported from zipfiles.
An additional directory will be inserted in the search path in front of PYTHONPATH as described above under Interface options. The search path can be manipulated from within a Python program as the variable sys.path.
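A minimal sketch of that manipulation from within a program (the directory and the module name pathdemo are invented for illustration):

```python
import os
import sys
import tempfile

# Create a throwaway directory containing one module, then put it on
# sys.path just as a PYTHONPATH entry or the script directory would be.
tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, "pathdemo.py"), "w") as fh:
    fh.write("ANSWER = 42\n")

sys.path.insert(0, tmp)  # inserted in front of the search path
import pathdemo

print(pathdemo.ANSWER)  # 42
```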
If this is the name of a readable file, the Python commands in that file are executed before the first prompt is displayed in interactive mode. The file is executed in the same namespace where interactive commands are executed so that objects defined or imported in it can be used without qualification in the interactive session. You can also change the prompts sys.ps1 and sys.ps2 in this file.
Set this to a non-empty string to cause the time module to require dates specified as strings to include 4-digit years, otherwise 2-digit years are converted based on rules described in the time module documentation.
If this is set to a non-empty string it is equivalent to specifying the -i option.
This variable can also be modified by Python code using os.environ to force inspect mode on program termination.
If this is set to a non-empty string it is equivalent to specifying the -u option.
If this is set, Python won’t try to write .pyc or .pyo files on the import of source modules.
If this is set before running the interpreter, it overrides the encoding used for stdin/stdout/stderr, in the syntax encodingname:errorhandler. The :errorhandler part is optional and has the same meaning as in str.encode().
For stderr, the :errorhandler part is ignored; the handler will always be 'backslashreplace'.
If this is set, Python won’t add the user site directory to sys.path.
Sets the base directory for the user site directory.
If this environment variable is set, sys.argv[0] will be set to its value instead of the value got through the C runtime. Only works on Mac OS X.
Setting these variables only has an effect in a debug build of Python, that is, if Python was configured with the --with-pydebug build option.
If set, Python will print threading debug info.
If set, Python will dump objects and reference counts still alive after shutting down the interpreter.
If set, Python will print memory allocation statistics every time a new object arena is created, and on shutdown. | <urn:uuid:80c2ab50-70f6-44de-91f8-8ba876abedb6> | 3.046875 | 2,025 | Documentation | Software Dev. | 47.754127 |
The Element Curium
[Click for Isotope Data]
Atomic Number: 96
Atomic Weight: 247
Melting Point: 1618 K (1345°C or 2453°F)
Boiling Point: ~3400 K (~3100°C or ~5600°F)
Density: 13.51 grams per cubic centimeter
Phase at Room Temperature: Solid
Element Classification: Metal
Period Number: 7 Group Number: none Group Name: Actinide
Radioactive and Artificially Produced
What's in a name? Named after the scientists Pierre and Marie Curie.
Say what? Curium is pronounced as KYOOR-ee-em.
History and Uses:
Curium was first produced by Glenn T. Seaborg, Ralph A. James and Albert Ghiorso, working at the University of California, Berkeley, in 1944. They bombarded atoms of plutonium-239, an isotope of plutonium, with alpha particles that had been accelerated in a device called a cyclotron. This produced atoms of curium-242 and one free neutron. Curium-242 has a half-life of about 163 days and decays into plutonium-238 through alpha decay or decays through spontaneous fission.
Curium's most stable isotope, curium-247, has a half-life of about 15,600,000 years. It decays into plutonium-243 through alpha decay.
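The half-lives quoted above plug into the standard exponential-decay relation N(t) = N0 · 0.5^(t / T_half); a small sketch, not from the source:

```python
def remaining_fraction(elapsed, half_life):
    """Fraction of a radioactive sample left after `elapsed` time units."""
    return 0.5 ** (elapsed / half_life)

# Curium-242, half-life about 163 days:
print(remaining_fraction(163, 163))  # 0.5  (one half-life)
print(remaining_fraction(326, 163))  # 0.25 (two half-lives)
```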
Since only milligram amounts of curium have ever been produced, there are currently no commercial applications for it, although it might be used in radioisotope thermoelectric generators in the future. Curium is primarily used for basic scientific research.
Scientists have produced several curium compounds. They include: curium dioxide (CmO2), curium trioxide (Cm2O3), curium bromide (CmBr3), curium chloride (CmCl3), curium tetrafluoride (CmF4) and curium iodide (CmI3). As with the element, the compounds currently have no commercial applications and are primarily used for basic scientific research.
Estimated Crustal Abundance: Not Applicable
Estimated Oceanic Abundance: Not Applicable
Number of Stable Isotopes: 0 (View all isotope data)
Ionization Energy: 6.02 eV
Oxidation States: +3
Electron Shell Configuration:
1s2
2s2 2p6
3s2 3p6 3d10
4s2 4p6 4d10 4f14
5s2 5p6 5d10 5f7
6s2 6p6 6d1
7s2
Most soil animals are very small. As a result, we don't know much about them. Simple questions like where they live haven't been completely answered. Part of the problem is that larval stages do not look like the adults. Maggots for instance, do not resemble flies. The Tullgren funnel is one method used to separate soil animals from leaves. A light bulb applies heat and light above to leaves, or other materials. Animals moving away from the light bulb are trapped in a vial below. This is a good method for larger animals that can survive the heat. Beetles, springtails and fly larvae are often recovered. Smaller animals frequently dry out and die. Animals trapped in moisture on the sides of the funnel also die.
Soil animals are classified by size. Larger soil animals (macrofauna) include burrowing animals, insects, slugs and snails and earthworms. Middle sized animals (mesofauna) include spring tails (collembolan) and mites. The smallest animals (microfauna) include nematodes.
The most common animals in mushrooms are collembola, fly larvae, beetles, and mites. Millipedes and nematodes are also extracted sometimes.
Earthworms are segmented and larger than most other soil animals. The "nightcrawler" shown here is not native to North America. They dig deep burrows and are very important in decomposition. By mixing soil and organic matter, they have changed the layered structure of some forest soils. I have never isolated one from a mushroom.
Collembola are primitive wingless insects, usually less than 2 mm long. The "spring" under the abdomen and short, thick antennae are distinctive. They have six legs and are white or grey.
The most common fly larvae are white with black heads. They lack legs or antennae. Their bodies are segmented.
Beetles have six legs, antennae, a head and three body segments. They are usually reddish brown or black. Beetle larvae have white, soft, segmented bodies. The six legs are just behind the head. The head is a hard, reddish capsule.
Mites are rarely longer than 1 mm and difficult to see with the naked eye. They are reddish and have eight legs. They do not have the narrow waist between their abdomen and thorax that spiders have.
Millipedes have tube-shaped bodies made up of many segments. Each segment has two pairs of legs (four legs in total). The antennae are short. Centipedes in contrast have one pair of legs per body segment.
Nematodes are extremely thin, and range in length from 0.5 to 1.5 mm. They do not have segmented bodies like earthworms. They are transparent (clear).
Among the questions that can be answered using the Tullgren funnel are:
To get the best results use multiple funnels. By chance one funnel may give a very high number, or a very low number. You can calculate the average by using more than one funnel. You might want to read designing experiments before starting.
Ring stand support for Tullgren funnel.
Wood rack for Tullgren funnel.
You can calculate the average by dividing the total number of animals by the total weight of mushrooms in all of the funnels. For example: 500 individuals divided by 5 kilograms = 500 ÷ 5 = an average of 100 animals per kilogram. Remember you cannot calculate an average from only one funnel.
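The same averaging rule, written out as a sketch (the per-funnel counts below are invented; only the 500-individuals and 5-kilograms totals come from the example):

```python
def animals_per_kg(counts, weights_kg):
    """Average density over several funnels: total animals / total mushroom mass."""
    return sum(counts) / sum(weights_kg)

# Five funnels totalling 500 individuals from 5 kg of mushrooms:
# 500 / 5 = 100 animals per kilogram.
print(animals_per_kg([120, 80, 95, 110, 95], [1, 1, 1, 1, 1]))  # 100.0
```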
Last update: 19 Nov 06. © 2006. Robert Fogel, Ivins, UT 84731. | <urn:uuid:0a5df388-b975-40c4-8563-83273192a85a> | 4.09375 | 746 | Knowledge Article | Science & Tech. | 55.282975 |
Heavy oil is a common fuel for industrial furnaces, boilers, marine engines and diesel engines. Previous studies showed that the combustion of heavy oil involves not only the complete burning of volatile matter but also the burn-out of coke residues [1-3]. Detailed knowledge about heavy oil combustion therefore requires an understanding of the different burning stages of heavy oil droplets in the burner. This, in turn, demands knowledge about single-droplet evaporation and combustion characteristics. This study measured the temperature and size histories of heavy oil (C glass) droplets burning in microgravity to elucidate the various stages that occur during combustion. The elimination of gravity-induced gas convection in microgravity allows the droplet combustion to be studied in greater detail. Noting that the composition of heavy oil varies, we also tested fuel blends of a diesel light oil (LO) and a heavy oil residue (HOR).
Ikegami, M., Xu, G., Ikeda, K., Honam, S., Nagaishi, H., Dietrich, D.L., Struk, P.M., Takeshita, Y., Combustion Stages of a Single Heavy Oil Droplet in Microgravity, Sixth International Microgravity Combustion Workshop, NASA Glenn Research Center, Cleveland, OH, CP-2001-210826, pp. 261-264, May 22-24, 2001. | <urn:uuid:309cc348-e4b6-4bf1-91f9-3c696e402b89> | 3.015625 | 291 | Academic Writing | Science & Tech. | 48.260492 |
Annu. Rev. Astron. Astrophys. 1989. 27
Copyright © 1989. All rights reserved
Galaxies of the Local Group are all near enough that rather reliable methods can be used to diagnose what kinds of stellar populations each contains. There are still many uncertainties in detail, but experience has shown that the problem is at least tractable for the local sample, whereas it becomes extremely difficult and uncertain for more distant galaxies, where only integrated properties can be measured [see the excellent discussions in Norman et al. (94)]. There is usually no difficulty in ascertaining whether or not a galaxy has any very young stars or uncondensed interstellar material, though even this question becomes tricky when very small amounts are being sought. The problem becomes more difficult as older components are looked for because of the faintness of older stars and because of the ambiguity in age dating them. (It is difficult to distinguish a low-mass old star from a low-mass young star.) Even in the Magellanic Clouds, where we can measure stars down to luminosities fainter than the Sun's, one cannot be certain about how to interpret a given field star color-magnitude diagram (CMD) because of the unknown and probably variable star formation rate (SFR), and, possibly, a variable initial-mass function (IMF).
One method that at least provides fairly reliable age data is the use of star clusters as probes of the galaxy. From good charge-coupled device color-magnitude diagrams it is possible to determine the ages and chemical compositions of star clusters in the Magellanic Clouds and to obtain less accurate data for clusters in the more distant Local Group members, such as M31 and M33, where integrated colors and spectra can be used (with calibration from the MWG and the Magellanic Clouds). But there is no good way to compensate for the fact that star clusters disintegrate in time at a rate that probably depends on both the properties of the clusters and those of the host galaxies. Thus, though it is relatively simple to determine the present star formation rate for stars in clusters in a galaxy, it is not possible to trace the rate back in history farther than a time that is about the mean lifetime of the clusters, which is about 10^8 yr for the MWG (131). Attempts to do this have been made for the Magellanic Clouds (38, 62) and for the dwarfs NGC 6822 and IC 1613 (65). These tracings go back only a fraction of the lifetimes of the galaxies, however, and so we are forced to reconstruct the oldest times from other evidence.
Hurricane Celia as observed by NASA's spaceborne Atmospheric Infrared Sounder (AIRS). This image shows Celia on July 23 in visible light, as you would perceive it from space. Located in the eastern north Pacific Ocean off the coast of Mexico, Celia's winds have now weakened to highs of 40 mph. Celia was the first hurricane of the eastern north Pacific season. Figure 1 is a daylight snapshot taken on July 19; Celia is a tropical storm, with winds at 50 mph. Figure 2 is a daylight snapshot taken on July 21; Celia has a small eye with an 80-90% closed eyewall and sustained winds at 75 mph with gusts reaching 92 mph; Celia is upgraded to hurricane status.
Figure 1: July 19 Daylight Snapshot for PIA00438. Figure 2: July 21 Daylight Snapshot for PIA00438.
The major contribution to radiation (infrared light) that AIRS channels sense comes from different levels in the atmosphere, depending upon the channel wavelength. To create the movies, a set of AIRS channels were selected which probe the atmosphere at progressively deeper levels. If there were no clouds, the color in each frame would be nearly uniform until the Earth's surface is encountered. The tropospheric air temperature warms at a rate of 6 K (about 11 F) for each kilometer of descent toward the surface. Thus the colors would gradually change from cold to warm as the movie progresses.
Clouds block the infrared radiation. Thus wherever there are clouds we can penetrate no deeper in infrared. The color remains fixed as the movie progresses, for that area of the image is "stuck" to the cloud top temperature. The coldest temperatures around 220 K (about -65 F) come from altitudes of about 10 miles.
We therefore see in a 'surface channel' at the end of the movie, signals from clouds as cold as 220 K and from Earth's surface at 310 K (about 100 F). The very coldest clouds are seen in deep convection thunderstorms over land.
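A rough sketch of the altitude estimate implied by these numbers, using the quoted 6 K per kilometer lapse rate (real AIRS retrievals are far more involved than this):

```python
def cloud_top_height_km(surface_k, cloud_top_k, lapse_k_per_km=6.0):
    """Height at which air cooling at the lapse rate reaches cloud-top temperature."""
    return (surface_k - cloud_top_k) / lapse_k_per_km

# 220 K cloud tops under a 310 K surface: (310 - 220) / 6 = 15 km,
# consistent with the "about 10 miles" quoted above.
print(cloud_top_height_km(310, 220))  # 15.0
```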
QuickTime movie, July 20.
QuickTime movie, July 22. Celia located in upper left. The other intense convection area toward the center of the granule exhibits no circulation.
QuickTime movie, July 23. Dry air is now eating into Celia; the storm is becoming disorganized and weak.
The Atmospheric Infrared Sounder Experiment, with its visible, infrared, and microwave detectors, provides a three-dimensional look at Earth's weather. Working in tandem, the three instruments can make simultaneous observations all the way down to the Earth's surface, even in the presence of heavy clouds. With more than 2,000 channels sensing different regions of the atmosphere, the system creates a global, 3-D map of atmospheric temperature and humidity and provides information on clouds, greenhouse gases, and many other atmospheric phenomena. The AIRS Infrared Sounder Experiment flies onboard NASA's Aqua spacecraft and is managed by NASA's Jet Propulsion Laboratory, Pasadena, Calif., under contract to NASA. JPL is a division of the California Institute of Technology in Pasadena. | <urn:uuid:26204c01-9a1e-49d2-9a53-4e99f426ea18> | 3.578125 | 633 | Knowledge Article | Science & Tech. | 47.073364 |
Effect of Friction on Tides and Tidal Currents
In shallow water the tide and the tidal currents will be modified by the friction to which the waters are subjected when moving over the bottom. This bottom friction influences the currents to a considerable distance from the boundary surface, owing to the turbulent character of the flow (p. 480).
The effect of friction on the tide can be illustrated by considering a co-oscillating tide in a bay of constant depth and width. In the absence of friction the tide will have the character of a standing wave that can be considered composed of two waves traveling in opposite direction, the incoming wave and the reflected wave. In the presence of friction the tide can still be considered as composed of two such waves, but the combination no longer results in a single standing oscillation because the amplitudes of both waves must decrease in their directions of progress. In general, it can be assumed that the amount of energy that is dissipated is always proportional to the total energy of the wave. If this is true, the friction leads to a logarithmic decrease of the amplitude, provided the depth is constant. Assume that the waves progress in the x direction, that the influence of friction begins at x = 0 and that reflection takes place at x = l. On these assumptions the amplitude of the incoming wave will be (Fjeldstad, 1929)
From this equation it follows that the oscillation can be considered as brought about by two standing waves of phase difference π/2 or one quarter of a period (p. 552).
Let us consider a bay the length of which is ⅜ L, where L is the length of the wave in the bay. This means introducing l = ⅜ L, kl = ¾ π, and kx = 2πx/L. Let us furthermore assume that at the opening of the bay the tide can be represented by the equation η0 = Z cos σt, which means that at x = 0 the amplitude is Z and high water occurs at t = 0. In the absence of friction the standing wave in the bay will show a node at a distance of one quarter wave length from the opening, and inside of the node high water will occur at t = 6h if the period of the wave is 12h.
Effect of friction on amplitude and phase of the cooscillating tide in a bay, the length of which is 3/8 of the length of the tide wave.
The variations along the length of the bay of amplitude and phase are shown in fig. 146, by the curves marked 0. The effect of friction will depend upon the value of μ and, in order to illustrate the effect, we introduce three numerical values μ = 8/(15L), μ = 4/(3L), and μ = 4/L, corresponding to a decrease of the amplitude of the tide wave to one half of its value on a distance equal to 1.17 L, 0.52 L, and 0.17 L, respectively. The corresponding variations along the length of the bay of amplitude and phase of the tide are shown in fig. 146 by the curves marked 1, 2, and 3. The dashed line in the upper part of the figure shows the change in phase on three eighths of a wave length of a progressive wave.
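The half-amplitude distances quoted for μ = 4/(3L) and μ = 4/L follow from e^(−μd) = 1/2, i.e. d = ln 2 / μ. A quick numerical check (a sketch only, working in units of the wavelength L):

```python
import math

def half_amplitude_distance(mu_in_units_of_1_over_L):
    """Distance (in wavelengths L) over which exp(-mu*x) falls to one half."""
    return math.log(2) / mu_in_units_of_1_over_L

# mu = 4/(3L) and mu = 4/L reproduce the quoted 0.52 L and 0.17 L:
print(round(half_amplitude_distance(4 / 3), 2))  # 0.52
print(round(half_amplitude_distance(4), 2))      # 0.17
```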
By means of fig. 146 three effects of friction are brought out: (1) the node at which the amplitude of the tide is zero disappears and, instead, a region with minimal range is found; (2) the abrupt change of phase disappears and is replaced by a gradual change; (3) the phase difference between the opening and the end of the bay is decreased and approaches
The most striking example of the influence of friction on tides is found on the wide shelf along the Arctic coast of eastern Siberia. There the tide wave reaches the shelf from the north after having entered the Polar Sea through the wide opening between Spitsbergen and Greenland and having crossed the deep portions of the Polar Sea. Between longitudes 150°E and 180°E the width of the North Siberian Shelf exceeds 300 miles and in the greater part of that area the depth of the water is between 20 and 40 meters. The sea is ice-covered nearly throughout the year and, owing to the resistance which the ice offers, the tidal currents are subjected to frictional influences from the ice on top as well as from the bottom. The total effect of friction is therefore so great that on the coast the tide nearly vanishes (Sverdrup, 1927, Fjeldstad, 1929 and 1936). The decrease of the amplitude when approaching the coast is brought out by the data in table 72, which shows the amplitude and phase of the term M2 near the border of the shelf and at two localities on the coast. Of these two localities, Ayon Island lies a little south of Four Pillar Island, but the tide wave reaches Four Pillar Island later because the direction of progress of the wave is altered near the coast owing to the configuration of the bottom (Sverdrup, 1927).
|Locality|Latitude N|Longitude E|M2 Amplitude (cm)|M2 Phase (degrees)|Difference in phase (degrees)|
|Near border of shelf|74°33′|167°10′|13.75|158|0|
|Ayon Island|69°52′|167°43′|1.78|347|189|
|Four Pillar Island|70°43′|162°35′|0.98|60|262|
It is seen that the later the tide the smaller the amplitude is, and it can be readily verified that the logarithm of the amplitude is nearly a linear function of the phase difference, as should be expected if the wave length remained constant, because in that case μx = μLα/2π where α = kx represents the phase difference.
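That log-linear claim can be checked numerically against the Table 72 values: the slope of ln(amplitude) per degree of phase difference between successive stations comes out negative and of broadly similar size (a rough sketch, not from the source):

```python
import math

# (phase difference in degrees, M2 amplitude in cm) from Table 72
stations = [(0, 13.75), (189, 1.78), (262, 0.98)]

# Slope of ln(amplitude) per degree of phase between successive stations
slopes = []
for (p1, a1), (p2, a2) in zip(stations, stations[1:]):
    slopes.append((math.log(a2) - math.log(a1)) / (p2 - p1))

print(slopes)  # two negative slopes of comparable size -> nearly linear decay
```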
In several adjacent seas the effect of friction has been studied by H. Jeffreys, who used a method developed by G. I. Taylor and first applied to conditions in the Irish Sea. The principle is simply that under stationary conditions the net amount of tidal energy which is brought into an area must equal the amount which is lost in the same area by dissipation due to friction. Therefore a determination of the net amount of tidal energy which is brought into an area represents also a determination of the dissipation.
Combined influence of friction and the rotation of the earth on tidal currents in shallow water in the Northern Hemisphere (according to Sverdrup). For explanation, see text.
These studies have found an interesting application. It appears to be established by astronomers that the speed of rotation of the earth is very slowly decreasing, so that during a century the length of the day increases on an average by about one thousandth of a second. This slowing up may be caused by the dissipation of tidal energy, because estimates of the dissipation give values which correspond to the energy needed for bringing about the observed change in the earth's period of rotation.
So far, we have considered the effect of friction on the tides. In order to study theoretically the effect of friction on tidal currents, it is necessary to add the frictional terms (p. 475) in the equations of motion applicable to long gravitational waves (p. 555), and to integrate the equations. Such integration was performed by Sverdrup (1927) on the assumption that only the vertical turbulence need be considered and that the coefficient of eddy viscosity was constant. The boundary conditions were that at the free surface the shearing stresses should be zero and at the bottom the velocity should be zero. The results give some idea about the effect of friction, although the assumption of a constant eddy viscosity is not in agreement with more recent results according to which the eddy viscosity near the bottom increases rapidly with increasing distance from the bottom.
The more important conclusions can be summarized as follows. Near the bottom there exists a “layer of frictional influence” the thickness of which depends upon the ratio s = (2T sin ϕ)/T0 and upon the value of the eddy viscosity, and above which the tidal currents have the
Tidal currents in the North Sea, lat. 58°17′N, long. 2°27′E, depth 80 m, demonstrating the effect of friction when approaching the bottom. Measurements by Helland-Hansen on August 7 and 8, 1906.
Figure 148 shows an example of current measurements in the North Sea which appear to confirm the above conclusions. Other examples are found in Sverdrup's discussion (1927) of current measurements on the North Siberian Shelf, but in several of these cases it was necessary to take into account that the ice offered a resistance to the tidal motion and also that occasionally a nearly discontinuous increase in density at some depth brought complications. In the latter case an approximation could be obtained by introducing two layers of constant eddy viscosity separated by a layer of no eddy viscosity, the latter being the layer of very great stability.
The theoretical treatment of the subject has been expanded by Fjeldstad (1929, 1936) who has found integrals of the equations of wave motion in cases in which the eddy viscosity can be represented as a simple function
(A) Observed variations with depth of tidal currents at different lunar hours, according to measurements by Sverdrup on August 1, 1925, in lat. 76°36′N, long. 138°30′E. (B) Computed variation with depth of tidal currents, assuming an eddy viscosity which increases linearly from the bottom to the surface (according to Fjeldstad).
At the bottom one should expect, from analogies with experimental work in laboratories (p. 479), that the eddy viscosity will be small, having a value which depends upon the roughness of the bottom and the “friction velocity.” Near the bottom the eddy viscosity should increase linearly with increasing distance, the increment being proportional to the friction velocity. At some greater distance from the bottom, stability of the stratification may influence the eddy viscosity, and in very shallow water the eddy viscosity must reach a maximum below the free surface and decrease to a small value at the very surface. In homogeneous shallow water it may be expected, however, that the introduction of an eddy viscosity which increases linearly from the bottom to the surface will give a good approximation because conditions close to the bottom exercise the greatest influence upon the character of the motion and because at some distance from the bottom the value of the eddy viscosity is of minor importance. This is illustrated by the example in fig. 149. To the left are represented the components of the tidal current in the direction of progress of the tide wave, at the time of maximum current at the surface (marked I) and at the five following tidal hours. The curves are based on observations at three depths—0, 12, and 20 m—on
Observations of tidal currents at different distances from the bottom and within the layer of frictional resistance are not available from many localities and the factual information as to the effect of friction on tidal currents is therefore meager. Measurements from the North Sea off the coast of Germany have been discussed by Thorade (1928), who has studied the influence of friction by a different method of attack. In the North Sea the gravitational forces can be directly determined because the slope of the surface due to the tide wave can at any time be derived from tidal observations at coastal stations. Furthermore, Corioli's force and the accelerations can be derived from the current measurements and the frictional forces can therefore be found by means of the equations of wave motion because all other terms in the equations are known. Thorade's results are, in general, in agreement with the conclusions which have been presented, but many details need further examination. It is of particular interest, however, to observe that on an average during one tidal period Thorade finds that the eddy viscosity is very small at the bottom, increases rapidly with increasing distance from the bottom, but decreases again when approaching the surface. The general character of this variation is in agreement with the above considerations as to the variation of the eddy viscosity.
The influence of friction on tidal currents is also evident from studies of the tidal currents in the Dover Straits by J. van Veen (1939). He finds there that the velocity distribution between the surface and the bottom can be represented by means of a function of the form v = a·z^(1/n), where n equals about 5.2. This implies that the eddy viscosity is approximately proportional to z^(4.2/5.2), meaning that the increase is somewhat less than that corresponding to a linear law, but no conclusions can be drawn as to the numerical values of the coefficient.
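The quoted exponent is just the arithmetic of a constant-stress layer: if v varies as z^(1/n), then dv/dz varies as z^(1/n − 1), so an eddy viscosity A with the stress ρA·dv/dz roughly constant must vary as z^(1 − 1/n) = z^(4.2/5.2) for n = 5.2. A one-line check (sketch of the arithmetic only):

```python
n = 5.2
# v ~ z**(1/n)  =>  dv/dz ~ z**(1/n - 1)  =>  A ~ 1/(dv/dz) ~ z**(1 - 1/n)
exponent = 1 - 1 / n
print(exponent)  # about 0.808, i.e. 4.2/5.2
```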
The effect of lateral mixing on tidal currents has so far not been examined, but it is possible that friction arising from lateral turbulence is of importance close to coasts. | <urn:uuid:134ad326-911c-414c-a10b-8ce0d79928bf> | 4.03125 | 2,708 | Academic Writing | Science & Tech. | 47.634553 |
HTTP Handlers and Factories
ASP.NET provides a low-level request/response API that enables developers to use
.NET Framework classes to service incoming HTTP requests. Developers accomplish
this by authoring classes that support the System.Web.IHttpHandler interface
and implement the ProcessRequest() method. Handlers are often useful when
the services provided by the high-level page framework abstraction are not required
for processing the HTTP request. Common uses of handlers include filters and
CGI-like applications, especially those that return binary data.
Each incoming HTTP request received by ASP.NET is ultimately processed by a specific
instance of a class that implements IHTTPHandler. IHttpHandlerFactory provides the infrastructure
that handles the actual resolution of URL requests to IHttpHandler instances.
In addition to the default IHttpHandlerFactory classes provided by ASP.NET, developers can optionally create and register
factories to support rich request resolution and activation scenarios.
Configuring HTTP Handlers and Factories
HTTP handlers and factories are declared in the ASP.NET configuration as part
of a web.config file. ASP.NET defines an <httphandlers> configuration section
where handlers and factories can be added and removed. Settings for
HttpHandlerFactory and HttpHandler are inherited by subdirectories.
For example, ASP.NET maps all requests for .aspx files to the PageHandlerFactory class in
the global machine.config file:
<add verb="*" path="*.aspx" type="System.Web.UI.PageHandlerFactory,System.Web" />
Creating a Custom HTTP Handler
The following sample creates a custom HttpHandler that handles
all requests to "SimpleHandler.axd".
VB Simple HttpHandler
A custom HTTP handler can be created by implementing the IHttpHandler
interface, which contains only two members. By checking IsReusable, an
HTTP factory can query a handler to determine whether the same instance can be used to service multiple requests. The
ProcessRequest method takes an HttpContext instance as a parameter, which
gives it access to the Request and Response intrinsics.
In the following sample, request data is ignored and a constant
string is sent as a response to the client.
Imports System.Web

Namespace Acme
    Public Class SimpleHandler
        Implements IHttpHandler

        Public Sub ProcessRequest(context As HttpContext) Implements IHttpHandler.ProcessRequest
            ' Request data is ignored; a constant string is sent to the client.
            context.Response.Write("Hello from SimpleHandler")
        End Sub

        ' IsReusable is a read-only property on IHttpHandler, not a method.
        Public ReadOnly Property IsReusable As Boolean Implements IHttpHandler.IsReusable
            Get
                Return True
            End Get
        End Property
    End Class
End Namespace
After placing the handler class definition in the application's \App_Code directory,
the handler class can be specified as a target for requests. In this case, all
requests for "SimpleHandler.axd" will be routed to an instance of the
SimpleHandler class, which lives in the Acme namespace:
<add verb="*" path="SimpleHandler.axd" type="Acme.SimpleHandler" /> | <urn:uuid:a3e2cb97-e5e0-4987-a019-60e1fb08ac9b> | 2.78125 | 587 | Documentation | Software Dev. | 35.103575 |
The Magic BallScience brain teasers require understanding of the physical or biological world and the laws that govern it.
There was a magic show. A man put a solid metal ball under an opaque cover. After a while, he took the cover off of the ball and there was nothing left but some liquid.
Answer: It was a ball of gallium at 80°F. After the ball was heated past gallium's melting point of about 86°F (29.8°C), it melted and became a liquid.
We can use what we know of geometric sequences to understand geometric series. A geometric series is a series or summation that sums the terms of a geometric sequence. There are methods and formulas we can use to find the value of a geometric series. It can be helpful for understanding geometric series to understand arithmetic series, and both concepts will be used in upper-level Calculus topics.
A geometric series is what happens when we sum a geometric sequence, okay? A sequence is a series of numbers, the sum is always all added up together. And to find the sum of a geometric series we have a number of different equations at our disposal, okay?
So what we have is for a finite series, okay, that is a series with a set number of terms, we have these 2 equations at the top of the board. Oops, and I had miswritten them. This should be an a sub 1. Sorry about that. So, what we have is the first term times 1 minus r to the n, over 1 minus r. This is the exact same thing as a sub 1 times r to the n minus 1, over r minus 1, okay? These are opposite statements: if you switch one of these, you switch the other. The negative ones cancel. So either one of these is perfectly fine, okay? Your book may have one; just go with whatever your book has or your teacher tells you. Okay.
In general I will use this equation, okay? The first one. And the reason I do that is because this is the formula for a finite series. We also have another formula for a infinite series and basically that's one that never ends, okay? And the reason I chose this is because the denominator is going to be the same for both of these and not having to remember when to switch your denominator makes my life a little but easier, okay? So I'm going to use these 2, if you want to use these 2, that's perfectly fine as well. But basically what we have, so we have these 2 for finite and one for infinite.
One way you can tell the difference is for the finite one, you're summing a sub n okay, you are summing the first n terms, whatever that maybe. For the infinite series, we don't have an n. So that's telling us we don't have a specified term number which means we're summing everything, okay? There is one restriction though that we have to have when we are summing a infinite series, and that is that our absolute value of our rate has to be less than 1, okay? And what that means is that our terms have to be getting smaller, okay?
And I say positive or negative because they can switch back and forth. But basically, the numeric part of our numbers has to be getting smaller. And how that actually works is I've written out this sequence right here: 8, 4, 2, 1, one half, one fourth, and basically what we're doing is dividing by 2 every time, or multiplying by one half, because we always have to multiply when finding our geometric sequences and series. And what happens, if we added up all these terms together, eventually the terms down here are so small they're not going to do anything. So our next terms will be one eighth, one sixteenth, one thirty-second, so on and so forth. Eventually those numbers, when we're dealing with whole numbers, aren't going to make a difference, okay. We add one one-thousandth to a number we already have, it's not going to make a difference. So that's how this infinite series equation works. Okay? You're just counting on these numbers to eventually be so small they're not going to affect our sum. Okay?
So 2, really 3, different equations for summing a geometric series. Try to pick one of these 2 finite formulas, and then you have to remember the infinite one as well. We have our finite and our infinite.
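The formulas from the lesson can be checked numerically. Here is a minimal sketch; the 8, 4, 2, 1, ... example follows the transcript, while the function name and structure are my own:

```python
def geometric_sum(a1, r, n=None):
    """Sum of a geometric series with first term a1 and common ratio r.

    With n given: the first n terms, a1 * (1 - r**n) / (1 - r).
    With n omitted: the infinite sum a1 / (1 - r), valid only when |r| < 1.
    """
    if r == 1:
        if n is None:
            raise ValueError("infinite series diverges for r = 1")
        return a1 * n  # every term is just a1
    if n is None:
        if abs(r) >= 1:
            raise ValueError("infinite sum requires |r| < 1")
        return a1 / (1 - r)
    return a1 * (1 - r**n) / (1 - r)

# The lesson's sequence 8, 4, 2, 1, 1/2, ... has a1 = 8 and r = 1/2.
print(geometric_sum(8, 0.5, 4))  # first four terms: 8 + 4 + 2 + 1 = 15.0
print(geometric_sum(8, 0.5))     # all the terms: the shrinking tail sums to 16.0
```

Note how the infinite sum (16) is only one unit more than the first four terms (15): the later terms really do stop mattering.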
These are commonly known as the narrow-winged damselflies, and there are 13 British species. Their adult body length ranges from 25 - 50 mm, and they contain the largest number of species in Northern Europe. The adults are usually red and black or blue and black.
On the left is Coenagrion puella, the Azure damselfly or Pond damselfly. Its wingspan is 41 mm and body length 33 mm. The adult flies from May to August. It is found near water meadows with lush grass, canals with abundant reeds, and ditches, and is one of the most common species found in garden ponds. It is common in England and Ireland, but less common in Scotland. This one was just moulting into an adult. The female lays her eggs in the tissues of plants on the water surface.
The venation of the wings is used in identification to species level, however there is disagreement between entomologists in naming the veins, so identification for a beginner is usually easiest using illustrations.
They have 10 or 11 abdominal segments. All males have a pair of claspers on segment 10, and their reproductive organs on segment 2 or 3. In females the ovipositor is in segment 8 or 9. Some females may have a pair of appendages on segment 10.
Before mating the male must transfer sperm from the genital opening on segment 9 to the reproductive organs on segments 2 and 3. Then on finding a female he grabs her by the neck with his claspers. She curves her body around until the tip of her abdomen touches his reproductive organs on segments 2 and 3 to collect the sperm. This is known as the copulation wheel. After mating the pair may fly in tandem with the male leading. Females mate with more than one male and store the sperm from these matings, although a female tends to use the sperm from the last mating. The male mating organ contains a structure that allows him to scrape or push aside the sperm from previous matings before depositing his own in the most favourable spot. The length of time he holds on to the female (the copulation wheel) will also prevent her mating with another.
The female places her eggs in the water, usually on the stems of aquatic plants. Some species actually crawl underwater to place their eggs deeper, some have a saw-like structure on the ovipositor to make slits in plants enabling them to place the egg inside the stem, and others just skim over the water dipping the tip of their abdomen in and scattering the eggs singly. In some species the male holds on to the female while she lays eggs. Adult life span can be as long as 2 months, but is usually no more than 2 or 3 weeks. | <urn:uuid:cb9d64ef-4d3a-48c2-bf9a-030a17fe72fc> | 3.625 | 557 | Knowledge Article | Science & Tech. | 60.658274 |
The spectroheliograph in use at the Yerkes Observatory brought proof of the distribution of elements such as calcium, at various densities, in layers of the Sun's atmosphere. Hale named the billowing clouds of calcium and hydrogen vapor "flocculi."
Hale made his greatest discovery while working at the Mt. Wilson Observatory. In laboratory experiments he matched the spectral lines emitted by sunspots, the dark areas of the Sun where the temperature drops hundreds of degrees below that of the surrounding photosphere. By applying the Zeeman effect of magnetic displacement to the spectral data, Hale proved the shifting magnetic field present in sunspots. Further observation and experimentation led Hale and his group to discover the Sun's entire magnetic field and its polarity reversals with the sunspot cycle.
Hale's instrumentation plus the observatories he designed and built are landmarks in the history of astrophysics and cosmology. | <urn:uuid:4d5f071f-e715-45cf-93a4-6153b71f3d70> | 3.921875 | 188 | Knowledge Article | Science & Tech. | 31.020129 |
Can you reduce friction by decreasing the weight of the object on
top? For example, if you wanted to reduce the friction between the
tires of a car and the ground underneath it. Let's say on the first
test the car's total weight was 3500 lb, if we took out unnecessary
items would it reduce the friction between the tires and the ground
on the second test? Considering that weight is the gravitational
force exerted on an object, would decreasing that force also
reduce the friction between the two objects?
Friction force = coefficient of friction times the support force.
You can test this with a spring scale pulling on a wood plank. You
can change the support force by adding weights on top of the
plank. You will see a difference in the frictional force on the
scale as you pull at a constant velocity.
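The relation in the answer is easy to put in code. In this sketch the masses and the 0.7 coefficient are illustrative assumptions, not measured values:

```python
G = 9.81  # gravitational acceleration, m/s^2

def friction_force(mass_kg, mu):
    """Kinetic friction on level ground: F = mu * N, where the
    support (normal) force N equals m * g."""
    normal = mass_kg * G
    return mu * normal

MU_TIRE = 0.7  # assumed rubber-on-asphalt coefficient, illustrative only
full = friction_force(1588, MU_TIRE)   # a ~3500 lb (~1588 kg) car
light = friction_force(1388, MU_TIRE)  # the same car with ~200 kg removed
print(full > light)  # True: less weight -> smaller normal force -> less friction
```

So yes: removing weight lowers the normal force, and the frictional force drops in direct proportion.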
Update: June 2012 | <urn:uuid:178b06f6-2e72-436b-995f-2189f6c1eb3a> | 3.203125 | 191 | Q&A Forum | Science & Tech. | 61.75 |
Coriolis Effect at Equator
Although the Coriolis effect is zero on the equator, the
rotation of the Earth may still affect ballistics there. For
example, suppose that a shell is fired directly eastward along the
equator, in the absence of wind, and it hits a target ten miles
away. When the shell is fired it has an eastward motion due to the
rotation of the Earth plus a force (explosion) that sends the shell
out of the barrel of the gun in the same direction. If the same
target is relocated ten miles to the west of the original firing
spot, and another shell is fired under identical conditions (same
gun, same type of ammunition, same angle of elevation), the shell
will theoretically land where?
It lands where it was aimed.
The Coriolis effect (there is no Coriolis force any more than there
is centrifugal force) happens when an object moves north or south on
a rotating body, free from the surface.
A shell shot north from the equator leaves the gun with an E-W
velocity the same as the tangential velocity of Earth's equator. It
flies north, free from contact with the surface and moves to a place
where the tangential (not angular) velocity of Earth's surface is
less. To the earthbound observer, it veers east.
A shell fired along a line of latitude is fired and lands at points
that are moving at the same tangential velocity so the earthbound
observer sees no effect.
Note that the frame of reference is crucial. We launch rockets west
to east as close to the equator as possible because orbital velocity
is measured not in relation to the ground, but to the earth's center, as it were.
R. W. "Bob" Avakian
B.S. Earth Sciences; M.S. Geophysics
Oklahoma State Univ. Inst. of Technology
There should actually be a small Coriolis effect that will make the
shell's ranges different in the two directions, even if it never
leaves the plane of the equator.
The initial velocity imparted to the shell relative to the spinning
earth has the same magnitude in both cases. In the eastward case,
the firing of the shell will cause the earth to spin a little slower
(until the shell lands); in the westward case, the firing of the
shell will cause the earth to spin a little faster (again, until the shell lands).
The Coriolis effect influences the shell's motion only when it
changes its distance to the Earth's axis of rotation. Changing
latitude does this, which is the context in which the Coriolis
effect is ordinarily encountered. At the Equator, the only factor
that can change distance to the rotation axis is the height attained
in the trajectory.
Think of it in terms of a shell fired straight up (vertically) from
the equator. (Neglect air resistance for this argument.) As the
shell rises, its eastward velocity remains the same as the Earth's
at the equator. However, its radius from the axis becomes larger,
so the land surface would appear to turn faster. Relative to the
land, the shell would appear to move to the west. Then, as the
shell falls, its radius decreases, and its eastward velocity appears
to pick up again, matching that of the earth's surface when
it reaches the surface. So, from a perspective on the surface of
the rotating earth, the projectile's trajectory would start
vertical, curve to the west, and then return to earth exactly vertical.
Richard Barrans, Ph.D., M.Ed.
Department of Physics and Astronomy
University of Wyoming
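The vertical-shot argument above has a classic first-order closed form; the sketch below uses it (the 300 m/s muzzle speed is an illustrative assumption):

```python
import math

OMEGA = 7.292e-5  # Earth's sidereal rotation rate, rad/s
G = 9.81          # m/s^2

def westward_drift(v0, lat_deg=0.0):
    """Westward landing offset of a shell fired straight up at speed v0.

    First-order rotating-frame result:
        d = (4/3) * Omega * v0**3 * cos(latitude) / g**2
    """
    return (4.0 / 3.0) * OMEGA * v0**3 * math.cos(math.radians(lat_deg)) / G**2

# A shell fired straight up at 300 m/s from the equator stays aloft about
# 61 s (2 * v0 / g) and lands roughly 27 m west of the gun: small, not zero.
print(round(westward_drift(300.0), 1))  # 27.3
```

The cosine factor shows the height-driven effect is largest at the equator and vanishes at the poles, the opposite of the familiar latitude-driven Coriolis deflection.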
Behind this unassuming door, off a small side street in Cambridge Massachusetts, is the MIT Plasma Science and Fusion Center. There's little indication that inside scientists are using some of the most powerful magnetic fields in the world to manipulate the same stuff that the Sun is made of.
The first stop was the Alcator C-Mod tokamak. A tokamak is a kind of nuclear reactor that physicists hope will someday power the cities of the future. Inside the big blue cylinder is a hollow, doughnut-shaped cavity where powerful magnetic fields start out by compressing disperse plasma ions into a dense ring. The ring heats up, and a nuclear reaction starts to take place when ions get close to each other. Usually two ions repel because they have the same charge, just like two magnets with the same poles repel each other. But if they have enough energy, the ions can overcome that repulsion and knock right into each other. When that happens, the strong nuclear force takes over and the two ions fuse, releasing a lot of energy. In theory, if done right, the energy released would be more than the energy needed to run the machine, and the excess could be used to generate electricity. This is the key to building a fusion power plant. Unfortunately, scientists are not nearly there. Fusion research has been going on for years, and scientists have made great strides, but the net gain in energy still eludes them. The Alcator C-Mod won't be the machine to power the future, but it has helped lay a lot of the theoretical groundwork for bigger tokamaks that might. The Alcator C-Mod itself might be shut down soon because of proposed budget cuts to fusion research.
The churning plasma gets extremely hot, and it has to flow smoothly, otherwise it might melt the machine. Inside this rectangular case is a laser that shoots inside of the tokamak to check that the magnetic field inside is keeping the scorching plasma away from the walls of the chamber.
This is command central for the Alcator C-Mod. The screens display important information about how the reactor is functioning. This is a recording from an earlier run; the tokamak was offline when we were there. From left to right, the screens show a visual image of the inside of the chamber, how hot all of the surfaces of the chamber are getting, a cross-section of the magnetic fields confining the burning plasma within, a rundown of radio frequencies also used to help shape the plasma, and a chart of other important information like power usage, magnetic field strength and the like.
All of the rooms we walked through were surrounded by thick concrete, a reminder that when these machines are running, no one is actually in the same room. The thick radiation shielding makes the lab feel like a kind of bunker in places, but it's necessary for safety. As we headed downstairs to the lab's linear accelerator, we first passed by a container holding some of its more radiological components. A linear accelerator is a kind of particle accelerator that shoots a thin beam of energized particles at a target at the end of a straight pipe. There are ones around the world for different purposes, but the one here is mostly used to test instruments and materials needed for various fusion experiments. It's entirely built and run by the graduate students at MIT. The beam starts in the blue box at the end. Inside, an energized emitter releases charged particles, atomic nuclei with one neutron and one proton known as "deuterons." Using electric charges, they're hustled into the beam pipe where they'll be accelerated up to almost the speed of light.
As the ions shoot down the beam pipe, they pick up energy as electrodes switch quickly between positive and negative charge. Magnetic fields focus the beam to a width smaller than a human hair.
The deuterons shoot out of the tube at the bottom of the photo and hit the black spot at the end of the copper pipe. The speeding particles fuse to whatever sample is in the middle of the target, and eject neutrons off to the left, which instruments collect and measure. This neutron harvest is the main way that scientists hope to extract usable energy from fusion tokamaks. If they put a block of the right kind of material in the path of the whizzing neutrons, they'll hit it and transfer their energy into it. The block will heat up, and if it gets hot enough, scientists can use it to boil water, which will turn a turbine and generate electricity.
We moved on to a cavernous room where some more of the lab's biggest experiments are housed. This is the Levitated Dipole Experiment, designed to study the way plasmas behave, rather than trying to use it as a power source. The idea is to try to better understand "space weather." In the upper atmosphere charged ions get trapped inside Earth's magnetic field, resulting in belts of radiation and the aurora.
In essence, the LDX, as it's called, simulates the magnetic field of a planet inside its chamber. Magnetic fields trap plasmas inside it just like Earth traps ions ejected from the sun. What makes this experiment so cool is that in order to study the plasma at the center, scientists have to use powerful magnets to levitate a half-ton coil of wire in the middle when it's running. It has to float in the center, because if there were supports holding the coil to the side, they would disrupt the magnetic fields and ruin the experiment.
The view inside the LDX's sixteen-foot chamber. The hole at the bottom is where the round coil rests when idle. When the experiment is running, a winch first raises the coil up to the middle of the chamber then lets it go, suspending it between two powerful magnets. The coil hovers in the middle of the plasma cloud, while scientists watch the round vortexes that form around it.
The last experiment we stopped at was the Versatile
Toroidal Facility, another basic physics experiment. The idea is to understand what happens when two magnetic field lines
pinch together while plasma is trapped inside, a process physicists call magnetic reconnection. It might sound esoteric, but it happens all the time on the surface of the Sun. The sun is essentially one giant ball of plasma gas with a powerful and convoluted magnetic field. Every once in a while the field lines get twisted around and pinch together, causing a huge mass of charged particles to shoot away from the surface. This is where the charged particles that get trapped in our atmosphere come from. On Earth we know this as a solar flare, and if powerful enough, they can disrupt cell phone, TV and radio signals. Physicists are essentially creating their own tiny solar flares in the lab.
This long tube sticking out is the tail end of one of the instruments used to observe the plasma within.
This wire cage is one of the instruments used to see inside the experiment. Magnetic fields swirl around the criss-crossed copper strands, inducing electrical currents in the wires that physicists use to track the bigger magnetic field within. Temperatures inside the VTF are much cooler than in a tokamak, so the copper wires can be embedded in the plasma without melting.
It isn't possible for two signals that have different frequencies to have the same phase. They might be in phase for an instant, but because of the different frequencies, they cannot stay in phase.
However, you might divide them or mix them to a common frequency to produce square waves that are in phase, out of phase by a fixed amount, or at least near the same frequency.
These can be compared to produce an error signal which can then be used to pull one of the frequencies relative to the other so that the phase relationship is maintained.
This is called a phase-locked loop (PLL).
This is commonly used to produce a range of stable frequencies from one stable, but expensive, reference frequency like a crystal oscillator.
Not sure how you would do this with lasers, but I guess the principle is the same. | <urn:uuid:879af711-d5b4-44e4-9762-7fa9a1afd971> | 3 | 167 | Q&A Forum | Science & Tech. | 47.755411 |
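The error-signal feedback described above can be sketched as a toy digital PLL. The gains, sample rate and step count here are illustrative assumptions, not a recipe for real hardware:

```python
import math

def simulate_pll(f_ref, f0, kp=2.0, ki=0.05, dt=1e-4, steps=20000):
    """Toy phase-locked loop: a PI loop filter steers an oscillator so its
    phase tracks the reference; at lock its frequency equals f_ref."""
    phase_ref = phase_vco = 0.0
    freq = f0
    integ = 0.0
    for _ in range(steps):
        phase_ref += 2 * math.pi * f_ref * dt
        phase_vco += 2 * math.pi * freq * dt
        diff = phase_ref - phase_vco
        err = math.atan2(math.sin(diff), math.cos(diff))  # wrapped phase detector
        integ += ki * err                                  # integral term
        freq = f0 + kp * err + integ                       # pull the oscillator
    return freq

# An oscillator that starts at 98 Hz gets pulled onto the 100 Hz reference.
locked = simulate_pll(100.0, 98.0)
print(abs(locked - 100.0) < 0.1)  # True once the loop has locked
```

The integral term is what holds the oscillator exactly on frequency once the phase error has been driven to zero; the proportional term handles the transient pull-in.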
Organized areas of thunderstorm activity reinforce pre-existing frontal zones, and they can outrun cold fronts. This outrunning occurs in a pattern where the upper level jet splits into two streams. The resultant mesoscale convective system (MCS) forms at the point of the upper level split in the wind pattern in the area of best low level inflow. The convection then moves east and toward the equator into the warm sector, parallel to low-level thickness lines. When the convection is strong and linear or curved, the MCS is called a squall line, with the feature placed at the leading edge of the significant wind shift and pressure rise. This feature is commonly depicted in the warm season across the United States on surface analyses, as they lie within sharp surface troughs. If squall lines form over arid regions, a duststorm known as a haboob may result from the high winds in their wake picking up dust from the desert floor. Squall lines are depicted on National Weather Service surface analyses as an alternating pattern of two red dots and a dash labelled "SQLN" or "SQUALL LINE".
Structure and evolution of winter cyclones in the Central United States and their effects on the distribution of precipitation. Part V: Thermodynamic and dual-Doppler radar analysis of a squall line associated with a cold front aloft
Apr 01, 1998; ABSTRACT On 8-9 March 1992, a long-lived squall line traversed the state of Kansas, producing hail and damaging winds. It was... | <urn:uuid:1959237a-220e-4f95-8a23-1a4b36e1da73> | 3.84375 | 320 | Knowledge Article | Science & Tech. | 50.263769 |
Life as We Know It
Whale of a comeback, dancing cockatoos, sticky bees, and waltzing pond scum
- By Amanda Bensen, Joseph Caputo, T.A. Frail, Laura Helmuth and Abigail Tucker
- Smithsonian magazine, July 2009
Bumblebees search for flower petals that offer traction, University of Cambridge-led scientists have shown. Some petals are smooth and slippery, but others have cone-shaped cells that act like Velcro when bees touch down. The reward for sticking the landing? Bees can guzzle nectar more readily. For the flowers' part, more and longer visits by bees increase the chances of pollination. | <urn:uuid:5ffad20d-b7e8-4f41-ba1e-84c0c4f2e5df> | 2.890625 | 142 | Truncated | Science & Tech. | 49.27125 |
A pair of companies in Arizona are about to build a system to pull CO2 out of the atmosphere, attempting to prove that the "wind-scrubber" concept works. The scrubber will employ sodium hydroxide, which reacts with carbon dioxide, to remove CO2 from air drawn through the system. In principle, such systems could help to reduce carbon dioxide levels already in the atmosphere, thereby complementing attempts to reduce the amount of additional carbon being emitted.
There are a few problems with the system under consideration: it may not work; sodium hydroxide is caustic and toxic; and, according to the article, "the stored CO2 could be supplied to the oil industry for use in the process of enhanced oil recovery" -- which seems rather self-defeating, in the long-run.
All that said, the notion of figuring out ways to actively reduce existing carbon levels alongside reducing the amount of new carbon added to the atmosphere is a good idea. If, as some recent reports suggest, we may be already too late to prevent massive problems even if we manage to cut our emissions dramatically, aggressive carbon sequestration may be critical. Let's hope that the proof-of-concept test works -- and that they can then come up with a better technology (and lose the "use the carbon to pump more oil" idea).
I always wonder about this.
Why doesn't anyone bio-engineer plankton to be more efficient at scrubbing this stuff?
The key problem with this plan is that there's energy involved in getting that sodium hydroxide. It doesn't occur in nature and has to be electrolytically extracted from salt water.
Another plan I've heard of is to compress the CO2 into a liquid (it would be dry ice at normal pressure) and pump it into the deep seabed. It would dissolve into the seawater, which can absorb much more than the atmosphere, and would take many centuries to get back to the surface.
But I think the most compelling idea is to provide micronutrients (iron mostly) to the empty seawater of the south Pacific. The resulting algal bloom would extract up to a third of our current CO2 emission level, according to one estimate. | <urn:uuid:e264a766-a4b5-45d3-a320-ff119af477a4> | 3.203125 | 459 | Comment Section | Science & Tech. | 41.846565 |
Great Pacific Garbage Patch a bigger worry than tsunami debris: Debris from the Japanese tsunami is starting to wash ashore on the U.S. West Coast in a big way. Beachcombers from Northern California to Alaska are finding fishing floats, soccer balls and ships that have drifted thousands of miles across...

Digging into the Great Pacific Garbage Patch: Our earth is covered by more than 75 percent water, yet we know more about the moon than the depths of the sea. Today on World Oceans Day we celebrate and honor oceans by recognizing the underwater footprint we all unknowingly leave behind. When it comes to plastic, what you throw away doesn't really go [...]

The Great Pacific Garbage Patch: The Great Pacific Garbage Patch is a swirling mass of marine debris, a plastic soup of discarded bags and bottles, in the north-east of the Pacific Ocean, whose mass is impossible to determine accurately. An expedition to explore the patch sets sail from San Diego, California on May 28.

World's oceans are 'plasticized': A marine expedition of environmentalists has confirmed the bad news it feared -- the "Great Pacific Garbage Patch" extends even further than previously known.

Plastic trash in Pacific Ocean continues to grow: The amount of plastic in the ocean area known as the "Great Pacific Garbage Patch" has increased a hundredfold since the early 1970s, according to a new study, and the alarming findings could pressure California and other coastal states to do more to reduce plastic trash.

Plastic in Pacific is changing ocean habitats, study shows: In the Great Pacific Garbage Patch, plastic has increased by 100 times over in the past 40 years. Some sea creatures are now laying their eggs on plastic.

'Great Pacific Garbage Patch' Increased 100-Fold in 40 Years: Study: In the past 40 years, the size of the massive Pacific Garbage Patch, consisting mainly of plastics, chemical sludge, and other man-made flotsam. Due to the massive area and [...]

'Great Pacific Garbage Patch' Poses New Threat to Marine Life: Imagine a landfill twice the size of Texas, filled with junk, castoffs and other trash. Now imagine it's floating in the middle of the Pacific Ocean.
Dramatic and unprecedented plumes of methane – a greenhouse gas 20 times more potent than carbon dioxide – have been seen bubbling to the surface of the Arctic Ocean by scientists undertaking an extensive survey of the region.
The scale and volume of the methane release has astonished the head of the Russian research team who has been surveying the seabed of the East Siberian Arctic Shelf off northern Russia for nearly 20 years.
In an exclusive interview with The Independent, Igor Semiletov, of the Far Eastern branch of the Russian Academy of Sciences, said that he has never before witnessed the scale and force of the methane being released from beneath the Arctic seabed.
"Earlier we found torch-like structures like this but they were only tens of metres in diameter. This is the first time that we've found continuous, powerful and impressive seeping structures, more than 1,000 metres in diameter. It's amazing," Dr Semiletov said. "I was most impressed by the sheer scale and high density of the plumes. Over a relatively small area we found more than 100, but over a wider area there should be thousands of them."
Scientists estimate that there are hundreds of millions of tonnes of methane gas locked away beneath the Arctic permafrost, which extends from the mainland into the seabed of the relatively shallow sea of the East Siberian Arctic Shelf. One of the greatest fears is that with the disappearance of the Arctic sea-ice in summer, and rapidly rising temperatures across the entire region, which are already melting the Siberian permafrost, the trapped methane could be suddenly released into the atmosphere leading to rapid and severe climate change.
Dr Semiletov's team published a study in 2010 estimating that the methane emissions from this region were about eight million tonnes a year, but the latest expedition suggests this is a significant underestimate of the phenomenon. | <urn:uuid:1b6bb254-6084-433f-ada6-20ed72a1661e> | 3.71875 | 377 | Comment Section | Science & Tech. | 31.335965 |
The track pattern — dot-dash-dash-dash, dot-dash-dash-dot, dot-dash-dot-dot (".--- .--. .-..") — spells out "JPL" in Morse code, which translates letters and numbers into a series of short ("dot") and long ("dash") signals.
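The dot/dash-to-letter mapping is easy to verify; a three-entry lookup table is all the wheel pattern needs (the table below covers only J, P and L):

```python
# Morse code for the three letters stamped into Curiosity's wheels.
MORSE = {"J": ".---", "P": ".--.", "L": ".-.."}

def to_morse(text):
    """Encode text with the tiny table above, one space between letters."""
    return " ".join(MORSE[ch] for ch in text.upper())

print(to_morse("JPL"))  # .--- .--. .-..
```

Running it reproduces exactly the dot-dash sequence the wheels press into the Martian soil.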
Curiosity's signature wheel-print was a nod to NASA's lead center for unmanned planetary exploration, which built the rover and now commands it as Curiosity prepares to explore Mars in search of conditions habitable to past or present life.
The dashes and dots are more than just an autograph on the ground. They serve as "visual odometry" marks, which allow Curiosity's engineers to determine the position and orientation of the rover, as well as how far it traveled, by analyzing images of its tracks.
"We have intentionally put holes in the wheels to leave a unique track on Mars," Heverly said. "So if we are in sand dunes where we don't have lots of rock features around us, we can use those patterns to do our visual odometry."
In addition to the Morse code JPL, Curiosity's wheels also feature a zigzag cleat pattern.
Curiosity's short test drive was the latest in a series of instrument and equipment checkouts that the rover needs to pass before heading out toward its first major science target.
Earlier this week, Curiosity extended its 7-foot (2.1-m) robotic arm, which is capped by a turret of tools including a camera, drill, spectrometer, scoop and the mechanisms for sieving and portioning samples of powdered rock and soil.
"We unstowed the robotic arm and took a look at the tools on [its] end. It's kind of a Swiss army knife there where we have a lot of instruments," said Curiosity mission manager Michael Watkins of JPL. "We wanted to make sure all of that was working by doing these first motor checks, and all of that went successfully." | <urn:uuid:683cbc5e-dba8-467c-8a96-a755b4ef29b9> | 3.15625 | 414 | Truncated | Science & Tech. | 60.109011 |
As the Intergovernmental Panel on Climate Change (IPCC) puts the finishing touches to its final report of the year, two of its senior scientists look at what the panel is and how well it works. Here, a view from a leading researcher into temperature change.
The IPCC is a framework around which hundreds of scientists and other participants are organised to mine the panoply of climate change literature to produce a synthesis of the most important and relevant findings.
Politicians wave goodbye to the IPCC's objectivity, argues Dr Christy
These findings are published every few years to help policymakers keep tabs on where the participants chosen for the IPCC believe the Earth's climate has been, where it is going, and what might be done to adapt to and/or even adjust the predicted outcome.
While most participants are scientists and bring the aura of objectivity, there are two things to note:
- this is a political process to some extent (anytime governments are involved it ends up that way)
- scientists are mere mortals casting their gaze on a system so complex we cannot precisely predict its future state even five days ahead
The political process begins with the selection of the Lead Authors because they are nominated by their own governments.
Thus at the outset, the political apparatus of the member nations has a role in pre-selecting the main participants.
But, it may go further.
At an IPCC Lead Authors' meeting in New Zealand, I well remember a conversation over lunch with three Europeans, unknown to me but who served as authors on other chapters. I sat at their table because it was convenient.
After introducing myself, I sat in silence as their discussion continued, which boiled down to this: "We must write this report so strongly that it will convince the US to sign the Kyoto Protocol."
Politics, at least for a few of the Lead Authors, was very much part and parcel of the process.
And, while the 2001 report was being written, Dr Robert Watson, IPCC Chair at the time, testified to the US Senate in 2000 adamantly advocating on behalf of the Kyoto Protocol, which even the journal Nature now reports is a failure.
Follow the herd
As I said above - and this may come as a surprise - scientists are mere mortals.
The tendency to succumb to group-think and the herd-instinct (now formally called the "informational cascade") is perhaps as tempting among scientists as any group because we, by definition, must be the "ones who know" (from the Latin scire, to know).
You dare not be thought of as "one who does not know"; hence we may succumb to the pressure to be perceived as "one who knows".
The Alabama team produces data on atmospheric temperatures collected by weather balloons
This leads, in my opinion, to an overstatement of confidence in the published findings and to a ready acceptance of the views of anointed authorities.
Scepticism, a hallmark of science, is frowned upon. (I suspect the IPCC bureaucracy cringes whenever I'm identified as an IPCC Lead Author.)
The signature statement of the 2007 IPCC report may be paraphrased as this: "We are 90% confident that most of the warming in the past 50 years is due to humans."
We are not told here that this assertion is based on computer model output, not direct observation. The simple fact is we don't have thermometers marked with "this much is human-caused" and "this much is natural".
So, I would have written this conclusion as "Our climate models are incapable of reproducing the last 50 years of surface temperatures without a push from how we think greenhouse gases influence the climate. Other processes may also account for much of this change."
To me, the elevation of climate models to the status of definitive tools for prediction has led to the temptation to be over-confident.
Here is how this can work.
Computer models are the basic tools which are used to estimate the future climate. Many scientists (ie the mere mortals) have been captivated by an IPCC image in which the actual global surface temperature curve for the 20th Century is overlaid on a band of model simulations of temperature for the same period.
The observations seem to fit right in the middle of the model band, implying that models are formulated so capably and completely that they can reproduce the past very well.
Without knowing much about climate models, any group will be persuaded by this image to believe models are quite precise.
However, there is a fundamental flaw with this thinking.
You see, every modeller knew what the answer was ahead of time. (Those groans you just heard were the protestations of my colleagues in the modelling community - they know what's coming).
In my view, on the other hand, this persuasive image is not a scientific experiment at all. The agreement displayed may have as much to do with clever software engineering as with the first principles of science.
The proper and objective experiment is to test model output against quantities not known ahead of time.
Our group is one of the few that builds a variety of climate datasets from scratch for tests just like this.
Since we build the datasets here, we have an urge to be sceptical about arguments-from-authority in favour of the real, though imperfect, observations.
In these model vs data comparisons, we find gross inconsistencies - hence I am sceptical of our ability to claim cause and effect about both past and future climate states.
This year's IPCC report projects major climatic changes ahead
Mother Nature is incredibly complex, and to think we mortals are so clever and so perceptive that we can create computer code that accurately reproduces the millions of processes that determine climate is hubris (think of predicting the complexities of clouds).
Of all scientists, climate scientists should be the most humble. Our cousins in the one-to-five-day weather prediction business learned this long ago, partly because they were held accountable for their predictions every day.
Answering the question about how much warming has occurred because of increases in greenhouse gases and what we may expect in the future still holds enormous uncertainty, in my view.
How could the situation be improved? At one time I stated that the IPCC-like process was the worst way to compile scientific knowledge, except for all the others.
Improvements have been adopted through the years, most notably the publication of the comments and responses. Bravo.
I would think a simple way to let the world know there are other opinions about various aspects emerging from the IPCC font would be to provide some quasi-official forum to allow those views to be expressed.
These alternative-view authors should be afforded the same protocol as the IPCC authors, ie they themselves are their own final reviewers and thus would have final say on what is published.
At that point, I suppose, the blogosphere would erupt and, amidst the fire and smoke, hopefully, enlightenment may appear.
I continue to participate in the IPCC (unless an IPCC functionary reads this missive and blackballs me) because I not only am able to contribute from my own research, but there are numerous opportunities to learn something new - to feed the curiosity that attends a scientist's soul.
I can live with the disagreements concerning nuances and subjective assertions as they simply remind me that all scientists are people, and do not prevent me from speaking my mind anyway.
Don't misunderstand me.
Atmospheric carbon dioxide continues to increase due to the undisputed benefits that carbon-based energy brings to humanity. This increase will have some climate impact through CO2's radiation properties.
However, fundamental knowledge is meagre here, and our own research indicates that alarming changes in the key observations are not occurring.
The best advice regarding scientific knowledge, which certainly applies to climate, came to me from Mr Mallory, my high school physics teacher.
He proposed that we should always begin our scientific pronouncements with this statement: "At our present level of ignorance, we think we know..."
Good advice for the IPCC, and all of us.
John R Christy is Professor and Director of the Earth System Science Center at the
University of Alabama, Huntsville, US
He has contributed to all four major IPCC assessments, including acting as a Lead Author in 2001 and a Contributing Author in 2007 | <urn:uuid:cfdcb3d3-56bc-4fa9-991b-4763f194a076> | 3.140625 | 1,729 | Nonfiction Writing | Science & Tech. | 39.983988 |
Hugh Pickens writes "How has a 78-ton boulder traveled 130 meters inland from the sea since 1991? Live Science reports that geologists have puzzled for years over the mysterious boulders that litter the desolate coastline of Ireland's Aran Islands that somehow move on their own when no one is looking. The sizes of the boulders in the formations range 'from merely impressive to mind-bogglingly stupendous,' writes geoscientist Rónadh Cox. While some researchers contend that only a tsunami could push these stones, new research finds that plain old ocean waves, with the help of some strong storms, do the job. Some boulders move inland at an average rate of nearly 3 meters per decade, with one rock moving 3.5 meters vertically and 69 meters horizontally in one year. The team compared modern high-altitude photos of the coastline to a set of meticulous maps from 1839 that identified the location of the boulders' ridges — nearly 100 years after the most recent tsunami to hit the region, which struck in 1755. The Aran cliffs rise nearly vertically out of the Atlantic (video), leaving very deep water close to the shore. As waves slam into the sheer cliff, that water is abruptly deflected back out toward the oncoming waves. This backflow may amplify subsequent waves, resulting in an occasional storm wave that is much larger than one would expect. 'There's a tendency to attribute the movement of large objects to tsunami,' says Cox. 'We're saying hold the phone. Big boulders are getting moved by storm waves.'" | <urn:uuid:d511a727-9c42-48db-b8f6-d173e9e4b8fe> | 3.5 | 318 | Comment Section | Science & Tech. | 49.626038 |
There are a few different things at play here, but generally speaking: yes, a nucleus of ununoctium is larger than a nucleus of sodium, and no, the electron orbitals of ununoctium are not necessarily larger than those of sodium.
On a nuclear level, protons and neutrons do take up a finite amount of space. The size of the nucleus is partially determined by the number of protons and neutrons, but also by the binding energy of the nucleus (in other words, how energetically favorable the formation of that nucleus is).
Neon atomic orbitals
One measure of this is the energy of the first excited state in the nucleus: if the first excited state is very high in energy, it means the ground state is very well bound, and the nucleus is probably smaller than its nearby neighbors on the chart of the nuclides.
At the atomic (electron orbital) level, the more electrons an atom has, the more diffuse they have to be due to charge repulsion and the Pauli exclusion principle. However, the closure of a valence orbital can make a huge difference. Thus, an atom of cesium is larger than an atom of sodium, but an atom of chlorine is actually smaller than the sodium atom.
Two basic trends are apparent across the periodic table: atomic radius increases as you move down a column, and decreases as you move across a row. Sodium has a pretty big atomic radius because of that one valence electron; ununoctium, sitting in the same column as the noble gases, will likely have a closed valence shell (we don't know for sure yet!), meaning its atomic radius could very well be smaller than sodium's.
One useful thing to consider is that the size of the nucleus is orders of magnitude smaller than the size of the atom: 10^-15 meters versus 10^-10 meters. So even as the nucleus expands with increasing proton and neutron number, it has more than enough space to expand into without ever affecting the electrons!
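To make that separation of scales concrete, here is a small illustrative sketch; the `ScaleDemo` class is hypothetical, and the two radii are just the order-of-magnitude figures quoted above.

```java
public class ScaleDemo {
    static final double NUCLEUS_RADIUS_M = 1e-15; // order of magnitude only
    static final double ATOM_RADIUS_M = 1e-10;    // order of magnitude only

    // How many times wider the atom is than its nucleus (~1e5).
    static double radiusRatio() {
        return ATOM_RADIUS_M / NUCLEUS_RADIUS_M;
    }

    // Volume scales as the cube of the radius (~1e15).
    static double volumeRatio() {
        double r = radiusRatio();
        return r * r * r;
    }

    public static void main(String[] args) {
        System.out.printf("radius ratio ~ %.0e, volume ratio ~ %.0e%n",
                          radiusRatio(), volumeRatio());
    }
}
```

The radius ratio is about 10^5, so the volume ratio is about 10^15, which is why a growing nucleus never crowds the electrons.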
Kelly Chipps (AKA nuclear.kelly)
Department of Physics
Colorado School of Mines
Kimberly Lane from Billings, MT | <urn:uuid:aa3198e5-28c7-4369-b867-16df28bf7462> | 3.4375 | 446 | Q&A Forum | Science & Tech. | 38.514875 |
Potassium-40 heats up Earth's core
May 7, 2003
Radioactive potassium could be a significant source of heat in the Earth's core. V Rama Murthy from the University of Minnesota and colleagues have shown that potassium-40 can exist in the core of the Earth and provide heat via its radioactive decay. The result could have important implications for theories of thermal evolution of planetary cores and the origin of geomagnetic fields (V Rama Murthy et al. 2003 Nature 423 163).
Potassium-40, which has a radioactive half-life of about 1.2 billion years, could be an important source of heat in the Earth's core but this has never been unambiguously confirmed in an experiment. Murthy and colleagues used an iron and iron-sulphur mixture to represent the Earth's core and potassium silicate glass to represent the silicate shell. They measured the partition coefficient - the concentration of potassium-40 in the sulphur mix divided by its concentration in the silicate - at temperatures and pressures approaching those found deep in the Earth's mantle.
The researchers found that the logarithm of the partition coefficient is inversely proportional to temperature. The results suggest that potassium-40 can move from the silicate ‘shell’ to the iron-sulphur ‘core’ and that it would be possible for a high enough concentration of potassium-40 to build up in the core.
The team calculated a core potassium-40 content of between 60 and 130 ppm, which produces between 0.4 and 0.8 TW of heat. Estimates of the core-mantle boundary heat flux are between 8 and 10 TW, so the heat produced by potassium-40 could significantly contribute to the heat flux at the boundary. Recent studies have shown that the present level of heat flux would have been insufficient to sustain the Earth’s magnetic field for the past 3.5 billion years. This ‘extra’ radioactive heat could thus have allowed the field to exist.
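Combining the figures quoted above gives the possible share of the core-mantle boundary heat flux supplied by potassium-40. A minimal sketch (the `CoreHeat` class is hypothetical; the numbers are those reported in the article):

```java
public class CoreHeat {
    // Figures quoted in the article, in terawatts.
    static final double K40_HEAT_LOW = 0.4, K40_HEAT_HIGH = 0.8;
    static final double CMB_FLUX_LOW = 8.0, CMB_FLUX_HIGH = 10.0;

    // Smallest possible share: lowest K-40 heat over highest boundary flux.
    static double minShare() { return K40_HEAT_LOW / CMB_FLUX_HIGH; }

    // Largest possible share: highest K-40 heat over lowest boundary flux.
    static double maxShare() { return K40_HEAT_HIGH / CMB_FLUX_LOW; }

    public static void main(String[] args) {
        System.out.printf("K-40 supplies %.0f%% to %.0f%% of the CMB heat flux%n",
                          100 * minShare(), 100 * maxShare());
    }
}
```

With the quoted numbers, potassium-40 would account for roughly 4% to 10% of the heat flux at the core-mantle boundary.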
“We now plan to expand these measurements to much higher pressures and temperatures,” Murthy told PhysicsWeb. “We shall also extend the experiments to the other major radioactive heat sources in the Earth, uranium and thorium.”
About the author
Belle Dumé is Science Writer at PhysicsWeb | <urn:uuid:1199a65c-aa80-4dc4-9039-3ccc63991f5b> | 3.734375 | 475 | Truncated | Science & Tech. | 51.564577 |
Science subject and location tags
Articles, documents and multimedia from ABC Science
Friday, 24 May 2013
Sugar-free diets aren't just making headlines in the human world. Cockroaches have joined the anti-sugar trend.
Friday, 17 May 2013
Scientists have identified nanostructures in the ultra-black skin markings of an African viper that could inspire the quest to create the ultimate light-absorbing material.
Friday, 3 May 2013
Scientists curious about the winter vanishing act of a Madagascar dwarf lemur were astonished to find the animals curled up asleep in underground burrows.
Monday, 15 April 2013
Dugongs in one of Australia's largest populations appear to be getting sick and dying as a result of exposure to cold water, say researchers.
Wednesday, 10 April 2013
An unusual prehistoric fish with fins near its bottom has helped to solve the mystery over why most animals, including humans, have paired limbs.
Friday, 5 April 2013
A Seychelles freshwater turtle species declared extinct after decades of futile searches, never existed, say scientists.
Tuesday, 26 March 2013
Human colonisation of the Pacific led to the loss of at least 1000 species of land birds or 10 per cent of global bird biodiversity.
Thursday, 14 March 2013
Fossilised forms of a phallus-shaped invertebrate have shed light on a dramatic spurt in Earth's biodiversity that occurred half a billion years ago, a new study says.
Thursday, 7 March 2013
Fencing lions in wildlife reserves could help save them from extinction, says an international team of conservationists.
Wednesday, 13 February 2013
Weird and wonderful: A sea slug has taken the idea of a throw-away society to new levels with the discovery it discards its penis after sex, then grows a new one to use the next day.
Wednesday, 6 February 2013
Hibernation slows down the shortening of telomeres, and could explain why some rodents live longer than other animals, say researchers.
Friday, 18 January 2013
A lobster thrown live into boiling water may suffer for many seconds, says a scientist who argues that crustaceans can likely feel pain.
Thursday, 17 January 2013
The complex social structure of the red fire ant is made possible by a DNA fusion known as a supergene, say biologists.
Thursday, 17 January 2013
Barnacles have the largest penises, relative to body size, in the animal kingdom, and can capture sperm directly from water.
Wednesday, 16 January 2013
An Australian researcher who discovered a new species of flying frog near Ho Chi Minh City says it is a rare find so close to such a big city. | <urn:uuid:e9861535-a425-4c97-a9e2-b212e0b53de1> | 2.703125 | 554 | Content Listing | Science & Tech. | 42.702909 |
Sequencing by hybridization
Sequencing by hybridization is a class of methods for determining the order in which nucleotides occur on a strand of DNA. It is typically used to look for small changes relative to a known DNA sequence. The binding of one strand of DNA to its complementary strand in the DNA double helix (hybridization) is sensitive to even single-base mismatches when the hybrid region is short or when specialized mismatch-detection proteins are present. This is exploited in a variety of ways, most notably via DNA chips or microarrays carrying thousands to billions of synthetic oligonucleotides drawn from a genome of interest, plus many known variations or even all possible single-base variations.
This technology has largely been displaced by sequencing-by-synthesis methods.
Examples of commercial systems
Also see: Sequencing by ligation
|This article is licensed under the GNU Free Documentation License. It uses material from the Wikipedia article "Sequencing_by_hybridization". A list of authors is available in Wikipedia.| | <urn:uuid:d62f0d45-3ede-4a9c-81f8-56e920fb3a64> | 2.96875 | 282 | Knowledge Article | Science & Tech. | 29.811613 |
anomalous water
anomalous water, also called Orthowater, or Polywater, liquid water generally formed by condensation of water vapour in tiny glass or fused-quartz capillaries and with properties very different from those well established for ordinary water; e.g., lower vapour pressure, lower freezing temperature, higher density and viscosity, higher thermal stability, and different infrared and Raman spectra. For a few years after the announcement of the discovery of the substance (1968) by a group of Soviet scientists, many investigators held the view that the substance was a new form of water, possibly a polymer. In the 1970s thorough study established that anomalous water is ordinary water containing ionic contaminants that cause it to have the unusual properties.
| <urn:uuid:c40f1b8d-0aae-4d17-b043-4485fa2af12e> | 2.90625 | 181 | Knowledge Article | Science & Tech. | 26.17 |
Java provides generic sort methods for arrays:
void sort( Object[] arr )
void sort( Object[] arr, Comparator cmp )
The first method uses the natural compareTo method, while the second allows a Comparator to be specified. The sort algorithm is a modified mergesort, stable and guaranteed O(n · log(n)).
Since the Collection interface provides a toArray method, it is easy to sort any Collection by first converting it to an array.
The Java Collections class provides sort methods for Lists (and Java 8 added List.sort) that operate by dumping the List into an array, sorting the array, and then resetting the List elements in the sorted order.
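The pieces above can be tied together in a short, hypothetical example: natural ordering via compareTo, an explicit Comparator, and sorting a Collection by converting it to an array first (List.of requires Java 9 or later).

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class SortDemo {
    // Return a copy sorted by natural ordering (elements implement Comparable).
    static String[] naturalSort(String[] input) {
        String[] copy = input.clone();
        Arrays.sort(copy); // uses compareTo
        return copy;
    }

    // Return a copy sorted by an explicit Comparator (here: string length).
    static String[] lengthSort(String[] input) {
        String[] copy = input.clone();
        Arrays.sort(copy, Comparator.comparingInt(String::length));
        return copy;
    }

    public static void main(String[] args) {
        String[] fruit = { "pear", "fig", "banana" };
        System.out.println(Arrays.toString(naturalSort(fruit))); // [banana, fig, pear]
        System.out.println(Arrays.toString(lengthSort(fruit)));  // [fig, pear, banana]

        // Any Collection can be sorted by converting it to an array first.
        List<Integer> nums = List.of(3, 1, 2);
        Integer[] arr = nums.toArray(new Integer[0]);
        Arrays.sort(arr);
        System.out.println(Arrays.toString(arr)); // [1, 2, 3]
    }
}
```

Because the sort is guaranteed stable, elements that compare equal keep their original relative order.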
| <urn:uuid:102d381b-fbe6-430a-b9cd-175007b9a5e2> | 2.890625 | 147 | Documentation | Software Dev. | 38.947301 |
This is a part of the Howard T. Odum Collection
The work of Howard Odum generated an entire lexicon of terms that helped communicate his ideas to the world. Some terms, like energy and power, were drawn from existing disciplines but were used in new ways. Other terms such as emergy, energy hierarchy, net energy, and maximum power were new terms that Odum, his students, and his colleagues defined. This glossary defines the most important of these terms and provides some context for how they were used. -- Cutler J. Cleveland
An expression of all the energy and material resources used in the work processes that generate a product or service, calculated in units of one form of energy. In the early 1970’s Odum argued that traditional methods of measuring energy did not account for the “quality” of different forms of energy (e.g. sunlight, versus fossil fuels or electricity). Odum and his colleagues began using sunlight as the base to evaluate all other forms of energy, reasoning that, all other forms are nothing more than concentrated sunlight. David Scienceman, an Australian colleague of Odum’s, first coined the term "emergy." Odum believed that emergy was a universal measure of the work of nature and society made on a common basis and therefore a measure of the environmental support to any process in the biosphere.
| <urn:uuid:bcfd5be5-46de-990e-60ade6969007> | 3.40625 | 291 | Structured Data | Science & Tech. | 37.646923 |
Sounding rockets are suborbital ballistic rockets able to boost payloads of a few hundred kilograms to altitudes of 250-750 km, with almost vertical ascent and descent trajectories.
Originally conceived to sound the physical properties of the upper atmosphere, hence the name 'sounding' rockets, their use has been extended to provide 'weightlessness' conditions for experimental research in space physical and life sciences.
Those conditions are met during the freefall phase of the payload once the rocket motors have exhausted their thrust and have dropped. The freefall ends with the deployment of the parachute that lowers the payload to the ground with appropriate impact speeds (of about 8 m/s).
The duration of weightlessness, in the range from 6 to 13 minutes, is determined by the apogee reached by the rocket.
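A rough sanity check of that relationship, assuming a simple ballistic arc under constant gravity and ignoring drag, the boost phase, and the decrease of g with altitude (the class and method names are hypothetical):

```java
public class MicrogravityEstimate {
    static final double G = 9.81; // m/s^2, held constant (a simplification)

    // Time spent on a ballistic arc peaking at 'apogee' metres,
    // from launch altitude back down to the same altitude: 2 * sqrt(2h/g).
    static double arcSeconds(double apogee) {
        return 2.0 * Math.sqrt(2.0 * apogee / G);
    }

    public static void main(String[] args) {
        System.out.printf("250 km apogee: ~%.1f min%n", arcSeconds(250e3) / 60);
        System.out.printf("750 km apogee: ~%.1f min%n", arcSeconds(750e3) / 60);
    }
}
```

For a 750 km apogee this gives about 13 minutes, matching the quoted duration; for 250 km it returns about 7.5 minutes rather than 6, an overestimate because the weightless phase only runs from motor burnout to parachute deployment, not over the whole arc.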
Since 1982, ESA has used sounding rockets as carriers for its microgravity research programmes. Two different rocket configurations, MASER/TEXUS and MAXUS, are currently in use and their main features are given below.
Esrange, in northern Sweden, is the launch site where all sounding rockets for ESA’s microgravity research programme are launched from.
Performance and dimensions
| Mass | 285 kg | 260 kg | 485 kg |
| Diameter | 0.43 m | 0.43 m | 0.64 m |
| Length | 3.3 m | 3.3 m | 3.5 m |
| Apogee | 250 km | 250 km | 750 km |
| Residual acceleration | ≤10^-4 g | ≤10^-4 g | ≤10^-4 g |
| Microgravity duration | 6 min | 6 min | 13 min |
The sounding rocket modular concept
Sounding rocket experiments are accommodated in circular and stackable decks with useful diameters of 40 cm or 60 cm for MASER/TEXUS and MAXUS, respectively. These decks are eventually installed into cylindrical structures and attached by means of elastic dampers that reduce the impact of launch vibrations. The resulting assembly that includes power batteries, control electronics, data interfaces and service accesses constitutes an experiment module. The external structures can be hermetically sealed when the need arises for a pressurised environment.
Modules can be stacked on top of each other up to a maximum length limited by stability conditions. Besides the experiment modules other functional modules complement each sounding rocket mission, such as the service module for telemetries and telecommands as well for the rocket attitude control, the video module to relay multiple video channels to the receiving ground stations, the recovery system that commands the deployment of the parachute packet in the nosecone of the rockets, and the separation module needed to detach from the payload stack when the rocket motors are at the end of their thrust.
Most of the functional modules are reused from one flight to the next after some appropriate refurbishment and upgrade, as needed. The experiment modules can be re-used as well, whenever the experiments need several runs in microgravity either with new samples or with different boundary conditions, parameters, and experiment protocol. This combination of standard and modular blocks gives the Sounding Rocket programmes both a strong reliability asset and a unique flexibility facet.
Data transmission and ground control
Each experiment module is connected to the telemetry system hosted in the service module that, during flight, transmits the data to the ground at appropriate resolution and rate. Both housekeeping and scientific data are relayed, the latter including sequences of video images. Data is recorded onboard and on ground according to the required parameters of speed and accuracy.
The availability of real-time data allows the experimenters to follow the course of their experiments. If required by the experiment protocol or by contingency reasons, the process can be directed from the ground in response to the actual behaviour of the system being investigated; an uplink channel enables the transmission of telecommands in those cases. The number and nature of telemetries and telecommands are jointly pre-defined as part of the experiment module design and development.
During the launch phase, the experiment payloads experience both random vibration and linear accelerations; the latter reach a peak of about 12g and last for about 45 s in total, which is the time needed for the complete burn of both stages of the motor.
After burnout and separation of the motor from the experiment payloads, the attitude control system reduces the payload motions in order to obtain the minimum residual acceleration; at that point the microgravity phase starts and the experiments may be performed.
During launch the experiment modules may be operational in order to maintain certain conditions, such as a given temperature in a furnace, but have to be mechanically resilient to the launch vibrations and accelerations.
The experiment modules are recovered after landing and transported via helicopter back to the launch site; the scientific samples are then returned within few hours to the scientists.
The launch preparation campaign is performed at the Esrange premises, where state-of-the-art facilities are available for use by the scientific teams. The laboratories include clean benches, laminar flow benches, microscopes, centrifuges, autoclaves and incubators, to enable the investigators to perform the final flight preparation of their samples in the week before the launch.
The launch preparation activities allow for a late access to the experiment modules; such late access activities may include activities such as the insertion of biological samples prepared shortly before launch, or the activation of an experiment. Such late access activities may be performed up to 30 minutes before the scheduled launch time.
Last update: 9 November 2010 | <urn:uuid:8d1d3921-2efd-420f-a4aa-634b36c25123> | 3.625 | 1,140 | Knowledge Article | Science & Tech. | 27.55945 |
Category: Science in Action
Subject(s): Life Science/Biology
Keywords: geothermal hotsprings, astrobiology, fundamental features of life, life beyond earth, biochemistry, extremophiles, extreme environments, microbes, bacteria, ecosystems, organisms, complexity of life, ocean life, microbial life in extreme environments, yellowstone nati
|00:43:41||Listening for the Long Term (Webcast)|
Join us as we talk with Jill Tarter, Director of the Center for SETI Research and the inspiratio ...
|0:27:33||Talking with ET: The Language and Timescales of Interstellar Communication (Webcast)|
What if we did contact another intelligent life form in the universe? What should we say? What tr ...
|00:46:27||What About Intelligent Life? (Webcast)|
SETI is a scientific effort seeking to determine if there is intelligent life outside Earth. We ...
|00:32:24||Looking for Mars on Earth (Webcast)|
Chris McKay, Planetary Scientist at the NASA Ames Research Center, has traveled the world seekin ...
|00:35:56||Life at the Extremes (Webcast)|
Meet Breea Govenar, a biologist at Penn State University, as she speaks to us from aboard a rese ... | <urn:uuid:f69658c1-0fd4-45bb-9dc0-ff676322cff9> | 2.875 | 278 | Content Listing | Science & Tech. | 48.442869 |
Using this graphic and referring to it is encouraged, and please use it in presentations, web pages, newspapers, blogs and reports.
For any form of publication, please include the link to this page and give the cartographer/designer credit (in this case UNEP/GRID-Arendal)
ADEME, Bilan Carbone® Entreprises et Collectivités, Guide des facteurs d’émissions, 2007; US Environmental Protection Agency (www.epa.gov/solar/energy-resources/calculator.html); ESU-Services Consulting (Switzerland); World Wildlife Fund; Jean-Marc Manicore (www.manicore.com); Jean-Pierre Bourdier (www.x-environnement.org); fatknowledge.blogspot.com; www.actu-environnement.com; www.cleanair-coolplanet.org.
Uploaded on Thursday 16 Feb 2012
Examples of GHG emission amounts
Examples of GHG emission amounts generated by different activities or goods are scattered across the book in the form of proportional bubbles (in kilograms of CO2 equivalent). | <urn:uuid:91d1d413-b7bc-4c7b-a2a4-7c14e8318f45> | 2.734375 | 243 | Knowledge Article | Science & Tech. | 40.989849 |
Taken largely from the Talk.Origins Archive web site, and is used here with the kind permission of its author,
According to the theory of evolution, the "descent with modification" road to humans (or any other group, for that matter) is paved with a sequence of transitional fossils, spaced out in a time sequence reflected in the ages of the fossils found. Since fossils of soft-bodied animals are relatively rare (they don't fossilize easily), the record is rather spotty prior to the first appearance of vertebrates (in the form of jawless fishes), so this lesson will focus only on the fossil record of vertebrates
As we study the growing number of fossils, we find that they usually fit nicely into one group or another, and most of those groups clearly show gradual change over time, even phasing into new and different groups along the way, adding changes upon changes. Nevertheless, many of those earlier groups apparently had populations which continued to exist with very little change, producing the modern day representatives of those surviving groups. As a result, the picture painted by the fossils reveals an ongoing coexistence, of older more primitive forms continuing to live alongside the growing diversity of animals which they produced.
However, most of those groups along the way apparently failed to survive in their original forms. They became extinct. But fortunately, some members of some of those groups were fossilized, and a few of those are found from time to time, giving us the hit-or-miss, very spotty record of fossils which has lead us to hypothesize that picture of a branched tree of being which we call evolution. New fossils are being found every day, helping to fill in some of the gaps, and those fossils continue to confirm and strengthen that picture of life through time with ever-increasing detail.
In this lesson, we will peek at a very small sampling of this fossil record, focusing mainly on the forms and times when various human traits first appeared. We will build a type of "family tree" called a "cladogram", which emphasizes the first appearances of traits which are also diagnostic for major animal groups living today. If you would like to see more of the transitional details, go to one of the documents on the well done web site of Talk.Origins Archives: "Transitional Vertebrate Fossils" (<http://www.talkorigins.org/faqs/faq-transitional.html>). Most of the following information was taken from that site. It was compiled and presented by Kathleen Hunt, a PhD candidate in zoology at the University of Washington in the mid 1990s. The references cited here can be found at the end of that 5-part document.
A. WHAT IS A TRANSITIONAL FOSSIL?
1. "General lineage":
2. "Species-to-species transition":
3. Transitions to New Higher Taxa
There are now several known cases of species-to-species transitions that resulted in the first members of new higher taxa.
4. An Example of a Transition Series: from Synapsid Reptiles
The list of some 27 species which best documents the transition from mammal-like reptiles to mammals starts with pelycosaurs (early synapsid reptiles; Dimetrodon is a popular, advanced, example) and continues with therapsids and cynodonts up to the first unarguable "mammal". This covered some 160 million years, from the early Pennsylvanian (315 ma) to the late Jurassic (155 ma), with a 30 million year gap in the late Triassic. Most of the changes in this transition involved elaborate repackaging of an expanded brain and special sense organs, remodeling of the jaws & teeth for more efficient eating, and changes in the limbs & vertebrae related to active, legs-under-the-body locomotion. What is most striking (here, as well as in most other transitional fossils) is a mosaic mixture (existing in each species along the way) of some earlier (more primitive) traits along with newer, more derived traits, with a gradual decrease in the primitive traits, an increase in the derived traits, and gradual changes in size of various features through time. Some differences observed:
(*) Fenestrae are holes in the sides of the skull
(**) The presence of a dentary-squamosal jaw joint has been arbitrarily selected as the defining trait of a mammal.
5. Two Examples of Species-to-Species Fossil Sequences
Rose & Bown (1984) analyzed over 600 specimens of primates collected from a 700-meter-thick sequence representing approximately 4 million years of the Eocene. They found smooth transitions between Teilhardina americana and Tetonoides tenuiculus, and also beween Tetonius homunculus and Pseudotetonius ambiguus. "In both lines transitions occurred not only continuously (rather than by abrupt appearance of new morphologies followed by stasis), but also in mosaic fashion, with greater variation in certain characters preceding a shift to another character state." The T. homunculus - P. ambiguus transition shows a dramatic change in dentition (loss of P2, dramatic shrinkage of P3 with loss of roots, shrinkage of C and I2, much enlarged I1) that occurs gradually and smoothly during the 4 million years. The authors conclude "...our data suggest that phyletic gradualism is not only more common than some would admit but also capable of producing significant adaptive modifications."
B. WHY DO GAPS EXIST (OR SEEM TO EXIST)?
Species-to-species transitions are even harder to document. To demonstrate anything about how a species arose, whether it arose gradually or suddenly, you need exceptionally complete strata, with many dead animals buried under constant, rapid sedimentation. This is rare for terrestrial animals. Even the famous Clark's Fork (Wyoming) site, known for its fine Eocene mammal transitions, only has about one fossil per lineage about every 27,000 years. Luckily, this is enough to record most episodes of evolutionary change (provided that they occurred at Clark's Fork Basin and not somewhere else), though it misses the rapidest evolutionary bursts. In general, in order to document transitions between species, you need specimens separated by only tens of thousands of years (e.g. every 20,000-80,000 years). If you have only one specimen for hundreds of thousands of years (e.g. every 500,000 years), you can usually determine the sequence of species, but not the transitions between species. If you have a specimen every million years, you can get the order of genera, but not which species were involved. And so on. These are rough estimates (from Gingerich, 1976, 1980) but should give an idea of the completeness required.
Note that fossils separated by more than about a hundred thousand years cannot show anything about how a species arose. Think about it: there could have been a smooth transition, or the species could have appeared suddenly, but either way, if there aren't enough fossils, we can't tell which way it happened.
2. Discovery of the fossils
Documenting a species-to-species transition is particularly grueling, as it requires collection and analysis of hundreds of specimens. Typically we must wait for some paleontologist to take on the job of studying a certain taxon in a certain site in detail. Almost nobody did this sort of work before the mid-1970's, and even now only a small subset of researchers do it. For example, Phillip Gingerich was one of the first scientists to study species-species transitions, and it took him ten years to produce the first detailed studies of just two lineages (primates and condylarths). In a (later) 1980 paper he said: "the detailed species level evolutionary patterns discussed here represent only six genera in an early Wasatchian fauna containing approximately 50 or more mammalian genera, most of which remain to be analyzed." [emphasis added]
3. Getting the word out
Why don't paleontologists bother to popularize the detailed lineages and species-to-species transitions? Because it is thought to be unnecessary detail. For instance, it takes an entire book to describe the horse fossils even partially (e.g. MacFadden's "Fossil Horses"), so most authors just collapse the horse sequence to a series of genera. Paleontologists clearly consider the occurrence of evolution to be a settled question, so obvious as to be beyond rational dispute, so, they think, why waste valuable textbook space on such tedious detail?
What is truly amazing, given the conditions described above, is that the fossil record shows as many contiguous sequences of fossils as it does. And furthermore, as new fossils are found (and these are many per year) they always fit nicely (or closely) into the sequences already documented, both in time and morphology, and occasionally fill one of the many gaps as well. Remember, particularly in view of the overwhelming number of transitional sequences, the lack of fossils here and there does nothing to weaken the overall picture of descent with modification; the process of evolution is very much a reality.
4. Overview of the Cenozoic
Ma = millions of years ago
C. WHAT IS "PUNCTUATED EQUILIBRIUM"?
There's been a heated debate about which of these modes of evolution is most common, and this debate has been largely misquoted by laypeople. Virtually all of the quotes of paleontologists saying things like "the gaps in the fossil record are real" are taken out of context from this ongoing debate about punctuated equilibrium. Actually, no paleontologist that I know of doubts that evolution has occurred, and most agree that at least sometimes it occurs gradually, and the fossil record clearly shows this. What they're arguing about is how often it occurs gradually. You can make up your own mind about that. (As a starting point, check out Gingerich, 1980, who found 24 gradual speciations and 14 sudden appearances in early Eocene mammals; MacFadden, 1985, who found 5 cases of gradual anagenesis, 5 cases of probable cladogenesis, and 6 sudden appearances in fossil horses; and the numerous papers in Chaline, 1983. Most studies seem to show between 1/4-2/3 of the speciations occurring fairly gradually.)
"Anagenesis", "phyletic evolution": Evolution in which an older species, as a whole, changes into a new descendent species, such that the ancestor is transformed into the descendant.
"Cladogenesis": Evolution in which a daughter species splits off from a population of the older species, after which both the old and the young species coexist together. Notice that this allows a descendant to coexist with its ancestor.
D. PREDICTIONS: EXPECTATIONS IN THE FOSSIL RECORD:
Predictions of evolutionary theory: Evolutionary theory predicts that fossils should appear in a progression through time, in a nested hierarchy of lineages, and that it should be possible to link modern animals to older, very different animals. In addition, the "punctuated equilibrium" model also predicts that new species should often appear "suddenly" (within 500,000 years or less) and then experience long periods of static equilibrium (little or no change). Where the record is exceptionally good, we should find a few local, rapid transitions between species. The "phyletic gradualism" model predicts that most species should change gradually throughout time, and that where the record is good, there should be many slow, smooth species-to-species transitions. These two models are not mutually exclusive -- in fact they are often viewed as two extremes of a continuum -- and both agree that at least some species-to-species transitions should be found.
Overview of the Transitional Vertebrate Fossil Record? The 35 page listing of transitional vertebrates offered in the TalkOrigins Archive, is a reasonably complete picture of the vertebrate record as it is now known. As extensive as it may seem, it is still just a crude summary, and some very large groups were, for convenience, left out. For instance, the list mostly includes transitional fossils that happened to lead to modern, familiar animals. This may unintentionally give the impression that fossil lineages proceed in a "straight line" from one fossil to the next. That's not so; generally at any one time there are a whole raft of successful species, only a few of which happened to leave modern descendents. The horse family is a good example; Merychippus (about 15 mya) gave rise to something like 19 new three - toed grazing horse species, which traveled all over the Old and New Worlds and were very successful at the time. Only one of these lines happened to lead to Equus, though, so that's the only line described in that listing. As they say, "Evolution is not a ladder, it's a branching bush."
A Bit Of Historical Background. When The Origin Of Species was first published, the fossil record was poorly known. At that time, the complaint about the lack of transitional fossils bridging the major vertebrate taxa was perfectly reasonable. Opponents of Darwin's theory of common descent (the theory that evolution has occurred; not to be confused with his separate theory that evolution occurs specifically by natural selection) were justifiably skeptical of such ideas as birds being related to reptiles. The discovery of Archeopteryx only two years after the publication of The Origin of Species was seen as a stunning triumph for Darwin's theory of common descent. Archeopteryx has been called the single most important natural history specimen ever found, "comparable to the Rosetta Stone" (Alan Feduccia, in "The Age Of Birds"). O.C. Marsh's groundbreaking study of the evolution of horses was another dramatic example of transitional fossils, this time demonstrating a whole sequence of transitions within a single family. Within a few decades after the Origin, these and other fossils, along with many other sources of evidence (such as developmental biology and biogeography) had convinced the majority of educated people that evolution had occurred, and that organisms are related to each other by common descent. (Today, modern techniques of paleontology and molecular biology further strengthen this conclusion.)
Since then, many more transitional fossils have been found, as sketched out in the listing. Typically, the only people who still demand to see transitional fossils are either unaware of the currently known fossil record (often due to shoddy and very dated arguments they may have read) or are unwilling to recognize it for some reason.
What Does The Fossil Record Show Us Now? The most noticeable aspects of the vertebrate fossil record, those which must be explained by any good model of the development of life on earth, are:
1. A remarkable temporal pattern of fossil morphology, with "an obvious tendency for successively higher and more recent fossil assemblages to resemble modern floras and faunas ever more closely" (Gingerich, 1985) and with animal groups appearing in a certain unmistakable order. For example, primitive fish appear first, amphibians later, then reptiles, then primitive mammals, then (for example) legged whales, then legless whales. This temporal- morphological correlation is very striking, and appears to point overwhelmingly toward an origin of all vertebrates from a common ancestor.
2. Numerous "chains of genera" that appear to link early, primitive genera with much more recent, radically different genera (e.g. reptile - mammal transition, hyenids, horses, elephants), and through which major morphological changes can be traced. Even for the spottiest gaps, there are a few isolated intermediates that show how two apparently very different groups could, in fact, be related to each other (ex. Archeopteryx, linking reptiles to birds).
3. Many known species-to-species transitions (primarily known for the relatively recent Cenozoic mammals), often crossing genus lines and occasionally family lines, and often resulting in substantial adaptive changes.
4. A large number of gaps. This is perhaps the aspect that is easiest to explain, since for stratigraphic reasons alone there must always be gaps. In fact, no current evolutionary model predicts or requires a complete fossil record, and no one expects that the fossil record will ever be even close to complete. Evolutionary biologists consider gaps as the inevitable result of chance fossilizations, chance discoveries, and immigration events. | <urn:uuid:dd3e9be7-b953-4af6-8f16-82008d52b7d4> | 3.671875 | 3,424 | Academic Writing | Science & Tech. | 36.811451 |
A genetic takeover on a scale never seen before among vertebrates is taking place in the western US.
An alien fish is not only hybridising with the locals, but also breaking down the genetic barriers between once-distinct species. Such multi-way hybridisation could turn out to be much more common than we thought.
Dave McDonald, an evolutionary biologist at the University of Wyoming in Laramie, and his colleagues sampled DNA from three species of fish in the Colorado river basin in the south-western US - two native species, the flannelmouth and bluehead suckers, and one introduced species, the white sucker - as well as hybrids between them.
They found that white and flannelmouth suckers breed so extensively with each other that all sorts of genetic intermediates exist; white suckers also occasionally breed with bluehead suckers.
The muttsucker proxy
The team also found another sort of fish, which they dubbed the "muttsucker" - a hybrid containing genetic material from all three original species. Since bluehead and flannelmouth suckers have never been reported to cross-breed on their own, it seems that introducing the white sucker acts as a genetic bridge to break down the barriers between these two native species.
If this process continues, the gene pools of the three species could eventually merge into a single, indistinct "hybrid swarm", which may eventually pull in other native suckers as well. If so, says McDonald, "this introduced species isn't going to wipe out just one native. It's taking out a whole assemblage of native species."
McDonald's is the first published study to report solid evidence of three-way hybrids in vertebrates, says Ole Seehausen, an evolutionary ecologist at the Swiss Federal Institute for Aquatic Science and Technology in Kastanienbaum, although Seehausen says he has unpublished evidence that the many species of cichlid fish in Lake Victoria in Africa arose from a three or even four-way hybridisation in the distant past.
Have your say
Tue Jul 22 00:18:40 BST 2008 by Alan Crouch
Not ruling out coincidence?
Tue Jul 22 00:58:40 BST 2008 by Joe Lalonde
WOW...Is this not what our species is doing on this planet? Turning into one mixed hominoid.
Tue Jul 22 01:07:06 BST 2008 by Peter M
No, because we are all part of one species to begin with. There has not been enough separation for enough time between different groups of people for separate species to form.
Tue Jul 22 02:31:05 BST 2008 by Alex
If these tribrids end up being more successful, what's the big deal? Isn't that the point of evolution?
Tue Jul 22 04:05:47 BST 2008 by Dann
Exactly. If the resulting tribrids are better adapted to their environment, then they 'deserve' to prevail. It's not even like the original species are becoming extinct. Their genes will continue being passed down through the generations, just with slightly different 'packaging'.
It would seem that one big omni-species with a huge amount of genetic variation would also be more likely to survive environmental change than a few individual species that are more genetically restricted. Tribridisation might actually save them in the long run.
Then again, humans can get rather fond of their abstract classification methods (and the whole notion of a 'species' is about as abstract as you can get). Some people like things to be black and white, rather than these pesky shades of grey you'll find in the 'real' world.
Tue Jul 22 08:24:50 BST 2008 by Mike
You need to see the larger picture.
The point is, what are these changes doing to the ecosystem?
Here in the UK the American signal crayfish is certainly better adapted and 'deserves to prevail', and is taking over our rivers - the problem is that it's doing it too well, and the knock-on effect is doing in everything from other aquatic inverts to fish fry. Even kingfisher numbers are down because they have nothing to eat.
Wed Jul 23 04:24:03 BST 2008 by Dann
Natural selection works mostly at the species level (depending on how you define 'species' of course). If there are knock-on effects of one species evolving, then that creates selective pressure on the other species that are affected, who then either adapt or die in their own right.
Ecosystems are always performing a delicate balancing act, so you can't really 'destroy' an ecosystem; you really just alter the balance in some way. Some species might become extinct in the short term, but life in general tends to find new balances. It's a pity for those species that can't adapt quickly enough, but those that manage to survive become better for it.
The only constant in life is change. Sometimes that change is of the catastrophic type, but life in general has already proven robust enough to survive for several billion years.
Or am I now looking at *too large* a picture? :)
Tue Jul 22 19:27:59 BST 2008 by Christopher Wininger
It's all just nature unless you want to survive. You're right, this is just evolution in action and nature doesn't care. The only problem is that this affects our food supply and the balance of our ecosystem. Of course we can just stand back and watch it happen, and that would be fine for nature. It's just that all the changes are taking place so fast that it may not be fine for us. Despite being so prevalent, it is my belief that we are a relatively delicate species. We sit high on the food chain and depend on other animals to pre-process our food. We are relatively large and consume a lot of food, and without the help of other large life forms (fur, wood, etc.) we do not fare well outside of a narrow temperature range. All it takes is a series of small changes and we could be wiped out. During past extinctions it has been the large dominant species that get knocked off. The dinosaurs were toast but small mammals and insects survived. Let's get it straight: we are not trying to preserve species diversity, global temperatures, and coral reefs to save the planet. The planet will be here. It is us we are trying to save.
Justin Mullins describes how "Alice" can use quantum entanglement to send two bits of information to "Bob" using one photon (20 May, page 26). According to Mullins's description, Bob has to receive one photon from an entangled pair and then receive the other from Alice. So Bob still has to receive two photons in order to receive two bits of information.
I may have missed something here, but the process Mullins describes does not appear to offer any advantages over using two photons to send two bits in a conventional way. Also, since Bob has to physically receive both photons, there is no way for Alice to send the information faster than light.
This is what ...
The name comes from their (H)igh iron abundance, with respect to other ordinary chondrites, which is about 25-31% by weight. Over half of this is present in a free state, making these meteorites strongly magnetic despite the stony chondritic appearance.
A probable parent body for this group is the S-type asteroid 6 Hebe, with less likely candidates being 3 Juno and 7 Iris. It is supposed that these meteorites arise from impacts onto small near-Earth asteroids broken off from 6 Hebe in the past, rather than originating from 6 Hebe directly. The H chondrites have very similar trace element abundances and oxygen isotope ratios to the IIE iron meteorites, making it likely that they both originate from the same parent body.
The most abundant minerals are bronzite (an orthopyroxene) and olivine. Characteristic is the fayalite (Fa) content of the olivine, 16 to 20 mol%. They also contain 15-19% nickel-iron metal and about 5% troilite. The majority of these meteorites have been significantly metamorphosed, with over 40% being in petrologic class 5 and most of the rest in classes 4 and 6. Only a few (about 2.5%) are of the largely unaltered petrologic class 3.
Historically, the H chondrites have been named bronzite chondrites or olivine bronzite chondrites for the dominant minerals, but these terms are now obsolete. | <urn:uuid:0c7daf87-d130-4451-acbe-9b614f38d78a> | 3.765625 | 322 | Knowledge Article | Science & Tech. | 38.733326 |
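Since the Fa content of the olivine is diagnostic for the group, a rough classification can be sketched in code. The H range (16-20 mol%) comes from the text above; the L and LL ranges used below are approximate literature values and should be treated as assumptions:

```python
# Rough classifier for ordinary chondrites by the fayalite (Fa) content
# of their olivine. The H range (16-20 mol%) is from the text above;
# the L and LL ranges are approximate literature values (assumptions).
def classify_by_fayalite(fa_mol_percent):
    if 16 <= fa_mol_percent <= 20:
        return "H chondrite"
    if 22 <= fa_mol_percent <= 26:
        return "L chondrite"
    if 27 <= fa_mol_percent <= 32:
        return "LL chondrite"
    return "outside the ordinary-chondrite ranges / ambiguous"

print(classify_by_fayalite(18))  # -> H chondrite
```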
A worldwide database of over 13,800 integrated U–Pb and Hf-isotope analyses of zircon, derived largely from detrital sources, has been used to examine processes of crustal evolution on a global scale, and to test existing models for the growth of continental crust through time. In this study we introduce a new approach to quantitatively estimating the proportion of juvenile material added to the crust at any given time during its evolution. This estimate is then used to model the crustal growth rate over the 4.56 Ga of Earth's history. The modelling suggests that there was little episodicity in the production of new crust, as opposed to peaks in magmatic ages. The distribution of age-Hf isotope data from zircons worldwide implies that at least 60% of the existing continental crust separated from the mantle before 2.5 Ga. However, taking into consideration new evidence coming from geophysical data, the formation of most continental crust early in Earth's history (at least 70% before 2.5 Ga) is even more probable. Thus, crustal reworking has dominated over net juvenile additions to the continental crust, at least since the end of the Archean. Moreover, the juvenile proportion of newly formed crust decreases stepwise through time: it is about 70% in the 4.0–2.2 Ga time interval, about 50% in the 1.8–0.6 Ga time interval, and possibly less than 50% after 0.6 Ga. These changes may be related to the formation of supercontinents. | <urn:uuid:57cab0bb-b07f-423b-9535-d42536e8b165> | 2.71875 | 318 | Academic Writing | Science & Tech. | 48.601812 |
TeacherExpt:Sodium ethanoate ‘stalagmite’
From Learn Chemistry Wiki
In this demonstration experiment a supersaturated solution of sodium ethanoate crystallises rapidly, forming a ‘stalagmite’.
This is a demonstration used to show the rapid crystallisation of a supersaturated solution in a spectacular way and explore the energy change involved. It can also be used to stimulate interest in public presentations.
Apparatus and chemicals
The teacher requires:
- Eye protection
- Beaker (250 cm3)
- Measuring cylinder (25 cm3)
- Watch glass (large one, about 10 cm diameter)
- Stirring rod
- Bunsen burner, tripod and gauze
- Access to a top-pan balance (1 d.p. is sufficient)
- A black background is probably better than a white one for this demonstration.
- The ‘stalagmite’ can be re-heated and used again. Keep the solution clean and free from dust – this could cause it to crystallise prematurely.
Health & safety
Wear eye protection throughout.
Before the demonstration:
A. Weigh out about 50 g of sodium ethanoate trihydrate into the beaker and, using the measuring cylinder, add about 5 cm3 of water.
B. Heat the beaker over a low flame and stir until a clear solution is obtained.
C. Cover the beaker with a watch glass and allow to cool to room temperature to give a supersaturated solution.
D. Remove the watch glass and place a few crystals of sodium ethanoate on it.
E. Pour the supersaturated solution slowly onto the sodium ethanoate crystals. The solution should crystallise immediately on contact with the crystals. It will form a growing ‘stalagmite’ of solid sodium ethanoate as more and more of the solution is poured onto it.
The watch glass becomes warm as heat is released during the crystallisation process.
If re-heating is shown to the class, emphasise that the solid is dissolving (in its own water of crystallisation) and not melting. A supersaturated solution of sodium thiosulfate, obtained in a similar way, is also stable until a seed crystal is added.
Commercial ‘heat packs’ are available which use the principle of supersaturation. Here a mechanical disturbance, usually a spring loaded button inside the pack, induces crystallisation. The packs can be re-used by heating in boiling water to re-dissolve the crystals.
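As a rough sketch of the energy involved in such a heat pack: if we assume a crystallisation enthalpy of about 270 J per gram of sodium ethanoate trihydrate and an effective specific heat of about 3.2 J per gram per kelvin for the mixture (both figures are approximate literature values, not measurements from this experiment), the warming can be estimated:

```python
# Back-of-the-envelope estimate of the warming when a supersaturated
# sodium ethanoate solution crystallises. Both constants below are
# assumed, rough literature values - they are not from this experiment.
ENTHALPY_J_PER_G = 270.0       # assumed heat released on crystallisation
HEAT_CAPACITY_J_PER_G_K = 3.2  # assumed effective specific heat of the mixture

def temperature_rise_k(mass_crystallising_g, total_mass_g):
    heat_released_j = mass_crystallising_g * ENTHALPY_J_PER_G
    return heat_released_j / (total_mass_g * HEAT_CAPACITY_J_PER_G_K)

# If roughly 50 g of a 60 g mixture crystallises:
print(round(temperature_rise_k(50, 60), 1), "K rise (very approximate)")
```

In practice the pack cannot exceed the trihydrate's melting point of about 58 °C, and heat losses make the real rise smaller, but the estimate shows why the watch glass feels distinctly warm.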
Health and Safety checked, November 2006
This experiment has been reproduced from Practical Chemistry: http://practicalchemistry.org/experiments/introductory/solutions-and-water/sodium-ethanoate-stalagmite,64,EX.html
This website has a movie showing the reaction. | <urn:uuid:fab842cc-e98a-4541-90a4-948cde771e14> | 3.734375 | 564 | Tutorial | Science & Tech. | 35.8159 |
Naked eye astronomy
What is that - naked eye astronomy? Well... it's a very old science.
There were astronomers way before the telescopes.
People looked at the stars for thousands of years without any optical aid. And, with a lot of perseverance, they got awesome results. They learned the apparent rotation of the stars, sun, moon and planets. It was counterintuitive to realize the Earth's rotation, so many early astronomers thought that our sky revolves around us.
Planets were special objects because they don't follow the same "sky paths" as normal stars.
Without any optical aid, there are 5 visible planets. They look like bright stars.
You can simulate the motion of stars and planets in Sky Map Online by going forward (or backward) a specific time interval. For example, move forward one hour at a time. Stars will rise from east and set to the west.
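That hour-by-hour drift follows directly from the sky's apparent rotation of about 15 degrees per hour. A small sketch using the standard altitude formula (the latitude and declination below are illustrative choices):

```python
import math

# Altitude of a star as its hour angle (HA) advances at the sky's
# apparent rotation rate of ~15 degrees per hour, via the standard
# relation: sin(alt) = sin(lat)*sin(dec) + cos(lat)*cos(dec)*cos(HA)
def altitude_deg(lat_deg, dec_deg, hour_angle_deg):
    lat, dec, ha = map(math.radians, (lat_deg, dec_deg, hour_angle_deg))
    s = math.sin(lat) * math.sin(dec) + math.cos(lat) * math.cos(dec) * math.cos(ha)
    return math.degrees(math.asin(s))

# A star on the celestial equator (dec = 0) watched from latitude 45 N:
for hours_from_transit in (0, 3, 6):
    ha = 15.0 * hours_from_transit
    print(hours_from_transit, "h:", round(altitude_deg(45.0, 0.0, ha), 1), "deg")
# -> 0 h: 45.0 deg, 3 h: 30.0 deg, 6 h: 0.0 deg (the star is setting)
```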
It's really amazing how people actually unlocked a lot of our universe's mysteries by just rigorous observation of the stars and planets. They figured out and calculated the planets' orbits with accuracy a few hundred years ago - Kepler, Tycho, Copernicus and others got incredible results (for their time). It's easy to study the apparent movement of the space objects (sun, moon, planets and stars). It doesn't require any optical aid.
Here are some simple quizzes to be solved by "naked eye astronomy":
- Does our sun rises from the same direction every morning or not?
- We know it rises from the east, but does it really rises exactly at the east?
- Can you determine the approximate trajectory of the sun during the day?
- Does the sun go higher during summer time?
- Do stars rise and set like our sun?
- Are there stars which are visible all day (never rise or set)?
- If you live in northern hemisphere, can you locate Polaris?
- How many constellations do you know?
- Can you see Milky Way from your backyard? What about from a dark site (e.g. from a camping site in mountains)?
All these questions can be answered without the need of binoculars or telescopes.
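As a hint for the first few quizzes: spherical trigonometry gives the azimuth of sunrise (measured east of north, ignoring refraction) as cos(Az) = sin(dec) / cos(lat), so the sun rises exactly due east only at the equinoxes. A quick sketch (latitude 45 N and declinations of +/-23.4 degrees are illustrative values):

```python
import math

# Azimuth of sunrise, in degrees east of north, for an observer at
# latitude lat_deg when the Sun's declination is dec_deg (refraction
# and the Sun's angular size are ignored): cos(Az) = sin(dec) / cos(lat)
def sunrise_azimuth_deg(lat_deg, dec_deg):
    return math.degrees(math.acos(
        math.sin(math.radians(dec_deg)) / math.cos(math.radians(lat_deg))))

# Seen from latitude 45 N (an illustrative choice):
print(round(sunrise_azimuth_deg(45.0, 0.0), 1))    # equinoxes: 90.0 (due east)
print(round(sunrise_azimuth_deg(45.0, 23.4), 1))   # June solstice: north of east
print(round(sunrise_azimuth_deg(45.0, -23.4), 1))  # December solstice: south of east
```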
Because we don't have the time, interest or right conditions, we may even be too ignorant about our surroundings (our sky).
Here are a few funny examples:
- During a power outage in a big city, people were finally able to see our Milky Way like a silver cloud... Some of these naive city dwellers were not sure about what is that and called 911 asking if the "silver cloud" caused the power outage.
- The claim that Polaris is the brightest star in the sky. Polaris is indeed an important star for people in the Northern hemisphere, because it helps us find which way is north. But, even if it's a fairly bright star, it is not the brightest one. Sirius is the brightest star (of course, if you exclude our very own Sun).
- There is an Internet hoax claiming almost every August (since 2003) that Mars is going to be so close to Earth that it can be seen as a small full moon. No matter how close Mars comes to Earth on its own orbit, it's still so far away that we see it as a bright star (and not as a disk) without any telescope.
What can we see with our naked eyes?
- planets like bright stars (Venus and Jupiter are the brightest night objects after our Moon)
- hundreds of stars (from suburbs)
- recognize some constellations and identify stars by their names
- Milky Way during summer and winter evenings
- Milky Way dark lanes and thousands of stars from a dark location
- Andromeda galaxy - only if you know where to look and at a dark location
- shooting stars (meteoroids), comets, fireballs, artificial satellites
- plane lights and mistake them for satellites :) - usually the plane lights are flashing (red and blue on the tip of the wings), satellites are steady lights moving at constant speed on the sky
Are stars colored? Yes, they are! The problem is that our eyes can't really make out colors in low light. With some experience, somebody can distinguish a red star (or planet) from a blue or white one.
For example, Mars, Betelgeuse (in Orion) and Aldebaran (in Taurus) are red objects. Can you distinguish their color from blue or white stars?
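Those color differences come from surface temperature: cooler stars emit most strongly at longer, redder wavelengths. A quick sketch using Wien's displacement law (the stellar temperatures below are rough published values, used here as illustrative assumptions):

```python
# Wien's displacement law relates a star's surface temperature to the
# wavelength at which it emits most strongly: lambda_peak = b / T.
# The temperatures below are rough published values, used here purely
# as illustrative assumptions.
WIEN_B_M_K = 2.898e-3  # Wien's displacement constant, metre-kelvin

def peak_wavelength_nm(temp_k):
    return WIEN_B_M_K / temp_k * 1e9

for name, temp_k in [("Betelgeuse (red)", 3500),
                     ("Sun (yellow-white)", 5800),
                     ("Sirius (blue-white)", 9900)]:
    print(f"{name}: peaks near {peak_wavelength_nm(temp_k):.0f} nm")
# Cooler stars peak at longer (redder) wavelengths, hotter ones toward
# the blue/ultraviolet - exactly the colour trend your eye can pick up.
```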
The first space flight is a term that is debated. This is because “first spaceflight” can refer to three key events in the history of space exploration. The first would be the launch of the German rocket, V-2. This was the first rocket to successfully reach space. However, the Russians don’t consider a suborbital launch to count. In that case the title of the first successful spaceflight would actually go to the launch of Sputnik 1. Sputnik was the first artificial satellite to be launched into orbit. The final definition would be the first successful human spaceflight. This honor goes again to the Russians. The Vostok 1 mission was the first successful launch of a human being into outer space and the first successful orbit of the earth by a human being. Each and every one of these “firsts” was important to the advancement of space exploration. So I will try to describe their significance in detail.
The German V-2 rockets built on the liquid-fuel rocket designs of the Father of Modern Rocketry, Robert Goddard. The German government funded the research of its rocket scientists in order to create a weapon, so the first long-range missiles were also a German invention. The significance of the V-2 launch is that it proved that rockets could be used to enter space. Many of the scientists who worked on the project would later work for NASA after WWII.
The launch of Sputnik was a significant event. It was the first successful launch of an artificial satellite into orbit, taking the design of the V-2 rocket a step further with a vehicle based on an intercontinental ballistic missile. The impact of the launch of Sputnik was immediate, and it started the Space Race that spawned the development of the majority of modern space technology.
The Vostok 1 mission is probably the most important first flight. It was the equivalent of the Wright brothers’ flight at Kitty Hawk, North Carolina. It proved that human spaceflight was possible and made later feats like the Apollo 11 mission achievable. Without this first flight, other marvels like Mir, the International Space Station and the space shuttle would never have come to exist.
So while which flight counts as the first spaceflight is still debatable, the three candidates for the designation were each important firsts in their own right. They expanded our knowledge of space, helped human technology advance by leaps and bounds, and even made modern life as we now know it possible.
If you enjoyed this article there are others that you might enjoy on Universe Today. There is an interesting article about Laika, the first dog in space. There is also an article commemorating the early pioneers of space exploration.
There are also great resources on the web. You can check out russianspaceweb.com to learn more about the history of the Russian space program. You can also check out the NASA.gov website to learn more about Sputnik.
You can also check out Astronomy Cast. Episode 124, Space Capsules, is particularly pertinent to this article.
Humankind’s experience visiting worlds beyond our own begins and ends with the dozen Apollo astronauts who skipped about on tiny swaths of the moon. But that doesn’t mean we can’t experiment with how and where we might visit (or live) on the extreme surfaces of other worlds. A few studies out recently are doing just that.
Radiation? Big deal
Our planet provides a protective shield from the most damaging radiation produced by the sun—a shield not available on the moon or Mars. It’s a hazard for any human leaving the planet, and it’s a hazard for plants, too.
However, a new study of the Chernobyl area in Ukraine, site of the famous nuclear accident, is actually raising hopes for space farming.
Even 25 years after the catastrophic nuclear accident at Chernobyl, the area around the site harbors radioactive soil. But researchers working there have found that oil-rich flax plants can adapt and flourish in that fouled environment with few problems. Exactly how the flax adapted remains unclear, but what is clear is that two generations of flax plants have taken root and thrived there, and that could have big implications for growing plants aboard spacecraft or on other planets at some point in the future. [Popular Science]
In fact, scientists from the Slovak Academy of Sciences’ Institute of Plant Genetics and Biotechnology saw that just five percent of the 720 proteins they studied had changed. One team member’s fascinating idea about the flax’s hardiness is that because plants were around back when the Earth’s surface was exposed to more radiation than it is today, they are “remembering” abilities they formerly used to withstand that environment.
This is no (small) cave
The moon’s temperatures aren’t quite ideal for human comfort; they can range from more than 200 degrees Fahrenheit to colder than -200 degrees. But Indian scientists recently found a Goldilocks cave (sort of).
The cave holds steady at a (relatively) comfortable -4, since the moon’s weather can’t penetrate its 40-foot-thick wall. It could also protect astronauts from “hazardous radiations, micro-meteoritic impacts,” and dust storms, according to a paper published in the journal Current Science, as quoted by Silicon India. [The Week]
The cave, discovered by India’s lunar orbiter Chandrayaan-1, is about 400 feet wide and more than a mile long. The previous record-holder for largest known hole on the moon was much smaller, at about 213 feet wide and 289 feet deep.
Still, even if humans succeed in finding a decent place to set up shop for a moon base, who wants to take a trip there just to spend all their time farming? Luckily, some scientists are on the case. Popular Mechanics shows the starfish-shaped design of indoor farms that create both food and oxygen, all while we humans kick back with a space beer.
Gene Giacomelli, a University of Arizona agricultural researcher and the lead investigator of a NASA-funded growth chamber for the moon, envisions a multiarmed, inflatable greenhouse building staffed with robots that do the bulk of the work. “Astronauts should not have to be farmers,” he says. [Popular Mechanics]
Six astronauts have already landed on Mars. In Russia. Sort of.
Last June, those brave souls undertaking the Mars500 Project were locked away in their pretend space capsule to simulate the 500-plus days of a mission to Mars. Last month, they reached the halfway point—emerging from the module outside Moscow in spacesuits to conduct fake scientific experiments in a sandy room designed to look like Mars. Rather than trying to simulate 500 days of weightlessness here on Earth, the scientists running the project have focused on the psychological problems of traveling to another world—mainly, being cooped up with the same insufferable people for so long.
So far the crew has been coping. “After a couple of weeks they were really a team, certainly with some temporary ups and downs of individual crewmembers,” Zell told The Associated Press. “A big challenge is missing daylight, missing visual perceptions,” he said. “They also have to live with the food which they have on board and with the air which they have on board.” … Christer Fuglesang, an ESA astronaut who took part in two shuttle missions and made five real spacewalks, said the 18-month duration of the experiment strongly challenged the participants. “What they must miss, I’m sure, is the interaction with their families and friends,” he told the AP. [AP]
DISCOVER: How Long Until We Find a Second Earth?
DISCOVER: Children of Chernobyl
80beats: Our Galaxy May Have 50 Billion Exoplanets–and It’s Still Making More
80beats: 24 Years After Chernobyl, Radioactive Boars Still Roam Germany
80beats: A Trip to Mars Could Reduce Astronauts’ Muscles to Spaghetti | <urn:uuid:60246010-9099-4e3c-9a3e-7532282dc876> | 4.03125 | 1,073 | Content Listing | Science & Tech. | 48.463193 |
It’s been a Haskell programming rule of mine for a long time that explicit recursion is an admission of failure. First, linear recursions have already been captured in one of the Prelude (or Data.List) functions. Second, recursions or iterations over a data structure are almost always either fmap or one of the Data.Foldable functions. Third, the list monad (and list comprehensions) capture a common recursion pattern, one found especially in path-finding, alpha-beta AI, and other try-every-path problem spaces.
If you’re recursing, and lists are involved, the recursion pattern has already been captured in a Prelude or Data.List function. map, the fold*s, the scan*s, filter, find, unfoldr and group(By) cover, I think, every loop I’ve ever written in an imperative language.
The dynamic “scripting” languages like Perl and Python have had foreach loops for years, to cover the most common case of looping over every element of an array. Java (since 1.5) and C# have this too. That’s a handy bit of syntactic sugar, but it is only a weak replacement for map, and doesn’t help at all for any of the others.
Programmers new to FP, including me when I was learning Haskell, often find it hard to think in terms of folds and maps. So go ahead, write the explicitly recursive version. But when you’re done, replace it with the appropriate library function. In time, you won’t need to write it down any more and will think of what you would write, recognize it as one of these higher-order functions, and write that down instead. Eventually, you’ll think in the folds and maps directly.
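For instance, here is a minimal illustration of my own (the names total and total' are mine, not Prelude functions): an explicitly recursive sum, and the same function recognized as a fold.

```haskell
-- Explicit recursion: correct, but the looping pattern is
-- already captured by the Prelude.
total :: [Int] -> Int
total []     = 0
total (x:xs) = x + total xs

-- The same function, recognized as a right fold:
total' :: [Int] -> Int
total' = foldr (+) 0
```

Once you have rewritten a few of these, you start seeing the fold before you ever write the recursive version.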
What about when you’re not working on lists? Take another common structure, trees. Or any other data structure, really. Consider this binary tree implementation:
data Tree a = Empty | Node a (Tree a) (Tree a)
It’s fairly obvious that we could map a function over the tree. This common pattern is represented by the typeclass Functor. Functors represent “things which can be mapped over”, or containers to whose contents we might like to apply a function. Note that any Monad is also a Functor, though Haskell 98 does not enforce this. Some existing, handy Functors: Maybe, lists, functions ((->) a), pairs ((,) a), Either a, Map k.
The Functor typeclass contains a single function, fmap :: (Functor f) => (a -> b) -> f a -> f b. It turns an ordinary function into one “inside the functor”. For our tree:
instance Functor Tree where
    fmap _ Empty        = Empty
    fmap f (Node x l r) = Node (f x) (fmap f l) (fmap f r)
It’s similar in spirit to map over lists, and in fact fmap = map for lists.
Writing the instance for the Functor or Foldable type classes, which you only do once, has huge maintenance and conciseness advantages over writing the explicit recursion in even two places. That would be true of a treeMap function for a tree that isn’t a Functor. The advantage of the typeclasses is that they provide a standard interface common to many libraries.
Finally, many seemingly complex recursion patterns, especially in the path-finding and alpha-beta AI spaces can be written in a list comprehension, or (if you prefer) in list monad do-notation. | <urn:uuid:c693ab89-ae97-4415-bab8-b052a0218619> | 2.734375 | 774 | Personal Blog | Software Dev. | 58.358992 |
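As a small sketch of that style (my own toy example, not taken from any particular search problem), here is a try-every-possibility search written as a list comprehension:

```haskell
-- All right triangles with integer sides no longer than n:
-- every (a, b, c) combination is tried, and the guard prunes failures.
triads :: Int -> [(Int, Int, Int)]
triads n = [ (a, b, c)
           | c <- [1 .. n], b <- [1 .. c], a <- [1 .. b]
           , a * a + b * b == c * c ]
```

The same generate-and-prune shape covers move generation in game AI: generate candidate moves, filter out the illegal ones, recurse.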
This Demonstration simulates the movement of "atoms" vibrating around a pulsating source. These might, for example, represent radial longitudinal oscillations of sound waves. The source oscillates sinusoidally at a chosen frequency and then movement of the seven circles of atoms follows the pulsation of the blue ball with an increasing phase delay. During one cycle, the pulsation occurs at a chosen frequency (1 to 10 cycles/sec) with increasing and decreasing sinusoidal amplitude. The volume of the blue ball and the position of the outer atoms may be read on the upper graph.
It is also possible to change the inertia of the medium by increasing the mass of the atoms; this changes the phase of the seven gray circles. The upper graph shows the relationship between inertia and phase. If the phase delay is slight, the frequencies of all the vibrating atoms remain essentially unchanged.
Their amplitudes may change if the damping factor is increased. The extreme right position of the slider results in zero amplitude (no movement at all) of the outer ring. | <urn:uuid:1a5661ab-50be-4768-abd7-ef13f5ef0134> | 3.421875 | 216 | Documentation | Science & Tech. | 40.1765 |
These names refer neither to routines nor to locations with interesting contents; only their addresses are meaningful.
The address of _etext is the first location after the program text.
The address of _edata is the first location after the initialized data region.
The address of _end is the first location after the uninitialized data region.
When execution begins, the program break (the first location beyond the data) coincides with _end, but the program break may be reset by brk(2), malloc(3C), the standard input/output library functions (see stdio(3C)), the profile (-p) option of cc(1B), and so on. Thus, the current value of the program break should be determined by sbrk((char *)0).
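As an illustration (this sketch is mine, not part of the manual page; it assumes a typical Linux- or Solaris-style toolchain where these symbols are provided by the link editor, and the helper names are my own):

```c
/* Inspect the classic layout symbols.  Only their addresses are
 * meaningful; their contents are not. */
#include <assert.h>
#include <stdio.h>
#include <unistd.h>

extern char etext, edata, end;      /* provided by the link editor */

/* Text ends at or before initialized data, which ends at or
 * before the uninitialized data region. */
int layout_is_ordered(void)
{
    return (void *)&etext <= (void *)&edata
        && (void *)&edata <= (void *)&end;
}

void print_layout(void)
{
    printf("etext: %p\n", (void *)&etext);
    printf("edata: %p\n", (void *)&edata);
    printf("end:   %p\n", (void *)&end);
    printf("break: %p\n", sbrk(0));  /* current program break */
}
```

Note that by the time main() runs, the break reported by sbrk(0) may already sit above &end, since the C library may have allocated memory during startup.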
References to end , etext , and edata , without a preceding underscore will be aliased to the associated symbol that begins with the underscore. | <urn:uuid:dc8dd889-0397-444a-bdaf-a6e9f0390a3b> | 3 | 199 | Documentation | Software Dev. | 48.99 |
Eclipsing Binary Light Curves
An X-ray binary is a special binary system in which one of the stars is a normal star while the other is a compact object: an X-ray-emitting neutron star, white dwarf, or black hole. By looking at the X-ray emission from the system, which comes mostly from the X-ray star, we can learn the size of the stars in the system.
Below is a representation of a light curve as it would appear from plotting data from eclipsing binaries. Above the light curve is a diagram showing where the two stars in the system are relative to each other (as seen by the observer). Notice how the "brightness" or "magnitude" changes as the smaller star is behind, next to, and in front of the larger star. The X-ray intensity of the system is greatest when both stars are completely visible, and least when the X-ray emitting star is eclipsed by the central star, which blocks out the X-rays from the smaller star.
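As a toy numerical sketch (my own simplification, not part of the tutorial — real light curves show gradual ingress and egress rather than a square dip, and the eclipse phases below are made-up illustrative values):

```python
# Relative X-ray intensity of an eclipsing X-ray binary over one orbit.
# Intensity is 1 when both stars are visible, and 0 while the X-ray
# star is hidden behind the normal star.
def xray_intensity(phase, eclipse_start=0.45, eclipse_end=0.55):
    """Relative intensity at orbital phase (phase is taken mod 1)."""
    phase = phase % 1.0
    if eclipse_start <= phase < eclipse_end:
        return 0.0   # X-ray star eclipsed by the central star
    return 1.0       # both stars completely visible

# One sampled orbit: a flat curve with a dip at mid-eclipse.
curve = [xray_intensity(p / 100.0) for p in range(100)]
```

Plotting curve against phase reproduces the qualitative shape described above: maximum intensity away from eclipse, minimum during it.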
Now that you've learned how to interpret binary light curves, you'll have a chance to practice identifying the parts of an X-ray light curve.
What happens when you try and fit the triomino pieces into these
Can you cover the camel with these pieces?
How many different ways can you find to join three equilateral triangles together? Can you convince us that you have found them all?
If you split the square into these two pieces, it is possible to fit the pieces together again to make a new shape. How many new shapes can you make?
Follow the diagrams to make this patchwork piece, based on an octagon in a square.
Investigate the smallest number of moves it takes to turn these mats upside-down if you can only turn exactly three at a time.
Isabelle, Henry and Will describe some journeys that could fit the shape of the paths drawn.
Go to last month's problems to see more solutions. | <urn:uuid:ba9965ca-eb29-45f8-b129-4de2c5399318> | 2.703125 | 169 | Content Listing | Science & Tech. | 59.156775 |
Boltzmann's constant, also called the Boltzmann constant and symbolized k or k_B, defines the relation between absolute temperature and the kinetic energy contained in each molecule of an ideal gas. This constant derives its name from the Austrian physicist Ludwig Boltzmann (1844-1906), and is equal to the ratio of the gas constant to the Avogadro constant.
The value of Boltzmann's constant is approximately 1.3807 × 10^-23 joules per kelvin (J·K^-1). In general, the energy in a gas molecule is directly proportional to the absolute temperature. As the temperature increases, the kinetic energy per molecule increases. As a gas is heated, its molecules move more rapidly. This produces increased pressure if the gas is confined in a space of constant volume, or increased volume if the pressure remains constant.
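As a quick numerical check (my own sketch; the 2019 SI constants used below are assumptions of this example, not figures from the article):

```python
# Boltzmann's constant as the ratio of the gas constant to the
# Avogadro constant, plus the mean translational kinetic energy
# (3/2)*k*T of a single ideal-gas molecule.
R = 8.314462618        # gas constant, J/(mol*K)
N_A = 6.02214076e23    # Avogadro constant, 1/mol

k = R / N_A            # Boltzmann's constant, J/K (~1.3807e-23)

def mean_kinetic_energy(T):
    """Mean translational kinetic energy of one molecule at T kelvin."""
    return 1.5 * k * T

# Hotter gas -> more kinetic energy per molecule, as the article notes.
room, boiling = mean_kinetic_energy(293.0), mean_kinetic_energy(373.0)
```

The per-molecule energy at room temperature works out to a few times 10^-21 joules, which is why the per-molecule constant k is so much smaller than the per-mole constant R.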
Also see Table of Physical Constants . | <urn:uuid:445f8e6c-4797-486d-8a57-e4513f2b6708> | 3.875 | 188 | Knowledge Article | Science & Tech. | 42.214071 |
The sample is mounted on a goniometer and gradually rotates while being bombarded with X-rays, producing a diffraction pattern of regularly spaced spots known as diffractions.
The diffracted beams add constructively in a few specific directions, determined by Bragg’s law:

nλ = 2d·sin θ
Here d is the spacing between diffracting planes, θ is the incident angle, n is any integer, and λ is the wavelength of the beam. These specific directions appear as spots on the diffraction pattern.
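As a worked sketch (my own; the copper K-alpha wavelength of 1.5406 Å is an assumed value consistent with the copper anticathode listed below, not a number given in the text):

```python
# Bragg's law, n*lam = 2*d*sin(theta), rearranged both ways:
# from a measured peak angle to a plane spacing, and back.
import math

CU_K_ALPHA = 1.5406  # X-ray wavelength in angstroms (assumed source)

def d_spacing(two_theta_deg, n=1, lam=CU_K_ALPHA):
    """Plane spacing d (angstroms) for a peak at 2-theta degrees."""
    theta = math.radians(two_theta_deg / 2.0)
    return n * lam / (2.0 * math.sin(theta))

def peak_angle(d, n=1, lam=CU_K_ALPHA):
    """Peak position 2-theta (degrees) for plane spacing d (angstroms)."""
    theta = math.asin(n * lam / (2.0 * d))
    return 2.0 * math.degrees(theta)
```

Larger diffraction angles correspond to more closely spaced planes, which is one reason a wide 2θ scan range matters in practice.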
- Principle of X-ray diffraction
The goniometric counter measures diffracted X-Rays intensity as a function of the diffraction angle. Plotting the angular positions and intensities of the resultant diffracted peaks of radiation produces a pattern, which is characteristic of the sample. Where a mixture of different phases is present, the resultant diffractogram is formed by addition of the individual patterns.
Powdered or massive,
X-ray source: 30 kV, 15 mA, copper anticathode
- length: 150 mm,
- angle range: from -3° to 150° (2θ),
- scan speed: from 0.01° to 100°/min (2θ)
Figure 2. A closer look at the alcoves and lineated valley fill seen in Fig. 1. The locations of these THEMIS & higher resolution MOC images are marked in Fig. 1a.
d) The outflow of two ridged lobes from alcoves (bottom left and right) as they join a major lineated valley fill of area C (upper right) near the convergence with B (Fig. 1c). The left lobe is swept westward, forming broad arcing folds, while the right lobe is increasingly compressed until it resembles a tight isoclinal fold. Both lobes ultimately merge into the general lineated valley fill parallel to the valley walls.
e) Detail of sideways lobe-like flows converging into the lineated valley fill on the valley floor, where flows merge from areas A and D.
f) A major east-facing zone of multiple alcoves and converging lobe-like flows in the more distant reaches of the system, along the edge of the northernmost large mesa (area G). Note the concentric-outward ridges reaching out from the alcoves and their progressive compression, folding and flattening as the ridges deform and become part of the lineated valley fill on the valley floor.
g) The northern reaches of the lineated valley system. The lineated valley fill splits in two (bottom right) and flows around a massif to create a broad up-flow collar and a diffuse, down-flow wake.
Figure credits: Amanda Nahm, Brown University
Geological Society of America
2005 Annual Meeting News Release 05-37, 14 October 2005. | <urn:uuid:7fedd179-a64d-4fff-9f30-8fc96f562cd9> | 2.8125 | 344 | Truncated | Science & Tech. | 52.539127 |
The Schrodinger equation is a linear differential equation used in various fields of physics to describe the time evolution of quantum states. It is a fundamental aspect of quantum mechanics. The equation is named for its discoverer, Erwin Schrodinger.
General time-dependent form
The Schrodinger equation may generally be written

iħ ∂Ψ/∂t = HΨ

where ħ is the reduced Planck constant and H is the Hamiltonian operator. The left side of the equation describes how the wavefunction changes with time; the right side is related to its energy. For the simplest case of a particle of mass m moving in a one-dimensional potential V(x), the Schrodinger equation can be written

iħ ∂ψ/∂t = −(ħ²/2m) ∂²ψ/∂x² + V(x)ψ
The quickest and easiest way to derive Schrodinger's equation is to understand the Hamiltonian operator in quantum mechanics. In classical mechanics, the total energy of a system is given by

E = p²/(2m) + V(x)

where p is the momentum of the particle and V(x) is its potential energy. Applying the quantum mechanical operator for momentum,

p → −iħ ∂/∂x,

and substituting into the classical mechanical form for energy, we get the same Hamiltonian operator in quantum mechanics:

H = −(ħ²/2m) ∂²/∂x² + V(x)
from which Schrodinger's equation and the eigenvalue problem can be easily seen.
In many instances, steady-state solutions to the equation are of great interest. Physically, these solutions correspond to situations in which the wavefunction has a well-defined energy. The energy is then said to be an eigenvalue for the equation, and the wavefunction corresponding to that energy is called an eigenfunction or eigenstate. In such cases, the Schrodinger equation is time-independent and is often written

Hψ = Eψ
Here, E is energy, H is once again the Hamiltonian operator, and ψ is the energy eigenstate for E.
One example of this type of eigenvalue problem is an electron bound inside an atom.
Examples for the time-independent equation
Free particle in one dimension
In this case, V(x) = 0, and so we see that the solution to the Schrodinger equation must be

ψ = A·e^(−ikx)

with energy given by

E = ħ²k²/(2m)

Physically, this corresponds to a wave travelling with a momentum given by p = ħk, where k can in principle take any value.
Particle in a box
Consider a one-dimensional box of width a, where the potential energy is 0 inside the box and infinite outside of it. This means that ψ must be zero outside the box. One can verify (by substituting into the Schrodinger equation) that
ψ = sin(kx)

is a solution satisfying the boundary conditions if k = nπ/a, where n is any positive integer (so that ψ vanishes at both walls). Thus, rather than the continuum of solutions for the free particle, for the particle in a box there is a set of discrete solutions with energies given by

E_n = n²π²ħ²/(2ma²)
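To see the discreteness numerically, here is a small sketch of my own (the electron-in-a-1-nm-box numbers are illustrative, not from the article):

```python
# Energy levels of the particle in a box,
# E_n = (n*pi*hbar)^2 / (2*m*a^2): discrete, and growing as n^2.
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s

def box_energy(n, m, a):
    """Energy (J) of level n for a mass m (kg) in a box of width a (m)."""
    if n < 1:
        raise ValueError("n must be a positive integer")
    return (n * math.pi * HBAR) ** 2 / (2.0 * m * a ** 2)

# An electron confined to a 1-nanometer box:
M_E = 9.1093837015e-31   # electron mass, kg
levels = [box_energy(n, M_E, 1e-9) for n in (1, 2, 3)]
```

The n² scaling means the gap between successive levels widens as n grows, in contrast with the free particle, whose energy varies continuously with k.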
Boundary value problems of analytic function theory
Problems of finding an analytic function in a certain domain from a given relation between the boundary values of its real and its imaginary part. This problem was first posed by B. Riemann in 1857 . D. Hilbert studied the boundary value problem formulated as follows (the Riemann–Hilbert problem): To find the function that is analytic in a simply-connected domain bounded by a contour and that is continuous in , from the boundary condition
where and are given real continuous functions on . Hilbert initially reduced this problem to a singular integral equation in order to give an example of the application of such an equation.
The problem (1) may be reduced to a successive solution of two Dirichlet problems. A complete study of the problem by this method may be found in .
The problem arrived at by H. Poincaré in developing the mathematical theory of tide resembles problem (1). Poincaré's problem consists in determining a harmonic function in a domain from the following condition on the boundary of this domain:
where and are real functions given on , is the arc abscissa and is the normal to .
The generalized Riemann–Hilbert–Poincaré problem is the following linear boundary value problem: To find an analytic function in from the boundary condition
where is an integro-differential operator defined by the formula
where are (usually complex-valued) functions of class defined on (i.e. satisfying a Hölder condition), is a given real-valued function of class and are (usually complex-valued) functions on of the form
where are functions of class in both variables. The expression on the right-hand side of (4) is understood to mean the boundary value on from inside the domain of the -th order derivative of .
A special case of the Riemann–Hilbert–Poincaré problem, in the case when , , is the Riemann–Hilbert problem; Poincaré's problem is also a special formulation of the same problem. Many important boundary value problems — such as boundary value problems for partial differential equations of elliptic type with two independent variables — may be reduced to the Riemann–Hilbert–Poincaré problem.
The Riemann–Hilbert–Poincaré problem was also posed for , , and was solved by I.N. Vekua .
An important role in the theory of boundary value problems is played by the concept of the index of the problem — an integer defined by the formula
where is the increment of under one complete traversal of the contour in the direction leaving the domain at the left.
The Riemann–Hilbert–Poincaré problem is reduced to a singular integral equation of the form
where is the unknown real-valued function of class , is an unknown real constant, and
The functions and are expressed in terms of and , .
Let and be the numbers of linearly independent solutions of the homogeneous integral equation corresponding to (5) and of the homogeneous integral equation
associated with it. The numbers and are connected with the index of the Riemann–Hilbert–Poincaré problem by the equality
Of special interest is the case when the problem is solvable whatever the right-hand side . In order for the Riemann–Hilbert–Poincaré problem to be solvable whatever the right-hand side , a necessary and sufficient condition is or , and in the latter case the solution of equation (6) must satisfy the condition
in both cases and the homogeneous problem has exactly linearly independent solutions. If , then the Riemann–Hilbert–Poincaré problem is solvable for any right-hand side if and only if .
As regards the Riemann–Hilbert problem, the following statements are valid: 1) If , then the inhomogeneous problem (1) is solvable whatever its right-hand side; and 2) if , then the problem has a solution if and only if
The Riemann–Hilbert problem is closely connected with the so-called problem of linear conjugation. Let be a smooth or a piecewise-smooth curve consisting of closed contours enclosing some domain of the complex plane , which remains on the left during traversal of , and let the complement of in the -plane be denoted by . Let a function be given, and let it be continuous in a neighbourhood of the curve , everywhere except perhaps on itself. One says that the function is continuously extendable to a point from the left (or from the right) if tends to a definite limit (or ) as tends to along an arbitrary path, while remaining to the left (or to the right) of .
The function is said to be piecewise analytic with jump curve if it is analytic in and and is continuously extendable to any point both from the left and from the right.
The linear conjugation problem consists of determining a piecewise-analytic function with jump curve , having finite order at infinity, from the boundary condition
where and are functions of class given on . On the assumption that everywhere on , the integer
is called the index of the linear conjugation problem.
If is a piecewise-analytic vector, is a square -matrix and is a vector, and if also , then the integer
The theory of one-dimensional singular integral equations of the form (5) was constructed on the basis of the theory of the linear conjugation problem.
[1] B. Riemann, Gesammelte math. Werke - Nachträge, Teubner (1892–1902) (Translated from German)
[2] D. Hilbert, "Grundzüge einer allgemeinen Theorie der linearen Integralgleichungen", Chelsea, reprint (1953)
[3] I.N. Vekua, Trudy Tbil. Mat. Inst. Akad. Nauk GruzSSR, 11 (1942) pp. 109–139
[4] H. Poincaré, "Leçons de mécanique celeste", 3, Paris (1910)
[5] N.I. Muskhelishvili, "Singular integral equations", Wolters-Noordhoff (1972) (Translated from Russian)
[6] F.D. Gakhov, "Boundary value problems", Pergamon (1966) (Translated from Russian)
[7] B.V. Khvedelidze, Trudy Tbil. Mat. Inst. Akad. Nauk GruzSSR, 23 (1956) pp. 3–158
The problem discussed in the article is also known as the barrier problem. For applications in mathematical physics, see [a6], [a7], [a9], and the references given there. An important contribution to the theory (matrix case) was given in [a5]. Other relevant publications are [a1], [a2], [a3], [a4] and [a8]. The method proposed in [a1] employs the state space approach from systems theory.
Note that the various names given to various variants of these problems are by no means fixed. Thus, what is called the linear conjugation problem above is also often known as the Riemann–Hilbert problem [a9]. This version, especially the matrix case where , , are all (invertible) matrix-valued functions, is of great importance in the theory of completely-integrable systems. Indeed, consider an overdetermined system of linear partial differential equations (cf. [a6] for more detail)
where are to be thought of as rational functions in a complex parameter with coefficients depending on but with the pole structure independent of ; e.g.
with the constants and the functions of only. An invertible matrix solution of (a1) exists if and only if the corresponding satisfy
a so-called Zakharov–Shabat system. Many integrable systems can be put in this form. Now let solve (a1) and take a function on a contour in the -plane. Solve the -family of matrix Riemann–Hilbert problems . Then also solves (a1) and this leads to an action of the group of invertible matrix-valued functions in on the space of solutions of (a2). This method of obtaining a new solution , from an old one and a function is known as the Zakharov–Shabat dressing method. is also sometimes known as a Riemann–Hilbert transformation. In the case of Einstein's field equations (axisymmetric solutions) a similar technique goes by the names of Hauser–Ernst or Kinnersley–Chitre transformations, and in that case (a subgroup of) the group involved is known as the Geroch group [a10]. The Riemann monodromy problem asks for multi-valued functions regular everywhere but in , , such that analytic continuation around a contour containing exactly one of these points changes into , . This problem reduces to the Riemann–Hilbert problem by taking a contour through and a suitable step function on it. The Riemann monodromy problem was essentially solved by J. Plemelj [a11], G.D. Birkhoff , and I.A. Lappo-Danilevsky [a13].
[a1] H. Bart, I. Gohberg, M.A. Kaashoek, "Fredholm theory of Wiener–Hopf equations in terms of realization of their symbols", Integral Equations and Operator Theory, 8 (1985) pp. 590–613
[a2] "Mathématique et physique", L. Boutet de Monvel (ed.) et al. (ed.), Sem. ENS 1979–1982, Birkhäuser (1983)
[a3] K. Clancey, I. Gohberg, "Factorization of matrix functions and singular integral operators", Operator Theory: Advances and Applications, 3, Birkhäuser (1981)
[a4] I.C. [I.Ts. Gokhberg] Gohberg, I.A. Feld'man, "Convolution equations and projection methods for their solution", Transl. Math. Monogr., 41, Amer. Math. Soc. (1974) (Translated from Russian)
[a5] I.C. Gohberg, M.G. Krein, "Systems of integral equations on a half line with kernels depending on the difference of arguments", Amer. Math. Soc. Transl. (2) (1960) pp. 217–287; Uspekhi Mat. Nauk, 13 : 2 (80) (1958) pp. 3–72
[a6] V.E. Zakharov, S.V. Manakov, "Soliton theory", J.M. Khalatnikov (ed.), Physics reviews, 1, Harwood Acad. Publ. (1979) pp. 133–190
[a7] E. Meister, "Randwertaufgaben der Funktionentheorie", Teubner (1983)
[a8] Yu.L. Rodin, "The Riemann boundary value problem on Riemannian manifolds", Reidel (1988) (Translated from Russian)
[a9] D.V. Chudnovsky (ed.), G. Chudnovsky (ed.), The Riemann problem, complete integrability and arithmetic applications, Lect. notes in math., 925, Springer (1982)
[a10] C. Hoenselaers, W. Dietz, "Solutions of Einstein's equations: techniques and results", Lect. notes in physics, 265, Springer (1984)
[a11] J. Plemelj, "Problems in the sense of Riemann and Klein", Interscience (1964)
[a12a] G.D. Birkhoff, "Singular points of ordinary linear differential equations", Trans. Amer. Math. Soc., 10 (1909) pp. 436–470
[a12b] G.D. Birkhoff, "A simplified treatment of the regular singular point", Trans. Amer. Math. Soc., 11 (1910) pp. 199–202
[a13] I.A. Lappo-Danilevsky, "Mémoire sur la théorie des systèmes des équations différentielles linéaires", Chelsea, reprint (1953)
Boundary value problems of analytic function theory. A.V. Bitsadze (originator), Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Boundary_value_problems_of_analytic_function_theory&oldid=13982 | <urn:uuid:77e46c13-c865-473d-9d2d-673e7a5a63ed> | 3.203125 | 2,821 | Knowledge Article | Science & Tech. | 56.333139 |
How to Create Coding Standards that Work
One of the things that we love most about Perl is its flexibility, itssimilarity to natural language, and the fact that There's More Than One Way To Do It. Of course, when I say ``we'' I mean Perl hackers; the implicit ``them'' in this case is management, people who prefer other languages, or people who have to maintain someone else's line noise.
|"Just because there are bad coding standards out there, doesn't mean that all coding standards are bad."|
Perl programmers tend to rebel at the idea of coding standards, or at having their creativity limited by arbitrary rules -- otherwise, they'd be coding in Python :). But I think that sometimes a little bit of consistency can be a good thing.
As Larry himself said in one of his State of the Onion talks, three virtues of coding are Diligence, Patience and Humility. Diligence (the opposite of Laziness, if you're paying attention) is necessary when you're working with other programmers. You can't afford to name your variables $stimps_is_a_sex_goddess if someone has to come along after you and figure out what the hell you meant. This is where coding standards come in handy.
Let me tell you about my recent experiences writing coding standards.
I work in a small company with about half a dozen coders on staff. We code in languages such as Perl, Python, and C, with occasional excursions into things like SQL and non-programming-languages like HTML.
We'd been working together a few months when it was decided that some development standards (slightly broader than coding standards, but mostly related to coding) would be a good idea. The difficulties we wanted to address were:
- program design,
- naming conventions,
- formatting conventions,
- documentation, and
All these issues had popped up already in our few short months of working with each other, especially when one person handed a project over to another. We needed to create some standards to ensure that all our work was consistent enough for other people to follow, but we didn't want to do this at the expense of individuality or creativity. And we didn't want to insult our coders' intelligence by dictating every little thing to them.
Being the person who tends to write things in our company, I took it upon myself to put together some standards with the help of the developers. From the beginning, my plan was to set some general ground rules, then to expand on them language by language where necessary. I wanted the standards to be as brief as possible, while still conveying enough information for a hypothetical new hire to read and understand without having to guess at anything.
Here's what we came up with as our general rules:
- The verbosity of all names should be proportional to the scope of their use.
- The plurality of a variable name should reflect the plurality of the data it contains. In Perl, $name is a single name, while @names is an array of names.
- In general, follow the language's conventions in variable naming and other things. If the language uses variable_names_like_this, you should too. If it uses ThisKindOfName, follow that. Failing that, use StudlyCaps for classes, and lower_case for most other things. Note the distinction between words by using either underscores or StudlyCaps.
- Function or subroutine names should be verbs or verb clauses. It is unnecessary to start a function name with
- Filenames should contain underscores between words, except where they are in $PATH. Filenames should be all lower case, except for class files which may be in StudlyCaps if the language's common usage dictates it.
That's it. That's the core of our coding standards.
Those rules were developed during a one-hour meeting with all development staff. There's nothing there that anyone disagrees on at all, and I think that's because the rules are basically common sense.
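To make these rules concrete, here is a small Perl sketch of the conventions above (every name and value is invented for illustration):

```perl
#!/usr/bin/perl -w
use strict;

# Illustration only -- all names here are hypothetical.
my @customer_names = ('Alice', 'Bob');    # plural name for plural data
my $count = scalar @customer_names;       # short name for a small scope

# Function names are verb clauses, with underscores between words.
sub print_summary {
    my ($names_ref) = @_;
    print "Report covers $count names: @$names_ref\n";
}

print_summary(\@customer_names);
```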
Our standards then go on to give a few extra guidelines for each language. For Perl, we have the following standards:
1. ... and follow all suggestions contained therein, except where they disagree with the general coding standards, which take precedence.
2. Use the -w command line flag and the strict pragma at all times, and -T (taint checking) where appropriate.
3. Name Perl scripts with a .pl extension, and CGI scripts with a .cgi extension. One exception: Perl scripts in $PATH may omit the extension.
... and a few more, about one printed page in total. For instance, we have a couple of regexp-related guidelines, a couple of points about references and complex data structures (including when not to use them), and a list of our favourite modules that we recommend developers use.
Our documentation standards say to include at least a LICENSE file with each piece of software; that each source code file should include the name, author, description, version and copyright information; that any function that needs more than two lines of comments to explain what it does needs to be written more clearly; and that any more detailed documentation should be handed to professional technical writers.
Coding standards needn't be onerous. Just because there are bad coding standards out there, doesn't mean that all coding standards are bad.
I think the way to a good coding standard is to be as minimalist as possible. Anything more than a couple of pages long, or which deviates too far from common practice, will frustrate developers and won't be followed. And standards that are too detailed may obscure the fact that the code has deeper problems.
Here's a second rule: standardise early! Don't try to impose complex standards on a project or team that's been going for a long time -- the effort to bring existing code up to standard will be too great. If your standards are minimal and based on common sense, there's no reason to wait for the project to take shape or the team's preferences to become known.
If you do set standards late, don't set out on a crusade to bring existing code up to scratch. Either fix things as you come to them, or (better) rewrite from scratch. Chances are that what you had was pretty messy anyway, and could do with reworking.
Third rule? I suppose three rules is a good number. The third rule is to encourage a culture in which standards are followed, not because Standards Must Be Obeyed, but because everyone realises that things work better that way. Imagine what would happen if, for instance, mail transport agents didn't follow RFC822. MTAs don't follow RFC822 because they're forced to, but because Internet email just wouldn't work without it. The thought of writing an MTA which was non-compliant is perverse (or Microsoft policy, one or the other).
If your development team understands that standards do make things easier and result in higher quality, more maintainable code, then the effort of enforcement will be small.
Damn, I seem to have found a fourth rule. Oh well.
Fourth rule: don't expect coders to document. Don't expect coders to do architecture or high-level design. Don't expect coders to have an eye for user interface. If they do, that's great, but no matter how many standards or methodologies you lay down, there's no way to change the fact that coding skill is not necessarily related to, and in fact may be inversely proportional to, those other necessary skills. Don't let a set of standards be your crutch when you really need to hire designers or documentors. | <urn:uuid:5e55e92c-f76c-4336-b30c-2c63b6210791> | 2.8125 | 1,620 | Personal Blog | Software Dev. | 51.434841 |
A future is a place-holder for the undetermined result of a (concurrent) computation. Once the computation delivers a result, the associated future is eliminated by globally replacing it with the result value. That value may be a future on its own.
Whenever a future is requested by a concurrent computation, i.e. it tries to access its value, that computation automatically synchronizes on the future by blocking until it becomes determined or failed.
There are four kinds of futures:
A concurrent future is created by the expression

spawn exp

which evaluates the expression exp in a new thread and returns immediately with a future of its result. When the expression has been evaluated, the future is globally replaced by the result. We speak of functional threads. See the discussion on failed futures below for the treatment of possible error conditions.
Note: The presence of concurrency has subtle implications on the semantics of pattern matching.
The following expression creates a table and concurrently fills it with the results of function f. Each entry becomes available as soon as its calculation terminates:
Vector.tabulate (30, fn i => spawn f i)
A derived form is provided for defining functions that always evaluate in separate threads:
fun spawn f x y = exp
An application f a b will spawn a new thread for evaluation. See below for a precise definition of this derived form.
An expression can be marked as lazy:

lazy exp
A lazy expression immediately evaluates to a lazy future of the result of exp. As soon as a thread requests the future, the computation is initiated in a new thread. The lazy future is replaced by a concurrent future and evaluation proceeds similar to spawn. In particular, failure is handled consistently.
Lazy futures enable full support for the whole range of lazy programming techniques. For example, the following function generates an infinite lazy stream of integers:
fun enum n = lazy n :: enum (n+1)
Analogously to spawn, a derived form is provided for defining lazy functions:
fun lazy f x y = exp
See below for a precise definition of this derived form. It allows convenient formulation of lazy functions. For example, a lazy variant of the map function on lists can be written
fun lazy mapz f nil = nil | mapz f (x::xs) = f x :: mapz f xs
This formulation is equivalent to
fun mapz f xs = lazy (case xs of nil => nil | x::xs => f x :: mapz f xs)
Promises are explicit handles for futures. A promise is created through the polymorphic library function Promise.promise:
val p = Promise.promise ()
Associated with every promise is a future. Creating a new promise also creates a fresh future. The future can be extracted as
val f = Promise.future p
A promised future is eliminated explicitly by applying
Promise.fulfill (p, v)
to the corresponding promise, which globally replaces the future with the value v.
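As a small sketch combining the three operations above (the arithmetic is purely illustrative):

```sml
val p = Promise.promise ()
val f = Promise.future p                     (* requesting f now would block *)
val _ = spawn (Promise.fulfill (p, 6 * 7))   (* fulfill from another thread  *)
val n = f + 1                                (* requests f; yields 43        *)
```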
Note: Promises may be thought of as single-assignment references that allow dereferencing prior to assignment, yielding a future. The operations promise, future and fulfill correspond to ref, ! and :=, respectively.
Promises essentially represent a more structured form of "logic variables" as found in logic programming languages. Their presence allows application of diverse idioms from concurrent logic programming to ML. Examples can be found in the documentation of the Promise structure.
If the computation associated with a concurrent or lazy future terminates with an exception, that future cannot be eliminated. Instead, it turns into a failed future. Promised futures can be failed explicitly by means of the Promise.fail function. Requesting a failed future does not block. Instead, any attempt to request a failed future will re-raise the exception that was the cause of the failure.
Another error condition is the attempt to replace a future with itself. This may happen if a recursive spawn or lazy expression is unfounded, or if a promise is fulfilled with its own future. In all of these cases, the future will be failed with the special exception Future.Cyclic.
The future semantics implies implicit data-flow synchronisation, which enables concurrent programming on a high level of abstraction.
A future is requested if it is used as an argument to a strict operation. The following operations are strict (note that futures can appear on the module level):
Note however, that selecting items from a structure using longids is non-strict.
If a future is requested by a thread, that thread blocks until the future has been replaced by a non-future value (or a failed future). After the value has been determined, the thread proceeds. The only exception is failed futures, which do not block.
Requesting a lazy future triggers initiation of the corresponding computation. The future is replaced by a concurrent future of the computation's result. The requesting thread blocks until the result is determined.
Requesting a promised future will block at least until a fulfill operation has been applied to the corresponding promise. Blocking continues if the promise is fulfilled with another future.
Requesting a failed future never blocks. Instead, the exception that was the cause of the failure will be re-raised.
Structural operations that are strict (i.e., pattern matching, op= and pickling) traverse all values in a depth-first left-to-right order. Futures are requested in that order. If traversal is terminated early, the remaining futures are not requested. Early termination occurs if a future is failed, upon a mismatch (pattern matching), if two partial values are not equal (op=), or if part of a value is sited (pickling). For unpacking it is unspecified, how much of the respective signatures is requested.
To deal with state in a thread-safe way, the structure Ref provides an atomic exchange operation for references:
val exchange : 'a ref * 'a -> 'a
With exchange, the content of a cell can be replaced by a new value. The old value is returned. The exchange operation is atomic, and can thus be used for synchronisation. As an example, here is the implementation of a generic lock generator:
fun lock () =
    let
        val r = ref ()
    in
        fn f => fn x =>
            let
                val new = Promise.promise ()
                val old = Ref.exchange (r, Promise.future new)
            in
                await old;
                f x before Promise.fulfill (new, ())
            end
    end
The library structure Lock implements locks this way.
Modules can be futures as well. See Laziness and Concurrency for modules.
The following library modules provide functionality relevant for programming with futures, promises and concurrent threads:
ByNeed is a functor that allows creation of lazy futures for modules. Lazy modules are also at the core of the semantics of components.
The derived form for lazy and spawn function definitions is:

fvalbind ::= <lazy | spawn>
                 <op> vid atpat11 ... atpat1n <: ty1> = exp1
               | <op> vid atpat21 ... atpat2n <: ty2> = exp2
               | <op> vid atpatm1 ... atpatmn <: tym> = expm        (m,n ≥ 1) (*)

which expands to

<op> vid = fn vid1 => ... fn vidn =>
               <lazy | spawn> case (vid1, ..., vidn) of
                    (atpat11, ..., atpat1n) => exp1 <: ty1>
                  | (atpat21, ..., atpat2n) => exp2 <: ty2>
                  | (atpatm1, ..., atpatmn) => expm <: tym>

where vid1, ..., vidn are distinct and new. | <urn:uuid:13c91150-f3a7-4563-a4a7-6f91628d14e3> | 3.3125 | 1,693 | Documentation | Software Dev. | 48.113973 |
|The Open Door Web Site|
Calculations Involving the Doppler Effect
Velocity of sound relative to air = v
Frequency of emitted sound = f
Apparent frequency of received sound = f’.
Moving Source (stationary observer)
If the source moves with a velocity of magnitude vs (relative to the air) then the velocity of the waves relative to the source is (v±vs).
so in this case,
The apparent frequency of the sound, f’, as measured by the observer, will be equal to v/.
Therefore, the apparent frequency is given by | <urn:uuid:2be125e9-0a72-4fb6-9e08-370fd65bee1b> | 3.84375 | 128 | Tutorial | Science & Tech. | 33.482759 |
Occasionally, it may be useful to specify that a certain file or directory must, if necessary, be built or created before some other target is built, but that changes to that file or directory do not require that the target itself be rebuilt. Such a relationship is called an order-only dependency because it only affects the order in which things must be built--the dependency before the target--but it is not a strict dependency relationship because the target should not change in response to changes in the dependent file.
For example, suppose that you want to create a file every time you run a build that identifies the time the build was performed, the version number, etc., and which is included in every program that you build. The version file's contents will change every build. If you specify a normal dependency relationship, then every program that depends on that file would be rebuilt every time you ran SCons. For example, we could use some Python code in a SConstruct file to create a new version.c file with a string containing the current date every time we run SCons, and then link a program with the resulting object file by listing version.c in the sources:
import time
version_c_text = """
char *date = "%s";
""" % time.ctime(time.time())
open('version.c', 'w').write(version_c_text)

hello = Program(['hello.c', 'version.c'])
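The version.c generation in that SConstruct is ordinary Python; pulled out on its own it is just the following (the scratch path used for the demo is hypothetical, not part of any build):

```python
import os
import tempfile
import time

def write_version_file(path):
    """Write a C source file embedding the current build date, as above."""
    text = 'char *date = "%s";\n' % time.ctime(time.time())
    with open(path, "w") as f:
        f.write(text)
    return text

# Demo against a scratch location rather than a real build tree:
path = os.path.join(tempfile.gettempdir(), "version_demo.c")
text = write_version_file(path)
print(text)
```

Because the file's contents change on every run, anything that depends on it normally rebuilds every time, which is exactly the problem discussed next.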
If we list version.c as an actual source file, though, then version.o will get rebuilt every time we run SCons (because the SConstruct file itself changes the contents of version.c) and the hello executable will get re-linked every time (because the version.o file changes):
% scons -Q
gcc -o hello.o -c hello.c
gcc -o version.o -c version.c
gcc -o hello hello.o version.o
% scons -Q
gcc -o version.o -c version.c
gcc -o hello hello.o version.o
% scons -Q
gcc -o version.o -c version.c
gcc -o hello hello.o version.o
One solution is to use the Requires function to specify that the version.o must be rebuilt before it is used by the link step, but that changes to version.o should not actually cause the hello executable to be re-linked:
import time
version_c_text = """
char *date = "%s";
""" % time.ctime(time.time())
open('version.c', 'w').write(version_c_text)

version_obj = Object('version.c')
hello = Program('hello.c', LINKFLAGS = str(version_obj))
Requires(hello, version_obj)
Notice that because we can no longer list version.c as one of the sources for the hello program, we have to find some other way to get it into the link command line. For this example, we're cheating a bit and stuffing the object file name (extracted from the version_obj list returned by the Object call) into the $LINKFLAGS variable, because $LINKFLAGS is already included in the $LINKCOM command line.
With these changes, we get the desired behavior of re-building the version.o file, and therefore re-linking the hello executable, only when the hello.c has changed:
% scons -Q
cc -o hello.o -c hello.c
cc -o version.o -c version.c
cc -o hello version.o hello.o
% scons -Q
scons: `.' is up to date.
% edit hello.c
[CHANGE THE CONTENTS OF hello.c]
% scons -Q
cc -o hello.o -c hello.c
cc -o hello version.o hello.o
% scons -Q
scons: `.' is up to date. | <urn:uuid:119594d0-8e20-42bf-8836-532b6aa9e16c> | 2.75 | 817 | Documentation | Software Dev. | 70.34702 |
Posted: Sun Mar 02, 2008 8:18 pm Post subject: Reply with quote
Since air has weight it must also have density, which is the weight for a chosen volume, such as a cubic inch or cubic meter. If clouds are made up of particles, then they must have weight and density. The key to why clouds float is that the density of the same volume of cloud material is less than the density of the same amount of dry air. Just as oil floats on water because it is less dense, clouds float on air because the moist air in clouds is less dense than dry air.
We still need to answer the question of how much a cloud weighs. For an example, let's use your basic "everyday" cloud—the cumulus cloud with a volume of about 1 cubic kilometer (km3) located about 2 km above the ground. In other words, it is a cube about 1 km on each side. The National Oceanic and Atmospheric Administration (NOAA) provides some estimates of air and cloud density and weight. NOAA found that dry air has a density of about 1.007 kilograms/cubic meter (kg/m3), moist air comes in at about 0.627 kg/m3, and the density of the actual cloud droplets is about 0.0005 kg/m3. The density of the cloud is thus about 62 percent of dry air. In the final calculations, the 1 km3 cumulus cloud weighs a whopping 1.4 billion pounds (635 million kilograms)! But the cloud floats because the weight of the same volume of dry air is even more, about 2.2 billion pounds (1 billion kilograms). Still, remember that it is the lesser density of the cloud that allows it to float on the drier and more-dense air.
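A quick Python check of the NOAA arithmetic quoted above (unit conversions only; no new data):

```python
# Check the cloud-weight arithmetic quoted above.
M3_PER_KM3 = 1e9           # cubic meters in one cubic kilometer
LB_PER_KG = 2.20462

dry_air_density = 1.007    # kg/m^3 (NOAA figure above)
moist_air_density = 0.627  # kg/m^3 (NOAA figure above)

cloud_mass_kg = moist_air_density * M3_PER_KM3
dry_air_mass_kg = dry_air_density * M3_PER_KM3

print(round(moist_air_density / dry_air_density, 2))  # 0.62 -- "about 62 percent"
print(round(cloud_mass_kg * LB_PER_KG / 1e9, 1))      # 1.4 (billion pounds)
print(round(dry_air_mass_kg * LB_PER_KG / 1e9, 1))    # 2.2 (billion pounds)
```

The three printed values match the "62 percent", "1.4 billion pounds" and "2.2 billion pounds" figures in the quoted passage.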
Ok, considering these guys and girls have the all tools to back this up, I think we can accept that:
1) Dry air is denser than cloud.
--> clouds must displace some amount of dry air
--> a cloud is a physically separate entity in the atmosphere
2) Water in a cloud is the "reason" the cloud is less dense than drier air
--> Opaque air is less dense than air that is transparent but still moist
--> normal air also holds water but is not a cloud, i.e. opaque
So, something (or the lack of something) turns normal moist air into an opaque cloud, which then floats on a pressure/plasma barrier like foam on a lake and in turn is capped under a pressure/plasma barrier under the stratosphere (like a thermocline).
What is it then that makes normal air go through a phase shift? Personally I think it is a sign of low plasma density. That is, there is a cloud lurking inside all of moist air. Under normal plasma densities, the cloud is contained like a popcorn kernel. Drop the plasma density and the air will "pop" into a cloud, which then takes on a life of its own.
As for ice in clouds this isn't so hard to accept either. Ice is less dense than water.
So if you have a separate atmospheric entity called a cloud, which is full of water, then the less dense form of water should float to the top.
--> Ice at the top.
It is only when the cloud lattice breaks down that you get re-crystallization, or "mineralization" of the water (remember, ice is technically a mineral). Specific gravity is changed and you get precipitation. | <urn:uuid:e8b9a286-0198-426a-a841-eb6f6fc1cd58> | 3.640625 | 724 | Comment Section | Science & Tech. | 69.156771 |
Tornadoes and tornado deaths, sorted by Fujita
Courtesy of University of Chicago
Most of the world's tornadoes happen in the United States. These tornadoes are around Tornado Alley. There are about 750 tornadoes each year in the U.S. An average of about 100 people are killed each year by tornadoes.
The fewest tornadoes cause the most damage. Violent tornadoes are very rare but they are the most destructive. Scientists now can make better forecasts and have better equipment. This way they can warn people of tornadoes so not as many will die. They can't stop tornadoes, so the tornadoes will still cause a lot of damage.
The charts to the left show how the fewest tornadoes can cause the most destruction. Click on the picture for a better explanation.
You might also be interested in:
What types of instructional experiences help K-8 students learn science with understanding? What do science educators teachers, teacher leaders, science specialists, professional development staff, curriculum designers, school administrators need to know to create and support such experiences?...more
Tornadoes are hard to forecast. They don't last very long so there's not much time to figure out what's happening. Also, scientists don't really know how they form. They know what the weather's like when...more
Tornadoes are very dangerous. This is why it's important to know when they are going to form. Forecastors at the National Weather Service are always looking for storms that could pop up. Nobody knows exactly...more
Tornadoes form from severe thunderstorms. They are very destructive because they have a high energy density. They also don't last very long. This makes it hard to learn about them. Since scientists don't...more
Sound travels in waves. When the waves hit your ear, you hear a sound. Have you ever noticed the waves in the ocean? They go up and down, up and down. Sound waves act the same way. The number of times...more
Storm chasers are different than storm spotters. Chasers travel around Tornado Alley looking for severe storms and tornadoes. Sometime there are dozens of chasers following the same storm. All kinds of...more
A tornado is the most destructive natural storm. You might think that this also means that tornadoes are the strongest storms; that's not the case. In fact, a thunderstorm which produces a tornado can...more
The Doppler effect was named after Christian Doppler, who first came up with the idea in 1842. He learned that sound waves would be pushed closer together if the source of the sound was moving toward you....more | <urn:uuid:01734548-d5fd-4e5e-bffa-27922750c049> | 3.203125 | 614 | Content Listing | Science & Tech. | 61.751982 |
ASK A SCIENTIST
Question: Why do animals see different than humans? Why can't humans see in the dark?
Have you ever wondered what your dog or cat sees when he or she watches television with you? Do they see what we see or is it something entirely different? Until very recently, we humans were totally and literally in the dark about what and how other animals see. Modern science has revealed an amazing world of vision diversity. Some animals see subtle variations in color, others can slow motion down to speeds that rival our most advanced high speed cameras, while still others actually detect heat variations much like the Hubble telescope. Let's focus on a few different species and see how remarkably differently they see the world around them.
Dogs are said to have dichromatic vision -- they can see only part of the range of colors in the visual spectrum of light wavelengths. Humans have trichromatic vision, meaning that they can see the whole spectrum. Dogs probably lack the ability to see the range of colors from green to red. This means that they see in shades of yellow and blue. Dogs can detect motion better and can see flickering light better than humans. Dogs likely see television as a series of moving frames rather than as a continuous scene. And what about our furry cat friends? Most cats can only detect a little color, and are best at focusing on one object narrowly (for hunting). But cats do have better night vision than humans.
Horses have an amazing range of vision – that is, except for seeing things that are right in front of them. They literally can't spot whatever is between their eyes and therefore directly ahead, due to their binocular vision.
As the fall season continues to take firm root in Upstate New York, our skies are filled with migratory birds gathering together and beginning their long journeys to their winter homes. What do birds see? In fact, among our feathered friends there exists perhaps the greatest diversity in the ability see of all species. Pigeons, for example, can see literally millions of different colors and are thought to be among the best at color detection, compared to any other animal on earth. Eagles' vision is also among the sharpest of any animal. Some can see twice as far as people. Studies suggest that some eagles can spot an animal the size of a rabbit up to two miles away. Canadian geese also have very good eyesight. They can see more than 180 degrees horizontally and vertically, which is very useful during flight. And they are able to fly at night and through terrible storms.
Ask a Scientist appears Thursdays. Questions are answered by faculty at Binghamton University. Teachers in the greater Binghamton area who wish to participate in the program are asked to write to Ask A Scientist, c/o Binghamton University, Office of Communications and Marketing, PO Box 6000, Binghamton, NY 13902-6000 or e-mail firstname.lastname@example.org. Check out the Ask a Scientist Web site at askascientist.binghamton.edu. To submit a question, download the submission form(.pdf, 460kb). | <urn:uuid:70ca282c-328a-473c-9682-7131bc6b20ae> | 3.859375 | 639 | Q&A Forum | Science & Tech. | 55.532464 |
What the Sputniks Said (Jul, 1958)
What the Sputniks Said
Russian scientists disclose how radio waves travel from their satellites to earth
By A. J. Steiger
Radio LISTENERS who tracked the earth-circling travels of Sputnik I have reported new discoveries in short-wave propagation, including a round-the-world echo, according to preliminary findings published in a recent issue of Radio, a Russian popular electronics journal.
What the Sputniks discovered about prospects for using solar power to operate space vehicle instruments is also discussed in the Moscow journal. These reports on Russia’s pioneer space vehicles’ discoveries, the first to be published, are translated here.
Propagation Conditions. “Preliminary results of reception of Sputnik I radio signals,” writes Prof. A. Kazantsev, Doctor of Technical Sciences, in Radio, “show that in the 15-meter wave band these signals were received at very great distances, far surpassing the distance of direct visibility and in a number of cases reaching 10,000 kilometers. Very valuable material on possible ways of short-wave propagation can be derived from study of the data on long-distance reception of these signals.
“It will be recalled that the satellite orbit’s perigee (its lowest point) was in the northern hemisphere and its apogee (highest point) was in the southern hemisphere. The apogee’s altitude reached about 1000 kilometers above the earth’s surface. In the southern hemisphere, therefore, the satellite traveled above the principal layer of the ionosphere, layer F2, which conditions short-wave reflection.
“Concerning the northern hemisphere, especially interesting short-wave propagating conditions were created. At certain intervals Sputnik I was above the F2 layer of maximum ionization, at others below it, and at certain times close to the maximum.
“When Sputnik I was above layer F2, then passing from above through the mass of the ionosphere, the radio waves were reflected from the earth’s surface and propagated further by single or multiple reflection from layer F2 in those areas where its critical frequency had sufficiently high values (Fig. 1).
“It is also possible that radio waves coming into the ionosphere from above at a sloping angle are considerably refracted and therefore penetrate into an area outside the bounds of direct geometric visibility (Fig. 2).
“When Sputnik I was below layer F2 (Fig. 3), and approached an observation point from a global area lighted by the sun, the radio signals on the 15-meter wave band could come from the satellite to a point of reception, after going through consecutive reflections from layer F2 and the earth’s surface, and then through direct visibility.”
Limited Reception. “If the satellite, after passing over the observation point, moved away into an unlighted area of the globe, signal reception ceased in a relatively short distance, depending on limits of visibility.
“Non-symmetrical reception conditions were also observed. When the satellite was close to layer F2 of maximum ionization, then especially favorable conditions might develop for the formation of radio-wave conducting channels able to propagate radio waves over very long distances (Fig. 4).
“There is evidence, in fact, that along with satellite signals which reached the observation point by the shortest route, signals were sometimes received that had traveled around the globe (round-the-world radio echo). One of the USSR’s most skillful radio amateurs, Yu. N. Prozorskiy of Moscow, on October 8 at 0007-0008 hours recorded the reception of such a round-the-world radio echo in the 15-meter wave band.
“Concerning signals in the 7.5-meter wave band, as far as can be judged at present, they were as a rule received in the limits of direct visibility, although in certain cases owing to high values of daytime critical frequencies of the F2 layer, this wave could be propagated also outside direct visibility.
“A conclusion can be drawn as to precisely what way radio-wave propagation occurred after correlation has been established between the altitudes of Sputnik I and the real altitudes of the F2 layer at one and the same moment, and analysis of the propagation conditions.”
Sun’s Radiation. Discussing preliminary findings of Sputnik II with respect to solar radiation in outer space, Russian Academician A. I. Berg, leading Russian authority on space-flight electronics, wrote in Radio: “Of special interest for radio specialists was the data picked up by the second Soviet satellite on solar radiation in the short-wave band which has a direct effect on conditions in the upper layers of the atmosphere.
“During the course of more than a hundred years, scientists have been exploring the intensity and spectral composition of the radiant energy which falls on the earth from the sun, and have on this basis indirectly been attempting to determine what these magnitudes are for conditions outside the earth’s atmosphere.
“The most reliable data at present permit assuming that the density of the stream of the sun’s radiant energy, beyond the limits of the atmosphere, is equal to 1.4 kilowatt per square meter. In actinometry and meteorology, this magnitude is called the ‘solar constant.’ About 9% of this stream falls on the ultraviolet part of the solar spectrum, about 40% on the visible part, and 51% on the far red and infrared parts of the sun’s spectrum.
“At the earth’s surface, with the sun standing at an altitude of 30° above the horizon, the density of the stream of solar energy is considerably less owing to the dispersion and absorption of solar energy by the atmosphere. It amounts to not more than 30 to 35% of the stream density beyond atmospheric limits and is differently distributed. Only 2 to 3% of it falls in the spectrum’s ultraviolet part, 44% in the visible spectrum, and 54% in spectral heat rays.
“Making these data more precise, particularly the direct measurement of stream density of the sun’s radiant energy, i.e., the solar constant beyond atmospheric limits, will make it possible to determine accurately the sun’s effective temperature and density of the radiant energy stream emitted by a unit of solar surface. Precise measurement here is of interest to astrophysics first of all, but it is of more than [theoretical] importance.”
Battery Requirements. “If a transistor solar battery of 1 square meter in area be constructed and faced toward the sun even with the accuracy of a 30° angle, then as might be expected this surface will be exposed to solar power of the order of 1 kilowatt. With 10% battery efficiency in conversion of solar energy to electricity, the output of such a solar battery surface might be expected to reach 100 watts of electric power.
“But if it be assumed that a satellite flying at a great height is exposed to the sun’s rays approximately two-thirds of its orbit circuit time around the earth, then the solar battery can be expected to produce 100 watt-hours of energy. However, to secure such conditions, the spectral characteristics of the transistor battery must be close to the above-indicated frequency distribution of solar energy, especially in the visible and infrared parts of the spectrum, and, moreover, such a battery must operate on an optimum load.
“Unfortunately, the materials presently known that will permit creating batteries that possess high internal resistance are complex and cumbersome. A much lower-magnitude of electric energy should therefore be expected. But even this would nevertheless have great importance as a possible alternate way of powering space vehicle measuring instruments—a solar battery, for example, used in combination with an ordinary or storage battery.” — | <urn:uuid:d585c044-ad7d-4610-a313-90816d2f5d6e> | 3.40625 | 1,651 | Truncated | Science & Tech. | 33.93275 |
What was the weather like millions of years ago?
What did dinosaurs eat for breakfast?
How did people on Easter Island build those statues?
Answer: Ask a rock.
Rocks are some of the most basic features of any landscape. They can be found in cities, deserts, forests, and they can be sculpted, rough, shiny or drab. Most of our natural resources are pulled from rocks, from the petroleum in our gas tank to the mica flakes in our toothpaste (it’s those little sparkly bits). They also hold vast amounts of information. They can tell us in what direction glaciers flowed, or what comprised the trade patterns and societal structures of past civilizations. Rocks from outer space can tell us about the very essence of the universe. Got a question about the natural world past or present? It’s a pretty safe bet that the rocks around you know at the very least a part of the puzzle.
Sometimes the most mundane things in the world around us can hold the most fascinating information. This blog will focus on cool tidbits in earth science and archaeology, with fun (and sometimes unrelated) things thrown in along the way. Use the Glossary to look up the words/people/places in bold, and let me know if there are things I mention that I need to define.
It seems appropriate to start the story at the beginning, with one of the first great stories that rocks ever told; how old is the earth we walk on?
Once upon a time there was a Calvinist Archbishop in Ireland named James Ussher (the extra s was just for fun). He was a renowned scholar who calculated the age of the earth using biblical dates, and figured out that creation began sometime on the night before October 23, 4004 BC. Everyone at the time thought that this was awesome, so much so that they began printing his chronology in the front of family bibles. This was 1654, the earth was 5,658 years old, and all was well.
Then the Enlightenment began, and science entered the scene. It got to be the late 1700′s and another James rose to challenge Ussher’s chronology. James Hutton was a Scot, and a guy that really REALLY liked rocks. He studied them for fun, because what else would a gentleman of leisure do in the 1700′s? He was a little bothered by Ussher’s chronology though. If Ussher was right, then how on earth did something like sandstone exist? It seemed to have been deposited in rivers and on beaches over time, but the time it took to turn sand into stone would end up being far longer than the time it took for the entire earth to be created.
Whew. That’s a puzzle. The prevailing theory at the time was that ‘it was just different in the past’, and that modern observations of the time it took to erode and deposit a rock couldn’t apply. Still, Hutton was bothered with this idea. He set out looking for a place that would prove that there just had to be more time in, well, time itself. He found various places where there were unconformities or places where the rocks didn’t meld into each other but existed in two different sharp and very distinct layers. It wasn’t until he got to Siccar point that he was able to get the full picture.
Here, at a rocky outcrop into the sea along the coast of Scotland, he found evidence for his suspicion that there had to have been more time since the world started. What he saw was a rock formation, known as the Old Red Sandstone, lying horizontally across another sedimentary rock that was gray and had vertical layers. Hutton was ridiculously excited by this. He even found ripple marks on the vertical layers indicating that they had been originally deposited horizontally in water. This fit with Nicolas Steno‘s long established Principles of Stratigraphy, which had been around for about as long as Ussher’s chronology.
Hutton was thrilled, because (get ready to follow the bouncing ball, kids) the bottom layer of rocks would have had to be deposited, compacted and lithified (turned into rock), tilted like a see-saw with Andre the Giant on one end and a two year old on the other, then uplifted, eroded and sunk again into the sea where eventually the horizontal layers of the Old Red Sandstone would be deposited, compacted, lithified, and uplifted to the surface, where eventually weathering and erosion would create the lovely spot called Siccar Point where Hutton was having his really excited moment.
And why was he so excited? Because like that last ridiculous run-on sentence, a lot had to happen, which meant that a lot of time had to take place. And it had to be a lot more time than the accepted age of the earth, which in 1788 was 5,792 years old. Because Hutton established that the earth had to be a lot older than Ussher’s estimate, he has become known as the Father of Modern Geology. Because without accurate estimates of how old the rocks that geologists are looking at are it becomes far more difficult to actually get any decent information out of them. Others of his colleagues took his work on the subject, and actually made it readable (he was a great scientist and a horrible writer) and he became all but worshipped by other people that really liked rocks.
It wasn’t until the advent of modern technology that we were able to figure out exactly how old the earth was. Scientists in Hutton’s time used relative dating to figure out how old things were, simply putting the puzzle pieces together and figuring out that this rock was older than that rock which is the same age as that rock over there. They had no numbers to work with on their time scale, just fossils and places where the rocks met each other.
Eventually though we figured out the numbers that went with their relative dating scale, using absolute dating which measured how long ago rocks had been formed using radioactive isotopes found in tiny zircon crystals. The oldest rocks that we know about now date to 4.28 billion years old! They can be found in Canada along with the previous record holder, the Acasta Gneiss, at a sprightly 4.03 billion years old.
So, how old is the earth?
Modern estimates using old rocks and new technology put the Earth’s birthday somewhere around 4.5 billion years ago, a lot longer than the 6,014 years that the world would be turning this October 22 (MARK YOUR CALENDARS) if we kept with Ussher’s original calculation. But, while Ussher was wrong, and while Hutton was only able to prove that Ussher was wrong, their contributions to the field led modern geologists to look at the rocks and figure out that the rocks did in fact know exactly how old they were.
Next time: Wealth in a warzone; why the rocks in Afghanistan are such a big deal
| <urn:uuid:064115f4-1395-4b22-aebf-3bfdeda266ae> | 3 | 1,516 | Personal Blog | Science & Tech. | 57.966158 |
This week I received some wonderful pictures from Patrick McKinnon, showing rainbow-like cloud features over Seattle. Take a look at them!
And over the past year, others have sent me similar pics. What are they? They have lots of colors like rainbows, but they aren't rainbows. And besides there is no rain with them!
What you are seeing is an example of iridescence, with the colors produced by a process called diffraction.
Iridescence is associated with thin clouds of relatively uniform, small, cloud droplets or ice crystals. The colors are generally in the pastel range. This phenomenon is most obvious in cirrocumulus, altocumulus, and lenticular clouds (lens-shaped clouds formed by flow over mountains and in their lee).
The source of the color is the same process that produces the colors in soap bubbles and oil slicks on the road--diffraction.
Diffraction depends on the wavelike nature of light. Sunlight is made up of all wavelengths of the visible spectrum and in diffraction, the colors are separated by light interacting with itself--called constructive and destructive interference. In a future blog I will go into this mechanism in more detail. | <urn:uuid:4293149c-63a0-4fd4-a1c9-48d75fc870e2> | 3.109375 | 251 | Personal Blog | Science & Tech. | 46.6275 |
Comprehensive Description
Biology: A rare species (Ref. 26346) inhabiting continental slopes (Ref. 247). Usually mesopelagic, although taken most often near the bottom (Ref. 10717). Its razor-edged lower teeth are used to attack and dismember large prey (Ref. 247). Ovoviviparous (Ref. 205). Utilized dried salted for human consumption and for fishmeal (Ref. 247). | <urn:uuid:5db406be-1665-47af-bf12-bc0724f38e37> | 2.78125 | 94 | Knowledge Article | Science & Tech. | 48.318939 |
Diversity in Alabama
The word biodiversity is a descriptive term for the overall variety of living things in the world or a particular region. Well-known areas of high biodiversity include tropical rainforests and coral reefs, where the varieties of plants and animals can be astounding. An area with a large number of a particular type of plant or animal is often called the center of diversity for that group. For example, the lakes of eastern Africa are known for their vast array of cichlid fishes, the popular aquarium pets. Over 300 species of cichlids live in Lake Victoria alone. Thus, eastern Africa is known as the center of diversity for that group of fish. Few people know that Alabama is the center of diversity for several types of aquatic animals, including freshwater mussels.
Alabama’s diversity of freshwater mussels is greater than anywhere else in the world, including tropical areas. North America is home to 307 species of freshwater mussels, as recognized by the American Fisheries Society. A total of 179 species have been reported from Alabama, representing 58% of the total. This is remarkable when you take into account the fact that Alabama makes up only a small percentage of the North American land mass.
Why does the southeastern United States, and Alabama in particular, have such a high diversity of mussel species? Two factors played a role. One of the factors is the wealth of river systems in the state. The central basin in Alabama is the Mobile River system, which is comprised of the Alabama, Tombigbee, Tallapoosa, Coosa, Black Warrior and Cahaba rivers, which form the Mobile River that empties into Mobile Bay. Another major river, the Tennessee, flows through the northern reaches of the state. South central and southeastern Alabama are drained by smaller, coastal systems, such as the Choctawhatchee, Conecuh and Yellow rivers. Finally, the extreme southeastern and southwestern portions of the state lie within the Chattahoochee and Escatawpa river drainages, respectively. Each river system has a unique assemblage of mussels, including many endemic species, which are those that are found in a small area and nowhere else.
The other factor that played a role in the mussel diversity of Alabama is that the river systems in the state are very old. Rivers in more northerly climes fell under the continental ice caps during the various ice ages in our not too distant (geologically speaking) past. Thus, as those rivers were destroyed and reformed, mussel assemblages were eliminated and the rivers had to be repeatedly colonized by mussels. Alabama lies well to the south of the southern extent of the ice caps, thus our rivers escaped the fate of more northerly rivers. So, though they have shifted position and even changed channels over the years, they have remained basically intact. The fact that the mussels in the Alabama region are separated in the different drainages, and have been isolated for a very long time, has allowed them to evolve into the multitude of species that we see today.
Among this diversity is an abundance of shell morphologies, with many unusual shapes ranging from very long and thin, almost square, triangular or round. Sizes range from the very large Washboard, a common commercially valuable species that reaches a length of 10 inches, to the Littlewing Pearlymussel, which only grows to about 1-½ inches in length. Various species are adorned with shell sculpture, such as ridges, corrugations, pustules, knobs, tubercles or furrows. The thin, outer layer of the shell is called the periostracum. Among species, periostracum color ranges from yellow, through olive green to brown and black. Some of the lighter colored species are two-toned, with green rays or chevrons. The shell nacre, also known as mother of pearl, which makes up the thick, inner layer of shell, also varies among species. Most mussels have white nacre, but in some species the nacre may be purple, pink, reddish, salmon or pale orange.
Many of the common names applied to mussels date back to the days when they were harvested for the pearl button industry, which thrived during the first half of the twentieth century. Most of those names simply reflect things they resembled to the mussel fishermen, and were often colorful and unusual. Some examples include: the pigtoes, which have a wide sulcus, giving them the appearance of a cloven hoof; the Washboard, which has a heavily ridged and corrugated shell, resembling the scrub boards that were used for laundry in the days before electric washing machines; and Butterfly, which has a shell in the shape of a butterfly’s wing. Some mussel common names simply reflect the ornamentation or color of their shells. Examples of these are Fiveridge, Threehorn Wartyback, Pimpleback, Rainbow and Wavyrayed Lampmussel. During recent years, scientists that work with freshwater mussels have tried to standardize their common names, assigning common names to those ignored by commercial mussel fishermen. Though generally less colorful than those applied by the mussel fishermen of old, they are usually somewhat descriptive in nature.
In addition to being highly diverse in form, the mussels of Alabama are also diverse in function, demonstrating a variety of life history strategies. The general life history strategy of mussels is very interesting and unique in the animal kingdom. Females brood larvae, called glochidia, until they are mature. Glochidia are parasitic, generally using fish as hosts, so must come into contact with and infest a host upon their release from the female. Glochidia attach to the gills or fins of the fish, which forms a cyst of scar tissue around them. There, the glochidia remain for a period of one to several weeks, depending on the species of mussel. While attached to the fish, the mussels develop all of the organs necessary for a free-living existence. Not only does the cyst offer a safe, secure place for their development, it often allows the larvae to be dispersed over a relatively wide area. Adult mussels are sedentary in nature, seldom moving more than a few feet throughout their lives. The timing of glochidia discharge has been linked with spawning runs of their host fish. Spawning runs of fish allow the glochidia to be dispersed over a large area. Infestations of glochidia have been found to be harmless to the fish, hampering their lives in no way.
The diversity of mussel life history strategies is demonstrated in timing and length of reproductive events, such as spawning and discharge of glochidia. However, the multiplicity of strategy is most notable in the method of making their glochidia available for host infestation. Many species simply discharge their glochidia into the water column and trust their fate to chance. Some discharge glochidia in sticky webs of mucus, through which potential hosts may swim and become infested. Other mussel species have evolved very elaborate methods of attracting hosts. Some discharge glochidia in small packets, called conglutinates, which resemble food items sought by fish.
Conglutinates that resemble fish embryos, insect larvae and worms have been observed. When the host bites into the conglutinate, the packet breaks apart and exposes the host to the glochidia. Some species of mussel have developed intricate lures as modifications to their anatomy. These are in the form of folds of tissue called mantle flaps, and may resemble small fish, crayfish or insects, depending on the species. When the potential host tries to bite the lure, glochidia are discharged by the mussel. Surely the most elaborate host attractor is called a superconglutinate, which is used by a few species in southern Alabama. This is simply a combination of conglutinates, or small packets of glochidia, that are bound together into a single mass. A superconglutinate is formed in such a manner that it closely resembles a minnow. A superconglutinate is discharged into a hollow tube of mucus, which trails in the water current behind the mussel and may be over a yard long. When moving erratically in the water current behind the mussel, superconglutinates look amazingly life-like and have been observed to elicit strikes by predatory fish.
The great diversity of species in Alabama is something of which we can all be proud. However, though diversity of freshwater mussels in the state remains high, many species have been lost from destruction of their habitat by human alterations to river systems. Most species require flowing water over clean, stable sand and gravel. Water pollution and construction of dams on our major rivers, such as the Tennessee, Coosa, Black Warrior and Alabama, have eliminated many species from our state and even driven quite a few to extinction. Also detrimental to mussel habitat is dredging and channelization of our rivers. These activities cause destabilization of the river bottom and mussels cannot survive in loose, sifting sediments. Particularly hard hit was the Tombigbee River, which was subjected to a combination of impoundment and channelization as part of construction of the Tenn-Tom Waterway. Another effect of destroying habitat on our major rivers is that populations in tributaries become isolated. So, what was once a single large population of a particular species becomes fragmented into a number of smaller populations, separated from each other by expanses of poor habitat, following impoundment of a river. These smaller populations are more susceptible to extirpation than larger populations and often lack the genetic diversity to help them overcome adverse conditions.
But, even though many species have been lost, there are some bright spots. Several areas of good habitat, with diverse mussel assemblages, remain. A few smaller rivers and streams, such as Sipsey River and many of the streams in Bankhead and Talladega national forests, have mussel faunas that are basically intact. The Federal Clean Water Act, which was implemented in 1971, has improved water quality in many rivers and mussel reintroduction efforts are under way. There are plans to reintroduce some species that have not been seen in Alabama for almost one hundred years. So, with our help and continued diligence, the status of our wonderful mussel fauna can be maintained and even improved. | <urn:uuid:6cc52f1d-4cbd-468e-9eb2-ff43122697ee> | 3.390625 | 2,150 | Knowledge Article | Science & Tech. | 33.326855 |
GREAT CIRCLE SAILING
The great circle distance in degrees between Point1 (lat1, lon1) and Point2 (lat2, lon2) is easily calculated:
cos(D)= Sin(lat1) * Sin(lat2) + cos(lat1) * cos(lat2) * cos(lon2 - lon1)
or what is the same:
D= arccos(Sin(lat1) * Sin(lat2) + cos(lat1) * cos(lat2) * cos(lon2 - lon1))
In this paper we assume the following sign convention: N = +, S = -, W = +, E = -.
cos(D) will always fall in the range -1 ≤ cos(D) ≤ 1 and (D) will be a positive angle: 0° ≤ D ≤ 180°. You can instead use a formula based on the arctan function, but that returns a result in the -90° to +90° range, and you need to add 180° if the result is negative.
The azimuth or initial course from point 1 (origin) to point 2 (destination) Zn can also be easily calculated. Zn is always measured from North to East from 0° to 360°. First we calculate Z. If we have considered West longitudes as positive we use this formula:
Z = arctan( sin(lon1 - lon2) / ( cos(lat1) * tan(lat2) - sin(lat1) * cos(lon1 - lon2) ) )
(If we had considered East longitudes as being positive we would need to change the sign on one side of this equation.)
The function ATAN( ) returns a value between -90° and +90°, so Z needs to be adjusted to the right quadrant in order to obtain Zn: 0° ≤ Zn < 360°
If sign(sin(Z)) = sign(sin(lon2-lon1)) Then Zn = Z + 180 Else, If Z < 0 Then Zn = Z + 360 Else Zn = Z
Zn = Z + 90 * (1 + sign( sin(lon2-lon1) * sin(Z) ) ) | <urn:uuid:f178995e-c0d3-4655-b939-ba2daff8c94b> | 3.0625 | 433 | Documentation | Science & Tech. | 68.122771 |
3.2. Supernova Remnants: Plerions
The Crab Nebula is a somewhat unique object, and hence one could not confidently predict what other supernova remnants might be detectable. The Crab is a member of that subclass of supernova remnants known as plerions, in which a bubble of relativistic particles is powered by a central pulsar. No other plerions have been seen by telescopes in the Northern Hemisphere (Reynolds et al. 1993) (Table 3). However, two have been detected in the Southern Hemisphere by the CANGAROO group. They first reported a detection of PSR 1706-44 at TeV energies in 1993 (Kifune et al. 1995) based on 60 hours of observation in the summer of 1992. PSR 1706-44 is identified with a pulsar (of period 102 ms) and appears to be associated with a supernova remnant, possibly a plerion. At GeV energies it has a very flat spectrum. The energy spectrum is hard with a flux above 1 TeV of about 0.15 × 10^-11 cm^-2 s^-1. There is no evidence that the signal is periodic. The detection has been confirmed by the University of Durham group working in Narrabri, Australia (Chadwick et al. 1997).
Table 3:
|Source|Energy (GeV)|Flux (10^-11 cm^-2 s^-1)|Groups|
|Crab Nebula|400|7.0|Whipple, ASGAT, HEGRA, TA, Crimea, Gamma*, CANGAROO, CAT|
|PSR 1706-44|1000|0.8|CANGAROO, Durham|
|SS 433|550|< 1.8|Whipple|
|3C 58|550|< 1.1|Whipple|
|PSR 0656+14|1000|< 3.4|Whipple|
The CANGAROO group have also reported the detection of a 6σ signal from the vicinity of the Vela pulsar (Yoshikoshi et al. 1997). The integral γ-ray flux above 2.5 TeV is 2.5 × 10^-12 photons cm^-2 s^-1. Again, there is no evidence for periodicity, and the flux limit is about a factor of 10 less than the steady flux. The signal is offset (by 0°.14) from the pulsar position, which makes it more likely that the source is a synchrotron nebula. Since this offset position is coincident with the birthplace of the pulsar, it is suggested that the progenitor electrons are relics of the initial supernova explosion and that they have survived because the magnetic field was weak. | <urn:uuid:a5ca5417-132e-4bc3-ab7e-9177b3d20893> | 3.3125 | 558 | Knowledge Article | Science & Tech. | 78.771537 |
The current is maximum through those segments of a circuit that offer the least resistance. But how do electrons know beforehand that which path will resist their drift the least?
This is really the same as Adam's answer but phrased differently.
Suppose you have a single wire and you connect it to a battery. Electrons start to flow, but as they do so the resistance to their flow (i.e. the resistance of the wire) generates a potential difference. The electron flow rate, i.e. the current, builds up until the potential difference is equal to the battery voltage, and at that point the current becomes constant. All this happens at about the speed of light.
Now take your example of having, let's say, two wires (A and B) with different resistances connected between the battery terminals - let's say $R_A \gt R_B$. The first few electrons to flow will be randomly distributed between the two wires, A and B, but because wire A has a greater resistance the potential difference along it will build up faster. The electrons feel this potential difference so fewer electrons will flow through A and more electrons will flow through wire B. In turn the potential along wire B will build up and eventually the potential difference along both wires will be equal to the battery. As above this happens extremely rapidly.
So the electrons don't know in advance what path has the least resistance, and indeed the first few electrons to flow will choose random paths. However once the current has stabilised electron flow is restricted by the electron flowing ahead, and these are restricted by the resistance of the paths.
To make an analogy, imagine there are two doors leading out of a theatre, one small door and one big door. The first person to leave after the show will pick a door at random, but as the queues build up more people will pick the larger door because the queue moves faster.
They don't. Electrons follow the path of least resistance in the same way that water flows downhill. The electrons do not act collectively; each individual electron is driven away from other electrons, and driven toward positive charges. The collective result is well described by the statement that they follow the path of least resistance.
| <urn:uuid:2402d419-cd6b-4283-8ea8-f9862f9f0829> | 3.78125 | 455 | Q&A Forum | Science & Tech. | 54.203306 |
The similar mathematical forms of Coulomb’s Law of Electrostatics and Newton’s Law of Gravitation suggest that two oppositely charged spheres should be able to move in a binary orbit about their center of mass using only the electric force as the force of attraction. To test this idea, we will attempt to achieve a binary orbit between oppositely charged graphite coated Styrofoam spheres. The spheres will have a mass of 5 grams and a radius of 1.5 cm. They will be charged to a surface voltage of 20 kV (one positively and one negatively) using a precisely controlled, current-limited, bipolar power supply. Initially the spheres will be separated by a center to center distance of 20 cm. Once charged, the spheres will be launched in opposite directions (perpendicular to the line adjoining them). A successful orbital attempt will result in the two spheres orbiting one another about their center of mass. The microgravity environment will minimize the effects of frictional and gravitational forces, allowing the orbit to be purely electrostatic in nature. | <urn:uuid:94cc3341-fdf7-44dd-a77f-757e10e132ad> | 3.84375 | 214 | Academic Writing | Science & Tech. | 31.725884 |
Introduction | Task | Materials | Teacher Process | Student Process | Evaluation | Science Curriculum Standards | Conclusion | Credits
Our water supplies the Earth with a valuable resource. Many people say that without water humans, animals and plants could not survive. Why is water so important?
The Teacher Process
The Student Process
Directions: Complete the activities below in order. Report to your teacher if you do not understand or if you have any problems.
1. Your teacher just read the book Water by Roy Gallant to your class. Talk with your partners. Summarize what you have learned from the book in your science journal.
2. Read about the water cycle at:
http://www.EnchantedLearning.com/rt/weather/watercycle.shtml and then return here.
3. Now that you and your partners have read the page, complete the water cycle diagram at:
4. Explore this site @ http://mbgnet.mobot.org/fresh/cycle/cycle.htm and find six important processes involved in the water cycle. Write the terms and a brief description in your science journal.
5. Take the true/false quiz @ http://ga.water.usgs.gov/edu/sc3.html Record your answers in your journals.
6. Read the directions to the following experiment.
7. Go to your science station and complete the experiment. (Directions and supplies will be provided at the science station.)
State of Ohio Science Standards Addressed
Science and Technology Standard
#2 - Describe the design process and product in oral, written, or pictorial form.
#4 -Explore air and water. Study the sun. Study the temperature.
#1. Analyze a series of events or cycles, discuss the patterns, and make predictions.
#4. Use evidence and observations to explain and communicate the results of investigations.
#7. Plan and conduct a simple investigation based on systematic observations.
#8. Record and organize observations made.
The students should acquire a complete understanding of the water cycle and the effect that water has upon the Earth. Further discussions about deserts, rain forest, swamps, polluted water and other water affected areas would be a great extension upon this lesson.
Other Relevant Internet Sites
Credits & References
Gallant, Roy (2001) Water. New York, Benchmark Books.
Rachel M. Birt | <urn:uuid:ed5b5739-3a22-433e-918b-6c371d01f91b> | 3.796875 | 506 | Tutorial | Science & Tech. | 58.110392 |
There is much unknown about the ramifications of large-scale deployment of geoengineering technologies. Are they merely short-term fixes? What are the...
Aerosols
Last Updated on 2013-04-10 at 17:13
Aerosols, small particles suspended in air with a lifetime of at least minutes, are either emitted as primary aerosols (dust or particle emissions of...
Carbon dioxide
Last Updated on 2013-02-22 at 12:48
Carbon dioxide (CO2) is a chemical molecule consisting of one carbon atom covalently bonded to two oxygen atoms. At atmospheric pressure and temperature, carbon dioxide is...
Value of carbon: five definitions
Last Updated on 2012-08-22 at 15:12
What is the value of a tonne of carbon dioxide (CO2) that has not been emitted into the atmosphere? It all depends on what you mean by value. The purpose... | <urn:uuid:86cbbf34-7f37-4374-a31a-7bd0ed6531ef> | 2.71875 | 194 | Content Listing | Science & Tech. | 52.69124 |
The Exploratorium Observatory website provides resources for K-12 teachers and students, including a build-a-solar-system activity, a calculator for age and weight on other planets, and a guide to the SETI mission. The website also provides information on space weather, Saturn, the transit of Venus, the Spirit and Opportunity rovers, the solar cycle, solar eclipses, the mini-transit of Mercury, sunspots, and auroras.
%0 Electronic Source %T The Observatory: A Guide to Astronomy Resources on the Exploratorium Website %I Exploratorium %V 2013 %N 21 May 2013 %9 text/html %U http://www.exploratorium.edu/observatory/
Disclaimer: ComPADRE offers citation styles as a guide only. We cannot offer interpretations about citations as this is an automated procedure. Please refer to the style manuals in the Citation Source Information area for clarifications. | <urn:uuid:0600682d-7e35-443f-bf34-a59fcffe2b7c> | 2.8125 | 189 | Content Listing | Science & Tech. | 24.710902 |
I made a program that I got as homework.
The question is: Write a C++ program to sum the series
2/9 - 5/13 + 8/17 ...
up to the number of terms the user wants.
I made the following program, but the answer seems to be wrong. Can someone please help me point out where I have gone wrong?
cout<<"Enter the number of terms in the series: ";
cout<<"\nThe sum of the series is = "<<sum;
I just tried that.
If I enter an odd number of terms n, the output screen shows the sum = 0.22222,
and if I enter an even number of terms, the output screen shows 0.162393,
which is the sum of the first term only.
That's because you never change the values of a and b. You just keep adding or subtracting 5/13 from sum.
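To illustrate that fix, here is a minimal Python sketch of the corrected loop (the function name and structure are mine, not the poster's C++; a and b mirror the numerator and denominator from the reply above):

```python
def series_sum(n):
    """Sum the first n terms of 2/9 - 5/13 + 8/17 - ..."""
    total = 0.0
    a, b, sign = 2, 9, 1          # numerator, denominator, current sign
    for _ in range(n):
        total += sign * a / b
        a += 3                    # numerators step by 3: 2, 5, 8, ...
        b += 4                    # denominators step by 4: 9, 13, 17, ...
        sign = -sign              # terms alternate in sign
    return total

print(series_sum(1))  # about 0.22222, matching the poster's odd-n output
```

Updating a, b, and sign inside the loop is the whole point: without those three lines, every pass adds the same term again.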
I just noticed there is a bunch of mess in your code.
The iostream header should be without ".h"; the math header in C++ is called <cmath> (also without ".h"). conio.h is not standard, nor is it needed. main must be int, not void, and cin and cout are in the std namespace. Your loop runs n+1 cycles so n+2 terms are computed. The count variable is useless, as it is (in your code) equal to i+1. | <urn:uuid:eebbf2a6-311e-483e-be0c-28ba3597170b> | 2.96875 | 326 | Comment Section | Software Dev. | 101.449077 |
Climate Change Indicators in the United States
This figure shows how annual average temperatures in the contiguous 48 states have changed since 1901. Surface data come from land-based weather stations. Satellite measurements cover the lower troposphere, which is the lowest level of the Earth's atmosphere. "UAH" and "RSS" represent two different methods of analyzing the original satellite measurements. This graph uses the 1901 to 2000 average as a baseline for depicting change. Choosing a different baseline period would not change the shape of the data over time.
Data source: NOAA, 2012 1
This figure shows how annual average temperatures worldwide have changed since 1901. Surface data come from a combined set of land-based weather stations and sea surface temperature measurements. Satellite measurements cover the lower troposphere, which is the lowest level of the Earth's atmosphere. "UAH" and "RSS" represent two different methods of analyzing the original satellite measurements. This graph uses the 1901 to 2000 average as a baseline for depicting change. Choosing a different baseline period would not change the shape of the data over time.
Data source: NOAA, 2012 2
- Since 1901, the average surface temperature across the contiguous 48 states has risen at an average rate of 0.13°F per decade (1.3°F per century) (see Figure 1). Average temperatures have risen more quickly since the late 1970s (0.31 to 0.45°F per decade). Seven of the top 10 warmest years on record for the contiguous 48 states have occurred since 1990.
- Worldwide, 2001-2010 was the warmest decade on record since thermometer-based observations began. Global average surface temperature has risen at an average rate of 0.14°F per decade since 1901 (see Figure 2), similar to the rate of warming within the contiguous 48 states. Since the late 1970s, however, the United States has warmed faster than the global rate.
- Some parts of the United States have experienced more warming than others (see Figure 3). The North, the West, and Alaska have seen temperatures increase the most, while some parts of the Southeast have experienced little change. However, not all of these regional trends are statistically significant.
Temperature is a fundamental measurement for describing the climate, and the temperature in particular places can have wide-ranging effects on human life and ecosystems. For example, increases in air temperature can lead to more intense heat waves, which can cause illness and death, especially in vulnerable populations. Annual and seasonal temperature patterns also determine the types of animals and plants that can survive in particular locations. Changes in temperature can disrupt a wide range of natural processes, particularly if these changes occur more quickly than plant and animal species can adapt.
Concentrations of heat-trapping greenhouse gases are increasing in the Earth's atmosphere (see the Atmospheric Concentrations of Greenhouse Gases indicator). In response, average temperatures at the Earth's surface are rising and are expected to continue rising. However, because climate change can shift the wind patterns and ocean currents that drive the world's climate system, some areas experience more warming than others, and some might experience cooling.
About the Indicator
This indicator examines U.S. and global surface temperature patterns from 1901 to the present. U.S. surface measurements come from weather stations on land, while global surface measurements also incorporate observations from buoys and ships on the ocean, thereby providing data from sites spanning much of the surface of the Earth. For comparison, this indicator also displays satellite measurements that can be used to estimate the temperature of the Earth's lower atmosphere since 1979.
This indicator shows anomalies, which compare recorded annual temperature values against a long-term average. For example, an anomaly of +2.0 degrees means the average temperature was 2 degrees higher than the long-term average. This indicator uses the average temperature from 1901 to 2000 as a baseline for comparison. Annual anomalies are calculated for each weather station, starting from daily and monthly average temperatures. Anomalies for broader regions have been determined by dividing the country (or the world) into a grid, averaging the data for all weather stations within each cell of the grid, and then averaging the grid cells together (for Figures 1 and 2) or displaying them on a map (Figure 3). This method ensures that the results are not biased toward regions that happen to have many stations close together.
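As a toy sketch of that grid-averaging idea (the station records, 5-degree cell size, and function below are invented for illustration and are far simpler than NOAA's actual processing):

```python
from collections import defaultdict

def gridded_mean(stations, cell_deg=5.0):
    """Average anomalies within grid cells first, then average the cells,
    so a dense cluster of stations cannot dominate the result."""
    cells = defaultdict(list)
    for lat, lon, anomaly in stations:
        cells[(lat // cell_deg, lon // cell_deg)].append(anomaly)
    cell_means = [sum(v) / len(v) for v in cells.values()]
    return sum(cell_means) / len(cell_means)

# Three clustered stations reading +2.0 and one remote station reading 0.0:
stations = [(40.1, -104.0, 2.0), (40.2, -104.1, 2.0), (40.3, -104.2, 2.0),
            (10.0, 20.0, 0.0)]
print(gridded_mean(stations))  # prints 1.0 (a naive mean over stations gives 1.5)
```

The three clustered stations fall into one cell and count as a single cell mean, which is exactly the bias correction the indicator's method describes.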
Data from the early 20th century are somewhat less precise than more recent data because there were fewer stations collecting measurements at the time, especially in the Southern Hemisphere. However, the overall trends are still reliable. Where possible, the data have been adjusted to account for any biases that might be introduced by station moves, development (e.g., urbanization) near the station, changes in instruments and times of measurement, and other changes.
The data for this indicator were provided by the National Oceanic and Atmospheric Administration's National Climatic Data Center, which maintains a large collection of climate data online at: www.ncdc.noaa.gov/oa/ncdc.html. Surface temperature anomalies were calculated based on monthly values from a network of long-term monitoring stations. Satellite data were analyzed by two independent groups—the Global Hydrology and Climate Center at the University of Alabama in Huntsville (UAH) and Remote Sensing Systems (RSS)—resulting in slightly different trend lines. | <urn:uuid:63629139-5d6f-4601-8aa9-fd04c1fb1667> | 3.90625 | 1,093 | Knowledge Article | Science & Tech. | 31.907518 |
Gerd Binnig’s and Heinrich Rohrer’s scanning tunneling microscope (STM) has fueled much of the nanotechnology research effort around the world. Developed in an IBM laboratory about 35 years ago (ETA May 2, 2013: the year was 1981), it was the first microscope that allowed researchers to access material at the nanoscale (there’s more about these researchers and their accomplishment in my May 26, 2011 posting). Don Eigler, also working for IBM, was the first to use an STM to manipulate the placement of atoms on a surface. In 1989, he ‘nudged’ xenon atoms into the shape of three letters, IBM (there’s more about Eigler in this Wikipedia essay).
Today, May 1, 2013, IBM has released an atomic movie, A Boy and His Atom, which was made with their seminal scanning tunneling microscope,
If the story is not apparent to you, here’s how the IBM May 1, 2013 news release describes the movie,
The movie’s plot line depicts a character called Atom who befriends a single atom and goes on a “playful journey.” This journey involves dancing, jumping on a trampoline, and playing catch. It’s unlikely to win any Oscars, but that’s not really the point; it’s designed to get people inspired about science.
In almost five years of writing this blog, this is the first time I’ve seen a physical description of an STM and it is one big sucker (from the news release),
… Christopher Lutz, Research Scientist, IBM Research. “It weighs two tons, operates at a temperature of negative 268 degrees Celsius and magnifies the atomic surface over 100 million times. [emphasis mine] The ability to control the temperature, pressure and vibrations at exact levels makes our IBM Research lab one of the few places in the world where atoms can be moved with such precision.”
Making a movie with an STM is not as easy as it might seem (from the news release; Note: a link has been removed),
Remotely operated on a standard computer, IBM researchers used the microscope to control a super-sharp needle along a copper surface to “feel” atoms. Only 1 nanometer away from the surface, which is a billionth of a meter in distance, the needle can physically attract atoms and molecules on the surface and thus pull them to a precisely specified location on the surface. The moving atom makes a unique sound that is critical feedback in determining how many positions it’s actually moved.
There is a corporate agenda associated with this particular public relations gambit, from the news release,
As computer circuits shrink toward atomic dimensions — which they have for decades in accordance with Moore’s Law — chip designers are running into physical limitations using traditional techniques. The exploration of unconventional methods of magnetism and the properties of atoms on well-controlled surfaces allows IBM scientists to identify entirely new computing paths.
Using the smallest object available for engineering data storage devices – single atoms – the same team of IBM researchers who made this movie also recently created the world’s smallest magnetic bit. They were the first to answer the question of how many atoms it takes to reliably store one bit of magnetic information: 12. By comparison, it takes roughly 1 million atoms to store a bit of data on a modern computer or electronic device. If commercialized, this atomic memory could one day store all of the movies ever made in a device the size of a fingernail.
“Research means asking questions beyond those required to find good short-term engineering solutions to problems. As data creation and consumption continue to get bigger, data storage needs to get smaller, all the way down to the atomic level,” continued Heinrich [Andreas Heinrich, Principal Investigator, IBM Research]. “We’re applying the same techniques used to come up with new computing architectures and alternative ways to store data to making this movie.”
Guinness World Records has acknowledged A Boy and His Atom as the world’s smallest movie. For now. | <urn:uuid:e3d39bbf-7963-46c4-83af-f0e83fdcf233> | 3.3125 | 861 | Personal Blog | Science & Tech. | 35.058389 |
Logs come in all shapes, but as applications and infrastructures grow, the result is a massive amount of distributed data that's useful to mine. From web and mail servers to kernel and boot logs, modern servers hold a rich set of information. Massive amounts of distributed data are a perfect application for Apache Hadoop, as are log files—time-ordered structured textual data.
You can use log processing to extract a variety of information. One of its most common uses is to extract errors or count the occurrence of some event within a system (such as login failures). You can also extract some types of performance data, such as connections or transactions per second. Other useful information includes the extraction (map) and construction of site visits (reduce) from a web log. This analysis can also support detection of unique user visits in addition to file access statistics.
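As a toy illustration of the "count the occurrence of some event" case, here is a non-Hadoop Python sketch that tallies hypothetical SSH login failures per host (the log lines, pattern, and field position are assumptions for the example; real syslog formats vary):

```python
import re
from collections import Counter

def count_events(lines, pattern=r"Failed password"):
    """Tally how many matching events each host produced."""
    counts = Counter()
    for line in lines:
        if re.search(pattern, line):
            host = line.split()[3]   # syslog-style: month day time host ...
            counts[host] += 1
    return counts

log = ["Oct 26 10:01:02 web01 sshd[200]: Failed password for root",
       "Oct 26 10:01:07 web01 sshd[201]: Failed password for admin",
       "Oct 26 10:02:11 db01 sshd[310]: Accepted password for backup"]
print(count_events(log))  # Counter({'web01': 2})
```

The exercises that follow scale this same filter-and-count pattern up to Hadoop and Pig.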
These exercises give you practice in:
- Getting a simple Hadoop environment up and running
- Interacting with the Hadoop file system (HDFS)
- Writing a simple MapReduce application
- Writing a filtering Apache Pig query
- Writing an accumulating Pig query
To get the most from these exercises, you should have a basic working knowledge of Linux®. Some knowledge of virtual appliances is also useful for bringing a simple environment up.
There are two ways to get Hadoop up and running. The first is to install the Hadoop software, and then configure it for your environment (the simplest case is a single-node instance, in which all daemons run in a single node). See Distributed data processing with Hadoop, Part 1: Getting started for details.
The second and simpler way is through the use of the Cloudera's Hadoop Demo VM (which contains a Linux image plus a preconfigured Hadoop instance). The Cloudera virtual machine (VM) runs on VMware, Kernel-based Virtual Machine (KVM), or Virtualbox.
Choose a method, and complete the installation. Then, complete the following task:
- Verify that Hadoop is running by issuing an HDFS ls command.
The HDFS is a special-purpose file system that manages data and replicas within a Hadoop cluster, distributing them to compute nodes for efficient processing. Even though HDFS is a special-purpose file system, it implements many of the typical file system commands. To retrieve help information for Hadoop, issue the command hadoop dfs. Perform the following tasks:
- Create a test subdirectory within the HDFS.
- Move a file from the local file system into the HDFS subdirectory using copyFromLocal.
- For extra credit, view the file within HDFS using a cat operation.
As demonstrated in Distributed data processing with Hadoop, Part 3: Application development, writing a word count map and reduce application is simple. Using the Ruby example demonstrated in this article, develop a Python map and reduce application, and run them on a sample set of data. Recall that Hadoop sorts the output of map so that like words are contiguous, which provides a useful optimization for the reducer.
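Because Hadoop streaming simply pipes text through stdin/stdout and sorts between the map and reduce stages, the pair can be sanity-checked without a cluster. The sketch below simulates that pipeline in one process; it is a stand-in for "cat data | map | sort | reduce", not Hadoop itself, and the function name is mine:

```python
def simulate_streaming(lines):
    """Map, shuffle (sort), then reduce over sorted keys in one pass."""
    mapped = sorted((word, 1) for line in lines for word in line.split())
    counts, last, total = [], None, 0
    for word, n in mapped:
        if word != last and last is not None:
            counts.append((last, total))   # key changed: emit the finished count
            total = 0
        last = word
        total += n
    if last is not None:
        counts.append((last, total))       # flush the final key
    return counts

print(simulate_streaming(["a b a", "b c"]))  # [('a', 2), ('b', 2), ('c', 1)]
```

The single pass over sorted keys is exactly the optimization the sort provides to the reducer: like words arrive contiguously, so the reducer only needs to remember the previous key.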
As you saw in Data processing with Apache Pig, Pig allows you to build simple scripts that are translated into MapReduce applications. In this exercise, you extract all log entries (from /var/log/messages) that contain both the word kernel: and the word terminating.
- Create a script that extracts all log lines with the predefined criteria.
Log messages are generated by a variety of sources within the Linux kernel (such as dhclient). In this example, you want to discover the various sources that generate log messages and the number of log messages per source.
- Create a script that counts the number of log messages for each log source.
The specific output depends on your particular Hadoop installation and configuration.
Listing 1. Performing an ls operation on the HDFS
$ hadoop dfs -ls /
drwxrwxrwx   - hue    supergroup   0 2011-12-10 06:56 /tmp
drwxr-xr-x   - hue    supergroup   0 2011-12-08 05:20 /user
drwxr-xr-x   - mapred supergroup   0 2011-12-08 10:06 /var
$
More or fewer files might be present depending on use.
In Exercise 2, you create a subdirectory within HDFS and copy a file into it. Note that you create test data by moving the kernel message buffer into a file. For extra credit, view the file within the HDFS using the cat command (see Listing 2).
Listing 2. Manipulating the HDFS
$ dmesg > kerndata
$ hadoop dfs -mkdir /test
$ hadoop dfs -ls /test
$ hadoop dfs -copyFromLocal kerndata /test/mydata
$ hadoop dfs -cat /test/mydata
Linux version 2.6.18-274-7.1.el5 (firstname.lastname@example.org)...
...
e1000: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX
$
In Exercise 3, you create a simple word count MapReduce application in Python. Python is actually a great language in which to implement the word count example. You can find a useful writeup on Python MapReduce in Writing a Hadoop MapReduce Program in Python by Michael G. Noll.
This example assumes that you performed the steps of exercise 2 (to ingest data into the HDFS). Listing 3 provides the map application.
Listing 3. Map application in Python
#!/usr/bin/env python
import sys

for line in sys.stdin:
    line = line.strip()
    words = line.split()
    for word in words:
        print '%s\t1' % word
Listing 4 provides the reduce application.
Listing 4. The reduce application in Python
#!/usr/bin/env python
from operator import itemgetter
import sys

last_word = None
last_count = 0
cur_word = None

for line in sys.stdin:
    line = line.strip()
    cur_word, count = line.split('\t', 1)
    count = int(count)
    if last_word == cur_word:
        last_count += count
    else:
        if last_word:
            print '%s\t%s' % (last_word, last_count)
        last_count = count
        last_word = cur_word

if last_word == cur_word:
    print '%s\t%s' % (last_word, last_count)
Listing 5 illustrates the process of invoking the Python MapReduce example in Hadoop.
Listing 5. Testing Python MapReduce with Hadoop
$ hadoop jar /usr/lib/hadoop-0.20/contrib/streaming/hadoop-streaming-0.20.2-cdh3u2.jar \
    -file pymap.py -mapper pymap.py -file pyreduce.py -reducer pyreduce.py \
    -input /test/mydata -output /test/output
...
$ hadoop dfs -cat /test/output/part-00000
...
write 3
write-combining 2
wrong. 1
your 2
zone: 2
zonelists. 1
$
In Exercise 4, you extract /var/log/messages log entries that contain both the word kernel: and the word terminating. In this case, you use Pig in local mode to query the local file (see Listing 6). Load the file into a Pig relation (log), filter its contents to only kernel messages, and then filter that resulting relation for terminating messages.
Listing 6. Extracting all kernel + terminating log messages
$ pig -x local
grunt> log = LOAD '/var/log/messages';
grunt> logkern = FILTER log BY $0 MATCHES '.*kernel:.*';
grunt> logkernterm = FILTER logkern BY $0 MATCHES '.*terminating.*';
grunt> dump logkernterm
...
(Dec 8 11:08:48 localhost kernel: Kernel log daemon terminating.)
grunt>
In Exercise 5, extract the log sources and log message counts from /var/log/messages. In this case, create a script for the query, and execute it through Pig's local mode. In Listing 7, you load the file and parse the input using a space as a delimiter. You then assign the delimited string fields to your named elements. Use the GROUP operator to group the messages by their source, and then use the FOREACH operator and COUNT to aggregate your data.
Listing 7. Log sources and counts script for /var/log/messages
log = LOAD '/var/log/messages' USING PigStorage(' ') AS (month:chararray,
    day:int, time:chararray, host:chararray, source:chararray);
sources = GROUP log BY source;
counts = FOREACH sources GENERATE group, COUNT(log);
dump counts;
The result is shown executed in Listing 8.
Listing 8. Executing your log sources script
$ pig -x local logsources.pig
...
(init:,1)
(gconfd,12)
(kernel:,505)
(syslogd,2)
(dhclient:,91)
(localhost,1168)
(gpm:,2)
(NetworkManager:,292)
(avahi-daemon:,37)
(avahi-daemon:,44)
(nm-system-settings:,8)
$
- Distributed computing with Linux and Hadoop (Ken Mann and M. Tim Jones, developerWorks, December 2008): Discover Apache's Hadoop, a Linux-based software framework that enables distributed manipulation of vast amounts of data, including parallel indexing of internet web pages.
- Distributed data processing with Hadoop, Part 1: Getting started (M. Tim Jones, developerWorks, May 2010): Explore the Hadoop framework, including its fundamental elements, such as the Hadoop file system (HDFS), common node types, and ways to monitor and manage Hadoop using its core web interfaces. Learn to install and configure a single-node Hadoop cluster, and delve into the MapReduce application.
- Distributed data processing with Hadoop, Part 2: Going further (M. Tim Jones, developerWorks, June 2010): Configure a more advanced setup with Hadoop in a multi-node cluster for parallel processing. You'll work with MapReduce functionality in a parallel environment and explore command line and web-based management aspects of Hadoop.
- Distributed data processing with Hadoop, Part 3: Application development (M. Tim Jones, developerWorks, July 2010): Explore the Hadoop APIs and data flow and learn to use them with a simple mapper and reducer application.
- Data processing with Apache Pig (M. Tim Jones, developerWorks, February 2012): Pigs are known for rooting around and digging out anything they can consume. Apache Pig does the same thing for big data. Learn more about this tool and how to put it to work in your applications.
- Writing a Hadoop MapReduce Program in Python (Michael G. Noll, updated October 2011, published September 2007): Learn to write a simple MapReduce program for Hadoop in the Python programming language in this tutorial.
- The Open Source developerWorks zone provides a wealth of information on open source tools and using open source technologies.
- developerWorks Web development specializes in articles covering various web-based solutions.
- Stay current with developerWorks technical events and webcasts focused on a variety of IBM products and IT industry topics.
- Attend a free developerWorks Live! briefing to get up-to-speed quickly on IBM products and tools, as well as IT industry trends.
- Watch developerWorks on-demand demos ranging from product installation and setup demos for beginners, to advanced functionality for experienced developers.
- Follow developerWorks on Twitter, or subscribe to a feed of Linux tweets on developerWorks.
Get products and technologies
- Cloudera's Hadoop Demo VM (May 2012): Start using with Apache Hadoop with a set of virtual machines that include a Linux image and a preconfigured Hadoop instance.
- Evaluate IBM products in the way that suits you best: Download a product trial, try a product online, use a product in a cloud environment, or spend a few hours in the SOA Sandbox learning how to implement Service Oriented Architecture efficiently.
- Check out developerWorks blogs and get involved in the developerWorks community.
- Get involved in the developerWorks community. Connect with other developerWorks users while exploring the developer-driven blogs, forums, groups, and wikis.
M. Tim Jones is an embedded firmware architect and the author of Artificial Intelligence: A Systems Approach, GNU/Linux Application Programming (now in its second edition), AI Application Programming (in its second edition), and BSD Sockets Programming from a Multilanguage Perspective. His engineering background ranges from the development of kernels for geosynchronous spacecraft to embedded systems architecture and networking protocols development. Tim is a platform architect with Intel and author in Longmont, Colo. | <urn:uuid:0c9fee69-0d9d-4b33-8a15-851ae62aa604> | 2.90625 | 2,829 | Tutorial | Software Dev. | 50.201287 |
Launching the NPP
Project Scientist, NPP
The purpose of the NPP is, it's basically the prototype of the next generation Earth-observing satellite. It's the nation's first attempt to really combine weather monitoring and climate observing in the same platform.
The nations' newest weather monitoring and climate observation satellite is getting ready to take its place in space so that we may know what is going on here on Earth.
The spacecraft is known as NPP, for NPOESS Preparatory Project, and it is a technological trailblazer in the effort to find out more about the weather and condition of our world.
NPP is a continuation of the earth orbiting satellite systems. For weather forecasting and for climate predictions, you need to have continuous observations. So what NPP does is continue the data record started by the NASA EOS satellites and improves on the instruments that are used for numerical weather forecasting from the current series of NOAA satellites.
Launch Director, NPP
The current satellites we have on orbit have been very successful for us, but NPP is taking all the advances we've had in technology over the past five, 10 years, putting them on this test bed spacecraft, being able to use them, prove them out for the future constellation.
NPP data will be used by virtually all of the national weather services for all the nations of the world. And then there are the scientific users who are trying to understand the individual phenomena both at home and abroad.
While some spacecraft are built to collect a specific set of information using only one instrument, the NPP will observe the Earth in a variety of forms using five instruments. The information it will gather will be extensive, but working with a large set of instruments makes the preparation equally exhaustive.
Mission Manager, NPP
Well, every mission has its own set of challenges, you know, what's challenging about NPP is the fact we have five instruments. Some spacecraft have one instrument. And every mission has to go through environmental testing, so now you have to go through environmental testing with five different instruments, which all carry their own set of requirements and restrictions.
These are NASA satellites, these are one-of-a-kind satellites. You know, something like a GPS constellation, which would launch 20 of the same type of satellites, you can get into a rhythm with how you process those. But it's not the case with NASA missions, which is one of the reasons it's an interesting job because every spacecraft brings its own set of challenges and uniqueness and it keeps the job interesting.
Although NASA routinely dispatches spacecraft to other worlds to push the bounds of exploration, the agency does not lose focus on examinations of our home planet. Previous missions, some still in operation, have compiled decades of data about interactions of the Earth's myriad environmental systems. NPP aims to continue those observations with new levels of precision.
It has two specific goals. One is to get the data for the weather forecasts, environmental observations and take a whole suite of observations that continue our satellite data records which span from measuring aerosols, you know, dust particles in the atmosphere, how have they changed over the past decade? Is the vegetation index? Is the ground greener or browner over time? Has the sea surface temperature changed? Has the ozone changed? These are all data sets that we have that we have multi-decades sets of data sets and we just want to keep adding to that so we can answer the question, Is the climate changing?
The NPP will lift off from Vandenberg Air Force Base in California, so it can be positioned in a specific orbit for its important mission.
The NPP satellite will launch from Vandenberg Air Force Base, specifically Space Launch Complex 2 for the reason that it needs to go into a polar orbit. Polar orbit meaning that as the Earth rotates the satellite will be crossing the poles. And because this is an earth-observing satellite, you are able to see every bit of the Earth.
The NPP satellite is going into space courtesy of a Delta II rocket, the workhorse of America's fleet of uncrewed missions. First launched in 1989, the Delta II has been used to successfully orbit several Earth-observing satellites. It dispatched spacecraft to Mars, including the Spirit and Opportunity rovers in 2003, both of which continue to operate on Martian soil. NASA's record is perfect for missions launched on a Delta II.
The NPP mission on Delta II is currently the last manifested Delta II to launch on either coast. That has historical significance to our team, however, we're treating this as we have treated all the rest of the Delta II launches.
Recent years have seen new rockets emerge on the Launch Services Program roster. They use new methods of construction, compared with that employed for the Delta II.
Delta has more of a historic launch processing flow of building the entire rocket up on the pad.
Work to prepare the Delta II to launch the NPP satellite began during the summer.
We began build-up of the vehicle in July of this year, erecting first stage, the nine solid rocket motors, the second stage, putting the payload fairing into the mobile service tower. We will then bring out the satellite in a transportation can, erect it, mate it, then bring the payload fairing around the satellite.
We take great pride in the success we've had on Delta II.
By the time you get there on launch day, it's kind of like you've planned a trip and you've packed for the trip and all you have left to do is gas and go. So that's what we do on launch day, we load the rocket with fuel and liquid oxygen and then we do our final avionics and electrical checks and we push the button and we sit on the edge of our seats.
| <urn:uuid:fd2a9355-a377-40b6-a06c-a12bf384f45d> | 3.578125 | 1,200 | Audio Transcript | Science & Tech. | 47.373811 |
Foxxification Of This Story?
Wed Aug 15 01:50:10 BST 2012 by Julian Mann
If you would care to take a look at the NS article of 21.1.2009 by Graham Lawton (amongst others), an opinion is expressed there that 40 to 50 per cent of the human genome got there by HGT. Therefore the chances are actually quite high that Neanderthals and humans living in close proximity could have picked up the same genes from an external source via an intermediary, without there having been any cross-breeding at all. With regard to the geographically separated humans, one would have expected vertical gene transfer to have passed on these Neanderthal genes over an extended period of time, via human/human mating. As that has not happened, particularly after the Neanderthal die-out, you would need to re-consider the relative importance of VGT in this process. Perhaps HGT was after all the bigger player and accounts for all of the common genes. | <urn:uuid:d25de33d-1479-4f7e-8362-531b7462ee37> | 3.296875 | 204 | Comment Section | Science & Tech. | 52.427619 |
K-T Boundary Clay.
The K-T boundary clay is found in thin layers all over the world at sedimentary levels that indicate it is the same age everywhere: about 65 million years ago. This is the boundary between the Cretaceous (K, from the German Kreide) and the Tertiary (T for obvious reasons) periods, and also the time at which there was a mass extinction.
That such a thin layer of similar material should be found all over the globe is strange, but what's even stranger is that it is always highly enriched in iridium compared to everything around it. It's as if something dumped a huge quantity of iridium on the earth and spread it around in some kind of giant explosion.
That something was almost certainly a large (ca. 10 km diameter) chondritic meteorite, a type known to contain very high levels of iridium compared to the Earth's crust. All the evidence points to such an object hitting the Yucatan peninsula of Mexico at the same time the clay was deposited and the dinosaurs became extinct.
And the dark layer in this rock is a tiny bit of that iridium-rich clay material.
Source: Jensan Scientifics
Contributor: Theodore Gray
Acquired: 8 April, 2009
Text Updated: 9 April, 2009 | <urn:uuid:8926f433-2675-4040-a0fb-398df450e523> | 3.578125 | 293 | Knowledge Article | Science & Tech. | 50.298636 |
As wildfire consumes hundreds of thousands of forested acres every summer, NASA’s Earth-observing satellites keep tabs on the destruction. The Aqua and Terra satellites can help scientists and land-use managers figure out how much land has burned and where, as well as how fires are responding to the changing climate. Now the visualization wizards at NASA's Goddard Space Flight Center have stitched this data together into a timelapse video, which you can watch past the jump. The video includes fires detected in the continental United States from 2002 through July 2011. 2011 was a banner year for wildfire, with 2.7 million acres burning in Texas alone, and featuring a fire that threatened the Los Alamos National Laboratory in New Mexico.
From 2009 to 2011 alone, more than 200,000 fires consumed 18 million acres across the nation. NASA says that is equivalent to the combined area of Massachusetts, Vermont, New Hampshire, Delaware and Rhode Island. And that doesn’t even include 2012, in which wildfires gobbled hundreds of thousands of acres in Colorado, New Mexico and throughout the rest of the West.
Check it out, and notice the fiery trends:
Sometimes you will need to add some points to an existing barplot. You might try
par(mfrow = c(1, 2))
df <- data.frame(stolpec1 = 10 * runif(10), stolpec2 = 30 * runif(10))
barplot(df$stolpec1)
lines(df$stolpec2 / 10)   # implicitly x = 1:10
points(df$stolpec2 / 10)
but you will get a funky-looking line/points, a bit squeezed together. This happens because the bars are not drawn at x positions 1:10, but at other coordinates. These coordinates can be seen if you save the return value of barplot: it is a matrix object with one column, holding the x-axis midpoints of the bars. Feed these to the x argument of your lines/points calls and you're all set.
df.bar <- barplot(df$stolpec1)
lines(x = df.bar, y = df$stolpec2 / 10)
points(x = df.bar, y = df$stolpec2 / 10)
Another way of plotting this is using the plotrix package. The controls are a bit different and it takes some time getting used to them.
library(plotrix)
barp(df$stolpec1, col = "grey70")
lines(df$stolpec2 / 10)
points(df$stolpec2 / 10)
An excited atom in a small cavity is precisely such an antenna, albeit a microscopic one. If the cavity is small enough, the atom will be unable to radiate because the wavelength of the oscillating field it would "like" to produce cannot fit within the boundaries. As long as the atom cannot emit a photon, it must remain in the same energy level; the excited state acquires an infinite lifetime.
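The "wavelength cannot fit" condition can be made concrete with the standard parallel-plate cutoff rule: between two mirrors a distance d apart, a mode polarized parallel to the mirrors can only propagate if its wavelength is shorter than 2d. The sketch below uses that textbook rule and illustrative numbers; neither is taken from the article itself.

```python
# Parallel-plate cutoff sketch: a mode polarized parallel to the mirrors
# propagates only if wavelength < 2 * gap, so emission at longer
# wavelengths is inhibited. (Illustrative, standard waveguide result.)

def emission_inhibited(wavelength_m: float, gap_m: float) -> bool:
    """True if an atom between the plates cannot radiate at this wavelength."""
    return wavelength_m > 2 * gap_m

gap = 0.25e-3  # roughly a quarter of a millimeter, as in the M.I.T. setup
print(emission_inhibited(0.6e-3, gap))  # 0.6 mm exceeds the 0.5 mm cutoff
print(emission_inhibited(0.3e-3, gap))  # this wavelength fits, so it radiates
```

This is why the experiments below needed Rydberg atoms: only their sub-millimeter transition wavelengths are comparable to a gap one can actually build.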
In 1985 research groups at the University of Washington and at the Massachusetts Institute of Technology demonstrated suppressed emission. The group in Seattle inhibited the radiation of a single electron inside an electromagnetic trap, whereas the M.I.T. group studied excited atoms confined between two metallic plates about a quarter of a millimeter apart. The atoms remained in the same state without radiating as long as they were between the plates.
Millimeter-scale structures are much too wide to alter the behavior of conventionally excited atoms emitting micron or submicron radiation; consequently, the M.I.T. experimenters had to work with atoms in special states known as Rydberg states. An atom in a Rydberg state has almost enough energy to lose an electron completely. Because this outermost electron is bound only weakly, it can assume any of a great number of closely spaced energy levels, and the photons it emits while jumping from one to another have wavelengths ranging from a fraction of a millimeter to a few centimeters. Rydberg atoms are prepared by irradiating ground-state atoms with laser light of appropriate wavelengths and are widely used in cavity QED experiments.
The suppression of spontaneous emission at an optical frequency requires much smaller cavities. In 1986 one of us (Haroche), along with other physicists at Yale University, made a micron-wide structure by stacking two optically flat mirrors separated by extremely thin metallic spacers. The workers sent atoms through this passage, thereby preventing them from radiating for as long as 13 times the normal excited-state lifetime. Researchers at the University of Rome used similar micron-wide gaps to inhibit emission by excited dye molecules.
The experiments performed on atoms between two flat mirrors have an interesting twist. Such a structure, with no sidewalls, constrains the wavelengths only of photons whose polarization is parallel to the mirrors. As a result, emission is inhibited only if the atomic dipole antenna oscillates along the plane of mirrors. (It was essential, for example, to prepare the excited atoms with this dipole orientation in the M.I.T. and Yale spontaneous-emission inhibition experiments.) The Yale researchers demonstrated these polarization-dependent effects by rotating the atomic dipole between the mirrors with the help of a magnetic field. When the dipole orientation was tilted with respect to the mirrors' plane, the excited-state lifetime dropped substantially.
Suppressed emission also takes place in solid-state cavities—tiny regions of semiconductor bounded by layers of disparate substances. Solid-state physicists routinely produce structures of submicron dimensions by means of molecular-beam epitaxy, in which materials are built up one atomic layer at a time. Devices built to take advantage of cavity QED phenomena could engender a new generation of light emitters [see "Microlasers," by Jack L. Jewell, James P. Harbison and Axel Scherer; SCIENTIFIC AMERICAN, November 1991].
These experiments indicate a counterintuitive phenomenon that might be called "no-photon interference." In short, the cavity prevents an atom from emitting a photon because that photon would have interfered destructively with itself had it ever existed. But this raises a philosophical question: How can the photon "know," even before being emitted, whether the cavity is the right or wrong size?
Part of the answer lies in yet another odd result of quantum mechanics. A cavity with no photon is in its lowest-energy state, the so-called ground state, but it is not really empty. The Heisenberg uncertainty principle sets a lower limit on the product of the electric and magnetic fields inside the cavity (or anywhere else for that matter) and thus prevents them from simultaneously vanishing. This so-called vacuum field exhibits intrinsic fluctuations at all frequencies, from long radio waves down to visible, ultraviolet and gamma radiation, and is a crucial concept in theoretical physics. Indeed, spontaneous emission of a photon by an excited atom is in a sense induced by vacuum fluctuations.
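The "not really empty" vacuum can be made quantitative. Quantizing a single cavity mode gives a root-mean-square vacuum electric field of sqrt(ħω / 2ε₀V) for mode volume V, a standard cavity-QED textbook expression that is not stated in the article; the numbers below are illustrative.

```python
import math

# R.m.s. vacuum electric field of a single cavity mode:
#   E_vac = sqrt(hbar * omega / (2 * epsilon_0 * V))
# Standard textbook expression, used here for illustration only.
HBAR = 1.054571817e-34   # reduced Planck constant, J s
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m
C = 2.99792458e8         # speed of light, m/s

def vacuum_field(wavelength_m: float, volume_m3: float) -> float:
    omega = 2 * math.pi * C / wavelength_m  # angular frequency of the mode
    return math.sqrt(HBAR * omega / (2 * EPS0 * volume_m3))

# A micron-sized optical cavity: the vacuum field is on the order of 1e5 V/m,
# far from negligible on the scale an atom experiences.
print(vacuum_field(1e-6, 1e-18))
```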
Inside XSL-T (1/4) - exploring XML
Now that we have had so many columns with XSL examples and so few about XSL itself, it seems like a good idea to study XSL in more detail. XSL turns out to be a very generic mechanism for transforming document trees, at least the paperless ones...
XSL was initially devised to solve two problems:
- Transforming an XML document into something else.
- Formatting an XML document for display on a page-oriented device, such as a printer or a browser.
Subsequently it has proven difficult to solve the second problem in a fashion that satisfies all the different requirements from low resolution screen displays all the way to hi-res printing and copying. Furthermore, screen formatting is currently done with Cascading Style Sheets (CSS), so little interest developed in yet another method. The World Wide Web Consortium (W3C) then decided to split the two tasks into separate sub-standards, XSL Transformations (XSL-T) and XSL formatting objects (XSL-FO). While XSL-T has been an official recommendation since November of last year, XSL-FO is still in the making.
The T in XSLT
A transformation expressed in XSLT describes rules for transforming a source tree into a result tree. The transformation is achieved by associating patterns with templates. Whenever a pattern matches elements in the source tree, a template is used to create part of the result tree. The result tree is separate from the source tree, and their structures can be completely different. In constructing the result tree, elements from the source tree can be filtered and reordered, and new elements can be added. A transformation expressed in XSLT is called a stylesheet in the case where XSLT is transforming into a display language, such as HTML or WML.
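The pattern/template idea described above can be sketched with a minimal, hypothetical stylesheet. The element name "name" and the HTML output are invented for illustration: every name element matched in the source tree becomes a list item in the result tree.

```xml
<!-- Hypothetical example: each <name> in the source tree is matched
     by the second template and rewritten as an HTML <li>. -->
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/">
    <ul><xsl:apply-templates select="//name"/></ul>
  </xsl:template>
  <xsl:template match="name">
    <li><xsl:value-of select="."/></li>
  </xsl:template>
</xsl:stylesheet>
```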
This example shows the structure of a stylesheet. Ellipses (...) indicate where attribute values or content have been omitted. Although this example shows one of each type of allowed element, stylesheets may contain zero or more of each of these elements.
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:import href="..."/>
  <xsl:include href="..."/>
  <xsl:output method="..."/>
  <xsl:strip-space elements="..."/>
  <xsl:preserve-space elements="..."/>
  <xsl:decimal-format name="..."/>
  <xsl:namespace-alias stylesheet-prefix="..." result-prefix="..."/>
  <xsl:key name="..." match="..." use="..."/>
  <xsl:attribute-set name="..."> ... </xsl:attribute-set>
  <xsl:variable name="...">...</xsl:variable>
  <xsl:param name="...">...</xsl:param>
  <xsl:template match="..."> ... </xsl:template>
  <xsl:template name="..."> ... </xsl:template>
</xsl:stylesheet>
The order in which the children of the xsl:stylesheet element occur is not significant, except that xsl:import elements must precede all other children. In addition, the xsl:stylesheet element may contain any element not from the XSLT namespace. Such elements can provide, for example,
- information about what to do with the result tree.
- information about how to obtain the source tree.
- metadata about the stylesheet.
- structured documentation for the stylesheet.
Created: Aug 13, 2000
Revised: Aug 13, 2000
Radiant energy, called "electromagnetic radiation", is released every time an electron slows down, changes its orbit around an atom or vibrates back and forth. Through these changes in its motion, the electron creates a changing electric field.
It is an observed fact that when an electric field is changing, a magnetic field appears. And when a magnetic field is changing, an electric field appears. This is how an electromagnetic wave works and how it is able to travel immense distances from faraway stars to our small solar neighborhood. The wave's changing electric field produces a changing magnetic field which in turn creates another changing electric field and on and on and on.
If you observe the electric and magnetic fields as the wave passes by, you will note that the strength of the fields goes up and down again and again. The distance in space between peaks in the field is called the "wavelength". The number of peaks that a non-moving observer counts per second as the wave passes by is called the "frequency".
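Wavelength and frequency are tied together by the speed at which the wave travels: c = wavelength × frequency. The little helper below (an illustration, not part of the original text) converts one to the other for light in vacuum.

```python
# For an electromagnetic wave in vacuum: c = wavelength * frequency,
# so frequency = c / wavelength. (Illustrative helper.)
C = 2.99792458e8  # speed of light in vacuum, m/s

def frequency_hz(wavelength_m: float) -> float:
    """Peaks counted per second by a stationary observer."""
    return C / wavelength_m

# Green visible light, 500 nm: about 6e14 peaks pass by every second.
print(frequency_hz(500e-9))
```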
The electric and magnetic fields in the wave point in directions that are 90 degrees apart and both fields point 90 degrees away from the direction the wave is moving.
The most familiar form of radiant energy is visible light.
I do not know how to explain to a 6-year-old how we are able to perceive colour. Does anyone know how this can be explained?
Well, depending on the depth you want to introduce her to, it can be difficult to explain to adults - much less children. You explained the basics well enough. Without going to the molecular mechanisms, here's a useful diagram:
Light will pass through the eye and Retina until it hits the Cones and Rods. That produces a reaction (which I'll address next), which then causes signals to either resume or cease (Rods are, ironically, shut off by light, not turned on). These signals pass through Ganglion cells and the Optic Nerve, and are interpreted by the brain. When some cells are switched on they will, like a 3-way traffic stop, prevent the information from other cells from being transmitted.
A simple way of explaining it might be that inside our eyes are millions of teeny-tiny molecules that act like light-switches, and the brightness and color of light determine which of the "light-switches" are on or off.
If you want to go further into the actual mechanisms (or just have the background knowledge for future reference) the next diagram shows the mechanism itself (and the original in full-size since the one shown is squished if you prefer):
The big things to note above is the change from cis-Retinal to trans-Retinal which occurs after a Photon is absorbed. This causes signaling molecules to go a bit wild, open ion channels, and the depolarization of the membrane propagates the charge down the cell - much like a neuron.
I have never quite understood this idea that an object has a colour depending on the light that hits it. Okay, I understand that in low light objects have a different colour because there is not much light hitting them, and that different objects absorb different wavelengths of light and therefore appear as different colours.
Well, let me stop you here. Low-light is a different situation than colored-light. Let's get the basics down:
Photons are absorbed by the electrons of the atoms that compose the molecules of an object. Whatever Wavelength of light is not absorbed by the electrons is reflected, and it's this Wavelength that we perceive as the color of the object as our Cones absorb the reflected light.
Photons can also be emitted when an electron moves to a lower-energy state. The Wavelength emitted by the electron is directly related to the difference between the High and Low energy states, as this diagram shows fairly well:
The emitted photons are the object's "Incandescence" - which is when an object produces a color of light by itself. The color you perceive is NOT going to change if the object emits incandescent light because the object is generating its own light. Neon signs are a great example: The gases being subjected to a current emit light, and will appear whatever color it's supposed to be whether or not it's a blue moon or sunset.
Photons which are reflected and not emitted - i.e. almost everything that doesn't have a power source - as I said above, are then absorbed by our Cones and our brain interprets the signals to produce a color.
The reason why objects that reflect light can change color is because not all environmental light is the same. Red objects will appear Black under Blue light because Blue light doesn't contain Red Wavelengths - there's nothing to reflect, so the object absorbs all of the available Wavelengths - the very definition of Black!
A lot of our color perception depends on ambient light, and most of the time - thanks to the Sun - that is a full-spectrum white light. Which brings me to answer the last bit of your question:
What I don't understand is that if I place a bar of gold a bar of silver side by side in the same lighting conditions they do have different colours, so therefore there must be something inherent in these object that give them different colours. What is that something?
Yes, there definitely is something inherent in both objects. That is: Their electron configurations absorb different chunks of the spectrum, and accordingly reflect different chunks of the spectrum. Although that's still a bit simplistic, since metals have some unique properties that other Elements do not. Their electrons exist in more of an "ocean" than around central atoms, but that's a whole other question.
As a fun bit of Trivia to impress your daughter when she's old enough, it's always a fun fact to know that the color Yellow is completely constructed in your head.
The human eye only has Rho (Red), Gamma (Green), and Beta (Blue) color receptors which have the following absorption pattern (from photo.net):
What everybody sees as "Yellow" is actually when both the Green and Red receptors are activated, at the Wavelength where their curves intersect above, which your brain interprets as "Yellow".
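Screens exploit exactly this trick: there is no yellow sub-pixel, only red, green, and blue emitters, and driving red and green together is perceived as yellow. A tiny additive-mixing sketch (the helper name is mine, not from the answer above):

```python
# Additive color mixing, as on a display: no yellow emitter exists,
# yet red + green light is perceived as yellow. (Illustrative helper.)

def additive_mix(c1, c2):
    """Mix two (R, G, B) colors additively, clamping each channel at 255."""
    return tuple(min(a + b, 255) for a, b in zip(c1, c2))

RED, GREEN = (255, 0, 0), (0, 255, 0)
print(additive_mix(RED, GREEN))  # (255, 255, 0), pure "yellow"
```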
Because your brain does a lot of processing, despite having only three color receptors we can perceive millions of colors (and shades/tones). Now for a while everywhere you look you'll be utterly amazed at what's going on, and you rightfully should be. ;-)
The Thermostat Hypothesis is that tropical clouds and thunderstorms actively regulate the temperature of the earth. This keeps the earth at an equilibrium temperature.
The stability of the earth’s temperature over time has been a long-standing climatological puzzle. The globe has maintained its temperature within roughly ±3% (including ice ages) for at least the last half a billion years, the period over which we can estimate the temperature. During the Holocene, temperatures have not varied by even ±1%. And during the ice ages, the temperature was generally similarly stable as well.
...some scientists have claimed that clouds have a positive feedback. Because of this, the areas where there are more clouds will end up warmer than areas with less clouds. This positive feedback is seen as the reason that clouds and warmth are correlated.
I and others take the opposite view of that correlation. I hold that the clouds are caused by the warmth, not that the warmth is caused by the clouds.
A thunderstorm can do more than just reduce the amount of surface warming. It can actually mechanically cool the surface to below the required initiation temperature. This allows it to actively maintain a fixed temperature in the region surrounding the thunderstorm.
When tropical temperatures are cool, tropical skies clear and the earth rapidly warms. But when the tropics heat up, cumulus and cumulonimbus put a limit on the warming. This system keeps the earth within a fairly narrow band of temperatures.
addend + addend = sum
minuend − subtrahend = difference
multiplicand × multiplier = product
dividend ÷ divisor = quotient
nth root: degree√radicand = root
In general, for non-zero integers m and n, it is said that m divides n — and, dually, that n is divisible by m — written:
m ∣ n
if there exists an integer k such that n = km. Thus, divisors can be negative as well as positive, although sometimes the term is restricted to positive divisors. (For example, there are six divisors of 4: 1, 2, 4, −1, −2, −4, but only the positive ones, 1, 2, and 4, would usually be mentioned.)
1 and −1 divide (are divisors of) every integer, every integer (and its negation) is a divisor of itself, and every integer is a divisor of 0, except by convention 0 itself (see also division by zero). Numbers divisible by 2 are called even and numbers not divisible by 2 are called odd.
1, −1, n and −n are known as the trivial divisors of n. A divisor of n that is not a trivial divisor is known as a non-trivial divisor. A number with at least one non-trivial divisor is known as a composite number, while the units −1 and 1 and prime numbers have no non-trivial divisors.
There are divisibility rules which allow one to recognize certain divisors of a number from the number's digits.
- 7 is a divisor of 42 because 7 × 6 = 42, so we can say 7 ∣ 42. It can also be said that 42 is divisible by 7, 42 is a multiple of 7, 7 divides 42, or 7 is a factor of 42.
- The non-trivial divisors of 6 are 2, −2, 3, −3.
- The positive divisors of 42 are 1, 2, 3, 6, 7, 14, 21, 42.
- The set of all positive divisors of 60, {1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, 60}, partially ordered by divisibility, has a Hasse diagram (figure omitted).
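The definitions above translate directly into a brute-force search: m divides n exactly when n % m == 0. A short illustrative sketch (not part of the article):

```python
# Positive divisors straight from the definition: m | n iff n % m == 0.
# Brute force is O(n); fine for illustration.

def divisors(n: int) -> list[int]:
    return [m for m in range(1, n + 1) if n % m == 0]

print(divisors(42))  # the eight positive divisors of 42
print(divisors(60))  # the twelve positive divisors of 60
```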
Further notions and facts
There are some elementary rules:
- If a ∣ b and b ∣ c, then a ∣ c, i.e. divisibility is a transitive relation.
- If a ∣ b and b ∣ a, then a = b or a = −b.
- If a ∣ b and c ∣ b, then it is NOT always true that (a + c) ∣ b (e.g. 2 ∣ 6 and 3 ∣ 6 but 5 does not divide 6). However, when a ∣ b and a ∣ c, then a ∣ (b + c) is true, as is a ∣ (b − c).
If p is a prime number and p ∣ ab, then p ∣ a or p ∣ b.
A positive divisor of n which is different from n is called a proper divisor or an aliquot part of n. A number that does not evenly divide n but leaves a remainder is called an aliquant part of n.
An integer whose only proper divisor is 1 is called a prime number. Equivalently, a prime number is a positive integer which has exactly two positive factors: 1 and itself.
The total number of positive divisors of n is a multiplicative function d(n), meaning that when two numbers m and n are relativelyly prime, then d(mn) = d(m) · d(n). For instance, d(42) = 8 = 2 × 2 × 2 = d(2) · d(3) · d(7); the eight divisors of 42 are 1, 2, 3, 6, 7, 14, 21 and 42. However the number of positive divisors is not a totally multiplicative function: if the two numbers m and n share a common divisor, then it might not be true that d(mn) = d(m) · d(n). The sum of the positive divisors of n is another multiplicative function σ(n) (e.g. σ(42) = 96 = 3 × 4 × 8 = σ(2) · σ(3) · σ(7) = 1 + 2 + 3 + 6 + 7 + 14 + 21 + 42). Both of these functions are examples of divisor functions.
If the prime factorization of n is given by

n = p1^v1 · p2^v2 ⋯ pk^vk

then the number of positive divisors of n is

d(n) = (v1 + 1)(v2 + 1) ⋯ (vk + 1),

and each of the divisors has the form

p1^u1 · p2^u2 ⋯ pk^uk

where 0 ≤ ui ≤ vi for each 1 ≤ i ≤ k.
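The divisor-count formula above is easy to check numerically. A sketch (factorizations here are supplied by hand for illustration):

```python
# d(n) from the prime factorization: if n = p1^v1 * ... * pk^vk,
# then d(n) = (v1 + 1) * ... * (vk + 1).

def num_divisors(factorization: dict[int, int]) -> int:
    """factorization maps each prime to its exponent, e.g. 60 -> {2: 2, 3: 1, 5: 1}."""
    result = 1
    for exponent in factorization.values():
        result *= exponent + 1
    return result

print(num_divisors({2: 1, 3: 1, 7: 1}))  # 42 = 2 * 3 * 7   -> 8 divisors
print(num_divisors({2: 2, 3: 1, 5: 1}))  # 60 = 2^2 * 3 * 5 -> 12 divisors
```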
For every natural number n, d(n) < 2√n.
Also,

d(1) + d(2) + ⋯ + d(n) ≈ n ln n + (2γ − 1) n,

where γ is the Euler–Mascheroni constant. One interpretation of this result is that a randomly chosen positive integer n has an expected number of divisors of about ln n.
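This average order is straightforward to verify empirically, using the identity that the sum of d(n) up to N equals the number of lattice points under the hyperbola, i.e. the sum of floor(N/k). The check below is illustrative:

```python
import math

# Empirical check: (d(1) + ... + d(N)) / N should approach ln N + 2*gamma - 1.
GAMMA = 0.5772156649  # Euler-Mascheroni constant (truncated)

def mean_divisor_count(N: int) -> float:
    # sum_{n<=N} d(n) = sum_{k<=N} floor(N/k): each k divides floor(N/k)
    # of the integers up to N.
    return sum(N // k for k in range(1, N + 1)) / N

N = 10_000
print(mean_divisor_count(N))        # empirical average
print(math.log(N) + 2 * GAMMA - 1)  # asymptotic prediction; very close
```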
In abstract algebra
The relation of divisibility turns the set of non-negative integers into a partially ordered set, in fact into a complete distributive lattice. The largest element of this lattice is 0 and the smallest is 1. The meet operation ∧ is given by the greatest common divisor and the join operation ∨ by the least common multiple. This lattice is isomorphic to the dual of the lattice of subgroups of the infinite cyclic group Z.
See also
- Arithmetic functions
- Divisibility rule
- Divisor function
- Euclid's algorithm
- Fraction (mathematics)
- Table of divisors — A table of prime and non-prime divisors for 1–1000
- Table of prime factors — A table of prime factors for 1–1000