The following information has been adapted from National Aeronautics and Space Administration. 1978. The Nimbus 7 Users' Guide. C. R. Madrid, editor. Goddard Space Flight Center.
Launched on 25 October 1978 from Vandenberg Air Force Base, California, the Nimbus-7 spacecraft was the last in a series of operational weather satellites operated by the US National Oceanic and Atmospheric Administration (NOAA) and the US National Aeronautics and Space Administration (NASA). Nimbus-7 was placed in a sun-synchronous orbit at an altitude of 955 km. Equatorial crossings are local noon for ascending node and local midnight for descending node. Spacecraft inclination is 99.1 degrees, with a maximum poleward latitude of 80.77 degrees. Orbital period is 104.16 minutes, and consecutive equator crossings are separated by 26.1 degrees longitude.
The spacecraft has three major structures that house power, attitude control and information flow components. The spacecraft's base is a hollow, torus-shaped sensor mount containing electronics equipment and battery modules. The lower surface of the torus provides mounting space for sensors and antennas. Larger experiments are held by a box beam structure mounted in the center of the torus. A control housing unit is connected to the top of the sensor mount by a tripod truss structure. Above the control housing are sun sensors, horizon scanners and a command antenna. Duplicate solar paddles complete the configuration, which is similar to an ocean buoy's.
The Nimbus-7 weighs 965 kilograms, is 3.04 meters tall, 1.52 meters in diameter at the base and 3.96 meters wide with solar paddles fully extended.
The spacecraft supported the following seven experiments and the Temperature Humidity Infrared Radiometer (THIR) subsystem:
Document Type: Platform Document
Revision Date: December 1995
The satellite was placed in a 955 km sun-synchronous polar orbit on 25 October 1978. Its repeat cycle allowed for global coverage every six days, or every 83 orbits. Because of power limitations aboard the spacecraft, sensors were not run simultaneously, but were scheduled on a priority basis.
Seven Nimbus Experiment Teams (NETs), one for each NASA-provided sensor program, plus the United Kingdom team for the Stratospheric and Mesospheric Sounder experiment, met at frequent intervals from the inception of each committee through at least one year post-launch. Each team consisted of five to ten members and was supported by applications scientists and data processing support personnel. Each NET was also supported by the Nimbus-7 data applications system manager or an appointed representative.
NET members advised on all aspects of their respective sensor programs and performed related studies and tasks during pre- and post-launch phases. They determined the principal research and development requirements of each experiment.
The Nimbus-7 platform allowed a number of experiments related to pollution control, oceanography, and meteorology to be conducted. Mission objectives were:
In addition, the NETs defined the following goals:
The Nimbus-7 was maintained in a near polar, sun-synchronous orbit at an altitude of 955 km. Equatorial crossings are local noon for ascending and local midnight for descending nodes. Spacecraft inclination is 99.1 degrees, with a maximum poleward latitude of 80.77 degrees. The orbital period is 104.16 minutes. Equator crossings on consecutive orbits are separated by 26.1 degrees longitude.
The Nimbus-7 observatory provides global coverage every six days, or every 83 orbits.
The Nimbus-7's attitude control subsystem provides stabilization about the spacecraft's roll, pitch and yaw axes and control of solar paddle orientation, maintaining the paddles nearly perpendicular to the nominal sunline.
Consisting of four attitude control loops and associated switching logic, telemetry and test modes, electrical manifolding, and thermal environmental control, this system maintains spacecraft alignment with the local orbital reference axes to within 0.7 degrees about the pitch axis and one degree about the roll and yaw axes. The system keeps the instantaneous angular rate changes about any axis to less than 0.01 degree per second.
The three-axis ACS uses horizon scanners for roll and pitch attitude error sensing. The rate gyros sense yaw rate and, in a gyro compassing mode, sense yaw attitude. A torquing system uses a combination of reaction jets to provide spacecraft momentum control and large control torques when required; flywheels are utilized for fine control and residual momentum storage.
The communications and data handling subsystem, which manages all information flow for the Nimbus-7 platform, is composed of the S-band communications system and tape recorder subsystem. The S-band communication system includes the S-band command and telemetry system, the data processing system and the command clock. The S-band command and telemetry system consists of two S-band transponders, a command and data interface unit, four earth view antennas, a sky view antenna, and two S-band transmitters (2211 MHz). Commands are transmitted to the observatory by pulse code modulation, phase-shift keying/frequency modulation/phase modulation of the assigned 2093.5 MHz S-band uplink carrier. Stored command capability provides for command execution at predetermined times.
Please see Telemetry and Ranging.
A data handling and processing complex established at the Goddard Space Flight Center, designated the Nimbus Observation Processing System, distributed payload data (except for that pertaining to Stratospheric and Mesospheric Sounder) among several facilities for processing, then converted the results into data products. Responsibilities of the Nimbus Observation Processing System were:
The Nimbus-7 Observatory's equatorial crossings are local noon for ascending node and local midnight for descending node.
The most recent El Niño event began in the spring months of 1997. Instrumentation placed on buoys in the Pacific Ocean after the 1982-1983 El Niño began recording abnormally high temperatures off the coast of Peru. Over the next couple of months, the strength of these anomalies grew. By October 1997 the anomalies had grown so large that this El Niño was already the strongest in the 50+ years of accurate data gathering.
The image below displays the Sea Surface Temperature (SST) anomalies in degrees Celsius for the middle of September 1997. By this time, the classic El Niño pattern had almost fully developed, with maxima above +4 degrees Celsius.
Image by: CPC ENSO Main Page
The Photonic Research Institute (PRI) of the National Institute of Advanced Industrial Science and Technology (AIST), an independent administrative institution, has found that introducing a nano-structured layer (i-layer), in which the organic semiconductors form a three-dimensional p-n junction at the molecular level, into the p-n junction interface of an organic thin-film solar cell to construct a p-i-n junction expands the photovoltaic conversion layer and enhances the efficiency of light utilization. With the p-i-n type organic thin-film solar cell, a world-top-level energy conversion efficiency of 4% has been achieved under simulated solar radiation of AM1.5G. This achievement is expected to accelerate the practical implementation of plastic film solar cells, which are characterized by light weight and flexibility.
The organic thin-film solar cell, whose practical use as a low-cost, flexible solar cell is being pursued, is a solid-state solar cell based on the same principle as the widely used silicon solar cell, and it has an R&D history of more than 30 years. However, its energy conversion efficiency has been rather poor, and improving that efficiency is the most significant hurdle to practical application.
Solid-state solar cells, whether organic or inorganic, are based on the photovoltaic effect of the p-n junction. Because the photovoltaic conversion layer at a p-n junction in an organic semiconductor is only a few nanometers thick, the efficiency of light utilization in a conventional simple layered cell is so poor that adequate photocurrent cannot be obtained. For this reason, expanding the photovoltaic conversion layer to improve light utilization has been regarded as the key factor for improving the energy conversion efficiency of organic thin-film solar cells.
The PRI-AIST has found that introducing a nano-structured layer (i-layer), in which the organic semiconductors form a three-dimensional p-n junction at the molecular level, into the p-n junction interface of an organic thin-film solar cell to construct multiple p-n junctions expands the photovoltaic conversion layer.
Fullerene (C60) and zinc phthalocyanine (ZnPc) are used as the n- and p-type organic semiconductors, respectively. Preparing a p-i-n junction organic thin-film solar cell by introducing a nano-structured layer of mixed ZnPc and C60 (the ZnPc:C60 i-layer) into the p-n junction interface has boosted the energy conversion efficiency to about 4%. This value is at the top level among organic thin-film solar cells as evaluated under simulated solar radiation of AM1.5G. Because the photovoltaic layer is still thin, the newly developed cells leave much of the incident light untapped; further enhancement of light utilization by building tandem structures is therefore expected to improve the energy conversion efficiency substantially. This prospect for higher conversion efficiency is expected to markedly accelerate the realization of plastic film solar cells.
Photo. Currently predominant silicon solar cell (left) and plastic film solar cell (right)
Provided by Allen Browne, November 1999.
Here are some common mistakes newbies make with Nulls. If you are unclear about Nulls, first read Nulls: Do I need them?.
If you enter criteria under a field in a query, the query returns only matching records; records where that field is Null are excluded.
For example, say you have a table of company names and addresses. You want two queries: one that gives you the local companies, and the other that gives you all the rest. In the Criteria row under the City field of the first query, you type:
"Springfield"
and in the second query:
Not "Springfield"
Wrong! Neither query includes the records where City is Null.
Specify Is Null. For the second query above to meet your design goal of "all the rest", the criteria needs to be:
Is Null Or Not "Springfield"
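Outside Access, the same three-valued logic can be sketched in a few lines. The Java class below is purely illustrative (the class, method, and city names are invented): a Boolean that may be null stands in for SQL's Unknown, showing why the criteria Not "Springfield" silently drop the Null cities while Is Null Or Not "Springfield" keeps them.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class NullCriteriaDemo {
    // A comparison against a Null value is neither true nor false;
    // Boolean null stands in for that Unknown result.
    static Boolean notEqual(String value, String literal) {
        if (value == null) return null;   // Null propagates
        return !value.equals(literal);
    }

    // Criteria: Not "Springfield" -- keeps a row only when the test is
    // actually True, so rows with a Null city are silently dropped.
    static List<String> notSpringfield(List<String> cities) {
        List<String> kept = new ArrayList<>();
        for (String city : cities) {
            if (Boolean.TRUE.equals(notEqual(city, "Springfield"))) kept.add(city);
        }
        return kept;
    }

    // Criteria: Is Null Or Not "Springfield" -- the corrected version.
    static List<String> isNullOrNotSpringfield(List<String> cities) {
        List<String> kept = new ArrayList<>();
        for (String city : cities) {
            if (city == null || !city.equals("Springfield")) kept.add(city);
        }
        return kept;
    }

    public static void main(String[] args) {
        List<String> cities = Arrays.asList("Springfield", "Shelbyville", null);
        System.out.println(notSpringfield(cities));          // [Shelbyville]
        System.out.println(isNullOrNotSpringfield(cities));  // [Shelbyville, null]
    }
}
```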
Note: Data Definition Language (DDL) queries treat nulls differently. For example, the nulls are counted in this kind of query:
ALTER TABLE Table1 ADD CONSTRAINT chk1 CHECK (99 < (SELECT Count(*) FROM Table2 WHERE Table2.State <> 'TX'));
Maths involving a Null usually results in Null. For example, newbies sometimes enter an expression such as this in the ControlSource property of a text box, to display the amount still payable:
=[AmountDue] - [AmountPaid]
The trouble is that if nothing has been paid, AmountPaid is Null, and so this text box displays nothing at all.
Use the Nz() function to specify a value for Null:
= Nz([AmountDue], 0) - Nz([AmountPaid], 0)
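The same substitute-a-default guard can be written in any language with a nullable type. Here is a minimal Java analogue of Nz() (the nz helper and the amounts are invented for illustration):

```java
public class NzDemo {
    // Analogue of Access's Nz(): return the fallback when the value is
    // missing. The boxed Double stands in for a field that may be Null.
    static double nz(Double value, double fallback) {
        return value == null ? fallback : value;
    }

    public static void main(String[] args) {
        Double amountDue = 250.0;
        Double amountPaid = null;   // nothing has been paid yet

        // Subtracting the raw values would fail on the null, much as the
        // Access expression displays nothing at all; nz() supplies a zero.
        double owing = nz(amountDue, 0) - nz(amountPaid, 0);
        System.out.println(owing);  // 250.0
    }
}
```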
While Access blocks nulls in primary keys, it permits nulls in foreign keys. In most cases, you should explicitly block this possibility to prevent orphaned records.
For a typical Invoice table, the line items of the invoice are stored in an InvoiceDetail table, joined to the Invoice table by an InvoiceID. You create a relationship between Invoice.InvoiceID and InvoiceDetail.InvoiceID, with Referential Integrity enforced. It's not enough!
Unless you set the Required property of the InvoiceID field to Yes in the InvoiceDetail table, Access permits Nulls. Most often this happens when a user begins adding line items to the subform without first creating the invoice itself in the main form. Since these records don't match any record in the main form, these orphaned records are never displayed again. The user is convinced your program lost them, though they are still there in the table.
Always set the Required property of foreign key fields to Yes in table design view, unless you expressly want Nulls in the foreign key.
In Visual Basic, the only data type that can contain Null is the Variant. Whenever you assign the value of a field to a non-variant, you must consider the possibility that the field may be null. Can you see what could go wrong with this code in a form's module?
Dim strName As String
Dim lngID As Long
strName = Me.MiddleName
lngID = Me.ClientID
When the MiddleName field contains Null, the attempt to assign the Null to a string generates an error.
Similarly the assignment of the ClientID value to a numeric variable may cause an error. Even if ClientID is the primary key, the code is not safe: the primary key contains Null at a new record.
(a) Use a Variant data type if you need to work with nulls.
(b) Use the Nz() function to specify a value to use for Null. For example:
strName = Nz(Me.MiddleName, "")
lngID = Nz(Me.ClientID, 0)
If [Surname] = Null Then
is a nonsense that will never be True. Even if the surname is Null, VBA thinks you asked:
Does Unknown equal Unknown?
and always responds "How do I know whether your unknowns are equal?" This is Null propagation again: the result is neither True nor False, but Null.
Use the IsNull() function:
If IsNull([Surname]) Then
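The rule behind this, that a comparison involving Null yields Null rather than True or False, can be modeled compactly. In the illustrative Java sketch below (names invented), Boolean null plays the part of VBA's Null:

```java
public class NullCompareDemo {
    // "Does Unknown equal Smith?" has no answer: the result is Unknown.
    static Boolean eq(String a, String b) {
        if (a == null || b == null) return null;  // Null propagation
        return a.equals(b);
    }

    // The IsNull() test, by contrast, always yields a plain true/false.
    static boolean isNull(String s) {
        return s == null;
    }

    public static void main(String[] args) {
        String surname = null;
        System.out.println(eq(surname, "Smith"));  // null: not true, not false
        System.out.println(isNull(surname));       // true
    }
}
```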
Do these two constructs do the same job?
(a)
If [Surname] = "Smith" Then
    MsgBox "It's a Smith"
Else
    MsgBox "It's not a Smith"
End If

(b)
If [Surname] <> "Smith" Then
    MsgBox "It's not a Smith"
Else
    MsgBox "It's a Smith"
End If
When the Surname is Null, these two pieces of code contradict each other: in both cases the If test fails, so the Else clause executes, resulting in contradictory messages.
(a) Handle all three outcomes of a comparison - True, False, and Null:
If [Surname] = "Smith" Then MsgBox "It's a Smith" ElseIf [Surname] <> "Smith" Then MsgBox "It's not a Smith" Else MsgBox "We don't know if it's a Smith" End If
(b) In some cases, the Nz() function lets you handle two cases together. For example, to treat a Null and a zero-length string in the same way:
If Len(Nz([Surname],"")) = 0 Then
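The Len(Nz(...)) idiom folds the Null case and the zero-length case into a single test. A hypothetical Java equivalent of that check:

```java
public class BlankDemo {
    // True when the surname is missing (null) or zero-length,
    // mirroring Len(Nz([Surname], "")) = 0.
    static boolean isBlank(String surname) {
        return surname == null || surname.length() == 0;
    }

    public static void main(String[] args) {
        System.out.println(isBlank(null));     // true
        System.out.println(isBlank(""));       // true
        System.out.println(isBlank("Smith"));  // false
    }
}
```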
More Evidence That Intelligence Is Largely Inherited: Researchers Find That Genes Determine Brain's Processing Speed
ScienceDaily (Mar. 18, 2009). They say a picture tells a thousand stories, but can it also tell how smart you are? Actually, say UCLA researchers, it can.
In a study published recently in the Journal of Neuroscience, UCLA neurology professor Paul Thompson and colleagues used a new type of brain-imaging scanner to show that intelligence is strongly influenced by the quality of the brain's axons, or wiring that sends signals throughout the brain. The faster the signaling, the faster the brain processes information. And since the integrity of the brain's wiring is influenced by genes, the genes we inherit play a far greater role in intelligence than was previously thought.
Genes appear to influence intelligence by determining how well nerve axons are encased in myelin, the fatty sheath of "insulation" that coats our axons and allows for fast signaling bursts in our brains. The thicker the myelin, the faster the nerve impulses.
Europe's Unexpected Immigration Problem - Wildlife!
Animals and plants brought to Europe from other parts of the world are a bigger-than-expected threat to health and the environment, costing at least €12 billion a year, a study said on Thursday (21 February).
More than 10,000 'alien' species have gained a foothold in Europe, from Asian tiger mosquitoes to North American ragweed, and at least 1,500 are known to be harmful, the European Environment Agency (EEA) said.
"In many areas, ecosystems are weakened by pollution, climate change and fragmentation. Alien species invasions are a growing pressure on the natural world which are extremely difficult to reverse," said Jacqueline McGlade, head of the EEA.
Introduced species that suddenly thrive in a new home in Europe, including parakeets from Africa or water hyacinth from the Amazon, were estimated to cost Europe at least €12 billion a year, according to the 118-page study.
"Our number is an underestimate," said Piero Genovesi, a lead author at the Italian Institute for Environmental Protection and Research, adding that it omitted the impacts of many species such as tropical 'killer algae' in the Mediterranean.
"The problem has exploded in the last 100 years," he said. Europe had the most data but the problem was worsening worldwide, he said. And more travel, trade and climate change were likely to aggravate the invasions.
Common Ragweed photo via Shutterstock.
Read more at EurActiv.
Contact: Patrick Farrell
Boston University Medical Center
Caption: This is an image of Mercury's tail obtained from combining a full day of data from a camera aboard the STEREO-A spacecraft. The reflected sunlight off the planet's surface results in a type of over-exposure that causes Mercury to appear much larger than its actual size. The tail-like structure extending anti-sunward from the planet is visible over several days and spans an angular size exceeding that of a full Moon in the night sky.
Usage Restrictions: None
Related news release: Mercury found to have comet-like appearance by satellites looking at sun
This post explains Java's Stack implementation. A stack follows a last-in, first-out (LIFO) policy: elements are inserted one by one in sequence and can later be retrieved in reverse order. The push() and pop() methods insert elements onto, and remove elements from, the Stack.
Code is given below to show the implementation of a LIFO stack: the elements to be inserted are pushed onto the stack one by one, and then retrieved in reverse order.
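The original listing did not survive in this copy of the post, so the sketch below is a stand-in rather than the author's code (class and method names are invented). It shows the push()/pop() pattern with java.util.Stack: elements pushed one by one come back out in reverse order.

```java
import java.util.Arrays;
import java.util.Stack;

public class StackDemo {
    // Push every value, then pop them all; the last value in is the
    // first value out (LIFO).
    static int[] pushAndPopAll(int[] values) {
        Stack<Integer> stack = new Stack<>();
        for (int v : values) {
            stack.push(v);                // insert in sequence
        }
        int[] reversed = new int[values.length];
        for (int i = 0; i < reversed.length; i++) {
            reversed[i] = stack.pop();    // retrieve in reverse order
        }
        return reversed;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(pushAndPopAll(new int[] {1, 2, 3, 4})));
        // [4, 3, 2, 1]
    }
}
```

For new code, java.util.ArrayDeque is generally preferred over the legacy Stack class, but Stack is the class this post discusses.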
The heap is used to store new objects created by Java. The stack is used to store primitive data types like int and double when they are declared locally; primitives are stored on the heap when they are declared globally, as fields of an object. Whenever a Java method is called, its local variables are pushed onto the stack, and the stack pointer is wound back when the method call completes. In a multithreaded application there is only one heap, but each thread has its own stack. So do not declare your data globally unless it needs to be shared.
Vertebrate mitochondrial DNA (mtDNA)

The mtDNA genome is a small, circular molecule, about 16,000 to 18,000 base pairs in circumference in most vertebrates. The genome comprises 13 protein-coding regions, rRNA genes, a replication control region, and 22 tRNA genes. The order of genes is broadly conserved across vertebrates. There are no introns: splicing out of tRNAs produces mRNA templates, and the terminal "A" of some stop codons is produced as part of poly-adenylation.

mtDNA is self-replicating with the aid of nucDNA-encoded polymerases. It contributes to cell respiratory systems in the Cytochrome Oxidase, ATP and NADH systems. The vertebrate mtDNA genetic code differs from the "Universal" code in several respects.

mtDNA is inherited solely through the maternal egg cytoplasm, the paternal sperm mitochondria making no contribution. This, plus the absence of genetic recombination, allows the mtDNA molecule to be passed on intact from mother to daughter. It has therefore found great application in evolutionary and population biology as a molecular marker.
Nearly one quarter of the world’s coastlines are dominated by the highly productive brown algae we know as kelp. Kelps serve a myriad of roles for humans – from providing food to sheltering shorelines from stormy waves. The startling diversity of life that dwells in these kelp forests provides joy to avid divers, fishers, and diners around the planet.
But how will these kelp forests fare in the face of climate change?
To answer this question, we are bringing together a team of international scientists who have studied these systems for decades. With data from the last century of surveys, experiments, and satellite measurements in hand, we will create a set of predictions as to how kelp forests will change in the future. These predictions will help guide the future study of kelp forests in a changing ocean.
Whenever you are presented with a problem of modeling some numbers in conceptual space, the first thing you have to figure out before you write a single line of behavioral code is what kind of data structures you are going to use. Going all the way back to the beginning of this blog, I've emphasized the importance of considering the efficiency of your design and the effect that it has on the Big-O performance of your program. Thinking about proper data structures can buy you a lot of speed, and it can also make it really easy to visualize your program in small chunks as the complexity increases.
So what's a data structure? The first thing programmers learn is how to use variables for individual chunks of information, like this:
int x = 3;
String str = "Hello world.";

(Technically, of course, a String object in Java is a whole bunch of characters, which makes it a data structure in itself. But the nice thing about object-oriented programming is that you don't have to think about it if you want to.)
To understand data structures, consider an array. An array is one of the first slightly more advanced concepts that a beginning programmer will run into. Instead of storing just one integer, it can store several. For example, here's a simple representation of part of the fibonacci sequence:
int[] fib = new int[10];
fib[0] = 1;
fib[1] = 1;
fib[2] = 2;
fib[3] = 3;
fib[4] = 5;
fib[5] = 8;
fib[6] = 13;
fib[7] = 21;
fib[8] = 34;
fib[9] = 55;

When you create a single "int," you're asking the program to set aside a chunk of space in memory, large enough to hold one number. When you create an array like this, you're asking the program instead to set aside a bigger chunk of memory ten times that size, plus (for some languages) a little bit of extra information about size constraints and such.
But arrays can be wasteful. What if you want to set aside space that sometimes houses a hundred numbers, and sometimes houses just a few? You could create an array of size 100, but most of the time that space would be wasted. That's when you want to use a linked list, where you ask for new memory only at the moment that you actually need it.
I'm not dedicating this whole post to the implementation fundamentals of lists, but interested beginners should go check out the Wikipedia article to find out how this works. (Sidebar: While relying on Wikipedia for information about controversial topics is often unwise, most of the technical topics that are covered are really good.)
Besides linked lists, there are lots of other data structures that you can use depending on your situation:
- A tree (which may or may not be binary) will hierarchically organize information for you, much like the folder structure on your computer does, shortening the search time as long as you know where you are going.
- A hash table or map is a structure which will find a value associated with a key, usually very quickly. An example would be a dictionary search: you supply a word, and the program would retrieve a definition.
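To make the dictionary example concrete (the words and definitions here are invented), a Java HashMap returns the value stored under a key in expected constant time:

```java
import java.util.HashMap;
import java.util.Map;

public class DictionaryDemo {
    static final Map<String, String> DICTIONARY = new HashMap<>();
    static {
        DICTIONARY.put("stack", "a last-in, first-out collection");
        DICTIONARY.put("queue", "a first-in, first-out collection");
    }

    // Supply a word (the key) and retrieve its definition (the value).
    static String define(String word) {
        return DICTIONARY.getOrDefault(word, "no definition found");
    }

    public static void main(String[] args) {
        System.out.println(define("queue"));  // a first-in, first-out collection
        System.out.println(define("heap"));   // no definition found
    }
}
```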
Understanding what purpose the various structures serve, and when to use each one, is a very key skill in programming interviews. Often when you are asked "How would you solve this problem?" the best answer is not to blurt the first notion that comes into your head, but to start applying data structures to model the problem space: lists (or specifically, stacks or queues), trees (binary or otherwise), tables (sometimes you can just assume the existence of a database, which is centered around associative tables).
When I hear a problem that lends itself to this, I usually make a beeline to the whiteboard and start thinking out loud: "You're asking about a list of items, so let's describe what's in an item first... then build a linked list out of items..." Then I'll be either writing code to illustrate what I'm thinking, or (if the interview is shorter) just sketch out diagrams so that the interviewer understands the description and will probably accept that I know how to implement it.
Software is built a piece at a time. If you start explaining how you visualize the problem in your head, you can give a much better insight into how you think than if you just start solving the problem directly. In fact, if you start off strong with this approach but then go off on the wrong track, often the interviewer will be eager to guide you towards his concept of the solution because he's being carried along with your thought process. This often changes the dynamic of the interview entirely. Instead of being a room with an interrogator and a suspect, the interviewer may start thinking of himself as your ally and not your judge. And that's exactly where you want to be when you're looking for work.
Digression's over. Next time I'll illustrate this when I get back to decoding HackMaster tables.
Project: SAFARI 2000
The SAFARI 2000 project was an international science initiative to study the linkages between land and atmosphere processes in the southern African region. In addition, SAFARI 2000 examined the relationship of biogenic, pyrogenic, and anthropogenic emissions and the consequences of their deposition to the functioning of the biogeophysical and biogeochemical systems of southern Africa. This initiative, which was conducted in 1999-2001, was built around a number of ongoing, already-funded activities by NASA, the international community, and African nations in the southern African region.
Data Set: SAFARI 2000 PAR Measurements, Kalahari Transect, Botswana, Wet Season 2000
Ceptometer data from a Decagon AccuPAR (Model PAR-80) were collected at four sites in Botswana during the SAFARI 2000 Kalahari Transect Wet Season Campaign (March 2000). These sites include Maun, Pandamatenga, Ghanzi/Okwa River Crossing, and Tshane. The measurements were taken near stake flags placed at 25 m intervals along three parallel 750 m transects located 250 m apart. The ceptometer contains 80 photosynthetically active radiation (PAR) sensors fixed at 1 cm intervals along a wand and connected to a control box.

The sampling protocol followed in general was to first measure above-canopy incident PAR, then canopy-reflected PAR, then above-canopy incident PAR again, and finally, canopy-transmitted PAR. The data can be used to compute fraction of photosynthetically active radiation (FPAR), intercepted PAR, leaf area index (LAI), and gap fraction. These data currently exist in raw format, but can be processed using manufacturer-provided software to estimate the derived products.

The data are stored as ASCII files, in csv format, organized by site, with one file per transect. Incident, transmitted, and reflected PAR radiation values for a transect and site are in the same file. The type of measurement for each data point is known due to comments in the data files. For the Maun and Pandamatenga sites, there is an additional file containing above-canopy PAR irradiance. The PAR data units are micromols m-2 s-1, and the time is in Local Time. There is also a readme file, in txt format, for each site.
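As a rough illustration of how one derived product falls out of the raw readings (this is a simplified formula, not the manufacturer-provided software's algorithm, and the readings below are hypothetical), FPAR can be approximated from the above-canopy incident, canopy-reflected, and below-canopy transmitted PAR:

```java
public class FparSketch {
    // Simplified estimate: the fraction of incident PAR that is neither
    // transmitted through the canopy nor reflected off its top:
    //   FPAR ~ 1 - transmitted/incident - reflected/incident
    static double fpar(double incident, double transmitted, double reflected) {
        return 1.0 - transmitted / incident - reflected / incident;
    }

    public static void main(String[] args) {
        // Hypothetical readings in micromol m-2 s-1
        System.out.println(fpar(1600.0, 800.0, 400.0));  // 0.25
    }
}
```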
Detailed Documentation: Data Set Reference Document
Privette, J. L., Y. Tian, Y. Wang, R. J. Scholes, and R. B. Myneni. 2005. SAFARI 2000 PAR Measurements, Kalahari Transect, Botswana, Wet Season 2000. Data set. Available on-line [http://daac.ornl.gov/] from Oak Ridge National Laboratory Distributed Active Archive Center, Oak Ridge, Tennessee, U.S.A. doi:10.3334/ORNLDAAC/794
Download Data Set Files (139.4 KBytes in 14 Files)
All Data Taken At Latitude: 18.64S To 24.16S, Longitude: 25.50E To 21.70E
Currently, the Northern Eurasia Earth Science Partnership Initiative (NEESPI) includes over 120 international projects involving more than 200 scientific institutions from over 30 countries. The program involves national government agencies, academia and private organizations in the U.S., Europe, Japan and Northern Eurasia (Gutman 2007). The NEESPI science is directed at evaluating the role of anthropogenic impacts on Northern Eurasian ecosystems and their hemispheric-scale interactions, and at assessing how future human actions would affect the global climate and ecosystems of the region. Projections of the consequences of global changes for the regional environment in Northern Eurasia are also central to the scientific foci of this initiative. The Land-Cover/Land-Use Change (LCLUC) Program is an interdisciplinary science program in the Earth Science Division of the Science Mission Directorate supporting several regional initiatives, including NEESPI. The NASA LCLUC currently funds over 30 NEESPI projects. The NEESPI program links to several international projects, such as GLP, iLEAPS and others, under major international programs: IGBP and WCRP. The NEESPI covers a large geographic domain, which includes the former Soviet Union, northern China, Mongolia, Scandinavia and Eastern Europe. This contribution provides a short description of the ongoing NEESPI studies in the non-boreal European sub-region of the NEESPI geographic domain that are supported by the NASA LCLUC program. More information on the projects can be found at http://neespi.org and http://lcluc.hq.nasa.gov.
Our most reliable engine of change has been increased understanding of the physical world. First it was Galilean dynamics and Newtonian gravity, then electromagnetism, later quantum mechanics and relativity. In each case, new observations revealed new physics, physics that went beyond the standard models—physics that led to new technologies and to new ways of looking at the universe. Often those advances were the result of new measurement techniques. The Greeks never found artificial ways of extending their senses, which hobbled their protoscience. But ever since Tycho Brahe, a man with a nose for instrumentation, better measurements have played a key role in Western science.
We can expect significantly improved observations in many areas over the next decade. Some of that is due to sophisticated, expensive, and downright awesome new machines. The Large Hadron Collider should begin producing data next year, and maybe even information. We can scan the heavens for the results of natural experiments that you wouldn't want to try in your backyard—events that shatter suns and devour galaxies—and we're getting better at that. That means devices like the 30-meter telescope under development by a Caltech-led consortium, or the 100-meter OWL (Overwhelmingly Large Telescope) under consideration by the European Southern Observatory. Those telescopes will actively correct for the atmospheric fluctuations which make stars twinkle—but that's almost mundane, considering that we have a neutrino telescope at the bottom of the Mediterranean and another buried deep in the Antarctic ice. We have the world's first real gravitational telescope (LIGO, the Laser Interferometer Gravitational-Wave Observatory) running now, and planned improvements should increase its sensitivity enough to study cosmic fender-benders in the neighborhood, as (for example) when two black holes collide. An underground telescope, of course….
There's no iron rule ensuring that revolutionary discoveries must cost an arm and a leg: ingenious experimentalists are testing quantum mechanics and gravity in table-top experiments, as well. They'll find surprises. When you think about it, even historians and archaeologists have a chance of shaking gold out of the physics-tree: we know the exact date of the Crab Nebula supernova from old Chinese records, and with a little luck we'll find some cuneiform tablets that give us some other astrophysical clue, as well as the real story about the battle of Kadesh…
We have a lot of all-too-theoretical physics underway, but there's a widespread suspicion that the key shortage is data, not mathematics. The universe may not be stranger than we can imagine but it's entirely possible that it's stranger than we have imagined thus far. We have string theory, but what Bikini test has it brought us? Experiments led the way in the past and they will lead the way again.
We will probably discover new physics in the next generation, and there's a good chance that the world will, as a consequence, become unimaginably different. For better or worse.
This class allows Gambas programs to communicate using UDP sockets. It can be used as a server or as a client, since each data fragment sent or received is identified by its host IP and port. This class inherits from the Stream class, so you can use standard stream methods to read, write and close the socket.
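The class above is Gambas-specific, but the datagram behaviour it describes, with each fragment tagged by the sender's host IP and port, is plain UDP. As an illustration only, here is a minimal sketch of the same idea using Python's standard socket module (not Gambas code):

```python
import socket

# UDP datagram semantics: every received datagram carries the sender's
# (host, port) pair, so one socket can serve many peers.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))        # let the OS pick a free port
server.settimeout(5)
host, port = server.getsockname()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"hello", (host, port))

data, sender = server.recvfrom(1024)  # sender identifies the peer
print(data, sender[0])
server.close()
client.close()
```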
The vast majority of scientists agree that human activity has significantly increased greenhouse gases in the atmosphere—most dramatically since the 1970s. In February 2007 the Intergovernmental Panel on Climate Change found that global warming is "unequivocal" and that human-produced carbon dioxide and other greenhouse gases are chiefly to blame, to a certainty of more than 90 percent. Yet global warming skeptics and ill-informed elected officials continue to dismiss this broad scientific consensus.
Tectonic faults are sites of localized motion, both at the Earth's surface and within its dynamic interior. Faulting is directly linked to a wide range of global phenomena, including long-term climate change and the evolution of hominids, the opening and closure of oceans, and the rise and fall of mountain ranges. In Tectonic Faults, scientists from a variety of disciplines explore the connections between faulting and the processes of the Earth's atmosphere, surface, and interior.
Children ask, "Why is the sky blue?" but the question also puzzled Plato, Leonardo, and even Newton, who unlocked so many other secrets. The search for an answer continued for centuries; in 1862 Sir John Herschel listed the color and polarization of sky light as "the two great standing enigmas of meteorology." In Sky in a Bottle, Peter Pesic takes us on a quest to the heart of this mystery, tracing the various attempts of science, history, and art to solve it.
Seen from space, the earth is blue. That luminous blueness is water—the Atlantic, Pacific, Indian, Arctic, and Antarctic oceans. Seventy percent of what we call "earth" is under water. Life began in the ocean, and the ocean still plays a vital role in our lives and the earth's ecosystem. More than half the world's population lives within a few miles of the sea; we're drawn to it to swim, surf, sail, or simply gaze out across the waves.
Earth System Analysis for Sustainability uses an integrated systems approach to provide a panoramic view of planetary dynamics since the inception of life some four billion years ago and to identify principles for responsible management of the global environment in the future. Perceiving our planet as a single entity with hypercomplex, often unpredictable behavior, the authors use Earth system analysis to study global changes past and future.
Scientists Debate Gaia is a multidisciplinary reexamination of the Gaia hypothesis, which was introduced by James Lovelock and Lynn Margulis in the early 1970s. The Gaia hypothesis holds that Earth's physical and biological processes are linked to form a complex, self-regulating system and that life has affected this system over time. Until a few decades ago, most of the earth sciences viewed the planet through disciplinary lenses: biology, chemistry, geology, atmospheric and ocean studies. The Gaia hypothesis, on the other hand, takes a very broad interdisciplinary approach.
Rivers going underground, great springs emerging from the ground, independent hollows and basins instead of connecting valleys, deep potholes and vast caves, isolated towerlike hills reminiscent of the unbelievably steep peaks depicted in Chinese paintings—these are some of the distinctive features of karst, the name given to the kinds of country that owe their special characteristics to the unusual degree of solubility of their component rocks in natural waters.
Artificial Neural Networks (ANNs) offer an efficient method for finding optimal cleanup strategies for hazardous plumes contaminating groundwater by allowing hydrologists to rapidly search through millions of possible strategies to find the most inexpensive and effective containment of contaminants and aquifer restoration. ANNs also provide a faster method of developing systems that classify seismic events as being earthquakes or underground explosions.
In this book Peter Lindert evaluates environmental concerns about soil degradation in two very large countries—China and Indonesia—where anecdotal evidence has suggested serious problems. Lindert does what no scholar before him has done: using new archival data sets, he measures changes in soil productivity over long enough periods of time to reveal the influence of human activity.
In this book fifteen distinguished scientists discuss the effects of life—past and present—on planet Earth. Unlike other earth science and biology books, Environmental Evolution describes the impact of life on the Earth's rocky surfaces, presenting an integrated view of how our planet evolved. Modeled on the Environmental Evolution course developed by Lynn Margulis and colleagues, it provides a unique synthesis of atmospheric, biological, and geological hypotheses that explain the present condition of the biosphere.
Quantum mechanics: Suppose that there is a particle with orbital angular momentum $|L|$. But the particle also has spin $|S|$. The question is, how do I reflect this in the Schrodinger equation? I do know what the Schrodinger equation becomes in each case (when a particle has a particular orbital angular momentum, and when a particle has some spin), but not when both occur.
The Schroedinger equation does not describe spin. If you need to describe spin as well, you should use the Pauli equation or the Dirac equation (for spin 1/2).
I think we can talk about spin and spin interactions with the standard Schrodinger equation. Start with spin–orbit coupling, or LS coupling.
Next see the Zeeman effect, and especially the Paschen–Back effect.
You need perturbation theory to pick up on spin effects given the standard Schrodinger model of the atom, as seen on Wikipedia:
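As a small numerical aside (my addition, not part of the thread): the spin operators that the Pauli equation couples to a magnetic field are built from the Pauli matrices, and the spin-1/2 algebra is easy to check with NumPy (units with $\hbar = 1$):

```python
import numpy as np

# Pauli matrices; the spin-1/2 operators are S_i = (hbar/2) * sigma_i (hbar = 1 here)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
Sx, Sy, Sz = 0.5 * sx, 0.5 * sy, 0.5 * sz

# Angular-momentum algebra: [Sx, Sy] = i Sz
comm = Sx @ Sy - Sy @ Sx
assert np.allclose(comm, 1j * Sz)

# Total spin S^2 = s(s+1) * I with s = 1/2
S2 = Sx @ Sx + Sy @ Sy + Sz @ Sz
assert np.allclose(S2, 0.75 * np.eye(2))
print("spin-1/2 algebra checks out")
```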
I understand that the Bernoulli effect is a flawed explanation for the cause of lift, and does not account for much of it at all, but how much?
Is there any experimental data on the force caused by the Bernoulli effect? Maybe implicitly through data of the pressure difference between the top and underside of an aeroplane's wings. After that, I assume I could (crudely approximating the pressure to be acting perpendicularly to the flight direction) use $\Delta P A$ to work out the net force on the plane.
Perhaps there is another way to quantitatively analyse the extent to which the Bernoulli effect causes lift.
Edit: see this short cartoon (content similar to Mike Dunlavey's answer).
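To get a rough feel for the numbers in the $\Delta P A$ estimate (illustrative figures of my own, loosely based on a large airliner, not measured data): in level flight the average pressure difference over the wings must support the aircraft's weight, which works out to only a few percent of atmospheric pressure.

```python
# Crude DeltaP * A estimate from the question, with illustrative numbers.
g = 9.81           # m/s^2
mass = 560_000.0   # kg, roughly a fully loaded A380 (illustrative)
wing_area = 845.0  # m^2, A380 reference wing area (illustrative)

weight = mass * g             # N; the lift needed in level flight
delta_p = weight / wing_area  # average pressure difference over the wing
print(f"required average DeltaP = {delta_p:.0f} Pa "
      f"({delta_p / 101_325:.1%} of sea-level pressure)")
```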
The universe is constantly being created—and destroyed. Discover how these processes work, and how they may hold clues to how the universe began.
More About the Universe
Details of the big bang are obscured by billions of years of cosmic history. But high-tech orbiting telescopes are lifting the veil on our universe's formative years.
Supernovae occur when large stars collapse, ejecting plumes of gas, dust, and energy. Scientists study the remnants of these blasts for clues about the life and death of stars.
Humans have studied nebulae for centuries. But space-based and infrared telescopes that can cut through the dust are casting these cosmic cloud formations in a whole new light.
Space-based telescopes have revealed the complex and beautiful details of thousands of our universe's far-flung galaxies.
Phenomena: A Science Salon
National Geographic Magazine
Our genes harbor many secrets to a long and healthy life. And now scientists are beginning to uncover them.
All the elements found in nature—the different kinds of atoms—were found long ago. To bag a new one these days, and push the frontiers of matter, you have to create it first.
Burn natural gas and it warms your house. But let it leak, from fracked wells or the melting Arctic, and it warms the whole planet.
It was a bit late (or the scientists were premature) but the Redoubt Volcano has finally begun erupting. (See my previous post for context.) You’ve probably heard the top news on this by now. Here are tidbits that might have been lost in the headlines.
- As of noon today (Tues, 3/24), there had been six eruptions. Tuesday has been calm, but the official alert level remains at “High.”
- Besides the ash plume, and steam plume, there have been lahar flows — mud and water. Some have reached all the way to Cook Inlet, 21 miles away. (But no damage so far at the Cook Inlet oil terminal, although workers were evacuated.)
- Some of these watery lahars have been 20-25 feet deep.
- A glacier has melted (causing the flooding).
- 24 hours before the first eruption, seismologists at the Alaska Volcano Observatory raised the alert level around the volcano, after swarms of earth tremors in the area. (So, yes there was some warning immediately before the explosion.)
Here’s a photo of one of the eruptions, on Monday March 23rd, 2009.
Homes and buildings chilled without air conditioners. Car interiors that don’t heat up in the summer sun. Tapping the frigid expanses of outer space to cool the planet. Science fiction, you say? Well, maybe not any more.
A team of researchers at Stanford has designed an entirely new form of cooling structure that cools even when the sun is shining. Such a structure could vastly improve the daylight cooling of buildings, cars and other structures by reflecting sunlight back into the chilly vacuum of space.
Some engineers are dusting off an old idea for storing energy—using electricity to liquefy air by cooling it down to nearly 200 °C below zero. When power is needed, the liquefied air is allowed to warm up and expand to drive a steam turbine and generator.
The concept is being evaluated by a handful of companies that produce liquefied nitrogen as a way to store energy from intermittent renewable energy sources. Liquefied air might also be used to drive pistons in the engines of low-emission vehicles.
IBM’s question-answering Watson supercomputer is building quite the résumé. First it won a much-publicized showdown against the two greatest Jeopardy! champions of all time, then it went to medical school and emerged as a budding oncologist. Now Watson has a new job–as a customer-service agent with the mostest. The help desk is a bit of a step down from fighting cancer, but IBM is nothing if not pragmatic. U.S. organizations spend $112 billion on call center labor and software, yet half of the 270 billion customer-service calls go unresolved each year, presenting a fairly sizable opening for an enhanced cognitive computer. Let’s face it: Rare is the occasion when you a) reach a live person and b) they know what they’re talking about. Why not give silicon a chance?
Watson is learning fast
Computers aren’t just getting better, they’re getting smarter. Sixteen years ago, a software program beat the reigning chess champion. IBM had spent seven years creating it, and it was time well spent. The victory got the world’s attention and proved that superior computation skills could at least sometimes add up to superior performance.
Two years ago, IBM’s Watson software beat the world’s two best players in the television game show “Jeopardy!” Although “Jeopardy!” is a test of trivia, the victory was anything but trivial. It showed how well artificial intelligence researchers could process ordinary language and extract knowledge from unstructured databases.
Since then, Watson has been put to work learning something a lot less trivial—medical diagnosis. But that’s still a very limited domain—in fact, it’s restricted to cancer diagnoses so far.
But IBM is also looking to the long term. It has given one of the world’s leading AI researchers, at a leading university for AI, an open-ended three-year charter to make Watson smarter.
Against all probability, a device that purports to use cold fusion to generate vast amounts of power has been verified by a panel of independent scientists. The research paper, which hasn’t yet undergone peer review, seems to confirm both the existence of cold fusion, and its potency: The cold fusion device being tested has roughly 10,000 times the energy density and 1,000 times the power density of gasoline. Even allowing for a massively conservative margin of error, the scientists say that the cold fusion device they tested is 10 times more powerful than gasoline — which is currently the best fuel readily available to mankind.
The device being tested, called the Energy Catalyzer (E-Cat for short), was created by Andrea Rossi. Rossi has been claiming for the past two years that he had finally cracked cold fusion, but much to the chagrin of the scientific community he hasn’t allowed anyone to independently analyze the device — until now. While it sounds like the scientists had a fairly free rein while testing the E-Cat, we should stress that they still don’t know exactly what’s going on inside the sealed steel cylinder reactor. Still, the seven scientists, all from good European universities, obviously felt confident enough with their findings to publish the research paper.
Robots began replacing human brawn long ago—now they’re poised to replace human brains. Moshe Vardi, a computer science professor at Rice University, thinks that by 2045 artificially intelligent machines may be capable of “if not any work that humans can do, then, at least, a very significant fraction of the work that humans can do.”
So, he asks, what then will humans do?
As we begin to scratch at the basic workings of life, we’ll also inevitably come up against the mechanics of death. Real life extension science is on the horizon, and we should have a belief in place about how to approach these areas of science, because progress is not going to wait while we grapple with imponderables.
Some believe in a utopian future, in which humans can transcend their physical limitations with the aid of machines. But others think humans will eventually relinquish most of their abilities and gradually become absorbed into artificial intelligence (AI)-based organisms, much like the energy-making machinery in our own cells.
Physics & the Detection of Medical X-Rays
Röntgen accidentally discovered the radiation which he called "x-rays" and which is now called Röntgen rays in much of the world. This discovery can offer students a good understanding of how scientific discoveries occur. The Web has several sites with excellent pictures, including some images made by Röntgen.
By Wilhelm Conrad Röntgen
Before X-rays and immediately after their discovery
This site presents the first chapter of the book Naked to the Bone: Medical Imaging in the Twentieth Century, and contains a fascinating story of attempts by Alexander Graham Bell to find a bullet lodged in President James Garfield. Exploring Bell's attempt to "look" inside the President's body with electromagnetic induction would make a great physics lesson. Then, the chapter describes Röntgen's discovery.
Entdeckung einer neuen Strahlung (The Discovery of New Radiation)
A very short discussion of Roentgen's early work
The source for the history of Röntgen and of x-rays from their discovery to modern times. Even if you don't read German, you should look at the pictures.
Röntgen's discovery with pictures of the original equipment. For English see the next link.
Experimental apparatus by Wilhelm Conrad Röntgen
Wilhelm Conrad Röntgen
A 61-page biography which is primarily in German but includes several lengthy quotations in English. PDF.
This speech, given on December 10, 1901 by C.T. Odhner, President of the Royal Swedish Academy of Sciences, describes why Röntgen was chosen for the first Nobel Prize in physics.
Biography prepared by the Nobel Committee
This biography was prepared at the time Röntgen received the Nobel Prize. It is short but very informative.
This biography was prepared at DESY and is written so that school children will find it easy to read.
The Discovery Print
Some radiologists commissioned a painting to commemorate the 100th anniversary of Röntgen's discovery. It is a nice picture.
Language should not be a barrier.
A set of pictures including some x-ray images taken in 1896. Language should not be a barrier.
Search Google for Geschichte der Radiologie (the history of radiology).
Search Yahoo for Roentgen x-ray discovery.
the language of the Web site.
Common names: Spring Cankerworm, Inchworm, Measuring Worm
Scientific name: Order Lepidoptera, family Geometridae, Paleacrita vernata
Size: Adult--1/2" to 1", larva 1"
Close up of an adult female cankerworm moth (R. Childs)
Three fall cankerworm larvae. Note the 3 pairs of prolegs.
Identification: Adults are light brown or gray moths with translucent wings. The larvae are often called inchworms or measuring worms because of their looping movement. They are variable in color, but usually striped longitudinally. Larvae drop from trees on silk threads.
A spring cankerworm caterpillar. Note the 2 pairs of prolegs.
Biology and life cycle: Female adults are wingless; they climb trees to lay eggs in clusters that hatch in the spring just at bud break. Brownish purple eggs are laid in groups in the bark of trees. One brood per year. Larvae hatch in spring when leaves first open, feed for three or four weeks, crawl into the soil to pupate.
Habitat: Elms, oaks, lindens, sweetgums, apples, and other shade and fruit trees.
Feeding habits: Larvae feed on tree and shrub foliage. They drop down on silk threads to evade predators, then go back and eat some more when danger has passed. Why? Guess they are still hungry.
An adult wingless female spring cankerworm producing an egg mass on the trunk of a tree. (R. Childs)
Economic importance: Can defoliate broadleaf trees.
Natural control: Trichogramma wasps, birds, and lizards.
Organic control: Band trunks with sticky material in late winter during egg-laying time. Apply the material to paper bands, not directly to the trunk, to avoid girdling the tree. Spray with Bacillus thuringiensis or plant oil products as a last resort.
Insight: These little guys will do a lot of damage in the spring to plants like dwarf yaupon holly, but the foliage usually grows back without any long-term injury.
A female cankerworm adult. Note that this is a wingless moth.
A particle of charge q is brought from A to B, following a sinusoidal path, and we must find the work done in doing so. The point A is at a distance r from the infinitely long linear body given in the figure. Write the answer with the correct explanation.
Please find the figure in the attached file.
However, if it is an electric field, then remember that the force due to it is conservative, that is, path independent: the work depends only on the initial and final positions of the charged particle or, in simpler language, the displacement.
Thus W = F · d = force × displacement.
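To back up the path-independence claim numerically (my own sketch, not from the thread): take the field of an infinite line charge lying along the x-axis, $E = k/y$ directed away from the line with $k = \lambda/2\pi\varepsilon_0$ set to 1, and integrate $q\,\vec{E}\cdot d\vec{l}$ along both a straight path and a sinusoidal path between the same endpoints. Both give the analytic result $qk\ln(y_B/y_A)$, which depends on the endpoints only.

```python
import math

# Field of an infinite line charge along the x-axis:
# E points in +y with magnitude k / y  (k = lambda / (2*pi*eps0), set to 1).
k, q = 1.0, 1.0

def work(path, n=50_000):
    """W = q * integral of E . dl along path(t) for t in [0, 1]."""
    total = 0.0
    x0, y0 = path(0.0)
    for i in range(1, n + 1):
        x1, y1 = path(i / n)
        y_mid = 0.5 * (y0 + y1)
        total += q * (k / y_mid) * (y1 - y0)  # E has only a y-component
        x0, y0 = x1, y1
    return total

yA, yB = 1.0, 3.0  # distances from the line at the endpoints A and B
straight = lambda t: (4.0 * t, yA + (yB - yA) * t)
sinusoidal = lambda t: (4.0 * t, yA + (yB - yA) * t + 0.4 * math.sin(6 * math.pi * t))

exact = q * k * math.log(yB / yA)  # analytic result, endpoints only
print(work(straight), work(sinusoidal), exact)
```

Both numerical integrals agree with the analytic value to high precision, even though the sinusoidal path is much longer.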
minimum returns the minimum value from a list, which must be non-empty, finite, and of an ordered type. It is a special case of minimumBy, which allows the programmer to supply their own comparison function.
The least element of a non-empty structure.
The minimumBy function takes a comparison function and a list and returns the least element of the list by the comparison function. The list must be finite and non-empty.
The least element of a non-empty structure with respect to the given comparison function.
O(n) minimum returns the minimum value from a ByteString. This function will fuse. An exception will be thrown in the case of an empty ByteString.
O(n) minimum returns the minimum value from a Text, which must be non-empty. Subject to fusion.
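For comparison only (a Python analogy, not Haskell): the built-in min plays the role of minimum, functools.cmp_to_key lets you pass an explicit three-way comparison function in the spirit of minimumBy, and an empty sequence likewise raises an error.

```python
from functools import cmp_to_key

# min on a non-empty sequence plays the role of Haskell's minimum
assert min([5, 2, 9]) == 2

# An explicit three-way comparison function, in the spirit of minimumBy
def compare_second(a, b):
    return (a[1] > b[1]) - (a[1] < b[1])

xs = [(3, "c"), (1, "a"), (2, "b")]
assert min(xs, key=cmp_to_key(compare_second)) == (1, "a")

# Like minimum, an empty sequence is an error
try:
    min([])
except ValueError:
    print("empty sequence raises ValueError")
```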
Superheros or super beings: where is evolution taking us?
Humans are constantly evolving but the laws of physics mean we won't suddenly develop X-Men-like abilities
Natural selection continues apace in nature, but modern medicine is subverting the process in humans, writes DICK AHLSTROM
Birds do it, bees do it, but do we? Evolution through natural selection is alive and well and making changes in all the species on the planet, but where is it taking us?
Scientists can actually watch the process in organisms such as viruses, bacteria or fruit flies. We know evolution is real because we can apply environmental pressures to drive genetic change in these quickly regenerating species.
It is more difficult to watch in “real time” in humans and mammals, since change evolves slowly over many generations. In this case, the fossil record helps, by, for instance, showing us what our earlier human-like ancestors looked like and what changes occurred.
One way or another change arrives, brought either by external environmental factors working directly on our genes or by spontaneous mutation. And most of the change is spontaneous, says Prof James McInerney of the Department of Biology at NUI Maynooth.
A case of mistaken identity
“The vast majority of evolution is neutral, it is not in response to an external influence. It is because the [gene] replication system makes mistakes,” says Prof McInerney, currently a visiting professor at Harvard University in the US.
This may seem like a bad thing but it is just the opposite. “We need to have genetic variety or pathogens will bring us down. We have to be evolvable,” he says.
“Evolvability is good. We could evolve to a position where our replication system doesn’t make mistakes but that is a very bad idea. The ability to be different turns out to be a good idea a lot of the time.”
The parts of our genomes that change the most, however, are the ones that interact with external forces. “How we interact with the environment makes for most of the change,” says Prof McInerney.
A plague on just one of your houses
He cites an extreme example: the waves of Bubonic plague that swept Europe. Some people survived better than others. “The bacterium that was afflicting the population had a different effect on one blood group than another,” he says. Those who survived passed these resistance genes on to subsequent generations.
Sickle-cell disease is another example, says Prof Aoife McLysaght of the Molecular Evolution Lab at the Smurfit Institute of Genetics at Trinity College Dublin.
This red blood cell-related condition causes anaemia in sufferers, but having it confers an advantage, with the person being less affected by malaria, she says.
“Evolution does a cost/benefit analysis. The benefit of protection against malaria outweighed the cost of being anaemic.”
This kind of natural selection is under way all of the time, but do we have any idea where it is taking us? “Human evolution is still going on but it is hard to know where it is heading,” says McLysaght.
Prof Dan Bradley, professor of population genetics at the Smurfit Institute, agrees. “It is hard to predict because we don’t know what the future holds. But we can’t look at the present and assume we will continue on a straight line.”
The average temperature of the whole surface of the World Ocean is ~17.5°C. The highest temperature, >36°C, is found in the Red Sea and the lowest, < -2°C, has been observed in the Weddell Sea. Water temperature depth distribution depends on the amount of solar heating of the Ocean surface and intermixing of water masses.
Warmer surface and near-surface layers transmit heat to underlying waters, forming a productive layer. Hydrological, biological and other processes act within it. The thickness of the active layer ranges from 200 to 400 m. Down to depths of 1,000-1,800 m, the temperature gradually decreases, and below 1,800 m, cold waters of almost constant temperature exist.
Text and images are from Man and the Ocean, a CD-ROM produced by the Russian Head Department of Navigation and Oceanography (HDNO).
Search our database of handpicked sites
Looking for a great physics site? We've tracked down the very best and checked them for accuracy. Just fill out the fields below and we'll do the rest.
Search results from our links database
These pages take a fascinating look inside the world of computers and the internet. There is good use of colour and the site makes good use of flash to explain computers, microprocessors, the ...
Amazing demonstration of quantum levitation by Tel Aviv University's quantum levitation group.
Great introduction to the basics of computer programming and how programming languages work.
A brief description of how Chess Computers work from HowStuffWorks.com.
A brief description of how Computer Mice work - from the traditional mouse to newer optical mice. From HowStuffWorks.com.
Really useful guide to all aspects of quantum mechanics.
This is the introduction page to various sections on quantum topics.
A brief description of how analog and LCD Computer Monitors work from HowStuffWorks.com.
Comprehensive site aimed at high school student learning about quantum phenomena. Excellent!!
News from New Scientist on the topic of quantum.
As you'd probably know, in the simplest terms a servlet is a Java alternative to CGI. It is used for dynamic presentation of content over the web. In this process a servlet can perhaps pick data from a database, consult other classes/beans to perform computations, a legacy app, or even other servlets, and send the results back to the user's browser in HTML form. The problem with servlets is that they don't enable separation of code from content.
Within servlet code you have to embed chunks of HTML, which makes it very messy. So within code you have awkward fragments like
out.println("<p>This is real weird</p>");
A JSP on the other hand is a high-level interface to a servlet. A JSP page is an ordinary HTML page (of course named as a .jsp) containing pieces of Java (the default language) code. So the content developer can create the HTML page first, and the programmer puts pieces of code directly within the HTML. Your server automatically 'compiles' your JSP into servlet code. It also enables dynamic loading, wherein if you change your JSP, the resulting servlet also changes, is recompiled and loaded, without your having to restart your server.
Usually, in real-world scenarios, neither exists in isolation; you have JSP pages and servlets working in tandem. The servlet acts as the gateway to an application: it performs routing of requests and some essential tasks.
A request from an HTML page goes to, say, a master servlet, which redirects it to an appropriate JSP. That way all monitoring, security and other issues can be taken care of at a central location, i.e. the master servlet. And all presentation is performed by the JSP.
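The separation-of-content idea is language-independent. Here is a loose Python sketch (not servlet or JSP code) contrasting markup emitted from inside code with a template that a content developer could own, which is essentially what a JSP provides:

```python
from string import Template

# Servlet-style: the markup lives inside the code, which gets messy fast.
def render_in_code(name):
    return "<p>Hello, " + name + "!</p>"

# JSP-style: the page is authored as markup with placeholders, and the
# code only supplies the dynamic values.
page = Template("<p>Hello, $name!</p>")

assert render_in_code("Alice") == page.substitute(name="Alice")
print(page.substitute(name="Alice"))  # <p>Hello, Alice!</p>
```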
To learn JSPs you can look up the JSP specs published by Sun. As a simple tutorial you might also want to check out:
Elizabeth White, Mariana Trench Vehicle, The Cape and Islands NPR Stations, 4 December 2006
The Mariana Trench in the Pacific Ocean is the deepest place in the world's oceans. The trench plunges 11,000 meters below the ocean's surface, and is deeper than Mt. Everest is tall. The trench was surveyed by the Royal Navy in 1951, but little is known about the region. Today, engineers at the Woods Hole Oceanographic Institution are building a unique underwater vehicle capable of exploring the deepest depths of the trench. Named Nereus, the vehicle will hopefully bring a wealth of scientific data to the surface.
FILE » underwater_robot_128_17043.mp3
Audio Recording, MP3 Format | <urn:uuid:1b078b47-45f4-4f7d-836a-59ca0f9ec1f1> | 3.546875 | 152 | Truncated | Science & Tech. | 53.550811 |
A target specifies a product to build and contains the instructions for building the product from a set of files in a project or workspace. A target defines a single product; it organizes the inputs into the build system—the source files and instructions for processing those source files—required to build that product. Projects can contain one or more targets, each of which produces one product.
The instructions for building a product take the form of build settings and build phases, which you can examine and edit in the Xcode project editor. A target inherits the project build settings, but you can override any of the project settings by specifying different settings at the target level. There can be only one active target at a time; the Xcode scheme specifies the active target.
A target and the product it creates can be related to another target. If a target requires the output of another target in order to build, the first target is said to depend upon the second. If both targets are in the same workspace, Xcode can discover the dependency, in which case it builds the products in the required order. Such a relationship is referred to as an implicit dependency. You can also specify explicit target dependencies in your build settings, and you can specify that two targets that Xcode might expect to have an implicit dependency are actually not dependent. For example, you might build both a library and an application that links against that library in the same workspace. Xcode can discover this relationship and automatically build the library first. However, if you actually want to link against a version of the library other than the one built in the workspace, you can create an explicit dependency in your build settings, which overrides this implicit dependency. | <urn:uuid:6b5f7001-f006-4b76-973d-125ed091dcb6> | 3.390625 | 341 | Documentation | Software Dev. | 39.551258 |
SI unit: m/s
In kinematics, velocity is the rate of change of the position of an object, equivalent to a specification of its speed and direction of motion. Speed describes only how fast an object is moving, whereas velocity gives both how fast and in what direction the object is moving. If a car is said to travel at 60 km/h, its speed has been specified. However, if the car is said to move at 60 km/h to the north, its velocity has now been specified. To have a constant velocity, an object must have a constant speed in a constant direction. Constant direction constrains the object to motion in a straight path (the object's path does not curve). Thus, a constant velocity means motion in a straight line at a constant speed. If there is a change in speed, direction, or both, then the object is said to have a changing velocity and is undergoing an acceleration. For example, a car moving at a constant 20 kilometres per hour in a circular path has a constant speed, but does not have a constant velocity because its direction changes. Hence, the car is considered to be undergoing an acceleration.
Velocity is a vector physical quantity; both magnitude and direction are required to define it. The scalar absolute value (magnitude) of velocity is called "speed", a quantity that is measured in metres per second (m/s or m·s⁻¹) when using the SI (metric) system. For example, "5 metres per second" is a scalar (not a vector), whereas "5 metres per second east" is a vector. The rate of change of velocity (in m/s) as a function of time (in s) is "acceleration" (in m/s², stated "metres per second per second"), which describes how an object's speed and direction of travel change at each point in time. In science a "deceleration" is called a "negative acceleration", for example: −2 m/s².
Equation of motion
The average velocity of an object moving through a displacement Δx during a time interval Δt is described by the formula: v̄ = Δx / Δt.
The velocity vector v of an object that has position x(t) at time t and x(t + Δt) at time t + Δt can be computed as the derivative of position: v = dx/dt = lim(Δt→0) [x(t + Δt) − x(t)] / Δt.
Velocity is also defined as the rate of change of displacement. The magnitude of the average velocity is always smaller than or equal to the average speed of a given particle. Instantaneous velocity is always tangential to the trajectory. The slope of the tangent of a position (or displacement) versus time graph is the instantaneous velocity, and the slope of a chord is the average velocity.
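As an illustrative check of these definitions, a short sketch can compare the average velocity over an interval with a finite-difference approximation to the instantaneous velocity. The trajectory x(t) = 5t + 2t² and all numbers are invented for the example:

```python
# Illustrative 1-D trajectory x(t) = 5t + 2t^2 (metres, seconds); values invented.
def x(t):
    return 5.0 * t + 2.0 * t ** 2

def average_velocity(t0, t1):
    # Average velocity = displacement / time interval.
    return (x(t1) - x(t0)) / (t1 - t0)

def instantaneous_velocity(t, dt=1e-6):
    # Central-difference approximation to the derivative dx/dt.
    return (x(t + dt) - x(t - dt)) / (2.0 * dt)

print(average_velocity(0.0, 2.0))   # 9.0 m/s  (= (18 - 0) / 2)
print(instantaneous_velocity(2.0))  # ~13.0 m/s (analytically dx/dt = 5 + 4t)
```

Note that the average velocity over the interval (9 m/s) differs from the instantaneous value at the endpoint (13 m/s), illustrating the distinction drawn in the text.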
The equation for an object's velocity can be obtained mathematically by evaluating the integral of the equation for its acceleration from some initial time t₀ to some later time t: v(t) = v(t₀) + ∫ a dt (integrated from t₀ to t).
The final velocity v of an object which starts with velocity u and then accelerates at constant acceleration a for a period of time t is: v = u + at.
The average velocity of an object undergoing constant acceleration is (u + v)/2, where u is the initial velocity and v is the final velocity. To find the position x of such an accelerating object after a time interval t: x = x₀ + ((u + v)/2) t.
When only the object's initial velocity is known, the expression v̄ = u + (a t)/2 can be used. This can be expanded to give the position at any time t in the following way: x = x₀ + u t + (1/2) a t².
These basic equations for final velocity and position can be combined to form an equation that is independent of time, also known as Torricelli's equation: v² = u² + 2a(x − x₀).
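A quick numerical consistency check of these constant-acceleration relations; the values u = 3 m/s, a = 2 m/s² and t = 4 s are arbitrary:

```python
u, a, t = 3.0, 2.0, 4.0       # initial velocity, acceleration, time (arbitrary values)
v = u + a * t                 # final velocity: v = u + at
s = u * t + 0.5 * a * t ** 2  # displacement: x - x0 = ut + (1/2)at^2

# Torricelli's equation relates v, u, a and the displacement with no reference to t:
print(v ** 2, u ** 2 + 2.0 * a * s)  # both sides equal 121.0
```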
The above equations are valid for both Newtonian mechanics and special relativity. Where Newtonian mechanics and special relativity differ is in how different observers would describe the same situation. In particular, in Newtonian mechanics, all observers agree on the value of t and the transformation rules for position create a situation in which all non-accelerating observers would describe the acceleration of an object with the same values. Neither is true for special relativity. In other words only relative velocity can be calculated.
The kinetic energy of a moving object, E_k = (1/2) m v², depends only on the magnitude of the velocity and is a scalar quantity.
Escape velocity is the minimum velocity a body must have in order to escape from the gravitational field of the Earth. To escape from the Earth's gravitational field an object must have kinetic energy at least as great as the magnitude of its gravitational potential energy. The value of the escape velocity from the Earth's surface is approximately 11,200 m/s.
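The quoted figure follows from equating kinetic energy with the magnitude of gravitational potential energy, giving v_esc = sqrt(2GM/R). The constants below are standard textbook values:

```python
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_earth = 5.972e24  # mass of the Earth, kg
R_earth = 6.371e6   # mean radius of the Earth, m

# (1/2) m v^2 = G M m / R  =>  v = sqrt(2 G M / R)
v_esc = math.sqrt(2.0 * G * M_earth / R_earth)
print(v_esc)  # roughly 11,200 m/s
```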
Relative velocity
Relative velocity is a measurement of velocity between two objects as determined in a single coordinate system. Relative velocity is fundamental in both classical and modern physics, since many systems in physics deal with the relative motion of two or more particles. In Newtonian mechanics, the relative velocity is independent of the chosen inertial reference frame. This is not the case anymore with special relativity in which velocities depend on the choice of reference frame.
If an object A is moving with velocity vector v and an object B with velocity vector w, then the velocity of object A relative to object B is defined as the difference of the two velocity vectors: v_(A rel B) = v − w.
Similarly, the relative velocity of object B moving with velocity w, relative to object A moving with velocity v, is: v_(B rel A) = w − v.
Usually, the inertial frame chosen is the one in which the second of the two objects is at rest.
Scalar velocities
In the one-dimensional case, the velocities are scalars and the magnitude of the relative velocity is either:
- |v_rel| = |v| + |w|, if the two objects are moving in opposite directions, or:
- |v_rel| = |v| − |w|, if the two objects are moving in the same direction.
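A one-dimensional sketch of the relative-velocity rule, with signed scalars standing in for the vectors (the speeds are invented):

```python
def relative_velocity(v, w):
    # Velocity of object A (velocity v) as seen from object B (velocity w): v - w.
    return v - w

# Same direction: 60 km/h overtaking 40 km/h, so magnitudes subtract.
print(relative_velocity(60.0, 40.0))   # 20.0 km/h
# Opposite directions (w carries a negative sign), so magnitudes add.
print(relative_velocity(60.0, -40.0))  # 100.0 km/h
```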
Polar coordinates
In polar coordinates, a two-dimensional velocity is described by a radial velocity, defined as the component of velocity away from or toward the origin (also known as velocity made good), and an angular velocity, which is the rate of rotation about the origin (with positive quantities representing counter-clockwise rotation and negative quantities representing clockwise rotation, in a right-handed coordinate system).
The radial and angular velocities can be derived from the Cartesian velocity and displacement vectors by decomposing the velocity vector into radial and transverse components. The transverse velocity is the component of velocity along a circle centered at the origin.
Decomposing the velocity vector, v = v_R + v_T, where:
- v_T is the transverse velocity
- v_R is the radial velocity.
The magnitude of the radial velocity is the dot product of the velocity vector and the unit vector in the direction of the displacement: v_R = (v · r) / |r|, where:
- r is the displacement.
The magnitude of the transverse velocity is that of the cross product of the unit vector in the direction of the displacement and the velocity vector: v_T = |r × v| / |r| = ω |r|. It is also the product of the angular speed ω and the magnitude of the displacement.
Angular momentum in scalar form is the mass times the distance to the origin times the transverse velocity, or equivalently, the mass times the distance squared times the angular speed: L = m r v_T = m r² ω, where:
- m is the mass of the object.
The expression m r² is known as the moment of inertia. If forces are in the radial direction only with an inverse square dependence, as in the case of a gravitational orbit, angular momentum is constant, and transverse speed is inversely proportional to the distance, angular speed is inversely proportional to the distance squared, and the rate at which area is swept out is constant. These relations are known as Kepler's laws of planetary motion.
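The radial/transverse decomposition can be sketched directly from the dot- and cross-product definitions above. The positions, velocities and mass are invented; uniform circular motion makes the expected answer obvious:

```python
import math

def radial_transverse(x, y, vx, vy):
    r = math.hypot(x, y)
    v_r = (x * vx + y * vy) / r  # dot(v, r_hat): radial component
    v_t = (x * vy - y * vx) / r  # z-component of r_hat x v: transverse (CCW positive)
    return v_r, v_t

# Uniform circular motion: radius 2 m, angular speed 3 rad/s (counter-clockwise).
# At the point (2, 0) the velocity is (0, 6): purely transverse.
v_r, v_t = radial_transverse(2.0, 0.0, 0.0, 6.0)
print(v_r, v_t)               # 0.0 6.0

m = 1.5                       # mass, kg (invented)
L = m * 2.0 * v_t             # angular momentum: m * r * v_t
print(L, m * 2.0 ** 2 * 3.0)  # 18.0 18.0 (agrees with m * r^2 * omega)
```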
See also
- Escape velocity
- Four-velocity (relativistic version of velocity for Minkowski spacetime)
- Group velocity
- Phase velocity
- Proper velocity (in relativity, using traveler time instead of observer time)
- Rapidity (a version of velocity additive at relativistic speeds)
- Relative velocity
- Terminal velocity
- Velocity vs. time graph
| <urn:uuid:35c2dfd5-c3ee-4491-a3af-a6736c170583> | 4.40625 | 1,703 | Knowledge Article | Science & Tech. | 28.576596 |
Borrowing heavily from the psychologist's definition of analogy and the software designer's interface/class, the structure of the analogy can be represented as shown here.
An analogy is the mapping of similar attributes, operations and rules between two or more varieties. Each variety specifies a value for the attributes as well as an implementation for the operations.
In the code structure examples in the next few slides we will use the annotations N, A and O for Name, Attribute and Operation. The subscript a (for analogy) will mark the analogy description, and the subscript v (for variety) will mark the implementations by the varieties.
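A minimal sketch of how this analogy/variety structure might look in code, with the analogy as an abstract interface declaring names, attributes and operations (N_a, A_a, O_a) and each variety supplying its own values and implementations (N_v, A_v, O_v). All class and member names here are invented for illustration:

```python
from abc import ABC, abstractmethod

class FlowAnalogy(ABC):
    """N_a: the analogy names shared attributes and operations without fixing them."""

    @property
    @abstractmethod
    def rate_units(self):    # A_a: attribute declared at the analogy level
        ...

    @abstractmethod
    def describe_flow(self): # O_a: operation declared at the analogy level
        ...

class WaterVariety(FlowAnalogy):  # N_v
    @property
    def rate_units(self):         # A_v: variety-specific attribute value
        return "litres per second"

    def describe_flow(self):      # O_v: variety-specific implementation
        return "water flows downhill"

class TrafficVariety(FlowAnalogy):
    @property
    def rate_units(self):
        return "cars per minute"

    def describe_flow(self):
        return "cars flow along the road"
```

Each variety can be used wherever the analogy is expected, which mirrors the interface/class relationship the slides borrow from software design.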
To discuss the restriction depicted by the dotted lines I will use the following example | <urn:uuid:c9f68fb5-4a33-4214-b993-6bc5ffc961b1> | 3.21875 | 140 | Documentation | Software Dev. | 30.105 |
What is ENERGY MEASURED IN?
Energy is measured in joules in the SI system. Thermal energy can be measured in calories (1 calorie ≈ 4.2 joules) or BTU. In traditional units, energy is expressed in foot-pounds force (ft·lbf) ...
What is energy? Energy is the ability to do work and results in a change. It is a very commonly used word in our everyday lives. Different Forms of
In physics, energy is an indirectly observed quantity which comes in many forms, such as kinetic energy, potential energy, radiant energy, and many others; which are listed in this summary article. This is a major topic in science and technology and this article gives an overview of its major ...
In physics and chemistry, it is still common to measure energy on the atomic scale in the non-SI, but convenient, units electronvolts (eV). The Hartree (the atomic unit of energy) is commonly used in calculations. Historically Rydberg units have been used.
Power is an expression of an amount of work, or energy transferred, in a unit of time. Power itself is not directly measured, but instead can be calculated using a number of ...
Energy is the ability to do work, where work is computed from the equation: Work = Force x Distance. There is no absolute measure of energy, because energy definition is based on the work that one system does (or can do) on another. Thus, only the conversion of a system from one state into ...
Best Answer: In science, kinetic energy is commonly measured in joules. However, in our everyday lives, other energy measurements are more familiar to us than joules. Some of ...
Best Answer: Energy and work is measured in joules. That is force*distance in N*meters = N*m = W*s. 1000 joules (J) = 1 kJ. Power is measured in watts. That is energy/time ...
You can measure kinetic energy (energy of motion) with a thermometer. Measuring potential (stored) energy can be a difficult task. The potential energy of a ball stuck up
In the International System of Units energy is measured in joules. Energy can also be measured in other fields using numerous other units which include kilocalories
How Do We Measure Energy? Changing Energy Food Energy Heat Energy Next chapter is about Electricity. | About Energy Quest | Art Gallery | Ask Professor Quester | Devoured by the Dark | Energy ...
Back to Table of Contents. How is thermal (heat) energy measured, and how well does the Sun provide? The basic unit for thermal energy in home heating applications is the "therm", which is defined to be 100,000 BTU's:
Power is a physics term used to describe the average amount of energy that is transferred per unit of time. Thus power is expressed in units that are units of energy divided ...
A: Energy is measured in joules and kilojoules. One joule is approximately the amount of energy needed to lift a 100-gram mass through a height of one metre.
Best Answer: In the SI system of units it is the Joule, which is the work done in moving 1 meter against a force of 1 Newton. 1 Watt = 1 Joule per second. You can also use the ...
Back to Table of Contents. How is electrical energy measured, and how well does the Sun provide? Electrical Units of Energy. In the so-called "International System of Units", which are based on metric units, and which form the basis for the electrical units we use, both work and energy have the ...
Best Answer: Depends on what part of the world and what area of work. Usually, the standard unit of energy is the joule.
Energy is the ability to do work and results in a change. Energy can be measured by the work that is completed. Work is equal to Force times the distance. Look
The unit that measures heat energy is Joule. This unit is equal to the energy expended in applying a force of one Newton through a distance of one metre. However
There is no absolute measure of energy. Energy is defined by the work that one system does on another system. Watt, kilowatt and joule are common units of measure...read more free on Reference.com.
What is energy measured in? ChaCha Answer: Energy also can be measured in joules.
How is energy measured? ChaCha Answer: Kinetic energy (energy of motion) is measured with a thermometer.Measuring potential (stored) ...
EIA’s effort to take the lead to develop robust and reproducible energy-efficiency indicators and also measurements of greenhouse gas as related to energy use and energy efficiency.
Measurement of Energy. Temperature and Heat . Heat is the result of the movement of matter. Temperature is the measure of this movement of matter.
Energy is an integral part of our life. It is energy that produces work, heat and power. Energy and power are measured in different units, which makes things confusing sometimes. Let us have a look at various units of measurement of energy and power and their conversion. The units of ...
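The unit conversions scattered through the snippets above can be tied together with a small conversion table. The factors below are standard approximate values (e.g. 1 BTU ≈ 1055 J) and should be treated as such:

```python
# Energy units expressed in joules (approximate standard factors).
UNITS_IN_JOULES = {
    "J": 1.0,
    "cal": 4.184,    # thermochemical calorie
    "kcal": 4184.0,  # food "Calorie"
    "BTU": 1055.06,
    "kWh": 3.6e6,
    "eV": 1.602e-19,
}

def convert(value, from_unit, to_unit):
    # Convert via joules as the common intermediate unit.
    return value * UNITS_IN_JOULES[from_unit] / UNITS_IN_JOULES[to_unit]

print(convert(1.0, "kWh", "BTU"))  # ~3412 BTU in one kilowatt-hour
print(convert(1.0, "kcal", "J"))   # 4184 J in one food Calorie
```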
How Is Energy in Foods Measured?. You may know that eating too many calories can be unhealthy and that getting too few can also be bad. But measuring a calorie is more complex than you think. It helps to know where a calorie comes from and what the word calorie means. The molecular ...
Best Answer: When, for instance, a uranium or plutonium nucleus fissions it releases about 190 million electron volts of energy. (See first link. The 10 MeV of neutrino ...
jsmith: How energy is measured depends on its type. For instance, heat energy is measured by degrees or Celsius units, and electricity is expressed in volts.
How is Energy Efficiency Measured? If you’ve read my previous posts on the origin of Passive House, you’ll know it originated out of Europe and the Passive House standards were born of the Low Energy House standards.
Energy intensity is the ratio of energy consumption to some measure of demand for energy services—what we call a demand indicator. However, at best, energy-intensity measures are a rough surrogate for energy efficiency. This is because energy intensity ...
Thermal energy measures the movement of atoms or molecules within a substance. It is considered less a measurement and more a process or an amount of internal energy.
Types of Energy and the units used to measure it Many branches of science and technology are involved with energy, and each group originally defined energy using units that they considered useful.
Power is measured in watts. In an electrical system power is equal to the voltage multiplied by the current. The more power, the more electrical energy used per unit time.
In physics and chemistry, it is still common to measure energy on the atomic scale in the non-SI, but convenient, units electronvolts (eV). The Hartree (the atomic ... en.wikipedia.org. What units is energy measured in - WikiAnswers.
Nuclear energy appears in nature as energy emissions from the sun and the stars, while man-made forms can be found in the reactors used to power nuclear plants. Measuring this form of energy can be done based on how it affects the air, how it’s absorbed in the environment and how ...
The phrase power factor frequently is used in the electrical and power electronics industry. For example, home, office, and industrial electrical equipment often is fitted with power factor-corrected power supplies.
Get the answer to "How is electric power measured?" at Answers Encyclopedia, where answers are verified with credible reference sources like Encyclopedia.com.
Many commercial building electric meters have a meter multiplier, indicating that the power measured by the meter (and rotating disk) is a multiple of the power required by the building. This multiplier is sometimes on the meter faceplate.
Solar power is measured by insolation. The insolation rating of a particular area is the amount of solar radiation to hit the ground in a specific time period.
MTM Scientific, Inc: How to Measure the Power Output of Solar Panels. A photovoltaic solar electric panel generates DC power when it is exposed to sunlight.
The energy contained in our food is measured in terms of calories (cal) and joules (J). Technically, one calorie is the amount of energy required to raise the temperature of 1 gram of water by 1 degree Celsius (1.8 degrees Fahrenheit).
Summarily, the Helmholtz free energy is also the measure of an isothermal-isochoric closed system’s ability to do work. If any external field is missing, the Helmholtz free energy formula becomes: ΔF = ΔU –TΔS. F = Helmholtz Free Energy in Joule;
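The Helmholtz formula quoted above is a direct subtraction; with invented state values for illustration:

```python
U = 5.0e3      # internal energy, J (invented illustrative value)
T = 300.0      # temperature, K
S = 10.0       # entropy, J/K

F = U - T * S  # Helmholtz free energy: F = U - TS
print(F)       # 2000.0 J, the work obtainable at constant temperature and volume
```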
There are two types of solar energy measurement, based on the type of energy: photovoltaic energy produces electricity, and solar thermal energy heats water.
The American Council for an Energy-Efficient Economy(ACEEE) released its annual “State Energy Efficiency Scorecard” last week to great fanfare.
Measuring energy helps us monitor our consumption. Energy is measured in joules and calories.
What Is What Unit Is Gravitational Potential Energy Measured In? - Find Questions and Answers at Askives, the first startup that gives you an straight answer
Students should be able to: define temperature as the measure of thermal energy and describe the way it is measured.
HEAT ENERGY PER UNIT OF MEASURE FOR NATURAL GAS. UNIT OF MEASURE: APPROX. HEAT ENERGY: 1 cubic foot: 1,000 BTU's: 100 cubic feet (1 therm) 100,000 BTU's: 1,000 cubic feet (1 mcf) 1,000,000 BTU's: How Natural Gas is Sold as Transportation Fuel.
HEAT ENERGY What is HEAT? Form of energy and measured in JOULES Particles move about more and take up more room if heated this is why things expand if heated – A free PowerPoint PPT presentation (displayed as a Flash slide show) on PowerShow.com
Add this page to your blog, web, or forum. This will help people know what is What is ENERGY MEASURED IN | <urn:uuid:d0f5727e-b1ba-4b8b-8b69-4920aa030fd5> | 3.359375 | 2,199 | Content Listing | Science & Tech. | 49.762155 |
In the course of this review of the `prototype' barred galaxy NGC 1365 we may have gained the impression that this galaxy in several respects is rather unusual. Indeed, NGC 1365 is a supergiant galaxy falling at the very upper end of the infrared Tully-Fisher relation with a very strong bar. To what extent do its special properties have their origins in this bar?
The only spiral galaxy in our immediate neighbourhood that rivals NGC 1365 in size is the supergiant galaxy M 101 - a magnificent Sc galaxy with a prominent m = 1 asymmetry, but where the faint trace of a bar seen in infrared light or in CO does not appear massive enough to have a decisive influence on the dynamics of the system. Why these galaxies are so different, and why one of the two has a strong bar while the other has almost none, remains a matter of speculation. Does M 101 possess a stabilizing halo, preventing the formation of a bar, that is missing in NGC 1365? Understanding these differences is fundamental to our understanding of spiral galaxies.
A promising step forward in the search for the halo contribution in galaxies is the detection of ionized gas beyond the H I disk in the spiral galaxy NGC 253 (Bland-Hawthorn et al. 1997). This discovery is of profound importance as CO is not a good tracer of molecular hydrogen in metal poor surroundings. If hydrogen, ionized by some external radiation, can be detected beyond the H I cutoff, the rotation curve could be extended to give the total mass of the galaxy and the mass and extent of the halo. As the dominating mechanism for the ionization is suspected to be hot young stars in the inner region that are seen by a warped outer hydrogen disk, a search for such ionized hydrogen might successfully be undertaken in the warped galaxy NGC 1365.
We have seen how numerical simulations by successive approximations can lead to the determination of the distribution of mass within a galaxy. In further extensions these simulations should be made fully self-consistent with both stellar populations and interstellar matter gravitating and free to interact. A special problem to address is whether the gravitation from gaseous arms formed by the bar can prolong the life of stellar arms and help to drive the spiral arms through co-rotation. Another important problem to address is how to transport matter in the central region all the way into the nuclear accretion disk.
The relations between compact radio sources and compact super star clusters on one hand and between compact radio sources and infrared sources on the other, discovered in the nuclear region of NGC 1365, is remarkable. Further observations, for instance spectral observations at high spatial resolution and search for time variations in the radio domain, should contribute in an essential way to shed light on these strange classes of objects which are of fundamental importance for the understanding of star formation and stellar evolution in the nuclear region of a galaxy.
Many outflow cones from active nuclei have been presented in the literature. However, the methods of observation have differed. The drawback of long-slit measurements is the general lack of homogeneous spatial coverage and the great demand on observing time. The disadvantage of Fabry-Perot methods is the difficulty of detecting faint emission, e.g. in extended line wings, and of comparing line strengths. The modern 3-D spectrographs now developed for large telescopes should substantially improve the situation. Also, the modelling of these outflow cones has been very different from case to case. As an example we may mention the model worked out in detail for the high-excitation gas in the Seyfert 2 galaxy NGC 7582 by Morris et al. (1985), where the approach differs fundamentally from the one applied for NGC 1365. It would be of great advantage for the understanding of these high-excitation outflow cones if the observations and the modelling were done in comparable ways for a number of galaxies.
We do not know for certain whether to classify the nucleus of NGC 1365 as a low luminosity AGN and to what degree it is obscured by intervening dust. It would be important to get a spectrum covering a large wavelength range with high spatial resolution of the nucleus itself to hopefully get the extinction, the luminosities in various emission lines, the total luminosity, and the shape of the continuum in order to compare this nucleus with other Seyfert nuclei.
The distribution of populations in NGC 1365 is not sufficiently known. Most CCD images in the optical region are not absolutely calibrated. Adequate photometry (performed with suitable focal reducer) would permit analysis of the nuclear bulge, lens or bar, population analysis over selected regions, improved mass/luminosity determinations and photometry of faint outer regions. It would be of interest to try to identify planetary nebulae and ordinary old globular clusters in the galaxy, not only to trace these populations but also to compare these distance indicators against distances given by Cepheids.
Finally, concentrating studies, involving observations over a wide range of wavelength bands, as well as simulations, on a single galaxy gives the possibility to relate different phenomena and reach a global scenario consistent with observations. Ideally, this should be undertaken in a systematic way for a number of selected galaxies.
My thanks are due to Rainer Beck and V. Shoutenkov for permission to publish the results shown in Fig. 7, to Massimo Tarenghi and Alan Moorwood for providing Figs. 5 and 29 respectively, to S. Jörsäter, M. Näslund and J.J. Hester for permission to publish Fig. 4, and to Claus Madsen and Hans Hermann Heyer for producing Figs. 1 and 2 from ESO photographic plates. Further, I am grateful to Claes Fransson, Maja Hjelm, Helmuth Kristen, Per Lindblad, Aage Sandqvist and Lodewijk Woltjer for critically reading the manuscript and for constructive discussions. | <urn:uuid:d556730d-89a4-4054-96a8-62463960ec0a> | 2.734375 | 1,235 | Academic Writing | Science & Tech. | 39.976805 |
Cosmic Microwave Background
Name: Jack H.
How is it that we can "see" the cosmic microwave background radiation? It seems to me that, no matter which way its photons were heading when they started on their way in the year 380,000, they must surely have left the matter of the universe behind by now. They travel at the speed of light and we do not even come close (do we?). So why are they still around here?
Understanding the "Big Bang" is not at all in line with our intuition. It is not as if we are sitting "over here" on Earth while "it" happened "over there" about 13 billion years ago. Apart from the predicted (and observed) fine structure, it occurred in all directions in space simultaneously -- it is the very creation of space itself. So our intuition fails us, and must be disregarded. In particular, because the last scattering of those photons happened everywhere in space, there are always more distant regions whose photons are only now arriving at Earth; the background has not "passed us by". The experimental observation is that no matter what direction we look into space, other galaxies are receding from us. From the redshift in the wavelength(s) of their light we are able to tell how fast these sources are receding, and, combined with distance measurements, how far away they are. The observation is that not only are they moving away from us (and from one another); more recent observations suggest that the most distant (i.e. oldest) sources may even be receding at an increasing rate.
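At low redshift, the bookkeeping between recession speed and distance mentioned above is just Hubble's law, v ≈ H₀·d. The round H₀ value and the small-redshift approximation v ≈ cz are assumptions of this sketch:

```python
H0 = 70.0       # Hubble constant, km/s per megaparsec (assumed round value)
C_KM_S = 3.0e5  # speed of light, km/s

def recession_speed(distance_mpc):
    # Hubble's law: recession speed grows linearly with distance.
    return H0 * distance_mpc

def distance_from_redshift(z):
    # Low-redshift approximation: v ~ c z, so d ~ c z / H0.
    return C_KM_S * z / H0

print(recession_speed(100.0))        # 7000.0 km/s
print(distance_from_redshift(0.01))  # ~42.9 Mpc
```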
No one (even the experts, not just us poor mortals) has a good explanation for why this is, and so we attach tags to the observations -- "dark matter", "dark energy", etc. -- but that is not an explanation, it is a name tag.
The "Universe", the "All", is weird indeed.
Update: June 2012 | <urn:uuid:e2d71429-9457-4b0d-b6e2-5dd343b8f186> | 2.828125 | 384 | Nonfiction Writing | Science & Tech. | 60.564909 |
Sand could shed light on quark-gluon plasma
Nov 13, 2007
What do sand and quarks have in common? The answer, according to physicists in the US, is that they both behave like liquids under certain circumstances. When Sidney Nagel and colleagues at the University of Chicago fired jets of sand-like granular materials at solid targets, some of the resulting spray patterns were very similar to what has been seen when heavy nuclei collide to produce a “quark-gluon plasma”. The discovery could shed light on why such a plasma appears to behave like a liquid rather than a gas – something that has puzzled physicists since the behaviour was first seen in 2005.
In their experiments, the team fired jets of tiny glass or copper beads at a solid cylindrical target and captured the resultant spray patterns using high-speed photography (Phys. Rev. Lett. 99 188001). They found that after hitting the target, the jet behaves like a liquid by spreading out in directions perpendicular to the incoming jet. The team also discovered that when they used targets with diameters smaller than the diameter of the jet, the beads flowed around the target, creating a bell shape on the other side. It turns out that this is exactly what happens when a jet of water hits a similar target, and such “waterbells” were first seen in the 19th century.
These sheets and bells of beads might seem odd if you think of the jet as a collection of independent particles, which would each strike the target and just bounce back. However, if the density of the jet is high enough, incoming particles can collide with rebounding particles in the region of the target, and these collisions would cause the jet to behave like a liquid. According to Nagel, the incoming stream of particles creates a pressure on this “liquid”, causing it to squirt out in directions perpendicular to the jet.
The team measured the angle between the initial direction of the jet and the final trajectories of the particles for a number of different targets with cross-sectional diameters both larger and smaller than the jet itself. They found that the relationship between the angle and target diameter was identical for jets made of glass or copper beads – and for water jets with high velocities. As a result, Nagel and colleagues concluded that the sand jets behave as a liquid.
Furthermore, Nagel and colleagues believe that this liquid-like behaviour of colliding particles has been seen before -- at the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory in the US. Two years ago, researchers at the RHIC smashed together pairs of gold nuclei to create multi-particle “quark-gluon plasma”. Such a plasma is believed to be present in the early Universe – just before it has cooled enough for quarks and gluons to combine and form protons and neutrons.
The RHIC researchers were surprised to discover that the plasma behaved more like a liquid than a gas. This was evident when the nuclei underwent glancing collisions, which created rugby-ball- or almond-shaped plasmas that expanded more rapidly along certain directions. This came as a surprise because quantum chromodynamics (QCD) – the theory describing quark-gluon interactions – predicts that the plasma should behave like a weakly-interacting gas rather than a strongly-interacting fluid.
Nagel and colleagues tried to simulate the RHIC collisions by firing sand jets with a rectangular -- rather than circular -- cross section at a cylindrical target. This resulted in a liquid that also expanded in preferred directions. They measured this anisotropy using the same dimensionless parameter used to characterize the RHIC plasmas and found them to be very similar.
According to Nagel, the similarities between the two experiments mean that it could be possible to describe the liquid properties of the quark-gluon plasma in terms of the very rapid collisions of a dense collection of hard spheres – just like the particles in the jets of sand. “All physics is related”, he told physicsworld.com. “Studying one branch elucidates effects in a seemingly disconnected area of physics”.
About the author
Hamish Johnston is editor of physicsworld.com
We are now at a turning point for fossil fuels, and decisions will need to be made about the best policies from here forward. The environmental goods, commodities, and services of economic systems can be evaluated in terms of real wealth, and these evaluations can be used to make decisions towards best policies in resource, economic and environmental management.
“Although there is energy in everything including information, we recognize that energies of different kinds are not equal, but can be compared by expressing everything in units of one kind of energy required [Emergy]. In this way, human services are found to require thousands of times more energy of ordinary kinds than do agricultural processes. The Emergy production and use per unit of time is called Empower. . . . The Emergy of one kind of energy required to generate a product or service of another kind of energy is the Transformity. The more energy transformation steps there are, the higher the Transformity. In an agrarian landscape, the resources of agriculture and nature are converged to support small cities. . . . [In fossil fuel based cities] the city processes reach out to surrounding zones in their interactions” (Odum, Brown et al., 1995, pp. 5-6).
Maximum Empower Principle: At all scales, systems prevail through system organization that first develops the most useful work with inflowing emergy by reinforcing productive processes and overcoming limitations, and second increases the efficiency of useful work (Odum, 2001, p. 3).
While power is derived from energy flow, empower is derived from the flow of emergy. The spectrum of these relationships can be viewed by plotting energy and emergy use within systems as a function of transformity (Tilley and Brown, 2001). Some of the issues that we might address with Emergy Synthesis include:
- Evaluation of environmental contributions of natural capital such as fish stocks and wildlife
- Evaluation of net emergy benefits and net empower yield of renewable energy technologies
- Evaluation of urban and rural carrying capacity
- Evaluation of development alternatives
- Diminishing returns of energy subsidies to agriculture and other high-tech economic endeavors
- Relative empower signatures (Odum, 2001, 2007)
For example, Odum calculates a very rough global carrying capacity.
Divide the global emergy budget by the emergy/person of the standard of living you wish to sustain. The global annual emergy budget per person ranges from about 1.0 E15 sej/person/yr in India to 40+ E15 sej/person/yr in developed countries. The Ambio article shows that the carrying capacity for people now is 3.2 times higher than it would be without fossil fuels and virgin mineral mining (Odum, 2000).
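Odum's back-of-envelope division can be written out directly. Note that the global annual emergy budget used below (~15.83 E24 sej/yr, a figure commonly attributed to Odum) is an assumption not stated in the passage; only the per-person values come from the text:

```python
# Sketch of Odum's carrying-capacity division. GLOBAL_EMERGY_BUDGET is an
# assumed figure (not given in the text above); the per-person emergy-use
# values are the ones quoted in the passage.
GLOBAL_EMERGY_BUDGET = 15.83e24  # sej/yr (assumption)

def carrying_capacity(emergy_per_person_yr):
    """People supportable at a given standard of living (sej/person/yr)."""
    return GLOBAL_EMERGY_BUDGET / emergy_per_person_yr

low_consumption = carrying_capacity(1.0e15)   # India-like standard of living
high_consumption = carrying_capacity(40e15)   # developed-country standard
print(f"{low_consumption:.2e} vs {high_consumption:.2e} people")
```

Raising the assumed standard of living by a factor of 40 shrinks the supportable population by the same factor.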
Net Empower Solar Photovoltaic Yield
Paoli, C., Vassallo, P., & Fabiano, M. (2008). Solar power: An approach to transformity evaluation. Ecological Engineering, 34, 191–206.
Paoli et al. (2008) evaluated a small solar photovoltaic (PV) plant in Italy that used monocrystalline silicon (BP Solar BP585F) PV panels covering 136 m2 (~1500 ft2), facing south at a 30-degree inclination, with a maximum power output of 18,300 watts at 8% efficiency. This would be equivalent to a few home rooftops. The energy return on energy invested (i.e. Emergy Yield Ratio) was 1.03, meaning that 3% more energy was yielded than was diverted from the economy for investment. Thus, zero return is surely within the margin of error, which suggests that PV is not a primary source of energy. In addition, they estimated that 28% of the total energy required to produce electricity with PV came from electricity. Another 23% was needed to make the inverters, and another 20% went to maintenance.
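As a sanity check on those percentages, here is a minimal sketch. The absolute emergy amounts are arbitrary illustration units; only the ratios (the EYR of 1.03 and the 28%/23%/20% input shares) come from the study as quoted above:

```python
# Arbitrary units: only the ratios below come from the quoted figures.
invested = 100.0            # emergy fed back from the economy
EYR = 1.03                  # Emergy Yield Ratio = yield / investment
yielded = EYR * invested

shares = {"electricity": 0.28, "inverters": 0.23, "maintenance": 0.20}
breakdown = {k: v * invested for k, v in shares.items()}

net = yielded - invested                     # ~3 units: the 3% net yield
other = invested - sum(breakdown.values())   # remaining ~29% of inputs
```

With a net yield this small, the conclusion in the text -- that the return is within the margin of error of zero -- follows directly.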
HT Odum argued from an emergy-based viewpoint that making the leap from the visible radiation of solar power, with its solar transformity of one sej/J, to electricity, with a mean solar transformity of 250,000 sej/J, meant that PV was making too great a leap across the energy hierarchy. From a practical perspective, that leap means that PV has to be a very inefficient means of making electricity.
On a side note, I once estimated that the weather system uses about 1,000,000 solar joules to make 1 joule of lightning. Nature is typically not efficient. The planet's weather system works at about 9% efficiency in transferring solar power into weather (winds, sea currents, etc.). Photosynthesis is ~1% efficient. However, nature is great at doing a lot of things at one time; for example, biodiversity is huge even though it is powered by 1% efficient machinery. That is another thing that the industrial/fossil age has biased us about. High efficiency is an aberration. (Comment by Dave Tilley)
NASA eClips: Methane: An Indicator for Life on Mars?
In this pair of eClips videos, students will learn how scientists are using spectroscopy to identify methane plumes on Mars. They will also explore some of the biological and geological processes that form methane on Earth and the implications for astrobiologists who are looking for life beyond Earth.
The first definitive detection of methane in the atmosphere of Mars indicates the planet is still alive, in either a biologic or geologic sense, according to a team of NASA and university scientists.
If microscopic Martian life is producing the methane, it likely resides far below the surface, where it is still warm enough for liquid water to exist. Liquid water, as well as energy sources and a supply of carbon, is necessary for all known forms of life. However, it is possible a geologic process produced the Martian methane, either now or eons ago. On Earth, the conversion of iron oxide (rust) into the serpentine group of minerals creates methane, and on Mars this process could proceed using water, carbon dioxide, and the planet's internal heat. Although we don't have evidence of active volcanoes on Mars today, ancient methane trapped in ice "cages" called clathrates might now be released.
See how NASA scientists are investigating the recent discovery of water ice and methane plumes on Mars to test their hypotheses about the similarities between Earth and Mars.
Related Mathematics Problems
These problems provide a mathematical introduction to some of the issues related to life and planetary surface conditions.
Problem 393: Taking a stroll around a martian crater! Students use a recent photograph of a crater on Mars to estimate its circumference and the time it will take NASA's Opportunity Rover to travel once around its edge. [Grade: 6-8 | Topics: scale model; distance = speed × time; metric measure] (PDF)
Problem 392: Exploring the DNA of an organism based upon arsenic. Students estimate the increase in the mass of the DNA from an arsenic-loving bacterium in which phosphorus atoms have been replaced with arsenic. [Grade: 8-10 | Topics: Integer math; percentages] (PDF)
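The arithmetic behind Problem 393 is just a circumference plus distance = speed × time. The diameter and rover speed below are hypothetical stand-ins (the real problem reads the diameter off a photograph, and Opportunity's actual drive speed varied):

```python
import math

# Hypothetical inputs -- the real problem takes these from a photograph.
diameter_m = 90.0        # assumed crater diameter
speed_m_per_s = 0.01     # assumed average rover speed: 1 cm/s

circumference_m = math.pi * diameter_m              # C = pi * d
travel_time_s = circumference_m / speed_m_per_s     # time = distance / speed
travel_time_hr = travel_time_s / 3600.0
print(f"{circumference_m:.0f} m, about {travel_time_hr:.1f} hours of driving")
```

With these assumed numbers, a lap of roughly 283 m takes the rover on the order of eight hours.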
int get_line(): This private member function reads in a line of text from the standard input and stores it in the buffer of the Parser object. The line_num data member is incremented accordingly. If the parser already had a line cached as the result of a prior call to unget_line(), then no line is read in from standard input -- the line already in the buffer will be regarded as the current line of input.
void unget_line(): During the processing of the input stream, it is sometimes necessary for the parser to put a line back in the input stream when it has read too far ahead. The unget_line() member function does this by leaving the line in the buffer unchanged and setting the cached data member to TRUE. The function can "unget" a line only if at least one line has already been read and the cached data member is FALSE.
void error(char *err): This member function simply displays the supplied error message, the current line number and current text line to the diagnostic log stream, cerr. It is used to report errors encountered during the parsing of the input stream.
char *get_word(const char *delim = " "): This function implements an elementary tokenizer for the Parser class. Upon completion, this function will return the next token in the buffer that is delimited by one of the characters in the delim parameter. It makes use of a temporary buffer which is dynamically allocated by this method, if necessary. The library function strtok() is used to extract the tokens from the current line.
const int line_size: This variable represents the size of the buffer to be used by the parser to store each line of input. It is initialized by the Parser constructor. Its value should be larger than the length of the longest text line in the input stream.
int line_num: This data member keeps track of the number of lines currently read. When an error is encountered in the input stream, the line number stored in this variable is displayed to aid in the debugging process.
char *buffer: This data member stores the contents of the current line read from standard input. It is dynamically allocated by the Parser constructor. The get_line() member function actually stores the contents of the current line in the buffer.
char *tmp_buf: During execution of the tokenization function, get_word(), a copy of the line pointed to by the buffer data member is made and stored in tmp_buf. Doing this provides the tokenizer function with a copy of the buffer to manipulate without altering the original contents pointed to by buffer.
int cached: When a line is to be placed back into the input stream by the unget_line() member function, the cached data member is set to TRUE. This will inhibit a subsequent get_line() invocation from trying to read another line from standard input.
int read_one: Upon reading a line of text from the input stream, this data member is set to TRUE, indicating that at least one line of standard input has been successfully read. This boolean value is consulted by the unget_line() and get_word() member functions. | <urn:uuid:4123e6aa-fac5-4339-b9ae-86fe95b72fc8> | 3.375 | 652 | Documentation | Software Dev. | 46.915544 |
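The members above describe a C++ class, and the interplay of cached, read_one, and the buffer is easier to see in running code. The sketch below is a rough Python analogue of the documented interface (same member names; a simple split stands in for strtok()), not the original implementation:

```python
import io
import sys

class Parser:
    """Rough Python analogue of the documented C++ Parser (a sketch)."""

    def __init__(self, stream=sys.stdin):
        self.stream = stream
        self.buffer = ""        # current line of input
        self.line_num = 0       # number of lines read so far
        self.cached = False     # True after unget_line()
        self.read_one = False   # True once at least one line was read
        self._tokens = None     # per-line tokenizer state (stands in for tmp_buf)

    def get_line(self):
        # If a line was "ungot", reuse the buffered line instead of reading.
        if self.cached:
            self.cached = False
            return True
        line = self.stream.readline()
        if not line:
            return False        # end of input
        self.buffer = line.rstrip("\n")
        self.line_num += 1
        self.read_one = True
        self._tokens = None
        return True

    def unget_line(self):
        # Put the current line back: only legal after at least one real read.
        if self.read_one and not self.cached:
            self.cached = True

    def error(self, err):
        # Report an error with the current line number and text.
        print(f"{err}: line {self.line_num}: {self.buffer}", file=sys.stderr)

    def get_word(self, delim=" "):
        # Elementary tokenizer: works on a copy of the buffer, like the
        # C++ version's tmp_buf + strtok(). Returns None when exhausted.
        if self._tokens is None:
            tmp = self.buffer
            for d in delim[1:]:
                tmp = tmp.replace(d, delim[0])
            self._tokens = [t for t in tmp.split(delim[0]) if t]
        return self._tokens.pop(0) if self._tokens else None
```

For example, feeding it io.StringIO("alpha beta\ngamma\n") yields the tokens "alpha" and "beta" from the first line, and a subsequent unget_line() makes the next get_line() reuse the buffered line instead of reading a new one.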
Growing Atlantic dead zone shrinks habitat for billfish and tuna, may lead to over-harvest
A dead zone off the coast of West Africa is reducing the amount of available habitat for Atlantic tuna and billfish species, reports the National Oceanic and Atmospheric Administration in a study published in Fisheries Oceanography. The zone is growing due to rising water temperatures and is expected to cause over-harvest of tuna and billfish as the fish seek higher levels of oxygen in areas with greater fisheries activity.
Dead zones are areas of the ocean too low in oxygen to support many marine species. There are about 400 of these "hypoxic" regions throughout the world, many caused by human activities. Perhaps the most notorious is the New Jersey-size dead zone in the Gulf of Mexico, caused by fertilizer runoff carried down the Mississippi, which encourages oxygen-depleting algae to proliferate out of control. Another dead zone was discovered in 2007; it occurs off the coast of Texas, where the Brazos River empties into the Gulf.
Dead zones can also be caused by climate change. Increases in ocean temperature can change the course of currents, isolating certain areas from influxes of deeper, colder water. As the water in the area sits, it warms and releases its oxygen, making it inhospitable to many aquatic species. Three major dead zones are known to have been caused by climate change: one off the coast of Chile and Peru, one off the east coast of Africa, and another off of Africa's west coast. A new dead zone was reported off the US west coast in 2002. It occurs seasonally and is believed to be part of a continuum of South America's dead zone.
NOAA scientists teamed up with researchers from the University of Miami and The Billfish Foundation to study the West African dead zone and its effect on fish species.
Article continues: http://news.mongabay.com/2010/1229-morgan_dead_zone.html
Origin Of Universe

Origin of the Universe - What's the Latest Theory?

When it comes to the origin of the universe, the "Big Bang Theory" and its related Inflation Universe Theories (IUTs) are today's dominant scientific conjectures. According to these interrelated notions, the universe was created between 13 and 20 billion years ago from the random, cosmic explosion (or expansion) of a subatomic ball that hurled space, time, matter and energy in all directions. Everything -- the whole universe -- came from an initial speck of infinite density (also known as a "singularity"). This speck (existing outside of space and time) appeared from nowhere, for no reason, only to explode (start expanding) all of a sudden. Over a period of approximately 10 billion years, this newly created space, time, matter and energy evolved into remarkably designed and fully functional stars, galaxies and planets, including our earth.
PLAYING sound waves backwards could help the US Navy set up secure communication links with submarines or unmanned submersibles. The system works by broadcasting messages in such a way that they can only be received at one point in the water-so no one else can intercept them.
Underwater communications are very primitive, says Geoff Edelmann at the Scripps Institution of Oceanography in San Diego. Most use simple underwater transducers called hydrophones to send and receive messages. But because sound waves travel in all directions in the ocean, reflecting off the seabed and the surface and also interfering with each other, acoustic signals are often too garbled to understand.
"They tend to be very unstable, very short-range and only good in deep water," says Edelmann. To communicate for any length of time, or over large distances, submarines either surface or use a floating antenna. But that's not always possible in covert ...
What would your favorite science-fiction movie be without the costumes? Most likely it would not be your favorite movie.
Fashion and costume choices set the stage for some of cinema’s most memorable moments. But what are movie sets made out of? Where was the cotton used to make the leading lady’s pants harvested? These questions can be explained by delving into the science of biodiversity.
Chase Mendenhall is a doctoral candidate in Ecology and Evolutionary Biology at Stanford University. He explores the trade-offs between the conservation of biodiversity and farming by closely monitoring bird and bat populations that inhabit farmland in Costa Rica.
He is also one of two speakers who will be attending the Science Café at 6:00 p.m. on Wednesday, October 17, 2012, at the Koshland Science Museum in Washington, DC.
The Exchange: We are all very excited about the upcoming Science Café on Wednesday night; could you give us a little preview of what you will be speaking about?
Chase Mendenhall: It’s a pretty broad topic that we’ll be tackling here, and it’s also one that we still don’t know a lot about. My main focus of research is to gain a better understanding of what biodiversity really is, what ecosystems are, and also a general framework for how we can evaluate them moving forward. That’s sort of my main goal: to get people to understand what biodiversity truly is.
In a few words, how would you describe the concept of biodiversity?
The term biodiversity was coined by E.O. Wilson and it gives a broad definition of things that we should value in nature. What we have come to a consensus on is that biodiversity is nature. One of the main ways I explain biodiversity is by including humans in the definition and also to move beyond the species concept. So rather than looking at biodiversity specifically as species, we look at it as the interactions of different ecosystems. Biodiversity is a really big, heavy term to express all of the things that we don’t understand about nature.
How does biodiversity affect our daily lives?
If you actually look around at all of the materials you use, all of the things you use on a daily basis, the things that bring you happiness, in one way or another they come from a natural ecosystem. Something to keep in mind is that all of the products around us in one way or another come from the biosphere. It’s hugely important: from what we eat, to the materials we use to build our homes, the fibers in our clothing, and the synthetic materials we derive them from, you can usually trace their origins back to somewhere in the biosphere.
Fashion, and the fibers used to make clothing, play a huge role in the movies and Hollywood, does biodiversity play a role in that industry?
The fashion industry drives our interest in -- and, conversely, conservation goals for -- charismatic birds for their feathers and mammals for their pelts. The cotton industry and many other fiber crops depend on wildlife for their success. For example, the cotton industry has recently been made aware of the vital role bats play in keeping cotton’s major pest at bay: cotton bollworm moths. In fact, recent studies have shown that Mexican free-tailed bats save U.S. farmers more than a billion U.S. dollars annually by acting as “nature’s pesticide”, keeping the fashion industry supplied with natural cotton.
[Learn more about the intersection of fashion and biodiversity, here]
The textile industry seems to be very reliant on functioning ecosystems, what can the fashion and textile industries do to make sure these delicates systems are protected?
The fashion industry is a great model for environmentalists to use because of its ability to communicate and change behavior using positive and aesthetic incentives, rather than inciting apocalyptic incentives to effect change. I think the fashion industry and many of the arts draw huge amounts of inspiration from nature as well as provide raw materials, dyes, and cultural importance for many aesthetics popular in the global fashion industry.
[See what the United Nations is doing to promote fashion and biodiversity, here]
Anything else you would like to add?
I think one of the most important things that I want to bring home tomorrow night is the understanding that an ecosystem and biodiversity isn’t necessarily just Yosemite National Park or the Great Barrier Reef, one of these iconic natural landscapes. The vast majority of ecosystems are the ones that contain humans, cityscapes and seascapes; places where humans are an integral part of nature. I want people to take away the idea that biodiversity and ecosystems are not some mythical and far-removed artifact of a past life, but they are something we live with on a day-to-day basis.
For more information, or to purchase tickets for the Science Café, please visit the Koshland Science Museum website.
The Science Guys
Science Guys > February 2001
Why is water clear but when it freezes it is often white, as in snow?
The transparency of water is amazing: the water of an unspoiled reef or Oregon’s Crater Lake is renowned for its clarity, for one can peer about 100 feet through it. Yet everyone knows that even a minor snowfall blankets the ground in white. The inch of snow we received in January was sufficient to blanket the area with the white stuff, turning our area briefly into a winter wonderland. More to the point, why is water or a chunk of ice clear, but snow isn’t?
Pure water is colorless, whether in a vapor, liquid, or solid phase. That is, the molecules of water cannot absorb visible light and so cannot yield colors like dye or pigment molecules. But this doesn’t mean that water does nothing to the light that strikes it. You can model water like a pane of glass, which both transmits and reflects light. A good mirror reflects almost all light, and allows none to be transmitted through it. Conversely, high-quality windows allow almost all light to transmit through them with little reflection or glare. But if you look closely, you will notice a faint reflection of yourself on most windowpanes.
Water and glass not only reflect but also refract light. This means that as a light beam enters water or glass, the light bends. You know this from the spoon-in-a-glass trick: if you put a spoon in a glass of water, you notice that the handle of the spoon makes an abrupt "break" at the water/air interface. The same happens as light enters a piece of ice: it will bend. Now, if you just have one solid sheet of ice, the bending isn’t much and neither is the reflection off the ice’s surface; most of the light penetrates the ice, and the ice appears clear.
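The amount of bending can be quantified with Snell's law (n1 sin θ1 = n2 sin θ2), which the article leaves implicit; the refractive indices below are standard textbook values (air ≈ 1.00, ice ≈ 1.31, water ≈ 1.33), not figures from the article:

```python
import math

def refraction_angle(incidence_deg, n1=1.00, n2=1.31):
    """Angle of the refracted ray in medium n2, via n1*sin(t1) = n2*sin(t2)."""
    s = n1 * math.sin(math.radians(incidence_deg)) / n2
    return math.degrees(math.asin(s))

# A ray hitting ice at 45 degrees bends toward the normal, to roughly 33 deg.
in_ice = refraction_angle(45.0)
in_water = refraction_angle(45.0, n2=1.33)
```

The denser the medium, the more the ray bends toward the normal, which is why the spoon in a glass of water appears to "break" at the surface.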
Now, what if you aren’t looking through an inch of solid, smooth ice but a multitude of dainty snowflakes? Snowflakes form directly from the vapor phase of water into the lovely crystals snowflakes are so famous for. These ice crystals form with hexagonal (six-sided) symmetry, so each crystal has many sides or crystal faces. When incident light hits a snowflake, some of the light is reflected off the crystal back towards the observer. However, most of the light penetrates the crystal and is bent, or refracted. Now, a snowflake has such a complex, intricate structure that this light hits internal crystal faces and bounces around inside the crystal. The combination of reflection and refraction is so efficient for an ice crystal that, ultimately, most of the light actually bounces back towards the light source and the observer. Since all the colors in sunlight add to give white, what we see when we look at snow is white: the sunlight that has reflected off and refracted through the water (ice) crystals to come back at us.
In fact, because snow reflects light so well, you can get what is called "snow blindness" if you go outside on a bright, sunny day after it has snowed. The light is so bright, that when you go inside it appears very dark, and it takes a minute or two for your pupils to dilate enough to see well.
The myriad water droplets and ice crystals that form clouds are great at bending light, and thus clouds, like snow, reflect most of the light and appear "white." Satellite images show that clouds appear bright white when viewed from above. From below, thick clouds appear dark, because the cloud tops reflect most of the light back into space.
Epiactis prolifera Verrill, 1869
Common name(s): Brooding anemone, proliferating anemone, small green anemone
Subclass Zoantharia (Hexacorallia)
Epiactis prolifera in an aquarium (Photo by: Dave Cowles, 2004)
How to Distinguish from Similar Species: Epiactis ritteri has broad radiating white lines on the oral disk which do not reach the mouth, breeds young internally, and becomes extremely flat when contracted. E. lisbethae can be up to 8 cm diameter and the radiating dark lines on the edges of the pedal disk extend all the way up to the top of the column. Small individuals which are closed can look similar to E. fernaldi (photo), but look for tiny young of all the same size being brooded on the column (these are of several different sizes when found on E. prolifera and only seasonally).
Geographical Range: Southern Alaska to southern California
Depth Range: Mid intertidal to subtidal
Habitat: On and under rocks and on algae and eelgrass, outer rocky coasts and in bays.
Biology/Natural History: The tentacles of this species end with a terminal pore. Many individuals have tiny juvenile anemones attached near the base. The species' sexual pattern is gynodioecious (small adults are female; larger adults are simultaneous hermaphrodites), and individuals cross-fertilize, though some self-fertilization also occurs. Eggs are fertilized inside the female's gastrovascular cavity, then expelled. Cilia on the mother's surface move the eggs (or larvae?) down to small pits on the edges of the pedal disk, where they attach via mucus and specialized large nematocysts in the mother's tissue. The young live on the mother's column (digesting yolk, then catching prey) until at least 3 months old and 4 mm in diameter, then crawl off. They probably feed on small crustaceans. Predators include the nudibranch Aeolidia papillosa and the leather star Dermasterias imbricata. Mosshead sculpins may also eat them. The animals move freely about, often pack the bottoms of tidepools, and may be covered with camouflaging debris. Flora and Fairbanks stated they tasted this species' foot fried in butter and do not recommend it even for the desperate.
Gotshall and Laurent, 1979
Morris et al., 1980
O'Clair and O'Clair, 1998
Edmands, S. and D. C. Potts, 1997. Population genetic structure in brooding sea anemones (Epiactis spp) with contrasting reproductive modes. Marine Biology 127: 485-498.
This individual, photographed on San Juan Island, has the blue foot sometimes seen on this species. Diameter is about 3 cm.
Photo July 2006 by Dave Cowles
Small E. prolifera such as these can look similar to E. fernaldi when closed (the radiating lines on the column wall can be faint).
However, notice the tiny, light-colored young being brooded on the column wall, which would not be seen in E. fernaldi.
Identification by Lisbeth Francis, photo by Dave Cowles on San Juan Island, July 2006
This green individual, 3 cm pedal disk diameter, is brooding young of several different sizes. They appear to me to be in more than one row, although E. prolifera is supposed to brood them all in one row. Photos by Dave Cowles, July 2007
The underside of the pedal disk of the same individual does not show any obvious red or orange stripes, as it would if it were Epiactis lisbethae.
This night shot of an Epiactis prolifera under a boulder shows the single row of brooded young.
Photo by Dave Cowles, late October 2007
Carbon dioxide (CO2) is the best-known ‘greenhouse gas’, a byproduct of burning fossil fuels such as gasoline, methane and propane. The greater the number of these heat-trapping molecules in the atmosphere, the more the Earth’s temperatures will continue to rise.
Trees naturally eat carbon dioxide, but on a per tree basis, they don’t consume much, about one ton in their lifetime. In order to keep the number of heat absorbing molecules down, we need more CO2 eaters here on the Earth. French biochemist Pierre Calleja has looked to algae for an innovation to curb this molecular issue.
Calleja has been studying single-celled algae, known as microalgae, for over two decades. Algae has been on Earth longer than most species, with fossil records dating back 3 billion years! Algae produces more oxygen than all of the other plants in the world put together. We’re only now realizing the full array of benefits that algae presents to our way of life. Algae produces energy through photosynthesis by combining sunlight, H2O, and CO2. Microalgae can be grown and cultivated in extremely adverse conditions, unlike almost any other crop on Earth. These microalgae can produce up to 100x more biofuel than comparable energy crops, making them the target of much research and speculation as a future power fuel.
In this case, Calleja has taken advantage of microalgae’s amazing properties to build a ‘microalgae lamp’ that not only produces light, but eats CO2. The algae rests in water and produces harvestable energy through photosynthesis by absorbing CO2 and sunlight. This energy is funneled into a battery and stored, and is used to power the light at night. The light energy is conveniently released from the battery only when needed. The microalgae lamp produces a friendly byproduct, oxygen, while just one of these algae lights is able to suck up 1 ton of CO2 per year. To put that in perspective, one microalgae lamp absorbs as much CO2 in one year as a tree does in its lifetime!
These new street lamps in their current form have a stylish and futuristic vibe, but undoubtedly could be altered with different designs and colors. Customization would certainly boost the speed of acceptance of this green trend. The main questions to ask about this invention are how much will each of these cost, and what type of maintenance will they require. Will the water need to be replenished, and how frequently? What is the realistic scope of implementation; do we currently have the resources to install these algae lights worldwide?
The overarching question here is what level of algae implementation we will soon experience in our lives. To what extent will algae fuel our vehicles and our homes? How long until we can grow our own fuel at home? Algae is also high in protein and essential vitamins; how extensive will the use of algae be in our diets? What else can algae do for us? Does algae fuel have a ceiling?
The information on EOL is organized using hierarchical classifications of taxa (groups of organisms) from a number of different classification providers. You can explore these hierarchies in the Names tab of EOL taxon pages. Many visitors would expect to see a single classification of life on EOL. However, we are still far from having a classification scheme that is universally accepted.
Biologists all over the world are studying the genetic relationships between organisms in order to determine each species' place in the hierarchy of life. While this research is underway, there will be differences in opinion on how to best classify each group. Therefore, we present our visitors with a number of alternatives. Each of these hierarchies is supported by a community of scientists, and all of them feature relationships that are controversial or unresolved.
We invite the scientific community to submit additional hierarchies to improve EOL's coverage of current systematic hypotheses. Please contact the Species Pages Working Group for additional information.
Current EOL classification providers include:
AntWeb is generally recognized as the most advanced biodiversity information system at species level dedicated to ants. Altogether, its acceptance by the ant research community, the number of participating remote curators that maintain the site, number of pictures, simplicity of web interface, and completeness of species, make AntWeb the premier reference for dissemination of data, information, and knowledge on ants. AntWeb is serving information on tens of thousands of ant species through the EOL.
Avibase is an extensive database information system about all birds of the world, containing over 6 million records about 10,000 species and 22,000 subspecies of birds, including distribution information, taxonomy, synonyms in several languages, and more. This site is managed by Denis Lepage and hosted by Bird Studies Canada, the Canadian copartner of BirdLife International. Avibase has been a work in progress since 1992, and it is offered as a free service to the bird-watching and scientific community. In addition to links, Avibase helped us install Gill, F & D Donsker (Eds). 2012. IOC World Bird Names (v 3.1). Available at http://www.worldbirdnames.org as of 2 May 2012. More bird classifications are likely to follow.
The Catalogue of Life Partnership (CoLP) is an informal partnership dedicated to creating an index of the world’s organisms, called the Catalogue of Life (CoL). The CoL provides different forms of access to an integrated, quality, maintained, comprehensive consensus species checklist and taxonomic hierarchy, presently covering more than one million species, and intended to cover all known species in the near future. The Annual Checklist EOL uses contains substantial contributions of taxonomic expertise from more than fifty organizations around the world, integrated into a single work by the ongoing work of the CoLP partners.
FishBase is a global information system with all you ever wanted to know about fishes. FishBase is a relational database with information to cater to different professionals such as research scientists, fisheries managers, zoologists and many more. The FishBase Website contains data on practically every fish species known to science. The project was developed at the WorldFish Center in collaboration with the Food and Agriculture Organization of the United Nations and many other partners, and with support from the European Commission. FishBase is serving information on more than 30,000 fish species through EOL.
The Index Fungorum, the global fungal nomenclator coordinated and supported by the Index Fungorum Partnership (CABI, CBS, Landcare Research-NZ), contains names of fungi (including yeasts, lichens, chromistan fungal analogues, protozoan fungal analogues and fossil forms) at all ranks.
The Integrated Taxonomic Information System (ITIS) is a partnership of federal agencies and other organizations from the United States, Canada, and Mexico, with data stewards and experts from around the world (see http://www.itis.gov). The ITIS database is an automated reference of scientific and common names of biota of interest to North America. It contains more than 600,000 scientific and common names in all kingdoms, and is accessible via the World Wide Web in English, French, Spanish, and Portuguese (http://itis.gbif.net). ITIS is part of the US National Biological Information Infrastructure (http://www.nbii.gov).
International Union for Conservation of Nature (IUCN) helps the world find pragmatic solutions to our most pressing environment and development challenges. IUCN supports scientific research; manages field projects all over the world; and brings governments, non-government organizations, United Nations agencies, companies and local communities together to develop and implement policy, laws and best practice. EOL partnered with the IUCN to indicate status of each species according to the Red List of Threatened Species.
Metalmark Moths of the World
Metalmark moths (Lepidoptera: Choreutidae) are a poorly known, mostly tropical family of microlepidopterans. The Metalmark Moths of the World LifeDesk provides species pages and an updated classification for the group.
As a U.S. national resource for molecular biology information, NCBI's mission is to develop new information technologies to aid in the understanding of fundamental molecular and genetic processes that control health and disease. The NCBI taxonomy database contains the names of all organisms that are represented in the genetic databases with at least one nucleotide or protein sequence.
The Paleobiology Database
The Paleobiology Database is a public resource for the global scientific community. It has been organized and operated by a multi-disciplinary, multi-institutional, international group of paleobiological researchers. Its purpose is to provide global, collection-based occurrence and taxonomic data for marine and terrestrial animals and plants of any geological age, as well as web-based software for statistical analysis of the data. The project's wider, long-term goal is to encourage collaborative efforts to answer large-scale paleobiological questions by developing a useful database infrastructure and bringing together large data sets.
The Reptile Database
This database provides information on the classification of all living reptiles by listing all species and their pertinent higher taxa. The database therefore covers all living snakes, lizards, turtles, amphisbaenians, tuataras, and crocodiles. It is a source of taxonomic data, thus providing primarily (scientific) names, synonyms, distributions, and related data. The database is currently supported by the Systematics working group of the German Herpetological Society (DGHT).
The aim of a World Register of Marine Species (WoRMS) is to provide an authoritative and comprehensive list of names of marine organisms, including information on synonymy. While highest priority goes to valid names, other names in use are included so that this register can serve as a guide to interpret taxonomic literature.
SORT_UP(ARRAY, DIM)
Description. Sort by ascending value.
Class. Transformational function.
Optional Argument. DIM
ARRAY must be of type integer, real, or character. It must not be scalar.
DIM (optional) must be scalar and of type integer, with a value in the range 1 ≤ DIM ≤ n, where n is the rank of ARRAY. The corresponding actual argument must not be an optional dummy argument.
Result Type, Type Parameter, and Shape. The result has the same
shape, type, and type parameter as ARRAY.
Case (i): The result of SORT_UP(ARRAY), when ARRAY is one-dimensional, is a vector of the same shape as ARRAY, containing the same elements (with the same number of instances) but sorted in ascending element order. The collating sequence for an array of type CHARACTER is that used by the Fortran intrinsic functions, namely ASCII.
Case (ii): The result of SORT_UP(ARRAY) for a multi-dimensional ARRAY is the result that would be obtained by reshaping ARRAY to a rank-one array V using array element order, sorting that rank-one array in ascending order, as in Case (i), and finally restoring the result to the original shape. That is, it gives the same result as RESHAPE( SORT_UP(V), SHAPE = SHAPE(ARRAY) ), where V = RESHAPE( ARRAY, SHAPE = (/ M /) ) and M = SIZE(ARRAY).
Case (iii): The result of SORT_UP(ARRAY, DIM=k) contains the same elements as ARRAY, but each one-dimensional array section of the form ARRAY(i1, i2, ..., ik-1, :, ik+1, ..., in), where n is the rank of ARRAY, has been sorted in ascending element order, as in Case (i) above.
Case (i): SORT_UP( (/30, 20, 30, 40, -10/) ) has the value [-10 20 30 30 40].
Case (ii): If A is the array
| 1 9 2 |
| 4 5 2 |
| 1 2 4 |
then SORT_UP(A) has the value
| 1 2 4 |
| 1 2 5 |
| 2 4 9 |
Case (iii): If A is the array
| 1 9 2 |
| 4 5 2 |
| 1 2 4 |
then SORT_UP(A, DIM = 1) has the value
| 1 2 2 |
| 1 5 2 |
| 4 9 4 |
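The three cases can be sketched in plain Python (an illustrative model only, not the library routine; note that Fortran's "array element order" is column-major, which is why the Case (ii) result above comes out as it does):

```python
def sort_up(array, dim=None):
    """Sketch of SORT_UP for a rank-2 'array' given as a list of rows."""
    rows, cols = len(array), len(array[0])
    if dim is None:
        # Case (ii): flatten in Fortran (column-major) element order,
        # sort ascending, then restore the original shape.
        flat = sorted(array[r][c] for c in range(cols) for r in range(rows))
        out = [[0] * cols for _ in range(rows)]
        for idx, value in enumerate(flat):
            out[idx % rows][idx // rows] = value
        return out
    if dim == 1:
        # Case (iii) with DIM = 1: sort each column section independently.
        columns = [sorted(array[r][c] for r in range(rows)) for c in range(cols)]
        return [[columns[c][r] for c in range(cols)] for r in range(rows)]
    # DIM = 2: sort each row section independently.
    return [sorted(row) for row in array]
```

Running this on the 3x3 example above reproduces both results shown in Cases (ii) and (iii).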
It’s summer. It’s hot. And once again, we are hearing from the usual suspects that we must change our entire way of living. Repent, they say. Carbon dioxide emissions are killing Mother Earth. Give up hydrocarbons and embrace renewable energy.
Doing so, we’re assured, will result in a gentler climate and myriad other benefits, including scads of “green” jobs. Sounds easy, no?
Alas, no matter how much they may wish it to be so, the proponents of alternatives — and better yet, “clean” energy — cannot overcome the problem of scale. A simple bit of math shows that even with the rapid expansion that solar and wind-energy capacity have had in the past few years, those two sources cannot even meet incremental global demand for electricity, much less make a dent in the world’s insatiable thirst for coal, oil, and natural gas. Indeed, had any of the myriad advocates for renewable energy bothered to use a simple calculator, they would see that their favored sources simply cannot provide the vast scale of energy needed by the world’s 7 billion inhabitants, at a price that can be afforded.
Consider this: between 1985 and 2011, global electricity generation increased by about 450 terawatt-hours per year. That’s the equivalent of adding about one Brazil (which used 485 terawatt-hours of electricity in 2010) to the electricity sector every year. And the International Energy Agency expects global electricity use to continue growing by about one Brazil per year through 2035.
How much solar would be needed to produce 450 terawatt-hours per year? Well, Germany has more installed solar-energy capacity than any other country, with some 25,000 megawatts of installed photovoltaic panels. In 2011, those panels produced 18 terawatt-hours of electricity. Thus, just to keep pace with the growth in global electricity demand, the world would have to install about 25 times as much photovoltaic capacity as Germany’s total installed base, and it would have to do so every year.
Let me repeat that: just to meet the world’s increasing demand for electricity — while not displacing any existing electricity-production facilities — the world would have to install about 25 times as much photovoltaic capacity as what now exists in Germany. And it would have to achieve that daunting task every year.
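The arithmetic behind that claim is short; a sketch using the figures quoted above:

```python
# Figures quoted in the text.
annual_demand_growth_twh = 450   # yearly growth in global electricity use (TWh)
germany_pv_output_twh = 18       # 2011 output of Germany's ~25,000 MW of PV (TWh)

# How many "Germanys" of photovoltaic capacity per year of demand growth?
germanys_per_year = annual_demand_growth_twh / germany_pv_output_twh  # 25.0
```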
The scale problem is equally obvious when it comes to wind. In fact, wind-energy’s scale problems are even more thorny because wind energy requires so much land.
A student is holding a book between his hands. The forces that he exerts on the front and back covers of the book are perpendicular to the book and are horizontal. The book weighs 31 N. The coefficient of static friction between his hands and the book is 0.40. To keep the book from falling, what is the magnitude of the minimum pressing force that each hand must exert?
I have to draw a free body diagram to show the forces.
Arrow down for the book at W = 31 N. How do I show the hands, and does Fn point up?
I am totally lost on this, help please. Thanks.
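For what it's worth, one reading of the statics can be sketched numerically (an illustrative calculation, not a substitute for the free-body diagram): each hand presses with normal force F, each hand-book contact can supply at most μF of static friction upward, and at the minimum force the two friction forces together just balance the weight.

```python
weight = 31.0   # N, the book's weight, pointing down
mu_s = 0.40     # coefficient of static friction, hands on book

# Vertical equilibrium at the point of slipping: 2 * mu_s * F = W,
# so the minimum pressing force from each hand is
f_min = weight / (2 * mu_s)   # newtons
```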
The first part of an investigation into how to represent numbers
using geometric transformations that ultimately leads us to
discover numbers not on the number line.
Introduces the idea of a twizzle to represent number and asks how
one can use this representation to add and subtract geometrically.
How can you use twizzles to multiply and divide? | <urn:uuid:052f9086-4117-433c-8642-aa03fab2bc0e> | 3.46875 | 73 | Knowledge Article | Science & Tech. | 31.707895 |
Jan 22, 2013, 1:57 PM
Post #31 of 31
Perl documentation on Regex modifiers:
Re: [Stefanik] Match characters in middle and end string
- m :
Treat string as multiple lines. That is, change "^" and "$" from matching the start or end of the string to matching the start or end of any line anywhere within the string.
- s :
Treat string as single line. That is, change "." to match any character whatsoever, even a newline, which normally it would not match.
Used together, as /ms, they let the "." match any character whatsoever, while still allowing "^" and "$" to match, respectively, just after and just before newlines within the string.
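Python's re module exposes the same two modifiers (re.S plays the role of Perl's /s, re.M of /m), which makes the combined behavior easy to demonstrate:

```python
import re

text = "first line\nsecond line"

# No flags: "." stops at the newline, so this fails to match.
no_flags = re.search(r"first.*second", text)

# re.S (DOTALL) is Perl's /s: "." now matches the newline too.
dotall = re.search(r"first.*second", text, re.S)

# re.M (MULTILINE) is Perl's /m: "^" matches at the start of each line.
starts = re.findall(r"^\w+", text, re.M)

# Combined, as in Perl's /ms: "." crosses the newline while "^" still
# matches just after it.
both = re.search(r"^first.*^second", text, re.M | re.S)
```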
Complex Numbers and Polar Coordinates
Forgot to hit “publish” earlier…
So we’ve seen that the unit complex numbers can be written in the form cos(θ) + i sin(θ), where θ denotes the (signed) angle between the point on the circle and 1. We’ve also seen that this view behaves particularly nicely with respect to multiplication: multiplying two unit complex numbers just adds their angles. Today I want to extend this viewpoint to the whole complex plane.
If we start with any nonzero complex number z, we can find its absolute value |z|. This is a positive real number which we’ll also call r. We can factor this out of z to find z = r(z/r). The complex number in parentheses has unit absolute value, and so we can write it as cos(θ) + i sin(θ) for some θ between -π and π. Thus we’ve written our complex number in the form

z = r(cos(θ) + i sin(θ))

where the positive real number r is the absolute value of z, and θ — a real number in the range (-π, π] — is the angle z makes with the reference point 1. But this is exactly how we define the polar coordinates (r, θ) back in high school math courses.
Just like we saw for unit complex numbers, this notation is very well behaved with respect to multiplication. Given complex numbers z = r(cos(θ) + i sin(θ)) and w = s(cos(φ) + i sin(φ)) we calculate their product:

zw = rs(cos(θ + φ) + i sin(θ + φ))

That is, we multiply their lengths (as we already knew) and add their angles, just like before. This viewpoint also makes division simple:

z/w = (r/s)(cos(θ - φ) + i sin(θ - φ))

In particular we see that

1/z = (1/r)(cos(-θ) + i sin(-θ))

so multiplicative inverses are given in terms of complex conjugates and magnitudes as we already knew.

Powers (including roots) are also easy, which gives rise to easy ways to remember all those messy double- and triple-angle formulæ from trigonometry:

z^n = r^n(cos(nθ) + i sin(nθ))
Other angle addition formulæ should be similarly easy to verify from this point.
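These multiplicative rules are easy to check numerically; a quick sketch with Python's cmath (moduli multiply, angles add, and de Moivre's formula for powers):

```python
import cmath
import math

z = complex(1, 1)   # r = sqrt(2), theta = pi/4
w = complex(0, 2)   # s = 2,       phi  = pi/2

r, theta = cmath.polar(z)
s, phi = cmath.polar(w)

# Product: lengths multiply, angles add.
prod_r, prod_angle = cmath.polar(z * w)

# Inverse in terms of the conjugate and the squared magnitude.
inv = z.conjugate() / abs(z) ** 2

# de Moivre: (cos t + i sin t)^n = cos(n t) + i sin(n t).
t, n = 0.3, 5
de_moivre_lhs = cmath.rect(1.0, t) ** n
de_moivre_rhs = cmath.rect(1.0, n * t)
```

(The example angles are chosen so that θ + φ stays inside (-π, π]; otherwise the reported angle would wrap around by 2π.)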
In general, since we consider complex numbers multiplicatively so often it will be convenient to have this polar representation of complex numbers at hand. It will also generalize nicely, as we will see.
The Cross Product and Pseudovectors
Finally we can get to something that is presented to students in multivariable calculus and physics classes as if it were a basic operation: the cross product of three-dimensional vectors. This only works out because the Hodge star defines an isomorphism from Λ²(V) to Λ¹(V) = V when dim(V) = 3. We define

u × v = *(u ∧ v)
All the usual properties of the cross product are really properties of the wedge product combined with the Hodge star. Geometrically, u × v is defined as a vector perpendicular to the plane spanned by u and v, which is exactly what the Hodge star produces. We choose which perpendicular direction by the “right-hand rule”, but this is only because we choose the basis vectors e₁, e₂, and e₃ (or as these classes often call them: i, j, and k) by the same convention, and this defines an orientation we have to stick with when we define the Hodge star. The length of the cross product is the area of the parallelogram spanned by u and v, again as expected from the Hodge star. Algebraically, the cross product is anticommutative and linear in each variable. These are properties of the wedge product, and the Hodge star — being linear — preserves them.
The biggest fib we tell students is that the value of the cross product is a vector. It certainly looks like a vector on the surface, but the problem is that it doesn’t transform like a vector. Before the advent of thinking of all these things geometrically, people thought of a vector quantity as a triple of real numbers that transform in a certain way when we change to a different orthonormal basis. This is inspired by the physical world, where there’s no magic orthonormal basis floating out somewhere to pick out coordinates. We should be able to turn our heads and translate the laws of physics to compensate exactly. These rotations form the special orthogonal group of orientation- and inner product-preserving transformations, but we can also throw in reflections to get the whole orthogonal group, of all transformations from one orthonormal basis to another.
So let’s imagine what happens to a cross product when we reflect the world. In fact, stand by a mirror and hold out your right hand in the familiar way, with your index finger along one imagined vector u, your middle finger along another vector v, and your thumb pointing in the direction of the cross product u × v. Now look in the mirror.
The orientation has been reversed, and mirror-you is holding out its left hand! If mirror-you tried to use its version of the cross product, it would find that the cross product should go in the other direction. The cross product doesn’t behave like all the other vectors in the world, because it doesn’t reflect the same way.
Physicists to this day use the old language describing a triple of real numbers that transform like a vector under rotations, but point the wrong way under reflections. They call such a quantity a “pseudovector”. And they also have a word for a single real number that somehow mysteriously flips its sign when we apply a reflection: a “pseudoscalar”. Whenever we read about scalar, vector, pseudovector, and pseudoscalar quantities, they just mean real numbers (or triples of them) and specify how they change under certain orthogonal transformations.
But geometrically we can see exactly what’s going on. These are just the spaces Λ⁰(V), Λ¹(V), Λ²(V), and Λ³(V), along with their representations of the orthogonal group O(3). And the “pseudo” means we’ve used the Hodge star — which depends essentially on a choice of orientation — to pretend that bivectors in Λ²(V) and trivectors in Λ³(V) are just like vectors in Λ¹(V) and scalars in Λ⁰(V), respectively. And we can get away with it for a long time, until a mirror shows up.
The only essential tool from multivariable calculus or introductory physics built from the cross product that we might have need of is the “triple scalar product”, which takes three vectors u, v, and w. It calculates the cross product of two of them, and then the inner product with the third to get a scalar. But this is the coefficient of our unit cube in the definition of the Hodge star:

u · (v × w) = u · *(v ∧ w) = *(u ∧ v ∧ w)

since u ∧ v ∧ w is exactly this coefficient times e₁ ∧ e₂ ∧ e₃. That is, the triple scalar product gives the (oriented) volume of the parallelepiped spanned by u, v, and w, just as we remember from those classes. We really don’t need the cross product as a primitive operation at all, and in the long run it only leads to confusion as it identifies vectors and pseudovectors without the explicit use of the orientation-dependent Hodge star to keep us straight.
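In coordinates, the familiar component formula for the cross product and the triple scalar product built from it look like this (a plain Python sketch of the standard formulas):

```python
def cross(u, v):
    """Cross product of two 3-vectors: u x v = *(u ^ v)."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def triple(u, v, w):
    """Triple scalar product: the oriented volume of the parallelepiped."""
    return dot(u, cross(v, w))

# The right-handed basis (i, j, k in the physics classes).
i, j, k = (1, 0, 0), (0, 1, 0), (0, 0, 1)
```

Swapping two arguments of `cross` flips the sign, which is the anticommutativity noted above; `triple(i, j, k)` gives +1 for the right-handed orientation.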
Science Fair Project Encyclopedia
Bees (superfamily Apoidea) are flying insects, closely related to wasps and ants. They are adapted for feeding on nectar, play an important role in pollinating flowering plants, and are called pollinators. Bees have a long proboscis that they use to obtain nectar from flowers. Bees have antennae made up of thirteen segments in males and twelve in females. They have two pairs of wings, the back pair being the smaller of the two. Their legs are modified so that they can gather pollen, and the apex of their abdomens is modified into a stinger. There are over 16,000 described species, and possibly around 30,000 species in total. Bees may be solitary, or may live in various sorts of communities. The most advanced of these are eusocial colonies, found among the honeybees and stingless bees. Sociality is believed to have evolved separately in different groups of bees.
The life cycle of bumblebees begins in the spring when the queen bee emerges from hibernation. At this time the queen does all the work herself, because there are no worker bees yet. She searches for a place to build her nest and builds the honeypots, and she also does the foraging to collect nectar and pollen. Bumblebee colonies die off in the autumn, after raising a last generation of queens, which survive individually in found hiding spots. Interestingly, bumblebee queens sometimes seek winter safety in honeybee hives, where they are sometimes found dead in the spring by beekeepers, presumably stung to death by the honeybees. It is not known whether any succeed in winter survival in such an environment.
With honeybees, which survive winter as a colony, the queen begins egg laying in winter, to prepare for spring. This is probably triggered by day length. She is the only fertile female, and deposits all the eggs from which the other bees are produced. Except for her one mating flight or to establish a new colony, the queen rarely leaves the hive after the larvae have become full grown bees. The queen deposits each egg in a cell prepared by the worker bees. The egg hatches into a small larva which is fed by nurse bees (worker bees who maintain the interior of the colony). After about a week (depending on species), the larva is sealed up in its cell by the nurse bees. After another week (again, depending on species), it will emerge an adult bee.
Both workers and queens are fed royal jelly during the first three days of the larval stage. Then workers are switched to a diet of pollen and nectar or diluted honey, while those intended for queens will continue to receive royal jelly. This causes the larva to develop to the pupa stage more quickly, while being also larger and fully developed sexually. Queen breeders consider good nutrition during the larval stage to be of critical importance to the quality of the queens raised, good genetics and sufficient number of matings also being factors. During the larval and pupal stages, various parasites can attack the pupa/larva and destroy or mutate it.
Queens are not raised in typical horizontal brood cells of the honeycomb. They are specially constructed to be much larger, and have a vertical orientation. As the queen finishes her larval feeding, and pupates, she moves into a head downward position, from which she will later chew her way out of the cell. At pupation the workers cap or seal the cell. Just prior to emerging from their cells, young queens can often be heard "piping." This is considered likely to be a challenge to other queens for battle.
Worker bees are infertile females. Worker bees secrete the wax used to build the hive, clean and maintain the hive, raise the young, guard the hive and forage for nectar and pollen. In honeybees, the worker bees have a modified ovipositor called a stinger with which they can sting to defend the hive, but the bee will die soon after.
Drone bees are the male bees of the colony. Drone honeybees do not forage for nectar or pollen. The primary purpose of a drone bee is to fertilize a new queen. Drones mate with the queen in flight. They die immediately after mating.
In some species, drones are suspected of playing a contributing role in the temperature regulation of the hive. Drone bees have no stinger, since a stinger is actually a modified ovipositor.
Queens live for an average of three years, while workers have an average life of only three months.
Honeybee queens release pheromones to regulate hive activities, and worker bees also produce pheromones for various communications.
Honey is produced from nectar collected from flowers, which is a clear liquid consisting of nearly 80% water with complex sugars. The collecting bees store the nectar in a second stomach and return to the hive where worker bees remove the nectar. The worker bees digest the raw nectar for about 30 minutes using enzymes to break up the complex sugars into simpler ones. Raw honey is then spread out to dry, which reduces the water content to less than 20%. Once dried, each honeycomb is sealed with wax to preserve the honey.
Honey itself is so sweet that bacteria cannot grow on it, and dry enough that it does not support yeasts. Anaerobic bacteria may be present and survive in spore form in honey, as well as anywhere else in common environments. Honey (or any other sweetener) that is diluted by the non-acidic digestive fluids of infants can provide an ideal medium for the transition of botulism bacteria from the spore form to the actively growing form, which produces a toxin. When infants are weaned to solid foods, their digestive system becomes acidic enough to prevent such growth and poisoning. No sweeteners should be given to infants prior to weaning, as there is a small but possibly lethal risk of poisoning.
Honey bee pheromones
Honey Bees use special pheromones, or chemical communication, for almost all behaviors of life. Such uses include (but are not limited to): mating, alarm, defense, orientation, kin and colony recognition, food production, and integration of colony activities. Pheromones are thus essential to honey bees for their survival.
Solitary, communal, and quasisocial Bees
Some other bees form small colonies. For example, most species of bumblebee (Bombus terrestris, B. pratorum, et al.) live in colonies of 30-400 bees. (By contrast, an average honeybee hive at the height of summer will have 40,000 - 80,000 bees.) The queen bee is typically able to survive on her own for at least a short time (unlike queens in eusocial species who must be cared for at all times).
Other species of bee such as the Orchard Mason bee (Osmia lignaria) and the hornfaced bee (Osmia cornifrons) are solitary in that every female is fertile. There are no worker bees for these species. Solitary bees typically produce neither honey nor beeswax. They are immune from tracheal and varroa mites, but have their own unique parasites, pests and diseases. (see diseases of the honeybee)
Cuckoo bees are bumblebee look-alikes that invade bumblebee nests and lay their eggs. The bumblebees raise the young as their own. Megachilid bees also have other megachilid Coelioxys bees whose young are placed into the already provisioned nests of these solitary bees. They destroy the host larvae and eat the food.
- "The general story of the communication of the distance, the situation, and the direction of a food source by the dances of the returning worker bee on the vertical comb of the hive, has been known in general outline from the work of Karl von Frisch in the middle 1950s." (World&I)
All bees eat nectar and pollen. Bees are excellent pollinators and play an important role in agriculture.
Bees are the favorite meal of Merops apiaster, a bird.
- Africanized bee
- Bee anatomy (mouth)
- Bee learning and communication
- Honeybee life cycle
- Characteristics of common wasps and bees
- The Sacramento Bee: Newspaper of Sacramento, California
External links and references
- Bees, Wasps and Ants Recording Society (UK)
- Carl Hayden Bee Research Center
- Pollinator Paradise (solitary bees)
- Raising honeybee queens
- Bees of the World, C. D. Michener (200)
- Rescuing Australian stingless bees
- Honey Bee Pheromones
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.
Shallow Water Equations
Model ID: 202
The Shallow Water equations are frequently used for modeling both oceanographic and atmospheric fluid flow. Models of such systems lead to the prediction of areas eventually affected by pollution, coastal erosion, and polar ice-cap melting.
Comprehensive modeling of such phenomena using physical descriptions such as the Navier-Stokes equations can often be problematic, due to the scale of the modeling domains as well as resolving free surfaces. The Shallow Water equations, of which there are a number of representations, provide an easier description of such phenomena. This 1D model investigates the settling of a wave over a variable bed as a function of time.
Figure: The initial water surface profile and the sea bed profile.
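For illustration only (this does not reproduce the model above), a minimal forward-Euler sketch of the linearized 1D shallow-water system, dη/dt = -H du/dx and du/dt = -g dη/dx, over a flat bed on a periodic grid, with assumed values for the depth and grid parameters:

```python
import math

g, H = 9.81, 10.0             # gravity, still-water depth (flat bed here)
n, dx, dt = 100, 1.0, 0.01    # grid size, spacing, time step (assumed values)

# Initial condition: a Gaussian hump on an otherwise still surface.
eta = [math.exp(-((i - n / 2) * dx) ** 2 / 25.0) for i in range(n)]
u = [0.0] * n
mass0 = sum(eta)              # total surface elevation, conserved by the scheme

for _ in range(200):
    # Centered spatial differences on a periodic grid.
    deta = [-H * (u[(i + 1) % n] - u[i - 1]) / (2 * dx) for i in range(n)]
    du = [-g * (eta[(i + 1) % n] - eta[i - 1]) / (2 * dx) for i in range(n)]
    eta = [e + dt * d for e, d in zip(eta, deta)]
    u = [v + dt * d for v, d in zip(u, du)]
```

The centered, periodic differences conserve total mass by construction; a production model would use a stable scheme (e.g. leapfrog or finite volume) and handle the variable bed and free surface that make the full problem hard.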
Statistics and reports analyze the change over time of all kinds of phenomena. For example, you could evaluate an employee's performance by analyzing progress curves provided by reports; managers can make business decisions based on statistical sales data; meteorologists can predict natural disasters based on statistical weather-pattern data—and the list goes on. For the software industry, statistics and reports provide both an ongoing challenge and an ongoing market. At present, programming languages such as PHP and Java come with built-in packages for developing applications around statistical problems.
This article explores PHP's support for the statistical domain. You will see how to generate reports and statistics for simple text phrases, XML documents, and complex databases.
The Text_Statistics PEAR Package
The Text_Statistics PEAR package makes it easy to calculate some basic readability metrics on blocks of text. These metrics include such things as the number of words, the number of unique words, the number of sentences, and the number of total syllables. You can use these statistics to calculate the Flesch score for a sentence, which is a number between 0 and 100 that represents readability. Figure 1 shows the formula for the Flesch Reading Ease Score (FRES) test.
Figure 1. Flesch Reading Ease Score Formula: This formula calculates the relative readability of a text; the higher the score, the easier the text is to read.
The higher the score, the more readable the text (high scores have larger potential audiences). For example, a Flesch score between 90 and 100 equates to a fifth-grade reading level; a Flesch score between 0 and 30 means the text may be readable only by college graduates. This tutorial provides more details about the Flesch readability formula and the Flesch readability tests (Flesch-Kincaid Grade Level).
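The formula itself is tiny; here is a plain Python transcription (just the arithmetic, not the PEAR package), fed with the counts that the Text_Statistics example later in this article reports for "This is an example.":

```python
def flesch_reading_ease(total_words, total_sentences, total_syllables):
    """Flesch Reading Ease Score: higher means easier to read."""
    return (206.835
            - 1.015 * (total_words / total_sentences)
            - 84.6 * (total_syllables / total_words))

# 4 words, 1 sentence, 5 syllables -> 97.025, matching the PEAR output.
score = flesch_reading_ease(4, 1, 5)
```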
You install the PEAR package like this (version 1.0 is the stable version):
pear install Text_Statistics
As you will see in the next two examples, the Text_Statistics PEAR is very easy to use, because the applications are straightforward and the code is very intuitive. For example, to retrieve the number of syllables in a word you can use the Text_Word class like this:
//the PEAR convention maps the Text_Word class to this file
require_once 'Text/Word.php';
//the tested word
$word = 'paragraphs';
//create an instance of the Text_Word class
$stats = new Text_Word($word);
//Print the syllables of the $word variable
print_r("The word '".$word."'
has ".$stats->numSyllables()." syllables.");
The output of this example is:
The word 'paragraphs' has 3 syllables.
Here's a more complete example based on the Text_Statistics class that analyzes a complete (but still short) sentence. In this case the output contains more information, including the number of syllables, the number of unique words, the Flesch score, the abbreviations, and more:
//the PEAR convention maps the Text_Statistics class to this file
require_once 'Text/Statistics.php';
//the tested text
$text = "This is an example.";
//create an instance of the Text_Statistics class
$stats = new Text_Statistics($text);
// Print the number of syllables, number of unique words,
// the Flesch number, the abbreviations, and more
print_r("<b>The entire array:</b><br />");
print_r("<br /><br />");
print_r("<b>Text:</b> ".$stats->text."<br />");
print_r("<b>Syllables:</b> ".$stats->numSyllables."<br />");
print_r("<b>Words number:</b> ".$stats->numWords."<br />");
print_r("<b>Unique words:</b> ".$stats->uniqWords."<br />");
print_r("<b>Sentences number:</b> ".$stats->numSentences."<br />");
print_r("<b>Flesch:</b> ".$stats->flesch."<br />");
The output of this example shows both a raw array of data that the Text_Statistics package returns and a list of specific values extracted from that array and formatted more readably:
The entire array:
Text_Statistics Object ( [text] => This is an example.
[numSyllables] => 5
[numWords] => 4
[uniqWords] => 4
[numSentences] => 1
[flesch] => 97.025
[_abbreviations] => Array ( [/Mr\./] => Misterr
[/Mrs\./i] => Misses [/etc\./i] => etcetera
[/Dr\./i] => Doctor )
[_uniques] => Array (
[this] => 1
[is] => 1
[an] => 1
[example] => 1 ) )
Text: This is an example.
Syllables: 5
Words number: 4
Unique words: 4
Sentences number: 1
Flesch: 97.025
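The Flesch value in that output can be checked by hand. The sketch below applies the standard Flesch Reading Ease formula to the counts shown above; that Text_Statistics uses exactly these textbook constants is an assumption, though the result matches for this sentence:

```python
# Standard Flesch Reading Ease formula (textbook constants; whether
# Text_Statistics uses precisely these is an assumption).
def flesch_reading_ease(num_words, num_sentences, num_syllables):
    return (206.835
            - 1.015 * (num_words / num_sentences)
            - 84.6 * (num_syllables / num_words))

# "This is an example." -> 4 words, 1 sentence, 5 syllables
score = flesch_reading_ease(4, 1, 5)
print(round(score, 3))  # 97.025, matching the [flesch] field above
```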
You aren't limited to analyzing text files, though; another PEAR package makes it easy to analyze data stored in XML.
Wetland Monitoring

In 2005, the Iowa Department of Natural Resources' (IDNR) Watershed Monitoring and Assessment Section began its wetland monitoring program in north-central Iowa through grant funds provided by the U.S. Environmental Protection Agency. Wetlands provide many benefits for both water quality and wildlife; therefore, a statewide monitoring program is being developed to assess these valuable areas. Results from this monitoring will enable the IDNR to determine the ecological condition of wetlands while documenting the leading contaminants and stressors found in these systems. This information will help the IDNR make informed decisions affecting the future of Iowa's wetlands.
Iowa Wetland Action Plan - 2010 --- NEW ---
Iowa Wetland Monitoring
(Water Fact Sheet 2007 - 6; January 2007)
Iowa's Wetland Monitoring Program - 2005
(Water Fact Sheet 2006 - 2; January 2006)
Why Monitor Wetlands?
(Water Fact Sheet 2006 - 7; February 2006)
Vacuum Container and Buoyancy
Hello. Maybe this is a stupid question, but I have never
received a satisfying answer, so I will try my luck here; the worst-case
scenario is being called stupid, so why not. My question is: Is it
theoretically possible to create a container that can handle the pressure of a
high enough level of vacuum to become zero-weight, or even float like
helium? Or how close is it possible to get to zero weight for the
container? My thought is based on helium balloons, and my idea is that a
high level of vacuum has to be better than expensive helium.
The key principle to consider is "buoyancy" -- helium balloons float
in air, and air has a particular density (that depends on temperature
and pressure). If you build a container whose average density is less
than that of air (in other words, its mass is lower than the mass of
an equal volume of air), the container will float. You can also create
buoyant lift by simply heating air (such as with a hot air balloon).
Hotter air expands and therefore has a lower density than colder air.
To your first question of whether it is possible to build a lighter-than-air
craft this way, I would have to say 'yes', although I am not aware of such a
device actually having been built using vacuum rather than a
lighter-than-air gas such as hydrogen or helium. I would add that its
weight cannot be zero, though, as the container must have some weight
(blimps are lighter than air, but are still very heavy in terms of
actual weight - they just have a very large volume). The reason the
vacuum route may not have been pursued, I would guess, is because of
your second question. Cost is an interesting twist: it is very
expensive to build a strong, rigid container that is also well-sealed
enough to hold vacuum, and much easier to build, basically, a big sack
that you can fill with a lighter-than-air gas. If your goal is to be
cost-effective, the "big sack of gas" is a pretty cheap option.
Hope this helps,
That is not a stupid question; that is a great question. Your
question is about Archimedes' principle, where the evacuated
container displaces air and produces a buoyancy force equal to the
weight of the displaced air. The theory here is sound. If you could
produce a lightweight container which can withstand the pressure
without collapsing then this would work. And in fact this idea was
proposed in 1670 by Father Francesco de Lana, an Italian Jesuit
priest, who suggested using very thin copper to make lightweight
evacuated spheres. Such spheres would collapse under the pressure,
however. Even today we do not have any materials strong enough to
withstand such pressure while remaining light enough so that the
buoyancy force exceeds the gravitational force. You would need to
fill the interior of such an object with another gas to provide the
outward pressure necessary to resist the inward pressure of the
atmosphere. Hydrogen weighs only about 1/14th as much as air and is the ideal
candidate, although it does have safety concerns (it is highly flammable).
Be assured, neither you nor your question is "stupid". Many have asked
this question, and confusion takes hold. I hope I can make things a little bit clearer.
The principle of buoyancy is this: An object is subject to a buoyant
force that is equal to the WEIGHT of the VOLUME of fluid that the object
displaces when placed on or in the fluid. This requires some thought,
because what is equal to the buoyant force is not the object's own weight
but the weight of the fluid (a volume) that the object pushes aside. That is a little tricky.
Possibly an example will help.
Suppose you float an aluminum-foil "boat" on water. If the foil is
crumpled into a ball, the "boat" will sink because you have decreased its
volume. But if the foil is shaped into a "boat" so that it
occupies a larger volume, the "boat" will float. For a closed boat,
it does not matter what is in the boat or what the "boat" is made of -- only
its shape. What is important is the weight of the volume of water that is
displaced by the "boat".
If you make a balloon whose weight exceeds the weight of the
air it displaces, it will sink. If the balloon's
weight is less than the weight of the fluid displaced, it will
float. So, properly designed, the "boat" could even be lead or concrete. The
important issue is the weight of the volume of fluid displaced by the object.
This is a reasonable question. If it were possible to construct a lightweight
and strong container and remove the air from it, the container would float like
a balloon does. That is true. Unfortunately, the density of air is quite small,
so that the weight of the container would need to be very small for it to float.
Because the pressure of air is fairly high, there are no practicable materials
that are light and strong enough to withstand the pressure and still float. That
is why it is necessary to use an internal gas (helium, hydrogen) to support the
pressure to make the container (blimp or balloon) practicable.
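To put rough numbers on that conclusion, the sketch below compares, for a thin spherical vacuum shell, the wall thickness needed to resist buckling under atmospheric pressure with the maximum thickness that still floats in air. This is my own illustration, not part of the answers above: the material properties are approximate textbook values, and the classical thin-shell buckling formula is an idealized best case.

```python
import math

P_ATM = 101_325.0   # Pa, sea-level atmospheric pressure
RHO_AIR = 1.225     # kg/m^3, sea-level air density

# material name: (Young's modulus in Pa, Poisson's ratio, density in kg/m^3)
# -- rough textbook values (assumptions)
MATERIALS = {
    "steel":    (200e9, 0.30, 7850.0),
    "aluminum": ( 69e9, 0.33, 2700.0),
}

def thickness_ratios(E, nu, rho):
    """Return (t/r needed to resist buckling, max t/r that still floats)."""
    # Classical elastic buckling of a thin sphere under external pressure:
    #   p_cr = 2 * E * (t/r)**2 / sqrt(3 * (1 - nu**2))
    t_over_r_buckle = math.sqrt(P_ATM * math.sqrt(3.0 * (1.0 - nu**2)) / (2.0 * E))
    # Float condition: shell mass 4*pi*r^2*t*rho must be less than the
    # displaced air mass (4/3)*pi*r^3*rho_air  =>  t/r < rho_air / (3*rho)
    t_over_r_float = RHO_AIR / (3.0 * rho)
    return t_over_r_buckle, t_over_r_float

for name, (E, nu, rho) in MATERIALS.items():
    tb, tf = thickness_ratios(E, nu, rho)
    verdict = "impossible" if tb > tf else "feasible"
    print(f"{name:8s}: strength needs t/r >= {tb:.1e}, "
          f"floating needs t/r <= {tf:.1e} -> {verdict}")
```

For steel, the wall needed to survive one atmosphere is roughly ten times thicker than the wall that could still float, which is why real lighter-than-air craft fill the envelope with gas instead.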
This is different from floating in water. Water is dense, and has large but
manageable pressure for containers in the ocean (like a submarine). Standard
steel construction is adequate to make a vessel that is buoyant in water.
Update: June 2012
: What is WCF?
: Windows Communication Foundation (formerly code-named "Indigo") is a
: set of .NET technologies for building and running connected systems.
: It is a new breed of communications infrastructure built around the
: Web services architecture.
: in technical terms
: Windows Communication Foundation is Microsoft's unified programming
: model for building service-oriented applications with managed code.
: It extends the .NET Framework to enable developers to build secure
: and reliable transacted Web services that integrate across platforms
: and interoperate with existing investments. Windows Communication
: Foundation combines and extends the capabilities of existing
: Microsoft distributed systems technologies, including Enterprise
: Services, System.Messaging, Microsoft .NET Remoting, ASMX, and WSE
: to deliver a unified development experience across multiple axes,
: including distance (cross-process, cross-machine, cross-subnet,
: cross-intranet, cross-Internet), topologies (farms, fire-walled,
: content-routed, dynamic), hosts (ASP.NET, EXE, Windows Presentation
: Foundation, Windows Forms, NT Service, COM+), protocols (TCP, HTTP,
: cross-process, custom), and security models (SAML, Kerberos, X509,
: username/password, custom).
: What are the three major points in WCF?
: 1) Address --- Specifies the location of the service, which will be
: something like http://Myserver/MyService. Clients will use this location to
: communicate with our service.
: 2) Contract --- Specifies the interface between the client and the
: server. It's a simple interface with some attributes.
: 3) Binding --- Specifies how the two parties will communicate in terms
: of transport, encoding, and protocols.
: How can we host a service on two different protocols on a single
: server?
: Let’s first understand what this question actually means. Let’s say
: we have made a service and we want to host this service using HTTP
: as well as TCP. You must be wondering why we would ever host services on
: two different types of protocol. When we host a service, it's
: consumed by multiple types of clients, and it's very much possible
: that they have their own protocols of communication. A good service
: has the capability to downgrade or upgrade its protocol according to
: the client that is consuming it.
: Let’s do a small sample in which we will host the ServiceGetCost on
: the TCP and HTTP protocols.
: Once we are done with the server-side coding, it's time to make a
: client with which we can switch between the protocols and see the
: results. Below is the code snippet of the client side for
: multi-protocol hosting.
: What is the difference between WCF and Web services?
: Web services can only be invoked over HTTP, while a WCF service or
: component can be invoked over any protocol and any transport type.
: Second, web services are not flexible, but WCF services are. If
: you make a new version of the service, you just need to expose a
: new endpoint. So services are agile, which is a very practical
: approach given current business trends.
: For more information visit
Thanks for sharing, I would be interested in learning more about WCF and related new technologies...
I'm Brad Wang...
.NET Freelancer from China
Science / Chemistry Glossary
Ammonia: Pure NH3 is a colorless gas with a sharp, characteristic odor. It is easily liquified by pressure, and is very soluble in water. Ammonia acts as a weak base. Aqueous solutions of ammonia are . . .
Ammonium Ion: (NH4+) ammonium NH4+ is a cation formed by neutralization of ammonia, which acts as a weak base.
Amorphous: A solid that does not have a repeating, regular three-dimensional arrangement of atoms, molecules, or ions.
Amperage: The amount of charge moved per second by an electric current, measured in amperes.
Ampere: (A) amp. The SI unit of electric current, equal to flow of 1 coulomb of charge per second. An ampere is the amount of current necessary to produce a force of 0.2 micronewtons per meter betwe . . .
Amperometry: Determining the concentration of a material in a sample by measuring electric current.
Amphi: A prefix used to name certain members of a series of geometric isomers or stereoisomers.
Amphiprotic Solvent: Solvents that exhibit both acidic and basic properties; amphiprotic solvents undergo autoprotolysis. Examples are water, ammonia, and ethanol.
Amphoteric: A substance that can act as either an acid or a base in a reaction. For example, aluminum hydroxide can neutralize mineral acids ( Al(OH)3 + 3 HCl = AlCl3 + 3 H2O ) or strong bases ( Al(OH)3 . . .
Amplitude: The displacement of a wave from zero. The maximum amplitude for a wave is the height of a peak or the depth of a trough, relative to the zero displacement line.
Amylopectin: A form of starch made of glucose molecules linked in a branching pattern.
Amylose: A form of starch made of long, unbranched chains of alpha-D-glucose molecules.
Analysis: Chemical analysis. Determination of the composition of a sample.
Analyte: An analyte is the sample constituent whose concentration is sought in a chemical analysis.
Angstrom: A non-SI unit of length used to express wavelengths of light, bond lengths, and molecular sizes. 1 Å = 10-10 m = 10-8 cm.
Angular Momentum Quantum Number: (ell) azimuthal quantum number; orbital angular momentum quantum number. A quantum number that labels the subshells of an atom. Sometimes called the orbital angular momentum quantum number, . . .
Anhydrous: A compound with all water removed, especially water of hydration. For example, strongly heating copper(II) sulfate pentahydrate (CuSO4·5H2O) produces anhydrous copper(II) sulfate (CuSO4).
Anion: An anion is a negatively charged ion. Nonmetals typically form anions.
Anode: The electrode at which oxidation occurs in a cell. Anions migrate to the anode.
Anodize: To coat a metal with a protective film by electrolysis.
Anthocyanin: A family of pigments that give flowers, fruits, and leaves of some plants their red or blue coloring. Anthocyanins consist of sugar molecules bound to a benzopyrylium salt (called anthocyani . . .
Antibonding Orbital: A molecular orbital that can be described as the result of destructive interference of atomic orbitals on bonded atoms. Antibonding orbitals have energies higher than the energies its consti . . .
Antichlor: A chemical compound that reacts with chlorine-based bleaches to stop the bleaching. Thiosulfate compounds are antichlors.
Antioxidant: Antioxidants are compounds that slow oxidation processes that degrade foods, fuels, rubber, plastic, and other materials. Antioxidants like butylated hydroxyanisole (BHA) are added to food t . . .
Antiozonant: Substances that reverse or prevent severe oxidation by ozone. Antiozonants are added to rubber to prevent them from becoming brittle as atmospheric ozone reacts with them over time. Aromatic . . .
Data reported by the weather station: 988360 (RPMZ)
Latitude: 6.9 | Longitude: 122.06 | Altitude: 5
Year 1960 climate
To calculate annual averages, we analyzed data from 363 days (99.18% of the year).
If an average or annual total is missing data for 10 or more days, it is not displayed.
A total rainfall value of 0 (zero) may indicate that no such measurement was taken and/or that the weather station does not broadcast it.
Annual average temperature: 27.4°C (363 days of data)
Annual average maximum temperature: 30.9°C (363 days of data)
Annual average minimum temperature: 23.2°C (363 days of data)
Annual average humidity: 76.7% (363 days of data)
Annual total precipitation: no data
Annual average visibility: 26.1 km (363 days of data)
Annual average wind speed: 6.0 km/h (363 days of data)
Number of days with extraordinary phenomena.
Total days with rain: 173
Total days with snow: 0
Total days with thunderstorm: 23
Total days with fog: 2
Total days with tornado or funnel cloud: 0
Total days with hail: 0
Days of extreme historical values in 1960
The highest temperature recorded was 34.6°C on May 2.
The lowest temperature recorded was 18°C on December 18.
The maximum wind speed recorded was 40.7 km/h on October 6.
A parameter is a variable in an SQL statement. For example, suppose a Parts table has columns named PartID, Description, and Price. To add a part without parameters would require constructing an SQL statement such as:
INSERT INTO Parts (PartID, Description, Price) VALUES (2100, 'Drive shaft', 50.00)
Although this statement inserts a new part, it is not a good solution for an order entry application because the values to insert cannot be hard-coded in the application. An alternative is to construct the SQL statement at run time, using the values to be inserted. This also is not a good solution, due to the complexity of constructing statements at run time. The best solution is to replace the elements of the VALUES clause with question marks (?), or parameter markers:
INSERT INTO Parts (PartID, Description, Price) VALUES (?, ?, ?)
The parameter markers are then bound to application variables. To add a new row, the application has only to set the values of the variables and execute the statement. The driver then retrieves the current values of the variables and sends them to the data source. If the statement will be executed multiple times, the application can make the process even more efficient by preparing the statement.
The statement just shown might be hard-coded in an order entry application to insert a new row. However, parameter markers are not limited to vertical applications. For any application, they ease the difficulty of constructing SQL statements at run time by avoiding conversions to and from text. For example, the part ID just shown is most likely stored in the application as an integer. If the SQL statement is constructed without parameter markers, the application must convert the part ID to text and the data source must convert it back to an integer. By using a parameter marker, the application can send the part ID to the driver as an integer, which usually can send it to the data source as an integer, thereby saving two conversions. For long data values this is critical, because the text forms of such values often exceed the allowable length of an SQL statement.
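As an illustration (not part of the original text), the same pattern can be sketched with Python's sqlite3 module, which happens to use the same `?` parameter markers. The table follows the Parts example above; the second row of values is made up:

```python
import sqlite3

# Create an in-memory database with the Parts table from the text.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Parts (PartID INTEGER, Description TEXT, Price REAL)")

# The statement is written once with parameter markers; the driver
# receives PartID as an integer, with no conversion to and from text.
insert_sql = "INSERT INTO Parts (PartID, Description, Price) VALUES (?, ?, ?)"

# Values come from application variables at run time (second row is
# a made-up example).
new_parts = [
    (2100, "Drive shaft", 50.00),
    (2101, "Axle bearing", 12.50),
]
conn.executemany(insert_sql, new_parts)  # one statement, executed per row

for row in conn.execute("SELECT PartID, Description, Price FROM Parts ORDER BY PartID"):
    print(row)
```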
Parameters are legal only in certain places in SQL statements. For example, they are not allowed in the select list (the list of columns to be returned by a SELECT statement), nor are they allowed as both operands of a binary operator such as the equal sign (=), because it would be impossible to determine the parameter type. In general, parameters are legal only in Data Manipulation Language (DML) statements, and not in Data Definition Language (DDL) statements. For details, see "Parameter Markers" in Appendix C: SQL Grammar.
When the SQL statement invokes a procedure, named parameters can be used. Named parameters are identified by their names, not by their position in the SQL statement. They can be bound by a call to SQLBindParameter, but the parameter is identified by the SQL_DESC_NAME field of the IPD (implementation parameter descriptor), not by the ParameterNumber argument of SQLBindParameter. They can also be bound by calling SQLSetDescField or SQLSetDescRec. For more information on named parameters, see "Binding Parameters by Name (Named Parameters)," later in this chapter. For more information on descriptors, see Chapter 13: Descriptors.
© 1997, Astronomical Society of the Pacific, 390 Ashton Avenue, San Francisco, CA 94112.
It would be the ultimate backpacking trip: an exploration of Saturn's rings. The astronauts squeeze into their space suits, strap on their rocket-powered backpacks, and load their cameras. Their spaceship brings them within a few kilometers of the rings. So long as the ship remains in a circular orbit, its speed is identical to that of the chunks of ice and rock which make up the rings. A collision with any ring material under these conditions is just a gentle nudge.
Stepping outside, the backpackers -- using brief rocket bursts -- slowly glide toward the glittering golden plain. The explorers touch down on a large boulder and, in one smooth push of the foot, propel themselves onto the next big piece. Between these slow-motion ballet toe-steps, the explorers feel a blizzard of tiny particles gently bouncing off the front of their space suits.
Floating in the silvery gravel is intoxicating. The partly hidden Sun glinting off the field of ring chunks, the gentle gravitational symphony of collective motion that carries both chunks and explorers safely around the planet, the feeling of being surrounded yet freely floating -- all disguise the fact that everything is whirling around Saturn at tens of thousands of kilometers per hour.
Jewel of the Solar System
You don't have to travel millions of kilometers to visit Saturn. It comes to you. Of all the celestial sights available to backyard telescopes, only Saturn and the Moon are sure to elicit an exclamation of delight from those who have never looked through a telescope before. And one look is seldom enough. No photograph or description can duplicate the beauty of the colossal ringed planet floating against the black velvet of the night sky.
Dark side of the ringed planet. This is most people's favorite picture of Saturn. With the Sun off to the right, Saturn casts a shadow on its rings; its daytime hemisphere looks like a crescent. To see a planet or moon as a crescent, you need to be looking from the side -- a perspective on Saturn which Voyager 1 achieved in 1980, but which is impossible from Earth. Photo courtesy of NASA JPL.
People have been watching Saturn for millennia, but Galileo Galilei was the first to point a telescope at the planet and see its rings. From his discovery, in 1610, until the 19th century, astronomers debated whether Saturn's rings were a solid disc or a swarm of objects. In a telescope, the rings look solid, yet Saturn's gravity should tear a solid structure apart. American astronomer James Edward Keeler resolved the dilemma in 1895. Using a spectroscope to study sunlight reflected off different parts of the rings, he found that they do not all move at the same speed, as they would if they were solid. Instead, the parts closest to Saturn are moving faster than the parts farther out. Keeler concluded that the rings must consist of individual objects revolving around Saturn just like tiny moonlets.
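Keeler's observation follows directly from orbital mechanics: circular-orbit speed falls off as the square root of distance from the planet. The sketch below is my own illustration, not from this article; Saturn's GM and the ring radii are approximate published values, good to a few percent.

```python
import math

# Approximate published values (assumptions, not from this article)
GM_SATURN = 3.793e16   # m^3/s^2, gravitational parameter of Saturn
R_INNER = 74.5e6       # m, roughly the inner edge of the main rings (C ring)
R_OUTER = 136.8e6      # m, roughly the outer edge of the A ring

def orbital_speed(r):
    """Circular-orbit speed at radius r: v = sqrt(GM / r)."""
    return math.sqrt(GM_SATURN / r)

v_in, v_out = orbital_speed(R_INNER), orbital_speed(R_OUTER)
# The inner edge moves several km/s faster than the outer edge --
# the differential rotation Keeler measured spectroscopically.
print(f"inner edge: {v_in / 1000:.1f} km/s, outer edge: {v_out / 1000:.1f} km/s")
```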
These objects range from tiny crystals, like those in an ice fog, to flying icebergs. Each has its own orbit about Saturn, but occasionally jostles its neighbors. The gentle collisions gradually grind down the larger particles. Meanwhile, the smallest particles tend to stick to one another and create larger clumps. These competing actions have established an equilibrium of sizes. For every house-sized boulder, there are a million baseball-sized chunks and trillions of sand-sized grains.
In the denser rings, the baseball-sized particles are separated by a meter or so; the house-sized ones are kilometers apart. In fact, most of the rings are empty space. If they could be melted and refrozen as a solid body, they would be a solid disc less than 2 feet thick.
The rings are enormous. From one edge to the other, they span a distance equivalent to two-thirds of the gulf between Earth and the Moon. Yet the ring particles seldom stray more than a few hundred meters from a perfectly flat plane, making the rings the height of a 30-story building. If the rings were the size of a football field, they would be paper-thin.
The reason why the rings are so flat -- rather than a random haze -- has to do with Saturn itself. Saturn is the least dense of the gaseous giant planets, yet a day on Saturn is only 11 hours long. This rapid rotation has bulged the planet at the equator and compressed it at the poles. As a result, there is more material at the equator, so the gravity is stronger there. A body orbiting Saturn feels a greater gravitational pull as it passes over the equator, compared with the poles. Over time, this difference distorts the orbits of the ring particles, causing them to collide and settle into a circular orbit above the equator.
Although astronomers are still trying to determine exactly where the rings came from, they think a collision, either between two of Saturn's moons or between a moon and a comet, blasted debris into orbit around the planet. The debris became the rings; it could not regroup into a moon because, near the planet, gravity rips large objects to shreds. Some scientists think the rings are less than a billion years old -- fairly young by astronomical standards -- while others think they date back to the early days of the solar system.
Comprehensive Description
Biology/Natural History: Rhizocephalan barnacles such as this species are bizarrely distorted parasitic barnacles. It was not even known that they were barnacles until the cypris larva in their life cycle was discovered. The eggs of Rhizocephalans usually hatch as a nauplius larva which metamorphoses into a cypris larva. Members of order Akentrogonida, however, such as Sylon hippolyte, apparently pass the nauplius stage in the egg and hatch as a cypris. The female cyprid settles onto a recently molted host or attaches to the host gill. She attaches to the host using a glue gland on her antennae. She then metamorphoses, losing her legs and eyes. She extrudes her tissue through the antenna or through her mouth through the host carapace into the host internal tissue. At that point she may be called a "kentrogon" if she is in order Kentrogonida. The injected barnacle grows into a ramifying rootlike structure called an "interna". The interna begins growing by sending out rootlike projections through the body of the host. These projections absorb nutrients from the host, and typically destroy the gonads (a parasitic castrator). The interna may grow very large and may actually become heavier than the host tissue. When she matures, a part of her body called the "externa" erupts through the exoskeleton of the host, usually on the ventral side of the abdomen near the gonads. The externa has a cavity for eggs and a place for males to attach. Male cyprids settle into the externa and metamorphose into a wormlike structure.
Female shrimp which have this parasite species do not bear eggs, so the parasite is probably a parasitic castrator, as are many Rhizocephalans.
The FORALL statement and construct is a generalization of the Fortran 95/90 masked array assignment (WHERE statement and construct). It allows more general array shapes to be assigned, especially in construct form.
FORALL is a feature of Fortran 95. It takes the following form:
The FORALL construct takes the following form:
The subscript-name must be a scalar of type integer. It is valid only within the scope of the FORALL; its value is undefined on completion of the FORALL.
The subscripts and stride cannot contain a reference to any subscript-name in triplet-spec.
The stride cannot be zero. If it is omitted, the default value is 1.
Evaluation of an expression in a triplet specification must not affect the result of evaluating any other expression in another triplet specification.
The WHERE statement and construct use a mask to make the array assignments (see Section 4.2.4).
Rules and Behavior
If a construct name is specified in the FORALL statement, the same name must appear in the corresponding END FORALL statement.
A FORALL statement is executed by first evaluating all bounds and stride expressions in the triplet specifications, giving a set of values for each subscript name. The FORALL assignment statement is executed for all combinations of subscript name values for which the mask expression is true.
The FORALL assignment statement is executed as if all expressions (on both sides of the assignment) are completely evaluated before any part of the left side is changed. Valid values are assigned to corresponding elements of the array being assigned to. No element of an array can be assigned a value more than once.
A FORALL construct is executed as if it were multiple FORALL statements, with the same triplet specifications and mask expressions. Each statement in the FORALL body is executed completely before execution begins on the next FORALL body statement.
Any procedure referenced in the mask expression or FORALL assignment statement must be pure.
Pure functions can be used in the mask expression or called directly in a FORALL statement. Pure subroutines cannot be called directly in a FORALL statement, but can be called from other pure procedures.
Consider the following:
FORALL(I = 1:N, J = 1:N, A(I, J) .NE. 0.0) B(I, J) = 1.0 / A(I, J)
This statement takes the reciprocal of each nonzero element of array A(1:N, 1:N) and assigns it to the corresponding element of array B. Elements of A that are zero do not have their reciprocal taken, and no assignments are made to corresponding elements of B.
Every array assignment statement and WHERE statement can be written as a FORALL statement, but some FORALL statements cannot be written using just array syntax. For example, the preceding FORALL statement is equivalent to the following:
WHERE(A /= 0.0) B = 1.0 / A
It is also equivalent to:
FORALL (I = 1:N, J = 1:N)
   WHERE (A(I, J) .NE. 0.0) B(I, J) = 1.0 / A(I, J)
END FORALL
However, the following FORALL example cannot be written using just array syntax:
FORALL(I = 1:N, J = 1:N) H(I, J) = 1.0/REAL(I + J - 1)
This statement sets array element H(I, J) to the value 1.0/REAL(I + J - 1) for values of I and J between 1 and N.
Consider the following:
TYPE MONARCH
   INTEGER, POINTER :: P
END TYPE MONARCH

TYPE(MONARCH), DIMENSION(8) :: PATTERN
INTEGER, DIMENSION(8), TARGET :: OBJECT

FORALL (J = 1:8) PATTERN(J)%P => OBJECT(1 + IEOR(J - 1, 2))
This FORALL statement causes elements 1 through 8 of array PATTERN to point to elements 3, 4, 1, 2, 7, 8, 5, and 6, respectively, of OBJECT. IEOR can be referenced here because it is pure.
The following example shows a FORALL construct:
FORALL (I = 3:N + 1, J = 3:N + 1)
   C(I, J) = C(I, J + 2) + C(I, J - 2) + C(I + 2, J) + C(I - 2, J)
   D(I, J) = C(I, J)
END FORALL
The assignment to array D uses the values of C computed in the first statement in the construct, not the values before the construct began execution.
By: Ally McEntire
The calls of the baby gator in the Holliday lab got me thinking about alligator vocalizations. On a whim, I decided to look this up and found a little more than I had bargained for. Alligators and other crocodilians have a different vocal structure than any other reptile, amphibian, or bird (all nearby relatives). Their vocal structure is actually quite similar to that of mammals. Instead of a syrinx like birds have, in which air passing through the trachea makes membranes vibrate at different rates, they have vocal folds (a larynx) just like humans.
This article details a study done on live juvenile crocs to study their vocalizations. It discusses a number of very specific things, like sexual dimorphism in calls and the amount of time the noises lasted. What I find more interesting than all of that is that their vocal structure is so unique. Even though they are evolutionarily related to these 3 other branches, their system of making noise is not really like any of their closest relatives. Basically, they have vocal folds, which contract and/or relax when the gator breathes out, like mammals.
This made me want to find out more about the ancestral line, if the present one is so estranged. Would other archosaurs resemble birds, lizards or crocs more closely? I’m not sure there’s a way to know this, considering the vocal cords are a soft tissue, and therefore incredibly difficult to preserve. But, if there were tissue attachments that could be identified, I think it’s something worth looking into.
Also, if you’re curious to hear our captive alligator making his own vocalizations, check Romer out here: | <urn:uuid:9d240c03-c1ad-4958-a672-a8d9ce5b36d1> | 3.140625 | 360 | Personal Blog | Science & Tech. | 40.6 |
Long imagined to be virtual dead zones, perhaps since humans first trekked to the Arctic and Antarctic a century ago, the polar regions are actually living laboratories where scientists can study the mechanisms of evolution and adaptation to some of the harshest conditions on Earth. At the extreme ends of the planet, life persists, adapts and astounds in its abilities to survive at all.
At the South Pole, it is possible, according to one NSF-funded scientist, that microbes can eke out a living. Some come to life after lying dormant for thousands of years in hyper-salty, frozen lakes. Other microbes may survive thousands of meters below the Antarctic ice sheet in ancient Lake Vostok, deep in the continent's interior.
Fish have blood that acts like antifreeze to keep them alive in frigid oceans.
Birds and mammals dive to great ocean depths and swim and hunt for long periods without harm.
Creatures, unknown to science until recently, swim the world's southernmost ocean.
There may even be more than meets the eye, biologically speaking, in some familiar creatures. One NSF-funded researcher recently reported that, based on DNA analysis, killer whales who inhabit the waters around McMurdo Station, NSF's research hub in Antarctica, may be a new and previously unknown species of orca unique to that ecological niche.
Along with similarly hardy and exotic creatures elsewhere on the globe, these creatures are known collectively to science as "extremophiles." What scientists can learn from studying them will tell us not only a great deal about the individual species themselves, it is also likely to inform other fields of biology. Who knows, for example, what studying the physiology of a diving seal's ability to survive enormous underwater pressures for long periods and on a single breath may one day mean to medicine?
Understanding how creatures adapt to polar cold, darkness and radiation may even lead to new ways to determine whether life exists--most likely at the microscopic level--on other planets and moons
of the solar system. Conditions on Mars and Jupiter's moon, Europa, for instance, are very similar to those found in Antarctica's Dry Valleys and in Lake Vostok.
Though scientists have long studied the behavior of some of the better-known polar species such as penguins and seals, the new tools of biochemistry, molecular biology and microbiology will allow researchers to enter a new age of understanding how animals survive and adapt to extremes that characterize the polar regions.
Emphasizing the potential importance of the biological discoveries that are yet to be made in the polar regions, a 2003 report from the National Academy of Sciences said, "Many of the potential discoveries to be made in the study of adaptations of polar organisms stand not only to make important contributions to basic biological science but also to offer opportunities for advancing biotechnology and biomedicine." The report, Polar Biology in the Genomic Era, calls on NSF to "develop a major new initiative in polar genomic sciences."
Such an initiative, the report says, would address such questions as:
- What can be learned from ancient organisms and DNA preserved in permafrost, subglacial lakes and other frozen environments?
- What new types of genetic information enable polar organisms to function under the stresses of polar conditions?
- How rapidly do the genomes of polar organisms evolve?
- What are the evolutionary origins of organisms present in the polar ice caps, glaciers and subglacial lakes?
- How have many polar fishes and other animals succeeded in reducing their metabolic rates and can these mechanisms be used in biotechnology and biomedicine?
- What types of molecules serve as "antifreeze" agents in the blood of fish and other animals and how do they work?
- What are the impacts of human influence on the polar regions?
The list goes on, matching the diversity of life at the extremes.
In this mad rush towards the use of so-called renewable power to replace existing coal fired power generation, many wild claims are being made. Because of that, members of the public who have little knowledge of exactly how power is generated can be lured into believing that some of these methods actually can generate the electrical power we take so much for granted.
The public believe that electricity is just something that is always there, so the question of how it is actually generated rarely enters the equation; electricity is, well, just electricity. That being the case, as long as it still comes out of the ‘hole in the wall’ when the switch is turned on, moving from one source of generation to another is perceived as a simple matter.
There are important questions that need to be addressed about the variability of these renewable methods of generation, and about how they cannot supply the power that is required constantly, 24/7/365, rather than for an average of 8 hours out of every 24.
In numerous other posts I have gone into intricate detail about how the three favoured methods of renewable power generation are problematic. The cost factor alone should be the most worrying factor of all, but in this day and age, when the word Billion rolls so readily off the tongue, we have all become inured to monetary cost. These three methods are incredibly costly, which sounds incongruous when the original pitch told us that these forms of generation draw on a free source of energy. After all, the light from the Sun and the blowing of the wind are free to all. Now we know that while the light and the wind are free, actually harnessing them requires intricate construction that is incredibly costly.
I have also dealt at length with the variability of these methods, which would have to be their most worrying aspect, and one glossed over by those with an agenda to replace those coal fired power plants.
To that end, we are being told that of these three forms of power, Wind, Solar Photovoltaic (Solar PV) and Solar Thermal, one of them, Solar Thermal, CAN actually be used to replace those coal fired power plants.
So, let’s look in detail at that Solar Thermal form of generating electrical power.
More correctly, it is called Concentrating Solar Power. Whereas Solar PV uses the light of the Sun to generate electricity in solar cells, Concentrating Solar Power is completely different. It uses specially manufactured mirrors to focus the light to a focal point, creating an intensely hot area.
Some plants use a large tower with all the mirrors focused onto a single point at the top of the tower, as shown in the image at the top of the page. Other plants use vast arrays of mirrors in rows, with a central pipe passing through the focal point, as shown in the image at the left. In both methods the mirrors are mounted on tracking mounts so that they can follow the movement of the Sun across the sky; these are called heliostats. Both use compounds that are either stored in the tower or passed through the pipes. These compounds can be salts, graphite or other materials. The immense heat generated by the focused sunlight raises these compounds to a molten state. Pipes carrying water pass through the molten compounds, which boil that water to steam, and the steam then drives a turbine which in turn drives a generator, a process similar to that used in coal fired power plants, and also in nuclear power plants.
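To put rough numbers on the process just described, here is a back-of-envelope sizing sketch for a CSP mirror field. All figures (irradiance, overall efficiency, plant sizes) are illustrative assumptions of mine, not data from any real plant.

```python
# Rough sizing sketch for a concentrating solar (CSP) plant's mirror field.
# All numbers are illustrative assumptions, not data from any real plant.

def mirror_area_m2(electric_mw, dni_w_m2=850.0, efficiency=0.18):
    """Mirror area needed for a given electrical output.

    electric_mw : desired electrical output in megawatts
    dni_w_m2    : direct normal irradiance hitting the mirrors (W/m^2)
    efficiency  : overall sunlight-to-electricity conversion
                  (optical, thermal and turbine losses combined)
    """
    electric_w = electric_mw * 1e6
    return electric_w / (dni_w_m2 * efficiency)

# A 100 MW CSP block versus a large (say 2000 MW) coal-fired station:
csp_area = mirror_area_m2(100)
coal_equiv_area = mirror_area_m2(2000)
print(f"100 MW needs about {csp_area / 1e6:.2f} km^2 of mirrors")
print(f"2000 MW would need about {coal_equiv_area / 1e6:.1f} km^2")
```

The linear scaling is the point: because output is proportional to collecting area, matching a large coal station means multiplying the mirror field, land and plumbing by the same factor.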
So, this similarity in using steam to drive a turbine, which then drives the generator gives people the belief that because they are similar, then this form of Concentrating Solar power CAN in actual fact be used to replace coal fired power plants.
This is a patently false belief, and I will explain exactly why.
People have no concept about the actual process of conventional steam turbine power generation, and as an example, ask anyone how coal is used to produce electrical power. If one person in ten answers anywhere even close to the mark, I would be surprised, in fact dumbfounded.
The problem with Concentrating Solar Power lies not with the process of providing the steam to drive the turbine, but with the actual generator itself.
To produce large amounts of electricity, as those coal fired plants provide, the generator has to be large, and by large I mean huge.
A huge magnetic field (the Rotor) rotates inside of a huge system of electrical windings (the Stator) wrapped closely around that magnetic field. An electrical current is induced in that Stator, and this is the generated electrical power, and believe me, I have simplified it to the most extreme so it can hopefully be understood.
The larger the Stator, the more electrical power is produced; and the larger the Stator, the larger the Rotor must be.
The Rotor in one of those large coal fired power plants can be anything up to thirty feet in diameter and up to 50 feet in length. This huge size encompasses massive metallic cores wrapped in vast lengths of electrical wiring. The current passing through these windings generates a magnetic field in those cores. Superconductors are sometimes mentioned in this context, and are erroneously thought of simply as wiring that carries electrical current; in fact, superconductor technology mainly deals with generating huge magnetic fields, and conventional power plant rotors still rely on copper windings around steel cores.
This huge complex of metal and wires needed to create those magnetic fields can weigh anywhere from 250 to 400 tons. Read that again: TONS.
That huge weight has to rotate at the desired speed, around 3600 RPM: 3600 revolutions per minute, or 60 revolutions every second. Got that?
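That 3600 RPM figure is not arbitrary: a synchronous generator's shaft speed is locked to the grid frequency by the standard formula n = 120·f/p, where f is the grid frequency in hertz and p is the number of magnetic poles on the rotor. A quick sketch:

```python
# Synchronous generator shaft speed: n (RPM) = 120 * f / p,
# where f is grid frequency in Hz and p is the number of rotor poles.

def synchronous_rpm(grid_hz, poles=2):
    return 120.0 * grid_hz / poles

print(synchronous_rpm(60))       # 2-pole machine on a 60 Hz grid -> 3600.0
print(synchronous_rpm(50))       # 2-pole machine on a 50 Hz grid -> 3000.0
print(synchronous_rpm(60) / 60)  # revolutions per second         -> 60.0
```

This is why the turbine must hold the generator at exactly this speed: any slower and the machine falls out of step with the grid.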
To do that, you need a huge turbine, and in most cases with those coal fired power plants, that turbine is a multi stage turbine, three stages, the first driven by high pressure steam, the second medium pressure, and the third low pressure, as the steam cools slightly. Each stage is similar to a jet aircraft engine.
This turbine is mounted on the same shaft (naturally) as the generator itself, further increasing the weight of the complex that needs to be driven at that speed.
I can understand it can be a little difficult to try and comprehend that, but walk out onto the road in front of your house and look back at the house. Then imagine what it might take to make your house rotate at 3600 RPM. That is the scale of this.
To drive all that weight, an immense amount of high temperature, high pressure steam is required, and it is required all the time, 24/7/365, to keep everything revolving.
So then that is the scale of conventional steam driven power generation.
Let’s then look at Concentrating Solar power, where the light of the Sun is focused to a central point to make a compound molten, to boil water to steam, to drive a turbine, which in turn drives a generator.
Never, and let me repeat that, NEVER, will you see a concentrating solar plant even a fraction of the size of a large coal fired power plant.
They will be small boutique plants producing, at an absolute maximum, less than 10% of the power of a single coal fired power plant.
Why is that?
No matter how many mirrors you have and no matter what the compound, it can only boil water to steam at the required temperature and pressure to drive a much smaller turbine, hence a much smaller generator, hence considerably less power.
Build a huge generator, with a huge turbine, similar to a coal fired power plant, and it will NEVER run up to speed, because that amount of steam can never be generated. So, it is carefully calculated how much steam CAN actually be generated and the turbine/generator complex is designed accordingly from that.
Okay, so now we have carefully designed a complex that can actually generate electrical power, let’s just build a whole bunch of them to replace that one coal fired plant, and hang the expense.
Second problem, and one completely ignored by those who advocate this form of power generation.
That steam is only there to drive the turbine/generator while the compound remains in the molten state. As soon as the Sun sets, the compound starts to cool, and cool rapidly. As it cools, the steam, while still remaining steam, loses temperature and pressure, and the moment it drops below the level where it can drive that turbine/generator’s huge weight, the whole complex just stops completely, and no electrical power is generated at all. Nothing.
It will then not start to rotate again until that compound again reaches the molten state when it can produce that level of steam. This will be a couple of hours after the Sun rises in the morning.
So, from this, you can now see with remarkable clarity that even though we have been told that Concentrating Solar is the plan for the future, and can actually replace those large coal fired power plants, the exact opposite is the case.
These plants at best can provide power for maybe ten hours in a 24 hour day, on average, extrapolated across a whole year. In the deepest of Winters, there is every chance it will never be able to generate the steam needed, and in the North, that problem will be even further exacerbated.
The problem is a physical problem that no amount of wishing and hoping can overcome.
To drive that huge weight to produce those large amounts of electrical power, you need huge amounts of high temperature high pressure steam.
So, these Concentrating Solar plants will always be much smaller, and until someone can tell me how to make the Sun shine over one of these plants for 24 hours of every day, keeping that molten compound molten enough to provide the required amount of steam, these plants will only ever be of a boutique nature and will never come into widespread use.
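The post's own figures can be turned into a back-of-envelope energy comparison: a CSP plant at roughly a tenth of a coal plant's capacity, generating for about 10 hours a day on average, versus a coal plant running near-continuously. The specific capacities and availability factor below are illustrative assumptions.

```python
# Back-of-envelope annual energy comparison using the post's own figures.
# Plant sizes and the 85% coal availability factor are illustrative.

def annual_gwh(capacity_mw, hours_per_day):
    """Annual energy (GWh) for a plant at full output hours_per_day."""
    return capacity_mw * hours_per_day * 365 / 1000.0

coal_gwh = annual_gwh(2000, 24 * 0.85)  # 2000 MW at ~85% availability
csp_gwh = annual_gwh(200, 10)           # 200 MW for ~10 h/day on average

print(f"coal: {coal_gwh:,.0f} GWh/yr")
print(f"CSP : {csp_gwh:,.0f} GWh/yr")
print(f"CSP plants needed to match the coal plant: {coal_gwh / csp_gwh:.0f}")
```

The gap compounds: the CSP plant is both smaller and available fewer hours, so matching one coal station's annual output takes roughly twenty such plants under these assumptions.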
Because of these inherent problems, entrepreneurs are not flocking to build these plants, and those who are have been subsidised by huge Government incentives and monetary backing, both at the construction phase and when selling electricity to the grid, because they will NEVER return a profit to anybody; the cost of the power actually supplied to consumers will be prohibitive if any of those costs are ever to be recovered.
The problem is not one of will, be it environmental will, or even political will.
The problem is a physical problem, that can never be overcome to the point of replacing those coal fired power plants.
This has been a technical post, but what I have specifically attempted to do is to try and explain the problem so it can be easily understood.
It is just so easy for people to say that this is the way of the future, and because of that, lay people will accept it and puzzle over why anyone would speak against it.
I caution against these renewable plants not because of a political agenda, or a perceived anti-environmentalist vandalism, but from a technical viewpoint, which, as you can see from reading this, is not readily understood.
The Cat's Eye Nebula (NGC 6543) is one of the best known planetary nebulae in the sky. Its haunting symmetries are seen in the very central region of this tantalizing image, processed to reveal the enormous but extremely faint halo of gaseous material, about 6 light-years across, which surrounds the brighter, familiar planetary nebula. Made with narrow and broadband data, the composite picture shows the remarkably strong extended emission from twice ionized oxygen atoms in blue-green hues, and ionized hydrogen and nitrogen in red.

Planetary nebulae have long been appreciated as a final phase in the life of a sun-like star. But recently many planetaries have been found to have halos like this one, likely formed of material shrugged off during earlier active episodes in the star's evolution. While the planetary nebula phase is thought to last for around 10,000 years, astronomers estimate the age of the portions of this halo to be 50,000 to 90,000 years.
Figure 4: The effect of varying threshold on the binarization of a protein-aggregate image. Red pixels are particles, based
on the threshold. The green boxes show enclosed particles found after thresholding. Image 1 is the original protein aggregate
image. Image 2 shows the result of a threshold of 25 darker than background. Image 3 shows the result of a threshold of 15
darker. Image 4 shows the result of a threshold of 15 lighter. Image 5 shows the result of thresholding 15 lighter and darker.
Image 6 is the same as Image 5, but with the addition of neighborhood analysis.
To avoid this problem, the imaging particle analysis system should allow for thresholding based on pixels that are either
darker or lighter than the background. While this solution is certainly helpful, it still may allow the thresholding process
to form some image artifacts. Fortunately, common image-processing algorithms are available to overcome this problem by using
neighborhood analysis to group disparate clusters of thresholded pixels into logical whole particles. Figure 4 shows the differences
obtained for a single protein-aggregate image using a dark-only pixel threshold, a dark-and-light pixel
threshold, and finally, a dark-and-light pixel threshold plus neighborhood analysis.
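The two-step procedure described above can be sketched in a few lines: threshold pixels that differ from the background in either direction, then group touching thresholded pixels into particles. This is a minimal pure-Python illustration on a tiny made-up grayscale grid, not the algorithm of any particular imaging system; a production system would use optimized connected-component labeling on camera frames.

```python
# Minimal sketch: dark-and-light thresholding followed by a simple
# neighborhood (connected-component) analysis.  The tiny "image" and
# background level of 128 are made-up illustrative values.

from collections import deque

IMAGE = [
    [128, 128, 100, 128, 128],
    [128, 105, 110, 128, 128],
    [128, 128, 128, 150, 128],
    [128, 128, 128, 148, 128],
]
BACKGROUND, THRESHOLD = 128, 15

def binarize(img):
    """Mark pixels at least THRESHOLD darker OR lighter than background."""
    return [[abs(p - BACKGROUND) >= THRESHOLD for p in row] for row in img]

def label_particles(mask):
    """Group touching thresholded pixels (4-neighborhood) into particles."""
    h, w = len(mask), len(mask[0])
    seen, particles = set(), []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and (y, x) not in seen:
                blob, queue = [], deque([(y, x)])
                seen.add((y, x))
                while queue:  # breadth-first flood fill over neighbors
                    cy, cx = queue.popleft()
                    blob.append((cy, cx))
                    for ny, nx in ((cy-1,cx),(cy+1,cx),(cy,cx-1),(cy,cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] \
                                and (ny, nx) not in seen:
                            seen.add((ny, nx))
                            queue.append((ny, nx))
                particles.append(blob)
    return particles

blobs = label_particles(binarize(IMAGE))
print(len(blobs), "particles, sizes:", sorted(len(b) for b in blobs))
```

Note that a dark-only threshold would miss the bright particle entirely, which is the article's point about thresholding in both directions.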
Figure 5: Thresholded images of National Institute of Standards and Technology traceable 10-µm spheres. Sphere images in sharp
focus are on the left, and less sharp images are at right. The table shows variance in measured diameter based on different threshold values.
The third factor to consider when using dynamic imaging particle analysis is the effect of image quality. Although the overall
topic of image quality is quite broad and well beyond the scope of this article, a basic tenet of the subject is that image
sharpness is the most critical measurement of image quality (6). Indeed, in imaging particle analysis, the sharpness of the
particle images is directly proportional to the accuracy of the measurements obtained.
Table II: Equivalent spherical diameters (ESD).
Figure 5 demonstrates this by showing images and measurements obtained in an imaging particle analysis system for National
Institute of Standards and Technology traceable size bead standards. The images at left are beads in sharp focus, and the
images at right are beads in less sharp focus. The variation in size and shape caused when the bead images are less sharp
is easy to see, and Table II shows the variation in size measurements that would be obtained by the system with different
thresholds. For 10-µm calibrated spheres, the variation in measurement for the blurry images is more than 12 µm for a difference
of 100 in threshold value, whereas the variation in measurement for the sharp images is only 1.67 µm over the same threshold
range. It is clear that image sharpness greatly affects the accuracy and precision of particle measurements.
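Why blur inflates threshold sensitivity can be shown with a toy model: treat a particle's edge as a Gaussian-blurred step and "measure" the diameter as the width of the region whose darkness exceeds a threshold. The particle size, contrast and blur widths below are illustrative assumptions, not the article's instrument data, but the qualitative result matches Table II: the blurrier the edge, the more the measured size swings with threshold.

```python
# Toy model of threshold sensitivity: a 10 um particle whose edges are
# Gaussian-blurred steps, "measured" as the width of the above-threshold
# region.  All numbers are illustrative assumptions.

import math

def measured_diameter(true_um=10.0, blur_um=0.5, contrast=100.0,
                      threshold=25.0, step=0.01):
    def darkness(x):
        # Gaussian-blurred top-hat profile (difference of two erf edges)
        s = blur_um * math.sqrt(2)
        return 0.5 * contrast * (math.erf((x + true_um / 2) / s)
                                 - math.erf((x - true_um / 2) / s))
    xs = [i * step for i in range(int(-2 * true_um / step),
                                  int(2 * true_um / step))]
    hits = [x for x in xs if darkness(x) >= threshold]
    return (max(hits) - min(hits) + step) if hits else 0.0

for blur in (0.2, 2.0):  # sharp vs. blurry edges
    lo = measured_diameter(blur_um=blur, threshold=75)
    hi = measured_diameter(blur_um=blur, threshold=25)
    print(f"blur {blur} um: size varies {hi - lo:.2f} um across thresholds")
```

With sharp edges the measured diameter barely moves as the threshold changes; with blurred edges the same threshold range shifts the apparent size by several micrometers.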
Carbon is the sixth most abundant element in the universe and is unique due to its dominant role in the chemistry of life and in the human economy. It is a nonmetallic element having the symbol C, the atomic number 6, an atomic weight of 12.01115, and a melting point of about 3550ºC. There are four known allotropes of carbon: amorphous, graphite, diamond, and fullerene. A fifth allotrope of carbon was recently produced, a spongy solid called a magnetic carbon “nanofoam” that is extremely lightweight and attracted to magnets.
Phase at Room Temp.: solid
Melting Point: 3823.2 K
Heat of Atomization: 717 kJ/mol
Thermal Conductivity: 1.59 J/(m·sec·K)
Electrical Conductivity: 0.727 1/(mohm·cm)
Source: coal, petroleum, natural gas
Number of Isotopes: 3
Electron Affinity: 121.85 kJ/mol
First Ionization Energy: 1086.4 kJ/mol
Second Ionization Energy: 2352.6 kJ/mol
Third Ionization Energy: 4620.4 kJ/mol
Atomic Volume: 5.3 cm3/mol
Atomic Radius: 77.2 pm
Common Oxidation Numbers: -4, +4
Other Oxidation Numbers: -3, -2, -1, +1, +2, +3
In Earth's Crust: 2.00×10² mg/kg
In Earth's Ocean: 2.8×10¹ mg/L
In Human Body: 22.85%

Regulatory / Health
CAS Number: 7440-44-0
OSHA Permissible Exposure Limit (PEL): TWA 15 mppcf
OSHA PEL Vacated 1989: TWA 2.5 mg/m3
NIOSH Recommended Exposure Limit: TWA 2.5 mg/m3; IDLH 1250 mg/m3

Sources: Mineral Information Institute; Jefferson Accelerator Laboratory
The name derives from the Latin carbo, for "charcoal". It was known in prehistoric times in the form of charcoal and soot. In the year 1797, the English chemist Smithson Tennant proved that diamond is pure carbon. It is found in abundance in the sun, stars, comets, and atmospheres of most planets. Carbon in the form of microscopic diamonds is found in some meteorites.
Natural diamonds are found in kimberlite of ancient volcanic "pipes," found in South Africa, Arkansas, and elsewhere. Diamonds are now also being recovered from the ocean floor off the Cape of Good Hope. About 30% of all industrial diamonds used in the U.S. are now made synthetically.
The energy of the sun and stars can be attributed at least in part to the well-known carbon-nitrogen cycle.
Due to carbon’s unusual chemical property of being able to bond with itself and a wide variety of other elements, it forms over 10 million known compounds. Carbon is present as carbon dioxide in the atmosphere and dissolved in all natural waters. It is a component of rocks as carbonates of calcium (limestone), magnesium and iron.
The fossil fuels (coal, crude oil, natural gas, oil sands, and shale oils) are chiefly hydrocarbons. Carbon is the active element of photosynthesis and the key structural component of all living matter. The isotope carbon-12 is used as the basis for atomic weights. Carbon-14, a radioactive isotope with a half-life of 5,730 years, is used to date such materials as wood and archaeological specimens. In 1960, W.F. Libby was awarded the Nobel Prize in Chemistry for developing the carbon dating method.
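The dating method rests on a single piece of algebra: if a sample retains a fraction f of its original carbon-14, its age is t = T·log2(1/f), where T is the 5,730-year half-life mentioned above. A short sketch:

```python
# Radiocarbon age from the fraction of carbon-14 remaining:
# t = T * log2(1/f), with half-life T = 5730 years as in the text.

import math

HALF_LIFE_YEARS = 5730.0

def radiocarbon_age(fraction_remaining):
    return HALF_LIFE_YEARS * math.log2(1.0 / fraction_remaining)

print(radiocarbon_age(0.5))          # one half-life  -> 5730.0
print(radiocarbon_age(0.25))         # two half-lives -> 11460.0
print(round(radiocarbon_age(0.9)))   # a fairly young sample
```

In practice, laboratories also correct for historical variation in atmospheric carbon-14, so the raw formula gives only an uncalibrated age.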
Organic chemistry, a major subfield of chemistry, is the study of carbon and its compounds. Because carbon dioxide is a principal greenhouse gas, the global carbon cycle has become a focus of scientific inquiry in relation to global warming, and the management of carbon dioxide emissions from the combustion of fossil fuels is a central technological, economic, and political concern; furthermore, imbalances to the carbon cycle due to deforestation, overgrazing, peatland exploitation and other land cover changes are thought to be significantly implicated in climate change. Methane, CH4, is another important carbon compound that is also a significant greenhouse gas.
Carbon is found free in nature in three allotropic forms: amorphous, graphite, and diamond. A fourth form, known as "white" carbon, is now thought to exist. Ceraphite is one of the softest known materials while diamond is one of the hardest.
Graphite exists in two forms: alpha and beta. These have identical physical properties, except for their crystal structure. Naturally occurring graphites are reported to contain as much as 30% of the rhombohedral (beta) form, whereas synthetic materials contain only the alpha form. The hexagonal alpha type can be converted to the beta by mechanical treatment, and the beta form reverts to the alpha on heating it above 1000°C.
In 1969 a new allotropic form of carbon was produced during the sublimation of pyrolytic graphite at low pressures. Under free-vaporization conditions above ~ 2550K, "white" carbon forms as small transparent crystals on the edges of the planes of graphite. The interplanar spacings of "white" carbon are identical to those of carbon form noted in the graphite gneiss from the Ries (meteroritic) Crater of Germany. "White" carbon is transparent birefringent material.
Carbon found in organic molecules—molecules that contain carbon atoms bonded to hydrogen atoms and to other carbon atoms—is called organic carbon. Carbon is the most abundant element found in organisms. For this reason, carbon is considered the fundamental building block of all life. Plants acquire carbon from the atmosphere through photosynthesis. Using inorganic carbon in the form of carbon dioxide (CO2) from the atmosphere and energy from sunlight, plants convert CO2 to organic carbon as they produce stems, leaves, and roots. Carbon may also be converted from inorganic to organic forms using chemical energy in the absence of light by chemoautotrophs. Heterotrophs—organisms such as animals, fungi, and many types of bacteria that cannot synthesize their own food from carbon dioxide—obtain their carbon from organic compounds.
In combination, carbon is found as carbon dioxide (CO2) in the atmosphere of the Earth and dissolved in all natural waters. It is a component of great rock masses in the form of carbonates of calcium (limestone), magnesium, and iron. Coal, petroleum, and natural gas are chiefly hydrocarbons.
Carbon is unique among the elements in the vast number and variety of compounds it can form. With hydrogen, oxygen, nitrogen, and other elements, it forms a very large number of compounds, carbon atom often being linked to carbon atom. There are over ten million known carbon compounds, many thousands of which are vital to organic and life processes.
Without carbon, the basis for life on Earth would not be possible. While it has been thought that silicon might take the place of carbon in forming a host of similar compounds, it is now known that stable compounds with very long chains of silicon atoms cannot be formed. The atmosphere of Mars contains 96.2% CO2. Some of the most important molecular compounds of carbon are carbon dioxide (CO2), carbon monoxide (CO), carbon disulfide (CS2), chloroform (CHCl3), carbon tetrachloride (CCl4), methane (CH4), ethylene (C2H4), acetylene (C2H2), benzene (C6H6), acetic acid (CH3COOH), and their derivatives.
Carbon has many isotopes, but just three are stable enough to exist in detectable amounts in nature. Carbon-12, a stable (non-radioactive) isotope, comprises nearly 99% of all carbon on Earth. In 1961 the International Union of Pure and Applied Chemistry adopted the isotope carbon-12 as the basis for atomic weights. Carbon-13, also a stable isotope, is the next most abundant, comprising slightly more than 1% of all carbon on Earth. Carbon-14 is the most abundant radioactive isotope of carbon at 1 part per trillion. It has a half life of 5730 years and has been widely used to date such materials as wood, archaeological specimens, etc, through radiocarbon dating. All other isotopes of carbon are highly unstable and extremely rare.
Carbon is conveyed among features of the lithosphere, biosphere, atmosphere and oceans; in addition it is transformed to different molecular forms as well as physical forms. The composite of all these transformations is termed the carbon cycle. The most important forms of carbon in the Earth's atmosphere are carbon dioxide, carbon monoxide, methane and black carbon; these forms are variously important as plant metabolite (carbon dioxide, black carbon); toxic agent (carbon monoxide, black carbon) and radiative forcing agent (methane, carbon dioxide, black carbon).
Carbon is stored on in the following major dynamic sinks: (a) as organic molecules in living and dead organisms found in the biosphere; (b) in gas and particulate form in the atmosphere; (c) as organic matter in soils; (d) in the lithosphere as fossil fuels and sedimentary rock deposits such as limestone, dolomite and chalk; and (e) in the oceans as dissolved atmospheric carbon dioxide and as calcium carbonate shells in marine organisms; and as methane clathrates, deep frozen methane formations under circumpolar seabeds.
Carbon moves from the atmosphere to the biosphere chiefly by the process of photosynthesis; conversely carbon moves from the biosphere to the atmosphere by several processes, including: burning of organic matter; methane emission from ruminants; deforestation, with some carbon lost to the atmosphere via decay, soil carbon disturbance loss or forest product use; and decay of organic matter, with a fraction of decayed matter being released to the atmosphere.
Other sizable sinks of carbon are the oceans themselves, seabed carbonates, biota and decaying matter present in soils, methane clathrates. These are very large carbon sinks, even compared to the atmosphere; furthermore, the status of research on their size and dynamics is embryonic, such that important new perspectives are likely to materialize on these sinks over the next decade.
Carbon is a critical element to all life. It is one of the six bulk elements and is the second-most common element in the human body. By mass it is the most abundant constituent of all the major molecules that organisms are formed from, including nucleic acids (e.g., DNA), proteins, carbohydrates, and lipids. As a result, living organisms are intimately involved in the carbon cycle. Some carbon compounds such as carbon monoxide (CO) or the cyanide ion CN- pose health and mortality risks to most fauna including humans.
Most sharks have an unusual combination of biological characteristics: slow growth and delayed maturation; long reproductive cycles; low fecundity; and long life spans. These factors determine the low reproductive potential of many shark species.
Slow growth and delayed maturation: Some species of sharks, including some of the commercially important species, are extremely slow growing. The picked dogfish (Squalus acanthias) has been estimated by Jones and Geen (1977) to reach maturity at about 25 years. The sandbar shark (Carcharhinus plumbeus), the most economically important species along the southeastern coast of the United States, has been estimated to reach maturity from 15-16 years (Sminkey and Musick 1995) to about 30 years (Casey and Natanson 1992).
Long reproductive cycles: Sharks produce young that hatch or are born fully developed, and that are relatively large at hatching or birth. The energy requirements of producing large, fully developed young result in great energy demands on the female, and in reproductive cycles and gestation periods that are unusually long for fishes. Both the reproductive cycle and the gestation period usually last one or two years in most species of sharks, reflecting the time it takes a female to store enough energy to produce large eggs and to nurture her large young through development (Castro 1996). The reproductive cycle is how often the shark reproduces, and it is usually one or two years long. The gestation period is the time of embryonic development from fertilization to birth, and is frequently one or two years long. The reproductive cycle and the gestation period may run concurrently or consecutively. For example, in the picked dogfish, the reproductive cycle and gestation run concurrently and both last two years. A female carries both developing oocytes in the ovary and developing embryos in the uteri concurrently for two years. Shortly after parturition, it mates and ovulates again, and the process begins anew. In this case, both ovulation and parturition are biennial. In most of the large, commercially important, carcharhinid sharks, the reproductive cycle and the gestation period run consecutively. These sharks have biennial reproductive cycles (Clark and von Schmidt 1965) with one-year gestation cycles. They accumulate the energy reserves necessary to produce large eggs for about a year, then, mate, ovulate, gestate for one year, and give birth. For example, after giving birth in the spring, a blacktip shark (Carcharhinus limbatus) enters a "resting" stage where it stores energy and nourishes its large oocytes for one year. 
After mating and ovulation, it begins a one-year gestation period, giving birth in the spring of the second year after its previous parturition (Castro 1996). Thus, these sharks also reproduce biennially. Some of the hammerhead sharks (Sphyrna) and the sharpnose sharks (Rhizoprionodon) reproduce annually (Castro 1989, Castro and Wourms 1993). Even longer cycles of three and four years have been proposed for other species, though without supporting evidence.
Low fecundity: The small size of their broods, or "litters", is another factor contributing to the low reproductive potential of sharks. The number of young or "pups" per brood usually ranges from two to a dozen, although some species may produce dozens of young per brood. Most of the commercially important carcharhinid sharks usually produce fewer than a dozen young per brood. For example, the sandbar shark averages 8 young per brood, while the blacktip averages 4 per brood (Castro 1996). An exception among the targeted species is the blue shark, for which broods of over 30 young have often been reported.
Long life spans: Although many species of sharks are known to be long-lived (Pratt and Casey 1990), the reproductive life span of sharks is unknown. Because of the long time to maturation and the long reproductive cycles, it appears that a given female may produce only a few broods in her lifetime (Sminkey and Musick 1995).
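To see how these traits compound, here is a back-of-the-envelope sketch in Python. All numbers are illustrative, drawn from the ranges cited above for the sandbar shark; the true reproductive span is unknown.

```python
# Rough lifetime reproductive output of a hypothetical female sandbar shark.
age_at_maturity = 15      # years (low end of the estimates cited above)
reproductive_span = 20    # years of adult life (assumed; actual span unknown)
cycle_length = 2          # biennial reproductive cycle
brood_size = 8            # average pups per brood

broods = reproductive_span // cycle_length
lifetime_pups = broods * brood_size
daughters = lifetime_pups // 2   # assuming a 1:1 sex ratio

print(f"{broods} broods -> {lifetime_pups} pups "
      f"(~{daughters} daughters) over a {age_at_maturity + reproductive_span}-year life")
```

For the population merely to replace itself, on average only one of those roughly 40 daughters must survive the ~15 years to maturity; there is little slack for added fishing mortality.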
Many of the commercially important species use shallow coastal waters, known as "nurseries", to give birth to their young, and where the young spend their first months or years (Castro 1993b). The mating grounds are often close to the nurseries, and thus adults of both sexes congregate close to shore in large numbers. These areas are highly attractive to fishermen, because of their nearness to shore and the high concentration of sharks. Most of the commercially important species (e.g. the genera Carcharhinus, Sphyrna, Rhizoprionodon, Negaprion) have shallow water nurseries (Castro 1987, 1993b). These sharks are very vulnerable to modern fishing operations, and are easily overfished.
There is no evidence of any compensatory mechanisms by which female sharks increase brood size or decrease the length of the ovarian and gestation cycles in response to overfishing. It is highly unlikely that such mechanisms could evolve rapidly enough to compensate for the increase in mortality. Even if they could, brood size would be limited by the maximum number of young that can be carried by a female, and ovulatory and gestation cycles are limited by complex metabolic processes. The long ovarian cycles and long gestation periods probably reflect the minimal times required by each species to acquire and transfer the necessary energy to large ova and young.
Sea Creatures’ Response to Ocean Acidification Shocks Scientists
Have you ever heard those tales of how dumping toxic waste into water will cause aquatic life to mutate into something out of science fiction? Well, marine life has lately been going through some surprising changes, and while it may not be like something out of The Twilight Zone, it sure has baffled scientists.
Acidification of the ocean has been on the rise, thanks to increasing levels of CO2 in the atmosphere. The CO2 dissolves in the water, making the water more acidic. This decreases the number of carbonate ions in the ocean, which some marine life use to build their shells and skeletons. Scientists had thought this increase in acidity would cause the shells of sea creatures to become brittle, but it seems the opposite has happened: crabs, lobsters and other such animals have been building more shell when exposed to acidification, rather than losing it.
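The chemistry in this paragraph can be sketched with the second dissociation of carbonic acid: the carbonate-to-bicarbonate ratio scales as 10^(pH - pKa2), so added CO2 (which lowers pH) pulls carbonate ions out of solution. A minimal Python illustration; the pKa2 and pH values below are assumed round numbers, not measurements:

```python
# CO2 + H2O -> H2CO3 -> H+ + HCO3-   (dissolved CO2 acidifies the water)
# HCO3-    <->  H+ + CO3^2-          (second dissociation, constant K2)
# At equilibrium: [CO3^2-] / [HCO3-] = K2 / [H+] = 10**(pH - pKa2)

PKA2 = 9.0  # assumed, illustrative value for the second dissociation in seawater

def carbonate_ratio(ph, pka2=PKA2):
    """Ratio of carbonate to bicarbonate ions at a given pH."""
    return 10 ** (ph - pka2)

before = carbonate_ratio(8.2)  # a typical present-day surface pH
after = carbonate_ratio(7.9)   # a more acidic ocean
print(f"carbonate availability falls by a factor of {before / after:.1f}")  # 2.0
```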
Past studies showed that the changing ocean chemistry was thinning the shells of some microscopic creatures. However, the latest study—published in the journal Geology—shows that 7 out of 18 creatures built more shell when exposed to the acidic changes. One theory comes from former WHOI (Woods Hole Oceanographic Institution) member, Justin B. Ries:
“Most likely the organisms that responded positively were somehow able to manipulate … dissolved inorganic carbon in the fluid from which they precipitated their skeleton in a way that was beneficial to them. They were somehow able to manipulate CO2 … to build their skeletons.”
Apparently the process also affects shell-less sea life, such as algae. A lot more research is needed into this discovery, however. Why does it only affect certain marine life in this way, and not all marine life? What about the impact acidification has on coral? Now that some of the animals have been adapting to higher levels of acidification, what will happen to them if the acidic levels should drop again?
A lot of questions definitely need to be answered, especially in regards to the way animals have responded to the acidic changes. According to study co-author and WHOI research specialist, Anne L. Cohen:
“We were surprised that some organisms didn’t behave in the way we expected under elevated CO2. What was really interesting was that some of the creatures, the coral, the hard clam and the lobster, for example, didn’t seem to care about CO2 until it was higher than about 1,000 parts per million [ppm. Current levels are at 380 ppm.]. I wouldn’t make any predictions based on these results. What these results indicate to us is that the organism response to elevated CO2 levels is complex and we now need to go back and study each organism in detail.”
Given the number of sea creatures scientists already know about, and the possible number they have yet to find, this research could take quite a while to complete. It is interesting to see the creatures adapt in such a way, but is it really a good thing?
By Heidi Marshall
Photographic Benthic Observing System (PhoBOS)
Capturing video of the deep seafloor 24/7
This photo shows the PhoBOS video system on top of the MARS science node in Monterey Bay. Image: (c) 2010 MBARI
The Photographic Benthic Observing System (PhoBOS) is an integrated suite of instruments that has been placed at the MARS site to monitor ocean conditions and seafloor life. PhoBOS includes several instruments that measure water temperature and salinity (a Seabird CTD) and ocean currents (an RDI Workhorse ADCP). The package also includes an underwater video camera (Pegasus Zoom Camera) whose pan and tilt can be controlled from shore.
PhoBOS currently sits directly on top of the MARS node and is plugged into one of its eight science ports. Data and video from PhoBOS help engineers and ROV pilots better understand conditions at the MARS site, which makes it easier to design and install equipment on the deep-sea observatory.
The PhoBOS video camera provides another view of the site that is useful for ROV operators working around MARS. Data from the system, including still images, are logged by the MBARI Shore Side Data System (SSDS) and available on the web.
Buddenbrockia plumatellae is a tiny parasitic worm of freshwater bryozoans (moss animals). First described in 1910 by the German natural historian Olaw Schröder, it has been encountered by only a few scientists since.
Infection with Buddenbrockia reduces growth and reproduction and may cause mortality of bryozoan hosts (Canning et al. 2002). There is also some evidence that carp and minnow are susceptible to Buddenbrockia infections transmitted from bryozoans.
The anatomy and life-style of this tiny worm are so unusual that until very recently it remained unclear where to place it in the animal kingdom.
Mature Buddenbrockia plumatellae are colourless, whitish worms with a grainy mass of internal ovoid spores.
The spores have 2 sporoplasms and 4 polar capsules.
Myxozoan diagnostic characters include:
Within the Myxozoa the genera Buddenbrockia and Tetracapsuloides are united in the Malacosporea.
Malacosporean characters include:
Buddenbrockia plumatellae, mature worm. Light micrograph (Photo: A. Gruhl).
Buddenbrockia plumatellae worms in a colony of Plumatella sp. Drawing from original species description (Schröder 1910).
Buddenbrockia plumatellae worm leaving bryozoan host. Light micrograph (Photo: S. Tops).
Detail of spores in a mature worm. Light micrograph (Photo: A. Gruhl).
Conifer vs. Angiosperm
An idea long prevailed that angiosperm trees, having superior reproductive capability compared to conifers, displaced them to extreme habitats considered unfavorable to angiosperms. A study of extant conifers supports the notion that in productive regions of the tropics, leaf structure (needles instead of broad leaves) and vascular structure (tracheids instead of vessels) put conifers at a competitive disadvantage with angiosperms in terms of growth speed. There is evidence that some conifers became extinct as a result. But the "slow seedling" hypothesis does not account for the broad range of habitats occupied by modern conifers. The three most successful conifer families are Pinaceae (Northern Hemisphere), Podocarpaceae (Southern Hemisphere), and Cupressaceae (global). Various distinct mechanisms employed by conifers allow them to escape direct confrontation with angiosperm competitors. Besides the conservative functional traits that allow them to endure stress in extreme habitats, conifers successfully: 1) compete against angiosperms for resources once they are mature, 2) colonize sites following a disturbance, and 3) tolerate low-intensity fires.
International Journal of Plant Sciences, 173 (6) 673-694
An Improved USDA Climate Zone Map
The USDA hardiness zone map is based on mean annual minimum temperature. The most current map was published in 2012 and draws on data from 8,000 weather stations between 1976 and 2005, taking into account elevation and proximity to shorelines. The data are averaged over a relatively recent window because of the known warming trend, but may not reflect accelerating change well enough. A current study uses more of the historical record, focusing on trends in various regions to arrive at a better prediction of how the zones should be drawn. This mathematical modeling strategy suggests that, in general, winters are warming up faster than summers. The southwestern United States shows the least change in average minimum temperatures, while the southern Appalachians show the most. The data also suggest that annual minimum temperatures are, on average, already 1.2°C higher than shown in the 2012 USDA zone map, which would shift one-fifth of the nation into another climate zone. The method is expected to be applicable to precipitation and other climatic variables and may provide a more useful climate zone map in the future.
Advances in Meteorology, Vol 2012, article ID 404876
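The zone arithmetic above is simple to make concrete. USDA zones are 10 °F bands of mean annual minimum temperature, from zone 1 (below -50 °F) to zone 13 (60 to 70 °F); the sketch below ignores the half-zone "a"/"b" subdivisions and uses a made-up site temperature:

```python
def usda_zone(avg_annual_min_f):
    """Map a mean annual minimum temperature (deg F) to a USDA zone number.
    Zones are 10-degree bands: zone 1 lies below -50 F, zone 13 spans 60-70 F."""
    zone = int((avg_annual_min_f + 60) // 10) + 1
    return max(1, min(zone, 13))

# A 1.2 deg C (about 2.2 deg F) warming can push a site across a band edge:
site_min_f = -0.5                   # hypothetical site near a zone boundary
print(usda_zone(site_min_f))        # zone 6 (-10 to 0 F)
print(usda_zone(site_min_f + 2.2))  # zone 7 (0 to 10 F)
```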
Plants Assist in Materials Science Research
“There is plenty of room at the bottom,” is a famous quote by the physicist Richard Feynman in his 1959 Cal Tech lecture, encouraging scientists to think small—really small. Now it appears that plants and fungi can help with that. Silver nanoparticles, used in medicine, optics, and electronics, can be produced within minutes, at room temperature, in an environmentally friendly way without toxic chemicals, by the reduction of silver nitrate in a liquid medium containing various leaf extracts. The latest plant to be used for the process is the strawberry tree (Arbutus unedo); however success has also been achieved using other plant leaf extracts, fungi, and even humic acids. There remains the question though of how environmentally friendly the nanoparticles themselves are once in the environment.
Materials Letters, 76: 18-20
Potential Help for Depression Patients
Chemicals from the South African genera Crinum and Cyrtanthus and from several daffodils have an effect on the part of the human brain involved in clinical depression. Studies showing that the chemicals can traverse the blood-brain barrier (a test most potential drugs fail) are promising. This is just the first stage of lengthy testing procedures, so it will be many years before a drug becomes available.
Journal of Pharmacy and Pharmacology, 64 (11) 1667-1677
By Lawrence Ulrich
Posted 08.30.2011 at 1:22 am
In the wake of the 1973 oil embargo, Detroit automakers tried to convince regular, non-truck-driving Americans to switch to diesel. Diesel engines, after all, burn fuel 30 percent more efficiently than gasoline engines. The carmakers failed, in part because of poor engineering: Between 1978 and 1985, General Motors’s Oldsmobile division produced a series of shoddy, failure-prone engines that gave diesel a bad reputation that persists to this day.
Electric cars might be the future, but for some uses, like the demands of a delivery truck, they just don't have the power or range quite yet. But that doesn't mean giving up and using inefficient materials and construction while waiting for the electric revolution to come. UPS is testing out prototype plastic trucks that reduce the usual truck weight by 1,000 pounds, increase the mileage by up to 40%, and are even more easily serviced.
Volkswagen's latest eye-catching creation, the ultra-efficient diesel-hybrid XL1, debuted at the Qatar Auto Show, immediately garnering attention both for its looks and its specs. But according to German publication Automobilwoche (warning: German), VW actually intends to bring the XL1 to market, albeit in a (very) limited run.
Tiny organisms such as algae offer great promise for a clean energy future by creating biofuels or even hydrogen, if only scientists can figure out how to use them in a cost-efficient way. A startup named Joule Unlimited has hit upon a possible solution, with a genetically tailored organism that sweats out its fuel and lives on to make more, the New York Times reports. The company recently broke ground on a Texas pilot plant that will house the single-cell organisms in flat structures resembling solar panels facing the sun.
A team of gearheads at the University of Wisconsin-Madison has developed an engine that can handle a blend of gasoline and diesel fuel. It outputs low emissions and offers up to 20 percent greater fuel efficiency.
Could a diesel-producing tree be the key to fuel independence?
By Matt Ransford
Posted 04.04.2008 at 11:37 am
Money doesn't grow on trees, so it should stand to reason that diesel fuel wouldn't grow on trees either. And yet the Brazilian Copaifera langsdorfii tree has been quietly producing a natural diesel variant in the tropical rainforest, something we've known about since the seventeenth century. It's only now that farmers in Australia have decided to farm the tree on a large scale in the hopes of having 20,000 living, above-ground fuel wells.
A British backhoe manufacturer takes its new engine to an unlikely work site: Utah's Bonneville Salt Flats
By Tom Colligan
Posted 11.01.2006 at 3:00 am
Past owners of the notoriously wheezy diesel Rabbit will find it hard to believe, but this blurry streak is also powered by a four-cylinder diesel. Two of them actually: one for the front wheels and one for the rear. Built for use in front-loaders and forklifts, the 4.4-liter engines were specially tuned to 750 horsepower each by U.K. construction-equipment company JCB as part of an effort to set a new speed record for a diesel-powered car. It paid off.
Technology beat the clatter and smoke of diesel car engines. It's time for a U.S. comeback.
By Michelle Krebs
Posted 01.29.2002 at 9:00 pm
During the 1980s, a diesel-powered Volkswagen Rabbit was briefly part of my household fleet. It was a particularly frigid Detroit winter, and we had to plug in the Rabbit's engine block heater if we were parked for even an hour or it absolutely refused to start. Even on warm days, our Rabbit hesitated at the touch of the ignition key. Its lack of power necessitated long-term planning for the simplest highway passing maneuvers. The engine clattered, smoked, and smelled like a city bus. The car's sole saving grace was that it traveled miles and miles on a single tank of fuel.
Five amazing, clean technologies that will set us free, in this month's energy-focused issue. Also: how to build a better bomb detector, the robotic toys that are raising your children, a human catapult, the world's smallest arcade, and much more.
Note: This month's "Scientist's Pick" is from Science Buddies' staff scientist, Sandra Slutz. ~ Science Buddies' Editorial Staff
Project: Forensic Science: Building Your Own Tool for Identifying DNA
Scientist: Sandra Slutz
Science Buddies' Difficulty Level: 7-9
Sometimes, when I'm supposed to be sitting down and concentrating on a task, my mind wanders. You never know what will come of those momentary mental strolls. In this case, the result was a pretty cool Do-It-Yourself project to build one of the very basic tools used in a biotechnology lab: an electrophoresis chamber.
I was supposed to be working on a science project called Investigate Native Plant Evolution with Chloroplast Sequencing. The project shows students how to harvest plants indigenous to the area in which they live, extract DNA from the plants, and then sequence the DNA. Then (drum roll for the really awesome part) if the sequence is new, meaning if no one has ever recorded that information before, they can submit the data to GenBank — the public gene sequence data bank — for scientists worldwide to see and use!
The only downside to the project is that it requires access to some specific biotechnology equipment. As I started to write down the list of materials and equipment that the project calls for, I asked myself, "Hey, I wonder if you can build any of this stuff yourself?"
From there my mind went racing through a list of possibilities. What I finally settled on was an electrophoresis chamber, the fancy title for a box that you pass current through to separate DNA into different size pieces and get a look at those pieces. The electrophoresis chamber is one of the most common pieces of equipment in any biotech laboratory. Why? There are literally dozens of reasons, but here's an example:
Imagine you are working in a forensics lab trying to determine if the hair left at a crime scene belongs to any of the suspects. How would you do it? You would isolate DNA from the crime scene hair and some DNA from each suspect. Then you'd cut up each DNA sample using enzymes. When you cut the DNA this way, each person has his or her own unique pattern of pieces (similar to the way each person has a unique fingerprint—in fact the DNA pattern is referred to as a DNA fingerprint). To look at the DNA pattern, you would use an electrophoresis chamber. If one of the suspects' DNA pattern matched the crime scene hair's pattern, you'd be able to place that suspect at the scene of the crime.
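The pattern-matching logic, though not the lab work, can be illustrated with a toy restriction digest in Python. The sequences below are made up, and the cut site mimics EcoRI (which cuts G^AATTC); a real comparison would separate the resulting fragments by size in an electrophoresis chamber.

```python
def digest(seq, site="GAATTC", offset=1):
    """Return sorted fragment lengths after cutting seq at each occurrence
    of the recognition site (offset 1 mimics EcoRI's cut one base in)."""
    cuts, i = [], seq.find(site)
    while i != -1:
        cuts.append(i + offset)
        i = seq.find(site, i + 1)
    bounds = [0] + cuts + [len(seq)]
    return sorted(b - a for a, b in zip(bounds, bounds[1:]))

# Hypothetical sequences, for illustration only.
crime_scene = "AATGAATTCGGCTAGAATTCCCGT"
suspects = {"A": "AATGAATTCGGCTAGAATTCCCGT",
            "B": "TTTTGAATTCAAAACCCGGGTTTT"}

pattern = digest(crime_scene)
for name, dna in suspects.items():
    verdict = "matches" if digest(dna) == pattern else "does not match"
    print(f"Suspect {name} {verdict} the crime-scene pattern")
```

Two samples with the same fragment-length pattern are consistent with the same source, which is exactly what the gel lanes in a real electrophoresis run let you compare by eye.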
As you can see, the electrophoresis chamber is an important tool for forensics research!
It turns out you can make a simple version of an electrophoresis chamber on your own kitchen counter using just a few household items like batteries, a plastic soap dish, some stainless steel wire, and baking soda.
Once you've built your electrophoresis chamber, you have lots of options for putting it to use. Don't feel up to processing DNA on your home-made electrophoresis chamber? No problem! You can use the same equipment to examine food dyes. Did you know that some of the primary colors (like red) in food dyes are actually blends of several colors? Can you guess which colors? Give it a try! You might just find yourself hooked on the power of kitchen biotech.
For similar project ideas, explore the Biotechnology interest area, sponsored by Bio-Rad, in the Science Buddies Project Directory.
Sep 29, 2009
The 60-Second Science series was created with the intention of providing our audience with bite-size, consume-in-one-minute pieces of scientific coverage. This format has proved ideal for our podcasts, but we've missed the option to write longer news and opinion pieces for the blog. We also wanted a way to better highlight the themes that have emerged within the 60-Second Science blogs. The editors put their heads together and the final result is the introduction of five ScientificAmerican.com blog categories:
Sep 25, 2009
Living whales may seem scarce in the world's vast oceans—and their carcasses even more rare. But to animals and bacteria that feed on these graveyards, they are a rich source of life. And to one doctoral researcher in Sweden, they proved to be a source of several new species.
In her dissertation for the University of Gothenburg, Helena Wiklund describes nine new species of polychaete worms found living in whale carcasses and other nutrient-rich areas off the coast of Sweden, Norway and California.
A whale carcass can bring as much nutrition to the seafloor as would otherwise take some 2,000 years to filter down. Wiklund and her coauthors note that although the worms seem to be especially adapted to live in environments such as whale falls, where they feed off the bacteria that cover the bones, they seem to also be thriving in bacteria-rich areas of waste resulting from human activity, such as below fish farms and even pulp mills.
Sep 25, 2009
The U.S. Secretary of Energy—channeling former Soviet leader Nikita Khrushchev perhaps?—has one thing to say in this week's Science to the greenhouse gases emitted by coal-fired power plants: We will bury you. Nobel laureate Steven Chu's department has funneled $3.4 billion in stimulus dollars to research and develop the technology known as carbon capture and storage (CCS).
But to give you a sense of the challenge, here are his estimates of the scale of the challenge: six billion metric tons of coal burned every year, producing 18 billion metric tons of carbon dioxide and requiring an underground storage volume of 30,000 cubic kilometers per year with untold consequences on subsurface pressure, mineral composition and the like. And we are nowhere near that scale: "We now sequester a few million metric tons of CO2 per year," he wrote, largely from cleaning natural gas or so-called "enhanced oil recovery" efforts, in which CO2 is pumped down to flush out more of the valuable petroleum (and therefore not as useful, from a climate perspective, as sequestration for its own sake).
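Chu's coal-to-CO2 figures are consistent with simple mass accounting: each tonne of carbon becomes 44/12 tonnes of CO2. A quick check (the average carbon content of coal is an assumed round number here):

```python
coal_burned = 6e9     # tonnes of coal per year (figure quoted above)
carbon_frac = 0.8     # assumed average carbon mass fraction of coal
co2 = coal_burned * carbon_frac * (44 / 12)   # C -> CO2 mass ratio

print(f"~{co2 / 1e9:.0f} billion tonnes of CO2 per year")  # ~18
```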
Sep 24, 2009
Editor's Note: A team of Rensselaer Polytechnic Institute students are traveling up New York's Hudson River this week on the New Clermont, a 6.7-meter boat outfitted with a pair of 2.2-kilowatt hydrogen fuel cells to power the boat's motor. Their journey began September 21 from Manhattan's Pier 84 and will cover 240 kilometers (at a projected speed of 8 kilometers per hour). After making several stops along the way, the crew expects to arrive back at Rensselaer Polytech's campus in Troy, N.Y., on September 25. This is the third of Scientific American.com's blogs chronicling this expedition, called the New Clermont Project.
The New Clermont Project crew is learning valuable lessons about what it will take to make hydrogen power not only possible but practical as well. After losing both hydrogen fuel-cell-powered boat motors Tuesday, the New Clermont spent Wednesday docked in Beacon, N.Y., while the Rensselaer students figured out what went wrong.
Sep 24, 2009
In an early-morning announcement today, researchers reported that an experimental HIV (human immunodeficiency virus) vaccine effectively reduced the number of people who contracted the virus by nearly a third.
Tested in a U.S.-sponsored trial that involved more than 16,000 volunteers in Thailand, the vaccine was administered starting in 2006 in six injected doses to half of the group, while the other half received a placebo. Seventy-four people in the placebo group had contracted HIV by the end of the trial, whereas only 51 of the vaccinated group tested positive. The injections consisted of two vaccines that had proven unsuccessful on their own: Sanofi-Aventis SA's ALVAC and VaxGen Inc.'s AIDSVAX.
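The "nearly a third" figure follows directly from those two counts:

```python
placebo_infections = 74
vaccine_infections = 51

# Simple efficacy estimate: relative reduction in infections vs. placebo
efficacy = (placebo_infections - vaccine_infections) / placebo_infections
print(f"{efficacy:.0%}")  # 31%
```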
The results came as a surprise to HIV-vaccine skeptics in the AIDS (acquired immunodeficiency syndrome) research field, whose numbers have increased after years of failed vaccine trials. "It's safe to say that the scientific community is caught off-guard," Mitchell Warren, director of the AIDS Vaccine Advocacy Coalition, told Bloomberg News. Before the announcement, Marie-Paule Kieny, director of the World Health Organization's Initiative for Vaccine Research, told the news service: "I don't think that there is a lot of expectation that the efficacy of this vaccine will be very high." A 2007 clinical trial of a vaccine made by Merck was stopped when researchers found that, in fact, more people who received the active vaccine (49) than the placebo (33) had contracted HIV.
Sep 24, 2009
Planetary scientists looking for water ice on Mars have employed a number of tactics to great success in their search. The Phoenix lander dug it up; orbiting radar measurements have seen it under insulating blankets of debris. (Frozen water sublimates to vapor in Mars's climate and so is not stable when exposed at the surface.)
Now a team of researchers has let meteorite impacts do the digging for them—a paper in this week's Science presents observations of fresh impacts and what they turn up from below the surface.
Using instruments on NASA's Mars Reconnaissance Orbiter (MRO), a group led by Shane Byrne, a planetary scientist at the University of Arizona's Lunar and Planetary Laboratory, found five recent impact craters in the Martian mid-latitudes, near the boundary where subsurface ice is thought to be no longer tenable. All were relatively small, ranging in size from about four to 12 meters across.
Sep 24, 2009
BALTIMORE—Deep in the brain, buried in the hippocampus and subventricular zone, reside adult neural stem cells, cells that retain the ability to become other types of neural cells and could serve as possible treatments for ailments ranging from vision impairment to Parkinson's to spinal cord injuries. Doctors, scientists and patients, however, are understandably hesitant to go digging around for them, their location being "a great deterrent," Sally Temple, founder of the New York Neural Stem Cell Institute, said at the 2009 World Stem Cell Summit here on Wednesday.
Researchers, therefore, are anxious to uncover other, more accessible neural stem cell candidates. Temple and her team have turned their sights to the retinal pigment epithelium (RPE), a layer of tissue at the base of the retina that comes into being within 30 to 50 days of conception, before many other parts of the neural system differentiate. Cells from this area of the eye can be easily harvested from retinal fluid that is usually discarded during retinal surgery, she explained.
Sep 23, 2009
Bigger than coyotes but smaller than wolves, their howl is high-pitched and their diet includes deer and small rodents. They are "coywolves" (pronounced "coy," as in playful, "wolves"), and they are flourishing in the northeastern U.S., according to a study published today in Biology Letters.
Although coyote–wolf breeding has been reported in Ontario, where coyotes started migrating from the Great Plains in the 1920s, this study provides the first evidence of coywolves—also known as coydogs or eastern coyotes—in the Northeast. And even though they are more coyote (Canis latrans) than wolf (gray wolves are Canis lupus, and red wolves are Canis rufus), the expansion of these hybrids into western New York State marks the return of wolves to the Empire State.
Sep 23, 2009
A team of Duke University researchers in Durham, N.C., is studying new ways to use the abundance of sensors contained in most smart phones (including the camera, accelerometer, microphone, GPS and Wi-Fi radio) to determine mobile users' precise locations and thereby deliver hyper-localized services. This could enable a business such as Starbucks to text-message a coupon to a person's phone as he or she enters the coffee shop, or it could allow Wal-Mart to send shoppers a listing of sale items as soon as the store's doors slide open. Another option could be to provide blind mobile subscribers with information about where they are as they move from store to store within a mall.
The researchers argue in a paper presented today at the ACM MobiCom 2009 conference in Beijing that the increasing number of sensors on mobile phones presents new opportunities for logical localization, which is more useful to people than simply representing their location as a set of latitude and longitude coordinates.
Sep 23, 2009
Editor's Note: A team of Rensselaer Polytechnic Institute students are traveling up New York's Hudson River this week on the New Clermont, a 6.7-meter boat outfitted with a pair of 2.2-kilowatt hydrogen fuel cells to power the boat's motor. Their journey began September 21 from Manhattan's Pier 84 and will cover 240 kilometers (at a projected speed of 8 kilometers per hour). After making several stops along the way, the crew expects to arrive back at Rensselaer Polytech's campus in Troy, N.Y., on September 25. This is the second of Scientific American.com's blogs chronicling this expedition, called the New Clermont Project.
New Clermont Project team members Jenn Gagner and Jason Kumnick took the helm of the New Clermont for the second leg of the journey between Manhattan and Troy, N.Y. It was rough going for Gagner, a Rensselaer materials science and engineering grad student, and Kumnick, a Rensselaer doctoral student studying decarburization and workability of hardened steels, as the New Clermont suffered repeatedly from engine problems while traveling from Ossining, N.Y., farther up the Hudson to Beacon.
Deadline: Aug 31 2013
Reward: $100,000 USD
The Geoffrey Beene Foundation Alzheimer’s Initiative (GBFAI) is launching the 2013 Geoffrey Beene Global NeuroDiscovery Challenge whose
Deadline: Jun 29 2013
Reward: $7,000 USD
The Seeker for this Challenge desires proposals for chemical methods that could rapidly degrade a dilute aqueous solution
This is an image of Io.
The Jet Propulsion Laboratory
Io has no clouds or lightning. The atmosphere of Io is very thin and does not remain gravitationally bound to Io for very long. Even so, it has an important impact on the Jupiter system.
Io's atmosphere comes from its volcanoes, then disperses because, as a small moon, Io is not massive enough to have substantial gravity. Portions of the atmosphere may also come from other processes which cause molecules to be extracted from the surface.
Because the atmosphere comes from its volcanoes, the air of Io is made primarily of sulfur dioxide.
Once the particles from the atmosphere get into the magnetosphere, they create a donut-shaped cloud of material (the Io torus) along Io's orbit around Jupiter.
The Galileo spacecraft, in exploring the moons of Jupiter, will try to learn more about the atmosphere of Io.
You might also be interested in:
A satellite which has an atmosphere, such as Jupiter's moon Io, will leave a cloud of particles behind as it orbits the planet. Rhea, a body with an icy surface exposed to energetic particles, will have...more
A satellite which has an atmosphere, such as Jupiter's moon Io, and which also is inside a magnetosphere (unlike the Earth's moon), will leave a cloud of particles behind as it orbits the planet. This...more
Most forms of life leave evidence behind that they are there. Plants use up carbon dioxide and release oxygen. Some bacteria are known to release nitrogen into the environment. People leave behind smog,...more
Amalthea was discovered by E. E. Barnard in 1892. Of the 17 moons it is the 3rd closest to Jupiter, with a standoff distance of 181,300 km. Amalthea is about the size of a county or small state, and is just...more
Callisto was first discovered by Galileo in 1610, making it one of the Galilean Satellites. Of the 60 moons it is the 8th closest to Jupiter, with a standoff distance of 1,070,000 km. It is the 2nd largest...more
Most of the moons and planets formed by accretion of rocky material and volatiles out of the primitive solar nebula and soon thereafter they differentiated. Measurements by the Galileo spacecraft have...more
Many examples of the differing types of terrain are shown in this image. In the foreground is a huge impact crater, which extends for almost an entire hemisphere on the surface. This crater may be compared...more
EUNIS is the European Nature Information System. It contains information on selected species, habitats and sites of importance for protecting Europe's biodiversity. The main content concerns the European Community Birds and Habitats directives as well as the Bern Convention. EUNIS has been developed for the European Environment Agency (EEA) by the European Topic Center for Nature Protection and Biodiversity.
The web application has been developed with European Commission funds (IDA-II programme). EUNIS is a live system, which will change over time with new data. The latest data set covers all the nationally designated areas data delivered by countries as one of the EEA priority data sets.
EUNIS was demonstrated at the Joint CHM meeting between the CBD Secretariat and EEA in Prague in September and at the EEA National Focal Point meeting of EEA in October 2003. The EUNIS web application can be found directly at http://eunis.eea.europa.eu or via EC CHM (the European Community Clearing House Mechanism) at http://biodiversity-chm.eea.europa.eu. You can register comments on EUNIS in the feedback section.
Source: European Environment Agency (EEA)
In 2005, corals in the large reef off the coast of Florida were saved by four hurricanes. Tropical storms seem to be unlikely heroes for any living thing. Indeed, coral reefs directly in the way of a hurricane, or even up to 90km from its centre, suffer serious physical damage. But Derek Manzello from the National Oceanic and Atmospheric Administration has found that corals just outside the storm's path reap an unexpected benefit.
Hurricanes can significantly cool large stretches of ocean as they pass overhead, by drawing up cooler water from the sea floor. And this cooling effect, sometimes as much as 5°C, provides corals with valuable respite from the effects of climate change.
As the globe warms, the temperature of its oceans rises and that causes serious problems for corals. Their wellbeing depends on a group of algae called zooxanthellae that live among their limestone homes and provide them with energy from photosynthesis. At high temperatures, the corals eject the majority of these algae, leaving them colourless and starving.
These ‘bleached’ corals are living on borrowed time. If conditions don’t improve, they fail to recover their algae and eventually die. But if the water starts to cool again, they bounce back, and Manzello found that hurricanes can help them to do this.
Which planets have rings?
Officially, how many of the planets of our solar system have rings and what are they? Thanks in advance for your assistance.
All of the giant planets in our solar system have rings: Jupiter, Saturn, Uranus, and Neptune. Jupiter's ring is thin and dark, and cannot be seen from Earth. Saturn's rings are the most magnificent; they are bright, wide, and colorful. Uranus has eleven known dark rings around it, and Neptune's rings are also dark, but contain a few bright arcs.
Last modified: June 4, 2003 9:38:05 PM
Ask an Astronomer is hosted by the Astronomy Department at Cornell University and is produced with PHP and MySQL.
The Sixth Mass Extinction
Should we consider, as some environmental scientists have, that the current biodiversity crisis is the start of a sixth mass extinction? Regardless of how one answers that question, it is clear that we are losing species at rates that, while exceedingly difficult to calculate, are above the background extinction rate and far exceed the speciation rate. Estimates are that 100,000-500,000 species of insects will go extinct in the next 300 years. The higher end of that estimate is comparable to the magnitude of the loss of species during the previous mass extinction episodes. Even the lower estimate represents a considerable loss of biodiversity. Moreover, 300 years is much shorter than the duration of those mass extinction periods.
The current biodiversity crisis stems from several causes. The two major contributors are habitat destruction and global climate change, both of which are largely due to human activity. As discussed earlier, much of the (largely unexplored) biodiversity lies in the tropics and, in particular, tropical rain forests. Tropical forests are being lost at an alarming rate. Conservative estimates place the loss of rain forests during the 1980s and 1990s at about 0.8 percent per year. This is in large part due to changes in the way the land has been used. For quite a long time, many areas had practiced slash and burn agriculture. In recent decades, however, the practice of cutting and clearing has been used increasingly for grazing or timber harvest, resulting in the loss of the tropical forest habitat. As a consequence, countless thousands of species (most of which are unknown to humans) are imperiled.
Global climate change has also impacted biodiversity. During the twentieth century, the mean temperature has increased by slightly more than one degree Fahrenheit (0.6 degree Celsius), and most of that change occurred between 1970 and 2000. Projections vary between x and y degrees Fahrenheit increase by mid-century. These changes do not appear great in the context of daily and seasonal temperature fluctuations, but they are large in comparison with prehistoric climate changes. While the magnitude of these changes is not beyond the range of historical variation, the rate at which the change has taken place appears to be so. The climate change is human-induced, due mainly to increases in carbon dioxide and other "greenhouse gases" that have appeared since the Industrial Revolution and accelerated during the twentieth century.
The human-induced global climate change is coupled with other climate cycles of various temporal and spatial scales. For example, the eastern United States had a cold winter in 2002-3 after several mild winters. In contrast, the western United States had a milder than normal winter that year. The pattern in 2002-3, most likely due to El Niño, does not invalidate the global upward climb in temperatures over a decades-long timespan. In addition to a mean increase in temperature, human-induced global warming is also likely to cause increased variation in climate. Some climate models suggest that the global warming may actually cause the northeastern United States to be cooler. The reason for this seemingly paradoxical possibility is that warming of the oceans could cause the Gulf Stream to be diverted south and east. Were this to happen, it would cause the Atlantic coast to be cooler. Regardless of the specifics of the local changes, more extreme weather will likely exacerbate already fragile ecosystems.
A paper published by Terry Root and her colleagues in 2003 shows that many species have altered their geographic ranges, presumably as a result of global climate change.5 Of those species that had altered their range, eighty percent were in the direction predicted by climate change models. The mean change of movement was about six kilometers per decade. In addition, many bird species have started laying eggs earlier in the spring. This study shows that forces of small, sustained change can be powerful over a long enough time scale. But what about species that are unable to move? What will happen as their habitat changes due to human-induced global climate change?
Because of human-induced climate change and habitat destruction, we face a grave and growing crisis. Biodiversity is being lost at alarming but unknown rates. Moreover, if the diversity-stability hypothesis is true, loss of some species may trigger the loss of others, leading to a vicious circle. Although our knowledge about biodiversity and the extent to which it is lost is meager, the consequences are too grave to continue in ignorance.
End Notes
- Wilson, E. O. 1994. Naturalist. Washington, D.C.: Warner Books, 359.
- Levin, S. A. 1999. Fragile dominion: Complexity and the commons. Cambridge, MA: Perseus Books, 6.
- McCann, K. S. 2000. The diversity-stability debate. Nature 405:228-33.
- Root, T. L., J. T. Price, K. R. Hall, S. H. Schneider, C. Rosenzweig, and J. A. Pounds. 2003. Fingerprints of global warming on wild animals and plants. Nature 421:57-60.
rint - round to nearest integer
#include <math.h>

double rint(double x);
The rint() function rounds x to an integer value according to the prevalent rounding mode. The default rounding mode is to round to the nearest integer.
The rint() function returns the rounded integer value as a floating-point number.
Triangle grid paper (to print out)
- Ask Dr. Math: Archive - Search for Pascal
- Practical Uses for Pascal's Triangle
- One Person of Seven Born on Monday
- Pascal's Triangle and the Ice Cream Cones
- Binomial Expansions and Pascal's Triangle
- Binomial Theorem
- Proof by Induction
- Or search the Dr. Math archives for "Pascal's Triangle" (just the words, not the quotes).
- Ask Dr. Math FAQ: Pascal's Triangle
- What is Pascal's Triangle? How do you construct it? What is it used for? Answers to questions frequently sent to the Math Forum's service Ask Dr. Math. See Pascal's Triangle to 19 Rows and Interactive Pascal's Triangle by Ken Williams of the Math Forum.
- Biographies from the MacTutor Math History Archive (St. Andrews)
- Blaise Pascal
- Omar Khayyam
- Yang Hui
- Eugène Charles Catalan
- The Binomial Expansion and Infinite Series - The MathMan
- Triangular, tetrahedral, even the Fibonacci numbers are in Pascal's triangle. Sample problems with some answers from Chapter 9 of Don Cohen's worksheet book for children as young as 7.
- The Catalan Numbers in Pascal's Triangle - Seth Johnson
- The Catalan Numbers appear in at least two places in Pascal's Triangle, first in the middle column going directly down the center (see illustration), subtracting the element immediately adjacent to it. If one does this on the 2N row, the result is the Nth Catalan Number. Second, they appear one row above: take the Nth term over and subtract the term immediately to the right...
- Combinatorics Topics for K-8 Teachers - Roger Day
- Pascal's Formula and Pascal's Triangle; Exchanging Conjectures and Justifications; Finding Fibonacci's Sequence in Pascal's Triangle. Notes from sessions of a 1996 course for professional development in Mathematics Education at Illinois State University in Normal, Illinois.
- Connections with Combinations and The Binomial Theorem - Hartig
- An exploration of the properties of Pascal's triangle, this site illustrates how Pascal's Triangle is an infinite triangular array of integers, listing some properties and showing evidence that suggests there is a close connection between the numbers in Pascal's Triangle, the numbers generated when counting combinations, and the Binomial Theorem, which identifies the coefficients appearing in expansions of powers of a + b. From the PSW Publishing Company.
- Mathworld (formerly Eric's Treasure Trove of Mathematics) - Pascal's Triangle
- Mathematical definitions and descriptions of numbers you can find in Pascal's triangle.
- Generating Pascal's Triangle - Karl F. Kuhn
- The simplest view of Pascal's Triangle is that it may be generated by affixing a one at either end of each new row and then generating all numbers in between by adding together the two numbers above. The numbers may also be generated by using the idea of combinations found in probability theory. To do this assign a column and row number to each value. Then use the combinations formula to produce the value in question...
- Math on Wheels - Pascal's Triangle - David Fayegh
- Studying the pattern of even and odd numbers in the triangle provides a basis and motivation for visualization. Display even numbers in the triangle using yellow dots and odd numbers using black dots. Noting that each element of the rows of the triangle is just the binomial coefficients n choose k as k runs from 0 to n, we can write Maple code that computes the elements of Pascal's triangle...
- Number Patterns in Pascal's Triangle - Ulysses Harrison
- A lesson plan designed to enable students at grade 5 or higher to recognize the integers, rows and columns that comprise Pascal's Triangle. The main objective of the lesson is to enable students to reproduce the first eleven rows of Pascal's Triangle by recalling number patterns given in the lesson without having to look again at the original triangle.
- Pascal's Fractals - MAA Online, Ivars Peterson (MathLand)
- It's possible to convert Pascal's triangle into eye-catching geometric forms. For example, one can replace the odd coefficients with 1 and even coefficients with 0. Continuing the pattern for many rows reveals an ever-enlarging host of triangles, of varying size, within the initial triangle. In fact, the pattern qualifies as a fractal. The even coefficients occupy triangles much like the holes in a fractal known as the Sierpinski gasket. An article written by the mathematics and physics writer and online editor at Science News.
- Pascal's Pyramid Or Pascal's Tetrahedron? - Jim Nugent
- An advanced exploration. The three-dimensional analog of Pascal's triangle is of interest for the same reasons that Pascal's triangle is of interest. It touches on number theory, the distribution of primes and twin primes, divisors, factors, combinatorics, and geometry. Article abstract: A lattice of octahedra and tetrahedra (oct-tet lattice) is a useful paradigm for understanding the structure of Pascal's pyramid, the 3-D analog of Pascal's triangle. Notation for levels and coordinates of elements, a standard algorithm for generating the values of various elements, and a ratio method that is not dependent on the calculation of previous levels are discussed. Figures show a bell curve in 3 dimensions, the association of elements to primes and twin primes, and the values of elements mod(x) through patterns arranged in triangular plots. It is conjectured that the largest factor of any element is less than the level index. With illustrations (visualizing Pascal's tetrahedron as a stack of marbles...)
- Pascal's Triangle Applet - developer.com
- An applet that computes the binomial coefficients, graphically presenting Pascal's triangle modulo an integer number p. It can draw triangles of up to 650 rows with a p value between 2 and 15,000. The resulting pictures may be good graphical illustrations of the binomial theorem and the laws of division of natural numbers.
- Pascal's Triangle From Top to Bottom - Matthew Hubbard and Tom Roby
- The section "Applications" addresses questions such as "What types of questions are answered by the binomial coefficients?" and draws on Java applets to address "What does Pascal's Triangle look like mod n?" The section "Identities" organizes identities and proofs into the categories sums, products, sums of products, products of sums, factorization identities, identities by name, and identities involving famous numbers (e.g., Euler numbers, Fibonacci numbers, and Stirling numbers).
- Pascal's Triangle Interface - Andrew Granville
- A form that lets you visualize the entries of Pascal's triangle with respect to a modulus between 2 and 16. Select values for the number of rows, modulus, and the size of the image, and submit. Also see Granville's The Arithmetic Properties of Binomial Coefficients I.
- Pascal's Triangle mod 2 - Riddle
- From the Iterated Functions Systems site by Larry Riddle of Agnes Scott College. An illustrated explanation: the figure represents the first 128 rows of Pascal's triangle. The coefficients in the triangle that are odd are displayed as red boxes. The coefficients in the triangle that are even are displayed as black boxes.
- Pascal's Triangle and Programming - Brian Ward
- A common programming project for new students, Pascal's Triangle is useful in number theory, probability, and having fun (among other things). This version came into being after I saw someone in the process of constructing one by hand. I thought it was silly to do that when we have computers, which can save you a lot of time and pain (especially if you add something incorrectly). I remembered that the only triangle programs I'd ever seen were those silly beginner's course programs... so I wrote mine.
- Pascal's Triangle and Related Triangles - Helena Verrill
- Links, Puzzles, and Related Triangles: Circle regions problem, Fibonacci sequence and other diagonals of Pascal's triangle, Clown Problem, Tchebychev Polynomials, Bessel Polynomials, and Stirling numbers.
- Pascal's Triangle using Clock Arithmetic - Part I - Jay's Corner
- Exploring Pascal's triangle when the modulus is a prime; when the modulus is a power of a prime; and when the modulus has at least two different prime divisors.
- Patterns in Pascal's Triangle - Jeremy Baer
- An applet for exploring patterns in the numbers contained in Pascal's triangle, which colors the cells in the first 128 rows of the triangle depending on whether or not they are divisible by some number x, where x is entered by the user. Many of the patterns created tend to be self-similar; for example, for x=2, the resulting pattern looks like Sierpinski's triangle. Some information about Pascal's triangle and a number of related links are also included.
- Pinball and Pascal's Triangle - Eric Hiob
- The picture shows a type of pinball machine that you can build yourself using 10 finishing nails, 5 small cups, a wooden board and a pinball (marble). Nail the nails part way into the board in the triangular pattern shown, with one nail in the top row, two in the second, three in the third and so on, and with enough space for the pinball to fit between the nails. How many different paths are there through the pinball machine and what are they? If we superimpose Pascal's triangle on top of the pinball machine then we see the connection between the two: Each number of Pascal's triangle represents the number of distinct paths that a pinball can take to arrive at that point in the pinball machine... The numbers in Pascal's triangle can also be gotten using the combination function... Pascal's triangle is essentially a listing of all the possible values of the combination function.
- Pizza and Pascal's Triangle - Morganna Letsch
- The Pizza Place offers pepperoni, mushrooms, sausages, onions, anchovies and peppers as toppings for their regular plain pizza. How many different pizzas can be made? The first possible method of solving this problem is to use combinatorics and the following formula: Pt = nC0+nC1+nC2+nC3+...+nCn, where Pt = total number of pizzas that can be made and n = the number of possible toppings. This means: Pt = 6C0+6C1+6C2+6C3+6C4+6C5+6C6. These terms correspond to the rows in Pascal's triangle...
- Polynomials from Pascal's Triangle
- There are many interesting things about polynomials whose coefficients are taken from slices of Pascal's triangle. (These are a form of what's called Chebyshev polynomials.) For example, the numbers 1,9,28,35,15,1 are taken from the 11th diagonal of Pascal's triangle...
- A Short Account of the History of Mathematics, W. W. Rouse Ball (4th ed., 1908)
- From the School of Mathematics, Trinity College, Dublin.
- Sierpinski meets Pascal - Cynthia Lanius
- Have you ever seen the triangular pattern of numbers (shown) named after the famous French mathematician Blaise Pascal? Do you see the pattern of how the numbers are placed in the triangles? Print out or copy the triangle and fill in the missing numbers; then check your answer here and see what Pascal's Triangle has to do with Sierpinski's Triangle.
- Spreadsheets, Pascal's Triangle and Sierpinski Gaskets - Aarnout Brombacher
- Download an Excel spreadsheet to generate diagrams like the one illustrated.
- The Twelve Days of Christmas and Pascal's Triangle - Judy Brown
- Using Pascal's triangle, find the number of items given each day in the song, "The 12 Days of Christmas." See also Pascal's Triangle Activities.
Questions? Write to the workshop facilitators.
Sea Hare Aplysia punctata
The sea hare, a type of sea slug, has tentacles reminiscent of a hare’s ears. It has an internal shell about 1.6 in (4 cm) long that is visible only through a dorsal opening in the mantle. If disturbed, it releases purple or white ink. It is not known if this response is a defense mechanism.
- Class Gastropoda
- Length Up to 8 in (20 cm)
- Habitat Shallow water
- Distribution Northeast Atlantic and parts of the Mediterranean
Major samplers being prepared for placement on the vehicle. The one shown in this image is called a "double pair" because it has two sampling chambers. The robotic claws of the vehicle grab onto the T-shaped handle for placement of the nozzle into the vent orifice.
Sensors, Sampling, and Analyses in Extreme Environments
Associate Professor Oceanography
University of Washington
Sample collection and data acquisition in extreme environments is challenging. Instruments must be able to withstand the pressures associated with great ocean depths (>200-600 times those we live at), corrosive hot fluids (>700°F), cold seawater, and long periods on the seafloor. They also have to survive the onslaught of opportunistic microbes and animals looking for a new place to live. Because of the inherent difficulties of working in this environment, scientific instruments have to be innovative and durable.
Major water sample being taken from a black smoker on the Juan de Fuca Ridge. These samplers are used both in acidic systems, like black smoker chimneys, and in basic environments such as Lost City. To withstand hot, acidic fluids a lot of the parts on the samplers are made out of titanium.
Hydrothermal Vent Fluids
Collecting hydrothermal fluids and gases might seem like an easy task, but it actually requires special instruments coupled with skillful ROV operations. The first type of instrument is the major sampler , which collects fluids through a thin titanium nozzle placed inside the vent opening. When the syringe-like plunger is triggered, it extracts about 750 ml of fluid into the barrel of the instrument. The fluids are later analyzed for major and trace elements and some isotopes. Chemical analyses of the fluids collected by the major sampler help scientists understand the fluid-rock interactions beneath the surface and within the chimneys, constrain the temperature at which rocks are altered, and even show the influence of microbial processes. In addition, they may provide insight about the composition of fluids during evolution of the early Earth.
Fluid samples also provide a way to recover microorganisms that live within the fluid-saturated walls of the carbonate towers at Lost City. As vent fluids are sucked into the major samplers, micron-sized organisms that live within the chimneys are also sucked in.
Once the samplers are onboard, microbiologists take small portions of the fluids and put them into test tubes filled with different nutrients to try and grow the organisms. Other fluid samples are filtered so that the researchers can extract DNA from the microbes and look at their genetic composition.
Mausmi Mehta, a graduate student in Oceanography at the University of Washington, is shown processing microorganisms on the 2003 Lost City cruise. The syringes are filled with different nutrients and gases and inoculated with organisms obtained from rock and fluid samples. The tubes are then placed in ovens at a variety of temperatures (usually 50 to 90°C) to grow the organisms.
Gas-tight bottles readied for deployment. The titanium bottles withstand the harsh vent environment. The small snorkel is deployed within the vent orifice and the bottle is then "triggered" to immediately suck in vent fluid. Vent fluid temperatures are taken before sampling.
Gases from the Deep
Another instrument used to collect hydrothermal fluid is the titanium gas-tight sampler. Like the major sampler, it consists of a nozzle attached to a barrel, which collects the fluid after sampling. However, it is less than half the size. Unlike the major samplers, which can depressurize and lose gas as they ascend to the surface, gas-tight samplers are specifically designed to stay completely sealed to prevent any gas loss. Before use, the air is removed from gas-tights to minimize contamination and achieve nearly instantaneous sampling. If seawater is sucked into the major and gas-tight samplers, it is harder to analyze the fluids' compositions. ROV pilots have to be very precise when placing the nozzle in the vent and taking the sample.
After sampling, a gas extraction line onboard the ship is used to remove gas from the sampler, dry the gas, and package it into a glass ampule (small glass capsule) for later analysis of gas composition, helium content, and stable and radiocarbon isotopes. Helium isotopes can indicate whether there are magmatic processes impacting the development of the vents, while carbon isotopes provide age information and insights into the origin of the gases. For example, methane, a dominant gas at Lost City, is generated by seawater alteration of the mantle rocks beneath the field, and perhaps by millions of microorganisms that live within the carbonate chimneys.
Rocks and Life
Rocks: Rocks and life at Lost City are intertwined. To sample the vent chimneys and the underlying crustal rocks we use the strong jaws located at the ends of the manipulator arms on the robotic vehicles . The "fingers" on the jaws pry the rocks off of the cliffs, or break pieces of the chimneys and then store them in baskets located at various places on the vehicle. Once onboard, the rocks are cataloged, extensively imaged and described and then stored in a variety of ways for follow-on geochemical and biological analyses.
A gas-tight sampler is attached to a gas extraction line (titanium cylinder being heated with a heat gun). Giora Proskurowski, a recent graduate student in Oceanography at the University of Washington, is studying the gases at Lost City in collaboration with Dr. Marvin Lilley. The gases are trapped within this complex line by opening and closing a variety of valves and by cooling of the gases under vacuum.
Animals: Life thrives on the surfaces of the rocks and within the chimneys. Researchers studying the larger animals at Lost City use tweezers to sample most of the animals because at Lost City, the "large" animals are rarely more than 1/2 an inch in size and most are much smaller. Animals are also collected using suction samplers (underwater vacuum cleaners) attached to submersibles and robotic vehicles. A nozzle attached to a suction tube is used to "vacuum" the surfaces of the chimneys. Animals are sucked into a variety of small chambers and then brought to the surface. These animals are preserved in a variety of ways and photographed with a special microscope. Some animals are frozen so that their DNA can be later analyzed to determine what kind of species they are. One of the most fascinating things about Lost City is that, although animals are not very abundant within the field, their diversity is as high as or higher than that of black smoker systems on the Mid-Atlantic Ridge. Numerous species never before seen were recovered on the expedition to Lost City in 2003, and we believe that we may find more on this cruise.
Dr. Tim Shank, from Woods Hole Oceanographic Institution, sampling animals from Lost City carbonate samples during the 2003 expedition. Tim has found many new species of animals at Lost City. Dr. Gretchen Früh-Green from Zürich, is working with Tim, describing the rocks that the biological samples are taken from. Click image for larger view and image credit. (HR)
Microbes: Whenever samples of rocks come onboard, there is much excitement in the laboratory by the microbiologists and a rush to get rock samples as fast as possible. Many of the microbes that live within the chimneys cannot tolerate oxygen. If left too long they will die. Because of this, the microbiologists rapidly subsample the rocks by breaking small pieces off. They immediately transfer some of the material into specially designed "airtight" glove bags that are filled with nitrogen gas. The conditions within the bag are anaerobic, meaning that there is no oxygen. Built-in gloves that protrude into the bag allow the microbiologists to process the samples without introducing air into the environment. Similar to processing of the vent fluid samples, some of the rock material is placed in test tubes with a variety of nutrients and gases (e.g. methane, carbon dioxide) added to them. Special gas-tight caps are used to seal the test tubes, keeping the gas in and the air out. These tubes are then placed in ovens set at different temperatures to grow or culture the organisms. Most microbes are very difficult to grow, however, and so a lot of the information on these organisms comes from analyzing their genetic composition using DNA and RNA analyses. To do this, the rocks are frozen at -40 to -112 Fahrenheit! | <urn:uuid:16840c66-1fcf-45fb-afdf-8a4da89579f7> | 3.640625 | 1,808 | Knowledge Article | Science & Tech. | 40.356226 |
The Hydrological Cycle
We read in Oceans
and Climate that:
- Sunlight reaching earth's surface is absorbed mostly in the tropics,
mostly in the tropical ocean.
- The absorbed sunlight warms the ocean, which cools mainly by evaporation
from the surface (think of it as the ocean sweating to keep cool).
Water evaporated from the ocean eventually condenses as water droplets
in clouds. If the cloud grows large enough, the droplets coalesce
and fall as precipitation, mostly as rain, sometimes as snow or ice.
- 74% of all water evaporated into the atmosphere falls as precipitation
on the ocean, mostly in the tropics.
- 26% falls on the land.
But the distribution of rainfall is very uneven.
- Some of the water runs into streams, lakes, and
rivers, which return the water to the ocean.
- Some soaks into the ground (infiltrates) and becomes groundwater.
The water then can percolate deeper into the ground supplying water
to subsurface reservoirs. The rate of infiltration depends on:
- The type of soil. Sandy soils absorb water faster than
- Vegetation, which tends to delay runoff.
- Water content
of the soil. Soils saturated with water absorb little more.
- Rainfall rate.
- Some evaporates back into the air, or it is absorbed by plants,
which transpire the water into the air. This is called evapotranspiration.
The cycling of water molecules from the ocean to the atmosphere
to the land and back to the ocean, and the storage in various reservoirs,
is called the hydrological cycle or water cycle. Here are the major parts
of the cycle.
World water cycle and estimated residence times.
From United Nations Environmental Programme: Vital
Global average rainfall map. Notice the importance
of tropical rain. From Negri et al (2004).
Most of earth's water is in the oceans, and most of the fresh water
is in ice and below ground (groundwater).
Very little water is available for human use. For example, only 0.91%
of all earth's water is available as fresh ground water or surface water.
Only 0.009% (3% times 0.3%) is available in lakes, rivers, and swamps.
Most of the fresh water available for human use is ground water.
From US Geological Survey Earth's
The problem with water is, it is not uniformly distributed. It is
not often available where it is needed. Globally, there is enough precipitation
to serve 6.5 billion people. But many people live in desert regions or
in densely populated regions, leading to water shortages in these regions.
The U.S. receives enough annual precipitation to cover the entire
country to a depth of 30 inches. This 30 inches is known as the U.S.
water budget. The eastern half of the country receives more rainfall
than the western half. Most of this precipitation returns to the water
cycle through evapotranspiration. Of the 30 inches of rainfall, 21
inches returns to the atmosphere in this manner. Water loss by plants,
the transpiration portion of evapotranspiration, is most significant.
One tree transpires approximately 50 gallons of water a day. Approximately
8.9 inches of annual precipitation flows over the land in rivers and
returns to the ocean. Only 0.1 of an inch of precipitation infiltrates
into the ground water zone by gravity percolation. The actual amount
of water that enters the ground water zone for any specific area depends
upon the annual rainfall in that area.
States Water Budget, Purdue University.
Therefore if we want to understand water use on land, we must focus
on groundwater, even though rivers and lakes are much more visible. Most
people get water from wells. Roughly half the population served by public
water systems use ground water.
Human Influence on the water Cycle
We read in the Anthropocene that
human activity has a significant influence on the hydrological cycle
at the global level. About 40% of the total global runoff to the oceans
has been captured for human use (Steffan et al, 2004: 113). Groundwater
is being used faster than it is replenished in most dry areas of the
world. We have extensively altered river systems through impoundments
and diversions to meet their water, energy, and transportation needs.
There are >45,000
dams above 15 m high, capable of holding back >6500 km3 of water (1),
or about 15% of the total annual river runoff globally. (Nillson et al,
Overall Water Use
We use water in households, to grow crops, to manufacture goods,
and to carry off waste and sewage. Most uses require clean,
unpolluted water free of harmful molecules, yet the very use of the
water tends to add pollution. As a result, clean water is often scarce,
and most easily accessible sources of
water have been developed. In some regions, clean water is not available.
Most water is consumed by agriculture. In the following table of global
water use, note that some uses withdraw water from reservoirs, but the
water is returned. The difference between what is withdrawn and what
is returned is consumption. Most domestic water is returned to streams
via city sewage systems.
Evaporation from reservoirs
Table from World Water Council, Water
At A Glance.
Original data from Shiklomanov,
For information on water use in the US, read Estimated
Use of Water in the United States in 2000 (Hutson, 2005). Here is the
breakdown of use in Texas from the Texas Environmental Almanac: Water
Quantity: Chapter 1:
1990 WATER USE IN TEXAS
(millions of acre-feet)
|Category of Use||Total||Ground Water Use||Surface Water Use||% of Total|
|Steam Electric Power||0.43||0.06||0.38||2.8%|
Changes in Texas water use from 1974 to 2001. Although total water use (in
millions of acre-feet) has not changed much, the distribution of use has
changed. More water is being used by cities, less by irrigated crops.
Historical Water Use Data, Texas Water Development Board.
Household Water Use
The amount of water used by each household varies between countries,
with households in the US using the most.
From: Manitoba State
of the Environment Report 1997: Issues
The American Water Works Association has studied the use of water by
households in the US. They found
The North American households
included in this study use approximately 146,000 gallons annually.
Of this amount, 42 percent (61,300 gallons) is used indoors. The
remaining 58 percent (84,700 gallons) is used outdoors.
In households that utilized water-efficient fixtures, Clothes washers
assume the role of top water user (15 gallons per capita per day), followed
by faucets (10.9 gallons per capita per day), showers (10 gallons per
capita per day) and toilets (9.6 gallons per capita per day). NOTE: The
REUWS study group did not contain a significant number of homes with
water conserving clothes washers.
Works Association Fact Sheet.
From: American Water Works Association: Residential
End Use of Water,
cited by State of Washington Water
Click on figure for a zoom.
In Texas, household use averages 167 gallons per person
per day. Of this, about 25% is used for lawns and outdoors in Spring
Who Owns Water
The ownership of water and water rights has a long and complex history.
Rights vary from country to country, and from state to state in the US.
In general, states own surface water, but the federal government exercises
its right to control use of rivers and pollutants dumped into rivers.
Texas Water Law Synopsis
Here is a brief summary of Texas water law from the Texas
Texas water law is - in a word - complex. It has its roots in Hispanic
law and in English common law and has been hammered into its current
form by more than 200 years of legislation and court cases.
Basically, water rights in Texas are divided into two categories: groundwater
and surface water.
Groundwater law, which pertains to any water that is underground, is
fairly limited. Groundwater includes water percolating through soil
and rock, underground flow in confined channels, artesian water, and
In Texas, groundwater is considered the property of the owner of the
surface property from which it is pumped - much like a mineral or oil
The English common law of "rule of capture" is
in force, allowing landowners to pump as much as they want without
regard to how such action might affect a neighbor's water supply.
Generally, surface water is owned by the state.
All natural streams, rivers, lakes, watersheds and bays of the Gulf
of Mexico are considered property of the state. There are exceptions,
however. Surface water can be used for domestic purposes and for livestock.
Texans who own property next to a body of water are free to make reasonable
use of it.
For more information consult the Texas
Water Resources Education web page on Water
Law at Texas A&M
University, and in the Handbook of Texas Online article on Water
Various units are used to measure water volume:
1 gallon (US liquid) = 3.785 411 8 liters
1 acre-foot = 325,851 gallons = an area about the size
of a football field covered with one foot of water = 1,233,480 liters
Hutson,Susan S.; Nancy L. Barber, Joan F. Kenny, Kristin S. Linsey,
Deborah S. Lumia, and Molly A. Maupin (2005) Estimated
Use of Water in the United States in 2000. U.S. Geological
Survey Circular 1268, 15 figures, 14 tables, with revisions.
Negri, A. J., R. F. Adler, et al. (2004). A 16-year climatology of global
rainfall from SSM/I highlighting morning versus evening differences.
13th Conference on Satellite Meteorology and
Oceanography, Norfolk, VA,
American Meteorological Society.
Nilsson, C., C. A. Reidy, et al. (2005). "Fragmentation and Flow
Regulation of the World's Large River Systems." Science 308
Steffen, W., A. Sanderson, et al. (2003). Global
Change and the Earth System, Springer.
23 December, 2008 | <urn:uuid:27e3678b-1c6e-4387-9a22-8b030f5dd9d1> | 4.09375 | 2,281 | Knowledge Article | Science & Tech. | 53.047599 |
Demons in the History of Science
Part one of two: Laplace’s Demon
Some might say that modern-day physicists have it easy; they can appeal to the public with their stories of eleven-dimensional universes, time travel, and a quantum world that is stranger than fiction. But the basis of such appeal remains what the appeal of pursuing science always was and will be: a greater understanding of the environment, ourselves, and knowledge itself.
Just like Schrödinger’s cat, a popular thought experiment by famous physicist Erwin Schrödinger, Laplace’s Demon and Maxwell’s Demon are two other thought-experiments in scientific thinking which are important for what they reveal about our understanding of the universe. It may only interest you to learn of these thought-experiments for the sake of reinforcing the philosophical relevance and beauty that science has always sought to provide.
Jim Al-Khalili, author of Quantum: A Guide for the Perplexed, affirms that fate as a scientific idea was disproved three-quarters of a century ago, referring, of course, to the discoveries of quantum mechanics. But what does he mean when he says this? Prior to such discoveries, it was still reasonable to argue for a deterministic universe, meaning that scientists could still consider the idea of a world in which one specific input must result in one specific output, and thus the sum of all these actions and their consequences could "determine" the overall outcome, or fate, of such a world.
Pierre-Simon Laplace, born on March 23, 1749, was a French mathematician and astronomer whose work largely founded the statistical interpretation of probability known as Bayesian probability. He lived in a world before Heisenberg's uncertainty principle and chaos theory, and thus he was free to imagine such a deterministic universe:
We may regard the present state of the universe as the effect of its past and the cause of its future. An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect were also vast enough to submit these data to analysis, it would embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes.
Laplace, A Philosophical Essay on Probabilities
Laplace thought about what it would be like if it were possible to know the positions, masses, and velocities of all the atoms in existence, and hypothesized a being, later known as Laplace's demon, which would know all this information and thus be able to calculate all future events.
With our modern knowledge of physics, the Heisenberg uncertainty principle and chaos theory, such a being could not exist, because information about atoms cannot be observed with enough precision to calculate and predict future events. (By the way, "enough" precision here means infinite precision!) This might be good news for those who believe in free will, since that concept would not be permitted in a deterministic universe governed by Laplace's demon.
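The need for infinite precision can be made concrete with a toy example (my own illustration, not from the original essay): in the chaotic logistic map, two trajectories started an immeasurably small distance apart diverge until prediction fails entirely.

```python
# Sensitive dependence on initial conditions: two trajectories of the
# chaotic logistic map x_{n+1} = r * x * (1 - x), started a hair apart.
def logistic_trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.4)
b = logistic_trajectory(0.4 + 1e-10)  # error far below any real measurement

# The tiny initial difference roughly doubles each step: after a few steps
# the trajectories still agree closely, but after ~35 doublings they are
# completely unrelated, so long-range prediction is impossible.
print(abs(a[5] - b[5]))    # still tiny
print(abs(a[50] - b[50]))  # no longer small
```

Any finite measurement error, however small, eventually dominates, which is why Laplace's demon would need literally infinite precision.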
Interestingly enough, The Heisenberg Uncertainty Principle and Chaos Theory are not the only restrictive challenges that scientists have faced in trying to understand the properties and bounds of our universe. The Second Law of Thermodynamics is also of concern to scientists and philosophers alike, as we will learn with the birth of another mind-boggling demon. | <urn:uuid:eb507624-cb7e-4304-a8ad-c575ddaa2888> | 3.203125 | 737 | Personal Blog | Science & Tech. | 22.659054 |
Peer Reviewed: Droughts have not increased since the 1950s
According to a commonly used model of drought patterns, researchers had previously assumed that higher global temperatures were causing greater evaporation of water, and therefore more droughts.
But a more detailed analysis of weather data, including wind speed, humidity and radiation levels, found that in fact there has been “little change” in drought over the past 60 years.
Researchers from Princeton University and the Australian National University said drought was “expected to increase in frequency and severity” in the future, but added that currently used prediction methods are inaccurate.
Overestimating the influence of temperature on evaporation could skew estimates of the likely impacts of climate change over the coming decades, they reported in the Nature journal.
The Intergovernmental Panel on Climate Change (IPCC) estimates that global temperatures have risen by about 0.13°C per decade for the last 50 years – nearly twice the rate of increase for the last 100 years.
In a report published in 2007, the IPCC claimed that “more intense and longer droughts have been observed over wider areas since the 1970s”, adding that “increased drying linked with higher temperatures and decreased precipitation has contributed to changes in drought”.
In a recent review, however, the statement was significantly revised to recognise that over-reliance on temperature recordings to predict evaporation may have inflated estimates of drought at regional and global scales.
Now in their new study, the American and Australian scientists have outlined “more realistic calculations” which suggest major uncertainty over drought trends since 1950, and little sign of an increase in the overall area affected by droughts.
They added that droughts may cause hot weather, and not the other way around, because there is less of a cooling effect from evaporation due to lower rainfall levels.
In a linked comment article Prof Sonia Seneviratne of the ETH science university in Zurich wrote: “The authors’ results confirm the complexity of the processes that lead to changes in drought conditions.
“The findings imply that there is no necessary correlation between temperature changes and long-term drought variations, which should warn us against using any simplifications regarding their relationship.”
Prof Piers Forster, Professor of Physical Climate Change at the University of Leeds, said: “This study is an important contribution highlighting the complexity of drought prediction but it does not make me downgrade the substantial threat to harvests posed by climate change.
“In terms of staple harvests of wheat and maize, high temperatures at certain times of the growing season, for example temperatures above 35C at the time of wheat flowering, can kill off crops.”
but..the science is settled???
science by its very nature is NEVER settled..and anyone who says it is is telling a lie for a reason..once again the doomsters and warmists are wrong..but you won't hear them retract their previous accusations..they just ignore it and wait for "good news"
Color-Color Analysis of GLIMPSE Point Sources
Daniel Capellupo, University of Rochester
For my project, I made use of new data from the Galactic Legacy Infrared Mid-Plane Survey Extraordinaire (GLIMPSE), which uses the Spitzer Space Telescope. GLIMPSE covers the longitude range |l| = 10-65 degrees and the latitude range b = -1 to +1 degrees in four mid-infrared wavelength bands: 3.6, 4.5, 5.8, and 8.0 microns. For more information, please visit the GLIMPSE website, or take a look at Benjamin et al. 2003 (1).
I spent a lot of time this summer learning how to create, and then analyze, what are called color-color diagrams. In astronomy, a color is defined as the difference between the magnitudes at two different wavelengths. On a color-color diagram, we plot one color versus another color. Another useful plot is a color-magnitude diagram, which plots the magnitude at a certain wavelength versus some color. Sometimes, we will color-select objects, meaning we isolate a group of objects that are located in a certain area on the color-color diagram. When I began my work here, I was provided with a list of point sources from the GLIMPSE survey that had a [K]-[8.0] color of at least 3. This means that all these objects were at least 3 magnitudes brighter at 8.0 microns than at 2.17 microns (the K band). Basically, these are very red objects; in fact, there are only 44,464 objects with this color selection out of the 30 million point sources in the GLIMPSE catalog. In addition, I had a list of OH/IR stars from a catalog by Sevenster. Laura Chomiuk, one of Prof. Churchwell's graduate students, provided me with these two lists. A color-color diagram was then created by making a contour diagram of the [K]-[8.0]>3 sources, along with red symbols representing the OH/IR stars and green points representing regular field stars:
This diagram has [3.6]-[4.5] on the x-axis and [3.6]-[8.0] on the y-axis. As you can see from the contour diagram, there are two distinct peaks in the distribution of the [K]-[8.0]>3 objects. So, it became my job to try to figure out what kinds of objects are populating these two peaks, what I call my “mystery objects.”
In addition, I should note that the color-color plots, as well as any other plot I made, were created using IDL.
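As an illustration only (the project itself used IDL, and all magnitudes below are made up), here is a minimal sketch of how colors are formed from catalog magnitudes and how the [K]-[8.0] > 3 cut is applied:

```python
import numpy as np

# Hypothetical magnitudes for four point sources
# (K = 2.17 microns, plus two of the IRAC bands).
K   = np.array([12.0, 14.5, 13.2, 15.0])
m36 = np.array([10.8, 13.9, 11.0, 14.2])   # [3.6]
m45 = np.array([10.1, 13.7, 10.2, 14.0])   # [4.5]
m80 = np.array([ 8.5, 13.0,  9.0, 13.5])   # [8.0]

# A "color" is a magnitude difference; smaller magnitude = brighter,
# so [K]-[8.0] > 3 selects sources much brighter at 8.0 microns than at K.
red = (K - m80) > 3.0

# Axes of the color-color diagram described in the text.
x = (m36 - m45)[red]   # [3.6]-[4.5]
y = (m36 - m80)[red]   # [3.6]-[8.0]
print(red, x, y)
```

The resulting `x` and `y` arrays are what would be fed to a contouring or scatter-plot routine to reproduce a diagram like the one above.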
|Page 1||Page 2||Page 3|
|References and Links| | <urn:uuid:0d7c7819-2992-4b8d-bed6-ad3b32e6ee1a> | 3.03125 | 592 | Personal Blog | Science & Tech. | 66.517324 |
The Oldest Trees on the Planet
Trees are some of the longest-lived organisms on the planet. At least 50 trees have been around for more than a millennium, but there may be countless other ancient trees that haven't been discovered yet.
Trees can live such a long time for several reasons. One secret to their longevity is their compartmentalized vascular system, which allows parts of the tree to die while other portions thrive. Many create defensive compounds to fight off deadly bacteria or parasites.
And some of the oldest trees on earth, the great bristlecone pines, don't seem to age like we do. At 3,000-plus years, these trees continue to grow just as vigorously as their 100-year-old counterparts. Unlike animals, these pines don't rack up genetic mutations in their cells as the years go by.
Some trees defy time by sending out clones, or genetically identical shoots, so that one trunk's demise doesn't spell the end for the organism. The giant colonies can have thousands of individual trunks, but share the same network of roots.
Pando
While Pando isn't technically the oldest individual tree, this clonal colony of Quaking Aspen in Utah is truly ancient. The 105-acre colony is made of genetically identical trees, called stems, connected by a single root system. The "trembling giant" got its start at least 80,000 years ago, when all of our human ancestors were still living in Africa. But some estimate the woodland could be as old as 1 million years, which would mean Pando predates the earliest Homo sapiens by 800,000 years. At 6,615 tons, Pando is also the heaviest living organism on earth... http://www.wired.com/wiredscience/2010/03/old-tree-gallery/all/1?npu=1&mbid=yhp
Find mudminnows and pikes information at Animal Diversity Web
The mudminnows and pike group is a small group of freshwater fish, including 5 species of mudminnows and 5 species of pikes. This group is found only in the northern parts of the Northern Hemisphere, including northern North America and Eurasia. Pikes are long, streamlined, predatory fish, capable of great bursts of speed. They are voracious, and will eat almost anything they can get into their mouths. Pikes are popular with sport fishermen, because they put up a big fight when caught and can grow to be quite large. Mudminnows are smaller than pikes, but are also efficient predators that capture prey by ambushing them with speed.
autoconf.info: Using System Type
Using the System Type
How do you use a canonical system type? Usually, you use it in one or
more `case' statements in `configure.ac' to select system-specific C
files. Then, using `AC_CONFIG_LINKS', link those files which have
names based on the system name, to generic names, such as `host.h' or
`target.c' (*note Configuration Links::). The `case' statement
patterns can use shell wild cards to group several cases together, like
in this fragment:
case $target in
i386-*-mach* | i386-*-gnu*)
             obj_format=aout emulation=mach bfd_gas=yes ;;
i960-*-bout) obj_format=bout ;;
esac
and later in `configure.ac', use:
Note that the above example uses `$target' because it's taken from a
tool which can be built on some architecture (`$build'), run on another
(`$host'), but yet handle data for a third architecture (`$target').
Such tools are usually part of a compiler suite; they generate code for
a specific `$target'.
However `$target' should be meaningless for most packages. If you
want to base a decision on the system where your program will be run,
make sure you use the `$host' variable, as in the following excerpt:
case $host in
*-*-msdos* | *-*-go32* | *-*-mingw32* | *-*-cygwin* | *-*-windows*)
  MINGW32=yes ;;
*)
  MINGW32=no ;;
esac
You can also use the host system type to find cross-compilation
tools. *Note Generic Programs::, for information about the
`AC_CHECK_TOOL' macro which does that.
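As a brief illustration (the fallback value `:' here is a conventional no-op, and the surrounding macro choice is a typical pattern rather than a quote from this manual), a `configure.ac' might look for a host-prefixed `ranlib' like this:

     AC_CANONICAL_HOST
     # When cross-compiling, look for a prefixed tool such as
     # `arm-linux-ranlib'; fall back to `:' (a no-op) if none exists.
     AC_CHECK_TOOL([RANLIB], [ranlib], [:])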
Created Mon Nov 8 17:41:59 2004 on tillpc with info_to_html version 0.9.6. | <urn:uuid:de2cf026-2ccd-409e-a2c5-d24075a4e1bc> | 3.09375 | 458 | Documentation | Software Dev. | 52.64432 |
Data reported by the weather station: 918300 (NCAI)
Latitude: -18.83 | Longitude: -159.76 | Altitude: 4
To calculate annual averages, we analyzed data from 365 days (100% of the year).
If an average or annual total is missing data for 10 or more days, it is not displayed.
A total rainfall of 0 (zero) may indicate that no such measurement was made and/or that the weather station does not broadcast it.
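The missing-data rule above can be sketched as follows (my own illustration; the daily values are hypothetical, only the 10-day threshold comes from the page):

```python
# An annual average is reported only when fewer than 10 days are missing.
def annual_average(daily_values):
    """daily_values: one reading per day of the year, None if missing."""
    present = [v for v in daily_values if v is not None]
    missing = len(daily_values) - len(present)
    if missing >= 10:
        return None  # suppressed: too many gaps to trust the average
    return round(sum(present) / len(present), 1)

year_good = [26.0] * 360 + [None] * 5    # 5 days missing -> displayed
year_bad  = [26.0] * 350 + [None] * 15   # 15 days missing -> suppressed
print(annual_average(year_good), annual_average(year_bad))
```

The same threshold would apply to totals such as precipitation, not just averages.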
|Annual average temperature:||26.0°C||365|
|Annual average maximum temperature:||28.4°C||365|
|Annual average minimum temperature:||22.5°C||365|
|Annual average humidity:||80.2%||365|
|Annual total precipitation:||1654.60 mm||365|
|Annual average visibility:||26.9 Km||365|
|Annual average wind speed:||18.1 km/h||365|
Number of days with extraordinary phenomena.
|Total days with rain:||186|
|Total days with snow:||0|
|Total days with thunderstorm:||8|
|Total days with fog:||0|
|Total days with tornado or funnel cloud:||0|
|Total days with hail:||1|
Days of extreme historical values in 1993
The highest temperature recorded was 31.1°C on December 27.
The lowest temperature recorded was 15.6°C on July 8.
The maximum wind speed recorded was 92.4 km/h on September 26. | <urn:uuid:61eaf593-cd35-473f-99f8-fae9ab85e175> | 2.75 | 360 | Structured Data | Science & Tech. | 72.456166 |
Writing an Event Handler for the Button
Go Up to Getting started with IntraWeb Index
The form does not yet perform any actions when the user clicks the OK button.
For information on editing the main form, see Editing the Main Form.
You will now write an event handler that will display a greeting when the user clicks OK.
- Double-click the OK button on the form. An empty event handler is created in the editor window, like the one shown here:
procedure TformMain.IWButton1Click(Sender: TObject);
begin
end;
Using the editor, add code to the event handler so it looks like the following:
procedure TformMain.IWButton1Click(Sender: TObject);
var
  s: string;
begin
  s := editName.Text;
  if Length(s) = 0 then
    WebApplication.ShowMessage('Please enter your name.')
  else
  begin
    WebApplication.ShowMessage('Hello, ' + s + '!');
    editName.Text := '';
  end;
end;
For information about running the completed application, see Running the Completed Application. | <urn:uuid:0571336f-7fb5-4b44-94b1-30b1025aa7db> | 2.6875 | 236 | Documentation | Software Dev. | 50.984089 |
Science says: The warming trend is the same in rural and urban areas, measured by thermometers and satellites.
Surveys of weather stations in the USA have indicated that some of them are not sited as well as they could be. This calls into question the quality of their readings.
However, when processing their data, the organizations that collect the readings take into account any local heating or cooling effects, such as might be caused by a weather station being located near buildings or near the tarmac at an airport. This is done, for instance, by weighting (adjusting) readings after comparing them against those from more rural weather stations nearby.
More importantly, for the purpose of establishing a temperature trend, the relative level of single readings is less important than whether the pattern of all readings from all stations taken together is increasing, decreasing, or staying the same from year to year. Furthermore, since this question was first raised, research has established that any error that can be attributed to poor siting of weather stations is not enough to produce a significant variation in the overall warming trend being observed. Even groups that have recreated the global temperature record on their own, with the intent to prove that there are problems with the data, have admitted that there is no substance to the claim.
It's also vital to realize that climate change is not based simply on ground-level temperature records. Other, completely independent temperature data compiled from weather balloons, satellite measurements, and from sea and ocean temperature records, also tell a remarkably similar warming story.
Confidence in climate science depends on the correlation of many sets of these data from many different sources in order to produce conclusive evidence of a global trend.
Science says: Numerous studies into the urban heat island effect and microsite influences find they have a negligible effect on long-term trends, particularly when averaged over large regions.
The goal of improving temperature data is something we can all agree on and on this point, the efforts of Anthony Watts and Steve McIntyre are laudable. However, their presupposition that improving temperature records will remove or significantly lower the global warming trend is erroneous.
Adjusting for urban heat island effect
When compiling temperature records, NASA's GISS goes to great pains to remove any possible influence from urban heat island effect. They compare urban long-term trends to nearby rural trends. They then adjust the urban trend so it matches the rural trend. The process is described in detail on the NASA website (Hansen 2001).
They found in most cases, urban warming was small and fell within uncertainty ranges. Surprisingly, 42% of city trends are cooler relative to their country surroundings as weather stations are often sited in cool islands (a park within the city). The point is they're aware of UHI and rigorously adjust for it when analyzing temperature records. (More on the urban heat island effect.)
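NASA's homogenization procedure is more elaborate than this (see Hansen 2001), but the core idea of adjusting an urban series so its long-term trend matches nearby rural stations can be sketched as follows; every number here is synthetic, purely for illustration:

```python
import numpy as np

def linear_trend(years, temps):
    """Least-squares slope in degrees C per year."""
    slope, _intercept = np.polyfit(years, temps, 1)
    return slope

def adjust_urban(years, urban, rural_slope):
    """Remove the excess warming so the urban trend matches the rural one."""
    excess = linear_trend(years, urban) - rural_slope
    return urban - excess * (years - years[0])

years = np.arange(1950, 2011, dtype=float)
rural = 14.0 + 0.015 * (years - 1950)   # hypothetical rural series
urban = 14.3 + 0.025 * (years - 1950)   # nearby city: extra 0.010 deg/yr of UHI

adjusted = adjust_urban(years, urban, linear_trend(years, rural))
# The adjusted urban series now warms at the rural rate (0.015 deg/yr here),
# so any urban heat island contribution is removed from the trend.
```

In the real analysis the rural comparison series is itself an average over many neighboring stations, and the adjustment can be piecewise rather than a single straight line.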
Climate Audit and NASA's "Y2K" glitch
Steve McIntyre's discovery of a glitch in the GISS temperature data is an impressive achievement. Make no mistake, it's an embarrassing error on the part of NASA. But what is the significance?
Figure 1 compares the global temperature trend from before and after adjustments. Before the error was discovered, the trend was 0.185°C per decade. After corrections were made, the trend was still 0.185°C/decade. The change to the global mean was less than one thousandth of a degree. (More on NASA's Y2K glitch.)
Figure 1.Global temperature anomaly before (red squares) and after (black diamonds) NASA's "Y2K" corrections (Open Mind).
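To see why a correction of less than a thousandth of a degree cannot move a decadal trend, here is a toy calculation on synthetic anomalies (not the actual GISS data; the trend and noise levels are merely chosen to resemble Figure 1):

```python
import numpy as np

def decadal_trend(years, anomalies):
    """Least-squares warming trend in degrees C per decade."""
    return 10.0 * np.polyfit(years, anomalies, 1)[0]

rng = np.random.default_rng(0)
years = np.arange(1975, 2008, dtype=float)
# ~0.185 deg C/decade underlying trend plus year-to-year noise.
anoms = 0.0185 * (years - 1975) + rng.normal(0.0, 0.1, years.size)

corrected = anoms.copy()
corrected[years >= 2000] -= 0.00015   # shift post-2000 values by 0.15 mK

before = decadal_trend(years, anoms)
after = decadal_trend(years, corrected)
print(round(before, 3), round(after, 3))  # difference far below 0.001
```

Shifting a handful of years by tenths of a millikelvin changes the fitted slope by a few hundred-thousandths of a degree per decade, which is why the "Y2K" correction left the 0.185°C/decade trend intact to three decimal places.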
Other lines of evidence for rising temperatures
The surface temperature trends are also confirmed from multiple, independent sources:
- Surface temperature analysis by NASA GISS finds strong agreement with two independent analyses by CRU's Global Temperature Record and NCDC.
- Weather balloon measurements found that from 1975 through 2005 the global mean near-surface air temperature warmed by approximately 0.23°C/decade.
- Satellite measurements of lower atmosphere temperatures show temperature rises between 0.16°C and 0.24°C per decade since 1982.
- Ice core reconstructions found the 20th century to be the warmest of the past five centuries, confirming the results of earlier proxy reconstructions.
- Sea surface temperatures, borehole reconstructions and ocean temperatures all show long-term warming trends.
Science says: Independent studies using different software, different methods, and different data sets yield very similar results. The increase in temperatures since 1975 is a consistent feature of all reconstructions. This increase cannot be explained as an artifact of the adjustment process, the decrease in station numbers, or other non-climatological factors.
There are three prominent reconstructions of monthly global mean surface temperature (GMST) from instrumental data (figure 1): NASA's GISTEMP analysis, the CRUTEM analysis (from the University of East Anglia's Climatic Research Unit), and an analysis by NOAA's National Climatic Data Center (NCDC).
Figure 1. Comparison of global (land and ocean) mean surface temperature reconstructions from NASA GISS, the University of East Anglia's CRU, and NOAA NCDC.
How reliable are these temperature reconstructions? Various questions have been raised about both the data and the methods used to produce them. Now, thanks to the hard work of many people, we can conclude that the three global temperature analyses are reasonable, and the true surface temperature trend is unlikely to be substantially different from the picture drawn by NASA, CRU, and NOAA.
The three GMST analyses have much in common, though there are significant differences among them as well. All three have at their core the monthly temperature data from the Global Historical Climatology Network (GHCN), and all three produce both a land-stations-only reconstruction and a combined land/ocean reconstruction that includes sea surface temperature measurements.
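Conceptually, a combined land/ocean reconstruction weights each component by the fraction of the globe it covers. The sketch below is deliberately crude, using two illustrative scalar anomalies where the real analyses blend gridded fields:

```python
# Crude sketch of a land/ocean combination: weight each component by its
# share of the Earth's surface (~29% land, ~71% ocean). Anomaly values
# here are illustrative, not taken from any actual reconstruction.
land_anom, ocean_anom = 0.9, 0.4        # deg C, illustrative anomalies
land_frac, ocean_frac = 0.29, 0.71      # approximate surface fractions
combined = land_frac * land_anom + ocean_frac * ocean_anom
print(round(combined, 3))  # 0.545
```

Because the ocean dominates the weighting, combined land/ocean series run cooler and smoother than land-only series, which is why the two kinds of reconstruction are reported separately below.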
Let's explore the reliability of these reconstructions, from several different angles.
The data and software used to produce these reconstructions are publicly available
Source code and data to recreate GISTEMP and CRUTEM are available from NASA and CRU websites. (The data set provided by CRU excludes a fraction of the data that were obtained from third parties, but the results are not substantially affected by this).
The software has been successfully tested outside of NASA and CRU
Both GISTEMP and CRUTEM have been successfully implemented by independent investigators. For example, Ron Broberg has run both the CRUTEM and GISTEMP code. In addition, the Clear Climate Code project has duplicated GISTEMP in Python. Figure 2 shows a comparison of the output of the GISTEMP reconstruction process as implemented by NASA and by Clear Climate Code ... but since the results are identical, the second line falls exactly on top of the first.
Figure 2. The GISTEMP land/ocean temperature analysis as implemented by NASA and by Clear Climate Code. Results of the two analyses are effectively identical.
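The claim that two implementations are "effectively identical" can be checked mechanically: subtract the two output series and confirm the residual is below rounding noise. A sketch with made-up numbers, not the actual GISTEMP or Clear Climate Code output:

```python
import numpy as np

# Hypothetical outputs of two independent implementations of the same
# reconstruction: if the algorithms match, the series should agree to
# within floating-point rounding.
official = np.array([-0.21, -0.05, 0.12, 0.31, 0.54])
reimplementation = official + 1e-9   # tiny numerical differences only

identical = np.allclose(official, reimplementation, atol=1e-6)
max_diff = np.max(np.abs(official - reimplementation))
print(identical, max_diff < 0.001)  # True True
```

When two series pass this kind of test, plotting them gives exactly the effect described above: the second line falls on top of the first.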
Similar results can be obtained using different software and methods
Over the past year, there has been quite a flurry of "do-it-yourself" temperature reconstructions by independent analysts, using either land-only or combined land-ocean data. In addition to the previously-mentioned work by Ron Broberg and Clear Climate Code, these include the following:
(There are probably others as well that we're omitting!)
Most recently, the Muir Russell investigation in the UK was able to write their own software for global temperature analysis in a couple of days.
For all of these cases, the results are generally quite close to the "official" results from NASA GISS, CRU, and NOAA NCDC. Figure 3 shows a collection of seven land-only reconstructions, and Figure 4 shows five global (land-ocean) reconstructions.
Figure 3. Comparison of seven land-only temperature reconstructions.
Figure 4. Comparison of land-ocean reconstructions, 1900-2009.
Obviously, the results of the reconstructions are quite similar, whether they're by the "Big Three" or by independent analysts.
The temperature increase is not an artifact of the GHCN adjustment process
Most of the analyses shown above actually use the raw (unadjusted) GHCN data. Zeke Hausfather has done comparisons using both the adjusted and raw versions of the GHCN data set, and as shown in fig. 5, the results are not substantially different at the global scale (though 2008 is a bit of an outlier).
Figure 5. Comparison of global temperatures from raw and adjusted GHCN data, 1900-2009 (analysis by Zeke Hausfather).
The temperature increase is not an artifact of declining numbers of stations
While it is true that the number of stations in GHCN has decreased since the early 1990s, that has no real effect on the results of spatially weighted global temperature reconstructions. How do we know this?
- Comparisons of trends for stations that dropped out versus stations that persisted post-1990 show no difference in the two populations prior to the dropouts (see, e.g., here and here and here).
- The spatial weighting processes (e.g., gridding) used in these analyses make them robust to the loss of stations. In fact, Nick Stokes has shown that it's possible to derive a global temperature reconstruction using just 61 stations worldwide (in this case, all the stations from GISTEMP that are classified as rural, have at least 90 years of data, and have data in 2010).
- Other data sets that don't suffer from GHCN's decline in station numbers show the same temperature increase (see below).
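The robustness to station dropout follows directly from how gridding works: each grid box contributes its mean anomaly, weighted by area (via the cosine of latitude), so removing stations from a well-sampled box barely changes the box mean or the global mean. A minimal sketch with synthetic stations; the cell size and binning here are illustrative, not the actual GISTEMP scheme:

```python
import numpy as np

def gridded_global_mean(lats, lons, anoms, cell=30.0):
    """Area-weighted global mean anomaly from station data (sketch).

    Stations are binned into cell x cell degree boxes; each box contributes
    its mean anomaly weighted by cos(latitude), so a dense cluster of
    stations counts no more than a lone station in a sparse region.
    """
    boxes = {}
    for lat, lon, a in zip(lats, lons, anoms):
        key = (int(lat // cell), int(lon // cell))
        boxes.setdefault(key, []).append(a)
    num = den = 0.0
    for (i, _), vals in boxes.items():
        lat_mid = (i + 0.5) * cell              # box center latitude
        w = np.cos(np.radians(lat_mid))         # area weight
        num += w * np.mean(vals)
        den += w
    return num / den

# 50 clustered European stations vs. 1 station elsewhere: dropping 40 of
# the clustered stations leaves the gridded global mean unchanged.
lats = [50.0] * 50 + [-20.0]
lons = [10.0] * 50 + [130.0]
anoms = [0.5] * 50 + [0.1]
full = gridded_global_mean(lats, lons, anoms)
thinned = gridded_global_mean(lats[:10] + lats[-1:], lons[:10] + lons[-1:],
                              anoms[:10] + anoms[-1:])
print(full == thinned)  # True: box means don't depend on station count
```

This is why losing stations mostly hurts only where coverage was already thin, and why a 61-station reconstruction can still track the global trend.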
One prominent claim (by Joe D'Aleo and Anthony Watts) was that the loss of "cool" stations (at high altitudes, at high latitudes, and in rural areas) created a warming bias in the temperature trends. But Ron Broberg conclusively disproved this by comparing trends after removing the categories of stations in question. D'Aleo and Watts are simply wrong.
The temperature increase is not an artifact of stations being located at airports
This might seem like an odd statement, but some people have suggested that the tendency for weather stations to be located at airports has artificially inflated the temperature trend. Fortunately, there is not much difference in the temperature trend between airport and non-airport stations.
The temperature increase is present in other data sets, not just GHCN
All of the above studies rely (mostly or entirely) on monthly station data from the GHCN database. But it turns out that other, independent data sets give very similar results.
Figure 6. Comparison of global temperatures from the Global Historical Climatology Network (GHCN) and Global Summary of the Day (GSOD) databases. (Analysis by Ron Broberg and Nick Stokes).
What about satellite measurements of temperatures in the lower troposphere? There are two widely cited analyses of temperature trends from the MSU sensor on NOAA's polar orbiting earth observation satellites, one from Remote Sensing Systems (RSS) and one from the University of Alabama-Huntsville (UAH). These data only go back to 1979, but they do provide a good comparison to the surface temperature data over the past three decades. Figure 7 shows a comparison of land, ocean, and global temperature data from the surface reconstructions (averaging the multiple analyses shown in figs. 3 and 4) and from satellites (averaging the results from RSS and UAH):
Figure 7. Comparison of temperatures from surface stations and satellite monitoring of the lower troposphere.
We'll end by looking at all the surface and satellite-based temperature trends over the entire period for which both are available (1979-present). What are the trends in the various data sets and regions? As shown in fig. 8, the surface temperature trends over land have a fair amount of variability, but all lie between +0.2 and +0.3°C/decade. Surface trends that include the oceans are more uniform.
Figure 8. Comparison of temperature trends, in degrees C per decade.
Overall, the satellite measurements show lower trends than surface measurements. This is a bit of a puzzle, because climate models suggest that overall the lower troposphere should be warming about 1.2X faster than the surface (though over land there should be little difference, or the surface should be warming faster). Thus, there are at least three possibilities:
- The surface temperature trends show slightly too much warming.
- The satellite temperature trends show slightly too little warming.
- The prediction of climate models (about amplified warming in the lower troposphere) is incorrect, or there are complicating factors that are being missed.
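The size of the puzzle is easy to quantify: multiply a surface trend by the model-expected amplification factor and compare the implied lower-troposphere trend with what the satellites report. The numbers below are illustrative, not the actual GISS, RSS, or UAH values:

```python
# Illustrative arithmetic for the surface/satellite comparison.
surface_trend = 0.17     # deg C/decade, illustrative global surface trend
amplification = 1.2      # model-expected ratio of LT to surface warming
expected_lt = surface_trend * amplification
observed_lt = 0.14       # deg C/decade, illustrative satellite LT trend
shortfall = expected_lt - observed_lt
print(round(expected_lt, 3), round(shortfall, 3))  # 0.204 0.064
```

With numbers of this magnitude, the gap between expected and observed tropospheric warming is a few hundredths of a degree per decade, small enough that any of the three possibilities above could account for it.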
It should be noted that in the past the discrepancy between surface and satellite temperature trends was much larger. Correcting various errors in the processing of the satellite data has brought them into much closer agreement with the surface data.
The well-known and widely-cited reconstructions of global temperature, produced by NASA GISS, UEA CRU, and NOAA NCDC, are replicable.
Independent studies using different software, different methods, and different data sets yield very similar results.
The increase in temperatures since 1975 is a consistent feature of all reconstructions. This increase cannot be explained as an artifact of the adjustment process, the decrease in station numbers, or other non-climatological factors.