Allegro CL version 9.0
Unrevised from 8.2 to 9.0.

1.0 Garbage collection introduction
Lisp does its own memory management. The garbage collector is part of the memory management system. It disposes of objects that are no longer needed, freeing up the space that they occupied. While the garbage collector is working, no other work can be done. Therefore, we have made the garbage collector as fast and unobtrusive as possible. Allegro CL uses a version of the generation-scavenging method of garbage collection. Because optimal performance of generation-scavenging garbage collection depends on the application, you have a great deal of control over how the garbage collector works. In this section, we will describe the user interface to the garbage collector, and suggest how to tune its performance for an application. In what follows, the generation-scavenging garbage collection system will be abbreviated gsgc, and the act of garbage collecting will be abbreviated gc.
The Allegro CL garbage collector is a two-space, generation-scavenging system. The two spaces are called newspace and oldspace. Note that, as we describe below, newspace is divided into two pieces, called areas, and oldspace may be divided into a number of pieces, also called areas. Generally, when we say newspace, we mean both newspace areas and when we say oldspace, we mean all oldspace areas. We try to use the word area when we want to refer to a single area, but please note that this naming convention is new and you may run into text that uses `oldspace' to refer to an oldspace area. Usually, the context should make this clear.
The two pieces of newspace are managed as a stop-and-copy garbage collector system. The two areas are the same size. At any one time, one area is active and the other is not. A newspace area is filled from one end to the other. Imagine, for example, a book shelf. Existing books are packed together on the right side. Each new book is placed just to the left of the leftmost book. It may happen that books already placed are removed, leaving gaps, but these gaps are ignored, with each new book still being placed to the left of the location of the last new book. When the shelf fills up, the other shelf (newspace area) is used. First, all books remaining are moved to the other shelf, packed tight to one side, and new books are placed in the next free location.
So it is with Lisp objects in newspace areas. All existing objects are packed together on one side of the area and new objects are placed in the free space next to the existing objects, with gaps left by objects which are no longer alive being ignored. When the area fills up, Lisp stops and copies all live objects to the other area, packing them tight. Then the process is repeated.
The process of copying objects from one newspace area to the other is called a scavenge. We will discuss the speed of scavenges below, but scavenges are supposed to be so fast that humans usually barely notice them.
A scavenge happens in one of the following cases:

1. The gc function is called (with no arguments or with the argument :tenure).
2. There is not enough room in the active newspace area to satisfy an allocation request.
3. Freeing a static-reclaimable object (one allocated with :allocation :lispstatic-reclaimable, for example; see cl:make-array in implementation.htm), or a weak-vector or finalization will cause aclmalloc space to be freed (see Section 10.0 Weak vectors, finalizations, static arrays, etc.).
The first listed cause is under user-control. The second and third causes are under system control and the resulting scavenge cannot be prevented if the system determines it must occur.
The system keeps track of the age of objects in newspace by counting the number of scavenges that the object has survived. The number of scavenges survived is called the generation of an object. When objects are created, they have generation 1, and the generation is increased by 1 at each scavenge.
Of course, many objects become garbage as time passes. (An object is garbage when there are no pointers to it from any other live object. If there are no pointers to an object, nothing can reference or access it and so it is guaranteed never to be looked at again. Thus, it is garbage.) The theory of a generation scavenging garbage collector is that most objects that will ever become garbage will do so relatively quickly and so will not survive many scavenges.
The problem with a stop-and-copy system is that objects that survive have to be moved and moving objects takes time. If an object is going to be around for a while (or for the entire Lisp session), it should be moved out of newspace to some place where it does not have to be moved (or is moved much less often). This is where the other half of the generation scavenging algorithm comes into play. Once an object has survived enough scavenges, it is assumed to be long-lived and is moved to oldspace. Oldspace is not touched during scavenges and so objects in oldspace are not moved during scavenges, thus saving considerable time over a pure stop-and-copy system.
Part of a scavenge is checking the age (generation) of surviving objects and moving those that are old enough to oldspace. The remaining objects are moved to the other newspace area. The age at which objects are tenured is user-settable. Its initial value is 4 and that seems to work for many applications. We will discuss below how changing that (and many other) settings can affect gc performance.
The process of moving an object to oldspace is called tenuring and the object moved is said to be tenured. At one point, oldspace was also called tenured space and you may see that term occasionally in Allegro CL documents.
Note the assumption: objects that survive a while are likely to survive a long while.
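The aging and tenuring bookkeeping just described can be sketched in portable Common Lisp. This is an illustrative model only, not the actual collector (whose internals are not user-visible): the obj struct, the scavenge function, and the *generation-spread* variable here are hypothetical stand-ins, though 4 is the real default tenure age.

```lisp
;; Illustrative model of generation aging and tenuring -- NOT the
;; real Allegro CL collector.
(defstruct obj (generation 1) (live-p t))

(defparameter *generation-spread* 4)   ; tenure age; 4 is the real default

(defun scavenge (newspace oldspace)
  "Simulate one scavenge: drop garbage, age survivors, tenure the old.
Returns the surviving newspace and the augmented oldspace."
  (let (survivors tenured)
    (dolist (o newspace)
      (when (obj-live-p o)                    ; garbage is simply not copied
        (incf (obj-generation o))             ; one more scavenge survived
        (if (>= (obj-generation o) *generation-spread*)
            (push o tenured)                  ; old enough: move to oldspace
            (push o survivors))))             ; copy to the other newspace area
    (values survivors (append tenured oldspace))))
```

Note that the model captures the two key properties of the real algorithm: dead objects cost nothing (they are never copied), and long-lived objects eventually stop being copied at every scavenge.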
If one could know exactly how long an object is going to survive, one could provide the best possible garbage collection scheme. But that knowledge is not available. Objects are created all the time by different actions and users and even application writers typically do not know what actions create objects or how long those objects will live. Indeed, that information often depends on future events that are hard to control -- such as the behavior of the person running the application.
So the algorithm makes that assumption: if an object survives for a while, it is likely to survive for a long while, perhaps forever (forever means the length of the Lisp session). Of course, for many objects this assumption is wrong: the object may become garbage soon after it is tenured. However, as we said above, scavenges (which are automatic and cannot be prevented by a user although they can be triggered by a user) do not touch oldspace. In order to clear garbage from oldspace, a global garbage collection (global gc) must be done. An interface for automating global gc's is provided in Allegro CL and different interfaces are easy to implement (see below for more information). The two important points about global gc's are that they take much longer than a scavenge and that they never happen unless requested, either explicitly or by an automated trigger.
The Lisp heap grows upwards (to higher addresses). Oldspaces are at low addresses and newspace occupies addresses higher than any oldspace area. This means that newspace can grow without affecting oldspace and oldspace can grow (usually by creating a new oldspace area) by having newspace move up as far as necessary.
Why might newspace grow? Suppose, for example, a newspace area is 600 Kbytes and you want to allocate a 1 Mbyte array. Newspace has to grow to accommodate this.
Why might oldspace grow? As objects are tenured to oldspace, it slowly fills up. Even with regular global gc's, it can fill up. When it does, newspace moves up and a new old area is created. (New areas are created rather than a single area being expanded for various technical reasons. We discuss below how to reduce sizes dynamically. See Section 3.5 Can other things be changed while running?.)
We will not describe the internal algorithms of the garbage collector because they cannot be changed or modified by users in any way. But let us consider how newspace might be moved, as this might make the process clearer. Suppose the current scavenge is about to move live objects to the high address area. Before anything is moved, Lisp can compute how much space the live objects need, how much space objects waiting to be allocated need, and how much space a new old area needs. From that information, it can compute the highest address of the high address newspace area. It requests from the Operating System that the area be allocated (using malloc or sbrk), and once the Operating System confirms the allocation, starts copying the live objects to that high address, filling toward lower addresses. When all live objects have been moved and new objects are allocated in the high address newspace area, the new oldspace area (if one is required) can be created and the location of the low address newspace area can be determined. Recall that the high address newspace area is active, so the low address newspace area does not contain anything of importance.
A consequence of what we just said about newspace moving when it has to grow or when a new oldspace area is needed is that the size of the Lisp image can grow while it is running. This is usually normal, indeed what you want. It allows images to start small and grow as much (but no more) than they need. It also allows the same image to run effectively on machines with different configurations.
But, sometimes growth can be unexpected and the image can want to grow to a size larger than the Operating System can handle (usually because there is not enough swap space).
The growth is often necessary, because of the type of application being run. What is important is that the growth be managed and be no more than is really needed.
In earlier releases, space for foreign code loaded into the image, space for foreign objects, and direct calls to malloc all could cause a gap to be placed above newspace. If a new oldspace or a larger newspace was needed, it had to be placed above the gap, causing in some cases a small need for additional space to result in a multimegabyte increase in image size. Now, malloc space is placed away from the new and old spaces and so the Lisp heap (new and old spaces together) are unaffected and can grow incrementally as needed. There is a Lisp heap size specified by the lisp-heap-size argument to build-lisp-image (see building-images.htm). The OS will try to reserve this space when Lisp starts up. If more space is needed, Lisp will request it from the OS but it is possible more space will not be available. If this happens, you might increase the original request.
The space reserved in a running Lisp is reported as 'resrve' on the
'Lisp heap' line of the output of
(room t). If the
heap grows larger than that size, gaps may appear. If you see gaps in
your application, you should consider starting with an image with a
larger heap size.
Application writers and users can control the behavior of the garbage collector in order to make their programs run more efficiently. This is not always easy, since getting optimal behavior depends on knowing how your application behaves and that information may be difficult to determine. Also, there are various paths to improvement, some of which work better than others (but different paths work better for different applications).
One thing to remember is that (unless the image needs to grow larger than available swap space), things will work whether or not they work optimally. You cannot expect optimal gc behavior at the beginning of the development process. Instead, as you gather information about your application and gc behavior, you determine ways to make it work better.
The automated gc system is controlled by switches and parameters (they are listed in Section 5.0 System parameters and switches below). There is not much difference between a switch and a parameter (a switch is usually true or false, a parameter usually has a value) and there probably should not be a distinction, but these things are hard to change after they are implemented. The functions gsgc-switch and gsgc-parameter can be used to poll the current value and (with setf) to set the value of switches and parameters.
The function gsgc-parameters prints out the values of all switches and parameters:
cl-user(14): (sys:gsgc-parameters)
 :generation-spread 4
 :current-generation 4
 :tenure-limit 0
 :free-bytes-new-other 131072
 :free-percent-new 25
 :free-bytes-new-pages 131072
 :expansion-free-percent-new 35
 :expansion-free-percent-old 35
 :quantum 32
 (switch :auto-step) t
 (switch :use-remap) t
 (switch :hook-after-gc) t
 (switch :clip-new) nil
 (switch :gc-old-before-expand) nil
 (switch :next-gc-is-global) nil
 (switch :print) t
 (switch :stats) t
 (switch :verbose) nil
 (switch :dump-on-error) nil
cl-user(15):
gsgc-switch can poll
and set switches while gsgc-parameter can poll and set parameters.
Here we poll and set the :print switch:
cl-user(15): (setf (sys:gsgc-switch :print) nil)
nil
cl-user(16): (sys:gsgc-switch :print)
nil
cl-user(17): (setf (sys:gsgc-switch :print) t)
t
cl-user(18): (sys:gsgc-switch :print)
t
cl-user(19):
The gc function can be used to toggle some of the switches.
The system will cause a scavenge whenever it determines that one is necessary. There is no way to stop scavenges from occurring at all or even to stop them from occurring for a specified period of time.
However, you can cause a scavenge by calling the gc function with no arguments:
(excl:gc) ;; triggers a scavenge
You can also cause a scavenge and have all live objects tenured by
calling the gc function with
the argument :tenure, like this:

(excl:gc :tenure) ;; triggers a scavenge and tenures all live objects
Global gc's (a gc of old and new space) are not triggered automatically (but triggering can be automated). You can trigger a global gc by calling gc with the argument t:
(excl:gc t) ;; triggers a global gc
See Section 6.0 Global garbage collection for information on other ways to trigger a global gc and ways to automate global gc's.
The function room provides
information on current usage (it identifies oldspaces and newspace and
free and used space in each). Setting the
:print, :stats, and :verbose switches causes the system to print
information while gc's are occurring. See
Section 5.4 Gsgc switches and
Section 3.1 How do I find out when scavenges happen?.
(room t) provides the most
information about the current state of memory management. Here is a
(room t) output from an Allegro CL image that has
been doing a fair amount of work. This is from a UNIX machine and was
done immediately after a global gc, so some of the oldspaces have
significant free space.
CL-USER(1): (room t)
area area  address(bytes)        cons         other bytes
 #   type                     8 bytes each
                               (free:used)    (free:used)
   Top #x106a0000
   New #x10598000(1081344)      134:13113    340128:582984
   New #x10490000(1081344)        -----           -----
 1 Old #x10290000(2097152)         0:0       2095952:0
 0*Old #x10000e80(2683264)         0:68273         0:2124792
     Tot (Old Areas)               0:68273   2095952:2124792
* = closed old area
Root pages: 61
  Lisp heap:      #x10000000  pos: #x106a0000  resrve: #x10fa0000
  Aclmalloc heap: #x64000000  pos: #x64011000  resrve: #x640fa000
  Pure space:     #x2d1aa000  end: #x2d747ff8
code   type                                items     bytes
112: (SIMPLE-ARRAY T)                       6586    930920 28.3%
  1: CONS                                  80502    644016 19.6%
  8: FUNCTION                               8432    520192 15.8%
  7: SYMBOL                                17272    414528 12.6%
117: (SIMPLE-ARRAY CHARACTER)               2259    259984  7.9%
 96: (SHORT-SIMPLE-ARRAY T)                17498    151736  4.6%
 18: BIGNUM                                 2966    139928  4.3%
125: (SIMPLE-ARRAY (UNSIGNED-BYTE 8))         31     87816  2.7%
 12: STANDARD-INSTANCE                      3291     52656  1.6%
  9: CLOSURE                                2301     39688  1.2%
 15: STRUCTURE                               666     24944  0.8%
127: (SIMPLE-ARRAY (UNSIGNED-BYTE 32))         9      9920  0.3%
108: (SHORT-SIMPLE-ARRAY CODE)                16      7368  0.2%
 10: HASH-TABLE                              108      3456  0.1%
 17: DOUBLE-FLOAT                            120      1920  0.1%
111: (SHORT-SIMPLE-ARRAY FOREIGN)             51      1216  0.0%
 16: SINGLE-FLOAT                            141      1128  0.0%
118: (SIMPLE-ARRAY BIT)                       11       296  0.0%
 20: COMPLEX                                  11       176  0.0%
 80: (ARRAY T)                                 7       168  0.0%
 11: READTABLE                                 8       128  0.0%
123: (SIMPLE-ARRAY (SIGNED-BYTE 32))           1        88  0.0%
 13: SYSVECTOR                                 3        48  0.0%
 85: (ARRAY CHARACTER)                         1        24  0.0%
 total bytes = 3292344
aclmalloc arena:
   max size  free bytes  used bytes   total
       112        3472         112     3584
       496        3472         496     3968
      1008        2016        2016     4032
      2032           0       12192    12192
      4080           0        8160     8160
      9200       18400       18400    36800
 total bytes:    27360       41376    68736
CL-USER(2):
Newspace is divided into two equal size parts (only one of which is
used at any time). There can be numerous oldspaces: two are shown in
the example, but many more are common after Lisp has run for a
while. Oldspaces are numbered. The gsgc-parameter
:open-old-area-fence takes such a number as its value (see
Section 5.1 Parameters that control generations and tenuring
for information on gsgc-parameters).
The 0th old area in the output is closed, as indicated by the
asterisk. If there are no closed old areas (i.e.
(sys:gsgc-parameter :open-old-area-fence) returns 0), then no asterisks show
up and the "* = closed old area" note isn't given. When asterisks are
shown, they denote any old areas that are closed. See the discussion
of :open-old-area-fence in
Section 5.1 Parameters that control generations and tenuring
and also the note on closed old areas after the table for information
on closed and open old areas.
Root pages: 61
Root pages contain information about pointers from oldspace to newspace.
Lisp heap:      #x10000000  pos: #x106a0000  resrve: #x10fa0000
Aclmalloc heap: #x64000000  pos: #x64011000  resrve: #x640fa000
Pure space:     #x2d1aa000  end: #x2d747ff8
The first value is the starting address of the specified heap in memory. The `Pure space' line only appears in Lisps which use a pll file (see pll-file), showing where the pll file is mapped.
In the first two lines, pos (position) is the highest-use location; this is one byte larger than the highest memory used by the lisp. Some operating systems will only commit (assign physical pages) to memory between base (inclusive) and position (exclusive). This is a hexadecimal address value. resrve (reserved) is the number of bytes lisp thinks is reserved to it in virtual memory space. On some operating systems which support it, addresses greater than position, but less than starting location+reserved, will not be overwritten by shared-libraries, other memory mapping operations, etc.
The Lisp heap reserved size is a true limit only for certain free products. With paid license images (and some free products), this value is important only because if the heap grows larger than this limit, gaps in the heap may appear. See Section 1.8 The almost former gap problem for more information. This value is not a limit in any sense on how big the image can grow.
The Aclmalloc heap was called the "C heap" in earlier releases but its name was changed to reflect its real nature. It is the aclmalloc area used for space allocated by the aclmalloc function. That function differs from malloc() in that it ensures that Lisp will remember the location of aclmalloc'ed allocations and preserve it through dumplisp and restarts, thus guaranteeing that aclmalloc addresses remain valid.
More information on aclmalloc and regular malloc():

- malloc() space is always started fresh when a Lisp starts up, so addresses used by malloc calls must never be used across dumplisps.
- The Lisp interface to aclmalloc() is aclmalloc. There is no Lisp interface to malloc() (there is an internal function called excl::malloc, but it calls aclmalloc(), and not malloc(), which is why it is not exported).
- Foreign definitions for malloc() and free() can be made in order to gain access to these functions. However, calling malloc() in a Lisp environment is dangerous, and should only be done after careful consideration of the above.
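As an illustration of that last point, direct foreign definitions for the C library's malloc() and free() might look roughly like the following sketch. The names c-malloc and c-free are invented here, and the exact ff:def-foreign-call argument and return types may need adjusting for your platform; treat this as a hypothetical outline, and remember that raw malloc addresses must never be used across a dumplisp.

```lisp
;; Hypothetical sketch only: verify the types against
;; foreign-functions.htm before use.
(ff:def-foreign-call (c-malloc "malloc") ((size :unsigned-nat))
  :returning :foreign-address)

(ff:def-foreign-call (c-free "free") ((address :foreign-address))
  :returning :void)
```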
The type counts are as if printed by print-type-counts:
code   type                                items     bytes
112: (SIMPLE-ARRAY T)                       6586    930920 28.3%
  1: CONS                                  80502    644016 19.6%
  8: FUNCTION                               8432    520192 15.8%
  7: SYMBOL                                17272    414528 12.6%
117: (SIMPLE-ARRAY CHARACTER)               2259    259984  7.9%
 96: (SHORT-SIMPLE-ARRAY T)                17498    151736  4.6%
 18: BIGNUM                                 2966    139928  4.3%
125: (SIMPLE-ARRAY (UNSIGNED-BYTE 8))         31     87816  2.7%
 12: STANDARD-INSTANCE                      3291     52656  1.6%
  9: CLOSURE                                2301     39688  1.2%
 15: STRUCTURE                               666     24944  0.8%
127: (SIMPLE-ARRAY (UNSIGNED-BYTE 32))         9      9920  0.3%
108: (SHORT-SIMPLE-ARRAY CODE)                16      7368  0.2%
 10: HASH-TABLE                              108      3456  0.1%
 17: DOUBLE-FLOAT                            120      1920  0.1%
111: (SHORT-SIMPLE-ARRAY FOREIGN)             51      1216  0.0%
 16: SINGLE-FLOAT                            141      1128  0.0%
118: (SIMPLE-ARRAY BIT)                       11       296  0.0%
 20: COMPLEX                                  11       176  0.0%
 80: (ARRAY T)                                 7       168  0.0%
 11: READTABLE                                 8       128  0.0%
123: (SIMPLE-ARRAY (SIGNED-BYTE 32))           1        88  0.0%
 13: SYSVECTOR                                 3        48  0.0%
 85: (ARRAY CHARACTER)                         1        24  0.0%
 total bytes = 3292344
The aclmalloc arena describes allocation of space for aclmallocs and foreign data. It is divided into chunks of various sizes to allow allocation of requests of various sizes without fragmentation. (Space allocated by aclmalloc is freed by aclfree.)
aclmalloc arena:
   max size  free bytes  used bytes   total
       112        3472         112     3584
       496        3472         496     3968
      1008        2016        2016     4032
      2032           0       12192    12192
      4080           0        8160     8160
      9200       18400       18400    36800
 total bytes:    27360       41376    68736
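A minimal usage sketch follows (the 1024-byte size is arbitrary): aclmalloc returns a foreign address that Lisp preserves across dumplisp, and the space is returned with aclfree.

```lisp
;; Allocate 1024 bytes of aclmalloc space, use it, then free it.
(let ((p (excl:aclmalloc 1024)))
  ;; ... pass the foreign address P to foreign code here ...
  (excl:aclfree p))
```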
As a user, or as an application writer, how can you get the garbage collector to work best for you? At first, you do not have to do anything. The system is set up to work as delivered. You will not run out of space, global gc's will happen from time to time (as described below, see Section 6.0 Global garbage collection), the image will grow as necessary, and assuming you do not run out of swap space, everything will work.
Of course, it will not necessarily work as well as it could. As delivered, the garbage collector is set to work best with what we assume is a typical application: objects, none of which are too big, are created as needed. Most objects that survive a while are likely to survive a long while or perhaps forever, and so on. If your application's use of Lisp has different behavior, performance may be suboptimal.
So what to do? One problem is that optimizing gc behavior is a multidimensional problem. Factors that affect it include the sizes of newspace and the old areas, how quickly objects are tenured, and how often scavenges and global gc's occur.
Optimization in a multidimensional environment is always complicated.
The first step is always to gather the information necessary to do the tuning. Information like:
Section 3.1 How do I find out when scavenges happen?
Section 3.2 How many bytes are being tenured?
Section 3.3 When there is a global gc, how many bytes are freed up?
Section 3.4 How many old areas are there after your application is loaded?
Section 3.5 Can other things be changed while running?
There are three gsgc switches (these control the behavior of the
garbage collector) that affect printing information about the garbage
collector's activity: :print, :stats, and :verbose. Evaluating

(setf (sys:gsgc-switch :print) t)

will cause a short message to be printed whenever a scavenge
happens. Unless the :print switch is
t, no message will be printed.
The :stats and :verbose
switches control the amount of information printed. If the
:stats switch is true, the message contains more
information but the information is compact. If the
:verbose switch is also true, a longer, more easily
understood message is printed.
;; In this example, we cause a scavenge with all flags off,
;; then with :print true, then :print and :stats true,
;; and finally :print, :stats, and :verbose all true.
cl-user(5): (gc)
cl-user(6): (setf (sys:gsgc-switch :print) t)
t
cl-user(7): (gc)
gc: done
cl-user(8): (setf (sys:gsgc-switch :stats) t)
t
cl-user(9): (gc)
gc: E=17% N=17536 T+=0 A-=0
cl-user(10): (setf (sys:gsgc-switch :verbose) t)
t
cl-user(11): (gc)
scavenging...done
 eff: 15%, copy new: 1664 + tenure: 16064 = 17728
Page faults: gc = 0 major + 2 minor
cl-user(12):
With :stats true, the message contains
much more information, but it is coded -- E means Efficiency, N means
bytes copied in newspace, T means bytes copied to oldspace (i.e. bytes
tenured), and A means aclmalloc bytes freed. With
:verbose also true, the same information is
displayed in expanded form and additional information (about page
faults) is provided.
Efficiency is defined as the ratio of cpu time not associated with gc to total cpu time. Efficiency should typically be 75% or higher, but the efficiencies in the example are low because we triggered gc's without doing anything else of significance.
It is usually desirable to have
:print and :stats true while developing software. This allows you to
monitor gc behavior and see if there seems to be a problem.
That information is shown when the
:print and :stats switches are true, but perhaps the real
question is whether things are being tenured that would be better left
in newspace (because they will soon become garbage). This often
happens when a complex operation (like a compile of a large file) is
being carried out. This question, in combination with the next, can
tell you if that is the case.
In the following, copied from above, 0 bytes are tenured in the first gc (T+=0) and 16064 in the second (tenure: 16064):
cl-user(9): (gc)
gc: E=17% N=17536 T+=0 A-=0
cl-user(10): (setf (sys:gsgc-switch :verbose) t)
t
cl-user(11): (gc)
scavenging...done
 eff: 15%, copy new: 1664 + tenure: 16064 + aclmalloc free: 0 = 17728
Page faults: gc = 0 major + 2 minor
cl-user(12):
When the :print and :stats
switches are true, the amount of space freed by a global gc is printed
at the end of the report. Here is an example. The form (gc t) triggers a global gc.
cl-user(13): (gc t)
gc: Mark Pass...done(1,583+66), marked 128817 objects, max depth = 17, cut 0 xfers.
    Weak-vector Pass...done(0+0).
    Cons-cell swap...done(0+67), 346 cons cells moved
    Symbol-cell swap...done(17+0), 1 symbol cells moved
    Oldarea break chain...done(83+0), 40 holes totaling 6816 bytes
    Page-compaction data...done(0+0).
    Address adjustment...done(1,400+67).
    Compacting other objects...done(150+0).
    Page compaction...done(0+0), 0 pages moved
    New rootset...done(667+0), 20 rootset entries
    Building new pagemap...done(83+0).
    Merging empty oldspaces...done, 0 oldspaces merged.
global gc recovered 9672 bytes of old space.
gc: E=0% N=1504 T+=0 A-=0 pfg=54+187
cl-user(14):
The next to last line reports on what was recovered from oldspace (9672 bytes). The value is often much higher. It is low in this example because we have not in fact done anything significant other than test gc operations.
There is plenty of other information but we will not describe its meaning in detail. It is typically useful in helping us help you work out complicated gc problems.
The amount of space freed is a rough measure of how many objects
are being tenured that perhaps should be left for a while longer in
newspace. If the number is high, perhaps things are being tenured too
quickly (increasing the value of the :generation-spread parameter
will keep objects in newspace longer, as will a larger newspace).
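For example, to keep objects in newspace for roughly twice as many scavenges as the default, the :generation-spread parameter can be raised (the value 8 here is only a starting point; tune it for your application):

```lisp
;; Initial value is 4; maximum effective value is 25.
(setf (sys:gsgc-parameter :generation-spread) 8)
```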
The output printed by room
shows the two newspace areas and the various oldspace areas. Here is
an example of room
output. (room takes an
argument to indicate how much information should be displayed). The
following is the output of
(cl:room t), which
causes the most information to be displayed.
cl-user(3): (room t)
area area  address(bytes)        cons         other bytes
 #   type                     8 bytes each
                               (free:used)    (free:used)
   Top #x569a000
   New #x5134000(5660672)        5:95781    597040:4239616
   New #x4bce000(5660672)        -----           -----
 7 Old #x498e000(2359296)      458:18903     31904:2170416
 6 Old #x494e000(262144)         0:1019          0:253648
 5 Old #x478e000(1835008)        0:41779         0:1498064
 4 Old #x474e000(262144)         0:23437         0:73424
 3 Old #x45ce000(1572864)        0:27513         0:1350736
 2 Old #x454e000(524288)         0:7133          0:466512
 1 Old #x448e000(786432)         0:4076          0:753104
 0 Old #x4000d00(4772608)        0:97824         0:3983672
     Tot (Old Areas)           458:221684    31904:10549576
Root pages: 158
  Lisp heap:      #x4000000   pos: #x569a000   resrve: 23699456
  Aclmalloc heap: #x54000000  pos: #x54027000  resrve: 1024000
code   type                                items     bytes
 96: (simple-array t)                      76658   3864816 22.8%
108: (simple-array code)                    8699   3608136 21.3%
  1: cons                                 314901   2519208 14.9%
 99: (simple-array (unsigned-byte 16))     10938   2242320 13.2%
101: (simple-array character)              38383   1632920  9.6%
  8: function                              21721   1284216  7.6%
  7: symbol                                36524    876576  5.2%
107: (simple-array (signed-byte 32))         264    264336  1.6%
 12: standard-instance                     14244    227904  1.3%
  9: closure                                8854    145448  0.9%
 98: (simple-array (unsigned-byte 8))         44    105184  0.6%
 97: (simple-array bit)                       49    103952  0.6%
 15: structure                               830     33144  0.2%
100: (simple-array (unsigned-byte 32))        12     10264  0.1%
 10: hash-table                              225      7200  0.0%
 18: bignum                                  410      4480  0.0%
 16: single-float                            505      4040  0.0%
111: (simple-array foreign)                  103      2464  0.0%
 17: double-float                            124      1984  0.0%
 64: (array t)                                22       528  0.0%
 65: (array bit)                              13       312  0.0%
 13: sysvector                                14       224  0.0%
 20: complex                                  12       192  0.0%
 11: readtable                                 7       112  0.0%
 69: (array character)                         1        24  0.0%
 total bytes = 16939984
aclmalloc arena:
   max size  free bytes  used bytes   total
        48        3024          48     3072
       496        3968           0     3968
      1008        4032           0     4032
      2032        2032        2032     4064
      4080        8160       36720    44880
      5104       10208       10208    20416
      9200       27600        9200    36800
     20464       20464       20464    40928
 total bytes:    79488       78672   158160
cl-user(4):
The output shows the two equal size newspace areas, only one of which is being used. It also shows eight oldspaces and provides information about what is in the oldspaces. Then information is printed about other objects such as the number of root pages (a root page keeps information on pointers from oldspace to newspace -- these pointers must be updated after a scavenge), and the locations of the Lisp and C heaps. Then, there is a table showing the types and numbers of objects. Finally, used and available malloc space is displayed.
Yes. The function resize-areas can be used to rearrange things while running. It is typically useful to call this function, for example, after loading a large application. If you know desirable old- and newspace sizes for your application, it is preferable to build an image with those sizes (using the :oldspace and :newspace arguments to build-lisp-image, see building-images.htm for more information). However, you may not know until runtime what the best sizes are, in which case you can call resize-areas on application startup. Be warned that it may take some time.
Another use of resize-areas is when you wish to dump an image (with dumplisp) into which your application has been loaded. You call resize-areas just before dumplisp in that case.
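A sketch of that pattern follows. The :old and :new sizes, the fasl file name, and the image name here are all invented for illustration; consult the resize-areas and dumplisp documentation for the exact argument lists before relying on them.

```lisp
;; Load the application, restructure the heap, then dump the image.
(load "myapp.fasl")                        ; hypothetical application fasl
(sys:resize-areas :old (* 16 1024 1024)    ; assumed keyword arguments;
                  :new (* 4 1024 1024))    ; sizes are in bytes
(excl:dumplisp :name "myapp.dxl")
```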
The initial sizes of newspace and oldspace are determined when the image is built with build-lisp-image. See building-images.htm (where build-lisp-image is fully documented -- the page on it is brief to avoid two sources for the same complex discussion). The :newspace argument to build-lisp-image controls the initial size of newspace and the :oldspace argument the initial size of oldspace.
An image dumped with dumplisp inherits new and oldspace sizes from the dumping image. See dumplisp.htm.
resize-areas will restructure old and newspace sizes in a running image.
The garbage collector will automatically resize old and newspace when it needs to. The amount of resizing depends on space required to allocate or move to oldspace live objects, and also on the parameters that relate to sizes.
The parameters and switches described under the next set of headings control the action of the garbage collector. You may change them during run time to optimize the performance of the Lisp process. All parameter and switch values may be set with setf. However, some values should not be changed by you. The descriptions of the parameters say whether you should change their values. By default, the system automatically increases the generation number. You may find that it is useful to step it yourself at appropriate times with a call to gsgc-step-generation.
There is really no difference between a parameter and a switch
other than that the value of a switch is typically
t or nil while parameters
often have numeric values. However, once both were implemented, it
became difficult to redo the design.
The function gsgc-parameters prints the values of all parameters and switches. gsgc-switch and gsgc-parameter retrieve the value of, respectively, a single switch or a single parameter, and with setf can set the value as follows.
(setf (sys:gsgc-parameter parameter) value)
(setf (sys:gsgc-switch switch) value)
Switches and parameters are named by keywords.
The first three parameters relate to the generation number and when objects are tenured. Please note that of the three, you should only set the value of :generation-spread. The fourth parameter, which is setf'able, allows closing off some old areas, meaning that no objects will be tenured to them. Old areas are numbered, allowing some to be closed off.
|:current-generation||The value of this parameter is a 16 bit unsigned integer. New objects are created with this generation tag. Its initial value is 1, and it is incremented when the generation is stepped. The system may change this value after a scavenge. Users should not set this value. Note: Both the current generation number and the generation of an individual object are managed in a non-intuitive way. While it is conceptually correct that the generation number increases, the actual implementation works quite differently, often resetting the generation number toward 0.|
|:tenure-limit||The value of this parameter is a 16 bit integer. During a scavenge, objects whose generation exceeds this value are not tenured and all the rest are tenured. Users should not set this value. Its initial value is 0, and it is constantly being reset appropriately by the system.|
|:generation-spread||The value of this parameter is the number of distinct generations that will remain in newspace after garbage collection. Note: objects that are marked for tenuring and objects that are to stay in newspace permanently do not belong to a specific generation. Setting the value of this parameter to 0 will cause all data to be tenured immediately. This is one of the most important parameters for users to set. Its initial value is 4 and its maximum effective value is 25.|
|:open-old-area-fence||The value of this parameter is always a non-negative integer which is the number of the oldest old area that is open (not closed). Old areas are numbered with 0 as the oldest. This parameter is setf'able, either with the number of the old area that is desired to be the first open old area, or with a negative number, in which case old areas are counted backward from the newest to set the fence. See the note on closed old areas just after this table for more information.|
Old areas can be marked as closed. No objects are newly tenured into a closed old area; it is as if the area is full. Also, no dead object in a closed old area is collected while the area is closed, and data pointed to by such an object is also not collected.
See the description of the fourth parameter in the table just above for details on how to specify old areas as closed.
The intended usage model for closing old areas is this: a programmer with an application, such as a VAR, will load up their application, perform a global-gc and possibly a resize-areas, and then close most of the old-areas, leaving room for their users' data models to be loaded into the open-areas. When the user is done with the data model, it can be thrown away and a fast global-gc performed, making way for the next data model.
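The workflow just described might be sketched as follows. This is only an outline: the parameter name :open-old-area-fence and the fence value used here are assumptions to be checked against the parameter table above for your own image.

```lisp
;; After loading the application into a fresh image:
(gc t)               ; global gc to settle the heap
(sys:resize-areas)   ; optionally coalesce and resize areas

;; Close all but the newest old area; a negative fence value
;; counts backward from the newest area (an assumption here).
(setf (sys:gsgc-parameter :open-old-area-fence) -1)

;; Users now load a data model into the open area(s).  When the
;; model is discarded, a global gc is fast because the closed
;; areas are not candidates for collection.
(gc t)
```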
The following parameters control the minimum size of newspace and when the system will allocate a new newspace. At the end of a scavenge, at least :free-bytes-new-other bytes must be free, and at least :free-percent-new percent of newspace must be free. If these conditions are not met, the system will allocate a new newspace, large enough for these conditions to be true after allocating the object that caused the scavenge, if there is one. (Unless explicitly called by the user, a scavenge occurs when the system is unable to allocate a new object.) Note that there is no systematic reason why there are two parameters, :free-bytes-new-pages and :free-bytes-new-other -- differences were anticipated in the original specification but none was ever implemented. The two parameter values are added to get the total free space required.
|:quantum||The value of this parameter is a 32-bit integer which represents the minimum amount of space (in 8 Kbyte pages) that will be requested for a new newspace or oldspace, and the granularity of space requested (that is, space will be requested in multiples of :quantum pages). Its initial value is 32. This parameter value is overshadowed by the other size-related parameters described immediately below, and for that reason, we do not recommend that you change this value.|
|:free-bytes-new-other||This is one of the parameters which determine the minimum free space which must be available after a scavenge. Its initial value is 131072.|
|:free-bytes-new-pages||This is one of the parameters which determine the minimum free space which must be available after a scavenge. Its initial value is 131072.|
|:free-percent-new||This parameter specifies the minimum fraction of newspace which must be available after a scavenge, or else new newspace will be allocated. Its initial value is 25.|
The final two parameters control how large new newspace (and new oldspace) will be. If newspace is expanded or a new oldspace is allocated, then at least the percentage specified by the appropriate parameter shall be free, after, in the case of newspace, the object that caused the scavenge has been allocated, and after, in the case of oldspace, all objects due for tenuring have been allocated. There are different concerns for the newspace parameter and the oldspace parameter.
Let us consider the oldspace parameter first. In the case where no foreign code is loaded, oldspaces are carved out of newspace, and newspace grows up into memory as needed. If each new oldspace is just large enough, then the next time an object is tenured another oldspace, again just large enough, will be created, and the result will be many small oldspaces rather than a few larger ones. This problem will not occur if there is foreign code, since some oldspaces will be as large as previous newspaces. If the function room shows many little oldspaces, you might try increasing the :expansion-free-percent-old parameter to cure the problem. Alternatively, resize-areas can be used to coalesce the oldspaces into one.
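For example, a hedged sketch of coalescing fragmented oldspaces (the keyword arguments are those listed in the summary for resize-areas; see its page for exact semantics):

```lisp
;; Many small oldspaces in the output of room suggest either
;; raising :expansion-free-percent-old or coalescing.
(room t)

;; Do a global gc first so dead tenured objects do not inflate
;; the resized areas, then sift and coalesce the old spaces.
(sys:resize-areas :global-gc t :sift-old-spaces t :verbose t)
```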
The newspace parameter is more complicated, since newspace can grow incrementally (assuming growth is not blocked by foreign code). Since growing newspace takes time, you want to ensure that when newspace grows, it grows enough. Therefore, it is essential that :expansion-free-percent-new be larger than :free-percent-new. Otherwise, you might find newspace growing just enough to satisfy :free-percent-new, and then having to grow again at the next scavenge, since allocating a new object again reduced the free space below the :free-percent-new minimum.
|:expansion-free-percent-new||At least this percentage of newspace must be free after allocating new newspace. The system will allocate sufficient extra space to guarantee that this condition is met. Its initial value is 35.|
|:expansion-free-percent-old||At least this percentage of space must be free in newly allocated oldspace (note: not the total oldspace). Its initial value is 35.|
There are several switches which control the action of gsgc. The value of a switch is either nil or non-nil. The function gsgc-switch takes a switch name as an argument and returns its value or, with setf, sets its value. The function gsgc-parameters also prints out the switch values. The switches can be set by evaluating
(setf (sys:gsgc-switch switch-name) nil-or-non-nil)
|:gc-old-before-expand||If this switch is set true, then before expanding oldspace, the system will do a global garbage collection (that is, it will gc oldspace) to see if the imminent expansion is necessary. If enough space is free after the garbage collection of oldspace, the expansion will not occur. Initially nil.|
|:print||If true, print a message when a gc occurs. Can be set by excl:gc. The length of the message is determined by the next two switches.|
|:stats||If true and :print is also true, gc statistics are included in the message printed when a gc occurs.|
|:verbose||If true, make the message printed (when :print is true) more verbose.|
|:auto-step||This is the most important of the switches. If true, which is its initial value, gsgc-step-generation is effectively called after every scavenge. Thus (with the default :generation-spread) an object is tenured after surviving four scavenges.|
|:hook-after-gc||If this switch is true, the function object bound to the variable *gc-after-hook*, if any, is funcalled immediately after a scavenge.|
If the :next-gc-is-global switch is set true, the next gc will be a global gc (that is, both newspace and oldspace will be gc'ed). After the global gc, the system will reset this switch to nil.
The difference between setting this switch and causing a global gc explicitly with the function excl:gc is that setting this switch causes the system to wait until a scavenge is necessary before doing the global gc while calling the function causes the global gc to occur at once. The system uses this switch under certain circumstances.
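The two approaches might look like this (assuming the switch discussed above is named :next-gc-is-global; the deferred form waits for the next system-triggered scavenge):

```lisp
;; Immediate: do a global gc right now.
(excl:gc t)

;; Deferred: make the *next* triggered scavenge a global gc.
;; The system resets the switch to nil after that gc.
(setf (sys:gsgc-switch :next-gc-is-global) t)
```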
The scavenger maintains a logical pool of memory in newspace called `reserved'. If :print and :verbose are both true, information about the action triggered by this switch is printed. The information refers to `hiding' (moving space to the reserved bucket) and `revealing' (moving space to the free bucket).
If this switch is set true and the operating system on which Allegro CL is running supports it, then physical memory pages that are no longer used after a garbage collection are given back to the operating system in such a fashion that paging is improved.
Specifically, when this switch is true and the currently used half of newspace is copied to the unused half, the following things are done with the previously-used memory area: (1) the operating system is advised to ignore the page reference behavior of those addresses, and (2) the memory is unmapped and then is remapped, after being filled with zeros. The zero filling is necessary for security reasons, since the memory given back to the operating system will potentially be given to another process that requests virtual memory, without first being cleared. If it were not for (2), then remapping would always be advantageous and there would be no switch to control this behavior. As it is, there may be certain situations where zero filling will be too expensive, especially on machines which have a very large amount of physical memory and the decrease in locality does not affect the runtime performance of the Allegro CL application, or where the mmap() implementation is flawed.
If this switch is set true, then a core dump is automatically taken when an internal garbage collection error occurs. The core dump will fail, however, if (1) there is a system imposed limit on the size of a core dump and dumping the image would exceed this limit or (2) there is some other system impediment to dumping core, such as the existence of a directory named core. We assume that you can prevent the second limitation. Here are a few more words on the first limitation. In the C shell, the limit command and its associates can be used to set a higher limit or no limit for the maximum size of a core dump.
These examples show the effect of the :stats and :verbose switches on gc messages. When :stats is true and :verbose is nil, a message like the following is printed during a scavenge:

gc: E=34% N=30064 T+=872 A-=0 pfu=0+101 pfg=1+0

The same message with :verbose also true would be:

scavenging...done
  eff: 9%, new copy: 148056 + tenure: 320 + aclmalloc free: 0 = 148376
  Page faults: non-gc = 0 major + 1 minor
In the short form of the message, abbreviations are used; their meanings are spelled out when :verbose is true. T or Tenure means bytes tenured to oldspace. A and aclmalloc free refer to malloc space (see aclmalloc).
E or eff. is efficiency: the ratio of non-gc time to all time (the efficiency is low in our example because we forced gc's in order to produce the example; as we say elsewhere, efficiencies of less than 75% are a cause for concern when the gc is triggered by the system).
The copy figures are the number of bytes copied within newspace and to oldspace.
X means "expanding", so XO means "expanding oldspace" and XN means "expanding newspace". XMN means "expanding and moving newspace".
Page faults are divided between user (pfu or non-gc) caused and gc (pfg or gc) caused. See the Unix man page for getrusage for a description of the difference between major and minor page faults.
Here are a couple more examples (with :verbose on and off, in a fresh Lisp each time):
cl-user(1): (gc :print)
t
cl-user(2): (setf (sys:gsgc-switch :verbose) t)
t
[...]
cl-user(7): (defconstant my-array (make-array 10000000))
scavenging...expanding new space...expanding and moving new space...done
  eff: 36%, copy new: 7533984 + old: 85232 = 7619216
  Page faults: non-gc = 1 major + 0 minor
my-array

;; And in a fresh image with :verbose off:
cl-user(1): (gc :print)
t
cl-user(2): (defconstant my-array (make-array 10000000))
gc: XN-XMN-E=32% N=7522488 T+=85632 A-=0 pfu=4+0
my-array
|Function or variable||Arguments of functions||Brief Description|
|gsgc-step-generation||Calling this function, which returns the new value of :current-generation, increases the current generation number and, if necessary, the value of :tenure-limit.|
|gc||&optional action||Called with no arguments, performs a scavenge; called with a non-nil action argument, performs the action described on the gc page (for example, t requests a global gc).|
|print-type-counts||&optional (location t)||Prints a list of quantities and sizes of lisp objects in the specified location in the heap, along with type names and type codes of each object type printed. See the print-type-counts page for location details.|
|lispval-storage-type||object||Returns a keyword denoting where object is stored. See the lispval-storage-type page for interpretation of the returned value and examples. (In earlier releases, the function pointer-storage-type performed this function. It is still supported, but its use is deprecated. lispval-storage-type is more flexible and should be used instead.)|
|resize-areas||&key verbose old old-symbols new global-gc tenure expand sift-old-spaces pack-heap||This function resizes old and newspaces, perhaps coalescing oldspaces, according to the arguments. See the resize-areas page for details.|
|*gc-after-hook*||If the gsgc switch :hook-after-gc is true, then the value of this symbol, if true, will be funcalled immediately after a scavenge. See the description of *gc-after-hook* for details.|
|gc-after-c-hooks||Returns a list of addresses of C functions that will be called after a gc. See gc-after-c-hooks for details.|
|gc-before-c-hooks||Returns a list of addresses of C functions that will be called before a gc. See gc-before-c-hooks for details.|
In a global garbage collection (global gc), objects in oldspace are garbage collected. Doing so frees up space in oldspace for newly tenured objects. Global gc's are time consuming (they take much longer than scavenges) and they are not necessary for Lisp to run.
The effect of never doing a global gc is that the Lisp process will slowly grow larger. The rate of growth depends on what you are doing. The costs of growth are that paging overhead increases and, if the process grows too much, swap space is exhausted, perhaps causing Lisp to stop or fail.
You have complete control over global gc's. The system keeps track of how many bytes have been tenured since the last global gc, and the variables described below control what is done with that information.
The function that records how many bytes have been tenured since the last global gc is the default value of the variable *gc-after-hook*. If you set that variable to nil or to a function that does not keep records of bytes tenured, you will not get the behavior described here. (See the description of *gc-after-hook* for information on defining a function that does what you want and records bytes tenured correctly.)

When *gc-after-hook* has as its value its initial value or a function that records bytes tenured correctly, global gc behavior is controlled by the global variables *global-gc-behavior* and *tenured-bytes-limit*. *tenured-bytes-limit* is used in conjunction with *global-gc-behavior*: the number of bytes tenured (moved to oldspace) since the last global gc is remembered, and the action specified by *global-gc-behavior* is taken when that number exceeds the value of *tenured-bytes-limit*.
The tenuring macro causes the immediate tenuring (moving to oldspace) of all objects allocated while within the scope of its body. This is normally used when loading files, or performing some other operation where the objects created by forms will not become garbage in the short term. This macro is very useful for preventing newspace expansion.
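For example, a minimal sketch of wrapping a load in the tenuring macro (the fasl file name is hypothetical):

```lisp
;; All objects allocated while loading are tenured immediately,
;; bypassing newspace, since code and data loaded at startup will
;; not become garbage in the short term.
(excl:tenuring
  (load "my-application.fasl"))  ; hypothetical file name
```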
It is useful if possible to provide some sort of cue while garbage collections are occurring. This allows users to know that a pause is caused by a gc (and not by an infinite loop or some other problem). Typical cues include changing the mouse cursor, printing a message, or displaying something in a buffer as Emacs does when the emacs-lisp interface is running.
Unfortunately, providing such a cue for every scavenge is a difficult problem and if it is done wrong, the consequences to Lisp can be fatal. However, we have provided an interface for the brave. The functions gc-before-c-hooks and gc-after-c-hooks return settable lists of addresses of C functions to be called before and after a gc.
Luckily, scavenges are usually fast and so failing to provide a cue may not be noticeable. Global gc's, however, can be slow, but it is possible to provide a cue for a global gc even without using C functions. There are two strategies: (1) determine when a global gc is necessary and either schedule it when convenient or warn the user that one is imminent; (2) do not give advance warning but provide a cue when it happens. Looking at these examples, you can probably craft your own method of warning or cue.
Note that in these examples, we replace the value of *gc-after-hook* with a new value, destroying the current value (which provides for the default automated behavior). The default function is named by an internal symbol.
One way to implement the first strategy is to set a flag when a global gc is needed and then have code that acts on that flag. This code can be run at your choosing -- but be sure that it is run at some point. You might do this:
(defvar *my-gc-count* 0)
(defvar *time-for-gc-p* nil)

(defun my-gc-after-hook (global-p to-new to-old eff to-be-alloc)
  (declare (ignore eff to-new to-be-alloc))
  (if global-p
      (progn (setq *my-gc-count* 0)
             (setq *time-for-gc-p* nil))
    (progn (setq *my-gc-count* (+ *my-gc-count* to-old))
           (if (> *my-gc-count* excl:*tenured-bytes-limit*)
               (setq *time-for-gc-p* t)))))
Make sure you compile my-gc-after-hook before making it the value of *gc-after-hook*. Now, define a function that triggers a global gc (calls (gc t)) when *time-for-gc-p* is true. This function can be called by a user of your application, or when your application is about to do something that the user expects to wait for anyway, or whenever, so long as it is called at some point.
In the second strategy, we provide some cue to the user that a global gc is occurring. We have not included the code for the cue (you should supply that) and notice we have gone to some pains to avoid a recursive error (where the garbage collector calls itself).
(defvar *my-gc-count* 0)
(defvar *prevent-gc-recursion-problem* nil)

(defun my-gc-after-hook (global-p to-new to-old eff to-be-alloc)
  (declare (ignore eff to-new to-be-alloc))
  (when (null *prevent-gc-recursion-problem*)
    (if global-p
        (setq *my-gc-count* 0)
      (progn (setq *my-gc-count* (+ *my-gc-count* to-old))
             (if (> *my-gc-count* excl:*tenured-bytes-limit*)
                 (excl:without-interrupts
                   (setq *prevent-gc-recursion-problem* t)
                   ;; (<change the cursor, print a warning, whatever>)
                   (gc t)
                   ;; (<reset the cursor if necessary>)
                   (setq *my-gc-count* 0)
                   (setq *prevent-gc-recursion-problem* nil)))))))
Replace the placeholder comments

;; (<change the cursor, print a warning, whatever>)
;; (<reset the cursor if necessary>)

with whatever code you want, but be careful that there is no possibility of waiting (for user input, e.g.) or going into an infinite loop: you are inside a without-interrupts form, where waiting is wrong and an infinite loop is fatal.
The following list contains information and advice concerning gsgc. Some of the information has already been provided above, but is given again here for emphasis.
Do not set the :auto-step switch to nil unless some other method of stepping the generation is enabled (including specific action by you). If objects are not tenured, newspace will grow, filling up with long-lived objects, and performance will degrade significantly.
You can force tenuring by calling gc with the argument :tenure (which will cause all live objects to be tenured) or with the tenuring macro. There is no way to prevent a specific object from ever being tenured except by disabling generation stepping and thus preventing all objects from being tenured.
It is not easy to cause a gsgc error. Such errors are usually catastrophic (often Lisp dies either without warning or with a brief message that some unrecognizable object was discovered). Once the garbage collector becomes confused, it cannot be straightened out.
Such errors can be caused when Lisp code is compiled with the compiler optimizations set so that normal argument and type checking is disabled. For example, if a function is compiled with values of speed and safety such that compiler:verify-argument-count-switch is nil, and that function is passed the wrong number of arguments (usually too few), it can trigger a fatal gsgc error. Before you report a gsgc error as a bug (and please do report them), please recompile any code where checking was disabled with settings of speed and safety which allow checking, and see if the error repeats itself. See Declarations and optimizations in compiling.htm for more information on compiler optimization switches and values of speed and safety.
Garbage collector errors may also be caused by foreign code signal handlers. Note that foreign code signal handlers should not call lisp_call_address or lisp_value. See foreign-functions.htm for more information on signals.
See the information on the gsgc switches in Section 5.4 Gsgc switches.
The Allegro CL image will grow as necessary while running. If it needs to grow and it cannot, a storage-condition type error is signaled (storage-condition is a standard Common Lisp condition type). While these errors might arise from insufficient swap space, the typical cause is a conflict in the virtual address space. That is, something else (a program or a library location) has grabbed virtual address space in a range that Lisp needs to grow the heap. (Allegro CL does not allow discontinuous virtual address ranges.)
Whatever the cause, the error is triggered by a request for space which cannot be fulfilled. Here we show the error when a largish array is created (this example is contrived in order to show the error: a request for such an array does not typically cause a problem).
CL-USER(25): (setq my-array (make-array 100000))
Error: An allocation request for 483032 bytes caused a need for
       12320768 more bytes of heap. The operating system will not
       make the space available.
[condition type: STORAGE-CONDITION]
CL-USER(26):
A global gc may free up enough space within Lisp to continue without growing. Killing processes other than Lisp may free enough space for Lisp to grow. But it may be the case that other allocations of virtual address space conflict with Lisp usage. Please contact Franz customer support for assistance in determining whether this is the case if the problem persists.
You trigger a global gc by evaluating (gc t) (see gc).
When the garbage collector gets confused, usually by following what it believes to be a valid pointer but one that does not point to an actual Lisp object, Lisp fails with a two-part message. The first part is a brief description of the specific problem. The second part is a general statement about gc failures and the fact that they cannot be recovered from, along with an opportunity to get a core file (which may be useful for later analysis). Here are some examples of the second part:
The gc's internal control tables may have been corrupted and Lisp
execution cannot continue.
  [or]
The internal data structures in the running Lisp image have been
corrupted and execution cannot continue.
  [then]
Check all foreign functions and any Lisp code that was compiled with
high speed and/or low safety, as these are two common sources of this
failure. If you cannot find anything incorrect in your code you should
contact technical support for Allegro Common Lisp, and we will try to
help determine whether this is a coding error or an internal bug.
Would you like to dump core for debugging before exiting(y or n)?
Here are some examples of the first part:
system error (gsgc): Unknown object type at (0xc50ec9a)
system error (gsgc): Object already pointing to target newspace half: 0x42c43400
system error (gsgc): Scavenger invoked itself recursively.
As the text says in the second part, there is no recovery.
The causes of such errors can be one or more of the following: errors in foreign code (including foreign signal handlers), Lisp code compiled with high speed and/or low safety, or (rarely) an internal Lisp bug.
Diagnosing and fixing the problem can be difficult. One useful initial step, where possible, is to turn on garbage collection messages by evaluating (gc :print) (see gc).
If you cannot quickly determine the cause of the problem and a solution for it, contact Franz Inc. technical support at firstname.lastname@example.org. Be sure to include the output of a call to print-system-state, and provide any information about foreign code, optimizations, etc. that you can. We may ask you for a core file (which, as said above, can be optionally generated when there is a failure).
A Lisp object becomes garbage when nothing points to or references it. The way the garbage collector works is it finds and identifies live objects (and, typically, moves them somewhere). Whatever is left is garbage. Finalizations and weak vectors allow pointers to objects which will not, however, keep them alive. If one of these pointers exists, the garbage collector will see the item and (depending on the circumstances), either keep it alive (by moving it) or abandon it.
It is useful to distinguish two actions of the garbage collector. When the only pointers to an object are weak pointers or finalizations, the garbage collector follows those pointers and `identifies the object as garbage'. If it decides not to keep the object alive, it `scavenges the object away'. Note that any Lisp object can be scavenged away - that just means that the memory it used is freed and (eventually) overwritten with new objects. Only objects pointed to by a weak vector or a finalization can be identified as garbage, however. Live objects are not garbage and objects with nothing pointing to them are not even seen by the garbage collector.
The function lispval-storage-type applied to an object returns a keyword providing information about the storage type of the object. Weak vectors and finalizations are identified by this function which can be used to test whether an object is a weak vector.
Weak arrays are created with the standard Common Lisp function make-array (using the non-standard weak keyword argument) or (if a vector is desired) with the function weak-vector. When you create a weak array by specifying a true value for the weak keyword argument to make-array, you cannot also:
use the :displaced-to keyword argument (weak arrays cannot be displaced into other arrays); or
specify an allocation other than :new (the non-standard allocation keyword argument allows specifying where the array will be created, with choices including foreign space, non-gc'ed Lisp space, or the Lisp heap, which is called for by :new and is also the default).
When make-array successfully accepts a true value for the weak keyword argument, the object that is weak is always the underlying simple-vector; if the resultant array is non-simple or is multidimensional, then the array itself is not marked as weak, but objects in the array will still be dropped by the gc when not otherwise referenced, because the simple-array that is the data portion of the array is itself weak.
See the discussion of extensions to make-array (in implementation.htm) for further details on the Allegro CL implementation of make-array and the weak keyword argument.
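As a sketch, the two ways of creating weak arrays described above (weak-vector for a vector, or make-array with the non-standard weak keyword argument):

```lisp
;; A weak vector with 10 slots, each initially nil:
(setq wv1 (weak-vector 10))

;; The same via make-array's non-standard keyword argument:
(setq wv2 (make-array 10 :weak t))

;; Slots whose objects become garbage are set to nil by a
;; subsequent gc (a global gc, for tenured objects).
```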
As we said in the brief description above, the most important feature of weak arrays is that being pointed to by a weak array does not prevent an object from being identified as garbage and scavenged away. When an object is scavenged away, the entry in a weak array that points to the object will be replaced with nil.
Weak arrays allow you to keep track of objects and to discover when they have become garbage and been disposed of. An application of weak arrays might be determining when some resource external to Lisp can be flushed. Suppose that all references to a file external to Lisp are made through a specific pathname. It may be that once all live references to that pathname are gone (i.e. the pathname has become garbage) the file itself is no longer needed and can be removed from the filesystem. If you have a weak array which points to the pathname, then when the reference is replaced by nil by the garbage collector, you can tell the system to delete the file.
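A minimal sketch of the pathname scenario above. The helper names are hypothetical, and since the pathname itself is gone by the time its slot reads nil, the sketch records the file's namestring separately so it can still be deleted:

```lisp
(defvar *tracked-file* (weak-vector 1))
(defvar *tracked-namestring* nil)   ; kept separately for deletion

(defun track-file (pathname)
  (setq *tracked-namestring* (namestring pathname))
  (setf (aref *tracked-file* 0) pathname))

(defun flush-file-if-dead ()
  ;; Tenured objects are only identified as garbage by a global
  ;; gc, so force one before checking the weak slot.
  (gc t)
  (when (and *tracked-namestring*
             (null (aref *tracked-file* 0)))
    (delete-file *tracked-namestring*)
    (setq *tracked-namestring* nil)))
```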
It is important to remember that objects which have been tenured will not be identified as garbage unless a global gc is performed. If you use weak arrays, you should either arrange that global gc's are done regularly or that you do an explicit global gc before checking the status of an element of the weak array.
We provide a simple example of weak vectors (weak arrays differ from weak vectors only in having more dimensions) and finalizations in Section 10.3 Example of weak vectors and finalizations.
Weak hashtables are also supported. See implementation.htm for information on an extension to make-hash-table that creates weak hashtables.
A finalization associates an object with a function and, optionally, a queue. If the object is identified as garbage by the garbage collector, either the function is called with the object as its single argument before the object is scavenged away (if there is no associated queue), or a list consisting of the function and the object is placed on the queue. In the latter case, no further action is taken by the system; the program must apply the function call-finalizer at its convenience.
Multiple finalizations can be scheduled for the same object; all are run or placed on queues if and when the gc identifies the object as garbage. The order of the running of multiple finalizations is unspecified.
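A sketch of the queued style described above. The argument convention shown for supplying the queue is an assumption for illustration only; check the schedule-finalization reference page for the exact signature. Each queue entry is a list of the function and the object, which the program later hands to call-finalizer:

```lisp
;; Hypothetical queue-passing convention -- see the reference page.
(defvar *finalize-queue* (list nil))

(defun note-reclaimed (x)
  (format t "~s was identified as garbage~%" x))

(schedule-finalization (cons 1 2) 'note-reclaimed *finalize-queue*)

;; Later, at a convenient time, the program drains the queue,
;; applying call-finalizer to each queued (function object) entry
;; instead of having the gc run the function itself.
```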
The timing is important here. While the garbage collector is running, nothing else can be done in Lisp. In particular, functions (except those internal to the garbage collector itself) cannot be called. Therefore, the process of running a finalization and the eventual reclamation of the finalized object occurs over two invocations of the garbage collector. During the first gc, the object is identified as garbage. The garbage collector sees the finalization and so, instead of immediately scavenging the object away, it keeps it alive and makes a note to call the finalization function or enqueue the list of the function and the object as soon as it finishes the current scavenge (or global gc). The finalization is removed from the object after the function is run so that during a subsequent garbage collection, the object is either garbage or, if a weak vector points to it, again identified as garbage and (since it no longer has a finalization) scavenged away. See the example in Section 10.3 Example of weak vectors and finalizations.
A finalization is not a type of Lisp object. Finalizations are implemented as vectors (but that may change in a later release). You should not write any code which depends on the fact that finalizations are implemented as vectors.
You schedule a finalization with the function schedule-finalization. The function unschedule-finalization removes the finalization from the object. Finalizations are only run once, either immediately after the garbage collection which identified the object as garbage or by the program. The object is not scavenged away during the garbage collection in which it is identified as garbage (since it must be around to be the argument to the finalization function).
On Windows, load the file after the transcript example. Typing the example into the Debug window in the Integrated Development Environment on Windows does not work as described, because tools in the IDE cause pointers to the objects to be preserved, and, as a result, the example does not work as described. On Unix, you can type the example directly to a prompt.
;; This example can be typed to a prompt on a UNIX platform.
;; We define a weak vector and a function to be called by finalizations.
CL-USER(10): (setq wv (weak-vector 1))
#(nil)
CL-USER(11): (defun foo (x) (format t "I am garbage!~%"))
foo
;; We create an object and put it in the weak vector:
CL-USER(12): (setq a (cons 1 2))
(1 . 2)
CL-USER(13): (setf (aref wv 0) a)
(1 . 2)
;; We schedule a finalization.
CL-USER(14): (schedule-finalization a 'foo)
#((1 . 2) foo nil)
;; We remove the reference to the cons by setting the value of A to NIL.
CL-USER(15): (setq a nil)
nil
;; We evaluate unrelated things to ensure that the object
;; is not the value of *, **, or ***.
CL-USER(16): 1
1
CL-USER(17): 2
2
CL-USER(18): 3
3
;; We trigger a scavenge. The finalization function is called.
;; Note: an automatic gc might occur while you are entering the
;; forms shown. If it does, you might see the message printed by
;; our finalization sooner.
CL-USER(19): (gc)
I am garbage!
;; At this point, the weak vector still points to the object:
CL-USER(20): wv
#((1 . 2))
;; The next gc causes the value in the weak vector to
;; be changed.
CL-USER(21): (gc)
CL-USER(22): wv
#(nil)
On Windows, put this in a file finalization.cl:
(in-package :cg-user)
(setq wv (weak-vector 1))
(defun foo (x) (format t "I am garbage!~%"))
(setq a (cons 1 2))
(setf (aref wv 0) a)
(schedule-finalization a 'foo)
(setq a nil)
and do this in the Debug Window (the transcript shows the I am garbage! message occurring after the (gc) form is evaluated, but if a gc was triggered by the loading of finalization.cl, you may see the I am garbage! message sooner):
cl-user(10): :ld finalization.cl
Loading finalization.cl
cl-user(11): wv
#((1 . 2))
cl-user(12): (gc)
I am garbage!
cl-user(13): 1
1
cl-user(14): 2
2
cl-user(15): 3
3
cl-user(16): (gc)
cl-user(17): wv
#(nil)
The data in a static array is placed in an area that is not garbage collected. This means that the data is never moved and, therefore, pointers to it can be safely stored in foreign code. (Generally, Lisp objects are moved about by the garbage collector so a pointer passed to foreign code is only guaranteed to be valid during the call. See foreign-functions.htm for more information on this point.) Only certain element types are permitted (listed below). General arrays (whose elements can be any Lisp object) cannot be created as static arrays.
The primary purpose of static arrays is for use with foreign code. They may also be useful in pure Lisp code, but only if the array is needed for the whole Lisp run and never has to be significantly changed: the array is not garbage collected, and there is no way (within Lisp) to free up the space.
Static arrays are returned by make-array with the (Allegro CL-specific) allocation keyword argument specified. The data is stored in an area which is not garbage collected but the header (if any) is stored in standard Lisp space. The following element types are supported in static arrays.
bit
(signed-byte 8)
(unsigned-byte 4)
(unsigned-byte 8)
(signed-byte 16)
(unsigned-byte 16)
(signed-byte 32)
(unsigned-byte 32)
character
single-float
double-float
(complex single-float)
(complex double-float)
See implementation.htm for information on this extension to make-array. See lispval-other-to-address for information on freeing static arrays.
Copyright (c) 1998-2012, Franz Inc. Oakland, CA., USA. All rights reserved.
Documentation for Allegro CL version 9.0. This page was not revised from the 8.2 page.
Figure 2-3: Components of the terrestrial carbon pool (compiled and amplified from Apps and Price, 1996).
To fully account for carbon at a site, one must examine the forest, the crops, and the soils as a dynamic multi-component ecosystem, above- and below-ground, with changes in biomass and soil organic matter as key tracking mechanisms.
The most easily measurable pool is the total standing aboveground biomass of woody vegetation elements. The aboveground biomass comprises all woody stems, branches, and leaves of living trees, creepers, climbers, and epiphytes, as well as herbaceous undergrowth. In some inventories, dead fallen trees and other coarse woody debris, as well as the litter layer, are included in biomass estimates; in other inventories, these categories are considered as a separate dead organic matter pool. In practice, standing timber volumes per hectare are often taken as a proxy value, applying a locally tested conversion factor (see Section 2.4).
The below-ground biomass comprises living and dead roots, soil mesofauna, and the microbial community. There also is a large pool of organic carbon in various forms of soil humus (soil organic carbon, SOC). Other forms of soil carbon are charcoal from fires and consolidated carbon in the form of iron-humus pans or concretions. Many soils also contain a subpool of inorganic carbon in the form of hard or soft calcium carbonate (soil inorganic carbon, SIC).
Another major pool of carbon consists of forest products (timber, pulp products, non-timber forest products such as fruits and latex) and agricultural crops (food, fiber, forage, biofuels) taken off the site. Section 2.4 discusses their measurement and the monitoring of their routing and stability.
The components of the terrestrial carbon pools are illustrated in Figure 2-3.
FOR RELEASE: 09:00 am (EDT) August 31, 2011
Donna Weaver / Ray Villard
Space Telescope Science Institute, Baltimore, Md.
410-338-4493 / 410-338-4514
firstname.lastname@example.org / email@example.com
Rice University, Houston, Texas
Hubble Movies Provide Unprecedented View of Supersonic Jets From Young Stars
New movies created from years of still images collected by NASA’s Hubble Space Telescope provide new details about the stellar birthing process, showing energetic jets of glowing gas ejected from young stars in unprecedented detail.
The jets are a byproduct of gas accretion around newly forming stars and shoot off at supersonic speeds of about 100 miles per second in opposite directions through space.
These phenomena are providing clues about the final stages of a star’s birth, offering a peek at how our Sun came into existence 4.5 billion years ago.
Hubble’s unique sharpness allows astronomers to see changes in the jets over just a few years’ time. Most astronomical processes change over timescales that are much longer than a human lifetime.
A team of scientists led by astronomer Patrick Hartigan of Rice University in Houston, Texas, collected enough high-resolution Hubble images over a 14-year period to stitch together time-lapse movies of the jets ejected from three young stars.
Never-before-seen details in the jets’ structure include knots of gas brightening and dimming over time and collisions between fast-moving and slow-moving material, creating glowing arrowhead features. The twin jets are not ejected in a steady stream, like water flowing from a garden hose. Instead, they are launched sporadically in clumps. The beaded-jet structure might be like a “ticker tape,” recording how material episodically fell onto the star.
“For the first time we can actually observe how these jets interact with their surroundings by watching these time-lapse movies,” said Hartigan. “Those interactions tell us how young stars influence the environments out of which they form. With movies like these, we can now compare observations of the jets with those produced by computer simulations and laboratory experiments to see what aspects of the interactions we understand and what parts we don’t understand.”
Jets are an active, short-lived phase of star formation, lasting only about 100,000 years. Astronomers don’t know precisely what role jets play in the star-formation process or exactly how the star unleashes them. The jets appear to work in concert with magnetic fields. This helps bleed excess angular momentum from infalling material that is swirling rapidly. Once the material slows down it feeds the growing protostar, allowing it to fully condense into a mature star.
Hartigan and his colleagues used the Wide Field Planetary Camera 2 to study the jets, called Herbig-Haro (HH) objects, named in honor of George Herbig and Guillermo Haro, who studied the outflows in the 1950s. Hubble followed HH 1, HH 2, HH 34, HH 46, and HH 47 over three epochs, 1994, 1998, and 2008.
The team used computer software that wove together the observations to generate movies showing continuous motion.
“Taken together, our results paint a picture of jets as remarkably diverse objects that undergo highly structured interactions both within the material in the outflow and between the jet and the surrounding gas,” Hartigan explained. “This contrasts with the bulk of the existing simulations, many of which depict jets as smooth systems.”
Hartigan’s team’s results appeared in the July 20, 2011 issue of The Astrophysical Journal.
We explore some of the questions surrounding asynchronous programming and reveal that the need for new constructs, such as await in C#, isn't due to deep considerations, but results from the fact that the UI is single threaded.
There is currently a lot of ground-breaking work on the subject of asynchronous programming and how to make it easier for programmers to make use of the asynchronous nature of today's environments. The idea is to restore the correspondence between the program's text and the order of execution by using some interesting compiler rewriting techniques. While this is a step forward, the real situation is more complicated. In fact, this is a mess of our own creation, and with a little more thought no solution would have been necessary.
When the modern GUI was invented, some means of attracting the program's attention to what was going on was needed. We could have opted for a polling loop. For example, a rule of the UI could have been that every program took the form:
if (any controls active?)
    Process active controls
If this sounds crazy, you have been working with events too long. Events in most GUIs are just a way of covering up the polling loop.
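The hypothetical polling rule can be sketched in a few lines. Python is used here purely as illustrative pseudocode; `run_polling_loop`, the `controls` structure, and the bounded iteration count are all invented for the demonstration:

```python
# A toy polling loop: each iteration checks every control for pending
# activity and processes it. Real GUIs hide an equivalent loop inside
# the framework rather than making the programmer write it.
def run_polling_loop(controls, max_iterations):
    handled = []
    for _ in range(max_iterations):  # "do forever", bounded for the demo
        for control in controls:
            if control["active"]:                # any controls active?
                handled.append(control["name"])  # process active controls
                control["active"] = False
    return handled

controls = [{"name": "button1", "active": True},
            {"name": "button2", "active": False}]
print(run_polling_loop(controls, 3))  # ['button1']
```

The point of the sketch is only that some loop, somewhere, has to keep asking "anything happened yet?" - the question is who writes it.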
Instead of polling the designers opted for an event based system of asynchronous procedure calls. Instead of a polling loop any control that needs attention, a button that has been clicked, places a record in a queue maintained by a dispatcher object running on the UI thread.
The jargon isn't universal, but the way things work is more or less so. The event record can get into the dispatcher queue by way of another thread, but once it is in the queue it is then only processed by the UI thread and this is where the problems start.
The UI really is written in the form of a polling loop. Rather than you writing the polling loop, it is part of the internals of the framework. The UI simply scans the dispatcher object's queue for new event records. As soon as it finds one, it calls the event handlers assigned to the event, if any. Notice that it is the UI thread that is used to run the event handlers and while it is doing this the dispatcher object is not doing anything. This doesn't stop other threads putting event records into its queue, however.
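A minimal sketch of such a single-threaded dispatcher follows, in Python rather than any real GUI framework; the `Dispatcher` class, its `post`/`run_pending` methods, and the event names are invented for illustration:

```python
from collections import deque

# A toy single-threaded dispatcher: any thread may append event records,
# but only the "UI thread" (whichever thread calls run_pending) ever
# dequeues them and runs the handlers. While a handler runs, the queue
# is not being serviced.
class Dispatcher:
    def __init__(self):
        self.queue = deque()
        self.handlers = {}  # event name -> list of handler functions

    def post(self, event, payload=None):
        self.queue.append((event, payload))  # may be called from any thread

    def run_pending(self):
        while self.queue:
            event, payload = self.queue.popleft()
            for handler in self.handlers.get(event, []):
                handler(payload)  # runs on the dispatching (UI) thread

log = []
d = Dispatcher()
d.handlers["click"] = [lambda payload: log.append(("clicked", payload))]
d.post("click", "button1")
d.run_pending()
print(log)  # [('clicked', 'button1')]
```

Notice that a slow handler stalls `run_pending` itself, which is exactly the freezing problem described next.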
Freezing the thread
As long as the UI thread is being used to run an event handler, it isn't available to poll the dispatcher's queue, and so no events get processed. This is the reason that a long-running event handler will cause the UI to freeze. This is true of Windows Forms, WPF, Silverlight, HTML, Swing, etc. In nearly all cases worth mentioning, the use of a single-threaded dispatcher results in the UI freezing if any event handler performs a task that takes a lot of time.
So how do you let an event handler do a lot of time-consuming work?
The standard way is to spin the work off into a worker thread that then releases the UI thread to process the dispatcher queue and so keep the UI responsive. However, this takes the average programmer into the realm of multithreaded programming, something that even the above-average programmer makes mistakes with.
The traditional alternative to multithreading is to allow the dispatcher some time to process the queue every now and again - for example the much derided DoEvents command in Win Forms. A DoEvents command simply calls the dispatcher to process its queue with the UI thread returning to the instruction after the DoEvents when the queue is empty. By putting DoEvents commands at regular intervals within a long-running event handler, the UI can be kept responsive.
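The shape of this idea can be sketched as follows. This is not the real Win Forms `DoEvents`; the queue, `do_events`, and `long_running_handler` are invented stand-ins showing the single-threaded pumping pattern:

```python
from collections import deque

# A DoEvents-style sketch: a long-running handler periodically lets the
# dispatcher drain its queue so other events are not starved. Everything
# here runs on one thread, just as in the real single-threaded UI.
events = deque()
processed = []

def do_events():
    # Stand-in for the framework's "process the dispatcher queue now".
    while events:
        processed.append(events.popleft())

def long_running_handler(chunks_of_work):
    for chunk in range(chunks_of_work):
        pass          # ... one slice of the long computation ...
        do_events()   # keep the UI responsive between slices

events.extend(["repaint", "click"])
long_running_handler(3)
print(processed)  # ['repaint', 'click']
```

The cost is visible even in the sketch: `do_events` may run arbitrary other handlers in the middle of your computation.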
So what is the problem with DoEvent style commands?
The usual objection is that the current event handler might be reactivated by accident, and a nested sequence of DoEvents calls could bring the system to a halt. In practice DoEvents is also difficult to deal with, because there is no guarantee how long the UI thread will be occupied servicing other event code.
There is also the small matter that other event handlers could make changes to variables. In other words the event handler that yielded the UI thread might discover that things have changed when it gets control back.
However, this is not as bad as a true multithreaded situation, as there is only one thread doing the work and race hazards can't occur. In this situation all event handlers have to be written with the idea that all of the in-scope variables are volatile and can change their values. Notice that this is a problem that exists no matter what mechanism you use to allow the UI thread to run the dispatcher while an event handler is active.
So what is the current solution all about?
It isn't only event handlers that have a lot of work to do that are the problem. Sometimes an event handler that doesn't have much work to do stops the system in its tracks because it makes use of a service which is slow. It downloads a file from the internet, say. Usually this is done by calling another method and in this case the problem of blocking the UI thread is solved not by a "DoEvents" mechanism but by using a callback. A callback is just a function that is passed to the method doing the work that it promises to call when the job is complete.
Notice that the callback mechanism only works if the method using the callback terminates very soon after setting it. Only then can the UI thread be released and the dispatcher serviced. Notice that there is a subtlety here in that the callback has to be run on the UI thread and this either has to involve the dispatcher or some other similar mechanism to redirect the UI thread's attention to the callback function.
Essentially the callback function represents a continuation of the original method after the time consuming process is complete. Unfortunately the continuation is a far from perfect one. Usually little, if any, of the state of the method that issued the callback is preserved. That is, the callback starts off as a new function with no knowledge of what the original method was doing, other than what is conveyed in its parameters. Providing some of the state to a callback function is what closures are all about - a callback function can have access to the variables that were local when it was defined, even if those variables are now out of scope and supposedly destroyed.
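The closure idea can be illustrated directly. `fake_download`, `start_download`, and the names below are invented stand-ins for a real asynchronous download API:

```python
# Closures let a callback carry some of the issuing method's state:
# `destination` remains accessible to `on_done` after `start_download`
# returns, even though the enclosing call has finished.
completed = []

def fake_download(url, callback):
    # A real API would return immediately and invoke the callback later;
    # here we call it synchronously to keep the sketch deterministic.
    callback("contents of " + url)

def start_download(url, destination):
    def on_done(data):  # closure over `destination`
        completed.append((destination, data))
    fake_download(url, on_done)
    # The method returns here; the "UI thread" would now be free.

start_download("http://example.com/a", "/tmp/a")
print(completed)  # [('/tmp/a', 'contents of http://example.com/a')]
```

The closure preserves individual variables, but not the control flow of the original method - which is exactly what the next example shows going wrong.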
The problem of loss of context when a callback is used to continue a method can be illustrated by a very simple example. Imagine you want to download a single file from the internet. You might call something like downloadfile(url, mycallback) and expect the mycallback function to store the downloaded file when the download is complete. Easy - and because the method ends after the call to downloadfile, the UI thread is released, only to continue the method's work later when mycallback is invoked.
Now consider how you might download 25 files in the same way. The obvious thing to do is use a for loop, but you can't get the callback to continue a for loop automatically - the context is lost.
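What "the context is lost" means in practice is that the loop has to be rebuilt by hand out of chained callbacks. A sketch, again with an invented `fake_download` standing in for an asynchronous download API:

```python
# Without await, continuing a loop across callbacks means hand-rolling
# the loop state: each completion callback must explicitly start the
# next download, carrying the index along in a closure.
saved = []

def fake_download(url, callback):
    callback("data:" + url)  # a real API would invoke this later

def download_all(urls):
    def next_one(index):
        if index == len(urls):
            return                   # the "loop" has finished
        def on_done(data):
            saved.append(data)
            next_one(index + 1)      # continue the "for loop" by hand
        fake_download(urls[index], on_done)
    next_one(0)

download_all(["u1", "u2", "u3"])
print(saved)  # ['data:u1', 'data:u2', 'data:u3']
```

The for loop has dissolved into a recursive chain of closures - correct, but a long way from the straight-line code the programmer wanted to write.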
So what can we do?
The current solution, offered by C# 5 with .NET Framework 4.5, is interesting. It simply takes the idea of the continuation seriously. You can now write something like await downloadfile(url).
Now we don't need a callback because what happens is that the UI thread is released at the await until the download is complete, when the method continues as if nothing had happened. Notice that, from the point of view of the innocent programmer, nothing much exciting has happened. The program has worked by executing commands in the order in which they are written, and the call to downloadfile() is nothing special, apart from the need to put await in front of it. Of course, we know that at the await the UI thread was released and it got on with other tasks until the downloadfile call completed and the method was resumed with the mycallback function. (There is no reason to use a function or to call it mycallback - this is just for continuity with the previous example.)
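The same restoration of straight-line control flow can be seen in any await-style language; here is the 25-files example in Python's asyncio, with `fake_download` again an invented stand-in (the event loop plays the role of the UI thread):

```python
import asyncio

# With await, the loop is an ordinary for loop again: its context is
# preserved across each suspension, and the event loop is free to run
# other work at every await.
async def fake_download(url):
    await asyncio.sleep(0)  # yield control, as a real download would
    return "data:" + url

async def download_all(urls):
    results = []
    for url in urls:  # the loop context survives each await
        results.append(await fake_download(url))
    return results

print(asyncio.run(download_all(["u1", "u2"])))  # ['data:u1', 'data:u2']
```

Compare this with the hand-chained callback version: the compiler (or interpreter) is now doing the continuation bookkeeping for us.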
Notice that the await approach doesn't really do much more than the DoEvents approach, apart from avoiding the problem of re-entering the method that used the DoEvents call.
Multithread the UI?
Overall this seems like a lot of machinery to solve a simple problem - that the UI is single threaded.
Surely in this more advanced age it is time to implement a UI that isn't single threaded. Currently the dispatcher approach to implementing the UI is simply a gloss over the polling approach that seems so crude. It implements a sort of co-operative multitasking when we are quite capable of implementing a true multitasking architecture.
Why does the dispatcher have to run on the same thread as the event handlers?
This may be simple but it causes all of the problems we have seen earlier and forces the compiler designers to introduce keywords like await and their corresponding semantics, simply because the UI is single threaded and not because of some deep fundamental need of asynchronous programming.
If the dispatcher ran on its own thread and invoked event handlers using additional threads, then it wouldn't matter at all if an event handler held onto its thread for a long time - it wouldn't block anything. Of course, this is true multitasking and as such has some inherent dangers, but the dispatcher could apply some very simple rules to make sure nothing really terrible happened. It could ensure that only one thread was allocated at a time to a given control, so avoiding the problem of re-entrancy, i.e. an event cannot occur while its event handler is active. To make things simpler, we could share a single thread between the UI components unless they were marked as concurrent, and so on.
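To make the proposal concrete, here is one possible sketch of such a dispatcher: it runs on its own thread, hands each event to a worker thread, and serialises events per control with one lock per control so a handler cannot be re-entered. The class, its methods, and the shutdown sentinel are all invented for this illustration; a real UI framework would need far more machinery:

```python
import queue
import threading

class ConcurrentDispatcher:
    def __init__(self):
        self.events = queue.Queue()
        self.locks = {}    # control name -> lock guarding its handlers
        self.workers = []

    def post(self, control, handler, payload):
        self.events.put((control, handler, payload))

    def run(self):
        # Runs on the dispatcher's own thread, never on a handler's.
        while True:
            item = self.events.get()
            if item is None:          # sentinel: shut down
                break
            control, handler, payload = item
            lock = self.locks.setdefault(control, threading.Lock())
            def job(lock=lock, handler=handler, payload=payload):
                with lock:            # no re-entrancy for this control
                    handler(payload)
            worker = threading.Thread(target=job)
            self.workers.append(worker)
            worker.start()
        for worker in self.workers:
            worker.join()

results = []
results_lock = threading.Lock()

def on_click(payload):
    with results_lock:
        results.append(payload)

dispatcher = ConcurrentDispatcher()
ui = threading.Thread(target=dispatcher.run)
ui.start()
dispatcher.post("button1", on_click, "first")
dispatcher.post("button1", on_click, "second")
dispatcher.events.put(None)  # end of demo
ui.join()
print(sorted(results))  # ['first', 'second']
```

A slow handler here ties up only its worker thread; the dispatcher keeps servicing the queue, which is the whole point of the proposal.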
The point is that we could invent some new mechanisms to protect the UI from accidental multithreading while making it easy to allow it where it was beneficial.
I'm not suggesting that making the UI multithreaded is an easy option, but I am suggesting it is the worthwhile option and tinkering with asynchronous constructs such as await is really not getting to the heart of the problem. If you build a real user interface using electronics then being able to press more than one button at a time is the way the world works, and it is the way the programmatic UI should work as well.
The strange fact is that we are adding core language constructs to deal with a problem of framework design. Make the UI multithreaded and the need for await goes away. | <urn:uuid:564ad05a-18a6-4e9d-93f0-3a1d8cad8f49> | 2.703125 | 2,092 | Nonfiction Writing | Software Dev. | 53.042809 | 1,103 |
Issues in the Integration of Research and Operational Satellite Systems for Climate Research: II. Implementation
For research missions as for operational systems, observational requirements and considerations of data accessibility drive sensor and ground systems to a particular implementation that emphasizes innovation, data quality, and long-term accessibility. Rather than fly copies of specific sensors, the science research community generally prefers to fly new technology, and its missions are usually designed and implemented with strong scientific oversight. Sensor calibration and data product validation are critical elements of each mission. Ground systems focus on providing long-term access to low-level data to support reprocessing. Although there are cost constraints for research missions, these constraints are not applied to the system as a whole. Compelling new missions can be developed and funded somewhat independently of other elements of the NASA/ESE budget.
Although the operational and the research approaches can appear to conflict, there are features of both that are essential for climate research and monitoring. However, the operational agencies are necessarily wary of assuming responsibility for new requirements that may be open ended in an environment that is cost constrained. The research agencies are similarly concerned about requirements for long-term, operational-style measuring systems that might inhibit their ability to pursue new technologies and new scientific directions. Despite their need for long-term commitments to measure many critical variables, they wonder about relying on operational programs that might decrease the level of scientific oversight as well as opportunities for innovation.
Long-term, consistent time series are essential for the study of many critical climate processes, which vary over inherently long time scales. That said, many of the variables of interest for climate research have analogs in observing systems for short-term forecasting, although the performance requirements may differ significantly. For example, one of the fundamental attributes of operational observing systems—long-term commitment to data availability—is especially appealing to the climate research community. However, climate research also requires the ability to insert new observing capabilities to ensure that data remain at the state of the art as well as to respond to new science opportunities. Thus, the fundamental challenge is not the transitioning of research capabilities into the operational systems but rather the integration of the two capabilities in a rational manner. Both climate research and climate monitoring require a long-term commitment to consistent data sets and short-term flexibility to pursue new science and technology directions.
The integration of research and operational capabilities for climate research will require continuing cooperation between NASA and NOAA. Eventually, a single federal agency may be responsible for the overall climate observing strategy, but for the foreseeable future, the committee expects that the expertise from multiple agencies will be required. Its recommendations, therefore, are directed to NASA, NOAA, and the IPO.
KEY IMPLEMENTATION ISSUES
The following are the key implementation issues:
Long-term comparability of data sets such that sensor performance and other technical performance issues are not mistaken for natural variability in Earth’s system (the committee prefers the term “comparability” rather than “consistency” because, in its view, the long-term objective is to develop data records that can be compared and the basis quantified—it is difficult to develop consistent data sets even with identical sensors);
Data product validation, including quantitative assessments of the temporal and spatial accuracy of the data;
Data continuity and strategies to launch replacement sensors to maintain the quality of the long-term data record;
Long-term archiving of data sets and capabilities for reprocessing and analysis;
Accessibility and availability of data, including pricing; and | <urn:uuid:1e620865-a893-40ca-bcce-e819295b3f62> | 2.578125 | 756 | Knowledge Article | Science & Tech. | 1.372826 | 1,104 |
Artist's rendering of Mariner 4 in space. Credit: NASA
"Mariner 4 ran into a cloud of space dust," says Bill Cooke of the Marshall Space Flight Center Space Environments Team. "For about 45 minutes the spacecraft experienced a shower of meteoroids more intense than any Leonid meteor storm we've ever seen on Earth." The impacts ripped away bits of insulation and temporarily changed the craft's orientation in space.
Fortunately, the damage was slight and the mission's main objective -- a flyby of Mars -- had been completed two years earlier. But it could have been worse.
"There are many uncharted dust clouds in interplanetary space. Some are probably quite dense," says Cooke. Most of these clouds are left behind by comets, others are formed when asteroids run into one another. "We only know about the ones that happen to intersect Earth's orbit and cause meteor showers such as the Perseids or Leonids." The Mariner 4 cloud was a big surprise.
What does a piece of space dust look like? This picture shows one that is only 10 microns across. It was captured by a U2 aircraft in the stratosphere. Credit: NASA
"Of all NASA's Mars spacecraft, Mariner 4 was the only one we've sent with a micrometeoroid detector," he continued. During its journey to Mars and back, the detector registered occasional impacts from interplanetary dust grains -- as expected. The space between the planets is sprinkled with dust particles. They're harmless in small numbers. But when Mariner 4 encountered the cloud "the impact rate soared 10,000 fold," says Cooke.
Mapping these clouds and determining their orbits is important to NASA for obvious reasons: the more probes we send to Mars and elsewhere, the more likely they are to encounter uncharted clouds. No one wants their spaceship to be surprised by a meteor shower hundreds of millions of miles from Earth.
Much of Cooke's work at NASA involves computer-modeling of cometary debris streams -- long rivers of dust shed by comets as they orbit the sun. He studies how clumps form within the streams and how they are deflected by the gravity of planets (especially giant Jupiter). He and his colleagues also watch the sky for meteor outbursts here on Earth. "It's a good way to test our models and discover new streams," he says.
One such outburst happened on June 27, 1998. Sky watchers were surprised when hundreds of meteors streamed out of the constellation Bootes over a few-hour period. Earth had encountered a dust cloud much as Mariner 4 had done years earlier.
The meteors of 1998 were associated with a well-known meteor shower called the June Bootids. Normally the shower is weak, displaying only a few meteors per hour at maximum. But in 1998 it was intense. Similar outbursts had occurred, with no regular pattern, in 1916, 1921, and perhaps 1927.
The source of the June Bootids is comet 7P/Pons-Winnecke, which orbits the Sun once every 6.37 years. The comet follows an elliptical path that carries it from a point near the orbit of Earth to just beyond the orbit of Jupiter. Pons-Winnecke last visited the inner solar system in 2002. The comet's dusty trail is evidently clumpy. When our planet passes through a dense spot in the debris stream, a meteor shower erupts.
Meteor forecasters D.J. Asher and V.V. Emel'yanenko (MNRAS 331, 1998, 126) have calculated that the meteors seen in 1998 might return in 2003, although 2004 is more likely. "That's why watching for June Bootids this year is important: any activity now may herald another outburst in 2004," notes Robert Lunsford, Secretary-General of the International Meteor Organization, who is encouraging people to monitor the sky this week. | <urn:uuid:6ceedc25-efac-4946-976c-46508415dea1> | 3.734375 | 803 | Knowledge Article | Science & Tech. | 59.332175 | 1,105 |
By Neal Singer

The first step in deploying lasers on space satellites will take place Sept. 9, when NASA sends a lidar system into orbit on the space shuttle Discovery, says the UI researcher who helped design the major experiments for the mission. The lidar system, dubbed NASA LITE, is expected to be aboard the shuttle when it is launched at 4:30 p.m. EDT from Cape Canaveral, Fla., to monitor Earth's atmosphere. The shuttle is scheduled to return Sept. 18.

Lidar, an acronym derived from light detection and ranging, emits laser beams rather than radio waves to obtain information. LITE derives from Lidar in Space Technology Experiment.

"Through use of a pulsed laser beam as an active sensor, we can measure light back-scattered from air molecules, clouds and aerosols, or reflections from Earth's surface, and infer information about characteristics of the atmosphere," says UI electrical and computer engineer Chester S. Gardner. The system will be firing 10 times a second in the green, ultraviolet and infrared ranges. The returning light, captured by a telescope adjacent to the laser on the satellite, should yield the best information on concentrations of aerosols, movements of storm systems and heights of clouds.

Later lidar systems will study atmospheric ozone depletion, the accumulation of carbon dioxide and water vapor in the atmosphere, and the expansion or shrinkage of polar ice caps. The information is expected to be much more detailed than that obtained by current methods. Satellites now use a horizon-scanning technique that measures the thinning of sunlight as it passes through the edge of Earth's atmosphere. The technique can identify chemical constituents of the atmosphere, but its resolution is relatively poor, he said.

A major challenge for LITE will be to measure water reflectivity at different amounts of surface turbulence.
In an attempt to accomplish this, the shuttle will pitch forward 30 degrees as it passes over specific sites and will rotate at a speed that allows it to keep the laser beam focused on specific sites. The process is expected to help calibrate water reflections of different size waves for future generations of orbiting lidar systems.

Calibration stations around the globe will beam lidars upward to compare their readings with the shuttle's lidar as it passes above them. The largest such check station in terms of telescope size - 3.5 meters - will be at the Starfire Optical Range in Albuquerque, N.M., and will be staffed by UI electrical and computer engineer George Papen and several UI graduate students.

Gardner, who for eight years has been part of the 12-member LITE advisory committee, will lead the day shift at the Project Science control room at the Johnson Space Center in Houston during the 45 hours of intermittent observation expected from the lidar system. He and his colleagues will advise mission controllers on science problems that may arise during the tests.
The Economics of Ecosystems and Biodiversity: Ecological and Economic Foundations
Human well-being relies critically on ecosystem services provided by nature. Examples include water and air quality regulation, nutrient cycling and decomposition, plant pollination and flood control, all of which are dependent on biodiversity. They are predominantly public goods with limited or no markets and do not command any price in the conventional economic system, so their loss is often not detected and continues unaddressed and unabated. This in turn not only impacts human well-being, but also seriously undermines the sustainability of the economic system.
It is against this background that TEEB: The Economics of Ecosystems and Biodiversity project was set up in 2007 and led by the United Nations Environment Programme to provide a comprehensive global assessment of economic aspects of these issues. The Economics of Ecosystems and Biodiversity, written by a team of international experts, represents the scientific state of the art, providing a comprehensive assessment of the fundamental ecological and economic principles of measuring and valuing ecosystem services and biodiversity, and showing how these can be mainstreamed into public policies. The Economics of Ecosystems and Biodiversity and subsequent TEEB outputs will provide the authoritative knowledge and guidance to drive forward the biodiversity conservation agenda for the next decade.
1. Integrating the Ecological and Economic Dimensions in Biodiversity and Ecosystem Service Valuation
2. Biodiversity, Ecosystems and Ecosystem Services
3. Measuring Biophysical Quantities and the Use of Indicators
4. The Socio-cultural Context of Ecosystem and Biodiversity Valuation
5. The Economics of Valuing Ecosystem Services and Biodiversity
6. Discounting, Ethics, and Options for Maintaining Biodiversity and Ecosystem Integrity
7. Lessons Learned and Linkages with National Policies
Appendix 1: How the TEEB Framework Can be Applied: The Amazon Case
Appendix 2: Matrix Tables for Wetland and Forest Ecosystems
Appendix 3: Estimates of Monetary Values of Ecosystem Services
"A landmark study on one of the most pressing problems facing society, balancing economic growth and ecological protection to achieve a sustainable future."
- Simon Levin, Moffett Professor of Biology, Department of Ecology and Evolutionary Biology, Princeton University, USA
"TEEB brings a rigorous economic focus to bear on the problems of ecosystem degradation and biodiversity loss, and on their impacts on human welfare. TEEB is a very timely and useful study not only of the economic and social dimensions of the problem, but also of a set of practical solutions which deserve the attention of policy-makers around the world."
- Nicholas Stern, I.G. Patel Professor of Economics and Government at the London School of Economics and Chairman of the Grantham Research Institute on Climate Change and the Environment
"The [TEEB] project should show us all how expensive the global destruction of the natural world has become and – it is hoped – persuade us to slow down."
- The Guardian
"Biodiversity is the living fabric of this planet – the quantum and the variability of all its ecosystems, species, and genes. And yet, modern economies remain largely blind to the huge value of the abundance and diversity of this web of life, and the crucial and valuable roles it plays in human health, nutrition, habitation and indeed in the health and functioning of our economies. Humanity has instead fabricated the illusion that somehow we can get by without biodiversity, or that it is somehow peripheral to our contemporary world. The truth is we need it more than ever on a planet of six billion heading to over nine billion people by 2050. This volume of TEEB explores the challenges involved in addressing the economic invisibility of biodiversity, and organises the science and economics in a way decision makers would find it hard to ignore."
- Achim Steiner, Executive Director, United Nations Environment Programme
This volume is an output of TEEB: The Economics of Ecosystems and Biodiversity study and has been edited by Pushpam Kumar, Reader in Environmental Economics, University of Liverpool, UK. TEEB is hosted by the United Nations Environment Programme (UNEP) and supported by the European Commission, the German Federal Ministry for the Environment (BMU) and the UK Department for Environment, Food and Rural Affairs (DEFRA), recently joined by Norway's Ministry for Foreign Affairs, The Netherlands' Ministry of Housing (VROM), the UK Department for International Development (DFID) and also the Swedish International Development Cooperation Agency (SIDA). The study leader is Pavan Sukhdev, who is also Special Adviser – Green Economy Initiative, UNEP.
Graphene Membranes May Lead To Enhanced Natural Gas Production, Less CO2 Pollution, Says CU Study
Engineering faculty and students at the University of Colorado Boulder have produced the first experimental results showing that atomically thin graphene membranes with tiny pores can effectively and efficiently separate gas molecules through size-selective sieving.
The findings are a significant step toward the realization of more energy-efficient membranes for natural gas production and for reducing carbon dioxide emissions from power plant exhaust pipes.
Mechanical engineering professors Scott Bunch and John Pellegrino co-authored a paper in Nature Nanotechnology with graduate students Steven Koenig and Luda Wang detailing the experiments. The paper was published Oct. 7 in the journal’s online edition.
The research team introduced nanoscale pores into graphene sheets through ultraviolet light-induced oxidative “etching,” and then measured the permeability of various gases across the porous graphene membranes. Experiments were done with a range of gases including hydrogen, carbon dioxide, argon, nitrogen, methane and sulfur hexafluoride -- which range in size from 0.29 to 0.49 nanometers -- to demonstrate the potential for separation based on molecular size. One nanometer is one billionth of a meter.
“These atomically thin, porous graphene membranes represent a new class of ideal molecular sieves, where gas transport occurs through pores which have a thickness and diameter on the atomic scale,” said Bunch.
Graphene, a single layer of graphite, represents the first truly two-dimensional atomic crystal. It consists of a single layer of carbon atoms chemically bonded in a hexagonal “chicken wire” lattice -- a unique atomic structure that gives it remarkable electrical, mechanical and thermal properties.
“The mechanical properties of this wonder material fascinate our group the most,” Bunch said. “It is the thinnest and strongest material in the world, as well as being impermeable to all standard gases.”
Those characteristics make graphene an ideal material for creating a separation membrane because it is durable and yet doesn’t require a lot of energy to push molecules through it, he said.
Other technical challenges will need to be overcome before the technology can be fully realized. For example, creating large enough sheets of graphene to perform separations on an industrial scale, and developing a process for producing precisely defined nanopores of the required sizes are areas that need further development. The CU-Boulder experiments were done on a relatively small scale.
The importance of graphene in the scientific world was illustrated by the 2010 Nobel Prize in physics that honored two scientists at Manchester University in England, Andre K. Geim and Konstantin Novoselov, for producing, isolating, identifying and characterizing graphene. Scientists see a myriad of potential for graphene as research progresses, from making new and better display screens and electric circuits to producing tiny biomedical devices.
The research was sponsored by the National Science Foundation; the Membrane Science, Engineering and Technology Center at CU-Boulder; and the DARPA Center on Nanoscale Science and Technology for Integrated Micro/Nano Electromechanical Transducers at CU-Boulder.
SOURCE: University of Colorado Boulder
Summary: THE GEOMETRY OF K3 SURFACES
LECTURES DELIVERED AT THE
SCUOLA MATEMATICA INTERUNIVERSITARIA
JULY 31--AUGUST 27, 1988
DAVID R. MORRISON
This is a course about K3 surfaces and several related topics. I want to begin by working through an example which will illustrate some of the techniques and results we will encounter during the course. So consider the following problem.
Problem. Find an example of C ⊂ X ⊂ P³, where C is a smooth curve of genus 3 and degree 8 and X is a smooth surface of degree 4.
Of course, smooth surfaces of degree 4 are one type of K3 surface. (For those who don't know, a K3 surface is a (smooth) surface X which is simply connected and has trivial canonical bundle. Such surfaces satisfy χ(O_X) = 2, and for every divisor D on X, D · D is an even integer.)
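The two numerical facts just stated for K3 surfaces (χ(O_X) = 2 and evenness of D · D) follow quickly from the definition; here is a sketch of the standard reasoning, added for completeness and not part of the original notes:

```latex
% Simple connectivity gives q = h^1(\mathcal{O}_X) = 0, and the trivial
% canonical bundle gives p_g = h^0(K_X) = h^0(\mathcal{O}_X) = 1, so
\chi(\mathcal{O}_X) = 1 - q + p_g = 1 - 0 + 1 = 2.
% For any divisor D, adjunction gives 2p_a(D) - 2 = D \cdot (D + K_X);
% with K_X = 0 this equals D \cdot D, so the self-intersection is even:
D \cdot D = 2p_a(D) - 2 \equiv 0 \pmod{2}.
```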
We first try a very straightforward approach to this problem. Let C
No doubt about it: The federal government wants the Northern Goshawk to survive.
So every national forest in the West wrote into its forest plan detailed rules to protect critical habitat the bird- and rodent-hunting raptors need to survive.
Forest Service administrators apply those prescriptions to every thinning project, timber sale, off-road vehicle analysis and pipeline construction plan that comes along.
But what if goshawks don’t actually need the closed forest canopy, tree thickets and nearby underbrush full of scuttling rodents that biologists had assumed?
Turns out, the guidelines the U.S. Forest Service has applied to millions of acres in the West to protect the goshawk don’t help the beleaguered bird at all, according to a study by researchers from Northern Arizona University and published in the Journal of Applied Ecology.
In fact, a decade-long study of 13 goshawk nesting areas in the Apache Sitgreaves National Forest yielded the startling conclusion that the more closely the area resembled Forest Service guidelines for goshawk heaven, the fewer chicks the feathered pairs actually produced.
The researchers confessed themselves confounded by “these surprising results.”
Authors of the paper included Paul Beier, Erik Rogan, Michael Ingraldi and Steven Rosenstock, all with the NAU School of Forestry or the Arizona Department of Game and Fish.
“Contrary to expectation,” the researchers concluded, “goshawk breeding areas that resembled most closely the forest structure prescribed by the goshawk guidelines tended to have lower goshawk productivity.”
The research demonstrated the limits of common sense when it comes to predicting something as complicated as the relationship between an adaptable predator like the forest-dependent goshawk and either its habitat or its prey base. The goshawk guidelines now in effect on millions of acres start with the reasonable assumption that goshawks need clusters of big trees in which to build their nests close to the kinds of underbrush that harbor the rabbits, mice, squirrels and other creatures on which they prey.
Many biologists have assumed that goshawks have moved into forests in the Southwest as those forests have grown thicker in the past century, shifting from big trees separated by grassy swales to fire-prone tree thickets. Therefore, the Forest Service has struggled to shift back toward an open, fire-adapted forest without exterminating species that prefer a closed canopy with interlocking tree branches — like the goshawks and the Mexican Spotted Owl.
The NAU research now throws into question many key assumptions built into the ponderous legal strictures of existing forest plans.
“The results raise questions about the decision to implement the goshawk guidelines on most Forest Service lands in Arizona and New Mexico,” the researchers concluded.
However, the Forest Service remains legally bound to the detailed guidelines now cast in the legal concrete of adopted forest plans.
That applies even to vital efforts like the 4-Forests Restoration Initiative, an ambitious plan intended to dramatically thin millions of acres of forest across central Arizona.
The U.S. Forest Service recently picked Pioneer Forest Products to thin an initial 300,000 acres over the next decade. The project relies on completing a single environmental assessment on nearly a million acres as a way to streamline the thinning process. The Forest Service would then train the Pioneer subcontractors to create with their chain saws a more healthy, diverse, fire-adapted forest. The prescription attempts to also protect critical habitat for a host of species, including goshawks.
That means implementing the current goshawk guidelines, which will leave many dense patches of forest as nesting areas for goshawks and Mexican Spotted Owls.
4FRI Forest Service team leader Henry Provencio expressed surprise at the NAU findings, which cast doubt on the goshawk guidelines embedded in the current approach. But he said that once the guidelines get written into the forest plans, the Forest Service remains legally required to abide by them.
“Our forest plans require it,” he said. “But that would be a pain” if the existing guidelines don’t actually help the goshawks successfully rear more chicks. “We do have different prescriptions for the goshawk areas. In those breeding areas we know they typically have a higher (tree) density. So we have prescriptions for that. We’re trying to manage the future forest. One of the big concerns is whether we’re going to have adequate canopy cover — so we’re really managing groups of trees and also providing for those interspaces and managing for their prey.”
But the NAU study raises questions about whether biologists yet know enough to micro-manage the forest for the benefit of any individual species.
The goshawk and the Mexican Spotted Owl for years have fluttered about at the center of the legal and political fight about the future of the forest. The agile, crazy-orange-eyed goshawk is nearly as large as a red-tailed hawk, but can maneuver deftly through the thick forest. In open areas, they tend to lose out to the red-tails — which circle overhead looking for prey rather than perching on tree branches for a quick swoop to the ground.
The now nearly defunct timber industry in Arizona made most of its money on cutting the big, old growth trees associated with those species and others like the Kaibab squirrel and the Allen’s lappet-browed bat. With most of those trees reduced to two-by-fours, the timber industry had a hard time making money on the smaller trees that remained in dangerous profusion.
The Centers for Biological Diversity has repeatedly sued to prevent timber sales that included a large number of old growth pines greater than 16 inches in diameter at about chest height. For instance, earlier this year the Centers for Biological Diversity successfully blocked a timber sale on the North Rim of the Grand Canyon on the grounds that the 25,000-acre sale would include about 8,000 old-growth trees — even though such trees account for only about 3 percent of the trees.
The NAU study demonstrated that biologists still don’t really understand what species like goshawks need.
None of the sites studied very closely matched the guidelines, which call for clusters of giant, old-growth trees and nearby areas with underbrush likely to result in high populations of 14 different prey species.
Although little true old-growth ponderosa pine forest remains in Arizona, the researchers expected to find that the more closely the conditions around the nest area resembled that prescription — the more chicks the goshawks would produce. In fact, the more closely the forest matched the prescription the fewer chicks the hawks reared.
That doesn’t mean the goshawks don’t prefer nesting in big, old growth trees. But it does mean that they’re not as sensitive to the prey populations in the area or the nearby forest conditions as biologists had expected.
The study did find that goshawks produce more chicks in years when conditions produce a bumper crop of rodents, but that accounted for year-to-year variations — not a consistent difference between nesting areas.
The results “were remarkably consistent in documenting that goshawks use forest structures characterized by relatively dense canopy and many large trees, but do not use sites with higher prey abundance,” the researchers concluded.
The moral of the story would seem to support an approach that produces a diverse, healthy forest — without trying to micromanage the details.
Moreover, the researchers concluded that many seemingly common-sense assumptions must be challenged — and the impact of changes in management monitored.
That conclusion might also raise red flags when it comes to the current approach to the 4FRI effort, which many consider the last best hope to both avert catastrophic wildfires and restore forest health.
As it happens, the Forest Service rejected the bid of the 4FRI contractor that pledged to spend about $5 million monitoring the ecological effects of the massive thinning effort in favor of a contractor that included no money for monitoring in the bid.
However, the NAU researchers concluded, “our study suggests that goshawks did not respond as expected and the monitoring and adaptive management approach recommended in 1993 (when the goshawk guidelines were first adopted) is equally important today.”
It can be safely assumed that databases with a high volume of data or a complicated relational setup (like, perhaps, a lexical database for a living language) must be accessible to many users and operators at the same time. Ideally, it should be possible to combine existing, different hardware and software platforms into the actual system. In order to reduce the implementation cost, only one system, the database server, needs to be powerful; the user stations typically just display data and accept user commands, but the processing is done on one machine only, which led to the name client-server database. In addition, the user interface should be easy to maintain and should require as little as possible on the client side.
A system which meets these criteria can be built from the following protocols, concepts, and pieces of software:
Linux supplies the operating system. It is a stable Unix implementation providing true multi-user, multi-tasking services with full network (TCP/IP etc.) support. Except for the actual media and transmission cost, it is available free of charge and comes in the form of so-called distributions, which usually include everything needed, from the basic OS to text processing, scripting, software development, interface builders, etc.
HTML is the Hypertext Markup Language, used to build interfaces to network systems like intranets and the WWW, the World Wide Web. HTML is very simple and can be produced with any ASCII-capable text editor.
Browsers are text-based (e.g. Lynx) or graphical (e.g. Mosaic, Netscape, Arena, etc.) applications accepting, evaluating and displaying HTML documents. They are the only piece of software which is directly operated by the database user. Using browsers, it is possible to display various types of data (text, possibly images) and communicate with http servers (see next) on almost every popular computer model for which a browser has been made available.
HTTP servers provide access to the area of a host computer where data intended for public use in a network are stored. They understand the http protocol and procure the information the user requests.
SQL, the Structured Query Language, is a language for manipulating data in relational databases. It has a very simple grammar and is a standard with wide industry support. SQL-based databases have become the core of the classical client/server database concept. There are many well-known SQL systems available, like Oracle and Informix, and there is also msql, which comes with a very low or even zero price tag if it is used in academic and educational environments.
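As an illustrative sketch of the simple, declarative SQL grammar described above (the "lexemes" table, its columns, and its contents are invented for this example, and SQLite stands in for a server such as msql):

```python
import sqlite3

# In-memory SQLite database standing in for the SQL server; the "lexemes"
# table and its columns are hypothetical examples of a lexical database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE lexemes (headword TEXT, gloss TEXT)")
conn.executemany(
    "INSERT INTO lexemes VALUES (?, ?)",
    [("arbor", "tree"), ("aqua", "water"), ("avis", "bird")],
)

# The query itself: a small declarative statement, as described above.
rows = conn.execute(
    "SELECT headword, gloss FROM lexemes WHERE headword LIKE ? ORDER BY headword",
    ("a%",),
).fetchall()
print(rows)  # -> [('aqua', 'water'), ('arbor', 'tree'), ('avis', 'bird')]
```

The same SELECT statement would run essentially unchanged against any of the SQL systems named above, which is what makes SQL a practical lingua franca for the client/server design.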
CGI, the Common Gateway Interface, is the programming interface between the system holding the data (in our case an SQL-based system) and the network protocol (HTML, of course). CGIs can be built around many programming languages, but a particularly popular language is perl.
Perl is an extremely powerful scripting language which combines all the merits of C, various shell languages, and stream-manipulation languages like awk and sed. Perl has a lot of modularized interfaces and can be used to control SQL databases, for example.
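The pipeline just described (browser → http server → CGI → SQL database) can be sketched as follows. This is a minimal illustration, not the system the text describes: Python and SQLite stand in for perl and msql, and the handler function, table, and data are all invented for the example.

```python
import sqlite3
from html import escape

# Hypothetical lexical table standing in for the real server-side database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE lexemes (headword TEXT, gloss TEXT)")
conn.execute("INSERT INTO lexemes VALUES ('arbor', 'tree')")

def handle_request(query_string):
    """CGI-style handler: parse the browser's query, run SQL, emit HTML."""
    # A real CGI program would read QUERY_STRING from its environment.
    params = dict(pair.split("=", 1) for pair in query_string.split("&"))
    rows = conn.execute(
        "SELECT headword, gloss FROM lexemes WHERE headword = ?",
        (params["q"],),
    ).fetchall()
    # Only this HTML travels back to the browser; all processing is server-side,
    # which is the point of the client-server design described above.
    items = "".join(f"<li>{escape(h)}: {escape(g)}</li>" for h, g in rows)
    return f"<html><body><ul>{items}</ul></body></html>"

print(handle_request("q=arbor"))
# -> <html><body><ul><li>arbor: tree</li></ul></body></html>
```

The client needs nothing but a browser: all SQL knowledge, data, and processing live in the one powerful server process, exactly as the text's cost argument requires.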
July 13, 2012 -- An international team, led by the Institute of Genetics and Developmental Biology, Chinese Academy of Sciences, and BGI, the world's largest genomics organization, has completed the genomic sequence and analysis of salt cress (Thellungiella salsuginea), a wild salt-tolerant plant. The salt cress genome serves as a useful tool for exploring mechanisms of adaptive evolution and sheds new light on the genetic characteristics underlying plant abiotic stress tolerance.
The study was published online in PNAS.
Salt cress is a typical halophyte with high resistance to cold, drought, oxidative stresses and salinity. Due to its small plant size, short life cycle, copious seed production, small genome size, and efficient transformation, salt cress can serve as an important genetic model system for botanists, geneticists, and breeders to better explore the genetic mechanisms of abiotic stress tolerance.
In the study, researchers sequenced the genome of salt cress (Shandong ecotype) using the paired-end Solexa sequencing technology. The genomic data yielded a draft sequence of salt cress with about 134-fold coverage. The final length of the assembled sequences amounted to about 233.7 Mb, covering about 90% of the estimated size (~260 Mb). A total of 28,457 protein-coding regions were predicted in the sequenced salt cress genome. Researchers found that the average exon length of salt cress and A. thaliana genes was similar, whereas the average intron length of salt cress was about 30% larger than that of A. thaliana.
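A quick arithmetic check (our own, using the figures quoted in the text) confirms that the reported assembly length and the ~90% coverage of the estimated genome size are consistent:

```python
# Figures from the text: 233.7 Mb assembled, ~260 Mb estimated genome size.
assembled_mb = 233.7
estimated_mb = 260.0

coverage_fraction = assembled_mb / estimated_mb
print(round(coverage_fraction * 100))  # -> 90 (percent), matching "about 90%"
```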
The evolutionary analysis indicated that salt cress and its close relative, Arabidopsis thaliana, diverged approximately 7–12 million years ago. When tracing the differences between salt cress and A. thaliana, researchers found salt cress was characterized by a dramatically different lifestyle, a unique gene complement, significant differences in the expression of orthologs, and a larger genome size. Noticeably, the salt cress genome showed a dramatically higher content of transposable elements (TEs) than that of A. thaliana, which may be the reason for its enlarged genome size. In common with other higher plants, the salt cress genome contained an abundance of long terminal repeat (LTR) retrotransposons.
Salt can have drastic effects on the growth and yield of agronomical crops. It is estimated that salinity renders about one-third of the world's irrigated land unsuitable for crop production. In this study, researchers identified many genes in salt cress that contribute to its success in high-salt environments, such as the genes related with cation transport, abscisic acid signaling, and wax production.
Junyi Wang, Director of Science & Technology, Research & Cooperation Center, BGI, said, "Salt cress provides an excellent model and opportunity for researchers to explore plant's mechanisms of abiotic stress tolerance. The completed genomic sequence of salt cress will boost the advancement of stress tolerance research as well as provide a valuable theoretic instruct and technical support for researchers worldwide to better face the challenges of the soil salinization in irrigation area, the development and utilization of shallow offshore waters and beaches, and food security."
- H.-J. Wu, Z. Zhang, J.-Y. Wang, D.-H. Oh, M. Dassanayake, B. Liu, Q. Huang, H.-X. Sun, R. Xia, Y. Wu, Y.-N. Wang, Z. Yang, Y. Liu, W. Zhang, H. Zhang, J. Chu, C. Yan, S. Fang, J. Zhang, Y. Wang, F. Zhang, G. Wang, S. Y. Lee, J. M. Cheeseman, B. Yang, B. Li, J. Min, L. Yang, J. Wang, C. Chu, S.-Y. Chen, H. J. Bohnert, J.-K. Zhu, X.-J. Wang, Q. Xie. Insights into salt tolerance from the genome of Thellungiella salsuginea. Proceedings of the National Academy of Sciences, 2012; DOI: 10.1073/pnas.1209954109
Web edition: December 13, 2012
Print edition: December 29, 2012; Vol.182 #13 (p. 34)
According to one popular notion, everyone has a twin somewhere. Who knows, maybe the same is true for planets. Maybe there’s even a doppelgänger Earth orbiting at just the right distance from a sunlike star to support life. In his latest book, science writer Lemonick provides a behind-the-scenes look at the decades-long search for just such a planet. The endeavor, long considered a scientific backwater with little chance of success, is now one of the hottest fields in astronomy.
Like any nascent field of science, the search for exoplanets poses a challenge that has lured both established researchers and ambitious students. These pioneers aim to detect planets too distant to see directly, by discerning the subtle wobbles of stars being tugged back and forth by the planets, as well as slight dimmings that result when planets pass in front of their parent stars.
In a fascinating chronicle of camaraderie and competition, Lemonick profiles the prominent researchers in an astronomical discipline that is coming of age. He follows the twists and turns in their careers as well as the towering hurdles they faced and ultimately solved — including oft-denied funding requests and the equally daunting search for respect among scientific peers.
At first, researchers could discern only exceptionally large planets closely orbiting small stars. But techniques used to detect exoplanets are becoming more and more sensitive, and scientists may be getting close to discovering a mirror Earth — a find that might be revealed within months, not years, Lemonick contends. — Sid Perkins
Walker & Co., 2012, 294 p., $26
Young Cells in Old Brains [Preview]
The paradigm-shifting conclusion that adult brains can grow new neurons owes a lot to Elizabeth Gould's rats and monkeys
ELIZABETH GOULD: CHANGING MINDS
Image: PETER MURPHY
- Past thinking: Memories are stored by locked-in neural connections. Present: The brain can add neurons, perhaps to establish new memories.
- Hope for dementia: New neurons seem able to migrate, suggesting that therapeutic cells can be guided to areas damaged by disease or injury.
- Use it or lose it: In lab animals not kept in a stimulating cognitive environment, "most new neurons will die within a few weeks."
PRINCETON, N.J.--Reunion weekend at Princeton University, and the shady Gothic campus has been inundated by spring showers and men in boaters and natty orange seersucker jackets. Tents and small groups of murmuring alumni dot the courtyards. Everything proper, seemingly in its place. In
This article was originally published with the title Young Cells in Old Brains.
Written by: Emilio D’alise (SoSF Staff Journalist)
Perhaps not all are aware that 2009 has been designated the International Year of Astronomy. What does this mean? It means the public will be targeted by a world-wide effort to raise awareness of what's revolving above our heads. More precisely, we revolve under it, and I can say that these days without fearing a visit from the Inquisitors. Yep, it's the 400th anniversary of Galileo incurring their wrath.
Galileo saw something and wanted to make others aware of it, and now, 400 years later, the aim is not just to raise awareness, but to elicit participation by people young and old, from varied backgrounds, and in far and near places around the world. The hub for this effort is the International Year of Astronomy website. This is a deceptively simple and unassuming site that opens up to a wide variety of groups, associations, universities, and international groups all wanting to make observing the universe a better understood, and enjoyed, activity.
Sure, people can go to the site on their own, but I figure making a few suggestions, and providing the more interesting links here, may spur more people to make the virtual journey. It was difficult to sort through everything and choose a manageable number of things to write about; what interests me varies with the mood I’m in, the time of day, the time of year, and the last meal I ate. I resolved the issue by writing about the links I find myself revisiting. The order of the following is not an indication of ranking, or of my preference; it’s just the order they appear on the International Year of Astronomy website.
100 Hours of Astronomy
This is a worldwide event whose key goal is to have as many people as possible look through a telescope, as Galileo did 400 years ago. It consists of a number of public outreach activities scheduled for April 2nd through April 5th. The closest event near me is at the Colorado National Monument (near Grand Junction, CO). Given some cooperation from the weather, I plan to attend. I mention the weather because the months of April and May are likely candidates for major snow events around these parts. The observation itself is scheduled during favorable viewing conditions (i.e. the moon is not too bright), but while we can predict what the moon will do, weather is not so easily predicted, especially here in Colorado.
This ambitious project aims to give ten million people their first look through an astronomical telescope. The plan is to do so by providing high-quality, low-cost (around $10) telescope kits. The telescopes will be provided in kit form, as this will get the recipients involved in understanding the construction and give them some investment in the tool they will use. I like this idea a lot, and plan to get some for my nephews. The only thing I am unsure about is that the design of the telescope has people seeing an inverted image, and while this is not necessarily a big deal when observing stars, it takes some getting used to when tracking an object like a planet or the moon. I do understand that this will keep the cost down, and it is part of the learning process, but still, I'm hoping it will not turn some people off from learning to use it.
Dark Skies Awareness
This project ties in nicely with the energy concern we read about in the news. The project aims to make people aware of the “adverse impact of excess artificial lighting on local environments”. I moved to Colorado four years ago from a Detroit suburb, and I still stop and stare when I walk outside on a clear night, or head to work in the early hours. It's hard to convey the view by just saying “you see the stars”; a more accurate phrase is “you experience the stars”. I can walk out onto my deck before going to bed and stare at the Big Dipper, which seems to be hanging just out of arm's reach. Satellites are crossing the sky, and as your eyes adjust, the gazillion light sources separate into groups of individual points of light. Grab a pair of binoculars, and you get lost looking at stars whose light traveled billions of years to strike the receptors in your eyes. I don't live in a truly dark area; the glow of Colorado Springs is to the south of me, and to the north Denver spills its glow over the horizon toward me. Still, the sky above is dark enough to offer a spectacular sight . . . unless one of my neighbors left their outdoor lights on. The point is that in 30 years of living in Michigan I was deprived of the opportunity to see the universe above me for all but a few nights I spent in the Upper Peninsula, or at some vacation place. It would be nice if more people realized what they are missing, and actively worked to limit light pollution. It saves energy, and provides cheap entertainment in the form of mesmerizing displays of twinkling stars.
From Earth to the Universe
Those of us fascinated with the beauty hidden in the dark skies above will periodically seek out images captured by orbital and Earth-bound telescopes. Some of these photographs are truly stunning. The photographs deemed the most stunning are scheduled to travel around the world in exhibits held in public places; public parks, airports, malls, and anywhere diverse people can be exposed to the beauty of near and deep sky objects. There are a number of US and international events already planned, and more are in the process of being scheduled. (Note: if the reader wants some hands-on involvement, there is a project called Galaxy Zoo that might be of interest. In short, it aims to classify a large database of galaxies based on shape. This is done by people who sign up and essentially look at pictures of various galaxies and classify them according to a given criteria. You could be the first to ever look at a particular picture, and discover something new . . . or rather, very old.)
The World at Night
I am an amateur photographer, and this effort has spurred my interest in expanding (shameless plug) my photography into the realm of nighttime photography. The World at Night Galleries aim to showcase landmarks and symbols of various nations against the backdrop of the night sky. Perhaps viewing photographs from locales we are used to associating with the nightly news will remind us that our politics, grudges, and differences are insignificant when set against the vastness of the visible sky. It may seem a hopeless cause, but every person who is moved to look beyond the pettiness of a world lost to misguided self-importance becomes a prophet for improving the human condition. Not everyone can afford the time, expense, and effort of telescope observation, but nearly everyone can be made aware they need only raise their eyes a little bit to discover where our hope for the future lies. For surely if we continue looking down, the dirt we gaze upon will forever keep its hold on us.
The International Year of Astronomy website has much more to offer, and is well worth a visit. It’s also worth checking whether your city or town is home to a planetarium, star-gazing club, or astronomical observatory. Many offer tours, viewing parties, and various activities aimed at celebrating the last 400 years of mankind looking ever deeper into the darkness of the night . . . only it turns out it’s not darkness at all. Go see for yourself.
Corals: All corals belong to the Phylum Cnidaria (Ni-da´-ri-a). The cnidarians are a natural group of invertebrate animals that have a simpler organization than most other invertebrates.
The corals discussed in this article are capable of growing very fast. Fragging is in your future whether you realize it or not. Some of the slimy beginner corals like mushrooms, ...
Branching corals, especially shallow-water Acropora, which are primary habitat builders, will become brittle and more easily damaged, leading to extensive habitat deterioration.
Although sedimentation and destructive fishing methods may pose more risk to Indonesian coral reef ecosystems as a whole, the commercial extraction of corals cannot be overlooked.
NOAA National Ocean Service Education: Corals. What are Corals?
© 2004, PETCO Animal Supplies, Inc. All rights reserved. (0315) 1 of 2 Soft corals are leathery or fleshy colonies with a soft skeleton. They are hardier than hard corals and grow rapidly.
Most corals consist of many small polyps living together in a large group or a colony. A single polyp has a tube-shaped body with a mouth which is surrounded by tentacles.
"Stony" or "hard" corals have a hard calcium carbonate skeleton. They are a popular saltwater invertebrate for aquariums because of their beautiful colors and flower-like appearance.
This report was written by Patty Debenham, Ph.D. Contributing authors and editorial assistance from: Andrew Baker, Ph.D., Elisabeth Banks, Shannon Crownover, Lauren Cuneo, Hollis A. Hope, Corinne Knutson, Cindy Krupp, Dawn M. Martin, Bruce McKay, Elizabeth Neeley, Eric Punkay, Julia Roberson ... | <urn:uuid:afe86adf-7a3a-4729-8461-cced1e447b66> | 3.234375 | 449 | Truncated | Science & Tech. | 38.486573 | 1,116 |
Use Default Code Snippets in .NET to Accelerate Coding
Knowing how to use the default code snippets in .NET is beneficial for any programmer. Code snippets are pre-written pieces of code that a programmer can quickly insert using shortcut keys. This makes the programmer’s job easier by removing the need to retype frequently reused code structures.
Code snippets are an excellent way to accelerate your coding, and the most frequently used code constructs are included. I’ll use MessageBox in my first sample.
Whenever you need to use the MessageBox.Show method, you would normally type the entire one-line call yourself, for example:
MessageBox.Show("Hello World");
But using the default code snippets you can save time and be more productive.
Type ‘mbox’ and press the Tab key twice, and the Visual Studio IDE will fill in the MessageBox.Show method for you.
Let us try another. Type ‘for’ and press the Tab key twice. The IDE will generate the loop syntax automatically:
for (int i = 0; i < length; i++)
{

}
In the same way, you can use the following default code snippets.
These are the code snippets available in the Visual Studio IDE:
#if - Creates #if and #endif directives.
#region - Creates #region and #endregion directives.
~ - Creates a destructor.
attribute - Creates a declaration for a class that derives from Attribute.
checked - Creates a checked code block.
class - Creates a class declaration.
ctor - Creates a constructor.
cw - Creates a Console.WriteLine call.
do - Creates a do-while loop.
else - Creates an else code block.
enum - Creates an enum declaration.
equals - Creates a method declaration that overrides the Equals method.
exception - Creates a declaration for a class that derives from an exception.
for - Creates a for loop.
foreach - Creates a foreach loop.
forr - Creates a for loop with a decrementing loop variable.
if - Creates an if block.
indexer - Creates an indexer declaration.
interface - Creates an interface declaration.
invoke - Creates a block that invokes an event.
iterator - Creates an iterator.
iterindex - Creates a "named" iterator and indexer pair by using a nested class.
lock - Creates a lock block.
mbox - Creates a call to the MessageBox.Show method.
namespace - Creates a namespace declaration.
prop - Creates an auto-implemented property declaration.
propfull - Creates a property declaration with get and set accessors.
propg - Creates a read-only auto-implemented property with a private "set" accessor.
sim - Creates a static int Main method declaration.
struct - Creates a struct declaration.
svm - Creates a static void Main method declaration.
switch - Creates a switch block.
try - Creates a try-catch block.
tryf - Creates a try-finally block.
unchecked - Creates an unchecked block.
unsafe - Creates an unsafe block.
using - Creates a using directive.
while - Creates a while loop.
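As one concrete illustration, the propfull snippet inserts a template along these lines (this is an approximation from memory, not copied verbatim from the IDE; int, myVar and MyProperty are the placeholder fields you tab through and rename):

```csharp
// Approximate expansion of the "propfull" snippet.
// The IDE highlights the type and the two names as replaceable fields.
private int myVar;

public int MyProperty
{
    get { return myVar; }
    set { myVar = value; }
}
```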
DailyDirt: Life, But Not As We Know It
from the urls-we-dig-up dept
We've noted before that the classification of living things is a bit tricky. It's not just plants and animals anymore. Biologists are continuously discovering creatures that defy the old taxonomies. Here are just a few more examples of the incredible diversity of life on our planet.
- Christoph Adami tries to define life -- by studying artificial life and the characteristics of information processes that seem to behave like life. His self-replicating programs evolve and create virtual ecosystems -- and could help figure out how to find extraterrestrial life. [url]
- Biologists thought they had only cataloged about 10% of all fungal species. But there could be a whole different domain of life that resembles fungus but isn't. [url]
- Mantophasmatodea is the first new insect order discovered since the early 1900s. And these insects are almost everywhere. [url]
- To discover more interesting biological curiosities, check out what's currently floating around the StumbleUpon universe. [url] | <urn:uuid:85faa978-0869-4eae-9c24-2cf8ad183e29> | 2.546875 | 234 | Content Listing | Science & Tech. | 37.193206 | 1,118 |
The study of petroleum in both the saturated and unsaturated zones, to better understand the processes that control contaminant behavior, and to use this understanding to estimate the future behavior of the contaminants.
The bibliography provides citations pertinent to the effects of fire and its prescribed use on the ecosystems and species of Wisconsin and the upper Midwest. Three separate subject indexes are provided: general, species, and geographic location.
Explains what biochar is and how it is formed, its potential use in both fertilizer and carbon sequestration, and some of the research questions remaining to be addressed before we can utilize it fully in practical ways.
Study of the processes by which the conversion occurred of the Palouse bioregion from perennial native grass, shrub, and forest vegetation to agriculture and the interactions between human cultures and environment. | <urn:uuid:1704415c-7971-4b29-9f4b-3e5eba67664b> | 2.546875 | 165 | Content Listing | Science & Tech. | 11.059036 | 1,119 |
how to create a two-dimensional array?
Posted by yfpeng (yfpeng), 17 September 2002
As I promised, I will ask you more questions.
I guess we can use two-dimensional arrays in Perl - I saw some examples but their elements are all hard-coded. I want to create an empty two-dimensional array first, then push elements into this array. The elements are paired, e.g. if I push a T into the first array (in the two-dimensional array), then it will push an A into the same position of the second array, so the 2-D array looks like:
TTGC... --the first array
AACG... --the second array
It is like a matrix. How do I do this in Perl? Thanks very much.
Posted by admin (Graham Ellis), 17 September 2002
Perl uses "lists" rather than arrays - they can do everything an array can do in other languages (except give you problems when you run off the end), and lots more things besides ...
Amazingly, you don't have to tell Perl what's a list of lists (oh - sorry - that's what a "2 dimensional array" should be called). It works it out for itself from your code, and creates all necessary structure parts on the way. They call THAT auto-vivification
Your 2D array looks like a DNA sequence, with the first 'row' being a sequence, and the second row being the reverse complement (which no doubt you'll want to reverse later).
Here's a sample program that reads in a file containing a single FASTA sequence, splits it into a list, makes a second copy in reverse order, and switches As and Ts, Cs and Gs. It then puts it all into a list of lists, as per your question. Finally, it lists it out to "prove" that it's worked, using 2D array type notation.
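(The program listing itself did not survive in this copy of the thread. A sketch along the lines described, reading a FASTA sequence from standard input and building the list of lists, might look like the following; the sub name revcomp_pairs is my own, not from the original.)

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Build a "2D array" (a list of lists): row 0 holds the sequence,
# row 1 holds its reverse complement.
sub revcomp_pairs {
    my ($seq) = @_;
    my @forward    = split //, uc $seq;
    my @complement = reverse @forward;   # reverse the order ...
    tr/ACGT/TGCA/ for @complement;       # ... then swap A<->T and C<->G
    return [ \@forward, \@complement ];
}

# Read a single FASTA sequence from standard input, skipping the ">" header.
my $seq = "";
while ( my $line = <STDIN> ) {
    chomp $line;
    next if $line =~ /^>/;
    $seq .= $line;
}

my $pairs = revcomp_pairs($seq);
for my $row ( 0 .. $#$pairs ) {
    print @{ $pairs->[$row] }, "\n";     # 2D-array style access
}
```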
"More than simple lists and hashes" is a huge topic and I've
just started to scrath the surface here ... think I may have to
refer you to a more substantial set of material that I can
possibly write on the board if you want to take it further.
By the way ... output from my program was
with the input file being
Posted by yfpeng (yfpeng), 17 September 2002
Thanks for your neat and excellent code ... by the way, do you charge for this? FYI, I am a graduate student in bioinformatics.
But you still have two predefined arrays (sequence and reverse sequence) for your list. I want the list to be empty first, then I will go through the sequence, check if there is an A, if so I push A into the first array and T into the second array of the same column...check for the rest of As and do the same. This procedure will go for C and G too. Then I will count how many A T pairs, how many C G pairs in the sequence.
Your code has given me some ideas, but I am still thinking about how to achieve this? Thanks.
Posted by admin (Graham Ellis), 18 September 2002
You ask "do we charge" ... no. I've posted some background to what we're about onto another of the boards here (it's of more general interest than just the Perl crew):
The Reasoning being this board
I'll come back to the Perl question in an hour or two - I'm just in the middle of sorting out an email server that's got into some trouble ...
Posted by yfpeng (yfpeng), 18 September 2002
I was just kidding (for the excellent help you guys provide here).
Actually my last message was kind of silly, since we know A pairs T, C pairs G, why bother storing them in a list? What I wanted to do is store A, C, T, G in one row, and their occurrence in the other row, because I need to count the frequency of each one.
The list should look like:
row#1 (nucleotide) ATTCGAGTCT
row#2 (occurrence) 1111111111
then count the occurrence of A (1+1=2), etc., and sort row #2 so that I know the one with the most frequency. Interesting bioinformatics question, isn't it? Thanks.
Posted by admin (Graham Ellis), 18 September 2002
Quote:
I know ... but many a true word is spoken in jest. Folks come on the courses I give in the expectation that they'll not be able to ask me questions when the course is over - that's the way it works on other courses - and just wonder what my motivation is in providing email / board help.
Thanks for giving me (even in jest!) the opportunity to answer that! Now ... on to the Bioinformatics question. Separate message, I think?
Posted by admin (Graham Ellis), 18 September 2002
I might not be understanding the current question properly here - but I think it boils down to "how many each of C, A, G and T are there in a list of single characters?"
Just dashed that out - so there may be a couple of typos. I've used a hash with the keys being A C G and T to be my table of counters, then I've just stepped through the list in @dna.
Note the heavy use of $_ (did you know that print with no parameters prints the contents of $_?). When Perl 6 comes along, $_ will be even more powerful and important - expect to hear a lot about "topicalization".
Posted by yfpeng (yfpeng), 18 September 2002
I did not phrase the problem clearly. Yes, we can use a counter for each character, but then I need to sort A, C, T, G by their occurrence. That is why I was thinking about using a list - the first row is each A, C, T, G (each one can occur multiple times in the sequence), the second row is the occurrence (when I find one A in the sequence, I put 1 in the second row of the same column), then sort the first row in descending order according to the second row (occurrence or frequency). I do not know if we can do this in Perl - but I guess it can, since I have found Perl can do a lot of "strange" things from a beginner's point of view.
Hopefully the problem is a little more clear.
Posted by yfpeng (yfpeng), 18 September 2002
It probably makes more sense to use a hash in this case, I am trying ...
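(For reference, the hash approach hinted at here does exactly this: count each base, then sort the bases by how often they occur. A sketch, not from the original thread, using the sample sequence quoted earlier:)

```perl
#!/usr/bin/perl
use strict;
use warnings;

my $seq = "ATTCGAGTCT";    # sample sequence from earlier in the thread

# Count occurrences: key = base, value = number of times seen.
my %count;
$count{$_}++ for split //, $seq;

# Sort the bases in descending order of frequency.
my @by_freq = sort { $count{$b} <=> $count{$a} } keys %count;

for my $base (@by_freq) {
    print "$base occurs $count{$base} time(s)\n";
}
```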
Posted by admin (Graham Ellis), 19 September 2002
I'm afraid I'm still not really clear on what the list(s) or hashes should look like in the end - I'm just a programmer and know nothing of bioinformatics. Earlier in the thread you wrote
If you could tell me what the result would look like in this example, I might be better able to follow than I can at the moment
The burning of fossil fuels like coal, gas and oil releases particles into the atmosphere. When fossil fuels are not burned completely, they produce black carbon -- otherwise known as soot. Soot looks like a black or brown powder and though it's made up of tiny particles, it can have a big impact on climate.
Black carbon stays in the atmosphere for several days to weeks and then settles out onto the ground. It can be produced from natural causes like when lightning causes a forest fire. Most black carbon results from human practices like slash and burn methods for clearing land, using diesel engines, and industrial processes that burn coal, gas and oil, and coal burning in homes. Black carbon is produced around the world and the type of soot produced varies by region.
Black carbon adds to global warming in two ways. First, when soot enters the atmosphere, it absorbs sunlight and generates heat, warming the air. Second, when soot settles on snow and ice, it makes the surface darker, so the surface absorbs more sunlight and generates heat. This warming causes more snow and ice to melt, in what can be a vicious cycle.
Black carbon lowers the albedo of a surface. Scientists use the term "albedo" as an indicator of the amount of energy reflected by a surface. Albedo is measured on a scale from zero to one (or sometimes as a percent).
- Very dark colors have an albedo close to zero (or close to 0%).
- Very light colors have an albedo close to one (or close to 100%).
Soot is dark in color, and so has a low albedo and reflects only a small fraction of the Sun's energy. Forests have low albedo, near 0.15. Snow and ice, on the other hand, are very light in color. They have very high albedo, as high as 0.8 or 0.9, so they reflect most of the solar energy that gets to them, absorbing very little. The more dark surfaces on Earth, the less solar energy is reflected and this means more warming as more solar radiation is absorbed. Soot makes surfaces (or the atmosphere) darker and so adds to global warming.
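To put numbers on that, here is a tiny worked example; the 60% albedo used for soot-darkened snow is an assumed, illustrative figure, not a measurement:

```python
# Fraction of incoming solar energy absorbed by a surface = 100% - albedo.
clean_snow_albedo = 90    # percent reflected (bright snow, ~0.9 albedo)
sooty_snow_albedo = 60    # percent reflected (assumed soot-darkened value)

clean_absorbed = 100 - clean_snow_albedo    # 10% of sunlight absorbed
sooty_absorbed = 100 - sooty_snow_albedo    # 40% of sunlight absorbed

# In this example, darkening the snow quadruples the energy it soaks up.
print(sooty_absorbed / clean_absorbed)      # 4.0
```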
Scientists say that black carbon emissions are the second largest factor in global warming, after carbon dioxide. Reducing black carbon is one of the fastest ways to slow global warming. Luckily, many policies have been put in place to lessen black carbon around the world, and the technology needed to lessen black carbon already exists. The importance of black carbon's role in global warming has come to the forefront of the minds of many concerned citizens, and exciting steps are already being taken to address issues like making cleaner-burning cookstoves available in developing nations and improving industrial practices that produce black carbon. Reducing black carbon around the world will not only lessen global warming, but will cut down on air pollution and will improve human health.
setfsuid - set user identity used for file system checks
#include <unistd.h> /* glibc uses <sys/fsuid.h> */

int setfsuid(uid_t fsuid);
setfsuid sets the user ID that the Linux kernel uses to check for all accesses to the file system. Normally, the value of fsuid will shadow the value of the effective user ID. In fact, whenever the effective user ID is changed, fsuid will also be changed to the new value of the effective user ID.
An explicit call to setfsuid is usually only used by programs such as the Linux NFS server that need to change what user ID is used for file access without a corresponding change in the real and effective user IDs. A change in the normal user IDs for a program such as the NFS server is a security hole that can expose it to unwanted signals from other user IDs.
setfsuid will only succeed if the caller is the superuser or if fsuid matches either the real user ID, effective user ID, saved set-user-ID, or the current value of fsuid.
On success, the previous value of fsuid is returned. On error, the current value of fsuid is returned.
setfsuid is Linux specific and should not be used in programs intended to be portable.
No error messages of any kind are returned to the caller. At the very least, EPERM should be returned when the call fails.
When glibc determines that the argument is not a valid uid, it will return -1 and set errno to EINVAL without attempting the system call.
Science Fair Project Encyclopedia
Mist is a phenomenon of a liquid in small droplets floating through air. It can occur naturally as part of weather or volcanic activity, and is common in cold air above hot water, in exhaled air in the cold, and in the steam room of a sauna. It can also be created artificially with aerosol canisters. Fog is a closely related phenomenon, and the definitions of the two overlap. At performances and parties a fog machine can be used for visual effects, possibly combined with special lighting.
Not to be confused with the Myst computer game series.
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.
This image is of the solar eclipse earlier this week. Solar eclipses occur when the moon comes between the Earth and Sun. However, there's more to it than just that, otherwise we'd have a solar eclipse every ~28 days (one full lunar cycle).
When viewed edge on, the plane in which the moon orbits is slightly tilted in relation to the plane the Earth and Sun lie on (hence the reason the shadow moves along a different line in the sky than the sun, intersecting only at the one point). Because of this, most of the time, when the moon is on the line between the Earth and the Sun, it is simply too high or too low to cause an eclipse.
Sometimes it's between the point where it's too high or low and the point where it will completely come in front of the Sun. In this case, the moon will only cover part of the Sun and the result will be a partial eclipse, such as this one I photographed in Spring 2005.
Additionally, the moon's orbit around the Earth is not perfectly circular. It is slightly elliptical. This means that at some points in its orbit, it is farther away than at others. As common experience should tell you, the farther away an object is, the smaller it will look (which is why the sun appears the same size in the sky as the moon despite being millions of times bigger). Therefore, when the moon is farther away, it looks smaller, and may not cover the sun entirely. This is known as an annular eclipse, in which the moon will be silhouetted on the sun leaving a ring (such as in this picture).
Thus this image is an extremely rare "total solar eclipse" in which the moon completely covers the full disk of the sun. But what's all that fuzzy stuff around it in the center one? That's called the corona and is essentially the sun's extended atmosphere which is shaped by the Sun's immense magnetic field. It's actually always there, but it's extremely faint in comparison to the sun, so we can't see it unless the sun is somehow blocked out, as in the case of a total solar eclipse. It is primarily composed of the nuclei of ionized hydrogen atoms.
You may also be wondering why you didn't happen to catch this eclipse given that it only happened a few days ago. The reason is that this one only happened to be visible from regions of northern Africa and the Middle East. You should now be asking yourself, "why only such a small location given that half the Earth can see the sun at any time?" The reason for this is something called parallax. In the scenario of a total eclipse, only the locations directly below the center of the moon will see the eclipse. Locations slightly further away will be viewing the event from a slightly different angle.
While this wouldn't seem like it would play much of a difference, try a quick experiment. Imagine your left eye is someone standing in southern Africa and that your right is someone standing in England. Close one eye and hold your fist out in front of you and cause it to eclipse something on the other side of the room (or outside if possible, the further away the better). Make sure the object you choose is just barely covered by your fist. Now without moving your arm, change eyes. You'll notice that your fist is no longer covering the object at all.
This effect that you have just observed is precisely what happens in the case of an eclipse for different observers and is what astronomers call parallax (Parallax also has many other applications in astronomy such as directly measuring the distance to a great number of stars to extremely high precision thanks to the HIPPARCOS satellite). This quick experiment is also reasonably close to actual scale in terms of angular sizes and relation between sizes for the earth and moon. The distances between objects and true sizes aren't even close, but those don't matter in this case.
So you're probably wondering why there's the strange disjointed path. After all, we never see that. There's only one sun in the sky. This image is actually a compilation of 18 images taken ~3 minutes apart (and presumably one more to use as the beautiful background). I can say that these were taken ~3 minutes apart because of the spacing of the suns.
In 24 hours, the sun makes a full 360º path around the sky. Thus, converting hours to minutes and dividing, we find that the sun moves 1º every 4 minutes. Although it doesn't seem that there's any scale marked on this image to permit me to figure out how many degrees there are between each image from which to figure out the time between images, there actually is a very easy one: the sun itself.
Both the sun and the moon have an angular size of 1/2º. That means that if the little suns were butted right up against one another, the sun would have traveled 1/2º between images, which in turn implies that it would have been 2 minutes (4/2) between each image. Since there's a little more space, roughly 1/2 of a sun width (i.e., 1/4º), I can estimate there was approximately another minute between pictures. Thus 2 + 1 = 3.
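The arithmetic above is easy to double-check in a few lines (the quarter-degree gap is the post's own estimate):

```python
# The Sun circles the sky, 360 degrees, once every 24 hours.
deg_per_minute = 360 / (24 * 60)    # 0.25 degrees per minute

sun_width = 0.5     # angular diameter of the Sun, in degrees
gap = 0.25          # estimated spacing between images: about half a sun-width

minutes_between_images = (sun_width + gap) / deg_per_minute
print(minutes_between_images)       # 3.0
```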
So ultimately 18 images of the sun were taken and then reassembled to produce this dramatic image. While in and of itself it is quite stunning, a closer look reveals more information than meets the eye. This concept is one I feel is important to keep in mind in the sciences. Things are not always what they seem to be at a first glance. If this wasn't the driving concept behind science, we would still hold with many ridiculous ideas such as the Earth being flat, or alchemy, or perhaps more relevant today, intelligent design.
Image copyright: Stefan Seip
Found via: NASA Astronomy Picture of the Day
Update: The original version of this post contained erroneous math which was noted by reader Benjamin Franz in the comments. I have corrected my math here, but wanted to make sure he was given due credit.
The western chorus frog is found in the middle to eastern portions of the North American continent. Its range extends from southern Quebec and northern New York west to South Dakota, then south to Kansas and Oklahoma (Harding 1997).
Western chorus frogs can be found in a variety of habitats, including marshes, meadows, swales, and other open areas. Less frequently they can be found in fallowed agricultural fields, damp woods, and wooded swamps. These areas of less permanent water offer reduced risk of egg and tadpole predation by other animals such as fish. There is a trade-off, however, as these temporary bodies of water can dry up in years of drought, resulting in reproductive failure for that year (Harding 1997).
The western chorus frog is characterized by a white or cream colored stripe along the upper lip, bordered by a dark brown stripe running through the eye from the nostril to the groin. There are usually 3 dark stripes running down the back, although these may be broken into rows of spots in some specimens. Background color ranges from brown to gray or olive. The underside is white or cream colored, possibly with dark spots on the chin and throat (Conant and Collins, 1991). Males have a yellow colored vocal sac that appears as a dark, loose flap of skin when not calling. The skin of the western chorus frog is typically moist and bumpy, and the toes end in slightly expanded toepads. Adult length is typically 1.9 to 3.9 cm (0.75" to 1.5"), with males usually smaller than females. P. triseriata tadpoles have gray or brown, rounded bodies. Their tail fins are clear, often with dark flecks. The intestinal coil can be seen through the bronze belly skin. Maximum length before metamorphosis is about 3 cm (1.2 inches) (Harding 1997).
In Michigan, the breeding season of Pseudacris triseriata begins in mid-March and runs through late May, although most activity occurs in April. These periods can vary, with breeding taking place earlier in the southern end of its range and later in the northern end. (Conant and Collins, 1991). Breeding sites include the edges of shallow ponds, flooded swales, ditches, wooded swamps, and flooded fields. Breeding choruses early in the season can be heard on clear, sunny days, but shift to evenings or cloudy, rainy days as the season progresses. Picking the small end of a high quality fine tooth comb with a fingernail can reproduce the call of the western chorus frog. The call sounds like "Cree-ee-ee-ee-eek", rising in speed and pitch as it progresses.
During amplexus, the female will lay 500-1500 eggs in several loose, gelatinous clusters attached to submerged grasses or sticks. Each cluster will typically have 20 to 300 eggs. Hatching generally occurs in 3 to 14 days and tadpoles transform into tiny froglets 40 to 90 days thereafter. The rate of development of the eggs and larvae is dependent on water temperature, with specimens in colder water requiring more time for development. Western chorus frogs can mature and breed in less than one year (Harding 1997).
After laying their eggs in clusters on vegetation there is no further parental care in Striped Chorus Frogs.
Most Striped Chorus Frogs will probably die as tadpoles or froglets. Once they reach adulthood, Striped Chorus Frogs may live for about 5 years.
Western chorus frogs tend to remain close to their breeding grounds throughout the year. They often hide from predators beneath logs, rocks, leaf litter, and in loose soil or animal burrows. They will typically hibernate in these places as well (Harding 1997).
Striped Chorus Frog males use their calls to attract females to breeding sites during the breeding season. Striped Chorus Frogs also use their keen vision to capture prey.
Western chorus frogs eat a variety of small invertebrates, including ants, flies, beetles, moths, caterpillars, leaf hoppers, and spiders. Newly formed froglets feed on smaller prey, including mites, midges, and springtails. Tadpoles are herbivorous, foraging mostly on algae (Harding 1997).
Striped Chorus Frogs help to control insect populations where they live, they also act as an important food source for their predators.
The western chorus frog (and most other frogs) acts as a critical indicator species. Because the larval and adult forms of this species occupy different levels of the food chain, anomalies (such as deformities) or a reduction in reproductive success can be linked to either aquatic or terrestrial ecosystems, depending on the life stage of the animal. This makes this species valuable in determining the overall health of both ecosystems. The permeable skin of the western chorus frog also makes it susceptible to contaminants and other external stimuli. Changes in morphology or ecology of this species might indicate high levels of pollution or other activity detrimental to their well being.
The western chorus frog can be common to locally abundant, although some areas have shown a decline. The subspecies Pseudacris triseriata maculata is listed as special concern in the state of Michigan. This species appears to be quite tolerant of human activities, considering its presence in agricultural and suburban areas. Caution must be exercised during agricultural practices, as runoff containing pesticides, herbicides, and fertilizers often fills breeding ponds, making eggs and larvae susceptible to detrimental effects (Harding 1997).
Kevin Gardiner (author), Michigan State University, James Harding (editor), Michigan State University.
living in the Nearctic biogeographic province, the northern part of the New World. This includes Greenland, the Canadian Arctic islands, and all of North America as far south as the highlands of central Mexico.
having body symmetry such that the animal can be divided in one plane into two mirror-image halves. Animals with bilateral symmetry have dorsal and ventral sides, as well as anterior and posterior ends. Synapomorphy of the Bilateria.
an animal that mainly eats meat
active at dawn and dusk
animals which must use heat acquired from the environment and behavioral adaptations to regulate body temperature
fertilization takes place outside the female's body
union of egg and spermatozoan
forest biomes are dominated by trees; otherwise, forest biomes can vary widely in the amount of precipitation and seasonality.
mainly lives in water that is not salty.
offspring are produced in more than one group (litters, clutches, etc.) and across multiple seasons (or other periods hospitable to reproduction). Iteroparous animals must, by definition, survive over multiple seasons (or periodic condition changes).
A large change in the shape or structure of an animal that happens as the animal grows. In insects, "incomplete metamorphosis" is when young animals are similar to adults and change gradually into the adult form, and "complete metamorphosis" is when there is a profound change between larval and adult forms. Butterflies have complete metamorphosis, grasshoppers have incomplete metamorphosis.
having the capacity to move from one place to another.
the area in which the animal is naturally found, the region in which it is endemic.
active during the night
reproduction in which eggs are released by the female; development of offspring occurs outside the mother's body.
having more than one female as a mate at one time
breeding is confined to a particular season
reproduction that includes combining the genetic contribution of two individuals, a male and a female
a wetland area that may be permanently or intermittently covered in water, often dominated by woody vegetation.
that region of the Earth between 23.5 degrees North and 60 degrees North (between the Tropic of Cancer and the Arctic Circle) and between 23.5 degrees South and 60 degrees South (between the Tropic of Capricorn and the Antarctic Circle).
Living on the ground.
Conant, R., J. Collins. 1991. Peterson Field Guides: Reptiles and Amphibians of Eastern and Central North America. New York: Houghton Mifflin Company.
Harding, J. 1997. Amphibians and Reptiles of the Great Lakes Region. Ann Arbor, MI: The University of Michigan Press.
It has been nearly 200 years since we became aware of the Neanderthals, an extinct form of humans that once shared Europe and Asia with modern humans. But it has been less than two years since we discovered that the Neanderthals were not the only archaic humans around at the time. In short order, researchers in Germany produced a draft of the Denisova genome, which showed that the ancestors of some modern human populations had interbred with the Denisovans at some point in the past.
However, the genome sequence that was published in 2010 was only a draft, which is expected to contain errors and areas of very poor coverage. The folks at the Max Planck Institute have continued sequencing away, though, and have greatly expanded their coverage of the Denisova genome; they're apparently preparing a paper to describe the expanded sequence right now. But to keep the research community from waiting for the paper to clear peer review, they've decided to release the sequence, both on the Max Planck website and through Amazon's web services. The release includes both the raw sequence itself, as well as alignments to the human and chimp genomes.
To protect their ability to publish a paper, the Max Planck team is releasing the sequence under a license that prohibits anyone else from doing an analysis of the complete genome. But anyone interested in looking at specific genes is able to do their analysis without waiting. People interested in doing something in between these two extremes are invited to get in touch with Svante Pääbo, who is directing the work, to sort out an agreement.
You may remember 2004’s disaster movie and CGI delivery vehicle, The Day After Tomorrow. The premise of the film (which like any self-respecting disaster film, is excessively absurd) is that global warming suddenly plunges the world into the depths of an ice age. New York City drowns under the largest storm surge in history, and then flash freezes. As is the case with many disaster movies, there’s a small kernel of truth at the eye of this hurricane of exaggeration.
That kernel relates to ocean circulation and the Gulf Stream. The Gulf Stream carries warm water toward Western Europe, helping to keep it more temperate than its latitude would otherwise dictate. It depends on the downward flow of dense, salty water in the North Atlantic that drives a "conveyor belt" of ocean circulation in the Atlantic. Large amounts of fresh water discharged to the North Atlantic (from melting ice sheets, for example) can clog up that overturn by decreasing the density of the surface water. Slowing down Atlantic circulation drives down temperatures in Europe and affects climate around the globe.
During the most recent ice age, changes to the Atlantic conveyor system appear to have triggered bursts of extremely rapid climate change. A new study pins these changes on an event that took place elsewhere in the globe: the closing of the Bering Strait between Alaska and Russia.
Although it's not able to generate cinematic quality climatic chaos, researchers think that the shutdown in the Atlantic conveyor is behind some of the most rapid climate changes visible in ice core records from Greenland—the Dansgaard-Oeschger oscillations. These events occurred in cycles roughly 1,500 years long throughout much of the last glacial period.
Although North Atlantic overturning seems to be involved in these events, it’s unclear what alters the currents. It could be an external trigger (though no orbital or solar cycles really fit the bill), or it could be a sort of ice sheet heartbeat. It may be that the events can only occur when ice sheets reach a critical size, meaning that the rhythm of the cycles could be determined by the growth rate of ice sheets.
Whatever the trigger is, it appears to have been absent or ineffective at the start of the most recent ice age. The last glacial period began around 115,000 years ago, but Dansgaard-Oeschger oscillations were only prevalent between 11,000 and 80,000 years ago. They didn’t appear for the first 35,000 years of the glacial period, and they haven’t been seen since it ended. A paper published this week in Proceedings of the National Academy of Sciences pins this difference on a feature that's an ocean away.
It had been proposed that the Bering Strait between Alaska and Eastern Russia—which is replaced by a land bridge when sea level drops during glacial periods—could have something to do with these rapid climate shifts. So, a group of researchers set out to test the idea using the latest Community Climate System Model (CCSM3).
The model was run under two scenarios—one with modern sea level and an open Bering Strait, and one with a lower sea level and a closed Bering Strait. In each, freshwater was added to the North Atlantic at a slowly increasing rate until the overturning circulation slowed down, after which the freshwater input was ramped back down to zero.
During the Dansgaard-Oeschger oscillations, the overturning circulation seems to show a sort of double equilibrium. One state is the normal mode, like it behaves today. That seems to collapse to a low-circulation state that can hang around for quite a while before flipping back to full strength.
The simulation with an open Bering Strait couldn’t replicate this behavior. The overturning circulation would slow down, but as soon as the freshwater addition started to drop, the circulation would smoothly recover right along with it. With the Bering Strait closed, however, the circulation would collapse more quickly, hold steady there for a while, and then abruptly kick back into gear. Much like the real thing is thought to have done.
The Bering Strait exerts its influence by controlling flow between the Arctic and the North Pacific. Normally, fresher water flows into the Arctic, but when freshwater is being added to the North Atlantic some of it leaks into the Arctic and out to the Pacific. That helps keep the overturning circulation in the North Atlantic from clogging up so easily. In contrast, when the Bering Strait is closed, the freshwater in the North Atlantic piles up and lingers.
Beyond offering an explanation of why the Dansgaard-Oeschger oscillations happened when they did (during the period when sea level was low enough that the Bering Strait was closed off), this work also has something to say about the future. Since the Bering Strait is open today, an abrupt collapse of overturning circulation in the North Atlantic due to melting Greenland ice could be much less likely. And that’s just one more reason why the day after tomorrow probably won’t resemble The Day After Tomorrow.
Macroscopic Properties and Microscopic Models
As a simple example of how the macroscopic properties of a substance can be explained on a microscopic level, consider the liquid mercury. Macroscopically, mercury at ordinary temperatures is a silvery liquid which can be poured much like water—rather unusual for a metal. Mercury is also the heaviest known liquid. Its density is 13.6 g cm–3, as compared with only 1.0 g cm–3 for water. When cooled below –38.9°C mercury solidifies and behaves very much like more familiar solid metals such as copper and iron. Mercury frozen around the end of a wooden stick can be used to hammer nails, as long as it is kept sufficiently cold. Solid mercury has a density of 14.1 g cm–3, slightly greater than that of the liquid.
When mercury is heated, it remains a liquid until quite a high temperature, finally boiling at 356.6°C to give an invisible vapor. Even at low concentrations gaseous mercury is extremely toxic if breathed into the lungs. It has been responsible for many cases of human poisoning. In other respects mercury vapor behaves much like any other gas. It is easily compressible. Even when quite modest pressures are applied, the volume decreases noticeably. Mercury vapor is also much less dense than the liquid or the solid. At 400°C and ordinary pressures, its density is 3.6 × 10–3 g cm–3, about one four-thousandth that of solid or liquid mercury.
A modern chemist would interpret these macroscopic properties in terms of a sub-microscopic model involving atoms of mercury. As shown in the following figure, the atoms may be thought of as small, hard spheres. Like billiard balls they can move around and bounce off one another. In solid mercury the centers of adjacent atoms are separated by only 300 pm (300 × 10–12 m, or 3.00 Å). Although each atom can move around a little, the others surround it so closely that it cannot escape its allotted position. Hence the solid is rigid. Very few atoms move out of position even when it strikes a nail. As temperature increases, the atoms vibrate more violently, and eventually the solid melts. In liquid mercury, the regular, geometrically rigid structure is gone and the atoms are free to move about, but they are still rather close together and difficult to separate. This ability of the atoms to move past each other accounts for the fact that liquid mercury can flow and take the shape of its container. Note that the structure of the liquid is not as compact as that of the solid; a few gaps are present. These gaps explain why liquid mercury is less dense than the solid.
In gaseous mercury, also called mercury vapor, the atoms are very much farther apart than in the liquid and they move around quite freely and rapidly. Since there are very few atoms per unit volume, the density is considerably lower than for the liquid and solid. By moving rapidly in all directions, the atoms of mercury (or any other gas, for that matter) are able to fill any container in which they are placed. When the atoms hit a wall of the container, they bounce off. This constant bombardment by atoms on the sub-microscopic level accounts for the pressure exerted by the gas on the macroscopic level. The gas can be easily compressed because there is plenty of open space between the atoms. Reducing the volume merely reduces that empty space. The liquid and the solid are not nearly so easy to compress because there is little or no empty space between the atoms.
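As a quick check of the density figures quoted above, the ratio of the condensed-phase density to the vapor density can be computed directly (a sketch in Python; the numbers are taken from the text itself):

```python
# Densities of mercury quoted in the text, in g/cm^3.
solid_density = 14.1
liquid_density = 13.6
vapor_density = 3.6e-3   # at 400 °C and ordinary pressures

# The vapor is indeed roughly one four-thousandth as dense as the
# condensed phases, consistent with an atomic picture in which the
# gas atoms are separated by many atomic diameters.
ratio = solid_density / vapor_density
print(round(ratio))  # roughly 3900
```

This agrees with the "about one four-thousandth" statement in the passage.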
You may have noticed that although our sub-microscopic model can explain many of the properties of solid, liquid, and gaseous mercury, it cannot explain all of them. Mercury’s silvery color and why the vapor is poisonous remain a mystery, for example. There are two approaches to such a situation. We might discard the idea of atoms in favor of a different theory that can explain more macroscopic properties. On the other hand it may be reasonable to extend the atomic theory so that it can account for more facts. The second approach has been followed by chemists. In the current section on Atoms, Molecules and Chemical Reactions as well as Using Chemical Equations in Calculations we shall discuss in more detail those facts that require only a simple atomic theory for their interpretation. Many of the subsequent sections will describe extensions of the atomic theory that allow interpretations of far more observations.
Researchers in Spain and Norway reported in the journal Nature that they had found tree-like growth rings on the bones of mammals, a characteristic that until now was thought to be limited to cold-blooded creatures and dinosaurs.
They also found evidence that dinosaurs probably had a high metabolic rate to allow fast growth, another indicator of warm-bloodedness.
"Our results strongly suggest that dinosaurs were warm-blooded," lead author Meike Koehler of Spain's Institut Catala de Paleontologia told AFP.
If so, the findings should prompt a rethink about reptiles, she said.
Modern-day reptiles are cold-blooded, meaning they cannot control their body temperatures through their own metabolism, relying instead on outside means such as basking in the sun.
While the dinosaurs may have been warm-blooded, their other characteristics kept them firmly in the reptile camp, said Koehler.
Paleontologists have long noted the ring-like markings on the bones of cold-blooded creatures and dinosaurs, and taken them to indicate pauses in growth, perhaps due to cold periods or lack of food.
The bones of warm-blooded animals such as birds and mammals had never been properly assessed to see if they, too, display the lines.
Koehler and her team found the rings in all 41 warm-blooded animal species they studied, including antelopes, deer and giraffes.
The finding "eliminates the strongest argument that existed for cold-bloodedness" in dinosaurs, she said.
The team's analysis of bone tissue also showed that the fast growth rate of mammals is linked to a high metabolism, which in turn is characteristic of warm-bloodedness.
"If you compare this tissue with dinosaur tissue you will see that they are identical," said Koehler.
"So this means that dinosaurs not only grew very fast but this growth was sustained by a very high metabolic rate, indicating warm-bloodedness."
A comment by University of California palaeontologist Kevin Padian published alongside the paper said the study was the latest to chip away at the long-held theory that dinosaurs were cold-blooded.
"It seems that these were anything but typical reptiles, and Koehler and colleagues' findings remove another false association from this picture."
man pages section 3: Basic Library Functions, Oracle Solaris 11 Information Library
fgetpos - get current file position information
#include <stdio.h>
int fgetpos(FILE *stream, fpos_t *pos);
The fgetpos() function stores the current value of the file position indicator for the stream pointed to by stream in the object pointed to by pos. The value stored contains unspecified information usable by fsetpos(3C) for repositioning the stream to its position at the time of the call to fgetpos().
Upon successful completion, fgetpos() returns 0. Otherwise, it returns a non-zero value and sets errno to indicate the error.
The fgetpos() function may fail if:
EBADF - The file descriptor underlying stream is not valid.
ESPIPE - The file descriptor underlying stream is associated with a pipe, a FIFO, or a socket.
EOVERFLOW - The current value of the file position cannot be represented correctly in an object of type fpos_t.
The fgetpos() function has a transitional interface for 64-bit file offsets. See lf64(5).
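An EXAMPLES-style sketch (not part of the original page): saving a position with fgetpos() and later returning to it with fsetpos(3C). The helper name reread_first() is an illustrative choice, not part of the documented interface.

```c
#include <stdio.h>

/* Save the current position, read one character, then seek back to the
 * saved position and read again. Returns the re-read character, or EOF
 * if any call fails. Illustrative helper, not part of the interface. */
int reread_first(FILE *stream)
{
    fpos_t pos;
    int first, again;

    if (fgetpos(stream, &pos) != 0)   /* remember the current position */
        return EOF;
    first = getc(stream);             /* advance the stream by one char */
    if (first == EOF)
        return EOF;
    if (fsetpos(stream, &pos) != 0)   /* restore the saved position */
        return EOF;
    again = getc(stream);             /* re-reads the same character */
    return (again == first) ? again : EOF;
}
```

Called on a seekable stream positioned at the start of "abc", this returns 'a'; on a pipe, FIFO, or socket the fgetpos() call may fail with ESPIPE, as described under ERRORS.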
See attributes(5) for descriptions of the following attributes:
The power operator binds more tightly than unary operators on its left; it binds less tightly than unary operators on its right. The syntax is:
power ::= primary ["**" u_expr]
Thus, in an unparenthesized sequence of power and unary operators, the operators are evaluated from right to left (this does not constrain the evaluation order for the operands).
The power operator has the same semantics as the built-in pow() function, when called with two arguments: it yields its left argument raised to the power of its right argument. The numeric arguments are first converted to a common type. The result type is that of the arguments after coercion.
With mixed operand types, the coercion rules for binary arithmetic operators apply. For int and long int operands, the result has the same type as the operands (after coercion) unless the second argument is negative; in that case, all arguments are converted to float and a float result is delivered. For example, 10**2 returns 100, but 10**-2 returns 0.01. (This last feature was added in Python 2.2. In Python 2.1 and before, if both arguments were of integer types and the second argument was negative, an exception was raised.)
Raising 0.0 to a negative power results in a ZeroDivisionError. Raising a negative number to a fractional power results in a ValueError.
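A few interpreter-checkable illustrations of the rules above (a sketch; the negative-base fractional-power case is omitted here because its behavior changed in later Python versions, which return a complex number instead of raising ValueError):

```python
# The power operator and its interaction with unary operators.
assert 2 ** 3 == 8              # plain integer power
assert 2 ** 3 ** 2 == 512       # right-to-left: 2 ** (3 ** 2)
assert -2 ** 2 == -4            # ** binds tighter than unary minus on its left
assert 2 ** -1 == 0.5           # ...but less tightly on its right; float result
assert 2 ** 3 == pow(2, 3)      # same semantics as built-in pow()

# 0.0 raised to a negative power raises ZeroDivisionError.
try:
    0.0 ** -1
    raised = False
except ZeroDivisionError:
    raised = True
assert raised
```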
TRectangle = class(TShape)
class PASCALIMPLEMENTATION TRectangle : public TShape
TRectangle defines 2D rectangles with customized corners. It inherits TControl and can be used in styles to construct controls.
The rectangle size and position are defined by the following properties of the TRectangle object:
- The shape rectangle ShapeRect defines the initial size and position of the rectangle.
- You can use the scaling factors of the TRectangle object to proportionally scale rectangle coordinates along the local coordinate axes. Scaling moves the rectangle and changes its size.
Note: Scaling not only scales the shape of an object proportionally to the scaling factors, but also changes the StrokeThickness of the contour proportionally to the scaling factor for each axis.
- You can use the rotation axis RotationCenter and rotation angle RotationAngle of the TRectangle object to rotate and move the rectangle.
- The Corners, CornerType, XRadius, and YRadius properties customize the shape of the rectangle corners.
TRectangle draws the contour and fills the background with the Paint method.
Paint draws the contour and fills the background using the drawing pen and brush with the properties, color, and opacity defined by the Stroke, StrokeThickness, StrokeCap, StrokeDash, StrokeJoin, and Fill properties of the TRectangle object.
Scientists estimate that critically endangered North Atlantic right whales have lost 63 to 67 percent of their communication space in the Stellwagen Bank National Marine Sanctuary due to noise created by passing ships. The paper was published on August 14, 2012 in an early online edition of the journal Conservation Biology.
The North Atlantic right whale is one of the rarest marine mammals. Only 350 to 400 whales are believed to currently exist. They are spectacularly large and can grow up to 45 to 55 feet (13.7 to 16.7 meters) in length and weigh up to 70 tons (63,500 kilograms). They feed on plankton, particularly copepods, by straining seawater through baleen plates located in their mouths.
North Atlantic right whales face many threats including those from shipping strikes, entanglement in fishing gear and habitat degradation caused by chemical and noise pollution. While much effort has been put forth to reduce whale deaths caused by shipping strikes and gear entanglements, little is known about how noise pollution may negatively impact these whales.
To study how noise impacts North Atlantic right whales, a group of scientists placed several acoustic recording devices in the Stellwagen Bank National Marine Sanctuary – a critical feeding ground for right whales – and compared the noise from ships today to historically lower noise levels that existed nearly a half century ago. They estimate that right whales have lost 63 to 67 percent of their communication space in the sanctuary and surrounding waters. The louder ambient noise created by today’s busy world appears to be reducing the ability of the whales to detect calls from other whales.
Scientists call this phenomenon communication masking.
You can listen to what communication masking sounds like on the Cornell University Whale Listening Project’s website. First, listen to a typical right whale call at the top of the webpage. Then, try to see if you can hear the whale call within the recording from a noisy shipping lane. I could not. However, I did listen to a lot of loud music as a teenager.
Leila Hatch, a marine ecologist at the Stellwagen Bank National Marine Sanctuary and lead author of the paper, commented on the findings in a press release. She said:
A good analogy would be a visually impaired person, who relies on hearing to move safely within their community, which is located near a noisy airport. Large whales, such as right whales, rely on their ability to hear far more than their ability to see. Chronic noise is likely reducing their opportunities to gather and share vital information that helps them find food and mates, navigate, avoid predators and take care of their young.
Christopher Clark, director of Cornell’s bioacoustics program and co-author of the work also commented on the findings. He said:
We had already shown that the noise from an individual ship could make it nearly impossible for a right whale to be heard by other whales. What we’ve shown here is that in today’s ocean off Boston, compared to 40 or 50 years ago, the cumulative noise from all of the shipping traffic is making it difficult for all the right whales in the area to hear each other most of the time, not just once in a while. Basically, the whales off Boston now find themselves living in a world full of our acoustic smog.
It remains to be seen just how the underwater noise and decreased communication space may affect the right whales' ability to survive and reproduce.
A previous study published on February 8, 2012 in Proceedings of the Royal Society found that decreased shipping noise in the Bay of Fundy, Canada after the events of September 11, 2001 was associated with a decrease in the right whales' stress hormone levels. Long-term elevation of stress hormones is known to have negative health effects in many different species.
Holly Bamford, deputy assistant administrator of NOAA’s National Ocean Service also commented on the August 14, 2012 study in the press release. She said:
We are starting to quantify the implication of chronic, human-created ocean noise for marine animals. Now, we need to ask how we can adapt our management tools to better address these problems.
Bottom line: Scientists estimate that critically endangered North Atlantic right whales have lost 63 to 67 percent of their communication space in the Stellwagen Bank National Marine Sanctuary due to noise created by passing ships. The paper was published on August 14, 2012 in an early online edition of the journal Conservation Biology.
The aerosphere supports a range of animal life both at the Earth’s surface and in the air. While monitoring the movements and activities of terrestrial animals can be demanding, observations of volant organisms are even more challenging because they require novel technologies. Here we focus on the analysis of animal movements using radar. It has long been known that radio waves scattered from flying organisms (bioscatter) can be detected and processed using radar. Depending on the particular design, radar can be used to track individuals, observe the movements of organisms over a variety of spatial and temporal scales, and to some extent discriminate between and identify different taxa. These capabilities are being further enhanced through continuing innovations in radar hardware and signal processing technologies. Moreover, thousands of radar installations are located around the world, with many of these already integrated into cohesive networks. For example, NEXRAD (Next-Generation Radar) operates continuously and provides near complete spatial coverage across the continental U.S. in near real time. In this presentation we explore the fundamental question: To what extent can radar observations be used to investigate questions about the ecology, abundance, and airborne movement of animals over large spatial and temporal domains, and promote the transdisciplinary field of aeroecology?
Although designed to collect meteorological data, weather radars such as the WSR-88D also regularly detect bioscatter. We provide an overview of existing and developing radar technology within the framework of “radar aeroecology” and outline an approach to generate meaningful biological products. Our investigations have shown that observations from existing radar networks provide a viable means of observing and studying flying animals. Additionally, when coupled with measurements of the scattering properties of individual animals, we are using radar data to estimate numbers of birds and bats and the population sizes of roosting colonies. Results of this collaborative research are benefitting biologists and atmospheric scientists and creating crossover research opportunities. Our findings are timely given the importance of using this technology for understanding factors that affect movements of animals in the aerosphere relative to regional and global climatic variability. Together with complementary weather observations, NEXRAD data provide unprecedented opportunities to observe birds, bats, and insects in the aerosphere on both local and large scales.
IB Environmental Studies/Ecosystem
Topic 2: The Ecosystem
Ecosystems are the biotic and abiotic factors in a specified area that interact with one another.
biotic factor - A living, biological factor that may influence an organism or ecosystem, such as producers (plants) and consumers (animals).
abiotic factors – A non-living or physical factor that may influence an organism or ecosystem. For example: precipitation, wind, sunlight, soil, temperature, pH, and salinity.
- Understanding the interaction of the biotic and abiotic factors in an ecosystem can help us to see why particular human activities may be a problem for human survival.
- Example: The loss of ozone in the stratosphere increases the quantity of UV radiation on the surface of the planet. In the same way that humans experience sunburn from too much sun exposure, so do plants. Excessive UV may damage or destroy plant protein and DNA, killing the plant.
Trophic level - The position that an organism occupies in a food chain, or a group of organisms in a community that occupy the same position in food chains; essentially, a feeding level.
Producers/Autotrophs - Organisms that make their own food, usually plants through photosynthesis.
Primary consumers - Organisms that consume producers.
Secondary consumers - Organisms that consume primary consumers.
Herbivores - Organisms that eat plants, but no meat.
Carnivores - Organisms that eat meat.
Omnivores - Organisms that eat both plants and meat.
Heterotrophs - Organisms that cannot make their own food and must obtain energy by consuming other organisms.
Decomposers - Organisms that obtain energy by breaking down dead organic matter.
species - A group of organisms that interbreed and produce fertile offspring.
population - A group of organisms of the same species living in the same area at the same time, and which are capable of interbreeding.
community - A group of populations living and interacting with each other in a common habitat.
ecosystem - A community of interdependent organisms and the physical environment they inhabit.
habitat - The environment in which a species normally lives.
niche - A species' share of a habitat and the resources in it. An organism's ecological niche depends not only on where it lives but on the role it plays in the ecosystem.
fundamental niche - The part of the habitat in which a species can live in the absence of competitors and predators.
realized niche - The part of the habitat that the organism actually occupies.
biome – large, relatively distinct terrestrial region characterized by similar climate (temperature and precipitation), soil, and organisms.
competition – [–, –] Two species (interspecies competition) or two populations of the same species (intraspecies competition) compete for the same resources. Both sides are harmed.
symbiosis – biological interaction where two different species are in direct contact with each other.
commensalism – [0, +] One species benefits, the other is unaffected.
amensalism – [0, –] One species is harmed, the other is unaffected.
mutualism – [+, +] Both species benefit.
predation – [+, –] One species benefits, the other is harmed. The prey is usually killed quickly.
parasitism – [+, –] One species benefits, the other is harmed. The host is killed slowly, if at all.
photosynthesis – 6CO2 + 6H2O + light energy ⇒ C6H12O6 + 6O2 + heat – Carbon Dioxide, Water and Sunlight go in, Glucose, Oxygen and Heat are produced.
respiration – C6H12O6 + 6O2 ⇒ 6CO2 + 6H2O + released energy (heat) – Glucose and Oxygen go in, Carbon Dioxide and Water and Heat are produced.
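Because the respiration equation is just the photosynthesis equation reversed, checking one of them for atom balance checks both. A few lines of Python that count C, H, and O on each side:

```python
# Count atoms on each side of the photosynthesis equation:
#   6CO2 + 6H2O -> C6H12O6 + 6O2
reactants = {"C": 6 * 1, "H": 6 * 2, "O": 6 * 2 + 6 * 1}  # 6 CO2 + 6 H2O
products = {"C": 6, "H": 12, "O": 6 + 6 * 2}              # C6H12O6 + 6 O2
assert reactants == products  # balanced: 6 C, 12 H, 18 O on each side
print(reactants)
```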
Reproductive Strategies: r and K
r strategists - Short-lived - Large broods - Reproduce early in life - Little to no care for young - Relatively small
K strategists - Long-lived - Few offspring per reproductive period - Reproduce later in life - Nurture young - Relatively large
succession – a change over time in the types of species that occupy a given area.
primary succession – ecological succession in an environment that has not been previously inhabited (no soil is present). pioneer communities – the first organisms to colonize (or recolonize) an area. secondary succession – ecological succession in an environment that was exposed to some type of disturbance (soil is already present).
sere – a sequence of communities over ecological time. Each stage of succession is called a seral stage.
lithosere – succession on bare rock
hydrosere – succession in freshwater lakes
psammosere – succession on sand dunes
halosere – succession on salt marshes
climax community – species composition no longer changes over time; succession stops. Community retains an overall uniform appearance.
June 14, 2001
GSA Release No. 01-18
Scientists Share Multidisciplinary Discoveries at "Earth System Processes"
(II) Session Highlights
- SESSION 23: Tuesday, June 26.
ARCHEAN EARTH AND CONTEMPORARY LIFE: THE TRANSITION FROM AN ANAEROBIC TO AN
AEROBIC MARINE ECOSYSTEM
[ View Session 30 Abstracts ]
- Can you imagine our world being devoid of oxygen for almost half of its history?
And suddenly there's an abrupt rise at ~2 Ga? This amazing picture of our
Earth is finding increasing support from geologic data in the rock record.
- "But what intrigues physical and biological scientists alike is 'why the rise
and why at ~2 Ga'?" remarked Janet Siefert, co-chair for this session. Siefert
is a molecular evolutionist at Rice University in Houston, Texas.
- "We know with increasing certainty from geologic biomarkers, fossils, and
molecular phylogenies that the organisms most likely responsible for this rise,
the ubiquitous cyanobacteria or their ancestors, were surely present at least
half a billion years prior to the rise. So why the lag?"
- The rise of oxygen is a great science question. It is unique because there
are as many theories that postulate the event as dominated by physical processes
as there are ones that predict it was a biologically mediated phenomenon. This
session will bring oceanographers, genome analysts, and geoscientists with their
competing theories, together in one venue with the hopes that a more substantive
and accurate picture of what may have actually occurred can be brought to the fore.
- The newest discoveries are coming from papers that are multidisciplinary.
- David Catling, from the Space Science Division of the NASA Ames Research
Center and the SETI Institute, will build on the idea that the Archean atmosphere
was dominated by methane as the primary greenhouse gas, but expands it to include
an explanation for the oxidation of Earth. He postulates that the kinetic effects
of oxygen and methane are reversed from today; in effect hydrogen escape to space
was inescapable and oxygenation of the atmosphere was irreversible.
- Christian J. Bjerrum, Danish Center for Earth System Science at the
University of Copenhagen, will postulate that due to the limited availability
of phosphorus, a principal nutrient limiting carbon production, photosynthesis
was depressed in cyanobacteria during the Archean and early Proterozoic.
- Christopher House, a microbial geobiologist from Pennsylvania State
University, will provide further evidence for a newly discovered archaeal
lineage that consumes methane anaerobically. He gathered this evidence using FISH
(fluorescence in situ hybridization) and ion microprobe techniques to measure carbon isotope depletion.
- Martin Brasier, Earth Sciences at the University of Oxford, will provide
a new look at whether or not morphological remains of fossils thought to be photosynthetic
are as old as previous estimates have stipulated. This is important as it is some
of the most quoted evidence for the antiquity of oxygenic photosynthesis.
- Janet Siefert will present data that uses some of the most basic molecules
for biochemistry, iron-sulfur clusters, to determine the sequence of metabolic
events that may have occurred in the Archean timeframe.
- George Fox, Department of Biology and Biochemistry at the University
of Houston, will give a genomic perspective of what the Archean contemporary cyanobacteria
must have contained as its genomic component. This is an important piece of molecular
evidence that can be compared to the morphological fossil record and the proposed
atmospheric conditions prior to the rise of oxygen.
- SESSION 24: Tuesday, June 26.
CONTROLS ON PHANEROZOIC DIVERSIFICATIONS AND EXTINCTIONS: LONG-TERM INTERACTIONS
BETWEEN THE PHYSICAL AND BIOTIC REALMS
- In this session, scientists from the USA and the UK will explore what controls
the long-term patterns of origination and extinction that give shape to the history
of life and how life itself has participated in that process. This connects directly
to contemporary concerns with biodiversity issues and questions about the effects
of climate change. For example: What natural processes have caused past climate
shifts and mass extinction episodes?
- By examining the records of climate changes over the past 600 million years,
what caused the changes, and what effects they had on ancient organisms, these
scientists are discovering what implications this information has for understanding present-day environmental change.
- The most important general themes of the papers in this session are: (1) the
adoption of a systems approach to the understanding of Earth's history (major
events being caused by multiple, independent factors) and (2) the significance
of 'feedback mechanisms.'
- Some of the highlights are as follows:
- Session co-chair Norman MacLeod will begin by taking a look at
identifying the controls on Phanerozoic extinction and diversification patterns. (MacLeod
is the Associate Keeper in the Department of Palaeontology at The Natural History
Museum in London.) Patterns of biodiversification and extinction over the last
250 million years show evidence of having been controlled by multiple factors,
especially the interplay between tectonic processes (e.g., volcanism, sea-level
change), and the evolutionary history of primary producer lineages (e.g., phytoplankton
and land plants).
- Doug Erwin, from the Department of Paleobiology at the Smithsonian
Institution, will explain how the biodiversity increase that characterizes the
biotic recovery from a mass extinction event is structured by ecological factors
to a larger extent than previously thought.
- Paul Wignall, Earth Sciences Department at the University of Leeds,
has discovered that the relationship between mass extinction events and large
volcanic eruptions is complex and likely involves climate forcing factors other
than the eruption itself, including the presence of life forms (e.g., phytoplankton)
that collectively possess the ability to buffer the global climate from short-term perturbations.
- Geerat Vermeij, from the Department of Geology at the University of
California, Davis, will consider how patterns of ecological feedback between herbivores
and carnivores that result in a progressive intensification of nutrient recycling
in the oceans have been a dominant theme in the history of life and have exerted
an important 'top-down' (as opposed to bottom-up) evolutionary and ecological influence.
- SESSION 28: Tuesday, June 26.
ANTHROPOGENIC MODIFICATIONS TO THE EARTH SYSTEM
[ View Poster Session 27 Abstracts ]
- The human presence on Earth has greatly impacted the environment for better
or worse. Geoscientists in this session will consider a wide variety of evidence
for the nature, magnitude, and implications of human impact on our planet for
the past, present, and future.
- Fred T. Mackenzie, Professor of Sedimentary and Global Geochemistry
from the Department of Oceanography at the University of Hawaii, will begin the
session by looking at how human activities influenced the biogeochemical cycles
of carbon, phosphorus, and nitrogen in the surface of the Earth since 1840. This
will be quite interesting for those who would like to know what the unperturbed
Earth system was like, and who also want to understand the fate of these elements
through projections to the year 2040.
- Timothy M. Lenton, Centre for Ecology and Hydrology at the Edinburgh
Research Station, will speak about positive feedbacks in the global carbon cycle
that may make it more difficult for the oceans and atmosphere to absorb anthropogenic
CO2 in the future.
- Gary Hughes, from Raytheon Santa Barbara Remote Sensing in California,
will talk about a direct, empirical correlation between atmospheric CO2
and land temperature measurements that indicates a strong greenhouse warming effect:
5 degrees Celsius for CO2 doubling.
- Berry Lyons, Department of Geological Sciences and Byrd Polar Research
Center at the Ohio State University, will take a look at how urbanization--specifically
in Atlanta, Georgia and Columbus, Ohio--affects water quality in local rivers.
He will note the similarity of increased concentrations of elements in the Chattahoochee
River downstream from Atlanta, Georgia, with those observed for the Seine River
as it passes through metro Paris. (Yet the Scioto River in the Columbus, Ohio,
metro area, shows little urban influence.) The urban "footprint" in various areas
in the United States and Europe has made quite an impact on the water quality
of rivers downstream from major urban centers.
- SESSION 36: Wednesday, June 27.
ROLE OF HYDROTHERMAL SYSTEMS IN BIOSPHERIC EVOLUTION
[ View Poster Session 40 Abstracts ]
- Hydrothermal environments are unique because they offer a potentially widespread
habitat for life both on the early Earth, and elsewhere in the Solar System. The
study of hydrothermal systems provides a doorway into the early history of life
and allows us to better understand the origins of our biosphere, its history,
and place in the universe. This session will provide an updated overview of the
field and will serve as a forum to report its cutting-edge science.
- "I think one of the most intriguing things about hydrothermal systems is that
they teem with life at temperatures far exceeding what humans would consider viable,"
said session co-chair Jack Farmer. Farmer is an astrobiologist and geologist
at Arizona State University.
- "They also harbor many unusual organisms and unique metabolic strategies.
Some of the organisms found living at the highest temperatures appear to be very
primitive forms, close to a last common ancestor of life. These forms are able
to exist on inorganic by-products derived from the aqueous weathering of rocks."
Not requiring sunlight for their metabolism, nor organic inputs from other organisms,
these "chemoautotrophic" (chemically-based) forms provide models for the kinds
of organisms that could exist in subsurface environments of Mars, Europa, and other Solar System bodies.
- Hydrothermal systems are also very interesting places to explore for novel
organisms and their metabolic processes of interest to biotechnology. "The famous
example of PCR (polymerase chain reaction), a genetic process whereby even tiny
sequences of a genome can be cloned and amplified, was discovered in a Yellowstone
hot spring," Farmer explained. "This discovery revolutionized molecular biology
and spawned a multi-billion dollar industry."
- Franco Pirajno (Geological Survey of Western Australia) will present
an overview on the nature of hydrothermal environments and Anna Louise Reysenbach
(Portland State University) will provide an overview of their microbiology.
- Mike Russell, from the Isotope Geoscience Unit at the Scottish Universities
Environmental Research Centre, also co-chairs the session and will present a talk
on hydrothermal systems as a potential cradle for early prebiotic chemistry and the emergence of life.
- Sherry Cady, Geology Department at Portland State University, will
review what we have learned about life near its upper temperature limit.
- Tullis Onstott, from the Department of Geosciences at Princeton, will
discuss the exploration for a deep, hot microbial biosphere in deep gold mines
in South Africa.
- Malcolm Walter, Australian Centre for Astrobiology at Macquarie University,
will talk about a variety of fossil biosignatures in billion-year-old-plus deep-sea
vent deposits in Australia.
- Beda Hofmann, from the Natural History Museum in Bern, Switzerland,
will review a newly-discovered fossil record of deep subsurface life found in
- These subsurface talks hold special importance in opening up new opportunities
to explore for life on the early Earth and elsewhere in the Solar System.
- Finally, Martin Van Kranendonk, Geological Survey of Western Australia,
will describe hydrothermal environments associated with the oldest previously-reported
cellular fossils (~3.5 billion years) from Australia.
- In the poster session, Meredith Payne (Arizona State) will present
a review of the prospects for hydrothermal life on Mars, focusing special attention
on polar regions where volcanoes appear to have recently interacted with and melted
polar ice, sustaining recent liquid water habitats near the surface. The Mars
sites she will review are considered important potential candidates for future
landed missions to Mars.
Other noteworthy sessions include:
- Session 8: Monday, June 25.
FRAGILE AND HAZARDOUS ENVIRONMENTS
[ View Abstracts ]
[ View Poster Session 7 Abstracts ]
- Session 10: Monday, June 25.
THE ROLE OF NATURAL GAS HYDRATES IN THE EVOLUTION OF PLANETARY BODIES AND LIFE
[ View Abstracts ]
[ View Poster Session 11 Abstracts ]
- Session 42: Wednesday, June 27.
FLUID SEEPS AND SHALLOW MIGRATION PHENOMENA AT CONTINENTAL MARGINS: IMPACTS SPANNING THE LITHOSPHERE, BIOSPHERE, AND HYDROSPHERE
[ View Abstracts ]
[ View Poster Session 43 Abstracts ] | <urn:uuid:392ec9ae-6956-4e6e-b56a-156c46fed533> | 2.84375 | 2,816 | News (Org.) | Science & Tech. | 16.67189 | 1,136 |
A Geometric Proof
See Also: Problem Solving with Heron's Formula
1. The incircle and its properties.
2. An excircle and its properties.
3. The area of the triangle is rs, where r is the inradius and s is the semiperimeter.
4. The points of tangency of a circle inscribed in an angle are equidistant from the vertex.
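Item 3 can be verified numerically. For the 3-4-5 right triangle (an arbitrary choice), the inradius can be computed independently as (a + b − c)/2 = 1, and r·s then agrees with Heron's formula:

```python
import math

def heron_area(a, b, c):
    """Heron's formula: area from the three side lengths."""
    s = (a + b + c) / 2          # semiperimeter
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

# 3-4-5 right triangle: for a right triangle the inradius is (a + b - c)/2.
a, b, c = 3.0, 4.0, 5.0
s = (a + b + c) / 2              # s = 6
r = (a + b - c) / 2              # r = 1, computed independently of the area
print(r * s, heron_area(a, b, c))  # 6.0 6.0 -- item 3: area = r*s
assert math.isclose(r * s, heron_area(a, b, c))
```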
Return to the EMAT 4400/6600 Page | <urn:uuid:cd57d55d-6149-46a7-8969-8de8714ca8d7> | 3.3125 | 97 | Tutorial | Science & Tech. | 74.45875 | 1,137 |
This web site is about Cascading Style Sheets as a means to separate the presentation from the
structural markup of a web site. It will help you answer some of those frequently asked Questions,
explain some of the Basics of CSS, give you tips and tricks for tackling
the problems with Netscape 4, and offer you a tutorial about
Positioning with CSS (CSS-P, web design
without tables). There is also a page with interesting Links.
Cascading Style Sheets is a means to separate the presentation from the structural markup of a web site. By applying a CSS style you have the ability to keep the structure of your document lean and fast, while controlling the appearance of the content.
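To make this separation concrete, here is a minimal sketch of a stylesheet (the selectors and values are invented for illustration):

```css
/* style.css -- presentation lives here, not in the markup */
body     { font-family: Verdana, sans-serif; color: #333; }
h1       { font-size: 1.5em; border-bottom: 1px solid #999; }
.warning { color: #c00; font-weight: bold; }
```

With the presentation collected in one file like this, the HTML keeps only structural tags such as h1 and p, and a site-wide redesign means editing the stylesheet rather than every page.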
HTML was intended as the structural markup language. This language focuses on the roles that the different elements of a document have to play, not how they have to look. CSS has been invented and developed for the Internet. It is not an adapted tool from print or programming, but a means of enhancing HTML.
Since CSS takes care of the presentation, the structure of the document can be static HTML, and the content either contained in the HTML itself, or generated by ASP, ColdFusion, XML and/or other technologies that are being hatched now and we haven't heard about yet.
As an example, if you check out this site http://www.onlinepoker.net, you can see how CSS is used to create a seamless website without the use of tables. The CSS sheet is used to construct all of the principal sections, including navigation, header, content area, and footer. You can also combine the standard CSS design with tables for promotions, as seen here http://www.pokersite.org. Both examples use cascading style sheets for the general layout; the second example uses tables specifically for displaying ranking charts, and both poker sites are W3C compliant.
CSS is here to stay. It is a fascinating, elegant technology that can make sites faster, less complicated, easy to change, better adaptable to the need of emerging technologies - and more disabled accessible. And for being that powerful, it is surprisingly easy to learn.
Does your company have a web department that needs some CSS training? Do you want to outsource the development of the stylesheets for your web site/s? For more information about CSS Seminars or stylesheet development feel free to contact me at . | <urn:uuid:79f8bfea-782b-4dc8-b390-0c501a314f16> | 2.5625 | 557 | About (Org.) | Software Dev. | 53.443768 | 1,138 |
See also the
Dr. Math FAQ:
Browse High School Sequences, Series
Stars indicate particularly interesting answers or
good places to begin browsing.
Selected answers to common questions:
Strategies for finding sequences.
- Divergent Infinite Series [05/30/2003]
I bought a book, _The Mathematical Universe_ by William Dunham, with a
chapter on Euler's infinite series. A proof he outlined
that I could not follow is this...
- Dividing a Circle using Six Lines [08/29/2001]
What is the largest number of regions into which you can divide a circle
using six lines?
- Does the Series cos(n)/n^(3/4) Converge or Diverge? [11/10/2009]
Doctor Jordan invokes the Euler equation to bound a doozy of a series.
- Do I Use n Or n-1 to Find the nth Term in a Geometric Sequence? [12/01/2009]
A look at how the formula for the nth term in a geometric sequence,
a*r^(n-1), sometimes needs to be a*r^n to fit the problem context.
- Doubling Pennies [11/26/1996]
If I start with a penny and double it daily for 30 days, how many pennies
do I have at the end?
- e as a Series and a Limit [03/30/1998]
Why does e = 1 + 1/1! + 1/2! + 1/3! + ... and e = lim (1 + 1/n)^n as n -> infinity?
- Equation of a Sequence with Constant Third Differences [05/26/1998]
Using the method of difference or the Gregory-Newton formula.
- Euler's summation of 1/n^2 [03/15/2000]
Prove that pi^2/6 = the summation of 1/n^2 from 1 to infinity.
- Evaluating Indefinite Sums [12/07/2003]
How can I evaluate the sum of the terms 1/(3n+1)(4n+2), where n ranges
from -infinity to +infinity?
- Evaluating the Series n^2/2^n a Differential Way [11/05/2010]
A student knows that the series n^2/2^n converges as n goes from zero to infinity.
Doctor Ali offers one approach for determining its sum, based on differentiating the
geometric series and its closed form solution.
- Expansion of (x+y)^(1/2) [06/07/1999]
Is there a way to expand (x+y)^(1/2)? If so, how is it derived?
- Expected Tosses for Consecutive Heads with a Fair Coin [06/29/2004]
What is the expected number of times a person must toss a fair coin to
get 2 consecutive heads?
- Exponential Generating Function [05/06/2000]
How can I prove that the exponential generating function of the series 1,
1*3, 1*3*5, 1*3*5*7, ... is 1/sqrt(1-2*x)?
- Exponential Series Proof [05/05/2001]
Given e^x greater than or equal to 1 + x for all real values of x, and
that (1+1)(1+(1/2))(1+(1/3))...(1+(1/n)) = n+1, prove that e^(1+(1/2)+
(1/3)+...+(1/n)) is greater than n. Also, find a value of n for which
1+(1/2)+(1/3)+...+(1/n) is greater than 100.
- Factors and Multiples - Hamiltonian Path [11/02/1998]
We have to make a sequence of numbers, all different, each of which is a
factor or a multiple of the one preceding it.
- Feeding Chickens - Arithmetical Progression [7/6/1996]
A farmer has 3000 hens. Each week he sells 20... what is the total cost
of feeding the hens...?
- A Fibonacci Proof by Induction [06/05/1998]
Let u_1, u_2, ... be the Fibonacci sequence. Prove by induction...
- Fibonacci Sequence - An Example [05/12/1999]
Glass plates and reflections.
- Figurate and Polygonal Numbers [11/21/1998]
I need to know everything about figurate numbers.
- Finding a Formula for a Number Pattern [09/30/2004]
We are learning about sequences and how to find the patterns in
numbers. Our teacher gave us the sequence 0, 3, 8, 15, 24, 35 and
told us that we had to use factoring to find the answer. I know the
answer is (n + 1)(n - 1), but I can't see how to get that.
- Finding a Function to Generate a Particular Output [09/21/2004]
Dr. Vogler presents several possible functions f(n) that will generate
the output 0,0,1,1,0,0,1,1,0,0... for n = 1 to infinity.
- Finding an Explicit Formula for a Recursive Series [05/17/2000]
How far will a man end up from his home if he walks a mile west, then
walks east one half that distance, then walks west half of the distance
he has just walked, and so on?
- Finding a Non-Recursive Formula [06/10/1999]
How can I find a non-recursive formula for the recurrence relation
s_n = -s_(n-1) - n^2 with the initial condition s_0 = 3?
- Finding an Unknown Sequence [3/31/1996]
I can't figure out where to start with this Series and Sequences
question: 1 + 3x + 6x^2 + 10x^3 + 15x^4 + ...
- Finding a Series Given the Sum [09/27/1999]
How can I find all series of consecutive integers whose sum is a given number?
- Finding a Term of an Arithmetic Series [12/13/1995]
The fifth term of an arithmetic series is 16 and the sum of the first 10
terms is 145. Write the first three terms.
- Finding Catalan Numbers [12/15/1999]
What are Catalan numbers and what applications do we have for them?
- Finding Common Numbers in Two Sequences [09/21/2006]
I'm working with sequences that start with an initial value and an
initial amount to add to get the next term. The amount added then
increases by 2 as you move from term to term. If I have two such
sequences, is there a way to calculate what numbers they will have in
common based on the two initial values and amounts to add?
- Finding Number Patterns [05/29/1999]
I am trying to find the pattern of the numbers
- Finding Rules for Number Patterns [06/05/2009]
I'm having trouble finding an algebraic expression that generates the
pattern 3, 5, 8, 12, 17, 23, 30. Can you help?
- Finding Sums of Sines and Series [03/10/2004]
I am trying to find the sum of sin1 + sin2 + sin3 + ... + sin90. I'm
also trying to find the sum of 1^n + 2^n + 3^n + 4^n + ... + n^n. Can
you help me?
- Finding the 1000th Term in a Sequence [1/19/1996]
Two kids on a car trip decide to count telephone poles. One kid counts
normally, 1,2,3,4,5...25,26,27...31,32,33, etc. The other kid counts them
a different way: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 9, 8, 7, 6, 5, 4, 3, 2, ...
- Finding the Digit of a Decimal Expansion [11/14/1998]
What digit will appear in the 534th place after the decimal point in the
decimal representation of 5/13?
- Finding the Missing Numbers in a Sequence [11/30/1995]
Fill in the blanks for this series of numbers based on its underlying
pattern: 3, 4, 6, 8, 12, (), 18, 20, (), 30, 32
- Finding the Next Number in a Sequence Given Its Geometric Mean ... Which Is a Square Root [09/24/2009]
A student who knows how to calculate geometric means gets rattled when
trying to determine a sequence from its square root geometric mean.
- Finding the Next Number in a Series [07/22/2002]
Are there any formal or systematic methods for solving problems that
ask you to find the next number in a series?
- Finding the Rule for a Given Sequence [09/11/2008]
If the first six terms of a sequence are -4, 0, 6, 14, 24, 36, what is
the rule? Find the 20th and 200th terms. This answer discusses finite
differences and other handy techniques for solving this sort of problem.
- Finding the Sum of an Infinite Series [03/05/2006]
Find the sum of the series 1 + 1/2 + 1/3 + 1/4 + 1/6 + 1/8 + 1/9 +
1/12 + ... which are the reciprocals of the positive integers whose
only prime factors are 2's and 3's.
- Finding the Sum of Arithmetico-Geometric Series [09/13/2004]
Find the sum of the infinite series 1/7 + 4/(7^2) + 9/(7^3) + 16/(7^4)
+ ... I would also like to know if there is a general rule to find the sum
of (n^2/p^n) for n = 1 to infinity.
- Finding the Sum of Arithmetic Series [06/12/2006]
Find the sum of the arithmetic series 4 + 10 + 16 + ... + 58. | <urn:uuid:a6f8a7d9-961d-42ff-b61f-f7f8028823b4> | 2.890625 | 2,270 | Q&A Forum | Science & Tech. | 95.26681 | 1,139 |
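As a quick sanity check on the final entry above: the series 4 + 10 + 16 + ... + 58 is arithmetic with common difference 6, so it has (58 − 4)/6 + 1 = 10 terms, and its sum is n(first + last)/2 = 10 × 62/2 = 310. In Python:

```python
terms = list(range(4, 59, 6))   # 4, 10, 16, ..., 58
n = len(terms)                  # number of terms
print(n, sum(terms))            # 10 310
assert sum(terms) == n * (terms[0] + terms[-1]) // 2 == 310
```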
A 136-m-wide river flows due east at a uniform speed of 9 m/s. A boat with a speed of 1 m/s relative to the water leaves the south bank pointed in a direction 16° west of north. How long does the boat take to cross the river? Assume an xy coordinate system with the positive direction of the x axis due east and the positive direction of the y axis due north.
Thanks in advance. | <urn:uuid:24efdb04-aa9f-4dee-b76e-2828b63bf5bb> | 2.71875 | 92 | Q&A Forum | Science & Tech. | 80.496734 | 1,140 |
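One way to sanity-check the setup numerically: the current flows entirely eastward, so only the northward component of the boat's velocity, v·cos 16°, carries it across the river, and the current's speed drops out of the crossing time. A Python sketch with the numbers as given:

```python
import math

width = 136.0             # river width (north-south), m
v_boat = 1.0              # boat speed relative to the water, m/s
angle = math.radians(16)  # heading: 16 degrees west of north

v_north = v_boat * math.cos(angle)  # component carrying the boat across
t = width / v_north
print(round(t, 1))                  # ~141.5 s
```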
A triangle is inscribed in a circle. The vertices of the triangle divide the circle into three arcs of lengths 3,4,and 5. What is the area of the triangle?
Not sure how to start the problem...
Could I get some hints please?
You must draw a diagram. Inscribe a triangle in a circle.
Draw three radial segments, one from each vertex to the center,
Now you have three sub-triangles. The sum of their areas is the area you want.
The radius of the circle is 6/π. How and why?
The angle subtending the arc of length 5 measures 5π/6. Again, how and why?
If a and b are the lengths of two sides of a triangle and θ is the measure of the angle between them, then the area of that triangle is (1/2)ab·sin(θ).
I think I got it. It took me a while though....
I understand the radius is 6/π because the circumference is 3 + 4 + 5 = 12, so 2πr = 12 gives r = 6/π.
Then I understood the angle subtending the arc of length 5 is 5π/6 because 2π × 5/12 = 5π/6,
for the arc of length 4: 2π × 4/12 = 2π/3,
and for the arc of length 3: 2π × 3/12 = π/2.
I fully understand why (1/2)ab·sin(θ) gets the area, therefore
(1/2)(6/π)(6/π)sin(5π/6) + (1/2)(6/π)(6/π)sin(2π/3) + (1/2)(6/π)(6/π)sin(π/2) = (9/π²)(3 + √3).
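A quick numerical check of this result in Python, using the radius 6/π and the central angles 5π/6, 2π/3, and π/2:

```python
import math

r = 6 / math.pi  # circumference 3 + 4 + 5 = 12 = 2*pi*r
central = [5 * math.pi / 6, 2 * math.pi / 3, math.pi / 2]  # arcs 5, 4, 3
area = sum(0.5 * r * r * math.sin(t) for t in central)
print(area)      # ~4.315
assert math.isclose(area, 9 * (3 + math.sqrt(3)) / math.pi ** 2)
```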
Delegates (C# Programming Guide)
A delegate is a type that defines a method signature. When you instantiate a delegate, you can associate its instance with any method with a compatible signature. You can invoke (or call) the method through the delegate instance.
Delegates are used to pass methods as arguments to other methods. Event handlers are nothing more than methods that are invoked through delegates. You create a custom method, and a class such as a Windows control can call your method when a certain event occurs. The following example shows a delegate declaration:
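As an illustration (the identifier names below are representative, not necessarily those of the original sample), a delegate declaration with a matching method looks like this:

```csharp
// A delegate type for methods that take a string and return void.
public delegate void Del(string message);

// Any static or instance method with a matching signature can be assigned:
public static void DelegateMethod(string message)
{
    Console.WriteLine(message);
}

// Instantiation and invocation through the delegate instance:
Del handler = DelegateMethod;
handler("Hello World");
```

Because `DelegateMethod` matches `Del`'s signature (one `string` parameter, `void` return), the assignment compiles, and calling `handler` invokes `DelegateMethod`.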
Any method from any accessible class or struct that matches the delegate's signature, which consists of the return type and parameters, can be assigned to the delegate. The method can be either static or an instance method. This makes it possible to programmatically change method calls, and also plug new code into existing classes. As long as you know the signature of the delegate, you can assign your own method.
In the context of method overloading, the signature of a method does not include the return value. But in the context of delegates, the signature does include the return value. In other words, a method must have the same return value as the delegate.
This ability to refer to a method as a parameter makes delegates ideal for defining callback methods. For example, a reference to a method that compares two objects could be passed as an argument to a sort algorithm. Because the comparison code is in a separate procedure, the sort algorithm can be written in a more general way.
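To illustrate the callback idea, here is a hedged sketch; the delegate type, method names, and two-element "sort" are invented for brevity, not taken from the .NET libraries:

```csharp
// A comparison delegate: negative means x precedes y in the ordering.
public delegate int Compare(string x, string y);

// The sorting code knows only the delegate's signature, not the ordering:
public static void SortPair(string[] pair, Compare cmp)
{
    if (cmp(pair[0], pair[1]) > 0)
    {
        string tmp = pair[0];   // swap the out-of-order items
        pair[0] = pair[1];
        pair[1] = tmp;
    }
}

// Callers plug in any method matching the signature:
public static int ByLength(string x, string y)
{
    return x.Length - y.Length;
}

// SortPair(words, ByLength) orders a two-element array by string length.
```

The ordering rule lives in the caller's method, so the same `SortPair` works for any comparison the caller supplies.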
Delegates have the following properties:
Delegates are like C++ function pointers but are type safe.
Delegates allow methods to be passed as parameters.
Delegates can be used to define callback methods.
Delegates can be chained together; for example, multiple methods can be called on a single event.
Methods do not have to match the delegate signature exactly. For more information, see Using Variance in Delegates (C# and Visual Basic).
C# version 2.0 introduced the concept of Anonymous Methods, which allow code blocks to be passed as parameters in place of a separately defined method. C# 3.0 introduced lambda expressions as a more concise way of writing inline code blocks. Both anonymous methods and lambda expressions (in certain contexts) are compiled to delegate types. Together, these features are now known as anonymous functions. For more information about lambda expressions, see Anonymous Functions (C# Programming Guide).
For more information, see the C# Language Specification. The language specification is the definitive source for C# syntax and usage. | <urn:uuid:0d00acab-afd5-4a68-acaf-f8e47e41ba68> | 4.5625 | 526 | Documentation | Software Dev. | 34.008319 | 1,142 |
Nasty, brutish, and short--Thomas Hobbes's famous description of life without government could as easily be applied to baboons. The primates are famous for their bad manners. However, a troop of baboons in Kenya has recently changed its ways. Researchers suggest that the relatively peaceable behavior is a type of culture that's passed on to newcomers to the troop.
Baboon culture is rife with violence. Males fight over females, food, resting spots, and sometimes for no apparent reason. The most serious altercations are usually between baboons of close rank; but baboons low on the totem pole get bullied all the time by higher-ups looking for an ego boost.
Now it appears that one troop has found a better way. Robert Sapolsky, a primatologist at Stanford University in California, observed a troop of savanna baboons, dubbed Forest Troop, from the late 1970s until 1986, when an outbreak of bovine tuberculosis killed off the most aggressive males in the group. After the deaths of so many of the members, Sapolsky abandoned his study and stayed away for 10 years.
In the 13 April PLoS Biology, Sapolsky and his wife and colleague Lisa Share describe the dramatic changes they found when they returned. Members sat closer together and groomed each other more. The dominance hierarchy remained--Number Two still scrapped with Numbers One and Three as in a normal troop--but the higher-ranking baboons didn't vent their anger on subordinates. And that's apparently improved life for lowlier baboons; they don't have the classic markers of chronic stress--such as elevated levels of stress hormones--found in their peers in other troops. The most remarkable observation, however, was that the troop had apparently maintained the peace despite a complete turnover in the male population. Normally aggressive male adolescent baboons leave their native troop and slowly work into a new one; Forest Troop had somehow managed to assimilate these surly newcomers without losing its peaceful culture.
Sapolsky and Share are still unsure how the culture is being passed on, but they suspect that it has to do with the observed friendly attitude of the female baboons towards newcomer males. "Sapolsky's research seems to show that the female baboons have 'seen the light,' " and realized that life is better with peaceful males, says Frans de Waal, a primatologist at Emory University in Atlanta. But how the females might calm the waters is still unknown. | <urn:uuid:2387f851-83f6-4e7b-80f7-b9e78285d877> | 2.953125 | 508 | News Article | Science & Tech. | 39.492358 | 1,143 |
Polarization of radio waves transmitted through Antarctic ice shelves
Doake, C.S.M.; Corr, H.F.J.; Jenkins, A. 2002. Polarization of radio waves transmitted through Antarctic ice shelves. Annals of Glaciology, 34, 165-170. doi:10.3189/172756402781817572.
The polarization behaviour of radar waves transmitted through two Antarctic ice shelves has been investigated using a step frequency radar with a centre frequency of 300 MHz and a bandwidth of 150 MHz. One site was on Brunt Ice Shelf near Halley station, and 17 sites were on George VI Ice Shelf near the southern ice front. Birefringence in the ice dominated the behaviour on Brunt Ice Shelf, where the anisotropy in the effective permittivity was found to be about 0.14%. On George VI Ice Shelf, a highly anisotropic reflecting surface was the controlling feature, suggesting a fluted ice-shelf base formed by oceanographic currents.
Programmes: BAS Programmes > Antarctic Science in the Global Context (2000-2005) > Global Interactions of the Antarctic Ice Sheet
NORA Subject Terms: Glaciology
A quadrilateral changes shape with the edge lengths constant. Show
the scalar product of the diagonals is constant. If the diagonals
are perpendicular in one position are they always perpendicular?
NRICH has always had good solutions from Madras College in St Andrew's, Scotland, but the solutions to this problem were truly excellent.
As a quadrilateral Q is deformed (keeping the edge lengths constant)
the diagonals and the angle X between them change. Prove that the
area of Q is proportional to tanX. | <urn:uuid:8d5b2533-1a34-440b-9f92-e0d7f7a65a30> | 2.859375 | 114 | Academic Writing | Science & Tech. | 48.845098 | 1,145 |
Greenland Ice Sheet Melt Characteristics Derived from Passive Microwave Data
The Greenland ice sheet melt extent data, acquired as part of the NASA Program for Arctic Regional Climate Assessment (PARCA), is a daily (or every other day, prior to August 1987) estimate of the spatial extent of wet snow on the Greenland ice sheet since 1979. It is derived from passive microwave satellite brightness temperature characteristics using the Cross-Polarized Gradient Ratio (XPGR) of Abdalati and Steffen (1997). It is physically based on the changes in microwave emission characteristics observable in data from the Scanning Multi-channel Microwave Radiometer (SMMR) and the Special Sensor Microwave/Imager (SSM/I) instruments when surface snow melts. It is not a direct measure of the snow wetness but rather is a binary indicator of the state of melt of each SMMR and SSM/I pixel on the ice sheet for each day of observation. It is, however, a useful proxy for the amount of melt that occurs on the Greenland ice sheet. The data are provided in a variety of formats including raw data in ASCII format, gridded daily data in binary format, and annual and complete time series climatologies in gridded binary and GeoTIFF format. All data are in a 60 x 109 pixel subset of the standard Northern Hemisphere polar stereographic grid with a 25 km resolution and are available via FTP.
The following example shows how to cite the use of this data set in a publication. For more information, see our Use and Copyright Web page.
Waleed Abdalati. 2008. Greenland Ice Sheet Melt Characteristics Derived from Passive Microwave Data. [indicate subset used]. Boulder, Colorado USA: National Snow and Ice Data Center.
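As a concrete sketch of how the per-pixel melt flag is computed: the XPGR of Abdalati and Steffen (1997) compares a horizontally polarized low-frequency brightness temperature against a vertically polarized higher-frequency one. The channel choice (19 GHz H vs. 37 GHz V) and the SSM/I threshold below are my recollection of the paper, not taken from this page, so verify them against the original before use.

```python
def xpgr(t19h, t37v):
    """Cross-Polarized Gradient Ratio from two brightness temperatures (K).

    Assumed form: (T19H - T37V) / (T19H + T37V).
    """
    return (t19h - t37v) / (t19h + t37v)

def is_melting(t19h, t37v, threshold=-0.0158):
    """Binary melt flag for one pixel (threshold is an assumed SSM/I value).

    Liquid water sharply raises the 19 GHz H-pol emissivity of snow,
    pushing XPGR above the threshold.
    """
    return xpgr(t19h, t37v) > threshold

print(is_melting(260.0, 250.0))  # wet-snow-like signature -> True
print(is_melting(180.0, 230.0))  # dry-snow-like signature -> False
```

This mirrors the binary nature of the data set described above: the output is a melt/no-melt flag for each pixel, not a snow-wetness estimate.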
The international team of astronauts taking part in ESA's caving adventure have returned to Earth after spending six days underground. The voyage to the surface of our planet took them five hours from basecamp.
CAVES gives astronauts a taste of working as a safe and effective team during long spaceflights. In particular, they can hone their leadership and group skills while working in a typical multicultural team found on the International Space Station.
Course designer Loredana Bessone explains the similarities of caving and working in space: "The 'cavenauts' have to adapt to a completely new environment. Working and living underground is both physically and mentally demanding."
Space protocols were used in the course: "Cavewalking is similar to a spacewalk. You have to pay continuous attention to the correct use of tools and safety protocols, to the progression path and to obstacles, which correspond to No Touch Zones and Keep Out Zones on the Space Station."
CAVES is the first behavioural course to involve astronauts from all partners of the International Space Station. Astronauts from USA, Japan, Canada, Russia and Denmark participated this year.
Apart from exploring and surveying parts of the caves, the astronauts also conducted speleological research: cave meteorology, geology, biology and microbiology.
They set traps and collected specimens of underground life, which have now been forwarded to specialists for further analysis.
This year the astronauts explored further than the CAVES 2011 team and discovered what NASA astronaut Mike Fincke described as an underground "wonderland."
ESA astronaut Andreas Mogensen is very positive about the course: "CAVES is perhaps the most physically demanding astronaut training that I have taken part in, and perhaps also the most rewarding.
"To complete the training, our crew had to work together effectively and efficiently as a team, which we did.
"All in all, it was a fantastic and unique experience."
Satellite imagery from NASA's TRMM satellite showed that wind shear is pushing the bulk of rainfall away from the center of Tropical Storm Anais.
When NASA's Tropical Rainfall Measuring Mission (TRMM) satellite passed over Tropical Storm Anais on Oct. 16 at 0654 UTC (2:54 a.m. EDT), light to moderate rainfall was occurring southeast of the center, falling at a rate between 0.78 and 1.57 inches (20 to 40 mm) per hour. The displacement of rainfall from around the storm's center to the southeast indicates moderate to strong northwesterly wind shear. There were no areas of heavy rainfall, indicating that the storm had weakened since the previous day.
Forecasters at the Joint Typhoon Warning Center noted "the TRMM image depicts tightly-curved, shallow convective (thunderstorm) banding wrapping into a well-defined center with deep convective banding limited to the south quadrant."
Tropical Storm Anais had maximum sustained winds near 55 knots (63.2 mph/102 kph) on Oct. 16 at 1500 UTC (11 a.m. EDT). Anais was located near 14.4 South and 59.8 East, about 500 nautical miles (575 miles/926 km) north-northeast of La Reunion and moving toward the west-southwest at 9 knots (10.3 mph/16.6 kph).
Anais is forecast to continue tracking to the west-southwest toward Madagascar, while weakening.
Shining new light on dark matter
In the 1930s, astronomers discovered that many galaxy clusters observable from Earth have a much stronger gravitational field than they should have given their predicted mass. Further astronomical observations only added to this puzzle. After much consideration, it was concluded that something mysterious called dark matter must be involved. Dark matter is in all respects invisible and can only be detected by its gravitational effect on normal matter. If this new theory was right then dark matter would make up most of the mass of the universe.
However, in February three scientists claimed that dark matter was not necessary, and that by slightly altering Einstein's equations for general relativity they could account for the anomalous gravitational effects. Not everyone was convinced by the new explanation though, and now new evidence has been put forward in support of dark matter through studying the "bullet" galaxy cluster with the Chandra X-ray telescope.
The cluster was created when two separate clusters smashed together. Tremendous amounts of energy were released in this collision; enough in fact to tear the normal matter and the dark matter apart. Even though dark matter is invisible, scientists were able to see the effect by measuring how the mass of the cluster was distributed.
The data gathered supported a model involving dark matter but not an altered form of general relativity as was previously proposed. No doubt the argument over the existence of dark matter will continue but supporters of the dark matter model believe this provides the most conclusive evidence yet.
You can read the full story on Science Daily
posted by Plus @ 2:37 PM
Radio JOVE home page
The Galactic Background Radiation
The ever-present sound of our galaxy
by Dr. Leonard N. Garcia
When making observations of Jupiter you may hear all kinds of radio frequency interference. Most of the interference will likely be of terrestrial origin and may be natural, like distant lightning strikes, or man-made, like power line "buzz" or radio stations. There is, however, a type of interference that is inescapable and doesn't come from anything on Earth: the ever-present galactic background radio noise. It can always be heard, but not always at the same strength. The discovery of the origin of this background noise is usually marked as the birth of radio astronomy.

Karl Jansky, an engineer working for Bell Telephone Labs, was assigned the task of locating sources of interference in long-distance radio-telephone communications. In 1931, in Holmdel, New Jersey, he built an antenna operating at 20.5 MHz which rotated horizontally like a merry-go-round. The rotation of the antenna allowed him to determine the direction from which interference was coming. Jansky heard interference from local lightning storms, interference from distant lightning storms, and a third type of interference, "... a steady hiss type static of unknown origin." The direction of this third type of interference gradually changed over the course of a day, moving nearly 360 degrees over 24 hours. After months of careful study of these records, Jansky concluded, in a paper published in 1933, that "the direction of arrival of these waves is fixed in space, i.e., the waves come from some source outside the solar system". His approximate coordinates for the peak in these radio waves lay in the constellation of Sagittarius, towards the direction of the center of the Milky Way Galaxy.
On a clear night away from city lights the Milky Way appears to us as a fuzzy band of light arching across the sky. The Moon, the Sun and the planets tend to follow a path which intersects this band of light at two points. When Jupiter happens to cross the Milky Way at one of these points, especially the point that lies closer to the center of the Galaxy, we tend to hear an increase in the galactic background level when we point our antennas towards Jupiter. The figure below shows a plot of the average galactic background levels heard during observations of Jupiter at the University of Florida Radio Observatory. The peaks in the figure in 1972 and 1984 are when Jupiter was in the vicinity of the direction towards the center of the Galaxy. The spacing of 12 years between peaks corresponds to the orbital period of Jupiter. The smaller peaks seen in 1977-78 and 1990 correspond to times when Jupiter was crossing the galactic plane far from the direction of the center. The galactic noise is understood as coming from high speed electrons spiraling around the weak magnetic fields which permeate our galaxy.
Figure (left): A plot of the galactic background antenna temperature (proportional to brightness) at 18, 20, and 22 MHz using Yagi antennas at the University of Florida Radio Observatory.
To learn more:
Radio JOVE Multimedia Exhibits - To hear for yourself what the Galactic Background sounds like. Narrated by Richard Flagg.
Ham radio and radio astronomy - The role amateur radio operators ("hams") have played in the development of radio astronomy.
John D., Radio Astronomy, Cygnus-Quasar Books, 1988.
Smith, A. G. and T. D. Carr, Radio Exploration of the Planetary System, Van Nostrand Co., 1964.
The gem command is one of the most used Ruby-related commands, but most users don't take the time to learn anything past gem install and gem search. Learning the gem command well is an essential Ruby skill.
The gem command-line utility is split into a number of commands. Among these are the familiar install and search, but other useful commands exist such as spec and sources. However, you should start with the help command.
The gem command has integrated help. By running the command gem help, you can get a basic help screen. To get a list of commands available, run the gem help commands command. To get further help about a specific command (for example, the cleanup command), run gem help cleanup. Another useful help screen is the examples screen, accessible by the gem help examples command.
Most commands work on a gem repository, either local (the gems you have installed) or remote; by default, commands use the local repository. To specify the repository you intend, add either --remote or --local to the end of the command. For example, to search the remote repository for gems with the word "twitter" in them, you would run gem search twitter --remote. Specify both remote and local repositories by using the --both switch.
When running any gem command, the name can be shortened as long as it doesn't become ambiguous. To run a gem dependency command, you can simply run a gem dep command.
Below is a list of the commands and an explanation of their function.
build - Given the source code for a gem and a .gemspec file, this will build a .gem file suitable for uploading to a gem repository or installing on another computer with the gem command. A .gemspec file holds information about a gem including name, author, version and dependencies.
cert - Manages certificates for cryptographically signed gems. If you're worried that a malicious user is going to compromise the gems you install, you can cryptographically sign them to prevent this. Keys may be added or deleted from your list of acceptable keys, as well as a few other crypto key related functions.
check - Performs a number of actions, including running any unit tests, checking the checksum of installed gems and looking for unmanaged files in the gem repository. The type of check you wish to run must be added to the end of the gem command.
cleanup - Removes old versions of installed gems from your local repository. If you frequently upgrade gems, you can have old versions hanging around that you don't need anymore.
contents - Shows the contents of an installed gem. This is a list of files the gem installed and where they are on the filesystem.
dependency - Shows all the gems the listed gem depends upon, as well as the versions of the gem it depends upon. For example, running gem dep twitter tells me the twitter gem relies on hpricot, activesupport, httparty and echoe. This is useful when packaging your applications for deployment.
environment - Displays various information about the RubyGems environment, including the version installed, where it's installed, where the gem repository is, etc.
fetch - Fetches a gem and saves the .gem file in the current directory. This is useful for transferring gems to be deployed on other servers, without them needing to download the gem themselves.
generate_index - Generates an index for a gem server. This is only useful if you're running a gem repository.
install - Downloads a gem from the specified repository (--local or --remote) and installs it. It also downloads any dependencies and installs them. To install a specific version of a gem, use the --version switch.
list - Displays a list of gems in the repository. Note that doing this with --remote will generate quite a large list. Save this list to a file for fast searching.
lock - Generates a Ruby script that requires the exact version of all dependencies of a certain gem. This ensures that the gem versions tested during development will be installed, not future or past versions which may have bugs the developers cannot account for.
mirror - Mirrors an entire gem repository. Note that trying to mirror the RubyGems repository is a huge task. Do not do so unless you need to run a local mirror for other clients.
outdated - Displays a list of installed gems that have newer versions on the remote repository.
pristine - Returns gems to their original state. This means unpacking all gems from the local cache, overwriting any changes made to the gems in the local gem repository. This can be used to repair a broken gem.
rdoc - Generates rdoc documentation for an installed gem. This rdoc documentation can then be viewed with a web browser.
search - Searches the names of all gems and returns a list of gems whose name contains a string. For example, to search for all gems containing the word twitter in the name, run gem search twitter.
server - Starts a web server that will act as a gem repository and serves RDoc documentation for all installed gems. This is most useful for the documentation feature.
sources - Manages the list of sources for remote repositories. By default, only http://gems.rubyforge.org is in the list, but more can be added or removed.
specification - Displays the gemspec of a gem. This will tell you all the information about a gem, including author, dependencies, etc.
stale - Displays a list of installed gems, as well as their access times (the last time the gem was included). This can help you weed out gems you no longer use so you can uninstall them.
uninstall - Uninstalls a gem. If there are any installed gems that depend on this gem, you will be prompted whether you want to uninstall it anyway. If you do, any gems that depended on it will be broken until it is reinstalled.
unpack - Unpacks an installed gem into the current directory. This can be used to "freeze" gems to your project directory.
update - Checks if there are new versions of the specified gem in the remote repository. If there are, it downloads and installs the newest version.
which - Finds the exact location of the .rb file to include. This can be useful for getting a path for requiring a gem without requiring the rubygems library.
Pressure and Buoyancy Problems
Let's state the two working equations we have so far.
Pressure and Depth: P = Po + ρgh
Buoyancy: F_buoyancy = weight of fluid displaced = ρgV_displaced
The solutions to the problems below can be found at the end of this page. As always, try all the problems before looking at the solutions. It's much easier to understand a solution put before you than to come up with the solution yourself. To develop the skills necessary to solve the problems yourself, you must spend the time doing it.
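The two working equations translate directly into code. Here is a minimal sketch (the function names, constants, and example values are mine, taken from the problems below) that you can use to check your answers; ρ is written rho.

```python
from math import pi

G     = 9.8      # m/s^2
P_ATM = 1.01e5   # Pa, atmospheric pressure at the surface

def pressure_at_depth(depth_m, rho, p0=P_ATM):
    """Absolute pressure at depth h: P = Po + rho*g*h."""
    return p0 + rho * G * depth_m

def buoyancy_force(displaced_volume_m3, rho):
    """Buoyancy = weight of displaced fluid = rho*g*V."""
    return rho * G * displaced_volume_m3

# Problem 1: Virgin Islands Basin, 4000 m of sea water (rho = 1.03e3 kg/m^3).
p = pressure_at_depth(4000.0, rho=1.03e3)
print(p / P_ATM)  # about 400 atmospheres

# Problem 5: toilet-tank float, a 10 cm diameter sphere in fresh water.
volume = 4.0 / 3.0 * pi * 0.05**3
print(buoyancy_force(volume, rho=1.0e3))  # about 5.1 N
```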
- What is the absolute pressure at the bottom of the Virgin Islands Basin (located between St. Thomas and St. Croix), at a depth of 4000 meters? Express your answer in atmospheres of pressure. What is the gauge pressure? If there are fish at this depth, how would they deal with this pressure? The density of sea water is 1.03 × 10³ kg/m³.
- A water hose is connected to a spigot located at the bottom of a cistern. The cistern is half full with 5 ft of water. The nozzle at the other end of the hose is turned off but is left down by the papaya tree, which is 20 ft below the bottom of the cistern. If the spigot is left open, what is the pressure at the nozzle? Why would it be a good idea to turn off the spigot when you are finished watering the tree?
- A large part of Holland is below sea level. Earthen dikes keep the sea at bay. There's a Holland legend of a boy who uses his finger to plug a hole in the dike and saves the country side. Assume the hole is located 3.0 meters below the sea level. The hole is the same size as the childs finger, a diameter of about 1 cm. How much force would the child have to exert against the sea pressure in order to keep the sea at bay? Do you think a child could do this?
- A 10 lb box falls overboard and is floating. The box has the shape of a cube, 1 ft on a side. What is the buoyancy force on the box?
- The float in a toilet tank is a sphere of diameter 10 cm.
1) What is the buoyancy force on the float when it is completely submerged? You might need a reminder that the volume of a sphere is V = (4/3)πr³
2) Here's a slightly tougher one. If the float must have an upward buoyancy force of 3.0 N to shut off the ballcock valve, what percentage of the float will be submerged?
- Here's an interesting puzzle to see if you really understand buoyancy and displacement. You are floating in a small dinghy in your pool. There's a brick in the boat. You toss the brick out of the boat and into the pool. The brick sinks to the bottom of the pool. Does the water level at the side of the pool rise, stay the same, or decrease?
Don't look at the answers until you've tried the problems on your own!!
- Using the SI system of units, P = Po + ρgh = 1.01 × 10⁵ + 1.03 × 10³ × 9.8 × 4000 ≈ 4.0 × 10⁷ Pa. In terms of atmospheres, that would be 4.0 × 10⁷ Pa / 1.01 × 10⁵ Pa/atm ≈ 400 atmospheres! The gauge pressure is P - Po, which is just the ρgh term; that would be about 399 atm. If fish lived at that depth, they would not notice the pressure any more than we notice the 15 psi pressure pushing on us. Organisms generally adapt to the pressure around them. The fish take water into their bodies at the ambient pressure, so there is no net or gauge pressure difference. However, changing depth can present problems. Many sea mammals, such as sea lions, have developed systems that allow them to dive to extraordinary depths.
- The nozzle end is 5 + 20 = 25 ft below the water level. We can convert this to meters and apply the static pressure equation in the SI units. But we could also use the fact that 34 ft of fresh water produces a pressure of 1 atmosphere = 14.7 psi. So 25 ft corresponds to 14.7 x 25/34 = 11 psi. Note that this is the gauge pressure, which is appropriate since atmospheric pressure act both on the surface of the water and on the hose. This means there will be a net force of 11 lb pushing outward on every square inch of the hose. It's probably best to turn the spigot off.
- The gauge pressure would be 1.03 × 10³ × 9.8 × 3.0 ≈ 3.0 × 10⁴ Pa. The force exerted by his "round" finger would be F = PA = 3.0 × 10⁴ × π(0.01/2)² ≈ 2.4 N. This is about 0.53 lb ... no problem!
- The info on the size of the box is not relevant. If the box is floating, then the buoyancy force must be equal to the weight of the box ... = 10 lb! Here's another problem to try. A cubic foot of water weighs about 64 lb. Can you see why the box would float with 10/64 of its volume submerged? This would mean about 1.9 inches below the water.
- The volume of the float is V = (4/3)π(0.05)³ = 5.2 × 10⁻⁴ m³. Assuming there is freshwater in your toilet tank, then F_buoyancy = 10³ × 9.8 × 5.2 × 10⁻⁴ ≈ 5.1 N.
If you need 3.0 N of upward force to shut off the valve and there's 5.1 N of buoyancy force when completely submerged, then you would need 3.0 / 5.1 x 100% = 59% of the float to be submerged.
- Did you figure this one out? The water level in the pool goes down! Some of our physics majors get fooled by this one. While in the boat, the entire weight of the brick is being supported ... ultimately by water displaced by the dinghy. Since the brick sinks when out of the boat, it must be more dense than water. Hence, the volume of water displaced is greater than the volume of the brick. But when the brick is tossed into the pool, it displaces only its own volume. OK, try again. What if the object tossed overboard floated?
Trees Come 'From Out Of The Air' Says Nobel Laureate Richard Feynman. Really?
Originally published on Tue September 25, 2012 4:05 pm
Ask one of the greatest scientists of the 20th century a simple question, and his answer makes me go, "What? What did he just say?"
The question was: Where do trees come from?
Meaning, when you see a tree, a big, tall, heavy one, and you wonder where did it get its mass, its thick trunk, its branches — the instinctive answer would be from the soil below, plus a little water (and, in some mysterious way, sunshine), right?
Nope, says the late Nobel laureate Richard Feynman, sitting in an easy chair, thinking out loud in a You Tube video clip from 1983: "People look at a tree and think it comes out of the ground, that plants grow out of the ground, " he says, but "if you ask, where does the substance [of the tree] come from? You find out ... trees come out of the air!"
From the air? Trees are hard, branchy, heavy, covered with bark. They don't precipitate out of air. This sounds like sorcery, not science.
But then Feynman says it again, "They surely ... come out of the air."
If you are wondering how tons of wood, leaf, bark and all the innards of, say, a massive redwood tree can get pulled out of air, you'll want to hear Feynman's explanation, which is mostly him happily arguing with himself. ("How is it the tree is so smart ... and do that so easily? Ah! Life! Life has some mysterious force? No! ...")
But before you go to Feynman, it's best to start here, with this primer from Derek Muller of Australia's science video site, Veritasium. "Would it surprise you," Derek asks three young guys in a park — one of them wearing a T-shirt that says "living the dreem," "to discover that 95 percent of a tree is actually from carbon dioxide, that trees are largely made up of air?" The guys smile politely and say, "Ummmm ... OK ... "
I think, watching this video, you'll be more surprised than they were.
So that's the lesson: that a tree gets its mass from air and water. It "eats" air, chomps down on airborne carbon dioxide, then uses sunshine to pull the carbon dioxide apart, gets rid of the oxygen, which "it spits back into the air," says Feynman, "leaving the carbon and water, the stuff to make the substance of the tree."
But wait a second! Water is in the ground, right? Water is not in the air. Ah, says Feynman, but how did water get into the ground? "It came mostly out of the air, didn't it?" Waving his hands, he says rain "came out of the sky."
What a beautiful notion, that from the dancing air come the towering monarchs that are our trees. But don't take my word for it, or Derek's. You're now ready to hear it from the Big Guy. When this begins, he's talking about fire. He gets to trees about two minutes in.
Turtles to be climate change canaries
“Turtles are a really good way to study climate change because they depend on healthy beaches as well as mangroves, sea grass beds, coral reefs and deep ocean ecosystems to live”, said Dr. Lucy Hawkes, coordinator of an initiative to develop adaptation strategies for climate change impacts to turtles.
As part of the initiative, WWF launched a new website today, Adaptation to Climate Change in Marine Turtles (ACT).
“Understanding of how climate change may affect the beaches, the reef and the open ocean will not only benefit endangered sea turtle populations, but also the millions of people who live along the coastlines of the world and depend upon marine resources and environmental services.”
The public, educators, conservationists and scientists will be able to share information and projects to try to gain a better picture of how climate change will affect turtles and what might be done to combat the impacts.
According to the latest reports by the Intergovernmental Panel on Climate Change (IPCC), our environment will be altered dramatically over the coming years by increasing temperatures, increased severity and frequency of storm events, and rising sea levels.
These effects could be devastating in low-lying tropical areas, where the majority of the population depends on coastal resources and tourism.
The Caribbean is one such important region that is greatly threatened by climate change and is also host to globally important populations of sea turtles.
By 2010 the project hopes to understand the current state of knowledge about the impacts of climate change on marine turtles and their habitats with a global network of marine turtle and climate specialists, and make management recommendations for their conservation.
It is an initiative of WWF through a grant from the MacArthur Foundation and support from Hewlett Packard.
The website, hosting free downloads, information and latest scientific findings, can be accessed at: http://www.panda.org/lac/marineturtles/act
Parametric Cartesian equation:
x = a(cos(t) + t sin(t)), y = a(sin(t) - t cos(t))
It was studied by Huygens when he was considering clocks without pendulums that might be used on ships at sea. He used the involute of a circle in his first pendulum clock in an attempt to force the pendulum to swing in the path of a cycloid.
Finding a clock which would keep accurate time at sea was a major problem and many years were spent looking for a solution. The problem was of vital importance since if GMT was known from a clock then, since local time could be easily computed from the Sun, longitude could be easily computed.
The pedal of the involute of a circle, with the centre as pedal point, is a Spiral of Archimedes.
Of course the evolute of an involute of a circle is a circle.
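A quick numerical check of that pedal fact (my own sketch, using the parametrisation at the top of the page): differentiating the involute gives a tangent along (cos t, sin t), and the perpendicular distance from the circle's centre to that tangent line works out to a·t, i.e. the pedal curve is the Archimedean spiral r = aθ.

```python
from math import cos, sin

def involute(a, t):
    """Point on the involute of a circle of radius a, at parameter t."""
    return a * (cos(t) + t * sin(t)), a * (sin(t) - t * cos(t))

def pedal_distance(a, t):
    """Distance from the origin (circle centre) to the tangent line at t.

    Differentiating gives x' = a*t*cos(t), y' = a*t*sin(t), so the unit
    tangent is (cos t, sin t) and the distance is |x*sin(t) - y*cos(t)|.
    """
    x, y = involute(a, t)
    return abs(x * sin(t) - y * cos(t))

for t in (0.5, 1.0, 3.0, 7.0):
    print(pedal_distance(2.0, t))  # equals 2.0 * t, up to rounding
```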
Science Fair Project Encyclopedia
The trade winds are a pattern of wind found in bands around the Earth's equatorial region. The trade winds are the prevailing winds in the tropics, blowing from the high-pressure area in the horse latitudes towards the low-pressure area around the equator. The trade winds blow predominantly from the northeast in the northern hemisphere and from the southeast in the southern hemisphere.
Their name comes from the fact that these winds enabled trading ships to sail in two directions between Europe and the Americas: the ships could sail a southern route with the trade winds westward from Europe to the Americas, then head north to the middle latitudes and sail with the westerlies eastward from the Americas back to Europe.
In the zone between about 30° N. and 30° S., the surface air flows toward the equator and the flow aloft is poleward. A low-pressure area of calm, light variable winds near the equator is known to mariners as the doldrums. Around 30° N. and S., the poleward flowing air begins to descend toward the surface in subtropical high-pressure belts. The sinking air is relatively dry because its moisture has already been released near the Equator above the tropical rain forests. Near the center of this high-pressure zone of descending air, called the "Horse Latitudes," the winds at the surface are weak and variable. The name for this area is believed to have been given by colonial sailors, who, becalmed sometimes at these latitudes while crossing the oceans with horses as cargo, were forced to throw a few horses overboard to conserve water.
The surface air that flows from these subtropical high-pressure belts toward the Equator is deflected toward the west in both hemispheres by the Coriolis effect. Because winds are named for the direction from which the wind is blowing, these winds are called the northeast trade winds in the Northern Hemisphere and the southeast trade winds in the Southern Hemisphere. The trade winds meet at the doldrums. Surface winds known as "westerlies" flow from the Horse Latitudes toward the poles. The "westerlies" meet "easterlies" from the polar highs at about 50-60° N. and S.
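The deflection described above can be sketched quantitatively. The Coriolis parameter f = 2Ω sin(latitude) is positive in the Northern Hemisphere (deflection to the right of the motion) and negative in the Southern Hemisphere (deflection to the left), which is why equatorward surface flow becomes the northeast trades in the north and the southeast trades in the south. A small illustrative calculation (the constant and latitudes are standard textbook values, not from this article):

```java
// Toy illustration: the Coriolis parameter f = 2 * Omega * sin(latitude)
// controls the direction of deflection of moving air.
public class CoriolisDemo {
    static final double OMEGA = 7.2921e-5; // Earth's rotation rate, rad/s

    // Coriolis parameter at a given latitude in degrees.
    static double coriolisParameter(double latitudeDeg) {
        return 2.0 * OMEGA * Math.sin(Math.toRadians(latitudeDeg));
    }

    public static void main(String[] args) {
        // Positive f in the northern hemisphere: equatorward flow is
        // deflected to the right, giving the northeast trades.
        System.out.printf("f at 15 N: %+.2e s^-1%n", coriolisParameter(15));
        // Negative f in the southern hemisphere: deflection to the left,
        // giving the southeast trades.
        System.out.printf("f at 15 S: %+.2e s^-1%n", coriolisParameter(-15));
        // f vanishes at the equator, where the trades converge (doldrums).
        System.out.printf("f at 0:    %+.2e s^-1%n", coriolisParameter(0));
    }
}
```

The sign change across the equator, where f vanishes, is why the two trade-wind belts mirror each other.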
Near the ground, wind direction is affected by friction and by changes in topography. Winds may be seasonal, sporadic, or daily. They range from gentle breezes to violent gusts at speeds greater than 300 km/h (~200 mph).
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.
Phylogenetic Analysis of Mexican Cave Scorpions Suggests Adaptation to Caves is Reversible
NEW EVIDENCE THAT SPECIALIZED ADAPTATIONS ARE NOT EVOLUTIONARY DEAD ENDS
Blind scorpions that live in the stygian depths of caves are throwing light on a long-held assumption that specialized adaptations are irreversible evolutionary dead-ends. According to a new phylogenetic analysis of the family Typhlochactidae, scorpions currently living closer to the surface (under stones and in leaf litter) evolved independently on more than one occasion from ancestors adapted to life further below the surface (in caves). The research, currently available in an early online edition, will be published in the April issue of Cladistics.
"Our research shows that the evolution of troglobites, or animals adapted for life in caves, is reversible," says Lorenzo Prendini, Associate Curator in the Division of Invertebrate Zoology at the American Museum of Natural History. "Three more generalized scorpion species living closer to the surface evolved from specialized ancestors living in caves deep below the surface."
Scorpions are predatory, venomous, nocturnal arachnids that are related to spiders, mites, and other arthropods. About 2,000 species are distributed throughout the world, but only 23 species found in ten different families are adapted to a permanent life in caves. These are the specialized troglobites.
This study concentrates on the family Typhlochactidae, which includes nine species of scorpions endemic to the karstic regions of eastern Mexico. These species were initially grouped together by Robert Mitchell in 1971 but were elevated to the rank of family for the first time last year, based on morphological data published by Prendini and Valerio Vignoli of the Department of Evolutionary Biology, University of Siena, Italy, in the Bulletin of the American Museum of Natural History. Prendini, Vignoli, and Oscar F. Francke of the Departamento de Zoología, Instituto de Biología at the Universidad Nacional Autónoma de México, Mexico City, also created a new genus, Stygochactas, for one species in the family and described a new surface-living species, Typhlochactas sissomi, in a separate American Museum Novitates paper. All species in the family have adapted to the dark with features such as loss of eyes and reduced pigmentation. The family contains the most specialized troglobite scorpion, Sotanochactas elliotti; one of the world's smallest scorpions, Typhlochactas mitchelli; and the scorpion found at the greatest depth (nearly 1 km below the surface), Alacran tartarus. Three of the species (including T. mitchelli) live closer to the surface and are more generalized morphologically than the other six, making this family an excellent model with which to test and falsify Cope's Law of the unspecialized (novel evolutionary traits tend to originate from a generalized member of an ancestral taxon) and Dollo's Law of evolutionary irreversibility (specialized evolutionary traits are unlikely to reverse).
For the current research paper, Prendini and colleagues gathered data for 195 morphological characteristics, including a detailed mapping of the positions of all trichobothria (sensory setae) on the pedipalps, among the species of Typhlochactidae. The resulting phylogenetic tree shows that adaptation to life in caves has reversed among this group of scorpions: two of the less specialized, surface-living species, T. mitchelli and T. sylvestris, share a common ancestor with a much more cave-adapted species, and a similar pattern was found for the third less specialized, surface-living species, T. sissomi.
"Scorpions have been around for 450 million years, and their biology is obviously flexible," says Prendini. "This unique group of eyeless Mexican scorpions may have started re-colonizing niches closer to the surface from the deep caves of Mexico after their surface-living ancestors were wiped out by the nearby Chicxuluxb impact along with non-avian dinosaurs, ammonites, and other species."
The research was funded by the National Science Foundation, the Theodore Roosevelt Memorial Fund, and a SYNTHESYS grant.
Media Inquiries: Department of Communications, 212-769-5800
Little was known about this hydrogen-breathing organism before its genome sequence was determined. By utilizing computational analyses and comparison with the genomes of other organisms, the researchers have discovered several remarkable features. For example, the genome encodes a full suite of genes for making spores, a previously unknown talent of the microbe. Organisms that make spores have attracted great interest recently because this is a process found in the bacterium that causes anthrax. Sporulation allows anthrax to be used as a bioweapon because the spores are resistant to heat, radiation, and other treatments.
By comparing this genome to those of other spore-making species, including the anthrax pathogen, Eisen and colleagues identified what may be the minimal biochemical machinery necessary for any microbe to sporulate. Thus, studies of this poison-eating microbe may help us better understand the biology of the bacterium that causes anthrax.
Building off this work, TIGR scientists are leveraging the information from the genome of this organism to study the ecology of microbes living in diverse hot springs, such as those in Yellowstone National Park. They want to know what types of microbes are found in different hot springs--and why. To find out, the researchers are dipping into the hot springs of Yellowstone, Russia, and other far-flung locales, to isolate and decipher the genomes of microbes found there.
"What we want to have is a field guide for these microbes, like those available for birds and mammals," Eisen says. "Right now, we can't even answer simple questions. D
Source: The Institute for Genomic Research
Exception Handling in Visual COBOL.NET
Let's start by taking a look at a simple and pretty standard COBOL way of handling exceptions. We’ll then see how that same example would be coding in a managed environment utilizing Visual COBOL.NET.
COBOL developers, like developers everywhere, have for years used various techniques to identify and handle unexpected results during the execution of their code. While the techniques vary in each shop, there is one more widely used in the .NET environment that COBOL developers should look at and consider. It is referred to by a number of names or phrases, such as 'throwing exceptions', 'raising exceptions' or simply 'exceptions'. While it can be a more standard way of identifying and handling errors that occur, exception processing is not without some overhead that developers must be aware of and take into consideration when deciding how to handle non-standard results. Visual COBOL provides mechanisms that enable COBOL developers to take advantage of the same exception handling that distributed developers have been using for years.
Standard COBOL Error Handling
In COBOL shops, developers use whatever technique is most widely used and accepted in their environment. Most developers plan for and take action to identify what would be called standard errors: the type of issues that may be expected, such as missing data, invalid data, or improperly formatted data. Checking for this type of error is common and expected, and developers know how to do it with their coding techniques. Even in the .NET arena, standard checks for missing data are common and are not considered exceptional.
So what would be considered exceptional errors? One example would be a missing file that is supposed to be present. When the application was created, it was part of the design specification that a file needed for processing would always be present. But what if it isn't? This would be an exceptional condition, and one for which 'special' error handling must be created.
Declaratives are generally used to denote when exceptional errors have occurred and to handle those errors. The technique can be as simple as the following.
Notice the header 'Declaratives'. This is the area where you typically define an action that would occur for each exceptional error code encountered. Continuing our example from above, we expect a file to be present when the application starts. The code shows a check for a 'File Not Found' exception. Since we always expect a file to be present this would be an exceptional condition and processing cannot continue.
Now let's see how to do this in a managed .NET environment with Visual COBOL.
Throwing things around
In a .NET environment, when unexpected processing occurs, an 'exception is raised' or 'we throw an exception'. The terms are synonymous with error handling in the distributed world, so you may want to get used to them. Throwing an exception in Visual COBOL is really straightforward and, thanks to IntelliSense, easy to code up, as you'll see.
Let's take our previous example and see how to recode it for .NET.
We've used two techniques to raise the exception. For the file status of '35' we used the instantiation method to establish an exception object. We first define an exception object in Working-Storage:
Next we instantiate (or create a working version within our run-unit) the object and populate it. Once we have populated the object with the necessary information we 'raise' the exception to the run-unit. This is accomplished by the following lines of code:
The other mechanism we used was for the 'other' condition. In this instance we did not instantiate a specific object, but rather we simply raised an exception with a specific error message.
Both methods are valid and each accomplishes our goal of raising an exception. The first method, however, creates an object that we can then reuse at some later point if we so choose. It also adds additional overhead to the run-unit in not only creating the object, but maintaining it.
As we've just seen, the .NET method of throwing exceptions is quite straightforward and easy to implement in Visual COBOL. There are some caveats you should be aware of, though. In general, raising an exception is staggeringly inefficient: the program has to populate the stack trace when the exception is raised, and this is generally very slow. It is quite literally thousands of times faster to trap a condition with an 'on' clause or an 'if' statement than to use exception handling. Back in the early Java days the mantra was:
"Never raise an exception for something you expect to happen".
So, if we expect empty strings to happen sometimes, then that is not an exception; rather, it is an expected condition that should be handled with standard coding techniques in the application. In our example we expect a file to be present, and if none is found then we should by all means raise an exception. We expect users to enter invalid data, though, so these would not be exceptions but rather standard input errors we would trap for in our application source.
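The article's Visual COBOL listings are not reproduced above, so here is a sketch of the same policy in Java (the class and method names are mine, invented for illustration): an expected condition such as empty input is trapped with an ordinary test, while a file that the design guarantees to exist genuinely warrants a thrown exception.

```java
import java.io.File;
import java.io.FileNotFoundException;

// Illustrative sketch: reserve exceptions for truly exceptional conditions.
public class ExceptionPolicyDemo {

    // Expected condition: blank input happens routinely, so test for it
    // with an ordinary conditional instead of raising an exception.
    static boolean isBlank(String input) {
        return input == null || input.trim().isEmpty();
    }

    // Exceptional condition: the design says this file must exist, so a
    // missing file justifies the cost of building and throwing an exception.
    static File requireFile(String path) throws FileNotFoundException {
        File f = new File(path);
        if (!f.exists()) {
            throw new FileNotFoundException("Required file missing: " + path);
        }
        return f;
    }

    public static void main(String[] args) {
        System.out.println(isBlank("   "));  // expected case, no exception
        try {
            requireFile("/no/such/input.dat");
        } catch (FileNotFoundException e) {
            System.out.println("Exceptional: " + e.getMessage());
        }
    }
}
```

The cheap `isBlank` check runs on every call; the exception path is taken only when the design's assumption has been violated.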
The attached zip file contains two solutions, Declaratives and Exceptions. The Declaratives solution is the standard manner in which COBOL developers have raised exceptions. The Exception solution is a managed Visual COBOL solution using the technique described above. Experiment with both solutions and see how to adapt this technique to your environment.
Exception handling in the distributed world can be quite a powerful tool for a developer. It provides a standard mechanism in which exceptions are identified, presented and dealt with. Caution should be used, however, to handle only those conditions that are truly exceptional and not expected in the normal course of processing.
Classification & Distribution
- incomplete development (egg, nymph, adult)
- closely related to Thysanoptera and Psocoptera
Distribution: Abundant worldwide. Found in most terrestrial and freshwater habitats.
Number of families: 40 (North America), 73 (worldwide)
Number of species: 3,587 (North America), >50,000 (worldwide)
Life History & Ecology
Members of the suborder Heteroptera are known as "true bugs". They have very distinctive front wings, called hemelytra, in which the basal half is leathery and the apical half is membranous. At rest, these wings cross over one another to lie flat along the insect's back. These insects also have elongate, piercing-sucking mouthparts which arise from the ventral (hypognathous) or anterior (prognathous) part of the head capsule. The mandibles and maxillae are long and thread-like, interlocking with one another to form a flexible feeding tube (proboscis) that is no more than 0.1 mm in diameter yet contains both a food channel and a salivary channel. These stylets are enclosed within a protective sheath (the labium) that shortens or retracts during feeding.
The Heteroptera include a diverse assemblage of insects that have become adapted to a broad range of habitats -- terrestrial, aquatic and semi-aquatic. Terrestrial species are often associated with plants. They feed in vascular tissues or on the nutrients stored within seeds. Other species live as scavengers in the soil or underground in caves or ant nests. Still others are predators on a variety of small arthropods. A few species even feed on the blood of vertebrates. Bed bugs, and other members of the family Cimicidae, live exclusively as ectoparasites on birds and mammals (including humans). Aquatic Heteroptera can be found on the surface of both fresh and salt water, near shorelines, or beneath the water surface in nearly all freshwater habitats. With only a few exceptions, these insects are predators of other aquatic organisms.
- Antennae slender with 4-5 segments
- Proboscis 3-4 segmented, arising from front of head and curving below body when not in use
- Pronotum usually large, trapezoidal or rounded
- Triangular scutellum present behind pronotum
- Front wings with basal half leathery and apical half membranous (hemelytra). Wings lie flat on the back at rest, forming an "X".
- Tarsi 2- or 3-segmented
- Structurally similar to adults
- Always lacking wings
Plant feeding bugs are important pests of many crop plants. They may cause localized injury to plant tissues, they may weaken plants by removing sap, and they may also transmit plant pathogens. Predatory species of Heteroptera are generally regarded as beneficial insects, but those that feed on blood may transmit human diseases. Chagas disease, for example, is transmitted to humans by conenose bugs (genus Triatoma, family Reduviidae). Although bed bugs (family Cimicidae) can inflict annoying bites, there is little evidence that they regularly transmit any human or animal pathogen.
The three largest families of Heteroptera are:
- Miridae (Plant Bugs) -- Most species feed on plants, but some are predaceous. This family includes numerous pests such as the tarnished plant bug (Lygus lineolaris).
- Lygaeidae (Seed Bugs) -- Most species are seed feeders, a few are predatory. This family includes the chinch bug, Blissus leucopterus a pest of small grains, and the bigeyed bug, Geocoris bullatis, a beneficial predator.
- Pentatomidae (Stink Bugs) -- Shield-shaped body with large, triangular scutellum. Most species are herbivores, some are predators. All have scent glands which can produce an unpleasant odor.
Other families of terrestrial herbivores include:
- Tingidae (lace bugs)
- Coreidae (squash bugs and leaffooted bugs)
- Alydidae (broadheaded bugs)
- Rhopalidae (scentless plant bugs)
- Berytidae (stilt bugs)
Other families of terrestrial predators include:
- Reduviidae (assassin bugs)
- Phymatidae (ambush bugs)
- Nabidae (damsel bugs)
- Anthocoridae (minute pirate bugs)
The major families of aquatic predators include:
- Two families of Heteroptera are ectoparasites. The Cimicidae (bed bugs) live on birds and mammals (including humans). The Polyctenidae (bat bugs) live on bats.
- Water striders in the genus Halobates (family Gerridae) are the only insects that are truly marine. They live on the surface of the Pacific Ocean.
- Unlike other insects, male bedbugs do not place their sperm directly in the female's reproductive tract. Instead, they puncture her abdomen and inject the sperm into her body cavity. The sperm swim to the ovaries where they fertilize the eggs. This unusual type of reproductive behavior is appropriately known as "traumatic insemination".
- Some members of the family Largidae resemble ants. They live as social parasites in ant nests, mimicking the ants' behavior to get food.
Moons are shaped by the same surface processes that shape the planets. Several of the moons of the outer planets are large enough to be thought of as planets themselves. In fact, Jupiter's largest moon, Ganymede, is larger than the planet Mercury.
[Image credits: Moon: NASA/USGS; Io: NASA/JPL/University of Arizona.]
Examine each image to look for evidence of the surface processes at work on these moons.
7. What processes do you see evidence of on these moons? Identify the moon and the process you observe.
8. What is the most common surface process you observed in the solar system? Why do you think this process is so universal?
hi folks, I know DTD, CSS, XSL and Schemas, but I don't know where they apply, because they seem to be independent of each other. I want to know where XML is used and how, what the link is between XML and Java, and where exactly it fits. I mean, one complete life cycle of XML. regards, deva
Though these are different specifications, they all fit together. A DTD and/or Schema enforces constraints on your XML and hence ensures integrity. XSL provides a way to browse your XML document and extract the parts that match your criteria. The resulting "chunks" of the original XML document can be used to generate HTML, XHTML, or another XML document. XSL also provides vocabulary and syntax for rendering the XML data in different "styles" and for different media types. Hope that helps, Ajith
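To make that life cycle concrete, here is a minimal Java sketch (the XML content and all names are invented for illustration): a document is parsed into a DOM tree with the JDK's built-in parser, and XPath, the selection language that XSL builds on, extracts the piece you want.

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;

// Sketch of the XML "life cycle" from the Java side: parse, then select.
public class XmlLifecycleDemo {
    // Parse an XML string and pull out the book title with an XPath query.
    static String extractTitle(String xml) {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new InputSource(new StringReader(xml)));
            XPath xpath = XPathFactory.newInstance().newXPath();
            return xpath.evaluate("/catalog/book/title", doc);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        String xml = "<catalog><book><title>XML Basics</title></book></catalog>";
        System.out.println(extractTitle(xml)); // prints "XML Basics"
    }
}
```

A DTD or Schema would constrain what `<catalog>` may contain before this step, and an XSL stylesheet would use the same kind of path expressions to render the selected data as HTML.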
You will get the answer to your question if you think about the consequences of a method-local inner class using variables of the same method. Local variables exist only for the lifetime of the method: when the method ends, the stack frame and its variables are destroyed. But even after the method completes, the inner class object created within it might still be alive on the heap. What happens if this object tries to access a variable that no longer exists?
Thank you very much for your reply. OK, then where are the final variables stored? Is it the heap?
Because the variable is final, the inner class instance keeps a copy of that variable's value - either primitive, or the reference to an object. If the variable was a reference to an object, this object will still have at least one reference (in the inner class) and not be garbage collected.
So yes, in the end, its value is stored on the heap, within the instance.
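A short example makes the copying visible (the interface and names are illustrative). The local class instance escapes the method that created it, yet the captured value is still available afterwards, because the compiler stored a copy of it inside the instance on the heap:

```java
// Demonstrates why captured local variables must be final: the inner-class
// instance keeps its own copy of the value, which outlives the stack frame.
public class CaptureDemo {
    interface Greeter { String greet(); }

    static Greeter makeGreeter() {
        final String name = "world";      // local variable on the stack
        class LocalGreeter implements Greeter {
            // The compiler copies 'name' into a hidden field of this object.
            public String greet() { return "hello " + name; }
        }
        return new LocalGreeter();        // the instance escapes the method
    }

    public static void main(String[] args) {
        // makeGreeter's stack frame is long gone, but the captured copy
        // lives on the heap inside the Greeter instance.
        System.out.println(makeGreeter().greet()); // prints "hello world"
    }
}
```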
Since JDK 6, you can't always be sure that objects will be stored on the heap. Sometimes, if the JVM can determine that an object is only accessible from within a single thread, and it can determine when the last variable referencing that object has gone out of scope, it can allocate the memory for the object on the stack rather than the heap. This means it can be garbage collected very efficiently. It also means the JVM can ignore synchronization requests if it has already determined that no other threads can touch the object. This is useful for a class like StringBuffer, whose methods are synchronized but in most cases this is completely unnecessary.
Anyway, the point here is that nowadays it's often hard to know for sure whether an object is really on the heap, or on the stack. And it generally makes very little difference, from the programmer's perspective.
A group of rows and columns. The x-axis is the horizontal row, and the y-axis is the vertical column. An x-y matrix is the reference framework for two-dimensional structures, such as mathematical tables, display screens, digitizer tablets, dot matrix printers and 2D graphics images.
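A minimal sketch of the idea in Java (the names are mine): a two-dimensional array addressed by y (row) and x (column), the way a display buffer or a dot matrix printer addresses its grid.

```java
// Sketch of an x-y matrix as a 2D array: y selects the row, x the column.
public class XYMatrixDemo {
    // Render a rows-by-cols grid of '.' with a single '#' at (px, py).
    static String render(int rows, int cols, int px, int py) {
        StringBuilder sb = new StringBuilder();
        for (int y = 0; y < rows; y++) {          // walk rows (y-axis)
            for (int x = 0; x < cols; x++) {      // walk columns (x-axis)
                sb.append(x == px && y == py ? '#' : '.');
            }
            sb.append('\n');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // Marks the cell at x = 2, y = 1 in a 3-row, 4-column grid.
        System.out.print(render(3, 4, 2, 1));
        // ....
        // ..#.
        // ....
    }
}
```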
Washington, D.C. (November 30, 2010) -- One of the holy grails of nanotechnology in medicine is to control individual structures and processes inside a cell. Nanoparticles are well suited for this purpose because of their small size; they can also be engineered for specific intracellular tasks. When nanoparticles are excited by radio-frequency (RF) electromagnetic fields, interesting effects may occur. For example, the cell nucleus could get damaged inducing cell death; DNA might melt; or protein aggregates might get dispersed.
Some of these effects may be due to the localized heating produced by each tiny nanoparticle. Yet, such local heating, which could mean a difference of a few degrees Celsius across a few molecules, cannot be explained easily by heat-transfer theories. However, the existence of local heating cannot be dismissed either, because it's difficult to measure the temperature near these tiny heat sources.
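One way to see the puzzle: classical steady-state conduction predicts a temperature rise at the surface of a heated sphere of dT = Q / (4πka), for a particle of radius a dissipating power Q into a medium of thermal conductivity k. For plausible single-particle powers the predicted rise is vanishingly small, which is why measurable local heating is hard to square with continuum theory. A toy estimate (all numeric values below are my illustrative assumptions, not figures from the paper):

```java
// Toy estimate: continuum heat conduction gives dT = Q / (4 * pi * k * r)
// at radius r >= a from a sphere dissipating Q watts.
public class NanoparticleHeating {
    // Temperature rise (K) at distance r (m) from a particle dissipating
    // qWatts into a medium with thermal conductivity k (W/m/K).
    static double deltaT(double qWatts, double k, double rMeters) {
        return qWatts / (4.0 * Math.PI * k * rMeters);
    }

    public static void main(String[] args) {
        double a = 10e-9;   // 10 nm particle radius (assumed)
        double k = 0.6;     // thermal conductivity of water, W/m/K
        double q = 1e-12;   // 1 pW absorbed per particle (assumed)
        // Continuum theory predicts a rise of only ~1e-5 K at the surface,
        // far below anything that could damage nearby biomolecules.
        System.out.printf("dT at surface: %.2e K%n", deltaT(q, k, a));
    }
}
```

Quantum-dot thermometry of the kind described below is one of the few ways to check whether the real local temperature follows this continuum prediction.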
Scientists at Rensselaer Polytechnic Institute have developed a new technique for probing the temperature rise in the vicinity of RF-actuated nanoparticles using fluorescent quantum dots as temperature sensors. The results are published in the Journal of Applied Physics.
Amit Gupta and colleagues found that when the nanoparticles were excited by an RF field the measured temperature rise was the same regardless of whether the sensors were simply mixed with the nanoparticles or covalently bonded to them. "This proximity measurement is important because it shows us the limitations of RF heating, at least for the frequencies investigated in this study," says project leader Diana Borca-Tasciuc. "The ability to measure the local temperature advances our understanding of these nanoparticle-mediated processes."
The article, "Local Temperature Measurement in the Vicinity of Electromagnetically Heated Magnetite and Gold Nanoparticles" by Amit Gupta, Ravi Kane and Diana-Andra Borca-Tasciuc appears in the Journal of Applied Physics. See: http://link.aip.org/link/japiau/v108/i6/p064901/s1
Journalists may request a free PDF of this article by contacting firstname.lastname@example.org
ABOUT JOURNAL OF APPLIED PHYSICS
Journal of Applied Physics is the American Institute of Physics' (AIP) archival journal for significant new results in applied physics; content is published online daily, collected into two online and printed issues per month (24 issues per year). The journal publishes articles that emphasize understanding of the physics underlying modern technology, but distinguished from technology on the one side and pure physics on the other. See: http://jap.aip.org/
The American Institute of Physics is a federation of 10 physical science societies representing more than 135,000 scientists, engineers, and educators and is one of the world's largest publishers of scientific information in the physical sciences. Offering partnership solutions for scientific societies and for similar organizations in science and engineering, AIP is a leader in the field of electronic publishing of scholarly journals. AIP publishes 12 journals (some of which are the most highly cited in their respective fields), two magazines, including its flagship publication Physics Today; and the AIP Conference Proceedings series. Its online publishing platform Scitation hosts nearly two million articles from more than 185 scholarly journals and other publications of 28 learned society publishers.
AAAS and EurekAlert! are not responsible for the accuracy of news releases posted to EurekAlert! by contributing institutions or for the use of any information through the EurekAlert! system.
Published by the American Geological Institute
Newsmagazine of the Earth Sciences
Monday, Jan. 29: A preliminary magnitude-7.9 earthquake struck India at 8:46 a.m. on Jan. 26. The death toll, originally estimated at 1,100, reached 20,000 by Monday morning. Most of the casualties were reported from the state of Gujarat in western India, mainly in the cities of Bhuj and Ahmedabad. The earthquake's depth was 23.6 kilometers, and shaking was felt as far away as Pakistan and Nepal. This is the fifth earthquake in January to reach a magnitude of 7.0 or higher.
Geotimes is working to provide readers with up-to-the-minute reports from other news services. The links below provide information on news stories about the event in addition to information about the science behind the geologic disaster.
[Image at right taken from A Probabilistic Seismic Hazard Map of India and Adjoining Regions by S.C. Bhatia, M. Ravi Kumar and H K Gupta of the National Geophysical Research Institute, Hyderabad, India.]
Incorporated Research Institutions for Seismology (IRIS) Special Events:
USGS Earthquake Hazards Program:
USGS earthquake bulletin and map:
CARE International relief efforts:
India Development and Relief Fund:
Times of India Online:
Link to several stories on the earthquake from today’s home page.
The Times of India also has a link to a photo gallery specifically for this earthquake.
Press Trust of India:
The Hindu News Update Service:
Sizing up Earthquake Damage: Differing Points of View
When a catastrophic event strikes an urban area, many different professionals hit the ground running. Emergency responders respond, reporters report, and scientists and engineers collect and analyze data. Journalists and scientists may share interest in these events, but they have very different missions. To a journalist, earthquake damage is news. To a scientist or engineer, earthquake damage represents a valuable source of data that can help us understand how strongly the ground shook as well as how particular structures responded to the shaking.
Media reports and private accounts can provide important information about an earthquake’s impact. But a recent study co-authored by Prabhas Pande, director of the Earthquake Geology Division of the Geological Survey of India, and Susan Hough, published in the April issue of the Bulletin of the Seismological Society of America, illustrates how scientists can potentially be led astray by failing to recognize that written accounts tend to emphasize especially dramatic events rather than representative, overall effects.
For a journalist, the news is what happened. When mid-rise buildings collapsed in Mexico City in 1985, that was big news. When the Nimitz Freeway collapsed in Oakland in 1989, that was big news. In any earthquake, the most dramatic damage is the biggest story. If nothing, or not much, happens, that isn’t news. Modest earthquake effects will merit a much smaller story, if one at all. Those who experience an earthquake use the same sort of selection process in relaying what happened, either orally or in correspondence. When people experience something dramatic, they are apt to write letters — or, these days, e-mail messages. As a rule, people don’t tend to write to say, “We didn’t feel the earthquake that happened last Tuesday.”
For the scientist or engineer, however, the damage that didn’t happen can be every bit as important as the damage that did happen. An engineer knows, for example, that isolated damage might not reflect how hard the ground was shaking because in any area, buildings that are relatively poorly built are especially susceptible to damage.
The issue of bias in media reports looms especially large for those earthquakes historically important for their impact on society or their physical devastation or both, such as the one that took place in Charleston, S.C., in 1886. Because no instruments capable of estimating earthquake magnitudes existed before the late 19th century, scientists estimate the magnitude of such historical events by quantifying the distribution of damage and other effects, such as the area over which the shaking was felt, and then comparing the results to the effects of modern earthquakes for which a magnitude can be determined.
However, the older the earthquake, the more sparse the written record. After the Charleston earthquake struck, Clarence Dutton — an Army captain working for the U.S. Geological Survey — set out to systematically compile every available account. Thanks largely to his efforts, we have accounts of this earthquake from almost a thousand locations. No similar compilation was made after the so-called New Madrid sequence of large earthquakes struck the mid-continent during the winter of 1811 to 1812. For these earthquakes, seismologists have turned to extensive archival searches to unearth written records squirreled away in old newspapers, diaries and letters. This sleuthing has turned up accounts from only about a hundred locations for each of the three largest New Madrid earthquakes.
Written accounts of an earthquake’s effects are obviously quite different from a modern seismogram, but both types of observations represent data. To analyze written accounts of earthquake effects, the seismologist first assigns an intensity value based on the severity of documented effects. Intensity values differ from magnitude values in that the latter reflect the size of the earthquake itself, whereas intensity reflects the severity of shaking at a particular location. Confusions sometimes arise because intensity and magnitude values span a similar range. In fact, the magnitude scale is open-ended, and tiny earthquakes can have negative magnitudes. Intensity values, usually denoted by Roman numerals, are defined to span a range of I to X. Intensity I corresponds to shaking that is not felt while intensity X corresponds to shaking that is strong enough to cause significant damage to even well-built modern structures.
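As an aside for readers who like to see the distinction concretely: one earthquake has a single magnitude, but it produces one intensity value per location. The sketch below is illustrative only — the shaking descriptions for levels between I and X are paraphrased assumptions (only I and X are described above), and the helper names are invented, not part of any official intensity-assignment procedure.

```python
# Simplified, illustrative lookup for the intensity scale described above.
# Real intensity assignment weighs many documented effects; the middle
# descriptions here are paraphrased assumptions, not the official wording.
INTENSITY = {
    "I": "not felt",
    "IV": "felt indoors by many; windows rattle (assumed wording)",
    "VI": "felt by all; slight damage to poorly built structures (assumed)",
    "X": "significant damage even to well-built modern structures",
}

def roman_to_int(numeral: str) -> int:
    """Convert an intensity numeral (I-X) to an integer for comparison."""
    values = {"I": 1, "V": 5, "X": 10}
    total = 0
    for ch, nxt in zip(numeral, numeral[1:] + " "):
        v = values[ch]
        # Subtractive notation: a smaller value before a larger one (IV, IX)
        total += -v if nxt != " " and values.get(nxt, 0) > v else v
    return total

# One earthquake (one magnitude) yields many intensities, one per town;
# the towns and values below are hypothetical.
observations = {"Charleston": "X", "Columbia": "VI", "Raleigh": "IV"}
felt_at = [town for town, i in observations.items() if roman_to_int(i) >= 2]
```

The point of the sketch is simply that intensity is a property of a place, while magnitude is a property of the event.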
As intensity values are assigned to old, often brief, archival accounts, one question often rattles in the back of seismologists’ minds: Do available accounts provide a good overview of an earthquake’s effects? The nagging voice suspects the answer is no. But how does one evaluate information that isn’t there? If the only available account of an earthquake describes damage to adobe houses in a certain town, it is hard to know whether those houses were the most poorly built structures in the area or typical of local construction.
Sometimes more recent newspaper articles are helpful in this regard, for example, noting that adobe buildings collapsed while wood-frame houses were only lightly damaged. But often older newspapers are less helpful and the seismologist is left guessing.
The 2001 Bhuj, India, earthquake provided a unique opportunity to quantify the media bias. This magnitude-7.6 earthquake struck western India on Jan. 26, 2001, claiming nearly 20,000 lives and causing extensive damage throughout the state of Gujarat. Immediately after the earthquake, seismologists realized that damage surveys would be invaluable because the earthquake was only recorded on a handful of instruments within India, none of them very close to Bhuj.
An early study analyzed media accounts of the earthquake published on the Web and in local newspapers to assign intensity values for more than 200 locations in India and Pakistan. In the meantime, the Geological Survey of India sent out teams to survey the damage directly. These teams were charged specifically with assessing the overall severity of earthquake effects in towns throughout India. When their map was complete, it could be compared in detail with the media’s map. The comparison revealed that the two approaches yielded similar results for low intensities. When media accounts report that an earthquake was lightly felt in a certain town, it appears that such accounts tend to be representative. But in regions where damage occurred, the suspicious nagging voice proves to be correct: Intensity values based on media accounts were systematically biased toward higher values than those based on direct surveys.
The availability of two independent intensity surveys for the Bhuj earthquake allowed media bias to be analyzed in some detail — but only for this particular earthquake. It is not clear that reporting in modern newspapers and Web sites is comparable to that in newspapers and private letters from 100 or 200 years ago. However, the results of the Bhuj comparison provide at least a preliminary quantification of how older archival accounts might be biased. On an encouraging note, the results suggest that archival accounts of historical earthquakes can provide a good indication of the area over which shaking was felt. Comparing the extent that historical and modern earthquakes were felt therefore may yield more reliable results than comparing the extent of damage.
This study also serves as a reminder that the news media past and present can help seismologists do their job, but it remains part of the seismologist’s job to understand the nature — and the limitations — of the information that journalists provide. | <urn:uuid:a9c1c6e1-1061-4a80-b4d8-cd1f516c04f6> | 3.453125 | 1,471 | Knowledge Article | Science & Tech. | 31.868729 | 1,167 |
Greater DOF with secondary electron imaging is largely a matter of working distance--defined as the distance (in mm) from the objective lens to the top of the sample being imaged. Of course, the lenses in this type of instrument are electromagnetic (not glass) lenses, and can effect different crossover (focus) points based on current supplied to the lens coils.
The longer the WD, the greater the DOF (but this entails other trade-offs, as with every operating parameter). Of course, this is a familiar principle to any photographer: the closer you move to an object, the shallower the DOF.
The WD I used for this shot was 28mm, which is considered very long. I also use a tilt of around 30 degrees. This adds an additional sense of depth. If you were trying to convey the three dimensionality of a sphere, or ping pong ball for example, the worst way to photograph it would be from directly above. Better to come in obliquely from the side.
The protozoa (protists is a better word) that live in the guts of lower termites are often very large, and this presents a challenge for DOF. The one in question is about 40 microns long, but others can be up to 300 microns long. We believe that they have evolved large size in order to engulf the relatively large wood fragments that make their way to the hindgut after being chewed by the termite's jaws.
Focus stacking is something I've never tried, but for some large cells, I've taken multiple images with different portions in focus. If someone can point me to a tutorial for focus stacking, in Photoshop (I use CS2) I would appreciate it!
Thanks very much for this explanation. Never worked with or read about this kind of equipment before so I still didn't quite get it. Due to that I asked Mr. Google, who provided the following reference. Of course there could be a lot of different SEMs but the key details that caught my eye were:
"The scanning electron microscope has many advantages over traditional microscopes. The SEM has a large depth of field, which allows more of a specimen to be in focus at one time." http://www.purdue.edu/rem/rs/sem.htm#2
My next question is: does a SEM require a light source? The shading on the image you displayed is so delicate it made me wonder how one could position one or more lights to produce the result on such a small object.
We believe that they have evolved large size in order to engulf the relatively large wood fragments that make their way to the hindgut after being chewed by the termite's jaws.
Any time a case for selection can be exemplified that is pretty cool! I guess it's a completely different discussion but I have to ask: Are only the larger protists found in adult termites? Do they grow in size as the termite does?
This is all way outside of my experience, so I hope you don't mind some questions.
WRT focus stacking, there are a few here who have a lot of experience with focus stacking software. I think that kind of technology might be very useful for this kind of work.
Remember us when you get a Nobel for your future work! <big toothy grin> | <urn:uuid:8197b1fa-fc76-4562-b21d-a175d8d77214> | 2.703125 | 693 | Comment Section | Science & Tech. | 57.02492 | 1,168 |
There is clear and unambiguous scientific evidence that documents how rising atmospheric carbon dioxide is leading to increasingly acidic seawater. This phenomenon has been termed ‘ocean acidification’ and presents a real threat to marine organisms that build their structures of calcium carbonate and, by extension, the organisms that feed on and live among them.
Marine Conservation Institute is working with partners from the scientific community, political arena, and coastal fishing and aquaculture industries to address the emerging threat of ocean acidification and the impacts on the marine ecosystems upon which we all depend.
Reducing carbon dioxide emissions to zero overnight is highly unlikely, but working towards carbon dioxide reduction over the coming decades is imperative if we are to avoid the worst possible scenarios. The opportunity exists for creating innovative solutions for adapting to the problems associated with climate change and ocean acidification, gaining a better understanding of what society stands to lose (biologically and economically), and hedging our bets by protecting areas of the ocean most likely to survive the coming changes to our oceans.
This issue of Current highlights ocean acidification, a term used to describe the ongoing global scale changes in seawater chemistry caused largely by human combustion of fossil fuels. Ocean acidification will have severe consequences for marine life and humankind, and has been nicknamed global warming’s “evil twin.” The articles in this special issue focus on multiple facets of ocean acidification, including threats to marine organisms, economic implications for fisheries and ecosystem services, and policy options for mitigating negative impacts. Because the dangers posed by ocean acidification are so serious, responsible carbon policy must be implemented immediately at all levels of government and individuals must do their part to curtail carbon consumption, in the hope of safeguarding the future of our oceans. | <urn:uuid:a999f511-2a0e-4e8c-b946-fd64aefcb9ac> | 3.375 | 354 | News (Org.) | Science & Tech. | -2.786201 | 1,169 |
No, this is NOT the moon:
Credit to Bad Astronomy, which never disappoints!
The NASA probe Messenger flew by the planet Mercury last week, sending back the first close-up, high-resolution photos of the solar system's smallest planet. These images are exciting, and BA says it quite well:
But there’s a terrible beauty in all these pictures. Mercury is a strange little world. Hot, dense, battered, cracked… it’s as unlike Earth as any solid body can be, and it’s exactly those contrasts that will help us understand more about planetary geology and environments. We travel the solar system for many reasons — to learn about strange, new worlds; to discover new science; to have our brains tickled at the wonder and majesty of nature — but it’s funny how so many of these findings wind up helping us understand our own planet. That may not be the only reason we go, or even the most important one, but it’s still a fine thing to do.
The very reasons to get excited about Mercury are the same reasons I love Ultrarunning: "...to learn about strange, new worlds; to discover new science; to have our brains tickled at the wonder and majesty of nature...." | <urn:uuid:ba6fe0ee-7d60-42ba-b16a-da0fc041313f> | 2.578125 | 263 | Personal Blog | Science & Tech. | 58.592717 | 1,170 |
Wondering why there was a thick layer of fog over the city this morning and afternoon?
This fog is called advection fog. It forms as a warm, moist air mass settles over the cooler Lake Michigan waters. Hot and humid air (temps in the 90s, dew points in the 70s) just west of Chicago comes over the cool Lake Michigan waters (60s and low 70s) and forms fog. The near-surface air is cooled to the high dew point (in the 70s) and fog forms. Then, the lake breeze transports the fog from the lake back onto land.
The fog is concentrated at very low levels of the atmosphere; the tops of the taller buildings are likely experiencing temperatures in the 90s above the fog layer. Because the relatively cool air off of Lake Michigan is quite shallow, the temperature is cooled to the dew point only at near-surface levels. Given the strength of the sun and the mixing of higher temperatures just above the ground, air temperatures even near Lake Michigan should slowly rise above the dew point and the fog will dissipate.
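The mechanism described above boils down to a simple rule: fog forms where a cool surface can chill the overlying air down to its dew point. A minimal sketch of that rule, using illustrative numbers from the article (this is a toy condition, not forecast code):

```python
def advection_fog_likely(air_temp_f: float, dew_point_f: float,
                         surface_temp_f: float) -> bool:
    """Toy rule: advection fog is likely when the underlying surface is
    cold enough to cool the near-surface air to its dew point.
    All temperatures in degrees Fahrenheit."""
    return surface_temp_f <= dew_point_f <= air_temp_f

# Illustrative numbers from the article: air in the 90s, dew points in
# the 70s, Lake Michigan surface water in the 60s to low 70s.
over_lake = advection_fog_likely(92, 74, surface_temp_f=68)   # fog forms
over_land = advection_fog_likely(92, 74, surface_temp_f=88)   # no fog
```

Here `over_lake` is True and `over_land` is False, matching the article's picture of fog forming over the cool lake and then being carried onshore by the lake breeze.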
What does that mean? Even if you got a break from the heat from the fog…you will start to warm up again very soon. | <urn:uuid:2eb9d563-4bfe-431f-865b-6965e64f8373> | 3.3125 | 255 | News Article | Science & Tech. | 61.842857 | 1,171 |
Particle looking more like Higgs boson
- From: AFP
- March 07, 2013
THE subatomic particle whose discovery was announced amid much fanfare last year is looking "more and more" like it could be the elusive Higgs boson.
But in the latest update, physicists told a conference in La Thuile, Italy, that more analysis is needed before a definitive statement can be made.
Key to a positive identification of the particle is a detailed analysis of its properties and how it interacts with other particles, the European Organisation for Nuclear Research (CERN) explained in a statement.
Since scientists' announcement last July that they had found a particle likely to be the Higgs, much data has been analysed, and the particle's properties are becoming clearer. The find is exciting for scientists because the Higgs could explain why matter has mass.
One property that will allow several teams researching the particle to declare whether or not it is a Higgs, is called spin.
A Higgs must have spin-zero.
"All the analysis conducted so far strongly indicates spin-zero, but it is not yet able to rule out entirely the possibility that the particle has spin-two," said CERN.
"Until we can confidently tie down the particle's spin, the particle will remain Higgs-like. Only when we know that it has spin-zero will we be able to call it a Higgs."
British physicist Peter Higgs theorised in 1964 that the boson could be what gave mass to matter as the Universe cooled after the Big Bang.
Last July, scientists said they were 99.9 per cent certain they had found the particle without which, theoretically, humans and all other joined-up atoms in the Universe would not exist. | <urn:uuid:c1e57fb0-0b5c-49df-8ff6-11f35caeb572> | 2.65625 | 357 | News Article | Science & Tech. | 43.041704 | 1,172 |
Curt Stager, a scientist at Paul Smiths College, is publishing an article later this month in the journal Science that describes an ancient drought that transformed Asia and Africa thousands of years ago.
The "H1 mega-drought" may have wiped out whole tribes of humans, as it dried up rivers and lakes across whole continents.
As Brian Mann reports, Stager thinks that devastating event could be a warning for people living in a new period of global warming.
Twenty-five years ago, Curt Stager was paddling the waters of Lake Barombi Mbo in Cameroon.
He and other researchers had rigged a crude drilling platform – not searching for oil, but rooting around for the deep layers of muck that have been settling on the bottom of the lake for millennia.
"We built the raft," Stager recalled. "We strung 10 foot long sections of pipe together…it plunges into the bottom of the lake. We ended up getting twenty-something thousand years' worth of sediment."
These days, much of Cameroon is cloaked in rain forest. But down under all that lake mud Stager found sediments that showed that this African landscape has changed dramatically.
"You bore on down through this wet brown mud. Suddenly you hit shells and sand and soil. Which means that it would have looked like a beach and before that it would have looked like the Serengeti Plains."
Stager is a paleo-climatologist at Paul Smiths College in the Adirondacks. He studies the history of the really big climate patterns that have shaped our planet over thousands of years.
When scientists were collecting those ancient sediment samples from lakes across Africa and Asia, he says they weren’t really sure what it would all mean.
But in a new paper set to be published next month in the journal Science, Stager argues that he’s found evidence of a massive drought, so big that it literally changed the world.
Sixteen thousand years ago, the Monsoons of Asia failed. The Nile and the Congo shriveled up. The great lakes of Africa and the Near East turned to dust.
"There would have been nowhere [for tribes of humans] to go," he said. "It would have turned what is now a green part of Africa into a savannah or a desert."
Normally this might just sort of be cool information, a time-machine glimpse of the harsh world faced by our ancient ancestors.
But Stager’s paper makes one more connection.
He says this mega-drought, which lasted for centuries, was likely caused by a natural cycle of global warming, the end of an ice age that melted glaciers and tipped thick ice sheets into the North Atlantic.
"Somehow the oceans not only cooled in the North Atlantic, but it looks like all through the Indian Ocean, too, and that’s probably why the rains were shut off in the tropics," he said.
These days, scientists are seeing a similar warming pattern in the North Atlantic, triggered not by the end of an ice age this time, but by industrial greenhouse gases.
Once again, the two-mile thick ice sheet in Greenland is melting fast.
"From what we’ve seen in this one glacier and other ones like it in Greenland, there’s been just such rapid enormous changes in the last five or ten years," said glaciologist Gordon Hamilton from the University of Maine, speaking with CNN. "It’s really been quite alarming."
Curt Stager doesn’t think this round of global warming will trigger the kind of devastating disruption in ocean currents and rains that our ancestors faced.
For one thing, the amount of ice in the North Atlantic is far smaller now.
But he says the mega-drought sixteen thousand years ago shows that rapid warming in the arctic might trigger sudden and dramatic changes all over the planet, possibly occurring in a single lifetime.
"Abrupt, severe climate change really can happen, because it did," Stager said. "We know tipping points like this exist. The big mystery is where is the tipping point?"
Stager points out that even a much smaller disruption to global rain patterns could have huge implications – especially now when the human population in places like Cameroon is so much greater.
"It would be unbelievable. More than half of humankind lives in the zones that were affected by this," he said. | <urn:uuid:28f997e7-5544-41c7-a7f4-99ba7089b9e5> | 3.40625 | 951 | News Article | Science & Tech. | 58.143304 | 1,173 |
- Number 333 | March 21, 2011
An X-ray laser captures the structures of life
Two studies published recently in Nature demonstrate how the unique capabilities of the world’s first hard X-ray free-electron laser—the Linac Coherent Light Source, located at DOE’s SLAC National Accelerator Laboratory—could revolutionize the study of life.
In one study, an international research team used the LCLS to demonstrate a shortcut for determining the 3-D structures of proteins.
The laser’s brilliant pulses of X-ray light pulled structural data from tiny protein nanocrystals, avoiding the need to use large protein crystals that can be difficult or impossible to prepare. This process could lop years off the structural analysis of some proteins and allow scientists to decipher tens of thousands of others that are out of reach today, including many involved in infectious disease.
In a separate paper, the same team reported making the first single-shot images of intact viruses, paving the way for snapshots and movies of molecules, viruses and live microbes in action.
Since the publication of these papers, members of the research team have returned to SLAC to continue their studies of proteins involved in photosynthesis, parasitic disease and other important life processes. Using the Coherent X-ray Imaging instrument (CXI)—the fourth instrument to become operational since the LCLS opened for research in 2009—the researchers shined highly energetic “hard” X-rays at the photosynthetic protein complex Photosystem I and an enzyme that breaks down proteins, extracted from the parasite that causes African sleeping sickness.
Though the results of these more recent studies won't be known until extensive analysis of the data has been completed, the researchers were extremely excited to see fine, crisply detailed protein structures at near atomic-scale resolution.
"It's going very well," said SLAC researcher Marvin Seibert, grinning. "The fireworks are back. It's always fun." | <urn:uuid:deb7f787-3426-41b8-ad30-9afa5e1d4864> | 3.203125 | 424 | News Article | Science & Tech. | 25.867902 | 1,174 |
Equilibrium inorganic chemistry underlies the composition and properties of the aquatic environment and provides a sound basis for understanding both natural geochemical processes and the behavior of inorganic pollutants in the environment. Designed for readers having basic chemical and mathematical knowledge, this book includes material and examples suitable for undergraduate students in the early stages of chemistry, environmental science, geology, irrigation science and oceanography courses. Aquatic Environmental Chemistry covers the composition and underlying properties of both freshwater and marine systems and, within this framework, explains the effects of acidity, complexation, oxidation and reduction processes, and sedimentation. The format adopted for the book consists of two parallel columns. The inner column is the main body of the book and can be read on its own. The outer column is a source of useful secondary material where comments on the main text, explanations of unusual terms and guidance through mathematical steps are to be found. A wide range of examples to explain the behavior of inorganic species in freshwater and marine systems are used throughout, making this clear and progressive text an invaluable introduction to equilibrium chemistry in solution.
About the Author(s)
Alan G. Howard, Senior Lecturer, Department of Chemistry, University of Southampton | <urn:uuid:9aa268dc-91cc-4d14-8963-7f0b0ad2d30b> | 2.96875 | 243 | Product Page | Science & Tech. | 5.724072 | 1,175 |
What determines how much coverage a climate study gets?
It probably goes without saying that it isn’t strongly related to the quality of the actual science, nor to the clarity of the writing. Appearing in one of the top journals does help (Nature, Science, PNAS and occasionally GRL), though that in itself is no guarantee. Instead, it most often depends on the ‘news’ value of the bottom line. Journalists and editors like stories that surprise, that give something ‘new’ to the subject and are therefore likely to be interesting enough to readers to make them read past the headline. It particularly helps if a new study runs counter to some generally perceived notion (whether that is rooted in fact or not). In such cases, the ‘news peg’ is clear.
And so it was for the Steig et al “Antarctic warming” study that appeared last week. Mainstream media coverage was widespread and generally did a good job of covering the essentials. The most prevalent peg was the fact that the study appeared to reverse the “Antarctic cooling” meme that has been a staple of disinformation efforts for a while now.
It’s worth remembering where that idea actually came from. Back in 2001, Peter Doran and colleagues wrote a paper about the Dry Valleys long term ecosystem responses to climate change, in which they had a section discussing temperature trends over the previous couple of decades (not the 50 years time scale being discussed this week). The “Antarctic cooling” was in their title and (unsurprisingly) dominated the media coverage of their paper as a counterpoint to “global warming”. (By the way, this is a great example to indicate that the biggest bias in the media is towards news, not any particular side of a story). Subsequent work indicated that the polar ozone hole (starting in the early 80s) was having an effect on polar winds and temperature patterns (Thompson and Solomon, 2002; Shindell and Schmidt, 2004), showing clearly that regional climate changes can sometimes be decoupled from the global picture. However, even then both the extent of any cooling and the longer term picture were more difficult to discern due to the sparse nature of the observations in the continental interior. In fact we discussed this way back in one of the first posts on RealClimate back in 2004.
This ambiguity was of course a gift to the propagandists. Thus for years the Doran et al study was trotted out whenever global warming was being questioned. It was of course a classic ‘cherry pick’ – find a region or time period when there is a cooling trend and imply that this contradicts warming trends on global scales over longer time periods. Given a complex dynamic system, such periods and regions will always be found, and so as a tactic it can always be relied on. However, judging from the take-no-prisoners response to the Steig et al paper from the contrarians, this important fact seems to have been forgotten (hey guys, don’t worry you’ll come up with something new soon!).
Actually, some of the pushback has been hilarious. It’s been a great example for showing how incoherent and opportunistic the ‘antis’ really are. Exhibit A is an email (and blog post) sent out by Senator Inhofe’s press staff (i.e. Marc Morano). Within this single email there are misrepresentations, untruths, unashamedly contradictory claims and a couple of absolutely classic quotes. Some highlights:
Dr. John Christy of the University of Alabama in Huntsville slams new Antarctic study for using [the] “best estimate of the continent’s temperature”
Perhaps he’d prefer it if they used the worst estimate? ;)
[Update: It should go without saying that this is simply Morano making up stuff and doesn't reflect Christy's actual quotes or thinking. No-one is safe from Morano's misrepresentations!]
[Further update: They've now clarified it. Sigh....]
Morano has his ear to the ground of course, and in his blog piece dramatically highlights the words “estimated” and “deduced” as if that was some sign of nefarious purpose, rather than a fundamental component of scientific investigation.
Internal contradictions are par for the course. Morano has previously been convinced that “… the vast majority of Antarctica has cooled over the past 50 years.”, yet he now approvingly quotes Kevin Trenberth who says “It is hard to make data where none exist.” (It is indeed, which is why you need to combine as much data as you can find in order to produce a synthesis like this study). So which is it? If you think the data are clear enough to demonstrate strong cooling, you can’t also believe there is no data (on this side of the looking glass anyway).
It’s even more humourous, since even the more limited analysis available before this paper showed pretty much the same amount of Antarctic warming. Compare the IPCC report, with the same values from the new analysis (under various assumptions about the methodology).
(The different versions are the full reconstruction, a version that uses detrended satellite data for the co-variance, a version that uses AWS data instead of satellites and one that uses PCA instead of RegEM. All show positive trends over the last 50 years).
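The reconstructions being compared here rest on sophisticated infilling methods (RegEM, PCA), but the final step that produces the numbers quoted — fitting a linear trend to a 50-year temperature series — is simple least squares. A hedged sketch with synthetic, made-up data, not the paper's method or data:

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1957, 2007)  # a hypothetical 50-year record

# Synthetic anomalies: an assumed 0.12 deg C/decade warming plus noise.
temps = 0.012 * (years - years[0]) + rng.normal(0.0, 0.3, years.size)

# Ordinary least-squares linear fit; slope is in deg C per year.
slope, intercept = np.polyfit(years, temps, 1)
print(f"trend: {slope * 10:.2f} deg C per decade")
```

With 50 noisy annual values the recovered trend scatters around the assumed 0.12 deg C/decade, which is why uncertainty ranges accompany the trends in the figure above.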
Further contradictions abound: Morano, who clearly wants it to have been cooling, hedges his bets with a “Volcano, Not Global Warming Effects, May be Melting an Antarctic Glacier” Hail Mary pass. Good luck with that!
It always helps if you haven’t actually read the study in question. That way you can just make up conclusions:
Scientist adjusts data — presto, Antarctic cooling disappears
Nope. It’s still there (as anyone reading the paper will see) – it’s just put into a larger scale and longer term context (see figure 3b).
Inappropriate personalisation is always good fodder. Many contrarians seemed disappointed that Mike was only the fourth author (the study would have been much easier to demonise if he’d been the lead). Some pretended he was anyway, and just for good measure accused him of being a ‘modeller’ as well (heaven forbid!).
Others also got in on the fun. A chap called Ross Hays posted a letter to Eric on multiple websites and on many comment threads. On Joe D’Aleo’s site, this letter was accompanied with this little bit of snark:
Icecap Note: Ross shown here with Antarctica’s Mount Erebus volcano in the background was a CNN forecast Meteorologist (a student of mine when I was a professor) who has spent numerous years with boots on the ground working for NASA in Antarctica, not sitting at a computer in an ivory tower in Pennsylvania or Washington State
This is meant as a slur against academics of course, but is particularly ironic, since the authors of the paper have collectively spent over 8 seasons on the ice in Antarctica, 6 seasons in Greenland and one on Baffin Island in support of multiple ice coring and climate measurement projects. Hays’ one or two summers there, his personal anecdotes and misreadings of the temperature record, don’t really cut it.
Neither do rather lame attempts to link these results with the evils of “computer modelling”. According to Booker (for it is he!) because a data analysis uses a computer, it must be a computer model – and probably the same one that the “hockey stick” was based on. Bad computer, bad!
The proprietor of the recently named “Best Science Blog”, also had a couple of choice comments:
In my opinion, this press release and subsequent media interviews were done for media attention.
This remarkable conclusion is followed by some conspiratorial gossip implying that a paper that was submitted over a year ago was deliberately timed to coincide with a speech in Congress from Al Gore that was announced last week. Gosh these scientists are good.
All in all, the critical commentary about this paper has been remarkably weak. Time will tell of course – confirming studies from ice cores and independent analyses are already published, with more rumoured to be on their way. In the meantime, floating ice shelves in the region continue to collapse (the Wilkins will be the tenth in the last decade or so) – each of them with their own unique volcano no doubt – and gravity measurements continue to show net ice loss over the Western part of the ice sheet.
Nonetheless, the loss of the Antarctic cooling meme is clearly bothering the contrarians much more than the loss of 10,000 year old ice. The poor level of their response is not surprising, but it does exemplify the tactics of the whole ‘bury ones head in the sand” movement – they’d much rather make noise than actually work out what is happening. It would be nice if this demonstration of intellectual bankruptcy got some media attention itself.
That’s unlikely though. It’s just not news. | <urn:uuid:0fe1fa4f-99f0-436d-ba48-6f2e07ec325e> | 2.609375 | 1,918 | Personal Blog | Science & Tech. | 47.057576 | 1,176 |
One geoengineering proposal involves painting buildings white on a massive scale, to reflect sunlight
[LONDON] Decisions on whether and how to use massive technical solutions known as 'geoengineering' to mitigate or reverse climate change must involve developing countries, a session on geoengineering governance at the Planet Under Pressure conference agreed yesterday (28 March).
Geoengineering proposals have included reflecting sunlight away from the Earth by spraying ocean water into clouds or loading the stratosphere with sulphate aerosols, bioengineering crops to be paler and more reflective of sunlight, managing solar radiation and removing carbon dioxide from the atmosphere.
Although geoengineering research groups are emerging in Africa, China and India, the controversial discipline is dominated by a small number of organisations in North America and Europe, the meeting heard.
"It's very important that people with knowledge and understanding of science and the climate change challenges faced by developing countries are involved in setting the agenda for research," Jason Blackstock, a visiting geoengineering expert at the Institute for Science, Innovation and Society at the University of Oxford, United Kingdom, told SciDev.Net.
The issues faced by vulnerable populations "should be front and centre in the conversation about the technologies and the governance structures that are going to evolve," he said.
Andy Parker, a senior policy officer at the Royal Society of London, the UK's science academy, which issued a report on research governance for managing solar radiation in December, said the effects of deploying such technology "will not be localised" and that there are many unknowns. For example, he said, scientists do not know how geoengineering could impact rainfall patterns around the world.
And while the Royal Society's report did not make specific governance recommendations — "it is too early" for these, Parker said — it did highlight the need for open and inclusive dialogue.
Parker added that meetings held over the past year in China, India and Pakistan had registered a "general and healthy scepticism" about geoengineering, and had not regarded it as a useful or quick technical fix.
He said these meetings had also been characterised by a "genuine desire to cooperate" on research and governance and a wide appreciation of open discussions on geoengineering, rather than "being told what to think" by the developed world.
The Royal Society is now funding a geoengineering meeting in Africa, in association with the African Academy of Sciences and TWAS, the academy of sciences for the developing world. It is expected to be held later this year in Ethiopia.
Kathy Jo Wetter is a researcher at the Action Group on Erosion, Technology and Concentration (ETC), a non-governmental organisation based in Canada which has held workshops on new technologies in Ethiopia, South Africa and Uganda.
She told SciDev.Net: "The technology that people in our workshops were most interested in was geoengineering, because they say, 'we never hear about this ... we don't want our first experience of this to be when it's there at our doorstep'."
Although there are mechanisms in place that govern how people use technologies, Blackstock said, there are no international research frameworks in place to assess early stage technologies and the best way to develop them.
He suggested that the International Council for Science (ICSU) or the Future Earth alliance may be able to develop such a framework.
Gordon McBean, a climatologist at the University of Western Ontario and president elect of ICSU, agreed that the organisation could address this issue, and told SciDev.Net that he was involved in discussions at the conference to consider this.
Although governance of new technologies has not been included in the first draft of the outcome document for the UN Conference on Sustainable Development (Rio+20), the latest negotiating draft for the June meeting does refer to technology assessment, said Wetter.
If that reference stays in the final draft, it may help fill the "vacuum of technology assessment that exists within the UN system right now", she said.
All SciDev.Net material is free to reproduce providing that the source and author are appropriately credited. For further details see Creative Commons. | <urn:uuid:21d047eb-3944-47fb-803c-72e5adb3b5a2> | 3.109375 | 843 | News Article | Science & Tech. | 25.675 | 1,177 |
Now, scientists with the Lawrence Berkeley National Laboratory have demonstrated the first technique that provides dynamic control in real-time of the curved trajectories of Airy beams over metallic surfaces. This development paves the way for fast-as-light, ultra-compact communication systems and optoelectronic devices, and could also stimulate revolutions in chemistry, biology and medicine.
The key to the success of this work was their ability to directly couple free-space Airy beams – using a standard tool of optics called a “grating coupler” – to quasi-particles called surface plasmon polaritons (SPPs). Directing a laser beam of light across the surface of a metal nanostructure generates electronic surface waves – called plasmons – that roll through the metal’s conduction electrons (those loosely attached to molecules and atoms). The resulting interaction between plasmons and photons creates SPPs. By directly coupling Airy beams to SPPs, the researchers are able to manipulate light at an extremely small scale beyond the diffraction limit.
The movie shows the computer-based dynamical control of the trajectory and peak intensity position of plasmonic Airy beams achieved by Berkeley Lab’s Xiang Zhang.
“Dynamic controllability of SPPs is extremely desirable for reconfigurable optical interconnections,” says Xiang Zhang, the leader of this research. “We have provided a novel approach of plasmonic Airy beam to manipulate SPPs without the need of any waveguide structures over metallic surfaces, providing dynamic control of their ballistic trajectories despite any surface roughness and defects, or even getting around obstacles. This is promising not only for applications in reconfigurable optical interconnections but also for precisely manipulating particles on extremely small scales.”
Examples of the dynamic control of the plasmonic Airy beams shows switching the trajectories to different directions (a,b) and bypassing obstacles (gray solid circle in c). Left panels are numerical simulations, right panels are experimental demonstrations (courtesy of Zhang group)
Zhang, a principal investigator with Berkeley Lab’s Materials Sciences Division and director of the University of California at Berkeley’s Nano-scale Science and Engineering Center (SINAM), is the corresponding author of a paper published in the journal Optics Letters. The paper is titled “Plasmonic Airy beams with dynamically controlled trajectories.” Coauthoring the paper were Peng Zhang, Sheng Wang, Yongmin Liu, Xiaobo Yin, Changgui Lu and Zhigang Chen.
“Up to now, different plasmonic elements for manipulating surface plasmons were realized either through structuring metal surfaces or by placing dielectric structures on metals,” says Peng Zhang, lead author of the Optics Letters paper and member of Xiang Zhang’s research group. “Both approaches are based on the fabrication of permanent nanostructures on the metal surface, which are very difficult if not impossible to reconfigure in real time. Reconfigurability is crucial to optical interconnections, which in turn are crucial for high performance optical computing and communication systems. The reconfigurability of our technique is a huge advantage over previous approaches.”
Adds co-author Zhigang Chen, a principal investigator with the Department of Physics and Astronomy at San Francisco State University, “With the reconfigurability of our plasmonic Airy beams, a small number of optical devices can be employed to perform a large number of functions within a compact system. In addition, the unique properties of the plasmonic Airy beams open new opportunities for on-chip energy routing along arbitrary trajectories in plasmonic circuitry, and allows for dynamic manipulations of nano-particles on metal surfaces and in magneto-electronic devices.”
Dynamic control of the plasmonic Airy beams is provided by a computer-controlled spatial light modulator, a device similar to a liquid crystal display that can be used to offset the incoming light waves from a laser beam with respect to a cubic phase system mask and a Fourier lens. This generates a plasmonic Airy beam on the surface of a metal whose ballistic motion can be modified.
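The cubic-phase trick described above has a clean mathematical core: the Fourier transform of a pure cubic spectral phase is the Airy function, Ai(x) = (1/π)∫₀^∞ cos(t³/3 + xt) dt, whose one-sided oscillating profile is what bends along a parabolic path as the beam propagates. The short sketch below is an independent numerical illustration of that integral, not code from the Berkeley Lab experiment:

```python
import math

def airy_ai(x, t_max=20.0, steps=200_000):
    """Approximate Ai(x) = (1/pi) * integral_0^inf cos(t^3/3 + x*t) dt
    with midpoint quadrature on [0, t_max]; the truncated oscillatory
    tail contributes only O(1/t_max^2) to the result."""
    h = t_max / steps
    total = 0.0
    for i in range(steps):
        t = (i + 0.5) * h
        total += math.cos(t ** 3 / 3.0 + x * t)
    return total * h / math.pi

# Known reference value: Ai(0) = 3**(-2/3) / Gamma(2/3) ~ 0.35503
print(abs(airy_ai(0.0) - 0.35503) < 0.005)   # True
# Asymmetric profile: oscillating lobes for x < 0, fast decay for x > 0.
print(airy_ai(-2.0) > airy_ai(2.0) > 0.0)    # True
```

Truncating the oscillatory tail at t_max = 20 leaves an error of order 1/t_max² ≈ 0.003 in the integral, so the values here are good to roughly three decimal places.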
From left: Yongmin Liu, Xiang Zhang, Peng Zhang, and Sheng Wang were part of a team that developed the first technique for dynamically controlling plasmonic Airy beams without the need of waveguides and other permanent structures. (Photo by Roy Kaltschmidt)

“The direction and speed of the displacement between the incoming light and the cubic phase mask can be controlled with ease simply by displaying an animation of the shifting mask pattern as well as a shifting slit aperture in the spatial light modulator,” Peng Zhang says. “Depending on the refresh rate of the spatial light modulator this can be done in real time. Furthermore, our spatial light modulator not only sets the plasmonic Airy beam into a general ballistic motion, it also enables us to control the Airy beam’s peak intensity at different positions along its curved path.”
The ability of the spatial light modulator to dynamically control the ballistic motions of plasmonic Airy beams without the need of any permanent guiding structures should open doors to a number of new technologies, according to Xiang and Peng Zhang and their collaborators. For example, in nano-photonics, it enables researchers to design practical reconfigurable plasmonic sensors or perform nano-particle tweezing on microchips. In biology and chemistry, it allows researchers to dynamically manipulate molecules.
Says Sheng Wang, second lead author of the Optics Letters paper, “The ultrafine nature of SPPs is extremely promising for applications of nanolithography or nanoimaging. Having dynamic tunable plasmonic Airy beams should also be useful for ultrahigh resolution bioimaging. For example, we can directly illuminate a target, for example a protein, bypassing any obstacles or reducing the background.”
Adds third lead author Yongmin Liu, “Our findings may inspire researchers to explore other types of non-diffracting surface waves, such as electron spin waves, in other two-dimensional systems, including graphene and topological insulators.”
This work was supported by the U.S. Army Research Office, the U.S. Air Force Office of Scientific Research, and the National Science Foundation Nanoscale Science and Engineering Center. | <urn:uuid:9318e603-9188-4da9-8b90-3e94e42165e3> | 2.890625 | 1,344 | News Article | Science & Tech. | 16.180275 | 1,178 |
Scientists have long held theories about the importance of proteins called B-type lamins in the process of embryonic stem cells replicating and differentiating into different varieties of cells. New research from a team led by Carnegie's Yixian Zheng indicates that, counter to expectations, these B-type lamins are not necessary for stem cells to renew and develop, but are necessary for proper organ development. Their work was published 24 November in Science Express.
Nuclear lamina is the material that lines the inside of a cell's nucleus. Its major structural component is a family of proteins called lamins, of which B-type lamins are prominent members and thought to be absolutely essential for a cell's survival. Mutations in lamins have been linked to a number of human diseases. Lamins are thought to suppress the expression of certain genes by binding directly to the DNA within the cell's nucleus.
The role of B-type lamins in the differentiation of embryonic stem cells into various types of cells, depending on where in a body they are located, was thought to be crucial. The lamins were thought to use their DNA-binding suppression abilities to tell a cell which type of development pathway to follow.
But the team - including Carnegie's Youngjo Kim, Katie McDole, and Chen-Ming Fan - took a hard look at the functions of B-type lamins in embryonic stem cells and in live mice.
They found that, counter to expectations, lamin-Bs were not essential for embryonic stem cells to survive, nor did their DNA binding directly regulate the genes to which they were attached. However, mice deficient in B-type lamins were born with improperly developed organs - including defects in the lungs, diaphragms and brains - and were unable to breathe.
'Our work seems to indicate that while B-type lamins are not part of the early developmental tissue-building process, they are important in facilitating the integration of different cell types into the complex architectures of various developing organs,' Kim, the lead author, said. 'We have set the stage to dissect the ways that a cell's nuclear lamina promotes the tissue organisation process during development.'
COLD SPRING HARBOR, N.Y. (Tues., June 1, 2010) -- Gel electrophoresis is one of the most important and frequently used techniques in RNA analysis. Electrophoresis is used for RNA detection, quantification, purification by size and quality assessment. Gels are involved in a wide variety of methods including northern blotting, primer extension, footprinting and analyzing processing reactions. The two most common types of gels are polyacrylamide and agarose. Polyacrylamide gels are used in most applications and are appropriate for RNAs smaller than approximately 600 nucleotides (agarose gels are preferred for larger RNAs). "Polyacrylamide Gel Electrophoresis of RNA" describes how to prepare, load and run polyacrylamide gels for RNA analysis. The article is featured in the June issue of Cold Spring Harbor Protocols and is freely available on the journal's website. It is part of a suite of basic RNA protocols included in this month's issue that provide an early preview of the forthcoming RNA: A Laboratory Manual due later this year from Cold Spring Harbor Laboratory Press.
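The gel-choice rule above (polyacrylamide for RNAs below roughly 600 nucleotides, agarose for larger ones) is simple enough to encode as a helper. The function below is only a toy transcription of that rule of thumb, not part of any published protocol; the 600-nt cutoff is the approximate figure quoted in the text.

```python
def suggest_gel(rna_length_nt: int) -> str:
    """Rule-of-thumb gel choice for RNA electrophoresis:
    polyacrylamide resolves small RNAs well; agarose is
    preferred above roughly 600 nucleotides."""
    if rna_length_nt <= 0:
        raise ValueError("RNA length must be positive")
    return "polyacrylamide" if rna_length_nt < 600 else "agarose"

print(suggest_gel(75))    # tRNA-sized RNA -> polyacrylamide
print(suggest_gel(1800))  # typical mRNA  -> agarose
```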
The rapid pace of technological progress in biological imaging has provided great insight into the processes of embryonic development. But for higher organisms with opaque eggs or internal development, optical access to the embryo is limited. While various embryonic culture methods are available, vertebrate development is best studied in an intact embryo model, one in which the natural environment has not been disrupted. In the June issue of Cold Spring Harbor Protocols, Paul Kulesa and colleagues from the Stowers Institute for Medical Research present "In Ovo Live Imaging of Avian Embryos," a detailed set of instructions for time-lapse imaging of fluorescently labeled cells within a living avian embryo. During the procedure, a hole is made in the shell, and a Teflon membrane that is oxygen-permeable and liquid-impermeable is used to provide a window for visualization of the embryo via confocal or two-photon microscopy. Imaging can take place for up to five days without dehydration or degradation of the normal developmental environment. As one of June's featured articles, the protocol is freely available on the journal's website. | <urn:uuid:95c87d41-efb2-4fab-a614-c70a21d1d1ea> | 2.84375 | 465 | News Article | Science & Tech. | 22.49146 | 1,180 |
Obviously, birds sing. But mice?
[Mice song sound.] That’s a mouse song.
Researchers have known about these high-pitched squeaky songs for years. But they only recently discovered that mice can learn the songs of other mice. Such vocal learning is a rarity among animals. We know of only three kinds of birds—parrots, hummingbirds and songbirds—and some mammals—like humans, whales, dolphins, sea lions, bats and elephants—that have demonstrated the ability to learn the vocal patterns of other animals. That is, until now.
Scientists at Duke University observed that when two male mice of different lineages were kept together, the animals gradually learned to match the pitch of their songs to one another. And when the researchers examined the mice, they found that the rodents can also form the correct brain-to-vocal-cord connections to control the sounds they make. The research is published in the journal PLoS ONE. [Gustavo Arriaga, Eric P. Zhou and Erich D. Jarvis, Of Mice, Birds, and Men: The Mouse Ultrasonic Song System Has Some Features Similar to Humans and Song-Learning Birds]
The mouse songs are admittedly primitive. But the findings left scientists on a high note.
—Gretchen Cuda Kroen
[The above text is a transcript of this podcast.] | <urn:uuid:04438b5d-ad98-462e-8d8d-6e6984464144> | 3.21875 | 283 | Audio Transcript | Science & Tech. | 56.519622 | 1,181 |
The layout page is primarily designed to be the top-level page of the web UI. The idea is to set up a layout page as the navigation URL for the browser, so the layout page fills the browser window. You then arrange your functional windows within the layout page - a command window, a status line window, etc. This arrangement is similar to the banner windows in HTML TADS, but IFRAMEs are considerably more flexible; for example, they don't have to tile the main window, and you can size them in the full range of units CSS provides.
Layout windows aren't limited to the top level, though. Since you can put any HTML page within an IFRAME, you can put another layout window within an IFRAME, to further subdivide the space inside the IFRAME.
WebLayoutWindow : WebWindow
If the window already exists, this updates the window with the new layout settings.
'win' is a WebWindow object that will be displayed within the IFRAME. This method automatically loads the HTML resource from the WebWindow into the new IFRAME.
'name' is the name of the window. Each window within a layout must have a distinct name. This allows you to refer to the dimensions of other windows in 'pos' parameters. The name should be alphanumeric.
123 - a number, representing a number of pixels on the display
5em - 5 'em' units, relative to the main BODY font in the window
5en - 5 'en' units in the main BODY font
5ex - 5 'ex' units in the main BODY font
window.width - the width in pixels of the enclosing window
window.height - the height in pixels of the enclosing window
50% - percentage of the width or height of the enclosing window
content.width - the width in pixels of the contents of the frame
content.height - the height in pixels of the contents of the frame
x.left - horizontal coordinate of leftmost edge of window 'x'
x.right - horizontal coordinate of rightmost edge of window 'x'
x.top - vertical coordinate of top edge of window 'x'
x.bottom - vertical coordinate of bottom edge of window 'x'
x.width - width in pixels of window 'x'
x.height - height in pixels of window 'x'
The "window" dimensions refer to the *enclosing* window. If this layout window is the main page of the UI, this is simply the browser window itself. For a layout window nested within another frame, this is the enclosing frame.
Percentage units apply to the enclosing window. When a percentage is used in the 'left' or 'width' slot, it applies to the width of the enclosing window; in the 'top' or 'height' slot, it applies to the height.
The "content" dimensions refer to the contents of the frame we're creating. This is the size of the contents as actually laid out in the browser.
"x.left" and so on refer to the dimensions of other frames *within this same layout window*. 'x' is the name of another window within the same layout, as specified by the 'name' argument given when the window was created. | <urn:uuid:5bdc56f5-0d2f-4436-aeed-56278c491e69> | 2.640625 | 689 | Documentation | Software Dev. | 62.755695 | 1,182 |
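The dimension expressions above form a tiny language: absolute pixels, font-relative units, percentages of the enclosing window, the frame's own content size, and references to sibling frames by name. As a rough sketch of those semantics only — this is not the library's actual implementation, it is written in Python rather than TADS, and the function name, data shapes, and the ex ≈ em/2 approximation are all invented for illustration — a resolver might look like this:

```python
import re

def resolve_dim(expr, axis, window, content, siblings, em_px=16):
    """Evaluate one layout-dimension expression to pixels.

    axis     -- 'width' or 'height' (decides what '%' refers to)
    window   -- {'width': px, 'height': px} of the enclosing window
    content  -- {'width': px, 'height': px} of the frame's contents
    siblings -- {name: {'left':.., 'right':.., 'top':.., 'bottom':..,
                        'width':.., 'height':..}} for sibling frames
    em_px    -- pixel size of one 'em' in the BODY font (en and ex
                are approximated as em/2 here -- an assumption)
    """
    expr = expr.strip()
    if m := re.fullmatch(r'(\d+(?:\.\d+)?)\s*(em|en|ex|%)?', expr):
        value, unit = float(m.group(1)), m.group(2)
        if unit == 'em':
            return value * em_px
        if unit in ('en', 'ex'):
            return value * em_px / 2
        if unit == '%':
            return value / 100 * window[axis]
        return value                      # bare number = pixels
    scope, _, attr = expr.partition('.')  # e.g. 'status.bottom'
    if scope == 'window':
        return window[attr]
    if scope == 'content':
        return content[attr]
    return siblings[scope][attr]          # another frame, by name

# Example: position a main window just below a status-line frame.
window = {'width': 800, 'height': 600}
status = {'left': 0, 'right': 800, 'top': 0, 'bottom': 40,
          'width': 800, 'height': 40}
print(resolve_dim('100%', 'width', window, {}, {}))   # 800.0
print(resolve_dim('status.bottom', 'height', window, {},
                  {'status': status}))                # 40
```

The useful point the sketch makes explicit is that '%' is resolved against the *enclosing* window's width or height depending on which slot the expression occupies, while named references like 'status.bottom' let one frame's position chain off another's computed geometry.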
Because endemic species - native species not found outside the state - make up nearly half of all California's native plants, a changing climate will have a major impact on the state's unparalleled plant diversity, the researchers warn.
The researchers caution that their study can't reliably predict the fate of specific species. However, the trend is clear: The researchers project that, in response to rising temperatures and altered rainfall, many plants could move northward and toward the coast, following the shifts in their preferred climate, while others, primarily in the southern part of the state and in Baja California, may move up mountains into cool but highly vulnerable refugia.
Coast redwoods may range farther north, for example, while California oaks could disappear from central California in favor of cooler weather in the Klamath Mountains along the California-Oregon border. Many plants may no longer be able to survive in the northern Sierra Nevada or in the Los Angeles basin, while plants of northern Baja California will migrate north into the San Diego mountains. The Central Valley will become preferred habitat for plants of the Sonoran desert.
"Across the flora, there will be winners and losers," said first author Scott Loarie, a Ph.D. candidate at Duke University's Nicholas School of the Environment who has worked with Ackerly on the analysis for the past four years. "In nearly every scenario we explored, biodiversity suffers - especially if the flora can't disperse fast enough to keep pace with climate change."
The authors identified several "climate-change refugia" scattered around the state. These are places where large numbers of the plants hit the hardest by climate change are projected to relocate and hang on. Many of these refugia are in the foothills of coastal mountains such as the Santa Lucia Mountains along California's Central Coast, the Transverse Ranges separating the Central Valley from Los Angeles and the San Gabriel Mountains east of Los Angeles. Many of these areas are already under increasing pressure from encroaching suburban development.
"There's a real potential for sheltering a large portion of the flora in these refugia if they are kept wild and if plants can reach them in time," Loarie said. The authors argue that it's not too early to prepare for this eventuality by protecting corridors through which plants can move to such refugia, and maybe even assisting plants in reestablishing themselves in new regions.
"Part of me can't believe that California's flora will collapse over a period of 100 years," Ackerly said. "It's hard to comprehend the potential impacts of climate change. We haven't seen such drastic changes in the last 200 years of human history, since we have been cataloguing species."
Ackerly, Loarie and colleagues at UC Berkeley, Duke University in Durham, N.C., California Polytechnic State University (Cal Poly) in San Luis Obispo and Texas Tech University in Lubbock report their findings in the open-access journal PLoS ONE, which appears online June 25.
In collaboration with climate modeler Katharine Hayhoe of Texas Tech, Loarie and Ackerly then employed two different climate models - one based at the U.S. National Center for Atmospheric Research and the other at the United Kingdom Meteorological Office - that predict changes in temperature and precipitation through the year 2100 for lower and higher greenhouse gas emissions scenarios. They then projected for each model and scenario where California's endemic species would have to move in order to find the microclimate they need to survive. One set of projections assumed that plants can easily relocate, while another assumed that they cannot migrate at all by 2100, so their ranges will only shrink as climate changes.
Loarie emphasized that there are many uncertainties in the analysis - for example, in the known range of individual plants; in knowledge of the microclimate each plant prefers; in how much warming can be expected based on best- and worst-case greenhouse gas scenarios; in the direction and magnitude of changes in California rainfall; and in whether or not plants can migrate sufficiently in 100 years to discover congenial habitat.
Despite these unknowns, the researchers said they are confident in their approach, which has been used previously to predict global warming's effects on isolated species or plant families in places such as South Africa, Europe, the eastern U.S. and southern California.
"We can have confidence in the trends, if not in what happens to specific species," Loarie said. "There is a clear trend despite the uncertainty."
In the most optimistic scenario, in which global emissions of carbon dioxide return to near-1990 levels by the end of the century and plants are able to move into new habitats within a century, diversity of species in parts of California might actually increase, especially along the northwest and central coast. Nevertheless, diversity in the northern Sierra and in southern California would decrease.
However, such an optimistic outcome is far less likely than more dire ones, Ackerly said. In the higher scenario - the greatest warming, and plants unable to move in the 90- to-100-year time frame of global warming - plant diversity will decrease everywhere by as much as 25 percent, even if no species actually become extinct. Similarly, 66 percent of all endemic species will experience more than an 80 percent reduction in range.
If plants are able to disperse in time to find more suitable habitat, the researchers found that ranges will shift by an average of 150 kilometers (95 miles) under higher climate change, often with no overlap between the old and new ranges. Paradoxically, this may separate species that now live together: Substantial numbers of floral communities may be split up as some species move south and uphill while others move north and towards the coast.
Though the study did not look at the response of invasive or non-native plants to climate change, Ackerly said that they likely will expand their ranges at the expense of natives and endemics. And shifting and shrinking ranges of endemic species likely will affect animal diversity as well. Ackerly noted that range change may separate an animal from its major food source, or a pollinator from its preferred plant.
With the shifting ranges of endemic species, species conservation becomes a moving target, the researchers noted. Brent Mishler, director of the University and Jepson herberia and a professor of integrative biology, anticipates a big need for information on possible plant movement among those people protecting, managing or restoring natural areas around the state.
"They could really benefit by knowing what plants are in danger of being eliminated from their area, and maybe even more importantly, what plants to keep in mind that will be 'refugees' from other sites that will need to move into their area to avoid extinction," he said. "Planning for refugees will become a new but important concept for natural reserves to think about."
Co-authors with Loarie, Ackerly and Hayhoe are UC Berkeley graduate student Benjamin E. Carter; database specialist Richard Moe of UC Berkeley's University and Jepson herbaria; professor Charles A. Knight of Cal Poly; and statistician Sean McMahon of Duke. Loarie, who began the research project with Ackerly while working on his B.S. at Stanford University, continued the project while earning his Ph.D. at Duke, and will be starting a post-doctoral fellowship at the Carnegie Institution of Washington's Department of Global Ecology this summer.
Note: To read more about this and other UC Berkeley news, visit the Berkeley News Center at: http://newscenter.berkeley.edu. | <urn:uuid:8f56905a-c78f-4877-8382-117ab4a5e671> | 3.515625 | 1,578 | News Article | Science & Tech. | 33.778309 | 1,183 |
Location: Prince William Sound, Alaska
Material spilled: North Slope crude oil
Amount spilled: 10.8 million gallons
Spill extent: 1,300 miles of coastline
Many factors complicated the cleanup efforts following the spill. The size of the spill and its remote location, accessible only by helicopter and boat, made government and industry efforts difficult and tested existing plans for dealing with such an event. Officials employed a variety of countermeasures to control the slick, including burns and dispersants, as well as high-pressure washing on areas of oiled shoreline. Today, most signs of the spill are gone from sight, but research into the long-term biological impacts of the oil has shown that many organisms in Prince William Sound continue to show effects.
Publications
Jewett, SC and JJ Stegeman, et al., “Exposure to hydrocarbons 10 years after the Exxon Valdez oil spill: evidence from cytochrome P4501A expression and biliary FACs in nearshore demersal fishes,” Marine Environmental Research 54(2002): 21-48.
Trust, KA and Stegeman, et al., “Cytochrome P450 1A induction in sea ducks inhabiting nearshore areas of Prince William Sound, Alaska,” Marine Pollution Bulletin 40(2000): 397-403.
Marty, GD and JJ Stegeman, et al., “Ascites, premature emergence, increased gonadal cell apoptosis, and cytochrome P4501A induction in pink salmon larvae continuously exposed to oil-contaminated gravel during development,” Canadian Journal of Zoology-Revue Canadienne de Zoologie 75(1997): 989-1007.
Woodin, BR; RM Smolowitz, and JJ Stegeman, “Induction of cytochrome P4501A in the intertidal fish Anoplarchus purpurescens by Prudhoe Bay crude oil and environmental induction in fish from Prince William Sound,” Environmental Science & Technology 31(1997): 1198-1205.
Opinion
Let's not forget Exxon Valdez
March 24, 2009 editorial by WHOI researcher Christopher Reddy
Testimonies & Briefings
Oversight Hearing on “Ocean Science and Data Limits in a Time of Crisis: Do NOAA and the Fish and Wildlife Service (FWS) have the Resources to Respond?"
Christopher M. Reddy, Ph.D., Associate Scientist, Marine Chemistry & Geochemistry, Woods Hole Oceanographic Institution | <urn:uuid:9f8cb76c-ece4-4ad1-962d-c2fcb2784629> | 3.15625 | 527 | News (Org.) | Science & Tech. | 31.94765 | 1,184 |
Fossil range: Early Paleocene - Recent
Forty percent of mammal species are rodents, and they are found in vast numbers on all continents other than Antarctica. Common rodents include mice, rats, squirrels, chipmunks, gophers, porcupines, beavers, hamsters, gerbils, guinea pigs, chinchillas and degus. Rodents have sharp incisors that they use to gnaw wood, break into food, and bite predators. Most eat seeds or plants, though some have more varied diets. Some species have historically been pests, eating human seed stores and spreading disease.
Size and range of order
In terms of number of species — although not necessarily in terms of number of organisms (population) or biomass — rodents make up the largest order of mammals. There are about 2,277 species of rodents (Wilson and Reeder, 2005), with over 40 percent of mammalian species belonging to the order. Their success is probably due to their small size, short breeding cycle, and ability to gnaw and eat a wide variety of foods. (Lambert, 2000)
Rodents are found in vast numbers on all continents except Antarctica, most islands, and in all habitats except oceans. They are the only placental order, other than bats (Chiroptera) and Pinnipeds, to reach Australia without human introduction.
Many rodents are small; the tiny African pygmy mouse is only 6 cm in length and 7 grams in weight. On the other hand, the capybara can weigh up to 65 kg, and the largest known rodent, the extinct Josephoartigasia monesi, is estimated to have weighed about 1,000 kg, and possibly up to 1,534 kg or even 2,586 kg.
Rodents have two incisors in the upper as well as in the lower jaw which grow continuously and must be kept worn down by gnawing; this is the origin of the name, from the Latin rodere, to gnaw, and dens, dentis, tooth. These teeth are used for cutting wood, biting through the skin of fruit, or for defense. The teeth have enamel on the outside and exposed dentine on the inside, so they self-sharpen during gnawing. Rodents lack canines, and have a space between their incisors and premolars. Nearly all rodents feed on plants, seeds in particular, but there are a few exceptions which eat insects or fish. Some squirrels are known to eat passerine birds like cardinals and blue jays.
Rodents are important in many ecosystems because they reproduce rapidly, and can function as food sources for predators, mechanisms for seed dispersal, and as disease vectors. Humans use rodents as a source of fur, as pets, as model organisms in animal testing, for food, and even in detecting landmines.
Members of non-rodent orders such as Chiroptera (bats), Scandentia (treeshrews), Insectivora (moles, shrews and hedgehogs), Lagomorpha (hares, rabbits and pikas) and mustelid carnivores such as weasels and mink are sometimes confused with rodents.
The fossil record of rodent-like mammals begins shortly after the extinction of the non-avian dinosaurs 65 million years ago, as early as the Paleocene. Some molecular clock data, however, suggests that modern rodents (members of the order Rodentia) already appeared in the late Cretaceous, although other molecular divergence estimations are in agreement with the fossil record. By the end of the Eocene epoch, relatives of beavers, dormice, squirrels, and other groups appeared in the fossil record. They originated in Laurasia, the formerly joined continents of North America, Europe, and Asia. Some species colonized Africa, giving rise to the earliest hystricognaths. There is, however, a minority belief in the scientific community that evidence from mitochondrial DNA indicates that the Hystricognathi may belong to a different evolutionary offshoot and therefore a different order. From there hystricognaths rafted to South America, an isolated continent during the Oligocene and Miocene epochs. By the Miocene, Africa collided with Asia, allowing rodents such as the porcupine to spread into Eurasia. During the Pliocene, rodent fossils appeared in Australia. Even though marsupials are the prominent mammals in Australia, rodents make up almost 25% of the mammals on the continent. Meanwhile, the Americas became joined and some rodents expanded into new territory; mice headed south and porcupines headed north.
- Some Prehistoric Rodents
- Castoroides, a giant beaver
- Ceratogaulus, a horned burrowing rodent
- Spelaeomys, a rat that grew to a large size on the island of Flores
- Giant hutias, a group of rodents once found in the West Indies
- Ischyromys, a primitive squirrel-like rodent
- Leithia, a giant dormouse
- Neochoerus pinckneyi, a giant North American capybara that weighed 50 kg
- Josephoartigasia monesi, the largest known rodent
- Phoberomys pattersoni, the second largest known rodent
- Telicomys, a giant South American rodent
The rodents are part of the clades: Glires (along with lagomorphs), Euarchontoglires (along with lagomorphs, primates, treeshrews, and colugos), and Boreoeutheria (along with most other placental mammals). The order Rodentia may be divided into suborders, infraorders, superfamilies and families.
ORDER RODENTIA (from Latin, rodere, to gnaw)
- Suborder Anomaluromorpha
- Suborder Castorimorpha
- Suborder Hystricomorpha
- Family incertae sedis Diatomyidae: Laotian rock rat
- Infraorder Ctenodactylomorphi
- Family Ctenodactylidae: gundis
- Infraorder Hystricognathi
- Family Bathyergidae: African mole rats
- Family Hystricidae: Old World porcupines
- Family Petromuridae: dassie rat
- Family Thryonomyidae: cane rats
- Parvorder Caviomorpha
- Family †Heptaxodontidae: giant hutias
- Family Abrocomidae: chinchilla rats
- Family Capromyidae: hutias
- Family Caviidae: cavies, including guinea pigs and the capybara
- Family Chinchillidae: chinchillas and viscachas
- Family Ctenomyidae: tuco-tucos
- Family Dasyproctidae: agoutis
- Family Dinomyidae: pacaranas
- Family Echimyidae: spiny rats
- Family Erethizontidae: New World porcupines
- Family Myocastoridae: nutria
- Family Octodontidae: octodonts
- Suborder Myomorpha
- Superfamily Dipodoidea
- Family Dipodidae: jerboas and jumping mice
- Superfamily Muroidea
- Family Calomyscidae: mouse-like hamsters
- Family Cricetidae: hamsters, New World rats and mice, voles
- Family Muridae: true mice and rats, gerbils, spiny mice, crested rat
- Family Nesomyidae: climbing mice, rock mice, white-tailed rat, Malagasy rats and mice
- Family Platacanthomyidae: spiny dormice
- Family Spalacidae: mole rats, bamboo rats, and zokors
- Suborder Sciuromorpha
The above taxonomy uses the shape of the lower jaw (sciurognath or hystricognath) as the primary character. This is the most commonly used approach for dividing the order into suborders. Many older references emphasize the zygomasseteric system (suborders Protrogomorpha, Sciuromorpha, Hystricomorpha, and Myomorpha).
Several molecular phylogenetic studies have used gene sequences to determine the relationships among rodents, but these studies are yet to produce a single consistent and well-supported taxonomy. Some clades have been consistently produced such as:
- Ctenohystrica contains:
Monophyly or polyphyly?
In 1991, a paper submitted to Nature proposed that caviomorphs should be reclassified as a separate order (similar to Lagomorpha), based on an analysis of the amino acid sequences of guinea pigs. This hypothesis was refined in a 1992 paper, which asserted the possibility that caviomorphs may have diverged from myomorphs prior to later divergences of Myomorpha; this would mean caviomorphs, or possibly hystricomorphs, would be moved out of the rodent classification into a separate order. A minority scientific opinion briefly emerged arguing that guinea pigs, degus, and other caviomorphs are not rodents, while several papers were put forward in support of rodent monophyly. Subsequent studies published since 2002, using wider taxon and gene samples, have restored consensus among mammalian biologists that the order Rodentia is monophyletic.
- Adkins, R. M. E. L. Gelke, D. Rowe, and R. L. Honeycutt. 2001. Molecular phylogeny and divergence time estimates for major rodent groups: Evidence from multiple genes. Molecular Biology and Evolution, 18:777-791.
- Carleton, M. D. and G. G. Musser. 2005. Order Rodentia. Pp 745-752 in Mammal Species of the World A Taxonomic and Geographic Reference. Johns Hopkins University Press, Baltimore.
- David Lambert and the Diagram Group. The Field Guide to Prehistoric Life. New York: Facts on File Publications, 1985. ISBN 0-8160-1125-7
- Jahn, G. C. 1998. “When Birds Sing at Midnight” War Against Rats Newsletter 6:10-11.
- Leung LKP, Peter G. Cox, Gary C. Jahn and Robert Nugent. 2002. Evaluating rodent management with Cambodian rice farmers. Cambodian Journal of Agriculture Vol. 5, pp. 21-26.
- McKenna, Malcolm C., and Bell, Susan K. 1997. Classification of Mammals Above the Species Level. Columbia University Press, New York, 631 pp. ISBN 0-231-11013-8
- Nowak, R. M. 1999. Walker's Mammals of the World, Vol. 2. Johns Hopkins University Press, London.
- Steppan, S. J., R. A. Adkins, and J. Anderson. 2004. Phylogeny and divergence date estimates of rapid radiations in muroid rodents based on multiple nuclear genes. Systematic Biology, 53:533-553.
- University of California Museum of Paleontology (UCMP). 2007 "Rodentia".
- Wilson, D. E. and D. M. Reeder, eds. 2005. Mammal Species of the World A Taxonomic and Geographic Reference. Johns Hopkins University Press, Baltimore.
- ↑ 1.0 1.1 rodent - Encyclopedia.com. Retrieved on 2007-11-03.
- ↑ Rodents: Gnawing Animals. Retrieved on 2007-11-03.
- ↑ Myers, Phil (2000). Rodentia. Animal Diversity Web. University of Michigan Museum of Zoology. Retrieved on 2006-05-25.
- ↑ Millien, Virginie (May 2008). "The largest among the smallest: the body mass of the giant rodent Josephoartigasia monesi". Proceedings of the Royal Society B. doi:10.1098/rspb.2008.0087. Retrieved on 2008-05-27.
- ↑ Rinderknecht, Andrés; Blanco, R. Ernesto (January 2008). "The largest fossil rodent" (pdf). Proceedings of the Royal Society B: 923–928. doi:10.1098/rspb.2007.1645. Retrieved on 2008-05-27.
- ↑ Wines, Michael. "Gambian rodents risk death for bananas", The Age, The Age Company Ltd., 2004-05-19. Retrieved on 2006-05-25. "A rat with a nose for landmines is doing its bit for humanity" Cited as coming from the New York Times in the article.
- ↑ Douzery, E.J.P., F. Delsuc, M.J. Stanhope, and D. Huchon (2003). "Local molecular clocks in three nuclear genes: divergence times for rodents and other mammals and incompatibility among fossil calibrations". Journal of Molecular Evolution 57: S201. doi:10.1007/s00239-003-0028-x.
- ↑ Horner, D.S., K. Lefkimmiatis, A. Reyes, C. Gissi, C. Saccone, and G. Pesole (2007). "Phylogenetic analyses of complete mitochondrial genome sequences suggest a basal divergence of the enigmatic rodent Anomalurus". BMC Evolutionary Biology 7: 16. doi:10.1186/1471-2148-7-16.
- ↑ Graur, D., Hide, W. and Li, W. (1991) 'Is the guinea-pig a rodent?' Nature, 351: 649-652.
- ↑ Li, W., Hide, W., Zharkikh, A., Ma, D. and Graur, D. (1992) 'The molecular taxonomy and evolution of the guinea pig.' Journal of Heredity, 83 (3): 174-81.
- ↑ D'Erchia, A., Gissi, C., Pesole, G., Saccone, C. and Arnason, U. (1996) 'The guinea-pig is not a rodent.' Nature, 381 (6583): 597-600.
- ↑ Reyes, A., Pesole, G. and Saccone, C. (2000) 'Long-branch attraction phenomenon and the impact of among-site rate variation on rodent phylogeny.' Gene, 259 (1-2): 177-87.
- ↑ Cao, Y., Adachi, J., Yano, T. and Hasegawa, M. (1994) 'Phylogenetic place of guinea pigs: No support of the rodent-polyphyly hypothesis from maximum-likelihood analyses of multiple protein sequences.' Molecular Biology and Evolution, 11: 593-604.
- ↑ Kuma, K. and Miyata, T. (1994) 'Mammalian phylogeny inferred from multiple protein data.' Japanese Journal of Genetics, 69 (5): 555-66.
- ↑ Robinson-Rechavi, M., Ponger, L. and Mouchiroud, D. (2000) 'Nuclear gene LCAT supports rodent monophyly.' Molecular Biology and Evolution, 17: 1410-1412.
- ↑ Lin, Y-H, et al. "Four new mitochondrial genomes and the increased stability of evolutionary trees of mammals from improved taxon sampling." Molecular Biology and Evolution 19 (2002): 2060-2070.
- ↑ Carleton, Michael D., and Musser, Guy G. "Order Rodentia". Mammal Species of the World, 3rd edition, 2005, vol. 2, p. 745. (Concise overview of the literature)
How a nuclear reactor makes electricity
A nuclear reactor produces and controls the release of energy from splitting the atoms of uranium.
Uranium-fuelled nuclear power is a clean and efficient way of boiling water to make steam, which drives turbine generators. Except for the reactor itself, a nuclear power station works like most coal- or gas-fired power stations.
The Reactor Core
Several hundred fuel assemblies containing thousands of small pellets of ceramic uranium oxide fuel make up the core of a reactor. For a reactor with an output of 1,000 megawatts (MWe), the core would contain about 75 tonnes of enriched uranium.
In the reactor core the U-235 isotope fissions, or splits, producing a lot of heat in a continuous process called a chain reaction. The process depends on the presence of a moderator such as water or graphite, and is fully controlled.
The moderator slows down the neutrons produced by fission of the uranium nuclei so that they go on to produce more fissions.
Some of the U-238 in the reactor core is turned into plutonium, and about half of this is also fissioned, providing about one third of the reactor's energy output.
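For a sense of scale, the figures above imply a surprisingly small daily fuel consumption. The sketch below assumes a ~33% thermal efficiency and ~200 MeV released per fission; both are typical textbook values, not numbers from this article:

```python
# Back-of-envelope: mass of U-235 a 1000 MWe reactor fissions per day.
# Assumed values (not from the article): ~33% thermal efficiency,
# ~200 MeV of energy released per fission event.
MEV_TO_J = 1.602e-13                    # joules per MeV
ENERGY_PER_FISSION = 200 * MEV_TO_J     # ~3.2e-11 J per fission
U235_ATOM_MASS = 235 * 1.661e-27        # kg per U-235 atom

thermal_power = 1000e6 / 0.33           # watts of heat for 1000 MWe output
fissions_per_s = thermal_power / ENERGY_PER_FISSION
kg_per_day = fissions_per_s * U235_ATOM_MASS * 86400
print(f"About {kg_per_day:.1f} kg of U-235 fissioned per day")  # ~3 kg
```

Roughly three kilograms of U-235 per day, which is why a 75-tonne core lasts for years between refuellings.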
The fission products remain in the ceramic fuel and undergo radioactive decay, releasing a bit more heat. They are the main wastes from the process.
The reactor core sits inside a steel pressure vessel, so that water around it remains liquid even at the operating temperature of over 320°C. Steam is formed either above the reactor core or in separate pressure vessels, and this drives the turbine to produce electricity. The steam is then condensed and the water recycled.
PWRs and BWRs
The main design is the pressurised water reactor (PWR), which has water in its primary cooling/heat transfer circuit and generates steam in a secondary circuit. The less common boiling water reactor (BWR) makes steam in the primary circuit above the reactor core, though it is still under considerable pressure. Both types use water as both coolant and moderator, to slow the neutrons.
To maintain efficient reactor performance, about one-third to one-half of the used fuel is removed every year or two, to be replaced with fresh fuel.
The pressure vessel and any steam generators are housed in a massive containment structure with reinforced concrete about 1.2 metres thick. This is to protect neighbours if there is a major problem inside the reactor, and to protect the reactor from outside intrusion.
Because some heat is generated from radioactive decay even after the reactor is shut down, cooling systems are provided to remove this heat as well as the main operational heat output.
Natural Prehistoric Reactors
The world's first nuclear reactors operated naturally in a uranium deposit about two billion years ago in what is now Gabon. The energy was not harnessed, since these were rich uranium orebodies in the Earth's crust, moderated by percolating rainwater.
Nuclear energy's contribution to global electricity supply
Nuclear energy supplies some 14% of the world's electricity. Today 31 countries use nuclear energy to generate up to three quarters of their electricity, and a substantial number of these depend on it for one quarter to one half of their supply. Almost 15,000 reactor-years of operational experience have been accumulated since the 1950s by the world's 440 nuclear power reactors (and nuclear reactors powering naval vessels have clocked up a similar amount).
If current forecast trends continue, you'll likely need the umbrella more than the sunglasses this winter in the twin states, thanks to El Nino.
El Nino is Spanish for the 'Christ Child,' because its effects usually begin to show up around Christmas in certain years and these effects are felt far and wide as weather patterns shift across the globe.
In an El Nino year, warm waters in the Pacific Ocean near the equator migrate east toward the Americas. The warmer sea temperatures heat the atmosphere above them, fueling thunderstorms that transport heat high into the atmosphere and creating a large ridge over the eastern Pacific. The jet stream then moves that moisture toward the U.S., where large storms develop and head right for the Deep South. As long as that pattern continues, we can expect more stormy weather across the southern tier of states.
With this in mind, one thing is for sure: El Nino has a big influence on weather patterns. Although El Nino influences weather conditions, it cannot always be directly blamed for individual episodes of bad weather such as tornadoes, floods or snowstorms.
Discover the cosmos! Each day a different image or photograph of our fascinating universe is featured, along with a brief explanation written by a professional astronomer.
2006 July 18
Explanation: Sometimes it's night on the ground but day in the air. As the Earth rotates to eclipse the Sun, sunset rises up from the ground. Therefore, at sunset on the ground, sunlight still shines on clouds above. Under usual circumstances, a pretty sunset might be visible, but unusual noctilucent clouds float so high up they can be seen well after dark. Pictured above last month, a network of noctilucent clouds cast a colorful but eerie glow after dusk near Vallentuna, Sweden. Although noctilucent clouds are thought to be composed of small ice-coated particles, much remains unknown about them. Recent evidence indicates that at least some noctilucent clouds result from freezing water exhaust from Space Shuttles.
Authors & editors:
NASA Web Site Statements, Warnings, and Disclaimers
NASA Official: Jay Norris. Specific rights apply.
A service of: EUD at NASA / GSFC
& Michigan Tech. U.
Whether they’re toys that shine in the night, black lights, glow sticks or fireflies, things that produce an eerie glow are fascinating. Give a kid a glow-in-the-dark toy or paper her ceiling in dimly shining plastic stars, and she will be occupied forever. She’ll find ever brighter lights to charge them up, ever darker places to view them for maximum glow effect, and generally love exploring how it all works.
You know this; you were that kid. So what’s the deal with the glow?
Learn how to make this amazing looking glow-in-the-dark cocktail over at Neatorama
It’s 10 p.m. Do you know where your electrons are?
While there are several “flavors” of things that glow, they all have something in common: Things glow because photons are emitted when “excited” (at a higher energy state) electrons drop back to a lower, more stable state. Aside from promising them a pony or a tour of CERN, there are several ways to get your electrons excited.
In chemical glow sticks, a chemical reaction excites the electrons. This process is called chemiluminescence. Glow sticks are an excellent way to experiment with reaction rates and temperature. If you want the reaction to last longer, follow a kid’s advice and put the glow stick in the freezer or in ice water so the reaction slows down; it’ll take longer to use up the chemicals in the glow stick. The trade-off is that because the production of photons is also slower, a cold glow stick is dimmer than a warm one.
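The temperature trade-off can be sketched with the Arrhenius equation, which says reaction rate falls exponentially as temperature drops. The activation energy below is purely illustrative, not a measured value for any real glow-stick chemistry:

```python
import math

# Arrhenius sketch of why a chilled glow stick glows longer but dimmer.
# Ea is a made-up illustrative activation energy, not a measured one.
R = 8.314        # gas constant, J/(mol K)
Ea = 50_000.0    # hypothetical activation energy, J/mol

def rate_ratio(t_cold_k, t_warm_k):
    """k(cold)/k(warm) from the Arrhenius equation k = A*exp(-Ea/(R*T))."""
    return math.exp(-Ea / R * (1 / t_cold_k - 1 / t_warm_k))

# Ice water (~273 K) versus room temperature (~293 K):
ratio = rate_ratio(273.15, 293.15)
print(f"Cold stick reacts at about {ratio:.0%} of the warm rate")
```

With these assumed numbers the chilled stick reacts at roughly a fifth of the room-temperature rate, so its chemicals last several times longer while the glow is correspondingly dimmer.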
Fluorescence is like light recycling. Fluorescent rocks, laundry detergent additives, paint, and even some animals can re-emit light after something shines on them. Usually we’re talking about things getting hit with ultraviolet or ‘black’ light and re-emitting within the visible spectrum. This makes sense because as you progress along the spectrum of electromagnetic radiation, visible light is a bit lower in energy than ultraviolet light — you can’t expose something to lower energy red light and get it to fluoresce in UV, for example. Fluorescent things certainly fluoresce in daylight, but not enough to outshine the ambient light, so they’re most noticeable under a black light in an otherwise dark space.
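The one-way direction of that shift is easy to verify from the photon-energy relation E = hc/λ: shorter wavelengths carry more energy. The 365 nm (black light) and 520 nm (green) wavelengths below are just typical illustrative values:

```python
# Why fluorescence shifts light "down" the spectrum: photon energy
# E = h*c / wavelength, so UV photons carry more energy than visible ones.
H = 6.626e-34    # Planck's constant, J*s
C = 2.998e8      # speed of light, m/s
EV = 1.602e-19   # joules per electron-volt

def photon_ev(wavelength_nm):
    """Photon energy in electron-volts for a given wavelength."""
    return H * C / (wavelength_nm * 1e-9) / EV

uv = photon_ev(365)      # a typical black-light wavelength
green = photon_ev(520)   # re-emitted visible (green) light
print(f"UV: {uv:.2f} eV, green: {green:.2f} eV")
```

A UV photon at 365 nm carries about an electron-volt more energy than a green one, so absorbing UV and re-emitting green is allowed, while the reverse would require energy from nowhere.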
Phosphorescence is a lot like fluorescence but stretched out over time — a slow glow. So you can shine light (visible or UV) on a glow-in-the-dark star and it re-emits light, too, but over a lot more time, so the glow continues for minutes or hours before it completely dies out. If you have a glow-in-the-dark toy or T-shirt, try “charging it up” with lights of different colors or intensities and checking out the glow that results.
Fireflies produce and use their own chemicals, luciferin and luciferase, to dazzle and attract potential mates — and sometimes to lure prey. A surprising number of marine critters are bioluminescent, too, like dinoflagellates (plankton) that glow when disturbed, the angler fish, and some squid (perhaps they are blending in with starlight from above). Headlines occasionally announce a new genetically engineered “glowing” kitten, rabbit, plant, sheep, etc., but they are almost always talking about fluorescence instead of bioluminescence, so the light is only seen when the animal is placed under ultraviolet light. (One useful application of this is the ability to track a protein related to a certain disease by getting the introduced gene for Green Fluorescent Protein (GFP) to link to the gene for the protein of interest). Some animals like scorpions and jellyfish (the original source of GFP) fluoresce naturally.
Sugar and adhesives can exhibit triboluminescence, in which friction or fracturing produces the light. This one is great to try out at home; you just need Wint-O-Green Lifesavers®, transparent tape and a very dark room (a buddy or a room with a mirror is helpful for the Lifesavers portion). Dr. Sweeting (that’s her real name) has more detailed instructions and explanation, but the big idea is that a tiny, but visible, amount of light is emitted when you peel tape off the roll and when you bite into the candy, crushing sugar crystals against each other. The wintergreen oil even improves the effect by fluorescing!
Are there any other kinds of luminescence? Yes! Incandescence, piezoluminescence, radioluminescence, etc. But that’s enough fun for one post. Go try out triboluminescence!
Just can’t get enough? Make sure to come early for the educational portion of HMNS’ LaB 5555 this Friday for more GLOW fun, and learn all about the science of what gives things light. I’ll be there doing demos to light up your night. For tickets and more info, click here!
Expanding the Ark - advancing invertebrate conservation
24 May 2004 | News story
Gland, Switzerland, 24 May 2004 (IUCN)-The World Conservation Union. 'Expanding the Ark' neatly summarises the aims of the invertebrate specialists who gathered at the American Museum of Natural History in New York for a symposium recently. They plan to raise awareness of invertebrate conservation requirements so that these are included in conservation planning, management and policy decisions. Currently, invertebrate species are often overlooked in conservation strategies, despite representing the vast majority of our planet's biodiversity: a staggering 95% of all known animal species.
To address this, the global invertebrate conservation community, including SSC, has united under a new initiative: the 'Expanding the Ark Coalition' (ETAC). This will provide an ideal forum for deciding the best ways to advance invertebrate conservation and help mobilise the necessary resources.
Considering the sheer number of invertebrates on our planet - 1,190,200 species have been described and nearly 10,000 new ones are discovered each year - it is not surprising to learn that they play an indispensable ecological and economic role. Invertebrates occupy key roles in most food chains through nutrient recycling, pollination, pest control and water filtration, as well as performing many other vital functions. Conserving invertebrate biodiversity is therefore essential for the maintenance of ecosystem health. From squids to dragonflies, and spiders to termites, invertebrates are also fascinating animals in their own right, many displaying incredible life histories.
However, it is the sheer abundance and diversity of invertebrates that makes assessing their conservation status on limited resources one of the biggest challenges to conservationists. Recent studies suggest that many invertebrates face imminent extinction, so the importance of this challenge cannot be overstated. To date, the threatened status of only 3,400 invertebrates has been assessed (0.2% of known species) and 1,959 species are included in the IUCN Red List of Threatened Species.
Initial steps were taken by the IUCN's Species Survival Commission (SSC) to improve invertebrate conservation by organising a workshop in November 2001. This brought together representatives of all the SSC invertebrate Specialist Groups and other experts to develop a plan of action for the Commission's invertebrate conservation work.
As a result, the geographical coverage of invertebrate Specialist Groups has increased, with the creation of new European and South Asian Specialist Groups and a Declining Pollination Task Force. The ongoing SSC Freshwater Biodiversity Assessment Programme has fully integrated invertebrates starting in Eastern Africa where all species of odonates (including dragonflies and damselflies), freshwater molluscs and crabs have been assessed.
ETAC aims to build on this work and give fresh impetus to invertebrate conservation. A lot remains to be done, but following the symposium, the global invertebrate conservation community, including SSC members, are enthusiastic and motivated to meet the challenge.
Several proposals have already been suggested:
Revive the Grasshopper and Cricket (Orthopteroidea) and Butterflies and Moths (Lepidoptera) Specialist Groups.
Complete a Global Dragonfly Assessment
Consider developing a European Invertebrate Red List
Andrew McMullin, IUCN/SSC Communications Officer, firstname.lastname@example.org; Tel: +41 22 999 0153
|Date of discovery||May 15, 2005|
|Name of discoverer||Hubble Space Telescope Pluto Companion Search Team|
|Name origin||Mythical 9-headed monster; second initial of "New Horizons" mission|
|Order from primary||3|
|Semi-major axis||64,780 km|
|Sidereal month||38.206 da|
|Inclination||0.22° to Pluto's equator|
|Equatorial radius||37 km|
|Mean temperature||44 K|
The Hubble Space Telescope Pluto Companion Search Team examined Pluto and its already-known moon Charon using the Hubble Space Telescope's Advanced Camera for Surveys. The stated discovery date of May 15, 2005 is the date that the discovery images were taken; the discovery was announced only after confirmatory comparison with previous images of the Plutonian system.
In Greek mythology, the Hydra was a nine-headed monster who was eventually destroyed by Hercules. The names given to Hydra and the middle moon Nix are also the initials of the NASA mission New Horizons, launched in 2006.
Nix and Hydra, like Charon, revolve around Pluto in the direction of Pluto's own rotation about its axis. Hydra's orbit is significantly eccentric, and its period is nearly, but not quite, six times that of Charon. This last finding has led to speculation that Hydra and Charon resonate in their orbits.
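As a sanity check on the tabulated orbit, Kepler's third law reproduces the 38.2-day sidereal month from the listed semi-major axis. The combined Pluto-Charon system mass of about 1.46 × 10^22 kg is an outside assumption, not a value given in this article:

```python
import math

# Kepler's third law, T = 2*pi*sqrt(a^3 / (G*M)), applied to Hydra.
# M_SYSTEM is an assumed Pluto+Charon mass, not a figure from the article.
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M_SYSTEM = 1.46e22       # kg, assumed combined Pluto-Charon mass
a = 64_780e3             # m, semi-major axis from the table above

period_s = 2 * math.pi * math.sqrt(a**3 / (G * M_SYSTEM))
period_days = period_s / 86400
print(f"Predicted sidereal month: {period_days:.1f} days")  # ~38.4 days
```

The predicted period of roughly 38 days agrees well with the tabulated 38.206 days, consistent with Hydra orbiting the combined mass of Pluto and Charon.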
The Hubble Space Telescope's instruments have not been able to resolve Hydra sufficiently to measure its diameter. The diameter has a calculated range of between 61 km and 167 km, depending on the albedo—and current instruments cannot adequately resolve the albedo, either. Nor has any instrument been able to resolve its sidereal day. Hydra is somewhat brighter than its companion moon Nix, and so might be the larger of the two, but this is not proved.
The New Horizons space probe, launched in 2006, will visit Hydra and its companions in February of 2015. This will represent the earliest opportunity to study Hydra in detail.
The very existence of such a complex system as the plutonian system is difficult to explain, primarily on account of Pluto's small size. This has led one member of the discovery team to speculate that a giant impact (similar to that which most astronomers now favor for the origin of the Moon of Earth) on Pluto formed Charon, and that Nix and Hydra are two pieces of debris from that same impact. While the lead investigator doubts that Nix and Hydra are captured objects, Pluto is no longer considered a planet precisely on account of its failure to "clear its neighborhood" of other objects. Nix and Hydra could, therefore, be two of many Trans-Neptunian Objects (TNOs) that, instead of crashing into Pluto, fell into orbit around it as Pluto passed.
- ↑ "Gazetteer of Planetary Nomenclature: Planetary Body Names and Discoverers." US Geological Survey, Jennifer Blue, ed. March 31, 2008. Accessed April 17, 2008.
- ↑ "Pluto: Moons: Nix." Solar System Exploration, NASA, September 15, 2006. Accessed April 17, 2008.
- ↑ "IAU Circular No. 8723: Satellites of Pluto." International Astronomical Union, June 21, 2006. Accessed April 17, 2008.
- ↑ Than, Ker. "Pluto's Newest Moons Named Hydra and Nix." Space.com, June 21, 2006. Accessed April 17, 2008.
- ↑ 5.0 5.1 5.2 Buie, M. W., Grundy, W. M., Young, E. F., et al. "Orbits and photometry of Pluto's satellites: Charon, S/2005 P1, and S/2005 P2." Astronomical Journal 132:290, submitted December 19, 2005.
- ↑ 6.0 6.1 6.2 Stern, S. A., Mutchler, M. J., Weaver, H. A., and Steffl, A. J. "The Positions, Colors, and Photometric Variability of Pluto's Small Satellites from HST Observations 2005-2006." Astronomical Journal, submitted April 29, 2006. Accessed April 17, 2008.
- ↑ An assumed value
- ↑ Arnett, Bill. "Pluto." <http://www.nineplanets.org/> Accessed January 22, 2008.
- ↑ "IAU Circular No. 8625." International Astronomical Union, October 31, 2005. Accessed April 17, 2008.
- ↑ 10.0 10.1 10.2 "NASA's Hubble Reveals Possible New Moons Around Pluto." News Release STScl-2005-19, Hubble Site, October 31, 2005. Accessed April 17, 2008.
- ↑ 11.0 11.1 Weaver, H. A., Stern, S. A., Mutchler, M. J., et al. "The Discovery of Two New Satellites of Pluto." Nature 439(7079):943-945. Accessed April 17, 2008.
- ↑ Steffl, A. J., Mutchler, M. J., Weaver, H. A., et al. "New Constraints on Additional Satellites of the Pluto System." Astronomical Journal 132:614-619. Accessed April 17, 2008.
- ↑ 13.0 13.1 13.2 Britt, Robert Roy. "Two More Moons Discovered Orbiting Pluto." Space.com, October 31, 2005. Accessed April 17, 2008.
- Background Information Regarding Our Two Newly Discovered Satellites of Pluto (Official Nix/Hydra Web Site)
October 25, 2012
New Seafloor Bacteria Discovered, Built like Undersea Electric Cables
By Colleen Lynch
An undersea mystery has finally been solved, but the answer is almost a mystery in itself. Researchers from Aarhus University in Denmark have discovered that areas of the seafloor which were strangely able to conduct electric currents are actually fueled by undiscovered multicellular bacteria.
Electricity underwater seems impossible, but nature has proven time and again that humanity’s assumptions are generally laughable; this bacterial find is a case in point.
The electrically charged species of bacteria lives in the mud of the seafloor, acting in a sense as living electrical cables, despite being surrounded by water, which is obviously not the greatest conductor.
Aarhus University researchers became aware of the conundrum three years ago, and have spent their time in a search of the source of the seemingly impossible current. Initially they began the search by looking for a way to shut the current down, which led the researchers to lay non-conducting wire in the charged mud. The wire successfully stopped the current, which suggested that the current was traveling through a physical medium--just like an actual electric wire would.
But there was no wire there. The researchers were baffled, but sifting through the mud they eventually discovered the culprit: an entirely new species of bacteria, about a centimeter long and 100 times thinner than a human hair.
The bacterium remains unnamed as of yet, but the species apparently has a mind-bending biology: the bacterium is essentially made of long, electrically conducting filaments packed inside an insulated membrane.
It is a living electric cable, stretched incredibly thin and built from condensed living cells.
Numerous samples of the bacteria are now being studied by the Aarhus team, with an eye to what the researchers can find out about the bacteria’s past, its role in the development of ocean life, and what prospects the bacterial wires could hold for the future.
Biotechnology and engineering are two areas which could benefit greatly from such an astounding natural phenomenon.
Edited by Brooke Neuman
Weather, temperature, carbon, water, soil, and all other kinds of climate-related open data. Including (but not limited to) data about:
- the atmosphere
- the weather
- air quality
- the ocean
- water levels
- water quality
- carbon emissions
- soil erosion
Existing Lists of Climate Datasets to Migrate Here
Datasets
7 datasets found.
About Full details can be found in: Renfrew, I. A. and P. S. Anderson, 2002: The surface climatology of an ordinary katabatic wind regime in Coats Land, Antarctica, Tellus, 54A,...
About Website says: This page gives information about ozone at Halley, Rothera and Vernadsky/Faraday stations. Openness Not open. Permission required to reproduce data: Most of our...
About Variety of public datasets from BAS, including: Historical Meteorological Data Radiosonde data Southern Oscillation Index - SOI Openness Presumably...
About Sodar echogrammes from 1991-2007. Background information at: http://www.antarctica.ac.uk/met/psa/sodar/sodar_intro.htm
About From website: Monthly mean surface temperature data and derived statistics for some Antarctic stations This website will eventually encompass all Antarctic stations with surface...
About From Laboratory profile: NCAR's Earth Observing Laboratory develops and deploys NSF Lower Atmospheric Observing Facilities (LAOF) and provides field project support and data...
About The National Snow and Ice Data Center (NSIDC) is part of the Cooperative Institute for Research in Environmental Sciences at the University of Colorado at Boulder. NSIDC supports... | <urn:uuid:32e54bb8-4673-48eb-b926-802e1f504dbc> | 2.515625 | 366 | Content Listing | Science & Tech. | 29.689469 | 1,193 |
Some 3.5 billion years ago, a single-celled organism now named LUCA (for the Last Universal Common Ancestor of all life on Earth) developed the ability to pull oxygen out of its environment. Although LUCA is long gone, University of Hawaii microbiologist Maqsudul Alam has taken a step toward understanding the secret behind this world-changing feat of chemical engineering.
LUCA evolved in an oxygen-free, or anaerobic, environment. But as oxygen levels rose in the ocean and atmosphere, the cell had to develop a way to neutralize what was, in essence, a poison. Alam hit on that defense while studying archaea—another type of primitive, single-celled creature. Alam studied two species of archaea, one aerobic and the other anaerobic. He isolated a crucial compound called protoglobin that protects anaerobic species of archaea from the toxic effects of oxygen. “Protoglobin is the nose and the hand of the archaea,” he says. “It senses oxygen, binds it, and removes it from the cell before it can do any harm.” Protoglobin, or something much like it, apparently provided a similar defense for LUCA. But that is only half of the story. When Alam purified the protoglobin to study its structure, he saw that the molecule looks surprisingly like diluted blood. In fact, protoglobin binds and releases oxygen the same way that hemoglobin does as it transports oxygen through blood. Alam believes that while LUCA initially evolved protoglobin for protection from oxygen, the organism’s descendants developed a variant of the molecule—hemoglobin—that transformed oxygen from a poison into a nutrient. That innovation enabled life to expand into new environments and set the stage for all oxygen-breathing organisms, Alam says.
The next step is to create a computer model that will explain how protoglobin works. Alam hopes such a model will allow him to unravel the genetic changes that transformed protoglobin and answer what he calls the $64 million question: How did protoglobin evolve to transport oxygen through the bodies of multicellular organisms? | <urn:uuid:6f68e90c-6447-4914-90b4-113218e933d7> | 3.9375 | 432 | Knowledge Article | Science & Tech. | 31.719075 | 1,194 |
Memory-mapped file objects behave like both bytearray objects and file objects. You can use mmap objects in most places where bytearray objects are expected; for example, you can use the re module to search through a memory-mapped file. You can also change a single byte by doing obj[index] = 97, or change a subsequence by assigning to a slice: obj[i1:i2] = b'...'. You can also read and write data starting at the current file position, and seek() through the file to different positions.
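As a sketch of these behaviors (indexing, slice assignment, and searching with the re module), the following uses a hypothetical scratch file named demo.txt:

```python
import mmap
import re

# Create a small scratch file to map (hypothetical name, for illustration)
with open("demo.txt", "wb") as f:
    f.write(b"alpha beta gamma\n")

with open("demo.txt", "r+b") as f:
    mm = mmap.mmap(f.fileno(), 0)
    # Indexing returns an int, just as with bytearray
    first = mm[0]               # 97, the byte value of b'a'
    # Change a subsequence by assigning a slice of equal length
    mm[0:5] = b"ALPHA"
    # The re module can search a mapping directly
    found = re.search(rb"beta", mm).group()    # b'beta'
    mm.close()
```

Because the default mapping is shared and write-through, the slice assignment is also visible in the underlying file after the mapping is closed.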
A memory-mapped file is created by the mmap constructor, which is different on Unix and on Windows. In either case you must provide a file descriptor for a file opened for update. If you wish to map an existing Python file object, use its fileno() method to obtain the correct value for the fileno parameter. Otherwise, you can open the file using the os.open() function, which returns a file descriptor directly (the file still needs to be closed when done).
If you want to create a memory-mapping for a writable, buffered file, you should flush() the file first. This is necessary to ensure that local modifications to the buffers are actually available to the mapping.
For both the Unix and Windows versions of the constructor, access may be specified as an optional keyword parameter. access accepts one of three values: ACCESS_READ, ACCESS_WRITE, or ACCESS_COPY to specify read-only, write-through or copy-on-write memory respectively. access can be used on both Unix and Windows. If access is not specified, Windows mmap returns a write-through mapping. The initial memory values for all three access types are taken from the specified file. Assignment to an ACCESS_READ memory map raises a TypeError exception. Assignment to an ACCESS_WRITE memory map affects both memory and the underlying file. Assignment to an ACCESS_COPY memory map affects memory but does not update the underlying file.
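To illustrate the difference between the access modes, this sketch (using a hypothetical scratch file) writes through an ACCESS_COPY mapping and shows that the underlying file is left untouched:

```python
import mmap

# Hypothetical scratch file for illustration
with open("copy_demo.bin", "wb") as f:
    f.write(b"original")

with open("copy_demo.bin", "r+b") as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_COPY)
    mm[0:8] = b"modified"        # changes the in-memory copy only
    in_memory = bytes(mm)        # b'modified'
    mm.close()

with open("copy_demo.bin", "rb") as f:
    on_disk = f.read()           # b'original' -- the file is unchanged
```

With access=mmap.ACCESS_WRITE instead, the same slice assignment would also update the file.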
To map anonymous memory, -1 should be passed as the fileno along with the length.
(Windows version) Maps length bytes from the file specified by the file handle fileno, and creates a mmap object. If length is larger than the current size of the file, the file is extended to contain length bytes. If length is 0, the maximum length of the map is the current size of the file, except that if the file is empty Windows raises an exception (you cannot create an empty mapping on Windows).
tagname, if specified and not None, is a string giving a tag name for the mapping. Windows allows you to have many different mappings against the same file. If you specify the name of an existing tag, that tag is opened, otherwise a new tag of this name is created. If this parameter is omitted or None, the mapping is created without a name. Avoiding the use of the tag parameter will assist in keeping your code portable between Unix and Windows.
offset may be specified as a non-negative integer offset. mmap references will be relative to the offset from the beginning of the file. offset defaults to 0. offset must be a multiple of the ALLOCATIONGRANULARITY.
(Unix version) Maps length bytes from the file specified by the file descriptor fileno, and returns a mmap object. If length is 0, the maximum length of the map will be the current size of the file when mmap is called.
flags specifies the nature of the mapping. MAP_PRIVATE creates a private copy-on-write mapping, so changes to the contents of the mmap object will be private to this process, and MAP_SHARED creates a mapping that’s shared with all other processes mapping the same areas of the file. The default value is MAP_SHARED.
prot, if specified, gives the desired memory protection; the two most useful values are PROT_READ and PROT_WRITE, to specify that the pages may be read or written. prot defaults to PROT_READ | PROT_WRITE.
access may be specified in lieu of flags and prot as an optional keyword parameter. It is an error to specify both flags, prot and access. See the description of access above for information on how to use this parameter.
offset may be specified as a non-negative integer offset. mmap references will be relative to the offset from the beginning of the file. offset defaults to 0. offset must be a multiple of the PAGESIZE or ALLOCATIONGRANULARITY.
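A minimal sketch of the offset parameter (the file name is hypothetical): the file is padded to one ALLOCATIONGRANULARITY so that the mapping can begin exactly at that boundary:

```python
import mmap

gran = mmap.ALLOCATIONGRANULARITY

# Hypothetical file: padding up to the granularity, then a payload
with open("offset_demo.bin", "wb") as f:
    f.write(b"\x00" * gran + b"payload")

with open("offset_demo.bin", "rb") as f:
    # length 0 maps from the offset to the end of the file
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ, offset=gran)
    data = bytes(mm)             # only the bytes after the offset are mapped
    mm.close()
```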
To ensure validity of the created memory mapping the file specified by the descriptor fileno is internally automatically synchronized with physical backing store on Mac OS X and OpenVMS.
This example shows a simple way of using mmap:
import mmap

# write a simple example file
with open("hello.txt", "wb") as f:
    f.write(b"Hello Python!\n")

with open("hello.txt", "r+b") as f:
    # memory-map the file, size 0 means whole file
    mm = mmap.mmap(f.fileno(), 0)
    # read content via standard file methods
    print(mm.readline())  # prints b"Hello Python!\n"
    # read content via slice notation
    print(mm[:5])  # prints b"Hello"
    # update content using slice notation;
    # note that new content must have same size
    mm[6:] = b" world!\n"
    # ... and read again using standard file methods
    mm.seek(0)
    print(mm.readline())  # prints b"Hello world!\n"
    # close the map
    mm.close()
mmap can also be used as a context manager in a with statement (note that write() requires bytes, not str):

import mmap

with mmap.mmap(-1, 13) as mm:
    mm.write(b"Hello world!")
New in version 3.2: Context manager support.
The next example demonstrates how to create an anonymous map and exchange data between the parent and child processes:
import mmap
import os

mm = mmap.mmap(-1, 13)
mm.write(b"Hello world!")

pid = os.fork()

if pid == 0:
    # In a child process
    mm.seek(0)
    print(mm.readline())
    mm.close()
Memory-mapped file objects support the following methods:
Close the file. Subsequent calls to other methods of the object will result in an exception being raised.
True if the file is closed.
New in version 3.2.
Returns the lowest index in the object where the subsequence sub is found, such that sub is contained in the range [start, end]. Optional arguments start and end are interpreted as in slice notation. Returns -1 on failure.
Flushes changes made to the in-memory copy of a file back to disk. Without use of this call there is no guarantee that changes are written back before the object is destroyed. If offset and size are specified, only changes to the given range of bytes will be flushed to disk; otherwise, the whole extent of the mapping is flushed.
(Windows version) A nonzero value returned indicates success; zero indicates failure.
(Unix version) A zero value is returned to indicate success. An exception is raised when the call failed.
Copy the count bytes starting at offset src to the destination index dest. If the mmap was created with ACCESS_READ, then calls to move will raise a TypeError exception.
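For example, with the move(dest, src, count) argument order described above, sketched on an anonymous mapping:

```python
import mmap

mm = mmap.mmap(-1, 10)
mm.write(b"0123456789")
# Copy count=4 bytes starting at src=0 to destination index dest=6
mm.move(6, 0, 4)
result = bytes(mm)             # b'0123450123'
mm.close()
```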
Return a bytes object containing up to n bytes starting from the current file position. If the argument is omitted, None or negative, return all bytes from the current file position to the end of the mapping. The file position is updated to point after the bytes that were returned.
Changed in version 3.3: Argument can be omitted or None.
Returns a byte at the current file position as an integer, and advances the file position by 1.
Returns a single line, starting at the current file position and up to the next newline.
Resizes the map and the underlying file, if any. If the mmap was created with ACCESS_READ or ACCESS_COPY, resizing the map will raise a TypeError exception.
Returns the highest index in the object where the subsequence sub is found, such that sub is contained in the range [start, end]. Optional arguments start and end are interpreted as in slice notation. Returns -1 on failure.
Set the file’s current position. The whence argument is optional and defaults to os.SEEK_SET or 0 (absolute file positioning); other values are os.SEEK_CUR or 1 (seek relative to the current position) and os.SEEK_END or 2 (seek relative to the file’s end).
Return the length of the file, which can be larger than the size of the memory-mapped area.
Returns the current position of the file pointer.
Write the bytes in bytes into memory at the current position of the file pointer; the file position is updated to point after the bytes that were written. If the mmap was created with ACCESS_READ, then writing to it will raise a TypeError exception. | <urn:uuid:b636de05-d717-47af-ac57-614e8a8186c6> | 3.015625 | 1,910 | Documentation | Software Dev. | 56.802515 | 1,195 |
A new kind of thyristor – an electronic switch – has the potential to change the landscape for power control and conversion. And it's thanks to a partnership between the Energy Department’s Sandia National Laboratory (SNL) and GeneSiC Semiconductor that this technology is now on the market. Learn more.
Inefficient light bulbs can drive up electricity bills and drain homeowners' wallets. With that in mind, government officials in the east Texas city of Longview established a light bulb swap program that is projected to save participating households $242.
Teams at two of the Energy Department's laboratories are making headway on two projects that will enable building a new lithium battery that charges faster, lasts longer, runs more safely, and might also arrive on the market in the not-too-distant future. Learn more.
Can you imagine a photovoltaic module that’s able to generate and store electricity on its own? Or an electric vehicle (EV) powered by a technology more durable than the advanced batteries in today’s EVs? Innovative solid-state nanocapacitors are making this clean technology possible. | <urn:uuid:412b9804-40d5-49c9-8f56-9daa1cb39062> | 2.625 | 236 | Content Listing | Science & Tech. | 34.875 | 1,196 |
Morphological and genetic variation indicate cryptic species within Lamarck’s little sea star, Parvulastra (=Patiriella) exigua
Hart, Michael W., Keever, Carson C., Dartnall, Alan J., and Byrne, Maria (2006) Morphological and genetic variation indicate cryptic species within Lamarck’s little sea star, Parvulastra (=Patiriella) exigua. Biological Bulletin, 210 (2). pp. 158-167.
View at Publisher Website: http://www.biolbull.org/cgi/reprint/210/...
The asterinid sea star Parvulastra exigua (Lamarck) is a common member of temperate intertidal marine communities from geographically widespread sites around the southern hemisphere. Individuals from Australian populations lay benthic egg masses (through orally directed gonopores) from which nonplanktonic offspring hatch and metamorphose without a dispersing planktonic larval phase. Scattered reports in the taxonomic literature refer to a similar form in southern Africa with aborally directed gonopores (and possibly broadcast spawning of planktonic eggs and larvae); such differences would be consistent with cryptic species variation. Surveys of morphology and mtDNA sequences have revealed cryptic species diversity in other asterinid genera. Here we summarize the taxonomic history of Lamarck’s "Astérie exiguë" and survey morphological variation (the location of the gonopores) for evidence that some P. exigua populations include cryptic species with a different mode of reproduction. We found strong evidence for multiple species in the form of two phenotypes and modes of reproduction (oral and aboral gonopore locations) in populations from southern Africa and islands in the Atlantic and Indian oceans. Both modes of reproduction have broad geographic ranges. These results are consistent with previously published genetic data that indicate multiple species in African and island (but not Australian) populations.
|Item Type:||Article (Refereed Research - C1)|
|Keywords:||asterin sea star; Parvulastra exigua; Lamarck; asterinid sea stars|
|SEO Codes:||96 ENVIRONMENT > 9608 Flora, Fauna and Biodiversity > 960808 Marine Flora, Fauna and Biodiversity @ 100%|
|Deposited On:||26 Oct 2009 13:54|
|Last Modified:||20 May 2013 00:36|
|Citation Counts with External Providers:||Web of Science: 20|
VLTI observations of the radii of four small stars
The radii and masses of the four very-low-mass stars now observed with the VLTI, GJ 205, GJ 887, GJ 191 (also known as "Kapteyn's star") and Proxima Centauri (red filled circles; with error bars). For comparison, planet Jupiter's mass and radius are also plotted (blue triangle). The two curves represent theoretical models for stars of two different ages (400 million years - red dashed curve; 5 billion years - black fully drawn curve; models by Gilles Chabrier and collaborators at the École Normale Supérieure de Lyon, France). As can be seen, theory and observations fit very well.
About the Image
|Release date:||29 November 2002|
|Size:||800 x 789 px| | <urn:uuid:0fe245f7-49df-4a89-861a-e329222c30b8> | 2.890625 | 180 | Truncated | Science & Tech. | 52.377531 | 1,198 |
Planning Begins for Experiment to Study Influence of Aerosols on California Water Supply
September 10, 2008
Researchers from NOAA's Earth System Research Laboratory will participate in the CalWater: Energy, Water, and Regional Climate Planning Meeting on September 15-17, 2008 at Scripps Institution of Oceanography in La Jolla, CA. Participants are planning a joint experiment between NOAA, the California Energy Commission, and Scripps Institution of Oceanography to study the interactions between air quality and the hydrologic cycle in a changing climate. Expected outcomes of this meeting include a science plan and an implementation strategy describing testable hypotheses addressing how aerosols from regional and trans-Pacific pollution influence water supply and snowpack in California.
The interaction of increased regional and transported pollution and global warming trends will have an unknown effect on California's water supplies. On one hand, air pollution may partially offset the warming trend by increasing the sunlight scattered off of aerosols, which will cool the surface. On the other hand, increasing pollution deposited within snowpacks will darken the snow, which will absorb more radiation and melt the snowpack more quickly, reducing California's natural snowpack reservoirs. These non-linear and offsetting interactions depend on the type and amount of aerosols as well as the state of the regional climate. California is interested in finding the most effective way to simultaneously reduce emissions of greenhouse gases, aerosols, other air pollutants and their precursors while maximizing the benefits for air quality, water resources, and climate change.
This experiment will leverage heavily the NOAA HMT-West effort, as well as NOAA's UAS Program and will address anthropogenic effects of aerosols on precipitation, and climate change impacts on the amplitude and frequency of extreme precipitation events involving atmospheric rivers. | <urn:uuid:ae3fc182-46be-4454-8a2c-b1b4d10bc2e6> | 2.828125 | 356 | News (Org.) | Science & Tech. | 7.021697 | 1,199 |