I am currently attempting to theorise an explanation for the apparent relationship between gravity and magnetism. Both are a 'force' which acts over a distance, the strength of which declines the further from the source one gets. Typical questions one might ask: Are there particles for these forces, or is it something deeper?
Please state your ideas; I am interested in all kinds related to this topic, although I'd rather have original theories than regurgitated mainstream.
Well, since you asked...
I'm going to try to explain this in a logical form, although without pictures it is difficult to communicate this type of thing.
Hypothesis 1: Our universe is made entirely of a single substance currently called the "fabric of space" once called "ether".
Hypothesis 2: This fabric or ether has the property that every point within it mutually repels every other point, much like a huge cloud of electrons. But it is also 100% homogeneous, having no particles within.
Hypothesis 3: Variations in the density of the ether cause what we observe and call "electric charge".
Hypothesis 4: Positive electric charge is created by an increase in density of the substance of space, the ether. Negative electric charge is created by a relative decrease in density.
Hypothesis 5: As density variations take place, they form a traveling wave of density variation.
Hypothesis 6: The speed of travel is limited by the reluctance of the density to be altered. This reluctance to alteration is the result of the mathematical situation of every point in space as it is affected by all other points. This is what causes time.
Hypothesis 7: The wave front ramp speed, or acceleration rate of density change, determines the inertia of the wave. The sharper or faster the ramping of the change in density, the more difficult it is to change it any faster. It is by this effect that all things feel the encounter of any other thing.
Hypothesis 8: Electric charge waves, known as "electro-magnetic waves", are formed as the ether density rises and lowers sequentially while traveling across a space. The rise rate is slow enough that very little inertia is apparent.
Hypothesis 9: Matter particles are created as waves of "ether charge" fall back into their own back wave, creating a much faster density change rate. This is evident in their "spin" and increased inertia (inability to be affected). Light photons are just within the boundary of being stable particles and thus display both particle and wave attributes.
Hypothesis 10: These spinning waves have specific distances required from wave front to wave tail for them to form a stable spin. Each harmonic of this distance forms another stable particle size. This is what creates the "quantum" effect of size constraints on stable particles.
Hypothesis 11: The internal motions of the variations in density (often neutral in total density and charge) travel at the speed of light as they form a racing 3-dimensional envelope in pursuit of their own density back waves.
Hypothesis 12: The spinning of the wave fronts within each particle creates a charge wave radiation effect extending outward from the particle epicenter. Due to the circling nature of the spin, this radiation takes the form of a 45 degree spiraling wave front affecting all space surrounding the particle. No energy is lost from the particle because the spiral wave is not encountering opposition or doing work.
Hypothesis 13: When a spiraling ether wave encounters a spinning ether object (all particles), the internal spin speed of the particle is affected such that one side of the spin is necessarily slower than the other. But due to the cycling of both the particle's spin and the wave being encountered, the particle is impeded first in one direction, pulling it at a 45 degree angle, and then in reverse, pulling it at the opposite 45 degree angle. The result is that the particle moves directly toward the source of the radiant wave - gravity.
Magnetism is formed by similar means:
Hypothesis 14: When the ether density variations form an overall increase or decrease in ether density, an electric charge is apparent. The electric effect is substantially greater than the ether density effect alone because it represents a total ether density change in space over a much larger area and strongly affects any other great density variation. The much smaller density variations creating the particles are confined to a very small area as they are filling their own back waves leaving a neutral overall density change as felt outside the particle. A particle with an overall variation in density has nothing to confine its density change effect.
Hypothesis 15: When an electric charge races by another charge, it has a different effect as it approaches than as it leaves. This is similar to the Doppler effect in sound. The approaching charge affects the space of a second charge at an increasing rate of overall density change. This gives a stronger push or pull on the second charge particle than when it is leaving the second particle.
Hypothesis 16: A series of charged particles racing by a target charged particle will continuously push the target particle - magnetism.
Hypothesis 17: An increase in density can be more easily arranged than a decrease due to the fact that any decrease must be formed by the inherent willingness of the ether substance to disperse. It cannot be "pulled", but it can be "pushed".
Hypothesis 18: A heavy, or high inertia, positive particle can be formed naturally more easily than a negative high inertia particle. As the wave front of the inner particle waves increases, the inertia of the particle increases due to the reluctance of density change. An increase in the density change rate can be forced through compression, by additional efforts to increase the density. But a decrease in the density change rate, by additional efforts to remove the substance, is dependent on the speed at which the substance travels out of an area. This is why we find heavy positive particles (protons) occurring naturally, and not heavy negative particles (negatons).
Hypothesis 19: A negative charge (low ether density) particle occupies a different size of space. Although a heavy negative particle can be formed, it cannot be made stable due to the requirements of proximity of the leading and trailing wave ramps.
Hypothesis 20: An electron is a different size and density dispersion than a positron. And protons have a much higher wave ramp dispersion, permitting the electrons to merge with them. This is what yields "atoms" - a single particle made by a negative charge field attempting to collapse into a heavier (more dense) positive charge wave. The Bohr atomic model has never been correct.
Hypothesis 21: A nucleus of an atom is a single particle fixed at inertial states that are a harmonic of a single neutron particle. Protons combine into a single enveloping wave required to be stable at specific harmonic sizes. A nucleus is not "held together" but is a single wave formed by combining the waves of other particles into one - there is no "strong force".
I could go on with more bits of interesting physics, but 21 concepts should be enough for now. You can simulate all of this with merely a PC and a large spreadsheet like Excel if you are very careful in choosing the exact right formula for the "ether repelling force", the reluctance of the density of the fabric of space to be altered. The resulting program will let you see waves and particles forming, along with their naturally occurring magnetic and gravitational effects, even though you had not programmed those effects in.
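The post never gives its "ether repelling force" formula, so here is a toy 1-D stand-in of my own choosing (each cell is accelerated toward the average of its neighbors, i.e. a discrete wave equation). It illustrates only that a density spike disperses into traveling waves; it does not demonstrate the gravity or magnetism claims:

```python
# Toy 1-D "ether density" field -- illustrative only. The force rule and
# the constants N, STEPS, K are my assumptions, not the author's formula.
N, STEPS, K = 200, 400, 0.2          # cells, time steps, stiffness (assumed)
density = [0.0] * N
velocity = [0.0] * N
density[N // 2] = 1.0                # a single initial density spike

for _ in range(STEPS):
    for i in range(1, N - 1):
        # Denser neighbours push a cell's density up; K limits how fast
        # density may change (the post's "reluctance to alteration").
        velocity[i] += K * (density[i - 1] + density[i + 1] - 2 * density[i])
    for i in range(N):
        density[i] += velocity[i]

# After many steps the spike has dispersed into outward-travelling waves.
print(round(max(density), 3), round(min(density), 3))
```

Doing the same in a spreadsheet amounts to putting this update rule in each cell of a row and copying the row downward, one row per time step.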
Provided by: umview_0.6-1_i386
umview - User Mode implementation of View-OS
umview [ options ] prog
The main goal of the View-OS project is to give each process its own
view of the system resources. For example, each process can mount
filesystems or hide some file or directories. It is also possible to
assign virtual network interfaces, IP addresses or define virtual
devices for each process (or for hierarchies of processes).
umview is the user mode implementation of View-OS concepts. It is a
modular partial virtual machine. umview before loading any module is
completely transparent, a process behaves inside umview as it would
have behaved outside. Each module can customize specific entities:
there are modules to mount filesystems at user-level (umfuse), to define
virtual networks (umnet), to define virtual devices (umdev), to provide
interpreters for executables, e.g. to support executables for foreign
architectures (umbinfmt), and to hide, move, or overlay parts of the
file system (viewfs).
These are some examples of modules provided by the View-OS team.
umview aims to provide a general interface to customize the system call
semantics of process under specified conditions. So more modules will
be added both by the View-OS team and by third parties.
umview can also be used as a login shell in /etc/passwd (see
passwd(5)). When umview is a login shell the mapping between each user
and his/her startup script is in the file /etc/viewospasswd (see
Load the specified startup script instead of the standard one
($(HOME)/.viewosrc). /etc/viewosrc is always loaded first, and
then either $(HOME)/.viewosrc or the script specified by this option.
Set the name of the view. The view name can be read and set using the
vuname or viewname commands.
-p module [ , module_options ]
--preload module [ , module_options ]
preload modules. Modules will be loaded as shared libraries thus
all the rules to load libraries apply. Modules must be loaded from
a directory within the ld.so search path or should be specified by
their pathnames. If necessary configure the LD_LIBRARY_PATH
environment variable appropriately. module_options are module
specific configuration options, thus the reader should refer to
each service module manual for a complete description. Modules can
be loaded at run time using the um_add_service command.
umview is able to provide module nesting, i.e. a module can provide
services on the basis of virtual services provided by another
module or even by the module itself. For example it is possible to
mount a file system image which is stored in an already virtually
mounted filesystem. This feature requires the pure_libc library.
The -x or --nonesting option disables the nesting feature.
umview is able to use some specific kernel extensions (when
present) to increase its performance. The source distribution of
umview includes the kernel patches for the latest kernels. The
kernel extensions are enabled by default when available. This
option disables the kernel extensions.
This option disables the PTRACE_MULTI kernel extension.
This option disables the PTRACE_SYSVM kernel extension.
This option disables the PTRACE_SYSVIEWOS kernel extension (still
experimental, not yet released).
This option diverts the debugging output to the specified file; it is
useful when umview has been compiled with debugging extensions.
Print the version and exit.
Print a short help message and exit.
user startup scripts information for umview login shells.
global configuration script
user configuration script (it is not automatically loaded in the case
of a login shell; add it at the end of the login script if needed)
um_add_service(1), um_del_service(1), um_ls_service(1),
um_mov_service(1), umfuse(1viewos), lwipv6(1viewos), umdev(1viewos),
umbinfmt(1viewos), viewfs(1viewos), vuname(1viewos), viewname(1viewos),
View-OS is a project of the Computer Science Department, University of
Bologna. Project Leader: Renzo Davoli. Development Team: P. Angelelli,
A. Bacchelli, M. Belletti, P. Beverini, D. Billi, A. Forni, L.
Gardenghi, A. Gasparini, D. Lacamera, C. Martellini, A. Seraghiti
Howto’s and further information can be found on the project wiki.
Distance from Sun and the Milky Way's Center
Name: dutch minott
Date: 1993 - 1999
What is the distance between the center of the milky way and our sun?
The Milky Way has a radius of about 15 kpc, and the Sun is about 10 kpc
from the center. A "kpc" is a kilo-parsec. One parsec is 3.26 light years
or 3.086 x 10^18 cm. I leave the awesome mathematical challenge of
converting to miles (or kilometers) as an exercise for the reader.
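Taking up that exercise: a few lines of Python, using the parsec value quoted above, carry out the conversion for the Sun's distance of about 10 kpc:

```python
# Convert 10 kiloparsecs to kilometers and miles,
# using 1 parsec = 3.086e18 cm as given in the answer.
CM_PER_PC = 3.086e18
CM_PER_KM = 1e5
KM_PER_MILE = 1.609344

pc = 10 * 1000                      # 10 kpc expressed in parsecs
km = pc * CM_PER_PC / CM_PER_KM     # ~3.086e17 km
miles = km / KM_PER_MILE            # ~1.92e17 miles

print(f"{km:.3e} km, {miles:.3e} miles")
```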
Update: June 2012
Clang is a new C-targeted compiler intended specifically to work on top of LLVM.
The combination of clang and LLVM provides the majority of a toolchain, allowing the replacement of the whole GCC stack.
One of clang's primary goals is to better support incremental compilation, allowing the compiler to be more tightly tied to the IDE GUI. GCC is designed to work in a "classic" compile-link-debug cycle, and although it provides useful ways to support incremental and interrupted compiling on-the-fly, integrating them with other tools is not always easy. For instance, GCC uses a step called "fold" that is key to the overall compile process but has the side effect of translating the code tree into a form that does not look very much like the original source code. If an error is found during or after the fold step, it can be difficult to translate it back into a single location in the original source. Additionally, vendors using the GCC stack within IDEs have used separate tools to index the code to provide features like syntax highlighting.
Clang is designed to retain more information during the compilation process than GCC, and to preserve the overall form of the original code. The objective is to make it easier to map errors back into the original source. The error reports offered by Clang also aim to be more detailed and specific, as well as machine-readable, so IDEs can index the output of the compiler during compilation. Since the compiler is always running, it can offer source code indexing, syntax checking, and other features normally associated with rapid application development. The parse tree is also more suitable for supporting automated code refactoring, as it remains in a parsable text form at all times. Changes to the compiler can be checked by diffing.
GCC systems don't support threading at the single-compilation level and cannot take advantage of multi-processor hardware for single compilation units. Clang was designed from the start to be threaded, and aims for a reduced memory footprint and increased speed. As of October 2007, clang compiled the Carbon libraries well over twice as fast as GCC, while using about five times less memory and disk space.
Although development on GCC may be difficult, the reasons for this have been well explored by its developers. This allowed the clang team to avoid these problems and build a more flexible system. Clang is highly modularized, based almost entirely on replaceable link-time libraries (as opposed to source code modules that are combined at compile time), and is well documented. This makes it much easier for new developers to get up to speed in clang and add to the project. In some cases the libraries are provided in several versions that can be swapped out at runtime; for instance, the parser comes with a version that offers performance measurement of the compile process.
Sea Otters and Polar Bears: Marine Fissipeds
Two Alaska marine mammals are neither pinniped nor cetacean: the polar bear and sea otter. They are both fissipeds, “split-footed” members of the order Carnivora, and are more closely related to terrestrial carnivores, like weasels, than seals or whales. Evolutionary newcomers to the marine environment, these species lack many of the physiologic adaptations to marine life seen in pinnipeds and cetaceans. Both species are considered marine mammals under U.S. laws because of the roles they play in the marine environment.
Polar bears, in the bear family (Ursidae), spend most of their lives associated with marine ice and waters. Although competent swimmers, they are the marine mammal least adapted to aquatic existence. They rest, mate, give birth, and suckle their young on the ice, and as such, are vulnerable to reductions in the extent and duration of sea ice.
Sea otters, in the weasel family (Mustelidae), live a primarily marine life: they rest, mate, give birth, and suckle their young in the water. Their hind limbs are webbed for swimming, but their front paws are padded with separate, clawed digits. They lack blubber, but are insulated by air trapped in their thick fur, which is densest among all mammals.
Taxonomic Relationships of Alaska Fissipeds
Identifying Swimming Mammals
Some terrestrial (land-based) mammals can be confused with marine mammals when seen swimming in coastal marine waters. Although their silhouette head profiles may be similar, the swimming behavior of terrestrial and marine mammals differs and may be used to distinguish the two groups.
Most terrestrial mammals like bears, deer, and moose rarely submerge, and their backs may be visible while they are swimming. Beavers, mink, and river otters may submerge momentarily but resurface within seconds. Although river otters swim and roll at the surface like sea otters, they never float on their backs. Continued observation of a swimming mammal's behavior may be required to distinguish terrestrial from marine species.
Learn more physics!
If a photon has zero mass, how can gravity affect light? What is the rationale behind the Pauli exclusion principle, and why can no two fermions have the same quantum state? What is the significance of the Pauli exclusion principle?
- Mansur (age 27)
We've answered the photon question several times before. Use our search for 'photon mass gravity'.
The Pauli exclusion principle is exactly the same as the statement that no two fermions can be in exactly the same quantum state. It was first found as a simple observation. It was later understood, via the 'spin-statistics theorem', to be a necessary consequence of Special Relativity combined with quantum theory. I think that this site is not the right place to try to reproduce that argument, but you can easily find it in print or online.
I'm not sure quite what you mean by 'significance'. If there were no exclusion principle everything would collapse. I think it's significant that that doesn't happen.
(published on 09/04/2008)
Follow-up on this answer.
Courtesy of EarthSky
A Clear Voice for Science
Can you see that the moon is farther from Spica tonight than it was last night? The moon shifts farther east, with respect to the stars, each day. The moon always moves toward the east on our sky’s dome. This motion is a translation onto our sky’s dome of the moon’s orbit around Earth.
You can observe the moon’s orbital motion from one night to the next by watching the moon’s location with respect to background stars.
Alternatively, you can look outside each evening at the same time to notice that the moon is in a more easterly location on the sky’s dome than it was the night before. Just remember, when you do this, that you’re actually observing the moon moving in its orbit around Earth.
The moon is now at the waxing gibbous phase. A waxing gibbous moon carries that designation because it is more than half illuminated but less than full. The terminator – or the shadow line dividing the lunar day from the lunar night – shows you where it’s sunrise on the waxing moon.
Every July, you’ll find the moon at or near the same phase when it swings between Spica, and the stars Zubenelgenubi and Zubeneschamali in the constellation Libra. These two Libra stars have been seen as a “gateway” on the sky’s dome by stargazers in times past. That’s because at certain times in the moon’s cycle the moon passes in between Libra’s two brightest stars. For the next several years, though, the moon rides too far south to travel the passage between Zubenelgenubi and Zubeneschamali. The moon won’t start to pass through the celestial “gateway” until the year 2014!
Written by Deborah Byrd
I have created a calculator applet using AWT. I have a problem with the text field: when you type, a TextField displays text from left to right, but a calculator displays from right to left. What should I do so that the TextField prints from the right?
Joined: Mar 22, 2005
You can pad whatever you want to display with spaces on the left side. If you use a monospaced font (like Courier), then the width taken up by a space character is the same as the width of any digit. So if you want to display a 5-digit number, you'd prepend 20 spaces to it (assuming your TextField is 25 characters wide).
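A minimal sketch of that approach (the class and method names are my own, not from the original poster's applet):

```java
import java.awt.Font;
import java.awt.TextField;

class RightAlignedField {
    /** Left-pad value with spaces so it fills exactly width characters. */
    static String padLeft(String value, int width) {
        StringBuilder sb = new StringBuilder();
        for (int i = value.length(); i < width; i++) sb.append(' ');
        return sb.append(value).toString();
    }

    /** Build a 25-character calculator display that shows values flush right. */
    static TextField makeDisplay() {
        TextField display = new TextField(25);
        // Monospaced font: a space is exactly as wide as any digit,
        // so left-padding lines the digits up with the right edge.
        display.setFont(new Font("Monospaced", Font.PLAIN, 14));
        display.setText(padLeft("0", 25));
        return display;
    }
}
```

Each time the displayed value changes, call `display.setText(padLeft(newValue, 25))`; it is the padding, not the component, that produces the right-aligned appearance.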
The core technologies at the heart of Web services (WSDL, SOAP, and UDDI) are designed to ease interoperability. In particular, SOAP, a protocol built upon XML, is the fundamental Web service transport technology. Because XML is a widely accepted technology for passing structured data, it is tempting to believe that SOAP messages provide interoperability. But XML is not the "silver bullet" for solving integration problems.
To achieve true interoperability across multiple companies, standard syntax and semantics are both required. For example, let's define a simple data structure for a purchase order. The over-simplified purchase order example will contain only the company name, product identifier, and price.
<name> "Widgets, Inc." </name>
<productid> 123456 </productid>
<price> "$59.95" </price>
This example represents pricing data as a string. If a partner were to instead represent pricing data as a floating-point number, data type errors would be likely to occur. Therefore, agreement among partners about how to handle data is essential. XML solves data typing issues through the use of XML schemas. The following example shows how each element in the purchase order can be designated a specific data type:
<xs:schema xmlns="" xmlns:xs="http://www.w3.org/2001/XMLSchema">
<xs:element name="name" type="xs:string"/>
<xs:element name="productid" type="xs:int"/>
<xs:element name="price" type="xs:string"/>
</xs:schema>
With the XML schemas defined, you can create the WSDL that defines the input and output operations. In this scenario, there might be an XML type defining the "order_request" and another defining the "order_response." The following WSDL could be used to tie the request and response messages together:
<definitions xmlns:tns="uri:order.hp.com" >
<wsdl:part element="tns:order_request" name="input"/>
<wsdl:part element="tns:order_response" name="output"/>
In the WSDL example above, the request and response messages are mapped to the specific XML schema types. Then, a specific operation is defined, called "placeOrder" that connects the request with the response message. This simple example illustrates how XML schemas can be used to identify specific data types that are required for a Web service and how WSDL messages and operations can be constructed. Developers wishing to use the service (for example, those who work for your partners) then know the exact contract or interface required.
Things get more complicated when you try to understand the meaning, or underlying semantics, of the data. In my fictitious example, there are two major assumptions about products and prices: First, that products will always be represented with numbers, and second, that pricing can be expressed in U.S. dollars. Both assumptions are unlikely to be tenable in the real world.
Neither XML nor WSDL can solve such semantic issues. They must be agreed upon in advance by the various partners in the interaction, and a mechanism to translate from one structure to another must be developed. Technologies such as XSLT (the Extensible Stylesheet Language Transformations) and XPath (the XML Path Language) meet that need.
There are tools available that provide direct support for mapping between different XML formats or schemas. For example, with Cape Clear's CapeStudio, developers can input two XML schemas, define mappings between the types, and the product will automatically generate an XSLT document for the translation. BEA's WebLogic Workshop provides a Map and Interface Editor in which you can easily set up XML mappings between incoming XML requests and back-end logic. Other platforms can utilize handlers to pre-process incoming SOAP requests. Here, a developer could introduce XSLT logic to transform the XML document if necessary.
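As a sketch of what such a generated mapping might look like, the small XSLT below (the `order` root element and overall structure are assumed for illustration, not taken from any vendor tool) converts the string-typed price from the earlier example into a plain decimal that a partner expecting a floating-point type could consume:

```xml
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="order">
    <order>
      <xsl:copy-of select="name"/>
      <xsl:copy-of select="productid"/>
      <!-- strip the dollar sign and quote marks, then coerce to a number:
           "$59.95"  becomes  59.95 -->
      <price>
        <xsl:value-of select='number(translate(price, "$&quot;", ""))'/>
      </price>
    </order>
  </xsl:template>
</xsl:stylesheet>
```

The same pattern generalizes: each template matches one partner's schema and emits the other's, so semantic agreements are captured once in the stylesheet rather than in every application.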
OpenCL provides many benefits in the field of high-performance computing, and one of the most important is portability. OpenCL-coded routines, called kernels, can execute on GPUs and CPUs from such popular manufacturers as Intel, AMD, Nvidia, and IBM. New OpenCL-capable devices appear regularly, and efforts are underway to port OpenCL to embedded devices, digital signal processors, and field-programmable gate arrays.
Not only can OpenCL kernels run on different types of devices, but a single application can dispatch kernels to multiple devices at once. For example, if your computer contains an AMD Fusion processor and an AMD graphics card, you can synchronize kernels running on both devices and share data between them. OpenCL kernels can even be used to accelerate OpenGL or Direct3D processing.
Despite these advantages, OpenCL has one significant drawback: it's not easy to learn. OpenCL isn't derived from MPI or PVM or any other distributed computing framework. Its overall operation resembles that of NVIDIA's CUDA, but OpenCL's data structures and functions are unique. Even the most introductory application is difficult for a newcomer to grasp. You really can't just dip your foot in the pool; you either know OpenCL or you don't.
My goal in writing this article is to explain the concepts behind OpenCL as simply as I can and show how these concepts are implemented in code. I'll explain how host applications work and then show how kernels execute on a device. Finally, I'll walk through an example application with a kernel that adds 64 floating-point values together.
Host Application Development
In developing an OpenCL project, the first step is to code the host application. This runs on a user's computer (the host) and dispatches kernels to connected devices. The host application can be coded in C or C++, and every host application requires five data structures:
cl_device_id, cl_kernel, cl_program, cl_command_queue, and cl_context.
When I started learning OpenCL, I found it hard to remember these structures and how they work together, so I devised an analogy: an OpenCL host application is like a game of cards.
A Game of Cards
In a card game, a dealer sits at a table with one or more players and distributes cards from a deck. Each player receives these cards as part of a hand and then analyzes how best to play. The players can't interact with one another or see another player's cards, but they can make requests to the dealer for additional cards or a change in stakes. The dealer handles these requests and takes control once the game is over. Figure 1 illustrates this analogy.
In addition to the dealer and the players, Figure 1 also depicts the table that supports the game. The players seated at the table don't have to take part, but only those seated at the table can participate in the game.
The Five Data Structures
In my analogy, the card dealer represents the host. The other aspects of the game correspond to the five OpenCL data structures that must be created and configured in a host application:
- Device: OpenCL devices correspond to the players. Just as a player receives cards from the dealer, a device receives kernels from the host. In code, a device is represented by a cl_device_id.
- Kernel: OpenCL kernels correspond to the cards. A host application distributes kernels to devices in much the same way a dealer distributes cards to players. In code, a kernel is represented by a cl_kernel.
- Program: An OpenCL program is like a deck of cards. In the same way that a dealer selects cards from a deck, the host selects kernels from a program. In code, a program is represented by a cl_program.
- Command queue: An OpenCL command queue is like a player's hand. Each player receives cards as part of a hand, and each device receives kernels through a command queue. In code, a command queue is represented by a cl_command_queue.
- Context: OpenCL contexts correspond to card tables. Just as a card table makes it possible for players to transfer cards to one another, an OpenCL context allows devices to receive kernels and transfer data. In code, a context is represented by a cl_context.
To clarify this analogy, Figure 2 shows how these five data structures work together in a host application. As shown, a program contains multiple functions, and each kernel encapsulates a function taken from the program.
Once you understand how host applications work, learning how to write code is straightforward. Most of the functions in the OpenCL API have straightforward names like clGetDeviceInfo. Given the analogy, it should be clear that clCreateKernel requires a cl_program structure to execute, and clCreateContext requires one or more cl_device_id structures.
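To make the correspondence concrete, here is a bare-bones host skeleton in C showing the order in which the five structures are typically created. Error checking is omitted, the trivial kernel source is a placeholder of mine, and building it requires an OpenCL SDK and the vendor's OpenCL library:

```c
/* Minimal OpenCL host skeleton: the five structures in creation order.
 * A real host application must check every cl* return code. */
#include <CL/cl.h>

int main(void) {
    cl_platform_id platform;
    cl_device_id device;          /* the "player" */
    cl_context context;           /* the "table"  */
    cl_command_queue queue;       /* the "hand"   */
    cl_program program;           /* the "deck"   */
    cl_kernel kernel;             /* the "card"   */

    /* Placeholder kernel source -- not the 64-float addition example. */
    const char *src = "__kernel void add64(__global float *v) { }";

    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_ALL, 1, &device, NULL);
    context = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
    queue = clCreateCommandQueue(context, device, 0, NULL);
    program = clCreateProgramWithSource(context, 1, &src, NULL, NULL);
    clBuildProgram(program, 1, &device, NULL, NULL, NULL);
    kernel = clCreateKernel(program, "add64", NULL);

    /* ... set kernel arguments, enqueue the kernel, read results ... */

    clReleaseKernel(kernel);
    clReleaseProgram(program);
    clReleaseCommandQueue(queue);
    clReleaseContext(context);
    return 0;
}
```

Note how the analogy maps onto the calls: the context (table) is created from devices (players), the program (deck) is built within the context, and each kernel (card) is extracted from the built program before being dealt to a device through its command queue (hand).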
Shortcomings of the Analogy
My analogy has its flaws. Six significant shortcomings are given as follows:
- There's no mention of platforms. A platform is a data structure that identifies a vendor's implementation of OpenCL. Platforms make it possible to access devices. For example, you can access an Nvidia device through the Nvidia platform.
- A card dealer doesn't choose which players sit at the table. However, an OpenCL host selects which devices should be placed in a context.
- A card dealer can't deal the same card to multiple players, but an OpenCL host can dispatch the same kernel to multiple devices through their command queues.
- The analogy doesn't mention how devices execute kernels. Many OpenCL devices contain multiple processing elements, and each element may process a subset of the input data. The host identifies the number of work items that should be generated to execute the kernel.
- In a card game, the dealer distributes cards to players and each player arranges the cards to form a hand. In OpenCL, the host creates a command queue for each device and enqueues commands. One type of command tells a device to execute a kernel.
- In a card game, the dealer passes cards in a round-robin fashion. OpenCL sets no constraints on how host applications distribute kernels to devices.
At this point, you should understand that a large part of a host application's job involves creating kernels and deploying them to OpenCL-compliant devices such as GPUs, CPUs, or hybrid processors. Next, I'll discuss how these kernels execute on the devices.
February 28, 2007
GCRIO Program Overview
Archives of the Global Climate Change Digest
A Guide to Information on Greenhouse Gases and Ozone Depletion
Published July 1988 through June 1999
FROM VOLUME 5, NUMBER 3, MARCH 1992
Save the Planet 1991, IBM or Macintosh, $20 ($28 overseas). Includes a rudimentary climate change model, a tutorial on the science of climate change and ozone depletion, energy saving tips, and resources for activists. Geared toward the general public with an emphasis on secondary school and college students. Available from Save the Planet Software, Box 45, Pitkin CO
Hothouse Planet, IBM PC or Apple II, grades 9-14, $70. Users first set values for levels of CO2, methane, nitrous oxide, CFCs, volcanic aerosols and solar irradiance, then explore the projected results over the next decade. Order from EME Corp., POB 2805, Danbury CT 06813.
Figure 2.2: Trends in annual diurnal temperature range (DTR, °C/decade), from 1950 to 1993, for non-urban stations only, updated from Easterling et al. (1997). Decreases are in blue and increases in red. This data set of maximum and minimum temperature differs from and has more restricted coverage than those of mean temperature used elsewhere in Section 2.2.
Figure 2.3: Cloud cover (solid line) and DTR (°C, dashed line) for Europe, USA, Canada, Australia, the former Soviet Union, and eastern China (from Dai et al., 1997a). Note that the axis for DTR has been inverted. Therefore, a positive correlation of cloud cover with inverted DTR indicates a negative cloud cover/DTR correlation.
Since the DTR is the maximum temperature minus the minimum temperature, the DTR can decrease when the trend in the maximum or minimum temperature is downward, upward, or unchanging. This contributes to less spatial coherence on the DTR map than on maps of mean temperature trend. Maximum temperatures have increased over most areas with the notable exception of eastern Canada, the southern United States, portions of Eastern and southern Europe (Brunetti et al., 2000a), southern China, and parts of southern South America. Minimum temperatures, however, increased almost everywhere except in eastern Canada and small areas of Eastern Europe and the Middle East. The DTR decreased in most areas, except over middle Canada, and parts of southern Africa, south-west Asia, Europe, and the western tropical Pacific Islands. In some areas the pattern of temperature change has been different. In both New Zealand (Salinger, 1995) and central Europe (Weber et al., 1994; Brázdil et al., 1996) maximum and minimum temperatures have increased at similar rates. In India the DTR has increased due to a decrease in the minimum temperature (Kumar et al., 1994). Eastern Canada also shows a slight increase in DTR (Easterling et al., 1997). However, recently annual mean maximum and minimum temperatures for Canada have been analysed using newly homogenised data (Vincent, 1998; Vincent and Gullet, 1999); these have increased by 0.3 and 0.4°C, respectively, over the last fifty years (Zhang et al., 1999). Central England temperature also shows no decrease in DTR since 1878 (Parker and Horton, 1999). Similarly, a new temperature data set for north-east Spain (not included in Figure 2.2; Brunet-India et al., 1999a,b) shows maximum temperature increasing about twice as fast as minimum temperature over 1913 to 1998.
Recent analyses by Quintana-Gomez (1999) reveal a large reduction in the DTR over Venezuela and Colombia, primarily due to increasing minimum temperatures (up to 0.5°C/decade). In northern China, the decrease in DTR is due to a stronger warming in minimum temperature compared with maximum temperatures. However, in southern China the decreased DTR is due to a cooling in maximum with a slight warming in minimum temperature (Zhai and Ren, 1999).
The DTR is particularly susceptible to urban effects. Gallo et al. (1996) examined differences in DTR between stations based on predominant land use in the vicinity of the observing site. Results show statistically significant differences in DTR between stations associated with predominantly rural land use/land cover and those associated with more urban land use/land cover, with rural settings generally having larger DTR than urban settings. Although this shows that the distinction between urban and rural land use is important as one of the factors that can influence the trends observed in temperatures, Figure 2.2 shows annual mean trends in diurnal temperature range in worldwide non-urban stations over the period 1950 to 1993 (from Easterling et al., 1997). The trends for both the maximum and minimum temperatures are about 0.005°C/decade smaller than the trends for the full network including urban sites, which is consistent with earlier estimated urban effects on global temperature anomaly time-series (Jones et al., 1990).
Minimum temperature for both hemispheres increased abruptly in the late 1970s, coincident with an apparent change in the character of the El Niño-Southern Oscillation (ENSO) phenomenon, giving persistently warmer sea temperatures in the tropical central and east Pacific (see Section 2.6.2). Seasonally, the strongest changes in the DTR were in the boreal winter (-0.13°C/decade for rural stations) and the smallest changes were during boreal summer (-0.065°C/decade), indicating some seasonality in the changes. Preliminary extensions of the Easterling et al. (1997) analysis to 1997 show that the declining trends in DTR have continued in much of North America and Asia.
Figure 2.3 shows the relationship between cloudiness and the DTR for a number of regions where long-term cloud cover data are available (Dai et al., 1997a). For each region there was an increase in cloud cover over the 20th century and generally a decrease in DTR. In some instances the correlation between annual cloud cover and annual DTR is remarkably strong, suggesting a distinct relationship between cloud cover and DTR. This would be expected since cloud dampens the diurnal cycle of radiation balance at the surface. Anthropogenically-caused increases in tropospheric aerosol loadings have been implicated in some of these cloud cover changes, while the aerosols themselves can cause small changes in DTR without cloud changes (Hansen et al., 1998 and Chapter 6).
Cercomonas clavideferens is a protozoan, a single-celled organism, about 20 microns (thousandths of a millimetre) long, only visible under a powerful microscope.
Cercomonas clavideferens is abundant in soils and freshwater sediments, but has not been found alive in the sea. There are probably millions of these organisms in every kilogram of soil.
Cercomonas clavideferens is a voracious predator of bacteria, but probably eats a much wider range of organisms. Some Cercomonas species are known to gang up on and kill nematode worms, many times their size.
Cercomonas clavideferens has two flagella, one pointing in the direction of movement and one trailing behind the cell. These can be seen as thin projections extending away from the cell in the photographs above. The flagella are about twice as long as the cell and are used for locomotion, feeding, and probably sensing the cell’s immediate environment.
Find out more about the taxonomic ranks to which Cercomonas clavideferens belongs.
Discover the types of habitat that Cercomonas clavideferens is known from, why it is important in microbial food webs and how understanding this species and its wider family can help us learn how environments 'work'. Find out about the feeding patterns of the species.
Learn about the different forms of Cercomonas clavideferens and the ability of the cells to join together and fuse to form multicellular masses.
Get reference material for Cercomonas clavideferens.
An SEM of the moving, feeding forms of Cercomonas clavideferens.
The cyst form of Cercomonas clavideferens, which can survive long periods of desiccation and other unfavourable conditions.
A multicellular plasmodial form of Cercomonas clavideferens showing individual cells joined by strands of cytoplasm.
A multicellular plasmodial form of Cercomonas clavideferens showing cells combined into a single rounded mass.
Editor’s note (10/9/2012): We are making the text of this article freely available for 30 days because the article was cited by the Nobel Committee as a further reading in the announcement of the 2012 Nobel Prize in Physics. The full article with images, which originally appeared in the June 1997 issue, is available for purchase here.
“I am sorry that I ever had anything to do with quantum theory,” Erwin Schrödinger reportedly complained to a colleague. The Austrian physicist was not lamenting the fate of his now famous cat, which he figuratively placed in a box with a vial of poison in 1935. Rather he was commenting on the strange implications of quantum mechanics, the science behind electrons, atoms, photons and other things submicroscopic. With his feline, Schrödinger attempted to illustrate the problem: according to quantum mechanics, particles jump from point to point, occupy several places at once and seem to communicate faster than the speed of light. So why don’t cats—or baseballs or planets or people, for that matter—do the same things? After all, they are made of atoms. Instead they obey the predictable, classical laws quantified by Isaac Newton. When does the quantum world give way to the physics of everyday life? “That’s one of the $64,000 questions,” chuckles David Pritchard of the Massachusetts Institute of Technology.
Pritchard and other experimentalists have begun to peek at the boundary between quantum and classical realms. By cooling particles with laser beams or by moving them through special cavities, physicists have in the past year created small-scale Schrödinger’s cats. These “cats” were individual electrons and atoms made to reside in two places simultaneously, and electromagnetic fields excited to vibrate in two different ways at once. Not only do they show how readily the weird gives way to the familiar, but in dramatic fashion they illustrate a barrier to quantum computing—a technology, still largely speculative, that some researchers hope could solve problems that are now impossibly difficult.
The mystery about the quantum-classical transition stems from a crucial quality of quantum particles—they can undulate and travel like waves (and vice versa: light can bounce around as a particle called a photon). As such, they can be described by a wave function, which Schrödinger devised in 1926. A sort of quantum Social Security number, the wave function incorporates everything there is to know about a particle, summing up its range of all possible positions and movements.
Taken at face value, a wave function indicates that a particle resides in all those possibilities at once. Invariably, however, an observation reveals only one of those states. How or even why a particular result emerges after a measurement is the point of Schrödinger’s thought experiment: in addition to the cat and the poison, a radioactive atom goes into the box. Within an hour, the atom has an even chance of decaying; the decay would trigger a hammer that smashes open the vial of antifeline serum.
The Measurement Problem
According to quantum mechanics, the unobserved radioactive atom remains in a funny state of being decayed and not decayed. This state, called a superposition, is something quantum objects enter quite readily. Electrons can occupy several energy levels, or orbitals, simultaneously; a single photon, after passing through a beam splitter, appears to traverse two paths at the same time. Particles in a well-defined superposition are said to be coherent.
But what happens when quantum objects are coupled to a macroscopic one, like a cat? Extending quantum logic, the cat should also remain in a coherent superposition of states and be dead and alive simultaneously. Obviously, this is patently absurd: our senses tell us that cats are either dead or alive, not both or neither. In prosaic terms, the cat is really a measuring device, like a Geiger counter or a voltmeter. The question, then, is: Shouldn't measuring devices enter the same indefinite state as the quantum particles they are designed to detect?
Why Bays Matter
Texas needs bays, and Texas bays need fresh water
By Larry McKinney
The wind on Redfish Bay was cold and gusty as we prepared to launch our kayaks on an early morning last October. We had to cross a shrimp-boat channel where the north wind was pitching a steady train of high swells across our path, but that was a minor impediment. Once across, we turned into one of the myriad of tidal channels connecting the mangrove maze, seagrass flats and lakes that border the southwestern margin of the bay.
The winds diminished rapidly and when the sun rose fully above the horizon, we were treated to a scene that Audubon should have painted. Hundreds of birds, from stubby pelicans to elegant blue herons, had sought refuge from the wind in the stunted mangroves. Only a few feet from the bows of our kayaks, the common birds of the Texas Coast flapped and waded and hopped in a riot of colors, ranging from the hot pink of roseate spoonbills to the dun of plovers.
We glided past one another along the winding channels in the silent truce of the windblown. The occasional boom of distant duck hunters chased thousands of redhead ducks low overhead, seeking safer waters. In the middle of one such darkening passage of ducks, I spotted the first redfish of the day, industriously working the shallow bottom, its fanlike, iridescent tail waving in the air. Several more tails popped up behind that one. What a choice - birds or fish.
The fish won and I cast my voodoo-child fly just to its left. Even with the wind, the water in the protected lake was fairly clear, so I could see the fish briefly hesitate. I never finished my first strip. The redfish hit, I set the hook and the shallow lake exploded into action as the fly line ripped upward in a curtain of water. The fish and most of his buddies headed for the far side of the bay.
What is remarkable about this experience is that it is not an uncommon one on the Texas Coast. Yet few Texans would recognize it as something within their grasp. It is possible because of our state's most valuable and under-appreciated natural resources: our estuaries. Flowing into some 2.6 million acres of coastal waters, Texas estuaries create diverse wetlands that support the production of 100 million pounds of seafood annually and sustain an internationally recognized birding Mecca.
From space, Texas estuaries appear as evenly spaced pearls strung along 360 miles of coastline. Each of the seven major estuaries, or bays, as we more commonly refer to them, is different from the next. Their names ring with Texas history. LaSalle's ship foundered in Matagorda Bay. We won our independence from Mexico at San Jacinto, on the margins of Galveston Bay. The pirate Jean Lafitte cruised the waters of San Antonio Bay, slipping out to sea through Cedar Bayou.
Few Texans recognize our bays and estuaries for much more than this, if they note them at all. Less than half of the population of the City of Houston and Harris County, which occupies much of the northern margins of Galveston Bay, has swum in, fished in or boated on the bay.
Perhaps that helps explain why bays and estuaries have failed to win a place in Texas mythology, which is full of cowboys, oil rigs and wide-open land. It's difficult to appreciate the natural wonder of our bays as you whiz by them on the highway. At 70 miles an hour, they appear as dull expanses of water broken by intermittent stretches of marsh and mudflat. To really see their remarkable nature, you have to get out into them, and few people seem willing to do so nowadays.
Another reason our bays go unrecognized is their resilient nature. We tend to take them for granted, even though they deserve as much protection as other noteworthy ecosystems. We fret about and raise money to save Central American rainforests, old-growth timber in the West and coral reefs everywhere, and all the while we ignore the plight of the treasure at our back door. We fill in the wetlands (about half of Texas coastal wetlands are gone) to provide housing for the fastest-growing areas in the state. We crisscross bay bottoms with channels, drastically altering hydrology to speed commerce and promote petroleum development. We depend upon these waters to treat our waste and assimilate our pollution, which results in the closure of more than 30 percent of their waters to shellfish harvest. Through all of this abuse, our bays and estuaries persevere, absorbing blow after blow, rebounding only to suffer new abuse and serve yet again.
The resiliency of our estuaries is their greatest strength and ultimately may be their greatest weakness. Despite all of the abuses, each year these coastal ecosystems generate $2 billion in economic benefits from recreational fishing alone. Commercial fisheries average another $266 million. Coastal destinations account for about 30 percent of travel in Texas, and that translates into $10 billion in economic benefits each year. All these benefits are based on healthy and productive estuaries.
The good news for Texans is that our estuaries are absorbing all that we throw at them and they seem to come back for more. That means we still have the time to take those actions necessary to preserve them for our children. The bad news swirling below the surface and out of sight is that no matter how resilient our estuarine systems are, they do have breaking points. The world abounds with examples of broken systems: the Aral Sea, the Colorado River (the western one emptying into the Gulf of California), the Mississippi River, the Everglades, the Nile and on and on. The reasons for their destruction, in hindsight, are obvious: poor planning, greed, ignorance and just plain bad luck.
We can see the breaking point for Texas bays rushing toward us in the form of people. Texas' population is predicted to nearly double in the next 50 years. We already have used half of our natural resources, such as wetlands and hardwood bottomlands, to get where we are now. We cannot continue on that course. Unless we take steps to protect our bays and estuaries now, we may lose them in a crisis in the next 10 or 15 years.
We do not have to await that crisis. We can act now and do so responsibly and reasonably, in a way that balances all needs - municipal, industrial, agricultural and environmental. For Texas estuaries, the key to the future is water - freshwater inflows to maintain their integrity.
Freshwater inflows are important to these ecosystems for the most fundamental of reasons. An estuary is that place on the coast where fresh water from rivers meets and mixes with seawater. Sabine Lake, Galveston Bay, Matagorda Bay, San Antonio Bay and Corpus Christi Bay are vast caldrons where freshwater inflows create salinity gradients that expand and contract with drought and flood. Along with fresh water, the rivers that empty into them bring nutrients and sediments that feed both fish and wildlife and the wetlands in which they live and grow. Shrimp, crabs, oysters, redfish and spotted seatrout, to name only a few, have evolved to take advantage of these dynamic ecosystems. Their life cycles are inexorably linked to the ebb and flow of water into these systems. Adapted to flood and drought, they require both to prosper. Freshwater inflows mean fish to catch and shrimp to eat. If estuaries are like factories, the resource that fuels them is fresh water.
We have not always recognized that fact in Texas. Often has been heard the cry: "A drop of water past my dam is a drop of water wasted." Our earliest water plans, in the 1950s, proposed a canal to run the length of the coast that would capture flow from 11 major rivers and divert it to South Texas to irrigate a million acres of agricultural lands. To its credit, the plan did recognize the need for freshwater inflows to estuaries, and allocated 2.5 million acre-feet to supplement them. The average annual inflow to Texas estuaries is approximately 24.5 million acre-feet.
Thus began the first battles in a protracted war that often has pitted one Texan against another. The arguments took on new intensity following the drought of the 1950s, when we realized that water was, or could be, a scarce commodity that we should use wisely. We began to build reservoirs to hold enough water to get us through the next drought. When Texans start a project, we do it big, and today we have 4,790 square miles of surface water, almost as much as Minnesota, the land of 10,000 lakes. The water now captured behind dams serves a real need, for sure, but it is water that no longer nurtures the estuaries.
The Texas Legislature has continued to struggle with water issues, including the needs of bays and estuaries, through many sessions. In 1985, the 69th legislature directed the Texas Water Development Board and the Texas Parks and Wildlife Department to undertake the studies necessary to develop freshwater inflow recommendations for all Texas estuaries. This has been a long and difficult process that created a groundbreaking application of science to resource management that has not happened anywhere else. The study of environmental inflows required thousands upon thousands of hours by dedicated scientists and technicians, millions of dollars and 15 years of effort to complete. Today that work represents the best science available, and we have it just in time.
Senate Bill 1 (SB-1), championed by the late lieutenant governor Bob Bullock, was a historic piece of water legislation adopted by the 75th legislature in 1997. Addressing nearly all aspects of water management in Texas, it positioned the state to meet its growing water needs and provided the tools to do so. Now all that is needed are the means to use those tools to assure that enough environmental water will be provided to our rivers, lakes and estuaries to keep them healthy and productive. The 78th legislature is contemplating the next logical step - the framework within which we apply the science and make use of the tools we have to balance the water needs of Texas.
This is both a complex and a simple problem. The complexity is that no two Texas estuaries are similar. Sabine Lake has an abundance of fresh water and Corpus Christi Bay has too little. More and more thirsty people hem in Galveston Bay, and they live downstream from even more thirsty people in Dallas and Fort Worth. Many people want to move water destined for Matagorda Bay and San Antonio Bay to just about everywhere else. The simplicity is that all it takes to keep our estuaries healthy is water, and not even all of the water they normally receive, but water nonetheless.
We know the problem. We know the solution. We have the science and the means to apply that science. In this we are better prepared by far than anyone who has faced this challenge before us. Mark Twain once said, "Just do the right thing; it will gratify some of the people and it will astound the rest." If we have the will to do so, Texans can astound the rest.
How Do We Know How Much Fresh Water Bays Need?
Rivers and streams are the arteries for our estuaries, constantly carrying the nutrients and sediments that estuaries need in order to thrive. The river delivers the sediment into the quiet waters of the delta marsh, where it settles to the bottom, providing footing for marsh plants and shelter for myriad worms, clams and other animals. Within the sediments are nutrients such as nitrogen and phosphorous that feed marsh plants as well as millions of microscopic floating plants called plankton. The marsh plants shelter juvenile fish, shrimp and crabs from predators. The microscopic plankton are eaten by oysters that build reefs, which provide more shelter for fish and crabs. Without enough fresh water, sediment and nutrients, the estuaries we know and the benefits they provide us would cease to exist.
Understanding the importance of ensuring that estuaries stay healthy, in 1985 the Texas Legislature directed Texas Parks and Wildlife Department (TPWD) and the Texas Water Development Board (TWDB) to calculate how much fresh water, sediment and nutrients our estuaries need to remain healthy. These freshwater inflow studies, guided by Section 11.147 of the Texas Water Code, define beneficial inflows as a "salinity, nutrient, and sediment loading regime adequate to maintain an ecologically sound environment in the receiving bay and estuary system that is necessary for the maintenance of productivity of economically important and ecologically characteristic sport or commercial fish and shellfish species and estuarine life upon which such fish and shellfish are dependent."
TPWD and TWDB developed a method, now nationally recognized, for determining beneficial freshwater inflow needs for estuaries. During the last 15 years, scientists have collected information about the river flows, water circulation patterns, tides, weather, concentrations of salts, nutrients and sediment, and the fish and shellfish populations for seven major Texas estuaries. This information was analyzed in computer models to estimate how much fresh water each estuary needs and what seasons are important for freshwater inflow. Two computer models were created. A computer optimization model produced a freshwater inflow schedule that met state management objectives while producing optimal levels of finfish and shellfish. A second model predicted circulation patterns and salinity gradients that will result from the freshwater inflow patterns. To make sure the predictions of the computer models were reasonable, they were compared to TPWD data on fisheries and salinity for the past 25 years.
The computer model predictions are complete for all seven major Texas estuaries: Sabine Lake, Galveston Bay, Matagorda Bay, San Antonio Bay, Aransas Bay, Nueces Bay and Laguna Madre. Results show that all estuaries need high freshwater inflows during the late spring and early summer. Some estuaries benefit from having elevated freshwater inflows during September and October as well. Freshwater inflow requirements tend to duplicate rainfall patterns. Estuaries in East Texas are adapted to much higher amounts of freshwater inflow than estuaries in South Texas. Since East Texas experiences more rainfall than South Texas, on average, estuaries in the eastern part of the state receive more inflow in the form of runoff as well as freshwater inflows from rivers and streams. Rainfall patterns, which influence river flows, also dictate how much water is available for human uses.
Anyone who takes water from rivers or streams must obtain permission from the Texas Commission on Environmental Quality (TCEQ). The TCEQ must consider the effect on freshwater inflow to estuaries when it issues a permit to take surface water. TCEQ is required to include permit conditions "to the extent practicable when considering all public interests" necessary to maintain beneficial inflows. The freshwater inflow studies conducted by the TPWD and TWDB provide a scientific basis for the TCEQ as it evaluates water rights permits and establishes permit conditions.
- Cindy Loeffler
A cube is a three-dimensional object that has the same width, height, and length. That means that finding out the volume, or total space, of a cube is relatively simple.
Method 1: Traditional Calculation
- 1. Measure the width, length and height of the cube. Look to the image to know where to measure. Once you have accurate measurements, write them down: W = ____, L = ____, H = ____.
- 2. Multiply the height by the width. This will give you the area of the base square. The equation should look like this ("H" symbolizing height, "W" symbolizing width, and "A" symbolizing area):
- H x W = A, the area of the base, which indicates the number of unit squares on the base of the cube.
- 3. Multiply the area of the square-shaped base by the length. This will give you your final answer, the volume. The equation should look like this ("L" symbolizing length, and "V" symbolizing volume):
- L x A = V, the volume.
Method 2: Simple Calculation
- 1. Measure one side of the cube. For example, let's say one side measures 2 inches.
- 2. Cube the length of your side. This just means multiplying the number by itself three times: 2 x 2 x 2.
- 3. Do your calculation. We have 2 x 2 x 2 = 8, so the volume is 8 cubic inches (8 in3).
- 4. Make sure to cube your unit of measurement. This just means putting a small "3" after "in," like this: in3.
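Both methods can be checked with a few lines of code. The function names below are my own choice for illustration, not part of the article.

```python
def cube_volume(side):
    """Method 2: cube the side length (side x side x side)."""
    return side ** 3

def box_volume(length, width, height):
    """Method 1: base area (height x width) times length."""
    return height * width * length

# A cube with 2-inch sides, computed both ways:
print(cube_volume(2))        # 8 (cubic inches)
print(box_volume(2, 2, 2))   # 8 -- a cube is just a box with equal sides
```

Because a cube's width, length and height are all equal, Method 1 always gives the same answer as Method 2.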
What is the universe made of? This is one of the most basic questions humanity has had about the world surrounding us, but has proven one of the most difficult to answer. Our knowledge of matter and energy has progressed rapidly over the last few centuries, leading to a detailed understanding of nearly all of the materials and phenomena around us. Scientists have even begun to understand the principles that have shaped the evolution of our universe itself over the past few billion years.
One of the most shocking discoveries of recent science, however, is that we have much yet to learn. Only a tiny fraction of the universe seems to be composed of the protons, neutrons, and electrons we have labored so long to understand. The identity of the vast majority of the cosmos remains a mystery!
Our present understanding of the universe's composition is roughly as follows:
Baryonic Matter: ~5% of the mass in the universe
This is ordinary matter composed of protons, neutrons, and electrons. It comprises gas, dust, stars, planets, people, etc.
Cold Dark Matter: ~25%
This is the so-called "missing mass" of the universe. It comprises the dark matter halos that surround galaxies and galaxy clusters, and aids in the formation of structure in the universe. The dark matter is said to be "cold" because it is nonrelativistic (slow-moving) during the era of structure formation. Dark matter is currently believed to be composed of some kind of new elementary particle, usually referred to as a weakly interacting massive particle (WIMP).
Dark Energy: ~70%
Through observations of distant supernovae, two research groups have independently discovered that the expansion of the universe appears to be getting faster with time. This seems to require some kind of "antigravity" effect which we do not understand. Cosmologists believe that the acceleration may be caused by some kind of new energy field that permeates the universe, perhaps even the cosmological constant that Einstein imagined almost a century ago. Whatever the source of this phenomenon turns out to be, cosmologists refer to it generically as dark energy.
The focus of these pages is on the second of these components - cold dark matter. We hope to give an idea of why we think this bizarre stuff is out there, what it might be, and how we hope to unlock its mysteries.
Molecular Biology and Genetics
Statistics of barcoding coverage: Xylocopa violacea
Public Records: 0
Specimens with Barcodes: 2
Species With Barcodes: 1
Xylocopa violacea, the violet carpenter bee, is the common European species of carpenter bee and one of the largest bees in Europe. Like most members of the genus Xylocopa, it makes its nests in dead wood.
It is not particularly aggressive and will attack only if forced to. It is sometimes mistaken for the European hornet. This species is well known in India as the 'Bhanvra'.
In 2006 it was reported from Cardigan, and in 2007 it was found breeding in England for the first time, in Leicestershire. This follows a northwards expansion of its range in France and Germany and breeding in the Channel Islands. In 2010 it was also recorded in Northamptonshire and Worcestershire.
Violet Carpenter Bees hibernate over winter and emerge in the spring, usually around April or May. Hibernation is undertaken by the adults in wood where there are abandoned nest tunnels. In the late spring or early summer, they may be seen searching for mates and suitable nesting sites. After mating, the gravid queens bore tunnels in dead wood, which is where the name Carpenter Bee comes from, although old nest tunnels may be used. Like other solitary bees, the queen creates the nest alone. The eggs are laid within a series of small cells, each of which is supplied with a pollen ball for the larvae to feed upon. The adults emerge in late summer and then hibernate until the following year.
Nuclear Waste and Caves
What are the benefits of storing nuclear waste in subterranean salt mining caves?
The main advantage is that since the salt has been in its deposit for millions of years, there could not have been groundwater flowing through the deposit for at least that long. If there had been, it would have carried the salt away. So, if nuclear waste is placed inside the salt bed, chances are good that groundwater won't carry it anywhere else. The nuclear waste will decay away to nonradioactive elements before it leaves the salt bed.
Richard E. Barrans Jr., Ph.D.
PG Research Foundation, Darien, Illinois
Update: June 2012 | <urn:uuid:dd51a15f-15e0-4506-95ca-06c6bfe3ae7f> | 3.03125 | 154 | Knowledge Article | Science & Tech. | 57.498317 |
Botany online 1996-2004. No further updates; this is a historical document of botanical science.
Generally, answers to questions about the numerical capture, analysis, or representation of mass phenomena, such as allele frequencies in populations, are given by statistics and probability calculus. Readings of experiments may be grouped around a mean (= mean value). To find out whether two series of readings represent basically the same or significantly different values, the t-test is performed. It answers the question of how far the means of the two series differ from each other.
The chi2-test is done to see if a result corresponds with the theoretically expected values. The smaller the chi2-value the more probable it is that a deviation is caused merely by chance.
MENDEL, his rediscoverers and the geneticists of this century never got exact but only approximate ratios in their crossings. Ratios like 3:1 or 1:1 are idealized values. Though the interpretation of the mechanism they are based on is plausible, several questions have to be asked not only by a mathematician but also by a practical geneticist:
How large is the deviation from the theoretically expected values allowed to be?
How many specimens have to be counted to regard the results as reliable?
Is there a way to reach the same results with less effort?
Answers to these questions are given by statistics and probability calculus. A clear yes or no is therefore never to be expected; instead, the answers state with what probability an event corresponds with an assumption, or whether there is a significant difference between two series of readings. The geneticist is helped by several formulas that he can apply to his values, and by calculated standards documented in tables that he can refer to. The decisive precondition for the use of these mathematical approaches is the choice of the right formula. It has to be clear whether one's own, experimentally gained data satisfy the respective conditions. They all have to share the same dimensions; absolute values cannot be mixed up with relative values (in percent). Further preconditions that have to be taken into account when performing the respective statistical tests can be found in reviews like that of ZAR (1984).
The mean X of a series of values is calculated as follows:

X = (sum of xi) / n

where xi represents the single values and n the number of the values.
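As a sketch in Python (not part of the original text; the readings are invented):

```python
def mean(values):
    """Arithmetic mean: X = (sum of xi) / n."""
    return sum(values) / len(values)

readings = [4.1, 3.9, 4.0, 4.2, 3.8]
print(mean(readings))  # ~4.0
```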
Mean variation (= standard deviation): When depicting the readings in a histogram, it is usually easily discernible whether they are grouped around a mean or not. If the readings stem from a Gaussian normal distribution, the resulting curve will be bell-shaped with an increasing number n. Only if this is the case does it make sense to go on working with the data statistically, as shown in the following.

The picture (below) shows that the curve of a Gaussian normal distribution can be described by the position of its maximum, which corresponds to its mean X, and by its points of inflection. The distance between X and one of the points of inflection is called the mean variation or standard deviation.
The square of the mean variation is the variance. A series of readings is always a more or less large spot check (sample) of a totality. Spot checks always have a relative error, the standard error or standard deviation of the mean, whose size depends on the number of readings; it can mathematically be expressed by 1 / square root of n. The readings have at first to be standardized; the type and the degree of divergence from a Gaussian normal distribution have to be taken into account. The mean variation of a spot check (s = square root of the mean quadratic deviation) can be ascertained by the following formula:
s = square root of [ sum (xi - X)^2 / (n - 1) ]
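The same formula in Python (a sketch; the example values are arbitrary):

```python
import math

def sample_std(values):
    """s = square root of [ sum (xi - X)^2 / (n - 1) ]"""
    n = len(values)
    x_bar = sum(values) / n
    return math.sqrt(sum((x - x_bar) ** 2 for x in values) / (n - 1))

print(sample_std([2, 4, 4, 4, 5, 5, 7, 9]))  # ~2.14
```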
By integration of the Gaussian normal distribution, the area enclosed by the base line and the curve between +/- 1 sigma, 2 sigma, 3 sigma, etc. can be calculated.
Parameter of a Gaussian distribution: The value P refers to the part of the area that is enclosed by the curve and the base line between the values + and -1 sigma, (light blue area) and + and -2 sigma (light blue + medium blue area), respectively, or + and -3 sigma (light blue + medium blue + dark blue areas).
This means that 68.3 percent of all readings of an ideal distribution scatter with 1 sigma, 95.4 percent with 2 sigma and 99.7 percent with 3 sigma around the mean. These values are important since they are used as standards for most statistical statements. It is thus important for a practitioner to measure and to incorporate his own readings critically so that they can refer to such an ideal distribution.
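The sigma percentages quoted above can be reproduced with the error function of the standard library (a sketch, not part of the original text):

```python
import math

def coverage(k):
    """Fraction of a normal distribution within +/- k sigma of the mean."""
    return math.erf(k / math.sqrt(2))

for k in (1, 2, 3):
    print(k, round(100 * coverage(k), 1))  # 68.3, 95.4, 99.7
```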
An ever-recurring question is whether two (or more) series of readings represent significantly different results, or whether different means are caused merely by chance and thus differ as a result of 'errors'. To solve the problem, the relation of the means of both series to the standard deviation has to be compared. For the comparison of two series of readings, the t-test is used. The aim of the comparison is to examine how far the means Xa and Xb differ from each other. The measure for this is the quantity t.
The probability P that corresponds to a calculated t can be found in probability tables. If Xa and Xb differ by more than 3 sigma, the difference is called significant. The probability that both values tally is < 0.3%, meaning that the probability that they represent different distributions is > 99.7%. If the difference is larger than 2 sigma but smaller than 3 sigma, it is called a secure difference. In this case, the probability (P) of correspondence is about five percent; the probability of difference is accordingly 95 percent. 3 sigma and 2 sigma are also referred to as the one percent and the five percent degrees of confidence, respectively. It is common in statistics to use fractions of the number 1 instead of percent; P would thus be 0.01 and 0.05, respectively. Two further things become clear when looking at the table:
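A minimal sketch of the t statistic in Python (not part of the original text; it uses the common pooled-variance variant, and the sample values are invented — the resulting |t| would still be compared against a t table):

```python
import math

def mean(xs):
    return sum(xs) / len(xs)

def var(xs):
    """Sample variance with the (n - 1) divisor."""
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def t_statistic(a, b):
    """Two-sample t statistic with pooled variance."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * var(a) + (nb - 1) * var(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / math.sqrt(sp2 * (1 / na + 1 / nb))

a = [5.1, 4.9, 5.0, 5.2, 4.8]
b = [5.6, 5.4, 5.5, 5.7, 5.3]
print(t_statistic(a, b))  # ~ -5.0
```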
In a previously shown table, the splitting ratios obtained by MENDEL were depicted. He extrapolated a 3:1 ratio. The chi2-test shows whether this is permitted:
chi2 = sum (d^2 / e), where d = divergence from the expected result and e = the expected value. The smaller the value of chi2, the more probable it is that only chance is responsible for a divergence. Only absolute numbers (never percentages) can be used for the chi2-test. The test shows that the correspondence of MENDEL's numbers with his expectations is very high. The mathematically calculated values for the expectations can also be found in the respective tables. Later studies showed that even much smaller amounts of data are sufficient to obtain significant values.
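The chi2 computation, sketched in Python (not part of the original text; the counts are Mendel's classic seed-shape numbers as usually cited, tested against a 3:1 expectation):

```python
def chi_square(observed, ratio):
    """chi2 = sum of d^2/e, d = observed - expected, e = expected count."""
    total = sum(observed)
    parts = sum(ratio)
    expected = [total * r / parts for r in ratio]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# 5474 round vs. 1850 wrinkled seeds; a small chi2 means the deviation
# from 3:1 is well within what chance alone would produce.
print(chi_square([5474, 1850], [3, 1]))  # ~0.26
```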
Refraction is the change in direction of a wave due to a change in its medium. It is essentially a surface phenomenon, governed by the laws of conservation of energy and momentum. Due to the change of medium, the phase velocity of the wave is changed, but its frequency remains constant. This is most commonly observed when a wave passes from one medium to another at any angle other than 90° or 0°. Refraction of light is the most commonly observed phenomenon, but any type of wave can refract when it interacts with a medium, for example when sound waves pass from one medium into another or when water waves move into water of a different depth. Refraction is described by Snell's law, which states that for a given pair of media and a wave with a single frequency, the ratio of the sines of the angle of incidence θ1 and angle of refraction θ2 is equivalent to the ratio of phase velocities (v1 / v2) in the two media, or equivalently, to the inverse ratio of the indices of refraction (n2 / n1):

sin θ1 / sin θ2 = v1 / v2 = n2 / n1
In general, the incident wave is partially refracted and partially reflected; the details of this behavior are described by the Fresnel equations.
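Snell's law is easy to sketch in code (not part of the original text; the air/water indices and the 45° angle are illustrative values):

```python
import math

def refraction_angle(theta1_deg, n1, n2):
    """Snell's law: n1 sin(theta1) = n2 sin(theta2).
    Returns theta2 in degrees, or None for total internal reflection."""
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    if abs(s) > 1:
        return None  # no refracted ray: total internal reflection
    return math.degrees(math.asin(s))

# Light entering water (n ~ 1.33) from air (n ~ 1.00) at 45 degrees
print(refraction_angle(45, 1.00, 1.33))  # ~32.1 degrees
```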
A drop or droplet is a small column of liquid, bounded completely or almost completely by free surfaces. A drop may form when liquid accumulates at the lower end of a tube or other surface boundary, producing a hanging drop called a pendant drop. Drops may also be formed by the condensation of a vapor or by atomization of a larger mass of liquid. | <urn:uuid:47596991-7e19-4de8-a923-35d3b4b80e5c> | 4.03125 | 330 | Knowledge Article | Science & Tech. | 38.1255 |
New discoveries to science from Kew
Over 250 years, Kew has made many discoveries about the fascinating worlds of plants and fungi. Each year, many new species of plant and fungi are discovered by our world class scientists.
We discover new things about the plants and fungi every day. This includes how different species relate to one another and new ways to use plants to make life easier and better.
11 Aug 2009
The great rainforests of Africa are some of the most species-rich natural habitats in the world and we found many new and exciting plants on our recent expedition there.
07 Aug 2009
By recording the variety of plant life across Africa we identify threatened species and regions and help save plant life and habitats under threat.
We rely on plants to feed, house and clothe us, but have long been using the Earth’s resources at an unsustainable rate. Kew’s mission is to inspire and deliver science-based plant conservation worldwide. To meet this aim, we work with a network of partners and collaborators around the world.
Traditionally, Kew’s botanists have classified plants by analysing their characteristics. Today, Kew scientists are supplementing the old methods with DNA analysis.
15 Jul 2009
Tahina spectabilis joins a 380 million year old fossilised fish and the world's longest insect in the Top 10 List of New species described in 2008, announced by the International Institute for Species Exploration at Arizona State University.
Look up monthly U.S., Statewide, Divisional, and Regional Temperature, Precipitation, Degree Days, and Palmer (Drought) rankings for 1-12, 18, 24, 36, 48, 60-month, and Year-to-Date time periods. Data and statistics are as of January 1895.
Please note, Degree Days are not available for Agricultural Belts
Northeast Precipitation Rankings, March 1895
More information on Climatological Rankings
(out of 119 years)
|19th Driest||1915||Driest to Date|
|101st Wettest||1936||Wettest to Date| | <urn:uuid:c8a4495e-80e1-4902-9d2c-443c81685200> | 2.765625 | 137 | Structured Data | Science & Tech. | 52.857895 |
Get an insight into how scientists convince the world their ideas are worth publishing in this video.
Scientists spend days, months or even years analysing the data they have collected in the field or lab, in a quest to find the evidence to prove their ideas. Sometimes the data can produce unexpected results, which may change the way they need to be interpreted, but this can be very exciting.
Getting a new result is only the beginning. In this video Museum plant researcher Karen James talks about her orchid research and describes the roller-coaster ride of getting her results accepted by the global scientific community, and published.
'The most exciting thing for me about science is when you get a new result and, even though it may just be a small advance, for a few days or weeks you are the only person on earth who knows about it.' says Karen James. | <urn:uuid:3f638edd-bf9d-4096-863d-4820180ad198> | 2.765625 | 176 | Truncated | Science & Tech. | 52.660844 |
Cooking with Apache, Part 3
Pages: 1, 2
Recipe 12.4: Solving the "Trailing Slash" Problem
Loading a particular URL works with a trailing slash but does not work without it.
Make sure that ServerName is set correctly and that none of the Alias directives have a trailing slash.
The "trailing slash" problem can be caused by one of two configuration problems: an incorrect or missing value of ServerName, or an Alias with a trailing slash that doesn't work without it.
An incorrect or missing ServerName seems to be the most prevalent cause of the problem, and it works something like this: when you request a URL such as http://example.com/something, where something is the name of a directory, Apache actually sends a redirect to the client telling it to add the trailing slash. The way that it does this is to construct the URL using the value of ServerName and the requested URL. If ServerName is not set correctly, then the resultant URL, which is sent to the client, will generate an error on the client end when it can't find the resulting URL.
If, on the other hand, ServerName is not set at all, Apache will attempt to guess a reasonable value when you start it up. This will often lead it to guess incorrectly, using values such as 127.0.0.1 or localhost, which will not work for remote clients. Either way, the client will end up getting a URL that it cannot retrieve.
Invalid Alias directive
In the second incarnation of this problem, a slightly malformed Alias directive may cause a URL with a missing trailing slash to be an invalid URL entirely.
Consider, for example, the following directive:
Alias /example/ /home/www/example/
This Alias directive is very literal, and aliases URLs starting with /example/, but it does not alias URLs starting with /example. Thus, the URL http://example.com/example/ will display the default document from the directory /home/www/example/, while the URL http://example.com/example will generate a "file not found" error message, with an error log entry that will look something like:
File does not exist: /usr/local/apache/htdocs/example
The solution to this is to create
Alias directives without the trailing slash, so that they will work whether or not the trailing slash is used:
Alias /example /home/www/example
Rich Bowen is a member of the Apache Software Foundation, working primarily on the documentation for the Apache Web Server. DrBacchus, Rich's handle on IRC, can be found on the web at www.drbacchus.com/journal.
Ken Coar is a member of the Apache Software Foundation, the body that oversees Apache development.
Return to the Apache DevCenter. | <urn:uuid:e13796f9-2cae-41fd-ba7c-9fb10239fdab> | 2.78125 | 613 | Documentation | Software Dev. | 53.653526 |
This is a photograph of a cumulonimbus cloud. Notice how the top of the cloud is shaped like an anvil.
Courtesy of UCAR Digital Image Library
Cumulonimbus clouds belong to the Clouds with Vertical Growth group. They are generally known as thunderstorm clouds. A cumulonimbus cloud can grow up to 10 km high. At this height, high winds will flatten the top of the cloud out into an anvil-like shape. Cumulonimbus clouds are associated with heavy rain, snow, hail, lightning, and tornadoes.
Movie of Yearly Changes in Sea Ice around the North Pole
In late spring the weather gets warmer. The sea ice starts to melt. All through the summer more and more of the ice melts. When is there the least sea ice? Since a lot of ice melts in the summer, there is usually much less sea ice in early fall around September, right after the end of summer.
The freezing and melting of the sea ice happens year after year. It is one of the cycles that come with the changing seasons.
This movie shows seven years of this cycle, from January 2002 through December 2008.
If you want to see more movies and pictures of sea ice, go to the NSIDC web site to:
- Look at pictures of sea ice extent and make movies.
- Look at more than one picture of sea ice at the same time so you can compare. | <urn:uuid:693ab653-c1bb-4d67-aae9-b60d1194652d> | 3.34375 | 268 | Truncated | Science & Tech. | 64.889211 |
Earth with Sun (bodies are not to scale). We all know the Sun provides Earth with light and warmth. It also provides energy that drives our weather. But how much influence do changes in the Sun have on the Earth's climate?
Courtesy of UCAR
Solar Cycle Variations and Effect on Earth's Climate
For more than 100 years, scientists have wondered if cycles on the Sun, and changes of the energy received at Earth because of those cycles, affect weather or global climate on Earth. It is now thought that solar cycles on the Sun don't affect weather, but that they do have a very slight effect on global climate.
The solar cycle is the rise and fall of the number of sunspots on the Sun. Solar activity is correlated to the number of sunspots on the Sun. As the number of sunspots goes up, solar activity occurrences go up.
Energy output from the Sun also changes as the sunspot count on the Sun changes. It is greatest when there are the most sunspots and lowest when there are the least sunspots. With satellite measurements, scientists have been able to confirm that the total solar energy varies 0.1% over one 11-year sunspot cycle. This variation of 0.1% means a global tropospheric temperature difference of 0.5°C to 1.0°C (1). These facts definitely have to be taken into account when dealing with climate models and predictions.
An example of when the solar cycle affected Earth's climate is the Maunder Minimum. This was when almost no sunspots were seen from about 1645 to 1715. During this time, Europe and parts of North America were struck by spells of really cold weather. This was a change to the expected regional climate.
So there does seem to be a connection between the solar cycle and climate - the very small change in solar energy that changes over the solar cycle seems to have a very small impact on Earth's climate (see IPCC report). Modern climate models take these relationships into account. The changes in solar energy are not big enough, however, to cause the large global temperature changes we've seen in the last 100 years. Indeed, the only way that climate models can match the recent observed warming of the atmosphere is with the addition of greenhouse gases.
How Does Excess CO2 Affect the Earth
Excess CO2 is bad for the planet. It's bad for the planet because it destroys the planet's atmosphere. CO2 heats the Earth. When CO2 heats the Earth, the polar ice caps start to melt, and when they melt, cities start to flood.
Article posted June 1, 2012 at 09:53 AM •
Pythons are one of the largest group of snakes in the world.
They are cold-blooded reptiles and cannot generate their own body heat.
Notice in these infrared images how cool it appears compared to the people. Pythons usually need to warm themselves up by basking in the sun before they hunt for food. They are usually found in warm, tropical regions.
This Demonstration illustrates the formation of time-averaged Moiré fringes. The horizontal axis represents the amplitude of harmonic oscillation ; the vertical axis, the longitudinal coordinate . The stationary Moiré grating is shown at . Time-averaged images of the Moiré grating are shown at increasing amplitudes of oscillation. The parameter controls the number of discrete time nodes in a period of the oscillation used to integrate the dynamical process. Double-exposure fringes are produced at ; time-averaged fringes are produced at . You can choose the type of the Moiré grating—it can be harmonic, stepped, or stochastic. Moreover, you can observe two digital images. The first is the time-averaged Moiré image and the second is the filtered image in which time-averaged fringes are shown in high contrast. Interesting time-averaged patterns can be observed whenever a regular Moiré grating is replaced by a set of random numbers uniformly distributed in the interval . Those patterns can reveal certain properties of the random number generator used to construct the stochastic Moiré grating.
(Research Group for Mathematical and Numerical Analysis of Dynamical Systems)
Theoretical relationships governing the formation of time-averaged Moiré fringes are discussed in the references.
M. Ragulskis, "Time-Averaged Patterns Produced by Stochastic Moiré Gratings," Computers and Graphics, 2009, 33(2), pp. 147–150.
M. Ragulskis, L. Saunoriene, and R. Maskeliunas, "The Structure of Moiré Grating Lines and Influence to Time-Averaged Fringes," Experimental Techniques,33(2), 2009 pp. 60–64.
M. Ragulskis and Z. Navickas, "Time-Average Moiré—Back to the Basics," Experimental Mechanics, 49(8), 2009 pp. 439–450.
M. Ragulskis, A. Aleksa, and R. Maskeliunas, "Contrast Enhancement of Time-Averaged Fringes Based on Moving Average Mapping Functions," Optics and Lasers in Engineering, 47(7-8), 2009 pp. 768–773.
M. Ragulskis and A. Aleksa, "Image Hiding Based on Time-Averaging Moiré," Optics Communications, 282, 2009 pp. 2752–2759.
M. Ragulskis, A. Aleksa, and Z. Navickas, "Image Hiding Based on Time-Averaged Fringes Produced by Non-Harmonic Oscillations," Journal of Optics A: Pure and Applied Optics, 11(12), 2009. doi:10.1088/1464-4258/11/12/125411. | <urn:uuid:2e934581-a53d-4016-9a54-3341ca6aaffb> | 2.734375 | 615 | Academic Writing | Science & Tech. | 44.550905 |
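The time-averaging operation that the Demonstration visualizes can be sketched numerically. The following Python sketch is not from the referenced papers; it assumes a harmonic grating of unit pitch and harmonic oscillation, and the pitch LAM, the node count, and the sampled amplitudes are arbitrary choices. For this setup the fringe contrast of the time-averaged image follows the Bessel function J0 and vanishes near its roots:

```python
import math

LAM = 1.0  # grating pitch (assumed)

def grating(x):
    """Harmonic Moire grating, grayscale in [0, 1]."""
    return 0.5 * (1.0 + math.cos(2.0 * math.pi * x / LAM))

def time_averaged(x, amplitude, nodes=720):
    """Average the oscillating grating over one period using `nodes` time steps."""
    total = 0.0
    for k in range(nodes):
        t = 2.0 * math.pi * k / nodes
        total += grating(x + amplitude * math.sin(t))
    return total / nodes

def contrast(amplitude):
    """Peak-to-trough contrast of the time-averaged image over one pitch."""
    samples = [time_averaged(i * LAM / 50.0, amplitude) for i in range(51)]
    return max(samples) - min(samples)

# Contrast vanishes when 2*pi*a/lambda reaches a root of J0;
# the first root is ~2.4048, i.e. a ~ 0.3827 * lambda.
print(round(contrast(0.0), 3))     # 1.0: full contrast for the static grating
print(round(contrast(0.3827), 3))  # 0.0: a time-averaged fringe has formed
```

The second printed value is near zero because the chosen amplitude places 2π·a/λ at the first root of J0, which is where the first time-averaged fringe appears.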
Length: head and body 3.4-4.5"; tail 1.75-2"
Weight: 0.25-0.5 oz.
Number of teeth: 32
Young: 2-5 born in May and June.
Usually found in trees, rarely found in caves or attics.
Insect-eater, or insectivore: moths, flying ants, leafhoppers, flies and beetles
Period of Activity
USA: See distribution map.
Red Bat Trivia
Red bats reside in Illinois during the spring, summer, and fall, and migrate south during the winter, when their food supply, insects, is not available. There are twelve species of bats found in Illinois, and red bats are one of the more common species. Bats have poor eyes and rely on echolocation, or supersonic sounds, to locate objects.
This module defines an object type which can compactly represent an array of basic values: characters, integers, floating point numbers. Arrays are sequence types and behave very much like lists, except that the type of objects stored in them is constrained. The type is specified at object creation time by using a type code, which is a single character. The following type codes are defined:
|Type code||C Type||Python Type||Minimum size in bytes|
|'b'||signed char||int||1|
|'B'||unsigned char||int||1|
|'u'||Py_UNICODE||Unicode character||2 (see note)|
|'h'||signed short||int||2|
|'H'||unsigned short||int||2|
|'i'||signed int||int||2|
|'I'||unsigned int||int||2|
|'l'||signed long||int||4|
|'L'||unsigned long||int||4|
|'f'||float||float||4|
|'d'||double||float||8|
The 'u' typecode corresponds to Python’s unicode character. On narrow Unicode builds this is 2-bytes, on wide builds this is 4-bytes.
The actual representation of values is determined by the machine architecture (strictly speaking, by the C implementation). The actual size can be accessed through the itemsize attribute.
The module defines the following type:
A new array whose items are restricted by typecode, and initialized from the optional initializer value, which must be a list, object supporting the buffer interface, or iterable over elements of the appropriate type.
If given a list or string, the initializer is passed to the new array’s fromlist(), frombytes(), or fromunicode() method (see below) to add initial items to the array. Otherwise, the iterable initializer is passed to the extend() method.
A string with all available type codes.
Array objects support the ordinary sequence operations of indexing, slicing, concatenation, and multiplication. When using slice assignment, the assigned value must be an array object with the same type code; in all other cases, TypeError is raised. Array objects also implement the buffer interface, and may be used wherever buffer objects are supported.
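A brief illustration of these sequence operations (the values are arbitrary):

```python
from array import array

# Arrays behave like lists, but every item must match the type code.
a = array('i', [1, 2, 3, 4])
print(a[0], a[-1])                # indexing: 1 4
print(list(a[1:3]))               # slicing returns another array: [2, 3]
print(list(a + array('i', [5])))  # concatenation: [1, 2, 3, 4, 5]

# Slice assignment requires an array with the same type code
try:
    a[0:2] = [9, 9]  # a plain list, not an array
except TypeError as e:
    print('TypeError:', e)
```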
The following data items and methods are also supported:
The typecode character used to create the array.
The length in bytes of one array item in the internal representation.
Append a new item with value x to the end of the array.
Return a tuple (address, length) giving the current memory address and the length in elements of the buffer used to hold array's contents. The size of the memory buffer in bytes can be computed as array.buffer_info()[1] * array.itemsize. This is occasionally useful when working with low-level (and inherently unsafe) I/O interfaces that require memory addresses, such as certain ioctl() operations. The returned numbers are valid as long as the array exists and no length-changing operations are applied to it.
When using array objects from code written in C or C++ (the only way to effectively make use of this information), it makes more sense to use the buffer interface supported by array objects. This method is maintained for backward compatibility and should be avoided in new code. The buffer interface is documented in Buffer Protocol.
“Byteswap” all items of the array. This is only supported for values which are 1, 2, 4, or 8 bytes in size; for other types of values, RuntimeError is raised. It is useful when reading data from a file written on a machine with a different byte order.
Return the number of occurrences of x in the array.
Append items from iterable to the end of the array. If iterable is another array, it must have exactly the same type code; if not, TypeError will be raised. If iterable is not an array, it must be iterable and its elements must be the right type to be appended to the array.
Appends items from the string, interpreting the string as an array of machine values (as if it had been read from a file using the fromfile() method).
Read n items (as machine values) from the file object f and append them to the end of the array. If less than n items are available, EOFError is raised, but the items that were available are still inserted into the array. f must be a real built-in file object; something else with a read() method won’t do.
Append items from the list. This is equivalent to for x in list: a.append(x) except that if there is a type error, the array is unchanged.
Extends this array with data from the given unicode string. The array must be a type 'u' array; otherwise a ValueError is raised. Use array.frombytes(unicodestring.encode(enc)) to append Unicode data to an array of some other type.
Return the smallest i such that i is the index of the first occurrence of x in the array.
Insert a new item with value x in the array before position i. Negative values are treated as being relative to the end of the array.
Removes the item with the index i from the array and returns it. The optional argument defaults to -1, so that by default the last item is removed and returned.
Remove the first occurrence of x from the array.
Reverse the order of the items in the array.
Convert the array to an array of machine values and return the bytes representation (the same sequence of bytes that would be written to a file by the tofile() method.)
Convert the array to an ordinary list with the same items.
Convert the array to a unicode string. The array must be a type 'u' array; otherwise a ValueError is raised. Use array.tobytes().decode(enc) to obtain a unicode string from an array of some other type.
When an array object is printed or converted to a string, it is represented as array(typecode, initializer). The initializer is omitted if the array is empty, otherwise it is a string if the typecode is 'u', otherwise it is a list of numbers. The string is guaranteed to be able to be converted back to an array with the same type and value using eval(), so long as the array() function has been imported using from array import array. Examples:
array('l') array('u', 'hello \u2641') array('l', [1, 2, 3, 4, 5]) array('d', [1.0, 2.0, 3.14]) | <urn:uuid:b5897ce5-0dff-416c-b5d3-97bb356ab2a0> | 2.953125 | 1,307 | Documentation | Software Dev. | 47.176657 |
Branches of Science
Science refers to a body of knowledge, or a method of study devoted to developing this body of knowledge, concerning the nature of the universe gained through methodological observation and experimentation (scientific method). Exactly what constitutes science and scientific methods are subjects studied by the philosophy of science. The scientific method consists of various principles and procedures that are objective and repeatable by other scientists.
There are several ways of broadly categorizing the sciences, e.g. Pure science is systematic study of natural or physical phenomena by observation and experiment, critical testing and review, and ordering by general principles. Applied science is the search for practical uses of scientific knowledge; technology is the application of applied science.
Exact sciences are those which typically require precise measurements, such as physics, and to a lesser degree, chemistry. Descriptive sciences are those which are more oriented towards classificationand description, such as biology and paleontology.
The pure natural sciences are typically divided into the physical sciences and the biological sciences, both of which can be subdivided. The major physical sciences are physics, astronomy, chemistry, and geology; the main biological sciences are botany and zoology.
The sciences aren't distinct and independent from each other, but rather, there are are interconnections and cross-fertilisations. These interrelationships are often responsible for much of the progress today in several specialized fields of research, such as molecular biologyand genetics. Several interdisciplinary sciences, such as biochemistry, have been created as a result.. Advances can be the result of research by teams of specialists representing different sciences, both pure and applied.
The Andromeda Galaxy
Buy at AllPosters.com
is the scientific study of objects in space: stars, planets, galaxies etc. Astronomy is the science dealing with all the celestial objects in the universe, including the planets and their satellites (e.g. our Earthand the moon), comets and meteors, the stars (including our sun), and interstellar matter, the star systems known as galaxies, and clusters of galaxies. Astronomers use telescopes(optical, radio, and others) to study stars, planets, and galaxies. Spacecraft carry telescopes and other astronomical instruments above the Earth's atmosphere, and to other planets in our solar system.
is the study of the Universe and astrophysical phenomena, by examining their emission of electromagnetic radiation in the radio portion of the spectrum. Radio astronomy has greatly improved our understanding of the evolution of stars, the structure of galaxies, and the origin of the universe.
Questions concerning the nature of the Universe as a whole were until recently, the province of philosophy and superstition only. There was no way to examine the fabric of the heavens to see what it was made of - until the invention of spectroscopy and the construction of powerful telescopes in the past century. The data collected have been analysed with sophisticated mathematical techniques, and models have been developed which help us to understand how this Universemay have come to be how it is. Cosmologydraws on the physical sciences - especially mathematics, physics, and astronomy.
The science and study of life, from the tiniest microscopic organisms to the largest whalesin the sea. Biology studies how living things grow, feed, move, reproduce, and evolve over long periods of time. It covers an enormous range of topics and deals with millions of species of animals, plants, and other organisms. To cope with this, biology is divided into several specialised branches such as anatomy (the structure of living things), and physiology (the way animals and plants function). Biology is useful to other sciences and professions that deal with life, such as agriculture, forestry, and medicine.
Because there is such a huge variety of living things on the earth, the science of biology has many different branches and areas of study. Depending on their discipline, biologists usually research one or more of the following categories:
Microbiology, a study dealing with the structure and existence of microorganisms, which are tiny life forms such as a bacteria or a virus; Zoology, which is the study of animal life; Botany, which is focused on plant life; And physical anthropology, where scientists study human life, such as our existence and how we interact with other life forms.
Buy at AllPosters.com
is the scientific study of life-forms existing in former geological time periods. When living things die they are sometimes buried in a layer of mud. After millions of years the mud turns into solid rock and the remains are preserved as fossils. The layers of rock can be dated, and so we know the age of the fossils in that layer. Paleontologists have discovered much about life that existed millions of years ago, by studying fossils. Especially interesting are the fossils of dinosaurs, some of which were very large indeed. Paleontologists know what they looked like and what they ate.
Periodic Table of Elements
Buy at AllPosters.com
is the study of the composition of substances and the changes that they undergo. In particular, chemistry is the study of elements(substances containing only one kind of atom) and the compounds (substances containing combined elements) they form. Chemists work with reactions between substances to create plastics, medicines, dyes, and many other materials useful in our modern world. They study what substances are made of, and how they can be altered or combined to create new materials. 92 elements occur in nature, and another 17 have been created in nuclear laboratories. Several million compounds have been synthesised by chemists.
There are two main divisions, organic and inorganic.
Organic chemistry originated with the isolation of medical compounds from animals and plants. It has expanded to include the reactions of carbon based compounds (which are 100 times more numerous than non-carbon based compounds) and the study of molecules.
Inorganic chemistry studies the preparation, properties, and reactions of all chemical elements and compounds except those that are carbon based.
Design of the Atom
Buy at AllPosters.com
is the science of matter and energy, including light, sound, electricity, magnetism, radiation, and motion. Physics was once called natural philosophy, since it was "thoughts about the natural world".
Physicists work with a mixture of theory and experiment. They perform experiments and try to construct theories to explain their results. These theories should make predictions which can be tested by new experiments. Those theories which have stood the test of time and have been especially useful are called the laws of physics.
Pencil Nub and Equation
Buy at AllPosters.com
- "the queen of the sciences" - deals with abstractions rather than observables, e.g. numbers, shapes, logic, size, structure, order, and other relationships among quantities. Some of the major branches are:
- Arithmetic concerns addition, subtraction, multiplication, and division of numbers.
- Algebra is a symbolic language in which problems can be solved using symbols to stand for varying or unknown quantities.
- Geometry is the study of shapes and angles, and is useful in carpentry, architecture, and many other fields.
One of the commonest applications of mathematics to science is the use of equations to fit observed data, e.g. as in a graph of one quantity against another, such as temperature against time, for a cooling body.
School of Athens
Buy this Art Print at AllPosters.com
Western philosophy is generally considered to have begun in ancient Greece as speculation about the underlying nature of the physical world. Philosophy comprised all areas of speculative thought and included the arts and sciences.
The philosophy of science seeks to clarify the objectives and means used by scientists, and what is the reliability of scientific theories.
Astronomy is the science dealing with all the celestial objects in the universe, including the planets and their satellites (e.g. our Earth and the moon), comets and meteors, the stars (including our sun), and interstellar matter, the star systems known as galaxies, and clusters of galaxies. Astrophysics, Cosmology, Galaxies, History, Radio, SETI, Solar, Stars, Telescope, UFO
The science of life and living organisms, biology studies the form, structure, function, growth and development, behavior and interaction of all living things. Biologists study the characteristics of life forms, such as their cellular organization and development, how they respond to stimulation, the chemical processes of their growth and production of energy (metabolism) and how they reproduce. Biochemistry, Bioinformatics, Biophysics, Biotechnology, Botany, Cells, Ecology, Genetics, Human, Microbiology, Paleontology.
Chemistry is the branch of science concerned with the properties, structure, and composition of substances and their reactions with one another. There are two main divisions, organic and inorganic. Chemical elements are the fundamental materials of which all matter is composed. An element is a pure substance that cannot be broken down or reduced further without changing its properties. Analytical, Crystallography, Elements, Inorganic, Organic
This article addresses modern science, by which we mean science as we now understand it; e.g. making use of the scientific method of controlled experimental verification of hypotheses. Before the 1500s, it was typically thought that the natural world could be understood by invoking supernatural deities, or by simplistic (and sometimes, not so simplistic) theories founded on casual observation and 'common sense' - e.g. that the Earth was the center of the Universe, because one could plainly see that the heavenly bodies (sun and planets) rotated about the Earth.
Mathematics is the study of numbers, sets of points, and various abstract elements, together with relations between them and operations performed on them. Originally mathematics was concerned with the properties of numbers and space, as the science of quantity, whether of magnitudes, as in geometry, or of numbers, as in arithmetic, or the generalization of these two fields, as in algebra. Algebra, Analysis, Applied, Calculus, Chaos, Fractals, Game Theory, Games, Geometry, Graphs, History, Infinity, Logic, Measurement, Number
Physics is the study of matter, energy, motion, and forces. Physics is a major branch of science, concerned with the fundamental components of the universe, the forces they exert on one another, and the results produced by these forces. Physicists study the properties and forms of matter and energy - heat, light, electricity and magnetism, and nuclear energy. They try to understand the forces that act in the universe, and the laws that these forces obey - e.g. matter and energy can't be destroyed, only changed from one to the other (a conservation law). Electricity, Light, Machines, Magnetism, Mechanics, Nuclear, Optics, Quantum, Relativity, Thermodynamics, Waves | <urn:uuid:9cfe447a-1cb3-4a70-bffe-658f0097505d> | 3.390625 | 2,229 | Content Listing | Science & Tech. | 27.839067 |
I don't know about OpenCV, so I can't tell you what
cv.GetMat() does. Apparently, it returns something that can be used as or converted to a two-dimensional array. The C or C++ interface to OpenCV that you are using will probably have a similarly names function.
The following lines create an array of index pairs of the entries in
grey_image_as_array that are bigger than
3. Each entry in
non_black_coords_array are zero based x-y-coordinates into
grey_image_as_array. Given such a coordinates pair
y, you can access the corresponsing entry in the two-dimensional C++ array
The Python code has to avoid explicit loops over the image to achieve good performance, so it needs to make to with the vectorised functions NumPy offers. The expression
grey_image_as_array > 3 is a vectorised comparison and results in a Boolean array of the same shape as
numpy.where() extracts the indices of the
True entries in this Boolean array, but the result is not in the format described above, so we need
zip() to restructure it.
In C++, there's no need to avoid explicit loops, and an equivalent of
numpy.where() would be rather pointless -- you just write the loops and store the result in the format of your choice. | <urn:uuid:22ab0a8b-8a76-45f2-a72e-89cd6c3481d3> | 2.796875 | 293 | Q&A Forum | Software Dev. | 47.666299 |
|Home » Howtos » Secure code » Taint|
Taint mode is a way of making your code more secure. It means that your program will be fussier about data it receives from an external source. External sources include users, the file system, the environment, locale information, other programs and some system calls (e.g. readdir).
Some functions can unexpectedly cause problems with bad data. For example, the "magical" properties of the perl open function mean that it may be used to open a pipe to any arbitrary shell commands. It's in your interests to ensure that your program is opening the file that you think it should be opening, and not a command line supplied by a mischievous user.
There are differing opinions on when to use taint mode. Certainly as a minimum, any CGI application should use taint mode. When accepting external data you should always program defensively and use taint mode to ensure that the external data matches your expectations.
Some people argue that taint mode should always be used as it forces you to consider the implications of your use of external data. This is also useful because if you write your program without taint mode and then decide you need taint mode later on, you may have a lot of work adding the checks required.
You turn taint mode on by using the -T flag in your hashbang line. For example:
Or if you turn warnings on using -w:
Taint mode cannot be turned off in a script once it has been turned on. Note that the -T argument is read by perl even on those platforms where the hashbang line itself is not used by the operating system.
When your program receives any data in taint mode, that data is marked as tainted. Tainted data may not be used to affect anything outside your program (for example, to open a file, or used in a system call), until you have specifically un-tainted it.
If you assign a variable a tainted value, that variable is also tainted. For example:
#!/usr/bin/perl -T use strict; use warnings; my $arg = $ARGV; my $file = "/home/foo/$arg"; open FOO, ">$file" or die $!; print FOO "Yay\n"; close FOO; exit 0;
This program would fail with the following error:
Insecure dependency in open while running with -T switch at ./test.pl line 9.
To untaint data you need to apply a regular expression and any data that you capture is then untainted. Remember when writing your regular expressions that it is better to include patterns that are allowed rather than those that aren't.
For the example above, assume we want the argument passed in on the command line to be a filename (not a path to a file), so we will only allow a filename argument limited to containing any alphanumeric characters, dots and underscores.
#!/usr/bin/perl -T use strict; use warnings; my $arg = $ARGV; # Untaint data and check result $arg =~ m/^([a-zA-Z0-9\._]+)$/ or die "Bad data in first argument"; my $file = "/home/foo/$1"; # Assign untainted data open FOO, ">$file" or die $!; print FOO "Yay\n"; close FOO; exit 0;
It is important to anchor your regular expression (use ^ and $ to ensure you match the entire string) and it is equally important to check that your regular expression has succeeded.
When writing modules and subroutines, you need to be careful that your code doesn't untaint data where it shouldn't. For example, the CGI module used to untaint all parameters because it used a regular expression with capturing parenthesis to capture the data.
Using the above example, if we want the data to remain tainted despite our check, we could use:
#!/usr/bin/perl -T use strict; use warnings; use re 'taint'; # Keep data captured by parens tainted my $arg = $ARGV; $arg =~ m/^([a-zA-Z0-9\._]+)$/ or die "Bad data in first argument"; my $file = "/home/becky/$1"; open FOO, ">$file" or die $!; print FOO "Yay\n"; close FOO; exit 0;
This would fail like the first example:
Insecure dependency in open while running with -T switch at ./test.pl line 11.
If you then want to use capturing parenthesis to untaint data, you can use:
no re 'taint'; | <urn:uuid:639a3fe2-5182-4790-b8bd-838016aef869> | 3.5625 | 1,006 | Tutorial | Software Dev. | 54.860395 |
- Higgs boson discovery
This one is obvious: the Higgs tops the ranking not only on Résonaances, but also on BBC and National Enquirer. So much has been said about the boson, but let me point out one amusing though rarely discussed aspect: as of this year we have one more fundamental force. The 5 currently known fundamental forces are 1) gravitational, 2) electromagnetic, 3) weak, 4) strong, and 5) Higgs interactions. The Higgs force is attractive and proportional to the masses of interacting particles (much like gravity) but manifests itself only at very short distances of order 10^-18 meters. From the microscopical point of view, the Higgs force is different from all the others in that it is mediated by a spinless particle. Résonaances offers a signed T-shirt to the first experimental group that will directly measure this new force.
- The Higgs diphoton rate
Somewhat disappointingly, the Higgs boson turned out to look very much as predicted by the current theory. The only glitch so far is the rate in which it decays to photon pairs. Currently, the ATLAS experiment measures the value 80% larger than the standard model prediction, while CMS also finds it a bit too large, at least officially. If this were true, the most likely explanation would be a new charged particle with the mass of order 100 GeV and a large coupling to the Higgs. At least until the next Higgs update in March we can keep a glimmer of hope that the standard model is not a complete theory of the weak scale...
Actually, the year 2012 was so kind as to present us not with one but with two fundamental parameters. Except the Higgs boson mass, we also learned about one entry in the neutrino mixing matrix, the so-called θ_13 mixing angle. This parameter controls, among other things, how often the electron neutrino transforms into other neutrino species. It was pinpointed this year by the neutrino reactor experiment Daya Bay who measured θ_13 to be about 9 degrees - a rather uninspired value. The sign of the times: the first prize was snatched by the Chinese (Daya Bay), winning by a hair before the Koreans (RENO), and leaving far behind the Japanese (T2K), the Americans (MINOS), and the French (Double-CHOOZ). The center of gravity might be shifting...
- Fermi line
Dark matter is there in our galaxy, but it's very difficult to see its manifestations other than the gravitational attraction. One smoking-gun signature would be a monochromatic gamma-ray line from the process of dark matter annihilation into photon pairs. And, lo and behold, a narrow spectral feature near 130 GeV was found in the data collected by the Fermi gamma-ray observatory. This was first pointed out by an independent analysis, and later confirmed (although using a less optimistic wording) by the collaboration itself. If this was truly a signal of dark matter, it would be even more important than the Higgs discovery. However past experience has taught us to be pessimistic, and we'd rather suspect a nasty instrumental effect to be responsible for the observed feature. Time will tell...
This year the LHCb experiment finally pinpointed the super-rare process of the Bs meson decaying into a muon pair. The measured branching fraction is about 3 in a billion, close to what was predicted. The impact of this result on theory was a bit overhyped, but it's anyway an impressive precision test. Even if "The standard model works, bitches" is not really the message we wanted to hear...
- Pioneer anomaly
A little something for dessert: one long standing mystery was ultimately solved this year. We knew all along that the thermal emission from Pioneer's reactors could easily be responsible for the anomalous deceleration of the spacecraft, but this was cleanly demonstrated only this year. So, one less mystery, and no blatant violation of Einstein's gravity in our solar system...
Monday, 31 December 2012
One can safely assume nothing else important will happen this year... so let's wrap up. Here are the greatest moments of the year 2012, from the point of view of an obscure particle physics blog.
Thursday, 13 December 2012
For the annual December CERN council meeting the ATLAS experiment provided an update of the Higgs searches in the γγ and ZZ→4 leptons channels. The most interesting thing about the HCP update a month ago was why these most sensitive channels were *not* updated (also CMS chose not to update γγ). Now we can see why. The ATLAS analyses in these channels return the best fit Higgs masses that differ by more than 3 GeV: 123.5 GeV for ZZ and 126.6 GeV for γγ, which is much more than the estimated resolution of about 1 GeV. The tension between these 2 results is estimated to be 2.7σ. Apparently, ATLAS used this last month to search for the systematic errors that might be responsible for the discrepancy but, having found nothing, they decided to go public.
One may be tempted to interpret the twin peaks as 2 separate Higgs-like particles. However in this case they most likely signal a systematic problem rather than interesting physics. First, it would be quite a coincidence to have two Higgs particles so close in mass (I'm not aware of a symmetry that could ensure it). Even if the coincidence occurs, it would be highly unusual that one Higgs decays dominantly to ZZ and the other dominantly to γγ, each mimicking pretty well the standard Higgs rate in the respective channel. Finally, and most importantly, CMS does not see anything like that; actually their measurements give a reverse picture. In the ZZ→4l channel CMS measures mh=126.2±0.6 GeV, above (but well within the resolution) the best fit mass they find in the γγ channel which is 125.1±0.7 GeV GeV. That makes us certain that down-to-earth reasons are responsible for the double vision in ATLAS, the likely cause being an ECAL calibration error, an unlucky background fluctuation, or alcohol abuse.
Here are the links to the ATLAS diphoton, ZZ, and combination notes.
Monday, 3 December 2012
The LHC routinely measures cross sections of processes predicted by the standard model. Unlike the Higgs or new physics searches, these analyses are not in the spotlight, are completed at a more leisurely pace, and are forgotten minutes after publication. One such observable is the WW pair production cross section. Both CMS and ATLAS measured that cross section in the 7 TeV data using the dilepton decay channel, both obtaining the result slightly above the standard model prediction. The situation got more interesting last summer after CMS put out a measurement based on a small chunk of 8 TeV data. The CMS result stands out more significantly, 2 sigma above the standard model, and the rumor is that in 8 TeV ATLAS it is also too high.
It is conceivable that new physics leads to an increase of the WW cross section at the LHC. This paper proposes SUSY chargino pair production as an explanation. If chargino decays dominantly to a W boson and an invisible particle - neutralino or gravitino, the final state is almost the same as the one searched by the LHC. Moreover, if charginos are light the additional missing energy from the invisible SUSY particles is small, and would not significantly distort the WW cross section measurement. A ~110 GeV wino would be pair-produced at the LHC with the cross section of a few pb - in the right ballpark to explain the excess.
Such light charginos are still marginally allowed. In the old days, the LEP experiments excluded new charged particles only up to ~100 GeV, LEP's kinematic reach for pair production. At the LHC, the kinematic reach is higher, however small production cross section of uncolored particles compared to the QCD junk the makes chargino searches challenging. In some cases, charginos and neutralinos have been recently excluded up to several hundred GeV (see e.g. here), but these strong limits are not bullet proof as they rely on trilepton signatures. If one can fiddle with the SUSY spectrum so as to avoid decays leading to trilepton signatures (in particular, the decay χ1→ LSP Z* must be avoided in the 2nd diagram) then 100 GeV charginos can be safe.
Of course, the odds for the WW excess not being new physics are much higher. The excess at the LHC could simply be an upward fluctuation of the signal, or higher-order corrections to the WW cross section in the standard model may have been underestimated. Still, it will be interesting to observe where the cross section will end up after the full 8 TeV dataset is analyzed. So, if you have a cool model that overproduces WW (but not WZ) pairs, now may be the right moment to step out. | <urn:uuid:cd3c3bad-18f4-4fc9-bc50-e90d886ad140> | 2.703125 | 1,918 | Content Listing | Science & Tech. | 54.857334 |
Calculate Mushrooms, white, microwaved calories from carbohydrates, fats and proteins based on weight
Gravels and Substrates
Materials and Substances
What is rem heavy particles?
Rem (Roentgen Equivalent Man) is the derived unit of any of the quantities expressed as the radiation absorbed dose (rad) equivalent. The dose equivalent in rems is equal to the radiation absorbed dose in rads multiplied by the quality factor. The quality factor is unique to the radiation type. Various radiation types have different biological effect, even for the same amount of radiation absorbed dose (rad). Rem heavy particles expresses the radiation absorbed dose in human tissue to the effective biological damage caused by exposure to heavy particles.read more...»
What is linear density or linear mass density measurement?
The linear density (μ) of a one-dimensional object, also known as linear mass density, is defined as the mass of the object per unit of length. The derived SI unit of the linear density measurement is kilogram per meter (kg/m). The linear density is a property of strings and other one-dimensional objects.read more...» | <urn:uuid:52860915-7b66-4154-abda-111841cce53f> | 3.21875 | 232 | Content Listing | Science & Tech. | 34.67625 |
In the last dozen episodes I have defined plenty of macros, but I have not really explained what macros are and how they work. This episode closes the gap: it explains the true meaning of Scheme macros by introducing the concepts of syntax object and of transformer over syntax objects.
Scheme macros - as standardized in the R6RS document - are built on the concept of syntax object. The concept is peculiar to Scheme and has no counterpart in other languages (including Common Lisp), so it is worth spending some time on it.
A syntax object is a kind of enhanced s-expression: it contains the source code as a list of symbols and primitive values, but also additional information, such as the name of the file containing the source code, the position of the syntax object in the file, a set of marks to distinguish identifiers according to their lexical context, and more.
The easiest way to get a syntax object is to use the syntax-quoting operation, i.e. the sharp-quote (#') prefix you have seen in all the macros I have defined so far. Consider for instance the following script, which displays the string representation of the syntax object #'1:
$ cat x.ss
(import (rnrs))
(display #'1)
If you run it under PLT Scheme you will get
$ plt-r6rs x.ss
#<syntax:/home/micheles/Dropbox/gcode/artima/scheme/x.ss:2:11>
In other words, the string representation of the syntax object #'1 contains the full pathname of the script and the line and column numbers where the syntax object appears in the source code. Clearly this information is pretty useful for tools like IDEs and debuggers. The internal implementation of syntax objects is not standardized at all, so you get different information in different implementations. For instance Ikarus gives
$ ikarus --r6rs-script x.ss
#<syntax 1 [char 28 of x.ss]>
i.e. in Ikarus syntax objects do not store line numbers, they just store the character position from the beginning of the file. If you are using the REPL you will have less information, of course, and even more implementation dependency. Here are a few examples of syntax objects obtained from syntax quoting:
> #'x          ; convert a name into an identifier
#<syntax x>
> #''x         ; convert a literal symbol
#<syntax 'x>
> #'1          ; convert a literal number
#<syntax 1>
> #'"s"        ; convert a literal string
#<syntax "s">
> #''(1 "a" 'b) ; convert a literal data structure
#<syntax '(1 "a" 'b)>
Here I am running all my examples under Ikarus; your Scheme system may have a slightly different output representation for syntax objects.
In general #' can be "applied" to any expression:
> (define syntax-expr #'(display "hello"))
> syntax-expr
#<syntax (display "hello")>
It is possible to extract the s-expression underlying the syntax object with the syntax->datum primitive:
> (equal? (syntax->datum syntax-expr) '(display "hello"))
#t
Different syntax objects can be equivalent: for instance the improper list of syntax objects (cons #'display (cons #'"hello" #'())) is equivalent to the syntax object #'(display "hello") in the sense that both correspond to the same datum:
> (equal? (syntax->datum (cons #'display (cons #'"hello" #'())))
          (syntax->datum #'(display "hello")))
#t
The (syntax ) macro is analogous to the (quote ) macro. Moreover, there is a quasisyntax macro denoted with #` which is analogous to the quasiquote macro (`). In analogy to the operations comma (,) and comma-splice (,@) on regular lists, there are two operations, unsyntax #, (sharp comma) and unsyntax-splicing #,@ (sharp comma splice), on lists and improper lists of syntax objects.
Here is an example using sharp-comma:
> (let ((user "michele")) #`(display #,user))
(#<syntax display> "michele" . #<syntax ()>)
Here is an example using sharp-comma-splice:
> (define users (list #'"michele" #'"mario"))
> #`(display (list #,@users))
(#<syntax display> (#<syntax list> #<syntax "michele"> #<syntax "mario">) . #<syntax ()>)
Notice that the output - in Ikarus - is an improper list. This is somewhat consistent with the behavior of usual quoting: for usual quoting '(a b c) is a shortcut for (cons* 'a 'b 'c '()), which is a proper list, and for syntax-quoting #'(a b c) is equivalent to (cons* #'a #'b #'c #'()), which is an improper list. The cons* operator here is a R6RS shortcut for nested conses: (cons* w x y z) is the same as (cons w (cons x (cons y z))).
However, the result of a quasisyntax interpolation is very much implementation-dependent: Ikarus returns an improper list, but other implementations return different results; for instance Ypsilon returns a proper list of syntax objects whereas PLT Scheme returns an atomic syntax object. The lesson here is that you cannot rely on properties of the inner representation of syntax objects: what matters is the code they correspond to, i.e. the result of syntax->datum.
It is possible to promote a datum to a syntax object with the datum->syntax procedure, but in order to do so you need to provide a lexical context, which can be specified by using an identifier:
> (datum->syntax #'dummy-context '(display "hello"))
#<syntax (display "hello")>
(the meaning of the lexical context in datum->syntax is tricky and I will go back to that in a future episode).
syntax-match is a general utility to perform pattern matching on syntax objects; it takes a syntax object as input and returns a syntax object as output. Here is an example of a simple transformer based on syntax-match:
> (define transformer
    (syntax-match () (sub (name . args) #'name))) ; return the name as a syntax object
> (transformer #'(a 1 2 3))
#<syntax a>
For convenience, syntax-match also accepts a second syntax (syntax-match x (lit ...) clause ...) to match syntax expressions directly. This is more convenient than writing ((syntax-match (lit ...) clause ...) x). Here is a simple example:
> (syntax-match #'(a 1 2 3) ()
    (sub (name . args) #'args)) ; return the args as a syntax object
#<syntax (1 2 3)>
Here is an example using quasisyntax and unsyntax-splicing:
> (syntax-match #'(a 1 2 3) ()
    (sub (name . args) #`(name #,@#'args)))
(#<syntax a> #<syntax 1> #<syntax 2> #<syntax 3>)
As you see, it is easy to write hieroglyphs if you use quasisyntax and unsyntax-splicing. You can avoid that by means of the with-syntax form:
> (syntax-match #'(a 1 2 3) ()
    (sub (name . args)
      (with-syntax (((a ...) #'args))
        #'(name a ...))))
(#<syntax a> #<syntax 1> #<syntax 2> #<syntax 3>)
The pattern variables introduced by with-syntax are automatically expanded inside the syntax template, without the need to resort to the quasisyntax notation (i.e. there is no need for #` #, #,@).
Macros are in one-to-one correspondence with syntax transformers, i.e. every macro is associated with a transformer which converts a syntax object (the macro and its arguments) into another syntax object (the expansion of the macro). Scheme itself takes care of converting the input code into a syntax object (if you wish, internally there is a datum->syntax conversion) and the output syntax object into code (an internal syntax->datum conversion).
Consider for instance a macro to apply a function to a (single) argument:
(def-syntax (apply1 f a) #'(f a))
This macro can be equivalently written as
(def-syntax apply1
  (syntax-match ()
    (sub (apply1 f a) (list #'f #'a))))
The sharp-quoted syntax is more readable, but it hides the underlying list representation which in some cases is pretty useful. This second form of the macro is more explicit, but still it relies on syntax-match. It is possible to provide the same functionality without using syntax-match as follows:
(def-syntax apply1
  (lambda (x)
    (let+ ((macro-name func arg) (syntax->datum x))
      (datum->syntax #'apply1 (list func arg)))))
Here the macro transformer is explicitly written as a lambda function, and the pattern matching is performed by hand by converting the input syntax object into a list and by using the list destructuring form let+ introduced in episode 15. At the end, the resulting list is converted back to a syntax object in the context of apply1. Here is an example of usage:
> (apply1 display "hey")
hey
sweet-macros provides a convenient feature: it is possible to extract the transformer associated with each macro defined via def-syntax. For instance, here is the transformer associated with the apply1 macro:
> (define tr (apply1 <transformer>))
> (tr #'(apply1 display "hey"))
#<syntax (display "hey")>
The ability to extract the underlying transformer is useful in certain situations, in particular when debugging. It can also be exploited to define extensible macros, and I will come back to this point in the future.
The previous paragraphs were a little abstract and probably of unclear utility (but what would you expect from an advanced macro tutorial? ;-). Now let me be more concrete. My goal is to provide a nicer syntax for association lists (an association list is just a non-empty list of non-empty lists) by means of an alist macro expanding into an association list. The macro accepts a variable number of arguments; every argument is either of the form (name value) or a single identifier: in this latter case it must be magically converted into the form (name value), where value is the value of the identifier, assuming it is bound in the current scope; otherwise a run-time "unbound identifier" error is raised. If you try to pass an argument which is not of the expected form, a compile-time syntax error must be raised. Concretely, the macro works as follows:
(test "simple" (let ((a 0)) (alist a (b 1) (c (* 2 b)))) '((a 0) (b 1) (c 2))) (test "with-error" (catch-error (alist a)) "unbound variable")
Here is the implementation:
(def-syntax (alist arg ...)
  (with-syntax ((((name value) ...)
                 (map (syntax-match ()
                        (sub n #'(n n) (identifier? #'n))
                        (sub (n v) #'(n v) (identifier? #'n)))
                      #'(arg ...))))
    #'(let* ((name value) ...)
        (list (list 'name name) ...))))
The expression #'(arg ...) expands into a list of syntax objects which are then transformed by the syntax-match transformer, which converts identifiers of the form n into pairs of the form (n n), whereas it leaves pairs (n v) unchanged, just checking that n is an identifier. This is a typical use case for syntax-match as a list matcher inside a bigger macro. We will see other use cases in the next Adventures.
Michele Simionato started his career as a Theoretical Physicist, working in Italy, France and the U.S. He turned to programming in 2003; since then he has been working professionally as a Python developer and now lives in Milan, Italy. Michele is well known in the Python community for his posts in the newsgroups, his articles and his Open Source libraries and recipes. His interests include object-oriented programming, functional programming, and in general programming methodologies that enable us to manage the complexity of modern software development.
Galaxies don't normally look like this. NGC 6745 actually shows the results of two galaxies that have been colliding for only hundreds of millions of years. Just off the above digitally sharpened photograph to the lower right is the smaller galaxy, moving away. The larger galaxy, pictured above, used to be a spiral galaxy but now is damaged and appears peculiar. Gravity has distorted the shapes of the galaxies. Although it is likely that no stars in the two galaxies directly collided, the gas, dust, and ambient magnetic fields do interact directly. In fact, a knot of gas pulled off the larger galaxy on the lower right has now begun to form stars. NGC 6745 spans about 80 thousand light-years across and is located about 200 million light-years away.

Credit: NASA, ESA, and the Hubble Heritage Team
Once we have a thorough understanding of polynomials we can look at rational functions that are a quotient of two polynomials. These rational functions have certain behaviors, and students are often asked to find their limits, or to graph them. Their graphs can have different characteristics depending on whether the numerator function has degree less than, equal to, or greater than the denominator function.
I want to talk about a very important class of functions called rational functions. A rational function is one that can be written as f of x equals p of x over q of x, where p of x and q of x are polynomials.
Now, f of x is defined for any value of x unless q of x, the denominator, equals zero, so the domain will be all real numbers except those that make the denominator zero. And the zeros of a rational function will be the zeros of the numerator, just as long as they are not also zeros of the denominator. So let's practice using these definitions in an example.
Each of these three is a rational function, a polynomial divided by a polynomial, so p of x over q of x. Now, find the domain and zeros. The domain of this function is going to be all real numbers except where the denominator is zero, so where is the denominator zero? 2x-5=0 when 2x=5, so we divide by 2: x equals five halves. So the domain is all real numbers except five halves. Now what are the zeros? For the zeros we look to the numerator. When is the numerator equal to zero? 2x squared minus 5x minus 3. Now this looks like it's factorable, so I'm going to try to factor it: 2x times x. I need a 3 and a 1; if I put -3 here and +1 here I'll get x-6x, which is -5x, so that works. That means that x equals negative one half and x=3 are both zeros of the numerator, and because neither of those is also a zero of the denominator, these are going to be zeros of my function. So the zeros are x equals negative one half and x=3.
Okay, let's take a look at this guy. What's the domain? Well, first we have to figure out where the denominator equals zero, so x squared minus 4x equals 0. I can factor this: it equals zero when x is 0 or 4, so the domain will be all real numbers except 0 or 4. Now for the zeros of the function, the numbers that make this function 0, we look to the numerator: x squared minus 1 equals zero, and that's really easy, x squared equals 1, x equals plus or minus 1. So as long as plus or minus 1 are not also zeros of the denominator, these are zeros of my function, so the zeros are plus and minus 1.
Finally, let's look at this function. For this denominator I can find the zeros by factoring: x cubed minus x squared minus 6x equals 0, so you get x times x squared minus x minus 6, and this can also be factored; it looks like it's going to be x and x with a 2 and a 3. If I go -3 and +2 I get my minus 6, and I get -3x+2x, which is negative x, so that works. So the zeros of the denominator are x=0, 3 or -2, and the domain will be all real numbers except those three: all reals except 0, 3 or -2. And then what about the zeros of this function? Let's look at the numerator: x squared minus 4 equals zero means x squared equals 4, so x is plus or minus 2. Now here's a case where one of the zeros of the numerator is also a zero of the denominator. Because -2 is a zero of both the numerator and the denominator, the function is not defined at -2, so -2 can't be a zero of the function. The only zero is x = +2.
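The procedure used in all three examples can be summarized in a short sketch. This is an illustrative Python helper (the function name is made up for the illustration, and the roots are entered by hand from the factoring above):

```python
def zeros_and_exclusions(num_roots, den_roots):
    """Zeros of p(x)/q(x) are the roots of p that are not also roots of q;
    the domain is all real numbers except the roots of q."""
    zeros = [r for r in num_roots if r not in den_roots]
    return zeros, den_roots

# Third example: f(x) = (x^2 - 4) / (x^3 - x^2 - 6x)
num_roots = [2, -2]      # x^2 - 4 = (x - 2)(x + 2)
den_roots = [0, 3, -2]   # x^3 - x^2 - 6x = x(x - 3)(x + 2)
zeros, excluded = zeros_and_exclusions(num_roots, den_roots)
print(zeros)     # [2]  -- only +2, since -2 is excluded from the domain
print(excluded)  # [0, 3, -2]
```

Running it on the first two examples the same way reproduces the answers found by factoring.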
Cassini begins the three-week Rev151 on July 21 at its farthest distance from Saturn, called apoapse. At this point, Cassini is 2.69 million kilometers (1.67 million miles) from Saturn's cloud tops. The spacecraft is in the middle of the first equatorial phase of the Cassini Solstice Mission, which lasts until May 2012. During this phase, the spacecraft's orbits lie within the equatorial plane of the planet, providing opportunities to encounter Saturn's numerous moons, to image the rings edge-on, and to look at Saturn's cloud tops without the rings obscuring the view. Thirty-eight ISS observations are planned for Rev151, the vast majority designed to monitor the large northern hemisphere storm first seen in December 2010.
ISS begins its observations for Rev151 less than an hour after apoapse with a Saturn storm watch observation. Twelve such observations are planned between July 21 and July 29. They are designed to take advantage of short, two-minute segments when the spacecraft turns the optical remote sensing (ORS) instruments back to Saturn as a waypoint between other experiments' observations. These sequences include summed images taken in blue, clear and two methane band filters. Each of these sequences also will include one full-frame (not summed), continuum band image taken with the CB2 filter. On July 23 and 24, ISS will acquire ride-along images with a unique Visual and Infrared Spectrometer (VIMS) experiment. VIMS will attempt to observe two transits by an exo-planet orbiting the primary star of HD 189733 in the constellation Vulpecula. The planet has a mass of 1.14 Jupiter masses and orbits its parent star every 2.22 days. The first observation is designed to observe the planet passing in front of the star. The second observation is designed to observe the planet passing behind the star. The star will appear as only a dim dot in the ISS images, and the exo-planet will not be visible. ISS may detect the star dim as the planet passes in front during the first observation. Earth-based spectroscopy revealed water vapor and carbon dioxide in the planet's atmosphere, and data from the Spitzer Space Telescope has been used to create a crude map of its cloud-top features. On July 24, ISS will acquire a Titan monitoring observation. Titan will be 1.38 million kilometers (0.86 million miles) away at the time. A half-phase Titan will be visible from Cassini, allowing surface features and clouds across Titan's Shangri-La region to be observed. On July 26 and 29, ISS will acquire astrometric observations of Saturn's small, inner moons.
During these two observations, the camera system will image Atlas, Prometheus, Janus (both times), Calypso (both times), Polydeuces (both times), Epimetheus, Telesto, Methone, and Pandora. On July 30, ISS will ride along with the Ultraviolet Imaging Spectrometer (UVIS) during its observation of the large northern storm on Saturn. The scan will proceed slowly from west to east during the 16-hour observation.
On August 1 at 08:12 UTC, Cassini will reach periapse, its closest distance to Saturn for Rev151, at a distance of 183,740 kilometers (114,170 miles), during which three observations are planned. The first is a ride along observation with VIMS. The instrument will capture seven mosaics that cover the equatorial and northern mid-latitudes of Saturn. The mosaics will have two rows (one centered near the equator, the other near 15 degrees north) with three footprints in each row. Immediately afterward, UVIS, ISS and the other ORS instruments will focus their attention on an encounter with Rhea. While technically a non-targeted encounter, the flyby will actually take the spacecraft fairly close to the moon, 5,862 kilometers (3,642 miles) from its surface. The fields-of-view of these instruments will be trained on Epsilon Orionis, the middle star of Orion's Belt. UVIS will use this stellar occultation to measure Rhea's very thin atmosphere and any dust in its environment. By measuring the density of Rhea's atmosphere over both its day and night sides, the UVIS team hope to learn more about how the atmosphere is generated and how it responds to heating by the Sun. Wide- and narrow-angle images are planned during this observation, showing Rhea pass through the field-of-view as the cameras stay fixated on Epsilon Orionis. Finally, VIMS and ISS will turn back to Saturn to image the "string of pearls" belt in Saturn's northern hemisphere. The belt is located just north of the large northern storm. This belt predates the storm and was observed by VIMS as early as 2006. Each "pearl" is a cloud clearing that is spaced about 3.5 degrees of longitude apart in the belt. On August 3, ISS and UVIS will acquire another west-to-east scan across Saturn.
Beginning on August 5, ISS will acquire a series of wind tracking observations. These involve taking a series of images over the same region of Saturn in order to see how much cloud features are displaced from one another. The distance of displacement compared to the rotation of the planet can then be used to measure wind speeds in the various belts and zones of the planet. On August 5, two, two-hour observations are planned. Two, four-hour observations are planned for both August 7 and 8. On August 9 and 10, five-hour cloud tracking observations are on the schedule. On August 5, ISS will ride along with the Composite Infrared Spectrometer (CIRS) to collect a series of WAC color filter images (including RED, GRN, and BL1) of Saturn as CIRS acquires a mid-infrared map of planet. On August 7, a Titan cloud monitoring observation is planned, covering the Xanadu region of the satellite from a distance of 1.53 million kilometers (0.95 million miles). Two more Titan observations are planned for August 9 and 10, as Cassini images the anti-Saturn hemisphere of the satellite. The August 9 observation will be taken from a distance of 1.41 million kilometers (0.87 million miles) while the August 10 sequence will be acquired from a distance of 1.78 million kilometers (1.11 million miles).
On August 12, Cassini will reach apoapse on this orbit, bringing it to a close and starting Rev152. In the day or so before the end of the orbit, ISS will acquire two movie observations of Saturn's cloud tops, each lasting for a full Saturn rotation. Every 1.9 hours, the wide-angle camera will acquire a set of images using the CB2, MT2, MT3, RED, GRN, BL1, VIO, and UV filters. The field-of-view will favor the northern hemisphere in order to avoid the rings for the other instruments that will be riding along, particularly CIRS. ISS will also take a rotational light curve of the distant irregular satellite, Tarqeq. The observation is designed to measure the length of the small moon's day. Similar observations have been successfully taken of Albiorix, Siarnaq, Ymir, Kiviuq, and Bebhionn.
Image products created in Celestia. All dates in Coordinated Universal Time (UTC). Rhea basemap by Steve Albers.
Core Ecological Concepts
Understanding the patterns and processes by which nature sustains life is central to ecological literacy.
Fritjof Capra says that these may be called principles of ecology, principles of sustainability, principles of community, or even the basic facts of life. In our work with teachers and schools, the Center for Ecoliteracy has identified six of these principles that are important for students to understand and be able to apply to the real world.
Recognizing these core ecological concepts is one of the important results of our guiding principle, "Nature Is Our Teacher," which is described in the "Explore" section of this website. We present them again here for the guidance they can provide to teachers as they plan lessons as part of schooling for sustainability.
All living things in an ecosystem are interconnected through networks of relationship. They depend on this web of life to survive. For example: In a garden, a network of pollinators promotes genetic diversity; plants, in turn, provide nectar and pollen to the pollinators.
Nature is made up of systems that are nested within systems. Each individual system is an integrated whole and — at the same time — part of larger systems. Changes within a system can affect the sustainability of the systems that are nested within it as well as the larger systems in which it exists. For example: Cells are nested within organs within organisms within ecosystems.
Members of an ecological community depend on the exchange of resources in continual cycles. Cycles within an ecosystem intersect with larger regional and global cycles. For example: Water cycles through a garden and is also part of the global water cycle.
Each organism needs a continual flow of energy to stay alive. The constant flow of energy from the sun to Earth sustains life and drives most ecological cycles. For example: Energy flows through a food web when a plant converts the sun's energy through photosynthesis, a mouse eats the plant, a snake eats the mouse, and a hawk eats the snake. In each transfer, some energy is lost as heat, requiring an ongoing energy flow into the system.
All life — from individual organisms to species to ecosystems — changes over time. Individuals develop and learn, species adapt and evolve, and organisms in ecosystems coevolve. For example: Hummingbirds and honeysuckle flowers have developed in ways that benefit each other; the hummingbird's color vision and slender bill coincide with the colors and shapes of the flowers.
Ecological communities act as feedback loops, so that the community maintains a relatively steady state that also has continual fluctuations. This dynamic balance provides resiliency in the face of ecosystem change. For example: Ladybugs in a garden eat aphids. When the aphid population falls, some ladybugs die off, which permits the aphid population to rise again, which supports more ladybugs. The populations of the individual species rise and fall, but balance within the system allows them to thrive together.
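As an illustration only, the ladybug/aphid feedback loop can be mimicked with a classic predator-prey (Lotka-Volterra) model. All rates below are invented for the sketch and are not ecological data:

```python
# Discrete (Euler) sketch of a Lotka-Volterra predator-prey feedback loop.
# Parameters are arbitrary stand-ins chosen so the model oscillates.
def step(aphids, ladybugs, dt=0.01):
    da = 1.0 * aphids - 0.1 * aphids * ladybugs        # aphids grow, get eaten
    dl = 0.02 * aphids * ladybugs - 0.5 * ladybugs     # ladybugs grow by eating, die off
    return aphids + da * dt, ladybugs + dl * dt

a, l = 40.0, 9.0
for _ in range(2000):
    a, l = step(a, l)

# Both populations rise and fall but remain positive and bounded,
# which is the "dynamic balance" the text describes.
print(round(a, 1), round(l, 1))
```

The point of the sketch is qualitative: neither population wins permanently, and the fluctuations stay within a band around an equilibrium.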
A list of water pollution causes.
Scientific information, visually presented, on global climate change on earth with data on ice, carbon dioxide, sea level, and temperature data.
This interim report provides an assessment and review of the Japanese Climate Change Policy Programme.
The first purpose of Climate Interactive's C-Learn simulator is to help you use a scientifically rigorous model to set a goal for CO2 in the atmosphere, explore what it will take to reach that goal...
World Climate (formerly the Copenhagen Climate Exercise, or CCE) is a role-playing climate simulation designed by MIT and Sustainability Institute that gives groups from 10-60 an experience of...
1999–2013 Advanced Technology Environmental and Energy Center
Family: Papilionidae, Swallowtails view all from this family
Description Papilio troilus, the spicebush swallowtail, is a common black swallowtail butterfly found in North America, also known as the green-clouded butterfly. It has two subspecies, Papilio troilus troilus and Papilio troilus ilioneus, the latter found mainly in the Florida peninsula. The spicebush swallowtail derives its name from its most common host plant, the spicebush, a member of the genus Lindera.
The family to which spicebush swallowtails belong, Papilionidae, or the swallowtails, includes the largest butterflies in the world. The swallowtails are unique in that they continue to flutter their wings even while feeding. Unlike other swallowtail butterflies, spicebush swallowtails fly low to the ground instead of at great heights.
Dimensions 3 1/2-4 1/2" (89-114 mm).
Habitat Freshwater swamps, marshes & bogs, Cities, suburbs & towns, Meadows & fields, Forests & woodlands.
Range Northwest, Plains, Mid-Atlantic, Southwest, New England, Rocky Mountains, Texas, Western Canada, Southeast, Eastern Canada, Florida, California, Great Lakes.
Traditional Physics Cinema Project, due at 7:53am on Wednesday, Jan 9, 2013 (7min bell)
Purpose: in groups of 3-4, analyze the physics of a movie scene
a) has cause and effect involved. This means that in order to solve your chosen problem, you must first solve something else. No fair calculating several unrelated things! :-o The first action must directly cause the next.
b) has at least two concepts in its solution.
For example, the movie Road Trip shows a car jumping a ravine and landing on the other side. Your problem might be: “Would a passenger of the car break a leg?” To determine if the person broke a leg you would need to:
a) first: use the TNEOM to solve for the speed of the car at the far side of the ravine
(which is the same as the passenger’s speed)
b) then: use an energy calculation to find the force of the collision.
c) Finally: compare that force to the force needed to break a bone, data found from
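The three-step chain above can be sketched numerically. This is a minimal Python sketch in which every number is invented for illustration (the speeds, masses, crumple distance, and bone-strength threshold are assumptions, not data from the movie):

```python
import math

g = 9.8           # m/s^2
v_launch = 30.0   # m/s, assumed takeoff speed of the car
drop = 3.0        # m, assumed height lost crossing the ravine

# Step (a): speed at the far side from energy conservation (kinematics)
v_land = math.sqrt(v_launch**2 + 2 * g * drop)

# Step (b): work-energy estimate of the force on a 70 kg passenger if the
# vertical part of the motion is stopped over a 0.1 m crumple distance:
# F * d = (1/2) m v^2
m = 70.0
v_vertical = math.sqrt(2 * g * drop)     # vertical speed gained in the fall
stop_dist = 0.1
F = m * v_vertical**2 / (2 * stop_dist)

# Step (c): compare with an assumed bone-breaking threshold (illustrative only)
F_break = 4000.0  # N
print(round(v_land, 1), round(F), F > F_break)
```

Notice the cause-and-effect chain: the landing speed from step (a) feeds the energy calculation in step (b), whose result is compared in step (c).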
a) DVD or flash stick with scene.
b) a colorful, interesting poster which:
i. Names the movie
ii. Shows the scene you are using
iii. States the question you are answering, using a dozen words or less
iv. Connects the concepts and equations to when they are being used in the scene. (Please do not make this a list of equations.)
v. Answers the question!
c) a typed explanation of how you solved your problem, including all equations, substitutions and solutions as well as explanatory sentences, pictures and diagrams. If you are in Word, learn to use subscripts and superscripts and Greek letters (for example, to get delta D, type D and change its font to Symbol). The report will be handed in to your instructor before school the day of presentations, so keep a copy for yourself if you need one!
d) a well organized, entertaining presentation to the class during which you avoid numbers as much as possible (except for the final answer). All group members must present a significant portion of the project. You may NOT use the board.
The grading rubric:
Traditional Cinema Project Problems (30 pts)
* video clip of scene is shown (DVD or on flash stick)
* All members present part of project
* Visual aid is clear, colorful, and explains concepts being used
* Visual aid states question being asked and answer
* Visual aid has scene depicted
Written report expectations:
* Formulas with letters only are shown before numbers are put in
* Project has a minimum of two concepts
* Two movie actions are linked together.
* The problem is solved and a clear answer is offered.
· having multiple questions instead of a single question
· incorrect physics
· missing parts to presentation or report
· parts of report not typed
· presentation filled with numbers instead of conceptual explanations with a final numerical answer
A tornado is a violently rotating column of air in contact with the ground. When the column of air is not in contact with the ground, it is called a funnel cloud. A tornado in contact with a water surface is called a waterspout. Tornadoes can vary greatly in size, intensity and appearance. Wind speeds in a tornado can range from just less than 100 miles per hour to over 200 miles per hour. The intensity of a tornado is classified using the Fujita scale.
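A hedged sketch of how such a classification might be coded. The band boundaries below are the commonly cited original Fujita-scale wind ranges in miles per hour; they come from standard references, not from the paragraph above:

```python
# Approximate original Fujita-scale wind bands (mph) -- reference values,
# used here only to illustrate the idea of rating by wind speed.
FUJITA_BANDS = [
    (40, 72, "F0"), (73, 112, "F1"), (113, 157, "F2"),
    (158, 206, "F3"), (207, 260, "F4"), (261, 318, "F5"),
]

def fujita_rating(wind_mph):
    """Return the Fujita label whose band contains the given wind speed."""
    for lo, hi, label in FUJITA_BANDS:
        if lo <= wind_mph <= hi:
            return label
    return "out of range"

print(fujita_rating(100))  # F1
print(fujita_rating(210))  # F4
```

In practice ratings are assigned from surveyed damage rather than measured wind, so this lookup is a simplification.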
Located a 1,000 miles south of Hawai'i, Palmyra Atoll is one of the most spectacular marine wilderness areas on Earth. The Nature Conservancy bought Palmyra in 2000 from the Fullard-Leo family, who had previously turned down offers to have the atoll used as a nuclear waste site and a casino.
Today, Palmyra is a national marine monument and the Conservancy and the U.S. Fish & Wildlife Service are partnering to protect it. Through the Palmyra Atoll Research Consortium, it is also being developed as a center for scientific study. What we can learn at Palmyra—about global climate change, coral reefs, marine restoration and invasive species—promises to inform conservation strategies for island ecosystems throughout the Pacific and around the world.
Read the Conservancy national magazine story about Palmyra-- the ultimate living laboratory for researchers who study sharks.
Want to learn more about sharks? Read these stories from Palmyra and the Pacific.
Unexpected things happen on a remote atoll like Palmyra. Just ask the Conservancy’s Zach Caldwell, who was called upon to save the life of a sailor lost at sea.
See Palmyra through the eyes of Conservancy marine scientist Kydd Pollock.
Images from the past depict Palmyra as it was during the Second World War.
Follow the action and see how the Palmyra Restoration Project was carried out.
One of my faculty colleagues, Michael Runtz, took this photo (right) of ice bubbles in Cranberry Lake in Ontario. How did the bubbles form in this amazing fashion?
Without any scale reference or indication of the depth of the lake I cannot tell for certain, but I have seen similar bubbles frozen in ponds where I grew up in upstate New York.
In freshwater ponds and lakes, the biological activity of microbes in the sediments on the lake floor produces bubbles of gas, usually methane or carbon dioxide. In winter this activity is slow, but it is still present.
The gas bubbles rise to the frozen surface of the lake, becoming trapped there. The following night, another layer of ice forms beneath the bubble, so it is encased in ice. This leads to the flattened shape you see. The picture is a frozen daily record of the gas emissions.
Corals are marine organisms from the class Anthozoa and exist as small sea anemone–like polyps, typically in colonies of many identical individuals. The group includes the important reef builders that are found in tropical oceans, which secrete calcium carbonate to form a hard skeleton.
A coral "head", commonly perceived to be a single organism, is formed from thousands of individual but genetically identical polyps, each polyp only a few millimeters in diameter. Over thousands of generations, the polyps lay down a skeleton that is characteristic of their species. A head of coral grows by asexual reproduction of the individual polyps. Corals also breed sexually by spawning, with corals of the same species releasing gametes simultaneously over a period of one to several nights around a full moon.
Although corals can catch plankton using stinging cells on their tentacles, these animals obtain most of their nutrients from symbiotic unicellular algae called zooxanthellae. Consequently, most corals depend on sunlight and grow in clear and shallow water, typically at depths shallower than 60 m (200 ft). These corals can be major contributors to the physical structure of the coral reefs that develop in tropical and subtropical waters, such as the enormous Great Barrier Reef off the coast of Queensland, Australia. Other corals do not have associated algae and can live in much deeper water, such as in the Atlantic, with the cold-water genus Lophelia surviving as deep as 3000 m. Examples of these can be found living on the Darwin Mounds located north-west of Cape Wrath, Scotland. Corals have also been found off the coast of Washington State and the Aleutian Islands in Alaska.
Corals belong to the class Anthozoa and are divided into two subclasses, depending on the number of tentacles or lines of symmetry, and a series of orders corresponding to their exoskeleton, nematocyst type and mitochondrial genetic analysis. Those with eight tentacles are called Octocorallia or Alcyonaria and comprise soft corals, sea fans and sea pens. Those with tentacles in multiples of six are called Hexacorallia or Zoantharia. This group includes the reef-building corals (Scleractinia), sea anemones and zoanthids.
While a coral head appears to be a single organism, it is actually a head of many individual, yet genetically identical, polyps. The polyps are multicellular organisms that feed on a variety of small organisms, from microscopic plankton to small fish.
Polyps are usually a few millimeters in diameter, and are formed by a layer of outer epithelium and inner jellylike tissue known as the mesoglea. They are radially symmetrical with tentacles surrounding a central mouth, the only opening to the stomach or coelenteron, through which both food is ingested and waste expelled.
The stomach closes at the base of the polyp, where the epithelium produces an exoskeleton called the basal plate or calicle (L. small cup). This is formed by a thickened calciferous ring (annular thickening) with six supporting radial ridges (as shown below). These structures grow vertically and project into the base of the polyp. When polyps are physically stressed, they contract into the calyx so that virtually no part is exposed above the skeletal platform. This protects the organism from predators and the elements (Barnes, R.D., 1987; Sumich, 1996).
The polyp grows by extension of vertical calices, which are occasionally septated to form a new, higher basal plate. Over many generations this extension forms the large calciferous (calcium-containing) structures of corals and ultimately coral reefs.
Formation of the calciferous exoskeleton involves deposition of the mineral aragonite by the polyps from calcium ions they acquire from seawater. The rate of deposition, while varying greatly between species and environmental conditions, can be as much as 10 g / m² of polyp / day (0.3 ounce / sq yd / day). This is light dependent, with night-time production 90% lower than that during the middle of the day.
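As a rough check on these figures, the sketch below combines the text's peak rate (10 g/m² of polyp per day) with its statement that night-time production is 90% lower than midday. The 12-hour day/night split is an assumption for illustration; the text does not give one.

```python
# Back-of-envelope aragonite deposition using the figures in the text.
# Assumption (not from the text): a 12 h day / 12 h night split.

PEAK_RATE_G_PER_M2_DAY = 10.0   # upper-bound rate from the text
NIGHT_FRACTION = 0.10           # night-time production is 90% lower

def daily_deposition(peak_rate=PEAK_RATE_G_PER_M2_DAY,
                     day_hours=12.0, night_fraction=NIGHT_FRACTION):
    """Average grams of aragonite per m^2 deposited over one 24 h cycle."""
    night_hours = 24.0 - day_hours
    day_part = peak_rate * (day_hours / 24.0)
    night_part = peak_rate * night_fraction * (night_hours / 24.0)
    return day_part + night_part

if __name__ == "__main__":
    per_day = daily_deposition()
    print(f"~{per_day:.1f} g/m2/day, ~{per_day * 365 / 1000:.1f} kg/m2/yr")
```

Under these assumptions a polyp at the maximum rate averages about 5.5 g/m² per day, or roughly 2 kg/m² per year.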
The polyp's tentacles trap prey using stinging cells called nematocysts. These are cells modified to capture and immobilize prey, such as plankton, by injecting poisons, firing very rapidly in response to contact. These poisons are usually weak but in fire corals they are potent enough to harm humans. Nematocysts can also be found in jellyfish and sea anemones. The toxins injected by nematocysts immobilize or kill prey, which can then be drawn into the polyp's stomach by the tentacles through a contractile band of epithelium called the pharynx.
The polyps are interconnected by a complex and well-developed system of gastrovascular canals, allowing significant sharing of nutrients and symbionts. In soft corals these canals range from 50 to 500 μm in diameter and allow transport of both metabolites and cellular components.
Aside from feeding on plankton, many corals as well as other cnidarian groups such as sea anemones (e.g. Aiptasia), form a symbiotic relationship with a class of algae, zooxanthellae, of the genus Symbiodinium. The sea anemone Aiptasia, while considered a pest among coral reef aquarium hobbyists, has served as a valuable model organism in the scientific study of cnidarian-algal symbiosis. Typically a polyp will harbor one particular species of algae. Via photosynthesis, these provide energy for the coral, and aid in calcification. The algae benefit from a safe environment, and use the carbon dioxide and nitrogenous waste produced by the polyp. Due to the strain the algae can put on the polyp, stress on the coral often triggers ejection of the algae, known on a large scale as coral bleaching, as it is the algae that contribute to the brown coloration of corals; other colors, however, are due to host coral pigments, such as GFPs (green fluorescent protein). Ejecting the algae increases the polyps' chances of surviving stressful periods - they can regain the algae at a later time. If the stressful conditions persist, the polyps, and corals, will eventually die.
Corals predominantly reproduce sexually, with 25% of hermatypic (stony) corals forming single-sex (gonochoristic) colonies, while the rest are hermaphroditic. About 75% of all hermatypic corals "broadcast spawn" by releasing gametes - eggs and sperm - into the water to spread colonies over large distances. The gametes fuse during fertilisation to form a microscopic larva called a planula, typically pink and elliptical in shape; a moderately sized coral colony can form several thousand of these larvae per year to overcome the huge odds against formation of a new colony.
The planula swims towards light, exhibiting positive phototaxis, to surface waters, where it drifts and grows for a time before swimming back down to locate a surface on which it can attach and establish a new colony. There are high failure rates at many stages of this process, and even though millions of gametes are released by each colony, very few new colonies are formed. The time from spawning to settling is usually 2 or 3 days, but can be up to 2 months. The larva grows into a coral polyp and eventually becomes a coral head by asexual budding and growth, creating new polyps.
Corals that do not broadcast spawn are called brooders, with most non-stony corals displaying this characteristic. These corals release sperm but harbour the eggs, allowing larger, negatively buoyant, planulae to form which are later released ready to settle. The larva grows into a coral polyp and eventually becomes a coral head by asexual budding and growth to create new polyps.
Synchronous spawning is very typical on a coral reef and often, even when there are multiple species present, all the corals on the reef release gametes during the same night. This synchrony is essential so that male and female gametes can meet and form planula. The cues that guide the release are complex, but over the short term involve lunar changes, sunset time, and possibly chemical signalling. Synchronous spawning may have the result of forming coral hybrids, perhaps involved in coral speciation. In some places the coral spawn can be dramatic, usually occurring at night, where the usually clear water becomes cloudy with gametes.
Corals must rely on environmental cues, which vary from species to species, to determine the proper time to release gametes into the water. Corals use two methods of sexual reproduction, which differ in whether the female gametes are released: broadcast spawning and brooding, both described above.
Within a head of coral, the genetically identical polyps reproduce asexually, allowing the colony to grow. This is achieved either through gemmation (budding) or through division, both shown in the diagrams of Orbicella annularis. Budding involves a new polyp growing from an adult, whereas division forms two polyps, each as large as the original.
Whole colonies can reproduce asexually through fragmentation or bailout, forming another individual colony with the same genome.
The hermatypic, stony corals are often found in coral reefs, large calcium carbonate structures generally found in shallow, tropical water. Reefs are built up from coral skeletons and held together by layers of calcium carbonate produced by coralline algae. Reefs are extremely diverse marine ecosystems being host to over 4,000 species of fish, massive numbers of cnidarians, molluscs, crustaceans, and many other animals.
Tabulate corals occur in the limestones and calcareous shales of the Ordovician and Silurian periods, and often form low cushions or branching masses alongside Rugose corals. Their numbers began to decline during the middle of the Silurian period and they finally became extinct at the end of the Permian period, 250 million years ago. The skeletons of Tabulate corals are composed of a form of calcium carbonate known as calcite.
Rugose corals became dominant by the middle of the Silurian period, and became extinct early in the Triassic period. The Rugose corals existed in solitary and colonial forms, and like the Tabulate corals their skeletons are also composed of calcite.
The Scleractinian corals filled the niche vacated by the extinct Rugose and Tabulate corals. Their fossils may be found in small numbers in rocks from the Triassic period, and become relatively common in rocks from the Jurassic and later periods. The skeletons of Scleractinian corals are composed of a form of calcium carbonate known as aragonite. Although they are geologically younger than the Tabulate and Rugose corals, their aragonitic skeleton is less readily preserved, and their fossil record is less complete.
At certain times in the geological past corals were very abundant, just as modern corals are in the warm clear tropical waters of certain parts of the world today. Like modern corals their ancestors built reefs, some of which now lie as great structures in sedimentary rocks.
These ancient reefs are not composed entirely of corals. Algae, sponges, and the remains of many echinoids, brachiopods, bivalves, gastropods, and trilobites that lived on the reefs are preserved within them. This makes some corals useful index fossils, enabling geologists to date the rocks in which they are found.
Corals are highly sensitive to environmental changes. Scientists have predicted that over 50% of the coral reefs in the world may be destroyed by the year 2030; as a result they are generally protected through environmental laws. A coral reef can easily be swamped in algae if there are too many nutrients in the water. Coral will also die if the water temperature changes by more than a degree or two beyond its normal range or if the salinity of the water drops. In an early symptom of environmental stress, corals expel their zooxanthellae; without their symbiotic unicellular algae, coral tissues become colorless as they reveal the white of their calcium carbonate skeletons, an event known as coral bleaching.
Many governments now prohibit removal of coral from reefs to reduce damage by divers. However, damage is still caused by anchors dropped by dive boats or fishermen. In places where local fishing causes reef damage, education schemes have been run to inform the population about reef protection and ecology.
The narrow niche that coral occupies, and the stony corals' reliance on calcium carbonate deposition, means they are very susceptible to changes in water pH. Ocean acidification, caused by dissolution of carbon dioxide in the water that lowers pH, is currently occurring in the surface waters of the world's oceans due to increasing atmospheric carbon dioxide. Lowered pH reduces the ability of corals to produce calcium carbonate skeletons, and at the extreme, results in the dissolution of those skeletons entirely. Without deep and early cuts in anthropogenic CO2, scientists fear that ocean acidification may inevitably result in the severe degradation or destruction of coral species and ecosystems.
A combination of temperature changes, pollution, and overuse by divers and jewelry producers has led to the destruction of many coral reefs around the world. This has increased the importance of coral biology as a discipline. Climatic variations can cause temperature changes that destroy corals. For example, during the 1997-98 warming event all the hydrozoan Millepora boschmai colonies near Panamá were bleached and died within six years - this species is now thought to be extinct.
Live coral is also highly sought after in the aquarium trade. Although often difficult to maintain, live coral makes a stunning addition to any saltwater aquarium, provided the proper ecosystem is in place.
Intensely red coral is sometimes also called fire coral, although it is entirely different from the stinging hydrozoan fire corals mentioned above. This red coral is now very rare because of overharvesting, driven by the great demand for perfect red coral in jewelry-making.
Some coral species exhibit banding in their skeletons resulting from annual variations in their growth rate. In fossil and modern corals these bands allow geologists to construct year-by-year chronologies, a form of incremental dating, which can provide high-resolution records of past climatic and environmental changes when combined with geochemical analysis of each band.
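A minimal sketch of how such an incremental chronology might be assembled, assuming the outermost band corresponds to the year of collection and each band inward is one year older. The band widths here are invented for illustration.

```python
# Incremental dating sketch: assign a calendar year to each annual
# growth band, outermost band first. Widths below are hypothetical.

def band_chronology(band_widths_mm, collection_year):
    """Map calendar year -> band width, outermost band = collection year."""
    return {collection_year - i: w for i, w in enumerate(band_widths_mm)}

if __name__ == "__main__":
    widths = [4.1, 3.8, 4.4, 2.9, 3.6]   # made-up measurements, in mm
    for year, width in sorted(band_chronology(widths, 2012).items()):
        print(year, width)
```

Paired with geochemical measurements on each band, a mapping like this is what lets each year's chemistry be placed on a calendar axis.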
Certain species of corals form communities called microatolls. The vertical growth of microatolls is limited by average tidal height. By analyzing their various growth morphologies, microatolls can be used as a low-resolution record of patterns of sea level change. Fossilized microatolls can also be dated using radiocarbon dating to obtain a chronology of patterns of sea level change. Such methods have been used to reconstruct Holocene sea levels.
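On the radiocarbon side, a conventional (uncalibrated) ¹⁴C age follows directly from the measured fraction of modern carbon, using the standard Libby mean-life of 8033 years. The sketch below is illustrative only; real Holocene chronologies additionally require calibration against a calibration curve.

```python
import math

# Conventional radiocarbon age from the fraction of modern carbon (F).
# Uses the Libby mean-life of 8033 yr, the standard convention for
# reporting uncalibrated 14C ages.

LIBBY_MEAN_LIFE_YR = 8033.0

def radiocarbon_age(fraction_modern):
    """Uncalibrated 14C age in years BP, for 0 < fraction_modern <= 1."""
    if not 0.0 < fraction_modern <= 1.0:
        raise ValueError("fraction_modern must be in (0, 1]")
    return -LIBBY_MEAN_LIFE_YR * math.log(fraction_modern)

if __name__ == "__main__":
    # A fossil microatoll retaining half its original 14C:
    print(f"{radiocarbon_age(0.5):.0f} yr BP")
```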
Harpago chiragra is the second largest of the Strombidae in the Marshalls, second only to Lambis truncata. It is common on both lagoon and seaward reefs, including the large flat-topped lagoon pinnacles. They are typically on sand or rubble, or sometimes on hard substrate. Adults are nearly always seen paired, with a large female accompanied by one, or rarely, a couple of smaller males. Like the large Lambis truncata, this species is protected (at least where divers from Kwajalein are concerned), although the initial protection recommendations from Fish & Wildlife and the Kwajalein Army regulations on the issue both misidentified it as Lambis scorpius, a very different and much smaller species. More comments about this under L. truncata. The individual in the first two photos below had unusually long and recurved "fingers."
The eyeballs peek out over the operculum in this aperture view.
Juveniles have thin outer lips on the shell and lack the fingers.
Here the green proboscis bearing the mouth can be seen between the two eye stalks in this juvenile specimen.
As the fingers develop, they remain hollow for a while until they eventually fill in.
The specimen below was putting down an egg mass, seen as the whitish sandy area just to the right of the shell. Like some of the other Strombidae species, H. chiragra mixes up its eggs in a sand and mucus slurry that holds together while the eggs develop. What surprised me is that this female is still rather young, with fingers that have not completely filled out.
Here's another young one getting ready to flip.
Created 1 October 2009
Updated 22 November 2011
Changing Planet: Melting Glaciers
As the Earth system warms due to rising levels of greenhouse gases in the atmosphere, observations show that land-based glaciers are melting fast in many places around the world - in the United States, Europe, South America, and Asia. Because of ice-albedo feedback, as ice melts, surface albedo falls, so more sunlight is absorbed, producing further warming and faster melting. Model predictions indicate that the remaining large glaciers in Glacier National Park will be gone by 2030.
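The standard ice-albedo feedback (melting ice lowers surface albedo, so more sunlight is absorbed, which melts more ice) can be caricatured in a few lines. All numbers below are invented for illustration; this is a toy loop, not a climate model.

```python
# Toy ice-albedo feedback. Invented numbers throughout: warming shrinks
# ice cover, lower ice cover lowers albedo, lower albedo raises the
# absorbed sunlight, which melts more ice the next step.

SOLAR_IN = 340.0                 # W/m^2, illustrative global-mean insolation
ALBEDO_ICE, ALBEDO_OCEAN = 0.6, 0.1

def absorbed(ice_fraction):
    """Absorbed shortwave for a surface that is part ice, part ocean."""
    albedo = ice_fraction * ALBEDO_ICE + (1 - ice_fraction) * ALBEDO_OCEAN
    return SOLAR_IN * (1.0 - albedo)

def run_feedback(ice_fraction=0.5, melt_per_wm2=0.001, steps=5):
    """Each step: a fixed background melt (0.01) plus extra melt driven by
    the absorbed power gained since the start (the feedback term)."""
    base = absorbed(ice_fraction)
    history = [ice_fraction]
    for _ in range(steps):
        extra = absorbed(ice_fraction) - base
        ice_fraction = max(0.0, ice_fraction - melt_per_wm2 * extra - 0.01)
        history.append(ice_fraction)
    return history

if __name__ == "__main__":
    print(run_feedback())   # ice fraction shrinks a little faster each step
```

The point of the toy is visible in the output: each step's loss is slightly larger than the last, because the previous step's melting already lowered the albedo.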
Click on the video at the left to watch the NBC Learn video - Changing Planet: Melting Glaciers.
Lesson plan: Changing Planet: Melting Glaciers
From childhood to adulthood, magnets have fascinated man. Young children are given sets of magnets to play with, linking them end to end. School children are shown, with iron ore dust, just how those invisible magnetic lines reach and curl, preparatory to a lecture about the Earth's magnetic field and on to how to use the compass when lost in the woods. The use of magnets so permeates industrialized human society that one would be hard pressed to find an aspect not affected. Claspless doors are secured with magnets, airplanes fly on automatic based on magnetic alignment, and recyclers separate out metal with magnets, to name but a few. Yet magnetism is not understood by man, though theories abound. It's clear that something flows, but just what is flowing is unknown. It's clear that direction is important, but just what is dictating this direction is unclear. It's clear that magnetism occurs naturally, especially in certain ores such as iron, but what it is that is special about iron ore is a puzzle.
Magnetism is the palpable, measurable effect of a subatomic particle not yet delineated by man. In fact, there are several dozen subatomic particles involved, out of the 387 involved in what humans assume to be simply the flow of electrons. Where electric current can be made to flow in any direction, the path of least resistance, magnetic flow seems to be very single-minded. In fact, it is also going in the path of least resistance, as can be seen when one understands the path and what constitutes resistance for magnetic flow. Unlike electricity, which only occasionally flows in nature, the flowing subatomic particles that constitute a magnetic field are constantly flowing. This is the natural state, to be in motion. The path of least resistance, therefore, is to go with the flow, and the flow is determined by the biggest bully in the vicinity.
A single atom of iron, isolated, will establish the direction of flow based on the tightly orbiting electron particles, of which there are hundreds of sub-types. These tight orbits arrange themselves in a manner not unlike the planets around a sun, but the field, of course, is much more crowded. Given the fairly static number of these particles that will hang around an iron ore nucleus, the orbiting swirl may have a rhythm, rather than a steady hum. Put 3 groups of 3 into a cycle of 10 and you have whomp whomp whomp pause. Should the cycle, based on the nucleus and the electron sub-atomic particles it attracts due to its size and composition, be 4 groups of 3 in a cycle of 12, you would have whomp whomp whomp whomp. The steady hum of the second cycle does not lack a magnetic flow, it is just diffuse. The irregular cycle in the first example finds the magnetic flow escaping during the pause. Being attracted again to the best partner in the vicinity, the single iron atom, the magnetic sub-atomic particles will circle around, taking the path of least resistance which of course is on the other side of the atom from the outward flow.
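The "cycle" arithmetic in this passage (3 pulses of 3 in a cycle of 10 leaves a pause; 4 pulses of 3 in a cycle of 12 does not) can be made concrete with a toy sketch. This only illustrates the analogy as stated in the text, not any physics.

```python
# Toy sketch of the text's cycle arithmetic: fit `groups` pulses of
# `size` beats into a cycle of `length` beats and report whether any
# beats are left over as a "pause". Purely illustrative.

def cycle_pattern(groups, size, length):
    """Return (beat labels, leftover beats) for one cycle."""
    used = groups * size
    if used > length:
        raise ValueError("pulses do not fit in the cycle")
    beats = ["whomp"] * groups + (["pause"] if used < length else [])
    return beats, length - used

if __name__ == "__main__":
    print(cycle_pattern(3, 3, 10))  # irregular cycle: ends with a pause
    print(cycle_pattern(4, 3, 12))  # even cycle: a steady hum, no pause
```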
Placing a second iron atom next to the first finds the two lining up, so the flow escaping during the pause of each goes in the same direction. This is a bit like forcing a second water flow into a flowing stream. Toss a stick into both forceful streams and you will see that the water flows are moving in the same direction as much as possible - the path of least resistance. In this manner the magnetic flow of the largest bully forces all else in the neighborhood to line up. Where the iron ore atoms are caught in an amalgam and not altogether free to shift their positions within the amalgam, the magnetic flow may physically move the amalgam, this being, again, the path of least resistance. For those who would state that magnetism is not a thing, as it can't be weighed or measured or seen, we would point to the child's trick whereby two magnets are held positive end to positive end. Let go and they move so that they are aligned positive end to negative end. What made these magnets move, if not a thing?
In his lab at Indiana University, James Goodson keeps violet-eared waxbills – a stunning but notoriously aggressive type of finch. Males and females form life-long bonds but they don’t play well with others. “Most of our animals are housed in male-female pairs, but were you to introduce another adult into their cage, most of them would attack immediately,” says Goodson. But some of Goodson’s birds don’t fit the stereotype. They almost never attack intruders.
These birds weren’t born docile. They became that way after Goodson stopped a special group of neurons in their brains from releasing a chemical called VIP. This single act turned fighters into pacifists and confirmed, in dramatic fashion, that there’s a special class of cell that drives aggression in these bird brains.
The neurons that Goodson targeted are found in the hypothalamus – a primitive ball-shaped region in the centre of the brain that governs many of our basic functions, from sleep to hunger to body temperature. It also has a long history with aggression. For decades, scientists have found that an electric burst to the hypothalamus can make mammals and birds more aggressive. And last year, Dayu Lin showed that he could transform docile mice into vicious brutes by switching on neurons in a specific part of the hypothalamus called the ventrolateral ventromedial hypothalamus (VMHvl).
Goodson focused on an adjoining area called the anterior hypothalamus (AH), which has been linked to aggressive behaviour in back-boned animals from fish to humans. He found that neurons in the upper part of the AH are more active when male waxbills attack subordinate intruders. The more belligerent the bird, the more active these neurons were.
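The reported pattern (more belligerent birds have more active AH neurons) is a positive correlation. The sketch below illustrates the kind of analysis such a finding implies, on invented data; it is not the paper's actual dataset or method.

```python
# Pearson correlation between belligerence and AH neuron activity.
# Both data series below are invented for illustration only.

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

if __name__ == "__main__":
    attacks_per_trial = [2, 5, 7, 9, 12]            # invented counts
    ah_activity_au = [1.1, 1.9, 2.4, 3.0, 3.8]      # invented, arbitrary units
    print(f"r = {pearson_r(attacks_per_trial, ah_activity_au):.2f}")
```

A value of r near +1 would correspond to the "more belligerent, more active" relationship described in the text.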
This small part of the hypothalamus is rife with neurons that secrete a hormone called VIP. It’s a chemical jack-of-all-trades that affects the width of blood vessels, controls the movement of gut muscles, and more. It also affects the brain, but aside from some involvement in body clocks, we know very little about what it does there. Here’s one clue: every part of the brain that’s important for social behaviour has neurons that secrete, carry or respond to VIP.
Goodson found another clue earlier this year: he showed that the amount of VIP in the AH predicted how aggressive different species of sparrow are. Now, he has managed to completely abolished any aggressive behaviour in his waxbills by stopping the neurons in their AH from producing VIP. He used a small piece of DNA that’s specifically matched to the birds’ VIP gene, and stops them from making the hormone. When he infused their AH with this fragment, their reactions to intruders changed from immediate fisticuffs to the equivalent of harsh language.
The birds hadn’t been generally sedated by the treatment, because they kept on eating and moving as per usual. Their social skills hadn’t been dulled, because they were still willing to spend time with blue waxbills – a different species that they sometimes associate with in the wild. And they could still tell that an intruder had arrived, for they continued to make angry calls and threatening displays. They just didn’t fight, chase or attack.
Goodson got the same results when he worked with zebra finches. Unlike the fierce waxbills, zebra finches are very sociable and live in large colonies. They only attack when competing for mates or defending their nests. But even these pacifists became even more peaceful after a VIP-reducing injection.
These results are astonishing in how specific they are. Goodson has shown that a specific group of neurons – those in the upper AH that secrete VIP – drive aggressive behaviour in birds, and only aggressive behaviour. They don’t affect more pleasant social skills, the ability to recognise other individuals, anxiousness, or movement—just aggression.
Even more specifically, Goodson thinks that these neurons affect aggression when birds defend their territories, but not when they compete for mates. After he injected the zebra finches with the anti-VIP fragment and placed them together with other finches, they took two days to become less aggressive. On the first day, when they were competing for mates, they’d fight as often as their untreated peers. On the second day, after they had paired up and set up a nest, suddenly they became more passive.
Dayu Lin, whose work I described above, praises the paper. “Not many mammalian studies have implied a role of VIP in aggression, “she says. “The big question is whether that cell group exists in mammals and whether it has the same function.” This is important not just for understanding the relevance to humans, but because it would be easier to unpick what VIP actually does in mice, according to Clifford Saper, who studies the hypothalamus. “The genetic tricks that are available in mice cannot be done in birds,” he says.
Saper also checked the Allen Brain Atlas – a massive database showing how genes are switched on across a mouse’s brain – and says, “It doesn’t appear that there is a similar VIP group in mouse brain.” But Goodson counters that the Atlas may be out of date. Many of the studies looking at VIP in mouse brains are old, and don’t focus on the hypothalamus. And because neurons make and release VIP very quickly, it’s quite hard to track the chemical. At the very least, we know that VIP is found in the AH of sheep and hedgehogs, and Goodson says, “This region of the brain is very similar across all vertebrate groups, and in both birds and mammals.”
Reference: Goodson, Kelly, Kingsbury & Thompson. 2012. An aggression-specific cell type in the anterior hypothalamus of finches. PNAS. http://dx.doi.org/10.1073/pnas.1207995109
Renewable energy has a critical role to play in reducing greenhouse gases and leading the United States toward energy independence. That role should soon be getting bigger: The U.S. government is pushing for a 100 percent increase in renewable energy by 2012. The two biggest sources are the wind and the sun. But the variable nature of wind and solar energy can cause problems with matching supply to demand—problems that would be greatly eased if only we had a really good way of storing electricity on an industrial scale. Currently there are several storage systems vying for dominance.
Compressed-Air Energy Storage
At night, when the strongest winds blow and customers are sleeping, unused wind-generated electricity can run giant compressors, forcing large amounts of air into sealed underground spaces. When demand rises during the day, the compressed air can be used to spin turbines, turning the energy back into electricity. Georgianne Peek, a mechanical engineer at Sandia National Laboratories in New Mexico, says this technology can provide a lot of power over long periods of time at a relatively low cost. The technology is also well established: Two compressed-air storage plants have been in operation for decades. The McIntosh Unit 1 plant in McIntosh, Alabama, went online in 1991; a similar plant in Germany has been running since the 1970s. McIntosh 1 can reliably put out 110 megawatts for 26 hours. (One megawatt is enough power to supply roughly 600 to 1,000 typical American homes.)
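The plant figures quoted above support some quick arithmetic: a full discharge at 110 MW for 26 hours, combined with the article's own rule of thumb that one megawatt supplies roughly 600 to 1,000 homes.

```python
# Quick arithmetic from the McIntosh Unit 1 figures in the text:
# 110 MW sustained for 26 hours, and 600-1000 homes per megawatt.

def total_energy_mwh(power_mw, hours):
    """Energy delivered over one full discharge, in MWh."""
    return power_mw * hours

def homes_served(power_mw, homes_per_mw=(600, 1000)):
    """Low and high estimates of homes supplied while the plant runs."""
    low, high = homes_per_mw
    return power_mw * low, power_mw * high

if __name__ == "__main__":
    print(total_energy_mwh(110, 26), "MWh per full discharge")
    print(homes_served(110), "homes while generating")
```

That works out to 2,860 MWh per discharge, enough to supply on the order of 66,000 to 110,000 typical American homes while the plant is running.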
The compressed-air system does have its drawbacks. For one, it does not completely eliminate the need for fossil fuels, because the associated electric generators use natural gas to supplement the energy from the stored compressed air. Compressed-air storage systems also require an airtight underground space, limiting the locations where they can be installed. The two existing compressed-air plants use natural salt domes. Engineers flushed the domes with water to dissolve the salt, then pumped out the brine to create a nicely sealed cavern. But salt dome formations are not plentiful, so researchers are investigating other inexpensive ways to create storage chambers. A facility proposed for Norton, Ohio, would use an abandoned limestone mine. Another, in Iowa, would pump air into drained natural aquifers. Abandoned oil wells and depleted natural gas reservoirs might also work, Peek says, as long as they are not too remote to be hooked into the electrical grid.
Molten Salt Heat Exchanger
The sun, like the wind, is a variable source of energy, disappearing at night and ducking behind clouds at inconvenient moments. Thermal storage systems, such as molten salt heat exchangers, mitigate those problems by making solar power available anytime.
Right now only one example exists: Spain’s Andasol Power Station, which began operating last fall. Andasol has about 126 acres’ worth of trough-shaped solar collectors (pdf) that focus the sun’s heat onto pipes full of synthetic oil. The hot oil is piped to a nearby power plant, where it is used to generate steam. During the day, some of the oil is used to heat a mixture of liquid nitrate salts (made by combining elements like sodium and potassium with nitric acid) to temperatures above 700 degrees Fahrenheit. These liquid salts can retain their heat for weeks in insulated tanks. When the collectors cannot generate enough power to meet demand, the salts are drawn out from the tanks and their heat is tapped to run the power plant. A full stockpile of molten salts can keep the Andasol plant running at top capacity—50 megawatts of electricity—for up to seven and a half hours.
Molten salt backup systems make solar power more flexible and reliable, says Frank Wilkins of the U.S. Department of Energy’s Solar Energy Technologies Program. Wilkins says that thermal storage systems can increase a solar plant’s annual capacity factor (the percentage of time, on average, that the plant is operational) from 25 percent to up to 70 percent. Expense is the biggest drawback. The Andasol Power Station cost about $400 million, and that was just for phase one of a planned three-phase project. But costs may come down as more plants are built. This past February, the Arizona Public Service power utility announced plans to construct a power station similar to Andasol. It is expected to go online in 2012.
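The capacity-factor figures translate directly into annual energy. A sketch using the text's 50 MW rating and the 25% versus 70% capacity factors:

```python
# Annual output of a 50 MW solar plant with and without thermal storage,
# using the capacity factors quoted in the text (25% vs. 70%).

HOURS_PER_YEAR = 8760

def annual_output_mwh(rated_mw, capacity_factor):
    """Energy generated per year, in MWh, at the given capacity factor."""
    return rated_mw * HOURS_PER_YEAR * capacity_factor

if __name__ == "__main__":
    without_storage = annual_output_mwh(50, 0.25)
    with_storage = annual_output_mwh(50, 0.70)
    print(without_storage, with_storage, "MWh/yr")
    print(f"{with_storage / without_storage:.1f}x more annual energy")
```

The same 50 MW of collectors thus yields 2.8 times the annual energy once storage lets the plant run into the evening.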
Sodium-sulfur batteries work much the same way as the lead-acid battery that starts your car; both use chemical reactions to store and produce electricity. The difference lies in the materials used. Lead-acid batteries contain a lead plate and a lead dioxide plate (the electrodes) in a bath of sulfuric acid (the electrolyte). A reaction between the lead and the acid creates the electric current. Lead-acid batteries are simple and reliable, but they are impractical to use on wind farms because of the amount of space and power electronics they would require.
Sodium-sulfur batteries, which use molten sodium and sulfur as electrodes and a solid ceramic electrolyte, have a higher energy density. “Lead-acid batteries are cheaper,” Peek says. “But you can get the same amount of energy in a smaller amount of space with sodium-sulfur—and that’s important, because real estate costs money too.” Sodium-sulfur batteries can also be charged up to the maximum and discharged completely, which makes them more efficient. And they last about 20 years, versus three to five years for lead-acid.
Some U.S. utility companies, including Xcel Energy, have installed small-scale combinations of wind farms and sodium-sulfur batteries. (American Electric Power’s is not yet operational.) Excess electricity from the wind farms can be stored in the batteries and fed into the system later, when wind is low and demand is high. Each battery system, which is roughly the size of a semitrailer, can deliver about one megawatt of power for six to eight hours. The downside, again, is cost, which is high in part because there are no American companies making sodium-sulfur batteries; the only manufacturers are in Japan.
Zinc bromide and vanadium redox flow batteries are other promising technologies. Although not as far along in development as sodium-sulfur, they may be easier to scale up. Vanadium batteries may also charge and discharge more quickly than sodium-sulfur, so they might be better suited to smoothing out power fluctuations caused by rapidly changing weather.
Hydrogen-based energy storage looks great on paper: Use electricity to split hydrogen out of water, then convert the hydrogen back into electricity in a fuel cell when needed. Alas, the underlying technology is expensive and complicated, but MIT chemist Daniel Nocera may have found a better way. His hydrogen-ion-creating system uses an indium tin oxide electrode and a container of water with cobalt and potassium phosphate mixed in. Put the electrode in the water and add voltage. Cobalt, potassium, and phosphate migrate to the electrode, forming a catalyst that begins splitting water molecules into oxygen gas and hydrogen ions. Unlike most existing systems, the materials are fairly inexpensive, and the catalyst renews itself so it lasts a long time.
Nocera is still seeking a cheap way to convert hydrogen ions into hydrogen gas and an efficient way to get electricity from photovoltaic panels to the catalyst. But he thinks his approach will help other pieces of the hydrogen infrastructure fall into place. “The discovery opens doors we haven’t been able to walk through before,” Nocera says. “I don’t think this will be as hard.” | <urn:uuid:724c39fa-2684-4023-9cec-f24215c78309> | 3.671875 | 1,588 | Knowledge Article | Science & Tech. | 40.00327 |
Linyphiidae is a family of spiders, including more than 4,300 described species in 578 genera worldwide. This makes Linyphiidae the second largest family of spiders after the Salticidae. New species are still being discovered throughout the world, and the family is poorly known. Because of the difficulty in identifying such tiny spiders, there are regular changes in taxonomy as species are combined or divided.
Spiders in this family are commonly known as sheet weavers (from the shape of their webs), or money spiders (in the United Kingdom, Ireland, New Zealand, and in Portugal, from the superstition that if such a spider is seen running on you, it has come to spin you new clothes, meaning financial good fortune).
Common genera include Neriene, Lepthyphantes, Erigone, Eperigone, Bathyphantes, Troglohyphantes, the monotypic genus Tennesseellum and many others. These are among the most abundant spiders in the temperate regions, although many are also found in the tropics. The generally larger bodied members of the subfamily Linyphiinae are commonly found in classic bowl and doily webs or filmy domes. The usually tiny members of the Erigoninae are builders of tiny sheet webs. These tiny spiders (usually 3 mm or less) commonly balloon even as adults and may be very numerous in a given area on one day, only to disappear on the next. Some males of the erigonines are very strange, with their eyes set up on mounds or turrets. This reaches an extreme in some members of the large genus Walckenaeria, where several of the male's eyes are placed on a stalk taller than the carapace.
A few spiders in this family include:
- Bowl and doily spider, Frontinella communis
- Filmy dome spider, Neriene radiata
- Blacktailed red sheetweaver, Florinda coccinea
Spiders of this family occur nearly worldwide. In Norway, many species have been found walking on snow at temperatures down to −7 °C.
- Hormiga, G. (1998). The spider genus Napometa (Araneae, Araneoidea, Linyphiidae). Journal of Arachnology 26: 125-132.
- Hormiga, G. (2000). Higher Level Phylogenetics of Erigonine Spiders (Araneae, Linyphiidae, Erigoninae). Smithsonian Contributions to Zoology 609 (160 pages).
- Bosselaers, J. & Henderickx, H. (2002). A new Savignia from Cretan caves (Araneae: Linyphiidae). Zootaxa 109: 1-8.
- Hågvar, S. & Aakra, K. (2006). Spiders active on snow in Southern Norway. Norwegian Journal of Entomology 53: 71-82.
- Platnick, Norman I. (2007). The World Spider Catalog, version 8.0. American Museum of Natural History.
Fractional Quantum Hall Effect
In high mobility semiconductor heterojunctions the integer
quantum Hall effect (IQHE) plateaux are much narrower than for lower
mobility samples. Between these narrow IQHE plateaux, more plateaux are seen at fractional
filling factors, especially 1/3 and 2/3. This is the fractional quantum
Hall effect (FQHE) whose discovery in 1982 was completely
unexpected. In 1998
the Nobel Prize in Physics was awarded to Dan Tsui and Horst Stormer,
the experimentalists who first observed the FQHE, jointly with Robert Laughlin
who succeeded in explaining the result in terms of
new quantum states of matter.
The figure shows the fractional quantum Hall effect in a GaAs-GaAlAs heterojunction, recorded at 30 mK. Also included is the diagonal component of resistivity, which shows regions of zero resistance corresponding to each FQHE plateau.
The principal series of fractions that have been seen are listed below. They generally get weaker going from left to right and down the page:
(The fractional quantum Hall effect (FQHE) is concerned centrally with the Landau level filling factor. This is usually written as the Greek letter nu, or v due to the limitations of HTML.)
1/3, 2/5, 3/7, 4/9, 5/11, 6/13, 7/15...
2/3, 3/5, 4/7, 5/9, 6/11, 7/13...
5/3, 8/5, 11/7, 14/9...
4/3, 7/5, 10/7, 13/9...
1/5, 2/9, 3/13...
Explanation of the Fractional Quantum Hall Effect
Just as in the IQHE, FQHE plateaux are formed when the Fermi energy lies
in a gap of the density of states. The difference is the origin of the
energy gaps. While in the integer effect gaps are due to magnetic quantisation
of the single particle motion, in the fractional effect the gaps arise
from collective motion of all the electrons in the system.
For the state at filling factor 1/3 Laughlin found
a many body wavefunction with a lower energy than the single particle energy.
This can also be adapted to any fraction v=1/(2m+1), but
the energy difference is smaller at higher m and hence the fractions
become weaker along the series 1/3, 1/5, 1/7....
All tests of Laughlin's wavefunction have shown it to be correct. The
difficulty that arises is in accounting for all the other fractions at
v=p/q where p>1 and simple wavefunctions cannot be
written down. It is also necessary to explain why q is always odd.
The original explanation, developed by Haldane and Halperin, used a
hierarchical model. Quasi-electrons or quasi-holes
excited out of the Laughlin ground state would condense into higher order
fractions, known as daughter states e.g. starting from the 1/3 parent state
addition of quasi-electrons leads to 2/5 and quasi-holes leads to 2/7.
Then quasi-particles are excited out of these daughter states which condense
again into still more daughter states..... and so on down the hierarchy.
There are several problems or unsatisfactory features within the hierarchical model: it does not explain which daughter state (quasi-electron or -hole) should be the stronger; after a few layers of the hierarchy there will be more quasi-particles than there were electrons in the original system; between fractions the system is not well defined; and the quasi-particles carry fractional charge.
More recently a model of composite fermions (CFs) has been introduced. A composite fermion consists of an electron (or hole) bound to an even number of magnetic flux quanta. Formation of these CFs accounts for all the many body interactions, so only single particle effects remain. The model exploits the similarities observed in measurements of the IQHE and FQHE to map the latter onto the former. Thus the fractional QHE of electrons in an external magnetic field now becomes the integer QHE of the new composite fermions in an effective magnetic field. The CFs have integer charge, just like electrons, but because they move in an effective magnetic field they appear to have a fractional charge.
The composite fermion picture correctly predicts all the observed fractions
including their relative intensities and the order they appear in as sample
quality increases or temperature decreases. It also shows v=1/2, where the effective field for the CFs is zero, to be a special state.
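The series tabulated earlier are exactly the sequences the composite fermion model predicts, v = p/(2mp ± 1), possibly shifted by an integer, and every denominator comes out odd automatically. A short sketch (the function name and parameterisation are mine, not the page's):

```python
from fractions import Fraction

def jain_sequence(m, sign, p_values, offset=0):
    """Filling factors nu = offset + p/(2*m*p + sign)."""
    return [offset + Fraction(p, 2 * m * p + sign) for p in p_values]

# The principal series from the table above:
print(jain_sequence(1, +1, range(1, 8)))            # 1/3, 2/5, 3/7, 4/9, 5/11, 6/13, 7/15
print(jain_sequence(1, -1, range(2, 8)))            # 2/3, 3/5, 4/7, 5/9, 6/11, 7/13
print(jain_sequence(2, +1, range(1, 4)))            # 1/5, 2/9, 3/13
print(jain_sequence(1, +1, range(1, 5), offset=1))  # 4/3, 7/5, 10/7, 13/9
print(jain_sequence(1, -1, range(2, 6), offset=1))  # 5/3, 8/5, 11/7, 14/9
```

Since 2mp is always even, the denominator 2mp ± 1 is always odd, which is the observed odd-denominator rule.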
More details can be found in the following articles:
- T. Chakraborty and P. Pietiläinen, The Quantum Hall Effects: Fractional and Integer, Springer Series in Solid-State Sciences 85 (Berlin: Springer).
- D.C. Tsui, H.L. Stormer and A.C. Gossard, Phys. Rev. Lett. 48, 1559 (1982).
- R.B. Laughlin, Phys. Rev. Lett. 50, 1395 (1983).
- F.D.M. Haldane, Phys. Rev. Lett. 51, 605 (1983); B.I. Halperin, Phys. Rev. Lett. 52, 1583 & 2390 (1984).
Last updated 10/02/97 by David R Leadley.
All rights reserved. Text and diagrams from this page may only be
used for non-profit academic exercises and then only when credited
to D.R. Leadley, Warwick University 1997. | <urn:uuid:21d3b7cb-7236-4277-aa69-8f026c78b9f9> | 2.828125 | 1,237 | Knowledge Article | Science & Tech. | 66.800564 |
Much of temperature change of 20th century not correlated with CO2
The rate of temperature rise during the early years of the 20th century was comparable to the current rate of rise, but the level of human contributions to the CO2 was much smaller. Then in the post-war industrial boom when CO2 emissions were higher, the temperature actually dropped significantly.
Current climatologists would see this as no problem for their modeling, since it is projected that the effects of human activity did not produce significant divergence from natural forcings until the post-1970 period. The drop in temperature in the 1940-1965 period is partially modeled in terms of the concentration of sulphate aerosols; those aerosols were reduced after that period.
Collins, et al. | <urn:uuid:447ca357-7ecf-4650-9f0e-dc56b63856c8> | 2.96875 | 151 | Academic Writing | Science & Tech. | 30.5336 |
Life Around the Iceberg
ICEBERG A43K, SOUTHERN OCEAN– Many birds, seals and whales are living around Iceberg A43K. We saw several of them as we approached the iceberg two days ago. In comparison, we had fewer sightings at SS-1, the smaller iceberg.
Jake Ellena and Ken Smith are our bird surveyors. They count birds in flight while the ship is transiting between stations or during iceberg circumnavigation. Snow petrels are the most abundant of them; they are attracted to the iceberg, feeding on the zooplankton congregating within a few miles of the iceberg.
Our sampling targets the study of the wildlife’s food source and the concept that birds and marine mammals are found in association with icebergs thanks to the physical and chemical modification of the ocean by the presence of the bergs. The icebergs enrich the water, promoting phytoplankton and zooplankton growth.
Ron Kaufmann and Bruce Robison have been monitoring some of this growth by using large nets to sample the macrozooplankton and micronekton around the iceberg. Salps (Salpa thompsoni) have dominated most of the samples at various distances from the berg. Many of these salps had highly colored guts, perhaps indicative of recent feeding, and representative salps have been analyzed for gut contents and pigments. Small phytoplankton cells, abundant at this time of the year, are preferred by salps.
Conspicuously rare in the samples have been large Antarctic Krill (Euphausia superba), though large numbers of young or juvenile krill have been collected. Krill typically feed on diatoms which are not abundant in winter.
The nets also contain large numbers of vertically migrating mesopelagic fishes as well as hyperiid amphipods, small krill and occasional large medusae. | <urn:uuid:51b37378-72d4-4923-b4cd-77c7fb9999db> | 3.515625 | 405 | Knowledge Article | Science & Tech. | 36.646323 |
I know (from kinematics) that for an object moving linearly with constant acceleration and without air resistance the following equations can be used to determine v (velocity) or x (position of the object) at any time t:

v(t) = v0 + a·t
x(t) = x0 + v0·t + (1/2)·a·t²

Where x0 is the position of the object at the start of accelerated movement, v0 is the velocity at the start of the motion and a is the acceleration.

Now if I want to add air drag, say we take the formula for air drag as:

D = (1/2)·C·ρ·A·v²

C is a drag coefficient, A is the reference area, ρ is the air density and v the velocity. Since D is a force, the air drag results in an acceleration of its own. In other words: the resulting acceleration is a − D/m. However, D depends on v, which itself depends on the acceleration, and that's where my problem is: I simply can't wrap my mind around that.
How would I go about getting the equations for x and v including air drag (and don't forget there's acceleration too)?
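Since D depends on v, the equation of motion dv/dt = a − (k/m)·v·|v| with k = (1/2)·C·ρ·A is a nonlinear ODE. For motion from rest under a constant applied acceleration there is a closed form, v(t) = v_t·tanh(a·t/v_t) with terminal velocity v_t = √(m·a/k), but in general the standard approach is to integrate numerically in small time steps. A minimal sketch (the parameter values are illustrative defaults, not taken from the question):

```python
# Numerically integrating 1-D motion with quadratic drag (semi-implicit Euler).
def simulate(a_applied, m, C=0.47, rho=1.225, A=0.01, x0=0.0, v0=0.0,
             t_end=10.0, dt=1e-3):
    k = 0.5 * C * rho * A              # so that D = k * v**2
    x, v = x0, v0
    for _ in range(int(t_end / dt)):
        drag_acc = k * v * abs(v) / m  # v*|v| keeps drag opposing the motion
        v += (a_applied - drag_acc) * dt
        x += v * dt
    return x, v
```

Run long enough from rest, the velocity settles at the terminal value √(m·a/k), which is a handy correctness check on any such integrator.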
A memory leak occurs when a program allocates memory for a data structure and then never releases that memory and does not reuse that memory. The next time the program needs the data structure, it allocates more memory again. Over time, the available memory appears to be shrinking and this is called a "memory leak." Memory leaks are overcome by ensuring the memory allocated for a data structure is deallocated when you are done using that data structure.
It used to be that if you did not close a cursor, that memory stayed allocated and unusable to anything else. This is no longer the case with Oracle. If you open a cursor and forget to close it, Oracle will close the cursor for you when the PL/SQL block ends execution. This assumes your PL/SQL block does reach an end.
While Oracle does automatically close cursors, it is still a very good idea to close them in your code. You should close your cursor when you are done looping through the cursor's contents. This way, you free up some memory for your next cursor. Plus, closing cursors is good programming style.
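The pattern described (open, fetch in a loop, then close as soon as the looping is done) looks like this in PL/SQL. This is a sketch; the emp table and ename column are illustrative names, not from the article:

```sql
DECLARE
  CURSOR c_emp IS
    SELECT ename FROM emp;            -- emp and ename are illustrative names
  v_name emp.ename%TYPE;
BEGIN
  OPEN c_emp;
  LOOP
    FETCH c_emp INTO v_name;
    EXIT WHEN c_emp%NOTFOUND;
    DBMS_OUTPUT.PUT_LINE(v_name);
  END LOOP;
  CLOSE c_emp;                        -- release the cursor's memory right away
END;
/
```

Oracle would eventually close c_emp when the block ends, but the explicit CLOSE frees the memory at the earliest possible point.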
This was first published in March 2006 | <urn:uuid:9665338a-da89-476b-aea6-49d0fd791d31> | 3.28125 | 233 | Knowledge Article | Software Dev. | 52.020333 |
Now we’ve said a lot about the category of topological spaces and continuous maps between them. In particular we’ve seen that it’s complete and cocomplete — it has all limits and colimits. But we’ve still yet to see any good examples of topological spaces. That will change soon.
First, though, I want to point out something we can do with these limits: we can define topological groups. Specifically, a topological group is a group object in the category of topological spaces. That is, it's a topological space G along with continuous functions m: G×G→G (multiplication), i: G→G (inversion), and e: ∗→G (picking out the identity element) that satisfy the usual commutative diagrams. A morphism of topological groups is then just like a homomorphism of groups, but given by a continuous function between the underlying topological spaces.
Alternately we can think of it as a group to which we’ve added a topology so that the group operations are continuous. But as we’ve seen, a topological structure feels a bit floppier than a group structure, so it’s not really as easy to think of a “topology object” in a category. So we’ll start with and take group objects in there.
Now it turns out that every topological group is a uniform space in at least two ways. We can declare the set E_U = {(x, y) : xy⁻¹ ∈ U} to be an entourage for any neighborhood U of the identity, along with any subset of G×G containing such an E_U. Since any neighborhood of the identity contains the identity itself, each E_U must contain the diagonal {(x, x)}. The intersection E_U ∩ E_V is the entourage E_{U∩V}, and so this collection is closed under intersections.
To see that the reverse E_U⁻¹ = {(y, x) : (x, y) ∈ E_U} is an entourage, we must consider the inversion map i. Any neighborhood U of the identity contains an open set V. Then the preimage i⁻¹(V) is just the "reflection" V⁻¹ that sends each element of V to its inverse, which must thus be open. The reflection U⁻¹ of U contains the reflection of V, and is thus a neighborhood of the identity. Then E_U⁻¹ is the same as E_{U⁻¹}.
Now, why must there be a "half-size" entourage? We'll need to construct a half-size neighborhood of the identity. That is, a neighborhood V so that the product of any two elements of V lands in the neighborhood U. Then (x, y) and (y, z) in E_V means that xy⁻¹ and yz⁻¹ are in V, and thus their product xz⁻¹ = (xy⁻¹)(yz⁻¹) is in U, so (x, z) ∈ E_U.
To construct this neighborhood let's start by assuming U is open, passing to an open subset of our neighborhood if necessary. Then the preimage m⁻¹(U) is open in G×G by the continuity of multiplication, and U×G and G×U will be open by the way we built the product topology. The intersection of these will be the collection of pairs (x, y) with both x and y in U, and whose product xy also lands in U, and will be open as a finite intersection of open sets. We can project this set of pairs onto its first or second factor, and take the intersection of these two projections to get the open set V which is our half-size neighborhood.
The uniform structure we have constructed is called the right uniformity on G because, for any element a ∈ G, the function from G to itself defined by right multiplication by a, namely x ↦ xa, is uniformly continuous. Indeed, right multiplication sends each entourage E_U to itself, since the pair (xa, ya) satisfies (xa)(ya)⁻¹ = xy⁻¹. Left multiplication, on the other hand, sends a pair (x, y) in E_U to (ax, ay), for which we have (ax)(ay)⁻¹ = a(xy⁻¹)a⁻¹. Thus to an entourage E_U we can pick the entourage E_{a⁻¹Ua}, which left multiplication by a carries into E_U. So left multiplication is also uniformly continuous, but not quite as easily. We could go through the same procedure to define the left uniformity, which again swaps the roles of left and right multiplication. Note that the left and right uniformities need not be the same collection of entourages, but they define the same topology.
Still, this doesn’t tell us how to get our hands on any topological groups to begin with, so here’s a way to do just that: start with an ordered group. That is, a set with the structures of both a group and a partial order so that if then and . Using this translation invariance we can determine the order just by knowing which elements lie above the identity, for then if and only if . The elements with form what we call the positive cone .
We can now use this to define a topology by declaring the positive cone to be closed. Then we'd like our translations to be homeomorphisms, so for each a the set of x with a ≤ x must also be closed. Similarly we want inversion to be a homeomorphism, and since it reverses the order we find that for each a the set of x with x ≤ a is closed. And then we can use the complements of all these as a subbase to generate a topology. This topology will in fact be uniform by everything we've done above.
And, finally, one specific example. The field of rational numbers is an ordered group if we forget the multiplication. And thus we get a uniform topology on it, generated by the subbase of half-infinite sets. Specifically, for each rational number a the set of all x with x > a and the set of all x with x < a are declared open, and they generate the topology. A neighborhood of a will be any subset which contains one of the form {x : a − δ < x < a + δ} for some positive rational δ. Since the group is abelian, both the left and the right uniformities coincide. For each positive rational number δ we have an entourage E_δ = {(x, y) : |x − y| < δ}. That is, a pair of rational numbers is in E_δ if they differ by less than δ.
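Writing E_δ for the entourage of pairs of rationals differing by less than δ (notation mine, not the original post's), the half-size axiom from earlier can be checked directly with the triangle inequality:

```latex
(x,y),\,(y,z)\in E_{\delta/2}
\;\Longrightarrow\;
|x-z| \le |x-y| + |y-z| < \tfrac{\delta}{2} + \tfrac{\delta}{2} = \delta
\;\Longrightarrow\;
(x,z)\in E_{\delta}.
```

So E_{δ/2} is a half-size entourage for E_δ, exactly as the general group-theoretic construction promised.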
The majority of the carbon stored in peatlands is in the saturated peat soil that has been sequestered over millennia. In the (sub)polar zone, peatlands contain on average 3.5 times more carbon per hectare than the above-ground ecosystems on mineral soil; in the boreal zone they contain 7 times more and in the humid tropics over 10 times more carbon.
Growing source of greenhouse gas (GHG) emissions
Large areas of organic wetland (peat) soils are drained for agriculture, forestry and peat extraction all over the world. As a result, the organic carbon that is normally underwater is suddenly exposed to the air, where it decomposes and emits carbon dioxide (CO2). Peat fires, such as those that take place in Southeast Asia every year and also in Russia, release huge amounts of CO2 as well. Altogether, global CO2 emissions from peatlands amount to at least 2,000 million tonnes annually, equivalent to 6% of global fossil-fuel emissions.
Another growing source of GHG emissions from peatlands is methane. The scientific database for methane effluxes from peatlands is much larger than that for CO2 or N2O. The following report on Methane emission from peat soils and article on Towards developing IPCC methane 'emission factor' for peatlands (organic soils) give more insights on this matter.
While Indonesia currently has the highest CO2 emissions from peatlands due to logging and drainage (~900 Mtons per year), peatland degradation is a global problem. Download The Global Peatland CO2 Picture for a worldwide inventory of CO2 emissions.
Impact of climate change on peatlands
Another cause for concern is that climate change poses an enormous challenge to peatlands and CO2 emissions. For instance, warmer summer weather threatens to thaw the large peatland (permafrost) areas of Canada and Russia, causing them to decompose. There is also a risk that fossilised methane, stored under the permafrost areas, could be released. | <urn:uuid:1df52d5b-f432-4d60-8e09-19ae035d462b> | 4.28125 | 421 | Knowledge Article | Science & Tech. | 35.891049 |
GUI ToolKits are libraries written to ease the task of writing GUI applications by abstracting the drudgework away from the programmer. As an added bonus, they offer a consistent look between applications that use the same one.
In a GUI application written without a ToolKit the actual program is a long loop that continuously calls a function to poll incoming events and then feeds them into a huge switch construct that decides how to react to each event. Every single facet of the application's behaviour has to be reflected here. Needless to say, writing non-trivial applications this way is tiresome at best.
A GUI ToolKit's main responsibility is to abstract away this event loop in a way that it can be maintained and extended easily. To this end, the different "widgets" (such as buttons, labels, menus, etc.) of a GUI are treated as black boxes. An EventModel is then specified, which defines how events "propagate" across these black boxes. The EventModel is a large factor in the design of an application's architecture. Different ToolKits tend to use very different EventModels, which can cause a great deal of confusion.
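The contrast described above can be sketched as follows (all names are illustrative; no real toolkit is being modeled):

```python
# Without a toolkit: the program IS the event loop, and every behaviour
# of the application has to appear somewhere in one big dispatch.
def raw_event_loop(poll_event, handle_click, handle_key):
    while True:
        event = poll_event()            # blocking poll for the next event
        if event["kind"] == "quit":
            break
        elif event["kind"] == "click":
            handle_click(event)         # ...one branch per event/widget pair
        elif event["kind"] == "keypress":
            handle_key(event)

# With a toolkit: widgets are black boxes; the toolkit owns the loop and
# its event model decides which widget's handlers an event propagates to.
class Button:
    def __init__(self, label):
        self.label = label
        self._handlers = []

    def on_click(self, handler):        # application code registers callbacks
        self._handlers.append(handler)

    def _dispatch(self, event):         # called from inside the toolkit's loop
        for handler in self._handlers:
            handler(event)
```

In the second style the application never sees the loop at all; it only describes what each widget should do, and the toolkit routes events to it.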
Image of the Day: Most Bizarre Fish You’ve Ever Seen?
A diver swims with a huge ocean sunfish, or Mola mola, off the coast of San Diego. They are the largest of the bony fish and often get mistaken for sharks due to their dorsal fins. They feed on jellyfish and plankton and are curious about humans, as seen in the photo. One threat to molas is drift nets, which they often get caught in, and garbage such as plastic bags that they mistake for jellyfish, their favorite food.
Climate change is also a threat as it is to all sea life. According to the Center for Ocean Solutions, some ocean areas have acidified to levels known to cause harm to ocean life. Also decreasing pH levels from CO2 acidosis are responsible for shifting the ecological balance of plankton and other bottom dwelling species. “Many Pacific Ocean areas may become uninhabitable due to sea level rise, coastal inundation, shifting rainfall, collapse of fresh water supplies, or changes in the migration patterns of food species,” says the Center for Ocean Solutions.
Credit: Daniel Botelho/Barcroft Media | <urn:uuid:2f79c506-1373-4e8e-b313-9f58af925e73> | 3.21875 | 241 | Truncated | Science & Tech. | 45.768011 |
CHANNEL MODIFICATIONS OF HAWAIIAN STREAMS
In their 1978 report, Timbol and Maciolek identified channel modification as a significant factor contributing to the degradation of stream biological quality in the Hawaiian Islands. Six types of channel modifications were distinguished.

Timbol and Maciolek wrote, "Continued channel modification in Hawaii is certain, as evidenced by current channelization proposals for Kohoma and Iao streams on Maui and Makaha stream on Oahu. Presently on Oahu, a dam is being constructed across Kamooalii tributary of Kaneohe Stream. Environmental commentary is difficult because of the lack of definitive information on the effects of channelization either on total stream ecology or on individual native species. Past commentaries have been based on generalized information (mainland U.S. and limited ecological data on Hawaiian streams) and a few specific observations on local channelization effects. Concrete-lined flat-bottomed channels obviously provide no habitat for native fishes or crustaceans, and expose water to excessive insolation. The effects of such lined channels on the quality and quantity of fauna upstream (i.e., effects on migration) or on the downstream environment (e.g., heating) are unknown. Such are examples of the principal informational needs.

"This report concerns that portion of the project involving a one-year (August 1975 - September 1976) statewide, exhaustive inventory of perennial streams with channel modifications, including a general survey of habitat factors and macrofauna. It includes the islands of Kauai, Oahu, Molokai, Maui, and Hawaii. Niihau and Lanai, the remaining two inhabited islands in the State, were not surveyed. Niihau is small, relatively arid, and under private ownership that prohibits entry of non-residents. Lanai apparently has only one stream, and it is located in an area of difficult access. It is assumed that this lone stream is not channelized.
Anoxic refers to a region having no oxygen present, with nitrite and nitrate present and taking over the function of the oxidiser (with the oxidation-reduction potential between +50 and -50 mV).
Within a reef system, denitrification (conversion of nitrate to nitrogen gas) occurs in the anoxic regions (includes interior regions of liverock, bottom layers of sand bed and inside a denitrator i.e. surfaces not directly in contact with the water).
The presence of oxygen is referred to as aerobic and the absence anaerobic.
Oxygen will penetrate through approximately 0.5 millimeters of a stationary biofilm.
- ↑ (http://dx.doi.org/10.1016/j.bej.2003.10.003): Hibiya, K., Nagai, J., Tsuneda, S., and Hirata, A., Simple prediction of oxygen penetration depth in biofilms for wastewater treatment, Biochemical Engineering Journal, 19(1), (2004), 61-68. | <urn:uuid:f64ecb83-0b53-494d-8998-665b41fa6cb3> | 2.859375 | 213 | Knowledge Article | Science & Tech. | 53.609825 |
The vfork() function creates a new process as does
fork(), except that the child process shares the
same address space as the calling process. Execution of the calling
process is blocked until the child process calls one of the
exec() family of functions, or calls _exit().
Because parent and child share the address space, you must not
return from the function that called vfork(); doing so can
corrupt the parent's stack.
You should use vfork()
when your child process simply modifies the process state and then
calls one of the exec() functions. Because of the shared
address space, you must avoid doing anything in the child that
impacts the parent when it resumes execution. For example, if
your exec() call fails, you must call
_exit(), and not exit(),
because calling exit() would close standard I/O
stream buffers for the parent as well as the child.
Handlers registered with pthread_atfork() are not
invoked when vfork() is called, because doing so
would adversely affect the shared address space.
If successful, vfork()
returns 0 in the child process, and returns the process ID of the
child process to the parent process. On failure, it returns -1
and sets errno to one of the following values:
EAGAIN
The system lacked the necessary resources to create another process, or the system-imposed limit on the total number of processes under execution system-wide would be exceeded.
UNIX 98, with exceptions.
The vfork() function
provides an efficient mechanism to create a new process, in those
instances where you need to manipulate process state (for example,
closing file descriptors) prior calling one of the exec()
family of functions. vfork() does not create a new Windows
process context, however. Hence calling getpid()
in the child of a vfork() operation returns the same
value as in the parent. The process ID returned by vfork()
is actually the process ID of the exec()ed child. Refer to
Process Management in the
Windows Concepts chapter
of the MKS Toolkit UNIX to Windows Porting Guide for a
detailed discussion of the process model.
MKS Toolkit for Professional Developers
MKS Toolkit for Enterprise Developers
MKS Toolkit for Enterprise Developers 64-Bit Edition
- _exit(), execl(), execle(), execlp(), execlpe(), execv(), execve(), execvp(), execvpe(), exit(), fork(), getpid(), pthread_atfork()
MKS Toolkit 9.5 Documentation Build 3. | <urn:uuid:e5c8de64-6772-4306-8ffd-3b3167e6ea74> | 3.171875 | 558 | Documentation | Software Dev. | 43.726752 |
Name: Evan D.
Date: March 2001
Why is the south pole colder?
I answered this question for a Newton reader just a few minutes ago.

Basically, because Antarctica is a continent. Antarctica has much higher elevations because of this and somewhat different meteorology than does the Arctic. The area inside the Arctic Circle is mostly seawater and overlying ice; this surface stays warmer than the land areas of Antarctica, resulting in slightly warmer temperatures in the Arctic than in the Antarctic.
David R. Cook
Atmospheric Research Section
Environmental Research Division
Argonne National Laboratory
Click here to return to the Environmental and Earth Science Archives
Update: June 2012 | <urn:uuid:dcdd810a-0712-42d2-a44d-9a7ada6df533> | 3.5 | 145 | Audio Transcript | Science & Tech. | 25.735897 |
Synonyms: Collisella pelta
|Left: Top view of Lottia pelta with barnacle.
Right: Interior of Lottia pelta shell.
|Photo by: Ryan Lunsford 2002
Collected at Rosario
How to Distinguish from Similar Species: Lottia pelta can be easily confused with Tectura scutum. The distinguishing difference is the relative height of L. pelta as compared to the flat shell of T. scutum. Molecular evidence being prepared in 2008 (Brian Simison's dissertation, Berkeley, and Eernisse et al in prep.) suggests that from Monterey Bay southward the limpets identified as Lottia pelta are actually a sibling species.
Geographical Range: Alaska to Baja California (current work by Brian Simison and Eernisse et al. at Hopkins Marine Station suggests that south of Monterey Bay the species is not L. pelta but an as-yet undescribed species).
Depth Range: Low to middle intertidal zones
Habitat: L. pelta is generally found in more protected locations on the rocks.
Biology/Natural History: The Lottia pelta diet consists of a large variety of both red and brown algae including Endocladia, Iridaea, Egregia, and Postelsia. However, it is most often associated with brown algae (Pelvetia and Laminaria in addition to those mentioned). These animals feed mainly at high tide, but do not feed at every high tide. Studies have shown that there is little competition for food between species of limpets. Perhaps this is due to slightly different enzymes and diets. Actually, even the diets of two individuals of the same species may vary according to the algae most available. It is in part this variability that accounts for the large variety of shell patterns and coloration. L. pelta has a definite and largely successful defense response to three species of predatory sea stars (Pisaster ochraceus, Leptasterias hexactis, and Evasterias troschelii). The limpet “runs” away by lifting its shell off the substrate and crawling at top speed.
This species is highly variable both in color and in genetics (Begovic, 2004). It may actually be a group of related species. Individuals on rocks tend to be brown to brownish-green and often have ribs; those on kelp are usually smooth and may be uniformly dark or variegated in color (see photos below). Individuals on Mytilus mussels tend to be uniformly dark. If individuals move between these substrates, the pattern in the new growth of their shell changes to match that typical of the species on their new substrate. Evidence suggests that individuals not colored typically for the substrate they are on may be subject to higher predation by black oystercatchers (Sorensen and Lindberg, 1991).
Kozloff, 1987, 1996. Marine Invertebrates of the Pacific Northwest. University of Washington Press
Morris et al. 1992. Intertidal Invertebrates of California. Stanford Univ. Press.
Niesen, 1997. Marine Life of the Pacific Northwest. Gulf Publishing Company, Houston, TX.
Begovic, E., 2004. Population structuring mechanisms and diversification patterns in the Patellogastropoda of the North Pacific. Ph.D. dissertation, Dept. of Integrative Biology, University of California, Berkeley. 291 pp.
Sorensen, Fred E. and David R. Lindberg, 1991. Preferential predation by American black oystercatchers on transitional ecophenotypes of the limpet Lottia pelta (Rathke). Journal of Experimental Marine Biology and Ecology 154: 123-136
A student study at Rosario by Brittany Pick and Bethany Reiswig (2007) found that the black oystercatchers Haematopus bachmani nesting on Northwest Island are being selective in the limpets they capture. Although Lottia digitalis was the most common limpet found in intertidal transects on Northwest Island and were the species found at the highest tide levels so they should have been more available than any other species, they were not the most abundant in shell middens found near the oystercatcher nesting and feeding site. The oystercatchers selected Lottia digitalis less often than expected, and those they did select were near the maximum size range found in the intertidal. The oystercatchers seemed to prefer other, larger limpet species such as Tectura persona, Tectura scutum, and Lottia pelta, all of which were found by chi-squared analysis to be in significantly higher numbers in the midden than expected from the intertidal abundance. Whether this selection by the oystercatchers is due to a specific selection of species or simply a selection of the largest individuals present in the intertidal is not known.
The photos above and below show limpets keying to Lottia pelta, found on Egregia kelp on the beach at San Simeon, CA August 2010. Photos by Dave Cowles
Note the mottled color on this individual, while the smaller, partly obscured individual to the left and another in the photo above are nearly black on the outside of the shell. Outside coloration in this species is highly variable, reflecting the genetic variability within the species.
The foot and viscera of the species are white. Photo by Dave Cowles, San Simeon Beach, CA, August 2010
My thanks to Ryan P. Kelly for updated information on this page.
WebReference.com - Part 1 of chapter 5 from Beginning Java 2 SDK 1.4 Edition, Wrox Press Ltd (1/8)
Beginning Java 2 SDK 1.4 Edition
What is a Class?
As you saw in Chapter 1, a class is a prescription for a particular kind of object--it defines a new type. We can use the class definition to create objects of that class type, that is, to create objects that incorporate all the components specified as belonging to that class.
In case that's too abstract, look back to the last chapter, where we used the String class. This is a comprehensive definition for a string object, with all the operations you are likely to need built in. This makes String objects indispensable and string handling within a program easy.
The String class lies towards one end of the spectrum in terms of class complexity. It is intended to be usable in any program, and includes facilities and capabilities for operating on String objects to cover virtually all circumstances in which you are likely to use strings. In most cases your own classes won't need to be this elaborate.
You will typically be defining a class to suit your particular application. A very simple class for a Plane or a Person may well represent objects that are potentially very complicated, if that fulfils your needs. A Person object might just contain a name, address, and phone number, for example, if you are just implementing an address book. In another context, in a payroll program perhaps, you might need to represent a Person with a whole host of properties, such as age, marital status, length of service, job code, pay rate, and so on. It all depends on what you intend to do with objects of your class.
In essence a class definition is very simple. There are just two kinds of things that you can include in a class definition:
Fields: These are variables that store data items that typically differentiate one object of the class from another. They are also referred to as data members of a class.
Methods: These define the operations you can perform for the class--so they determine what you can do to, or with, objects of the class. Methods typically operate on the fields--the variables of the class.
The fields in a class definition can be of any of the basic types, or they can be references to objects of any class type, including the one that you are defining.
The methods in a class definition are named, self-contained blocks of code that typically operate on the variables that appear in the class definition. Note though, that this doesn't necessarily have to be the case, as you might have guessed from the main() methods we have written in all our examples up to now.
Variables in a Class Definition
An object of a class is also referred to as an instance of that class. When you create an object, the object will contain all the variables that were included in the class definition. However, the variables in a class definition are not all the same--there are two kinds.
One kind of variable in a class is associated with each object uniquely--each instance of the class will have its own copy of each of these variables, with its own value assigned. These differentiate one object from another, giving an object its individuality--the particular name, address, and telephone number in a given Person object for instance. These are referred to as instance variables.
The other kind of class variable is associated with the class, and is shared by all objects of the class. There is only one copy of each of these kinds of variables no matter how many class objects are created, and they exist even if no objects of the class have been created. This kind of variable is referred to as a class variable because the variable belongs to the class, not to any particular object, although as we have said, all objects of the class will share it. These variables are also referred to as static fields because, as we will see, you use the keyword static when you declare them.
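A short sketch of the two kinds of variables may help. The class name and field choices below are illustrative, not taken from the book:

```java
// Sketch: instance variables vs. a class (static) variable.
class Person {
    // Instance variables: every Person object gets its own copy.
    private String name;
    private String phone;

    // Class variable: one copy shared by all Person objects;
    // it exists even before any object has been created.
    private static int count = 0;

    Person(String name, String phone) {
        this.name = name;
        this.phone = phone;
        count++;                 // update the shared counter
    }

    String getName() { return name; }

    static int getCount() { return count; }
}
```

Creating two Person objects gives each its own name and phone, while Person.getCount() returns the same shared value, 2, no matter which object you have in hand.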
Created: June 24, 2002
Revised: June 24, 2002
A minor planet is an astronomical object in direct orbit around the Sun that is neither a dominant planet nor originally classified as a comet. Minor planets can be dwarf planets, asteroids, trojans, centaurs, Kuiper belt objects, and other trans-Neptunian objects. The first minor planet discovered was Ceres in 1801 (although from the time of its discovery until 1851 it was considered to be a planet). The orbits of more than 570,000 objects have been archived at the Minor Planet Center.
The term "minor planet" has been used since the 19th century to describe these objects. The term planetoid has also been used, especially for larger (planetary) objects, such as what since 2006 the IAU has called dwarf planets. Historically, the terms asteroid, minor planet, and planetoid have been more or less synonymous, but the issue has been complicated by the discovery of numerous minor planets beyond the orbit of Jupiter and especially Neptune that are not universally considered asteroids. Minor planets seen outgassing may receive a dual classification as a comet.
Before 2006 the International Astronomical Union had officially used the term minor planet. During its 2006 meeting, the Union reclassified minor planets and comets into dwarf planets and small Solar System bodies. Objects are called dwarf planets if their self-gravity is sufficient to achieve hydrostatic equilibrium, that is, an ellipsoidal shape, with all other minor planets and comets called "small Solar System bodies". The IAU states: "the term 'minor planet' may still be used, but generally the term 'small solar system body' will be preferred." However, for purposes of numbering and naming, the traditional distinction between minor planet and comet is still followed.
Hundreds of thousands of minor planets have been discovered within the Solar System, with the 2009 rate of discovery at over 3,000 per month. Of the more than 535,000 registered minor planets, 251,651 have orbits known well enough to be assigned permanent official numbers. Of these, 16,154 have official names. As of September 2010, the lowest-numbered unnamed minor planet is (3708) 1974 FV1; but there are also some named minor planets above number 240,000.
There are various broad minor-planet populations:
- Asteroids; traditionally, most have been bodies in the inner Solar System.
- Main-belt asteroids, those following roughly circular orbits between Mars and Jupiter. These are the original and best-known group of asteroids or minor planets.
- Near-Earth asteroids, those whose orbits take them inside the orbit of Mars. Further subclassification of these, based on orbital distance, is used:
- Aten asteroids, those that have semi-major axes of less than one Earth orbit. Those Aten asteroids that have their aphelion within Earth's orbit are known as Apohele asteroids;
- Amor asteroids are those near-Earth asteroids that approach the orbit of the Earth from beyond, but do not cross it. Amor asteroids are further subdivided into four subgroups, depending on where their semimajor axis falls between Earth's orbit and the asteroid belt;
- Apollo asteroids are those asteroids with a semimajor axis greater than Earth's, while having a perihelion distance of 1.017 AU or less. Like Aten asteroids, Apollo asteroids are Earth-crossers.
- Earth trojans, asteroids sharing Earth's orbit and gravitationally locked to it. As of 2011, the only one known is 2010 TK7.
- Mars trojans, asteroids sharing Mars's orbit and gravitationally locked to it. As of 2007, eight such asteroids are known.
- Jupiter trojans, asteroids sharing Jupiter's orbit and gravitationally locked to it. Numerically they are estimated to equal the main-belt asteroids.
- Distant minor planets; an umbrella term for minor planets in the outer Solar System.
- Centaurs, bodies in the outer Solar System between Jupiter and Neptune. They have unstable orbits due to the gravitational influence of the giant planets, and therefore must have come from elsewhere, probably outside Neptune.
- Neptune trojans, bodies sharing Neptune's orbit and gravitationally locked to it. Although only a handful are known, there is evidence that Neptune trojans are more numerous than either the asteroids in the asteroid belt or the Jupiter trojans.
- Trans-Neptunian objects, bodies at or beyond the orbit of Neptune, the outermost planet.
- The Kuiper belt, objects inside an apparent population drop-off approximately 55 AU from the Sun.
- Scattered disc, objects with aphelia outside the Kuiper belt. These are thought to have been scattered by Neptune.
- Detached objects such as Sedna, with both aphelia and perihelia outside the Kuiper belt.
- The Oort Cloud, a hypothetical population thought to be the source of long-period comets that may extend out to 50,000 AU from the Sun.
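The near-Earth subdivisions listed above (Aten: semi-major axis less than one Earth orbit; Apollo: semi-major axis greater than Earth's with perihelion of 1.017 AU or less; Amor: approaching Earth's orbit from beyond without crossing it) can be sketched as a rough classifier. This is a simplification for illustration only; the formal definitions involve additional cutoffs not captured here.

```python
def classify_nea(a_au: float, q_au: float) -> str:
    """Roughly classify a near-Earth asteroid from its semi-major
    axis (a) and perihelion distance (q), both in AU."""
    if a_au < 1.0:
        return "Aten"    # semi-major axis inside one Earth orbit
    if q_au <= 1.017:
        return "Apollo"  # Earth-crossing from outside
    return "Amor"        # approaches, but does not cross, Earth's orbit
```

For example, an orbit with a = 1.47 AU and q = 0.98 AU would come out as Apollo under these rules.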
A newly discovered minor planet is given a provisional designation (such as 2002 AT4) consisting of the year of discovery and an alphanumeric code indicating the half-month of discovery and the sequence within that half-month. Once an asteroid's orbit has been confirmed, it is given a number, and later may also be given a name (e.g. 433 Eros). The formal naming convention uses parentheses around the number, but dropping the parentheses is quite common. Informally, it is common to drop the number altogether, or to drop it after the first mention when a name is repeated in running text.
Minor planets that have been given a number but not a name keep their provisional designation, e.g. (29075) 1950 DA. As modern discovery techniques are finding vast numbers of new asteroids, they are increasingly being left unnamed. The earliest discovered to be left unnamed was for a long time (3360) 1981 VA, now 3360 Syrinx; as of September 2008, this distinction is held by (3708) 1974 FV1. On rare occasions, a small object's provisional designation may become used as a name in itself: the still unnamed (15760) 1992 QB1 gave its "name" to a group of Kuiper belt objects which became known as cubewanos.
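The provisional-designation scheme described above (year, a letter for the half-month of discovery, a second letter for the order within that half-month, and an optional cycle count) can be decoded mechanically. The letter I is skipped in both positions; the helper below is an illustrative sketch of the published convention, not library code.

```python
import re

LETTERS = "ABCDEFGHJKLMNOPQRSTUVWXYZ"  # 25 letters; 'I' is skipped

def decode_provisional(desig: str):
    """Decode e.g. '2002 AT4' -> (year, half_month, order)."""
    m = re.fullmatch(r"(\d{4}) ([A-HJ-Y])([A-HJ-Z])(\d*)", desig)
    if m is None:
        raise ValueError(f"not a provisional designation: {desig}")
    year = int(m.group(1))
    half_month = LETTERS.index(m.group(2)) + 1  # 1 = first half of January
    cycles = int(m.group(4) or 0)               # completed 25-letter cycles
    order = cycles * 25 + LETTERS.index(m.group(3)) + 1
    return year, half_month, order
```

So "2002 AT4" decodes to the 119th object (4 cycles of 25 plus T, the 19th usable letter) reported in the first half of January 2002, and "1950 DA" to the first object of the fourth half-month of 1950.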
Minor planets are awarded an official number once their orbits are confirmed. With the increasing rapidity of discovery, these are now six-figure numbers. The switch from five figures to six figures arrived with the publication of the Minor Planet Circular (MPC) of October 19, 2005, which saw the highest numbered minor planet jump from 99947 to 118161.
Sources for names
The first few asteroids were named after figures from Greek and Roman mythology, but as such names started to dwindle the names of famous people, literary characters, discoverer's wives, children, and even television characters were used.
The first asteroid to be given a non-mythological name was 20 Massalia, named after the Greek name for the city of Marseilles. The first to be given an entirely non-Classical name was 45 Eugenia, named after Empress Eugénie de Montijo, the wife of Napoleon III. For some time only female (or feminized) names were used; Alexander von Humboldt was the first man to have an asteroid named after him, but his name was feminized to 54 Alexandra. This unspoken tradition lasted until 334 Chicago was named; even then, oddly feminised names show up in the list for years after.
As the number of asteroids began to run into the hundreds, and eventually the thousands, discoverers began to give them increasingly frivolous names. The first hints of this were 482 Petrina and 483 Seppina, named after the discoverer's pet dogs. However, there was little controversy about this until 1971, upon the naming of 2309 Mr. Spock (the name of the discoverer's cat). Although the IAU subsequently banned pet names as sources, eccentric asteroid names are still being proposed and accepted, such as 4321 Zero, 6042 Cheshirecat, 9007 James Bond, 13579 Allodd and 24680 Alleven, and 26858 Misterrogers.
A well-established rule is that, unlike comets, minor planets may not be named after their discoverer(s). One way to circumvent this rule has been for astronomers to exchange the courtesy of naming their discoveries after each other. An exception to this rule is 96747 Crespodasilva, which was named after its discoverer, Lucy d'Escoffier Crespo da Silva, because she died shortly after the discovery, at age 22.
Names were adapted to various languages from the beginning. 1 Ceres, Ceres being its Anglo-Latin name, was actually named Cerere, the Italian form of the name. German, French, Arabic and Hindi use forms similar to the English, whereas Russian uses a form, Tserera, similar to the Italian. In Greek the name was translated to Demeter, the Greek equivalent of the Roman goddess Ceres. In the early years, before it started causing conflicts, asteroids named after Roman figures were generally translated in Greek; other examples are Hera for 3 Juno, Hestia for 4 Vesta, Chloris for 8 Flora, and Pistê for 37 Fides. In Chinese, the names are not given the Chinese forms of the deities they are named after, but rather typically have a syllable or two for the character of the deity or person, followed by 神 'god(dess)' or 女 'woman' if just one syllable, plus 星 'star/planet', so that most asteroid names are written with three Chinese characters. Thus Ceres is 谷神星 'grain goddess planet', Pallas is 智神星 'wisdom goddess planet', etc.
Special naming rules
Minor-planet naming is not always a free-for-all: there are some populations for which rules have developed about the sources of names. For instance, centaurs (orbiting between Saturn and Neptune) are all named after mythological centaurs; Jupiter trojans after heroes from the Trojan War; resonant trans-Neptunian objects after underworld spirits; and non-resonant TNOs after creation deities.
Physical properties of comets and minor planets
Archival data on the physical properties of comets and minor planets are found in the PDS Asteroid/Dust Archive. This includes standard asteroid physical characteristics such as the properties of binary systems, occultation timings and diameters, masses, densities, rotation periods, surface temperatures, albedoes, spin vectors, taxonomy, and absolute magnitudes and slopes. In addition, European Asteroid Research Node (E.A.R.N.), an association of asteroid research groups, maintains a Data Base of Physical and Dynamical Properties of Near Earth Asteroids.
See also
- List of minor planets
- Solar System
- Groups of minor planets
- Small Solar System body
- "Unusual Minor Planets". Minor Planet Center. Retrieved 23 December 2011.
- "Minor Planet Statistics". Minor Planet Center. Retrieved 2011-02-26.
- When did the asteroids become minor planets?, James L. Hilton, Astronomical Information Center, United States Naval Observatory. Accessed May 5, 2008.
- Planet, asteroid, minor planet: A case study in astronomical nomenclature, David W. Hughes, Brian G. Marsden, Journal of Astronomical History and Heritage 10, #1 (2007), pp. 21–30. Bibcode: 2007JAHH...10...21H
- Mike Brown, 2012. How I Killed Pluto and Why It Had It Coming
- "Asteroid", MSN Encarta, Microsoft. Accessed May 5, 2008. Archived 2009-11-01.
- Press release, IAU 2006 General Assembly: Result of the IAU Resolution votes, International Astronomical Union, August 24, 2006. Accessed May 5, 2008.
- Questions and Answers on Planets, additional information, news release IAU0603, IAU 2006 General Assembly: Result of the IAU Resolution votes, International Astronomical Union, August 24, 2006. Accessed May 8, 2008.
- JPL. "How Many Solar System Bodies". JPL Solar System Dynamics. NASA. Retrieved 2010-09-27.
- "Discovery Circumstances: Numbered Minor Planets (1)-(5000)". Minor Planet Center. Retrieved 2011-02-26.
- "Discovery Circumstances: Numbered Minor Planets (240001)-(245000)". Minor Planet Center. Retrieved 2011-02-26.
- Yeomans, Don, "Near-Earth Object groups", Near Earth Object Project (NASA), retrieved 2011-12-24
- Connors, Martin; Wiegert, Paul; Veillet, Christian (July 2011), "Earth's Trojan asteroid", Nature 475 (7357): 481–483, Bibcode:2011Natur.475..481C, doi:10.1038/nature10233, PMID 21796207
- Trilling, David et al. (October 2007), "DDT observations of five Mars Trojan asteroids", Spitzer Proposal ID #465, Bibcode:2007sptz.prop..465T
- Horner, J.; Evans, N.W.; Bailey, M. E. (2004). "Simulations of the Population of Centaurs I: The Bulk Statistics". Monthly Notices of the Royal Astronomical Society 354 (3): 798–810. arXiv:astro-ph/0407400. Bibcode:2004MNRAS.354..798H. doi:10.1111/j.1365-2966.2004.08240.x.
- Neptune trojans, Jupiter trojans
- NASA JPL Small-Body Database Browser on 96747 Crespodasilva
- Staff (November 28, 2000). "Lucy Crespo da Silva, 22, a senior, dies in fall". Hubble News Desk. Retrieved 2008-04-15.
- 谷 'valley' being a common abbreviation of 穀 'grain' that would be formally adopted with simplified Chinese characters.
- "Division III Commission 15 Physical Study of Comets & Minor Planets". International Astronomical Union (IAU). September 29, 2005. Retrieved 2010-03-22.
- "Physical Properties of Asteroids".
- "The Near-Earth Asteroids Data Base".
Despite Reductions by Industrialized Countries, Global CO2 Emissions Increase Steeply
Global emissions of carbon dioxide (CO2) increased by 45 percent between 1990 and 2010, and reached an all-time high of 33 billion tons in 2010. Increased energy-efficiency, nuclear energy and the growing contributions of renewable energy are not compensating for the globally increasing demand for power and transport, which is strongest in developing countries.
This increase took place despite emission reductions in industrialized countries during the same period. Even though different countries show widely variable emission trends, industrialized countries are likely to meet the collective Kyoto target of a 5.2 percent reduction of greenhouse gas emissions by 2012 as a group, partly thanks to large emission reductions from economies in transition in the early nineties and more recent reductions due to the 2008-2009 recession. These figures were published in the report "Long-term trend in global CO2 emissions," prepared by the European Commission's Joint Research Centre and PBL Netherlands Environmental Assessment Agency.
The report, which is based on recent results from the Emissions Database for Global Atmospheric Research (EDGAR) and latest statistics for energy use and other activities, shows large national differences between industrialized countries. Over 1990-2010, in the EU-27 and Russia, CO2 emissions decreased by 7 percent and 28 percent respectively, while the USA's emissions increased by 5 percent and the Japanese emissions remained more or less constant. The industrialized countries that have ratified the Kyoto Protocol (so-called ratifying Annex 1 countries) and the United States, in 1990 caused about two-thirds of global CO2 emissions. Their share of global emissions has now fallen to less than half the global total.
Continued growth in the developing countries and emerging economies and economic recovery by the industrialized countries are the main reasons for a record-breaking 5.8 percent increase in global CO2 emissions between 2009 and 2010. Most major economies contributed to this increase, led by China, the United States, India and EU-27 with increases of 10 percent, 4 percent, 9 percent and 3 percent respectively. The increase is significant even when compared to 2008, when global CO2 emissions were at their highest before the global financial crisis. It can be noted that in EU-27, CO2 emissions remain lower in absolute terms than they were before the crisis (4.0 billion tons in 2010 as compared with 4.2 billion tons in 2007).
At present, the USA emits 16.9 tons CO2 per capita per year, over twice as much as the EU-27 with 8.1 tons. By comparison, Chinese per capita CO2 emissions of 6.8 tons are still below the EU-27 average, but now equal those of Italy. It should be noted that the average figures for China and EU-27 hide significant regional differences.
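The figures above can be cross-checked with simple arithmetic, using only the numbers quoted in this report:

```python
# The 2010 global total and the 45% rise since 1990 imply the 1990 baseline.
total_2010 = 33.0                   # billion tons of CO2
baseline_1990 = total_2010 / 1.45   # roughly 22.8 billion tons

# Per-capita comparison (tons CO2 per person per year).
usa, eu27, china = 16.9, 8.1, 6.8
usa_vs_eu = usa / eu27              # roughly 2.09, i.e. "over twice as much"
```

The implied 1990 baseline of about 22.8 billion tons is consistent with the 45 percent growth stated in the opening paragraph.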
Long term global growth in CO2 emissions continues to be driven by power generation and road transport, both in industrial and developing countries. Globally, they account for about 40 percent and 15 percent respectively of the current total, and both have consistent long-term annual growth rates of between 2.5 percent and 5 percent.
Throughout the Kyoto Protocol period, industrialized countries have made efforts to change their energy sources mix. Between 1990 and 2010 they reduced their dependence on coal (from 25 percent to 20 percent of total energy production) and oil (from 38 percent to 36.5 percent), and shifted towards natural gas (which increased from 23 percent to 27 percent), nuclear energy (from 8 percent to 9 percent) and renewable energy (from 6.5 percent to 8 percent). In addition they made progress in energy savings, for example by insulation of buildings, more energy-efficient end-use devices and higher fuel efficiencies. The report shows that the current efforts to change the mix of energy sources cannot yet compensate for the ever increasing global demand for power and transport. This needs to be considered in future years in all efforts to mitigate the growth of global greenhouse gas emissions, as desired by the UN Framework Convention on Climate Change, the Bali Action Plan and the Cancún agreements.
mikejuk writes “The Goldbach conjecture is not the sort of thing that relates to practical applications, but they used to say the same thing about electricity. The Goldbach conjecture is reasonably well known: every even integer greater than 2 can be expressed as the sum of two primes. Very easy to state, but it seems very difficult to prove. Terence Tao, a Fields medalist, has published a paper that proves that every odd number greater than 1 is the sum of at most five primes. This may not sound like much of an advance, but notice that there is no stipulation for the integer to be greater than some bound. This is a complete proof of a slightly lesser conjecture, and might point the way to getting the number of primes needed down from at most five to at most 2. Notice that no computers were involved in the proof — this is classical mathematical proof involving logical deductions rather than exhaustive search.”
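The conjecture is easy to verify by brute force for small cases, which is exactly why the story stresses that Tao's result is a classical proof rather than an exhaustive search. A minimal checker:

```python
def is_prime(n: int) -> bool:
    """Trial-division primality test, fine for small n."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def goldbach_pair(n: int):
    """Return primes (p, q) with p + q == n for an even n > 2,
    or None if no such pair exists (no counterexample is known)."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return p, n - p
    return None
```

For example, goldbach_pair(28) finds (5, 23). Brute force like this can only confirm individual cases; it can never prove the conjecture for all even numbers.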
Read more of this story at Slashdot.
What is meant by an "elementary particle"?
Describe the progression of ideas about what particles are truly "elementary".
What particles are currently considered to be the building blocks of all matter?
Describe the difference in protons and neutrons in terms of the quarks which make them up.
No one has seen an isolated quark. Why not?
Why are huge, high energy accelerators required to observe the tiniest of particles?
What has led to the close cooperation between those who study the very smallest things (quarks, leptons) and those who study the very largest things (galaxies, …)?
Which two of the four fundamental forces have now been shown to be "unified"?
What is meant by "grand unification theories (GUT)"?
What evidence do we have that all parts of the universe are expanding? How is this expansion related to the measurement of distance to remote parts of the universe?
What is the 3K background radiation? What is its significance to the modeling of the formation of the universe?
Why is the relative abundance of hydrogen and helium considered to be such an important part of the modeling of the universe?
(Submitted June 30, 1997)
How is it possible for manned space flights to survive the effects of
the Van Allen Radiation Belt?
As you know, the Van Allen radiation belts are doughnut-shaped regions
encircling Earth and containing high-energy electrons and ions
trapped in the Earth's magnetic field. Explorer I, launched by NASA in 1958,
discovered these two regions of intense radiation surrounding the Earth. They
are referred to as the inner and outer Van Allen radiation belts, after
James Van Allen who designed Explorer I. The inner region is centered at about
3000 km above Earth and has a thickness of about 5000 km. The outer region is
centered at about 15,000 -- 20,000 km above the surface of the Earth and has a
thickness of 6,000 -- 10,000 km.
Typically, manned space flight (such as the Shuttle) stays well below the
altitude of the van Allen radiation belts. Safe flight can occur below
altitudes of 400 km or so.
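Taking the figures above at face value, a rough altitude check can be sketched. The band edges here are naive center-plus-or-minus-half-thickness estimates derived from the numbers quoted in this answer, not precise belt boundaries:

```python
# Naive belt bands (km above Earth's surface), center +/- half thickness.
INNER_BELT = (3000 - 5000 / 2, 3000 + 5000 / 2)      # ~500 to ~5,500 km
OUTER_BELT = (15000 - 10000 / 2, 20000 + 10000 / 2)  # ~10,000 to ~25,000 km

SAFE_LEO_CEILING = 400  # km: "safe flight can occur below 400 km or so"

def crosses_belts(altitude_km: float) -> bool:
    """True if a circular orbit at this altitude sits inside either band."""
    return any(lo <= altitude_km <= hi
               for lo, hi in (INNER_BELT, OUTER_BELT))
```

Under this sketch, a Shuttle-class orbit at ~350 km stays clear of both belts, while a lunar trajectory necessarily passes through both bands, which is why shielding mattered for Apollo.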
So... what do we do when we have to fly through the radiation belts -- like when we went to the Moon or sent probes to other planets?
In the 1960s, NASA asked Oak Ridge National Laboratory to predict how
astronauts and other materials would be affected by exposure to both the
Earth's Van Allen radiation belts and the Sun's radiation. Oak Ridge
biologists sent bacteria and blood samples into space and exposed small
animals to radiation. They concluded that proper shielding would be key to
successful flight not only for living organisms, but for electronic
instrumentation as well. To develop shielding for the Apollo crews,
Oak Ridge researchers recycled the Lab's Tower Shielding Facility, which had
hoisted shielding experiments aloft for the 1950's nuclear-plane project.
Questions on this topic are no longer responded to by the "Ask an Astrophysicist" service. See http://imagine.gsfc.nasa.gov/docs/ask_astro/ask_an_astronomer.html
for help on other astronomy Q&A services.
The negative consequences of global warming are well-documented — melting ice caps, rising sea levels, loss of habitat for polar bears and countless other species, mass disruptions and dislocations around the world as formerly habitable areas become unlivable. It sounds like the world's going to become a very unpleasant place to call home if everything that's been predicted comes to pass.
The less-publicized reality of climate change is that some change is likely to be beneficial. Granted, virtually every positive effect has a negative corollary, and sometimes the negative outweighs the positive (territorial disputes over low-lying islands will cease, which is good, but only because the islands will be underwater, which is worse). But it's not all bad. The following list details the top 10 effects of global climate change that could be good for the planet. This may not convince the doomsayers, but should global warming transpire as many scientists predict, it could make waiting for that toasty Armageddon a much more endurable experience.
This article from EARTH Magazine noted something I wasn’t aware of, but probably shouldn’t be surprised about – induction of earthquakes caused (or at least suspected to be caused) by geological exploration. Usually connected to work done for energy related purposes, it appears that at least one geologist has been taken to court (though acquitted) for inducing 30 earthquakes in Switzerland that caused millions in property damage.
While the Swiss case dealt with an enhanced geothermal energy project, the article also brings up the practice of fracking. It’s a related practice involving the injection of water at high pressure into the ground, except fracking is typically done in exploring for natural gas. Usually the complaints associated with fracking have to do with groundwater contamination, due in part to the chemicals used in the process.
Perhaps it shouldn’t, but the induction of earthquakes seems to cross some kind of line in terms of environmental modification. Perhaps it’s worth including those techniques that might induce earthquakes in the same category as other geoengineering projects? You might think that the magnitudes of the quakes involved (typically under 3) would make this unnecessary, but it’s as much the location of the quakes and the lasting impact on surrounding rock (and nearby faults, should people be crazy enough to induce near fault lines) that matters. Such effects, more subtle than the damage of stronger quakes, would make closer scrutiny of efforts that could induce earthquakes a good idea. | <urn:uuid:cbb4b824-1737-4070-9703-fd192e54360b> | 3.234375 | 299 | Personal Blog | Science & Tech. | 30.278635
Martian Dust Devil Movie, Phoenix Sol 104
The Surface Stereo Imager on NASA's Phoenix Mars Lander caught this dust devil in action west of the lander in four frames shot about 50 seconds apart from each other between 11:53 a.m. and 11:56 a.m. local Mars time on Sol 104, or the 104th Martian day of the mission, Sept. 9, 2008.
Dust devils have not been detected in any Phoenix images from earlier in the mission, but at least six were observed in a dozen images taken on Sol 104.
Dust devils are whirlwinds that often occur when the Sun heats the surface of Mars, or some areas on Earth. The warmed surface heats the layer of atmosphere closest to it, and the warm air rises in a whirling motion, stirring dust up from the surface like a miniature tornado.
The dust devil visible in this sequence was about 1,000 meters (about 3,300 feet) from the lander when the first frame was taken, and had moved to about 1,700 meters (about 5,600 feet) away by the time the last frame was taken about two and a half minutes later. The dust devil was moving westward at an estimated speed of 5 meters per second (11 miles per hour), which is similar to typical late-morning wind speed and direction indicated by the telltale wind gauge on Phoenix.
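The quoted speed can be sanity-checked from the distances and times given above; this quick calculation is illustrative and not part of the original caption:

```python
# The dust devil moved from ~1000 m to ~1700 m away from the lander
# in "about two and a half minutes".
distance_m = 1700 - 1000        # net change in distance from the lander
elapsed_s = 2.5 * 60            # two and a half minutes in seconds

speed_m_per_s = distance_m / elapsed_s
print(round(speed_m_per_s, 1))  # ~4.7 m/s, consistent with the ~5 m/s estimate
```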
This dust devil is about 5 meters (16 feet) in diameter. This is much smaller than dust devils that have been observed by NASA's Mars Exploration Rover Spirit much closer to the equator. It is closer in size to dust devils seen from orbit in the Phoenix landing region, though still smaller than those.
The image has been enhanced to make the dust devil easier to see. Some of the frame-to-frame differences in the appearance of foreground rocks are because each frame was taken through a different color filter.
The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.
Video Credit: Image NASA/JPL-Caltech/University of Arizona/Texas A&M University | <urn:uuid:f8be76ac-9183-4aa7-bf58-b1b66d3f254f> | 3.328125 | 459 | Truncated | Science & Tech. | 58.961024 |
Energetic star becomes a cool supergiant
Mar 27, 2003
When a star in the constellation of Monoceros, or the “Unicorn”, erupted in January last year, it temporarily became the brightest star in the Milky Way. Howard Bond of the Space Telescope Science Institute in Maryland and astronomers in the US, the Canary Islands and Italy have now used the Hubble Space Telescope to study the light emitted by the star, known as V838 Monocerotis. Their work is significant in that it provides a new method for measuring how far away stars are (H E Bond et al. 2003 Nature 422 405).
Novae and supernovae usually undergo explosive outbursts that eject stellar material into space. When V838 Mon erupted, it brightened by a factor of 10 000 and so led astronomers to believe that it was a classical nova. However, the star did not eject its outer layers and expose a hot core – unlike a conventional nova - but simply expanded to become a cool, luminous supergiant instead. This transformation defies the conventional understanding of the life cycle of stars.
Bond and co-workers found that the star underwent rapid and complex changes in brightness between January and April 2002. Hubble Space Telescope images show “light echoes”, which are a series of nearly circular arcs and rings, centred on the star (see figure). These echoes are created by light propagating into the surrounding stellar dust.
Using these measurements, the researchers calculated that the star is about 20 000 light years away. It appears to be a new type of outburst in which the star expands rapidly to supergiant dimensions in a hitherto unseen mechanism.
“At this point, we can only say that we know of two sources that could release so much energy so quickly: gravitational energy, or thermonuclear energy,” Bond told Physics Web. “Gravitational energy, such as in a stellar collision or merger, seems unlikely because the surrounding circumstellar dust suggests that V838 Mon has undergone previous outbursts - a stellar interaction would be a one-time event. We may be seeing the release of energy through nuclear fusion, but in a region of parameter space that we have not seen before.”
The team now hopes to continue with the Hubble Telescope observations and create a three-dimensional map of the circumstellar dust. It also wants to refine the distance calculations and determine the exact nature of the underlying stellar system.
About the author
Belle Dumé is Science Writer at PhysicsWeb | <urn:uuid:29884caa-7b56-4458-b911-98bc6c6fb45b> | 3.390625 | 525 | Truncated | Science & Tech. | 37.738693 |
Major Section: MISCELLANEOUS
A ``check sum'' is an integer in some fixed range computed from the
printed representation of an object, e.g., the sum, modulo a certain
prime, of the ascii codes of all the characters in the printed
representation.
Ideally, you would like the check sum of an object to be uniquely
associated with that object, like a fingerprint. It could then be
used as a convenient way to recognize the object in the future: you
could remember the check sum (which is relatively small) and when an
object is presented to you and alleged to be the special one you
could compute its check sum and see if indeed it was. Alas, there
are many more objects than check sums (after all, each check sum is
an object, and then there's
t). So you try to design a check sum
algorithm that maps similar looking objects far apart, in the hopes
that corruptions and counterfeits -- which appear to be similar to
the object -- have different check sums. Nevertheless, the best you
can do is a many-to-one map. If an object with a different check
sum is presented, you can be positive it is not the special object.
But if an object with the same check sum is presented, you have no
grounds for positive identification.
The basic check sum algorithm in ACL2
computes the check sum of an ACL2 object. Roughly speaking, we scan
the print representation of the object and, for each character
encountered, we multiply the ascii code of the character times its
position in the stream (modulo a certain prime) and then add (modulo
a certain prime) that into the running sum. This is inaccurate in
many senses (for example, we don't always use the ascii code and we
see numbers as though they were printed in base 127) but indicates
the basic idea.
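The scheme described above can be sketched in a few lines of Python. This is only an illustration of the idea, not ACL2's actual implementation; the particular prime and the use of str() for the print representation are assumptions:

```python
PRIME = 2**31 - 1  # an arbitrary large prime, chosen only for illustration

def check_sum(obj):
    """Fold each character's code, weighted by its position, into a running sum."""
    total = 0
    for position, ch in enumerate(str(obj), start=1):
        total = (total + ord(ch) * position) % PRIME
    return total

print(check_sum("hello") == check_sum("hello"))  # True: the sum is deterministic
print(check_sum("hello") == check_sum("olleh"))  # False: position weighting separates anagrams
```

As the text notes, any such map is many-to-one: a mismatched check sum proves two objects differ, but a match proves nothing.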
ACL2 uses check sums to increase security in the books
mechanism; see certificate. | <urn:uuid:d00c68ee-074e-4dc0-833c-e4e8e52c0221> | 3.453125 | 427 | Documentation | Software Dev. | 48.110299 |
An interactive presentation by Graeme Lennie.
- It takes about 100 gallons of water to grow and process a single pound of cotton, and the average American goes through about 35 pounds of new cotton material each year. Do you really need that additional T-shirt?
- One of the best ways to conserve water is to buy recycled goods, and to recycle your stuff when you’re done with it. Or, stick to buying only what you really need.
- The water required to create your laptop could wash nearly 70 loads of laundry in a standard machine.
- Recycling a pound of paper, less than the weight of your average newspaper, saves about 3.5 gallons of water. Buying recycled paper products saves water too, as it takes about six gallons of water to produce a dollar worth of paper.
(Source: National Geographic) | <urn:uuid:a1cc26f4-473e-475c-8021-4f46f565013d> | 3.203125 | 176 | Listicle | Science & Tech. | 60.733333 |
How to Secure a Web Site
Security is a very important aspect for any
developer of ecommerce web sites. To secure a web site, we must make sure
that private data that's sent between the client and server can't be deciphered.
To accomplish that, we use an Internet protocol called SSL (Secure Sockets
Layer). It's an important protocol that lets you transmit data over the
internet using data encryption.
How Secure Sockets Layer (SSL) connections Work:
SSL is the protocol used by the world wide web that allows clients and servers to communicate over a secure connection.
With SSL, the browser encrypts all data that's sent to the server and
decrypts all data that's received from the server. Conversely, the server
encrypts all data that's sent to the browser and decrypts all data that's
received from the browser. SSL is also able to determine whether data has
been tampered with during transmission and to verify that a server or a
client is who it claims to be.
To determine whether you're transmitting data over a secure connection,
you can read the URL in the browser's address bar. If it starts with HTTPS
rather than HTTP, then you're transmitting data over a secure connection,
as shown in the following diagram:
To test an application that uses SSL, you must run the application under the control of IIS.
With some browsers, a lock icon is displayed when a secure connection is being used.
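As a small client-side illustration, Python's standard ssl module constructs contexts that enforce the two checks described above by default. This is a sketch using Python's standard library, not part of the original tutorial:

```python
import ssl

# A default client context requires a valid certificate chain and a matching
# hostname -- the two checks that let SSL verify a server is who it claims to be.
context = ssl.create_default_context()
print(context.verify_mode == ssl.CERT_REQUIRED)  # server must present a certificate
print(context.check_hostname)                    # certificate must match the hostname
```

Both lines print True, showing that certificate verification and hostname checking are on by default for client connections.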
How digital secure certificates work
To use SSL to transmit data, the client and the server use digital secure certificates, as shown in the diagram below.
Certificates are the electronic counterparts to driver licenses, passports
and membership cards. You can present a Digital Certificate electronically to
prove your identity or your right to access information or services online.
A Digital Certificate is issued by a Certification Authority (CA) and
signed with the CA's private key.
Certificates serve two purposes. First, they establish the identity of the
server or client. Second, they provide the information needed to encrypt data
before it's transmitted. By default, browsers are configured to accept
certificates that come from trusted sources. If a browser doesn't recognize a
certificate as coming from a trusted source, however, it informs the user and
lets the user view the certificate. Then, the user can determine whether
the certificate should be considered valid. If the user chooses to accept the
certificate, the secure connection is established. The certificate dialog box
for a digital secure certificate is as shown in the following figure:
How to determine if a Digital Secure Certificate is installed on your server
If IIS is running on your local machine, chances are that a certificate
hasn't been installed. But if IIS is running on a server on a network, you
can use the procedure shown in the figure above to determine whether a
certificate has been installed and to view the certificate.
How to get a Digital Secure Connection
If you want to develop an ASP.NET application that uses SSL to secure
client connections, you must first obtain a digital secure certificate
from a trusted source. Certification authorities, or CAs, verify that the
person or company requesting the certificate is a valid person or company
by checking with a registration authority, or RA. To obtain a digital
secure certificate, you'll need to provide a registration authority with
information about yourself or your company. Once the registration
authority approves the request, the certificate authority can issue the
digital secure certificate.
| <urn:uuid:b640b6cc-31f3-46d3-bbfa-eafdcf19bad1> | 3.625 | 739 | Tutorial | Software Dev. | 32.105574
Explained in 60 Seconds: A collaboration with Symmetry Magazine, a Fermilab/SLAC publication (page 04)
Redshift, Explained in 60 Seconds
Redshift is the observed change in the colour of light emitted by a star or other celestial object that is moving away from Earth. | <urn:uuid:c916d591-64ee-41c0-9963-aabeed10636f> | 3.03125 | 83 | Truncated | Science & Tech. | 44.328553 |
Chemical Equilibrium vs Dynamic Equilibrium
When one or more reactants are converted to products, they may go through various modifications and energy changes. The chemical bonds in the reactants break, and new bonds form to generate products which are totally different from the reactants. This kind of chemical modification is known as a chemical reaction. Numerous variables control reactions. Mainly, by studying thermodynamics and kinetics, we can draw a lot of conclusions about a reaction and how we can control it. Thermodynamics is the study of transformations of energy. It is concerned with the energetics and the position of the equilibrium in a reaction.
What is Chemical Equilibrium?
Some reactions are reversible, and some are irreversible. In a reaction, reactants are converted to products. In some reactions, the reactants can be regenerated from the products; this type of reaction is called reversible. In irreversible reactions, once the reactants are converted to products, they cannot be regenerated from the products. In a reversible reaction, when reactants are going to products it is called the forward reaction, and when products are going to reactants, it is called the backward reaction. When the rates of the forward and backward reactions are equal, the reaction is said to be at equilibrium, so over a period of time the amounts of reactants and products do not change. Reversible reactions always tend to come to equilibrium and maintain that equilibrium. When the system is at equilibrium, the amounts of products and reactants need not be equal. There can be a higher amount of reactants than products or vice versa. The only requirement at equilibrium is that both amounts remain constant over time. For a reaction at equilibrium, an equilibrium constant can be defined; it is equal to the ratio between the concentrations of the products and the concentrations of the reactants.
K = [product]^n / [reactant]^m, where n and m are the stoichiometric coefficients of the product and the reactant.
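As a worked example, consider the equilibrium N2 + 3H2 ⇌ 2NH3, for which K = [NH3]^2 / ([N2][H2]^3). With illustrative (made-up) equilibrium concentrations, K can be evaluated directly:

```python
# Illustrative (made-up) equilibrium concentrations in mol/L
NH3, N2, H2 = 0.2, 0.1, 0.3

# K = [product]^n / [reactant]^m with stoichiometric coefficients 2, 1 and 3
K = NH3**2 / (N2 * H2**3)
print(round(K, 2))  # 14.81
```

A K much greater than 1 means the products are favored at equilibrium; a K much less than 1 means the reactants are.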
For an equilibrium reaction, if the forward reaction is exothermic then the backward reaction is endothermic and vice versa. Normally, all the other parameters for forward and backward reactions are opposite to each other like this. Therefore, if we want to facilitate either one of the reactions, we simply have to adjust the parameters to facilitate that reaction.
What is Dynamic Equilibrium?
Dynamic equilibrium is also a type of equilibrium in which the amounts of products and reactants do not change over time. However, in dynamic equilibrium, saying that the amounts do not change does not mean that the reaction has stopped. Rather, the reaction proceeds in a way that keeps the amounts unchanged (the net change is zero). Simply put, the term “dynamic equilibrium” means that the reaction is reversible and still continuing. For a dynamic equilibrium to take place, the system should be a closed one, so that no energy or matter escapes from the system.
What is the difference between Chemical and Dynamic Equilibrium?
• Dynamic equilibrium is a type of chemical equilibrium.
• In a dynamic equilibrium, the reaction still continues, but the amount of reactants and products remain unchanged because the rates of the forward and backward reactions are the same. There can be some instances in chemical equilibrium where the amounts of products and reactant remain unchanged because the reaction has stopped. | <urn:uuid:801ff792-6a2e-4d87-9846-17392be7fdf9> | 3.984375 | 681 | Knowledge Article | Science & Tech. | 39.250523 |
For many programmers, the emergence of data immutability as a desirable feature in programming languages is a curious development. Immutability, the capacity to create variables whose initial value cannot be changed, is suddenly the mode.
When the programming world was dominated by C and C++, most instructional materials barely touched on immutability. The entire conversation recognized the occasional need for constants and proscribed the use of a magic number, such as 3.14159. Eventually, a constant, PI, was suggested to help the good folks who'd have to maintain the code at some time in the future.
Except for the plaintive cry of academics whose fondness for functional languages was thoroughly ignored, the above was pretty much all you heard about data immutability. This situation changed with the advent of Java in 1995: The language implementation hid immutability behind the scenes. Strings, as well as other fundamental data types, were constants, rather than variables. This design was in part a reaction to the great difficulty of managing strings in C and C++. Because those languages treat strings simply as null-terminated arrays of characters, C strings are infinitely pliable, plastic entities that anyone with a copy can modify. Java, in counterpoint, views strings as their own fundamental and immutable data type.
Java expanded immutability in the release of Java 2 by adding immutable collections. These collections are quite useful in regular serial programming. For example, a getter that returns a collection should in most cases return an immutable collection. This step enforces data hiding and encapsulation: Objects that don't own the collection cannot change its values. Immutability makes data items ideal for sharing between threads. It enables two threads to access a string simultaneously without the usual elaborate locking mechanisms.
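The idea can be seen in miniature in Python, where a frozen dataclass plays roughly the role of an immutable Java object. This is an illustrative sketch, not drawn from the article:

```python
from dataclasses import dataclass, FrozenInstanceError

@dataclass(frozen=True)
class Point:
    x: int
    y: int

p = Point(1, 2)
try:
    p.x = 99               # any attempted mutation...
except FrozenInstanceError:
    print("immutable")     # ...is rejected, so p can be shared across threads
```

Because no code path can alter the object after construction, two threads can read it concurrently without locks, which is exactly the sharing benefit described above.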
With the wide adoption of x86 multicore processors, all programs have the possibility of useful parallelization, and so immutability is moving inexorably to the fore. New languages, such as Scala, provide for it expressly (a one-letter change in a variable declaration creates an immutable constant). Languages derived from the functional world (Erlang and Clojure, for example) embrace immutability even further, making it the default behavior for variables. Languages, such as Groovy, that did not have immutability as a definable quality, first added it as an adjunct (in Groovy's case, as an annotation), and then as an integrated part of the language. C and C++ are laggards here, with const correctness being the partial and somewhat cumbersome mechanism.
For developers not interested in multithreading, immutability still has a role to play as I've mentioned. But the full breadth of opportunity is much greater. It's safe to say that wherever possible, data items should be declared as immutable. The first benefit is performance. Compilers are very good at optimizing code when they know a data item won't change value.
Even if your code is fast enough, immutability has value. By specifying that an object is immutable, you can catch defects that might have been difficult to detect. For example, the long held practice of marking parameters to methods as final prevents you from mistakenly modifying a value that will be wiped away when the method returns. However, even within methods, there are many times when an object is returned from a function only for purposes of calling one of its methods. It too can be marked immutable. This prevents mechanical errors and provides greater readability. (As with all guidelines, this has to be tempered by the pragmatic realities. Code clutter, especially in languages that don't have simple immutability keywords, can be a drawback that more than offsets the readability benefits.)
My belief is that data immutability will become a much more pervasive part of all programming languages, fully integrated at the syntactic and semantic levels. This, I expect, will presage the wider penetration of parallel programming into general-purpose languages. | <urn:uuid:cfabd15e-b42d-43e8-9b03-55487c8f8d75> | 3.515625 | 797 | Personal Blog | Software Dev. | 28.88066
The National Lightning map shows where cloud-to-ground lightning strikes have occurred in the last hour. Lightning is an atmospheric discharge of electricity, which typically occurs during thunderstorms, and sometimes during volcanic eruptions or dust storms. In an atmospheric electrical discharge, the leader of a bolt of lightning can travel at speeds of 60,000 m/s, and can reach temperatures approaching 30,000°C (54,000°F), hot enough to fuse soil or sand into glass channels. There are over 16 million lightning storms every year.
Lightning can also occur within the ash clouds from volcanic eruptions, or can be caused by violent forest fires which generate sufficient dust to create a static charge.
How lightning initially forms is still a matter of debate: scientists have studied root causes ranging from atmospheric perturbations (wind and atmospheric pressure) to the impact of the solar wind and the accumulation of charged solar particles. Ice inside a cloud is thought to be a key element in lightning development, and may cause a forcible separation of positive and negative charges within the cloud, thus assisting in the formation of lightning. | <urn:uuid:a3ea2645-241f-4171-9816-5caee9a9134e> | 4.3125 | 232 | Knowledge Article | Science & Tech. | 35.22629
Andalusite is an aluminum-rich silicate mineral.
Andalusite is a common mineral in aluminum-bearing metamorphic rocks. It forms at low to medium temperatures and pressures. Andalusite is trimorphous with sillimanite and kyanite. It means that these three minerals have exactly the same chemical composition but they have different crystalline structure and therefore quite different appearance. The chemical composition of these three minerals are often expressed the following way: Al2SiO5 but not always. Sometimes it is written as AlAlOSiO4 or Al2OSiO4 to show that they are orthosilicates.
Orthosilicates are silicate minerals which possess isolated silica tetrahedra (SiO4) in their crystal structure. These tetrahedra are like three-dimensional islands surrounded by other elements. Other well-known orthosilicates are zircon, olivine, garnet, topaz, titanite, etc. These are the least siliceous minerals among the silicate minerals and their chemical formula is usually written in the way which clearly shows isolated silica tetrahedra as an important structural unit. If we write the chemical formula as Al2SiO5, then we set these three aluminosilicates artificially apart from their relatives.
Andalusite, kyanite, and sillimanite have quite distinctive appearance from each other. Andalusite crystals (they are commonly large enough to be seen) are elongated and have almost square cross-section. Kyanite is also elongated but it is bladed and often has a distinctive bright blue color. Sillimanite is usually fine-grained, crystals are elongated as well, sometimes fibrous (variety known as fibrolite).
Andalusite crystals are often large enough to be seen with the naked eye and have a characteristic square-shaped cross-section. Mn-rich variety from The Vosges Mountains, France. The width of the cross-section of the largest crystal is 16 mm.
Andalusite is usually pink but white, gray, yellow, green (greenish gray), and violet varieties also occur frequently. Variation of color is mostly due to chromophore elements. Iron gives pink coloration, manganese is responsible for greenish hue1. Andalusite is usually relatively pure but it may contain manganese and iron (both are chromophores) that replace aluminum in the lattice. Andalusite variety chiastolite contains dark carbonaceous inclusions that form a cross along the diagonals of the prism. Andalusite may easily alter to sericite (fine-grained muscovite) or to other sheet silicates. Variety chiastolite is especially prone to such alteration which starts from the contact surface between andalusite and carbonaceous inclusions1. Other inclusions like quartz, opaque minerals, and other minerals are also common in andalusite but they are small, visible with a microscope only. Andalusite is physically hard mineral (7.5 on Mohs scale) but it may be less on the surface because of alteration4.
Andalusite variety chiastolite porphyroblasts (note diagonal dark zones) in a metamorphosed claystone from Germany. The chiastolitic cruciform pattern (visible when crystals are cut at right angles to the longest axis of the prism) forms because the growing andalusite crystal pushes impurities aside as it grows. Initially it was unable to free itself from all types of inclusions, but as the crystal grows bigger it becomes more and more clear1. Width of sample 11 cm.
Andalusite occurs chiefly in metamorphic rocks. These metamorphic rocks are rich in aluminum. The protoliths are sedimentary rocks that consequently also have to contain lots of aluminum. These are sedimentary rocks that are rich in clay (shale, argillite, mudstone, etc.). All clay minerals contain lots of aluminum. Andalusite is the least dense of the three polymorphs (andalusite, kyanite, sillimanite) and is therefore stable at lower pressure. If pressure rises, andalusite transforms to kyanite. If temperature rises much faster than pressure, then sillimanite is the most stable of the three. All of them occur in metamorphic rocks which makes them very good indicators of the metamorphic conditions during their formation. Andalusite is no longer stable if the temperature rises approximately above 600 °C and the pressure over 4 kbar (diagram below) which equals about 12…14 km depth in the crust.
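The depth figures quoted above follow from the lithostatic pressure relation P = ρgh. A quick check, assuming an average crustal rock density of about 2800 kg/m³ (the density value is an assumption, not stated in the text):

```python
rho = 2800            # assumed average crustal rock density, kg/m^3
g = 9.81              # gravitational acceleration, m/s^2
pressure_pa = 4e8     # 4 kbar expressed in pascals (1 kbar = 1e8 Pa)

# Lithostatic pressure P = rho * g * h, solved for depth h
depth_km = pressure_pa / (rho * g) / 1000
print(round(depth_km, 1))  # ~14.6 km, close to the quoted 12-14 km range
```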
Andalusite is a common mineral in hornfels. Hornfels is a fine-grained metamorphic rock formed by contact metamorphism — baked sedimentary rock next to hot magma intrusion. Andalusite is also common in regionally metamorphosed (related to mountain-building events) rocks like slate and mica schist. Andalusite may occasionally occur in granitic igneous rocks. Andalusite is not particularly stable in the weathering environment but it may be found in sand and sandstone if low to medium grade metamorphic rocks are not too far away. Andalusite and kyanite are used as a refractory source material. They are heated to produce mullite (andalusite needs to be heated to 1450…1500 °C) which is used to make bricks resistant to high temperature and other fire-resistant materials (in spark plugs3, for example). Sillimanite is rarely used for that purpose because it tends to be too fine-grained which makes it difficult to extract sillimanite from rocks and it requires higher temperature to mullitize. Largest commercial deposits of andalusite are in South Africa. Transparent andalusite crystals may serve as gemstones. Andalusite was first described in Andalusia (Spain), and was named after this region4.
Stability fields of andalusite, kyanite, and sillimanite2. Andalusite is stable at low pressure and temperature. 1 kbar equals roughly 3.5 km depth in the continental crust.
1. Deer, W. A., Howie, R. A. & Zussman, J. (1996). An Introduction to the Rock-Forming Minerals, 2nd Edition. Prentice Hall.
2. Nesse, William D. (2011). Introduction to Mineralogy, 2nd Edition. Oxford University Press.
3. Klein, C., Hurlbut, C. S. (1993). Manual of Mineralogy, 21st Edition. John Wiley & Sons.
4. Hurlbut, C. S. (2007). Andalusite. In: McGraw Hill Encyclopedia of Science & Technology, 10th Edition. McGraw-Hill. Volume 1. 652-653. | <urn:uuid:b9c7e12f-6de5-4e41-99fc-4d1b083f00d2> | 3.828125 | 1,427 | Knowledge Article | Science & Tech. | 27.841913 |
Water & Food: How Plants Eat and Drink
Plants need water. Water in the cells helps plants to grow and make food, makes leaves and stems firm (a plant that needs watering wilts), and carries minerals from the soil.
Plants also need food. As plants cannot eat in the same way as animals, plants have to create sugars, by something called photosynthesis.
Photosynthesis
The word comes from the Greek words for ‘light’ and ‘combination’. Photosynthesis uses chlorophyll (the green colouring in leaves), carbon dioxide from the air and sunlight, and produces oxygen and a sugar called glucose. Plants use the glucose to make starches, proteins and fats. These are used in the plant’s growth, as well as being stored in seeds to feed new plants, and for times when there is no sunlight. These starches, proteins and fats make plants useful for making things, as well as good food for animals and humans.
The pondweed, Elodea canadensis, produces a lot of oxygen during photosynthesis.
Fill a tank or a glass bowl with water and leave it out for a couple of days to get rid of any chlorine. Put a small bunch of pondweed in the tank. Lower a glass jar into the tank, let it fill with water and then put it upside-down over the pondweed. Leave the tank in the sun and count the bubbles that come off the pondweed over a five-minute period. See how much oxygen collects at the top of the jar in a day.
Try the experiment in a light place but out of direct sun, and in the dark. Does this affect the number of bubbles and the amount of oxygen produced? If there is gas produced in the dark, do you think this is oxygen or carbon dioxide?
Chlorophyll
Chlorophyll is the pigment that makes plants green, and its name comes from the Greek words for ‘green’ and ‘leaf’. It is found in plants, algae and cyanobacteria. Chlorophyll is in a part of the plant cell called the chloroplast.
In the autumn, the chlorophyll in the leaves breaks down, and so the leaves lose their green colour, showing up the colours of other pigments in the leaves, including reds and yellows. Cover half of a leaf (still attached to the plant) with black plastic or black paper, or cover a square of grass with an upside down bucket, and leave for a couple of days. Because the chlorophyll has not been exposed to the light, it starts to break down, and the grass or leaf will look yellowish.
Carbon Dioxide and Oxygen
On the underside of the leaves, plants have openings called stomata (see ‘Do Plants Breathe?’). These allow carbon dioxide into the leaves and into the plant cells, and let oxygen out.
Sunlight
Plants grow up towards the sun, and turn their leaves towards the sun so that they can catch as much light as possible. Put a plant on a bright windowsill – do the leaves move towards the sun? How long does it take?
Water
Plants need water for photosynthesis, and to carry minerals to all parts of the plant. Plants absorb water through their roots, especially through the very fine root hairs near the tips of the roots. Water evaporates out through the stomata. This is called transpiration, and it pulls water up from the roots and through the stems to the leaves using tubes called xylem.
To stop plants using too much water, especially in hot weather, the stomata can close and the top surface of the leaves are waxy. Plants that live in hot, dry climates have smaller leaves with waxier surfaces and fewer stomata.
Put a whole, fresh carrot in a glass of water with red food dye and leave it overnight. Slice the carrot across and lengthways – as the water has travelled through the carrot, the red food colouring should have dyed the xylem.
Try this with a fresh stick of celery with leaves – partly split the stick of celery lengthways and put one half in a glass of water with red food colouring and the other in a glass with blue food colouring. Slice the stem through – the xylem should show two different colours. Even the leaves might show two different colours. Try it with a white carnation – do the petals change colour? | <urn:uuid:2f65aa9a-9ad2-4cf0-9808-fd57b029b4cb> | 3.6875 | 932 | Knowledge Article | Science & Tech. | 64.045359 |
06-24-2009, 04:19 PM
Typically, a framework is a set of functions inside a set of classes that are ideally able to be used independently of one another, although some dependencies are naturally unavoidable. They hasten the most common of tasks, such as interacting with a database, and prevent us from reinventing the wheel time and time again, as it were.
Insofar as I can see, the only reason Ruby, quite an old language, made a brief return was because of the Rails framework. Many people seem to be unable to grasp the fact that Rails is a framework, however.
So in the most basic example, let's say you have a database class in your framework. That won't be specific to any particular project, but instead a configuration file of some type, which will be specific to a project, although retain the same syntax throughout, will be used to change specifics. Such as the database's credentials.
This saves the time of writing out all your SQL statements time-after-time, and repeating the code to connect to your database each time. A good framework will consist of many various classes, such as the Zend Framework.
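That separation can be sketched in a few lines. This example is mine, written in Python rather than PHP, and every name in it (the Database class, the config keys, the DSN format) is hypothetical:

```python
# Hypothetical sketch of the framework idea: a generic Database class
# that is not specific to any project, plus a project-specific config
# mapping that supplies the credentials.

class Database:
    def __init__(self, config):
        # The connection string is built from the config, so the same
        # class works unchanged across projects.
        self.dsn = "{driver}://{user}@{host}/{name}".format(**config)

    def insert(self, table, row):
        # Generate the repetitive SQL once, instead of writing it out
        # by hand time-after-time.
        cols = ", ".join(row)
        marks = ", ".join(["?"] * len(row))
        return f"INSERT INTO {table} ({cols}) VALUES ({marks})"

# Project-specific configuration, same syntax in every project:
config = {"driver": "mysql", "user": "app", "host": "localhost", "name": "shop"}
db = Database(config)
print(db.dsn)                                  # mysql://app@localhost/shop
print(db.insert("users", {"id": 1, "email": "a@b.c"}))
# INSERT INTO users (id, email) VALUES (?, ?)
```

Real frameworks add escaping, pooling and driver selection on top, but the shape (a generic class plus per-project configuration) is the same.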
I have been teaching PreCalculus for the past ten years, and it always bothered me how trigonometry is developed and presented in most books. After having spent months on linear, quadratic, polynomial, rational, exponential and logarithmic functions, all of which we define on domains that are subsets of the real numbers, we suddenly define sine & cosine as having a domain that is a unit for measuring angular rotation. Wouldn't that be like beginning logarithms by saying a logarithm is a function that takes in sound intensities, with a domain given in watts per meter squared?
It just strikes me as odd. The last few years I have been developing the trig functions by first introducing periodic functions and then the wrapping function. By using the analogy of wrapping a number line around the unit circle, I can develop all six trigonometric functions, their graphs, the basic values for pi/6’s, pi/4’s, pi/3’s, pi/2’s, and many of the basic trigonometric identities all without memorizing. I find it to be a very powerful analogy. I usually do not even discuss degrees & radians until I move onto applications of trigonometric functions later on.
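The wrapping-function idea is easy to demonstrate numerically. In this sketch (mine, not the poster's) the wrapped point for arc length t is (cos t, sin t), so the standard values and the periodicity fall out of the geometry:

```python
import math

def wrap(t):
    """Wrap arc length t around the unit circle, starting at (1, 0).

    The coordinates of the wrapped point are (cos t, sin t); building
    them this way mirrors the number-line-around-the-circle analogy.
    """
    return (math.cos(t), math.sin(t))

# The basic values fall out of the geometry:
x, y = wrap(math.pi / 6)
print(round(y, 6))                    # 0.5, i.e. sin(pi/6)
x, y = wrap(math.pi / 2)
print(round(x, 6), round(y, 6))       # 0.0 1.0

# Periodicity is built in: t and t + 2*pi wrap to the same point.
a, b = wrap(1.0), wrap(1.0 + 2 * math.pi)
print(math.isclose(a[0], b[0], abs_tol=1e-12))   # True
```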
Do any of you use the wrapping function, or do you begin teaching trig using degrees & radians? | <urn:uuid:5f2e8c4e-0866-4f5c-9015-03c49d522088> | 3.171875 | 287 | Comment Section | Science & Tech. | 43.587525 |
Convertin’ a constant force into an oscillatin’ one is a useful trick. Ya’ll seen em: gravity-powered pendulums and wind-powered turbines for example, them both set machines a-spinin and a-swingin by exploitin’ a constant force.
Them machines might work sweetly at macroscopic scales but ain’t nobody cracked it on the nanoscale even though nanobods are a-chompin at the bit to reproduce this trick. The trouble is that gravity ain’t strong enough at this level and as for wind, who you kiddin?
That leaves only tricky-dicky forces from the dizzy world of electrostatics and magnetics and these are so poorly understood on tiny scales that them nanobods are still a-wondrin and a-ponderin over how to harness them.
But Hyun “Mighty” Kim and his crew at the University of Wisconsin-Madison say they cracked it.
Their device is a kinda nano-mushroom that stands between the plates of a capacitor, in a constant DC field.
Give the mushroom a push and it leans towards the source electrode where electrons tunnel across into the mushroom head. The DC field exerts a force on this extra charge on the ’shroom, pushing it towards the drain electrode where the electrons jump ship. The force disappears and the mushroom’s stiffness sends it swinging back to the source again like a metronome, and the process starts again.
Voila! A nanomechanical oscillator that converts a constant force into an oscillation.
Them nanobods are gonna be cockahoop over this one, betcha!
Ref: arxiv.org/abs/0708.1646: Self Excitation of Nano-Mechanical Pillars | <urn:uuid:12ae38bf-b742-4fda-98e0-58d65e9db237> | 2.734375 | 389 | Nonfiction Writing | Science & Tech. | 49.393145 |
About Dates and Times
Date and time objects allow you to store references to particular instances in time. You can use date and time objects to perform calculations and comparisons that account for the corner cases of date and time calculations.
At a Glance
There are three main classes used for working with dates and times.
NSDate allows you to represent an absolute point in time.
NSCalendar allows you to represent a particular calendar, such as a Gregorian or Hebrew calendar. It provides the interface for most date-based calculations and allows you to convert between dates and date components.
NSDateComponents allows you to represent the components of a particular date, such as hour, minute, day, year, and so on.
In addition to these classes,
NSTimeZone allows you to represent a geopolitical region’s time zone information. It eases the task of working across different time zones and performing calculations that may be affected by daylight savings time transitions.
Creating and Using Date Objects to Represent Absolute Points in Time
Date objects represent dates and times in Cocoa. Date objects allow you to store absolute points in time which are meaningful across locales, calendars and timezones.
Working with Calendars and Date Components
Date components allow you to break a date down into the various parts that comprise it, such as day, month, year, hour, and so on. Calendars represent a particular form of reckoning time, such as the Gregorian calendar or the Chinese calendar. Calendar objects allow you to convert between date objects and date component objects, as well as from one calendar to another.
Performing Date and Time Calculations
Calendars and date components allow you to perform calculations such as the number of days or hours between two dates or finding the Sunday in the current week. You can also add components to a date or check on which day a date falls.
Working with Different Time Zones
Time zone objects allow you to present absolute times as local—that is, wall clock—time. In addition to time offsets, they also keep track of daylight saving time differences. Proper use of time zone objects can avoid issues such as miscalculation of elapsed time due to daylight saving time transitions or the user moving to a different time zone.
Special Considerations for Historical Dates
Dates in the past have a number of edge cases that do not exist for contemporary dates. These include issues such as dates that do not exist in a particular calendar—such as the lack of the year 0 in the Gregorian calendar— or calendar transitions—such as the Julian to Gregorian transition in the Middle Ages. There are also eras with seemingly backward time flow—such as BC dates in the Gregorian calendar.
How to Use this Document
If your application keeps track of dates and times, read from “Dates” to “Using Time Zones.” The
NSDate, NSCalendar, NSDateComponents, and NSTimeZone classes described in these chapters work together to store, compare, and manipulate dates and times.
If your application deals with dates in the past—particularly prior to the early 1900s, also read “Historical Dates” to learn about some of the issues that can arise when dealing with dates in the past.
If you are new to Cocoa, read:
Cocoa Fundamentals Guide, which introduces the basic concepts, terminology, architectures, and design patterns of the Cocoa frameworks and development environment.
If you display dates and times to users or create dates from user input, read:
Data Formatting Guide, which explains how to create and format user-readable strings from date objects, and how to create date objects from formatted strings.
© 2002, 2013 Apple Inc. All Rights Reserved. (Last updated: 2013-04-23) | <urn:uuid:a2db8ff6-a9ad-4391-96e4-a70288339263> | 3.578125 | 766 | Documentation | Software Dev. | 35.065324 |
Without math, would our seafaring ancestors ever have seen the world? Great mathematical thinkers and their revolutionary discoveries have an incredible story. Explore the beginnings of logarithms through the history of navigation, adventure and new worlds.
John Napier was a famous Scottish theologian and mathematician who lived between 1550 and 1617. He spent his entire life seeking knowledge, and working to devise better ways of doing everything from growing crops to performing mathematical calculations. He is best known as the discoverer of logarithms. He was also the inventor of the so-called "Napier's bones". Napier also made common the use of the decimal point in arithmetic and mathematics. http://www.johnnapier.com/
Napier's bones (or Napier's rods) and logarithms: http://www.youtube.com/watch?v=ShjoKnSm9ds
Logarithms were introduced by John Napier in the early 17th century as a means to simplify calculations. They were rapidly adopted by navigators, scientists, engineers, and others to perform computations more easily, using slide rules and logarithm tables. http://en.wikipedia.org/wiki/Logarithm
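The labour-saving trick behind Napier's logarithms, and the slide rules built on them, can be shown directly. This short Python example is illustrative and not part of the original article:

```python
import math

# A navigator multiplying two awkward numbers with a log table:
a, b = 347.0, 58.2

# Look up the logs, add them (cheap), then convert back.
log_sum = math.log10(a) + math.log10(b)
product_via_logs = 10 ** log_sum

print(math.isclose(product_via_logs, a * b))   # True: same answer as multiplying
```

Because log(ab) = log a + log b, a long multiplication reduces to two table look-ups and one addition, which is exactly what slide rules mechanized.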
A clock is an instrument used to indicate, keep, and co-ordinate time. The word clock is derived ultimately from the Celtic words clagan and clocca meaning "bell". A silent instrument missing such a mechanism has traditionally been known as a timepiece. http://en.wikipedia.org/wiki/Clock
A sextant is an instrument used to measure the angle between any two visible objects. Its primary use is to determine the angle between a celestial object and the horizon which is known as the object's altitude. Making this measurement is known as sighting the object, shooting the object, or taking a sight and it is an essential part of celestial navigation. http://www.mat.uc.pt/~helios/Mestre/Novemb00/H61iflan.htm | <urn:uuid:c596e2c5-d885-4391-8a25-5eef82538efe> | 3.703125 | 419 | Knowledge Article | Science & Tech. | 48.638878 |
Ncurses is a programming library providing an API that allows the programmer to write text-based user interfaces in a terminal-independent manner. It also optimizes screen changes, in order to reduce the latency experienced when using a remote Unix shell.
Ncurses stands for "new curses", and is a replacement for the discontinued 4.4BSD classic curses. http://www.wikipedia.org/wiki/curses.classic
Ncurses is a part of the GNU Project. It is one of the few GNU packages not distributed under the GNU General Public License or GNU Lesser General Public License; it is distributed under a permissive license like the X11 License, which is sometimes referred to as the MIT License.
Simple raster graphics ? How do I ...
jdhunter at ace.bsd.uchicago.edu
Wed Mar 24 21:41:52 CET 2004
>>>>> "Ray" == Ray Molacha <ray_molachaNOSPAM at gmx.co.uk> writes:
Ray> Hi Everybody. I'm learning Python on a Windows computer,
Ray> using PythonWin.
Ray> I'd like to be able to do some simple raster graphics, like
Ray> clearing the screen and plotting a point of some color. I
Ray> want to plot some fractals and math functions just for fun. I
Ray> understand the math ok -- I don't know how to do simple
Ray> raster graphics.
matplotlib may do what you want - http://matplotlib.sourceforge.net.
If you take fractal data and load it into an MxN Numeric array, you
can display it with a colormap with the plotting command imshow (image
show).
To plot simple math functions, you can do things like
t = arange(0.0, 1.0, 0.01)
plot(t, sin(2*pi*t))  # then show() to display
It's not a low level drawing program, but it is a high level plotting
program that may do what you need. You can use it to create simple
images to view in the viewer of your choice, or use one of the GUI
interfaces (GTK, WX, Tk) to view your figure with navigation controls.
Check the screenshots section for sample plots and the scripts that
generate them.
choices. pygtk with gtk.gdk.Drawable and gtk.DrawingArea is one that
is fast and cross platform; it also comes with nice widgets. A wx
canvas is another alternative, or PIL, or .....
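To make the fractal suggestion concrete, here is one way to build the kind of MxN array imshow expects, in plain Python (my sketch, not from the original post; the grid bounds and sizes are arbitrary):

```python
def mandelbrot_grid(rows=40, cols=60, max_iter=30):
    """Return an MxN grid of Mandelbrot escape counts."""
    grid = []
    for r in range(rows):
        row = []
        for c in range(cols):
            # map the pixel to a point in the complex plane
            z0 = complex(-2.0 + 3.0 * c / cols, -1.2 + 2.4 * r / rows)
            z, n = 0j, 0
            while abs(z) <= 2 and n < max_iter:
                z = z * z + z0
                n += 1
            row.append(n)
        grid.append(row)
    return grid

grid = mandelbrot_grid()
# With matplotlib installed, this array could be shown as: imshow(grid); show()
print(len(grid), len(grid[0]))         # 40 60
print(max(max(row) for row in grid))   # 30: points inside the set never escape
```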
Technology makes space exploration possible. Each Mars mission is part of a continuing chain of innovation: each relies on past missions
to identify needed new technologies and each contributes its own innovations to benefit future missions. This chain allows NASA to
continue to push the boundaries of what is currently possible, while relying on proven technologies.
The Mars Global Surveyor team implemented four innovative procedures that have enabled the Mars Global Surveyor spacecraft to return more science data than all previous Mars missions combined.
Technologies of Broad Benefit
Propulsion: Two innovations minimized propulsion usage, allowing Mars Global Surveyor to successfully operate at Mars for more than eight years, conducting long-term studies all the while.
- Angular Momentum Management Plan
As Mars Global Surveyor orbited the red planet, the force of gravity from the uneven terrain below tugged and pushed the spacecraft. Mountains and valleys caused the spacecraft to fly in a non-circular orbit at slightly varying altitudes. The effect was similar to turbulence in an airplane.
To stabilize the spacecraft above the martian atmosphere (but within the pull of gravity), the spacecraft used three reaction wheels, or "flywheels." The reaction wheels spun much like the whirling "tea cups" on a child's amusement ride. If a girl inside the teacup wants to make the teacup spin clockwise, she grabs the wheel inside the teacup and pulls counterclockwise. If a boy inside the same teacup wants to slow the teacup down, he grabs the wheel and pulls it clockwise. On the spacecraft, if the gravity of a mountain pulled the spacecraft, the reaction wheels spun and pulled the spacecraft back, keeping it balanced.
Early in the mission, Mars Global Surveyor used small thrusters to adjust the reaction wheels and keep the spacecraft from drifting off course or off balance. With the angular momentum management plan, the spacecraft balanced itself naturally, without the need to use as much fuel to fight the external forces of gravity. For example, instead of pointing straight down, the spacecraft was tilted backward 16 degrees, which helped distribute the gravitational pull of the planet more evenly on the body of the spacecraft and cut fuel consumption by a factor of about eight.
To reduce the mass and expense of fuel needed for the mission, the Mars Global Surveyor team used a braking technique called aerobraking to trim the spacecraft's initial, highly elliptical orbit into a nearly circular orbit after arriving at Mars. The aerobraking technique eliminated the need for 1500 kilograms (3,300 pounds) of braking propellant during the 700-million-kilometer (435-million-mile) interplanetary journey to Mars. The lighter weight reduced the size of the launch vehicle required from a Titan III to a Delta rocket, saving an additional $250 million.
The Magellan spacecraft at Venus was the first planetary spacecraft to use aerobraking, in a demonstration in the summer of 1993. Its success cleared the way for the use of aerobraking by Mars Global Surveyor. And the success of Mars Global Surveyor's aerobraking paved the way for
Odyssey and Mars Reconnaissance Orbiter to use the same technique.
Remote Science Instrumentation:
Two other innovations changed how the spacecraft was flown to increase its ability to collect science data from Mars.
- Image Motion Compensation
Starting in 2003, the camera and spacecraft teams for Mars Global Surveyor perfected a technique that allowed the entire spacecraft to roll so that the camera could scan surface details at three times higher resolution than if the spacecraft did not roll. The image motion compensation technique adjusted the spacecraft's rotation rate to match the ground speed under the camera.
For several years, the Mars Orbiter Camera acquired the highest-resolution images ever obtained from a Mars-orbiting spacecraft. During normal operating conditions, when the spacecraft did not roll, the smallest objects that could be resolved on the Martian surface were about 4 to 5 meters (13 to 16 feet) across. With the adjusted-rotation technique, called "compensated pitch and roll targeted observation" or "CPROTO," objects as small as 1.5 meters (4.9 feet) were visible in images from the same camera. Resolution capability of 1.4 meters (4.6 feet) per pixel was improved to one-half meter (1.7 feet) per pixel.
- Beta Supplement
A vital component of collecting science data is the ability to send the information to Earth. The High Gain Antenna was a dish antenna that sent and received data at high rates. To communicate with NASA's Deep Space Network antennas, it had to point toward Earth.
In 1997, after Mars Global Surveyor got into orbit around Mars, investigators realized that the High Gain Antenna's range of motion was surprisingly limited. When the spacecraft was in certain positions, they were unable to adjust the antenna to point it directly toward Earth. The team discovered that the problem stemmed from a loose screw that most likely vibrated out of position during launch.
In order to maximize science return, the team figured out a way to flip-flop the antenna during every 2-hour orbit. This flip-flop, officially named the "Beta Supplement," provided two 25-minute intervals for communicating with Earth during each orbit. The flip-flop or rewind of the antenna was a complex operation, as can be seen in the Beta Supplement Animation.
While the antenna had to point toward Earth for communications, the solar panels had to point toward the Sun for energy, and the science instruments had to point toward Mars to collect data. The Mars Global Surveyor team creatively twisted and turned the spacecraft during each revolution about Mars to accommodate all of these needs.
More information on Mars Technology >> | <urn:uuid:420c79b9-ae6f-4984-abbd-49ce293833c4> | 3.578125 | 1,168 | Knowledge Article | Science & Tech. | 38.346885 |
To get started, use %hi to display a short, %i for an int, %li for a long, %G for a float or double, %LG for a long double, %c for a char (or %i to display it as a number), and %s for a string (char * or char array). Then refine the formatting further as desired.

To print a percent sign, use %%.
int printf(const char *format, ...)
int fprintf(FILE *stream, const char *format, ...)
int sprintf(char *string, const char *format, ...)
The functions return the number of characters written, or a negative value if an error occurred.
The format string is of the form

  % [flags] [field_width] [.precision] [length_modifier] conversion_character

where components in brackets are optional. The minimum is therefore a % and a conversion character (e.g. %i).

Flags (these can be given in any order):
  -      The output is left justified in its field, not right justified (the default).
  +      Signed numbers will always be printed with a leading sign (+ or -).
  space  Positive numbers are preceded by a space (negative numbers by a - sign).
  0      For numeric conversions, pad with leading zeros to the field width.
  #      An alternative output form. For o, the first digit will be '0'. For x or X, "0x" or "0X" will be prefixed to a non-zero result. For e, E, f, F, g and G, the output will always have a decimal point; for g and G, trailing zeros will not be removed.
Field width:

The converted argument will be printed in a field at least this wide, and wider if necessary. If the converted argument has fewer characters than the field width, it will be padded on the left (or right, if left adjustment has been requested) to make up the field width. The padding character is normally ' ' (space), but is '0' if the zero padding flag (0) is present.

If the field width is specified as *, the value is computed from the next argument, which must be an int.
Precision:

A dot '.' separates the field width from the precision. If the precision is specified as *, the value is computed from the next argument, which must be an int. Its meaning depends on the conversion:

  s                 The maximum number of characters to be printed from the string.
  e, E, f           The number of digits to be printed after the decimal point.
  g, G              The number of significant digits.
  d, i, o, u, x, X  The minimum number of digits to be printed. Leading zeros will be added to make up the field width.
Length modifiers:

  h  The value is to be displayed as a short or unsigned short.
  l  For d, i, o, u, x or X conversions: the argument is a long, not an int.
  L  For e, f, g or G conversions: the argument is a long double.
Conversion characters:

  d, i  Display an int in signed decimal notation.
  o     Display an int in unsigned octal notation (without a leading 0).
  u     Display an int in unsigned decimal notation.
  x, X  Display an int in unsigned hexadecimal notation (without a leading 0x or 0X). x gives lower case output, X upper case.
  c     Display a single char (after conversion to unsigned int).
  e, E  Display a double or float (after conversion to double) in scientific notation. e gives lower case output, E upper case.
  f     Display a double or float (after conversion to double) in decimal notation.
  g, G  g is either e or f, chosen automatically depending on the size of the value and the precision specified. G is similar, but is either E or f.
  n     Nothing is displayed. The corresponding argument must be a pointer to an int variable. The number of characters converted so far is assigned to this variable.
  s     Display a string. The argument is a pointer to char. Characters are displayed until a '\0' is encountered, or until the number of characters indicated by the precision have been displayed. (The terminating '\0' is not output.)
  p     Display a pointer (to any type). The representation is implementation dependent.
  %     Display the % character.
Major Section: PROGRAMMING
Below are seven commonly used idioms for testing whether x is 0. Zip and zp are the preferred termination tests for recursions down the integers and naturals, respectively.

  idiom        logical meaning      guard                primary compiled code**
  (equal x 0)  (equal x 0)          t                    (equal x 0)
  (eql x 0)    (equal x 0)          t                    (eql x 0)
  (zerop x)    (equal x 0)          x is a number        (= x 0)
  (= x 0)      (equal x 0)          x is a number        (= x 0)
  (zip x)      (equal (ifix x) 0)   x is an integer      (= x 0)
  (zp x)       (equal (nfix x) 0)   x is a natural       (int= x 0)
  (zpf x)      (equal (nfix x) 0)   x is a fixnum >= 0   (eql (the-fixnum x) 0)

**See guards-and-evaluation, especially the subsection titled ``Guards and evaluation V: efficiency issues''. Primary code is relevant only if guards are verified. The ``compiled code'' shown is only suggestive.
The first four idioms all have the same logical meaning and differ only with respect to their executability and efficiency. In the absence of compiler optimizing, (= x 0) is probably the most efficient, (equal x 0) is probably the least efficient, and (eql x 0) is in between. However, an optimizing compiler could always choose to compile (equal x 0) as (eql x 0) and, in the case that x is known at compile-time to be numeric, (eql x 0) as (= x 0). So efficiency considerations must, of course, be made in the context of the host compiler.
Note also that (zerop x) and (= x 0) are indistinguishable. They have the same meaning and the same guard, and can reasonably be expected to generate equally efficient code.
(zip x) and (zp x) do not have the same logical meanings as the others or each other. They are not simple tests for 0. They each coerce x into a restricted domain, zip to the integers and zp to the natural numbers, choosing 0 when x is outside the domain. Thus non-numbers such as 'abc, for example, are ``recognized'' as zero by both. But zip reports that -1 is different from 0, while zp reports that -1 is 0. More precisely, (zip -1) is nil while (zp -1) is t.
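The coercions performed by zip and zp are simple enough to mimic outside ACL2. The following Python sketch is mine, not part of the ACL2 documentation; it mirrors ifix/nfix and the two predicates:

```python
def ifix(x):
    """Coerce to an integer, defaulting to 0 (like ACL2's ifix)."""
    return x if isinstance(x, int) else 0

def nfix(x):
    """Coerce to a natural number, defaulting to 0 (like ACL2's nfix)."""
    return x if isinstance(x, int) and x >= 0 else 0

def zip_(x):          # (zip x) = (equal (ifix x) 0)
    return ifix(x) == 0

def zp(x):            # (zp x) = (equal (nfix x) 0)
    return nfix(x) == 0

# Non-integers are "recognized" as zero by both:
print(zip_("abc"), zp("abc"))   # True True
# ...but the two disagree about -1:
print(zip_(-1))                 # False: -1 is an integer distinct from 0
print(zp(-1))                   # True:  nfix coerces -1 to 0
```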
Note that the last five idioms all have guards that restrict their Common Lisp executability. If these last five are used in situations in which guards are to be verified, then proof obligations are incurred as the price of using them. If guard verification is not involved in your project, then the first five can be thought of as synonymous.
Zip and zp are not provided by Common Lisp but are ACL2-specific functions. Why does ACL2 provide these functions? The answer has to do with the admission of recursively defined functions and with efficiency. Zp is provided as the zero-test in situations where the controlling formal parameter is understood to be a natural number. Zip is analogously provided for the integer case. We illustrate below.
Here is an admissible definition of factorial:

  (defun fact (n) (if (zp n) 1 (* n (fact (1- n)))))

Observe the classic recursion scheme: a test against 0 and recursion by 1-. Note however that the test against 0 is expressed with the zp idiom. Note also the absence of a guard making explicit our intention that n is a natural number.
This definition of factorial is readily admitted because when (zp n) is false (i.e., nil), n is a natural number other than 0 and so (1- n) is less than n. The base case, where (zp n) is true, handles all the ``unexpected'' inputs, such as arise with (fact -1) and (fact 'abc). When calls of (zp n) are evaluated in the logic, (zp n) checks both (integerp n) and (> n 0). Guard verification is unsuccessful for this definition of fact because zp requires its argument to be a natural number and there is no guard on fact, above. Thus the primary raw lisp for fact is inaccessible and only the logic definition (which does runtime ``type'' checking) is used in computation. In summary, this definition of factorial is easily admitted and easily manipulated by the prover but is not executed as efficiently as it could be.
Runtime efficiency can be improved by adding a guard to the definition:

  (defun fact (n)
    (declare (xargs :guard (and (integerp n) (>= n 0))))
    (if (zp n) 1 (* n (fact (1- n)))))

This guarded definition has the same termination conditions as before -- termination is not sensitive to the guard. But the guards can be verified. This makes the primary raw lisp definition accessible during execution. In that definition, the (zp n) above is compiled as (= n 0), because n will always be a natural number when the primary code is executed. Thus, by adding a guard and verifying it, the elegant and easily used definition of factorial is also efficiently executed on natural numbers.
Now let us consider an alternative definition of factorial in which (= n 0) is used in place of (zp n):

  (defun fact (n) (if (= n 0) 1 (* n (fact (1- n)))))

This definition does not terminate. For example, (fact -1) gives rise to a call of (fact -2), etc. Hence, this alternative is inadmissible. A plausible response is the addition of a guard restricting n to the naturals:

  (defun fact (n)
    (declare (xargs :guard (and (integerp n) (>= n 0))))
    (if (= n 0) 1 (* n (fact (1- n)))))

But because the termination argument is not sensitive to the guard, it is still impossible to admit this definition. To influence the termination argument one must change the conditions tested. Adding a runtime test that n is a natural number would suffice and allow both admission and guard verification. But such a test would slow down the execution of the compiled function.
The use of (zp n) as the test avoids this dilemma. It provides the logical equivalent of a runtime test that n is a natural number but the execution efficiency of a direct test against 0, at the expense of a guard conjecture to prove. In addition, if guard verification and most-efficient execution are not needed, then the use of (zp n) allows the admission of the function without a guard or other extraneous verbiage.
While general rules are made to be broken, it is probably a good idea to get into the habit of using (zp n) as your terminating ``0 test'' idiom when recursing down the natural numbers. It provides the logical power of testing that n is a non-0 natural number and allows efficient execution.
We now turn to the analogous function, zip. Zip is the 0-test idiom when recursing through the integers toward 0. Zip considers any non-integer to be 0 and otherwise tests its integer argument against 0. A typical use of zip is in the definition of integer-length, shown below. (ACL2 can actually accept this definition, but only after appropriate lemmas have been proved.)

  (defun integer-length (i)
    (declare (xargs :guard (integerp i)))
    (if (zip i)
        0
      (if (= i -1)
          0
        (+ 1 (integer-length (floor i 2))))))

Observe that the function recurses by (floor i 2). Hence, calling the function on 25 causes calls on 12, 6, 3, 1, and 0, while calling it on -25 generates calls on -13, -7, -4, -2, and -1. By making (zip i) the first test, we terminate the recursion immediately on non-integers. The guard, if present, can be verified and allows the primary raw lisp definition to check (= i 0) as the first terminating condition (because the primary code is executed only on integers).
In a block formatting context, boxes are laid out vertically, starting
at the top. Block-level elements—elements with a display property value of block, list-item, table, and (in certain circumstances) run-in—participate in block formatting contexts.
A block-level element with a display property value of block, list-item, or table will generate a principal block box. A principal box will contain either block boxes or inline boxes as children, never both. If the element contains a mix of block-level and inline children, anonymous block boxes will be generated where necessary, so that the principal box will only contain block boxes.
Consider the following example:
<div> <p>A paragraph</p> Some text in an anonymous box <p>Another paragraph</p> </div>
The HTML snippet above will, by default, generate a principal box for the div element and for each of the two p elements, plus an anonymous block box for the text that appears between the paragraphs, as seen in Figure 1.1.
An anonymous block box inherits its properties from the enclosing non-anonymous box—the div box in this example. Any non-inherited properties are set to their initial (default) values.
The principal box becomes the containing block for non-positioned descendant boxes, and it’s also the box that’s affected for any value of position other than static and for any value of float other than none.
In a block formatting context the vertical distance between two sibling boxes is determined by their respective margin properties; vertical margins between adjacent block boxes collapse if there are no borders or padding in the way. For more information, see Collapsing Margins.
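Collapsing is easy to see with two sibling paragraphs (an illustrative snippet in the spirit of the example above, not from the original text):

```html
<style>
  p { margin-top: 20px; margin-bottom: 30px; }
</style>
<div>
  <p>First paragraph</p>
  <!-- The first paragraph's 30px bottom margin and the second's 20px
       top margin collapse: the rendered gap is 30px (the larger of
       the two), not 50px. -->
  <p>Second paragraph</p>
</div>
```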
In a left-to-right environment, the left outer edge of each block box touches the left edge of the containing block. In a right-to-left environment, the right edges touch. This happens even if there are floated elements in the way, except if the block box establishes a new block formatting context. In that case, the block box becomes narrower to accommodate the floated elements.
1 Note that mixing block and inline content like this is semantically questionable, and it’s not something we recommend. This example is provided just to illustrate how CSS handles the situation. | <urn:uuid:4d2d4885-0234-49c0-a18c-abcc6b9e17d4> | 3.296875 | 470 | Documentation | Software Dev. | 39.4725 |