Scientists have figured out how to tell what color some dinosaurs were just by looking at their fossils. Dr. Prum and his colleagues took advantage of the fact that feathers contain pigment-loaded sacs called melanosomes. In 2009, they demonstrated that melanosomes survived for millions of years in fossil bird feathers. The shape and arrangement of melanosomes help produce the color of feathers, so the scientists were able to get clues about the color of fossil feathers from their melanosomes alone. Human ingenuity never ceases to amaze.
(Cartoon by Sidney Harris, http://www.sciencecartoonsplus.com/originals2.html; see also http://astro.wsu.edu/worthey/astro/html/lec-cartoons.html.) Actually, in everyday life the distinctions between wave and particle behavior are seldom observed directly, which is why the concept did not become firmly established until early in the 20th century. Then it was found that not only light but also fundamental particles such as electrons and protons have wave-particle duality, resulting in behavior that can only be described by the branch of physics called quantum mechanics. Once the theory was established, it was found to unite and explain a huge number of things that do have everyday implications. An example coming up soon is the structure of the atom, and why a given type of atom emits and absorbs light only at very specific wavelengths.
See also the Dr. Math FAQ: Browse High School Triangles and Other Polygons. Stars indicate particularly interesting answers or good places to begin browsing. Selected answers to common questions: area of an irregular shape; Pythagorean theorem proofs.

- Constructing a Segment [09/26/1999] Given a 1" segment and a 2.5" segment, how can you find a segment of length sqrt(2.5)" using only a compass and a straightedge?
- Constructing a Square [12/25/1998] Given any four points, construct a square such that each side or extension passes through one point.
- Constructing a Triangle [08/20/1999] How can you construct a triangle with 3 different-size segments?
- Constructing a Triangle Given the Medians [01/01/2001] How can I construct a triangle ABC given AM, BN, and CP, the respective medians from the vertices A, B, and C?
- Constructing Polygons [06/03/1998] How do you construct a regular pentagon and a regular decagon? Can you construct a regular n-gon?
- Constructing the Orthocenter [01/27/1999] How do you construct the orthocenter of a triangle?
- Construct Polygon Given One Side [12/03/2001] How can you construct a polygon, given one side?
- Converse of the Pythagorean Theorem [02/14/2003] What is the converse of the Pythagorean theorem?
- Cosine Addition Formula [12/13/1997] How can you prove the addition formula for cosine by using right ...
- Counting Diagonals [03/14/1998] How many diagonals can be drawn for a polygon with n sides?
- Counting Intersections of Diagonals in Polygons [03/08/2000] Can you help me find an equation for the maximum number of intersections of the diagonals in a polygon?
- Counting Rectangles Cut By a Diagonal [06/15/1999] How can we find an equation for the number of unit squares that are cut by a line going from corner to corner on a rectangle?
- Counting Sides by Counting Diagonals [06/04/2002] How can I find the number of sides in a polygon, given the number of ...
- Covering Paper using Index Cards [10/24/2001] What is the maximum area of an 8"x13" sheet of paper that you can cover by using seven 3"x5" standard index cards?
- Cross-Cornering a Shape to Make it Square [12/19/2002] We lay out a building that is 30' x 40' and cross-corner it to see if it is 'square,' but there is a 6' difference. What is the equation to find how far to move one side to make the shape 'square'?
- Curious Property of a Regular Heptagon [04/06/2001] How can I prove that in a regular heptagon ABCDEFG, (1/AB)=(1/AC)+(1/...
- Cutting a Circle out of a Square [2/14/1996] What is the area (to the nearest square centimeter) of the largest circle that can be cut from a square piece of sheet metal 73cm on each side? Explain how you determined this.
- Cutting a Square into Five Equal Pieces [07/12/1999] How can you divide a square cake into five equal parts, cutting through the center point?
- Cutting a Triangle into Two Congruent Triangles [10/06/1998] How do you cut a triangle into two congruent equilateral triangles with the minimum number of cuts?
- Cutting Carpet [9/9/1996] Two pieces of carpet are to be used to cover a floor. You are allowed to make just one cut in one of the two pieces...
- Cyclic Quadrilateral [05/22/2000] For an isosceles trapezium ABCD with AB parallel to DC and AB less than CD, how can we prove that ABCD is a cyclic quadrilateral?
- Cyclic Quadrilaterals [8/30/1996] A cyclic quadrilateral touches a circle at each vertex. What angles do these points make with the centre of the circle?
- Cyclic Trapezoid [01/17/2002] PQ is a diameter; AB is a chord parallel to PQ. If PQ=50cm and AB=14cm, ...
- Defining Exterior Angles of Polygons [06/24/2004] My math teacher says the sum of the exterior angles of a triangle is 900 degrees (360*3 - 180). I think that the sum is 360 degrees. Who is right?
- Definition of Opposite Sides [01/18/2001] What is the formal definition of 'opposite sides' of a polygon? Does a regular pentagon have opposite sides? Does a concave polygon have opposite sides? How can we define it consistent with our intuition?
- Degenerate/Nondegenerate Figure [10/27/2001] We need to know what a nondegenerate circle is. (We're trying to decide whether this is a model of incidence geometry, but don't know the ...
- Degenerate Triangle [09/03/2003] Isn't a degenerate triangle really just a line segment?
- De Longchamp's Point [09/21/2000] What is De Longchamp's point, and how is it used?
- Derivation of Law of Sines and Cosines [11/02/1997] How do you derive the law of sines and the law of cosines?
- Derivations of Heron's Formula [11/24/1998] How is Heron's formula (Hero's formula) derived?
- Derivations of Pi and a Polygon of Degree n [3/4/1995] I am curious about the derivation of pi and the formula for deriving a polygon of degree n. Can you help?
- Deriving the Distance Formula from the Pythagorean Theorem [03/23/2003] How does one derive the distance formula from the Pythagorean theorem?
- Deriving the Law of Cosines [04/01/1998] Will the Pythagorean Theorem work with a non-right triangle?
- Deriving Trilinear Coordinates [05/18/1999] How do you derive the trilinear coordinates of the orthocenter of a ...
- Desargues' Theorem and SSASS [12/15/1998] What is the main theory behind Desargues' Theorem? Also, is SSASS a valid method for proving two quadrilaterals are congruent?
- Determinants and the Area of a Triangle [12/14/1998] Given a triangle with vertices (A,B), (C,D), and (E,F), how do you find the area in determinant form?
- Determine if Point is in Rectangle [5/29/1996] What formula will allow me to determine whether a specified point lies within a polygon (rectangle - 4 points)?
- Determining Area of a Lot of Land [08/20/2003] We are trying to figure out the square footage for a lot that I own.
- Determining If a Given Point Lies inside a Polygon [12/27/2005] I have a finite number of points that constitute a polygon, and a point p(x,y). I want to know if point p lies inside the polygon.
- Determining Triangle Similarity [05/26/1998] Given two triangles, how can you determine if they are similar?
When a battered, skinny tortoiseshell cat wandered into a yard in Florida earlier this year, she could have been any other stray, but she was nothing of the kind. She carried an implanted microchip—one put there by a loving owner—and it revealed an intriguing story: the cat belonged to a local family, had been lost on a trip two months earlier, and had traveled 200 miles (322 km) in that time to arrive back in her hometown. Her journey inspired a spate of articles looking for an explanation for how this one cat, and a few others who’ve made similar trips, managed such impressive feats of navigation. The response from many eminent animal researchers was the same: “No idea.” Cats’ long-distance travels are relatively rare in the scientific literature, which explains the dearth of answers—at least so far. But that’s not the case for the wanderings of sundry other creatures, especially those that migrate. Such extreme journeys—mapless, compassless, sometimes intercontinental, through places the animals have never seen before—seem nothing short of miraculous. That’s the kind of mystery that gets scientists moving, and move they have, conducting all manner of experiments over the years—locking animals in planetariums, carrying them around in dark boxes, putting them in wading pools wrapped in magnets, and destroying various bits of anatomy to see which piece was the important one. These experiments have yielded fascinating insights into the animal brain and into a world beyond human sensation. Part of what navigating animals do is not entirely surprising. Planetarium studies reveal that some animals steer by the stars, an approach that’s comfortingly familiar to Homo sapiens but practiced by organisms as distant as the nocturnal dung beetle, which, as one recent study revealed, can roll its precious gob of poo in a straight line only as long as the Milky Way is in view. 
One of the most accomplished animal navigation researchers of the twentieth century, naturalist Ronald Lockley, found that captured seabirds released far from their homes could make a beeline back so long as either the sun or the stars were visible; an overcast sky threw them off so much that many never made it back. But plenty of other navigating animals are using something most humans regularly forget exists: the Earth’s magnetic field. In illustrations, the field is usually depicted as a series of loops that emerge from the south pole and reenter the planet at the north pole, and extend out to the edges of our atmosphere, sort of like a cosmic whisk. Our compass needles are designed to align with the field, and in the last few decades it’s become clear that numerous animals can find their way by sensing its local variations. Sea turtles, for example, don’t use the field simply to tell north from south. According to experiments led by Kenneth Lohmann, a professor of biology at the University of North Carolina, Chapel Hill, they are actually born knowing a magnetic map of the ocean. Newly hatched loggerhead turtles in the populations Lohmann studies journey 8,000 miles (12,900 km) from their hatching beaches around the Atlantic Ocean to reach feeding areas, and if they don’t keep right on track, they do not survive. Lohmann learned early on that the turtles could sense the Earth’s magnetism: he found that hatchlings from the Florida coast, which normally swim east in darkness to start their migration, swam the other way when they were put in a magnetic field that reversed north and south. That got Lohmann thinking that the turtles’ long-distance navigation might be linked to their being able to respond to whorls and quirks in the planetary field they encounter along the way. To study this, he and colleagues collected baby sea turtles a few hours before they would have left the nest on their own and put them in pools surrounded by magnetic coils. 
The coils were designed to reproduce the Earth’s magnetic field at specific points along the turtles’ migration. Reliably, the young turtles oriented themselves and swam in the direction relative to the magnetic field that, had they been in the open ocean, would have kept them on course. Lohmann has tested this with eight different locations along their route, and in each case the turtles head in just the direction required to get them to their destination. The turtles may not know where they are in any big-picture way—as Lohmann says, they may not see themselves as blinking spots on a map—but they have inherited a sense that should they feel a particular pull from the magnetic field, well, better take a right. The list of animals that navigate by magnetism, suspected and confirmed, is long, and includes a few mammals in addition to migrating birds and turtles. But our understanding of the mechanism behind that ability is sketchy: sea turtles tend to be threatened or endangered species, so scientists can study only their behavior, not their brains, and even in animals in which such work is possible, it’s hard to tell what parts of the brain and other physical structures are involved. Pigeons, one of the most intensively studied animal navigators, show how complex a question this is. One leading theory holds that iron-containing cells in the beak send magnetic information to the brain, since destroying the nerve that carries sensation from beak to brain seems to disrupt pigeons’ navigation. However, last year it emerged that those beak cells are not neurons capable of sending messages, as had been supposed; they appear to be immune cells, throwing the beak theory into confusion. Another school of thought suggests that the magnetic field may be affecting chemical reactions in the birds’ eyes, literally changing the way the world looks when they are oriented in a particular direction. 
And David Dickmann, a professor at the Baylor College of Medicine whose primary work is on a magical ability we humans often forget we have—our ability to sense gravity and constantly adjust our position to keep our balance—has lately published work showing that pigeons may have a magnetic-field sensor in their inner ears. No one knows yet which of these mechanisms, or what combination of them, is at the root of the pigeon’s powers. And lest we forget, the magnetic field is far from the only thing out there that navigating animals can sense and humans cannot. The heads of sharks are threaded with jelly-filled tubes, called the ampullae of Lorenzini, that allow them to detect extremely faint electric currents and may help them with navigation. Scents in the air, at concentrations far below human perception, are perceivable to numerous creatures that may use them to steer (in fact, pigeons that cannot smell seem oddly lost, even with their magnetic abilities intact). Bees can see patterns in sunlight invisible to the naked human eye and can use them to find their way. We can see only the outcomes, never the workings, of whatever evolved systems animals use to orient themselves across hundreds or thousands of miles. But that hasn’t stopped us from working to understand the feats of migrating reptiles, homing pigeons, and even lost pets. With reminders like the odyssey of the Florida housecat, how can we stop?
The GNU configure and build system can be used to build cross compilation tools. A cross compilation tool is a tool which runs on one system and produces code which runs on another system. A compiler which produces programs which run on a different system is a cross compilation compiler, or simply a cross compiler. Similarly, we speak of cross assemblers, cross linkers, etc. In the normal case, a compiler produces code which runs on the same system as the one on which the compiler runs. When it is necessary to distinguish this case from the cross compilation case, such a compiler is called a native compiler. Similarly, we speak of native assemblers, etc. Although the debugger is not strictly speaking a compilation tool, it is nevertheless meaningful to speak of a cross debugger: a debugger which is used to debug code which runs on another system. Everything that is said below about configuring cross compilation tools applies to the debugger as well. When building cross compilation tools, there are two different systems involved: the system on which the tools will run, and the system for which the tools generate code. The system on which the tools will run is called the host system. The system for which the tools generate code is called the target system. For example, suppose you have a compiler which runs on a GNU/Linux system and generates ELF programs for a MIPS embedded system. In this case the GNU/Linux system is the host, and the MIPS ELF system is the target. Such a compiler could be called a GNU/Linux cross MIPS ELF compiler, or, equivalently, a `i386-linux-gnu' cross `mips-elf' compiler. Naturally, most programs are not cross compilation tools. For those programs, it does not make sense to speak of a target. It only makes sense to speak of a target for tools like `gcc' or the `binutils' which actually produce running code. For example, it does not make sense to speak of the target of a tool like `bison' or `make'. 
Most cross compilation tools can also serve as native tools. For a native compilation tool, it is still meaningful to speak of a target. For a native tool, the target is the same as the host. For example, for a GNU/Linux native compiler, the host is GNU/Linux, and the target is also GNU/Linux. In almost all cases the host system is the system on which you run the `configure' script, and on which you build the tools (for the case when they differ, see section Canadian Cross). If your configure script needs to know the configuration name of the host system, and the package is not a cross compilation tool and therefore does not have a target, put `AC_CANONICAL_HOST' in `configure.in'. This macro will arrange to define a few shell variables when the `configure' script is run. The shell variables may be used by putting shell code in `configure.in'. For an example, see section Using Configuration Names. By default, the `configure' script will assume that the target is the same as the host. This is the more common case; for example, it leads to a native compiler rather than a cross compiler. If you want to build a cross compilation tool, you must specify the target explicitly by using the `--target' option when you run `configure'. The argument to `--target' is the configuration name of the system for which you wish to generate code. See section Configuration Names. For example, to build tools which generate code for a MIPS ELF embedded system, you would use `--target mips-elf'. When writing `configure.in' for a cross compilation tool, you will need to use information about the target. To do this, put `AC_CANONICAL_SYSTEM' in `configure.in'. `AC_CANONICAL_SYSTEM' will look for a `--target' option and canonicalize it using the `config.sub' shell script. It will also run `AC_CANONICAL_HOST' (see section Using the Host Type). The target type will be recorded in the following shell variables. 
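As an illustration of the macros just described, a minimal `configure.in` fragment for a hypothetical cross tool might look like the following. The package name, source file, and the `MYTOOL_TARGET_MIPS' define are placeholders invented for this sketch, not part of any real package:

```m4
dnl Hypothetical configure.in fragment for a cross compilation tool.
AC_INIT(mytool.c)
dnl Canonicalize the --host and --target options; this defines the
dnl shell variables `host' and `target' (among others) for use below.
AC_CANONICAL_SYSTEM
dnl Example of using the canonical target name in shell code.
case "${target}" in
mips-*-elf*)
  AC_DEFINE(MYTOOL_TARGET_MIPS)
  ;;
esac
AC_OUTPUT(Makefile)
```

With this in place, running `configure --target mips-elf' produces a cross tool, while running plain `configure' produces a native one.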
Note that the host versions of these variables will also be defined by `AC_CANONICAL_HOST'. Note that if `host' and `target' are the same string, you can assume a native configuration. If they are different, you can assume a cross configuration. It is arguably possible for `host' and `target' to represent the same system, but for the strings to not be identical. For example, if `config.guess' returns `sparc-sun-sunos4.1.4', and somebody configures with `--target sparc-sun-sunos4.1', then the slight differences between the two versions of SunOS may be unimportant for your tool. However, in the general case it can be quite difficult to determine whether the differences between two configuration names are significant or not. Therefore, by convention, if the user specifies a `--target' option without specifying a `--host' option, it is assumed that the user wants to configure a cross compilation tool. The variables `target' and `target_alias' should be handled differently. In general, whenever the user may actually see a string, `target_alias' should be used. This includes anything which may appear in the file system, such as a directory name or part of a tool name. It also includes any tool output, unless it is clearly labelled as the canonical target configuration name. This permits the user to use the `--target' option to specify how the tool will appear to the outside world. On the other hand, when checking for characteristics of the target system, `target' should be used. This is because a wide variety of `--target' options may map into the same canonical configuration name. You should not attempt to duplicate the canonicalization done by `config.sub' in your own code. By convention, cross tools are installed with a prefix of the argument used with the `--target' option, also known as `target_alias' (see section Using the Target Type). If the user does not use the `--target' option, and thus is building a native tool, no prefix is used. 
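The native-versus-cross test described above is just a string comparison of the two canonical names. A sketch in plain shell (the configuration names are examples, not the output of any actual `config.guess' run):

```shell
# Decide whether a configuration is native or cross by comparing the
# canonical host and target names. Real configure scripts obtain these
# strings from config.guess/config.sub; here they are passed in directly.
config_kind() {
  host="$1"
  target="$2"
  if [ "$host" = "$target" ]; then
    echo native
  else
    echo cross
  fi
}

config_kind i386-linux-gnu i386-linux-gnu   # native
config_kind i386-linux-gnu mips-elf         # cross
```

Note that, as the text explains, this simple test can misfire when two different strings name the same system (e.g. `sparc-sun-sunos4.1.4' vs. `sparc-sun-sunos4.1'), which is why the convention is to treat any explicit `--target' as a request for a cross tool.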
For example, if gcc is configured with `--target mips-elf', then the installed binary will be named `mips-elf-gcc'. If gcc is configured without a `--target' option, then the installed binary will be named `gcc'. The autoconf macro `AC_ARG_PROGRAM' will handle this for you. If you are using automake, no more need be done; the programs will automatically be installed with the correct prefixes. Otherwise, see the autoconf documentation for `AC_ARG_PROGRAM'. The Cygnus tree is used for various packages including gdb, the GNU binutils, and egcs. It is also, of course, used for Cygnus releases. In the Cygnus tree, the top level `configure' script uses the old Cygnus configure system, not autoconf. The top level `Makefile.in' is written to build packages based on what is in the source tree, and supports building a large number of tools in a single `configure'/`make' step. The Cygnus tree may be configured with a `--target' option. The `--target' option applies recursively to every subdirectory, and permits building an entire set of cross tools at once. The Cygnus tree distinguishes host libraries from target libraries. Host libraries are built with the compiler used to build the programs which run on the host, which is called the host compiler. This includes libraries such as `bfd' and `tcl'. These libraries are built with the host compiler, and are linked into programs like the binutils or gcc which run on the host. Target libraries are built with the target compiler. If gcc is present in the source tree, then the target compiler is the gcc that is built using the host compiler. Target libraries are libraries such as `newlib' and `libstdc++'. These libraries are not linked into the host programs, but are instead made available for use with programs built with the target compiler. For the rest of this section, assume that gcc is present in the source tree, so that it will be used to build the target libraries. There is a complication here. 
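The naming convention above can be sketched as a small shell function. This is only an illustration of the rule, not the actual `AC_ARG_PROGRAM' implementation (which also supports general name transformations); `mips-elf' is the example target from the text:

```shell
# Compute the installed name of a tool from the literal --target
# argument (target_alias). An empty target_alias means a native build,
# so no prefix is applied.
installed_name() {
  target_alias="$1"
  tool="$2"
  if [ -n "$target_alias" ]; then
    echo "${target_alias}-${tool}"
  else
    echo "$tool"
  fi
}

installed_name mips-elf gcc   # mips-elf-gcc
installed_name "" gcc         # gcc
```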
The configure process needs to know which compiler you are going to use to build a tool; otherwise, the feature tests will not work correctly. The Cygnus tree handles this by not configuring the target libraries until the target compiler is built. In order to permit everything to build using a single `configure'/`make', the configuration of the target libraries is actually triggered during the make step. When the target libraries are configured, the `--target' option is not used. Instead, the `--host' option is used with the argument of the `--target' option for the overall configuration. If no `--target' option was used for the overall configuration, the `--host' option will be passed with the output of the `config.guess' shell script. Any `--build' option is passed down unchanged. This translation of configuration options is done because the target libraries are compiled with the target compiler, and so are being built in order to run on the target of the overall configuration. By the definition of host, this means that their host system is the same as the target system of the overall configuration. The same process is used for both a native configuration and a cross configuration. Even when using a native configuration, the target libraries will be configured and built using the newly built compiler. This is particularly important for the C++ libraries, since there is no reason to assume that the C++ compiler used to build the host tools (if there even is one) uses the same ABI as the g++ compiler which will be used to build the target libraries. There is one difference between a native configuration and a cross configuration. In a native configuration, the target libraries are normally configured and built as siblings of the host tools. In a cross configuration, the target libraries are normally built in a subdirectory whose name is the argument to `--target'. This is mainly for historical reasons. 
To summarize, running `configure' in the Cygnus tree configures all the host libraries and tools, but does not configure any of the target libraries. Running `make' then does the following steps: The steps need not be done in precisely this order, since they are actually controlled by `Makefile' targets. There are a few things you must know in order to write a configure script for a target library. This is just a quick sketch, and beginners shouldn't worry if they don't follow everything here. The target libraries are configured and built using a newly built target compiler. There may not be any startup files or libraries for this target compiler. In fact, those files will probably be built as part of some target library, which naturally means that they will not exist when your target library is configured. This means that the configure script for a target library may not use any test which requires doing a link. This unfortunately includes many useful autoconf macros, such as `AC_CHECK_FUNCS'. autoconf macros which do a compile but not a link, such as `AC_CHECK_HEADERS', may be used. This is a severe restriction, but normally not a fatal one, as target libraries can often assume the presence of other target libraries, and thus know which functions will be available. As of this writing, the autoconf macro `AC_PROG_CC' does a link to make sure that the compiler works. This may fail in a target library, so target libraries must use a different set of macros to locate the compiler. See the `configure.in' file in a directory like `libiberty' or `libgloss' for an example. As noted in the previous section, target libraries are sometimes built in directories which are siblings to the host tools, and are sometimes built in a subdirectory. The `--with-target-subdir' configure option will be passed when the library is configured. Its value will be an empty string if the target library is a sibling. 
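To illustrate the link-test restriction described above, a target library's `configure.in' can use compile-only checks but must avoid link checks. This fragment is a sketch of the pattern, not taken from any actual library:

```m4
dnl Compile-only test: safe in a target library, because it does not
dnl require startup files or libraries that may not exist yet.
AC_CHECK_HEADERS(stdlib.h string.h)
dnl AC_CHECK_FUNCS(memmove) would do a link, so it must NOT be used
dnl here; instead, assume the presence of the other target libraries
dnl and the functions they are known to provide.
```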
Its value will be the name of the subdirectory if the target library is in a subdirectory. If the overall build is not a native build (i.e., the overall configure used the `--target' option), then the library will be configured with the `--with-cross-host' option. The value of this option will be the host system of the overall build. Recall that the host system of the library will be the target of the overall build. If the overall build is a native build, the `--with-cross-host' option will not be used. A library which can be built both standalone and as a target library may want to install itself into different directories depending upon the case. When built standalone, or when built native, the library should be installed in `$(libdir)'. When built as a target library which is not native, the library should be installed in `$(tooldir)/lib'. The `--with-cross-host' option may be used to distinguish these cases. This same test of `--with-cross-host' may be used to see whether it is OK to use link tests in the configure script. If the `--with-cross-host' option is not used, then the library is being built either standalone or native, and a link should work. The top level `Makefile' in the Cygnus tree defines targets for every known subdirectory. For every subdirectory dir which holds a host library or program, the `Makefile' target `all-dir' will build that library or program. There are dependencies among host tools. For example, building gcc requires first building gas, because the gcc build process invokes the target assembler. These dependencies are reflected in the top level `Makefile'. For every subdirectory dir which holds a target library, the `Makefile' target `configure-target-dir' will configure that library. The `Makefile' target `all-target-dir' will build that library. Every `configure-target-dir' target depends upon `all-gcc', since gcc, the target compiler, is required to configure the tool. 
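The install-directory choice just described might be sketched as follows. The directory values here are placeholders standing in for the `$(libdir)' and `$(tooldir)' make variables; a real configure script would take them from its substitutions:

```shell
# Pick the install directory for a library that may be built either
# standalone/native (install in libdir) or as a non-native target
# library (install in tooldir/lib). with_cross_host is empty unless
# the overall build was a cross build.
install_dir() {
  with_cross_host="$1"
  libdir="/usr/local/lib"          # placeholder for $(libdir)
  tooldir="/usr/local/mips-elf"    # placeholder for $(tooldir)
  if [ -n "$with_cross_host" ]; then
    echo "$tooldir/lib"
  else
    echo "$libdir"
  fi
}

install_dir i386-linux-gnu   # cross build: /usr/local/mips-elf/lib
install_dir ""               # standalone or native: /usr/local/lib
```

The same emptiness test on `--with-cross-host' also tells the script whether link tests are safe, as noted above.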
Every `all-target-dir' target depends upon the corresponding `configure-target-dir' target. There are several other targets which may be of interest for each directory: `install-dir', `clean-dir', and `check-dir'. There are also corresponding `target' versions of these for the target libraries, such as `install-target-dir'. The `libiberty' subdirectory is currently a special case, in that it is the only directory which is built both using the host compiler and using the target compiler. This is because the files in `libiberty' are used when building the host tools, and they are also incorporated into the `libstdc++' target library as support code. This duality does not pose any particular difficulties. It means that there are targets for both `all-libiberty' and `all-target-libiberty'. In a native configuration, when target libraries are not built in a subdirectory, the same objects are normally used as both the host build and the target build. This is normally OK, since libiberty contains only C code, and in a native configuration the results of the host compiler and the target compiler are normally interoperable. Irix 6 is again an exception here, since the SGI native compiler defaults to using the `O32' ABI, and gcc defaults to using the `N32' ABI. On Irix 6, the target libraries are built in a subdirectory even for a native configuration, avoiding this problem. There are currently no other libraries built for both the host and the target, but there is no conceptual problem with adding more.
From afar, the whole thing looks like an eagle. A closer look at the Eagle Nebula, however, shows the bright region is actually a window into the center of a larger dark shell of dust. Through this window, a brightly-lit workshop appears where a whole open cluster of stars is being formed. In this cavity, tall pillars and round globules of dark dust and cold molecular gas remain where stars are still forming. Already visible are several young bright blue stars whose light and winds are burning away and pushing back the remaining filaments and walls of gas and dust. The Eagle emission nebula, tagged M16, lies about 6500 light years away, spans about 20 light-years, and is visible with binoculars toward the constellation of the Serpent (Serpens). This picture combines three specific emitted colors and was taken with the 0.9-meter telescope on Kitt Peak.
Phytoremediation (a lecture) - The Principle

An overview of the principles and methods in phytoremediation.

- Use plants to "vacuum" heavy metals from the soil through their roots.
- Certain species have the ability to extract elements from the soil and concentrate them in the stems, shoots, and leaves.
- The unique plants must be able to tolerate and survive high levels of heavy metals in soils, such as zinc, cadmium, and nickel.
- These plants possess genes that regulate the amount of metals taken up from the soil by roots and deposited at other locations within the plant.
- Some contaminants are also changed into safer gases as the plant transpires.
<urn:uuid:58cb2c56-1cb1-4342-855d-64709fc50eff>
3.84375
191
Truncated
Science & Tech.
32.504394
The Soil Community
The living part of the soil is just as critical to plant growth as the physical soil structures. Soil microorganisms are the essential link between mineral reserves and plant growth. The cycles that permit nutrients to flow from soil to plant are all interdependent, and they work only with the help of the living organisms that constitute the soil community. Soil organisms, from bacteria and fungi to protozoans and nematodes, on up to mites, springtails and earthworms, perform a vast array of fertility-maintenance tasks. Organic soil management aims at helping soil organisms maintain fertility; conventional (non-organic) soil management merely substitutes a simplified chemical system to provide nutrients to plants. Once a healthy soil ecosystem is disrupted by the excessive use of soluble synthetic fertilizers, restoring it can be a long and costly process. In many cases, the excessive use of energy-intensive petroleum-based fertilizers and pesticides has destroyed the biological fertility of soil, so growers use ever-larger amounts of these materials to sustain crop growth. Like all living things, the creatures of the soil community need food, water, and air to carry on their activities. A basic diet of plenty of organic material, enough moisture, and well-aerated soil will keep their populations thriving. Soil creatures thrive on raw organic matter with a balanced ratio of carbon to nitrogen, about 25 to 30 parts carbon to 1 part nitrogen. Carbon, in the form of carbohydrates, is the main course for soil organisms. Given lots of it, they grow quickly, scavenging every scrap of nitrogen from the soil system to go with it. That's why adding lots of high-carbon materials to your soil can cause nitrogen deficiencies in plants. 
In the long term, carbon is the ultimate fuel for all soil biological activity and therefore for humus formation and productivity. A balanced supply of mineral nutrients is also essential for soil organisms, and micronutrients are important to the many bacterial enzymes involved in their biochemical transformations.
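The 25-30:1 carbon-to-nitrogen guideline lends itself to a quick calculation. A sketch in Python; the material names and their carbon/nitrogen fractions below are illustrative round numbers I have assumed, not figures from the text:

```python
# Estimate the overall C:N ratio of a mix of organic materials.
# The fractions below are illustrative round numbers, not measured values.
materials = {
    # name: (mass_kg, carbon_fraction, nitrogen_fraction)
    "straw":           (10.0, 0.40, 0.005),
    "grass_clippings": ( 5.0, 0.40, 0.020),
}

total_c = sum(mass * c for mass, c, n in materials.values())
total_n = sum(mass * n for mass, c, n in materials.values())
ratio = total_c / total_n

print(f"C:N ratio of mix = {ratio:.1f}:1")  # aim for roughly 25-30:1
```

A mix that comes out well above 30:1, like this one, would tie up soil nitrogen, which is exactly the deficiency the passage warns about.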
<urn:uuid:d32813c7-e993-4df5-ba37-6bb7e02dab7a>
3.90625
432
Knowledge Article
Science & Tech.
29.17659
Spin
In particle and atomic physics, there are two types of angular momentum: spin and orbital angular momentum.
1. Spin is a fundamental property of all elementary particles, and is present even if the particle is not moving.
2. Orbital angular momentum results from the motion of a particle. For example, an electron in an atom has orbital angular momentum, which results from the electron's motion about the nucleus, and spin angular momentum.
The total angular momentum of a particle is a combination of spin and orbital angular momentum. The existence of spin was suggested by the Dutch-born American physicists Samuel Abraham Goudsmit and George Eugene Uhlenbeck in 1925. The two physicists noted that certain features of atomic spectra could not be explained by the quantum theory in use at the time. By adding an additional quantum number, the spin of the electron, Goudsmit and Uhlenbeck were able to provide a more complete explanation of atomic spectra. Soon the idea of spin was extended to all sub-atomic particles, including protons, neutrons, and antiparticles. Groups of particles, such as an atomic nucleus, also have spin as a result of the spin of the protons and neutrons that make up the nucleus. Quantum theory prescribes that spin angular momentum can only occur in certain discrete values. These discrete values are described in terms of integer or half-integer multiples of the fundamental angular momentum unit h/2π, where h is Planck's constant. In general usage, stating that a particle has spin 1/2 means that its spin angular momentum is (1/2)(h/2π). Fermions, which include protons, neutrons, and electrons, have odd half-integer spin (1/2, 3/2, ...), while bosons, such as photons, alpha particles, and mesons, have integer spin (0, 1, 2, ...). Fermions obey the Pauli exclusion principle, while bosons do not.
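The quantization rule can be tabulated in a few lines. A sketch in units of ħ = h/2π; the magnitude formula √(s(s+1))·ħ is the standard quantum-mechanical result, stated here as background rather than taken from the text:

```python
import math

HBAR = 1.0  # h / (2*pi), set to 1 for readability

def spin_projections(s):
    """Allowed z-components of spin: m = -s, -s+1, ..., +s (in units of hbar)."""
    n = int(round(2 * s))            # 2s must be a non-negative integer
    return [(-s + k) * HBAR for k in range(n + 1)]

def spin_magnitude(s):
    """Magnitude of the spin angular momentum vector: sqrt(s(s+1)) * hbar."""
    return math.sqrt(s * (s + 1)) * HBAR

print(spin_projections(0.5))  # a fermion such as the electron: [-0.5, 0.5]
print(spin_projections(1))    # a boson such as the photon: [-1.0, 0.0, 1.0]
print(spin_magnitude(0.5))    # total spin magnitude for s = 1/2
```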
<urn:uuid:8af61329-3889-4c48-a6ed-1ef9abc47d9c>
4.21875
420
Knowledge Article
Science & Tech.
37.279026
Hi Guys, Was asked this question by my lecturer (& was left speechless). Here's the question... SUGGEST A WAY BY WHICH POINTERS CAN BE IMPLEMENTED IN Java...!! The question expects a suggestion that has a mechanism that shows pointers being implemented logically in Java code... I have heard that it can be done using interfaces, not sure if it's true though...!! Can anyone suggest the solution...??

If your new Big Idea doesn't scare the hell out of you, it's probably not a "new Big Idea".

Your teacher is asking you to read his/her mind to determine his/her definition of "pointer", which almost certainly does not align with any of the 'de facto' definitions, since I can't think of any reasonable one, even one that I don't entirely agree with, that would allow your question to be correctly answered. Good luck, I recommend the Stealth Max V2 Crystal Ball "rub and be enlightened". I use it at work quite successfully.

All variables that refer to objects in Java are pointers. We generally like to say "references" instead, but they're pointers just the same. The only difference between Java pointers and C pointers is that you can't do "pointer arithmetic" in Java -- you can't subtract pointers, or convert to and from integers. But a Java variable points to an object in just the same way that a C pointer does; it can be reassigned to point to another object, or set to null (no object). So if you know what a linked list is, and how to implement one in C/C++, then just do it the same way in Java.
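The linked-list suggestion in the last reply can be sketched directly. A minimal illustration (the class and field names are my own, not from the thread):

```java
// A singly linked list built with Java references, exactly as one would
// build it with pointers in C -- minus the pointer arithmetic.
public class IntList {
    static class Node {
        int value;
        Node next;              // a "pointer" to the next node, or null
        Node(int value) { this.value = value; }
    }

    Node head;                  // null means the empty list

    void push(int value) {      // insert at the front
        Node n = new Node(value);
        n.next = head;          // re-aim the reference; no node is copied
        head = n;
    }

    int sum() {
        int total = 0;
        for (Node p = head; p != null; p = p.next) { // walk the chain
            total += p.value;
        }
        return total;
    }

    public static void main(String[] args) {
        IntList list = new IntList();
        list.push(1);
        list.push(2);
        list.push(3);
        System.out.println(list.sum()); // prints 6
    }
}
```

Every `Node next` and `Node head` here is a reference that is re-pointed at objects, which is the behavior the answer describes.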
<urn:uuid:46406f4c-6269-490d-b2f1-70896c6c3c48>
2.96875
356
Comment Section
Software Dev.
72.355233
Global warming appears to have stalled. Climatologists are puzzled as to why average global temperatures have stopped rising over the last 10 years. Some attribute the trend to a lack of sunspots, while others explain it through ocean currents. By Gerald Traufetter At least the weather in Copenhagen is likely to be cooperating. The Danish Meteorological Institute predicts that temperatures in December, when the city will host the United Nations Climate Change Conference, will be one degree above the long-term average. Otherwise, however, not much is happening with global warming at the moment. The Earth’s average temperatures have stopped climbing since the beginning of the millennium, and it even looks as though global warming could come to a standstill this year. Ironically, climate change appears to have stalled in the run-up to the upcoming world summit in the Danish capital, where thousands of politicians, bureaucrats, scientists, business leaders and environmental activists plan to negotiate a reduction in greenhouse gas emissions. Billions of euros are at stake in the negotiations.
<urn:uuid:37751b4a-da58-4637-ab33-826a85444ae3>
3.15625
209
Truncated
Science & Tech.
26.59885
Operators In PHP

This article is the second in a series of PHP guides that aim at teaching you the basics of PHP programming. I hope you have been practising since the last lecture (PHP Programming Basics), as I am getting feedback from students in different countries, and I am very happy to be able to contribute to PHP through my series of articles for beginners and professionals. Today we are going to discuss the different types of operators used in PHP. I hope you remember the basic definitions of operators and operands from my last article (PHP Programming Basics); I am not going to explain them again, as we promised in the very first article of this series that we will not look behind. This is just to compel you to concentrate on each and every article of this series and practise (O! I love practice). PHP offers the following operators to perform different operations on operands.

Arithmetic operators enable you to perform different mathematical operations on different values.
Below are the quick details of the arithmetic operators:

- Addition: denoted by the plus sign ( + ), it is used to add two values, e.g. 2 + 3 = 5. You can add two variables in the same way.
- Subtraction: denoted by the minus sign ( - ), it is used to subtract the second value from the first, e.g. 2 - 1 = 1.
- Multiplication: denoted by the asterisk sign ( * ), it is used to multiply two values, e.g. 2 * 3 = 6.
- Division: denoted by the forward slash sign ( / ), it is used to divide the first value by the second, e.g. 2 / 2 = 1.
- Modulus (division remainder): denoted by the percentage sign ( % ), it gives the division remainder, e.g. 3 % 2 = 1.
- Increment: denoted by the double-plus sign ( ++ ), it is used to increase the value of a variable by one. Remember there are two types of incrementation:
  a) Prefix increment, in which the increment operator comes before the variable name, i.e. ++$var. The variable's value is incremented and is available for immediate use.
  b) Postfix increment ($var++), in which the increment operation is performed but the resultant value is not available for immediate use; it can be used only when the control accesses the variable in the next run.
- Decrement: denoted by the double-minus sign ( -- ). It is very similar to the increment operator; however, it decreases the value by one.

Assignment operators are used to assign new values to a variable. In the examples in this series, I am assuming the basic values of $var1 and $var2 to be 3 and 5 respectively (you can print $var1 after each step to see its value). Comparison operators are used to compare two values.
We can use the following comparison operators in PHP: ( == ) equal to, ( != ) not equal to, ( < ) less than, ( > ) greater than, ( <= ) less than or equal to, and ( >= ) greater than or equal to. Just learn these operators by heart; we will use them in our upcoming articles and you will then be able to understand their exact use.

We have three logical operators in PHP:

AND ( && ) operator: you can use both 'AND' and the double-ampersand sign ( && ) for this operator in PHP. The basic logical use of this operator in rough pseudocode could be: if the first condition is true && the second condition is also true.

OR ( || ) operator: you can use both 'OR' and the double-pipe sign ( || ) for this operator in PHP. The basic logical use of this operator in rough pseudocode could be: either the first condition is true || the second condition is true.

NOT ( ! ) operator: you can use both 'NOT' and the exclamation mark ( ! ) for this operator in PHP. The basic logical use of this operator in rough pseudocode could be: the first value is NOT equal to the second value.

Again, do not confuse the use of these operators; just learn them by heart. I will use all these operators in my upcoming articles for more complex PHP programs, and then all your doubts will hopefully be removed. This is one of the most important articles of this PHP tutorial series. Please concentrate hard on this article and try to understand all the issues discussed today, and feel free to ask if you have any questions. Remember, if you cannot understand these foundation lectures of PHP it will be very hard for you to survive when we 'BURN IT', as I promised that I will take you to extreme PHP programming, and the only rule of our game is "Don't look behind", so you will not be able to go back to these articles again. Learn these things now; this article will make a strong foundation for complex programming. Let's end this lecture here.
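The operators discussed above can be exercised in a short script. A sketch (the variable names are mine; $var1 = 3 and $var2 = 5 as assumed in the assignment-operator examples):

```php
<?php
$var1 = 3;
$var2 = 5;

// Arithmetic
echo $var1 + $var2;   // 8
echo $var2 % $var1;   // 2

// Increment: prefix vs postfix
$a = 3;
echo ++$a;            // 4 -- incremented before use
$b = 3;
echo $b++;            // 3 -- old value used; $b is 4 afterwards

// Comparison and logical
var_dump($var1 == $var2);              // bool(false)
var_dump($var1 < $var2 && $var2 > 0);  // bool(true)
var_dump(!($var1 == 3));               // bool(false)
?>
```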
<urn:uuid:85793f71-4e8a-48f1-89b9-dbdcca23e23e>
3.796875
1,106
Tutorial
Software Dev.
41.080127
Lugo should have put the steps in reverse order, or at least mentioned that each step is reversible. He has reversible steps in each of his examples. This corresponds to each statement being equivalent to the next. This is logically the same as saying that if A, B, C, D are statements then we have A <=> B <=> C <=> D. This being the case, we can start with A and get D, and we can also start with D and get A. But if we want A => D then, to avoid being misleading to others, we should start with A and get D. If one of these <=> breaks down, that is, only goes one way, say B <= C, then we can only derive D => A but not A => D.

Bob Bundy, the example you gave of 1=2 thus 2=1 thus 3=3 is a great example. The first conclusion, that 2=1, is F following from F. The second conclusion, 3=3, is T following from F. So it is clear that from a False statement we can logically derive either True or False statements. This follows from the truth table for implication (=>):

p | q | p => q
--+---+-------
T | T |   T
T | F |   F
F | T |   T
F | F |   T

The implication being True corresponds to a VALID argument; the implication being False corresponds to an INVALID argument. Starting with a T statement, the only thing that can be validly produced is other T statements, since the second line indicates that starting with a T statement and arriving at an F statement means that the implication (argument or step) is invalid. Starting with an F statement we can validly arrive at True or False statements, as your example and the last two lines of the table show.

A variation of this problem is OFTEN seen in trigonometry. Students trying to prove that A <=> F come up with A => B => C and F => E => C and try to conclude that A <=> F is true because C <=> C is true. IF they had A => B => C => E => F they could conclude that A => F, BUT they DON'T have C => E and E => F. Instead they have E => C and F => E, the reverse implications (converses). Hence they can't get from A to F validly. Again, IF ALL the implications involved were DOUBLE IMPLICATIONS (<=>) then all would be well.
They could go from A to F and also from F to A validly. Pedantic? Perhaps so, but then I am also a teacher and have seen this logical mistake over and over when teaching trig. A little bit of logic goes a long way in understanding mathematics. Just knowing the logic behind direct and indirect proofs and knowing how to negate statements properly (we start ALL indirect proofs with the negation of the statement to be proven) is critical to being comfortable with doing proofs in mathematics (and elsewhere). Why does the method of indirect proof work? Because if we want to prove P to be true indirectly, we start with the statement ~P (not P) and validly arrive at a false statement; then the statement ~P must be false (according to the second line of the truth table above), and hence P, which is equivalent to ~~P, must be true. Indirect proofs are quite nifty since they often allow us to start with more information than a direct proof would, and then what we must reach is "relaxed" in the sense that all we have to come up with is ANY false statement (contradiction). Indirect proofs are often much easier than direct proofs and at times are the only known proofs for some theorems in mathematics. Example: To do a direct proof of p => q we can assume p and then try to validly conclude q. Or we can assume ~q and try to validly conclude ~p. But for the indirect proof we assume the negation of p => q, which is p and ~q, which gives us TWO bits of info to work with. Then all we have to do is come up with ANY false statement that we can. When we validly arrive at a false statement, the proof is immediately finished. There are only seven simple rules for negations of statements, and they are TOTALLY INDEPENDENT of the CONTENT OR MEANING of the statements. They are simply rules of symbolic logic:
1) ~(p and q) is ~p or ~q
2) ~(p or q) is ~p and ~q
3) ~(p => q) is p and ~q
4) ~(p <=> q) is either p <=> ~q or ~p <=> q (your choice)
5) ~~p is equivalent to p
6) Change "for every" to "there exists" (when required) *
7) Change "there exists" to "for every" (when required) *

* If a "there exists" or a "for every" statement is in the hypothesis of an implication then it is not to be negated, since the hypothesis p of the implication p => q is not negated in arriving at the negation p and ~q. Likewise, in negating a double implication you may or may not have to negate, depending on which of the two options in 4) you choose.

As you might surmise, I am a great believer in teaching students a little bit of logic. If they continue in mathematics it could be a big boon to their understanding of proof and disproof. Maybe my middle name should be "verbose."

Oops! I got the T's and F's mixed up in the paragraph beginning with "Bob Bundy," so I hope this edit avoids any confusion.
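The truth table and the first three negation rules can be checked mechanically. A sketch in Python (the function name `implies` is my own):

```python
from itertools import product

def implies(p, q):
    """Material implication: p => q is false only when p is true and q is false."""
    return (not p) or q

# Print the truth table for implication
print("p     q     p => q")
for p, q in product([True, False], repeat=2):
    print(f"{p!s:5} {q!s:5} {implies(p, q)}")

# Verify negation rules 1-3 on every assignment of truth values
for p, q in product([True, False], repeat=2):
    assert (not (p and q)) == ((not p) or (not q))  # rule 1: De Morgan
    assert (not (p or q)) == ((not p) and (not q))  # rule 2: De Morgan
    assert (not implies(p, q)) == (p and (not q))   # rule 3
```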
<urn:uuid:834bfdbd-3c1c-4e76-a45f-e5655c4d4cab>
3.84375
1,239
Comment Section
Science & Tech.
70.30697
Please PLEASE read the following and solve this problem: I am basically doing a frustum of a cone. What I need to do in the question is rearrange the volume formula for a cone to make h the subject and substitute this h into the surface area formula for a cone, if you get what I mean!

Now the problem is this: I made a diagram of a truncated cone and put some dotted lines to make it into a whole cone. I took the dotted (removed) part to be 1/3 of the radius and height. So when I did the volume expression it came out something like this:

V = 1/3*pi*r^2*h - 1/3*pi*(r^2/9)*(h/3) ........................(i)

When you simplify the equation it comes out something like this:

h = 81V/(26*pi*r^2)

I had to fix the volume at 600, so:

h = 1869/(pi*r^2)

Now this is where the problem comes. When I substitute this equation into the surface area, as I said I had to:

S.A. = pi*r*s, where s = square root of (h^2 + r^2)

S.A. = pi*r * square root of (1869/(pi*r^2) + r^2)

Well here it is: if I have already subtracted 1/3 of the height and radius in the volume formula in equation (i), then do I need to divide the remaining r^2 by 3? Or have I already accounted for the r and h in equation (i)? That is, should it be:

S.A. = pi*(r/3) * square root of (1869/(pi*r^2) + r^2/3)

If you get what I mean? Please try to quickly explain this, as I need to hand this in by tomorrow!
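The rearrangement in the question can be checked numerically. A sketch assuming, as the question does, that the removed top cone has 1/3 the radius and height of the full cone and that V is fixed at 600 (the test radius r = 4 is arbitrary):

```python
import math

V = 600.0  # fixed frustum volume, as in the question

def frustum_volume(r, h):
    """Full cone minus the small top cone with radius r/3 and height h/3."""
    full = (1 / 3) * math.pi * r**2 * h
    small = (1 / 3) * math.pi * (r / 3)**2 * (h / 3)
    return full - small          # algebraically (26/81) * pi * r^2 * h

def height_for_volume(r, V=V):
    """The rearrangement from the question: h = 81V / (26*pi*r^2)."""
    return 81 * V / (26 * math.pi * r**2)

r = 4.0
h = height_for_volume(r)
# Plugging h back into (i) recovers V, so the rearrangement is consistent.
assert abs(frustum_volume(r, h) - V) < 1e-9
print(h)
```

Note that 81 * 600 / 26 ≈ 1869.2, which matches the constant in the question.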
<urn:uuid:0faa3236-3a88-4192-a961-3db6ae77f06f>
3.546875
399
Q&A Forum
Science & Tech.
102.645668
USING a mirror controlled by a single atom, physicists in California are on the brink of creating a photon molecule. Photons do not normally interact with one another in the way that atoms in a molecule do. But Joseph Jacobson and his colleagues at Stanford University are confident that the device they are constructing, dubbed a quantum beam splitter, should make photons bind together (Physical Review Letters, vol 74, p 4835). When particles are fired at a beam splitter it lets through a proportion of them and reflects the rest. When a molecule encounters a beam splitter, it is transmitted or reflected as a whole; the strength of the bonds between its atoms sees to it that the molecule does not break up. Jacobson and his team have now worked out how to make a group of photons behave the same way. Conventional photon beam splitters, such as half-silvered mirrors, ...
<urn:uuid:f306e5f5-6458-4b5f-9fbc-f388bc3a943a>
3.734375
210
Truncated
Science & Tech.
51.045837
Miragaia, named for the area in which it was found (Miragaia, Portugal), is a genus of stegosaurid dinosaur from the Late Jurassic Period (approximately 150 million years ago). A fairly complete half skeleton and partial skull were discovered. It was described by Octávio Mateus and his colleagues in 2009. The type species is M. longicollum. Fragments from a juvenile individual were also found in the same bone bed as the type specimen. Miragaia is notable for its long neck, which included at least 17 vertebrae. Miragaia had more neck vertebrae than most sauropods, the dinosaurs best known for their long necks; only three sauropods (Euhelopus, Mamenchisaurus, and Omeisaurus) have as many neck vertebrae as Miragaia, and most possess only between 12 and 15. The traditional view of stegosaurids is that they were low browsers with short necks. The purpose of the longer neck has been debated: scientists believe it either allowed the animal to browse at higher levels that other herbivores were not utilizing, or arose through sexual selection.
<urn:uuid:22d5aa76-04ea-4a48-8af2-ebeab210f5aa>
4.0625
257
Knowledge Article
Science & Tech.
41.198007
The Java DateFormat class is used to format dates and times in Java applications. You can use the DateFormat class while generating a report to format the date/time in the required format. To format a date for the current Locale, use one of the static factory methods:

String myString = DateFormat.getDateInstance().format(myDate);

DateFormat provides many class methods for obtaining default date/time formatters based on the default or a given locale and a number of formatting styles. The formatting styles include FULL, LONG, MEDIUM, and SHORT.
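The factory methods and styles described above can be exercised in a short program (the output varies with the current date, default locale, and time zone):

```java
import java.text.DateFormat;
import java.util.Date;
import java.util.Locale;

public class DateFormatDemo {
    public static void main(String[] args) {
        Date now = new Date();

        // Default style for the default locale
        System.out.println(DateFormat.getDateInstance().format(now));

        // Explicit style and explicit locale
        System.out.println(
            DateFormat.getDateInstance(DateFormat.SHORT, Locale.US).format(now));

        // Combined date and time, mixing two styles
        System.out.println(
            DateFormat.getDateTimeInstance(DateFormat.FULL, DateFormat.SHORT,
                                           Locale.US).format(now));
    }
}
```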
<urn:uuid:22416ddb-c76f-4e44-9c95-e078790442a0>
3.015625
186
Documentation
Software Dev.
36.67
Aug. 17, 2012 The forests of the coastal regions from California to British Columbia are renowned for their unique and ancient animals and plants, such as coast redwoods, tailed frogs, mountain beavers and the legendary Bigfoot (also known as Sasquatch). Whereas Bigfoot is probably just fiction, a huge, newly discovered spider is very real. Trogloraptor (or "cave robber") is named for its cave home and spectacular, elongate claws. It is a spider so evolutionarily special that it represents not only a new genus and species, but also a new family (Trogloraptoridae). Even for the species-rich insects and arachnids, to discover a new, previously unknown family is an historic moment. A team of citizen scientists from the Western Cave Conservancy and arachnologists from the California Academy of Sciences found these spiders living in caves in southwest Oregon. Colleagues from San Diego State University found more in old-growth redwood forests. Charles Griswold, Curator of Arachnology, Joel Ledford, postdoctoral researcher, and Tracy Audisio, graduate student, all at the California Academy of Sciences, collected, analyzed, and described the new family. Audisio's participation was supported by the Harriet Exline Frizzell Memorial Fund and by the Summer Systematics Institute at the Academy, which is funded by the National Science Foundation. Trogloraptor hangs beneath rudimentary webs on cave ceilings. It is about four centimeters wide when its legs are extended -- larger than the size of a half-dollar coin. Their extraordinary, raptorial claws suggest that they are fierce, specialized predators, but their prey and attack behavior remain unknown. The anatomy of Trogloraptor forces arachnologists to revise their understanding of spider evolution. Strong evidence suggests that Trogloraptor is a close relative of goblin spiders, but Trogloraptor possesses a mosaic of ancient, widespread features and evolutionary novelties. 
The true distribution of Trogloraptor remains unknown: that such a relatively large, peculiar animal could elude discovery until 2012 suggests that more may be lurking in the forests and caves of western North America.
Reference: Charles Griswold, Tracy Audisio, Joel Ledford. An extraordinary new family of spiders from caves in the Pacific Northwest (Araneae, Trogloraptoridae, new family). ZooKeys, 2012; 215: 77. DOI: 10.3897/zookeys.215.3547
<urn:uuid:cce9294d-a922-424d-9fe5-cbcf8bb9897f>
3.578125
561
Knowledge Article
Science & Tech.
28.548726
A La Nina Like No Other, Or Just A Big One?
for NASA Global Climate Change Team
Pasadena CA (JPL) Feb 11, 2011

1. What is La Nina and why does it matter?

La Nina, "little girl" in Spanish, is the cool part of a naturally-occurring climate cycle called the El Nino/Southern Oscillation. El Nino is the warm part at the other end of that cycle. These shifts are governed, like much of the climate on the planet, by the relationship between winds and ocean surface temperatures. When the trade winds, blowing from east to west across the Pacific, are strong, equatorial waters are very cool, signaling the arrival of La Nina. When these winds falter, ocean surface temperatures rise and signal the arrival of the warm sibling, El Nino ("Christ Child" in Spanish). These warm and cool pools expand and linger across much of the tropics for many months, causing dramatic shifts in worldwide temperature and rainfall patterns over both the oceans and continents. These shifts happen every five to seven years and have been around for centuries.

2. This year's La Nina has wreaked havoc around the world: floods in Australia, drought in east Africa and South America, and landslides in Brazil. Why has it been so severe?

When devastating and deadly events in Brazil, Australia, Colombia and Pakistan make headlines, as we have seen in recent months, some look to climate change or even some "2012-movie-style" rare alignment of the planets. These explanations are quick and easy, ignore the obvious and neglect the facts. The Australian floods were not record-breaking. Enhanced La Nina rainfall (which normally brings wetter conditions to northeastern Australia) and Tropical Cyclones Tasha and Yasi combined to give the region the worst floods it has seen since 1973. In Brazil, a lack of responsible planning across much of the country has led to deforestation and construction of entire cities at high risk for flooding and mudslides.
Many of the consequences of these heavy rain events are due to where we live. La Nina can bring heavy rains, but exploding populations in high-risk regions have made the natural events, like La Nina, more costly and deadly. Floods, droughts, hurricanes and other natural events are to be expected. They are part of the history of every country. Each is unique and, at some level, can be anticipated. Better urban, suburban and agricultural planning will make them less punishing. 3. Is there any connection between this year's La Nina and climate change? Climate change is real and there have been some subtle changes in precipitation patterns over recent decades. But not enough to explain these horrific events ... yet. Eventually, global warming will have a massive impact on global and regional temperature and rainfall patterns. This is scary because our civilization is built for today's climate, not for new, unknown shifts in climate. As societies look ahead into the new century, we will need to prepare for these changes ... to plan more responsibly, for now and the future. 4. Is it unusual to have a La Nina that's accompanied by so much rain? Past La Ninas have been wetter. La Nina and her sibling, El Nino, definitely shift all the weather on the planet. The consequences might be record-breaking, but the rainfall amounts should have been anticipated. These shifts from dry to wet, and back to wet to dry, are well documented in the historical climate record. 5. How good are we at predicting La Nina/El Nino weather patterns? And what might we expect as we head into the rest of 2011? La Nina often (though not always) follows El Nino. Last winter's El Nino was a heads-up that today's La Nina would probably follow. Climatologists knew this, and regional planners should have prepared. What can we expect this year? Continuing La Nina rains in some regions and drought elsewhere. Don't forget, droughts can have huge regional impacts, as we have already seen. 
Also, it's important to remember that there is more going on than just El Nino and La Nina. For the past two winters, frigid blasts out of the Arctic have chilled and snowed in much of Europe and the United States. These conditions are due to a very active "Arctic Oscillation" atmospheric pressure pattern that swooped out of the far north. This "Polar Express" has overwhelmed the La Nina impacts forecasted for North America. Until this frigid Arctic visitation calms down, La Nina definitely has second billing. Stay tuned!
<urn:uuid:8ce62ed4-a8e5-4c5a-9135-90370cb3cdf8>
3.4375
1,161
Nonfiction Writing
Science & Tech.
47.098785
A Module is a collection of methods and constants. The methods in a module may be instance methods or module methods. Instance methods appear as methods in a class when the module is included; module methods do not. Conversely, module methods may be called without creating an encapsulating object, while instance methods may not. (See Module#module_function.)

In the descriptions that follow, the parameter symbol refers to a symbol, which is either a quoted string or a Symbol (such as :name).

  module Mod
    include Math
    CONST = 1
    def meth
      # ...
    end
  end
  Mod.class              #=> Module
  Mod.constants          #=> ["E", "PI", "CONST"]
  Mod.instance_methods   #=> ["meth"]
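The instance-method versus module-method distinction can be demonstrated in a few lines. A sketch (the Speaker module and Robot class are my own illustration, not from the reference):

```ruby
# A module mixing an instance method with a module method.
module Speaker
  # Instance method: appears in classes that include the module.
  def say(msg)
    "says #{msg}"
  end

  # Module method: called on the module itself, without any object.
  def self.volume
    11
  end
end

class Robot
  include Speaker
end

puts Robot.new.say("hi")                      # the mixed-in instance method
puts Speaker.volume                           # the module method
puts Robot.instance_methods.include?(:say)    # true  -- instance methods transfer
puts Robot.instance_methods.include?(:volume) # false -- module methods do not
```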
<urn:uuid:7673106c-4d66-41b0-84a3-1881c543a4e1>
3.25
151
Documentation
Software Dev.
52.276123
function Trim(const S: string): string;

Returns a string containing a copy of a specified string without both leading and trailing spaces and non-printing control characters.

  var s : string;
  s := ' Delphi ';
  s := Trim(s);
<urn:uuid:f9dbec0a-243b-458a-b9cc-001fb058b6bb>
2.796875
140
Documentation
Software Dev.
52.296292
A quick ecology quiz: Is there more life in cold waters or warm waters? Our journey has provided some wonderful empirical evidence on this question. When we set out from Cape Town, water temperatures in the Atlantic hovered around 50 degrees Fahrenheit. When we reached Richards Bay in northeastern South Africa, the Indian Ocean clocked in somewhere in the 70s. And as we crossed the Tropic of Capricorn off the coast of Mozambique, the mercury was up around 80 degrees. So, where did we see the most marine life? Around Cape Town. The ocean is cold there, because it is fed by upwellings from the deep ocean. That deep water is cold, but it is also full of nutrients. As a result, the water around Cape Town is rich with sea birds, Penguins, Fur Seals, and our friend the Great White Shark. The waters around Richards Bay, on the other hand, appeared virtually devoid of life. The jetties at Cape Town were crowded with cormorants, gulls, and terns. The jetties at Richards Bay were empty. This pattern is not special to South Africa. The richest marine environments around the world are concentrated in places with cold, nutrient-rich waters -- places like the Arctic, the Antarctic, Georges Bank, the Galapagos, the southern Sea of Cortez, etc. Happily, warm water doesn't have to mean no marine life. As we moved further into the tropics off Mozambique, we began to see more critters. Flocks of sooty terns diving on shoals of bait driven to the surface by predatory fish. A fairly rare pod of Melon-headed Whales (photos). And several pods of Transtropical Spotted Dolphins and Spinner Dolphins. There's still not as much biomass as down near the Cape, but there's enough to keep things interesting.
<urn:uuid:6a45f0e5-86b8-409b-8813-340d2e8cb495>
3.359375
393
Personal Blog
Science & Tech.
59.054685
Nature's Bank Account

Money doesn’t grow on trees, so how do we put a monetary value on nature?

BY STEPHEN POLASKY

The best things in life are free—an evening with friends, a summer day at the lake, hiking in the forest on a crisp autumn morning. But just because these things are free doesn’t mean we can take them for granted. Maintaining relationships with people and maintaining the environment both require thoughtful action and investment. If we want these “best things” to remain, we need to focus on what builds our communities and nourishes our environment at the same time.

In the past few centuries, humans have dramatically transformed the planet, in good ways and bad. Our quality of life has improved with increases in food production, as well as better health care and education. Yet, humanity has not invested sufficiently to maintain environmental quality. Deforestation, expanding deserts, emergence of dead zones in coastal waters, loss of biodiversity and climate change are just a few consequences of our collective failure to properly care for the environment. And either we, or our descendants, will pay the price.

Environmental degradation causes harm to people by damaging health, reducing productivity and jobs (as seen with the collapse of fisheries), and creating the potential for large-scale disruptions from climate change. Some of these damages are hard to quantify, such as reduced quality of life when local lake water becomes clouded with algae, or when a favorite natural area is developed. Even so, these values are real and vital.

Most prices we pay for goods and services do not reflect the full impacts of our production or consumption choices on the environment. Before we can see what fundamental changes are needed to fully sustain the environment, we must begin to incorporate the value of nature in our economic and political decision making. This presents three complex, albeit surmountable, challenges.
First, we must recognize that we don’t always know the environmental costs of our actions. For example, chlorofluorocarbons were promoted as a cheap and effective chemical for refrigeration and a propellant for aerosol cans. Not until 40 years after their discovery were CFCs linked to destroying the ozone layer that shields Earth from ultraviolet radiation. Second, we must translate our actions—and the intended or unintended results—into environmental values we can compare with other values, such as increased income or jobs. Economists have already made great progress in valuing certain environmental benefits. For example, we can infer the value of nature to homeowners by analyzing how housing prices increase with access to lakes, a scenic view or other environmental amenities. And third, we must bring the values of nature to bear in our decision-making processes. Innovative public policies, such as incentive-based regulations, and private initiatives, such as environmental certification, harness market forces for environmental protection, showing it is possible to protect the environment—even in a rising economy. In spite of the challenges, we’ve already seen how progress on both the environment and the economy can be made. Following the passage of the Clean Air Act in 1970, emissions of major air pollutants in the United States were cut in half by 2005, while the economy nearly tripled in size. The Clean Air Act made reducing air pollution a priority, helping to usher in new technology and smarter policies to improve the environment within a growing economy. Our understanding of the links between human actions and environmental impacts has improved rapidly in recent years. What we need now is to account for a broader range of nature’s goods and services in our daily choices. By accounting for the natural world, we can preserve the best of both worlds: a better environment and a better quality of life. 
STEPHEN POLASKY is a professor of ecology and environmental economics at the University of Minnesota and a resident fellow of the Institute on the Environment. - © 2012 Regents of the University of Minnesota. All rights reserved. The University of Minnesota is an equal opportunity educator and employer Last modified on January 23, 2012
<urn:uuid:862dd71e-8fcc-4786-9cfe-6d67bbd76318>
3.125
824
Nonfiction Writing
Science & Tech.
30.726976
Every warm body emits electromagnetic radiation. In general, the warmer the emitter, the shorter the wavelength of the radiation. The Sun, with its photosphere at 6000 K, emits in a short-wavelength band spanning the ultraviolet to the visible. Since the Earth's atmosphere is largely transparent to solar radiation, the Sun's rays warm the surface of the Earth, which in turn emits in the longer-wavelength infrared band. The atmosphere contains gases that absorb and re-emit radiation, thereby pushing the Earth's equilibrium temperature to a warmer value than it would have without an atmosphere. This phenomenon, somewhat inaccurately called the greenhouse effect, depends upon the concentration of water vapor, carbon dioxide, methane and other gases in the atmosphere.
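The "warmer emitter, shorter wavelength" rule is Wien's displacement law: the peak emission wavelength is inversely proportional to temperature. A quick sketch, using the round 6000 K figure from above and a typical ~288 K Earth surface temperature (an assumed value, not from the text):

```python
# Wien's displacement law: lambda_peak = b / T, with b ~ 2.898e-3 m*K
WIEN_B = 2.898e-3  # Wien displacement constant, metre-kelvins

def peak_wavelength_um(temp_kelvin: float) -> float:
    """Peak blackbody emission wavelength, in micrometres."""
    return WIEN_B / temp_kelvin * 1e6

print(peak_wavelength_um(6000))  # Sun's photosphere: ~0.48 um (visible)
print(peak_wavelength_um(288))   # Earth's surface:   ~10 um (infrared)
```

The two outputs land squarely in the visible and infrared bands respectively, matching the paragraph's description.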
<urn:uuid:3e30cb10-4d17-422c-ac17-5d6b14c1de76>
4.03125
144
Knowledge Article
Science & Tech.
28.615
Recently, my wife and I went and saw Avatar in 3D at the local cinema. They used RealD 3D glasses that look just like sunglasses. Rather than dropping ours in the recycling bin after the movie, I took them back to the lab to play with, because they have some interesting optical properties. Here’s a view through one of the lenses of the big LCD display in the lab: At first glance, it would seem absurd to have light behave so differently when going forwards or backwards through the glasses. However, there is an interesting explanation for why this occurs. The way we are used to experiencing light interacting with an object is typically symmetrical. If I can see you in a mirror, then you can see me through that same mirror. However with the RealD glasses, LCD light can pass through relatively freely in one direction but is blocked in the other direction. This effect is only seen when looking at LCD displays, not at other lights, which is a big hint as to what’s going on. Further investigation shows that the brightness when going forwards varies somewhat as you rotate the glasses, and when going backwards it goes from totally black to somewhat visible also when rotating the glasses. So, what’s up? First, it may be helpful to know a little bit about how our eyes work. The reason that we can experience depth (i.e. 3D) in the real world is because we have more than one optical input device in our head. Each eye gives a slightly different perspective of the image in front of us. Try it out: close your left eye and look out from your right; now close your right and look from your left. The image shifts ever so slightly. When looking through both eyes, adjustments are required in order to focus on a single point. We then associate these adjustments with the relative distance of various objects. Eyes angled inward with tight focus means an object is closer, while eyes looking straight out with a loose focus means the object is further away. 
Normal 2D movies don’t require these eye adjustments because they were shot with a camera that has only one optical sensor. 3D movies use a special process to simulate depth by requiring these eye adjustments. To do this, we have to receive two different optical signals from a single movie screen. This particular 3D projection system uses circularly polarized light in order to achieve this. That is quite a mouthful, so let me break it down. As light is emitted from a source, it moves in a somewhat random fashion. Polarized light, on the other hand, has been lined up in some way. It could be compared to the difference between kids running around chaotically on a playground and kids walking in an orderly straight line. This organization can take different patterns. If polarization is circular, then light takes on a small circular pattern of movement as it travels through space. Similar to a rotating propeller, it spins as it moves forward. The projector uses this circularly polarized light to send alternating images, one for each eye, and the viewer wears something that allows only the appropriate images to enter the appropriate eye. The left lens allows certain images through and the right lens allows others. So the glasses worn by viewers use “circular polarizers” to block specific parts of the light. Now back to the LCD screen. LCDs emit linearly polarized light, or light that lines up along a single plane. So why would a circular polarizer affect linearly polarized light differently depending on which side was facing the light source? It turns out that circular polarizers can be made by stacking a quarter wave plate and a linear polarizer. A wave plate alters the type of polarization of the light that passes through. A quarter wave plate delays one component of the light by a quarter of a wavelength (90 degrees of phase) relative to the other. Essentially, this setup nudges the linearly polarized light, giving it a rotation, resulting in circular polarization.
Consequently, the wave plate can also take circularly polarized light and “straighten” it to linear. Vector addition allows us to represent linearly polarized light as the sum of two orthogonally polarized waves, one vertical and one horizontal. In other words, linearly polarized light, which behaves in a single plane, can be broken down into its horizontal (x) and vertical (y) components. It’s like instead of asking you to walk 50 paces in some diagonal direction, I could tell you to walk forward 40 paces and then left 30 paces. The same principle is sometimes used to calculate vectors of light. A linear polarizer allows light linearly polarized in one direction through, and blocks orthogonally-polarized light, or light whose orientation differs by 90 degrees. You could imagine the vertical bars of a jail cell (linear polarizer) would only allow through pizza boxes (linearly polarized light) with the same orientation. Vertical pizza boxes pass while horizontal pizza boxes get blocked. These two facts allow us to explain, for example, what the amount of polarized light is that gets through a linear polarizer as the polarizer is rotated: simply represent the incident polarized light as two vectors, one of which is parallel to the orientation of the polarizer and the other of which is perpendicular (according to vector addition explained above), and eliminate the amount of light in the perpendicular direction. The following two diagrams illustrate this point. The first shows what happens when linearly polarized light aligned in different directions goes through a linear polarizer. In the first case, when it’s aligned in the same direction, the light goes through. In the second case, when it’s perpendicular, the light is blocked. 
But since we can represent light as being the sum of any two orthogonal components (as seen in the top half of the next diagram), we see that a linear polarizer will allow some but not all of the light that is rotated less than 90 degrees through the polarizer. With this information, we can now explain the odd behavior of the 3D glasses and the LCD display. The following table lists the various cases we need to examine. The left-most column lists a kind of light heading into the glasses used with the 3D projection system. The “Forwards” columns show what happens when the light goes through the quarter wave plate (QWP) first, and then the linear polarizer (LP) second. The “Backwards” column shows what happens when the light goes in the opposite direction. “Circ” means circularly polarized, “Lin” means linearly polarized, and “1/2″ means that some fraction of the light makes it through, with the polarization one would expect. In the theater, they send out circularly polarized light, with a different handedness for each eye. The 3D glasses could have the linear polarizers in each lens oriented in different directions for each eye. So (referencing the bottom two rows in the table above) the right-circular light would go through one lens but not the other, but vice versa for left-circular light. When we look through the glasses at an LCD, which puts out linearly polarized light, we can see that at least some light will get through both lenses at the same time in one direction (the first two rows in the table above) but in the opposite direction, there is an orientation where no light will get through. As it turns out, the particular glasses I have appear to be built slightly differently. If the linear polarizers were orthogonal to each other, then the reverse view of an LCD should have one lens letting all the light through while the other lets none through. But in fact they are both identical. 
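The forward/backward asymmetry in the table can be checked numerically with Jones calculus. The sketch below models idealized components (a horizontal linear polarizer and a quarter wave plate with its fast axis at 45 degrees — both standard textbook matrices, not measured from the actual glasses): going QWP-then-LP ("forward"), linearly polarized input transmits exactly half its intensity at every orientation, while LP-then-QWP ("backward") follows Malus's law and goes completely dark at 90 degrees.

```python
import numpy as np

# Jones matrix of an ideal horizontal linear polarizer
LP = np.array([[1, 0],
               [0, 0]], dtype=complex)

# Jones matrix of a quarter wave plate, fast axis at 45 degrees
QWP45 = 0.5 * np.array([[1 + 1j, 1 - 1j],
                        [1 - 1j, 1 + 1j]])

def intensity(jones_vec):
    """Total intensity carried by a Jones vector."""
    return float(np.sum(np.abs(jones_vec) ** 2))

def linear(theta_deg):
    """Jones vector of unit-intensity light, linearly polarized at theta."""
    t = np.radians(theta_deg)
    return np.array([np.cos(t), np.sin(t)])

for theta in (0, 45, 90):
    forward = intensity(LP @ (QWP45 @ linear(theta)))   # QWP first, then LP
    backward = intensity(QWP45 @ (LP @ linear(theta)))  # LP first, then QWP
    print(theta, round(forward, 3), round(backward, 3))
# forward is 0.5 at every angle; backward is cos^2(theta): 1.0, 0.5, 0.0
```

This matches the observation through the real glasses: LCD light always partly gets through forwards, but rotating the glasses backwards finds an orientation that is totally black.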
This suggests that the quarter wave plates (QWPs) are the elements oriented at 90 degrees to each other instead of the linear polarizers. This would mean that circularly polarized light going through one QWP would come out horizontally polarized, and going through the other would be vertically polarized. Research grade quarter wave plates are expensive, but apparently there are lower quality ones available in plastic at much lower prices, and their quality would be good enough for watching a movie. You can read more about the history of 3D films on Wikipedia.
<urn:uuid:91c9c097-d650-459e-a438-370ae0d426c0>
3.25
1,694
Personal Blog
Science & Tech.
49.580048
Begin your search for information on properties with the resources listed here. See also resources listed by properties, materials and substance. Tip: Look for the substance (e.g. Gallium Arsenide) in a handbook or encyclopedia. Often there is a section on properties. You may need to back up further (e.g. Gallium) to find what you need.

CRC Handbook of Chemistry and Physics — Comprehensive resource of physical constants and properties. Tip: Look for the Substance/Property Search link on the left side. Print version is located in Science Library and Barker Library stacks at QD65.H235.

CHEMnetBASE — Tip: In the CRC Handbook of Chemistry and Physics, look for the Substance/Property Search link on the left side. If looking for a chemical, try the Combined Chemical Dictionary or Properties of Organic Compounds. Browsable and searchable engineering and science handbooks. Tip: Look over the search tips section to build an effective search strategy. Or use the Data Search to look for specific properties.

Springer Materials — Known in print as Landolt-Börnstein. Includes 250,000 substances and 3,000 properties. Good for semi- and superconducting materials. Landolt-Börnstein [book in Barton] (print with online index)

Reaxys — Find properties for organic, inorganic and organometallic substances. Known in print as Beilstein or Gmelin. Tip: put the molecular formula in Hill Order.

Librarian for Chemistry & Chemical Engineering
Librarian for Materials Science & Engineering, Mechanical Engineering, & Engineering Systems
email@example.com
<urn:uuid:93122771-d486-43ce-b032-c54f863ef5b0>
2.921875
349
Content Listing
Science & Tech.
47.296138
The Apache distribution comes with a script to control the server called apachectl, installed into the same location as the httpd executable. For the sake of the examples, let's assume that it is in /home/httpd/httpd_perl/bin/apachectl. All the operations that can be performed by using signals can also be performed on the server by using apachectl. You don't need to know the PID of the process, as apachectl will find this out for itself.

To start httpd_perl:

panic% /home/httpd/httpd_perl/bin/apachectl start

To stop httpd_perl:

panic% /home/httpd/httpd_perl/bin/apachectl stop

To restart httpd_perl (if it is running, send HUP; if it is not, just start it):

panic% /home/httpd/httpd_perl/bin/apachectl restart

To do a graceful restart (send a USR1 signal, or start the server if it's not running):

panic% /home/httpd/httpd_perl/bin/apachectl graceful

To perform a configuration test:

panic% /home/httpd/httpd_perl/bin/apachectl configtest

There are other options for apachectl. Use the help option to see them all:

panic% /home/httpd/httpd_perl/bin/apachectl help

It is important to remember that apachectl uses the PID file, which is specified by the PidFile directive in httpd.conf. If the PID file is deleted by hand while the server is running, or if the PidFile directive is missing or in error, apachectl will be unable to stop or restart the server.
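The signal-based operations that apachectl wraps can be reproduced by hand: read the parent PID from the file named by the PidFile directive and send it the appropriate signal. A minimal Python sketch of that idea (the helper names and path are illustrative, not part of Apache; USR1 for graceful restart and HUP for restart come from the text above, TERM for stop is standard Apache behaviour):

```python
import os
import signal

def read_pid(pidfile):
    """Read the parent httpd PID from the file named by PidFile."""
    with open(pidfile) as f:
        return int(f.read().strip())

def graceful_restart(pidfile):
    """Rough equivalent of 'apachectl graceful': USR1 to the parent."""
    os.kill(read_pid(pidfile), signal.SIGUSR1)

def stop(pidfile):
    """Rough equivalent of 'apachectl stop': TERM to the parent."""
    os.kill(read_pid(pidfile), signal.SIGTERM)
```

If the PID file has been deleted or PidFile is misconfigured, read_pid fails here for exactly the same reason apachectl becomes unable to stop or restart the server.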
<urn:uuid:dad35ba8-7f49-41d2-a09e-33fac4862097>
3.234375
615
Documentation
Software Dev.
45.115588
Copyright © University of Cambridge. All rights reserved. The picture above shows four equal weights on one side of the scale and an apple on the other side. What can you say that is true about the apple and the weights? If the apple weighs $180g$, how heavy must one weight be? If the apple weighed $375g$, how heavy would one weight be? If the apple was a giant one and weighed a full kilo and the weights were each $250g$, what would the scale look like? How do you know? Can you prove it?
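One way to check the arithmetic behind these questions: if four equal weights balance the apple, each weight must be one quarter of the apple's mass, and two sides of a scale balance exactly when their totals are equal. A small sketch (the helper name is ours):

```python
def weight_for(apple_grams, n_weights=4):
    """Mass of one weight when n equal weights balance the apple."""
    return apple_grams / n_weights

print(weight_for(180))   # 45.0 g per weight for a 180 g apple
print(weight_for(375))   # 93.75 g per weight for a 375 g apple

# A 1 kg apple against four 250 g weights: the totals are equal,
# so the scale sits level.
print(4 * 250 == 1000)   # True
```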
<urn:uuid:29dd6470-f3ba-4449-bca8-0d2bb75ee9c9>
2.9375
119
Q&A Forum
Science & Tech.
80.133889
Titan may have an icy surface Apr 25, 2003 The surface of Titan, Saturn’s largest moon, may contain frozen water. Caitlin Griffith at the University of Arizona and colleagues at the Institute for Astronomy in Honolulu, the Gemini Observatory and the Université Pierre et Marie Curie measured the infrared spectrum of the moon’s surface and detected the tell-tale characteristics of water ice. This result may overturn the hypothesis that Titan’s surface is completely covered by a thick layer of organic liquids and solids (C A Griffith et al. 2003 Science 300 628). The atmosphere of Titan is made up of a thick haze, about 800 metres deep, of methane, nitrogen and carbon dioxide. This layer obscures the surface and makes it difficult to detect what lies below. Previous studies focused on a small range of wavelengths but spectral peaks that are characteristic of surface compounds only show up at a larger range of wavelengths. It is known from the Voyager mission that the atmosphere of Titan becomes more transparent in the near infrared part of the spectrum. Griffith and co-workers have now made multiple measurements between 0.8 and 5.1 μm using the United Kingdom Infrared Telescope (UKIRT) and NASA’s Infrared Telescope Facility (IRTF). They investigated the reflectivity – the fraction of light reflected - of Titan’s surface at certain narrow wavelength ‘windows’ to catch glimpses of the surface where it is not masked by the atmosphere. The team measured reflectivities at eight separate wavelengths of 0.83, 0.94, 1.07, 1.28, 1.58, 2.0, 2.9 and 5.0 μm. “These values, if taken together, indicate the presence of water ice,” Griffith told PhysicsWeb. In fact, Titan’s spectrum resembles that of Ganymede – Jupiter’s largest satellite – which is dominated by ice features. Below 1 μm, dirty water ice features like those found on many of Jupiter’s other satellites were also seen. Moreover, the reflectivities did not match those of the organic sediments that had been expected. 
About the author Belle Dumé is Science Writer at PhysicsWeb
<urn:uuid:85e5c01d-d843-4eba-bc9b-45678b3e58ec>
3.71875
460
Truncated
Science & Tech.
50.004476
Major Section: PROGRAMMING

(Prog2$ x y) equals y; the value of x is ignored. However, x is first evaluated for side effect. Since the ACL2 programming language is applicative, there can be no logical impact from evaluating x; but x may involve a call of a function such as illegal, which can cause so-called ``hard errors'', or a call of cw to perform output. See hard-error, see illegal, and see cw for examples of functions to call in the first argument of prog2$.

Here is a simple, contrived example using hard-error. The intention is to check at run-time that the input is appropriate before calling bar.

(defun foo-a (x)
  (declare (xargs :guard (consp x)))
  (prog2$ (or (good-car-p (car x))
              (hard-error 'foo-a
                          "Bad value for x: ~p0"
                          (list (cons #\0 x))))
          (bar x)))

The following similar function uses illegal instead. Since illegal has a guard of nil, guard verification would guarantee that the call of illegal below will never be made (at least when guard checking is on; see set-guard-checking).

(defun foo-b (x)
  (declare (xargs :guard (and (consp x)
                              (good-car-p (car x)))))
  (prog2$ (or (good-car-p (car x))
              (illegal 'foo-b
                       "Bad value for x: ~p0"
                       (list (cons #\0 x))))
          (bar x)))

We conclude with a simple example using cw from the ACL2 sources.

(defun print-terms (terms iff-flg wrld)

; Print untranslations of the given terms with respect to iff-flg, following
; each with a newline.

; We use cw instead of the fmt functions because we want to be able to use this
; function in print-type-alist-segments (used in brkpt1), which does not return
; state.

  (if (endp terms)
      terms
    (prog2$ (cw "~q0" (untranslate (car terms) iff-flg wrld))
            (print-terms (cdr terms) iff-flg wrld))))
<urn:uuid:9774162a-ffb3-4464-b81c-50042669fd14>
2.8125
507
Documentation
Software Dev.
59.807571
Question: Would it be theoretically possible to deduce the masses and distances of the other planets in our Solar System solely by observing the movement of our Sun? (Looking at the movement of the parent star is one way to detect exoplanets, isn't it?) It is a bit harder because we are ‘in’ the Solar System, but technically yes it is sort of easy to do. This is how some of the moons of Jupiter were found before we sent spacecraft there. We can look at the orbit of the parent star and see if we can see wiggles in the periodicity of the orbit, or see darkenings on the surface that would indicate an eclipsing planet, for example. That would be a very interesting project! It would be very complex to model with a 9-body system to the first approximation, but I’m pretty sure we would be able to see the effects on the sun from the larger planets. Maybe the proximity would allow us to measure the smaller planets too. You could probably even see things like orbital inclination of planets by looking at the motion of the sun. You could even build a time lapse of doppler shift from the sun and build up a big 3d picture of the solar system over a long time. You’re right – you can look at the wobble of a star to detect an exoplanet.
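The size of the effect can be estimated for the largest planet. Jupiter and the Sun orbit their common barycentre, so the Sun's reflex orbit has radius a·(m_Jup/m_Sun), and its reflex speed follows from Jupiter's orbital period. A rough back-of-the-envelope sketch with round textbook values (the 1/1047 mass ratio, 5.2 AU orbit, and 11.86-year period are standard figures, not from the answer above):

```python
import math

AU = 1.496e11               # metres per astronomical unit
YEAR = 3.156e7              # seconds per year
M_JUP_OVER_M_SUN = 1 / 1047.0

a_jupiter = 5.2 * AU        # Jupiter's orbital radius
period = 11.86 * YEAR       # Jupiter's orbital period

# Radius of the Sun's own orbit about the Sun-Jupiter barycentre
a_sun = a_jupiter * M_JUP_OVER_M_SUN

# The Sun's reflex speed -- what a distant observer's spectrograph
# would see as a periodic Doppler shift
v_sun = 2 * math.pi * a_sun / period

print(a_sun)   # ~7.4e8 m, just over one solar radius
print(v_sun)   # ~12.5 m/s
```

A wobble of roughly a dozen metres per second over a 12-year period is exactly the kind of signal radial-velocity exoplanet surveys hunt for, which is why a Jupiter-mass planet is the easiest case to detect this way.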
<urn:uuid:3e653741-f44e-4e3a-8d44-c2f399808303>
3.015625
283
Q&A Forum
Science & Tech.
56.907704
Google introduced a new component called Web which gives you the functionality to send and fetch data from a server or a website through GET and POST requests. This component can decode both JSON and HTML formatted data. We will be writing an app called iRead that will ask a user to type in a full or partial book name, query Google's database of books using the Books API, and retrieve the most relevant book info. We will parse the Title and Author of the book, we will get the image URL of the book cover, and finally the book URL, which can be launched in a browser. This is what our app will look like:

Before we start, I advise you to read up on anything mentioned in the paragraph above that is unfamiliar to you, like the Books API, JSON, HTML, GET, POST, etc.

First things first: build your interface in the Designer/Viewer window. It should look like this:

If you are having a hard time designing, download the source file (iRead.zip) from the Downloads page and take a look.

Have you tried executing a Books API query request yet? Click on the link below to see the API response. This gives you a response that contains-

Our concern is not the actual book info (over time you might not get the same book info using the same query); our concerns are the tokens that we'll need to identify and parse specific data. Take a look at the tokens like "title": that ends with a comma and a new line, "authors": that ends with ], etc.

Now let's start defining actions and interactions in the Blocks Editor. When a user enters a book name and hits the Search button, we call the GetBookInfo procedure, where we simply construct our URL with the user-inputted text, encode it for use as a URL, and finally feed it to the Web component's Url property. We want to get only one book's info (the API may return many), so we used maxResults=1. If we didn't use projection=lite, we would get a lot of other info that we are not going to use in this tutorial app. At the end we call Web.get to execute our query.
Let's parse the JSON response from the API. We can only proceed if responseCode is 200, which means our query didn't fail along the way. We have another procedure called ParseBookResult that actually parses the response. Let's take a look at it.

In the ParseBookResult procedure, we split the response content using the start tag and end tag. For example, to parse the title of the book, we need to know what the title data begins and ends with. If you have noticed in the query, it returns-

"title": "Harry Potter and the chamber of secrets", "authors": [ "J. K. Rowling" ],

So we know it begins with "title": and ends with a comma followed by a new line, which is what we used to split and get the value; in this case, the value is "Harry Potter and the chamber of secrets". Then we decode the JSON value by calling Web's JsonTextDecode. And finally we decode the HTML characters by calling HtmlTextDecode. Note that we only call HtmlTextDecode if a certain value may contain HTML characters. An image link is a URL and should not be parsed. That is why, for that, we don't call HtmlTextDecode.

When a user taps on the Read button, we simply launch the browser with the book URL. You can download the source file of this app (iRead.zip) from the Downloads page.

If you want to parse multiple occurrences of a token/tag, say you want to get info for three books by setting maxResults=3, you can use a foreach loop to do so. You can download iRead2.zip from the Downloads page to see how to parse multiple occurrences from a JSON response.
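The split-by-token approach works, but it is fragile: a general JSON parser finds the same fields regardless of whitespace or field order. For comparison, here is how the title and authors would be pulled out in Python from a response shaped like the Books API's volumes result (the sample dict below is illustrative, not a live response):

```python
import json

# Shape modelled on the Books API response; values are illustrative.
sample_response = json.dumps({
    "items": [{
        "volumeInfo": {
            "title": "Harry Potter and the Chamber of Secrets",
            "authors": ["J. K. Rowling"],
        }
    }]
})

# Parse the whole document instead of hunting for start/end tags.
data = json.loads(sample_response)
volume = data["items"][0]["volumeInfo"]

title = volume["title"]
authors = ", ".join(volume["authors"])

print(title)    # Harry Potter and the Chamber of Secrets
print(authors)  # J. K. Rowling
```

With a real parser there is also no need for a separate HTML-decoding step on URLs, since each value comes back already decoded.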
<urn:uuid:2ec4766c-7c87-4401-9710-a69d0a467dec>
2.828125
816
Tutorial
Software Dev.
64.107931
A manipulative parasitic wasp

William Eberhard (2000) describes an interesting relationship between a female wasp parasite, Hymenoepimecis sp. (Ichneumonidae), and its spider host, Plesiometa argyra (Araneidae). Although these species do not occur in southern Africa, they illustrate an interesting instance where an insect parasitoid is able to alter the behaviour of its spider host to the finest degree.

The orb spider is stung while on its web and is temporarily paralysed while the wasp lays her egg on it. The spider then recovers and goes about its life with the newly hatched wasp larva feeding on it by sucking its haemolymph. For about 7 to 14 days, the spider continues building its usual orb webs for prey capture. However, in the evening of the night when it is to be killed by its wasp parasite, the spider weaves a different web, designed specifically to suit the purposes of the wasp. The wasp larva then moults, kills and consumes the spider and pupates, suspending itself safely from its custom-built cocoon web.

[Images, provided by W. Eberhard ©, used with permission: the normal web of the orb-weaving spider; the cocoon web and wasp cocoon from above; the cocoon web and wasp cocoon from the side.]

The cocoon web is consistently made to the same pattern, and deviations from that pattern would be disastrous for the wasp larva. The cocoon web is a simplified web: the sticky spirals and multi-stranded cable and radial lines of the orb web are omitted. This simplified cocoon web suspends the wasp pupa safely, protecting it from various adverse conditions. Vulnerability to heavy rains, for example, was observed in a related species.

The spider's change in behaviour is thought to be induced chemically rather than by physical interference. The effect of the stimulus is both rapid and long-lasting. Observations were made where the wasp was removed earlier in the evening of the spider's final night, and the spider did not spin the cocoon web. When the wasp was left and only removed later in the evening, the spider was observed to proceed with the construction of the cocoon web. When the spider was allowed to survive, it continued to make the cocoon web the following night, and some spiders reverted to making more normal webs on subsequent nights.

- Eberhard, W. G. 2000. Spider manipulation by a wasp larva. Nature 406: 255-256.
<urn:uuid:827b8d0a-489c-467f-9c1d-bec4d300c418>
2.765625
578
Knowledge Article
Science & Tech.
49.617983
Volume 8, Number 49: 7 December 2005 Climate alarmists have long contended that the historical and still-ongoing rise in the air's CO2 content - aided and abetted by the historical increase in atmospheric methane concentration - will lead to dangerous global warming that could rival temperature increases experienced during prior glacial-to-interglacial transitions. Now, new light has been shed on the subject by two reports that provide CO2, methane and temperature data stretching a full 650,000 years back in time (Siegenthaler et al., 2005; Spahni et al., 2005), based on measurements made on East Antarctica's Dome Concordia ice core, which was originally extracted and cursorily analyzed by Augustin et al. (2004). What are politically-correct scientists saying about the new findings? Los Angeles Times staff writer Usha McFarling (25 Nov 2005) reports they claim "the work provides more evidence that human activity since the Industrial Revolution has significantly altered the planet's climate system." As an example, she quotes Penn State University's Richard Alley as stating the new results may be interpreted as "saying, 'Yeah, we had it right' ... we can pound on the table harder and say, 'this is real'." Likewise, Associated Press writer Lauran Neergaard (24 Nov 2005) quotes Edward Brook, who wrote a Perspective piece in Science about the new findings, as saying "these studies tell us that there's a strong relationship between temperature and greenhouse gasses ... which logically leads you to the conclusion that maybe we should worry about temperature change in the future." Echoing this sentiment, Jerry McManus of the Woods Hole Oceanographic Institution was quoted by BBC News staff writer Julianna Kettlewell (28 Nov 2005) as saying "it is something of grave concern to someone like me, who sees the strong connection between greenhouse gases and climate in the past." 
Actually, the ice core data do not strengthen the climate-alarmist claim that we should be concerned about greenhouse gas-induced global warming, and for two different reasons. We discussed the first of these reasons in our Editorial of 30 Nov 2005, where we indicated that the ice core data: (1) clearly demonstrate the important role of climate in CO2 regulation, but (2) provide no evidence for the inverse relationship, i.e., a regulation of climate by CO2. Here, we discuss the second reason. We begin with the fact that the new ice core data indicate the atmosphere's current CO2 concentration is about 30% higher than it has been at any other time in the last 650,000 years, and that the atmosphere's current methane concentration is 130% higher. These extremely high concentrations, in the words of McManus (as quoted by Kettlewell), "are geologically incredible." Hence, if the world's climate alarmists are correct about the tremendous warming power they attribute to these two top greenhouse gases, one would logically expect the earth to be currently experiencing some incredibly high temperatures. So what do the ice core data indicate in this regard? Both the Dome Concordia and Vostok data sets suggest that the peak temperature of the current interglacial or Holocene was not incredibly higher than the peak temperatures of all of the past four interglacials, the earliest of which is believed to have been nearly identical to the Holocene in terms of earth's orbit around the sun. In fact, the Holocene's peak temperature was not higher than those of the preceding four interglacials by even a tiny fraction of a degree. In fact, it was lower. In fact, the work of Petit et al. (1999) revealed that the peak temperature of the Holocene was more than 2°C lower than the average peak temperature of the prior four interglacials. What is more, earth's current temperature is lower still. 
In light of these several real-world observations, we conclude that if there is anything unusual or unnatural about earth's current climatic state compared to the climates of the past four interglacials, it is that it is so much colder in spite of there being so much more (dare we say incredibly more?) CO2 and methane in the air. Clearly, the planet's climate system is not operating the way the world's climate alarmists and politically-correct scientists claim it does. Sherwood, Keith and Craig Idso
Augustin, L., Barbante, C., Barnes, P.R.F., Barnola, J.M., Bigler, M., Castellano, E., Cattani, O., Chappellaz, J., Dahl-Jensen, D., Delmonte, B., Dreyfus, G., Durand, G., Falourd, S., Fischer, H., Fluckiger, J., Hansson, M.E., Huybrechts, P., Jugie, G., Johnsen, S.J., Jouzel, J., Kaufmann, P., Kipfstuhl, J., Lambert, F., Lipenkov, V.Y., Littot, G.C., Longinelli, A., Lorrain, R., Maggi, V., Masson-Delmotte, V., Miller, H., Mulvaney, R., Oerlemans, J., Oerter, H., Orombelli, G., Parrenin, F., Peel, D.A., Petit, J.-R., Raynaud, D., Ritz, C., Ruth, U., Schwander, J., Siegenthaler, U., Souchez, R., Stauffer, B., Steffensen, J.P., Stenni, B., Stocker, T.F., Tabacco, I.E., Udisti, R., van de Wal, R.S.W., van den Broeke, M., Weiss, J., Wilhelms, F., Winther, J.-G., Wolff, E.W. and Zucchelli, M. 2004. Eight glacial cycles from an Antarctic ice core. Nature 429: 623-628.
Petit, J.R., Jouzel, J., Raynaud, D., Barkov, N.I., Barnola, J.-M., Basile, I., Bender, M., Chappellaz, J., Davis, M., Delaygue, G., Delmotte, M., Kotlyakov, V.M., Legrand, M., Lipenkov, V.Y., Lorius, C., Pepin, L., Ritz, C., Saltzman, E. and Stievenard, M. 1999. Climate and atmospheric history of the past 420,000 years from the Vostok ice core, Antarctica. Nature 399: 429-436.
Siegenthaler, U., Stocker, T., Monnin, E., Luthi, D., Schwander, J., Stauffer, B., Raynaud, D., Barnola, J.-M., Fischer, H., Masson-Delmotte, V. and Jouzel, J. 2005. Stable carbon cycle-climate relationship during the late Pleistocene.
Science 310: 1313-1317.
Spahni, R., Chappellaz, J., Stocker, T.F., Loulergue, L., Hausammann, G., Kawamura, K., Fluckiger, J., Schwander, J., Raynaud, D., Masson-Delmotte, V. and Jouzel, J. 2005. Atmospheric methane and nitrous oxide of the late Pleistocene from Antarctic ice cores. Science 310: 1317-1321.
<urn:uuid:fa378482-ccc8-4869-bf7d-7735392cf5f9>
3.203125
1,609
Academic Writing
Science & Tech.
67.849604
And so it is with type and class. In the seminal work Object-Oriented Analysis and Design with Applications (Addison-Wesley, 1994), Grady Booch declares, "For our purposes, we will use the terms type and class interchangeably." A footnote goes on to explain that, "A type and a class are not exactly the same ... For most mortals, however, separating the concepts of type and class is utterly confusing." Certainly Booch deserves the eminent status he has attained in the object-oriented community. Nonetheless, I disagree with his assessment, though in fairness I should acknowledge that he speaks from a language-independent view. Fortunately for Java developers, the Java language has taken a positive step in facilitating a clearer distinction between type and class. To explore that distinction, we will examine type and class from a Java perspective. Programming language types characterize the values used in the course of program execution. Though limited compared with modern languages, FORTRAN first introduced types in 1954. The language syntax allowed programmers to distinguish between numeric types for integer and floating-point arithmetic. Variables beginning with the letters i through n were implicitly typed as integers. Relics of that convention survive today, as many programmers still use the letters i and j for array subscripting. The FORTRAN typing scheme's primary benefit was code optimization for the underlying hardware system. Computer language evolution quickly raised type from its restricted realm of urging faster code production from compilers to allowing users to define their own data types. User-defined types are the pragmatic extension of abstract data types from type theory. An abstract data type is an ordinary type along with a set of operations.
Abstract data types effectively shield module internal mechanics by permitting interaction only through published type operations. You will recognize the class concept as the manifestation of abstract data types in object-oriented languages. An object-oriented class is an abstract data type with full or partial implementation of each declared type operation. User-defined data types realize significant benefit in extending a programming language's primitive type system. Expressive combinations of primitive and user-defined types create new, more complex types. Importantly, these user-defined types are first-class citizens, meaning that objects characterized by these types enjoy privileges similar to those of primitive types, thereby facilitating efficient data structure management. The introduction of programming language types also paved the way for program text type-checking. Type operations restrict the permissible interaction with a user-defined type, thereby declaring the explicit boundaries of a system module. A type-checker, based on well-defined typing rules, enforces the proper use of program types by ensuring the integrity of these boundaries. Type-checking's primary purpose is to prevent program execution errors.
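The separation described above (a type as a published set of operations, a class as a full or partial implementation of them) is what Java expresses with interface versus class. As an illustration only, and in Python rather than Java for brevity, here is a minimal sketch of the abstract-data-type idea; the Stack and ListStack names are invented for the example:

```python
from abc import ABC, abstractmethod

class Stack(ABC):
    """The *type*: a name plus a set of declared operations, no internals."""

    @abstractmethod
    def push(self, item): ...

    @abstractmethod
    def pop(self): ...

class ListStack(Stack):
    """A *class*: one concrete implementation of the Stack type."""

    def __init__(self):
        self._items = []  # internal mechanics, shielded from clients

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

s = ListStack()
s.push(1)
s.push(2)
assert isinstance(s, Stack)  # the object is characterized by its type
assert s.pop() == 2
```

Clients interact only through the published push/pop operations, so the list inside ListStack could be swapped for any other structure without affecting code written against the Stack type.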
<urn:uuid:cb2bffc7-d6b1-4fd5-9811-579f2037ac96>
2.96875
586
Documentation
Software Dev.
22.272431
The dream of faster-than-light travel has been on the mind of humanity for generations. Until recently, though, it was restricted to the realm of pure science fiction. Theoretical mechanisms for warp drives have been posited by science, some of which actually jibe quite nicely with what we know of physics. Of course, that doesn't mean they're actually going to work. NASA researchers recently revisited the Alcubierre warp drive and concluded that its power requirements were not as impossible as once thought. However, a new analysis from the University of Sydney claims that using a warp drive of this design comes with a drawback. Specifically, it could cause cataclysmic explosions at your destination. To see how the Alcubierre drive could devastate an entire star system, you have to know a little about how it would work. The ship would consist of a central pod and a large flattened ring around it (pictured below). The ring would have to be made of an as-yet unidentified kind of dense exotic matter capable of bending space-time. Supply the craft with enough energy, and the very fabric of the universe can be warped. NASA now believes this would require orders of magnitude less energy than Alcubierre originally thought. When activated, space behind an Alcubierre drive expands while contracting in front. The ship itself hums along in a stable pocket, or bubble, in space. It turns out the bubble is the problem. As your faster-than-light ship sails through the cosmos, it's not alone. Although we often think of space as empty, there are loads of high-energy particles shooting through the void. The University of Sydney research [PDF] indicates that these particles are liable to get swept up in the craft's warp field and remain trapped in the stable bubble. The longer the journey lasts, the more of these dangerous particles build up. This doesn't affect the ability of the warp drive to keep bending the laws of the universe; it's the stopping that's going to ruin your day.
The instant the Alcubierre drive is disengaged, the space-time gradient that allows it to effectively move faster than light goes away. All the energetic particles trapped during the journey have to go somewhere, and the researchers believe they would be blasted outward in a cone directly in front of the ship. Anyone or anything waiting for you at the other end of your trip would be destroyed. Because of a funny little quirk of relativity, there is no upper limit to the amount of energy an Alcubierre drive could pick up. A long trip could vaporize entire planets upon your arrival. The researchers are beginning a new round of number crunching to see how bad the problem is. It's possible the deadly particle beam could be projected in all directions, making Alcubierre drives unworkable. That spiffy warp ship might make a better weapon than a method of transportation. The Alcubierre drive is, of course, still highly speculative. NASA scientists are working with small-scale models in an effort to produce localized distortions in space, but this new Aussie research could give NASA something to think about. Even with future advances in technology, this method of space flight might prove to be impossible. At least then we wouldn't have to worry about annihilating every place we try to explore.
<urn:uuid:f5dcfd5c-8f20-4bc4-b97f-b1d173f230b3>
3.453125
696
Comment Section
Science & Tech.
46.537222
Search our database of handpicked sites Looking for a great physics site? We've tracked down the very best and checked them for accuracy. Just fill out the fields below and we'll do the rest. You searched for We found 16 results on physics.org and 112 results in our database of sites (111 are websites, 1 is a video, and 0 are experiments). Search results on physics.org Search results from our links database A NASA page all about the solar wind, including the latest data on the Sun's activity and with lots more information on solar physics. Explanation of this phenomenon, the flux of charged particles from the sun, and links to related areas. Feature article explaining all the different activities of the sun, from the solar wind to coronal mass ejections. When energetic charged particles enter the earth's atmosphere from the solar wind, they tend to be channeled toward the poles by the magnetic force. A page about the magnetosphere and its interactions with the solar wind. A wealth of info about the sun, the earth's magnetosphere, space weather, cosmic rays, the solar wind etc. Articles on fossil fuels, hydroelectric, wind, solar and nuclear power. Each is followed by an online multiple choice test. There is some worthwhile material in this site and it would be especially ... Information about the earth's magnetic field, solar wind, the magnetosphere, Van Allen belts and space weather from NASA's IMAGE mission. Have a look inside a wind turbine to see how it generates electricity. Part of the NASA website, includes detailed information (and illustrations) on the history, development and use of wind tunnels. Showing 1 - 10 of 112
<urn:uuid:ac5b6421-1053-432c-97c3-23a88ad3fd4b>
3.09375
351
Content Listing
Science & Tech.
50.22521
Web edition: July 20, 2012 Thirty years ago, the California condor came dangerously close to extinction. Biologists took action, and their efforts worked. The big, bald birds lived, and their numbers grew — from 22 in 1982 to several hundred today. But the bird’s apparent success story may be misleading, a new study finds. By studying blood and feathers from condors in the wild, researchers have confirmed that the birds are suffering a slow poisoning.
<urn:uuid:775fd7de-61f5-4321-b16a-1521b03ebeb8>
3.640625
96
Truncated
Science & Tech.
59.634
INSPIRATION for Ibis Therapeutics's broad-scan biodetector came when company president David J. Ecker realized that a method used to screen for potential RNA-binding drugs might provide a means of looking for pathogens. Image: Courtesy of ISIS PHARMACEUTICALS Chance is often the best inventor. Isis Pharmaceuticals never set out to become a maker of sensors for biological weapons. The company, based in Carlsbad, Calif., is best known for its work in developing antisense therapies, the use of small pieces of DNA-like molecules that bind to messenger RNA (a copy of a gene) to block synthesis of an encoded protein. Its research led to the formation of a division called Ibis Therapeutics, which develops chemicals other than DNA that would interfere with RNA. Along the way, Ibis discovered a method of screening pathogens that might lead to a universal detector for biological weapons--even perhaps nefarious, as yet to be invented bioengineered strains of pathogens. The road to a universal biosensor began in the mid-1990s, when Ibis started looking for chemicals with a low molecular weight that would bind to and block the activity of RNA, the same mechanism used by many antibiotics. The Defense Advanced Research Projects Agency (DARPA) funded some of the research because of its interest in finding new drugs to counter the microorganisms used in biowarfare. Conventional high-throughput screening--conducting a multitude of tests to measure the interaction of drug candidates with different enzymes--is ineffective for drugs that would work by binding to RNA. So Ibis began to explore the possibility of using mass spectrometry to determine when a small molecule binds to RNA. The company refined a technique called electrospray ionization, as well as mass spectrometry, to extract RNA and the bound drug candidate from an aqueous solution intact and then suspend those molecules in a vacuum, where they can be weighed. As the methods proved themselves, Ibis president David J. 
Ecker came to the realization that pulling out the RNA alone, without the bound molecule, would provide the makings of an extraordinary sensing system. After RNA from a cell is weighed with the spectrometer--each cell has multiple types of the molecule--these very precise measurements, accurate down to the mass of a few electrons, can be correlated with a database that contains information about RNA weights for a given pathogen. Each weight in the database table corresponds to the weight of the exact number of letters, or nucleotides, for a particular RNA. As long as information about the nucleotide composition is in the database, the system, called TIGER (triangulation identification for genetic evaluation of risks), can identify any bacterium, virus, fungus or protozoan. Before the RNA is weighed, another critical step is necessary: the polymerase chain reaction must make copies of stretches of DNA or RNA that are found in all cellular organisms (or, for viruses, in whole families of them). Six months before last year's anthrax attacks, Ibis and partner SAIC, a contract research house, received a $10-million DARPA grant extending over two years to do a feasibility study for TIGER. The goal of the program is to develop a system that can detect the 1,500 or so agents known to infect humans. This approach differs fundamentally from the way other biodetectors are designed. Most systems use an antibody or a piece of DNA as a probe to bind to a protein or nucleic acid in a pathogen. These tests are limited to detecting a small subset of the universe of pathogenic agents. And an antibody probe for, say, anthrax needs to make a match with the exact strain of the specific bacterium it is targeting. With TIGER, if information about a pathogen is not in its database--because it is a newly evolved strain or a specially bioengineered bug--the software can flag any genetic likeness it has with other microorganisms. 
"The database will say, 'I've never seen this before, but it's very similar to Yersinia pestis [plague],'" Ecker says. The detector would not, however, be able to pick up some genetic alterations of a microorganism--for instance, a gene for a toxin put in an otherwise harmless microbe. Although biosensors were never part of Ibis's business plan, about half of its 35 employees are now on the TIGER team. Work at the company continues on sequencing the relevant genes to extract the needed RNA signatures for populating the databases--or obtaining this information from sequencing efforts under way worldwide. One of the biggest challenges the researchers still face is how to tell one piece of RNA from among thousands of specimens in a complex sample, such as a ball of dirt. "That requires very complex signal processing," Ecker says. The problem that Ibis had encountered was one that radar engineers deal with constantly. In fact, this was the reason behind the collaboration with SAIC, which produced culture shock when Ibis's molecular biologists began to work with SAIC's radar engineers. "We spent the better part of a whole year figuring out how to communicate with each other," Ecker remarks. This article was originally published with the title The Universal Biosensor.
<urn:uuid:db392a0f-4181-472d-aa6b-527b18a73744>
2.734375
1,087
Truncated
Science & Tech.
32.567584
DailyDirt: Life, Life Everywhere from the urls-we-dig-up dept Evidence of life hasn't been found outside of our planet (yet?), but life seems to be getting into nearly every nook and cranny of our dear Earth. Places that seem too cold or hot or dark have been shown to harbor life forms that survive in unusual ways, eating substances that aren't normally considered food. Here are just a few examples of these extremophiles that suggest life might exist on other worlds, even if the conditions don't seem ideal. - Astronauts have actually discovered a new species of life... while training in an underground cave. The astronauts were taking a week-long ESA CAVES underground training course to prepare for duties on the international space station and to acclimate to working under extreme conditions, and they found a new kind of crustacean. [url] - An ecosystem exists in the deepest layer of the Earth's ocean crust, in the gabbroic layer, living off hydrocarbons such as methane and benzene. This discovery could mean there may be life even deeper, possibly in the Earth's mantle. [url] - Microbes isolated beneath 65 feet of Antarctic ice might define a new limit for life to survive. These little organisms live in Lake Vida without much sunlight, without oxygen, at -13°C, in acidic salt water. [url]
<urn:uuid:5ec7f245-38c2-4376-949c-1cc856c2a749>
2.796875
291
Listicle
Science & Tech.
49.019692
Orion shows us colorful examples of stars that astronomers designate red, white and blue, Teske says. Decide for yourself if each star's color lives up to its advertised hue. Betelgeuse, marking Orion's left (eastern) shoulder, is extremely red according to the way astronomers determine star color, Teske explains. Bellatrix, in Orion's right (western) shoulder, is one of the bluest stars to have its color measured. Rigel, in Orion's right knee, has a neutral color and is considered to be white. To astronomers, a star's color is an important property that can be used to indicate the temperature of its outermost layers, Teske says. We are all used to the idea that color and temperature are related. Something can be red-hot or white-hot. The trick is to be sure that everyone agrees on what colors are what, and then to relate a temperature to each agreed-upon color. Astronomers have developed a way to actually measure the quantity they refer to as color. A sensitive television camera attached to a telescope is used to compare a star's brightness at two different wavelengths or colors of light. One brightness measurement is made with the star's blue light; the other with light at yellow-green wavelengths. Astronomers describe a star's relative brightness with a single number called its color, Teske says. This number is really just a comparison of the star's brightness as seen in what human eyes perceive as two colors. Over many years of observations, astronomers have carefully built up detailed knowledge of the relationship between their measured colors and the temperatures of stars' outer layers. Nowadays all that is needed to determine a star's temperature is to measure its corresponding astronomical color. In order to get results that are compatible with those obtained by other scientists, all astronomical observers try to use the same color measurement system with their telescopes, Teske explains.
Observing methods and television detectors are carefully compared among observatories to maintain high standards of accuracy. It's inevitable, though, that astronomers have taken to using color words. Stars that are coolest and therefore emit a lot of reddish light are called red, while others that are extremely hot and emit a lot of blue light are called blue, Teske says. By examining the stars of Orion, an observer can get a good notion of how blue or how white or red stars can really be. Red Betelgeuse has one of the lowest temperatures recorded for a naked-eye star: about 5,000 F. This is the temperature at which pure liquid iron boils and vaporizes. The outer layers of white Rigel have a temperature of 22,000 F. Rigel is among the brightest stars of the galaxy in which our sun is located. It shines with the brilliance of 50,000 suns. In blue Bellatrix, the temperature soars to 45,000 F. Bellatrix is not the hottest of known normal stars, however, Teske notes. That distinction belongs to a star in the southern constellation of Carina, one too faint to be seen with the eye alone. Labelled only as HD93250, it has a temperature approaching 95,000 F.
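The "single number" Teske describes is conventionally the B-V color index: because astronomical magnitudes are logarithmic, comparing the brightness in blue and yellow-green (visual) light turns a flux ratio into a difference. A minimal sketch of the arithmetic (the flux values are illustrative inputs, not real measurements):

```python
import math

def color_index(flux_blue, flux_visual):
    """Astronomical color (a B - V style magnitude difference) computed from
    a star's measured brightness in blue and yellow-green light.
    More positive values mean relatively less blue light: a redder, cooler star."""
    # A magnitude is -2.5 * log10(flux), so a flux ratio becomes a difference.
    return -2.5 * math.log10(flux_blue / flux_visual)

# Equal brightness in both bands gives a color of exactly zero (white-ish).
assert abs(color_index(1.0, 1.0)) < 1e-12
# Half as bright in blue as in visual light: a positive, red-leaning color.
assert color_index(0.5, 1.0) > 0
```

Calibrating the mapping from this number to a physical temperature is exactly the "detailed knowledge built up over many years" the article refers to.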
<urn:uuid:6f348d54-acbe-49cc-9d51-c440edad940f>
3.90625
647
Knowledge Article
Science & Tech.
42.493205
du - estimate file space usage
du [-a|-s] [-kx] [-r] [file ...]
By default, the du utility writes to standard output the size of the file space allocated to, and the size of the file space allocated to each subdirectory of, the file hierarchy rooted in each of the specified files. The size of the file space allocated to a file of type directory is defined as the sum total of space allocated to all files in the file hierarchy rooted in the directory plus the space allocated to the directory itself. When du cannot stat() files or stat() or read directories, it will report an error condition and the final exit status will be affected. Files with multiple links will be counted and written for only one entry. The directory entry that is selected in the report is unspecified. By default, file sizes are written in 512-byte units, rounded up to the next 512-byte unit. The du utility supports the XBD specification, Utility Syntax Guidelines.
The following options are supported:
- -a  In addition to the default output, report the size of each file not of type directory in the file hierarchy rooted in the specified file. Regardless of the presence of the -a option, non-directories given as file operands will always be listed.
- -k  Write the file sizes in units of 1024 bytes, rather than the default 512-byte units.
- -r  Generate messages about directories that cannot be read, files that cannot be opened, and so on. This is the default case.
- -s  Instead of the default output, report only the total sum for each of the specified files.
- -x  When evaluating file sizes, evaluate only those files that have the same device as the file specified by the file operand.
The following operand is supported:
- file  The pathname of a file whose size is to be written. If no file is specified, the current directory is used.
The following environment variables affect the execution of du:
- LANG  Provide a default value for the internationalisation variables that are unset or null.
If LANG is unset or null, the corresponding value from the implementation-dependent default locale will be used. If any of the internationalisation variables contains an invalid setting, the utility will behave as if none of the variables had been defined.
- LC_ALL  If set to a non-empty string value, override the values of all the other internationalisation variables.
- LC_CTYPE  Determine the locale for the interpretation of sequences of bytes of text data as characters (for example, single- as opposed to multi-byte characters in arguments).
- LC_MESSAGES  Determine the locale that should be used to affect the format and contents of diagnostic messages written to standard error.
- NLSPATH  Determine the location of message catalogues for the processing of LC_MESSAGES.
The output from du consists of the amount of the space allocated to a file and the name of the file, in the following format: "%d %s\n", <size>, <pathname>
Standard error is used only for diagnostic messages. The following exit values are returned:
- 0  Successful completion.
- >0  An error occurred.
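The default size computation described above (space allocated in 512-byte units, files with multiple links counted only once) can be sketched in a few lines of Python. This is an illustrative approximation, not a conforming du: it relies on os.lstat().st_blocks, which is conventionally reported in 512-byte units on POSIX systems and is unavailable on some other platforms, and it omits the reporting options entirely.

```python
import os

def du(path, block_size=512):
    """Total space allocated to the file hierarchy rooted at `path`,
    in units of `block_size` bytes (pass 1024 to mimic -k)."""
    seen = set()   # (st_dev, st_ino) pairs: count multi-linked files once
    total = 0

    def add(p):
        nonlocal total
        try:
            st = os.lstat(p)
        except OSError:
            return  # du would report the error and keep going; we just skip
        key = (st.st_dev, st.st_ino)
        if key in seen:
            return
        seen.add(key)
        # st_blocks is conventionally in 512-byte units; rescale (flooring,
        # whereas du rounds up to the next unit).
        total += st.st_blocks * 512 // block_size

    add(path)  # the space allocated to the directory itself counts too
    for root, dirs, files in os.walk(path):
        for name in dirs + files:
            add(os.path.join(root, name))
    return total

# Output in the spec's "%d %s\n" format:
# print("%d %s" % (du("."), "."))
```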
<urn:uuid:97c43983-58f9-4a06-bf06-589eeb118183>
2.953125
645
Documentation
Software Dev.
36.173675
An acorn worm appears as a bright orange or red acorn sticking out of the surface of muddy or sandy shores. Learn about acorn worms. Acorn worm is the name for many similar animals of the phylum Hemichordata. Within its natural habitat an acorn worm appears as a bright orange or red acorn sticking out of the surface of muddy or sandy shores. When carefully dug out, a fragile earthworm like animal is revealed. The length will vary from 2 inches to 6 feet depending on the species. The acorn part of this animal is the proboscis, which is attached to the body by a stalk. Surrounding the stalk in the front of the body is a cylindrical collar. The acorn worm's mouth, which is covered by this collar, opens into a straight intestine running the entire length of the body. Directly behind the collar are two rows of gill slits that connect the intestine with the exterior of its body. It has a very simple heart that pumps blood first through a kidney and then on to the intestine and gills. These animals have no sense organs, but instead simple sensory cells that are imbedded in the skin. Most species of the acorn worm can be found from the shoreline to the depths of the ocean, even down two miles or more. They live in U or V shaped tunnels in the seabed, while some construct tubes of mud or sand particles that are glued together with slime. Others that live in deeper waters are known to move freely over the bottom. When the acorn worm moves, its movements are effected by the proboscis and collar. These water filled bags surrounded by muscles will contract to elongate the proboscis, thus forcing the animal forward. Minute whip like protoplasmic hairs pump water through openings in the walls of the proboscis and collar which causes them to swell. This swelling anchors the front part of the animal while the rest is dragged forward by muscle contractions. While the acorn worm moves around mud and sand is forced into its mouth. 
Water is filtered out through the gill slits and solid materials are passed down, where any organic matter is digested. Undigested sand is bound in mucus and ejected. Reproductive organs of the acorn worm lie in pairs beside the gills. Eggs are laid along the sides of the parent's tunnel or directly into the surrounding water. Species living in deep or cold water will lay a few large eggs with a rich yolk that develop directly into baby acorn worms. Others that live in warm or shallow water lay a large number of small eggs that develop into larvae that swim on the surface of the water before settling to the bottom to become adult and worm-like. On the surface acorn worms look a lot like earthworms, but the internal structure of their body sets them apart. There are certain features that give these animals an apparent affinity with back-boned animals. These features include the structure of the nervous system, the rows of gill slits and a notochord. These features have caused the acorn worm to be linked with other groups of invertebrates such as the sea cucumber, relatives of the starfish and sea urchins. In the second half of the 19th century an attempt was made to bridge the gap between vertebrates and invertebrates. The discovery of a number of animals, including the acorn worm, provided the link. The acorn worm was first discovered by a Neapolitan fisherman who found fragments of the strange animal in his net and took them to a zoologist in Naples. After a careful study this zoologist was able to recognize that this was one of the missing links.
<urn:uuid:3f409024-97c4-47d3-a22c-db3a79a71fdd>
4.09375
759
Knowledge Article
Science & Tech.
52.078804
Eomaia scansoria: discovery of oldest known placental mammal In Nature 416, 816-822, Ji et al. report a fossil of the earliest known eutherian (placental) mammal (1). Eomaia scansoria (Chinese Academy of Geological Sciences (CAGS) 01-IG-1a, b; holotype). a, Fur halo preserved around the skeleton (01-IG-1a; many structures not represented on this slab are preserved on the counter-part 01-IG-1b, not illustrated). b, Identification of major skeletal structures of Eomaia. c, Reconstruction of Eomaia as an agile animal, capable of climbing on uneven substrates and branch walking. Taken from ref (1). This fossil was found in the Lower Cretaceous Yixian Formation. It is extremely well preserved, and both part and counter-part were recovered. It is a complete cranial and post-cranial skeleton of a small (large mouse-sized) mammal that, dated at 125 Myr, represents the earliest eutherian (placental) animal fossil found. (The previous oldest, Murtoilestes, is dated at 120 Myr.) The new species is named Eomaia (Dawn Mother in Greek) scansoria (climber in Latin). The new fossil clearly has some transitional features. In particular, with this mosaic of metatherian and eutherian features, along with many other derived and primitive features, it is clear that Eomaia has transitional characteristics. The animal seems to be specialised for climbing and the speculation is that this ability allowed placentals (and marsupials, who also have climbers in their early lineage) to out-compete the many sub-classes of mammals that have become extinct (only three sub-classes survive today: placentals; marsupials, like kangaroos etc; and monotremes, like the duck-billed platypus - but the Cretaceous was a melting pot of mammalian sub-classes, most of which do not survive to today).
Some of the reasons that the fossil points to a climbing or tree-living habit are:
1: Fore and hind feet of Eomaia show similar proportion and curvature of the phalanges (the bones making up the fingers and toes) to the grasping feet of extant arboreal mammals (opossum, flying lemur and arboreal primates).
2: The first joint of the forefoot digit is curved and it shows compelling evidence of the attachment of a strong muscle that closes the grasp around branches (absent in ground-dwellers).
3: The length of the intermediate phalanx (second joint) of the forefoot as a percentage of the proximal (first joint) varies according to the habit of the animal. It ranges from around 53% in fully terrestrial mammals (eg Metachirus) to 126% in fully arboreal mammals (the flying lemur, Cynocephalus). Scansorial mammals have a lower ratio than fully arboreal animals (eg, the tree shrew, Tupaia, has 61%). E. scansoria is, at 79%, more arboreal than scansorial.
4: On the hindfoot, digits (toes) 4 and 5 are longer than digits 1, 2 and 3: this is a scansorial pattern.
5: On the hindfoot, the bones of digits 4 and 5 are longer than those of digit 3; in terrestrial mammals, the bones of digits 2 and 3 are longer than those of digits 4 and 5.
6: Turning to the claws, they have the curved shape and show the insertion of strong flexor (grasping) muscles, and are very much like the dormouse (Glis glis) and the tree shrew (Tupaia).
7: Clear evidence of insertion of strong climbing muscles in the shoulder blade.
8: The tail is twice as long as the rest of the spine, and the tail vertebrae are elongated.
9: The trapezoid and capitate are small and in the same proportion to the hamate and trapezium (all of these are wrist bones) as in extant climbing and tree-dwelling mammals.
The palaeontological age of the fossil matches very well with the molecular data, which suggests a diversification of the major clades within the placental sub-class at 110 Myr.
This age is estimated by analysing the genetic divergence between the most distant extant placental mammals according to a phylogenetic analysis. The extraordinarily well-preserved fossil includes a detailed fur halo. See also Anne Weil's supporting article in the issue of Nature (2) Half of the fossil of Eomaia scansoria, which is preserved on two facing slabs of rock from China's Yixian Formation from ref (2) 1. Ji et al, The earliest known eutherian mammal, Nature 416, 816 - 822 2. Weil, Mammalian evolution: upwards and onwards, Nature 416, 798 -799 3. Nature Science Update article on the find 4. Reilly and White, Hypaxial motor patterns and the function of epipubic bones in primitive mammals, Science 299, 400 - 402
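The phalangeal-ratio argument in point 3 can be turned into a toy classifier using only the reference values quoted in the text. This is an illustration of the reasoning, not anything from the paper: the thresholds come from the Metachirus, Tupaia and Cynocephalus figures above, and the category labels are invented.

```python
# Reference intermediate/proximal phalanx ratios (percent) quoted in the text.
FULLY_TERRESTRIAL = 53   # eg Metachirus
SCANSORIAL = 61          # the tree shrew, Tupaia
FULLY_ARBOREAL = 126     # the flying lemur, Cynocephalus

def habit(ratio_percent):
    """Map a phalangeal ratio onto the terrestrial-to-arboreal spectrum."""
    if ratio_percent <= FULLY_TERRESTRIAL:
        return "terrestrial"
    if ratio_percent <= SCANSORIAL:
        return "scansorial"
    return "arboreal tendency"

assert habit(79) == "arboreal tendency"  # Eomaia: more arboreal than scansorial
assert habit(61) == "scansorial"         # Tupaia
```

Eomaia's 79% lands above the scansorial tree shrew but well short of the fully arboreal flying lemur, matching the text's "more arboreal than scansorial".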
In a solution, the solvent is generally a liquid, which can be a pure substance or a mixture. The species that dissolves, the solute, can be a gas, another liquid, or a solid. Solubilities range widely, from infinitely soluble such as ethanol in water, to poorly soluble, such as silver chloride in water. The term insoluble is often applied to poorly soluble compounds, although in some cases insolubility means that a compound is very poorly soluble. The solubility equilibrium is relatively straightforward for covalent substances such as benzene. When dissolved in water, the benzene molecules remain intact but interact with and are generally surrounded by molecules of water. When, however, an ionic compound such as sodium chloride (NaCl) dissolves in water, the sodium chloride lattice dissociates into individual ions that are solvated or surrounded by water molecules. Nonetheless, NaCl is said to dissolve in water, because evaporation of the solvent returns crystalline NaCl. When a solute dissolves, it may form several species in the solution. For example, an aqueous suspension of ferrous hydroxide, Fe(OH)2, will contain the series of hydroxo complexes [Fe(H2O)6−x(OH)x](2−x)+ as well as other oligomeric species. Furthermore, the solubility of ferrous hydroxide and the composition of its soluble components depends on pH. In general, solubility in the solvent phase can be given only for a specific solute which is thermodynamically stable, and the value of the solubility will include all the species in the solution (in the example above, all the iron-containing complexes). The solubility of one substance dissolving in another is determined by the balance of intermolecular forces between the solvent and solute, and the entropy change that accompanies the solvation. Factors such as temperature and pressure will alter this balance, thus changing the solubility. Solubility may also strongly depend on the presence of other species dissolved in the solvent, for example, complex-forming anions (ligands) in liquids.
Solubility will also depend on the excess or deficiency of a common ion in the solution, a phenomenon known as the common-ion effect. To a lesser extent, solubility will depend on the ionic strength of liquid solutions. The last two effects can be quantified using the equation for solubility equilibrium. Solubility (metastable) also depends on the physical size of the crystal or droplet of solute (or, strictly speaking, on the specific or molar surface area of the solute). For quantification, see the equation in the article on solubility equilibrium. For highly defective crystals, solubility may increase with the increasing degree of disorder. Both of these effects occur because of the dependence of solubility constant on the Gibbs energy of the crystal. The last two effects, although often difficult to measure, are of practical importance. For example, they provide the driving force for precipitate aging (the crystal size spontaneously increasing with time). The solubility of a given solute in a given solvent typically depends on temperature. For around 95% of solids, the solubility increases with temperature from ambient to 100 °C. In liquid water at high temperatures (e.g., approaching the critical temperature), the solubility of ionic solutes tends to decrease due to the change of properties and structure of liquid water; the lower dielectric constant results in a less polar solvent. Gaseous solutes exhibit more complex behavior with temperature. As the temperature is raised, gases usually become less soluble in water, but more soluble in organic solvents. The chart shows solubility curves for some typical solid inorganic salts. Many salts behave like barium nitrate and disodium hydrogen arsenate, and show a large increase in solubility with temperature. Some solutes (e.g. NaCl in water) exhibit solubility which is fairly independent of temperature. A few, such as cerium(III) sulfate, become less soluble in water as temperature increases.
This is sometimes referred to as "retrograde" or "inverse" solubility. Occasionally, a more complex pattern is observed, as with sodium sulfate, where the less soluble decahydrate crystal loses water of crystallization at 32 °C to form a more soluble anhydrous phase. The solubility of organic compounds nearly always increases with temperature. The technique of recrystallization, used for purification of solids, depends on a solute's different solubilities in hot and cold solvent. A few exceptions exist, such as certain cyclodextrins.

Solubility also depends on pressure; for condensed phases the effect is usually small and can be expressed as

(∂ ln Ni / ∂P)T = −(Vi,aq − Vi,cr) / (RT)

where the index i iterates the components, Ni is the mole fraction of the ith component in the solution, P is the pressure, the index T refers to constant temperature, Vi,aq is the partial molar volume of the ith component in the solution, Vi,cr is the partial molar volume of the ith component in the dissolving solid, and R is the universal gas constant. Liquid solubilities also generally follow this rule. Lipophilic plant oils, such as olive oil and palm oil, dissolve in non-polar solvents such as alkanes, but are less soluble in polar liquids such as water. Synthetic chemists often exploit differences in solubilities to separate and purify compounds from reaction mixtures, using the technique of liquid-liquid extraction. The rate of dissolution and solubility should not be confused as they are different concepts, kinetic and thermodynamic, respectively. Solubility constants are used to describe saturated solutions of ionic compounds of relatively low solubility (see solubility equilibrium). The solubility constant is a special case of an equilibrium constant. It describes the balance between dissolved ions from the salt and undissolved salt. The solubility constant is also "applicable" (i.e. useful) to precipitation, the reverse of the dissolving reaction. As with other equilibrium constants, temperature can affect the numerical value of solubility constant.
The solubility constant is not as simple as solubility; however, the value of this constant is generally independent of the presence of other species in the solvent. The Flory-Huggins solution theory is a theoretical model describing the solubility of polymers. The Hansen solubility parameters and the Hildebrand solubility parameters are empirical methods for the prediction of solubility. It is also possible to predict solubility from other physical constants such as the enthalpy of fusion. The partition coefficient (Log P) is a measure of differential solubility of a compound in a hydrophobic solvent (octanol) and a hydrophilic solvent (water). The logarithm of these two values enables compounds to be ranked in terms of hydrophilicity (or hydrophobicity). Solubility is often said to be one of the "characteristic properties of a substance," which means that solubility is commonly used to describe the substance, to indicate a substance's polarity, to help to distinguish it from other substances, and as a guide to applications of the substance. For example, indigo is described as "insoluble in water, alcohol, or ether but soluble in chloroform, nitrobenzene, or concentrated sulfuric acid". Solubility of a substance is useful when separating mixtures. For example, a mixture of salt (sodium chloride) and silica may be separated by dissolving the salt in water, and filtering off the undissolved silica. The synthesis of chemical compounds, by the milligram in a laboratory, or by the ton in industry, both make use of the relative solubilities of the desired product, as well as unreacted starting materials, byproducts, and side products to achieve separation. Another example of this is the synthesis of benzoic acid from phenylmagnesium bromide and dry ice. Benzoic acid is more soluble in an organic solvent such as dichloromethane or diethyl ether, and when shaken with this organic solvent in a separatory funnel, will preferentially dissolve in the organic layer.
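The partition coefficient mentioned above is simply the base-10 logarithm of the octanol/water concentration ratio. A minimal sketch (the concentration values are illustrative, not measured data):

```python
import math

def log_p(c_octanol, c_water):
    """Partition coefficient: log10 of the octanol/water concentration ratio.

    A positive value indicates a hydrophobic (lipophilic) compound,
    a negative value a hydrophilic one.
    """
    return math.log10(c_octanol / c_water)

# A compound 100x more concentrated in the octanol layer: Log P = 2 (hydrophobic)
print(log_p(100.0, 1.0))
# A compound 10x more concentrated in the water layer: Log P = -1 (hydrophilic)
print(log_p(1.0, 10.0))
```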
The other reaction products, including the magnesium bromide, will remain in the aqueous layer, clearly showing that separation based on solubility is achieved. This process, known as liquid-liquid extraction, is an important technique in synthetic chemistry. However, there is a limit to how much salt can be dissolved in a given volume of water. This amount is given by the solubility product, Ksp. This value depends on the type of salt (AgCl vs. NaI, for example), temperature, and the common ion effect. One can calculate the amount of AgCl that will dissolve in 1 liter of water; some algebra is required. The result: 1 liter of water can dissolve 1.34 × 10−5 moles of AgCl(s) at room temperature. Compared with other types of salts, AgCl is poorly soluble in water. In contrast, table salt (NaCl) has a higher Ksp and is, therefore, more soluble. The following table summarizes the common solubility rules for ionic compounds in water:

| Usually soluble | Usually insoluble |
| Group I and NH4+ compounds | carbonates (except Group I, NH4+ and uranyl compounds) |
| nitrates | sulfites (except Group I and NH4+ compounds) |
| acetates (ethanoates) (except Ag+ compounds) | phosphates (except Group I and NH4+ compounds) |
| chlorides, bromides and iodides (except Ag+, Pb2+, Cu+ and Hg22+) | hydroxides and oxides (except Group I, NH4+, Ba2+, Sr2+ and Tl+) |
| sulfates (except Ag+, Pb2+, Ba2+, Sr2+ and Ca2+) | sulfides (except Group I, Group II and NH4+ compounds) |

When dissolution is incongruent, as with the mineral albite, the solubility is expected to depend on the solid-to-solvent ratio. This kind of solubility is of great importance in geology, where it results in formation of metamorphic rocks.
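The AgCl figure quoted above follows directly from the solubility product. In this sketch the Ksp value is an assumption (a typical literature value for 25 °C, not stated in the text); the text quotes only the resulting solubility:

```python
import math

# Typical literature value for the solubility product of AgCl at 25 degC
# (an assumption - the text above quotes only the resulting solubility).
KSP_AGCL = 1.8e-10

# AgCl(s) -> Ag+ + Cl-; with no common ion, [Ag+] = [Cl-] = s,
# so Ksp = s**2 and the molar solubility is the square root.
s = math.sqrt(KSP_AGCL)
print(s)  # ~1.34e-5 mol/L, matching the figure quoted in the text
```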
“If there is a tragic figure in modern physics it has to be George Sudarshan from Kerala. Sudarshan has been passed over for the Physics Nobel Prize on more than one occasion, leading to controversy in 2005 when several physicists wrote to the Swedish Academy, protesting that Sudarshan should have been awarded a share of the Prize for the Sudarshan diagonal representation in quantum optics, for which Roy J. Glauber won his share of the prize. Worse, Glauber is credited with the Glauber-Sudarshan P-representation, even though it was George Sudarshan who developed it first, with Glauber only adopting it later.” – Dr. N.S. Rajaram

On July 4, 2012, scientists working at the Large Hadron Collider (LHC) at CERN in Geneva announced they had obtained data that suggested they had found evidence for the long-sought-after particle called the Higgs Boson. This was a search that had taken 5000 scientists more than ten years and cost over ten billion dollars, making it the longest and most expensive particle hunt in history. (Two years ago when it was called the Big Bang experiment, the cost was said to be 14 billion dollars, but who is counting?) Considering the scale and cost of the effort, some hyperbole is probably to be expected. It was immediately hailed by the science community worldwide, with Stephen Hawking calling for a Nobel Prize for Peter Higgs, after whom the Higgs Boson and its associated Higgs Mechanism are named. (This is passing strange as we shall see in due course.) Archana Sharma, an Indian member of the LHC, immediately claimed that it was a discovery comparable to Newton's discovery of gravity, Einstein's relativity theory and quantum theory. (Correction: Newton did not discover gravity; he gave a mathematical description of it. Gravity was there all along.) This is a bit extreme, to put it mildly. To begin with, there was no discovery but possible support for the existence of a hypothetical particle postulated by Peter Higgs and others way back in 1964.
It is not proper to compare something like this with major physical theories like relativity or quantum mechanics that changed our view of the world. The Higgs Boson, if proved to exist, at best fills a gap in what is known as the Standard Model used for describing the elementary particles (like proton, electron, etc.) and the forces associated with them. The Higgs Boson is one of a class of elementary (or subatomic) particles called 'bosons' that are used to account for forces at the atomic level. They were named bosons by the English physicist Paul Dirac after the Indian theoretical physicist Satyendra Nath Bose, who first described the statistical behaviour for the special but important case of photons or light particles. This was generalized and extended to other cases by Einstein. This is now called Bose-Einstein statistics. Along with the Fermi-Dirac statistics obtained a couple of years later, it marks the transition from the 'old' quantum theory of Planck and Einstein to particle physics. (The work of Pauli, Schrödinger and Heisenberg similarly led to a 'new' quantum theory.) To understand what this means we need to go back a hundred years and see where physics then stood. At that time the reality of atoms was accepted by most (but not all) scientists. They also knew that Einstein's special relativity connected matter and energy via the equation E = mc², the most famous if not the most important equation in physics. Max Planck in 1900 had introduced the idea of the quantum of energy as a mathematical tool to resolve a paradox in radiation. Einstein in 1905 had extended it to light, claiming that light particles, now called photons, were a physical reality and not just a mathematical trick. Since light waves were already known, this meant there was a duality in nature, with light being both wave and particle, just as matter and energy are different forms of the same thing.
In 1916, when Einstein completed his work on the general theory of relativity, Satyendra Nath Bose (born 1894) was a fresh graduate who had just been appointed a lecturer at the University of Calcutta. The excitement over the new physics, Einstein’s work in particular was so great that Bose taught himself both German and relativity theory which was not then part of the college science curriculum. Within a few years he had mastered both sufficiently to translate Einstein’s papers on special and general relativity from the original German into English, with some help from Meghnad Saha. It was an exciting time in physics brimming with new ideas and results in relativity, quantum physics and atomic physics. In 1921, Bose had moved to the newly established Dhaka University. He was already known as a talented young mathematician who had published several articles in the prestigious Journal of the Royal Society. So it is an error to describe him as a total unknown as many writers have tended to do. No matter, while going over Einstein’s work on quantum theory, Bose had a fundamental idea. He saw that photons tended to move to the same states that were occupied by other photons. He worked out the mathematics and sent his paper to the Royal Society for publication. The Royal Society rejected his paper. These things happen all the time — there is no need to suppose any conspiracy. (In 1934, Nature had rejected Enrico Fermi’s paper on the neutrino later recognized as a landmark.) Undeterred, Bose sent the same paper to Einstein with a request to have it translated into German, for Germany was then the center of quantum physics. Einstein got it published with his own extensions. Bose (with Einstein) had described the statistical behavior of photons or light particles — the now famous Bose-Einstein statistics. 
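For readers who want the formulas behind the two statistics discussed here (standard textbook material, not from the essay itself): the mean number of particles occupying a state of energy E differs between bosons and fermions only by the sign of a single term in the denominator.

```python
import math

def occupancy(energy, mu, kT, statistics):
    """Mean occupation number of a single-particle state.

    statistics: "BE" (Bose-Einstein, minus sign) or "FD" (Fermi-Dirac, plus sign).
    The two distributions differ only by the sign of the 1 in the denominator.
    """
    sign = -1.0 if statistics == "BE" else +1.0
    return 1.0 / (math.exp((energy - mu) / kT) + sign)

# Fermions never exceed occupancy 1 (Pauli exclusion); at E = mu it is exactly 1/2.
print(occupancy(1.0, 1.0, 0.1, "FD"))
# Bosons can pile up into the same state: for the same (E, mu, kT),
# the Bose-Einstein occupancy exceeds the Fermi-Dirac one.
print(occupancy(1.2, 1.0, 0.1, "BE"))
```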
It was only later, after the discovery of the Fermi-Dirac statistics, that it was recognized that there are other particles with similar statistical properties (and spin) as the photon. Dirac coined the term 'boson' to describe them. The Fermi-Dirac statistics describes the behaviour of particles that can only occupy states that are not occupied by any other particle (unless they have opposite spin). They are called fermions (after Fermi). Fermions obey the Pauli Exclusion Principle while bosons do not. Also their spin values are different. Bosons have integer spins (0, 1, 2, …) while fermions have half integer spins (1/2, 3/2, 5/2, …). (The term 'spin' is unfortunate; it does not indicate rotation like a spinning top, but an intrinsic quantum property, a kind of internal angular momentum.) With some oversimplification it may be said that the identification of bosons and fermions as basic elementary (subatomic) particles was the beginning of modern particle physics. All observed elementary particles are either fermions or bosons. At the present state of knowledge in quantum physics, the boundary between bosons and fermions is somewhat blurred, but it can be said that bosons are force carriers while fermions are responsible for mass (matter). The next stage in the development of particle physics was the unification of three basic forces of nature — electromagnetism, weak force (radioactive decay) and strong force (nuclear force). Each is identified with a class of bosons. It is known that photons are the carriers of the electromagnetic field. W and Z bosons are the carriers of the weak force, while gluons are bosons that carry the strong force. Where does the Higgs Boson — the so-called God Particle — fit into this picture? Uncomfortably between bosons and fermions; it is a boson but is also the medium that causes fermions to acquire mass from the Higgs Field through something called the Higgs Mechanism. This scenario was postulated by Peter Higgs along with six others in 1964.
Higgs never claimed to be its sole originator. So where does Peter Higgs fit into the picture if a Nobel Prize is given as suggested by Hawking? Again, uncomfortably with six others. So, if the Hadron data is confirmed to be from the Higgs Boson, it may suggest that the basics of the Higgs Mechanism are valid — that fermions acquire their mass from the Higgs Field mediated by the Higgs Boson. Paradoxically, the Higgs Boson is itself very heavy, even though its role is to explain how other particles acquire mass. What does this mean? Nobody is sure. All measurements are indirect and there are many variables and other possible explanations. The Higgs Mechanism is probably the simplest, or as physicists like to say, the most parsimonious. But nature may not be so kind, God Particle notwithstanding. This, even if confirmed, does not complete the picture. We looked at only three forces, leaving out the fourth and the most important — gravity. Gravity is the weakest and also the most important force in nature. Einstein showed that gravity is really the geometry of space (or space-time), while the quantum world seems to have no geometry, even though quantum physics is essentially a geometric theory based on concepts like Hilbert spaces and operators. Several workers (including this writer) believe that the many paradoxes that plague quantum physics, like particles flying through double slits and non-locality and the like, are the result of this mismatch — of imposing geometry on a space that has no geometry. To get back, we are far away from having a theory that includes gravity. Einstein laboured on such a theory for over forty years but failed. An elementary particle called the graviton (a boson if it exists) has been postulated for the purpose, but it is so small (or weak) that its discovery is beyond the capability of existing or currently foreseeable technology. In the aftermath of the Higgs Boson announcement, there was much hand-wringing in India that S.N.
Bose, who “discovered the boson” (which he didn't), has been forgotten and his contribution ignored in the West. One prominent news channel screamed: “The God Particle's neglected namesake.” The absurdity of this will be clear to anyone who takes the trouble to look through a textbook on modern physics. Most of them probably learnt of him for the first time when the press reported the discovery of the Higgs Boson. Bose's contribution was widely recognized in his time, both in India and the West. It was the English physicist Paul Dirac who coined the term 'boson'. The fact that Einstein himself took the trouble to translate and publish Bose's article should lay to rest the charge that Bose was ignored. Bose did not get the Nobel Prize for his work. Should he have? It is hard to say. He lived and worked during a period when physics was in a state of ferment, with many spectacular discoveries that overshadowed his work. Not only Bose, but also George Gamow, Pascual Jordan and Samuel Goudsmit, who all made important contributions, failed to get it. Neither did J. Robert Oppenheimer or David Bohm later. Although politics probably played a part in the denial of the Prize to Jordan and Bohm, this could not have been the case with Bose. He was simply unlucky to be working when physics was making extraordinary progress with many stalwarts in action. In a leaner period (like the present) he would have stood a better chance. So it cannot be said that Bose was unjustly treated either in his own time or later. It is a different story, however, with another Indian scientist, E.C.G. Sudarshan, who, it can be argued, is the world's greatest theoretical physicist after Richard Feynman. He was cheated out of the Nobel Prize not once but twice — not just passed over but with others being rewarded for what was demonstrably his work. If there is a tragic figure in modern physics it has to be George Sudarshan (born 1931) from Kerala.
Sudarshan has been passed over for the Physics Nobel Prize on more than one occasion, leading to controversy in 2005 when several physicists wrote to the Swedish Academy, protesting that Sudarshan should have been awarded a share of the Prize for the Sudarshan diagonal representation (also known as the Sudarshan-Glauber representation) in quantum optics, for which Roy J. Glauber won his share of the prize. Worse, Glauber is credited with the Glauber-Sudarshan P-representation, even though it was George Sudarshan who developed it first, with Glauber only adopting it later. A similar thing had happened before. In 2007, Sudarshan himself observed: “The 2005 Nobel Prize for Physics was awarded for my work, but I wasn't the one to get it. Each one of the discoveries that this Nobel was given for was work based on my research.” Sudarshan also commented on not being selected for the 1979 Nobel: “Steven Weinberg, Sheldon Glashow and Abdus Salam built on work I had done as a 26-year-old student.” All this is a matter of record that is not disputed by any scientist. Sudarshan is 81 now and one hopes that the Nobel Committee will recognize its error and award him the long overdue Prize. At one time even his name was excluded from the famous representation now known as the Sudarshan-Glauber representation. This at least has been corrected, but it is small consolation for such a great injustice. My appeal to Indian fans is, instead of lamenting over Bose, let us do all we can to see that George Sudarshan gets his due — the Nobel. Such activism led to recognizing J.C. Bose as the true discoverer of the radio rather than Marconi. Finally, how about the Higgs Boson being the 'God Particle'? Forget it. If God (or gods) does (or do) exist, and is as omnipotent as believers hold, he should do better than stake his existence on such a messy and unstable particle as the Higgs Boson, whose own existence is still in doubt.

– Folks Magazine, 26 July 2012

Dr.
Navaratna Srinivasa Rajaram is an Indian mathematician who is notable for his publications with Voice of India. He holds a Ph.D. degree in mathematics from Indiana University, and has published papers on statistics in the 1970s and on artificial intelligence and robotics in the 1980s.
In organic chemistry, cis/trans isomerism (also known as geometric isomerism) is a form of stereoisomerism describing the relative orientation of functional groups within a molecule. It is not to be confused with E/Z isomerism, a related system of absolute stereochemical descriptors used only with alkenes. In general, such isomers contain double bonds, which cannot rotate, but they can also arise from ring structures, wherein the rotation of bonds is greatly restricted. Cis and trans isomers occur both in organic molecules and in inorganic coordination complexes. The terms cis and trans are from Latin, in which cis means "on the same side" and trans means "on the other side" or "across". The term "geometric isomerism" is considered an obsolete synonym of "cis/trans isomerism" by IUPAC. It is sometimes used as a synonym for general stereoisomerism (e.g., optical isomerism being called geometric isomerism); the correct term for non-optical stereoisomerism is diastereomerism.

In organic chemistry

When the substituent groups are oriented in the same direction, the diastereomer is referred to as cis, whereas, when the substituents are oriented in opposing directions, the diastereomer is referred to as trans. An example of a small hydrocarbon displaying cis/trans isomerism is 2-butene. Alicyclic compounds can also display cis/trans isomerism. As an example of a geometric isomer due to a ring structure, consider 1,2-dichlorocyclohexane:

Comparison of physical properties

Cis and trans isomers often have different physical properties. Differences between isomers, in general, arise from the differences in the shape of the molecule or the overall dipole moment.

[Structures of oleic acid (cis) and elaidic acid (trans), discussed below.]

These differences can be very small, as in the case of the boiling point of straight-chain alkenes, such as 2-pentene, whose boiling point is 37 °C for the cis isomer and 36 °C for the trans isomer.
The differences between cis and trans isomers can be larger if polar bonds are present, as in the 1,2-dichloroethenes. The cis isomer in this case has a boiling point of 60.3°C, while the trans isomer has a boiling point of 47.5°C. In the cis isomer the two polar C-Cl bond dipole moments combine to give an overall molecular dipole, so that there are intermolecular dipole–dipole forces (or Keesom forces) which add to the London dispersion forces and raise the boiling point. In the trans isomer on the other hand, this does not occur because the two C-Cl bond moments cancel and the molecule has a net zero dipole (it does however have a non-zero quadrupole). The two isomers of butenedioic acid have such large differences in properties and reactivities that they were actually given completely different names. The cis isomer is called maleic acid and the trans isomer fumaric acid. Polarity is key in determining relative boiling point as it causes increased intermolecular forces, thereby raising the boiling point. In the same manner, symmetry is key in determining relative melting point as it allows for better packing in the solid state, even if it does not alter the polarity of the molecule. One example of this is the relationship between oleic acid and elaidic acid; oleic acid, the cis isomer, has a melting point of 13.4 degrees Celsius, making it a liquid at room temperature, while the trans isomer, elaidic acid, has the much higher melting point of 43 degrees Celsius, due to the straighter trans isomer being able to pack more tightly, and is solid at room temperature. Thus, trans-alkenes, which are less polar and more symmetrical, have lower boiling points and higher melting points, and cis-alkenes, which are generally more polar and less symmetrical, have higher boiling points and lower melting points. In the case of geometric isomers that are a consequence of double bonds, and, in particular, when both substituents are the same, some general trends usually hold. 
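The general trend stated above (the polar cis isomer boils higher, the symmetric trans isomer melts higher) can be sanity-checked against the two examples quoted in this section:

```python
# Boiling and melting points quoted in this section (degrees Celsius).
bp_cis_dichloroethene, bp_trans_dichloroethene = 60.3, 47.5  # 1,2-dichloroethene
mp_cis_oleic, mp_trans_elaidic = 13.4, 43.0                  # oleic vs elaidic acid

# The polar cis isomer has the higher boiling point.
print(bp_cis_dichloroethene > bp_trans_dichloroethene)  # True
# The better-packing trans isomer has the higher melting point.
print(mp_trans_elaidic > mp_cis_oleic)                  # True
```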
These trends can be attributed to the fact that the dipoles of the substituents in a cis isomer will add up to give an overall molecular dipole. In a trans isomer, the dipoles of the substituents will cancel out due to their being on opposite sides of the molecule. Trans isomers also tend to have lower densities than their cis counterparts. Usually, for acyclic systems trans isomers are more stable than cis isomers. This is typically due to the increased unfavourable steric interaction of the substituents in the cis isomer. Therefore, trans isomers have a less exothermic heat of combustion, indicating higher thermochemical stability. In the Benson heat of formation group additivity dataset, cis isomers suffer a 1.10 kcal/mol stability penalty. Exceptions to this rule exist, such as 1,2-difluoroethylene, 1,2-difluorodiazene (FN=NF), and several other halogen- and oxygen-substituted ethylenes. In these cases, the cis isomer is more stable than the trans isomer. This phenomenon is called the cis effect.

E/Z notation

The cis/trans system for naming isomers should generally only be used when there are only two different substituents on a double bond. The application of the terms cis/trans is based on the substituents that form the longest hydrocarbon chain, as reflected in the root name of the molecule (i.e. related to organic nomenclature). The E/Z notation is more reliable (and the IUPAC standard) for tri- and tetrasubstituted alkenes and should then be used. Z (from the German zusammen) means "together". E (from the German entgegen) means "opposite". It is incorrect to say that Z corresponds to cis and E corresponds to trans, since there are cases when this is not true. The terms cis/trans and E/Z are not 100% interchangeable; they are based on different principles. For example, trans-2-chlorobut-2-ene is (Z)-2-chlorobut-2-ene.
Whether a molecular configuration is designated E or Z is determined by the Cahn-Ingold-Prelog priority rules; higher atomic numbers are given higher priority. For each of the two atoms in the double bond, it is necessary to determine the priority of each substituent. If both the higher-priority substituents are on the same side, the arrangement is Z; if on opposite sides, the arrangement is E.

In inorganic chemistry

Diazenes (and the related diphosphenes) can also exhibit cis-trans isomerism. As with organic compounds, the cis isomer is generally the more reactive of the two, being the only isomer which can reduce alkenes and alkynes to alkanes, but for a different reason: the trans isomer cannot line its hydrogens up suitably to reduce the alkene, but the cis isomer, being shaped differently, can.

Coordination complexes

In inorganic coordination complexes with octahedral or square planar geometries, there are also cis isomers in which similar ligands are closer together and trans isomers in which they are further apart. For example, there are two isomers of square planar Pt(NH3)2Cl2, as explained by Alfred Werner in 1893. The cis isomer, whose full name is cis-diamminedichloroplatinum(II), was shown in 1969 by Barnett Rosenberg to have antitumor activity, and is now a chemotherapy drug known by the short name cisplatin. In contrast, the trans isomer (transplatin) has no useful anticancer activity. Each isomer can be synthesized using the trans effect to control which isomer is produced. For octahedral complexes of formula MX4Y2, two isomers also exist. (Here M is a metal atom, and X and Y are two different types of ligands.) In the cis isomer, the two Y ligands are adjacent to each other at 90°, as is true for the two chlorine atoms shown in green in cis-[Co(NH3)4Cl2]+, at left. In the trans isomer shown at right, the two Cl atoms are on opposite sides of the central Co atom.
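The CIP-based E/Z rule described above can be sketched as a toy function. This applies only the first-sphere rule (higher atomic number wins); real CIP priority requires exploring the whole substituent tree, so this is strictly an illustration:

```python
def ez_label(c1_substituents, c2_substituents):
    """Toy E/Z assignment for a double bond with two substituents per carbon.

    Each substituent is (atomic_number, side), with side "up" or "down"
    relative to the double-bond axis. Only the first-sphere CIP rule is
    applied: the higher atomic number wins.
    """
    hi1 = max(c1_substituents, key=lambda s: s[0])
    hi2 = max(c2_substituents, key=lambda s: s[0])
    return "Z" if hi1[1] == hi2[1] else "E"

# trans-2-chlorobut-2-ene: on C2, Cl (17) up and CH3 (6) down;
# on C3, CH3 (6) up and H (1) down. The two methyls are trans,
# but the CIP winners (Cl and CH3) are on the same side -> Z,
# matching the example in the text.
print(ez_label([(17, "up"), (6, "down")], [(6, "up"), (1, "down")]))
```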
A related type of isomerism in octahedral MX3Y3 complexes is facial-meridional (or fac/mer) isomerism, in which different numbers of ligands are cis or trans to each other. Characterizing whether a metal carbonyl compound is fac or mer can be done by using infrared spectroscopy.
I wanted to ask under what conditions charges will not flow in a closed circuit. Or, when is the current through the circuit zero even when an EMF is applied? For example, in the case of a potentiometer, we say that we are measuring the emf of the battery because the current through the secondary circuit is 0. So what condition are we fulfilling here so that the current is 0?

Basically, it is infinite resistance; consider Ohm's law $$ I = V/R $$ if you let $R$ get arbitrarily large, then the current goes to zero.

Current is zero whenever the resistance tends to infinity, because an infinite resistance is of course an open circuit. The same happens with a capacitor: after a long time the current is zero because all the potential is dropped across the capacitor. In the case of a potentiometer, current does not flow in the secondary circuit because when the balance point is achieved the potential drop across both wires is the same; there is no net driving force on the electrons inside the wire (the potentials apply equal and opposite drifts on the electrons), so the galvanometer shows zero deflection.
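To see the limiting case concretely, here is a toy sketch of Ohm's law (the numbers are mine, not from the question):

```python
# Current through an ideal resistor, I = V/R; an open circuit is R = infinity.
def current(voltage, resistance):
    if resistance == float("inf"):
        return 0.0
    return voltage / resistance

print(current(10.0, 5.0))           # 2.0 A
print(current(10.0, float("inf")))  # 0.0 A: open circuit, no charge flows
```

A finite but huge resistance behaves the same way in practice, which is why a voltmeter with a very high input resistance draws essentially no current from the circuit it measures.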
A membrane protein is a protein molecule that is attached to, or associated with, the membrane of a cell or an organelle. More than half of all proteins interact with membranes. Membrane proteins can be classified into two groups, based on the strength of their association with the membrane. Integral membrane proteins are permanently attached to the membrane. They can be defined as those proteins which require a detergent (such as SDS or Triton X-100) or some other apolar solvent to be displaced. They can be classified according to their relationship with the bilayer: - Transmembrane proteins span the entire membrane. The transmembrane regions of the proteins are either beta-barrels or alpha-helical. The alpha-helical domains are present in all types of biological membranes including outer membranes. The beta-barrels were found only in outer membranes of Gram-negative bacteria, cell walls of Gram-positive bacteria, and outer membranes of mitochondria and chloroplasts. - Integral monotopic proteins are permanently attached to the membrane from only one side. Peripheral membrane proteins are temporarily attached either to the lipid bilayer or to integral proteins by a combination of hydrophobic, electrostatic, and other non-covalent interactions. Peripheral proteins dissociate following treatment with a polar reagent, such as a solution with an elevated pH or high salt concentrations. - Further information: Integral membrane proteins, Transmembrane proteins, Peripheral membrane proteins Classification of membrane proteins into integral and peripheral does not include some polypeptide toxins, such as colicin A or alpha-hemolysin, and certain proteins involved in apoptosis. These proteins are water-soluble but can aggregate and associate irreversibly with the lipid bilayer and form alpha-helical or beta-barrel transmembrane channels.
An alternative classification is to divide all membrane proteins into integral and amphitropic. Amphitropic proteins can exist in two alternative states: water-soluble and lipid bilayer-bound, whereas integral proteins can be found only in the membrane-bound state. The amphitropic protein category includes water-soluble channel-forming polypeptide toxins, which associate irreversibly with membranes, but excludes peripheral proteins that interact with other membrane proteins rather than with the lipid bilayer. There are also numerous membrane-associated peptides, some of which are nonribosomal peptides. They can form transmembrane channels (for example, gramicidins and peptaibols), travel across the membrane as ionophores (valinomycin and others), or associate with the lipid bilayer surface, as daptomycin and other lipopeptides do. These peptides are usually secreted, so they should probably be classified as amphitropic, although some of them are poorly soluble in water and associate with the membrane irreversibly.
- Integral membrane proteins
- Transmembrane proteins
- Peripheral membrane proteins
- Ion pump (biology)
- Carrier protein
- Ion channel
- Receptor (biochemistry) (including G protein-coupled receptor)
- General Principles of Membrane Protein Folding and Stability, from the Stephen White laboratory
- Orientations of Proteins in Membranes (OPM) database: 3D structures of integral and amphitropic membrane proteins
- MeSH Membrane+proteins

This page uses Creative Commons Licensed content from Wikipedia.
Check out the picture accompanying this post (courtesy NASA). It is a graphical timeline of the history of our universe. Let's walk through it step by step - from left to right. Quantum Fluctuations: the universe starts off as a tiny blob of quantum fuzz, perhaps 0.000...thirty more zeroes...1 centimeters in size; basically out of void and nothingness - the uncertainty inherent in the laws of quantum mechanics (see post on the quantum world for more). The physics during this period is not well understood - it lies in the realm of String Theory. The temperature of the quantum fuzz is 1000…plus twenty more zeroes degrees… at these temperatures, all the laws of physics are expected to be highly symmetric and the forces of Nature unified. Inflation: the random quantum fluctuations get frozen out during a remarkable period known as Cosmological Inflation: a violent expansion of the universe driven by the repulsive force of dark energy (see post on Dark Energy for more). This lasts only 0.000…thirty or so zeroes…1 seconds; but it is so violent that, at the end, the universe is only about 100 times smaller in size than what we see today! The explosive expansion cools down temperatures to a comfortable 1000…fifteen more zeroes degrees. In the next few seconds, the expansion continues but slows down dramatically (see comments about the graceful exit in another post). The laws of physics lose their symmetric form and the force laws start fragmenting into different branches, as we see them today: electromagnetism, gravity, weak force, and nuclear force. Protons and neutrons form first, then Hydrogen and Helium as the matter condenses out of the vacuum into a cooler universe. By the time we reach a few hundred thousand years since the beginning, atoms abound and the stuff in the universe goes from opaque to transparent: that's the point labeled Afterglow Light Pattern in the timeline.
This is the Cosmic Microwave Background (CMB) radiation that we image today (see other post on the CMB for more). The temperature is now a chilling 3000 degrees. The universe continues to expand and cool down at a slower rate for the next 14 billion years. We first go through the Dark Ages - when witches were burned alive and alchemy was common. Then we have the formation of the first stars about 400 million years since the beginning. Then we get galaxies, and finally here we are living our miserable lives. This picture of the history of our universe crucially relies on that initial critical and delicate period called Inflation - when the universe underwent a violent expansion that stretched space faster than the speed of light. Without the inflationary epoch, it is effectively impossible to realize a universe that looks like ours today (see other post on the multiverse). The video accompanying this post gives an excellent and brief description of what Inflation is, including a discussion by the father of the inflationary theory, Alan Guth. Enjoy.
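As a quick numerical aside (the numbers here are mine, not from the post): the 3000-degree figure at the Afterglow epoch follows directly from the fact that the radiation temperature scales with redshift as T(z) = T0(1 + z).

```python
# CMB temperature today, scaled back to recombination (z ~ 1100),
# recovering the ~3000 degrees quoted for the Afterglow Light Pattern epoch.
T0 = 2.725        # kelvin, measured today
z_recomb = 1100   # approximate redshift of recombination
T_recomb = T0 * (1 + z_recomb)
print(round(T_recomb))  # ~3000 K
```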
Most land snail species that have teeth or lamellae in the apertures of their shells develop them as they near sexual maturity. In a handful of species, such apertural formations are present in young snails, but diminish in number and size or disappear completely in adults. The North American snail Ventridens suppressus is in the latter group. The apertures of the shells of newly hatched Ventridens suppressus are unobstructed. Up to 5 lamellae develop in their apertures as the snails go through their ontogeny. But as they approach maturity, the lamellae get resorbed one by one and the adults end up with 1 small tooth. Sometimes even that disappears completely. Here is one individual that I found recently (shell diameter was ~5 mm). It has 2 lamellae in its aperture. This is Pilsbry's 3rd neanic substage (Fig. 235 in Land Mollusca of North America, vol. II:1). In a paper that came out about a year ago, I hypothesized that in the aperture of the shell of the semi-terrestrial snail Pedipes ovalis, one long lamella functions to protect the pneumostome (breathing hole) from the movements of the foot. I have long been puzzled by the lack of apertural formations in young shells and, in the case of species like Ventridens suppressus, in old shells. If they have a function, why are they not present at all life stages?
I am reading a book for Java, which I am trying to learn, and I have a question. I can't understand the difference between the variable types "char" and "string". For example, there is a difference between "int" and "short": the bytes they occupy in memory and the range of numbers that they can hold. But what is the difference between "char" and "string", except that "char" uses (') and "string" uses (")? PS: It is my first "real" programming language. (At school I learned a fake language for the purpose of the programming lesson.)
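A small sketch of the difference (the class name is just for illustration): a char is a primitive holding a single 16-bit UTF-16 code unit, while a String is an object holding a sequence of chars, with methods of its own.

```java
// char: one primitive character, written with single quotes, no methods.
// String: an object made of chars, written with double quotes, many methods.
public class CharVsString {

    // Strings are built out of chars; charAt pulls one primitive back out.
    static char firstChar(String s) {
        return s.charAt(0);
    }

    public static void main(String[] args) {
        char c = 'A';            // exactly one character
        String s = "Apple";      // a whole object
        System.out.println(c == firstChar(s));  // true
        System.out.println(s.substring(1));     // pple -- a String method
        System.out.println(s.length());         // 5 chars inside the String
    }
}
```

So a String is not a "bigger char": it is a different kind of type entirely (an object type), which happens to be built out of chars.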
Substring is O(n) rather than the O(1) of Java. This is because in .NET, the String object contains all the actual character data itself1 - so taking a substring involves copying all the data within the new substring. In Java, substring can just create a new object referring to the original char array, with a different starting index and length. There are pros and cons of each approach:
- .NET's approach has better cache coherency, creates fewer objects2, and avoids the situation where one small substring prevents a very large char array from being garbage collected. I believe in some cases it can make interop very easy too, internally.
- Java's approach makes taking a substring very efficient, and probably some other operations too.
There's a little more detail in my strings article. As for the general question of avoiding performance pitfalls, I think I should have a canned answer ready to cut and paste: make sure your architecture is efficient, and implement it in the most readable way you can. Measure the performance, and optimise where you find bottlenecks. 1 Incidentally, this makes string very special - it's the only non-array type whose memory footprint varies by instance within the same CLR. 2 For small strings, this is a big win. It's bad enough that there's all the overhead of one object, but when there's an extra array involved as well, a single-character string could take around 36 bytes in Java. (That's a "finger-in-the-air" number - I can't remember the exact object overheads. It will also depend on the VM you're using.)
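For completeness, a minimal Java illustration of the call under discussion (the class name is mine). Note that which complexity you get is an implementation detail of the runtime; Oracle's JDK in fact switched to the copying, .NET-style behavior in Java 7 update 6, so portable code should rely only on the documented result, not on the internal representation.

```java
public class SubstringDemo {

    // Strip one character from each end, e.g. surrounding brackets.
    static String middle(String s) {
        return s.substring(1, s.length() - 1);
    }

    public static void main(String[] args) {
        System.out.println("hello".substring(2));  // llo
        System.out.println(middle("[data]"));      // data
    }
}
```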
Credit: Herb Thornby As you may know, I’m teaching a General Relativity course this term, and so I have spacetime on the brain. So it’s particularly fun when I get relativity-oriented emails. This morning, for instance, I found a question from a reader named Dawn who asks: Okay, the info out there seems to be all about the effects of speed and mass on time e.g. The Twin Paradox, but I just can’t see the why. Take this rather interesting question I came across… ‘If you could spin a carousel fast enough to get its rim moving at nearly the speed of light, would time stand still for people on the carousel?’. So, here, in theory, time would go more slowly for the people on the carousel. Fine. Buy why? And I don’t mean mathematically why, I mean physically why. For time to be affected by speed and mass it must be a ‘thing’ (even if ultimately time may not exist). I have yet to see an animation, model or drawing that shows WHAT is PHYSICALLY happening to this thing called ‘time’. In the same way that we see how atoms and molecules are affected by heat and then understand why things get hot or cold. What is physically happening to the ‘atoms’ of time when they are being subjected to speed or mass? I would be particularly interested to see this in the spinning carousel example. Being able to affect time without traveling. I linked the answer to the original question from the HowStuffWorks website for the rest of your edification, but I’m not surprised that Dawn came away from it with more questions than answers. For those of you who aren’t familiar, let me give you two basic results from relativity: - Moving observers apparently have slow running clocks. The closer you move to the speed of light, the greater the effect. This shows up, notably, in the twin paradox that Dawn mentions in her question. - Likewise, clocks run slow in strong gravitational fields. Clocks on earth, for instance, run slower than clocks in deep space by about 1 part in a billion. 
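That "1 part in a billion" figure is easy to sanity-check. To lowest order, a clock at Earth's surface runs slow relative to one in deep space by the fraction GM/(Rc^2); the script below is my own back-of-the-envelope check, not from the original post.

```python
# Fractional gravitational time dilation at Earth's surface: GM / (R c^2).
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24    # Earth's mass, kg
R = 6.371e6     # Earth's radius, m
c = 2.998e8     # speed of light, m/s

shift = G * M / (R * c**2)
print(shift)  # ~7e-10: about one part in a billion, as quoted above
```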
Part of Einstein’s genius was that he connected these two effects, and noted that functionally, there’s little difference between an observer who’s moving quickly (at the edge of a carousel, for instance), and one who’s on the side of a hill. In both cases, you’d be pushed outwards (downwards). I even did a technical blog post wherein I worked out the details for you. Up top of the page, I’ve included a much nicer illustration from my upcoming book to illustrate the basic idea. But all of that is the “what?” and Dawn, at least, came in already knowing that. Her question more concerns the “why?”. I’m sorry, Dawn, but I think you’re going to find my answer unsatisfying. The reason is that time (unlike an atom) isn’t a thing. To be blunt, an atom isn’t really a fixed thing either, but I think we should probably leave that level of abstraction for another day. Let me give you a quick exercise to put your mind in the right state for what comes next. I stand up and face North, and having done so, I can now describe the objects around me in various ways. There is a chair about 3 feet to my right, a computer monitor 1 foot in front of me, and so on. These coordinates seem like a perfectly valid way of describing things until I decide to pivot. Suppose I turn 90 degrees to my right. “Forward” (North) becomes “Left.” “Right” (East) becomes “Forward,” and so on. It would have been ludicrous for me to think of the forward direction as something concrete because I can easily turn and make it some other direction. Put another way, you’re asking the wrong question. In a previous post, I tried to tackle this question in a geometric way. The idea is that simply by turning, the coordinates that we might label as “x”, “y”, or “z” get switched around but something — in this case, the distance between any two objects — remains the same. Time is more complicated because we feel intuitively (and wrongly) that it is somehow wholly different from the 3 coordinates of space.
While it has a slightly different behavior, the reality is that space-time is really the thing that is unchanged as we turn around or fly through it at high speeds. In other words, nothing is happening to time as you travel close to the speed of light. It was, and remains, inextricably combined with space, and you should simply think of the whole operation as looking at spacetime at another angle. My latest column is up, and it’s a fun one. Does antimatter have antigravity? Only one way to find out for sure! In other news, I’ll be on a panel at the Library Journal Day of Dialog May 29 (NYC). Richard Dawkins, Simon Winchester, and I will be talking about “The Art of Science Books.” It should be pretty awesome. Lots of excitement in the last couple of months before the new book comes out! Our first review is out, and it’s very good! A few choice quotes from our Publisher’s Weekly review: …An informative, math-free, and completely entertaining look at the concept of symmetry in physics… Throughout his fascinating discussion, Goldberg’s writing remains accessible and full of humor… Seasoning his exposé with pop culture references that range from Doctor Who to Lewis Carroll to Angry Birds, Goldberg succeeds in making complex topics clear with a winning style. Read the whole thing here. Credit: New Scientist Enough with the random announcements! It’s time for some science! I have a new column up on io9: Will the Universe End in a Big Rip? For those too impatient to read through, the answer is “maybe,” with a side of “probably not.” But read it anyway. There’s some good cosmology in there. Okay, just one more, and only because it’s pretty insane.
Danica McKellar (who you may also remember as Winnie Cooper from the Wonder Years) has written a really nice blurb for the Universe in the Rearview Mirror: This is a fun and fascinating examination of core physics concepts, explained with humor and levity – and which even includes a look at one of physics’ unsung heroines, a giant upon whose shoulders many physicists have stood: Emmy Noether! —Danica McKellar, actress and New York Times bestselling author of Math Doesn’t Suck In other news: - I’ll be doing a reading and book signing on release day (July 11) at the Rittenhouse Barnes and Noble in Philadelphia at 7:00pm. Come by and say hello. - Discover will be reviewing my book in their July/August issue (out June 11). Check it out! - I should have a new “Ask a Physicist” column out later today at io9. It’s all about the Big Rip! Okay, I promise not to overdo it, but in the last few days, some really awesome people have been saying some really awesome things about my upcoming book, The Universe in the Rearview Mirror: Most physics books can’t really be described as `rollicking,’ but most physics books aren’t written by Dave Goldberg. This book is fun, irreverent, and enjoyable, but also very truthful and illuminating. Buy it for your friend who was always scared of physics, especially if that friend is yourself. - Sean Carroll, theoretical physicist at Caltech, author of The Particle at the End of the Universe Reading this book is like taking a class with the most awesome science professor ever. Goldberg answers the physics questions you secretly want to ask, like whether you’ll ever have a TARDIS and what would happen if Earth were sucked into a black hole. You’ll have so much fun finding out that you won’t realize that you’ve just learned how space and time work at a fundamental level. A must read for anybody who wants to understand the nature of the universe — with jokes. 
- Annalee Newitz, editor and time distortion field operator for io9.com, as well as the author of the upcoming Scatter, Adapt, Remember: How Humans will Survive a Mass Extinction Whether unveiling the mysteries of the Higgs boson, visiting Antworld, or cracking the kaon koan, Dave Goldberg’s masterful explanations of how symmetry shapes the universe will enthrall and enlighten. ~J. Richard Gott, Professor of Astrophysics, Princeton University, and author of Sizing Up the Universe: The Cosmos in Perspective I’m extremely flattered and touched by their kind words. Please do them a solid by showing their books some love. So I found this in my inbox: Unputdownable! This book is tremendous fun for any reader curious about our bizarre and beautiful universe. If only the profound concepts and laws of physics were presented in schools in the clear and fun way Dave Goldberg has in this book, we would attract many more people to science early. –Prof. Priyamvada Natarajan Departments of Astronomy & Physics Chair, Womens Faculty Forum, Yale University Wow! I mean, right? Full disclosure, Priya is a good friend and mentor, but she also doesn’t pull her punches. She even makes me want to read the book again! P.S. If you haven’t already done so, be sure to become a fan on facebook. There’s a lot of good stuff there, including talk announcements, links to articles, discussion of ongoing science and more! Like many of you, one of my earliest and best exposures to popular mathematics writing was John Allen Paulos’s excellent Innumeracy and Beyond Numeracy. He and I have become twitter buddies, and he graciously agreed to blurb my upcoming book: The scope of Dave Goldberg’s The Universe in the Rearview Mirror is almost as vast as the physical universe it does a most impressive job of describing. 
Employing an engagingly informal and often humorous voice, he explains some very profound physical ideas, ranging from the Second Law of Thermodynamics and Maxwell’s demon to Olbers’s paradox of the dark night sky and the mysteries of quantum entanglement. Perhaps most importantly he limns the under-appreciated work of Emmy Noether, whom Einstein described as “the most significant creative mathematical genius thus far produced since the higher education of women began.” Her principle that every symmetry gives rise to a conserved quantity unifies much of physics and Goldberg makes clear why and how. It’s a pretty big honor. If you’re not clear on why, go back and read John’s books, and give yourself a treat. Lately, my inbox hath overflowed. Yesterday, I got a thought-provoking email from a fellow physics instructor. It’s a good, non-intuitive question about special relativity. It’s also got the “technical” tag, so if you’re afraid of a few equations and some truly terrible MS Paint figures, this may not be the right blog for you. Since you might not make it to the end, let me remind you now to like my new facebook page! Our correspondent asks: A box has a mass m. Push on the box and it has an inertia proportionate to m. If by various processes, some of the matter is converted to energy… say by burning fuel, or mechanical to electromagnetic, or radioactive decay… but the energy is still contained in the box. Does the box have the same inertia? If the answer is a simple yes, by $E = mc^2$, how does energy have inertia? I’m going to rephrase this a little bit for concreteness. Suppose you had an essentially massless box, and inside there was 5 kg of matter and 5 kg of antimatter, separated by a magnetic field or some other such contrivance. The total device, of course, would have a mass of 10 kg by any measure you wanted to consider.
Pushing on it with a force of 10 N, for instance, would cause it to accelerate at:

$$a = \frac{F}{m} = \frac{10\ \mathrm{N}}{10\ \mathrm{kg}} = 1\ \mathrm{m/s^2}$$

Likewise, were you to measure the gravitational pull of the box (which would be tough but, in principle, doable), you’d find it has a gravitational mass of 10 kg. No problem so far, but what happens when you remove the membrane, and that 10 kg of mass turns into:

$$E = mc^2 \approx 9 \times 10^{17}\ \mathrm{J}$$

worth of photons. Photons are individually massless particles, so the question is, does your box still have inertial mass? Yes. And it has gravitational mass, too. To understand why we need to delve a little into special relativity, and in particular, into the postulates of special relativity:
- The laws of physics are the same in all inertial frames of reference.
- The speed of light in free space has the same value c in all inertial frames of reference.
This setup is not that dissimilar to how Einstein derived $E = mc^2$ in the first place. So imagine (for simple mathematical convenience) that the light in your box were monochromatic, and the box is stationary, with half of the light traveling to the right, and half to the left. Light does carry momentum, as we have known since Maxwell, and can easily be seen in a radiometer:

$$p = \frac{h\nu}{c}$$

where $h$ is the Planck constant, and $\nu$ is the frequency of an individual photon. In this case, the momentum cancels. But now look at the box from a different inertial perspective, one where the box is moving to the right at $v$. This speed can be much less than the speed of light, and will still produce an interesting answer. The 2nd postulate of special relativity tells us that all photons travel at the same speed. The only thing that changes if you look at them in a moving frame is their frequency (or equivalently, their wavelength). The frequency of the photons in the forward-going direction is higher than it would be if the box were at rest (blueshifted), and in the backward-going direction is _lower_ than it would be if the box were at rest (redshifted).
The relation is:

$$\nu_\pm = \nu\left(1 \pm \frac{v}{c}\right)$$

to first order in $v/c$. So the total momentum of the forward-going photons (writing $E$ for the total energy of the light in the box frame) is:

$$p_+ = \frac{E}{2c}\left(1 + \frac{v}{c}\right)$$

and backward gets a minus sign in two places:

$$p_- = -\frac{E}{2c}\left(1 - \frac{v}{c}\right)$$

Adding them together yields:

$$p = \frac{E}{c^2}\,v$$

Feel free to check my algebra, but the upshot is that there are two terms each in the forward-going and backward-going momenta, and one of them cancels, and one of them adds. This means that the impulse required to push the box is the value above, and since the box is moving at non-relativistic speeds, we can re-write this as:

$$p = \left(\frac{E}{c^2}\right) v$$

The bit in the parentheses is the mass. It’s also worth noting that:

$$\frac{E}{c^2} = \frac{9 \times 10^{17}\ \mathrm{J}}{c^2} = 10\ \mathrm{kg}$$

So yes, a collection of photons has inertial mass because it requires an impulse to increase their net momentum. As a final bonus: does a box of photons have gravitational mass? Absolutely yes! I’m not going to prove this in detail, but I’ll simply give you a flavor for why.
- The equivalence principle of general relativity says that there is no distinguishing between being in free-fall and a true gravitational field. As a result, all massive bodies fall with the same acceleration in the same field. After all, the curvature describes the acceleration, not some inverse square law.
- But Newton’s 3rd law really does hold. It gives rise to conservation of momentum, which means that if my box of photons is accelerated toward the earth, Newton #3 says that the earth must be accelerated toward the box with the same force.
Tada! A box of photons has mass even though each individual photon is massless! Of course, this shouldn’t be such a big surprise. After all, what is the Higgs but a way of turning interaction energy into mass? For that matter, would it surprise you to learn that protons are about 50 times more massive than the quarks that make them up? The rest is all interaction energy.
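The algebra above is easy to check numerically. This little script is my own, with an illustrative (non-relativistic) box speed; it confirms that the box's momentum per unit velocity comes out to E/c^2 = 10 kg.

```python
# Box-of-photons check: half the light's energy E moves forward, half
# backward; in a frame where the box moves at v, the forward half is
# blueshifted by (1 + v/c) and the backward half redshifted by (1 - v/c).
c = 2.998e8          # speed of light, m/s
E = 10 * c**2        # energy of 10 kg of annihilated matter, J
v = 100.0            # box speed, m/s (much less than c)

p_forward = (E / (2 * c)) * (1 + v / c)
p_backward = -(E / (2 * c)) * (1 - v / c)
p_total = p_forward + p_backward

mass = p_total / v   # inertial mass = momentum per unit velocity
print(mass)          # ~10 kg: the box keeps its mass, E / c^2
```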
Perilla ketone is a natural terpenoid that consists of a furan ring with a six-carbon side chain containing a ketone functional group. It is a colorless oil that is sensitive to oxygen, becoming colored upon standing. Perilla ketone is present in the leaves and seeds of purple mint (Perilla), which is toxic to some animals.
When cattle and horses consume purple mint while grazing in fields in which it grows, the perilla ketone causes pulmonary edema, leading to a condition sometimes called perilla mint toxicosis.
How do you make a hurricane? The recipe calls for two key ingredients: heat and moist air. Hurricanes are caused when intensely low pressure areas form over warm ocean waters, usually in the summer and early fall. Warm sea surface temperatures of at least 80 degrees Fahrenheit fuel hurricanes. Water evaporates off the ocean's surface and then condenses to form clouds and rain. This warms the cool air higher up, causing it to rise even further. The rising air is replaced by more warm humid air from the ocean below, and the cycle continues to draw more and more warm moist air into the developing storm, from the ocean surface to the atmosphere. In tropical thunderstorms, winds carry the heat away, but the heat can build up if there is no wind. This causes low pressure areas to form. Because of the low pressure, winds begin to spiral inward towards the center of the low pressure point, much like water going down a drain. Converging winds are those moving in different directions that run into each other. They help form hurricanes by pushing even more moist, warm air upwards. At the same time, high-pressure air in the upper atmosphere begins to be sucked into the low-pressure center ("eye") of the storm. Wind speeds increase, and a hurricane is born. All hurricanes have three main parts. Rain bands are the thunderstorms moving outward from the center. An "eyewall" encircles the center of a storm. That's where the strongest winds occur. The center is called the eye of the hurricane. It contains the warmest air, and very little wind.
<urn:uuid:a1ae16b7-56ee-4665-a2bc-278d19b13cb3>
4.21875
323
Knowledge Article
Science & Tech.
59.339167
An unstrained horizontal spring has a length of 0.30 m and a spring constant of 240 N/m. Two small charged objects are attached to this spring, one at each end. The charges on the objects have equal magnitudes. Because of these charges, the spring stretches by 0.031 m relative to its unstrained length. Determine (a) the possible algebraic signs and (b) the magnitude of the charges.
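A worked check of this problem, as a hedged sketch in Python. The reasoning: the spring stretches, so the electrostatic force must be repulsive, meaning the charges are like-signed; at equilibrium the spring force kx balances the Coulomb force.

```python
import math

# Given values from the problem; k_e is the standard Coulomb constant.
k_spring = 240.0      # N/m, spring constant
stretch = 0.031       # m, extension of the spring
natural_len = 0.30    # m, unstrained length
k_e = 8.99e9          # N·m²/C², Coulomb constant

# (a) The spring stretches, so the charges repel: both positive or both negative.
# (b) At equilibrium, Hooke's-law force equals the Coulomb force.
force = k_spring * stretch                   # F = kx
separation = natural_len + stretch           # charges sit at the stretched length
q = math.sqrt(force * separation**2 / k_e)   # from F = k_e q^2 / r^2

print(f"{q:.2e}")  # ≈ 9.52e-06 C, i.e. about 9.5 microcoulombs
```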
<urn:uuid:ead9dd8e-cd73-4e4a-ae0b-1eb77722e93a>
3.46875
90
Tutorial
Science & Tech.
75.447391
When a variable is declared, it is declared with a particular type. The type tells what kind of values the variable can have, and what the possible operations on that type are. There are eight primitive types in Java; these are the basic building blocks from which all other types are constructed. In addition, Java defines literally thousands of object types, and every program you write has additional programmer-defined object types of its own. Every class defines a type. For example, int is a primitive type that can have integer values, and the operations add, subtract, multiply, divide, and several others. String is an object type supplied by Java, with operations such as substring. When you write a program, you might define a type (class) named Employee with operations of its own.
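The distinction can be sketched in a few lines of Java. The int operations and the String substring operation are standard; the variable names are invented for illustration.

```java
// Illustrates primitive vs. object types, as described above.
public class TypesDemo {
    public static void main(String[] args) {
        // int is a primitive type: values are integers, operations include + - * /
        int hours = 40;
        int overtime = 5;
        int total = hours + overtime;

        // String is an object type supplied by Java, with operations such as substring
        String name = "Ada Lovelace";
        String first = name.substring(0, 3);

        System.out.println(total);  // prints 45
        System.out.println(first);  // prints Ada
    }
}
```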
<urn:uuid:90f1d1b4-c306-4380-b849-89823dd8d21d>
3.796875
156
Documentation
Software Dev.
34.136429
The main benefit of Object Oriented Design (that is, designing a program using classes) is that you can model your program on real life by creating separate components. In practice, let's consider a game, like pacman. If you would describe pacman to someone, you could say "pacman is a game where you move around a figure, which eats up dots and fruits and keeps away from ghosts". In object oriented design, we would analyse that as the following:
- Figure pacman, which moves and eats things
- Eaten things: dots, fruits, ghosts
(note that for clarity I'm leaving out the actual game grid)
We can then transfer this into classes:
- Class Dot is a Thing: lives remain the same
- Class Fruit is a Thing
- Class Ghost is a Thing
Keep in mind that this is just a rough example. The benefit is that when you would like to add a "thing", it's easy to do; when you want to change the score added by eating a dot, it's easy to do. That is because every class forms a separate entity, with a limited interface to the outside. Think of it as going to the cash register of a store. You pay the person at the register, but don't care what happens after that (things like increasing the daily income of the store, updating the stock, etc.); you simply use the interface the cash register provides you. Similarly, classes cooperate with one another through the interfaces they present to one another, which means that the details of implementation are hidden and known only to the class itself. It's also worth mentioning that C++ actually uses a lot of classes in its standard library. Take std::string for instance. That is actually a class, and every std::string you create is an object of that class. So you can't really go about using C++ without using classes (unless you confine yourself to the C subset, in which case, just learn C). If you don't see a constructor defined for a class, the default constructor will just be called.
The default constructor is code generated by the compiler, but it doesn't really do anything to the object you're creating. So to conclude, the use of classes may benefit the clarity of your design and the maintainability of your code, and make your code more reusable (you can easily reuse classes you created before). I say may, because it depends on how you use it; it could also clutter your code into a mess if you don't know what you're doing. If you want a good read on this stuff, have a look at Robert Lafore's Object-Oriented Programming in C++. Hope that helps. All the best, Edit: I'm so slow... sorry!
<urn:uuid:74e446cc-691f-4cce-8558-b20bb5dea40b>
3.625
588
Comment Section
Software Dev.
61.495079
5. Class Relationships
Class declarations define new reference types and describe how they are implemented. Constructors are similar to methods, but cannot be invoked directly by a method call; they are used to initialize new class instances. Like methods, they may be overloaded. Static initializers are blocks of executable code that may be used to help initialize a class when it is first loaded. The body of a class declares members, static initializers, and constructors. The scope of the name of a member is the entire declaration of the class to which the member belongs. Field, method, and constructor declarations may include the access modifiers public, protected, or private. The members of a class include both declared and inherited members. Newly declared fields can hide fields declared in a superclass or superinterface. Newly declared methods can hide, implement, or override methods declared in a superclass.
Visibility modifier: public/private/protected. Return type: void, a primitive type, or a reference to an object. Class methods and class variables are declared with static. A static declaration inside a method changes the lifetime of
© by csfac. All Rights Reserved (2010). It is not allowed to print these pages on a CAST printer. Last modified: 01/April/10 (17:16)
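To make the constructor and static-initializer points concrete, here is a minimal sketch; the Counter class and its members are invented for illustration.

```java
// Demonstrates an overloaded constructor, a static initializer,
// and a static (class) variable shared by all instances.
public class Counter {
    private static int created;      // class variable, declared with static

    static {                         // static initializer: runs once, when the class loads
        created = 0;
    }

    private final int start;         // instance field

    public Counter() { this(0); }    // constructors may be overloaded, like methods
    public Counter(int start) {
        this.start = start;
        created++;
    }

    public int getStart() { return start; }
    public static int getCreated() { return created; }

    public static void main(String[] args) {
        new Counter();
        new Counter(7);
        System.out.println(Counter.getCreated());
    }
}
```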
<urn:uuid:e91054e1-8cf5-4ba9-b19c-e8f87e2ad2a9>
3.640625
290
Documentation
Software Dev.
35.150883
A teacher friend called recently with a strange message. “I just found out that a lot of people don’t know what tree acorns grow on.” He (I will call him John because that’s his name) first became aware of this strange phenomenon after another teacher asked him the question. The other teacher didn’t know. John got to wondering. So he asked one of his high school classes to raise hands if they knew where acorns came from. About two thirds did, so John, long experienced with high school students, asked one of them for the whereabouts of acorns. The student, embarrassed, said he didn’t really know. John addressed the class again: “Perhaps you didn’t understand the question,” and then he repeated it. This time, with the threat of being asked hanging over them, only a handful of the students raised their hands. Perhaps this class was an exception, John thought. He had the opportunity a little later to ask the question of a larger group— about 250 people. Only a handful knew the answer. Asked John of me: “Are we supposed to believe that people are getting a good education?” The truth is, many of us, perhaps most of us, are illiterate about the world of nature. Our attention in life is focused elsewhere. Perhaps the way to resolve this kind of ignorance is to make up computer games based on natural history. But electronic games might not be the remedy for this kind of illiteracy. The problem is that the knowledge achieved would be almost entirely virtual. You could have a game based on identifying bird species— call it “Guess The Bird” — but the knowledge gained would be like that of many birdwatchers. They can name the bird they see, or even hear, but they don’t know the least little bit about how that bird fits into the ecosystem, which is the most important part of learning about them. For instance, which birds depend on acorns for an important part of their food supply? There is nothing wrong with not knowing something that ought to be common knowledge. 
It is only wrong when people don’t know that they don’t know. Everyone today likes to spout off about how we should manage nature but very few of us know enough about the issues (like population carrying capacity, like climate change) to discuss them intelligently. Not knowing where acorns come from is symptomatic of something very perplexing. A culture which is that ignorant is going to be unaware of a great many more facts about nature and that could lead to environmental suicide. A culture that doesn’t know where acorns come from obviously doesn’t know much about trees at all, and so will go heedlessly on destroying forests until it destroys the ecosystems of about half the earth. If you don’t know where acorns come from, you won’t know that acorn flour was once a staple food of native Americans, especially in California, and could be a staple food again. If you don’t know where acorns come from, do you know where oil and coal come from? Do you know where a healthy environment comes from? Do you know, for instance, that a mature shade tree gives off 60 cu. ft. of pure oxygen every day? Do you know where most of the building material for houses comes from? Where good furniture and tool handles come from? Where most fruit and edible nuts come from? Where rubber comes from? Where coconut, varnishes, nutmeg and turpentine come from? Where millions of acres of fertile land came from? Where hundreds of species of wild animals come from, some of which were probably our evolutionary ancestors? Where the life-saving fuel for many millions of people comes from? Will a society that doesn’t know where acorns come from really know where humans come from? Gene and Carol Logsdon have a small-scale experimental farm in Wyandot County, Ohio. All Flesh Is Grass: Pleasures & Promises of Pasture Farming The Lords of Folly (novel) The Mother of All Arts: Agrarianism and the Creative Impulse (Culture of the Land) Image Credit: Wikipedia Quercus subgenus Cyclobalanopsis
<urn:uuid:13dfaa57-1cc0-44c2-a79f-de7af3556843>
2.84375
896
Personal Blog
Science & Tech.
56.751393
Last year about 300 million compact fluorescent bulbs (CFLs) were sold in the U.S., thanks to people being more concerned about global warming. The only bad thing about CFLs is that they contain small amounts of mercury in them, so when they are thrown away and broken the toxic vapors from the mercury can be inhaled by humans. And that can be a health risk. So despite their eco-friendliness, many people think that it is too risky to throw away these bulbs. That is until engineering students and researchers at Brown University developed a material that could absorb the mercury. How does it work? What the students did was they created a prototype of a special lining that would be placed inside CFL packaging. That way if any bulbs broke inside the packaging the mercury would be absorbed safely. Or if the mercury spilled on the floor or a table, the packaging could be placed on top of it to absorb the liquid. You can read more details about the story here. If this material proves to work effectively, it could dramatically impact our environment, especially if the government approves energy-efficient lighting by 2012. Let’s hope that the Brown University students continue improving this amazing discovery.
<urn:uuid:29c91ff3-0de0-44b8-ad1d-86541d04c857>
3.6875
249
Knowledge Article
Science & Tech.
54.235714
BIOTIC Species Information for Balanus spp. |Click here to view the MarLIN Key Information Review for Balanus spp.| |Researched by||Dan Bayley| |Refereed by||This information is not refereed.| |Growth form||Feeding method||Active suspension feeder |Typical food types||Zooplankton, detritus||Habit||Attached| |Bioturbator||Flexibility||None (< 10 degrees)| |Adult dispersal potential||None||Dependency||Independent| |General Biology Additional Information||Feeding Barnacles feed by extending thoracic appendages called cirri out from the shell to filter zooplankton from the water. In the absence of any current, the barnacle rhythmically beats the cirri. When a current is present the barnacle holds the cirri fully extended in the current flow. Barnacles feed most during spring and autumn when plankton levels are highest. Little if any feeding takes place during winter, when barnacles rely on stored food reserves. Feeding rate is important in determining the rate of growth. Barnacles need to moult in order to grow. Frequency of moulting is determined by feeding rate and temperature. Moulting does not take place during winter when phytoplankton levels and temperatures are low. |Biology References||Rainbow, 1984, Barnes et al., 1963, Bassindale, 1964,|
<urn:uuid:e6e878b4-045e-4dfc-aee2-773bc64cf118>
3.15625
314
Knowledge Article
Science & Tech.
29.705803
Introduction of Cloud in Visual Studio 2010
In this article I am going to give a brief introduction to cloud programming in Visual Studio 2010. Cloud computing is internet-based computing, whereby shared resources, software and information are provided to computers and other devices on demand.
Windows Azure: Windows Azure is a Microsoft cloud computing platform. The Windows Azure platform offers an intuitive, reliable and powerful platform for the creation of web applications and services. The Windows Azure platform is comprised of Windows Azure, an operating system as a service; SQL Azure, a fully relational database in the cloud; and .NET Services, consumable web-based services that provide both secure connectivity and federated access control for applications.
Role: A Role is an element or individually scalable component running in the cloud. Each instance corresponds to a virtual machine in the cloud.
Web Role: A Web Role is similar to a web application running on IIS. It is accessible via an http or https endpoint. A Worker Role is used to host a background processing application behind a Web Role.
Storage Services: Scalable storage solutions that provide the support needed to sustain scaling loads: blob, queue and table storage services in Microsoft Azure storage.
Microsoft SQL Azure: A cloud-based relational database service built on SQL Server technology.
A local simulation environment enables local development and testing of cloud applications. Visual Studio 2010 provides support for programming cloud-based services. There are three areas where Microsoft provides developers with a hook into cloud-based programming: development and configuration tools; a local testing environment. Install the Windows Azure tools for Visual Studio to get started with cloud programming.
To create a new Role, you can create a new project and use one of the templates, such as the ASP.Net Web Role template, the WCF Service Role template, and the ASP.Net MVC Web Role template. Web Roles contain additional references to assemblies compared to a standard ASP.Net web app, such as: The cloud service project is a deployment project that defines which roles are included, and their definition and configuration files. A simulation environment is provided locally through IIS to develop and test cloud-based services. The Development Fabric provides a local test environment enabling developers to develop and test cloud-based applications on a local machine. First, develop and build the application locally using the Windows Azure Development Fabric and local storage. Once the application is tested locally, run it in mixed mode: run the application locally using a Windows Azure storage account. Once the application is validated in mixed mode, deploy and test it on Windows Azure using a Windows Azure storage account and id. Each Windows Azure hosted service has a private Staging deployment area and a public Production deployment area. A Windows Azure account id can be provisioned online at windows.azure.com. The provisioning process provides options to set a friendly account name, select affinity groups, etc. Affinity groups indicate tight coupling for storage and hosted services, where possible. When you deploy the application to Staging on Windows Azure, the Role is in the Allocated state. Once you click Run, the Role is in the Initializing state. When the Role is ready, it moves to the Started state. For testing your application in the staging environment, you can navigate to the staging URL, which is a URL starting with a GUID. Once everything is tested, the Role can be promoted to the Production area. From this stage, the Role is available to end users. For on-going changes/fixes, you can modify the development environment. In order to maintain separate storage for staging and production, use separate storage bindings for deployment and staging.
When you expect the load to increase, the instance count can be increased by modifying and re-deploying the service.
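The instance count for a role lives in the service configuration file (ServiceConfiguration.cscfg). A sketch follows, with the service name and role name invented for illustration; redeploying with a higher count scales the role out.

```xml
<?xml version="1.0"?>
<!-- Hypothetical service and role names; only Instances/@count matters here. -->
<ServiceConfiguration serviceName="MyCloudService"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="WebRole1">
    <!-- Raise this value and redeploy to run more VM instances of the role. -->
    <Instances count="2" />
    <ConfigurationSettings />
  </Role>
</ServiceConfiguration>
```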
<urn:uuid:feb4200d-f897-4f54-b55b-d0b718420301>
2.9375
785
Truncated
Software Dev.
30.533373
Joined: 03 Oct 2005 |Posted: Wed Dec 21, 2005 10:29 am Post subject: Nanoparticle Drug-Release in Response to pH Level in Cells |Researchers at Georgia Institute of Technology Use Polymeric Nanoparticles to Release a Drug in Response to a Cancer Cell’s pH Environment One of the characteristics that distinguish a cancer cell from a normal cell is the former’s acidic interior. Researchers have had some success capitalizing on this difference by creating nanoparticles that are stable in the more basic environment of the blood stream and the extracellular environment but that unfold when they enter the acidic, or low pH, environment inside a cancer cell, releasing their drug payload then. Now, a research team from the Georgia Institute of Technology has created a new polymeric nanoparticle that not only releases its payload under acidic conditions, but also disintegrates into small, non-toxic molecules that should be easily degraded by the body. Reporting their work in the journal Bioconjugate Chemistry, Niren Murthy, Ph.D., and his graduate student Michael Heffernan describe the methods they developed to create a new polymer that they call PPADK. This polymer contains an unusual chemical linkage that causes it to fall apart when the pH drops below 5. The pH of blood and the interior of healthy cells, in contrast, is about 7.4. When mixed vigorously with water, the polymer forms stable nanoparticles. If drug molecules are included in the mixture, the drug becomes entrapped within the nanoparticle structures. This work is detailed in a paper titled, “Polyketal nanoparticles: a new pH-sensitive biodegradable drug delivery vehicle.” An abstract is available through PubMed. Source: NCI Alliance for Nanotechnology in Cancer. This story was posted on 20 December 2005.
<urn:uuid:e5a9a882-5cfe-4ee6-b848-41fdf166530a>
3
378
Comment Section
Science & Tech.
35.311531
THERE'S water, water, everywhere in the cosmos, but how it comes about in the interstellar clouds that give birth to stars, planets and even life is a bit of a mystery. The answer, it seems, may lie on the surface of frosty dust grains. When hydrogen and oxygen exist as gases water forms easily, but models of interstellar clouds suggest that this route is unlikely to produce the abundance of water seen in them. Most of the water we see has formed icy sheaths around tiny grains of dust in the clouds, and it is believed oxygen atoms accumulate on the grains and react with hydrogen to form water. Akira Kouchi and colleagues at Hokkaido University in Sapporo, Japan, tested the idea by freezing oxygen onto a nucleation surface held at 10 degrees above absolute zero. When they fired atoms of hydrogen onto the oxygen, hydrogen peroxide was produced, which ...
<urn:uuid:26f0f7ba-2972-4e5e-8eea-1922c48e8686>
4.09375
210
Truncated
Science & Tech.
44.025868
I wonder if you can help. I work with science teachers and have recently been given an Aepinus condenser. I have never come across one before and, having searched for ages on the Internet, cannot find out how to use the equipment. Can you please explain to me how you operate one?
How about presenting it as a simple charge storage device like a Leyden jar? It should be able to accept a fairly large number of electrons and store them until discharged with an acceptable shorting device (and a spectacular spark?). Depending upon the size of the plates and atmospheric conditions, I would think there could be a significant charge accumulated. I would need to go back to my college physics book to decipher just how much. Of course, inserting an insulator such as a plywood disk should obviously affect things. An explanation of the beast would include the following: The excess electrons from the generator are stored on one plate, making it negative in polarity. This accumulation of negative charges repels the electrons on the second plate (like charges repel, and the ground gives them somewhere to go), causing it to have a net positive charge. The air separating the plates serves to prevent electrons from flowing from plate to plate until the accumulated charge produces a potential (voltage) that overcomes the air's resistance and arcing occurs. The wider the gap, the higher the potential that can be achieved, although the charge will dissipate more quickly than in a modern capacitor. The setting would lead to discussion and calculations about potential, charge density, electron flow, and static charge, amongst others.
R. W. "Bob" Avakian
B.S. Earth Sciences; M.S. Geophysics
Oklahoma State Univ. Inst. of Technology
Update: June 2012
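The mention of potential and charge density invites a quick classroom calculation. Below is a minimal parallel-plate sketch of the device; the plate size and stored charge are assumed values chosen purely for illustration, and edge effects are ignored.

```python
import math

EPS0 = 8.854e-12          # F/m, permittivity of free space (standard constant)

def capacitance(area_m2, gap_m):
    """Ideal parallel-plate capacitance C = eps0 * A / d, ignoring edge effects."""
    return EPS0 * area_m2 / gap_m

area = math.pi * 0.10**2  # assumed: 20 cm diameter circular plates
charge = 5e-8             # C, an assumed accumulated charge

for gap in (0.005, 0.010, 0.020):          # widen the air gap between plates
    v = charge / capacitance(area, gap)    # V = Q / C
    print(f"gap {gap*1000:.0f} mm -> {v/1000:.1f} kV")
```

Widening the gap lowers the capacitance, so the same stored charge sits at a higher potential, consistent with the observation above that a wider gap allows higher potentials.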
<urn:uuid:899989dc-718b-4dc7-acb5-6432bf13f39c>
3.296875
392
Q&A Forum
Science & Tech.
43.803446
J. Craig Venter, one of the first geneticists to sequence the human genome, has been called many things: arrogant, antagonistic, even daring to play God. But no matter how polarizing he may be, few of his colleagues or critics would deny that he has made monumental advancements in the world of science. This year Venter and his team at the J. Craig Venter Institute (JCVI), a not-for-profit genomic-research organization, announced in the journal Science that they had created the very first synthetic life-form, by creating DNA using chemicals in a lab and inserting it into a living bacterium. The feat itself is astounding, an age-old science-fiction plot come true, but perhaps just as astounding are the possibilities it suggests, such as tailor-made species that could produce mass amounts of food or biofuel. But worries remain about the possibility that a synthetic organism could mutate or adapt in unexpected and dangerous ways. Certainly Venter has sparked fundamental questions about both our scientific and philosophical ideas of what factors do (and should) constitute life.
<urn:uuid:f685c212-927b-425d-8952-df36cf1c1923>
2.859375
218
Truncated
Science & Tech.
38.55566
It is often observed that girls do not perform as well as boys in mathematics. This difference is often overstated, and its cause is highly debated. Many people have suggested that the basis for this difference is essentially biological. It is now well established that a society’s attitude toward gender will significantly affect the performance of its girls in mathematics. That was the result of a study described in the May 30, 2008 edition of Science (available only to subscribers online) in an article called “Culture, Gender and Math.” That study attempted to analyze the cause of the “gender gap” (the difference between the scores of boys and girls) in mathematics. The conclusion of this comprehensive study is that “Social conditioning and gender biased environments can have a very large effect on test performance.” The study examined cultural attitudes regarding women in various countries and compared them to math achievements of girls in those same countries. It found that the gender gap in math tends to disappear in more gender-equal societies. The authors of the study commented that the math gender gap has been narrowing over time in the United States. These conclusions dovetail well with the concerns raised by Mary Pipher in her book, Reviving Ophelia.
<urn:uuid:9ec879ca-8ad7-4b97-a4ae-98dfff6f1017>
3.46875
258
Knowledge Article
Science & Tech.
36.208109
Thursday, May 3, 2012 - 14:30 in Earth & Climate WASHINGTON (AP) -- Greenland's glaciers are hemorrhaging ice at an increasingly faster rate but not at the breakneck pace that scientists once feared, a new study says.... - Greenland ice cap melting faster than everThu, 12 Nov 2009, 14:47:21 EST - Fast-shrinking Greenland glacier experienced rapid growth during cooler timesThu, 14 Jul 2011, 15:33:48 EDT - Greenland rapidly rising as ice melt continuesTue, 18 May 2010, 14:21:24 EDT - Greenland's glaciers losing ice faster this year than last year, which was record-setting itselfMon, 15 Dec 2008, 10:32:21 EST - 2 Greenland glaciers lose enough ice to fill Lake ErieTue, 24 May 2011, 12:04:55 EDT
<urn:uuid:2bb9f182-d9aa-49e0-b126-f97f2cc48309>
2.71875
173
Content Listing
Science & Tech.
68.458523
1. You encounter many organic compounds every day. Some organic compounds are ethanol (grain alcohol) and acetone (nail polish remover). Please list other organic compounds which can be found around you in everyday life. 2. A hydrocarbon is an organic compound that mainly contains carbon and hydrogen. Which of the following compounds are not classified as hydrocarbons? Alcohol, salt, LPG, vinegar, gasoline, fat, formalin, oil, base, LNG, acid, acetone. 3. Explain the characteristics of the carbon atom that allow it to form so many compounds. 4. Please explain the following keywords concerning hydrocarbon groups: aliphatic, aromatic, branched chain, cyclic chain, closed chain, opened chain. 5. Which are examples of organic compounds that have a long chain? Petroleum, methane, glucose, kerosene, sucrose, gasoline, alcohol. 6. Write down the different formulas of hydrocarbon compounds: general formula, molecular formula, empirical formula, structural formula, condensed structural formula, line formula, and skeleton formula for each of the following compounds, which have 4 C atoms each: alkane, alkene, alkadiene, alkyne, and cycloalkane. 7. Hydrocarbon compounds can be classified as covalent compounds. However, the covalent bonds of hydrocarbons vary. Explain the type of covalent bond in alkane, alkene, alkadiene, alkyne, and cycloalkane. 8. A hydrocarbon in which all carbon atoms are bonded to the maximum number of hydrogen atoms is classified as a saturated hydrocarbon. Explain this, and also the other one. 9. Write down the name of each compound below and give two isomers of each: n-heptane, 2,2-dimethylbutane, 2-methylhexane, cyclopentane. 10. Write down the formulas of the common names for the following alkyl groups: methyl, ethyl, propyl, isopropyl, butyl, isobutyl, tert-butyl, sec-butyl. 11. A homologous series of alkanes is a series of alkane compounds in which one compound differs from a preceding one by - CH2 -. Explain by giving an example. 12.
Determine the numbers of primary, secondary, and tertiary C atoms in CH3(CH2)3CH(CH3)(CH2)2C(CH3)3. 13. Which compound is likely to have the highest boiling point and which has the lowest one? n-butane, n-hexane, n-decane, 2-methylpropane, 3,3-dimethyloctane. 14. Write the molecular formula and condensed structural formula of 2,4-dimethylpentane, then find two isomers. 15. Make a simple scheme for the types of isomerism, then give an example of each: all types of isomers that you have learnt.
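The general formulas asked for in question 6 can be sketched in a few lines. The formulas used here are standard chemistry (alkane CnH2n+2; alkene and cycloalkane CnH2n; alkyne and alkadiene CnH2n-2); the helper function name is my own.

```python
# Generate the molecular formula for each hydrocarbon family at a given carbon count.
def molecular_formula(family, n):
    hydrogens = {
        "alkane": 2 * n + 2,       # saturated: CnH2n+2
        "alkene": 2 * n,           # one double bond: CnH2n
        "alkadiene": 2 * n - 2,    # two double bonds: CnH2n-2
        "alkyne": 2 * n - 2,       # one triple bond: CnH2n-2
        "cycloalkane": 2 * n,      # one ring: CnH2n
    }[family]
    return f"C{n}H{hydrogens}"

# The 4-carbon members asked for in question 6:
for family in ("alkane", "alkene", "alkadiene", "alkyne", "cycloalkane"):
    print(family, molecular_formula(family, 4))
```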
<urn:uuid:80584ca9-232b-4a75-93e4-5bd3865702a4>
3.734375
632
Tutorial
Science & Tech.
41.170669
Of the new candidates, 68 are one-and-a-quarter times the size of the Earth or smaller — smaller, that is, than any previously discovered planets outside the solar system. Another 50 of these so-called exoplanets are in the habitable zones of their stars, where temperatures should be moderate enough for liquid water, the essential stuff for life as we know it; two of these are less than twice the size of Earth. In a separate announcement, to be published in Nature on Thursday, a group of Kepler astronomers led by Jack Lissauer of Ames said they had found a star with six planets — the most Kepler has yet found around one star, orbiting in close ranks in the same plane inside what would be the orbit of Mercury.
<urn:uuid:9c4b54e9-62c2-4c91-99cf-64946a1c96d0>
2.953125
177
Truncated
Science & Tech.
31.877911
Have you ever wondered how electricity is produced by a photovoltaic — what we often call a PV or solar electric — system? We'll help you understand by covering the basics of PV technology, which includes the underlying physics, how various PV devices are designed and become fully functional systems, and what's happening today in PV research and development. The Solar Energy Technologies Program of the U.S. Department of Energy (DOE) and its partners are adding to our fundamental knowledge and expertise in this area while improving the technologies that put the abundant energy of sunlight to work for us. To help you delve further into this fascinating topic, we've compiled additional information sources at the bottom of many of these pages that will direct you to other pages within our own Web site, as well as to other helpful Web sites. While perusing this material, you may wonder what a specific term means. If so, visit our solar glossary for a comprehensive listing of renewable energy and electrical terms. What do we mean by photovoltaics? First used in about 1890, the word has two parts: photo, derived from the Greek word for light, and volt, relating to electricity pioneer Alessandro Volta. So, photovoltaics could literally be translated as light-electricity. And that's what photovoltaic (PV) materials and devices do — they convert light energy into electrical energy (the photovoltaic effect), as French physicist Edmond Becquerel discovered as early as 1839. Commonly known as solar cells, individual PV cells are electricity-producing devices made of semiconductor materials. PV cells come in many sizes and shapes — from smaller than a postage stamp to several inches across. They are often connected together to form PV modules that may be up to several feet long and a few feet wide. Modules, in turn, can be combined and connected to form PV arrays of different sizes and power output.
The size of an array depends on several factors, such as the amount of sunlight available in a particular location and the needs of the consumer. The modules of the array make up the major part of a PV system, which can also include electrical connections, mounting hardware, power-conditioning equipment, and batteries that store solar energy for use when the sun isn't shining. Did you know that PV systems are already an important part of our lives? Simple PV systems provide power for many small consumer items, such as calculators and wristwatches. More complicated systems provide power for communications satellites, water pumps, and the lights, appliances, and machines in some people's homes and workplaces. Many road and traffic signs along highways are now powered by PV. In many cases, PV power is the least expensive form of electricity for performing these tasks. Photovoltaic devices can be made from various types of semiconductor materials, deposited or arranged in various structures, to produce solar cells that have optimal performance. In this section, we first review the three main types of materials used for solar cells. The first type is silicon, which can be used in various forms, including single-crystalline, multicrystalline, and amorphous. The second type is polycrystalline thin films, with specific discussion of copper indium diselenide (CIS), cadmium telluride (CdTe), and thin-film silicon. Finally, the third type of material is single-crystalline thin film, focusing especially on cells made with gallium arsenide. We then discuss the various ways that these materials are arranged to make complete solar devices. The four basic structures we describe include homojunction, heterojunction, p-i-n and n-i-p, and multijunction devices. A photovoltaic (PV) or solar cell is the basic building block of a PV (or solar electric) system. An individual PV cell is usually quite small, typically producing about 1 or 2 watts of power.
To boost the power output of PV cells, we connect them together to form larger units called modules. Modules, in turn, can be connected to form even larger units called arrays, which can be interconnected to produce more power, and so on. In this way, we can build PV systems able to meet almost any electric power need, whether small or large. By themselves, modules or arrays do not represent an entire PV system. We also need structures to put them on that point them toward the sun, and components that take the direct-current electricity produced by modules and "condition" that electricity, usually by converting it to alternating-current electricity. We might also want to store some electricity, usually in batteries, for later use. All these items are referred to as the "balance of system" (BOS) components. Combining modules with the BOS components creates an entire PV system. This system is usually everything we need to meet a particular energy demand, such as powering a water pump, or the appliances and lights in a home, or, if the PV system is large enough, all the electrical requirements of a whole community.
Energy Payback Times for Photovoltaic Technologies
Energy payback time (EPBT) is the length of deployment required for a photovoltaic system to generate an amount of energy equal to the total energy that went into its production. Roof-mounted photovoltaic systems have impressively low energy payback times, as documented by recent (year 2004) engineering studies. The value of EPBT is dependent on three factors: (i) the conversion efficiency of the photovoltaic system; (ii) the amount of illumination (insolation) that the system receives (about 1700 kWh/m2/yr average for southern Europe and about 1800 kWh/m2/yr average for the United States); and (iii) the manufacturing technology that was used to make the photovoltaic (solar) cells.
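The EPBT definition reduces to simple arithmetic. A hedged Python sketch follows; the 2-year EPBT is an assumed, illustrative value, not a number from the cited study, while the 30-year lifetime matches the assumption noted with the table below.

```python
# Back-of-envelope EPBT arithmetic: payback time vs. system lifetime.
LIFETIME_YEARS = 30.0        # assumed period of performance

def epbt_stats(epbt_years):
    """Given an energy payback time, derive the fraction of lifetime output
    consumed by manufacture, and its reciprocal (energy out per energy in)."""
    fraction_used = epbt_years / LIFETIME_YEARS
    multiple = LIFETIME_YEARS / epbt_years
    return fraction_used, multiple

frac, mult = epbt_stats(2.0)  # e.g. an assumed 2-year payback
print(f"{frac:.1%} of lifetime output, {mult:.0f}x return")  # 6.7% ..., 15x
```

Note the two derived quantities are reciprocals of each other (up to the lifetime factor), which is why the last column of such tables is the reciprocal of the percentage column.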
With respect to the third factor, i.e., manufacturing technology, there are three generic approaches for manufacturing commercial solar cells. The most common approach is to process discrete cells on wafers sawed from silicon ingots. Ingots can be either single-crystal or multicrystalline; in either case, however, the growing and sawing of ingots is a highly energy-intensive process. A more recent approach, which saves energy, is to process discrete cells on silicon wafers cut from multicrystalline ribbons. The third approach involves the deposition of thin layers of non-crystalline-silicon materials on inexpensive substrates; it is the least energy-intensive of the three generic manufacturing approaches for commercial photovoltaics. This last group of technologies includes amorphous silicon cells deposited on stainless-steel ribbon, cadmium telluride (CdTe) cells deposited on glass, and copper indium gallium diselenide (CIGS) alloy cells deposited on either glass or stainless-steel substrates.

Recent research has established EPBT values for battery-free, grid-tied systems built from several (year 2004 to early 2005) photovoltaic module technologies (see Table 1). In Table 1, the values in the last column are the reciprocals of the respective values in the third column. It is seen that, even for the most energy-intensive of these four common photovoltaic technologies, the energy required for producing the system does not exceed 10% of the total energy generated by the system during its anticipated operational lifetime. Future research will extend the table to include amorphous silicon and CIGS alloys.

Table 1. System Energy Payback Times for Several Different Photovoltaic Module Technologies. (1700 kWh/m2/yr insolation and 75% performance ratio for the system compared to the module.)
Column headings:
- Energy Payback Time (EPBT)1 (yr)
- Energy Used to Produce System Compared to Total Generated Energy2 (%)
- Total Energy Generated by System Divided by Amount of Energy Used to Produce System2

Rows:
- Non-ribbon multicrystalline silicon
- Ribbon multicrystalline silicon

Notes:
1. V. Fthenakis and E. Alsema, "Photovoltaics energy payback times, greenhouse gas emissions and external costs: 2004-early 2005 status," Progress in Photovoltaics, vol. 14, no. 3, pp. 275-280, 2006.
2. Assumes 30-year period of performance and 80% maximum rated power at end of lifetime.

Related Links on Photovoltaic Research and Development

Discover what research and development (R&D) is taking place in the field of photovoltaics within the Department of Energy's Solar Energy Technologies Program and within various research areas at the national laboratories. Headquartered at the National Renewable Energy Laboratory, the National Center for Photovoltaics is the nation's premier research facility for PV or solar electricity.

National Renewable Energy Laboratory (NREL) Solar Research Program

NREL is involved in the following:
- Photovoltaic research, especially through the nation's premier PV research facility, the National Center for Photovoltaics;
- Solar Thermal research, including R&D in concentrating solar power and solar heating, through NREL's Center for Buildings and Thermal Systems; and
- Solar Radiation research, performed at NREL's Solar Radiation Research Laboratory.

Systems engineering at NREL advances the performance and reliability of photovoltaic systems, and develops technology that can be integrated into residential and commercial structures.

NREL's Cadmium Use in Photovoltaics Web Site

The Cadmium Use in Photovoltaics Web site provides a look at the very real environmental and health benefits of using CdTe to make solar electricity and how to carefully weigh these against the perceived risks.

U.S. DOE Solar Program's PV Manufacturing R&D Project

The Thin-Film PV Partnership Web site provides a database of relevant and up-to-date resources about thin films, serving the thin-film community and more general audiences with information on amorphous silicon, copper indium diselenide, cadmium telluride; environment, safety, and health; and module reliability.
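The payback arithmetic behind Table 1 can be sketched in a few lines of Python. This is a minimal illustration of the relationships stated above (the percentage column is the payback time divided by the operational lifetime, and the last column is its reciprocal); the 30-year lifetime follows the table's stated assumption, while the 3-year EPBT in the example is a hypothetical value, not a figure taken from the table.

```python
def energy_fraction(epbt_years, lifetime_years=30.0):
    """Energy used to produce the system as a fraction of the total
    energy it generates over its operational lifetime."""
    return epbt_years / lifetime_years

def energy_return(epbt_years, lifetime_years=30.0):
    """Total energy generated divided by the energy used to produce
    the system (the reciprocal of energy_fraction)."""
    return lifetime_years / epbt_years

# A hypothetical module with a 3-year payback over a 30-year life
# consumes 10% of its lifetime output, i.e. returns 10x its input.
print(energy_fraction(3.0))  # 0.1
print(energy_return(3.0))    # 10.0
```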
<urn:uuid:4472f7db-11e8-4d78-85cd-3095167edaf2>
3.6875
2,104
Knowledge Article
Science & Tech.
30.781454
Sampling ocean, seep, and hot-spring fluids in the deep sea

During this expedition, as the R/V Thompson transits from near shore to blue water environments, we will be sampling at depths of <500 to >2500 m to investigate the chemistry of ocean water. Specific sites include those off Grays Harbor, Washington, Newport, Oregon, Hydrate Ridge, and Axial Seamount. Fluid samples will be obtained using Niskin bottles on a rosette on a conductivity-temperature-depth (CTD) frame. These samples will be filtered to study microorganisms (archaea and bacteria) in different marine environments. Fluids will also be analyzed using a shipboard in situ flow cytometer that allows analysis and rapid sorting of individual organisms. The Thompson is the only ship in the UNOLS fleet that currently hosts this capability.

Hydrate Ridge: Methane-rich fluids at Hydrate Ridge will be sampled using titanium "syringes" and Niskin bottles with the robotic vehicle Jason. Here, the methane-rich fluids with less sulfide support dense mats of chemosynthetic microbial communities. Bacterial communities include sulfate-reducing bacteria and methanogens. Methane-rich fluids also support communities of very large clams. Methane issuing from the seafloor forms distinct plumes in the water column above the gas hydrate deposits, and these fluids will be sampled using the CTD.

Axial Volcano: This large volcano hosts multiple active hydrothermal fields with numerous anhydrite-, barite-, and sulfide-rich chimneys. Fluids issuing from the vents are very enriched in magmatic gases such as carbon dioxide, reflecting the presence of molten basalt beneath the volcano. Titanium syringes will also be used to sample the 300°C hydrothermal fluids, in a similar fashion to the sampling techniques used at Hydrate Ridge. In the past, numerous vents issued fluids that were boiling, but more recent studies by NOAA-PMEL investigators show that boiling activity has decreased.
Shore-based chemical analyses will be completed on the fluids, as well as investigation of microorganisms that thrive in this extreme environment.
<urn:uuid:62ea4618-032f-4b00-9a39-593c3a2ef41c>
2.953125
445
Knowledge Article
Science & Tech.
29.031106
During the 70s and into the early 80s, C compilers were relatively easy to come by for personal computers, although most only did a subset of C (which is why you'll see so many different "tiny C" compilers adverts in the older magazines). Pascal was a larger more cumbersome language back in the days when only the wealthiest computer hobbyists had hard drives (and a 5 meg hard drive was several hundred dollars). For the Apple 2 (my first computer, and it wasn't even a "plus"), running Pascal required purchasing an extra memory card (it needed 64k of RAM!) and took several floppies to load up, while "tiny C" compilers fit on a single floppy (and could get by with 16k of RAM). Pascal was taught in computer science curricula, while C was mostly self-taught (sometimes taught in electrical engineering curricula). Pascal got a reputation among the cowboy coders for being a "bondage and discipline language", which I thought was undeserved as they never met ADA. The major drivers of Pascal in the 80s were Apple (because the APIs used Pascal calling standards) and Borland. Borland's "Turbo" compilers were probably the best available ones in the marketplace, and the "like a book" license made them a lot more popular than companies with more vicious licensing. Borland lost their lead in the development market when Microsoft hired away their lead developers and project managers (such as Hejlsberg, Gross and more than 35 others), eventually developing .NET and Visual Studio. Borland and Microsoft settled the lawsuit a couple years later, but Borland never recovered from the loss. In my opinion, Delphi started withering at that time (as the folks who gave it focus and drive were hired away), and the change in CEO at the same time took Borland away from a compiler company into an ALM (application lifecycle management) company, changing their name to Inprise a couple years later. The ashes of Borland are now owned by Micro Focus.
<urn:uuid:811b0bb6-e3d6-4950-8c92-4b4eab75d4b9>
2.703125
423
Q&A Forum
Software Dev.
40.253221
Reverse Every K Nodes Of A Linked List
August 26, 2011

Here is another from our collection of interview questions:

Given a list of elements and a block size k, return the list of elements with every block of k elements reversed, starting from the beginning of the list. For instance, given the list 1, 2, 3, 4, 5, 6 and the block size 2, the result is 2, 1, 4, 3, 6, 5.

Your task is to write a function to solve the sublist-reversal problem. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.
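Since the exercise leaves the solution to the reader, here is one possible sketch in Python (my own, not the site's suggested solution). It uses a minimal hypothetical Node class plus helpers for building and flattening lists; because the problem statement does not say what to do with a final block shorter than k, this version simply reverses that partial block too.

```python
class Node:
    """A minimal singly linked list node."""
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def from_list(values):
    """Build a linked list from a Python list; returns the head."""
    head = None
    for v in reversed(values):
        head = Node(v, head)
    return head

def to_list(head):
    """Flatten a linked list back into a Python list."""
    out = []
    while head:
        out.append(head.value)
        head = head.next
    return out

def reverse_k(head, k):
    """Reverse each successive block of k nodes in the list."""
    # Reverse the first block of up to k nodes in place.
    prev, cur, count = None, head, 0
    while cur and count < k:
        cur.next, prev, cur = prev, cur, cur.next
        count += 1
    # 'head' is now the tail of the reversed block; attach the
    # recursively processed remainder of the list to it.
    if head:
        head.next = reverse_k(cur, k)
    return prev if prev is not None else head
```

For the example above, `to_list(reverse_k(from_list([1, 2, 3, 4, 5, 6]), 2))` yields `[2, 1, 4, 3, 6, 5]`.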
<urn:uuid:7e29a6a4-d963-4cb2-9819-b57257992062>
2.6875
154
Tutorial
Software Dev.
62.180636
Observing Basics: Finder Scopes

In this episode, learn how this small auxiliary telescope helps you better tour the night sky. A finder scope has a lower magnification than your main telescope, so it can provide a wider view of the sky, which allows you to more easily star-hop your way from target to target. These instruments come with a label of AxB, where A is the magnification and B is the aperture of the finder scope's objective lens in millimeters. This designation is in the same format used by most binoculars.

Expand your observing at Astronomy.com

Check out Astronomy.com's interactive StarDome to see an accurate view of your sky. This tool will help you locate this week's targets.

Intro to the Sky: Get to know the night sky. Learn how to use star charts, find constellations, and observe the brightest objects in your night sky in this handy reference section.

Tour the solar system: The Sun. Explore the characteristics of our home star and the methods scientists are using to study it in this informative video.

The Sky this Week: Get a daily digest of celestial events coming soon to a sky near you.

After you listen to the podcast and try to find the objects, be sure to share your observing experience with us by leaving a comment at the blog or in the Reader Forums.
<urn:uuid:a04cc19f-addb-4d5d-951f-a8185b43b340>
3.46875
283
Tutorial
Science & Tech.
53.509916
POLAR BEAR (Ursus maritimus)

HABITAT: Polar bears live throughout the ice-covered waters of the circumpolar Arctic, with distribution dependent on food availability and sea-ice conditions; they are most often found at the convergence of sea ice and open water, and where seals congregate. These bears are totally reliant on the sea ice as their primary habitat, using it for a number of essential activities including hunting and feeding on seals, seeking mates and breeding, making long-distance movements, accessing terrestrial maternity denning areas, and sometimes even maternity denning itself. Polynyas — areas of open water surrounded by ice and caused by fluctuations in wind, tide or current — are sites of increased marine mammal and bird concentrations and are extremely important to polar bears.

RANGE: This circumpolar species is found in and around the Arctic Ocean, with its southern range limited by pack-ice availability and its southernmost occurrence at James Bay in Canada. The world's currently recognized 20 polar bear populations occur within the jurisdictions of the United States (Alaska), Canada, Denmark (Greenland), Norway, and Russia.

MIGRATION: Some polar bears make extensive north-south migrations in response to ice packs receding northward in the spring and advancing southward in the fall. In addition, individuals may travel vast distances to find mates or food and have been seen 100 miles from the nearest land or ice. In October and November, males head out onto the pack ice, where they spend the winter, while pregnant females seek sites on land or nearshore sea ice to dig dens in the snow, where they spend the winter and give birth.

BREEDING: Like other members of the bear family, female polar bears have small litters, reach breeding age late in life, and produce few young in their lifetime.
They mate on the sea ice in either April or May, after which a female must accumulate sufficient fat reserves to live and to support her cubs from the time she enters the maternity den between late October and mid-November until the time the family emerges in the spring and she again begins to feed. Cubs are born in snow dens between late November and early January, with timing varying by region and population. Because of their vulnerability at birth, cubs must remain in the maternity den, where the temperature warms to near freezing. They nurse inside the den until sometime between late February and the middle of April, depending on the latitude. The age at which mothers wean their cubs also varies by region, though in most areas cubs are weaned at approximately 2.5 years of age, resulting in a three-year reproductive cycle. After a period of several weeks' acclimatization, the mother and cubs begin their trek to the sea ice to feed on seals.

LIFE CYCLE: Polar bears can live up to 25 or 30 years in the wild.

FEEDING: The top Arctic predators, polar bears primarily eat ringed seals but also hunt bearded seals, walrus, and beluga whales, and will scavenge on beached carrion such as whale, walrus, and seal carcasses found along the coast. These bears often eat only seals' skin and blubber, leaving the carcass for other animals to scavenge and thus playing a critical role in the Arctic food chain.

THREATS: The greatest threat to polar bears is global warming, which is affecting the Arctic far more intensely than the rest of the world and is rapidly causing the bears' sea-ice habitat to melt away. Other grave threats include oil and gas development, environmental contaminants such as PCBs, industrial noise and harassment from increased Arctic shipping and other activities, and overhunting in some areas. Global warming will likely interact with several of these additional threats to further increase the polar bear's peril.
POPULATION TREND: Polar bear numbers increased following the establishment of hunting regulations in the 1970s and today stand at 20,000 to 25,000. The rapid decline of Arctic sea ice due to global warming has reversed this trend, and currently at least five of the 19 polar bear populations, including those in Western Hudson Bay, are declining. Scientists estimate that if the Arctic continues its melting trend, the worldwide polar bear population will decline by two-thirds by 2050 and will be near extinction by the end of the century. As actual sea-ice melting has proceeded much faster than predicted by scientific models, population declines may occur much faster as well.
<urn:uuid:f95ab735-f350-40b4-a94d-f5392366eb91>
3.8125
964
Knowledge Article
Science & Tech.
33.193977
What you are looking at is a faint planetary nebula. A planetary nebula is a star near the end of its life cycle. It has already burned through the hydrogen and helium that once fueled it, and is left with a very hot white dwarf composed largely of carbon. This remnant still emits intense ultraviolet radiation, and this radiation ionizes the gas of the star's old shell, which is being gently ejected outward, making it glow. Eventually, the star will cool down, and the glow will vanish. Contrary to its name, a planetary nebula has nothing to do with planets. It received its name in the early days of nebula discovery, when such objects were thought possibly to be more planets.

This particular image is of planetary nebula NGC 1514. It is part of the Taurus constellation and approximately 1000 light-years away from Earth, with a linear size of about half a light-year. Here the light emitted from the white dwarf star can be clearly seen near the center of the image. Surrounding it is a blotchy, blue nebula shell that is slowly expanding away from the star. With a larger telescope or a longer exposure time, the ring formed by the nebula's expanding shell would have been seen more clearly. Scattered about the image are many other nearby stars, including one that appears to be merging with the white dwarf star; more likely, it is a foreground star.

Bennett, Jeffrey, Megan Donahue, Nicholas Schneider, and Mark Voit. The Cosmic Perspective. 6th ed. San Francisco: Addison-Wesley, 2010. 543-44.

TheSky Astronomy Software. Copyright 1984-2000. Software Bisque.

Right Ascension (J2000): 04:09:14.7
Filters used: blue (B), green (V), red (R), and clear (C)
Exposure time per filter: 60 seconds in CBVR
October 26, 2010 (CBVR)
<urn:uuid:08962fcc-4a45-4c61-8dc9-a2a71eea6da8>
3.671875
409
Knowledge Article
Science & Tech.
61.285092
Thank you for the opportunity to talk with you today about invasive species. I am David Strayer, a Senior Scientist at the Cary Institute of Ecosystem Studies, an independent ecological think-tank in Millbrook, New York. I have a Ph.D. in Ecology and Evolutionary Biology from Cornell University, and have been doing research on freshwater ecology, conservation ecology, and invasive species for more than 30 years. I have published more than 100 scientific articles on these subjects, and was elected as a Fellow of the American Association for the Advancement of Science in 2002 in recognition of these contributions.

Invasive species are one of the largest environmental problems facing us today

Humans have carelessly moved thousands of species outside their native ranges through activities such as transfer of ballast water, release of pets and bait, movement of untreated wood, escapes from agriculture and aquaculture, and deliberate release of species that we thought to be beneficial. Many of these species have had large, unwanted impacts on ecosystems, economies, and human health. We don't have a good comprehensive accounting of the total effects of invasive species in New York, the United States, or the world. However, we do know that invasive species are one of the most important ways that humans are changing ecosystems all around the world, that they cause enormous economic damages (estimated to be more than $100 billion per year in the United States alone), and that they harm and kill Americans every year. We also know that New York is one of the most heavily invaded parts of the world, and suffers disproportionately from the impacts of invasive species.

Invasive species can cause large changes to our ecosystems

Even a single invasive species can turn an ecosystem upside-down. Let me illustrate this point by describing how the zebra mussel changed the Hudson River.
Our research group has been studying the Hudson continuously since before zebra mussels arrived, so we have good measurements of zebra mussel impacts. Zebra mussels are small mollusks native to Europe that came into North America in the mid-1980s, probably in untreated ballast water. They first appeared in the Hudson in 1991, and by the end of 1992 had reached a population of 550 billion animals. This population weighed more than the combined weight of all other animals (fish, zooplankton, insects, native shellfish, etc.) in the river. Zebra mussels have remained abundant since then.

Zebra mussels are filter-feeders, and the huge population in the river filtered a volume of water equal to all of the water in the Hudson every 1-4 days. As a result, the amount of plankton in the river dropped by 80%. Because plankton is one of the important foundations of the food web, many other species were affected. For example, 1000 tons of fish food disappeared (this amounts to half of all the fish food in the river). One result of this was that the number and growth rate of Atlantic shad dropped substantially. This fish is doing so poorly that the historically important commercial and recreational fisheries for this species were closed for the first time ever in 2010, and the zebra mussel has just added to its problems.

Zebra mussels changed nearly everything that our group measures about the Hudson – water chemistry, water clarity, and the populations of many other plants and animals in the river. Humans have done many things to the Hudson over the years, but it is hard to argue that any had a greater impact on the ecosystem than the introduction of this one invader.

New York is filled with invasive species and receives more every year

The zebra mussel is not the only species that humans have brought into the Hudson. A study led by Professor Edward Mills of Cornell University found that the fresh waters of the Hudson River basin contain more than 120 non-native species.
Six or seven new species arrive each decade. Just in the past few years, we've seen snakehead fish, hydrilla plants, Chinese mitten crabs, and Asian clams appear in the Hudson basin, all species capable of ecological and economic harm. And it's not just the Hudson that's being invaded – every assembly district in the state now suffers economic and ecological damage from established invaders, and is endangered by new invaders waiting at the doorstep.

New Yorkers living in cities and suburbs probably will pay billions of dollars to remove and replace ash trees killed by the emerald ash borer, communities along the Great Lakes and Finger Lakes are seeing valuable fish killed by viral hemorrhagic septicemia, the autumn tourist and maple sugaring industries are imperiled by the spread of the Asian long-horned beetle, our farmers have to deal with stink bugs, and plum pox, and now wild boars, hundreds of New Yorkers have been sickened and dozens killed by the West Nile virus, and so on and so on.

There are many good opportunities to reduce the spread and impacts of invasive species

There is no good reason why we should continue to endanger our ecosystems, our economy, and our health by continuing to allow invasive species to move freely around the globe. It is often said that species invasions are an inevitable consequence of globalization. I suppose that is true, in the same sense that pollution is an inevitable consequence of industrialization. But just as we know that we can have careless industrialization with a lot of pollution or careful industrialization with little pollution, we can have globalization with many damaging species invasions, or globalization with a few damaging species invasions. And just as we have learned for pollution, it is usually much cheaper to prevent problems with invaders than to clean them up after they occur. We have many good tools to reduce the movement and impacts of invasive species, but we need to use them.
We can treat ballast water, keep potentially harmful species out of the pet, horticulture, and aquaculture trades, stop moving untreated wood, better inspect our ports of entry, and educate the public. Here in New York, we have made a good start with the Invasive Species Council, the Office of Invasive Species Coordination, and the programs they oversee, but these programs have too few people and too little money to take advantage of all of the good opportunities we have today to better manage invasive species.

I worry that our children and grandchildren will some day look out onto a world filled with undesirable and unmanageable invasive species, as they pay bills for problems that we created, and wonder why we did so little when we understood very well that invasive species were a problem and had the tools to stop them. Why are we doing so little?

Thank you for your attention.
<urn:uuid:d1ae7559-af13-4fe2-889b-823cd7f5aadc>
3.375
1,357
Audio Transcript
Science & Tech.
33.078097
What Happens After A Failure?

Last week, I distinguished between two kinds of software failure. In one kind, the program fails to produce a correct result; in the other, it produces an incorrect result. The difference between these two cases is whether the software's user can tell that the program has failed. For now, I want to concentrate on the case in which the program's user knows that it has not produced a correct result.

Imagine that you have asked a program to do something for you, and it has reported that it is unable to do so. What do you do next? The answer depends on the context, including what you can learn about what the program did instead of what you asked it to do. At one extreme, the program might have failed in a way that makes further progress impossible, such as by destroying critical data. At the other extreme, the failure might be in a network that does not require anything beyond a "best effort" to handle incoming data — in which case you can simply disregard the failure and wait for whatever is on the other end of the network to try again.

Even these two simple examples suggest outlines of what it might be useful to know about a part of a program that has failed:

- Do we know what it actually did?
- Did the failure leave the system in a state from which it is possible to proceed?
- If it is possible to proceed, what do we need to do in order to be able to use the failing component in the future?

If these questions have answers at all, those answers are probably not known until the failure has occurred. The desire to be able to answer such questions is a key motivation for the design of exception-handling facilities in languages that have them. In order for us to be able to answer our three questions after a failure,

- The program has to be able to report how it failed in a way that lets us identify the nature of the failure, and
- The failure must be documented in enough detail so that we can figure out how, and if, we can proceed.
Perhaps the most important factor in being able to continue after a failure is that we must be able to understand the damage that the failure caused. This understanding typically comes from documentation, rather than from the program itself.

For example, suppose we call a sort function. Such a function typically works by comparing elements of the data structure that we give it, and any of those comparisons might fail. If a comparison fails, the sort function reports the failure back to us.

In order for us to figure out how to continue, one question will be crucially important: Can we be confident that the data structure that we tried — and failed — to sort is a permutation of its original contents? That is, do we know that every element in the original data structure is still there, and that all that might have changed are the relative positions of the elements?

This kind of question is very unlikely to be answered in the code. Instead, we would hope to be able to find some kind of guarantee in the sort function's documentation that if the data cannot be sorted, the function will throw an exception, after which the data we were trying to sort will be some kind of rearrangement of the original data, with all values still present. In effect, such documentation will help us understand what the failed program did before it failed, which will help us answer our other two questions.

In short, when part of a program can detect — reliably — that it cannot continue, it can use some kind of exception-handling mechanism to report this inability back to its caller. The fact that an exception has occurred, coupled with the author's description of what the program has done at the point when it reports an exception, is what makes it possible to figure out how to proceed after the exception.

Next week, we'll look more closely at what kinds of guarantees it is feasible for a program to make in the context of exceptions.
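To make the sorting example concrete, here is a small Python sketch of my own (not code from this column) that turns the documentation question into an explicit guarantee: the wrapper sorts a copy and commits the result only if every comparison succeeds, so after a failure the caller knows exactly what the function did instead, namely nothing at all.

```python
def sort_with_strong_guarantee(items, key=None):
    """Sort 'items' in place.

    If any comparison or key computation raises, the exception is
    propagated and 'items' is left exactly as it was, so the caller
    can answer "what did it do instead?" with "nothing at all".
    """
    scratch = list(items)   # work on a copy of the data
    scratch.sort(key=key)   # may raise, e.g. on unorderable elements
    items[:] = scratch      # commit only after the sort succeeded
```

A failed call then leaves a well-defined state: sorting `[3, 'a', 2]` raises `TypeError` (integers and strings are unorderable in Python 3), and the list still equals `[3, 'a', 2]` afterward.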
<urn:uuid:8aa0a94b-da0a-4dc6-b595-dec472f91a64>
2.96875
819
Personal Blog
Software Dev.
48.57979
from the Charles-plans-new-use-for-his-sailing-skills dept. Senior Associate Charles Vollum reports "An article on CNN's website discusses recent successful demonstrations of sails for space propulsion at JPL and Wright-Patterson Air Force Base." This entry was posted on Thursday, July 6th, 2000 at 4:35 PM and is filed under Space. You can follow any responses to this entry through the RSS 2.0 feed. You can leave a response, or trackback from your own site.
<urn:uuid:4d3ecbe7-5cb2-4735-b96d-c36de3b514aa>
2.828125
116
Comment Section
Science & Tech.
72.253577
This week's element is molybdenum, which will please the biologists who are reading because it is essential for life. Molybdenum has the symbol Mo and atomic number 42. It is a beautiful, lustrous silver-coloured metal that was often confused with lead, hence its name, which comes from the Greek, molybdos, for lead.

Molybdenum is an element of extremes. It is quite brittle, but it makes strong alloys that do not expand upon heating when substituted for tungsten. Thus, molybdenum alloys are in demand for aircraft parts, electrical contacts, high-speed drill bits and other items exposed to high temperatures. Molybdenum-based lubricants are also in demand for use at high temperatures.

But most important, in my opinion, is that molybdenum is essential for life -- all life, from bacteria to birds. Of course, this includes humans. Interestingly, several studies found that a scarcity of molybdenum in the Earth's early oceans was a limiting factor for the evolution of eukaryotic life (plants and animals) for nearly two billion years. The reason is that eukaryotes cannot capture and use inorganic nitrogen, so they depend upon prokaryotic bacteria to capture and "fix" nitrogen into a usable, organic, form (doi:10.1038/nature06811). For this, they have an enzyme known as nitrogenase that requires molybdenum to function properly.

Currently, molybdenum is more common in the oceans than on land; it is the 25th most abundant element in the oceans (average: 10 parts per billion) whereas it is the 54th most abundant element in the Earth's crust -- but it didn't start out this way. Due to the lack of oxygen in the early oceans, molybdenum-containing minerals located on the sea floor did not dissolve and thus this element was not available to nitrogen-fixing bacteria. Only after oxygen levels increased in sea water was molybdenum available to these microbes, which then fixed nitrogen, making that element available to eukaryotes.
This, in turn, supported the ensuing explosive evolutionary diversification of life on Earth.

Our favourite professor, Martyn Poliakoff, tells us more about molybdenum use by the enzyme nitrogenase:

Video journalist Brady Haran is the man with the camera and the University of Nottingham is the place with the chemists. You can follow Brady on twitter @periodicvideos and the University of Nottingham on twitter @UniNottingham

You've already met these elements:

Niobium: Nb, atomic number 41
Zirconium: Zr, atomic number 40
Yttrium: Y, atomic number 39
Strontium: Sr, atomic number 38
Rubidium: Rb, atomic number 37
Krypton: Kr, atomic number 36
Bromine: Br, atomic number 35
Selenium: Se, atomic number 34
Arsenic: As, atomic number 33
Germanium: Ge, atomic number 32
Gallium: Ga, atomic number 31
Zinc: Zn, atomic number 30
Copper: Cu, atomic number 29
Nickel: Ni, atomic number 28
Cobalt: Co, atomic number 27
Iron: Fe, atomic number 26
Manganese: Mn, atomic number 25
Chromium: Cr, atomic number 24
Vanadium: V, atomic number 23
Titanium: Ti, atomic number 22
Scandium: Sc, atomic number 21
Calcium: Ca, atomic number 20
Potassium: K, atomic number 19
Argon: Ar, atomic number 18
Chlorine: Cl, atomic number 17
Sulfur: S, atomic number 16
Phosphorus: P, atomic number 15
Silicon: Si, atomic number 14
Aluminium: Al, atomic number 13
Magnesium: Mg, atomic number 12
Sodium: Na, atomic number 11
Neon: Ne, atomic number 10
Fluorine: F, atomic number 9
Oxygen: O, atomic number 8
Nitrogen: N, atomic number 7
Carbon: C, atomic number 6
Boron: B, atomic number 5
Beryllium: Be, atomic number 4
Lithium: Li, atomic number 3
Helium: He, atomic number 2
Hydrogen: H, atomic number 1

Here's a wonderful interactive Periodic Table of the Elements that is just really really fun to play with!
<urn:uuid:2bb59b63-a342-485c-9cbd-ff6c1f33adcb>
4.375
997
Listicle
Science & Tech.
49.651892
SCIENCE IN THE NEWS I'm Bob Doughty with Sarah Long, and this is the VOA Special English program SCIENCE IN THE NEWS. This week -- a report on the powerful storms of this past month ... news of a new dinosaur, and the oldest modern European ... and, a look at one way feeling happy may be good for the health. Earlier this month, South Korea was hit by its strongest ocean storm since records began a century ago. At least one-hundred-seventeen people were killed in Typhoon Maemi. And there were damage estimates of four thousand million dollars. The following week, Hurricane Isabel tore into the mid-Atlantic coast of the eastern United States. American officials said the powerful storm was responsible for at least forty deaths. And, in northwestern Mexico, Hurricane Marty was blamed last week for five deaths. Hurricanes and typhoons are the same thing. Weather scientists call them hurricanes when the storms develop east of the International Date Line. They call them typhoons when the storms happen west of the date line. And they call the same kind of powerful ocean storm a cyclone when it forms in the Indian Ocean. Weather experts use these names to describe storms that have winds of more than one-hundred-twenty kilometers an hour. Experts in different countries are responsible for observing storm movements and warning people about any danger. Warning centers are found in twenty-two places. These include Bangladesh, Burma, China, Fiji, Hong Kong, India, Japan, New Zealand, South Korea, Thailand, Vietnam and the United States. Technology has given weather experts better tools to do their job. Satellites observe weather conditions from space. Radar systems gather information from the ground. Airplanes can drop special instruments into storms. These devices record information about air movements. Weather experts have gotten better in recent years at telling where an ocean storm will hit land. 
But they say they still need to improve their ability to tell how powerful the storm will be when it gets there. They say part of the problem is because the storms develop over water. This makes it more difficult to measure exact conditions. However, scientists are working to improve their ability to tell what is happening inside a storm. They say this should lead to better predictions of the intensity. Scientists in Switzerland have announced the newest discovery of a set of dinosaur footprints in the Jura mountains. They say the prints were found along several ancient paths near the village of Chevenez. The footprints suggest that the dinosaurs were probably three to four meters high. Scientists believe the dinosaurs that left them were sauropods. Sauropods are among the biggest animals ever. They had huge bodies with very small heads at the end of long necks. They also had long, powerful tails. Sauropods ate plants. The prints are believed to be about one-hundred-fifty million years old. They would have been left during the Jurassic Period. That time in Earth's history is named for the Jura mountains. The area is shared by Switzerland and France. Scientists have found many fossils of similar age there. The Jurassic period began about one-hundred-eighty-million years ago. It lasted about fifty million years. Sauropods were on Earth long before the Jurassic period. In fact, scientists in South Africa recently announced the oldest known sauropod fossils. The bones are about two-hundred-twenty-million years old. They date back to the middle of the Triassic period. James Kitching is a widely known fossil hunter at Witwatersrand University in Johannesburg. He found the bones of the two-ton creature in nineteen-eighty-one. But they were wrongly identified at the time as belonging to an ancestor of sauropods. Recently another scientist at Witwatersrand, Adam Yates, re-examined the bones. He decided that they represented a new kind of sauropod. Mr. 
Yates named the new sauropod Antetonitrus (ant-ee-tone-ite-rus). That is Latin for "before the thunder." The name connects the dinosaur to a well-known plant eater that came later. Brontosaurus is a Latin word meaning "thunder lizard." Scientists knew that plant-eating dinosaurs with four legs, like sauropods, developed from older ones with two legs. Mr. Yates says Antetonitrus walked on four legs but still had the ability like its ancestors to hold things. Adam Yates and James Kitching worked together on the new study. The Proceedings of the Royal Society in Britain published their research. Fossil researchers in the United States also made an announcement recently. They believe they have identified the oldest known fossil in Europe of a modern human. They say the jawbone is from the mouth of someone who lived around thirty-five-thousand years ago. Three Romanian cave explorers found the jawbone last year in the Carpathian mountains of Romania. Other face and head bones were found in the same cave earlier this year. Scientists used the process called radiocarbon dating to find the age of the jawbone. They say they expect to find the other bones the same age. Erik Trinkaus of Washington University in Saint Louis, Missouri, was one of the two leaders of the research. He says the bones show some qualities found in earlier periods of human development. He says not only is the face very large, but so are the jaws and the teeth. This is especially true of the wisdom teeth at the back of the mouth. Mr. Trinkaus says the bones possibly show that early modern humans and Neanderthals had children together. At the time, early modern humans existed with Neanderthals as that species was disappearing in Europe. But scientists disagree about whether the two groups mixed. Mr. Trinkaus and Romanian researcher Oana Moldovan reported their results in The Proceedings of the National Academy of Sciences. The Journal of Human Evolution will publish a separate report later. 
A small study in the United States shows how brain activity may influence the body's defenses against disease. The findings also appear in The Proceedings of the National Academy of Sciences. Richard Davidson led the study. He directs the Laboratory for Affective Neuroscience at the University of Wisconsin in Madison. Professor Davidson and his team worked with fifty-two people chosen from a long-term health study. All were between the ages of fifty-seven and sixty. The team wanted to study electrical activity in the prefrontal cortex of the brain. Earlier studies linked increased activity in the right side of this area with depression, anger and sadness. Greater activity in the left side has been linked with happier emotions. The team asked the people to think about two events – one that made them happy and another that made them sad, fearful or angry. Each time, the researchers measured the electrical activity in both sides of the prefrontal cortex. Next, the people received injections of vaccine against the influenza virus. Like other vaccines, it is designed to increase the number of antibodies in a person's defense system. Antibodies fight infection. The researchers wanted to know if the people who showed more activity in the left side would also show greater protection after the vaccine. Over the next six months, the researchers took blood from the fifty-two people to count the antibodies against influenza. They found higher levels of antibodies in the people who had more activity in the left side instead of the right side. The left side of the prefrontal area in the brain is the side linked to happier emotions. Professor Davidson at the University of Wisconsin says the study helps show how the mind can influence the body. SCIENCE IN THE NEWS was written by Nancy Steinbach and George Grow. Our producer was Cynthia Kirk. This is Bob Doughty. And this is Sarah Long. Join us again next week for more news about science in Special English on the Voice of America.
<urn:uuid:aef53e58-6937-4fa2-995b-6dd132d20cf5>
3.171875
1,617
Audio Transcript
Science & Tech.
51.145954
Aegagropila linnaei is a filamentous green alga found in freshwater and some brackish-water coastal habitats, and it has three different growth forms. The lake balls have been the source of much fascination and appear on stamps issued in Iceland and Japan. In the Hokkaido district of Japan they are part of the local folklore, and each year a three-day ceremony focused on these balls is held. Get a detailed classification of the different growth forms of Aegagropila linnaei. Aegagropila linnaei has a worldwide distribution and, in particular, the lake balls have gathered significant interest in some countries. Find out where this species can be found around the world and the types of habitat it grows in. Discover the factors that contribute to the growth and form of Aegagropila linnaei. Find out about the reproductive processes of Aegagropila linnaei. Learn why Aegagropila linnaei is protected in a number of countries and find out the status of the species in the British Isles. Get reference material for Aegagropila linnaei. Close view of mass of soft Aegagropila balls in Loch Bruggan where the water is brackish. Low power microscopic view of the branched filaments of Aegagropila (photo Chris Carter). Dense masses of Aegagropila balls photographed in September 2009 within a bay in Loch Bruggan on the island of South Uist, Outer Hebrides, Scotland. High power view of cells showing details of branch pattern (photo Chris Carter). View of a portion of the dense carpet form of Aegagropila. Small and hard Aegagropila balls floating on the surface of the freshwater Loch Ollay on South Uist, Outer Hebrides, Scotland.
<urn:uuid:8e52db01-06f5-4903-9847-9ea8df5301ea>
3.015625
381
Knowledge Article
Science & Tech.
38.832899
1. Why use Producer/Consumer? The Producer/Consumer pattern gives you the ability to easily handle multiple processes at the same time while iterating at individual rates. What makes this pattern unique is its added benefit of buffered communication between application processes. When there are multiple processes running at different speeds, buffered communication between processes is extremely effective. For example, an application has two processes. The first process performs data acquisition and the second process takes that data and places it on a network. The first process operates at three times the speed of the second process. If the Producer/Consumer design pattern is used to implement this application, the data acquisition process will act as the producer and the network process the consumer. With a large enough communication queue (buffer), the network process will have access to a large amount of the data that the data acquisition loop acquires. This ability to buffer data will minimize data loss. 2. Build a Producer/Consumer As with the standard Master/Slave pattern, the Producer/Consumer design consists of parallel loops which are broken down into two categories: producers and consumers. Communication between producer and consumer loops is done by using data queues. LabVIEW has built-in queue functionality in the form of VIs in the function palette. These VIs can be found in the function palette under Advanced >> Synchronization >> Queue Operations. Queues are based on the first-in, first-out (FIFO) principle. In the Producer/Consumer design pattern, queues can be initialized outside both the producer and consumer loops. Because the producer loop produces data for the consumer loop, it will be adding data to the queue (adding data to a queue is called "enqueue"). The consumer loop will be removing data from that queue (removing data from a queue is called "dequeue").
Because queues are first-in/first-out, the data will always be analyzed by the consumer in the same order as they were placed into the queue by the producer. Figure 1 illustrates how the Producer/Consumer design pattern can be created in LabVIEW. Figure 1: Producer/Consumer Design Pattern 3. Example - Move-Window.vi This application has the following requirements: - Create a user interface with four directional control buttons and a queue status indicator. - Create one loop that collects user interface events and updates the queue indicator. Create another loop that takes the user interface data and moves the window accordingly. Our first step will be to decide which loop will be the producer and which the consumer. Because the user interface is collecting instructional data for another process to carry out, the user interface loop will be the producer. The loop that moves the window depending on user instruction will be the consumer. The producer loop will use a queue to buffer user interface data for the consumer loop. Our application should also monitor instructions that are placed into and removed from the queue. We are now ready to begin our LabVIEW Producer/Consumer application. To view the final Producer/Consumer application, please open the attached VI (Move-Window.vi). 4. Important Notes There are some caveats to be aware of when dealing with the Producer/Consumer design pattern such as queue use and synchronization. - Queue use Problem: Queues are bound to one particular data type. Therefore every different data item that is produced in a producer loop needs to be placed into a different queue. This could be a problem because of the complication added to the block diagram. Solution: Queues can accept data types such as array and cluster. Each data item can be placed inside a cluster. This will mask a variety of data types behind the cluster data type. Figure 1 implements cluster data types with the communication queue.
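Outside LabVIEW, the same queue behaviour can be sketched in a few lines of Python (an illustrative stand-in for the graphical code; the field names are invented): a dict plays the role of the cluster, bundling several data types behind a single queue element, and dequeue order matches enqueue order.

```python
from queue import Queue

# One queue, one element type: a dict bundling several data types,
# much as a LabVIEW cluster masks mixed types behind one queue type.
q = Queue()

# Producer side: enqueue three "cluster" elements.
for i in range(3):
    q.put({"command": "move", "step": i, "dx": 1.5 * i})

# Consumer side: dequeue; first-in/first-out order is preserved.
received = [q.get() for _ in range(3)]
assert [r["step"] for r in received] == [0, 1, 2]
```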
Problem: Since the Producer/Consumer design pattern is not based on synchronization, the initial execution of the loops does not follow a particular order. Therefore, initializing one loop before the other may cause a problem. Solution: Adding an event structure to the Producer/Consumer design pattern can solve these types of synchronization problems. Figure 2 depicts a template for achieving this functionality. More information pertaining to synchronization functions is located below in the Related Links section. Figure 2: Using an Event Structure in Producer/Consumer Design Pattern
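The overall pattern, including an orderly shutdown, can also be sketched in Python (an illustrative analogue, not the attached VI; a sentinel value stands in for releasing the queue): the producer enqueues at its own rate, the consumer dequeues at its own rate, and the queue buffers the difference.

```python
import threading
from queue import Queue

q = Queue()           # the buffer between the two loops
consumed = []
SENTINEL = None       # tells the consumer loop to stop

def producer():
    # Producer loop: runs at its own rate, enqueuing data.
    for sample in range(5):
        q.put(sample)
    q.put(SENTINEL)   # signal that no more data is coming

def consumer():
    # Consumer loop: dequeues at its own (possibly slower) rate.
    while True:
        item = q.get()
        if item is SENTINEL:
            break
        consumed.append(item)

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start()
c.start()
p.join()
c.join()

assert consumed == [0, 1, 2, 3, 4]   # FIFO: same order as produced
```

Because the queue buffers the data, nothing is lost even when the producer briefly outpaces the consumer, which is the same reason the pattern minimizes data loss in the acquisition example above.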
<urn:uuid:39b796f8-7314-4e01-8b6d-1b724d4b8532>
3.734375
862
Tutorial
Software Dev.
35.400814
Courtesy Staplegunther via Wikimedia Commons Courtesy Ceinturion via Wikimedia Commons We use lots of energy. And, as world population grows and countries develop, we're going to be using a lot more. Most of our energy comes from fossil fuels—materials in limited supply that will take millions of years to replenish. And when we burn fossil fuels to extract their energy, they produce pollutants and greenhouse gases. Even nuclear power relies on non-renewable fuels, and its byproducts must be stored for thousands of years until they are no longer dangerously radioactive. IonE's Initiative for Renewable Energy and the Environment focuses on finding sustainable energy sources that will help us meet the new demand of the 21st century. Sustainable energy needs to come from resources that are renewable (i.e. they won’t become depleted in the near future) and must be less damaging to the environment than current power sources. In addition to new technologies and strategies for solar and wind power, IonE and IREE are exploring techniques that could generate power and reduce existing pollution. Some plants, for instance, could be processed into biofuels—gas- and oil-like products refined from organic material. While growing, the plants pull carbon dioxide from the atmosphere, and much of it is left harmlessly in the ground after the usable parts of the plant are harvested. Another method might use carbon dioxide captured from power plants to extract heat from the Earth. The pressurized CO2 could be pumped into the ground, where some would become trapped, and some, heated by the Earth itself, would return to the surface to generate more electricity. The connections between energy, the environment and society are very complicated—solutions that seem straightforward at first may have unexpected consequences. Simply avoiding old problems doesn't guarantee that new ones won't arise. 
IonE researchers approach energy projects from multiple angles to produce solutions that make sense economically and environmentally and that can be sustained beyond our generation.
<urn:uuid:c333471e-045a-4a40-82e8-1efdfcd7b19f>
4
401
Knowledge Article
Science & Tech.
31.775695
The kilogram is defined by a lump of metal in Paris, but several comparisons with identical copies showed that its mass is changing. That's why the kilogram needs to be redefined. Scientists want to redefine the SI units in terms of fundamental constants of nature. To redefine the kilogram, scientists use precision electrical measurements based on the quantum Hall effect, so mass is defined in terms of h, the Planck constant. Next on the list is the ampere, the unit of electrical current, which would be redefined in terms of the electron charge e.
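Concretely, the redefinition works by fixing the numerical values of the constants (these exact values were adopted in the 2019 SI revision and are added here for reference), so the kilogram follows from the already-defined second and metre, and the ampere from the second:

```latex
h = 6.626\,070\,15 \times 10^{-34}\ \mathrm{kg\,m^{2}\,s^{-1}}
\quad\Longrightarrow\quad
1\ \mathrm{kg} = \frac{h}{6.626\,070\,15 \times 10^{-34}\ \mathrm{m^{2}\,s^{-1}}},
\qquad
e = 1.602\,176\,634 \times 10^{-19}\ \mathrm{A\,s}
```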
<urn:uuid:af674fe1-fe18-4358-aca4-3b5d0b3131a9>
3.546875
113
Knowledge Article
Science & Tech.
50.385
Department of Astronomy and Astrophysics University of Chicago Recession of the galaxies: distant objects appear to be moving away from us. Cosmic Microwave Background: we are bathed in primordial light. The big bang or hot expanding model of cosmology finds strong support from the recession of the galaxies, that is, when we look out in the night sky, distant objects appear to be flying away from us. And also the cosmic microwave background - the fact that we are bathed in primordial light that bears evidence from an earlier, hotter, and denser period in the universe. As we shall see, this evidence also supports the gravitational instability paradigm - the picture that gravity can make wrinkles.

The relation between recession and expansion is easy to picture. Imagine you're standing at the north pole and think, boy, I'd much rather be basking in the sun at some equatorial paradise. You look up the distance to it and plan your trip. Right before you leave, you think, I'd better check the distance again. Unbeknownst to you the radius of the earth has expanded in the meantime. To your surprise the distance to your equatorial paradise is now larger. You think, that's funny, paradise seems to be receding from me! This is exactly the situation we observe with the distant galaxies. Because space itself is expanding it looks as if the galaxies around us are all receding into the distance. It's easy to convince yourself that there is nothing special about being on the north pole. Anywhere on the globe the distance between points is increasing and hence all distant objects appear to be receding. Well, maybe at least you can see paradise? Unfortunately the wavelength of light also stretches with the expansion so that visible light at millionths of meter wavelengths gets stretched into invisible microwaves at millimeter to centimeter wavelengths.
Given that light behaves this way, it's easy to see how the sea of primordial light we observe in microwaves supports the hot expanding (big bang) model. Run these pictures backwards in time. The wavelength of light becomes shorter and shorter and matter becomes denser and denser implying that the universe began in a hot dense state.
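To put numbers on the stretching (a back-of-the-envelope Python sketch; the redshift factor z ≈ 1100 for the primordial light is a standard textbook value, not taken from this page), the observed wavelength scales as λ_obs = (1 + z) λ_emit:

```python
# Wavelength stretches with the expansion: lambda_obs = (1 + z) * lambda_emit.
def observed_wavelength(lambda_emit_m, z):
    return (1.0 + z) * lambda_emit_m

# Visible light emitted at ~500 nm, seen today at redshift z ~ 1100:
lam = observed_wavelength(500e-9, 1100)
assert 5e-4 < lam < 6e-4   # ~0.55 mm: microwave, no longer visible
```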
<urn:uuid:6c3e2a86-6c66-4baf-92c4-a26fb3cd25ab>
2.828125
486
Nonfiction Writing
Science & Tech.
42.791913
Scientists from Germany and Ireland have been studying types of bacteria that are prominent in bogs and sewage plants to research the link between bacteria cells and cells with a nucleus, the eukaryotes. Prior to these studies it was widely suggested that eukaryotes arose not from some intermediate cell but rather from a fusion of cells that created a new cell type. Researchers have found that PVC (Planctomycetes, Verrucomicrobiae, Chlamydiae) bacteria are all types that might be the intermediate step between bacteria and eukaryotes, since they are larger and divide more slowly. Further research will be needed to determine whether the fusion theory or the intermediate theory is more likely.
<urn:uuid:1d310e88-7769-479b-8885-27092c8a0167>
3.40625
187
Truncated
Science & Tech.
48.630156
I am mostly interested in this article, describing what cuttlefish are. I find it very cool that the flamboyant cuttlefish can change colors, and how the lines move on the cuttlefish's body. Many people say that they are the most advanced invertebrates found. Even though they are closely related to slugs, they have much better motor skills, sensory structures and highly developed heads. Another example of their advanced body is their eyes, which are very similar to those of many vertebrates and could help in studying eye evolution and function. Cuttlefish have layers of color-producing cells that work together to create different patterns, which look awesome on flamboyant cuttlefish. The top three color pigments seen are black/brown, red/orange, and orange/yellow. I think that these animals should be protected because they are very cute and colorful. Do you think that the cuttlefish could be considered a vulnerable or endangered species?
<urn:uuid:b35b0915-b8f0-4dba-88f6-995be4f6639b>
2.8125
193
Personal Blog
Science & Tech.
45.226091
Physical properties:
- Molar mass: 182.22 g mol−1
- Melting point: 48.5 °C (322 K; 119 °F)
- Boiling point: 305.4 °C (579 K; 582 °F)
- Solubility in water: insoluble
- Solubility in organic solvents: 1 g in 7.5 mL ethanol; 1 g in 6 mL diethyl ether
- MSDS: external MSDS by JT Baker
- Main hazards: harmful (Xn)
Except where noted otherwise, data are given for materials in their standard state (at 25 °C, 100 kPa).

Benzophenone can be used as a photoinitiator in UV-curing applications such as inks, imaging, and clear coatings in the printing industry. Benzophenone prevents ultraviolet (UV) light from damaging scents and colors in products such as perfumes and soaps. It can also be added to plastic packaging as a UV blocker. Its use allows manufacturers to package the product in clear glass or plastic. Without it, opaque or dark packaging would be required. In biological applications, benzophenones have been used extensively as photophysical probes to identify and map peptide–protein interactions. Benzophenone can be prepared by the reaction of benzene with carbon tetrachloride followed by hydrolysis of the resulting diphenyldichloromethane, or by Friedel-Crafts acylation of benzene with benzoyl chloride in the presence of a Lewis acid (e.g. aluminium chloride) catalyst. The industrial synthesis relies on the copper-catalyzed oxidation of diphenylmethane with air. Organic chemistry Benzophenone is a common photosensitizer in photochemistry. It crosses from the S1 state into the triplet state with nearly 100% yield. The resulting diradical will abstract a hydrogen atom from a suitable hydrogen donor to form a ketyl radical. Benzophenone radical anion Sodium reduces benzophenone to the deeply colored radical anion, diphenylketyl: - Na + Ph2CO → Na+Ph2CO·− This ketyl is used in the purification of organic solvents, particularly ethers, because it reacts with water and oxygen to give non-volatile products.
Very dry solvents are obtained by refluxing over benzophenone and sodium metal prior to distillation. The ketyl is soluble in the organic solvent being dried, so it reacts quickly with residual water and oxygen. In comparison, sodium is insoluble, and its heterogeneous reaction is much slower. The ketyl radical generally appears blue or purple, depending on the solvent. Commercially significant derivatives Substituted benzophenones such as oxybenzone and dioxybenzone are used in some sunscreens. The use of benzophenone-derivatives which structurally resemble a strong photosensitizer has been strongly criticized (see sunscreen controversy). The high-strength polymer PEEK is prepared from derivatives of benzophenone. See also - Merck Index, 11th edition, 1108 - Dorman, Gyorgy; Prestwich, Glenn D. (1 May 1994). "Benzophenone Photophores in Biochemistry". Biochemistry 33 (19): 5661–5673. doi:10.1021/bi00185a001. - Marvel, C. S.; Sperry, W. M. (1941), "Benzophenone", Org. Synth.; Coll. Vol. 1: 95 - Hardo Siegel, Manfred Eggersdorfer "Ketones" in Ullmann's Encyclopedia of Industrial Chemistry, Wiley-VCH, 2002 by Wiley-VCH, Wienheim. doi:10.1002/14356007.a15_077 - W. L. F. Armarego and C. Chai (2003). Purification of laboratory chemicals. Oxford: Butterworth-Heinemann. ISBN 0-7506-7571-3. - L. M. Harwood, C. J. Moody and J. M. Percy (1999). Experimental Organic Chemistry: Standard and Microscale. Oxford: Blackwell Science. ISBN 978-0-632-04819-9. - Knowland, John; McKenzie, Edward A.; McHugh, Peter J.; Cridland, Nigel A. (1993). "Sunlight-induced mutagenicity of a common sunscreen ingredient". FEBS Letters 324 (3): 309–313. doi:10.1016/0014-5793(93)80141-G. PMID 8405372. Toluene is refluxed with sodium and benzophenone to produce dry, oxygen-free toluene.
<urn:uuid:5e2cf9be-0633-4be9-8ac6-8963e3d22bd6>
3
1,040
Knowledge Article
Science & Tech.
51.452724
Distribution Range Description: South American Sea Lions are widely distributed, occurring more or less continuously from northern Peru south to Cape Horn, and north up the east coast of the continent to southern Brazil. They also occur in the Falkland/Malvinas Islands. The northernmost breeding distribution on the Pacific side is Isla Lobos de Tierra (6º26’S; Peru). No breeding colonies occur in Brazil. The northernmost breeding rookery in the Atlantic is Isla de Lobos, on the Uruguayan coast. South American Sea Lions are primarily a coastal species, found in waters over the continental shelf and slope; they occur only infrequently in deeper waters. This species ventures into fresh water and can be found around tidewater glaciers and up rivers. Vagrants have been found as far north as 13°S, near Bahia Brazil and in the Galápagos Archipelago (Ecuador).
<urn:uuid:11dcfeda-65af-44ac-8de5-71329bd47530>
3.0625
193
Knowledge Article
Science & Tech.
39.813929
Guides to (mostly) Harmless Hacking Volume 5 Number 5: Amit Rawat's Guides to Kernel Hacking, #3: Memory Management Each process has its own private address space. The address space is initially divided into three logical segments: text, data, and stack. The text segment is read-only and contains the machine instructions of a program. The data and stack segments are both readable and writeable. The data segment contains the initialized and uninitialized data portions of a program, whereas the stack segment holds the application's run-time stack. On most machines, the stack segment is extended automatically by the kernel as the process executes. A process can expand or contract its data segment by making a system call, whereas a process can change the size of its text segment only when the segment's contents are overlaid with data from the file system, or when debugging takes place. The initial contents of the segments of a child process are duplicates of the segments of a parent process. The entire contents of a process address space do not need to be resident for a process to execute. If a process references a part of its address space that is not resident in main memory, the system pages the necessary information into memory. When system resources are scarce, the system uses a two-level approach to maintain available resources. If a modest amount of memory is available, the system will take memory resources away from processes if these resources have not been used recently. Should there be a severe resource shortage, the system will resort to swapping the entire context of a process to secondary storage. The demand paging and swapping done by the system are effectively transparent to processes. A process may, however, advise the system about expected future memory utilization as a performance aid. Memory Management Inside the Kernel The kernel often does allocations of memory that are needed for only the duration of a single system call. 
In a user process, such short-term memory would be allocated on the run-time stack. Because the kernel has a limited run-time stack, it is not feasible to allocate even moderate-sized blocks of memory on it. Consequently, such memory must be allocated through a more dynamic mechanism. For example, when the system must translate a pathname, it must allocate a 1-Kbyte buffer to hold the name. Other blocks of memory must be more persistent than a single system call, and thus could not be allocated on the stack even if there was space. An example is protocol-control blocks that remain throughout the duration of a network connection. Demands for dynamic memory allocation in the kernel have increased as more services have been added. A generalized memory allocator reduces the complexity of writing code inside the kernel. Thus, the 4.4BSD kernel has a single memory allocator that can be used by any part of the system. It has an interface similar to the C library routines malloc and free that provide memory allocation to application programs. Like the C library interface, the allocation routine takes a parameter specifying the size of memory that is needed. The range of sizes for memory requests is not constrained; however, physical memory is allocated and is not paged. The free routine takes a pointer to the storage being freed, but does not require the size of the piece of memory being freed. The memory management subsystem is one of the most important parts of the operating system. Since the early days of computing, there has been a need for more memory than exists physically in a system. Strategies have been developed to overcome this limitation and the most successful of these is virtual memory. Virtual memory makes the system appear to have more memory than it actually has by sharing it between competing processes as they need it. Virtual memory does more than just make your computer's memory go further. 
The memory management subsystem provides:
- Large Address Spaces - The operating system makes the system appear as if it has a larger amount of memory than it actually has. The virtual memory can be many times larger than the physical memory in the system.
- Protection - Each process in the system has its own virtual address space. These virtual address spaces are completely separate from each other and so a process running one application cannot affect another. Also, the hardware virtual memory mechanisms allow areas of memory to be protected against writing. This protects code and data from being overwritten by rogue applications.
- Memory Mapping - Memory mapping is used to map image and data files into a process's address space. In memory mapping, the contents of a file are linked directly into the virtual address space of a process.
- Fair Physical Memory Allocation - The memory management subsystem allows each running process in the system a fair share of the physical memory of the system.
- Shared Virtual Memory - Although virtual memory allows processes to have separate (virtual) address spaces, there are times when you need processes to share memory. For example there could be several processes in the system running the bash command shell. Rather than have several copies of bash, one in each process's virtual address space, it is better to have only one copy in physical memory and all of the processes running bash share it. Dynamic libraries are another common example of executing code shared between several processes. Shared memory can also be used as an Inter Process Communication (IPC) mechanism, with two or more processes exchanging information via memory common to all of them. Linux supports the Unix System V shared memory IPC.
Swapping Out and Discarding Pages
When physical memory becomes scarce the Linux memory management subsystem must attempt to free physical pages. This task falls to the kernel swap daemon (kswapd).
The kernel swap daemon is a special type of process, a kernel thread. Kernel threads are processes that have no virtual memory; instead they run in kernel mode in the physical address space. The kernel swap daemon is slightly misnamed in that it does more than merely swap pages out to the system's swap files. Its role is to make sure that there are enough free pages in the system to keep the memory management system operating efficiently.

The kernel swap daemon (kswapd) is started by the kernel init process at startup time and sits waiting for the kernel swap timer to periodically expire. Every time the timer expires, the swap daemon looks to see if the number of free pages in the system is getting too low. It uses two variables, free_pages_high and free_pages_low, to decide if it should free some pages. So long as the number of free pages in the system remains above free_pages_high, the kernel swap daemon does nothing; it sleeps again until its timer next expires. For the purposes of this check the kernel swap daemon takes into account the number of pages currently being written out to the swap file. It keeps a count of these in nr_async_pages; this is incremented each time a page is queued waiting to be written out to the swap file and decremented when the write to the swap device has completed. free_pages_low and free_pages_high are set at system startup time and are related to the number of physical pages in the system.

If the number of free pages in the system has fallen below free_pages_high, or worse still free_pages_low, the kernel swap daemon will try three ways to reduce the number of physical pages being used by the system:

- Reducing the size of the buffer and page caches,
- Swapping out System V shared memory pages,
- Swapping out and discarding pages.

If the number of free pages in the system has fallen below free_pages_low, the kernel swap daemon will try to free 6 pages before it next runs. Otherwise it will try to free 3 pages.
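The wake-up check just described condenses into a few lines. This sketch uses the thresholds and the 6-page/3-page counts from the text; treating pages already queued for the swap file as free-in-progress is one reading of how nr_async_pages enters the check, not a claim about the exact kernel source:

```python
def pages_to_free(free_pages, nr_async_pages, free_pages_low, free_pages_high):
    """Mimic kswapd's periodic check. Pages queued for the swap file
    count as free-in-progress; 6 pages are targeted in the urgent case,
    3 otherwise, and 0 when memory is plentiful."""
    effective_free = free_pages + nr_async_pages
    if effective_free >= free_pages_high:
        return 0          # plenty free: sleep until the timer next expires
    if effective_free < free_pages_low:
        return 6          # dangerously low: work harder this pass
    return 3              # low but not critical

assert pages_to_free(100, 0, 20, 60) == 0   # above free_pages_high
assert pages_to_free(30, 10, 20, 60) == 3   # between the watermarks
assert pages_to_free(10, 5, 20, 60) == 6    # below free_pages_low
```

The two-watermark shape is the important point: a hysteresis band between free_pages_low and free_pages_high keeps the daemon from thrashing on and off around a single threshold.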
Each of the above methods is tried in turn until enough pages have been freed. The kernel swap daemon remembers which method it was using the last time that it attempted to free physical pages, and each time it runs it will start trying to free pages using this last successful method.

After it has freed sufficient pages, the swap daemon sleeps again until its timer expires. If the reason that the kernel swap daemon freed pages was that the number of free pages in the system had fallen below free_pages_low, it only sleeps for half its usual time. Once the number of free pages is more than free_pages_low, the kernel swap daemon goes back to sleeping longer between checks.

Reducing the Size of the Page and Buffer Caches

The pages held in the page and buffer caches are good candidates for being freed into the free_area vector. The Page Cache, which contains pages of memory-mapped files, may contain unnecessary pages that are filling up the system's memory. Likewise the Buffer Cache, which contains buffers read from or being written to physical devices, may also contain unneeded buffers. When the physical pages in the system start to run out, discarding pages from these caches is relatively easy, as it requires no writing to physical devices (unlike swapping pages out of memory). Discarding these pages does not have too many harmful side effects other than making access to physical devices and memory-mapped files slower. However, if the discarding of pages from these caches is done fairly, all processes will suffer equally.

Every time the kernel swap daemon tries to shrink these caches, it examines a block of pages in the mem_map page vector to see if any can be discarded from physical memory. The size of the block of pages examined is higher if the kernel swap daemon is intensively swapping; that is, if the number of free pages in the system has fallen dangerously low.
The blocks of pages are examined in a cyclical manner; a different block of pages is examined each time an attempt is made to shrink the memory map. This is known as the clock algorithm because, rather like the minute hand of a clock, the whole mem_map page vector is examined a few pages at a time.

Each page being examined is checked to see if it is cached in either the page cache or the buffer cache. Note that shared pages are not considered for discarding at this time, and that a page cannot be in both caches at the same time. If the page is not in either cache, then the next page in the mem_map page vector is examined.

Pages are cached in the buffer cache (or rather the buffers within the pages are cached) to make buffer allocation and deallocation more efficient. The memory map shrinking code tries to free the buffers that are contained within the page being examined. If all the buffers are freed, the page that contains them is also freed. If the examined page is in the Linux page cache, it is removed from the page cache and freed.

When enough pages have been freed on this attempt, the kernel swap daemon will wait until the next time it is periodically woken. As none of the freed pages were part of any process's virtual memory (they were cached pages), no page tables need updating. If not enough cached pages were discarded, the swap daemon will try to swap out some shared pages.

The Swap Cache

When swapping pages out to the swap files, Linux avoids writing pages if it does not have to. There are times when a page is both in a swap file and in physical memory. This happens when a page that was swapped out of memory is later brought back into memory when it is again accessed by a process. So long as the page in memory is not written to, the copy in the swap file remains valid. Linux uses the swap cache to track these pages. The swap cache is a list of page table entries, one per physical page in the system.
Each entry is a page table entry for a swapped-out page and describes which swap file the page is being held in, together with its location in that file. If a swap cache entry is non-zero, it represents a page held in a swap file that has not been modified. If the page is subsequently modified (by being written to), its entry is removed from the swap cache.

When Linux needs to swap a physical page out to a swap file, it consults the swap cache and, if there is a valid entry for this page, it does not need to write the page out to the swap file. This is because the page in memory has not been modified since it was last read from the swap file. The entries in the swap cache are page table entries for swapped-out pages. They are marked as invalid but contain information which allows Linux to find the right swap file and the right page within that swap file.
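The write-avoidance rule can be stated compactly: a page with a valid (clean) swap cache entry need not be written out again, and modifying the page drops its entry. The sketch below is a model of that behaviour only, not the kernel's actual data structures:

```python
def must_write_to_swap(page, swap_cache):
    """A page with a clean copy already in the swap file need not be
    written again; a modified page has lost its swap-cache entry and
    really must be written out."""
    entry = swap_cache.get(page)
    return entry is None

# Hypothetical page identifiers and (swap_file, slot) locations.
swap_cache = {"page7": ("swapfile0", 42)}   # clean copy at slot 42

# The in-memory copy matches the swap file: skip the write.
assert must_write_to_swap("page7", swap_cache) is False

# Writing to the page invalidates its swap-cache entry...
del swap_cache["page7"]

# ...so the next swap-out really must go to disk.
assert must_write_to_swap("page7", swap_cache) is True
```

The payoff is avoided I/O: a page that bounces in and out of memory without being dirtied is written to the swap device once, no matter how many times it is reclaimed.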
Griffiths, R.A., Dewijer, P. and Brady, L. (1993) The effect of pH on embryonic and larval development in smooth and palmate newts, Triturus vulgaris and T. helveticus. Journal of Zoology, 230, pp. 401-409. ISSN 0952-8369.

The distribution of smooth newts, Triturus vulgaris, and palmate newts, Triturus helveticus, in north-west Europe is related to geology and water quality. This study compared the development of the eggs and larvae of the two species under sublethal acidic and neutral conditions. Newt embryos raised under low pH hatched at an earlier stage of development, at a smaller size, and before those raised under neutral conditions. T. vulgaris hatched at a smaller size than T. helveticus, but pH did not affect the species differentially. Larvae of both species grew to a larger size under neutral than under acid conditions. Larvae raised in heterospecific pairs grew at least as well as those raised in conspecific pairs. Feeding was depressed under acid conditions, and reduced growth may therefore be associated with changes in the behaviour of newt larvae and their prey.

Keywords: Amphibian decline; Behavioural toxicology; Embryonic survival; Fertilizers; Parental care
Researchers say bubble fusion more difficult to reproduce than once thought
By Charles Choi
United Press International
July 24, 2002

CHAMPAIGN-URBANA, Ill. - Although sound waves can generate temperatures as hot as the surface of the sun simply by squishing bubbles, the potential of tabletop nuclear "bubble fusion" raised earlier this year may have been exaggerated, new calculations suggest.

Experimental findings reported last March suggested tiny bubbles could trigger fusion reactions by collapsing in a neutron-loaded solution of acetone, a common, naturally occurring solvent used to make plastic, fibers, drugs and other chemicals. The research was led by Rusi Taleyarkhan at Oak Ridge National Laboratory in Tennessee.

Bubbles in liquids trapped and energized by ultrasound beams tend to flare with light in a phenomenon known as sonoluminescence, first observed in 1990. When bubbles inflated by sound waves collapse, the billionth-of-a-second-long implosions generate incredible pressures, normally found at the bottom of the ocean, along with temperatures of about 9,000 degrees. Such intense pressure and heat led to speculation that fusion could take place, in which atomic nuclei are slammed together to liberate incredible forces with little radioactive waste. Taleyarkhan's team said they detected chemical byproducts of fusion in their souped-up paint thinner in a container the size of three coffee mugs.

"Our results make Taleyarkhan's claims increasingly unlikely," Kenneth Suslick, a chemist at the University of Illinois at Urbana-Champaign, told United Press International.

Suslick and colleague Yuri Didenko analyzed the chemical reactions set off by the collapse of an isolated, excited bubble and the byproducts formed. They reported their findings in the July 25 issue of the British journal Nature. "This energy is converted into light emission, chemical reactions and mechanical energy," Suslick explained.
"We were able to determine, for the first time, how much of the energy goes into the chemistry of the bubble." Suslick and Didenko generated a bubble about the size of a red blood cell and trapped it in the center of a spherical container using soundwaves. They adjusted the pressure in the container to expand the bubble to 1,000 times its volume, then collapsed it repeatedly, using sensitive fluorescent chemicals to monitor the byproducts created. They found that volatile molecules such as water, nitrogen and oxygen were ripped apart. Although less than one-thousandth of the energy involved fueled these chemical reactions, it was enough to eliminate the possibility that fusion could occur, they said. The new findings suggest sound-triggered fusion is improbable in highly volatile fluids like acetone or water, Suslick said. However, "the possibility of fusion occurring in low volatility fluids, such as liquid metals and molten salts, cannot be ruled out at this time." Tabletop fusion may be out of reach, but "there are other uses for sonoluminescent bubbles," said physicist Detlef Lohse of the University of Twente in Entschede, The Netherlands. For example, now that scientists understand the chemical processes of sonoluminescence more thoroughly, they might be able to harness it for applications in medicine and industry. Suslick noted sonoluminescence is already helping to enhance the chemical reactions used to make pharmaceuticals. Quoting Russian intellectual Leon Trotsky, Suslick said the research should go "forward in all directions." (In accordance with Title 17, Section 107, of the U.S. Code, this material is distributed without profit to those who have expressed a prior interest in receiving the included information for research and educational purposes. New Energy Times has no affiliation whatsoever with the originator of the original text in this article; nor is New Energy Times endorsed or sponsored by the originator.) 
"Go to Original" links are provided as a convenience to our readers and allow for verification of authenticity. However, as originating pages are often updated by their originating host sites, the versions posted on New Energy Times may not match the versions our readers view when clicking the "Go to Original" links.
A talk by Doug Given, US Geological Survey Earthquake Early Warning Coordinator Millions of Japanese citizens received warning of the 2011 magnitude 9.0 Tohoku earthquake. Can such a system be built for use in California? University researchers and government agencies are working together to create an Earthquake Early Warning System in California to reduce earthquake losses. How could you and your family best prepare for severe ground shaking using 30 seconds of advance warning?
The Savannah River Site is an 803-km2 Department of Energy (DOE) facility located on the Upper Coastal Plain of South Carolina, near Aiken, SC. The site is bordered on one side by the Savannah River. Approximately 10% of the site is developed for DOE industrial activities; the remaining 720 km2 is primarily forested. The entire SRS became the nation's first National Environmental Research Park (NERP) in 1972. Within the SRS NERP, DOE also maintains a network of research reserves (DOE Research Set-Asides). These areas preserve representative samples of all major SRS biological communities and serve as reference sites to evaluate the impacts of site operations. Three resident research organizations are present on the SRS: the University of Georgia Savannah River Ecology Laboratory (SREL), USDA Forest Service - Savannah River (USFS-SR), and Savannah River National Laboratory (SRNL). As a research unit of UGA, SREL's primary function is ecological research and education. SREL provides independent evaluation of the impacts of SRS operations, assistance with risk assessment and remediation, and baseline information on natural trends in unimpacted ecosystems. SREL also manages the DOE Research Set-Asides, which evolved largely from SREL long-term research sites. USFS-SR manages ~140,000 acres of the SRS for forest products and wildlife, and conducts a variety of forest research. USFS-SR also conducts frequent prescribed burns throughout the SRS, essential for the health of many native species and the prevention of wildfires. As a national laboratory SRNL's primary mission is technological research and development for DOE, but includes a significant environmental management component. SRNL is involved in remediation, hazardous waste storage and management, and other aspects of site cleanup. Collaborations among these three organizations have enhanced the value of the SRS NERP. More information is available in a recent publication about the park.
Today, a silent arrival. The University of Houston's College of Engineering presents this series about the machines that make our civilization run, and the people whose ingenuity created them. Today's New York Times Science section had yet another article on energy being taken from wave motion. We've heard a lot about schemes for taking energy from ocean waves. World energy consumption is around sixteen terawatts at this writing -- that's sixteen thousand gigawatts. Ocean waves might have the potential for supplying a tenth of that. But the equipment would be huge, and we're also unsure of the environmental risks of big wave energy systems. Now the article suggests another way this form of solar energy can be used. Two California engineers have made a small unmanned whale-monitoring device that travels the ocean. The monitoring equipment on the surface is linked by cables to a submerged platform, 20 feet down. The monitor functions are powered by solar panels. And, as the monitor bobs up and down, the cables work a mechanism below that propels it through the water at the speed of a walk. In this case, wave power isn't used to generate power for sale -- only to give the monitor mobility. I think of the image of a lone railroad depot in an old western movie. Remember the large water tank for resupplying steam locomotives, and the large windmill that pumped water into the tank. A steam engine would've powered those pumps in a late-19th-century city. But, off on the prairies, a windmill needed neither fuel nor heavy maintenance. Stand-alone systems like the windmill (or that whale monitor) are pretty common. Farm windmills still serve cattle-watering troughs. In the city, stand-alone solar panels are taking up many functions where they can't be put out of service by power outages. School-zone warning lights, widely powered by solar panels, are one example. Solar powered domestic alarm systems are another. 
Our technologies so often start out in one form, only to be diverted into another. So-called green energy is a case in point. Certain applications are best served by large central power plants -- whether the power source is a windmill farm or a coal-fired steam generator. It's a lot more efficient to produce power -- or most anything else -- in bulk. So all the moral and economic arguments rage over the source of large-scale power. But green energy has already won the argument when it comes to energy in isolation. We argue over the efficacy of building huge wind farms, large solar towers, or many square miles of wave energy collectors, while these technologies actually enter beneath our radar -- on little cat feet. They spring up in isolation, picking up independent tasks. Heating one house, electrifying an Aleutian village, doing this job and that within our cities. As this happens, the technology of these systems matures. While we argue about creating major installations, the technologies grow and evolve on the small scale. They enter our lives from the grass roots up. We gradually see more and more green energy systems. And, one day, before we've actually decided to adopt them, we'll wonder how we ever did without them.

I'm John Lienhard at the University of Houston, where we're interested in the way inventive minds work.

T. Walker, Wave-Powered Monitor Is Moving Beyond Listening to Whales. The New York Times, Science Times, February 24, 2009, pg. D3.
Physics Central, from the American Physical Society

Find out how scanning tunneling microscopes can be used to see atoms, which are too small to see with optical instruments. This same microscope can actually move atoms and locate them precisely on a surface -- check out this site to see a circle of atoms and the electron waves inside the circle.

Physics Central, "Physics in Action: Seeing Atoms," American Physical Society, http://www.physicscentral.org/explore/action/atom-1.cfm