text large_stringlengths 148 17k | id large_stringlengths 47 47 | score float64 2.69 5.31 | tokens int64 36 7.79k | format large_stringclasses 13 values | topic large_stringclasses 2 values | fr_ease float64 20 157 |
|---|---|---|---|---|---|---|
Basics of Spectrophotometry
Many compounds absorb light, but not all absorb the same amount or the same colors (wavelengths). Some absorb mainly in the visible spectrum (see below), while others absorb mainly in the ultraviolet or infrared range. Here we are mainly interested in measuring absorbance in the visible range.
When light passes through a solution, some of it will strike molecules and be absorbed, so the amount coming out the other side is less than the amount that entered. Note that there are two ways of looking at this: you can either think about how much light makes it through the solution (transmittance) or how much gets absorbed by the molecules (absorbance). Clearly the two are related: the transmittance is simply the amount not absorbed, and vice versa. However, the two are measured differently: transmittance is expressed as the percentage of light that passes through, while absorbance is expressed as:
Absorbance = log10(100/%transmittance)
So, if transmittance is 100%, then A=0, while if transmittance is 1%, then A=2.
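If you want to script the conversion, it is a one-liner. The short Python sketch below (the function name and the example %T readings are made up purely for illustration) reproduces the numbers above:
import math

def absorbance(percent_transmittance):
    # A = log10(100 / %T); valid for 0 < %T <= 100
    return math.log10(100.0 / percent_transmittance)

for pct_t in (100.0, 50.0, 10.0, 1.0):
    print("%T =", pct_t, "-> A =", round(absorbance(pct_t), 2))
# prints A = 0.0, 0.3, 1.0 and 2.0 respectively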
There are three sources of absorbance: the container (usually a clear glass test-tube or cuvette), the solvent (nearly always water), and the dissolved molecules. In most cases only the last is of interest. To ensure that the container and solvent do not interfere with our readings, we use a 'blank', a tube containing only solvent, as a reference point. By defining the amount of light that passes through the blank as 100% transmission, we can assume that any reduction in the sample tube is due to the dissolved molecules.
Each type of molecule absorbs each wavelength of light in a characteristic pattern. In general you can make a good guess about a solution's absorbance pattern by looking at the color of the solution: the color it appears is the color that is NOT absorbed. So a blue solution is absorbing all the colors except blue. As you will find out, this does not mean that all blues are identical, nor that the absorbance of other colors is necessarily complete.
Two other factors determine the amount of light absorbed: the length of the light path through the solution and the concentration of molecules in the solution. The first is seldom a factor; since we always use the same size tubes, the length of the light path is always the same. The effect of concentration is usually what we are looking at in the lab. Suppose you dissolve different amounts of a red dye and measure the amount of blue light that passes through, or is absorbed. | <urn:uuid:ccc47f3b-e67e-43e8-823e-67f2be0c81ef> | 4.1875 | 529 | Tutorial | Science & Tech. | 42.011992 |
Node:Number Syntax, Next:Integer Operations, Previous:Exactness, Up:Numbers
The read syntax for integers is a string of digits, optionally preceded by a minus or plus character, a code indicating the base in which the integer is encoded, and a code indicating whether the number is exact or inexact. The supported base codes are:
#B-- the integer is written in binary (base 2)
#O-- the integer is written in octal (base 8)
#D-- the integer is written in decimal (base 10)
#X-- the integer is written in hexadecimal (base 16).
If the base code is omitted, the integer is assumed to be decimal. The following examples show how these base codes are used.
-13 => -13
#d-13 => -13
#x-13 => -19
#b+1101 => 13
#o377 => 255
The codes for indicating exactness (which can, incidentally, be applied to all numerical values) are:
#E-- the number is exact
#I-- the number is inexact.
If the exactness indicator is omitted, the integer is assumed to be exact,
since Guile's internal representation for integers is always exact.
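For example, one would expect the following results (an exactness code may also be combined with a base code):
#e-13 => -13
#i-13 => -13.0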
Real numbers have limited precision similar to the precision of the
double type in C. A consequence of the limited precision is that
all real numbers in Guile are also rational, since any number R with a
limited number of decimal places, say N, can be made into an integer by
multiplying by 10^N. | <urn:uuid:a0c910df-47e7-4179-812f-02170b62b340> | 3.875 | 332 | Documentation | Software Dev. | 37.512011 |
© Kamioka Observatory, ICRR (Institute for Cosmic Ray Research), The University of Tokyo.
This photograph shows the almost full water tank at the Super-Kamiokande nucleon decay experiment. The experiment, buried deep underground, seeks evidence of proton decay. A decaying proton will emit one or more charged particles, moving with enough energy to create a flash of "Cerenkov light", the visual equivalent of a sonic boom, produced when an electrically charged particle (like an electron) moves through a material faster than light would move through the same material. The Super-Kamiokande experiment is also capable of detecting neutrinos, as we learned in Unit 1. (Unit: 2) | <urn:uuid:e9082641-38de-4976-ae5e-2b21e5761db5> | 3.921875 | 148 | Knowledge Article | Science & Tech. | 23.030692 |
Wild Side : Solstice — the invisible annual pivot point
Photo by Susan Safford
This winter solstice, the official start of winter, was emphatically punctuated by a total eclipse of the moon early Tuesday morning (the actual solstice occurred Tuesday evening).
When this column was filed Monday morning, the prospect of good viewing conditions for the eclipse looked grim. But whether you could see it or not, there is something wonderful about the earth acting as a gigantic lens, refracting a dim echo of sunlight through its atmosphere and onto the shadowed moon, to mark the shortest day.
Some similar celestial drama should signal each solstice. The moment of the lowest sun is predictable — cultures have noted and celebrated it for millennia — but it's nearly invisible, an undistinguished moment on an undistinguished wintery day. And yet the solstice echoes throughout the natural world. A colorful alignment of celestial bodies seems like a fitting marker for this turning point, which truly is a critical point on the Wild Side.
On the surface, the landscape looks bleak during the first few weeks of December. The trees are bare, the growing season ended weeks ago, and most migratory species are in their winter quarters. But a careful observer can find a surprising amount of wildlife still active during those weeks; hints of the past summer persist, and there are even species that time their reproductive lives to those weeks leading to the solstice.
Most obviously, Islanders may have noticed swarms of grayish moths around porch lights and windows on warm evenings earlier this month. Aptly named, these winter moths emerge and mate in weather that would immobilize or kill most insects. As you'd imagine, they have evolved unique proteins and enzymes that function well in cool conditions, allowing energy to flow and muscles to contract. The winged males flutter in search of the scent of wingless females, which have climbed onto the bark of the trees that will feed their caterpillars at leaf-out next spring.
While activity in cold weather is not a common pattern among insects, the winter moth is hardly alone. There are also winter crane flies and lacewings that turn up at porch lights in December, similarly adapted to enduring cold, mating, and laying at this unpromising season. The approach seems incongruous for cold-blooded animals, but it has advantages: most potential predators are either inactive or far to the south, and these cold-adapted species enjoy a window of relative safety at a critical point in their life cycles.
December also features the tail end of the summer season for many species. A few grasshoppers persist in sheltered spots, and on mild days a few butterflies can still be found (over the years, a half-dozen species have been recorded here in early December). On a warm Saturday earlier this month, I found five distinct species of flies still active at Cedar Tree Neck, warmed by the sun on the open beach, along trails, or as they swarmed over ponds. Straggling warblers linger in our region in small numbers, even as our winter resident birds continue to arrive and settle in. And I've been told that a few striped bass dally in our waters into December, making leisurely headway toward the Chesapeake.
But by the time of the solstice, the game is over. Summer insects that hadn't gotten around to dying have done so. Those few insects optimized for late autumn, like the winter moth, have run their course and encountered conditions lethal even to them. Migratory birds that are truly migrating have gotten the message and left, and any colleagues they leave behind are likely doomed by whatever afflicts them: Injury, sickness, or just an unfortunately dim migratory impulse.
And then, at the moment it ends, the annual cycle starts again. The point of the solstice, of course, is that it's the shortest day. The following day is longer, if only by seconds, and the lengthening accelerates as we approach the March equinox. Just as summer did six months ago, winter begins to wane at the moment it begins. And within a few short weeks, Island wildlife will begin to respond to the lengthening days and higher sun. Silent around the solstice, a few resident birds like chickadees and house finches will be singing again by mid-January. The average temperature lags behind day length — the coldest day, on average, comes around January 20 — but the push toward spring has already begun.
The length of the day exerts enormous control over nature. It's a pervasive, unfailing signal of the turn of the seasons, prompting plants and animals both simple and complex to alter their hormones, their behavior, and their activity level.
I understand the physiology of it all, and I know exactly what will happen. But still, every year, the solstice catches me off guard, surprising me with how dramatically the natural world can pivot on an invisible point of time. | <urn:uuid:693bc737-c300-4559-8700-edf286c7cf69> | 3.265625 | 1,019 | Truncated | Science & Tech. | 42.753345 |
The watch list, however, is different from the emerging contaminants action list. The watch list includes those materials that DoD believes have a "probable mission or budget impact." DoD then monitors events surrounding the listed material while conducting "rough impact analysis." Other materials found on the watch list include: tungsten and its alloys, lead, beryllium, dichlorobenzenes, and dioxins, among others. "DoD places materials on the Watch List when they are identified through the scanning phase as potentially affecting one or more DoD business areas. While the exact nature and magnitude of the potential impacts are unknown, the Department has identified these materials as having a potential to affect DoD functions. As a result, DoD is conducting Phase I assessments for each of these materials."
The difference between the watch list and action list is that under the watch list the DoD monitors developments concerning the listed material while expending minimal resources. If the material is upgraded to the action list, DoD has determined that the material is likely to impact the department, and it performs detailed analysis on the material while possibly expending "significant" resources on understanding the material. Other activities performed once a material is upgraded to the action list include undertaking risk management actions and pollution prevention efforts by DoD.
This listing of nanomaterials, without more information, is interesting for a number of reasons. First, the DoD's watch and action lists are selective in nature. There are only eighteen materials on these lists in total, so the addition of nanomaterials is significant. We therefore see this action, DoD's recognition of nanomaterials as potentially impacting department operations and the environment, as a step towards regulation of nanotechnology. Second, it is hard to know what DoD will be watching by posting "nanomaterials" on its watch list. Given the different types and functions of nanomaterials and nanoparticles, a blanket listing is vague at best. However, the fact that the DoD elected to list nanomaterials at all is proof that federal agencies are increasing their focus on nanotechnology in general.
While this listing does not cause any regulatory actions to be taken by DoD, an upgrade to the action list could certainly mean a significant change in course as to how one of the country's largest agencies addresses nanotechnology. | <urn:uuid:d54d9892-b016-430a-b63e-55ff5f372982> | 2.84375 | 479 | Knowledge Article | Science & Tech. | 24.076276 |
Researchers around the world are opening their minds to the possibility that the phenomenon of anti-gravity is not just science fiction.
Most respected physicists still scoff at the idea that experimental equipment can reduce gravity, but several groups have been working on it independently and are coming to the same conclusion: it might just be true.
On Monday, reports re-emerged that Boeing, the American aircraft manufacturer, is interested in exploring the possibility of building an anti-gravity device. The news, first revealed in New Scientist magazine in January 2002, centres around Russian scientist, Evgeny Podkletnov. In 1992 he claimed to be the first person to witness the reduction of gravity above a spinning superconducting disc.
Podkletnov, a specialist in superconductors, says he stumbled on the effect whilst performing a routine test on a large superconductor in his laboratory at the Tampere University of Technology, Finland.
High speed spin
Podkletnov met New Scientist in late 2001 to outline the experiment, in which a large yttrium-barium-copper-oxide (YBCO) superconducting disc was suspended in nitrogen vapour and cooled to around -233 °C. The disc was levitated in a magnetic field and finally spun at speeds of up to 5000 revolutions per minute by means of an alternating electric current.
He claimed that objects placed above the disc lost around one per cent of their weight. But so far no one has managed to successfully repeat his experiment.
However, several high-profile organisations have taken an active interest in his work. NASA has paid Superconductive Components of Columbus, Ohio, $600,000 to reproduce the apparatus Podkletnov used in his experiment. There have been delays, but NASA's Ron Koczor told New Scientist: "We expect to be ready to test the device in late September 2002."
British defence contractor, BAe Systems, is also interested in the work and set up Project Greenglow to explore the subject. Other groups in Japan, France and Canada are also rumoured to be working on the subject though they have so far kept their identities secret.
The most intriguing aspect of the affair is that Ning Li, then at the University of Alabama, Huntsville, says she independently predicted Podkletnov's observation in 1989.
Li's theory predicts that if a time-varying magnetic field were applied to a superconductor, charged and deformed lattice ions within the superconductor could absorb enormous amounts of energy. This would cause the lattice ions to spin rapidly about their equilibrium positions and create a minuscule gravitational field.
Li claimed that if these charged, rotating, lattice ions were aligned with each other by a strong magnetic field, the resulting change in local gravity would be measurable.
Early in 2002, Raymond Chiao, a respected physicist at the University of California at Berkeley, put forward his own theory relating gravity and superconductors. He predicted that bombarding a superconductor with electromagnetic waves would produce gravitational radiation and is now attempting to prove his theory by experimentation.
Have your say
Thu Jan 17 18:15:54 GMT 2008 by Paul
Hi, the principle of anti gravity is a simple one and I believe I know how to create such a device, but without all the rigmarole talked about in what I just read.
If there is anyone out there who has a few million dollars lying around, and wishes to know how to achieve anti gravity, without ANY `machinery` as such, I may entertain telling what I know!
You will be amazed at how staggeringly simple a process it is!
I am thinking of advertising this, on a `highest bidder` set-up! Please contact me if you wish to talk about such things.
Wed Jan 30 04:10:18 GMT 2008 by Don
Have you had any luck finding an interested party for your antigravity device? Where can I go if I have a working prototype?
Mon Sep 15 19:32:08 BST 2008 by Xylem
Please at least give a clue of how that can be possible, or at least give the principle that your work relies on.
Gravity Reducing "shield"
Sat Jun 28 18:56:17 BST 2008 by Seamus O'toole
My working prototype consists of an ordinary vacuum cleaner (Electrolux) on its reverse setting. I then place a ping pong ball in the air stream. It floats and occasionally spins as well. (Though certainly not as fast as Podkletnov's ceramic disk!) When I hold a penny over the hovering ball, it seems to be lighter.
I've replicated this dozens of times in my own home, usually during a party. Does anyone know of an investor interested in backing my further experiments?
Gravity Reducing "shield"
Sat Sep 27 06:17:26 BST 2008 by Craig Kubiak
Is it true that as an object spins it gets lighter? I have seen a video of a man spin a cone-shaped object on a piece of paper; then he lifted it up while the object was spinning and then removed the paper while the cone-shaped object hovered in the air. Let me know
Gravity Reducing "shield"
Wed Jun 17 23:19:04 BST 2009 by frank
That is caused by the air flow speeding up when travelling around the round object, and the weight reduction is caused by the upward air flow.
Natural Gravity Utilization
Sun Dec 07 00:37:50 GMT 2008 by Larry Robinson
As with wind and solar power, gravitational forces in nature should be looked at to harness for use. There is a danger in producing similar but man-made gravitational forces that could cause harm to biological entities which have evolved and grown in the gravitational forces around them. Artificial gravitation or anti-gravitation should take cause and effect into consideration.
| <urn:uuid:02910fb1-b743-4022-a7e6-e9276ba52c43> | 3.28125 | 1,356 | Comment Section | Science & Tech. | 43.435024 |
BIRDS appear to be social climbers. Our feathered friends have expensive tastes when it comes to deciding where to hang out, preferring parks in more upmarket neighbourhoods over the poorer parts of town.
The British government already uses local bird life as one of 15 indicators of the quality of life - the others are things like transport and recycling. But as well as being a barometer of a healthy environment, it seems that birds are also attracted to wealth.
Ann Kinzig and her team at Arizona State University studied 15 parks in Phoenix. They found that the more upmarket the surrounding neighbourhood, the more diverse the bird population in the park. And to their surprise, socio-economic factors explained bird diversity better than anything about park ecology - such as tree diversity and vegetation structure. In fact, parks in the poorest parts of town had the highest tree diversity, but ...
| <urn:uuid:7ed36544-542f-4ded-a112-9f289cd16584> | 3.125 | 202 | Truncated | Science & Tech. | 45.366452 |
Many species undergo extensive diel vertical migration, feeding in the surface waters and moving down during the day, reducing predation. In this way, surface production is cascaded through progressively deeper layers. Of relatively minor productive importance is organic material from large carcasses sinking to the seafloor, e.g. dead whales, and sulphur-based organic production associated with deep-sea seafloor hot-water vents. Nevertheless, the concentration of organic material decreases exponentially with depth.
In contrast to former views, it is now known that seasonal effects in surface layers are transferred into even deeper ocean regions so that, despite the physical uniformity of the deep oceans, an annual production signal exists resulting in seasonal migrations and reproductive cycles in deep-sea fauna.
Deepwater fishes comprise three major groups: pelagic fish living largely in midwater, with no dependence on the bottom; demersal fish, living close to and depending on the bottom; and benthopelagic fish, living close to the bottom but undertaking short migrations in the watermass (e.g. for feeding). In general, the deep-sea demersal fishes come from phylogenetically much older groups than the pelagic species (the first existing demersal species were present around 80 million years ago). While most of the demersal deep-sea families are found worldwide, the existence of isolated deepwater basins bounded by the continents and mid-oceanic ridges has resulted in regional differences believed to be a consequence of continental drift and subsequent ocean formation.
Much remains unknown about deepwater fishes and new discoveries continue, such as the megamouth shark (a 4.5m and 750kg shark) and the six-gilled ray, which both represent new families. Since the demersal species are distributed according to depth, those inhabiting the continental slope and rise are spread along ribbon-like depth regions along the perimeters of the oceans. Where deepwater pelagic species and demersal species co-occur, they usually prey on each other.
Life history characteristics and productivity
Just as for epipelagic fishes, deepwater species must successfully spawn, grow and return to the area of the adult habitat. The extreme conditions of the deep-sea are reflected in the variety of reproductive strategies that exist. Low population sizes notwithstanding, hermaphroditism, extreme sexual dimorphism and unbalanced sex ratios occur. Sebastes spp., certain ophidioids, as well as deepwater sharks can be live bearers and the pseudotriakid, Pseudotriakis microdon, is oviphagous.
Despite the smaller number of species in the deep sea, those that occur display a variety of reproductive methods ranging from strongly K-selected species, which may be semelparous (e.g. Coryphaenoides armatus, a widely occurring macrourid), through ovoviviparous and oviparous species, to those that are strongly r-selected. And, in the perpetual darkness of the abyss, many species depend on photophores and sound production for intra-species recognition required for successful reproduction.
Many deepwater species grow slowly, so slowly in fact that determination of their actual age remains difficult and contentious. For some species, particularly orange roughy (Hoplostethus atlanticus), no convincing case has yet emerged for any particular ageing technique based on interpretation of otolith microstructure. Depending on the assumptions made, this species may have longevity ranging from 21 to more than one hundred years. Because of these biological characteristics, most deep-sea species are very fragile with reduced resilience to intensive fishing.
Until recently, the great depth of the deep sea made it difficult to exploit, and the existence of relatively more abundant resources in shallower seas meant that little incentive existed to fish in such difficult-to-exploit regions. Few deepwater fisheries are of long standing and those that are - the Portuguese (Madeira) line fishery for black scabbardfish (Aphanopus carbo), the Pacific Island fisheries for snake mackerels (Gempylidae) and cutlass fish (Trichiuridae) or the west African fisheries for deep-sea sharks (for extraction of squalene) - were initially artisanal.
With the reduction of opportunities for development of inshore fisheries and the improvement of gear technology and navigation instruments, deep-sea fishing has expanded in the 1990s. A well-known example of recently developed deepwater fisheries is that of the orange roughy, a species that inhabits the slope waters and those of seamounts (as well as the seafloor), particularly around New Zealand and Southeast Australia where this commercial fishery initially began. The fishery later spread to the Walvis Ridge in the Southeast Atlantic (Namibia) and the Southwest Indian Ocean. A small fishery even exists in the Bay of Biscay. This long-living fish reaches about 40cm and 2kg in size though the maximum size varies with region. Specially-aimed trawling techniques were developed after initial massive catches from spawning aggregations were taken in a matter of minutes resulting in split codends.
Orange roughy is particularly sensitive to approaching objects (perhaps an adaptation to avoid predation), so that acoustic assessment using towed bodies containing the transducer has proved futile in some areas. Maximum sustainable levels of exploitation of orange roughy may be as low as 5-10% of unfished biomass, corresponding to natural mortalities (M) of about 0.04 per year. Accumulating evidence about stock declines indicates that none of these fisheries are being exploited sustainably and ongoing yields will likely be around 5% of those initially obtained.
A Trichiurid fishery, which exploits Aphanopus carbo in the Atlantic, is a rare example of a deepwater fishery that, because it has traditionally used hook and line gear, has proved sustainable over a period of about 150 years. Adults of this species are benthopelagic, living in the depth range 400-1 600m. The species ranges from Greenland to the Canary Islands and on both sides of the mid-Atlantic ridge. Unusually for a deepwater species, A. carbo grows rapidly and has a longevity of around 8 years. However, as with orange roughy, the usual ominous signs are now evident for this fishery. Catch rose from 1 100 tonnes in 1980 to 3 000 tonnes in 1992, and gear efficiency has improved through the introduction of monofilament lines and a large increase in the number of hooks per line set, now at 4 000-5 000 per line.
The Macrouridae are another group whose members are widespread and, in particular locations, abundant. They are typical pelagic 'cruisers' and inhabit the mid-to-upper region of the continental slope. In the North Atlantic, fisheries exist for Macrourus berglax and Coryphaenoides rupestris using bottom trawls, initially fishing in depths of 600-800m and more recently extending down to 1 500m depth. However, experience in these fisheries off Newfoundland shows the all-too-familiar pattern of total allowable catches tracking declining trends in reported landings of this group. Coryphaenoides rupestris has a potential longevity of 70 years, although in the NE Atlantic fish ages are usually in the 20-30 year range. Thus, as for other deepwater species, Macrourids exhibit the characteristics of many deepwater fisheries that render them susceptible to overfishing.
The Pleuronectidae are a highly-evolved group that is not usually associated with deepwater fisheries, but important fisheries for members of this group occur in both the North Atlantic and North Pacific Oceans. In the Atlantic, the best known has been that for Greenland Halibut (Reinhardtius hippoglossoides) at continental slope depths. This fish had an average size of around 1kg up until the mid-1980s, but that declined to around 200g by the early 1990s. | <urn:uuid:23c6c7bf-263c-4a8a-8bfd-4d08647112ba> | 4.375 | 1,664 | Knowledge Article | Science & Tech. | 25.912833 |
In most cases it's simpler to perform date calculations using epoch time (the numbers of seconds that have elapsed since a given date, in Unix is January 1, 1970).
If you have a date and want to convert it to epoch time, use the functions timelocal and timegm available in the module Time::Local.
These functions take as a parameter a date value (represented as a six-element array) and return the epoch time.
use Time::Local;
#-- note: $mon is 0-11 (January = 0), matching the convention used by localtime
$time = timelocal($sec,$min,$hour,$mday,$mon,$year);
$time = timegm($sec,$min,$hour,$mday,$mon,$year);
Use the function 'Add_Delta_Days' from 'Date::Calc' module.
The syntax of the function is:
Add_Delta_Days($year, $month, $day, $n_days)
where $n_days is the number of days you want to add to (or subtract from) the date specified with $year, $month and $day.
The function returns the calculated date as a list with 3 elements: year, month and day.
#-- add 60 days to November 4th, 1985
use Date::Calc qw(Add_Delta_Days);
($year, $month, $day) = Add_Delta_Days(1985,11,4,60);
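#-- $year, $month and $day now hold 1986, 1 and 3 (i.e. January 3rd, 1986)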
To determine the day of week of a given date, use the function 'Day_of_Week' from 'Date::Calc' module.
Day_of_Week expects 3 parameters: year, month and day (in that order); it returns '1' for Monday, '2' for Tuesday and so on until '7' for Sunday.
To obtain the name of the day, use the function 'Day_of_Week_to_Text' (also from 'Date::Calc' module).
Day_of_Week_to_Text receives as a parameter the day of week and returns a string with the corresponding name.
Use the 'leap_year' function from Date::Calc module. The function returns true if the argument is a leap year, false otherwise.
use Date::Calc qw(leap_year);
$year = 2000;
print "$year is a leap year\n" if ( leap_year($year) );
Use 'localtime' or 'gmtime'. 'localtime' returns local time information and 'gmtime' returns time based on the GMT time zone.
($sec,$min,$hour,$mday,$mon,$year,$wday,$yday,$isdst) = localtime(time);
Both functions expect as a parameter the number of seconds since the epoch (00:00 January 1, 1970 for most systems, 00:00 January 1, 1904 for MacOS). That value is returned by the 'time' function.
The functions return a 9-element list with the structure shown above.
All the elements are numeric. Their meanings are: $sec, $min and $hour are the seconds (0-59), minutes (0-59) and hours (0-23); $mday is the day of the month (1-31); $mon is the month (0-11, with January as 0); $year is the number of years since 1900; $wday is the day of the week (0-6, with Sunday as 0); $yday is the day of the year (starting from 0); and $isdst is true if daylight saving time is in effect. | <urn:uuid:3c384e4d-2eaa-48a3-8301-f31c04c863dc> | 2.875 | 653 | Tutorial | Software Dev. | 52.637154 |
The waves so far discussed can be:
(a) Transverse waves
(b) Longitudinal waves.
(a) Transverse waves:
The waves in which each particle of the medium executes vibrations about its mean position in a direction perpendicular to the direction of propagation of the wave are called transverse waves. For example, the ripples generated in a pond when a stone is thrown, all electromagnetic waves, and the waves on the stretched string of a sitar or violin.
(b) Longitudinal waves:
The waves in which each particle of the medium executes vibrations about its mean position, in the direction of the propagation of the wave, are called longitudinal waves. For example, sound waves in air, sound waves inside water.
Longitudinal waves can also be defined as the waves in which the medium particles have periodic change in displacement and pressure in the direction in which they travel.
In a longitudinal wave, if the particles of the medium vibrate horizontally (the direction of movement of the tuning fork), then the disturbance also travels horizontally. Again this wave travels in the form of compression and rarefaction.
A compression is the region or a space of the medium, in which the particles come close to a distance less than the normal distance between them.
A rarefaction is the region or a space of the medium, in which the particles get apart to a distance greater than the normal distance between them. | <urn:uuid:f952c2f3-3cf7-4926-b816-0038f15512fe> | 4.34375 | 283 | Knowledge Article | Science & Tech. | 31.915469 |
A worldwide effort is underway to perform a census of the world's oceans, pulling information about species from around the world into one location. The project, known as the Census of Marine Life, now has 122,500 different species on its tally -- after cleaning up over 56,000 scientific names that were in actuality just aliases for other organisms.
The project aims to assess and explain the diversity, distribution, and abundance of marine life on the planet -- but that's a tall order, with scientists estimating that there may be three times as many species yet to be discovered as have already been described in the scientific literature. In this segment, we'll talk with ocean explorer Sylvia Earle, and check in on the progress of the project. Organizers of the Census of Marine Life say they are about halfway done, on track to complete the effort in 2010. Information from the project is being published in the World Register of Marine Species, an online encyclopedia of photos and information about all known marine species.
Produced by Annette Heist, Senior Producer | <urn:uuid:22e16b87-e8db-4ad0-b310-7b654054a9c2> | 3.71875 | 211 | Truncated | Science & Tech. | 32.10373 |
The term, 12th Planet, is not scientifically exact but relates to the historical and widely read book that Sitchin wrote, titled The 12th Planet. In this book he explains that the ancient visitors from this traveling comet considered the Moon to be a planet, and counted the Sun as the first. The periodic Earth cataclysms caused by the 12th Planet have been in place for eons, since the Earth was cold and without life. As this statement will raise questions in some minds, let us explain. The Earth was cold as the Sun had not yet lit. All this is a matter of astrophysics, and not relevant to the discussion at hand. The 12th Planet, or giant comet, assumed its orbit around the Sun due to gravitational and motion issues, which were at play coming out of what some Earthlings refer to as the big bang. This was in fact only a little bang, a local affair, however.
The orbit of the 12th Planet is long and narrow. This is not dependent on gravitational and orbital matters within your Solar System, but on a larger scheme, which causes the trip back into your Solar System to be but a minor part of the itinerary. Why does the 12th Planet swing so far away from your Solar System, and why bother to return, having done so?
There is a balance between the attraction of your Sun and another, unseen by you but nevertheless present and in force. The 12th Planet travels interminably between these two forces, not able to settle on an orbit around just one because of the momentum and path it originally took. It is caught. The path of the 12th Planet is such that it spends most of its life out in dark space, slowly moving from one giant tug to another. As it approaches one of these giants, your Sun being one, it picks up speed, and reaches a maximum speed as it passes the attraction. Having passed, it now has double the gravitational attraction on one side, and quickly switches back in the other direction, zooming just as rapidly much along the path it just took. Out in space again, caught between the two giants that dominate its life, it settles down to a sedate few thousand years, only to zip around the Sun's counterpart in a like manner and head back toward your Solar System. | <urn:uuid:700c40e0-ac87-4e3b-971e-6eaf70376f89> | 3.84375 | 469 | Nonfiction Writing | Science & Tech. | 58.06146 |
This activity was selected for the On the Cutting Edge Reviewed Teaching Collection
This activity has received positive reviews in a peer review process involving five review categories. The five categories included in the process are
- Scientific Accuracy
- Alignment of Learning Goals, Activities, and Assessments
- Pedagogic Effectiveness
- Robustness (usability and dependability of all components)
- Completeness of the ActivitySheet web page
For more information about the peer review process itself, please see http://serc.carleton.edu/NAGTWorkshops/review.html.
This page first made public: Jun 29, 2011
A Jigsaw Approach to the Weathering Thermostat Hypothesis
Topic: weathering and climate feedback loops
Course type: upper level university
This activity teaches Climate Literacy Essential Principle 2: Climate is regulated by complex interactions among components of the Earth system
This activity teaches: Concept F. Equilibrium and Feedback Loops - The interconnectedness of Earth's systems means that a significant change in any one component of the climate system can influence the equilibrium of the entire Earth system. Positive feedback loops can amplify these effects and trigger abrupt changes in the climate system. These complex interactions may result in climate change that is more rapid and on a larger scale than projected by current climate models.
Students should be able to do the following:
- explain how the weathering of silicates is affected by environmental variables, such as temperature and precipitation rates
- draw and explain a negative feedback loop involving silicate weathering and atmospheric CO2
- explain what a negative feedback loop is and how it tends to promote equilibrium
Instructor provides an introduction to the weathering cycle and connection to ocean chemistry. We consider the following question as a group before splitting up for the Jigsaw portion of the exercise:
If we take the chemistry of wollastonite (CaSiO3) to represent continental rocks, what is the chemical equation of weathering with carbonic acid (H2CO3)?
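(One balanced form of the answer that instructors commonly expect, treating the residual silica simply as SiO2, is: CaSiO3 + 2 H2CO3 -> Ca2+ + 2 HCO3- + SiO2 + H2O.)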
Students conduct research and develop expertise in one aspect of the weathering-CO2 cycle. Each student produces a 1-2 page description of their area of expertise. Students studying the same aspect then meet to deepen understanding and identify and clear up any misconceptions. Groups check in with instructor or teaching assistant.
These are the aspects of the weathering-climate cycle the students will consider:
- What happens to the Ca2+, Si4+ and HCO3- ions?
- What effect would higher atmospheric CO2 levels have on silicate weathering? Why?
- What effect do warmer temperatures have on silicate weathering? Why?
- What effect do higher precipitation rates have on silicate weathering? Why?
- What effect does increased vegetation have on silicate weathering? Why?
Students are then redistributed into mixed groups and learn about the entire cycle through peer teaching. Each of the mixed groups is given an external forcing of either "Climate becomes warmer" or "Climate becomes cooler". They need to answer the following questions and draw a figure representing the feedback cycle.
- Show the effect of the warming or cooling on temperature, precipitation, vegetation and rate of chemical weathering.
- How do these changes affect atmospheric CO2 levels?
- How do the changes in CO2 levels affect the original warming or cooling?
- Draw the connections as a loop or cycle and explain how equilibrium in the system is promoted by the weathering cycle.
Each group produces a poster explaining the negative feedback cycle. Class time is spent visiting the posters, providing peer feedback and a final group discussion.
During the final group discussion, we emphasize that both final scenarios involve "negative" feedback. In each case, there is a reduction of the initial forcing, whether it is a warming or a cooling.
Assessment is based on the 1-2 page summaries, the group posters and a portion of the midterm/exam.
Connections to other Activities
I used a CLEAN resource called Understanding the Carbon Cycle: A Jigsaw Approach by David Hastings of Eckerd College as a template for this exercise. It would be a good introduction to the carbon cycle and the jigsaw method and this activity could follow.
I use "Earth's Climate: Past and Future" by William F. Ruddiman as a text for this exercise. Students would need to use a variety of reference materials for the initial research part of the activity. | <urn:uuid:20d078a0-47a3-413c-adbb-d96ba299ddd2> | 3.5625 | 915 | Knowledge Article | Science & Tech. | 30.245258 |
Nanotechnology: Cleaning up our Water
Reported April 2008
HOUSTON, Texas (Ivanhoe Newswire) -- He's just 37 years old, but he's already making a difference in the world! Now, Ivanhoe introduces a young engineer who's creating small solutions to big problems.
We've seen it in the movies -- polluted drinking water is a health and environmental concern. In fact, right now, 30 states need to clean up their groundwater. "They've been designated by the EPA as being highly contaminated, and they've got to do something about the contaminated water," Michael Wong, Ph.D., a chemical engineer at Rice University in Houston, told Ivanhoe.
Dr. Wong is one of Smithsonian Magazine's America's Young Innovators … and for good reason. He's trying to come up with a way to use nanoparticles to clean up our water. "Water is not just H2O. Water has all sorts of stuff in it and the stuff we don't want, those are the things that can really hurt you," Dr. Wong explains.
He's using nanoparticles made out of gold and palladium -- a metal related to platinum -- to get rid of chemicals. One of the most common pollutants in United States groundwater is trichloroethylene, or TCE, a solvent used to degrease metals. And it can cause cancer.
"Our idea was, let's go ahead and break it down -- break it down into something that's safer," Dr. Wong says. "Safer chemicals that won't hurt your body and hurt the animals and the fish and what not."
Wong uses nanoparticles -- ten thousand times smaller than a human hair -- and hydrogen to break TCE into something non-toxic. "We are going to pump water through this guy here and the water is being pumped from the bottom up," Dr. Wong explains.
Glass beads will help to hold the nanoparticles in place. "Then clean water comes out," Dr. Wong says.
Dr. Wong plans to test it at military sites first -- then move onto industrial sites and dry cleaning businesses. "I'd like to see our reactor do a really good job of getting rid of some of the contaminants," Dr. Wong says. Possibly, making our water and environment cleaner in the future. Dr. Wong says his reactor will be more efficient and cost less than the carbon reactors being used now.
The American Geophysical Union, the American Waterworks Association, and AVS contributed to the information contained in the TV portion of this report.
Click here to Go Inside This Science or contact:
Dr. Michael Wong
American Geophysical Union
Washington, DC 20009-1277
American Water Works Association
(303) 794-7711 or 1-800-926-7337
Inside the Clouds
This Month's TV Reports
Satellites are unlocking the secrets of the sky, revealing how clouds are affecting global warming and why the Antarctic cloud covering is disappearing.
Discovering a new Earth 2.5 Trillion Miles Away?
An exciting discovery in space: A new earth-like planet has been spotted developing 430 light years away. Could it support life?
Congestive heart failure affects millions of Americans; but now, a new device is improving the quality of life of patients by helping them breathe easy.
New Hope for Stroke Survivors
It was once thought that when a stroke patient loses motor skills, they are gone for good; but a new 'video game' is helping patients continue on their road to recovery no matter how far out they are.
Saving Legs - Saving Lives
Is it just a pain in your leg or something deadly? A new tool is saving the lives of millions with a silent killer many don't even know they have.
Science of Stress
Do you all of a sudden have acne as an adult? Is your hair falling out or turning gray? Relax! It could be a sign of stress, but help is on the way.
Protect Yourself From Computer Hackers
Every 40 seconds, computer hackers are using the internet to attack your computer. Learn how to protect your PC and your identity from cyber thieves.
Every year, there are six million accidents on America's roads; but virtual reality is helping to make our roads safer by finding out what happens when cars crash.
Virtual Reality for Construction Zones
Every year, more than 350 construction workers die on the job -- many from falling. Now, 3-D simulators are saving lives by improving balance and coordination.
Cleaning up our Water: Chemicals could be polluting your drinking water and harming your health! Now, one young scientist is working on a way to clean up H20.
Men are From Mars
That saying 'Men are From Mars, Women are From Venus' might actually be true -- at least when it comes to stress. Scientists say the brains of stressed-out men and women react differently, and may result in unique responses.
Planes, Trains and ant Hills
Tired of waiting in lines at the airport? Waiting to check-in, waiting to board, waiting to exit the plane after it's landed? One airport is getting a little help from some six-legged friends to make your traveling easier. | <urn:uuid:4b935059-83cf-4ae9-b0a7-01c10d20d567> | 2.9375 | 1,087 | Truncated | Science & Tech. | 63.529994 |
The simplest explanation is to consider a tray with a bunch of small depressions in it and a bunch of marbles on it. When the tray is stationary, the marbles fall into the depressions and stay there. When you shake the tray very slowly, the marbles stay in their depressions. But if you shake the tray vigorously, the marbles pop out of the depressions and go all over the place. As long as you keep shaking vigorously, the marbles move around the tray almost as if the depressions weren't even there. And as soon as you slow your shaking enough, the marbles fall back into the depressions and cannot move beyond their local depression.
"Temperature" is a measure of how vigorously the elements of something are moving around randomly. A gas which is 4X as hot has its molecules moving, on average, 2 times as fast (square that to get 4 times the energy). The depressions represent the fact that many molecules have a slighly lower energy when they are at a "sweet spot" distance from each other which is actually pretty close together, but that energy is not very much lower. It is easy to see how the marbles in the depression is analogous to a solid, where the molecules are arrayed in fixed locations in a regular lattice.
But it is not much of a stretch to see how the marbles in the depressions are also analogous to the liquid situation. In a liquid, the molecules want to stay near each other, but they can "roll around over each other" pretty freely as long as they don't get too far apart from any other molecules.
And so we have described "phase transitions," how a weak attractive force (shallow depressions) only impose their order when the temperature (amount of random motion energy) is low enough.
Superconducting Phase Transition
In a metal which is potentially superconducting, at high temperatures its conduction electrons zip around pretty freely and energetically throughout the extent of the metal. It is so much like a gas of electrons that the very useful model is called the free electron model and at higher temperatures the electrons are referred to as a gas.
But it turns out that in some metals (lead, tin, niobium, among others) there is a very weak attractive force between electrons. This is so weak that at room temperature it is completely unnoticeable. Indeed, none of the gas electrons get "stuck" in the tiny "depressions" associated with such a weak force until the temperature is a few hundredths of normal room temperature, around 7 Kelvins for lead, whereas normal room temperature is about 290 Kelvins and liquid nitrogen is still 77 Kelvins.
The superconducting state is more like a liquid than a solid. So the electrons have some "correlation" with each other, which is broken when they are jostled out of their "depression" by thermal bumping from other electrons.
Now what this model tells you is a way to describe falling into a more correlated state from a "gas" state, which applies to gases condensing to liquids, then solids; it also applies to gases of electrons condensing into a superconducting fluid-like state. What it does NOT describe is WHY that state carries current with absolutely positively no voltage drop required. That is a different (and probably harder) post! | <urn:uuid:855a7cf5-2fee-4a4d-8d11-d272b90c91ee> | 2.875 | 706 | Q&A Forum | Science & Tech. | 40.776222 |
The size and number of marine dead zones—areas where the deep water is so low in dissolved oxygen that sea creatures can’t survive—have grown explosively in the past half-century. Red circles on this map show the location and size of many of our planet’s dead zones. Black dots show where dead zones have been observed, but their size is unknown.
It’s no coincidence that dead zones occur downriver of places where human population density is high (darkest brown). Some of the fertilizer we apply to crops is washed into streams and rivers. Fertilizer-laden runoff triggers explosive planktonic algae growth in coastal areas. The algae die and rain down into deep waters, where their remains are like fertilizer for microbes. The microbes decompose the organic matter, using up the oxygen. Mass killing of fish and other sea life often results.
Satellites can observe changes in the way the ocean surface reflects and absorbs sunlight when the water holds a lot of particles of organic matter. Darker blues in this image show higher concentrations of particulate organic matter, an indication of the overly fertile waters that can culminate in dead zones.
Naturally occurring low-oxygen zones are regular features in some parts of the ocean. These coastal upwelling areas, which include the Bay of Bengal and the Atlantic west of southern Africa, are not the same as dead zones because their bottom-dwelling marine life is adapted to the recurring low-oxygen conditions. However, these zones may grow larger with the additional nutrient inputs from agricultural runoff. | <urn:uuid:71de7202-1891-4a5a-bd5c-f404128fefc1> | 4 | 321 | Knowledge Article | Science & Tech. | 37.045344 |
You're hot or you're not
Hot and cold are two of the most basic things we come across in life - but only one of them actually exists.
You don't have to live in Australia to know that heat is real. It's pure energy, and when you heat things up, you're just adding energy to them.
But there's no such thing as cold: cold is just a lack of heat. So when you cool something down, you're not adding cold to it, you're sucking heat out of it.
That's how fridges work — they suck the heat out of food and spit it out the back. A cold bath or metal chair sucks heat out of your skin. Heat is constantly moving from warmer things to cooler things, and that will keep happening until everything in the universe is frozen solid at exactly the same temperature.
There's no upper limit to how hot things can get, but no matter who or where you are, nothing can ever get colder than -273 °C. To understand why, you need to get your head around what actually happens when things get heated or 'un-heated'.
What's going on when things hot up
When you add heat to something you give its atoms more energy. Atoms haven't got batteries or thighs, so they can't store the extra energy. All they can do is use it straight away. And they use it to move. If you give them a little energy, they vibrate a bit faster. Give them a lot and they really take off.
It doesn't sound like much, but that movement at the atomic level is what causes solids to melt, and liquids to evaporate.
Atoms use energy from heat to overcome one of the undeniable facts of life: they're all attracted to one another. Not the kind of strong attraction that leads to chemical bonds, like those between hydrogen and oxygen atoms in water. It's more like a passing glance kind of attraction.
The attraction is caused by Van der Waals forces, which exist between all atoms and molecules. And the only way to overcome them is with energy.
A force to be reckoned with
Atoms that haven't got enough energy to overcome the Van der Waals forces between them end up jammed together and locked in place. And that's exactly what you'd see if you could look at the atoms in any solid like a rock, or a chunk of metal or ice. They're locked in position with hardly any room to move. But they aren't completely still, because even in the coldest Antarctic winter the atoms have got enough energy to make them vibrate and jiggle.
If you warm up a solid, its atoms start vibrating a bit more wildly. They stay locked in their place, shaking faster and faster until the temperature reaches their melting point. Temperature is just a measure of how much energy atoms have got to move around. It's their average kinetic energy, or 'jiggliness'. The melting point of a substance is the temperature at which its atoms or molecules have got enough energy to vibrate their way out of their rigid solid pattern.
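To put a rough number on that jiggliness: for a simple monatomic gas, the average kinetic energy per atom is about (3/2)kT, where k is Boltzmann's constant and T is the temperature in kelvin, which at room temperature (roughly 290 K) comes to only about 6 x 10^-21 joules of random motion per atom.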
Once they've broken free of their solid structure, atoms can move around a lot more — which is why liquids are so much sloppier and less structured than solids. So the only difference between solids and liquids is how freely their atoms move around. And that's all down to how much heat energy they've absorbed from things around them. Suck that heat back out, and your liquid will soon freeze back into its rigid solid form as the atoms can no longer escape the Van der Waals pull.
If you heat a liquid, the extra energy makes its atoms move faster and faster, bumping into one another harder and more often. Chemical reactions rely on atoms and molecules bumping into each other, so just warming a liquid makes reactions happen faster.
The 'escape velocity' of steam
The boiling point of a liquid is a bit like the escape velocity of a rocket — it's the temperature where atoms or molecules have got enough energy to completely escape their attraction to one another.
Van der Waals forces are no match for a particle with that much kinetic energy — it's outta there. That's why gases mix so easily: they can fill any shape, and they'll keep spreading around until they're everywhere.
There's no upper limit on temperature, so you can keep heating and heating a gas. But it will eventually reach a point where its kinetic energy is so great it actually rips molecules apart, and then rips electrons off the individual atoms. That's a plasma — a high-energy electrically charged version of a gas that causes the glowing light in stars, lightning and the gases between sheets of glass in plasma TVs.
While there's no upper limit to temperature, there's a very definite stop sign in the cooling department. The temperature where every atom in the universe stops dead in its tracks is called absolute zero, and it would happen at -273.15 °C.
At this temperature, the kinetic energy of every atom in the universe would be zero. There wouldn't be the slightest hint of a jiggle. Everything would stop. There'd be no chemical reactions because the atoms wouldn't have the energy to bump together to make them happen. There'd be no movement, and no liquids or gases because every atom would be stuck exactly where it is.
Luckily for fans of everything that doesn't involve a motionless energyless state, we'll never reach absolute zero.
Because heat always flows from hotter areas to cooler ones, you'd have to empty all the heat from the entire universe to get even a tiny part of it down to -273.15 °C.
And there's no science grant in the galaxy that could fund an undertaking like that.
Published 16 March 2010 | <urn:uuid:833e2bc5-2838-4f49-95a0-0baf7403a28b> | 2.9375 | 1,193 | Truncated | Science & Tech. | 59.467819 |
Posts Tagged «solar power»
Stanford creates flexible, high-efficiency peel-and-stick solar cells December 24, 2012 at 8:48 am
Researchers at Stanford University have created the first peel-and-stick solar cells. These cells are flexible, can be attached to a variety of surfaces (windows, business cards, clothing), and most importantly they can be produced using conventional, industry-standard facilities and materials. Furthermore, it should be possible to use Stanford’s new process to create peel-and-stick computer chips and LCD displays.
Cheap, graphene-based solar cells could be just years away December 21, 2012 at 11:28 am
Today’s solar cells are generally too expensive, and physically limited for everyday, consumer-grade use. However, a team of scientists at MIT have created a new solar cell out of graphene that is not only more flexible with a higher mechanical strength, but is relatively cheap.
Princeton’s nanomesh nearly triples solar cell efficiency December 11, 2012 at 7:01 am
A research team at Princeton has used nanotechnology to create a mesh that increases efficiency over traditional organic solar cells nearly three fold. We’ll obviously still be using fossil fuels for decades to come, but this research and other breakthroughs like it are accelerating the rate at which we can move to alternate energy sources.
The first flexible, fiber-optic solar cell that can be woven into clothes December 7, 2012 at 8:13 am
An international team of engineers, physicists, and chemists have created the first fiber-optic solar cell. These fibers are thinner than human hair, flexible, and yet they produce electricity, just like a normal solar cell. The US military is already interested in weaving these threads into clothing, to provide a wearable power source for soldiers.
MIT’s sun funnel could slit solar power’s efficiency bottleneck November 28, 2012 at 8:45 am
This week, a team of researchers at MIT hope to start our great global austerity measure, the elimination of the middle-men, and the rise of true, sustainable solar power.
The hunt for alien, star-encompassing Dyson Spheres begins October 17, 2012 at 11:34 am
In May this year, the Templeton Prize went to Tenzin Gyatso (aka the 14th Dalai Lama), however an additional grant of $200,000 has also been given to cosmologist Geoff Marcy of Berkeley. Marcy realized that the Kepler data might also reveal stars that are surrounded by Dyson Spheres.
Is the slumping solar market temporary or a long-term trend? August 27, 2012 at 3:01 pm
Recent reports from Chinese solar manufacturers are anything but rosy; company profits are in the tank. Is this a passing phase, or does it spell trouble for solar power’s ability to compete on the worldwide market?
So long, silicon: Researchers create solar panels from cheap copper oxide August 10, 2012 at 1:10 pm
Researchers from the University of California and Berkeley Lab have discovered a way of making photovoltaic cells out of any semiconducting material, not just beautiful, expensive crystals of silicon.
Researchers create transparent solar cell — the key to transparent computers? July 23, 2012 at 8:12 am
There are two key breakthroughs here: First, this is the highest efficiency polymer solar cell (PSC) yet created; and, perhaps more importantly, researchers have historically struggled to get past 10 or 20% transparency — and this is almost 70% transparent.
Arduino-based ArduSat will run your code in space June 21, 2012 at 2:03 pm
ArduSat is an Arduino-powered satellite being funded through kickstarter to foster open satellite access and the collection of data from sensors in space. It will be able to run code for up to a week at a time, depending on the price tier, and will be orbiting Earth for about a year before it burns up in the atmosphere. | <urn:uuid:4ceade83-5fc9-4937-82ce-1b0bff67f47d> | 2.90625 | 832 | Content Listing | Science & Tech. | 36.5548 |
The Extensible Provisioning Protocol (EPP) is a flexible protocol designed for allocating objects within registries over the Internet. The motivation for the creation of EPP was to create a robust and flexible protocol that could provide communication between domain name registries and domain name registrars.
These transactions are required whenever a domain name is registered or renewed, thereby also preventing domain hijacking. Prior to its introduction, registries had no uniform approach, and many different proprietary interfaces existed. While its use for domain names was the initial driver, the protocol is designed to be usable for any kind of ordering and fulfilment system.
EPP is based on XML – a structured, text-based format. The underlying network transport is not fixed, although the only currently specified method is over TCP. The protocol has been designed with the flexibility to allow it to use other transports such as BEEP, SMTP, or SOAP.
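For illustration only, here is a rough Python sketch of the XML-over-TCP framing described above; it builds a hypothetical domain availability check and prepends the 4-byte length header used by the TCP transport. The element names follow the public EPP specifications, but the exact commands a given registry accepts depend on its own policies and extensions.

    import struct

    # Illustrative only: a minimal EPP <check> command for one domain name.
    xml_body = (
        '<?xml version="1.0" encoding="UTF-8"?>'
        '<epp xmlns="urn:ietf:params:xml:ns:epp-1.0">'
        '<command><check>'
        '<domain:check xmlns:domain="urn:ietf:params:xml:ns:domain-1.0">'
        '<domain:name>example.com</domain:name>'
        '</domain:check>'
        '</check><clTRID>ABC-12345</clTRID></command></epp>'
    ).encode("utf-8")

    # Over TCP, each EPP message is preceded by a 4-byte header carrying the
    # total frame length (header plus payload) in network byte order.
    frame = struct.pack(">I", len(xml_body) + 4) + xml_body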
Source: Extensible Provisioning Protocol(EPP) | <urn:uuid:14f29fc3-a4db-4066-9f10-9d8667112d7e> | 2.984375 | 199 | Personal Blog | Software Dev. | 22.491055 |
With the invention of the iPad and driverless cars, technology has begun mimicking the images of old “futuristic” sci-fi films. Now our future may hold some inventions influenced by “magical” films, such as the Harry Potter series. BBC News reports:
Scientists in the UK have demonstrated a flexible film that represents a big step toward the “invisibility cloak” made famous by Harry Potter.
The film contains tiny structures that together form a “metamaterial”, which can, among other tricks, manipulate light to render objects invisible. Flexible metamaterials have been made before, but only work for light of a colour far beyond that which we see.
Physicists have hailed the approach a “huge step forward”. The bendy approach for visible light is reported in the New Journal of Physics.
Metamaterials work by interrupting and channelling the flow of light at a fundamental level; in a sense they can be seen as bouncing light waves around in a prescribed fashion to achieve a particular result.
However, the laws of optics have it that light waves can only be manipulated in this way by structures that are about as large as the waves’ length.
Continues at BBC News … | <urn:uuid:39f692cd-86d2-4d63-a92a-08cc0fe402c7> | 3.375 | 266 | Truncated | Science & Tech. | 40.372564 |
When an error occurs, the interpreter prints an error message and a stack trace. In interactive mode, it then returns to the primary prompt; when input came from a file, it exits with a nonzero exit status after printing the stack trace. (Exceptions handled by an except clause in a try statement are not errors in this context.) Some errors are unconditionally fatal and cause an exit with a nonzero exit status; this applies to internal inconsistencies and some cases of running out of memory. All error messages are written to the standard error stream; normal output from the executed commands is written to standard output.
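As a small illustration of that last point, an exception caught by an except clause is handled quietly, while an uncaught one produces the traceback behaviour described above (a sketch, not part of the original tutorial text):

    # Handled: no traceback is printed and execution continues normally.
    try:
        result = 10 / 0
    except ZeroDivisionError:
        print("caught a ZeroDivisionError")

    # Unhandled (uncomment to see the error behaviour described above):
    # 10 / 0   # prints a traceback; a script would then exit with a nonzero status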
Typing the interrupt character (usually Control-C or DEL) to the primary or secondary prompt cancels the input and returns to the primary prompt. Typing an interrupt while a command is executing raises the KeyboardInterrupt exception, which may be handled by a try statement. | <urn:uuid:90940c7a-8327-4ef1-92a5-1d7655a4ef69> | 3.21875 | 180 | Documentation | Software Dev. | 39.766941 |
These days anyone can contribute to a great scientific endeavour, whether it’s astronomy, molecular biology or sleep research. Clare Freeman investigates the growing importance of citizen scientists and crowdsourced research.
In this week’s show we delve into the world of crowdsourced science to find out why scientists are increasingly relying on members of the public to make observations, gather information and analyse vast clumps of data. The list of crowdsourced projects is seemingly endless, from folding proteins in computer games, to discovering new planets and searching for extraterrestrial intelligence.
Prof Chris Lintott started his first crowdsourcing project in 2007, Galaxy Zoo. He explains to Clare Freeman how this and all the other Zooniverse projects have developed over the years. It’s not just the technology that has advanced but also the community, with citizen scientists willing to spend more time than ever scouring data.
In the two months since our Science Weekly call-out, almost 6,000 Britons have contributed to Prof Russell Foster’s crowdsourced survey of sleep "chronotypes" – whether you’re an owl or a lark. He reveals the initial results comparing the sleep patterns of Germans and Britons.
Knowing your chronotype can help you maximise your intellectual performance, but could your school or employer be persuaded to let you start work later or earlier depending on your chronotype? | <urn:uuid:59206f63-27c4-4030-af67-4606d3612991> | 2.890625 | 278 | Content Listing | Science & Tech. | 36.624715 |
- Oil Pollution Facts
- Marine Debris Facts
Photo Credit: US Department of Justice
Source: National Academy of Sciences, Oil in the Sea III, 2003
According to the National Academy of Sciences the majority of the oil in the ocean, about fifty-two percent, is the direct result of human activity. The remaining forty-eight percent comes from natural underwater seeps.
Each year, human activities send between 190 million and 706 million gallons of crude oil or its refined products into the seas.
The single largest source of man-made oil pollution in the oceans is oily waste spilled from ships. Oil illegally dumped from commercial ships accounts for 46% of all the oil entering the world’s oceans from human sources.
Put into perspective, the yearly intentional discharge of oil into the seas through routine shipping operations is equal to eight times the amount released by the Exxon Valdez oil spill.
While oil spills produced by large industrial accidents like the Exxon Valdez, which spilled 11 million gallons of oil in 1989, or British Petroleum’s Deepwater Horizon, which released 210 million gallons of oil in 2010, garner the most media attention, they make up just 18% of the oil released into the water as a result of human activity.
Air pollution from cars and industry accounts for just over 8% of the total input of oil into our oceans from human activity. The hundreds of tons of hydrocarbons that are emitted form particle fallout, which is then swept into the oceans by rainfall.
Runoff from land sources accounts for 21% of oil discharged into the water from human activity. Rain washes oil leaked by cars from roads into storm drains, which empty directly into our waters. Another large source of land runoff pollution comes from the improper disposal of engine oil. An average oil change uses 5 quarts of oil, which can contaminate millions of gallons of fresh water.
According to the EPA, more than half of all Americans change their own oil, but only about one-third of the used oil from do-it-yourself oil changes is collected and recycled.
Industrial activities associated with the extraction of petroleum represent just 6% of oil in the water from human sources.
Learn more about Illegal Dumping by Commercial Ships | <urn:uuid:9813e9a5-a4cb-4f67-879d-a7dd10723dba> | 3.78125 | 459 | Knowledge Article | Science & Tech. | 41.454241 |
Can you see why 2 by 2 could be 5? Can you predict what 2 by 10
Imagine a pyramid which is built in square layers of small cubes. If we number the cubes from the top, starting with 1, can you picture which cubes are directly below this first cube?
Place the numbers from 1 to 9 in the squares below so that the difference between joined squares is odd. How many different ways can you do this?
Is it possible to rearrange the numbers 1,2......12 around a clock face in such a way that every two numbers in adjacent positions differ by any of 3, 4 or 5 hours?
Delight your friends with this cunning trick! Can you explain how
Starting with the number 180, take away 9 again and again, joining up the dots as you go. Watch out - don't join all the dots!
Choose a symbol to put into the number sentence.
Can you make a cycle of pairs that add to make a square number using all the numbers in the box below, once and once only?
We start with one yellow cube and build around it to make a 3x3x3 cube with red cubes. Then we build around that red cube with blue cubes and so on. How many cubes of each colour have we used?
Can you each work out the number on your card? What do you notice? How could you sort the cards?
Here you see the front and back views of a dodecahedron. Each vertex has been numbered so that the numbers around each pentagonal face add up to 65. Can you find all the missing numbers?
You have 5 darts and your target score is 44. How many different ways could you score 44?
Sweets are given out to party-goers in a particular way. Investigate the total number of sweets received by people sitting in different positions.
This problem is based on the story of the Pied Piper of Hamelin. Investigate the different numbers of people and rats there could have been if you know how many legs there are altogether!
Winifred Wytsh bought a box each of jelly babies, milk jelly bears, yellow jelly bees and jelly belly beans. In how many different ways could she make a jolly jelly feast with 32 legs?
Find the sum of all three-digit numbers each of whose digits is
What are the missing numbers in the pyramids?
There are 78 prisoners in a square cell block of twelve cells. The clever prison warder arranged them so there were 25 along each wall of the prison block. How did he do it?
Replace each letter with a digit to make this addition correct.
There are 4 jugs which hold 9 litres, 7 litres, 4 litres and 2 litres. Find a way to pour 9 litres of drink from one jug to another until you are left with exactly 3 litres in three of the
Here is a chance to play a version of the classic Countdown Game.
This magic square has operations written in it, to make it into a maze. Start wherever you like, go through every cell and go out a total of 15!
Can you put plus signs in so this is true? 1 2 3 4 5 6 7 8 9 = 99 How many ways can you do it?
If you have only four weights, where could you place them in order to balance this equaliser?
There are 44 people coming to a dinner party. There are 15 square tables that seat 4 people. Find a way to seat the 44 people using all 15 tables, with no empty places.
The idea of this game is to add or subtract the two numbers on the dice and cover the result on the grid, trying to get a line of three. Are there some numbers that are good to aim for?
Can you put the numbers 1 to 8 into the circles so that the four calculations are correct?
How could you put eight beanbags in the hoops so that there are four in the blue hoop, five in the red and six in the yellow? Can you find all the ways of doing this?
Place the numbers 1 to 10 in the circles so that each number is the difference between the two numbers just below it.
Write the numbers up to 64 in an interesting way so that the shape they make at the end is interesting, different, more exciting ... than just a square.
In the following sum the letters A, B, C, D, E and F stand for six distinct digits. Find all the ways of replacing the letters with digits so that the arithmetic is correct.
In a square in which the houses are evenly spaced, numbers 3 and 10 are opposite each other. What is the smallest and what is the largest possible number of houses in the square?
Try entering different sets of numbers in the number pyramids. How does the total at the top change?
This article gives you a few ideas for understanding the Got It! game and how you might find a winning strategy.
Zumf makes spectacles for the residents of the planet Zargon, who have either 3 eyes or 4 eyes. How many lenses will Zumf need to make all the different orders for 9 families?
In a Magic Square all the rows, columns and diagonals add to the 'Magic Constant'. How would you change the magic constant of this square?
This number has 903 digits. What is the sum of all 903 digits?
Can you explain how this card trick works?
Can you arrange 5 different digits (from 0 - 9) in the cross in the
Start by putting one million (1 000 000) into the display of your calculator. Can you reduce this to 7 using just the 7 key and add, subtract, multiply, divide and equals as many times as you like?
Arrange the numbers 1 to 16 into a 4 by 4 array. Choose a number. Cross out the numbers on the same row and column. Repeat this process. Add up your four numbers. Why do they always add up to 34?
This article suggests some ways of making sense of calculations involving positive and negative numbers.
Look carefully at the numbers. What do you notice? Can you make another square using the numbers 1 to 16, that displays the same
Cherri, Saxon, Mel and Paul are friends. They are all different ages. Can you find out the age of each friend using the
Arrange eight of the numbers between 1 and 9 in the Polo Square below so that each side adds to the same total.
You have two egg timers. One takes 4 minutes exactly to empty and the other takes 7 minutes. What times in whole minutes can you measure and how?
Four bags contain a large number of 1s, 3s, 5s and 7s. Pick any ten numbers from the bags above so that their total is 37.
Using the statements, can you work out how many of each type of rabbit there are in these pens?
An environment which simulates working with Cuisenaire rods.
You have four jugs of 9, 7, 4 and 2 litres capacity. The 9 litre jug is full of wine, the others are empty. Can you divide the wine into three equal quantities? | <urn:uuid:b5714c61-888f-4bce-83ed-8153787e2040> | 3.9375 | 1,547 | Content Listing | Science & Tech. | 77.667143 |
The water column is the basic habitat and the medium through which all other fish habitats are connected.
The water column provides the basic physical and chemical requirements for aquatic life and links all habitats.
Fish use of water column habitat
- Water circulation transports eggs, larvae, food, and oxygen to nursery, spawning and foraging areas.
- While all species occupy the water column, water column conditions are especially important for pelagic species such as river herring, Atlantic menhaden, and bluefish.
Some important facts
- A total of 1,315 miles of freshwater streams and nearly 70,000 acres of estuarine waters in coastal river basins were rated as “impaired” in 1999. (http://h2o.enr.state.nc.us/tmdl/General_303d.htm)
- River herring have not recovered despite greatly reduced fishing effort starting in 1995.
- Over 1,000 acres of Outstanding Resource Waters were permanently closed to shellfish harvest during 1990-2000. (See graph below)
- During 1996-2005, over 380 fish kill events were reported in coastal river basins (http://h2o.enr.state.nc.us/esb/Fishkill/fishkillmain.htm)
How’s it Doing?
Water pollution is increasing with growth in human population and supporting development. Runoff from land-disturbance and impervious surfaces clogs streams and creeks with sediments, excess nutrients, and toxic chemicals. Changes in land cover from vegetated open spaces to hardened surfaces have also reduced filtration of runoff and accelerated freshwater flow to adjacent water bodies. Impacts on the water column have the most far-reaching effects on the ecosystem. See Threats to Habitat Index for more information.
See Water Column chapter of CHPP (14.7MB) | <urn:uuid:1a9c71c9-a110-446b-882c-669a0ea8b963> | 3.796875 | 390 | Knowledge Article | Science & Tech. | 49.377747 |
Around the world, the oceans are in trouble, with declining fish stocks, disappearing coral reefs, and changing water chemistry. This week, researchers published a new map highlighting the human impact on oceans worldwide from 17 different activities, such as fishing, climate change, and pollution. "Our results show that when these and other individual impacts are summed up, the big picture looks much worse than I imagine most people expected," said Ben Halpern, lead author of the paper published this week in the journal Science.
The map shows that the most heavily affected waters in the world include large areas of the North Sea, the South and East China Seas, the Caribbean Sea, the east coast of North America, the Mediterranean Sea, the Red Sea, the Persian Gulf, the Bering Sea, and several regions in the western Pacific. The least affected areas are largely near the poles. In this hour, Ira and guests take a look at the state of the world's ocean ecosystems -- and their inhabitants -- with some of the world's top ocean experts. The prognosis isn't good, but is it hopeless?
We're broadcasting live from Boston, Massachusetts, the site of this year's annual meeting of the American Association for the Advancement of Science. If you're in Boston, stop by!
Produced by Annette Heist, Senior Producer | <urn:uuid:ef8c82c6-8ceb-4cf4-a0fc-e1c7576fb862> | 2.828125 | 271 | Truncated | Science & Tech. | 45.752387 |
|Family:||Pleuronectidae (righteye flounders)|
American plaice (Hippoglossoides platessoides) are salt water fish that live in the northwest Atlantic Ocean. Like most flatfish, they live on the bottom of the continental shelf, up to 700 metres deep, but spend most of the time at 90 to 200 metres. Their geographical range is from the coast of Labrador, south to the coast of the U.S. state of Rhode Island. Most are found off the eastern tip of Newfoundland. American plaice feed on sand dollars, brittle stars, crustaceans, polychaetes, and fish such as capelin and launce.
The U.K.-based Marine Conservation Society rates American plaice as 5, the most threatened category of over-harvested animals.
Related pages [change]
- American plaice information (Northeast Fisheries Science Center (U.S.))
- American plaice information (National Marine Fisheries Service (Canada))
- FishBase article
- Ecological status of American plaice (Marine Conservation Society) | <urn:uuid:5cd77cc4-0ce8-473e-9d3a-de1a286ed142> | 3.140625 | 237 | Knowledge Article | Science & Tech. | 32.955244 |
In mathematics and in most programming languages, + and - denote addition and subtraction; but what should their types be? Of course, we want to be able to add both integers and floating-point numbers, but these two functions correspond to completely different machine operations; we may also want to define arithmetic on new types, such as complex or rational numbers.
To better understand the problem, consider a function to add the elements of a list of integers to an accumulator. Assuming that + only means integer addition we could define
add :: Int -> [Int] -> Int
add acc (x : xs) = add (x + acc) xs
add acc []       = acc
As long as there are elements in the list, we add them to the accumulator and call the function recursively; when the list is empty the accumulator holds the result.
But this function makes perfect sense also for floats (or rationals, or complex numbers, or ...) and we would like to use it at those types, too. One could imagine an ad hoc solution just for the arithmetic operators, but we prefer a general solution.
A first step is to introduce the following struct type:
struct Num a where
  (+), (-), (*) :: a -> a -> a
A value of type Num Int has three fields, defining addition, subtraction and multiplication, respectively, on integers. (Of course, we can construct an object of this type using any three functions of the prescribed type, but the intention is to supply the standard arithmetic operators.) Similarly, a value of type Num Float defines the corresponding operators for floating-point numbers.
Assume that we have a properly defined struct value numInt :: Num Int. We can then define
add :: Int -> [Int] -> Int
add acc (x : xs) = add (numInt.(+) x acc) xs
add acc []       = acc
Of course, the first argument in the recursive call now looks horrible; it would be silly to write integer addition in this way. But it has now become easy to generalize the code by abstracting out the Num object as an argument:
add :: Num a -> a -> [a] -> a
add d acc (x : xs) = add d (d.(+) x acc) xs
add d acc []       = acc
This version of add can be used for lists of any type of objects for which we can define the arithmetic operators, at the expense of passing an extra argument to the function.
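The same explicit argument-passing idea can be sketched in an ordinary dynamic language; the Python analogue below is purely illustrative (it is not part of the language being described) and treats the Num record as a plain dictionary of operator functions:

    # A hand-written "Num Int" record, passed explicitly like the d argument above.
    num_int = {"+": lambda a, b: a + b,
               "-": lambda a, b: a - b,
               "*": lambda a, b: a * b}

    def add(d, acc, xs):
        for x in xs:
            acc = d["+"](x, acc)
        return acc

    print(add(num_int, 0, [1, 7, 4]))   # prints 12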
The final step that gives an acceptable solution is to let the compiler handle the Num objects. We do this by declaring Num to be a type class, loosely following the terminology introduced in Haskell:
For any such type, its selectors can be used without the dot notation identifying a struct value from which to select. Whenever a selector of a type class occurs in a function body, the compiler does the following:
With Num defined as a type class, our running example becomes
add :: a -> [a] -> a \\ Num a
add acc (x : xs) = add (x + acc) xs
add acc []       = acc
As a convenience, the declaration of the struct type and the type class declaration can be combined into one single declaration:
typeclass Num a where
  (+), (-), (*) :: a -> a -> a
To use function add to sum a list of integers, we would like to write e.g. add 0 [1, 7, 4]. The compiler must now insert the extra argument numInt, a struct value with selectors for the three arithmetic operators at type Int. However, since there might be several objects of type Num Int defined, we must indicate in instance declarations the objects that are to be used at different types.
instance numInt :: Num Int

An instance declaration is essentially just a type signature flagged with the additional information that the corresponding value is available for automatic argument insertion by the compiler. For convenience, an instance declaration and its struct value definition can also be combined into one declaration. As an example, here is an instance of Num for rational numbers:
data Rational = Rat Int Int

instance numRat :: Num Rational where
  Rat a b + Rat c d = Rat (a*d + b*c) (b*d)
  Rat a b - Rat c d = Rat (a*d - b*c) (b*d)
  Rat a b * Rat c d = Rat (a*c) (b*d)
This definition should be improved by reducing the fractions using Euclid's algorithm, but we omit that. We just note that the arithmetic operators in the right hand sides are at type Int; thus the compiler will insert the proper operations from the instance numInt, avoiding the overhead of extra parameters.
This solution combines ease of use and flexibility with type security. A possible disadvantage is inefficiency; an extra parameter is passed around. To address this, the user may add a specific type signature; if the user assigns the type Int -> [Int] -> Int to add, giving up flexibility, the compiler will not add the extra parameter, instead inserting integer operations directly into the function body.
Several type classes, including Num, are defined in the Prelude, together with instances for common cases.
The compiler must be able to select the proper object of a type class to use whenever a function with a qualified type is used; this choice is guided by the context of the function application. In certain cases ambiguities can occur; these are resolved using default declarations.
Normally, the combined declaration forms are used both for type classes and instances. Separate typeclass declaration of a struct type can only be done in the module where the struct type is defined.
Also subtyping relations may be used as constraints in qualified types. As a simple example, consider the function
twice f x = f (f x)
Obviously, twice has a polymorphic type. At first, it seems that the type should be
(a -> a) -> a -> a

However, it can be assigned the more general type
twice :: (a -> b) -> a -> b \\ b < a

Types with subtype constraints will never be assigned by the compiler through type inference, but can be accepted in type-checking. | <urn:uuid:6d247a8e-fc7f-4fbf-a434-e9ee14a887bb> | 3.46875 | 1,289 | Documentation | Software Dev. | 42.457997 |
Measuring only 5 microns (millionths of a meter) in diameter and 300 nanometers (billionths of a meter) in thickness, a tiny diamond ring has been made by scientists at the University of Melbourne. The ring is a component in a device for producing and detecting single photons. A picture of the ring (see image at http://www.aip.org/png/2008/299.htm) was shown by Steven Prawer (email@example.com) at a session devoted to circuitry based on artificial diamonds at this week’s March Meeting of the American Physical Society (APS) in New Orleans. For more on the Australian work see http://www.qcaustralia.org/home.htm.
DIAMOND QUBITS. The APS session featured several additional striking quantum information processing (QIP) results. But first: why diamond? Diamond is an excellent heat conductor and electrical insulator, and it looks as if it will be an excellent host for qubits. Qubits are a special kind of bit. Unlike the bits (with a value of “1" or “0") used in ordinary digital computers, qubits can have a value of 1 and 0 at the same time. That’s because a qubit is manifested in the form of a quantum system that exists in a superposition of two different states. Examples include photons that can be in either of two polarization states, or Cooper pairs that can reside on either side of a Josephson junction, or the net spin (up or down) of a quantum dot.
A relatively new form of qubit utilizes the two spin orientations of an unpaired electron circulating around a strange kind of “molecule” at the heart of an artificially created diamond film. The molecule consists of a nitrogen atom (present as in impurity amid all those carbon atoms) and a nearby vacancy, a place in the crystal containing no atom at all. The advantages of employing this NV color center (so named since the molecule, when excited, re-emits photons one at a time) include the fact that it is easily excited or polarized by laser light; it stays polarized for as long as a millisecond, compared to mere nanoseconds for most electrons in a semiconductor; and all of this occurs at room temperature. Putting the electron into each of two spin states simultaneously makes it into a long-lived qubit. With further optical networking this qubit might be entangled (brought into coherence) with other nearby qubits, creating a logic gate or processor for a future quantum computer. (The article by David Awschalom firstname.lastname@example.org in the Oct 2007 Scientific American provides excellent background.)
In quick order, here is some of the other diamond news from the APS meeting.
SINGLE-ELECTRON ESR. Ronald Hanson (Kavli Institute, Delft, email@example.com) reported results from the University of California at Santa Barbara revealing the ability to flip the spin of an electron (associated with the NV center) in a few nanoseconds and observe that electron as it loses its assigned polarization through interactions with the surrounding diamond environment; this environment he referred to as a “spin bath” since it consisted of many surrounding nitrogen atoms whose spins could be adjusted. Hanson argued that he and his colleagues had achieved electron spin resonance (ESR, essentially the electron equivalent of nuclear magnetic resonance, NMR) with single-electron sensitivity. The results were also reported online in Science on March 13.
CONTROLLING SINGLE NUCLEAR SPINS. Mikhail Lukin of Harvard (firstname.lastname@example.org) described the effect of a magnetic carbon-13 nucleus on the observed behavior of color centers in diamond. Carbon-13, an isotope present in very pure diamond at the 1% level, is magnetic, whereas ordinary carbon-12 nuclei are not. Lukin said that he hopes to entangle several such NV/C-13 qubits, creating a potential register for performing quantum processing (see Dutt et al., Science, 1 June 2007, for background). A single C-13 atom could be located to within a space of 1 nm and its spin could remain stable for periods as long as 1 second. Furthermore, the NV/C-13 interaction provides a way to perform NMR spectroscopy on a single isolated nuclear spin and to sense very weak magnetic fields. Lukin and his colleagues have performed experiments in which an NV site in a tiny diamond mounted on the end of a probe was used to sense the magnetic signature of a sample lying close underneath. Fields as small as 10 nano-tesla were sensed. In effect, Lukin said, this setup performed as a one-atom magnetometer.
PHOTONIC QUBIT NETWORK. Finally, Charles Santori (Hewlett Packard, email@example.com) reported the creation of qubits in diamond at room temperature without the need for any external magnetic field (for polarizing electrons) or microwaves (for flipping the polarization). All these tasks, he said, could be accomplished with a visible-light laser modulated at two frequencies. The all-optical approach to manipulating spins, using optical waveguides and cavities, was a necessary step toward streamlining and scaling up the process of creating and linking many qubits in a workable quantum computer. | <urn:uuid:04b90ff4-7066-4cfa-87d2-fbdd03f8ea6b> | 3.375 | 1,142 | Knowledge Article | Science & Tech. | 42.833624 |
So far, transgenic forest trees have only been marketed in China, but over 250 experimental releases of GE forest trees have been conducted worldwide. Canada has been field-testing GE trees since 1997. The research is driven primarily by private businesses from developed nations, including some of the world’s largest pulp and paper companies.
Greenpeace is calling for a ban on the release of transgenic trees and, as an interim measure, recommends a global moratorium on commercial and large-scale experimental releases. In a submission to the scientific body of the Convention on Biological Diversity, Greenpeace provides evidence of the significant ecological risks associated with transgenic forest trees.
One of the biggest threats is that GE forest trees will take over natural landscapes, irreversibly usurping the native vegetation upon which a whole array of other plants and animals depend. Although GE trees are intended to be grown on plantations, it is naive (and irresponsible) to think they will remain confined.
Trees typically produce a large number of seeds, and while most of these seeds are usually deposited in close vicinity, smaller amounts can spread across great distances with the help of wind, water and animals. For example, the seeds from pine trees — one of the most widespread and invasive species as well as one of the species subject to GE research — can be carried up to 30 kilometres by the wind.
The corporate answer to the problem of uncontrollable propagation poses an even bigger risk. GE terminator trees, designed to be sterile, would mean birds, insects and mammals could not rely on those seeds for food. The impact on forest biodiversity would be catastrophic.
Trees also propagate from shoots, and because they breed relatively easily with related species, they would inevitably pass on their genes to wild relatives and transfer their transgenes to microorganisms.
A number of varieties of transgenic forest trees have been developed to resist insects, including two species of poplar, which have been commercialized in China. Although there are no studies of their potential effects on non-target organisms, the fact that they can be affected is apparent from experiences with annual crop plants. Similar effects have also been observed in the soil as GE crops can affect the bacteria, earthworms and soil respiration. Compared to annual crop plants, insect resistant trees offer scope for even more frightening scenarios. The leaves of GE trees planted along a river or the shore of a lake could easily enter the waterways with unforeseeable consequences for the aquatic life.
The other characteristic of forest trees that make them so vulnerable to genetic engineering is their long lifespans. All sorts of unexpected changes could easily happen over the lifetime of a tree, some of which live hundreds of years. The longevity of trees also undermines the results of tests, which cannot determine the long-term effects. Ecological consequences may not be evident until after several years of growth.
In addition to ecological impacts, transgenic plantations will also have social consequences. The technological and economic power associated with transgenic forestry is likely to parallel those experienced in agriculture. In that case, the number of producers typically declines and a few large corporations control the production system. Ownership of gene technology will provide forestry corporations with even greater decision-making powers than today. Furthermore, being heavily mechanized and centralized, transgenic plantations will offer little in terms of local employment and profit. When commodities from natural forests and transgenic plantations compete, the latter could actively undermine wood prices and discourage incentives for natural forest management. As indigenous people are often the largest landowners of naturally managed forests, transgenic plantations could lead to a decline in the income of poor people. Moreover, given that the spread of transgenic seeds will be inevitable, the coexistence between transgenic tree plantations and less intensively managed public and private forestlands will pose new economic and liability problems, especially in landscapes made up of a mosaic of public forests, corporate timberlands, wildlife refuges and family timberlands.
In an attempt to warn the Canadian government of these very real risks, Greenpeace has submitted its report on GE trees to the Canadian Food Inspection Agency as it’s consulting on the latest round of field trials that | <urn:uuid:d26415d1-558c-496c-8382-412dd6b4d3c9> | 3.640625 | 835 | Knowledge Article | Science & Tech. | 22.376429 |
A sled starts from rest at the top of a hill and slides down with a constant acceleration. At some later time it is 14.4 m from the top; 2.00 s after that it is 25.6 m from the top, 2.00 s later 40.0 m from the top, and 2.00 s later it is 57.6 m from the top. a) What is the magnitude of the average velocity of the sled during each of the 2.00-s intervals after passing the 14.4-m point? b) What is the acceleration of the sled? c) What is the speed of the sled when it passes the 14.4-m point? d) How much time did it take to go from the top to the 14.4-m point? e) How far did the sled go during the first second after passing the 14.4-m point?
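A quick numerical check of the five parts, assuming constant acceleration as stated (this working is a sketch added for clarity and is not part of the original post):

    dt = 2.0
    positions = [14.4, 25.6, 40.0, 57.6]   # metres from the top, spaced 2.00 s apart

    # (a) average velocity over each 2.00 s interval
    avg_v = [round((positions[i + 1] - positions[i]) / dt, 2) for i in range(3)]
    print(avg_v)                                    # [5.6, 7.2, 8.8] m/s

    # (b) the averages rise by 1.6 m/s every 2.00 s
    a = round((avg_v[1] - avg_v[0]) / dt, 2)
    print(a)                                        # 0.8 m/s^2

    # (c) each average equals the instantaneous velocity at the midpoint of its
    #     interval, so the speed when passing the 14.4 m mark is
    v0 = round(avg_v[0] - a * (dt / 2), 2)
    print(v0)                                       # 4.8 m/s

    # (d) starting from rest, v0 = a*t gives the time to reach that mark
    print(round(v0 / a, 2))                         # 6.0 s

    # (e) distance covered in the first second after the 14.4 m point
    print(round(v0 * 1.0 + 0.5 * a * 1.0 ** 2, 2))  # 5.2 m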
| <urn:uuid:b8a89ea7-91a0-4f0f-97c6-43e183de82b1> | 3.203125 | 273 | Comment Section | Science & Tech. | 110.07773 |
To help with scientific research you don't have to work in a lab. Just visit your garden, local wood or the seashore and you can take part in a national survey, like the Museum’s bluebell survey or the Big Seaweed Search.
Your results will help scientists to find out more about UK species and their distribution, so they can be conserved for future generations.
Help us learn more about the diversity and distribution of trees growing in urban areas by telling us about the trees in your streets, parks and gardens.
Take a walk along the coast and help us monitor the effects of climate change and invasive species on the UK's seaweeds.
Are bluebells flowering earlier than they used to? Help us find out by taking part in the Museum's bluebell survey. Discover what past surveys have revealed about the spread of non-native bluebells.
The Open Air Laboratories Network (OPAL) has been created to inspire people to become more involved with the natural world around them.
Whether you are interested in insects, birds, reptiles or amphibians, find out how you can help the UK's experts to map the biodiversity of the UK.
Scientists at the Museum's Centre for UK Biodiversity and the Biological Records Centre have produced a practical guide to setting up citizen science projects to study biodiversity and the environment.
Guide to citizen science PDF (1.9 MB) | <urn:uuid:49550999-c3de-41c8-ac5d-dd9ad669bc37> | 3.25 | 288 | Content Listing | Science & Tech. | 52.694923 |
A major volcanic eruption on the island of Hawaii in December 1959 devastated an existing montane rain- and seasonal-forest covering an area of about 500 hectares (ha). The eruption resulted in a massive pahoehoe lava substrate on the crater floor of Kilauea Iki, in a new cinder cone, in an area covered with spatter and another with an extensive blanket of pumice varying along a fallout gradient from over 46 m to less than 2 cm deep. Six new habitats were recognized by kinds of substrate and remains of former vegetation. A study was made of plant invasion and recovery from the time of the disturbance till 9 years thereafter. Plant records consisted primarily of periodically listing species by cover-abundance in a large number of quadrats along a transect system that crossed the crater floor and extended about 3 km along the fallout gradient. The atmospheric environment was studied concurrently by records of rainfall, lateral rain- and steam-interception, and desiccating power. The substrates were examined for their soil moisture properties, temperatures, mineralogical properties, and available plant nutrients.
It was found that plants moved onto the crater floor within the first year. They progressed concentrically towards the crater center in correlation with a substrate-heat gradient that cooled progressively from the margin inward. Plant invasion on the cinder cone was delayed by 2-3 years, because of prolonged volcanic heating from below. A fast invasion took place on the spatter habitat where a surviving rain forest was nearby and where tree snags provided additional moisture locally at their base by intercepting wind-driven rains. Establishment at snag bases was also noted on the pumice, and, generally, plant invasion occurred by aggregation of plants in favorable microhabitats which included crevices and tree molds. On the pumice, invasion progressed at a relatively uniform rate in spite of differences in substrate depth and atmospheric environment. The increase in plant cover was much faster on the habitats with vegetation remains than on those without. On the latter, the plant cover was still insignificant in year 9 after the eruption, in spite of a near total spread of plants across these habitats.
The sequence of life form establishment on substrates without vegetation remnants was clearly algae first, then mosses and ferns, then lichens, then native woody seed plants, and finally exotic woody and herbaceous seed plants. On the substrates with former vegetation remains, exotic seed plants participated in the invasion process from the beginning. This was related to the availability of microhabitats with water relations favorable for plants with normal root systems and probably higher water requirements than the native sclerophyllous woody plants. A remarkable recovery occurred among Metrosideros polymorpha trees that were buried up to and over 2.5 m deep under pumice. Several native shrubs resprouted after their entire shoot system had been buried. The best herbaceous survivors were those with underground storage organs, which included both native and exotic species.
The invading exotics in no way interfered with the establishment of the native pioneer plants. Initial stages of succession were observed whereby native woody plants began to replace exotic woody plants. Among herbaceous plants, exotic species were far more numerous, because there are only very few native species in this group. A succession, in part caused by competitive replacement, was noted among the exotic herbaceous plants. Thus, there appears to be no threat of native plants to be replaced by exotics on these new volcanic substrates. The native forms are better adapted to these harsh environments. But exotic complementary life forms are expected to remain in association with the native vegetation because of a lack of life forms among the native species to fill the available niches.
Last Updated: 1-Apr-2005 | <urn:uuid:e009bf70-327f-4a88-a202-fd8f21f4e9bd> | 3.390625 | 766 | Knowledge Article | Science & Tech. | 27.018055 |
X-treme Microbes
Extremophiles are organisms capable of living in conditions that would kill other life-forms, including intense cold, heat, pressure, dehydration, acidity/alkalinity and other chemical and physical extremes. A few animals, such as frogs that freeze solid in winter, can qualify. But in large part, the world’s endurance champs are microbes: bacteria and archaea.
They’re at home in some of the most forbidding pockets of the planet, where scientists are studying their survival mechanisms—and probing the outermost boundaries of life.
Photo: DRY LIFE
Life can’t exist without any water. But research is showing how shockingly little is necessary. Even in the planet’s driest places—such as the Atacama high desert in Chile or the Dry Valleys in Antarctica—scientists have found that microbes can set up shop a few inches below the surface. In such circumstances, certain extremophiles have evolved novel biochemistry with functions that compensate in some respects for lack of water. Investigators are studying the DNA of these survivors to determine which genes contribute to the cells’ abilities.
Other organisms found in Atacama and elsewhere can enter a seemingly lifeless, freeze-dried state, reviving only if and when some water appears. In the ultra-arid Dry Valleys, for example, researchers recently discovered that a mat of cells that had been dormant for two decades began photosynthesis within a day of exposure to liquid water. And a few marvelous microbes, tested in experiments on the space shuttle, have even survived the vacuum and radiation bombardment of empty space.
Credit: Julio L. Betancourt, U.S. Geological Survey
Photo: COLD LIFE
Lots of creatures can live in the cold. But it takes special talents for cells to survive at the South Pole, where temperatures often drop below -100 F. Yet that’s where scientists found a certain kind of bacteria that can get through the polar winter and have active metabolisms in surroundings as cold as 1.4 F.
That’s just one of many creatures specially adapted to extremely frigid venues. Researchers uncovered microbes in an ice core extracted from just above Lake Vostok, an ancient body of water buried thousands of feet below the Antarctic ice surface. At the other end of the Earth, extreme-tolerant organisms have shown up in the permafrost of northern Alaska.
Laboratory studies have shown that many cold-surviving life-forms (collectively known as psychrophiles) have remarkable cellular ingredients that prevent the formation of ice crystals. Others have evolved a talent for huddling together into mats called biofilms. Many can’t live at all above 50 F. It’s just too hot.
Credit: A. Chiuchiolo
Photo: VENT LIFE
Miles below the ocean surface on the lightless seafloor, giant cracks in the Earth’s crust create sites where mineral-dense water—heated to 600 F—spews forth in roiling clouds. It’s as forbidding an environment as one could imagine. Yet scientists have found hosts of organisms that have learned to thrive there.
In those circumstances, of course, photosynthesis simply isn’t possible. But certain kinds of single-celled archaea have developed a unique alternative called chemosynthesis: a means of converting inorganic hydrogen sulfide dissolved from rocks into food. Archaea living on or under the seafloor make up vast microbial mats and other configurations that provide the foundation for a bizarre and abundant community of towering tube worms, gigantic clams and mussels, and strange fish and crabs that can withstand the titanic pressure and utter dark.
Credit: University of Washington, Center for Environmental Visualization.
Photo: ACID LIFE
When it comes to acidity versus alkalinity, most mammals are wimps. On the pH scale, 7 is neutral. The lower the number, the more acidic; the higher, the more alkaline. Human blood has to stay between 6.8 and 7.8 to support life. But nature is replete with creatures that thrive on the extreme ends of the pH scale.
In Yellowstone National Park, for example, researchers took water samples and found organisms fully adapted to extremely hot acidic conditions. In California, other scientists studying the contents of mine drainage revealed incredibly tiny microbes living comfortably at a pH level as low as 0.5—the equivalent of battery acid.
On the double-digit side of the scale, soda lakes in Africa with a pH around 10 (about the same as drain unclogger) support dozens of microbial species with specially evolved chemistry that keeps the pH inside the cells neutral.
Lab studies of both acidophiles and alkalophiles continue to show the remarkable—and often unexpected—range of conditions to which life can adapt.
Credit: David Stahl, Northwestern University | <urn:uuid:5c61e91a-7912-4ba3-b84b-6ca0533d5b1d> | 4.09375 | 1,040 | Knowledge Article | Science & Tech. | 38.597511 |
Owning Class: support
Requires: MathScript RT Module
n = length(a)
Computes the length of a numeric object or string. The length of a matrix is the number of rows or columns in the matrix, depending on which is larger. The length of a string is the number of characters in the string.
|a||Specifies a scalar, vector, or matrix of any data type.|
|n||Returns the length of a. n is a scalar.|
The following table lists the support characteristics of this function.
|Supported in the LabVIEW Run-Time Engine||Yes|
|Supported on RT targets||Yes|
|Suitable for bounded execution times on RT||Yes|
VECTOR = [0, 3, -4.5, 2, 4, 7]
n1 = length(VECTOR)    % n1 is 6
MATRIX = rand(4, 5)
n2 = length(MATRIX)    % n2 is 5, the larger of the two dimensions
aString = 'abcdef'
n3 = length(aString)   % n3 is 6, the number of characters
| <urn:uuid:e056b6a4-1996-437d-aa88-f6df042ac303> | 2.859375 | 192 | Documentation | Software Dev. | 74.284089 |
Fats are the best source of energy for eukaryotic organisms. Glucose offers a ratio of 6.3 moles of ATP per carbon, while saturated fatty acids offer 8.1 ATP per carbon. The complete oxidation of fats also yields enormous amounts of water for those organisms that don't have adequate access to drinkable water. Camels and killer whales are good examples of this; they obtain their water requirements from the complete oxidation of fats.
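The per-carbon figures quoted above are roughly what falls out of classic textbook ATP yields; the quick check below uses commonly cited totals of about 38 ATP per glucose and about 129 ATP per palmitate, which vary somewhat between sources, so treat the numbers as illustrative rather than definitive:

    # Rough per-carbon ATP yields from commonly quoted (textbook) totals.
    glucose_atp, glucose_carbons = 38, 6        # complete oxidation of one glucose
    palmitate_atp, palmitate_carbons = 129, 16  # complete oxidation of one palmitate (C16)

    print(round(glucose_atp / glucose_carbons, 1))      # 6.3 ATP per carbon
    print(round(palmitate_atp / palmitate_carbons, 1))  # 8.1 ATP per carbon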
There are four distinct stages in the oxidation of fatty acids. Fatty acid degradation takes place within the mitochondria and requires the help of several different enzymes. In order for fatty acids to enter the mitochondria the assistance of two carrier proteins is required, Carnitine acyltransferase I and II. It is also interesting to note the similarities between the four steps of beta-oxidation and the later four steps of the TCA cycle.
Entry into Beta-oxidation
Most fats stored in eukaryotic organisms are stored as triglycerides as seen below. In order to enter into beta-oxidation bonds must be broken usually with the use of a Lipase. The end result of these broken bonds are a glycerol molecule and three fatty acids in the case of triglycerides. Other lipids are capable of being degraded as well.
A triglyceride molecule
Glycerol and fatty acids (unsaturated)
Steps of Beta-oxidation
A fatty acyl-CoA is oxidized by Acyl-CoA dehydrogenase to yield a trans alkene. This is done with the aid of an [FAD] prosthetic group.
The trans alkene is then hydrated across its double bond to give a hydroxyacyl-CoA. The alcohol of the hydroxyacyl-CoA is then oxidized by NAD+ to a carbonyl with the help of hydroxyacyl-CoA dehydrogenase. NAD+ is used to oxidize the alcohol rather than [FAD] because NAD+ is capable of oxidizing the alcohol while [FAD] is not.
| <urn:uuid:fd18958d-85a2-43c0-84e3-adf84af1381d> | 3.640625 | 406 | Knowledge Article | Science & Tech. | 35.339261 |
Drought 2009 (updated October 1, 2009)
Minnesota's present drought conditions are the result of two spells of dry weather.
2009 growing season dry spell:
With a very few exceptions, 2009 growing season precipitation has been well short of historical averages across Minnesota. As a result, many Minnesota counties are categorized as being Abnormally Dry or undergoing Moderate to Severe drought (map at right). Precipitation totals have been roughly 50% to 75% of normal since April 1, falling short of average by five or more inches (maps below).
2008-2009 long-term dry spell: In east central Minnesota, a long-term episode of dryness began in mid-June of 2008 and continues to the present. Long-term precipitation deficits in these areas range from eight to fourteen inches (map at bottom of page). Counties in this area are categorized as experiencing Severe to Extreme drought by the U.S. Drought Monitor (map at right).
Weekly rainfall totals through Monday morning, September 28 (map at right) ranged from one and one-half inches to three inches over portions of southwest Minnesota. Much of the southern one half of the state received at least one-half inch of rain. Heavier rains were also reported along the Canadian border. Elsewhere in northern Minnesota, rainfall totals were generally less than one-third inch. Rainfall totals for September are short of historical averages by one to three inches over much of eastern Minnesota.
Temperatures for the fourth week of September were once again well above normal across Minnesota. September 2009 will go into the record books as one of Minnesota's warmest Septembers ever. Warm temperatures enhance evaporation and transpiration rates, worsening the drought situation.
- Agriculture - The Agricultural Statistics Service reports that topsoil moisture is "Short" or "Very Short" across 40 percent of Minnesota's landscape as of September 27.
- Stream flow - Stream discharge values for roughly one-quarter of Minnesota measurement sites rank below the 25th percentile in the historical data distribution for the date. Many measurements fall below the 10th percentile when compared with historical late-September values. Some of the lowest flows, relative to historical data, are observed along the Mississippi River and the Upper St. Croix River.
- Lake and Wetland Levels - Water levels on many central and east central Minnesota lakes and wetlands are very low. The White Bear Lake Conservation District reports that White Bear Lake is within three inches of its all-time recorded low level. According to the Minnehaha Creek Watershed District, discharge at Lake Minnetonka's Grays Bay Dam, the outlet to Minnehaha Creek, remains suspended per operating procedures. The Prior Lake-Spring Lake Watershed District indicates that Prior Lake water levels are the lowest since the early 1990s. For significant rises to occur in the larger water bodies, above-normal precipitation is needed throughout the autumn and into 2010.
- Wildfire Danger - The Department of Natural Resources - Division of Forestry classifies wildfire danger as High in Lake of the Woods County and portions of Koochiching County. Fire danger is Moderate in many northern and eastern Minnesota counties. Elsewhere, fire danger is considered Low.
2009 growing season precipitation deficit maps: | <urn:uuid:2ec1043f-8246-4ef4-987a-521b28c828ca> | 3.15625 | 665 | Knowledge Article | Science & Tech. | 34.884345 |
Expressions are used in a variety of contexts within SQL statements, particularly in search condition predicates and in the SET clause of UPDATE statements. An expression always evaluates to a single value.
The syntax of an expression is as follows:
Note: A user-defined-function is created by using the CREATE FUNCTION statement.
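For illustration, the short Python/SQLite sketch below exercises two such expressions, one in a search condition predicate and one in the SET clause of an UPDATE; the table and column names are invented, and details of expression syntax vary between SQL dialects:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE staff (id INTEGER, salary REAL)")
    conn.executemany("INSERT INTO staff VALUES (?, ?)", [(1, 1000.0), (2, 2000.0)])

    # One expression in the SET clause (salary * 1.10) and one in the search
    # condition (salary < 1500.0); each evaluates to a single value per row.
    conn.execute("UPDATE staff SET salary = salary * 1.10 WHERE salary < 1500.0")
    print(conn.execute("SELECT id, salary FROM staff ORDER BY id").fetchall())
    # roughly [(1, 1100.0), (2, 2000.0)]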
Upright Database Technology AB
Voice: +46 18 780 92 00
Fax: +46 18 780 92 40 | <urn:uuid:23af80d5-181f-4f00-b75b-d36348f67933> | 2.875 | 93 | Documentation | Software Dev. | 40.037826 |
Date of this Version
Gull numbers roosting at two waterbodies close to a military airfield in central England were monitored at dusk and dawn for four weeks during November 2006. Approximately 25,000 and 8,000 gulls were present at each site respectively. Two LEM 50 laser torches mounted on tripods were then deployed to disperse the roost at one of the sites. No effect was observed before dusk or after dawn. Beams were scanned approximately 0.5 to 1metre above the surface of the water across an arc of approximately 200o during a three minute period. The process was repeated continuously for one hour from dusk. Gulls were successfully dispersed and left the site. Large numbers were still present, however, by dawn on all following mornings. Deployment rates were increased, firstly to include three equally spaced deterrence sessions per night, then subsequently to scans every half hour throughout the night. Gull numbers were reduced to zero overnight with none present at dawn. Numbers increased at the alternative waterbody. Birds continued to arrive before dusk to roost and dusk dispersal was always required. The technique cleared all gulls whenever it was deployed but could not eliminate the arrival of birds that would attempt to roost each afternoon. | <urn:uuid:8efdbc08-59ac-4365-9bbd-73b09175e015> | 3.171875 | 252 | Knowledge Article | Science & Tech. | 44.28837 |
Date of this Version
Fathead minnows (Pimephales promelas Rafinesque) were continuously exposed to reduced pH levels of 4.5, 5.2, 5.9, 6.6 and 7.5 (control) during a 13-month, one-generation test. Survival was not affected, even at the lowest pH tested. Fish behavior was abnormal, and fish were deformed at pH 4.5 and 5.2. Egg production and egg hatchability were reduced at pH 5.9 and lower, and all eggs were abnormal. A pH of 6.6 was marginal for vital life functions, but safe for continuous exposure. Free carbon dioxide, liberated by the addition of sulfuric acid to reduce the pH, may have had an unknown effect. The fish did not become accliminated to low pH levels. | <urn:uuid:65f73337-2fb4-4b41-83e8-9feca06aaafe> | 2.6875 | 173 | Academic Writing | Science & Tech. | 72.343713 |
This experiment was performed by the Hydronauts2Fly team within ESA's educational programme: "Fly Your Thesis! - An Astronaut Experience". The Fly Your Thesis! programme gives university students the possibility to fly their scientific experiment in microgravity, as part of their Masters thesis, PhD thesis or research programme, by participating in a series of parabolic flights. In total, three teams of postgraduate students were flying their experiments during the 2012 'Fly Your Thesis!' campaign.
The prime goal of the Hydronauts2Fly Team and their experiment is the investigation and definition of the human neutral body posture. The aerospace students from the Technische Universität München want to find out which position the human body adopts while relaxing in a zero-gravity environment, what is called the Neutral Body Posture (NBP). Because the force of gravity is neutralized in space and onboard parabolic flights, the human body does not have to make any effort to work against it. As a result, its muscles and limbs will rest in a different posture in space than on Earth, similar to the posture of an embryo.
With this experiment the Hydronauts2Fly team is studying the neutral body posture of different persons in order to find out whether or not there is a mutual position for test subjects with different body masses and sizes. And if so, is it predictable? | <urn:uuid:56fb3e30-3cfa-4ce7-9176-f603e783e68b> | 2.8125 | 276 | Knowledge Article | Science & Tech. | 35.722209 |
Evolution and Systematics
Members of bryozoan colonies capture tiny plants and animals to feed on by thrusting feathery tentacles into the current.
"The individuals of the bryozoan colony, called zooids, are about one-sixteenth of an inch in length and consist of little more than a digestive system encased within a compartment of leathery or calcified skeleton. The zooids feed through a trapdoor that opens to the outside. By thrusting feathery tentacles into the current, they sweep tiny plants and animals into their open mouths with a quick, flicking motion." (Winston 1990:70)
Learn more about this functional adaptation.
- Winston, Judith E. 1990. Life in Antarctic depths. (Cover story). Natural History. 99(9): 70.
Molecular Biology and Genetics
Statistics of barcoding coverage
- Specimen Records: 7
- Public Records: 6
- Specimens with Sequences: 7
- Public Species: 6
- Specimens with Barcodes: 7
- Public BINs: 6
- Species With Barcodes: 6
| <urn:uuid:2e39129b-92d2-4b6d-b5d3-81c22b6bcbfb> | 3.75 | 257 | Knowledge Article | Science & Tech. | 48.100015 |
Left: 5-Day composite of the global soil moisture distribution derived from EUMETSAT MetOp-ASCAT data for August 18, 2007. Right: Retrieval noise. Pixels marked in red denote areas with unreliable data due to missing satellite data, mixed pixel effects, or weather and/or vegetation effects. Pixels marked in pink denote areas flagged as snow covered, frozen or saturated with water / flooded. Black areas over land denote lakes or permanent ice. Olive, light and dark blue mark low, medium and high soil moisture or soil moisture retrieval noise, respectively. Blue areas in the right image denote areas where the retrieved soil moisture values should be given less confidence because of a substantial retrieval noise (complex topography, dense vegetation like the rain forest in South America).
This data set is only available for a restricted user group, please contact us if you want to access these data.
RESTRICTED: only accessible within the ZMAW network or via CliSAP login.
This soil moisture data set is based on radar backscatter measurements (at C-Band) of the Advanced SCATterometer (ASCAT) aboard the EUMETSAT MetOp satellite. This data is first normalized to a common incidence angle (40°) using a radar backscatter model. The obtained radar backscatter coefficient is a function of the soil moisture: low values correspond to a low soil moisture, high values are associated with a high soil moisture. Radar backscatter values are scaled between 0 % (dry soil) and 100% (wet soil, saturated with water). The obtained relative soil moisture values represent the moisture in the topmost 5 cm of the soil. A short introduction to the data set is given in the data sheet; for detailed information we recommend the given references (see below).
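The scaling step described above amounts to a linear normalization between the driest and wettest reference backscatter of each grid cell. Below is a minimal sketch of that idea; the function name and the example numbers are hypothetical and are not the operational TU Wien change-detection parameters.

```python
# Hypothetical illustration only -- not the operational TU Wien retrieval.
def relative_soil_moisture(sigma0_db, sigma_dry_db, sigma_wet_db):
    """Scale a backscatter value (normalized to 40 deg incidence) between the
    historically driest (0 %) and wettest (100 %) reference for a grid cell."""
    ms = 100.0 * (sigma0_db - sigma_dry_db) / (sigma_wet_db - sigma_dry_db)
    return min(max(ms, 0.0), 100.0)   # clip to the 0-100 % range

print(relative_soil_moisture(-12.0, sigma_dry_db=-18.0, sigma_wet_db=-8.0))  # 60.0
```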
This is version 2.0 of this data set.
We offer data for years 2007-2011 as a 5-day gridded composite in order to achieve an as complete as possible data coverage. Additionally, the data have been interpolated into a simple cylindrical grid for easy use in models and for display (see section: coverage, spatial and temporal resolution down below). Original data come from single ASCAT overpasses and are organized as time series per 12.5 km x 12.5 km grid cell organized in 5° x 5° tiles. We also offer these original data upon request.
Coverage, spatial and temporal resolution
Period and temporal resolution:
Coverage and spatial resolution:
Original data are organized in 5° x 5° tiles. Each tile comprises a certain number of 12.5 km grid cells. The data are stored as time series per grid cell per tile in binary format.
This data set contains a number of quality flags and additional information. One is the retrieval noise. This is estimated using Gaussian error propagation of the input uncertainties like the variability of the used radar backscatter values due to measurement noise and the variability of the sensitivity of the radar backscatter values to soil moisture for different soil and vegetation types.
We note, that the soil moisture retrieval method used here, is of limited use particularly in polar regions and regions covered by dense rain forest like in South America. These areas are therefore often flagged as unreliable data and/or show a large retrieval noise.
Soil moisture retrieval is not possible for areas covered with snow and ice, for areas with frozen soil and for wetlands/lakes/rivers. In order to allow identification of dubious soil moisture values (which have perhaps not been flagged as being unreliable), the product contains the percentage fractions of snow, frozen soil, and wetlands for every grid cell. Those for wetland are static, assuming that the fraction does not change over time, while those of snow and frozen soil stem from a climatology and therefore vary with season but don't have interannual variation.
Regions of a strongly variable topography are also problematic because of the highly variable local incidence angle in these cases. This causes problems to normalize the measured radar backscatter values to the common incidence angle and thus to retrieve the soil moisture. Therefore, the data set contains the normalized standard deviation of the altitude in each grid cell (in relative units) as a measure of the topographic complexity.
We recommend to check the references (see below) for details.
Institute of Photogrammetry and Remote Sensing (IPF)
Vienna University of Technology (TU Wien)
E-Mail: ww@ ipf.tuwien.ac.at
ICDC, CliSAP, Universität Hamburg
E-Mail: stefan.kern@ zmaw.de
Upon using this data please cite as follows:
ASCAT Soil Moisture 2007-2010, Institute of Photogrammetry and Remote Sensing (IPF), Vienna University of Technology (TU Wien), Vienna, Austria,
provided as 5-Day composites by: Integrated Climate Data Centre (ICDC, icdc.zmaw.de), University of Hamburg, Hamburg, Germany. | <urn:uuid:57ef7c82-cdea-4836-bff5-2fc7b75475da> | 2.703125 | 1,054 | Knowledge Article | Science & Tech. | 35.473711 |
Using the scientific notation:
$$3.14 = 0.314 \times 10^1$$
From Tanenbaum's Structured Computer Organization, section B.1:
The range is effectively determined by the number of digits in the exponent and the precision is determined by the number of digits in the fraction.
I know how this notation works but I am asking about the meaning of the two words.
Why is the book calling them the range and precision? What exactly do they mean? | <urn:uuid:5a31f52f-5382-498b-b3d8-70b6d6014c83> | 3.21875 | 101 | Q&A Forum | Science & Tech. | 68.625 |
On August 23, a magnitude 5.8 earthquake struck the Piedmont region of the U.S. East Coast near Mineral, Virginia. This was an intraplate earthquake – most earthquakes are interplate, meaning that they occur on fault lines that bound tectonic plates. Intraplate earthquakes tend to be much less frequent and much smaller in magnitude than interplate earthquakes. In fact, the most recent earthquake of larger magnitude to strike anywhere in the U.S. east of the Rockies occurred 114 years ago (one of equal magnitude occurred 67 years ago in upstate New York).
The North Anna Nuclear Generation Station, a nuclear plant located about 11 miles from the epicenter, automatically shut down both of its reactors as a safety precaution. Although off-site power was lost, four on-site diesel generators provided sufficient power to reactor safety systems. When one of these generators failed, a fifth backup generator was activated. Off-site power was restored later on the day of the earthquake, and the reactors will likely resume normal operation as soon as possible. No significant damage occurred, and no radioactive material was released.
In short, the nuclear plant and its safety systems functioned properly. Even this rare earthquake was within the design basis of all U.S. nuclear plants.
It is important to understand that Richter magnitude is only one of many factors that contribute to the seismic damage risk of a nuclear plant. Earthquakes of equal magnitude can cause vastly different ground shaking behavior, both in terms of ground shaking frequency and ground acceleration magnitude. Although Richter magnitude corresponds to the total energy released in an earthquake, that energy can be released and propagate in a variety of ways. For example, many people on the U.S. West Coast, who are much more familiar with earthquakes than their East Coast compatriots, were surprised that a mere 5.8 earthquake centered in Virginia could be felt in Massachusetts. Had a 5.8 earthquake occurred in San Diego, people in Los Angeles would probably never know it. Indeed, the older, more solid and connected geology of the Piedmont allows for seismic waves to propagate beautifully and freely. In contrast, California is a disjointed geologic hodgepodge that dissipates seismic energy. Seismologists usually quantify all of this by evaluating the probability that the ground at a certain geographic location will exceed a certain acceleration within a certain period of time. See the map below, which shows the ground acceleration value that has a 10% probability of being exceeded in a 50-year period. For a reference point, the acceleration of gravity is 9.8 m/s2.
Of course, plant design has a large effect on seismic risk. Nuclear plants, just like all important structures, are designed to withstand larger earthquakes (usually quantified by ground acceleration, not Richter magnitude) on the West Coast than on the East Coast. Nuclear engineers who specialize in probabilistic risk assessment (PRA) quantify “plant fragility” by evaluating the probability of plant damage as a function of ground acceleration. See the figure below for a simple flow chart of factors that contribute to the seismic risk (or damage probability) of a nuclear plant. | <urn:uuid:12486011-013d-4f50-b590-e19316a0d83a> | 4 | 644 | Knowledge Article | Science & Tech. | 40.697047 |
On the question you mentioned, a commentator said, "astrophysicists would be very surprised to find a nonrotating black hole in nature". And the event horizon of a rotating black hole isn't actually going to be spherical.
Anyways, the relaxation to an oblate shape might be quick. Now, this is a messy business. There have been approximation and numerical methods used to analyse the merger of two black holes. These are way over my head, but Figure 2 of Binary black hole mergers in Physics Today (2011) shows the ring-down time being a hundred or so times $GM/c^3$. ($GM/c^3$ was the characteristic time mentioned in the comments to the other question, so this is in agreement with what was said there.)
For a solar mass black hole, the characteristic time is about 5 microseconds. The supermassive black hole at the centre of our galaxy is thought to be about 4 million solar masses, so the time would be about 20 seconds. So the ring-down time even for that monster would be only about 2000 seconds, or let's say half an hour.
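For anyone who wants to sanity-check those numbers, here is a rough sketch; the constants are rounded, and it is purely a back-of-envelope check of $GM/c^3$ and the roughly 100x ring-down figure quoted above, not anything from the merger papers themselves.

```python
# Back-of-envelope check of the characteristic time GM/c^3 quoted above.
G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
M_sun = 1.989e30     # kg

def characteristic_time(mass_in_solar_masses):
    return G * mass_in_solar_masses * M_sun / c**3

print(characteristic_time(1))          # ~4.9e-6 s, about 5 microseconds
print(characteristic_time(4e6))        # ~20 s for a 4-million-solar-mass hole
print(100 * characteristic_time(4e6))  # ~2000 s ring-down, the "half hour" above
```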
That said, this only models how long it takes huge distortions to relax to small distortions. It's not clear to me that small distortions have as fast a relaxation time to even smaller distortions. More precisely, I don't see why the decay would be exponential. Again, it's over my head. [Maybe this should be another question.]
You also asked if there could be other periodic disturbances of the horizon. Technically, no, any disturbance would be subject to some damping, because it would have to produce gravitational radiation. If an object were orbiting the black hole, for example, that would have to distort the event horizon as it passed over it, while its orbit would decay via radiation. But the power radiated doesn't scale linearly with mass of the orbiting body. For very small disturbances, it could take a very long time, and you could have an almost periodic scenario. (In the limit, test particles have stable orbits and produce no distortion of the horizon.) | <urn:uuid:5f8a98eb-814c-4093-a59c-6cc61163ecc8> | 2.890625 | 431 | Q&A Forum | Science & Tech. | 53.697996 |
For various reasons Eli has been thinking about aerosols and how the mid-century cooling is attributed to them. Now the Bunny has also been consorting with a bunch of acid rain and regional forecasting types and the thought occurred that maybe we have a case here of the urban cooling effect. To make a long story short, and this is really a WAGNER (wild assed guess, no explanation required), what if the large amounts of SO2 injected into the northern hemisphere atmosphere by WWII and the unrestrained coal burning (see London, smog) produced huge amounts of sulfate aerosol which shadowed and cooled downwind rural measurement sites. Sulfate aerosol gets rained out pretty quickly, so the range would not be global. This would mean that the dip between 1940 and 1970 was in a sense an artifact, the UCE.
The figure at the left from GISS shows that the cooling was a northern hemisphere thing. Warming in the tropics and the southern hemisphere has been quite steady, even though there are plenty of aerosols there (although they are different, the major sources in the SH being sea spray, and in the tropics sand from the Sahara as well as agricultural burning in Brazil and Africa). | <urn:uuid:bb4fed04-3de6-4719-8da8-5cb4afbb1007> | 2.90625 | 249 | Personal Blog | Science & Tech. | 40.924634 |
Hydrophilic Interactions
Interactions between water and other molecules such that the other molecules are attracted to water are called hydrophilic interactions.
Molecules that have charged parts to them are attracted to the charges within the water molecule. This is an important reason why water is such a good solvent. So for instance the picture below shows a glucose molecule in solution. Water molecules surrounding the glucose molecule are shown in blue for clarity.
Glucose molecules have polar hydroxyl (OH) groups in them and these attract the water to them. When sugar is in a crystal the molecules are attracted to the water and go into solution. Once in solution the molecules stay in solution at least in part because they become surrounded by water molecules. This layer of water molecules surrounding another molecule is called a hydration shell.
pgd revised 6/20/02 | <urn:uuid:dfde049b-4973-42ae-a0b7-3dbbed5af910> | 3.546875 | 227 | Knowledge Article | Science & Tech. | 24.665988 |
The calm eye of Typhoon Nabi stands out like a bulls-eye in the center of the concentric circles of color that make up the storm. The colors represent wind speed, with purple and pink showing the highest winds, while tiny barbs show the wind’s direction spinning around the eye of the storm. The white barbs indicate regions of heavy rainfall. The image was created using data collected by the QuikSCAT satellite on September 1, 2005, when Nabi was growing into a powerful super typhoon with winds of 260 kilometers per hour (160 miles per hour, 140 knots) and gusts to 315 km/hr (196 mph, 170 knots). At the time this image was taken, however, Nabi had winds of about 213 km/hr (132 mph, 115 knots) with gusts to 260 km/hr (160 mph, 140 knots), making it the equivalent of a Category 3 hurricane on the Saffir-Simpson Hurricane Scale.
The wind speeds shown in this image don't match the winds reported by the Joint Typhoon Warning Center. This is because QuikSCAT measures near-surface wind speeds over the ocean based on how the winds affect the ocean. The satellite sends out high-frequency radio waves, some of which bounce off the ocean and return to the satellite. Rough, storm-tossed seas return more of the waves, creating a strong signal, while a mirror-smooth surface returns a weaker signal. To learn to match wind speeds with the type of signal that returns to the satellite, scientists compare wind measurements taken by ocean buoys to the strength of the signal received by the satellite. The more measurements scientists have, the more accurately they can correlate wind speed to the returning radar signal.
Typhoons and hurricanes are relatively rare. This means that scientists have few buoy measurements to compare to the data they get from the satellite and can’t match the satellite measurements to exact wind speeds. Instead, the image provides a clear picture of relative wind speeds, showing how large the strong center of the storm is and which direction winds are blowing. To learn more about measuring winds from space, check out NASA’s Winds web site. | <urn:uuid:804e6552-507b-4381-b4e4-ee300b827513> | 3.859375 | 446 | Knowledge Article | Science & Tech. | 54.839498 |
In my opinion, this essay is a must read because it clearly illustrates the correlation between ocean cycles and Arctic ice loss and gain, glacier advance and retreat, and land surface temperature rise and fall. As I said graphically in a previous post…
Guest post by Juraj Vanovcan
The following article shows that the decadal oscillation in North Atlantic sea surface temperature is the driving force behind observed variations in European climate during the 20th century. The long-term North Atlantic SST trend is well correlated with the European temperature station record, Alpine glacier retreat/advance, and changes in Arctic ice extent as well.
Considering the problems with the ground station record being contaminated by urbanization, land use changes and selective use, the SST record offers an alternative metric of changes in the climate record, since it is free of at least some of the issues mentioned above. The North Atlantic SST record is unique in this view, since it is quite reliable also in the early part of the 20th century, when ship measurement coverage of the Atlantic between the American continent and Europe was much denser than in other parts of the globe.
Presented here is the North Atlantic sea surface temperature record since 1850. While the pre-1880 data are rather noisy, probably because of sparse coverage, the 20th century record shows a regular cyclical pattern of warming and cooling. The cycle length is 65 years, with cold minima reached in 1910 and 1975 and warm maxima in 1940 and 2005.
Figure 1: North Atlantic SST record, expressed as monthly anomalies against 1971-2000 period (HadSST2 dataset)
Let’s now compare the North Atlantic SST record with the European ground stations within 40-70N and 10W-30E.
Figure 2: North Atlantic SST record compared to European ground stations
The European station record is well correlated with the Atlantic SST changes, and lags the SST record by some 5 years. It is thus obvious that it is the Atlantic decadal variability which dictates the European climate. The excess surface warming toward the end, above the SST record (observed also in global surface and SST datasets), is explained either as a sign of a quicker response of the surface to increasing radiative forcing, or, as critics argue, as a sign of urbanization and land use changes plaguing the station record. This might be especially true for Europe, where population density and its growth have been considerable during the last 100 years. This dispute can be resolved by comparing the North Atlantic SST trend with a long-term rural station record.
Armagh Observatory (Ireland) is one of the few rural stations with a long historical record; it is located near the small town of Armagh, and its surroundings are claimed to have been basically intact since the record began in 1796. Lomnicky Peak Observatory (Slovakia) is located on the top of Lomnicky Peak (2655 m), the highest mountain of the Carpathian ridge, and measurements are available since 1941.
Figure 3: North Atlantic SST record compared to rural ground stations
From the graph above, it is obvious that the North Atlantic SST record is extremely well correlated to selected UHI-free surface station records from both Western and Central Europe. The amplitude of the warming and cooling cycles is slightly more pronounced in the station records.
There are several points of interest.
- The rate of warming in the 1910-1940 period was equal to that of the 1975-2005 warming period. Even if one suggests that the anthropogenic forcing is superimposed on natural variations in the background, it is difficult to identify the alleged "increased anthropogenic forcing" in the record toward the end of the 20th century.
- There was a pronounced cooling period from 1940 until 1980, which completely erased the early-century warming against the 19th century average. The decade centered on 1982 in the Armagh and CET records was actually colder than the end of the 19th century and the decade centered around 1870, which again questions the concept of anthropogenic forcing, which should already manifest with the CO2 increase. Surprisingly enough, looking back at the whole length of both records, the 1980s in Europe were about as cold as the average of the Little Ice Age period.
- The overall warming trend since 1900 (0.6 deg C/century for SST and 0.9 deg C/century for the station record) is partially created by the fact that the beginning of the century starts with a cycle minimum and ends with a cycle maximum. A more proper procedure, comparing the differences between the 1910/1975 minima and the 1940/2005 maxima, gives a constant warming trend of 0.3 deg C/century for the SST record.
- Despite a string of cold years in the early 1940s (much more pronounced in the Central/Eastern European record), individual years in the 1940-1950 decade were comparably warm to those of the last decade. But the fact is that the last decade as a whole has been the warmest on record in both the Armagh and Atlantic SST data.
Figure 4: 0-700m ocean heat content in North Atlantic, 1955-2010
In the monthly Atlantic SST record, we can observe that the recent warm phase peaked in 2005 and a subsequent cooling of the North Atlantic started, despite the recent AMO peak in response to the 2009/2010 El Niño. This climate shift is even better visualized in the 0-700m ocean heat content record for the North Atlantic. Based on previous records, we can expect the European climate to follow the SST record and to mimic the 1940-1975 cooling trend.
* * *
The multidecadal oscillation in European climate is also tied to European glacier growth and decline. We often hear about the recent Alpine glacier retreat, but the fact is that a similar retreat occurred in the early 20th century as well, and most of the observed glaciers advanced just three decades ago. Data from the Swiss Glaciology Institute, covering more than 100 Swiss glaciers, show the ratio of advancing, stationary and retreating glaciers during the 20th century, presented here against the AMO index.
Figure 5: Swiss glacier advance/retreat related to Atlantic Multidecadal Oscillation (older years are to the right)
Compared to the North Atlantic SST record, the period with the most glacier growth/retreat lags the ocean by 5 years, matching the lag in the surface record. The extremely warm European summer of 2003 is clearly recognizable, when all observed glaciers retreated. But a similar period occurred in 1945-1950, followed by years with prevailing growth in the late 1970s/early 1980s. This glacier behavior is also discussed in the recent study "100 year mass changes in the Swiss Alps linked to the Atlantic Multidecadal Oscillations". Based on the AMO peak in 2005 and the observed 5-year lag, a rebound of Alpine glaciers is expected in the near future.
* * *
The North Atlantic seems to have a decisive effect on Arctic temperature and ice extent as well. This is understandable, since the Gulf Stream brings masses of warm Atlantic water into the Arctic Ocean. Plotting the post-1979 satellite-era ice extent against both North Atlantic SST anomalies and ocean heat content shows reasonable correlation.
Figure 6: Arctic ice extent as a function of North Atlantic SST record, 1979-2009
Figure 7: Arctic ice extent as a function of North Atlantic 0-700m ocean heat content, 1979-2009
By extrapolating this correlation backwards, it is understandable that the Northwest Passage was open for shipping both in the 1942-1944 period and again in 2007-2009. Beyond this SST range, other positive/negative amplifying effects may also change the linear correlation suggested above. The incipient rebound of Arctic ice extent since its 2007 minimum is well explainable in light of the recent climate shift of the North Atlantic into its cooling mode.
In light of these facts, the alleged Arctic ice history often presented as a “proof” of “unprecedented” ice retreat in the 20th century is unsupported.
Juraj Vanovcan 26th September 2010
My thanks to Juraj for this excellent essay. The conclusion from this essay is that the oceans drive the temperature of the atmosphere, not the other way around. The polar ice responds to the AMO, and glaciers in Europe respond to the AMO. When the AMO and PDO coincide to both be negative, forecast to be sometime around 2015, there’s gonna be some ‘splaining to do.
As the New Scientist finally came to realize and publish on this week, the sun and the oceans play a bigger role than many give credit for. – Anthony
Here’s some additional information via appinsys:
PDO Plus AMO / US Temperatures
Joseph D’Aleo has conducted a correlation analysis between the PDO, AMO and temperatures [http://icecap.us/images/uploads/US_Temperatures_and_Climate_Factors_since_1895.pdf] and [http://intellicast.com/Community/Content.aspx?a=127]. The following figures are from D’Aleo’s analysis.
The following figure shows the 5-year means of PDO, AMO and PDO + AMO.
The next figure shows the US temperature anomalies as calculated by NASA’s James Hansen (2001). The periods when the temperature anomalies are positive correspond almost exactly to when the PDO+AMO changes between warm and cool phases.
The following figure compares the PDO+AMO with the US average annual temperatures. D’Aleo calculated an r-squared of 0.85 between the two – an extremely good correlation.
The next figure compares the same temperature data with atmospheric CO2. D’Aleo calculated an r-squared of 0.44 between the two – a fair correlation, but poor in comparison to the PDO+AMO correlation. Although correlation does not prove causation, lower correlation is evidence of lower probability of causation.
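For readers who want to reproduce the flavor of this kind of comparison, the sketch below shows how an r-squared value between two series can be computed. The arrays are made-up placeholders, not the actual PDO+AMO index or the US temperature record used by D'Aleo.

```python
# Illustrative only: toy numbers, not the real PDO+AMO or US temperature series.
import numpy as np

pdo_plus_amo = np.array([-1.2, -0.8, -0.3, 0.1, 0.6, 1.0, 1.3])
us_temp_anom = np.array([-0.5, -0.4, -0.1, 0.0, 0.3, 0.4, 0.6])

r = np.corrcoef(pdo_plus_amo, us_temp_anom)[0, 1]   # Pearson correlation coefficient
print(round(r ** 2, 3))   # r-squared for this toy data (close to 1 by construction)
```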
The following figure shows the combined effect of PDO and AMO on drought in the United States [http://oceanworld.tamu.edu/resources/oceanography-book/oceananddrought.html]. Further information on these drought relationships can be found at [http://www.pnas.org/content/101/12/4136.full] | <urn:uuid:49133bfd-6793-419d-a383-9a5e23cc0260> | 2.96875 | 2,121 | Personal Blog | Science & Tech. | 42.45363 |
From CreationWiki, the encyclopedia of creation science
- Atomic weight: 39.948 g/mol (39.948 amu)
- Chemical series: Noble gases
- Appearance: Colorless
- Group, Period, Block: 18, 3, p
- Electron configuration: [Ne] 3s2 3p6
- Electrons per shell: 2, 8, 8
- Melting point: 83.80 K (-189.35 °C)
- Boiling point: 87.30 K (-185.85 °C)
All properties are for STP unless otherwise stated.
Argon is a chemical element known by the chemical symbol Ar. It is one of the most abundant elements and the third most common gas in the Earth's atmosphere. Argon is located in group VIIIA, also called the noble gases. Noble gases are generally colorless, odorless, and tasteless, and they have extremely low boiling and freezing points. Argon was discovered by a Scottish chemist, Sir William Ramsay, and an English physicist, Lord Rayleigh, in 1894. It was isolated by examining the residue obtained after removing nitrogen, oxygen, carbon dioxide, and water from clean air. They named the element "argon" from the Greek word "argos," meaning inactive or idle, because the gas does not react with anything. The original symbol for argon was the letter "A"; IUPAC changed the symbol to "Ar" in 1957.
Argon is the eighteenth element of the periodic table, so its atomic number is 18: a neutral argon atom has eighteen protons and eighteen electrons, and the most common isotope, Ar-40, has twenty-two neutrons. Its atomic weight is 39.948. Argon is a colorless, gaseous element belonging to the noble gases and classed as a nonmetal. Argon has a melting point of 83.8 Kelvin (-189.35 Celsius or -308.83 Fahrenheit), a boiling point of 87.3 Kelvin (-185.85 Celsius or -302.53 Fahrenheit), and a density of 1.7837 grams per liter. Its crustal abundance is 3.5 milligrams per kilogram and its oceanic abundance is 0.45 milligrams per liter. (Argon forms a clathrate with β-hydroquinone, a compound held together by forces that are not true chemical bonds.) Argon is two and a half times more soluble in water than nitrogen. Viewed through a spectroscope, argon shows many red emission lines.
Argon is found in the Earth's atmosphere, which contains 0.94% argon, and in the Martian atmosphere, which contains 1.6% argon. Argon-39 is made by cosmic ray activity. There are 22 known isotopes of argon, from Ar-31 to Ar-53 (except Ar-52). Natural argon is a mixture of three major isotopes: Ar-36 (0.34%), Ar-38 (0.06%), and Ar-40 (99.6%). These three natural isotopes are stable, so they have no half-life and do not decay. Among the radioactive isotopes, Ar-37 has a half-life of 35 days, Ar-39 a half-life of about 269 years, and Ar-41 a half-life of 1.8 hours. Because argon is so common, pure argon costs only about 5 cents per 100 grams.
Argon is used mainly in electric lights, fluorescent tubes, photo tubes, glow tubes, and lasers. It is used in lighting where nitrogen is unsuitable. It is very important in the manufacture of stainless steel and silicon crystals because argon is used as a shielding gas in arc welding and cutting. Argon-39 is used in dating applications, primarily ice coring. A mixture of argon and carbon dioxide is sometimes used in metal inert gas (MIG) welding of common structures. Argon is also used in wine-making to displace oxygen in barrels.
Argon can pose health hazards, but it does not damage the environment. Argon is used in cryosurgery, where extremely cold temperatures are applied to destroy small areas of diseased tissue under the skin. Breathing pure argon causes dizziness, headache, suffocation, and eventually death. Contact between skin or eyes and liquid argon causes frostbite.
- ↑ 1.0 1.1 1.2 Unknown Author. Argon. www.chemicool.com. Web. Access 27 November 2012 .
- ↑ 2.0 2.1 Gagnon, Steve. The Element Argon. education.jlab.org. Web. Access 27 November 2012 .
- ↑ Winter, Mark. Argon: the essentials. www.webelements.com. Web. Access 27 November 2012 .
- ↑ Helmenstine, Anne. Argon Facts. www.about.com. Web. Access 27 November 2012 .
- ↑ Bentor, Tinon. Periodic Table: Argon. www.chemicalelements.com. Web. Access 27 November 2012 .
- ↑ Unknown Author. Argon (Ar) Properties, Uses, Applications Argon Gas and Liquid Argon. www.uigi.com. Web. Access 27 November 2012 .
- ↑ Unknown Author. Argon Ar. www.lenntech.com. Web. Access 27 November 2012 . | <urn:uuid:78c74965-c299-4d4a-98d6-080f22ba9057> | 3.8125 | 1,177 | Knowledge Article | Science & Tech. | 71.420107 |
Differences between Python 2 and 3 - This article explains the subtle and not so subtle differences (print('...') as a function, and input(...)/eval(input(...)) instead of raw_input and input in 3, etc.)
Python Style Guide - Readability Counts! And hey this document shows a desire for standardization of coding to aid everyone in the community
ActiveState - implementation and IDE
PyDev - IDE
Python E-Books - e-Literature to get you started and free to boot
I like Dive into Python; it's pretty clear and really brings the reader up to speed in a concise manner. I personally use IDLE. Hope this is helpful to people starting out in Python.
Generators in Python:
Tricks, Tips, and Hacks:
Python Tips, Tricks, and Hacks
MIT 6.00 Intro to Computer Science & Programming
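For anyone who hasn't clicked the generators link yet, here's roughly the kind of thing it covers (a minimal sketch, nothing more):

```python
# A generator produces values lazily -- nothing is computed until you ask for it.
def squares(n):
    for i in range(n):
        yield i * i

gen = squares(5)
print(next(gen))   # 0 -- only the first value has been computed so far
print(list(gen))   # [1, 4, 9, 16] -- the remaining values
```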
Anyone have their personal resources and are willing to share them?
This post has been edited by Dogstopper: 04 January 2011 - 02:47 PM | <urn:uuid:05478550-2840-412a-a90e-686b918d4be6> | 2.953125 | 211 | Comment Section | Software Dev. | 56.32852 |
Newton's First Law of Motion
Balanced and Unbalanced Forces
Newton's first law of motion has been frequently stated throughout this lesson.
An object at rest stays at rest and an object in motion stays in motion with the same speed and in the same direction unless acted upon by an unbalanced force.
But what exactly is meant by the phrase unbalanced force? What is an unbalanced force? In pursuit of an answer, we will first consider a physics book at rest on a tabletop. There are two forces acting upon the book. One force - the Earth's gravitational pull - exerts a downward force. The other force - the push of the table on the book (sometimes referred to as a normal force) - pushes upward on the book.
Since these two forces are of equal magnitude and in opposite directions, they balance each other. The book is said to be at equilibrium. There is no unbalanced force acting upon the book and thus the book maintains its state of motion. When all the forces acting upon an object balance each other, the object will be at equilibrium; it will not accelerate. (Note: diagrams such as the one above are known as free-body diagrams and will be discussed in detail in Lesson 2.)
Consider another example involving balanced forces - a person standing upon the ground. There are two forces acting upon the person. The force of gravity exerts a downward force. The floor exerts an upward force.
Since these two forces are of equal magnitude and in opposite directions, they balance each other. The person is at equilibrium. There is no unbalanced force acting upon the person and thus the person maintains its state of motion. (Note: diagrams such as the one above are known as free-body diagrams and will be discussed in detail in Lesson 2.)
Now consider a book sliding from left to right across a tabletop. Sometime in the prior history of the book, it may have been given a shove and set in motion from a rest position. Or perhaps it acquired its motion by sliding down an incline from an elevated position. Whatever the case, our focus is not upon the history of the book but rather upon the current situation of a book sliding to the right across a tabletop. The book is in motion and at the moment there is no one pushing it to the right. (Remember: a force is not needed to keep a moving object moving to the right.) The forces acting upon the book are shown below.
The force of gravity pulling downward and the force of the table pushing upwards on the book are of equal magnitude and opposite directions. These two forces balance each other. Yet there is no force present to balance the force of friction. As the book moves to the right, friction acts to the left to slow the book down. There is an unbalanced force; and as such, the book changes its state of motion. The book is not at equilibrium and subsequently accelerates. Unbalanced forces cause accelerations. In this case, the unbalanced force is directed opposite the book's motion and will cause it to slow down. (Note: diagrams such as the one above are known as free-body diagrams and will be discussed in detail in Lesson 2.)
To determine if the forces acting upon an object are balanced or unbalanced, an analysis must first be conducted to determine what forces are acting upon the object and in what direction. If two individual forces are of equal magnitude and opposite direction, then the forces are said to be balanced. An object is said to be acted upon by an unbalanced force only when there is an individual force that is not being balanced by a force of equal magnitude and in the opposite direction. Such analyses are discussed in Lesson 2 of this unit and applied in Lesson 3.
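The bookkeeping behind such an analysis is nothing more than summing signed force components. The sketch below assumes the book's mass (5 kg) and the friction force (3 N) purely for illustration; they are not values given in the text above.

```python
# Minimal force bookkeeping; up (or rightward) is taken as positive.
def net_force(forces_newtons):
    return sum(forces_newtons)

mass = 5.0                                # kg, an assumed example mass

# Vertical forces on the book at rest: gravity down, table pushing up.
vertical = net_force([-50.0, +50.0])
print(vertical, vertical / mass)          # 0.0 N, 0.0 m/s^2 -> balanced, equilibrium

# Horizontal forces on the sliding book: only friction (assumed 3 N) acts.
horizontal = net_force([-3.0])
print(horizontal, horizontal / mass)      # -3.0 N, -0.6 m/s^2 -> the book slows down
```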
Luke Autbeloe drops an approximately 5.0 kg fat cat (weight = 50.0 N) off the roof of his house into the swimming pool below. Upon hitting the pool, the cat encounters a 50.0 N upward resistance force (assumed to be constant). Use this description to answer the following questions.
1. Which one of the velocity-time graphs best describes the motion of the cat? Support your answer with sound reasoning.
2. Which one of the following dot diagrams best describes the motion of the falling cat from the time that they are dropped to the time that they hit the bottom of the pool? The arrows on the diagram represent the point at which the cat hits the water. Support your answer with sound reasoning.
3. Several of Luke's friends were watching the motion of the falling cat. Being "physics types", they began discussing the motion and made the following comments. Indicate whether each of the comments is correct or incorrect. Support your answers.
a. Once the cat hits the water, the forces are balanced and the cat will stop.
b. Upon hitting the water, the cat will accelerate upwards because the water applies an upward force.
c. Upon hitting the water, the cat will bounce upwards due to the upward force.
4. If the forces acting upon an object are balanced, then the object
a. must not be moving.
b. must be moving with a constant velocity.
c. must not be accelerating.
d. none of these | <urn:uuid:e664c54a-8efb-4b62-a41e-35d0bc5e077d> | 4.28125 | 1,282 | Tutorial | Science & Tech. | 58.184738 |
|Jul5-12, 02:20 AM||#1|
Energy lost by charge due to acceleration.
We know energy is lost when a charge accelerates. What form does this energy take? Which form of their energy are these charges releasing?
|Jul5-12, 02:54 AM||#2|
The energy is given off in the form of electromagnetic waves. The energy ultimately comes from whatever process is accelerating the charge. That is, because some of the energy gets radiated away, you need to do more work to accelerate a charged object than an equivalent uncharged object. Thus we speak of a "radiation reaction force" that opposes the acceleration of charged particles. We can say that the energy of the radiated EM waves comes from the work you do against this radiation reaction force.
(Since you posted this in the QM forum, you might note that this is a purely classical phenomenon, not a quantum mechanical one.)
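To put a rough number on it, the classical (non-relativistic) Larmor formula gives the radiated power. The quick sketch below uses rounded constants and an arbitrary example acceleration, purely for illustration.

```python
# Larmor formula: P = q^2 a^2 / (6 * pi * epsilon_0 * c^3), valid for v << c.
import math

epsilon_0 = 8.854e-12   # F/m
c = 2.998e8             # m/s
e = 1.602e-19           # C, elementary charge

def larmor_power(q, a):
    return q ** 2 * a ** 2 / (6 * math.pi * epsilon_0 * c ** 3)

# Example only: an electron undergoing an (arbitrary) acceleration of 1e20 m/s^2.
print(larmor_power(e, 1e20))   # ~5.7e-14 W radiated away as electromagnetic waves
```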
|I'm lost on this angular acceleration problem||Introductory Physics Homework||3| | <urn:uuid:df72181f-6c4a-4293-ad45-bef169136ecf> | 3.140625 | 310 | Comment Section | Science & Tech. | 50.227551 |
In 2008, PISCO researchers documented the rise of anoxic waters caused by upwelling currents. Upwelling currents typically support extremely productive ecosystems (20% of global fishery yield are taken from upwelling areas) because they transport nutrient-rich water from the deep to surface waters where they can be used by photosynthetic organisms. Because upwelling currents transport nutrient-rich but oxygen-depleted water onto shallow seas, large expanses of productive continental shelves can be vulnerable to the risk of extreme low-oxygen events.
This research documents the novel rise of water-column shelf anoxia in the northern California Current system, a large marine ecosystem with no previous record of such extreme oxygen deficits. These findings are particularly alarming as large scale changes in productive ecosystem, such as the expansion of anoxia could severely impact a major portion of the world's fisheries.
Chan, F., J. A. Barth, et al. (2008). "Emergence of Anoxia in the California Current Large Marine Ecosystem." Science 319(5865): 920. http://dx.doi.org/10.1126/science.1149016 | <urn:uuid:9db599ba-9c40-4b44-8c6d-dbaa67bf11e1> | 3.40625 | 238 | Academic Writing | Science & Tech. | 37.79886 |
Radioactive waste is a type of waste containing radioactive chemical elements that has no practical purpose.
It is sometimes the product of a nuclear process, such as nuclear fission.
The majority of radioactive waste is "low-level waste", meaning it has low levels of radioactivity per mass or volume.
This type of waste often consists of items such as used protective clothing, which is only slightly contaminated but still dangerous in case of radioactive contamination of a human body through ingestion, inhalation, absorption, or injection.
Waste from the front end of the nuclear fuel cycle is usually alpha emitting waste from the extraction of uranium.
It often contains radium and its decay products.
The back end of the nuclear fuel cycle, mostly spent fuel rods, often contains fission products that emit beta and gamma radiation, and may contain actinides that emit alpha particles, such as uranium-234, neptunium-237, plutonium-238 and americium-241, and even sometimes some neutron emitters such as Cf.
Industrial source waste can contain alpha, beta, neutron or gamma emitters.
Radioactive medical waste tends to contain beta ray and gamma ray emitters.
For more information about the topic Radioactive waste, read the full article at Wikipedia.org, or see the following related articles:
Other bookmarking and sharing tools: | <urn:uuid:9a0839fc-4f6c-4e42-bed6-3270a326e6aa> | 4.21875 | 292 | Knowledge Article | Science & Tech. | 28.984495 |
A newly-published review of research in the journal Science this week looks at the effects of climate change on Arctic species. Rapid, widespread changes in the Arctic regions, the authors say, have been especially significant to species that depend on the ice for foraging, reproduction, and predator avoidance, such as the hooded seal, ringed seal, Pacific walrus, narwhal, and polar bear. In addition, species once confined to more southerly ranges now are moving northward, invading the upper Arctic zones.
"The Arctic as we know it may soon be a thing of the past," said Eric Post, one of the authors of the report. We'll talk to him about the findings.
Produced by Annette Heist, Senior Producer | <urn:uuid:8a05a851-a5d4-4503-9623-bbcb1329729b> | 3.234375 | 153 | Truncated | Science & Tech. | 42.358966 |
Date: Autumn 2012
Creator: Marshall, James L., 1940- & Marshall, Virginia R.
Description: Article describing the discovery of argon, helium, and other inert gases by Lord Rayleigh, Sir William Ramsay, and other collaborators. Ramsay also characterized the noble gases and classified them within the structure of the Periodic Table of Elements.
Contributing Partner: UNT College of Arts and Sciences | <urn:uuid:9274bd53-02f3-4c26-b7a1-65d74908402c> | 2.9375 | 83 | Content Listing | Science & Tech. | 33.127845 |
Monday, 25 October 2010 was the day that Wi-Fi Direct certification was released to mobile device manufacturers for the first time.
Wi-Fi Direct is a technology developed from the original Wi-Fi technology (it was developed by the Wi-Fi Alliance). This technology enables two mobile devices to communicate with each other easily and safely. It is also known as the "Bluetooth killer" because it can substitute for Bluetooth in every Bluetooth function. Moreover, it supports more applications and is more effective than Bluetooth technology, so Bluetooth may come to an end in the next few years.
Software testing, or simply testing, is an examination conducted to present the stakeholders with data regarding the quality of the service or product under test. This type of software testing also provides an independent, objective view of the software, permitting the business to understand the risks involved in implementing it.
Depending on the testing method employed, software testing can be put into practice at any point in the development process, although most of the testing effort occurs after the requirements have been defined and coding has been completed.
This article describes two types of testing methodologies.
1. Performance Testing
Performance testing is performed to determine how quickly a system or sub-system executes under a particular workload. It can also serve to verify and validate other quality traits of the system, such as reliability, scalability and resource usage.
In the field of Software Engineering, performance testing falls under the testing category which is carried out to determine how fast a particular aspect of the system under observation performs, given a fixed workload. This sort of testing is a subset of Performance Engineering (an emerging computer science practice built to design performance into the architecture and design of a system, prior to the actual coding effort). There are various purposes which performance testing can serve (a small illustrative sketch follows the list below):
It can compare two given systems and find out which one performs better.
It can demonstrate whether or not the system meets the required performance criteria.
It can identify which parts of the system under observation cause it to perform badly.
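To make the idea concrete, here is a hedged sketch using Python's standard-library timeit module to compare two implementations under the same fixed workload; the workload and both functions are invented for illustration and are not part of any real system mentioned above.

```python
# Toy performance comparison: same fixed workload, two implementations.
import timeit

workload = list(range(10_000))

def concat_with_join():
    return ",".join(str(x) for x in workload)

def concat_with_plus():
    out = ""
    for x in workload:
        out += str(x) + ","
    return out

print(timeit.timeit(concat_with_join, number=100))
print(timeit.timeit(concat_with_plus, number=100))  # usually the slower of the two
```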
2. Regression Testing
This type of testing is any software testing which seeks to uncover software errors by partially re-testing a modified program. The intention of this type of testing is to provide assurance that no additional errors were introduced during the process of fixing the existing problems. Regression testing is generally used to examine the system efficiently by systematically choosing the appropriate suite of tests required to sufficiently cover all of the affected changes.
Widespread methods of regression testing comprise rerunning previously run tests and scrutinizing whether previously fixed faults have re-emerged. One of the chief reasons for performing regression testing is that it's often tremendously difficult for a programmer to figure out how a change in one part of the software will reverberate in other parts of the software.
Ad hoc testing
This term is commonly used to represent software testing without any documentation or planning. The tests are intended to be run only once, unless a defect is discovered. This type of testing is a part of exploratory testing, in which the tester seeks to find errors by any means that seems appropriate.
Unit Testing
This type of testing is a software verification and validation method in which a programmer tests whether individual units of the system are fit for use. A unit is the smallest testable part of an application. In procedural programming, a unit might be an individual procedure or function.
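As a minimal illustration (the `add` function below is invented for the example and is not part of any real system under test), a unit test written with Python's built-in unittest module looks like this:

```python
# Minimal unit test sketch using Python's built-in unittest module.
import unittest

def add(a, b):
    """The 'unit' under test -- a deliberately trivial, made-up function."""
    return a + b

class TestAdd(unittest.TestCase):
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

if __name__ == "__main__":
    unittest.main()
```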
The advantages of performing Performance and Regression testing are:
Reusable: Can reuse tests on diverse versions of an application, even if the user interface alters
Repeatable: Can check how the software responds under repeated execution of the identical operations.
Programmable: Can program complicated tests that bring out concealed information from the application.
Cost Reduction: Cost is reduced since the amounts of resources for regression test are reduced.
Reliable: Tests carry out precisely the equivalent operations each time they are run, thereby eradicating human error.
Comprehensive: Can build a suite of tests that covers every feature in the application.
Better Quality Software: Can run additional tests in fewer time with less resources
Fast: Automated Tools run tests considerably faster as compared to the human users. | <urn:uuid:9bd88f1a-3ab1-4e86-a65d-7efa54ad8f96> | 2.75 | 884 | Personal Blog | Software Dev. | 26.776883 |
|CHOOSE A TOPIC
||A simple equation that will tell how far the planets are away from the Sun.
||Is it possible that we can see only 10% of all the matter in the universe?
||Laws of planetary motion that helped explain planetary orbits.
|Theory of Relativity
||Developed by Einstein, this theory was one of the most revolutionary physics discoveries of all time. | <urn:uuid:98742e7f-ca17-4f5c-9960-aa8c7d4a4bc2> | 3 | 85 | Content Listing | Science & Tech. | 45.099274 |
For the following information (paraphrased from Chapter 1, "A Mathematical and Historical Tour") and much more, see Robert Devaney, A First Course in Chaotic Dynamical Systems.
Chaos occurs in objects like quadratic equations when they are regarded as dynamical systems by treating simple mathematical operations like taking the square root, squaring, or cubing and repeating the same procedure over and over, using the output of the previous operation as the input for the next (iteration). This procedure generates a list of real or complex numbers that are changing as you proceed - a dynamic system.
For some types of functions, the set of numbers that yield chaotic or unpredictable behavior in the plane is called the Julia set after the French mathematician Gaston Julia, who first formulated many of the properties of these sets in the 1920s. These Julia sets are complicated even for quadratic equations. They are examples of fractals - sets which, when magnified over and over, always resemble the original image. The closer you look at a fractal, the more you see exactly the same object. Fractals naturally have a dimension that is not an integer - not 1 or 2, but often somewhere in between.
The black points in graphic representations of these sets are the non-chaotic points, representing values that under iteration eventually tend to cycle between three different points in the plane so that their dynamical behavior is predictable. Other points are points that "escape," tending to infinity under iteration. The boundary between these two points of behavior - the interface between the escaping and the cycling points - is the Julia set.
The totality of all possible Julia sets for quadratic functions is called the Mandelbrot set: a dictionary or picture book of all possible quadratic Julia sets. First viewed in 1980 by Benoit Mandelbrot and others, the Mandelbrot set completely characterizes the Julia sets of quadratic functions, and has been called one of the most intricate and beautiful objects in mathematics.
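To make the iteration concrete, here is a minimal escape-time sketch for the quadratic map z -> z*z + c, iterated from z = 0 as in the Mandelbrot-set test; the parameter values tried at the end are arbitrary examples, not special points discussed by Devaney.

```python
def orbit_escapes(c, max_iter=100):
    """Iterate z -> z*z + c starting from z = 0; report whether the orbit escapes."""
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return True           # escaping point: |z| can only keep growing from here
    return False                  # stayed bounded for max_iter steps (approximate test)

print(orbit_escapes(-1.0))        # False: 0 -> -1 -> 0 -> -1 ... cycles forever
print(orbit_escapes(0.5))         # True: the orbit blows up after a few iterations
```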
Home || The Math Library || Quick Reference || Search || Help | <urn:uuid:58a7d2d3-9dba-4d2f-89ce-6e00d6461fb7> | 3.59375 | 423 | Knowledge Article | Science & Tech. | 27.386736 |
News From the Field
Sea Temperatures Less Sensitive to CO2 13 Million Years Ago
June 6, 2012
In the modern global climate, higher levels of carbon dioxide (CO2) in the atmosphere are associated with rising ocean temperatures. But the seas were not always so sensitive to this CO2 "forcing," according to a new report. Around 5 to 13 million years ago, oceans were warmer than they are today--even though atmospheric CO2 concentrations were considerably lower.
San Francisco State University
The National Science Foundation (NSF) is an independent federal agency that supports fundamental research and education across all fields of science and engineering. In fiscal year (FY) 2012, its budget was $7.0 billion. NSF funds reach all 50 states through grants to nearly 2,000 colleges, universities and other institutions. Each year, NSF receives about 50,000 competitive requests for funding, and makes about 11,500 new funding awards. NSF also awards about $593 million in professional and service contracts yearly.
Awards Searches: http://www.nsf.gov/awardsearch/ | <urn:uuid:8eb8f7ca-d8ad-4a8b-9a4e-37d0f1253bcd> | 2.890625 | 310 | Content Listing | Science & Tech. | 65.185434 |
North Atlantic right whales, which live along North America's east coast from Nova Scotia to Florida, are one of the world's rarest large animals and are on the brink of extinction. Recent estimates put the population of North Atlantic right whales at approximately 350 to 550 animals.
According to a NOAA-led paper published in the journal Conservation Biology, high levels of background noise, mainly due to ships, have reduced the ability of critically endangered North Atlantic right whales to communicate with each other by about two-thirds.
An example of predicted received levels (71-224 Hz, dB re 1µPa, scale far right) produced by calling right whales, large commercial ships and wind-dependent background noise within the study area (boundaries of the sanctuary outlined in black), calculated every 10 minutes over a nine-hour period. (Photo courtesy NOAA and Cornell Lab of Ornithology)
From 2007 until 2010, a team of scientists used acoustic recorders to monitor noise levels, measure levels of sound associated with vessels, and to record distinctive sounds made by multiple species of endangered baleen whales, including “up-calls” made by right whales to maintain contact with each other. More than 22,000 right whale contact calls were documented as part of the study during April 2008.
Vessel-tracking data from the U.S. Coast Guard’s Automatic Identification System was used to calculate noise from vessels inside and outside of the Stellwagen Bank National Marine Sanctuary. By further comparing noise levels from commercial ships today with historically lower noise conditions nearly a half-century ago, the authors estimate that right whales have lost, on average, 63 to 67 percent of their communication space in the sanctuary and surrounding waters.
The authors suggest that the impacts of chronic and wide-ranging noise should be incorporated into comprehensive plans that seek to manage the cumulative effects of offshore human activities on marine species and their habitats.
A team of scientists from the Stellwagen Bank National Marine Sanctuary, Cornell Lab of Ornithology, NOAA Fisheries Northeast Fisheries Science Center, and Marine Acoustics Inc. were involved in the study. This study was funded under the National Oceanographic Partnership Program. The research was published in Conservation Biology. | <urn:uuid:74f8a36e-60ce-424f-bf1b-fb4ef4805ced> | 4.0625 | 450 | Knowledge Article | Science & Tech. | 28.899733 |
Global Warming Natural Cycle
Is global warming a natural cycle? Or is global warming affected by human influence? What does the science say? Both are true. In the natural cycle, the world can warm, and cool, without any human interference. For the past million years this has occurred over and over again at 100,000 year intervals. About 80-90,000 years of ice age with about 10-20,000 years of warm period.
The difference is that in the natural cycle CO2 lags behind the warming, because that warming is mainly due to the Milankovitch cycles. Now CO2 is leading the warming. The current warming is clearly not the natural cycle. The earth's natural cycles, if human industrial output had not been involved, would have us near or slightly below thermal equilibrium, possibly slightly cooling.
In other words, if we were in the natural cycle without human influence, the forcing levels would likely be around 0 W/m2 to -0.1 W/m2. We are currently experiencing a positive forcing of around 3.6 to 3.8 W/m2 and a human-induced negative forcing of around 2 W/m2. The resultant forcing, depending on current levels and the Schwabe solar cycle, is estimated at around +1.6 W/m2 above the natural cycle.
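To see where numbers of this size come from, the dominant CO2 contribution to the positive forcing can be estimated with the widely used simplified logarithmic fit from Myhre et al. (1998). The sketch below is only illustrative: the concentrations are assumed round figures, and the total positive forcing quoted above also includes other greenhouse gases beyond CO2.

```python
import math

def co2_forcing(c_now_ppm, c_preindustrial_ppm=280.0):
    """Approximate radiative forcing (W/m^2) from a CO2 concentration change,
    using the simplified fit dF = 5.35 * ln(C / C0) (Myhre et al., 1998)."""
    return 5.35 * math.log(c_now_ppm / c_preindustrial_ppm)

print(f"{co2_forcing(400.0):.1f} W/m2")   # ~1.9 W/m2 from CO2 alone, assuming ~400 ppm today
print(f"{co2_forcing(560.0):.1f} W/m2")   # ~3.7 W/m2 for a doubling of pre-industrial CO2
```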
Where are we currently in the Natural Cycle (Milankovitch Cycle)?
What is the current estimated influence of the Milankovitch cycle? Measured from the estimated forcing near the early Holocene peak about 10,000 years ago, which is the base point for estimating current changes in radiative forcing, the natural cycle should put us somewhere around 0.0 to -0.1 W/m2, including natural variability. Total positive human-induced forcing is currently estimated at approximately +3.6 W/m2, with about -2 W/m2 of human-induced negative forcing from aerosols, giving a net estimate on the mean of +1.6 W/m2.
The natural cycle is understood by examining the paleo records. The fact that the Earth goes in and out of ice ages distinctly outlines the natural cycles of Earth's climate, which occur about every 100,000 years. We are currently in a warm period. Generally, the Earth spends about 80,000-90,000 years in an ice age and around 10,000-20,000 years (or so) in a warm period.
The National Academy of Sciences, National Research Council, Board on Atmospheric Science and Climate Present 'Climate Change: Lines of Evidence - Natural Cycles'
The Natural Cycle - Climate Minute
Rapid Climate Change In The Natural Cycle
Holocene temperatures peaked around 8,000 years ago. This temperature peak was associated with the perihelion phase of the Milankovitch cycles, when the natural-cycle climate forcing, including associated climate feedbacks, is estimated to have been at its maximum. Since then the forcing levels have been slowly dropping, and temperature has followed that slope of forcing, in line with the changes in Milankovitch forcing combined with system feedbacks.
Recent significant changes in climate forcing due to human-caused factors have produced a net positive forcing, causing temperatures to rise. This is a departure from the natural cycle.
The current global mean temperature (GMT) is above the temperature peak associated with the forcing imposed on the climate system when we came out of the last ice age.
150 Thousand Years
450 Thousand Years
In the image below we can see that the cycle has been fairly regular for the past 450,000 years.
5 Million Years
The last 5 million years of climate history show us settling into our current 100,000-year cycles.
65 Million Years
The image below shows that the climate was much warmer prior to 7 million years ago. Here we can see the Eocene optimum and the PETM event, which is thought to have involved a methane hydrate (clathrate) release that caused a temperature spike.
542 Million Years
The past 542 million years of climate as it is currently understood.
There are many reasons for the dramatic temperature differences, and science is continually investigating to better understand past climate. One factor is Pangaea, when Earth's land mass was joined into a single continent about 250 million years ago. With that much more land facing the sun, it is easy to understand how such a configuration could produce a warmer world. But that is an oversimplification of the influences, and science is never entirely settled; it keeps being investigated further.
That does not mean there are not many things we understand fairly well or are even very certain of. Do not assume that because knowledge is not perfect, one cannot have a very strong understanding of what influences climate. For our recent history, the past one million years, we have a very strong understanding of climate influences.
Milutin Milankovitch calculated the cycles that influence the general climate of Earth in the early part of the 20th century. It was not until after his death that his hypothesis was confirmed by deep-ocean sediment core studies.
The Milankovitch Cycles
- Learn more about Milankovitch Cycles
These cycles increase and decrease the amount of solar forcing imposed on our climate system, and that causes the temperature to rise and fall with calculable regularity. The more time the Earth, or its land mass, spends closer to the sun (near perihelion), the more energy it receives, and thus it warms. The more time it spends farther from the sun (near aphelion), the less energy it receives, and the Earth cools.
- The 'eccentricity' cycle has a period of around 100,000 years. It causes the orbit of the Earth to elongate or become more elliptical. The more elliptical the orbit becomes, the less time during the year the planet spends near the sun, so it receives less solar energy and cools a bit.
- The 'obliquity' cycle changes the tilt of the Earth's axis over a period of about 41,000 years, causing the land mass of the northern hemisphere to face more, or less, directly toward the sun.
- The 'precession' cycle, the wobble of the polar axis, repeats about every 26,000 years. It also influences Earth's climate by making winters and summers warmer or colder depending on how much land surface is exposed more or less directly to the sun.
These are three main influences considered in the Milankovitch theory that regulate the general amount of energy received in our earth climate system. As we warm and cool, more or less of our natural greenhouse gases are released into the atmosphere, or stored in the oceans, ice and earth.
- Learn more about Solar Cycles
The sun also goes through cycles. Solar observations began in the time of Galileo. Even then they noticed that the number of sunspots changed and might be connected to climate.
This sunspot cycle was detected as early as the 5th century. NASA began taking satellite readings of the total solar irradiance in 1978.
Over the years, they refined their ability to calculate the total solar irradiance and we now have a very accurate measurement of the amount of energy that reaches our outer atmosphere.
A Change in the Atmospheric Composition
- Learn more about Atmospheric Composition
The main difference between the natural cycle and what is now called the anthropogenic cycle is that we have altered the atmospheric composition of greenhouse gases and therefore increased the climate forcing.
A Change in the Forcing Levels
- Learn more about Climate Forcing
The following image shows the last 800,000 years of temperature and forcing levels. Essentially, we have largely departed from the natural-cycle climate forcing.
- 12,000 Years: http://www.globalwarmingart.com/wiki/Image:Holocene_Temperature_Variations_Rev_png
- 150,000 Years: http://www.ncdc.noaa.gov/paleo/globalwarming/paleobefore.html
- 450,000 Years: http://eetd.lbl.gov/AQ/smenon/temperature-CO2.jpg
- 450,000 Years: http://eetd.lbl.gov/AQ/smenon/mainintro.htm | <urn:uuid:c36c5223-9eba-48df-b084-d07be3e69fcc> | 3.1875 | 1,681 | Knowledge Article | Science & Tech. | 48.476176 |
The reason we think a large redshift corresponds to a large distance is that this is what General Relativity predicts.
If you make a few simplifying assumptions about the universe, you can solve Einstein's equations for the universe as a whole to give a result called the FLRW metric. This predicts that the universe is expanding and that the redshift increases with distance.
However, we also have experimental evidence for the redshift-distance relation. For example, there is a type of supernova called SN1a whose intrinsic brightness we can calculate. Because we know what the brightness should be, we can measure the brightness as seen from Earth and use this to calculate how far away the supernova is. Then we can measure the redshift and test the redshift-distance relationship.
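As a rough illustration of that "standard candle" logic, the distance to a Type Ia supernova can be backed out of its known absolute magnitude and its measured apparent magnitude via the distance modulus. The numbers below are assumed example values, and the simple Hubble-law step at the end ignores the relativistic corrections needed at high redshift.

```python
H0 = 70.0  # assumed Hubble constant, km/s/Mpc

def luminosity_distance_mpc(apparent_mag, absolute_mag=-19.3):
    """Distance in megaparsecs from the distance modulus m - M = 5*log10(d / 10 pc).
    -19.3 is the commonly quoted peak absolute magnitude of a Type Ia supernova."""
    d_parsec = 10.0 ** ((apparent_mag - absolute_mag + 5.0) / 5.0)
    return d_parsec / 1.0e6

def expected_redshift(distance_mpc):
    """Low-redshift Hubble-law estimate z ~ v/c = H0*d/c (valid only for z << 1)."""
    c_km_s = 299792.458
    return H0 * distance_mpc / c_km_s

m_observed = 16.5                      # assumed apparent magnitude at peak brightness
d = luminosity_distance_mpc(m_observed)
print(f"distance ~ {d:.0f} Mpc, expected z ~ {expected_redshift(d):.3f}")
```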
The measurement of SN1a redshifts is how dark energy was detected, because if you do the experiment you find the supernovae are actually slightly farther away than their redshifts predict they should be.
If you ask a person on the street, “how do you go about looking for supersymmetry?”, that person will probably cross the street quickly. But if you ask this question on the street at CERN, the laboratory which runs the Large Hadron Collider (LHC), you’ll probably get an answer, of the following form: “Look for a surprising number of collisions with Jets and Missing Energy.”
This answer might cause you yourself to cross the street quickly. But it’s not so impenetrable; it just needs a little translation. What it means is this:
You should look for an unexpectedly large number of proton-proton collisions that show signs of both (a) quarks, anti-quarks or gluons (particles that are found inside protons and other hadrons) flying out of the collision with very high energy, as though shot from a gun (and creating sprays of particles, called “jets”), and (b) undetectable particles flying off invisibly, carrying off lots of momentum and energy.
The purpose of this article is to explain to you why people give this answer, and what its strengths and weaknesses are.
You may want to have read this article on supersymmetry first. It explains what supersymmetry is and what it predicts as far as new particles. Summarizing it briefly: for every type of particle that we know about in nature, supersymmetry requires one or two additional ones, typically referred to as superpartners by physicists, with similar properties but differing in one respect —
- If the particle we know about already is a boson, the superpartner is a fermion, and vice versa. (You can read here about bosons and fermions, if you like. Or you can ignore this point for now, it isn’t crucial for the current article.)
and for consistency with data, it must be that supersymmetry is hidden in a subtle way, leading to a second difference between particle and partner:
- the superpartner is more massive than the particle we already know about.
In the most popular forms of supersymmetry, the superpartner of each particle we know is just heavy enough to be out of reach of previous experiments but within reach of the Large Hadron Collider.
The reason many physicists believe it likely that the superpartners will lie in range of the Large Hadron Collider is that they believe that supersymmetry can serve as the solution to a puzzle known as the hierarchy problem. If the superpartners are much heavier, then the solution to the hierarchy problem must lie elsewhere.
Let us suppose for now that these physicists are right… why should we look for collisions that have multiple jets (i.e. signs of high-energy quarks/antiquarks/gluons) and lots of missing energy (i.e. signs of invisible particles?)
Where does the answer “Jets And Missing Energy” come from?
Well, let me first tell you what these physicists have in their minds, and then I’ll tell you how it got there.
This is what they are thinking:
Since the proton is made from quarks, anti-quarks and gluons, which are affected by the strong nuclear force, the easiest superpartners to produce in the proton-proton collisions at the LHC are their superpartners: squarks, anti-squarks and gluinos. For example (see Figures 2 and 3), in a proton-proton collision, two up-quarks might collide and form two up-squarks.
What would happen next? Like most particles, squarks would decay. Decay to what? In many variants of supersymmetry, squarks decay to a quark, plus another superpartner, called a neutralino (a mixture of the superpartners of the photon, Z, and Higgs particles.) The quarks carry a lot of energy and turn into jets, while the neutralinos zip through the detectors without leaving any observable signal. What we will see, therefore, is two high-energy jets, one from each of the two quarks, and signs that they are recoiling against something invisible and undetected.
The collision itself, with production and decay of the squarks, is shown in Fig. 3. The jets and neutralinos that rush out from the collision point are shown in Fig. 4. And what the detector actually detects — the only information that scientists get in their data — is Fig. 5.
The obvious imbalance, shown in Fig. 5, where a lot of stuff is heading up and to the right but nothing is seen heading down and to the left, is, for unfortunate historical reasons, called by the shorthand “missing energy”. (It is actually “missing momentum in the directions perpendicular to the colliding beams” — a mouthful that partly explains the use of shorthand.)
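In practice, that imbalance is computed by adding up the transverse momentum vectors of everything the detector did see and taking whatever is needed to balance the event. The snippet below is a toy version of that bookkeeping with invented jet momenta; real analyses use calibrated calorimeter and tracking information and include leptons and unclustered energy as well.

```python
import math

# Toy list of reconstructed jets: (pT in GeV, azimuthal angle phi in radians).
# These values are invented purely for illustration.
jets = [(250.0, 0.4), (180.0, 1.1)]

# Sum the visible transverse momentum components.
px = sum(pt * math.cos(phi) for pt, phi in jets)
py = sum(pt * math.sin(phi) for pt, phi in jets)

# The "missing" transverse momentum is whatever balances the visible sum.
met = math.hypot(px, py)
met_phi = math.atan2(-py, -px)

print(f"missing transverse momentum: {met:.0f} GeV, pointing at phi = {met_phi:.2f} rad")
```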
If instead it is gluinos that are produced in pairs, the situation is only slightly different. Typically each of the two gluinos decays to a quark, an anti-quark and a neutralino, so it is again true that the detectors will see jets (four, in this case) along with lots of missing “energy” from the two neutralinos.
That’s the vision passing through the heads of these physicists when they answer your question about how to find supersymmetry. To comprehend where it comes from requires us to explore the underlying assumptions that go into it.
The Assumptions that Lie Behind the Answer “Jets And Missing Energy”
And that’s the logical journey we’re going to undertake next — illustrated in Figure 6 down below. At the end of our tour, you’ll be able to judge for yourself, to a degree, the strengths and weaknesses of this answer to your original question.
The logic involves three main assumptions.
Assumption 1: we assume there is an additional principle of nature, not required by supersymmetry itself, which states that in any physical process, the number of superpartners can only change by an even number. (The technical name for this is conservation of R-parity, which I tell you not because the name matters but because you may see this term used elsewhere; and I’ll use it below.)
Why do theorists impose this criterion? Without Assumption 1 supersymmetry would predict new forces among matter particles, and typically they would make protons quickly decay. That would be in conflict with data. The proton is very stable (thankfully — even a rather slow rate of proton decay would kill us, and melt the earth, and so on…) You can take a vat full of a billion trillion trillion protons and wait for ten years, and you will not see a single proton decay. (Yes, people tried this! You need 180,000 tons of water.) So without Assumption 1, supersymmetry (and we) would be dead on arrival.
But if Assumption 1 is true — if R-parity is conserved — those new forces are forbidden. Supersymmetry plus R-parity conservation predicts a very, very long-lived proton, consistent (in favorable circumstances) with data.
Note that this requirement of the conservation of R-parity is not imposed because supersymmetry requires it, or on the basis of some theoretical principle. It is added because consistency with data requires it. That said, it is a perfectly reasonable requirement from the theoretical point of view.
Assumption 2: of all the superparticles in nature, the lightest one is a partner of a particle we’ve discovered already, or of a Higgs particle, and therefore it is one of the superpartners appearing in Figure 1, at the top of this webpage: a gluino, squark, charged slepton, sneutrino, chargino or neutralino.
One could certainly question this assumption. For one thing, if supersymmetry is true, then the graviton (the force carrier of gravity) has a superpartner too, called the gravitino — and I didn’t put that in Figure 1. How heavy is the gravitino? We don’t know. In some variants of supersymmetry it is about as heavy as the heaviest superpartners shown in Fig. 1, the squarks and gluinos. In other variants it is quite a bit lighter, and could even be lighter than an electron! That would violate Assumption 2.
Or there might be particles in nature that have rather small masses and that we don’t know about yet because they are very difficult to produce or detect — particles that are not affected by any of the three forces shown in Figure 1, the electromagnetic force or the weak and strong nuclear forces. Such particles are typically called “hidden”, because of how hard they are to produce even if they are very lightweight. (If there are several types of hidden particles, they are often collectively called a “hidden sector”.) Now, these particles have superpartners too, if supersymmetry is true — as mentioned in this article on supersymmetry, supersymmetry is really a symmetry of space and time, so any type of particle that travels in space and time must have superpartners. And if any of these superpartners are lighter than the lightest superpartner shown in Figure 1, well, then Assumption 2 is wrong.
Assumption 2 is not required by any experimental data. The best theoretical arguments against hidden particles suggest that nature is most likely to be simple and elegant, and since hidden particles are extra baggage, they are unlikely. (Whether you are convinced by this argument is a matter of taste.) And the best argument against a light gravitino is that stable gravitinos can cause various problems during the Big Bang. There’s another argument in favor of Assumption 2, having to do with the lightest superpartner being able to serve as the dark matter of the universe, but to understand it we’ll need to understand a few of the various consequences, so let’s hold off on that for the moment.
Assumption 3: the superpartners that feel the strong-nuclear force — squarks, anti-squarks, and gluinos — are on the heavy side, quite a bit heavier than the other superpartners, though not so heavy that they aren’t produced often at the LHC.
This is a squishier assumption than the last two — what exactly do heavy and often mean? Rather than discuss that here, let me simply say that in many, many variants of supersymmetry this turns out to be true. Theoretical calculations do show that in many different scenarios, those superpartners affected by the strong nuclear force end up heavier than most of those that aren’t. But again, it is not always the case.
Now what follows from these assumptions? There are a number of very important consequences that are important to recognize; you can use Figure 6 to keep track.
Assumption 1 has three crucial implications:
- If you start with no superpartners (as you do when two protons collide), and you produce superpartners in the collision, you must produce at least two of them. You can’t start with zero superpartners and end up with one.
- If you have a superpartner sitting around and it decays, there must be at least one superpartner (possibly three or five, but it turns out almost always just one) among the particles produced in the decay. You can’t start with one superpartner and end up with zero.
- The lightest superpartner cannot decay — it is a stable particle — because particles can only decay to particles that are lighter than they are, and for the lightest superpartner to decay would mean one superpartner would turn into zero superpartners.
Remarkable! Supersymmetry plus R-parity conservation implies an as-yet-undiscovered stable particle — the lightest superpartner (often written LSP). What properties could this particle have?
Suppose this type of particle were affected by the electromagnetic force or by the strong nuclear force. Then (i) many such particles would have been created in the early universe during the Big Bang; (ii) they would have altered the abundance of various elements, such as lithium, during the Big Bang, so that these abundances would be in conflict with what we observe today, and (iii) they would still be floating around the universe, with a few of them hitting the earth, creating exotic atoms that would long ago have been detected in careful searches for new and unusual atoms. Though this merits a longer discussion, the basic conclusion is that any new stable particle must be insensitive to the electromagnetic force and the strong nuclear force.
Given this, what does Assumption 2 imply? The lightest superpartner must be one of the sneutrinos or one of the neutralinos. All of the other superpartners (squarks, sleptons, charginos and gluinos) of the known particles are affected by the electromagnetic or strong nuclear forces. For technical reasons, most (but not all) particle physicists prefer models where a neutralino is the lightest superpartner. [It can make a good candidate for the particles of dark matter! which is a posteriori an argument in favor of Assumption 2.] But even if a sneutrino is the lightest, the argument in favor of looking for jets and missing energy remains roughly the same, with a few adjustments that I’ll skip here…
And finally, Assumption 3 is that it is easy to produce squarks and gluinos, and they are relatively heavy. That means that they blow up with a relatively large amount of energy; when they break apart, the energy and momentum carried by the quarks and neutralinos to which they decay is big. The resulting jets will carry high energy, and the missing energy will be large.
So now I hope you can understand the vision encapsulated in Figures 3, 4 and 5. If supersymmetry is right, the logic goes, we will produce heavy squarks and gluinos; they decay to high-energy quarks and neutralinos; the quarks show up as high-energy jets, which are easily detected, and the presence of the neutralinos, though undetected, is inferred from an imbalance in the momentum of the jets.
So ok, we look, and we either find or we don’t find. Then what?
And therefore, if we see an excess number of collision events with high-energy jets and missing energy, that’s great; maybe we’ve discovered supersymmetry. However, CAUTION-CAUTION-CAUTION-CAUTION: other types of new phenomena can create similar looking events — it will be some time, years probably, and a lot of hard work, before we would start to be confident as to whether we were looking at supersymmetry or something else new that just looks similar to supersymmetry at first glance. Just because you see something like Figure 5 doesn’t mean that it was produced by Figure 3!
But if we don’t see an excess of events like this, does that mean supersymmetry is definitely not a property of nature? Before we leap to draw strong existential conclusions about the universe by interpreting an experimental result, we should ask what might go wrong with the three assumptions I listed above (or a couple of other less central ones I didn’t mention here). I already told you some of the things that might go wrong, and though I won’t go into any more detail about them here, you can see for yourself that the only thing we’ll be able to conclude, if we don’t see events like this, is that
- either supersymmetry is not a property of nature,
- or supersymmetry is a property of nature but something is wrong with one of the three assumptions in Figure 6.
Don’t be surprised to see press articles and statements by some physicists that go a bit too far interpreting the results from the LHC. It takes time to build knowledge, and impatience often leads to mistakes. Personally I think we need to state our assumptions, and check them whenever possible, when making claims about the existence of supersymmetry, or any other essential properties of the universe.
A couple of other ways to look for supersymmetry
Before wrapping up, I should emphasize that although I focused on the most famous way to look for supersymmetry, there are a number of other ways to search that are well-known, and several of them are being pursued. I won’t explain here where they come from [I might do so later, stay tuned] but very common ones include searching for an excess of collisions that contain
- in addition to jets and missing energy, a charged lepton, a charged anti-lepton, or both; or
- even without jets and/or missing energy, a pair of charged leptons or a pair of charged anti-leptons (that is, two particles with the same electric charge), or perhaps even three or four charged leptons and anti-leptons.
The processes in the last set are so rare and remarkable that there are few other processes in nature that can imitate them, which is why, in trying to distinguish them from look-alike backgrounds, it may not even be necessary that there be high-energy jets and/or missing energy in the same collisions. For this reason they are less sensitive to the three assumptions in Figure 6. I’ll be keeping a close eye on searches for those processes. | <urn:uuid:7242ddf8-d84d-402d-8a70-13b653351c3c> | 3.171875 | 3,724 | Nonfiction Writing | Science & Tech. | 45.652197 |
skade88 writes "Wired has a good article that covers the origins of the white dwarf supernova Johannes Kepler observed in 1604. From the article: 'Up until now, it was unclear what led to the star's explosion. New Chandra data suggests that, at least in the case of Kepler's remnant, the white dwarf grabbed material from its companion star. The disk-shaped structure seen near the center suggests that the supernova explosion hit a ring of gas and dust that would have formed, like water circling a drain, as the white dwarf sucked material away from its neighbor. In addition, magnesium is not an element formed in great abundances during Type 1a supernovas, suggesting it came from the companion star. Whether or not Kepler's supernova is a typical case remains to be seen.'"
Scientific Name: Chrysemys picta
Hosta, Curt and Leonardo
Hosta came to Blandford in 2007 after a dog bite left her with a missing chunk of her shell, making it impossible for her to tuck safely into her shell when in danger. The other two came in with leg injuries from car accidents or attacks by predators in 2009 and 2011. Due to their leg injuries, these turtles would not be able to dig into the mud to hibernate during the winter.
Status of Painted Turtles in Michigan
Painted Turtles are the most common turtle species in Michigan.
Painted Turtles prefer bodies of shallow water with muddy bottoms. They can be found in ponds, lakes, marshes, and slow-moving rivers and streams.
Painted Turtles eat aquatic plants, insects, tadpoles, small fish, snails, crayfish, and carrion.
The Role of Painted Turtles in Our Ecosystems
Painted turtles help keep populations of small fish, crustaceans, and other invertebrates in check.
Threats to Painted Turtles
There are currently no serious threats to populations of this species.
-Hatchling Painted Turtles produce a type of natural antifreeze to survive the cold temperatures of the fall and winter season while hibernating.
-Painted Turtles can tolerate organic pollution better than some other turtle species. | <urn:uuid:0b4f89b8-46fc-4cf8-890d-8aaa9eb5d93b> | 3.53125 | 292 | Knowledge Article | Science & Tech. | 47.699359 |
MessageToEagle.com - Space is a challenging place. We think of it as mostly empty, but that is not completely true.
The vast sea of space in our solar system is filled with powerful radiation and bombarded with high-speed atomic particles.
In addition, the Sun generates a continuous stream of particles that we call the "solar wind."
The high energy radiation, the high energy particles, and the solar wind could prove dangerous to life here on Earth's surface.
Earth's planetary shield -- the Earth's magnetic field working together with our atmosphere -- protects us.
Every magnet generates a magnetic field. Several objects in our solar system also have their own massive magnetic fields: the Sun,
Earth, Mercury, Jupiter, Saturn, Uranus, and Neptune. The magnetic field around a planet that extends into space is called a magnetosphere.
The magnetospheres of the planets interact with the particles from the Sun -- the solar wind. Within the magnetosphere, charged
particles spiraling along the Earth's magnetic field toward the poles create beautiful aurorae, the northern and southern lights,
when they interact with our atmosphere.
Magnetic fields can also create hazards. Magnetospheres trap high energy particles into radiation belts around planets.
The distant gas giant planets do not need protection from the solar wind; instead, their powerful radiation belts create a serious hazard for spacecraft,
as do our own Van Allen radiation belts here on Earth.
Earth's magnetosphere does more than shield us from the constant barrage of high-energy particles. It also protects our atmosphere and oceans
from the solar wind, which would otherwise gradually erode them away into space.
Mars' lack of a magnetosphere may be partly responsible for the thinness of its atmosphere and absent oceans.
A magnetosphere on Venus could have prevented this planet's primordial water from escaping into space.
Magnetism is a force in nature that is produced by electric charges in motion. This movement can involve electrons 'spinning' around atomic nuclei,
flowing through a conducting wire or ions moving through space in an organized stream.
Earth's magnetic field is familiar to us through its effects: our compasses point to the magnetic poles (north and south); it protects our
atmosphere from the blast of the solar wind; and particles interact with it to produce the auroras, or northern and southern lights. Similarly,
the magnetic fields of Mercury, Jupiter, Saturn, Uranus, and Neptune are detectable with compasses, and we have seen beautiful auroras on Jupiter and Saturn!
This illustration shows a cloud of particles blasted from the Sun and impacting Earth to create an aurora. Credit: SOHO mission, NASA.
Planetary magnetic fields originate from processes deep in each planet's interior. Earth's is generated from the electric current
caused by the flow of molten metallic material within its outer core. Mercury's may be generated from its liquid core.
Jupiter's large magnetic field interacts with the solar wind to form an invisible magnetosphere.
If we were able to see Jupiter's magnetosphere, it would appear from Earth as in this artist's depiction, larger than the Moon in the sky. Credit: NASA.
Jupiter and Saturn are composed of gases crushed to such incredible pressures that they are forced beyond the common states of
liquid, solid, or gas that we find on Earth.
One such layer inside Jupiter and Saturn is metallic hydrogen, and the electric current caused by swirling movements in this
substance produces a magnetic field so large that the tail of Jupiter's magnetic field reaches the edge of Saturn's orbit!
Scientists map planetary magnetic fields with a more sophisticated version of a compass, called a magnetometer.
They also "listen" for the radio signals given off by charged particles as they move through the magnetic field, and measure
the properties of ions and electrons directly with particle detectors.
This science visualization shows a magnetospheric substorm, during which magnetic reconnection causes energy to be rapidly released
along the field lines in the magnetotail, the part of the magnetosphere that stretches out behind Earth. This released energy is focused
down at the poles, and the resulting flood of solar particles into the atmosphere causes the auroras at the North and South Poles.
The Sun has a very large and complex magnetic field. It actually extends far out into space, beyond the furthest planet.
The solar wind, the stream of charged particles that flows outward from the Sun, carries the Sun's magnetic field to the planets and beyond.
While the basic shape of the Sun's magnetic field is like the shape of Earth's field, with a north and south pole, superimposed on
this basic field is a much more complex series of local fields that vary over time.
Places where the Sun's magnetic field is especially strong are called active regions, and often produce sunspots. Disruptions in
magnetic fields near active regions can create energetic explosions on the Sun such as solar flares and coronal mass ejections.
The exact nature and source of the Sun's magnetic field are areas of ongoing research. Turbulent motions of charged plasmas in the
Sun's convective zone clearly play a role.
In spite of the low density, the solar wind and its accompanying magnetic fields are strong enough to interact with the planets
and their magnetic fields to shape magnetospheres. A magnetosphere is the region surrounding a planet where the planet's magnetic field dominates.
Because the ions in the solar plasma are charged, they interact with these magnetic fields, and solar wind particles are swept
around planetary magnetospheres, as are particles from the planet's atmosphere.
At Jupiter and Saturn, the plasma inside the magnetosphere is almost entirely from their moons. Robotic missions investigating
these worlds are challenged by the energetic charged particles that are trapped in these planets' magnetic fields as radiation belts.
Jupiter is a planet of superlatives: the most massive planet in the solar system, which also rotates the fastest, has the strongest magnetic field.
These unique properties lead to volcanoes on Io and a population of energetic particles trapped in the magnetic field that provides a physical link
between the satellites, particularly between Io, and the planet Jupiter. Every second, a ton of sulfurous gases is stripped from Io.
This process generates powerful electrical currents (1 million amps) that flow between Io and Jupiter.
The volcanic gases become ionized, trapped in Jupiter's magnetic field and make a vast doughnut that glows like a street light
(2 terawatts -- that's two million megawatts, or thousands of power stations)! Particles slam into Jupiter's atmosphere, creating auroras.
(Image credit: John Clarke & John Spencer)
The shape of the Earth's magnetosphere is the direct result of being blasted by solar wind. Solar wind compresses its sunward side to a
distance of only 6 to 10 times the radius of the Earth. Solar wind drags out the night-side magnetosphere to possibly 1,000 times
Earth's radius; this extension of the magnetosphere is known as the magnetotail. Many other planets in our solar system have
magnetospheres of similar, solar wind-influenced shapes.
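The "6 to 10 times the radius of the Earth" figure can be estimated from a simple pressure balance: the magnetopause sits roughly where the solar wind's dynamic pressure equals the magnetic pressure of Earth's dipole field. The sketch below uses typical assumed solar-wind values and ignores the compression factor a full Chapman-Ferraro treatment would include, so it should be read as an order-of-magnitude estimate only.

```python
import math

MU0 = 4.0e-7 * math.pi       # vacuum permeability, T*m/A
M_PROTON = 1.67e-27          # proton mass, kg
B_SURFACE_EQ = 3.1e-5        # Earth's equatorial surface field, T

def standoff_distance_re(n_per_cc=5.0, v_km_s=400.0):
    """Magnetopause standoff distance in Earth radii from the balance
    B(r)^2 / (2*mu0) = rho * v^2, with a dipole field B(r) = B0 * (Re/r)^3.
    Defaults are typical quiet solar-wind values (assumed): ~5 protons/cm^3 at ~400 km/s."""
    rho = n_per_cc * 1.0e6 * M_PROTON                 # mass density, kg/m^3
    dynamic_pressure = rho * (v_km_s * 1.0e3) ** 2
    magnetic_pressure_at_surface = B_SURFACE_EQ ** 2 / (2.0 * MU0)
    return (magnetic_pressure_at_surface / dynamic_pressure) ** (1.0 / 6.0)

print(f"quiet solar wind:        ~{standoff_distance_re():.1f} Earth radii")
print(f"fast, dense storm wind:  ~{standoff_distance_re(20.0, 700.0):.1f} Earth radii")
```

With quiet-time values the boundary comes out around 8 Earth radii, and a fast, dense storm wind pushes it inward, consistent with the 6-10 Earth radii range quoted above.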
This gorgeous view of the aurora was taken from the International Space Station as it crossed over the southern Indian Ocean on September 17, 2011.
The sped-up movie spans the time period from 12:22 to 12:45 PM ET. While aurora are often seen near the poles, this aurora appeared at lower latitudes
due to a geomagnetic storm – the insertion of energy into Earth's magnetic environment, the magnetosphere – caused by a coronal mass ejection
that erupted from the sun.
Given these critical roles, it is not surprising that several missions are actively investigating these planetary shields.
The ongoing MESSENGER mission is mapping out Mercury's magnetic field, as is Cassini at Saturn, and Juno is on its way to do the same at Jupiter.
The Solar Dynamics Observatory is also monitoring the Sun and its magnetic field to explore its impact on the near-Earth space environment.
MessageToEagle.com based on material provided by NASA
Super-Earth Discovered Orbiting Several Suns
Scientists at the University of Goettingen and the Carnegie Institution for Science in the U.S. Washington have discovered a potentially habitable planet,
located 22 light years away from Earth.
The super-Earth, named GJ 667Cc, has a mass four and a half times that of Earth and an orbital period of 28.15 days.
The planet GJ 667Cc orbits a class M dwarf star 22 light years away, which corresponds to approximately 209 trillion kilometers.
Astrophysicist Resolves Paradox With Radio Millisecond Pulsars
Celestial objects known as pulsars are still full of secrets. It takes time and much effort to learn them all. Previous studies reached
the paradoxical conclusion that some millisecond pulsars are even older than the universe itself. It was time to resolve this paradox.
Cosmic Vibrations From Neutron Stars
In the collision of neutron stars, the extremely compact remnants of evolved and collapsed stars, two lighter stars merge into one massive star.
The newly-born heavyweight vibrates, sending out characteristic waves in space-time. Model calculations at the Max Planck Institute for
Astrophysics now show how such signals can be used to determine the size of neutron stars and how we can learn more about the interior of these exotic objects.
Unusual Sounds From Space Reported Worldwide - What Are They?
For almost a year now people from different countries have reported hearing strange sound from the sky. Now scientists propose that what people are
hearing is only a small fraction of the actual power of these sounds!
What are these sounds? What is causing them? Are they in any way related to our Sun and the biggest solar flares? Do they come from Earth's inner core,
or can they be attributed to an unknown astronomical phenomenon? Are they in any way dangerous to our planet?
Unknown Force "Intelligently" Put Together Miranda Moon - with video
What could melt this moon in this extremely cold region of the solar system?
Voyager 2 passed Miranda’s strange world at a distance of only 3,000 kilometers (1,800 miles) and
sent back to Earth very detailed images of its "tortured" surface.
Nothing like them has been seen anywhere else in the solar system! Did a type III civilization conduct some "experiments" on Miranda? | <urn:uuid:5d72db4d-661c-4069-a76c-867f7908cf35> | 3.40625 | 2,192 | Content Listing | Science & Tech. | 41.100823 |
Limit to Impact Velocity
Name: Rafael C.
If there is no friction, will the terminal velocity of
a falling object reach a specific limit?
In other words, is there a chance (hypothetically) that a falling
object will never reach a terminal velocity if there is no drag present?
Yes. If there is no drag, the object will just keep accelerating.
If there is nothing to work against gravity, no air resistance or drag, then
the object will continue to accelerate until it collides with the ground.
So long as the object is close enough to the surface of the Earth, the
acceleration will be a constant 9.8m/s^2 throughout the fall. Gravity will
be the only force on the object, so gravity will determine the acceleration.
Terminal velocity only happens because drag increases with speed. As a
falling object speeds up, air resistance increases. If the fall lasts long
enough, the air resistance gets just as large as gravity, but in the
opposite direction. At this time, net force and acceleration are zero.
Velocity is constant.
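A quick numerical integration makes this concrete: with a drag force that grows with speed, the velocity levels off instead of growing forever. This sketch is not part of the original answer; the parameters (a roughly skydiver-sized mass and a quadratic drag constant) are assumed purely for illustration.

```python
# Simple Euler integration of dv/dt = g - (k/m) * v**2 for a falling object.
# Illustrative values: ~80 kg "skydiver"; k lumps air density, area and drag coefficient.
g = 9.8          # m/s^2
m = 80.0         # kg
k = 0.27         # kg/m, assumed quadratic drag constant
dt = 0.1         # s, time step

v = 0.0
for step in range(1, 601):               # simulate 60 seconds of fall
    v += (g - (k / m) * v * v) * dt
    if step % 100 == 0:
        print(f"t = {step*dt:4.0f} s   v = {v:6.1f} m/s")

print(f"analytic terminal velocity: {(m * g / k) ** 0.5:.1f} m/s")
```

The printed speeds climb quickly at first and then flatten out near the analytic terminal velocity of roughly 54 m/s; with the drag term removed, the same loop would just keep adding 9.8 m/s every second.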
Math, Science, Engineering
Illinois Central College
Sounds like you understand it correctly, Rafael.
With no friction there is no terminal velocity.
"Terminal velocity" is a effect of drag-producing fluid media such as air or water.
The drag force increases with speed until it is almost equal to the weight of the object,
so then there is almost no further acceleration, so the object falls at a steady speed
from then on.
In a vacuum there is no drag, let alone drag which increases with speed.
So in a vacuum there is no "terminal velocity" effect.
Our moon has no atmosphere. High vacuum right down to the surface.
A feather dropped on the moon from 100 miles above the surface would keep on accelerating at
1/6 of earth-gravity (earth: 9.8 m/sec2 or 32 ft/sec/sec or 22 mph/sec; moon: 1.6 m/sec2
or 5.3ft/sec/sec or 3.6mph/sec)
until it hit the surface at about 1600 miles per hour.
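That 1600 mph figure is easy to check with the constant-acceleration relation v = sqrt(2*g*h), since there is no drag to complicate things. The sketch below simply plugs in lunar gravity and a 100-mile drop, treating g as constant over the whole fall (a mild approximation at that altitude).

```python
import math

G_MOON = 1.6                    # m/s^2, approximate lunar surface gravity
drop_height_m = 100 * 1609.34   # 100 miles expressed in metres

impact_speed = math.sqrt(2 * G_MOON * drop_height_m)   # m/s, from v^2 = 2*g*h
impact_mph = impact_speed / 0.44704                     # convert m/s to mph
fall_time = impact_speed / G_MOON                       # s, from v = g*t

print(f"impact speed: {impact_speed:.0f} m/s  (~{impact_mph:.0f} mph)")
print(f"time to fall: {fall_time/60:.1f} minutes")
```

The result comes out near 1600 mph after a fall of roughly seven and a half minutes.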
I do not think that is quite enough to vaporize the feather, so maybe we should drop
it from a little higher than that.
If a black hole has no matter near it to provide gas, it can be surrounded by high vacuum,
and an unlikely object falling straight "down" into it should reach light speed at the
event horizon radius.
If you wish to try thinking of light speed as the terminal velocity of empty space,
I think you will gain some perspective from that.
As an object approaches light speed relative to an observer, it requires increasingly
more energy to accelerate it 1 mph faster.
A bit like air drag requires steeply increasing force as velocity increases.
But the energy used pushing air drag is lost, while the energy used to push a mass
faster in a vacuum is stored as kinetic energy,
and is all available to be given back.
A high-speed object in a vacuum can coast forever, while an object in air slows down
when the pushing stops.
In the absence of friction but the presence of a gravitational field the
object will continue to accelerate. It should be noted, however, that
there is no such thing as a perfect vacuum, so there will always be some drag.
Update: June 2012 | <urn:uuid:5ff4d805-49fd-4d56-a50b-172c4fec0de7> | 3.453125 | 728 | Q&A Forum | Science & Tech. | 58.423808 |
|Dec3-11, 08:22 PM||#1|
Angle between tangent and foci. (Involve vector calculus)
1. The problem statement, all variables and given/known data
See the photo on the left
2. Relevant equations
3. The attempt at a solution
|Dec4-11, 06:20 AM||#2|
You must show some work of your own or this thread will be deleted.
|Dec4-11, 09:01 AM||#3|
In fact, I typed my work on the computer; the photo I have shown is already my work.
That is not a solution, it is my attempt at a solution. So it may be wrong.
Three different classes of geoengineering identified by the American Meteorological Society and the Royal Society have very different risks and time scales and would play very different roles in a climate strategy.
Climate Remediation Technologies
Climate remediation is similar in concept to cleaning up contamination in our water or soil. The first problem is to stop polluting (mitigation) and the second is to remove the previously emitted contaminants (remediation) and put them somewhere—for example, filter CO2 out of the air and pump it underground.
Climate remediation technologies are, with some exceptions, relatively safe and noncontroversial and have relatively few governance issues. They address the root cause of the problem, but these methods work slowly. It would take years if not decades to reduce the concentration of CO2 in the atmosphere through air capture and sequestration. These technologies are also expensive when compared to the option of not emitting CO2 in the first place. However, as we try to bring emissions close to zero, it will likely remain difficult to operate heavy-duty transportation without liquid hydrocarbons. If the sustainable supply of biofuels can’t match this demand, a choice may be to continue the use of fossil fuels and offset the resulting emissions by removing CO2 from the atmosphere. As well, we may decide that the atmospheric concentration must be brought down below stabilized levels, perhaps even below 350 parts per million (ppm). If we don’t want to wait many hundreds of years for this to happen through natural processes, we may have to actively remove greenhouse gases from the atmosphere.
Air capture technologies are closely related to carbon capture and storage (CCS) technologies. For CCS, we are contemplating separating out CO2 after coal combustion and then pumping it deep underground into abandoned oil or gas fields or saline aquifers. The technologies for removing CO2 from the air (air capture) and from flue gas are similar. After capturing the CO2, it has to be put somewhere isolated from the atmosphere. Currently, we are considering geologic disposal: pumping the CO2 deep underground. The implementer must obtain rights to the underground pore space and be able to assign liability for accidents, leakage, and so on. These same issues exist for storage of CO2 in a CCS project. However, Keeling has suggested that the amount of CO2 we may need to remove from the atmosphere is such that we will have to consider disposal in the deep ocean as a form of environmental triage.1 Ocean sequestration would clearly involve much more serious governance issues.
Beyond air capture, the Royal Society report on geoengineering lists a number of other carbon-removal technologies, including augmentation of natural geologic weathering processes and biological methods such as reforestation.2 Of these, biological methods, which include genetically modified organisms (GMOs) and ocean iron fertilization, would have governance issues similar to climate intervention discussed below.
The purpose of climate intervention is to modify the energy balance of the atmosphere in order to restore a prior radiation balance. Climate intervention has also been called solar radiation management (SRM), or sunblock technology, and some consider these technologies to be a radical form of adaptation. If we can’t find a way to live with the altered climate, we intervene to roll back the impact.
Volcanic eruptions, which emit massive amounts of sulfates that reflect sunlight, cause colder temperatures for months afterward. Such rapid global temperature decline can be simulated in climate models by changing the global heat balance. This is evidence that climate interventions that change the radiation balance of the Earth could be effective at reducing global temperatures.
Climate intervention techniques to change the radiation balance are amazingly inexpensive, especially compared to mitigation, and all are relatively fast acting. For example, just three grams of sulfur aerosols can offset the warming of a ton of CO2.3 Some methods could lower temperatures within months of implementation, but they do not “solve” the problem in that they do nothing to reduce the excess greenhouse gases in the atmosphere. So, if we reflect more sunlight and don’t reduce CO2 in the atmosphere, the oceans will continue to acidify, severely stressing the ocean ecosystems that support life on Earth. And if we keep adding CO2 to the atmosphere, we will eventually overwhelm our capacity to do anything about it with geoengineering. So, climate intervention can’t be a standalone solution. It is, at best, only a part of an overall strategy to reduce atmospheric concentrations of greenhouse gases and adapt to the unavoidable climate change coming down the pike.
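To get a feel for why that number makes climate intervention look so cheap, the sketch below scales the three-grams-per-ton figure up to a year of global emissions. The emissions total and the per-tonne delivery cost are assumed round numbers for illustration, not figures from this article.

```python
# Back-of-envelope scaling of the "3 g of sulfate aerosol offsets 1 t of CO2" figure.
grams_sulfur_per_tonne_co2 = 3.0
annual_co2_emissions_tonnes = 36e9        # assumed ~36 Gt CO2/year, a rough global total

sulfur_needed_tonnes = annual_co2_emissions_tonnes * grams_sulfur_per_tonne_co2 / 1e6
print(f"sulfur to offset one year of emissions: ~{sulfur_needed_tonnes/1e3:.0f} thousand tonnes")

# Assumed delivery cost of a few thousand dollars per tonne lofted to the stratosphere;
# even generous assumptions keep the total far below typical mitigation costs.
assumed_cost_per_tonne_usd = 3000.0
annual_cost_billion = sulfur_needed_tonnes * assumed_cost_per_tonne_usd / 1e9
print(f"rough annual delivery cost: ~${annual_cost_billion:.1f} billion")
```

Even with these rough numbers, the annual tonnage and cost are tiny compared with the scale of the emissions being offset, which is exactly why the cheapness of intervention worries as well as attracts people.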
There are ideas for putting reflectors in space and increasing the reflectance of the oceans, land, or atmosphere. Some propose global interventions such as injection of aerosols (sulfate particles or engineered particles) in the stratosphere, and the Novim report spells out the required technical research for this concept in some detail.4 Others propose more regional or local interventions, such as injecting aerosols in the Arctic atmosphere only in the summer to prevent the ice from melting.5 Even more local, and perhaps most benign, is the idea of painting rooftops and roadways white to reflect heat.
Climate intervention has some serious drawbacks. It may be difficult to predict exactly how the weather patterns will change. The intervention may cause the climate in some parts of the Earth to improve and in others to become worse. It will be very difficult to determine whether these deleterious conditions arise simply from climate variability or are due to the intentional intervention. Climate model simulations have shown that if we were to suddenly stop a global intervention, the global mean temperature would quickly return to the trajectory it was following before the intervention. This means that temperatures could increase very rapidly upon cessation of an intervention, which would likely be devastating. Ironically, then, climate intervention may only provide temporary respite but would be difficult to stop. These drawbacks mean that climate interventions are unlikely to be deployed until or unless we have strong evidence that the risks of climate change plus climate intervention are less than the risks of climate change alone.
Climate intervention might nevertheless be an effective part of an overall climate strategy. We already emit millions of tons of aerosols in the form of air pollution, which is estimated by the Intergovernmental Panel on Climate Change (IPCC) to be masking roughly one half of the global warming that would otherwise be caused by today’s atmospheric greenhouse gas concentrations.6 So, as we clean up air pollution to protect human health, or stop emitting air pollution as we shut down coal-fired electricity generation in mitigation efforts, we will also cause a significant increase in short-term warming. (Long-term warming remains largely a function of the CO2 concentration.) We may want to offset this additional warming by injecting aerosols into the stratosphere, where they are even more effective at reflecting radiation. This plan might cause much less acid rain and improve human health impacts, compared to power plant and automobile emissions, while continuing to mask undesirable warming. It is possible that the “drug” of aerosol injection could be a type of “methadone” as we withdraw from fossil fuels.
The “Catch-All” Category
The catch-all category includes technologies to manage heat flows in the ocean or actions to prevent a massive release of methane in the melting Arctic. These technologies are not as well understood or developed, but the classification recognizes that not all the ideas are in. Also, we may need to address some very specific global- or subglobal-scale emergencies caused by climate change.
For example, recent studies have shown that vast amounts of methane, a powerful greenhouse gas, are leaking from the Arctic Ocean floor. Abrupt increases in methane emissions have been implicated in mass extinctions observed in the geologic record and could trigger runaway climate change. (It is the possibility of such runaway climate change that most clearly supports the need for geoengineering research.) Jamais Cascio recently proposed deploying genetically engineered methanotrophic bacteria (bacteria that eat methane) at the East Siberian Arctic Shelf. Is this possible? What are the risks? Could the release of genetically modified methanotrophic organisms cause problems in Arctic ecosystems? This may be an idea with merit—or it may be a very stupid idea. A geoengineering research program should include funding to freely explore theoretical ideas and perform the modeling and laboratory studies to distinguish between concepts that are worthy of more work and those that are completely impractical or too dangerous.
There should also be a “top-down” research program that examines potential emergencies that could result from climate change and then attempts to design interventions for these specific situations. Although higher temperatures will be a serious problem, other impacts of climate change might be more critical. Volcanic eruptions have effects that are similar to the effects of aerosol injection. Although these reduce temperature, they also reduce precipitation. Reducing precipitation would clearly be a bad thing to do. By looking only at what we know how to do (reduce temperatures) versus what problem we want to solve (increase water supply), we could be making conditions worse. Geoengineering research should not just be structured around the “hammers” we know about. We should also collect the most important “nails” and see if we can design the right hammer.
Thus, we might try to develop methods that directly attack specific, important climate impacts. Can we conceive of a way to control the onset, intensity, or duration of monsoons to ensure successful crops in India? Can we conceive of a way to stop methane burps or hold back melting glaciers? Some part of a geoengineering research program should take stock of the possible climate emergencies and then look for ideas that would ameliorate these problems.
None of these geoengineering technologies should be considered a standalone solution; some or all of these could be integrated into a comprehensive climate-change strategy that starts with mitigation. Such a comprehensive strategy might include:
- A steady, but aggressive transformation of the global energy system to eliminate emissions with concurrent elimination of air pollution in a few decades (mitigation).
- Carbon removal over perhaps 50 to 100 years to return to a safer greenhouse gas concentration (climate remediation).
- Time-limited climate intervention to counteract prior emissions and reductions in air pollution, tapering off until greenhouse gases fall to a “safe” level (climate intervention).
- Specific, focused actions—such as preventing methane burps or melting Arctic ice—to reverse regional climate impacts (technologies from the “catch-all” category).
- Keeling, RF. Triage in the greenhouse. Nature Geoscience 2, 820-822 (2009).
- Shepard, J et al. Geoengineering the Climate: Science, Governance and Uncertainty (London, Royal Society, 2009).
- Keith, D. Geoengineering research. Presentation to National Academy of Sciences America's Climate Choices conference (June 2009) [online]. people.ucalgary.ca/~keith/Misc/Keith_NASJune2009.pdf.
- Blackstock, JJ et al. Climate Engineering Responses to Climate Emergencies (Novim, Santa Barbara, CA, 2009) [online]. arxiv.org/pdf/0907.5140.
- MacCracken, M. On the possible use of geoengineering to moderate specific climate change impacts. Environmental Research Letters (2009).
- Climate Change 2007: The Physical Science Basis. IPCC Fourth Assessment Report (Cambridge University Press, Cambridge, UK, 2007) | <urn:uuid:a38de745-d7b2-45a4-bcda-a95bfff884a1> | 3.578125 | 2,370 | Academic Writing | Science & Tech. | 27.298319 |
Studies in Plant Physiology
Primary plant productivity sustains life on Earth and is a natural process for regulating atmospheric carbon dioxide (CO2) concentration. Many of the main features of plant growth and development can be modeled by treating plants as organisms composed of modular morphological subunits with physiological functionality, e.g., leaf, shoot and root. Each subunit performs a specialized function for the plant, thus it can be studied in isolation. The integration of the subunits to sustain the entire living organism is provided by physiological vascular and biochemical regulatory networks. The vascular networks provide the connectivity for distribution of nutrients (carbon, nitrogen and minerals) and other substances throughout the plant. The regulatory networks control the rates of biochemical processes and influence the allocation of substances. For this reason the most developed vascular networks are in the path between sugar producing sites and locations of high growth priority, i.e., sinks.
The overall growth rate and development of plants depend on the availability of exogenous resources. When under prevailing stressful environmental conditions, plants modify their growth and development patterns to increase acquisition of the limiting resources. Such morphological adjustments are mediated by responses of the regulatory network to environmental conditions. The mechanisms that sense environmental conditions and control the allocation of resources (e.g., carbon and nitrogen) in plants are poorly understood. The main goals of the research are:
1) To identify and measure the properties of shifts in the allocation of carbon (sugars) and nitrogen due to rapid changes in environmental conditions; and
2) To measure the physical parameters in existing plant physiology models and also develop new models for substance translocation and allocation, e.g., phloem loading and unloading.
Left: Photograph of the PhytoBeta detector imager next to the spicebush (Lindera benzoin). The CO2 gas cuvette (A) is shown near the PhytoBeta detector (B) prior to clamping the cuvette on the apical portion of a leaf. Right: Photograph of a leaf with the imaged distribution of 11C overlaid. The imager head abuts the labeling cuvette on the leaf (not shown).
Concept and Experimental Setup
Our approach is to identify cause and effect relationships between changes in environmental conditions and physiological responses that result in shifts in the allocation of carbon and/or nitrogen. Once an effect is observed, the next step is to measure physical quantities associated with resource allocations, e.g., fractional distribution of carbon and/or nitrogen from a source to the sinks and translocation times, and to determine the time scale (e.g., seconds, minutes or hours) for the observed physiological response. Because the details of the responses to changes in environmental conditions depend on the development stage of plants and varies with species, we plan to study a variety of grass and tree species at different stages of development. In addition to learning about the particular species studied, this work should provide insights about growth resource control mechanisms that are applicable to a broad range of plant species.
The measurement technique used in this work is radiotracing with short-lived isotopes that decay by positron emission, combined with direct positron imaging. The isotopes are produced in the tandem laboratory at TUNL, and the labeling measurements are carried out at the Phytotron facility, which is located about 100 m from the target area where the isotopes are produced. The Phytotron is a controlled environment facility for plant research that is operated by the Duke Biology Department. It has 45 growth chambers with environmental controls, e.g., light intensity, atmospheric CO2 concentration, nutrients, rooting medium, and temperature. One chamber is dedicated to this project. We have demonstrated the capability of producing several isotopes in chemical form that can be used for plant studies. These include 11C-tagged CO2, 13N-tagged NO3 in aqueous solution, 19F ions in aqueous solution, and 15O-tagged water.
Experimental Setup: The experimental setup comprises the production of radioisotopes, the transport of these isotopes to the growth chamber, their delivery to various parts of the plant, and imaging of the gamma rays from positron annihilation or direct positron imaging. | <urn:uuid:e53f4903-1341-44c2-a206-3811e77f0157> | 3.03125 | 854 | Academic Writing | Science & Tech. | 26.92999 |
"The most beautiful thing we can experience is the mysterious." - Albert Einstein
Posted 04 July 2012 - 08:56 AM
Hadron Collider scientists have confirmed that they have discovered the elusive Higgs Boson particle.
Scientists say it is a 5 sigma result, which means they are 99.999% sure they have found a new particle. Finding the Higgs plugs a gaping hole in the Standard Model, the theory that describes all the particles, forces and interactions that make up the universe.
Is there any technology that we can gain from this?
I don't see it in the foreseeable future. Not all discoveries lead to consumer products either. Some lead to greater understanding and help us refine our models, hypotheses, and theories. Some practical device might come, but it's hard to tell when. Quantum Mechanics, for instance, got its beginnings in the 1920s, but a consumer product which used its principles didn't show up until the 1980s (the cell phone). Same goes for General Relativity introduced in 1916 (as far as a practical product in the home). Maybe in 2072 we'll have some product such as an anti-gravity skateboard (waaaay behind the back to the future timeline).
I can hear the nerd-gasms happening all around the world as we speak.
I did. Though I didn't think they would find it (I know it's not confirmed by enough sources yet but looks pretty solid). I thought they found something similar, but not THE Higgs boson. In terms of rejoicing, you can say that followers of the standard model are doing that. You gotta hand it to Higgs and his team, they predicted this 60 years ago and it came to pass.
Anyone else notice they're not 100% sure they found it??
Science doesn't deal in absolutes. The current number is 99.99994% certain it's not an error. If a scientist was asked what the chances are that humans exist, they wouldn't answer 100%. Absolutes are a matter of Religion.
Now they need more experiments at the energy levels that yielded the boson and figure out more about it (such as the spin). All the data is open for the world to inspect, analyze and see if there are different conclusions based on it. Extremely unlikely though. | <urn:uuid:8214e58e-25fc-4e3e-841f-b7b1684f2f6f> | 2.75 | 471 | Comment Section | Science & Tech. | 65.64343 |
By studying fossilized grains of pollen, researchers have reconstructed the climate history of the Antarctic Peninsula, which gave up its vegetation about 12 million years ago. Scientists are studying the region because it has warmed significantly in recent decades.
The rapid decline of glaciers along the peninsula has led to widespread speculation about how the rest of the continent’s ice sheets will react to rising global temperatures.
“The best way to predict future changes in the behavior of Antarctic ice sheets and their influence on climate is to understand their past,” says John Anderson, marine geologist at Rice University and the study’s lead author.
(Photo credit: Sophie Warny)
Full story at Futurity. | <urn:uuid:14dd02d0-341d-47c8-b7ba-553b9ce6f22b> | 3.921875 | 142 | Truncated | Science & Tech. | 27.108789 |
SUMMARY PARAGRAPH for RPL24A
About yeast ribosomes...
Ribosomes are highly conserved large ribonucleoprotein (RNP) particles, consisting in yeast of a large 60S subunit and a small 40S subunit, that perform protein synthesis. Yeast ribosomes contain one copy each of four ribosomal RNAs (5S, 5.8S, 18S, and 25S; produced in two separate transcripts encoded within the rDNA repeat present as hundreds of copies on Chromosome 12) and 78 different ribosomal proteins (r-proteins), which are encoded by 137 different genes scattered about the genome, 59 of which are duplicated (6). The 60S subunit contains 42 proteins and three RNA molecules: 25S RNA of 3392 nt, hydrogen bonded to the 5.8S RNA of 158 nt and associated with the 5S RNA of 121 nt. The 40S subunit has a single 18S RNA of 1798 nt and 32 proteins (7). All yeast ribosomal proteins have a mammalian homolog (8).
In a rapidly growing yeast cell, 60% of total transcription is devoted to ribosomal RNA, and 50% of RNA polymerase II transcription and 90% of mRNA splicing are devoted to the production of mRNAs for r-proteins. Coordinate regulation of the rRNA genes and 137 r-protein genes is affected by nutritional cues and a number of signal transduction pathways that can abruptly induce or silence the ribosomal genes, whose transcripts have naturally short lifetimes, leading to major implications for the expression of other genes as well (9, 10, 11). The expression of some r-protein genes is influenced by Abf1p (12), and most are directly induced by binding of Rap1p to their promoters, which excludes nucleosomes and recruits Fhl1p and Ifh1p to drive transcription (13).
Ribosome assembly is a complex process, with different steps occurring in different parts of the cell. Ribosomal protein genes are transcribed in the nucleus, and the mRNA is transported to the cytoplasm for translation. The newly synthesized r-proteins then enter the nucleus and associate in the nucleolus with the two rRNA transcripts, one of which is methylated and pseudouridylated (view sites of modifications), and then cleaved into three individual rRNAs (18S, 5.8S, and 25S) as part of the assembly process (6). Separate ribosomal subunits are then transported from the nucleolus to the cytoplasm where they assemble into mature ribosomes before functioning in translation (14, 15). Blockage of subunit assembly, such as due to inhibition of rRNA synthesis or processing, results in degradation of newly synthesized r-proteins (16, 15). (For more information on the early steps of rRNA processing and small ribosomal subunit assembly, see the summary paragraph for the U3 snoRNA, encoded by snR17A and snR17B.)
Last updated: 2007-02-15 | <urn:uuid:0d312374-8428-41d6-a7d7-003843dd6a46> | 2.921875 | 663 | Academic Writing | Science & Tech. | 46.117459 |
Inelastic X-ray Scattering Reveals Microscopic Transport Properties of Molten Aluminum Oxide
The transport properties of high-temperature oxide melts are of considerable interest for a variety of applications, including modeling the Earth's mantle, optimizing aluminum production, confining nuclear waste, and investigating the use of aluminum in aerospace propulsion. Information on melt viscosities and the speed of sound through liquid oxides is essential for testing the validity of theories used in predicting the geophysical behavior of the Earth's mantle as well as mathematical models of aluminum combustion. The experimental techniques described here supply fundamental insights into the behavior of liquid oxides that help provide a basis for these and other advanced applications.
Kinematic restrictions on neutron scattering make it impossible to reach acoustic modes in liquid oxides, and the high-temperature regime is inaccessible by light scattering because of black body radiation. Another factor making it difficult to obtain data on microscopic transport properties with conventional techniques is the chemical reactivity of oxide melts at high temperatures. Aluminum oxide, for example, melts at about 2327K and is an exceptionally aggressive material in the liquid state, which precludes the use of traditional containers for the material while measurements are being made.
Researchers from three French research centers, Centre de la Recherche sur la Matière Divisée, Institut de Science et de Génie des Materiaux et Procédés, and Centre de la Recherche sur la Matériaux à Haute Température; Spain's University of the Basque Country; and from Argonne sought to circumvent these limitations by studying molten aluminum oxide using high-resolution inelastic x-ray scattering (IXS) at X-ray Operations and Research beamline 3-ID-C at the APS. The measurements were performed in a containerless environment. Aluminum oxide spheres 3 to 4 mm in diameter were suspended in an oxygen gas jet and heated with a 270-W CO2 laser beam to temperatures between 2300 and 3100K. A carefully adjusted gas flow through a conical nozzle maintained the levitated sample at a position that was stable to within ±20 µm above the plane of the top edge of the nozzle, allowing a clear path for the incident and diffracted x-ray beams. Sample temperature was measured by a pyrometer directed at the point illuminated by the x-ray beam.
The x-ray energy was 21.657 keV, and the energy resolution was determined to be 1.8 meV, full width at half maximum. Data were collected at six values of the wave vector Q (over the energy transfer range of -30 to 60 meV), covering the Q value range of 1.09 to 6.09 nm^-1, and over a more restricted range (-30 to 30 meV), covering values of Q out to 28 nm^-1.
Fig. 1. Excitation spectra in liquid aluminum oxide at 2323K measured with inelastic x-ray scattering for different Q.
Fig. 2. Dispersion of the sound modes (Ωs) and damping (Γs) at 2323K.
The excitation spectra showed a well-defined triplet peak structure at lower Q values (1 to 6 nm^-1) (Fig. 1) and a single quasi-elastic peak at higher Q. The high-Q spectra were well described by kinetic theory, but the low-Q spectra diverged significantly from predictions based on hydrodynamic theory. When the IXS spectra measured at the lowest six wave vectors at 2323K were fit to the hydrodynamic equation for the scattering function, it was found unexpectedly that the Brillouin modes remained underdamped out to Q = 6 nm^-1 (Fig. 2). Furthermore, the observed Brillouin line width increased with temperature, whereas it was expected that the viscosity would decrease in the hydrodynamic limit. Other discrepancies were discovered as well. The researchers found that an extension of hydrodynamic theory that allows for frequency dependence of the transport coefficients, such as the viscosity, yielded reasonable fits to the low-Q data up to 6.09 nm^-1. This provides a description of the liquid dynamics in the relatively uncharted regions between hydrodynamics and kinetic theory (Fig. 3).
Fig. 3. Different dynamical regimes for liquids as a function of the values of the products ωτ and Qσ, where Q represents the wave vector, ω the frequency, τ the relaxation time, and σ the interparticle distance.
See: H. Sinn, B. Glorieux, L. Hennet, A. Alatas, M. Hu, E.E. Alp, F.J. Bermejo, D.L. Price, and M.-L. Saboungi, Science 299, 2047-2049 (28 March 2003).
This work, and use of the Advanced Photon Source, was supported by the U.S. Department of Energy, Office of Science, under contract W-31-109-ENG-38; the CNRS, and the Universitié d' Orléans. | <urn:uuid:8ac9245d-946c-4b3b-a9c4-f0838f2a59ce> | 2.984375 | 1,062 | Academic Writing | Science & Tech. | 48.703665 |
In the constellation Sagittarius
4.1 million times the mass of the Sun
Diameter roughly 15 million miles (24 million km).
The supermassive black hole at the center of our Milky Way galaxy lies behind dense clouds of gas and dust. The center of the galaxy is a little above the "spout" of the teapot-shaped constellation Sagittarius at the upper right of this photograph. It's about 27,000 light-years away. Recent observations show that the black hole occasionally produces flares of energy, perhaps caused by its complex magnetic field. [Akira Fujii]
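As a rough consistency check (not part of the original listing), the quoted diameter is close to the Schwarzschild diameter implied by the quoted mass. The short Python sketch below assumes standard physical constants and the 4.1-million-solar-mass figure given above.

    # Schwarzschild diameter d = 2 * (2GM/c^2) for the quoted mass
    G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
    c = 2.998e8            # speed of light, m/s
    M_sun = 1.989e30       # solar mass, kg

    M = 4.1e6 * M_sun                  # mass quoted above
    r_s = 2 * G * M / c**2             # Schwarzschild radius, m
    d_km = 2 * r_s / 1e3               # diameter, km
    print(round(d_km / 1e6, 1), "million km")   # ~24 million km (~15 million miles)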
This document was last modified: August 21, 2006. | <urn:uuid:9c6fd801-7e3c-4cad-aedf-d3f9c2926fd6> | 3.640625 | 135 | Knowledge Article | Science & Tech. | 56.850882 |
Crab Eating Macaque
Image Source: eol
The crab-eating macaque (Macaca fascicularis), also known as the Java macaque or long-tailed macaque, is a species of primate found throughout Southeast Asia. The species is sexually dimorphic: males weigh between 4.8 and 7 kg, females between 3 and 4 kg. It has a body length of about 43 cm and a greyish-brown or reddish tail that extends up to 60 cm beyond its body length. Color varies from grey-brown to reddish brown, fading ventrally. Macaques inhabit a variety of habitats in southeastern Asia, but are often found near water due to their tendency to subsist on crabs. The crab-eating macaque also consumes fruits, flowers, insects, leaves, fungi and clay for its potassium. It eats for about 18.3 minutes at a time, and on average 20 times a day.
Like other primates, they communicate with a mixture of auditory, facial, and other physical gestures. The maximum lifespan is about 30 years, though in captivity a wild-caught male was observed to live up to 39 years old. In 2010, the Beijing Genomics Institute used a female crab-eating macaque (CE) that was a captive-bred descendant of a crab-eating macaque from Vietnam for genome sequencing and analyses. We sequenced the nuclear genome of the crab-eating macaque on the Illumina GAIIx platform. The sequencing data were processed with Illumina custom computational pipelines. The genome was de novo assembled using the SOAPdenovo program, based on de Bruijn graph algorithm methods, and we obtained 162 Gb of high-quality sequence, representing 54-fold coverage. The total size of the assembled genome was about 2.85 Gb, providing 54-fold coverage on average. The scaffolds were assigned to chromosomes according to the synteny displayed with the Indian rhesus macaque (IR) and human genome sequences. About 92% of the CE scaffolds could be placed onto chromosomes. These data can be found at the link provided.
Due to the frequent usage of the genus Macaca in scientific research, we felt it necessary to sequence the crab-eating macaque to further our understanding of how it differs from other species, like the Chinese rhesus macaque (CR) and the Indian rhesus macaque. This is especially relevant considering that the other sequenced macaque, the Indian rhesus macaque, has declined in availability, and so the CE and CR macaques are being used in its place. We hope that our genetic research will assist in understanding a species that has had little genetic information available up until now.
More information about Crab Eating Macaque can be viewed at: http://macaque.genomics.org.cn/ | <urn:uuid:3172f2a5-8536-4713-8099-951bf77fa5d2> | 3.421875 | 585 | Knowledge Article | Science & Tech. | 43.327935 |
SUMMARY: In late January, astronomers celebrated the creation of an artificial star in the nighttime sky. The star was created 90 km up in the atmosphere by a powerful laser projected out of the ESO's fourth 8.2m Unit Telescope of the Very Large Telescope at Cerro Paranal in Chile. This artificial star allows the telescope's adaptive optics system to compensate against the fluctuations of the Earth's atmosphere, and produce images as crisp and clear as if they were taken from space.
View full article
What do you think about this story? post your comments below. | <urn:uuid:5a1edf95-e903-46fe-8e87-029f5787fe50> | 3.421875 | 117 | Comment Section | Science & Tech. | 48.232955 |
Quasars, or quasi-stellar radio sources, take their name from the method by which they were originally discovered: as stellar optical counterparts to small regions of strong radio emission. With increasing spatial resolution of radio telescopes, the strong radio emission often seemed to come from a pair of lobes surrounding many of these faint star-like emission-line objects.
The initial method of selection was strong radio emission; later, any object with a blue or ultraviolet excess was considered a good quasar candidate. Very recent evidence from the near-infrared portion of the spectrum indicates that a large fraction of quasars may in fact be brighter in the infrared than in other wavelength bands.
Unfortunately, due to an error in spectral identification made by Maarten Schmidt (1963), these quasars were incorrectly classified as extra-galactic objects. In order to distance themselves from the term 'radio stars', they nicknamed these objects QUASARs, for QUAsi StellAr Radio source (because they only 'appeared' like stars). The subsequent discovery of emission-line objects with little or no radio emission led to the modern term QSO (or Quasi Stellar Object), again partly because they could not bring themselves to consider them as stars within the galaxy.
However, based on the extensive work of Y.P. Varshni, it turns out quasars were stars after all; they are laser stars within the galaxy. Hence the similarities of the properties of quasars such as Cygnus-A and 3C 345 with those of other objects within the galaxy like Eta Carinae, MWC 349, NGC 7027, SS433 and Young Stellar Objects (YSOs). In fact, their properties are so similar that two recent 'radio stars', GRS 1915+105 and GRO J1655-40, have been nicknamed 'mini-quasars' by their discoverers.
This large amount of accumulating data on lasers associated with confirmed radio stars within the galaxy, combined with the recent discovery of 'naked quasars', only adds fuel to the fire; it is high time for the astronomical community to abandon the outdated and obsolete quasar redshift interpretation.
A recent post from a quasar astronomer on sci.astro, responding to concerns raised over the alarming similarities between the jets of the 'radio star' GRS 1915+105 and quasar jets:
In other words, he is claiming there are two kinds of physics involved here: (1) the physics of ordinary stars and (2) the unusual physics of quasars. This response is typical of staunch believers of a popular religion: they see no conflict between their religion and the empirical sciences; they merely separate the two and all is well, as long as the analytical methods of science are not used to probe religious issues, which are matters of faith. This is a pathological form of the selection effect; belief has always been stronger than reason. This division creates an unhealthy schism between the acceptable science of objects within the galaxy and the 'amazing' extra-galactic world. Symptoms range from the compartmentalisation of the various branches of astronomy to a lack of communication between galactic and extra-galactic astronomers. | <urn:uuid:6173ec42-f891-4f82-af03-bc01b4ff762b> | 3.46875 | 646 | Knowledge Article | Science & Tech. | 32.290653 |
Turbulence causes large variations in point and line-averaged wind measurements. These variations often dominate the difference between traditional measurements and the VIL observations. Therefore, the point and line-averaged measurements are not referenced when analyzing the accuracy of the wind profiling method. Calculated error estimates of the VIL wind profiles are compared with the local consistency of the measurements. The local consistency is calculated for each wind measurement by subtracting the average of its vertically adjacent wind measurements. If the vertical wind shear is linear, the variations between adjacent vertical values represent the measurement inaccuracy.
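A minimal sketch of that consistency check (not the original VIL analysis code; the function name, array values and the simple neighbour-average form are assumptions for illustration):

    import numpy as np

    def local_deviation(profile):
        # Each interior measurement minus the mean of its two vertical
        # neighbours; values near zero indicate locally consistent data
        # when the vertical shear is roughly linear.
        p = np.asarray(profile, dtype=float)
        return p[1:-1] - 0.5 * (p[:-2] + p[2:])

    # Hypothetical hourly averaged wind speeds (m/s) at successive heights
    print(local_deviation([4.0, 4.5, 5.1, 5.5, 6.1, 6.4]))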
The Cartesian mapping procedure introduced in Section 4.1.2 may cause profile smoothing by mixing the aerosols from different layers. Therefore, the Cartesian mapping routine is slightly modified for the consistency analysis: each backscatter signal element is averaged into the closest grid point instead of dividing it into the eight closest grid points. Also, the filling of the empty pixels is performed only in the xy-plane. This eliminates the vertical mixing of the CAPPI scans due to the mapping algorithm. Data from July 26, 1989, is selected (see Figure 35) for accuracy calculations, since the wind shear is more linear on that day than on any other day in the FIFE program. The same case was also analyzed with the original Cartesian mapper. The maximum difference between the wind results calculated from differently mapped CAPPI scans is about 0.1 m/s in speed and 0.5° in direction. The RMS errors were smaller for the original Cartesian mapper, indicating the noise in the CCF planes was also smaller because of more efficient averaging.
Figure 35: Hourly averaged wind profiles on July 26, 1989, between 11:00 and 16:00 CDT. Only results with = 1 are plotted. The mixed layer mean depth rises rapidly to 1000 m before 11:00 CDT when the clouds start forming. The base of convective clouds varies between 1000 m and 1800 m during the measurement period. The wind speed shear is linear except in the first 200-300 meters and at the top of the CBL. Error bars show estimated root-mean-square errors.
Figure 36 compares the local deviations and calculated RMS error estimates for hourly averaged wind profiles on July 26, 1989. The local deviations are calculated from the averages of vertically adjacent results. The calculated error estimates represent RMS errors in wind determination from noise in the CCF
after the wind estimates with are rejected. The calculated RMS error estimates are consistent with the internal consistency of the results.
In the convective boundary layer (below 1400 m), the local deviations are less than 0.1 m/s in speed and 1° in direction. Part of the local variation is probably due to real wind fluctuations; thus, it may provide a conservative estimate of the accuracy. The calculated profiles show minimum RMS errors at about 1 km: the contrast between scattering from aerosol structures and mixed air is very good just below the CBL top, since the clear air penetrating down to the CBL is not yet mixed with the boundary layer aerosols. Closer to the ground, more intense mixing smears the aerosol structures, decreasing the signal-to-noise ratio of the CCF and flattening the CCF peak. Above the convective boundary layer, clouds block the signal, decreasing the signal-to-noise ratio.
Figure 36: Local deviations of hourly averaged wind speed and direction (open circles) and calculated RMS error estimates (black circles) on July 26, 1989, between 11:00 and 16:00 CDT. The error bars indicate the minimum and maximum error of hourly averaged wind estimates during the 5-hour observation session. The calculated error estimates are consistent with the local deviations.
Figure 37 shows calculated RMS error estimates of wind measurements as functions of the averaged time and normalized altitude from July 26 to July 31, 1989. Error estimates are first linearly interpolated into altitudes normalized by the convective boundary layer mean depths and then averaged over time. In the CBL, the error estimates are almost constant with altitude; above the CBL, they grow almost exponentially. In the convective boundary layer, the RMS errors are on the average less than 0.2 m/s in speed and 3.0° in direction. In the middle of the convective boundary layer, where aerosol structures have good contrast against the background, the RMS errors are 0.03--0.05 m/s in speed and 0.3--0.8° in direction. Above the CBL, the RMS errors vary between 0.1--3.0 m/s in speed and 1.0--30° in direction, since the correlations get poorer due to aerosol structures diminishing with altitude and clouds blocking the signal. The RMS errors decrease as the averaging time interval increases, since the time averaging of the cross correlation functions reduces random correlations.
Figure 37: Root-mean-square error estimate profiles of reliable results as functions of averaging time interval and normalized altitude from July 26 to July 31, 1989. The altitude is normalized with the convective boundary layer mean depth. Note the logarithmic horizontal axes. The profiles show RMS errors of wind measurements with = 1 averaged over a 5-hour period. Errors of 3-minute (open circles) and hourly averaged (black circles) profiles are shown. The RMS errors decrease as the averaging time interval increases. The minimum errors are obtained in the middle of the convective boundary layer. Below it, turbulent mixing smears the aerosol structures and decreases the signal-to-noise ratio of the CCF. Above it, the clouds block part of the data and decrease the signal to noise ratio. | <urn:uuid:b3702024-2f77-4d8b-afe3-e67c08a3cfcb> | 2.921875 | 1,160 | Academic Writing | Science & Tech. | 45.443934 |
Since geometry and algebra are often discovered to be two sides of the same phenomenon, I suggest that you develop your geometric intuition to understand algebraic phenomena. A typical example here is to use the abstract tensor formalism to understand Hopf algebras. From my personal experience, Hopf algebras did not come alive until I understood that the axioms could be drawn as little bits of string. When I listen to a Hopf algebra talk now, I try to envision the proof via these diagrams.
This leads me to a second point which no one has yet suggested: knot theory is an inherently visual subject that is easily entered via geometric intuition.
To truly make progress as a research mathematician, you may have to also develop tools for symbol manipulation. Geometry can always be a guide to discovering the formulas. Knot theory, abstract tensors, linear algebra, and group theory all are easily approached via geometric techniques. Many of us delight when an arcane algebraic concept is reinterpreted as a geometric one. | <urn:uuid:e1318835-017b-4601-9d5b-70a88cb227b2> | 2.796875 | 208 | Comment Section | Science & Tech. | 36.304706 |
Diffusion tensor imaging is used in magnetic resonance imaging to attempt to quantify the direction of water diffusion on a voxel-by-voxel basis. The standard method is to apply a diffusion-encoding gradient along multiple directions during a spin-echo pulse sequence and then calculate the water diffusion in each voxel from the acquired set of images.
An image in which the primary direction of water diffusion is color encoded. Red is left-right diffusion, green is up-down in the image, and blue is in and out of the image.
A map of the fractional anisotropy (FA). Brighter areas correspond to regions of high anisotropy (i.e. they are preferentially oriented) and darker areas correspond to regions of low anisotropy (or high isotropy, which means there is no preferential diffusion direction).
The apparent diffusion coefficient (ADC) which is the average amount of diffusion per unit time.
The way I like to think of all of this is that if you look at a map of Canada’s lakes and rivers, the lakes would be regions of low anisotropy (low FA) and the rivers would be regions of high anisotropy (high FA).
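The FA and ADC maps can be computed from the eigenvalues of the fitted diffusion tensor using the standard definitions; the sketch below is illustrative only and is not the pipeline used to produce the images shown here.

    import numpy as np

    def adc_and_fa(eigenvalues):
        # ADC (mean diffusivity) is the average eigenvalue; FA measures how
        # far the eigenvalues deviate from that average (near 0 = isotropic
        # "lake", near 1 = strongly oriented "river").
        lam = np.asarray(eigenvalues, dtype=float)
        adc = lam.mean()
        fa = np.sqrt(1.5 * np.sum((lam - adc) ** 2) / np.sum(lam ** 2))
        return adc, fa

    print(adc_and_fa([0.8e-3, 0.7e-3, 0.7e-3]))   # nearly isotropic voxel, low FA
    print(adc_and_fa([1.7e-3, 0.3e-3, 0.3e-3]))   # strongly oriented voxel, high FA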
What is interesting is to look at the underlying data acquired from which the FA, ADC and colormap images are calculated. You can see the diffusion encoded data here which was used to calculate the DTI images on this page.
The equation for the b-value is b = γ²g²δ²[Δ - δ/3]. For example, if δ = 5 ms, Δ = 10 ms, g = 25 G/cm and γ = 2.675×10^8 /T/s, then b ≈ 931 s/mm² (with the proper unit conversion).
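The same numbers in a short Python check, with explicit conversion of the gradient strength to SI units (this is just the formula above, not an excerpt from any scanner software):

    gamma = 2.675e8              # /T/s
    g     = 25 * 1e-4 / 1e-2     # 25 G/cm -> 0.25 T/m
    delta = 5e-3                 # gradient duration, s
    Delta = 10e-3                # gradient separation, s

    b = gamma**2 * g**2 * delta**2 * (Delta - delta / 3)   # s/m^2
    print(b / 1e6)               # in s/mm^2, prints roughly 931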
Note: You can do this calculation in Google, for example: (2.675*10^8 /(tesla*s))^2 * (25 gauss/cm)^2 * (5 ms)^2 *(10-5/3)ms in s/mm^2 | <urn:uuid:d7de4360-50fe-4e29-b558-57b5f1134f0a> | 3.34375 | 447 | Knowledge Article | Science & Tech. | 64.895941 |
W. E. Moeckel
Large downstream movements of transition observed when the leading edge of a hollow cylinder or a flat plate is slightly blunted are explained in terms of the reduction in Reynolds number at the outer edge of the boundary layer due to the detached shock wave. The magnitude of this reduction is computed for cones and wedges for Mach numbers to 20. Concurrent changes in outer-edge Mach number and temperature occur in the direction that would increase the stability of the laminar boundary layer. The hypothesis is made that transition Reynolds number is substantially unchanged when a sharp leading edge or tip is blunted. This hypothesis leads to the conclusion that the downstream movement of transition is inversely proportional to the ratio of surface Reynolds number with blunted tip or leading edge to surface Reynolds number with sharp tip or leading edge. The conclusion is in good agreement with the hollow-cylinder result at Mach 3.1.
An Adobe Acrobat (PDF) file of the entire report: | <urn:uuid:7a24c619-8cc5-4c22-80a3-9356bf22dc4a> | 2.890625 | 200 | Academic Writing | Science & Tech. | 43.10455 |
Last summer I had the amazing opportunity to be on board the U.S. Coast Guard Icebreaker Healy, in partnership with N.A.S.A.’s ICESCAPE mission to study the effects of ocean acidification on phytoplankton communities in the Arctic Ocean. We collected thousands of water samples and ice cores in the Chukchi and Beaufort Seas.
While in the northern reaches of the Chukchi Sea, we discovered large “blooms” of phytoplankton under the ice. It had previously been assumed that sea ice blocked the sunlight necessary for the growth of marine plants. But the ice acts like a greenhouse roof and magnifies the light under the ice, creating a perfect breeding ground for the microscopic creatures. Phytoplankton play an important role in the ocean, without which our world would be drastically different.
Phytoplankton take CO2 out of the water and release oxygen, almost as much as terrestrial plants do. The ecological consequences of the bloom are not yet fully understood, but because they are the base of the entire food chain in the oceans, this was a monumental discovery that will shape our understanding of the Arctic ecosystem in the coming years.
The Arctic is one of the last truly wild places on our planet, where walruses, polar bears, and seals out-number humans, and raised their heads in wonderment as we walked along the ice and trespassed into their domain. However, their undeveloped home is currently in grave danger. The sea ice that they depend on is rapidly disappearing as the Arctic is dramatically altered by global warming.
Some predictions are as grave as a seasonally ice-free Arctic by 2050. Drilling for oil in the Arctic presents its own host of problems, most dangerous of which is that there is no proven way to clean up spilled oil in icy conditions. An oil spill in the Arctic could be devastating to the phytoplankton and thereby disrupt the entire ecosystem. The full effects of such a catastrophe cannot be fully evaluated without better information about the ocean, and we should not be so hasty to drill until we have that basic understanding.
Unless we take drastic action to curb our emissions of CO2 and prevent drilling in the absence of basic science and preparedness, we may see not only an ice-free Arctic in our lifetimes, but also an Arctic ecosystem that is drastically altered. | <urn:uuid:36c36f1e-d80b-471c-a1c5-807be9bcce28> | 3.28125 | 501 | Personal Blog | Science & Tech. | 45.316849 |
Plants terrestrial. Roots occasionally branching laterally, yellowish to black, 0.5--2 mm diam., smooth or with corky ridges, not proliferous. Stems upright, forming caudex to 5 mm thick; gemmae absent or minute, spheric. Trophophores ascending to perpendicular to stem, sessile or stalked; blades linear, oblong, or deltate, simple to 5-pinnate, 4--25 × 1--35 cm. Pinnae (reduced to segments in many species) spreading to ascending, fan-shaped to lanceolate to linear; margins entire to dentate to lacerate, apex rounded or acute; veins free, arranged like ribs of fan or pinnate. Sporophores normally 1 per leaf, 1--3-pinnate, long-stalked, borne at ground level to high on common stalk. Sporangial clusters with sporangia sessile to short-stalked, almost completely exposed, borne in 2 rows on pinnate (except in very small plants) sporophore branches. Gametophytes broadly ovate, unbranched, 1--3 × 1--10 mm. x =44, 45, 92. The greatest diversity in Botrychium is at high latitudes and high elevations, mostly in disturbed meadows and woods. Extensive field and laboratory research has revealed unexpected diversity in North America, especially in subg. Botrychium . For accurate identification, a substantial number of carefully spread and pressed leaves are usually needed because of the large amount of variation found in most species. Taking many samples will have little effect on the population as long as the underground shoots and roots are left intact. Approximately a dozen sterile hybrid combinations have been encountered, but they are very infrequent. The range maps south of Canada reflect mostly local occurrences at high elevations (1000--3700 m) in the mountains. The ranges for many of the species are probably more extensive and continuous than indicated by our present knowledge. | <urn:uuid:7151810a-7d1d-4d60-aba5-30edbc3f5a76> | 3.3125 | 426 | Knowledge Article | Science & Tech. | 40.386674 |
The intent of this library is to implement the unordered containers in the draft standard, so the interface was fixed. But there are still some implementation decisions to make. The priorities are conformance to the standard and portability.
The wikipedia article on hash tables has a good summary of the implementation issues for hash tables in general.
By specifying an interface for accessing the buckets of the container the standard pretty much requires that the hash table uses chained addressing.
It would be conceivable to write a hash table that uses another method. For example, it could use open addressing, and use the lookup chain to act as a bucket, but there are some serious problems with this:
So chained addressing is used.
There are two popular methods for choosing the number of buckets in a hash table. One is to have a prime number of buckets, another is to use a power of 2.
Using a prime number of buckets, and choosing a bucket by using the modulus of the hash function's result will usually give a good result. The downside is that the required modulus operation is fairly expensive.
Using a power of 2 allows for much quicker selection of the bucket to use, but at the expense of losing the upper bits of the hash value. For some specially designed hash functions it is possible to do this and still get a good result, but as the containers can take arbitrary hash functions this can't be relied on.
To avoid this, a transformation could be applied to the hash function; for an example, see Thomas Wang's article on integer hash functions. Unfortunately, a transformation like Wang's requires knowledge of the number of bits in the hash value, so it isn't portable enough. This leaves more expensive methods, such as Knuth's Multiplicative Method (mentioned in Wang's article). These don't tend to work as well as taking the modulus of a prime, and the extra computation required might negate the efficiency advantage of power-of-2 hash tables.
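The trade-off can be illustrated with a small sketch (shown in Python for brevity rather than the library's C++; the constants are arbitrary):

    def bucket_prime(hash_value, count):
        # Prime table size: one relatively expensive modulus, but every bit
        # of the hash value influences which bucket is chosen.
        return hash_value % count            # count chosen to be prime, e.g. 97

    def bucket_pow2(hash_value, count):
        # Power-of-two table size: a cheap bit mask, but only the low-order
        # bits are used, so a weak hash function clusters badly.
        return hash_value & (count - 1)      # count == 2**k, e.g. 128

    h = 0xDEADBEEF
    print(bucket_prime(h, 97), bucket_pow2(h, 128))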
So, this implementation uses a prime number for the hash table size. | <urn:uuid:8773dcfd-3af8-49fa-82cd-20be3d9212e7> | 2.765625 | 411 | Documentation | Software Dev. | 45.198631 |
On the Shipwreck Trail
This Indiana Jones takes students diving at sunken vessels.
What they discover is a real treasure.
The year was 2002 and Bill Jones was standing on the deck
of the Spiegel Grove, a 50-year-old retired navy ship slated
for sinking in the Florida Gulf. The project was 10 years and
a million dollars in the making and Key Largo’s chamber of commerce
was betting that this ship-turned-artificial reef would dramatically
enhance local tourism by attracting divers and anglers.
A professor of aquatic ecology at IU’s School of Public and Environmental Affairs,
Bill Jones didn’t care so much about the tourists, but about the diversity and
abundance of marine life this sunken vessel would attract—the fish, coral, and
plants that would eventually make the ship their home.
Jones and his students were volunteering on board, hauling gear and helping with
last-minute clean up. The air crackled with anticipation. Tour boats, reporters,
photographers, and gawkers would be arriving from all over the state to watch the
massive ship go down.
At 10 a.m., four hours ahead of schedule, in what could almost be interpreted as a
final act of defiance, the 510-foot-long ship shuddered and began to sink on her own.
Everyone aboard was quickly evacuated and just 22 minutes later, Jones and his students
watched from the safety of a diving boat as the Spiegel Grove went belly up in the Gulf.
It took another six months and a half-million dollars to bring the ship to its final
resting place, but within a year the Spiegel Grove had become precisely what Jones
hoped it would be, an enormously popular habitat for hundreds of species of fish and other marine life.
In Jones’s Coral Reef Ecology course, students spend a week in the Bloomington
classroom and ten days in the Florida Keys. Working in cooperation with and the
support of the Florida Keys National Marine Sanctuary, students assess the condition of artificial reefs, such as the site of the shipwrecked San Pedro, an 18th century Spanish merchant vessel that perished when a hurricane hit the Spanish treasure fleet en route from Havana to Spain. All but one of the 21 ships were scattered and sunk over 30 miles of the Florida Keys. The San Pedro is one of nine sunken vessels along the “Shipwreck Trail” between Key West and Key Largo.
There’s not much left of the San Pedro today, just an iron anchor, eight replica
cannons, and a stretch of dense ballast stones, retrieved from European river beds
and stacked, as was the custom of the time, in the lower holds of the ship to
increase stability. With most of the boat scavenged and gone, the real treasure
now, the one that attracts Bill Jones and his students, is the crusty coral reef
that established itself at the shipwreck site.
An Idea is Spawned
How did a SPEA professor—a lake management guy no less—wind up at the bottom of the
sea swimming with students among barracudas and sunken ships?
The idea of teaching in the briny deep surfaced in 1996 in a conversation over
lunch with Charles Beeker, director of the Underwater Science program at I.U.’s
School of Health, Physical Education and Recreation. Beeker mentioned that he was
starting an underwater archaeology project in the Dominican Republic. He had
everything he needed except for a water quality expert.
Nine months later Jones found himself rappelling 51 feet to the surface of
Manantial de La Aleta, one of several ceremonial sinkhole lakes used by the
Taino Indians, the natives who greeted Christopher Columbus when he arrived
at the New World. Among the artifacts discovered during this dive: ceramic pots,
a wooden club, water-gathering gourds, and a ceremonial chair designed for the
Though Jones didn’t realize it at the
this trip was a kind of trial, a way for Beeker to gauge Jones’s
interest in underwater archaeology and make sure that his collaborator
wasn’t merely a competent diver, but a safe one, too. Beeker needed
a careful and attentive guide for his young charges.
“A lot of kids
sign up for scuba in college because they’re adrenaline seekers,”
Jones observes. “They’re not always thinking about safety issues.
I get a rush out of these things, too, but I don’t take unnecessary
For their part, students have called the class “life changing”
and “a dream come true.” And while Jones admits that Coral Reef
Ecology “is only a small part of what I do and an even smaller part
of what SPEA does,” in its own idiosyncratic way the course reflects
SPEA’s broader mission, “giving students skills so they can go out
into the world and do good work.”
Bill Jones is an aquatic ecologist
whose specialty is lake and watershed management. He teaches courses
in limnology, stream ecology, and lake and watershed management.
He and his research group perform lake diagnostic studies, prepare
lake and watershed management plans, and work with the Indiana
Department of Environmental Management in implementing the Indiana
Clean Lakes Program. Jones was a founding member of the North
American Lake Management Society (NALMS) and is currently a governor-appointee
to the Indiana Lakes Management Work Group, a legislative study
group that is recommending changes in Indiana lake policy. He
received a B.S. in zoology (1972) and an M.S. in water resources
management (1977) from the University of Wisconsin, Madison. | <urn:uuid:f5620411-2cc0-4b6b-92da-d8b6c902ce9d> | 3.015625 | 1,229 | Nonfiction Writing | Science & Tech. | 45.863322 |
Discussion about math, puzzles, games and fun. Useful symbols: ÷ × ½ √ ∞ ≠ ≤ ≥ ≈ ⇒ ± ∈ Δ θ ∴ ∑ ∫ • π ƒ -¹ ² ³ °
You are not logged in.
Post a reply
Topic review (newest first)
I haven't touched this math in, well, about 15 yrs. And what I learned I forgot, some of it, not all. Now I want to get my brain going, and I'm going to have to start with the fundamentals: what vectors and algebra are, dot products and matrices, etc. I'd prefer it if there was a book with picture examples of all the different types following the math.
There are the Edexcel C1/C2 books by Keith Pledger, which most of us use for AS-level Mathematics in the UK (for those on that exam board). I think they are easy to follow.
I prefer books, any more suggestions are welcome, give me something to decide on
What type of books? Real ones or downloadable ones? There are sites and videos too.
Hi, I'm in need of Grade 8 / 9 / 10 Trigonometry & Algebra math books, any recommendations from books you learned at school or bought when you were younger ? | <urn:uuid:7acd67e5-70d9-4e29-8d75-d94b738c0573> | 2.734375 | 278 | Comment Section | Science & Tech. | 67.857749 |
Discover the cosmos! Each day a different image or photograph of our fascinating universe is featured, along with a brief explanation written by a professional astronomer. 2012 March 12 The Scale of the Universe - Interactive Flash Animation Credit & Copyright: Cary & Michael Huang
Free Science Curriculum
Click on the image below to start exploring the Arctic and Antarctic.
Middle school sites
High School sites
At his death on 20 March 1727, Isaac Newton left papers relating to all areas of the intellectual pursuits he had followed since arriving at Trinity College, Cambridge, in the summer of 1661. His friend, relative by marriage (to Newton's half-niece Catherine Barton) and successor at the Mint, John Conduitt, posted a bond for Newton's debts and claimed entitlement to this material, Newton having died intestate.
A year for the record books: From extreme drought, heat waves and floods to unprecedented tornado outbreaks, hurricanes, wildfires and winter storms, a record 14 weather and climate disasters in 2011 each caused $1 billion or more in damages and, most regrettably, loss of human lives and property. NOAA's National Weather Service has redoubled its efforts to create a "Weather-Ready Nation", where vulnerable communities are better prepared for extreme weather and other natural disasters.
Radical Raisins [ lesson plan ] (Hattie Chung and Ann Liu, NCSSM, 2006) Plastic Milk [ lesson plan ] (Hattie Chung and Ann Liu, NCSSM, 2006) The Collapsing Can and Thermal Expansion of Gases [ lesson plan ] (Michael Pham, NCSSM, 2006) Colors Behind the Black [ lesson plan] (Hattie Chung and Ann Liu, NCSSM, 2006) Lets be Molecules! [ lesson plan ] (Hattie Chung and Ann Liu, NCSSM, 2006) Invisible Ink [ lesson plan ] (Hattie Chung and Ann Liu, NCSSM, 2006) Different Levels of Density [ lesson plan ] (Hattie Chung and Ann Liu, NCSSM, 2006) Magical Goo [ lesson plan ] (Hattie Chung and Ann Liu, NCSSM, 2006) Mini Geysers!
1. Choose a sound In order for nature sounds to start playing choose a sound from drop-down box for one channel and drag the volume slider up. 2. | <urn:uuid:f2cae51b-c8cf-449e-a8cf-bc64cfaa6c31> | 2.6875 | 583 | Content Listing | Science & Tech. | 57.961928 |
|Oct1-10, 08:59 PM||#1|
1. The problem statement, all variables and given/known data
complete the following decay equations by inserting the missing particle or nuclide information. Identify each type of decay.
see attached screenshot please.
2. Relevant equations
3. The attempt at a solution
I've done all of the decays, but we have to label them and I'm not sure what (iv) would be called.
|Oct2-10, 07:16 AM||#2|
It's called "fission".
|Oct2-10, 07:28 AM||#3|
ok thanks, I called the rest
i) Beta Decay
ii)Beta Decay (positron emission)
iii) Alpha Decay
v) gamma decay (annihilation)
I wasn't sure about the last one. And can you see any problems with the rest??
| <urn:uuid:7dda5caf-45f6-40a5-a1a4-1fe061cd31a4> | 3.203125 | 273 | Comment Section | Science & Tech. | 61.331263 |
Protein crystal growth on Mir (DCAM)
Last round of DCAM specimens set for launch Space Shuttle-Mir Science Program (STS-89/91)
Jan. 14, 1998: Experiments with the Diffusion-controlled Crystallization Apparatus for Microgravity (DCAM) aboard the Mir space station will conclude with a payload of six trays of cells to be carried by STS-89 in January 1998.
To date, with stays lasting as long as 6-months aboard Mir, DCAM has yielded dramatic results. Highlights include numerous large, spectacular crystals of the nucleosome core particle (shown at right), which regulates genetic activities in the nucleus of a cell. Another striking result was the growth of the largest T7 RNA crystal ever produced (0.7x0.8x2.0 mm in size). DCAM cells carrying triglycine sulfate (TGS) also yielded large crystals. TGS has been grown in space by other experiments, so these results will help in gauging the comparative effectiveness of various microgravity processes.
DCAM's first flight, the second U.S. Microgravity Laboratory (USML-2) in 1995, verified the scientific validity of this approach to growing protein crystals. A full complement of six trays comprising 162 DCAM cells was carried to Mir in March 1996 and was returned to Earth on STS-79 which also installed a second set of DCAM trays. The STS-81 mission in January 1997 retrieved the second set of trays and installed a third set, retrieved by STS-85 in May 1997, as scientists continued to refine the mixtures and details used in this promising method. STS-91 will retrieve this DCAM set in May 1998.
Proteins are important, complex biological molecules which serve a variety of functions in all living organisms. Determining their three-dimensional structure will lead to a greater understanding of how they function in living organisms. Many proteins can be crystallized and their molecular structures determined through analysis of those crystals by X-ray diffraction. Unfortunately, some crystals grown in the 1-g environment of Earth have internal defects that limit or impair such analyses. As demonstrated on Space Shuttle missions since 1985, some protein crystals grown in space are larger, and more highly ordered, than the Earth-grown counterparts.
DCAM was developed at Marshall Space Flight Center to grow protein crystals by a special diffusion process. The principal investigator is Dr. Daniel Carter of New Century Pharmaceuticals.
DCAM grows crystals by the dialysis and liquid/liquid diffusion methods. In both methods, protein crystal growth is induced by changing conditions in the solution containing the protein. In dialysis, a semipermeable membrane retains the protein solution in one compartment, while allowing molecules of precipitant to pass freely through the membrane from an adjacent compartment. As the precipitant concentration increases within the protein compartment, crystallization begins.
In liquid-liquid diffusion, a protein solution and a precipitant solution are layered in a container and allowed to diffuse into each other. This leads to conditions which may induce crystallization of the protein. Liquid-liquid diffusion is difficult on Earth because density and temperature differences cause the solutions to mix rapidly.
In the DCAM, a "button" covered by a semi-permeable membrane holds a small protein sample but allows the precipitant solution to pass into the protein solution. Exposure to the precipitant causes the protein to crystallize.
Each DCAM unit is a polycarbonate plastic container a little larger than a plastic can for 35 mm film. The inside is molded into two cylindrical chambers joined by a tunnel. The first chamber, which is smaller, contains a buffer/precipitant solution. The end cap for this chamber holds the protein sample in a "button" covered by a semi-permeable membrane. The larger chamber holds a precipitant solution which is usually more concentrated that that in the smaller chamber. The two chambers are joined by a plug filled with a porous material to control the rate of diffusion. The plug material is selected to be compatible with the protein solution, and its properties are set to match the rate at which the crystals are to grow.
The DCAM has no mechanical system. Diffusion starts on Earth as soon as the chambers are filled. However, the rate is so slow that no appreciable change occurs before the samples reach orbit one or two or even several days later. This also allows protein samples to stay aboard the shuttle in case of a launch delay. In other hardware, many samples must be replaced in the event of a postponement. Such an apparatus is ideally suited for long-duration missions such as those to the International Space Station and Mir.
STS-89 will carry 162 DCAM units mounted in 3 x 9 arrays on six trays stored in a locker in the Shuttle middeck (the same as the array now aboard Mir). Upon arrival at Mir, the DCAMs will be transferred to one of Mir's modules and mounted in a quiet area where crystallization will take place. Three of the six trays will be mounted so they can be photographed as crystals form. After the retrieval and return to Earth by STS-91 in May 1998, the samples will be analyzed.
Protein samples for the crystallization in space are selected by a committee chaired by the PCG project scientist at NASA's Marshall Space Flight Center. Samples are then evaluated and approved by NASA toxicology and safety offices. As a point of comparison, the molecular masses of proteins range from about 890 to 2,200 times that of ordinary sugar, a relatively simple organic compound which is easily crystallized. Candidate DCAM proteins (and guest investigators) for the fifth DCAM-Mir mission include:
- Human Antithrombin III controls blood coagulation in human plasma. Its importance is underscored by the occurrence of severe thrombotic disorders including deep vein thrombosis, pulmonary embolism, and cerebral infarction in subjects with antithrombin mutations. Antithrombin is commonly given to patients suffering thrombotic crises of the shock syndromes. Investigator: Dr. Mark R. Wardell, Washington University School of Medicine, St. Louis, Mo.
- Lysozyme is used a protein model to document the effects of microgravity on crystal growth. Investigators: Dr. Daniel C. Carter, New Century Pharmaceuticals, Huntsville, Ala., Dr. Franz Rosenberger and Dr. Bill Thomas, University of Alabama in Huntsville.
- Nucleosome core particles have important roles in the regulation of gene expression, particularly in the expression of genes transcribed by RNA polymerase III. The nucleosome is the basis for organization within the genome by compacting DNA within the nucleus of the cell and by making selected regions of chromosomes available for transcription and replication. Investigator: Dr. Gerard J. Bunick, Oak Ridge National Laboratory, Oak Ridge, Tenn.
- Outer surface glycoprotein of the hyperthermophile Methanothermus fervidus lets M. fervidus live under environmental extremes, like high temperature, low-pH value, or high salt concentration. Elucidation of the crystal structure of this glycoprotein, which is directly exposed to the environment, may provide important information on the survival of these unusual microorganisms. Investigator: Dr. Jean-Paul Declercq, Université Catholique de Louvain, Belgium.
- Serum Albumin, a key ingredient in blood plasma, is crucial to the transport of drugs and other chemicals throughout the body. Investigator: Dr. Daniel Carter, New Century Pharmaceuticals, Huntsville, Ala.
- HK-Gro EL Complex is used in fundamental virus structure and function studies. Investigator: Dr. John Rosenberg, University of Pittsburgh.
- Ferritin and Apoferritin are used in fundamental biochemistry and crystal growth model systems. Investigators: Dr. Franz Rosenberger, Dr. Bill Thomas, University of Alabama in Huntsville, Dr. Daniel C. Carter, New Century Pharmaceuticals, Huntsville, Ala.
- Bacteriorhodopsin has potential for storing data in optical computer crystals. Investigator: Dr. Gottfried Wagner, Justus-Liebig-University, Germany.
- Ferrochelatase is important to biomedical and biochemical applications. Investigators: Dr. B.C. Wang and Dr. Harry Dailey, University of Georgia.
Launch preparations (KSC) | Mission activities (JSC) | Shuttle-Mir Science Program
Acrobat PDF version of this fact sheet (2 pages; 96K)
Last modified on Friday, Jan. 16, 1998 | <urn:uuid:610c2ece-773f-4f50-96ef-8f9ce4b831bb> | 3 | 1,787 | Knowledge Article | Science & Tech. | 41.116303 |
Natural phenomena relating to heated water generated by igneous activity.
Geothermal and hydrothermal activity [ More info] Glossary of geothermal and hydrothermal activity terms.
Distribution of high-temperature (>150 °C) geothermal resources in California [ More info] Description and map of geothermal resources in California.
Geothermal industry temperature profiles from the Great Basin [ More info] Subsurface temperature, well temperature, and well data of geothermal resources in the Great Basin.
Geysers, fumaroles, and hot springs [ More info] Description of volcanic geysers, fumaroles, and hot springs.
Hydrologic studies in Long Valley caldera [ More info] Hydrologic monitoring data for Long Valley caldera, California, on springs, streams, wells, fumaroles, and precipitation to study the natural hydrologic variations and the response of the hydrologic system to volcanic and tectonic processes.
Plus side of volcanoes: geothermal energy [ More info] Examples of the use of volcanic energy in geothermal power plants.
| <urn:uuid:6a80d22c-0116-421f-a0c5-7524826544d9> | 2.875 | 260 | Content Listing | Science & Tech. | 20.072907 |
int accept(int s, struct sockaddr *addr, int *addrlen);
The argument s is a socket that has been created with socket.3n and bound to an address with bind.3n and that is listening for connections after a call to listen.3n. accept() extracts the first connection on the queue of pending connections, creates a new socket with the properties of s, and allocates a new file descriptor, ns, for the socket. If no pending connections are present on the queue and the socket is not marked as non-blocking, accept() blocks the caller until a connection is present. If the socket is marked as non-blocking and no pending connections are present on the queue, accept() returns an error as described below. accept() uses the netconfig.4 file to determine the STREAMS device file name associated with s. This is the device on which the connect indication will be accepted. The accepted socket, ns, is used to read and write data to and from the socket that connected to ns; it is not used to accept more connections. The original socket (s) remains open for accepting further connections.
The argument addr is a result parameter that is filled in with the address of the connecting entity as it is known to the communications layer. The exact format of the addr parameter is determined by the domain in which the communication occurs.
addrlen is a value-result parameter. Initially, it contains the amount of space pointed to by addr; on return it contains the length in bytes of the address returned.
accept() is used with connection-based socket types, currently with SOCK_STREAM.
It is possible to select(3C) or poll(2) a socket for the purpose of an accept() by selecting or polling it for a read. However, this will only indicate when a connect indication is pending; it is still necessary to call accept().
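To make the call sequence concrete, here is a minimal sketch using Python's socket module, whose accept() method wraps this call; the loopback address, port number, and buffer size are arbitrary choices for the example, not values taken from this page.

import socket

# Create a connection-based (SOCK_STREAM) socket, bind it, and listen.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 5000))   # arbitrary example address and port
server.listen(5)                   # allow up to 5 pending connections

# accept() blocks until a connection is pending, then returns a new
# socket plus the address of the connecting entity.
ns, peer_addr = server.accept()
print("connection from", peer_addr)

# The accepted socket is used for I/O; the original socket keeps
# accepting further connections.
data = ns.recv(1024)
ns.sendall(data)    # echo the received bytes back
ns.close()
server.close()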
Last modified 21/April/97 | <urn:uuid:87a2915a-05ec-4ea2-b4b1-29282ca191ba> | 3.40625 | 421 | Documentation | Software Dev. | 60.793619 |
Source code: Lib/textwrap.py
The textwrap module provides two convenience functions, wrap() and fill(), as well as TextWrapper, the class that does all the work, and a utility function dedent(). If you’re just wrapping or filling one or two text strings, the convenience functions should be good enough; otherwise, you should use an instance of TextWrapper for efficiency.
Wraps the single paragraph in text (a string) so every line is at most width characters long. Returns a list of output lines, without final newlines.
Optional keyword arguments correspond to the instance attributes of TextWrapper, documented below. width defaults to 70.
Wraps the single paragraph in text, and returns a single string containing the wrapped paragraph. fill() is shorthand for
"\n".join(wrap(text, ...))
Both wrap() and fill() work by creating a TextWrapper instance and calling a single method on it. That instance is not reused, so for applications that wrap/fill many text strings, it will be more efficient for you to create your own TextWrapper object.
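As a minimal illustration of the two convenience functions (the sample text and width are arbitrary):

import textwrap

sample = "The quick brown fox jumped over the lazy dog and kept on going."

print(textwrap.wrap(sample, width=20))   # list of lines, no final newlines
print(textwrap.fill(sample, width=20))   # one string, lines joined by newlines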
Text is preferably wrapped on whitespaces and right after the hyphens in hyphenated words; only then will long words be broken if necessary, unless TextWrapper.break_long_words is set to false.
An additional utility function, dedent(), is provided to remove indentation from strings that have unwanted whitespace to the left of the text.
Remove any common leading whitespace from every line in text.
This can be used to make triple-quoted strings line up with the left edge of the display, while still presenting them in the source code in indented form.
Note that tabs and spaces are both treated as whitespace, but they are not equal: the lines " hello" and "\thello" are considered to have no common leading whitespace.
def test():
    # end first line with \ to avoid the empty line!
    s = '''\
    hello
      world
    '''
    print(repr(s))          # prints '    hello\n      world\n    '
    print(repr(dedent(s)))  # prints 'hello\n  world\n'
The TextWrapper constructor accepts a number of optional keyword arguments. Each keyword argument corresponds to an instance attribute, so for example
wrapper = TextWrapper(initial_indent="* ")
is the same as
wrapper = TextWrapper()
wrapper.initial_indent = "* "
You can re-use the same TextWrapper object many times, and you can change any of its options through direct assignment to instance attributes between uses.
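For example, one wrapper instance can be reconfigured between uses (the texts and option values below are arbitrary):

import textwrap

wrapper = textwrap.TextWrapper(width=30)
print(wrapper.fill("First paragraph, wrapped at thirty columns for illustration."))

# Change options by assigning to instance attributes between uses.
wrapper.width = 50
wrapper.initial_indent = "> "
print(wrapper.fill("Second paragraph, re-wrapped at fifty columns with a quoted prefix."))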
The TextWrapper instance attributes (and keyword arguments to the constructor) are as follows:
(default: 70) The maximum length of wrapped lines. As long as there are no individual words in the input text longer than width, TextWrapper guarantees that no output line will be longer than width characters.
(default: True) If true, then all tab characters in text will be expanded to spaces using the expandtabs() method of text.
(default: True) If true, each whitespace character (as defined by string.whitespace) remaining after tab expansion will be replaced by a single space.
(default: True) If true, whitespace that, after wrapping, happens to end up at the beginning or end of a line is dropped (leading whitespace in the first line is always preserved, though).
(default: '') String that will be prepended to the first line of wrapped output. Counts towards the length of the first line.
(default: '') String that will be prepended to all lines of wrapped output except the first. Counts towards the length of each line except the first.
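A small sketch showing initial_indent and subsequent_indent used together to format a bullet item (the item text and width are arbitrary):

import textwrap

item = "This bullet item is long enough that it wraps onto several output lines."
print(textwrap.fill(item, width=30, initial_indent="* ", subsequent_indent="  "))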
(default: False) If true, TextWrapper attempts to detect sentence endings and ensure that sentences are always separated by exactly two spaces. This is generally desired for text in a monospaced font. However, the sentence detection algorithm is imperfect: it assumes that a sentence ending consists of a lowercase letter followed by one of '.', '!', or '?', possibly followed by one of '"' or "'", followed by a space. One problem with this algorithm is that it is unable to detect the difference between “Dr.” in
[...] Dr. Frankenstein's monster [...]
and “Spot.” in
[...] See Spot. See Spot run [...]
fix_sentence_endings is false by default.
Since the sentence detection algorithm relies on string.lowercase for the definition of “lowercase letter,” and a convention of using two spaces after a period to separate sentences on the same line, it is specific to English-language texts.
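A short sketch of the option in action (the sample text is arbitrary; the output shown in the comment assumes the default English-oriented detection described above):

import textwrap

text = "First sentence. Second sentence. Third sentence."
print(textwrap.fill(text, width=70, fix_sentence_endings=True))
# expected: "First sentence.  Second sentence.  Third sentence."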
(default: True) If true, then words longer than width will be broken in order to ensure that no lines are longer than width. If it is false, long words will not be broken, and some lines may be longer than width. (Long words will be put on a line by themselves, in order to minimize the amount by which width is exceeded.)
(default: True) If true, wrapping will occur preferably on whitespaces and right after hyphens in compound words, as it is customary in English. If false, only whitespaces will be considered as potentially good places for line breaks, but you need to set break_long_words to false if you want truly insecable words. Default behaviour in previous versions was to always allow breaking hyphenated words.
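The difference between these two options shows up with a word longer than the wrap width and with hyphenated words (the example text and width are arbitrary):

import textwrap

text = "See antidisestablishmentarianism and other well-known compound-word cases."

print(textwrap.fill(text, width=16))                          # long word is broken
print(textwrap.fill(text, width=16, break_long_words=False))  # long word kept whole on its own line
print(textwrap.fill(text, width=16, break_on_hyphens=False))  # lines break only at whitespace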
TextWrapper also provides two public methods, analogous to the module-level convenience functions:
Wraps the single paragraph in text (a string) so every line is at most width characters long. All wrapping options are taken from instance attributes of the TextWrapper instance. Returns a list of output lines, without final newlines.
Wraps the single paragraph in text, and returns a single string containing the wrapped paragraph. | <urn:uuid:478b9e86-c158-44a9-a808-8a934a8762a3> | 3.203125 | 1,226 | Documentation | Software Dev. | 50.795449 |
Oscillation is the repetitive variation, typically in time, of some measure about a central value (often a point of equilibrium) or between two or more different states. Familiar examples include a swinging pendulum and AC power. The term vibration is sometimes used more narrowly to mean a mechanical oscillation but sometimes is used to be synonymous with "oscillation". Oscillations occur not only in physical systems but also in biological systems, in human society and the brain.
Simple harmonic oscillator
The simplest mechanical oscillating system is a mass attached to a linear spring subject to no other forces. Such a system may be approximated on an air table or ice surface. The system is in an equilibrium state when the spring is static. If the system is displaced from the equilibrium, there is a net restoring force on the mass, tending to bring it back to equilibrium. However, in moving the mass back to the equilibrium position, it has acquired momentum which keeps it moving beyond that position, establishing a new restoring force in the opposite sense. If a constant force such as gravity is added to the system, the point of equilibrium is shifted. The time taken for an oscillation to occur is often referred to as the oscillatory period.
Systems where the restoring force on a body is directly proportional to its displacement, such as the dynamics of the spring-mass system, are described mathematically by the simple harmonic oscillator and the regular periodic motion is known as simple harmonic motion. In the spring-mass system, oscillations occur because, at the static equilibrium displacement, the mass has kinetic energy which is converted into potential energy stored in the spring at the extremes of its path. The spring-mass system illustrates some common features of oscillation, namely the existence of an equilibrium and the presence of a restoring force which grows stronger the further the system deviates from equilibrium.
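For reference, the standard result can be stated compactly; the symbols below (m for mass, k for spring constant, A for amplitude, φ for phase) are conventional choices rather than notation introduced in this article:

m·(d²x/dt²) = -k·x, with solution x(t) = A·cos(ωt + φ), where ω = sqrt(k/m) and the period is T = 2π·sqrt(m/k).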
Damped and driven oscillations
All real-world oscillator systems are thermodynamically irreversible. This means there are dissipative processes such as friction or electrical resistance which continually convert some of the energy stored in the oscillator into heat in the environment. This is called damping. Thus, oscillations tend to decay with time unless there is some net source of energy into the system. The simplest description of this decay process is given by the damped harmonic oscillator, whose amplitude dies away over time.
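Stated compactly for the lightly (under-)damped case, again with conventional symbols not defined in the article (c for the damping coefficient, γ = c/(2m), ω₀ = sqrt(k/m)):

x(t) = A·e^(-γt)·cos(ω_d·t + φ), where ω_d = sqrt(ω₀² - γ²); the envelope A·e^(-γt) decays exponentially as the stored energy is converted to heat.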
Some systems can be excited by energy transfer from the environment. This transfer typically occurs where systems are embedded in some fluid flow. For example, the phenomenon of flutter in aerodynamics occurs when an arbitrarily small displacement of an aircraft wing (from its equilibrium) results in an increase in the angle of attack of the wing on the air flow and a consequential increase in lift coefficient, leading to a still greater displacement. At sufficiently large displacements, the stiffness of the wing dominates to provide the restoring force that enables an oscillation.
Coupled oscillations
The harmonic oscillator and the systems it models have a single degree of freedom. More complicated systems have more degrees of freedom, for example two masses and three springs (each mass being attached to fixed points and to each other). In such cases, the behavior of each variable influences that of the others. This leads to a coupling of the oscillations of the individual degrees of freedom. For example, two pendulum clocks (of identical frequency) mounted on a common wall will tend to synchronise. This phenomenon was first observed by Christiaan Huygens in 1665. The apparent motion of the compound oscillations typically appears very complicated, but a more economic, computationally simpler and conceptually deeper description is given by resolving the motion into normal modes.
More special cases are the coupled oscillators where the energy alternates between two forms of oscillation. Well-known is the Wilberforce pendulum, where the oscillation alternates between an elongation of a vertical spring and the rotation of an object at the end of that spring.
Continuous systems – waves
As the number of degrees of freedom becomes arbitrarily large, a system approaches continuity; examples include a string or the surface of a body of water. Such systems have (in the classical limit) an infinite number of normal modes and their oscillations occur in the form of waves that can characteristically propagate.
See also
- Strogatz, Steven. Sync: The Emerging Science of Spontaneous Order. Hyperion, 2003, pp 106-109
- Vibrations – a chapter from an online textbook | <urn:uuid:1ccdc184-d3fe-40fe-aebb-a83064b16fd0> | 3.96875 | 913 | Knowledge Article | Science & Tech. | 26.87346 |
When optical components are reduced to the nanoscale, they exhibit interesting properties that can be harnessed to create new devices. For example, imagine a block of material with thin layers of alternating materials. This creates a periodic arrangement of alternating dielectric constants, forming a "photonic crystal" that is analogous to the electronic crystals used in semiconductor devices. Photonic crystals, along with quantum dots and other devices patterned at the nanoscale, may form the basis for sensors and switches used in computers and telecommunications. More information on Nanophotonics can be found here.
nanoHUB.org, a resource for nanoscience and nanotechnology, is supported by the National Science Foundation and other funding agencies. | <urn:uuid:f3415627-b0eb-4eb5-8063-ec283e2326e8> | 2.890625 | 185 | Content Listing | Science & Tech. | 24.200458 |
May 27, 2010
For researchers the easiest new species to discover were insects: over 48 percent of the new species described in 2008 were insects. Over a third of the new insects were beetles. Mammals were among the most difficult: researchers discovered only 41 new species of mammal in 2008, nearly a third of which were rodents. Five times as many extinct mammals were found as living species.
To date, insects represent a majority of the world's described species, with just over a million insects described. Plants come in second with just over a quarter-million. As of 2008, researchers have described 9,997 birds, 8,863 reptiles, 6,644 amphibians, and 5,528 mammals. In all, scientists have described almost 2 million species (1,922,710) since taxonomic work began in the 18th century.
Unidentified planthopper insect from Madagascar. Scientists estimate that at best 20 percent of the world's species have been described. Photo by: Rhett A. Butler.
However, if higher total species estimates prove correct, for example 50 million species, then it would take two and a half millennia before the world's species are described at the present rate.
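As a rough back-of-envelope check on that figure (the annual description rate used here, roughly 19,000 species per year around 2008, is an assumption rather than a number stated in this article): (50,000,000 - 1,922,710) / 19,000 ≈ 2,500 years, i.e. about two and a half millennia at the present pace.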
Whether 8 million or 48 million species remain undiscovered by researchers, many of these species may vanish before scientists find them. Extinction rates are currently estimated at 100 to 1,000 times higher than the background extinction rate (i.e. the average extinction rate as determined by studying fossils), leading scientists to warn of a mass extinction that could rival the one caused by the comet that destroyed the dinosaurs.
To read the report: SOS 2010.
Close to a billion species: ocean exploration reveals shocking diversity of tiny marine life
(04/19/2010) Biologists worldwide may have to start re-evaluating their estimates of the number of species on Earth, since expeditions documenting the oceans' tiniest species have revealed shocking diversity: in the tens of millions of species, at least, and according to one researcher "closer to a billion". Fourteen field projects sent out by the Census of Marine Life focused on the oceans' smallest inhabitants: microbes, zooplankton, and tiny burrowing species inhabiting the deep sea bed. What they found was astounding.
Australia starts 10 million dollar initiative to find new species
(02/15/2010) Known as the 'Bush Blitz', Australia will spend 10 million Australian dollars (8.88 million US dollars) over the next three years to conduct biodiversity surveys in far-flung places, reports Sydney Morning Herald. The program hopes to both uncover new species and gather more data about innumerable little-known plants and animals on the continent.
Videos and Photos: over 17,000 species discovered in waters beyond the sun's reach
(11/23/2009) Deep, deep below the ocean's surface, in a world of ever-present darkness, one would expect few, if any, species would thrive. However, recent expeditions by the Census of Marine Life (CoML) have found an incredible array of strange, diverse, and amazing creatures. To date a total of 17,650 species are now known to live in frigid, nearly lightless waters beyond the photic zone—where enough light occurs for photosynthesis—approximately 200 meters deep. Nearly 6,000 of these occur in even harsher ecosystems, below depths of 1,000 meters or 0.62 miles down. | <urn:uuid:4e1724a4-8eb1-416a-ae1c-c62934d4b5f9> | 3.53125 | 696 | Content Listing | Science & Tech. | 51.192431 |
Units Used in Chemical Oceanography
In chemical oceanography most of the numerical results are expressed as concentrations—that is, as the amounts of various constituents in a certain quantity of sea water. Obviously many different combinations of units could be used for this purpose.
Only two units are to be used for expressing the quantity of sea water: either (1) the kilogram or (2) the amount of water which at 20° C. and pressure one atmosphere occupies the volume of one liter. The latter unit is designated as L20, but in this discussion it will be indicated as L. The system in which the constituents are reported as the amounts present per liter is designated as the “preferred” one, with an alternative for the abundant substances that may be reported as grams per kilogram of sea water. Salinity and chlorinity are always reported as grams per kilogram of sea water. It should be understood that the proposed system applies only to the reporting of analytical data in the literature. Any suitable units may be adopted for the discussion of special problems.
For expressing the amounts of the dissolved constituents, two types of units are proposed: (1) physical units of mass, volume, or pressure, and (2) units based upon the number of atoms of the designated element, which may be present as ions or molecules either singly or in combination with other elements. In certain cases the number of chemical equivalents is acceptable.
The mass units most commonly used are those of the metric system and bear the following relations to each other: 1 g (gram) = 1,000 mg (milligrams) = 1,000,000 µg (micrograms).
In certain cases (for example, alkalinity and hydrogen-ion concentration) it is desirable to report the concentration in terms of chemical equivalents. The units shall then be
For expressing the partial pressure of gases dissolved in sea water the basic pressure unit is the “physical atmosphere” (p. 55):
Volume units are all based upon the true liter—that is, the volume of 1 kg of distilled water at 4°C. When volume units are used, the temperature and pressure should be stated. The quantities of dissolved gases, when expressed as milliliters (ml), should be those for 0°C and a pressure of 1 atmosphere, that is, NTP.
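As an illustration of handling gas quantities reported in ml at NTP, the short sketch below converts a dissolved-oxygen reading to micromoles and microgram-atoms of oxygen per liter; the molar volume (about 22.4 liters per mole at NTP) and the sample value are assumptions made for the example, not figures from this text.

# Convert dissolved O2 reported as ml/L at NTP (0 deg C, 1 atm).
MOLAR_VOLUME_NTP = 22.4       # liters per mole of gas at NTP (approximate)

o2_ml_per_liter = 5.0                                          # example reading
o2_umol_per_liter = o2_ml_per_liter / MOLAR_VOLUME_NTP * 1000  # micromoles of O2 per liter
o_ug_atoms_per_liter = 2.0 * o2_umol_per_liter                 # two O atoms per O2 molecule

print(round(o2_umol_per_liter, 1), "umol O2 per liter")
print(round(o_ug_atoms_per_liter, 1), "ug-atoms O per liter")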
The centigrade scale is to be used for reporting temperatures.
The units to be used in reporting data, proposed by the International Association of Physical Oceanography, are given in table 34. It should be noted that all units are based upon the amount of a designated element that may be present either singly (for example, oxygen or calcium) or in combination with other elements (for example, phosphate-phosphorus).
Because the 20° liter is the standard volume unit for expressing the quantity of sea water, glassware should be calibrated for this temperature, and, if practicable, measurements and chemical determinations should be made at or near this temperature. If the sea-water samples are not at 20°, it may be necessary to apply certain corrections. Full descriptions of the methods for making such corrections and tables to facilitate the transformation are included in the Report of the International Association of Physical Oceanography. In most cases the accuracy of the methods of analysis for the elements present in small amounts do not justify such corrections.
As already stated, it is frequently desirable to express the relative concentrations as Cl-ratios or chlorosity factors (p. 167). These relationships may be used to calculate the quantity of the major elements present in water of known chlorinity or to check variations in composition which may be brought about by natural agencies, pollution by sewage and industrial wastes, or by other agencies. | <urn:uuid:97419252-6940-4a3f-a0a1-8cca176e2ea2> | 3.484375 | 726 | Documentation | Science & Tech. | 28.909548 |