If renewable resources are harvested at a rate greater than their
regeneration rate, the long-term flow of benefits is reduced, and they
are said to be overharvested. When natural capital is drawn down too
far, fundamental ecosystem changes can occur which make ecosystem
recovery to full service delivery potential very slow or impossible,
and degradation is said to have occurred. Degraded ecosystems support half or less of the biodiversity of used but non-degraded ecosystems.
Overharvesting is often the unintended side effect of activities aimed at harvesting just one or a few components of the ecosystem. The discarded “bycatch” in fisheries and the habitat destruction caused by logging are examples of this. Regulatory policies that pay no heed to anything other than the target species encourage this kind of damage.
Overharvesting is a problem in many localities. For example, about 9 percent of rangelands south of the equator are grazed by domestic livestock at unsustainable rates. The fish stocks in the African Great Lakes (Lake Victoria in particular) show classic symptoms of overfishing, and marine fish stocks in Western and Eastern Africa are at risk of overfishing.
Monitoring the Atlantic Inflow toward the Arctic (MAIA)
The main objective of MAIA was to develop an inexpensive, reliable system for monitoring the inflow of Atlantic water to the Nordic Seas, based on coastal sea level data, and to see how changes of ice extent in the north are related to this flux. Available observation systems, including standard tidal stations, were used to obtain transport estimates with a time resolution of less than a week, and the method was tested to find out if it could be applied to a similar monitoring of other regions.
A general overview of the project is presented in the MAIA brochure (948 KB). The project was divided into three phases:
- The analysis of historical data to develop useful algorithms for computing the inflow
- A validation experiment which focused on collecting data to test the algorithms
- An analysis to test and refine the algorithms and recommend measures for improvement
Who funded the project?
MAIA was a research project within the Fifth Framework programme of the European Commission (Contract EVK2-CT-1999-00008) supporting Key Action 2 (Global Change, Climate and Biodiversity). The project was part of the Energy, Environment and Sustainable Development programme: to better exploit existing data sets and observing systems.
Who ran the project?
MAIA was coordinated by SINTEF Fisheries and Aquaculture, Trondheim, Norway and involved scientists from the Proudman Oceanographic Laboratory, Merseyside, UK; Fisheries Research Services, Aberdeen, UK; Institute of Marine Research, Bergen, Norway; Swedish Meteorological and Hydrological Institute, Norrköping, Sweden and Université Pierre et Marie Curie, Laboratoire d'Oceanographie Dynamique et de Climatologie, Paris, France.
The project ran from January 2000 to December 2002. Data were collected as part of a validation experiment from May 2000 to November 2001. Data management support for the project was provided by BODC.
The fieldwork programme consisted of 31 cruises and included 5 repeated CTD sections. These sections were in the Faroe Shetland Channel, North of Faroes, Norwegian Sea (Gimsøy and Svinøy) and Barents Sea (Fugløya - Bear Island). The research vessels Scotia (UK), Magnus Heinason (Faroes), Johan Hjort and G.O. Sars (Norway) were used. 51 moorings containing current meters, ADCPs and bottom pressure recorders were deployed along the sections. 10 RAFOS floats were also deployed in the Lofoten Basin. During the Johan Hjort cruise in May 2000 about 300 water samples were collected in order to measure the concentration of iodine-129 (¹²⁹I) relative to ¹²⁷I. Analysis was carried out by the Centre de Spectrométrie Nucléaire et de Spectrométrie de Masse, France.
Available observational data from the standard tidal stations at Tórshavn, Lerwick, Bodø and Ny-Ålesund were also used in the analysis.
The MAIA project data set
BODC had responsibility for assembling and fully documenting all data collected during the validation experiment. The data set consists of 859 CTD casts, 18 moored ADCPs, 43 current meters, 6 bottom pressure recorders, 2 inverted echo sounders, 8 RAFOS floats and 1 Iodine experiment.
The full MAIA data set was published on CDROM by BODC in March 2003 complete with user interface and documentation.
Related MAIA pages at BODC
- MAIA CDROM
- MAIA brochure (948 KB)
- Other links
(This article was originally published in Cichlid News Magazine, Jan-00 pp. 32-34, It is reproduced here with the permission of author Ron Coleman and Aquatic promotions).
While few cichlids could ever be regarded as dull, some groups of cichlids are particularly intriguing. The lamprologines are just such a group. Cichlid aquarists and scientists alike are drawn to these natives of Lake Tanganyika for all sorts of reasons. There are plenty of forms, including roughly 85 described species (all from Lake Tanganyika except for six species found in the Zaire River system) and there are more species being described all the time (Bills and Ribbink 1997). They come in a range of sizes from tiny dwarf shell-dwellers to species over a foot in length, and they exhibit a diversity of interesting behavior.
Recent cladistic analyses have included Lepidiolamprologus attenuatus. Photo by Don Danko.
If you have followed these fishes for the last few years, you are aware that generic names applied to them keep changing: sometimes they are all lumped in the genus Lamprologus, while at times they are split into separate genera, such as Altolamprologus and Neolamprologus. Why is this?
These name changes reflect developments in our understanding of the evolutionary relationships among these fishes. The good news is that several teams of researchers have recently devoted substantial effort to sorting out these fishes; the bad news is that the results don't all agree with each other, so the process is ongoing. In the mid-1990s two teams of researchers (Sturmbauer et al., 1994, from the State University of New York at Stony Brook and Kocher et al., 1995 from the University of New Hampshire) applied modern molecular techniques to the lamprologines, but got surprisingly different results. More recently, Melanie Stiassny (American Museum of Natural History, New York) re-examined a large number of lamprologines using more traditional techniques of inspecting morphological structures. Her findings support some of the previous molecular work, but also suggest that even more bizarre things may be going on. Understanding this research and other controversies in cichlid systematics requires a quick introduction to modern systematics.
Systematics is the study of the evolutionary relationships among organisms. Like all branches of science, this field is not without disagreements, but in the last two decades at least some agreement has been reached on the basic approach and philosophy of systematics. That methodology is phylogenetic systematics, often called "cladistics." The goal of a cladist is to produce a phylogeny for a group of organisms, i.e., a depiction of the evolutionary relationships of that group. The purpose of the phylogeny is to illustrate which taxa (e.g., species, genera, etc.) are most closely related to which other ones and to illustrate the evidence we have to support that relationship. An important point to remember is that there is a single, true phylogeny, i.e., these organisms evolved in a certain way. The problem comes in trying to reconstruct that pattern of evolution.
As scientists gather more data, our understanding of these relationships grows. We might realize that creatures that we once thought were closely related are not so closely related at all. Thus, a currently accepted phylogeny may change, sometimes dramatically.
Cladistics has a few central principles, one of the more important being that a taxon can consist only of the evolutionary descendants of a common ancestor. So what? This means that if you are naming a group, such as a family, or a genus, you can only include those species which descended from a common ancestor. Also, you must include all of the descendants. This makes the group "monophyletic," the basis for cladistics. This may seem a bit esoteric or even blatantly obvious (of course you want to include only things that are closely related to each other in a group) but this wasn't always the way things were done. It also means that no matter how much you like the name of something, if it doesn't belong in a group (because it isn't closely related to the other taxa in the group) you have to remove it from the group and call it something else.
The second key point of cladistics is that we decide how closely taxa are related solely on the basis of what are called shared derived characters or "synapomorphies." A synapomorphy is an evolutionary novelty, that is to say, something peculiar that didn't exist before. It might be a strange bump on the head or a particular way two bones join together or even a unique behavior. If two organisms share a very peculiar trait, we assume that the reason they have this trait is because their common ancestor had it. For example, imagine that two fish both have a bright iridescent stripe along the base of the dorsal fin. The odds that such a peculiar trait evolved twice are very low. It is far more likely that the common ancestor to these two fish had the bright iridescent stripe and "passed it on" to its descendants.
Now imagine that we are examining four species of fish. Two species share the peculiar trait of having extremely long pelvic fins, three have the bright iridescent stripe, and the fourth has neither of these peculiar traits. Additionally, all four species have a pointed tail.
Altolamprologus compressiceps is an excellent example of an extremely "derived" or specialized lamprologine. Fish and Photo by Don Danko.
The cladist would hypothesize that the two with extremely long pelvic fins are closest relatives, or "sister taxa." The third species, which does not have long pelvic fins, but shares the bright iridescent stripe with the other two, is the sister group to the group consisting of the two species with the long pelvic fins. Finally, we also can conclude that all four species form a monophyletic group because they all share the peculiar, derived character of having a pointed tail.
Cladistics would be easy if real organisms were so neat and tidy. Unfortunately, the real world is vastly more complex. Sometimes, characters evolve more than once independently. For example, it is quite possible that pointed tails have evolved a number of times in cichlids (and indeed they have). To deal with conflicting data, cladists rely on the principle of "parsimony," meaning that the path that takes the fewest total steps is most likely the one that evolution took. So, imagine three taxa. The first and second share fifteen uniquely derived characters. But, the second taxon shares two unique characters with the third taxon. Which two species are most closely related? Because of parsimony (the 15 outweighs the two), we assume that the first two species evolved from a common ancestor and the "unique" traits shared by the second and third species each evolved twice, once in the second taxon and once in the third. So, the first and second species are sister taxa, and the third species is the sister taxon to the group consisting of the first and second species.
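To make the counting in that example concrete, here is a minimal sketch in Python (the character matrix is invented purely for illustration and is not from any real dataset): it scores every pair of taxa by the number of derived characters they share and groups the highest-scoring pair, mirroring the 15-versus-2 reasoning above. Real cladistic software searches over whole trees rather than pairs, so treat this only as a toy.

from itertools import combinations

# Toy character matrix: 1 = taxon shows the derived state, 0 = it does not.
# The values are invented for illustration only.
characters = {
    "taxon_1": [1] * 15 + [0] * 2,   # shares 15 derived characters with taxon_2
    "taxon_2": [1] * 15 + [1] * 2,   # also shares 2 derived characters with taxon_3
    "taxon_3": [0] * 15 + [1] * 2,
}

def shared_derived(a, b):
    """Count characters for which both taxa show the derived (1) state."""
    return sum(x == 1 and y == 1 for x, y in zip(characters[a], characters[b]))

scores = {pair: shared_derived(*pair) for pair in combinations(characters, 2)}
for pair, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(pair, score)

sister_pair = max(scores, key=scores.get)
print("Most parsimonious sister pair:", sister_pair)
# Grouping taxa 1 and 2 forces the 2 characters shared by taxa 2 and 3 to evolve
# twice; grouping taxa 2 and 3 instead would force 15 characters to evolve twice.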
In the case of sorting out the lamprologines, the first task was to find peculiar traits shared by all members of the group. When Stiassny scrutinized the bones of the head and the tail, she found such traits. The particular arrangement of bones below the eyes, in the pelvic region, and in the tail are unique to this group. Further, lamprologines have a unique type of scale - with tiny tooth-like structures over the entire surface of the scale and a unique arrangement of the teeth. Together these characters and others give us evidence that all the lamprologines are a single evolutionary lineage, i.e., a monophyletic group.
Next, Stiassny found evidence that Variabilichromis moorii (formerly called Neolamprologus moorii) is the sister group to all other lamprologines. This finding agrees with the molecular results of Sturmbauer et al. (1993) and will likely stand the test of time. But things get messy after this point.
Neolamprologus sexfasciatus represents still another example of the broad diversity seen among the lamprologines of lake Tanganyika. Photos by Mikael Karlsson.
Previous studies have divided the lamprologines into several genera: Lamprologus, Lepidolamprologus, Neolamprologus, Altolamprologus, Telmatochromis, Julidochromis and Chalinochromis. When Stiassny examined many species from each of these genera she found a peculiar thing: certain, but not all, members of the genera Lamprologus, Lepidolamprologus, Neolamprologus and Altolamprologus contained a peculiar bony element in the jaw, which she called a "labial bone." This presents a problem, because it suggests that the 26 species which share this character form a natural evolutionary group, excluding the other members of those four genera. Stiassny does not offer a new name for this group as yet, other than to call it the "ossified" group, and she stresses that further work is needed; however if she is correct, then the current scheme of using the names Lamprologus, Lepidolamprologus, Neolamprologus and Altolamprologus will have to be abandoned or at least radically adjusted.
This work on lamprologines is just one example of a recent surge in interest in cichlid systematics. Indeed, papers are coming out almost too fast to keep up with! In the short term, we are in a period of instability for the names of cichlids, which is a nuisance to hobbyist and scientist alike. In the long run, however, we are gaining a much deeper understanding of these fascinating fishes.
© Copyright 2000 Ron Coleman, all rights reserved
Coleman, Ron. (August 29, 2001). "Revealing Relationships". The Cichlid Room Companion. Retrieved June 18, 2013, from: http://www.cichlidae.com/article.php?id=158&lang=it.
Acid-base indicators provide a great platform for a variety of at-home chemistry experiments that anyone can do. One of the simplest indicators that is readily available is red cabbage. It turns out that the colored pigment that gives the cabbage its color is a natural acid-base indicator. The red color of cabbage comes from a molecule called anthocyanin. This naturally occurring dye changes its color depending on the presence of an acidic or alkaline (basic) substance.
Many other foods contain anthocyanins including cranberry juice, black currants, and strawberries. Some flowers such as hydrangea also contain anthocyanins, and this makes their color sensitive to the acidity of the soil in which they grow.
If you have a blender, grab three to four leaves of cabbage, break them up into reasonable size chunks and put in a blender. Add a few cups of water and blend until the chunks of cabbage turn to small bits. About thirty seconds should be sufficient. Strain the blended liquid through a screen and you have your acid-base indicator solution.
If you are using the pot method, chop up a few leaves and toss into a pot of water. Bring to a boil for about 20 minutes (enjoy the aroma) then allow to cool. Strain the solution through a screen and you have your indicator.
Now the fun part is to find some items from your kitchen to test with your indicator. To test some liquids or powders, first add a bit of your indicator solution to a glass of water. Then you can add a bit of say, lemon juice, orange juice, baking soda, egg whites, etc. Watch carefully to see what color the liquid changes. You may want to prepare several containers of indicator solution so you can test and compare multiple items.
You can mix up a huge batch of indicator solution for your friends or classmates, or you could cook up some sauteed cabbage or maybe even make a sweet and sour German dish. In any case, don’t let the cabbage head go to waste! Here’s a quick listing of many dishes you can cook up with your leftovers.
Normally an acid (lemon juice) will change the cabbage indicator to a bright red color and bases (like washing soda) will change the color to a light green or yellowish color. The colors correspond to what chemists refer to as the pH of the solution. Things with a low pH value are acidic and those with a high pH value are basic. Below is a sample of some typical values of pH for some household items.
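If you want to keep a record of your tests, a short script like the one below can sort your results. It is only a sketch: the pH figures are rough, commonly quoted values (not the CRC Handbook numbers referred to below), and the colour boundaries are approximations of the red-to-green behaviour described above.

# Rough, commonly quoted pH values for a few household items (illustrative only).
samples = {
    "lemon juice": 2.3,
    "orange juice": 3.5,
    "pure water": 7.0,
    "baking soda solution": 8.5,
    "washing soda solution": 11.0,
}

def cabbage_indicator_colour(ph):
    """Approximate colour of a red cabbage (anthocyanin) indicator at a given pH."""
    if ph < 6.5:
        return "acidic -> bright red/pink"
    elif ph <= 7.5:
        return "roughly neutral -> purple"
    else:
        return "basic -> green to yellowish"

for name, ph in samples.items():
    print(f"{name:>22}: pH {ph:>4} -> {cabbage_indicator_colour(ph)}")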
Values taken from the CRC Handbook of Chemistry and Physics
Graphene by the kilo
Durham Graphene Science founder Karl Coleman is forging ahead in production of single-layer carbon. Sarah Houlton talks to the 2011 Chemistry World entrepreneur of the year
- Karl Coleman founded spin-out Durham Graphene Science to commercialise his method of making graphene flakes
- Chemical vapour deposition can produce very pure graphene, unlike 'Scotch tape' mechanical exfoliation of graphite
- The flakes form a powder that could find use in inks, composite materials and capacitors
- The company is currently still based in university labs, but is looking to move out to a pilot plant
Graphene is a two-dimensional form of carbon with the atoms arranged in a honeycomb lattice that looks like chickenwire. In recent years, it has generated a lot of interest, with potential applications including inks, composites, capacitors and even consumer electronics, and the 2010 Nobel prize for physics was awarded to two scientists working on the material. Fascinating as it is, there's a problem if it's going to find widespread use - it's not easy to make it in pure form. This year's Chemistry World Entrepreneur of the Year, Karl Coleman, is looking to change that, with his spin-out company, Durham Graphene Science.
The standard method for making graphene involves mechanical or chemical exfoliation. 'It works well for physicists who only need one tiny flake to study, but for chemists, engineers or product developers, that's no good,' he says. Coleman had been working on carbon nanotubes for some time, using chemical vapour deposition (CVD) to make them, and a couple of years ago started to wonder whether, by adapting the technology and altering the parameters, it might be possible to make the flat layers of graphene using CVD, too.
It turned out that it was. The key to success lay in changing both the catalyst and removing the substrate. 'By moving from systems that used transition metal catalysts along with hydrocarbon gases - such as methane mixed with hydrogen - to alcohols as the source of carbon and simple alkoxides, we found it worked extremely well, giving us pure layers of graphene,' he says. 'Not only that, it didn't just make tiny flakes of the material - we found we could scale the process up and make more significant amounts of the material.'
An eye for opportunity
Graphene flakes and powder are ideal for use in inks, capacitors and composite materials
© KARL COLEMAN
There were two key factors that aided him in the commercialisation process - support from Durham University, both from the chemistry department and the business innovation services department (DBIS), and winning a Blueprint Knowledge Transfer award last year. 'DBIS keeps a very close eye on what's going on in chemistry - listening to all the gossip!' he says. 'They heard about what we'd been doing, and we met with Mike Bath from their group, who is now a director of the company. He got very excited about our work, and agreed it definitely had commercial potential. So we went looking for investment to start a spin-out company, and were very lucky - two investors were fighting to get involved. We could only go with one, and Northstar Ventures gave us a £100 000 investment, which was sufficient to launch the company, and start proof of concept studies.'
The Blueprint award was every bit as important in setting up the company, he says. 'It's a competition held in the north-east of England, run by the five universities there with support from regional and national businesses and commerce groups,' he says. 'We won their Knowledge Transfer award, which gave us £5000 in cash and - importantly - £5000 in business support from the event sponsors. We got legal and business development advice from experts such as the IP Group, law firm Dickinson Dees, and a free patent submission via another law firm, Murgitroyd, which was invaluable. It gave us very useful contacts, and I attended numerous workshops and entrepreneurial courses which gave me a real insight into what the world of business is really like. As an academic, you don't realise what support is out there and how many people are willing to help you. Of course I'm comfortable answering questions about the science, but people with a business background ask all sorts of important questions I simply hadn't thought of.'
Coleman is now looking for further investment to take the company to the next stage. Currently, they can make several grams of graphene a day, but they want to be able to produce kilogram quantities. 'The aim is to get the material out to customers so they can test it and build prototype products that use it, in areas such as composites, inks and capacitors,' he says. 'It's important to supply them with sufficient material so they can test it properly, and build serious prototypes or make master batches for composite or ink applications.'
Coleman is hoping to take graphene production out of the lab and up to commercial scale
© KARL COLEMAN
Durham Graphene Science is still based within Coleman's labs in the chemistry department at the university, but they are now looking for the further cash injection needed to move out and set up a small production facility, and they are in talks with science and technology parks within the north-east of England. 'We could move out now, but want to do it properly and we will need that extra investment to set up a pilot plant facility, rather than just another lab somewhere else,' he says. 'We hope to have it up and running within the next 12 months, but the earlier we can do it the better.'
Despite the fact that they can still only make limited quantities of graphene, the company already has several customers across a variety of sectors - academic labs, big European research institutes, and large industrial companies who are interested in investigating what they might be able to achieve with it. 'For some, we aren't yet producing the material in large enough quantities for them to test it seriously, for applications such as materials in the aerospace industry,' he says.
In terms of the competition, Coleman believes it is limited. The standard method uses a top-down approach: chemical exfoliation or oxidation of graphite - aggressively stripping the layers apart in some way - and there are three small companies in the US who make graphene like this. The obvious disadvantage with this approach is the difficulty in removing graphite impurities. CVD is bottom-up - building the graphene from scratch - which allows much purer material to be created. Perhaps more importantly, it can be scaled into a continuous process which is going to be vital for the future if applications for graphene are to be developed and commercialised.
Finding a niche
'We produce graphene as platelets and powder, which is useful for inks, capacitors and composite materials,' he says. He thinks these applications are likely to be commercialised more quickly, maybe within five years. Commercial applications in electronics - where the graphene would be needed in film form, like that made by the Samsung process - are still a decade or more away. That said, he is looking at the possibility of assembling films from platelets in the lab, but it's definitely a longer-term goal.
When the new facility is up and running, Coleman anticipates they will be able to produce a kilogram of graphene a day. 'Once we've demonstrated the method works at that scale, if demand is there we should be able to ramp up to tens of kilograms a day relatively quickly, so several tonnes a year,' he says. 'That would be a serious amount of material, and enough for real products that would make it onto the shelf.' This doesn't sound like an impossible dream - Bayer already uses CVD to make hundreds of tonnes of carbon nanotubes a year, so it's clear it can be run on a substantial scale. His aim is to supply graphene at a cost comparable to carbon fibres, where high-quality material currently sells for just under $100 (£61) a kilogram.
Looking to the horizon
Coleman hasn't let the science stand still, either - he has continued to develop and patent other graphene synthesis methods. 'Some are rather more sophisticated, but I can't talk about them yet as we've only just applied for the patents,' he says. 'We found that once we had one method, it was a great springboard for finding others, and it's really snowballed.'
Although it's clearly still early days for Durham Graphene Science, the markets are developing and Coleman's plan is to start by picking the low-hanging fruit in terms of customers who are already working on carbon nanomaterials and products that could easily incorporate graphene, but whose research is being limited by a lack of availability of good quality material.
'We know people want it and it's worth doing - we're not trying to force something into a marketplace that's not ready for it!' he says. 'We've spoken to our current investors and potential new investors, and it's now clear that we need to ramp up our efforts. There just aren't enough hours in the day to make enough material with our current lab-scale production method. Our business plan keeps being pushed forward and we're doing things ever earlier than we'd originally planned. Business plans can involve a lot of artistic licence, but we really have found that everything has gone according to plan. I'm extremely optimistic for the future.'
Sarah Houlton is a freelance science writer based in Boston, US
When the Austrian biologist Paul Kammerer was found dead, a gun by his side, in September 1926, the reason for his despair seemed obvious. For years there had been rumours that he was a fraud, and a few weeks earlier a leading researcher hinted at proof that Kammerer had invented data to support his crazy ideas about evolution. While his obituaries tried to put a positive gloss on the debacle, many scientists believed his death proved his guilt, and his name was quickly forgotten by all but a few historians. But now the story of Kammerer has taken a dramatic new twist, following research suggesting that his "fraudulent" work was a major breakthrough decades ahead of its time. If confirmed, Kammerer will be seen as one of the pioneers of evolution theory, alongside the likes of Darwin himself.
At the time he performed his notorious experiments, biologists were just beginning to combine Darwin's theory of evolution with the idea of inheritance of traits via genes. No one had any idea what genes actually were, but most biologists were convinced they were the drivers of evolution, via the twin effects of natural selection and mutation. Most biologists, but not all, suspected there had to be more to evolution.
He became intrigued by the ideas of the 18th-century French naturalist Jean Lamarck, who had argued that if creatures acquired some useful trait during their lifetime, they could pass it on to their offspring, so they too would benefit. As an example, Lamarck pointed to the giraffe, which he said was simply a type of antelope that had steadily acquired a longer neck through stretching upwards to pick leaves off trees.
Even at the time, Lamarck's ideas about the inheritance of acquired characteristics faced critics, who pointed out that, for example, the sons of blacksmiths are not born with bulging biceps. In any case, it was far from clear how a lifetime's experiences could end up permanently modifying some trait of a living species. By the start of the 20th century, Lamarck's theory was regarded as patent nonsense by many leading biologists. Yet Kammerer, never one to follow the herd, believed the only way to know for sure was via the scientific method.
He devised an experiment to see if he could force living creatures to acquire a new trait during their lifetime, and then see if it could persist down the generations. Success might not reveal how the process worked, but it would at least show there was more to evolution than organisms just passively waiting for new traits to emerge at random. By the 1920s, Kammerer was making headlines with experiments involving marine creatures that seemed to confirm Lamarck's theory. The most famous centred on an amphibian called the midwife toad.
Unlike most frogs and toads, this creature breeds on dry land, its curious name coming from the way the male carries the fertilised eggs on its back until they are ready to emerge as tadpoles. Kammerer wondered if he could force some midwife toads to give up their normal traits, and instead breed in water, leaving their eggs there as well, rather than moving them around. Kept in a hot, dry enclosure, the toads sought sanctuary in the coolness of water provided by Kammerer, and set about breeding and leaving their eggs there. Over 95 per cent of the eggs failed to turn into tadpoles, but those that did led to toads which now preferred to breed in water. And, just as Lamarck's theory predicted, these eggs in turn produced offspring with the same preference.
Kammerer continued the experiment through six generations of toads, and claimed that they all showed the same acquired trait. More remarkable still, some of the toads also showed signs of acquiring additional new traits, such as modified forelimbs that allowed them to mate more effectively with females in water. Kammerer's apparent confirmation of Lamarckian evolution attracted huge interest - and opprobrium. In 1923, the distinguished English biologist William Bateson launched an attack on the reliability of Kammerer's data in the pages of the journal Nature. Then others joined in, among them the American biologist Kingsley Noble, who claimed to have found evidence of black ink being used to fake the appearance of modified limbs.
Right up until the end, Kammerer insisted he had done nothing wrong - and allowed others to check his claims. In his final note, he hinted that his despair was the result of an unhappy relationship, rather than being "outed" as a fraud. Some biologists performed similar experiments with other creatures, and found similar results. It made no difference: Kammerer's suicide was widely assumed to be proof of guilt, and his claims were forgotten.
Now, more than 80 years after his untimely death at the age of 46, Kammerer's reputation may be about to undergo a major transformation. New research suggests that Kammerer's results are consistent with effects at the heart of an emerging field of biology, known as epigenetics. Put simply, epigenetics focuses on processes in which new traits appear in organisms without any changes in their DNA. Such processes include so-called methylation, in which a small group of molecules becomes attached to DNA, changing its behaviour without altering its genetic information.
According to Dr Alexander Vargas, an evolutionary biologist at the University of Chile, such epigenetic effects may have been triggered by the switch from dry to watery conditions. The result would be the emergence of new traits suited to the new environment - which is precisely what Kammerer claimed to have found. Writing in the current issue of the Journal of Experimental Zoology, Dr Vargas argues that new experiments should be performed, to find out if epigenetic effects really do emerge when eggs are hatched in water.
He adds that if Kammerer's claims are confirmed, the midwife toad could become the organism of choice for biologists studying how epigenetics affects the evolution of life. Historians may yet find the toad a useful way of demonstrating what can go wrong in the evolution of science. Robert Matthews is Visiting Reader in Science at Aston University, Birmingham, England.
Though much weaker than Typhoon Saola, Tropical Storm Damrey was forecast to make landfall on the coast of China in early August 2012. AccuWeather reported that the storms would be “likely within 500 miles [800 kilometers] and 18 hours of each other.”
The Moderate Resolution Imaging Spectroradiometer (MODIS) on NASA’s Aqua satellite captured this natural-color image of Tropical Storm Damrey on July 30, 2012. The same day that MODIS acquired this image, the U.S. Navy’s Joint Typhoon Warning Center (JTWC) reported that Damrey was located roughly 175 nautical miles (325 kilometers) east-northeast of Iwo Jima. The storm had maximum sustained winds of 45 knots (85 kilometers per hour) with gusts up to 55 knots (100 kilometers per hour).
The JTWC’s projected storm track showed Damrey making landfall between August 2 and 3. AccuWeather reported that the storm might bring heavy rains and high winds. The storm continued moving toward southeastern China on August 1 and August 2.
- Joint Typhoon Warning Center. (2012, July 30) Tropical Storm 11W (Damrey) Warning. [Online] URL: http://www.usno.navy.mil/NOOC/nmfc-ph/RSS/jtwc/warnings/wp1112web.txt. Accessed July 30, 2012.
- Pydynowski, K. (2012, July 30) Tropical double whammy for China. AccuWeather.com. Accessed July 30, 2012.
NASA image courtesy LANCE MODIS Rapid Response Team at NASA GSFC. Caption by Michon Scott.
- Aqua - MODIS
Old man uranium
Earlier this year, I did a column on the stockpile of highly enriched uranium at the Y-12 nuclear weapons plant and studies under way there to understand the aging characteristics of uranium and the long-term deployment of uranium components.
Here is more detailed information, based on questions submitted to the National Nuclear Security Administration.
Q. Does Y-12 perform studies on uranium aging?
A. Yes, Y-12 conducts tests on the components it produces. Y-12's role in this work is part of NNSA's Stockpile Evaluation Program, which is conducted to detect and evaluate potential problems in the nuclear weapons stockpile that could affect safety, use, control, and reliability. These evaluations are performed on materials and components used in weapon manufacturing, newly built weapons, weapons that have been retrofitted or upgraded, and weapons withdrawn from the stockpile.
Q. What can you tell me about Y-12’s role in these studies?
A. Y-12 has been conducting uranium aging studies for many years as part of an ongoing mission for NNSA to evaluate components manufactured for nuclear weapons in partnership with the design laboratories to assure continued safety, reliability, and performance.
Until the early 1990s, weapons systems were routinely replaced with new designs to meet national defense requirements, so they had relatively short service lives resulting in few aging concerns. Since the end of the Cold War, routine replacement stopped and the existing weapon types are expected to last much longer than originally anticipated. The Enhanced Surveillance program was initiated in the mid-1990s as a joint effort between design and production organizations to establish a systematic approach to assess weapon aging and lifetimes. Y-12 was a charter member of this program and has played a critical role in developing the tools and obtaining the data to understand and predict the aging of various materials, including uranium components.
Nuclear weapons are complex systems made of many different materials and interconnecting components. The materials may interact with air, moisture, and other atmospheric constituents during manufacture, shipping, storage, and assembly, as well as with each other once they have been enclosed in the weapon. Changes in material properties from these interactions over an extended time, whether chemical, physical, or mechanical, have the potential to affect the weapon performance. Unlike plutonium components that age primarily due to internal radioactive decay that causes changes throughout the material, uranium aging is principally due to various types of corrosion. Although systems were designed to minimize corrosion, the longer expectation for weapon life increases the risk of uranium aging concerns.
The Y-12 effort on uranium aging has contributed to the development of several new tools and methods which are used to understand and predict the corrosion process and can be applied to each weapon type. A suite of advanced diagnostic tests have been deployed at Y-12 to evaluate the condition of uranium components returned from nuclear weapons for early identification of any changes that could lead to future problems. Y-12 has helped to develop and validate the aging models that use information from a variety of sources to predict the lifetime of uranium components and to assist in the overall assessment of the nation’s nuclear stockpile. The Y-12 results from Enhanced Surveillance have been useful in the planning and design efforts of weapon life extension programs (LEPs) and in the conceptual phase of a Reliable Replacement Warhead.
With increasingly older nuclear weapons, further collaboration between Y-12 and design laboratories is required to develop the diagnostics, including less invasive non-destructive means, and the predictive models that are necessary to reduce uncertainties about weapon aging and maintain confidence in the current and future U.S. nuclear deterrent.
Q. Have you found anything that specifically quantifies the shelf life of uranium?
A. The lifetime of components depends on the specific material interactions within the different weapon types. Over the past several years, Y-12 and the national laboratories have developed an improved understanding of the aging mechanisms for uranium, have accumulated data on its material properties and component condition over time, and have developed aging models that help predict the component life within weapons.
Q. How long are uranium components good for?
A. Although the aging rate depends on the weapon type, most uranium components can be expected to last for many decades.
Q. Have these studies allowed us to increase or decrease the urgency of life extension programs?
A. The results of aging studies for many components have been used in planning and decisions for the timing of refurbishment or replacement of the various weapon types. Since every weapon has somewhat different aging considerations, the limiting factors have been the life of various components.
Q. Can you discuss the results?
A. We cannot discuss specific results.
Q. Does uranium last longer in field deployment than the plutonium components?
A. Uranium and plutonium have different aging mechanisms and their component life varies depending on the weapon type.
Q. Is the aging effect on plutonium a more limiting factor than uranium or vice versa?
A. Same answer.
Q. Has stockpile testing at Y-12 significantly changed the assumptions on weapons aging?
A. The results obtained from Y-12 stockpile evaluation and enhanced surveillance have improved our knowledge and understanding of weapon aging, contributed to better assumptions and predictive capabilities, and enabled the implementation of enhanced testing focused on aging.
Q. If uranium components last for many decades, why is Y-12 doing refurbishments? Is it because of non-nuclear parts?
A. The weapons currently undergoing life extension have components between 17 and 40 years old. These weapons are expected to be in the stockpile another two or three decades, so the components are refurbished or replaced to meet the extended lifetime requirement.
“If you want to make something dirt-cheap, make it out of dirt.”
—Prof. Donald Sadoway, John F. Elliott Professor of Materials Chemistry at the Massachusetts Institute of Technology
I love innovation. I love when people think outside the box. Too often I hear comments like “the battery capability isn’t there,” or “the grid can’t handle the intermittency of renewable sources of energy.”
But I don’t think that means we have to continue to work within our current framework and squeeze out every drop of oil from beneath the ocean floor or out of the tar sands in Alberta. Nor do we need to dig out every nugget of coal from every mountaintop that we’re tearing down. What we should be striving toward instead is change the infrastructure so it can deal with these newer developing technologies.
One inspiring individual who thinks outside the box is Professor Donald Sadoway at MIT. Although he has research interests in many areas of chemistry and metallurgy, one focus of his research has been producing more efficient batteries, thereby reducing greenhouse gas emissions, especially in the industrial sector. His research has often been driven by his desire to reduce carbon pollution output by various industries.
Professor Sadoway recently had a TED talk posted on a new efficient battery he and his research team have been working on. I found his video particularly inspiring. If you have fifteen minutes, I strongly recommend you watch this informative and enlightening piece of education.
A private group of genetic engineers in the US have a plan to create light-emitting plants for “sustainable natural lighting”. The plants will include the luciferase gene, as present in fireflies and genetically modified rabbits commissioned by artists; the ultimate aim is to provide a better than carbon-neutral replacement for street lights and household lamps.
To create the glowing plants, the team will first generate modified genes with the Genome Compiler software, then insert them into Arabidopsis, a small flowering plant related to mustard and cabbage (they make sure to point out that the plant is not edible). The main gene, luciferase, is the same one that makes fireflies light up the night. As luciferase is not sufficiently bright to light a street, or even a living room, the project will require optimisation; the engineers have already enhanced the gene's light output to an extent.
A Kickstarter campaign was started to fund the research, with those (in the US) contributing $40 or more to receive a packet of glowing plant seeds in return. To date, the campaign, which aimed for $65,000, has raised $216,536, with 33 days to go.
It'll be interesting to see if this is successful; will we see streets lit by fluorescing trees, or find ourselves putting a plantshade over the bedside plant when going to sleep? And will plants that emit a useful amount of light need to be fed large quantities of a sufficiently high-energy plant food to keep glowing?
In recent medical/biotechnological breakthroughs: players in an online game simulating protein folding have successfully determined the 3-dimensional structure of a protein in a simian virus related to HIV, a hard problem which is not feasible to do with brute-force computation:
Teams of players collaborate to tweak a molecule’s model by folding it up in different ways. The result looks somewhat tangled, but each one is scored on criteria such as how tightly folded it is and whether the fold avoids atoms clashing. The structure with the highest score wins. Anyone can play and most of the gamers have little or no background in biochemistry.
“People have spatial reasoning skills, something computers are not yet good at. Games provide a framework for bringing together the strengths of computers and humans. The results in this week’s paper show that gaming, science and computation can be combined to make advances that were not possible before.”Meanwhile, an experiment in using genetically modified HIV to destroy cancer cells has worked spectacularly well, with an experimental patient apparently having been cured of leukaemia, and remaining in full remission one year later:
At first, nothing happened. But after 10 days, hell broke loose in his hospital room. He began shaking with chills. His temperature shot up. His blood pressure shot down. He became so ill that doctors moved him into intensive care and warned that he might die. His family gathered at the hospital, fearing the worst. A few weeks later, the fevers were gone. And so was the leukemia.
But scientists say the treatment that helped Mr. Ludwig ... may signify a turning point in the long struggle to develop effective gene therapies against cancer. And not just for leukemia patients: other cancers may also be vulnerable to this novel approach — which employs a disabled form of H.I.V.-1, the virus that causes AIDS, to carry cancer-fighting genes into the patients’ T-cells. In essence, the team is using gene therapy to accomplish something that researchers have hoped to do for decades: train a person’s own immune system to kill cancer cells.Meanwhile, HIV research has yielded an unexpected boon, in the form of cats that glow in the dark.
Craig Venter (of Human Genome Project fame) has succeeded in creating synthetic life; i.e., of creating a living cell whose genome was entirely written from scratch in the laboratory. Venter's first commercialisation of the discovery will be a deal with ExxonMobil to create algae which absorb carbon dioxide and create hydrocarbon fuel. Beyond that, the possibilities are vast; from the mundane (cancer cures, new terrorist bioweapons, weird new designer drugs for mutant freak subcultures out of a Warren Ellis or John Shirley story) on to the horizon of the unimaginable.
And Quinn Norton says that we've just lost the War On Drugs, but not as badly as the drug lords, whose business model looks as doomed as the RIAA's:
You know what’s a lot easier than all the high minded business about environment, or life extension, or even the scary doomsday 12 Monkeys scenarios? Growing simpler molecule drugs. I don’t mean like aspirin, I mean like heroin and cocaine, THC and hallucinogens. They already grow in plants thoroughly studied, and people are motivated and not at all risk averse about getting those sequences somewhere they can use them. Cooking meth is hard and dangerous science compared to the ability to get a starter of a minimal cell that poops heroin and feeding it growth medium in your closet. We may have lost the drug war, but not as badly as the drug lords have.
It’s still hard to grow drugs in medium. But the whole point of this project is to make it easier. Who will be motivated to put in the work to make it happen? Especially if it’s so bad for organized crime? Drug addicts, frankly. You think they look like street junkies with DTs, but a fair number look like scientists, because they are. Drugs will finally be p2p, and governments and drug lords alike will find out what it’s like to be media companies and counterfeiters in a world of lossless copying and 100Mb pipes. Junkies will be victims of their success, and if we don’t get serious about treating addiction instead of trying to fight chemicals, it’s going to look a lot more bloody and horrid than the RIAA’s lawsuit factory. This is just one vision of what this kind of disruption looks like when people get a hold of it.
A new application of genetic engineering: permakittens, or cats which never mature.
Everybody loves kittens. The only thing wrong with them is that they turn into cats. So we'll make genetically modified cats that never get big. I've bounced this off a couple of honest-to-goodness biologists who assured me it is 100% doable and even gave me some tips. The author of the idea, one Dylan Stiles, has worked out the genetics of it (or claims to have; not being a biologist, I can't verify whether what he's saying is plausible). Cleverly enough, his idea includes its own copy-protection mechanisms, in that the permakittens will not produce unlicensed knockoffs. (Which would be the case if they remained actual kittens, which they're not; they do mature, whilst remaining kitten-sized.)
Two researchers at Berkeley have created a virus which fights AIDS. This virus is a modified version of HIV with the harmful parts replaced by a mechanism that inhibits HIV's ability to kill immune cells. The anti-AIDS virus is sexually transmissible, much as HIV is, which means that now it is hypothetically possible to screw a sick person healthy. (They may have to get rid of this if they ever market it, as not to lose revenue; otherwise they could sell multi-user site-licenses to sexually promiscuous patients, or put a celibacy clause in their licenses and prosecute violators under copyright laws.)
First a Brazilian artist commissioned a glow-in-the-dark rabbit, and now a biotech company is displaying fluorescent white mice at the Bio Taiwan 2003 expo. With photo, though whether they really look like that is debatable. (via jwz)
The latest from the frontiers of science: in the future, gravestones and other such memorials may be replaced by trees containing the DNA of the deceased. Though whether people would want to eat apples containing their grandmothers' DNA is a cultural question yet to be answered. Meanwhile, science has found the perfect eyebrow shape, bringing humanity one step closer to a race of superhumanly beautiful cyborgs.
Obituary: Dolly the cloned sheep is dead; she was euthanased after coming down with a number of chronic ailments. Her premature aging (she was 6) has cast doubt over the ability of cloning to create healthy animals.
Biotech companies use algorithmic music composition tools to convert DNA to music; not for artistic reasons, but to take advantage of the virtually perpetual terms of music copyrights (95 years, but extended by law every decade or so), as opposed to 17-year patents. Sounds like post-cyberpunk fiction, doesn't it?
(There we have it: the very concept of "art" is now a weapon of copyright fascism. It doesn't bode well for when the pendulum swings back.) (via bOING bOING)
To protest biotechnology patent laws, which often give multinational corporations absolute rights over basic foodstuffs (even if they had been grown for centuries), a development charity is planning to patent salted potato chips. By patenting a new pre-salted chip, ActionAid are hoping to own the rights to the concept of salted potato chips, which in theory could be used to levy license fees from chip shops under threat of patent infringement lawsuit.
Source code: Lib/linecache.py
The linecache module allows one to get any line from any file, while attempting to optimize internally, using a cache, the common case where many lines are read from a single file. This is used by the traceback module to retrieve source lines for inclusion in the formatted traceback.
The linecache module defines the following functions:
Get line lineno from file named filename. This function will never raise an exception — it will return '' on errors (the terminating newline character will be included for lines that are found).
If a file named filename is not found, the function will look for it in the module search path, sys.path, after first checking for a PEP 302 __loader__ in module_globals, in case the module was imported from a zipfile or other non-filesystem import source.
Clear the cache. Use this function if you no longer need lines from files previously read using getline().
Check the cache for validity. Use this function if files in the cache may have changed on disk, and you require the updated version. If filename is omitted, it will check all the entries in the cache.
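A minimal usage sketch for the cache functions (the file path below is hypothetical and the session output assumes an interactive interpreter): once a line has been read, getline() keeps returning the cached copy, so call checkcache() after the file changes on disk.

>>> from pathlib import Path
>>> path = Path('/tmp/example.txt')            # hypothetical file for illustration
>>> path.write_text('first version\n')
14
>>> import linecache
>>> linecache.getline(str(path), 1)            # the line is now cached
'first version\n'
>>> path.write_text('second version\n')
15
>>> linecache.getline(str(path), 1)            # stale: still served from the cache
'first version\n'
>>> linecache.checkcache(str(path))            # notice the change on disk
>>> linecache.getline(str(path), 1)
'second version\n'
>>> linecache.clearcache()                     # drop all cached lines when finished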
>>> import linecache
>>> linecache.getline('/etc/passwd', 4)
'sys:x:3:3:sys:/dev:/bin/sh\n'
- Major contractors: EADS Astrium, Toulouse, France, leading a team of 25 subcontractors from 14 European countries
- Launch date: 9 November 2005 03:33:34 UTC
- Mission duration: 153 days en route; 1,000 days in orbit (7 years, 6 months, and 9 days elapsed)
- Semimajor axis: 39,468.195 km
- Orbital period: 24 h
Venus Express (VEX) is the first Venus exploration mission of the European Space Agency. Launched in November 2005, it arrived at Venus in April 2006 and has been continuously sending back science data from its polar orbit around Venus. Equipped with seven scientific instruments, the main objective of the mission is the long term observation of the Venusian atmosphere. The observation over such long periods of time has never been done in previous missions to Venus, and is key to a better understanding of the atmospheric dynamics. It is hoped that such studies can contribute to an understanding of atmospheric dynamics in general, while also contributing to an understanding of climate change on Earth. The mission is currently funded by ESA until 31 December 2014.
The mission was proposed in 2001 to reuse the design of the Mars Express mission. However, some mission characteristics led to design changes: primarily in the areas of thermal control, communications and electrical power. For example, since Mars is approximately twice as far from the Sun as Venus is, the radiant heating of the spacecraft will be four times greater for Venus Express than Mars Express. Also, the ionizing radiation environment will be harsher. On the other hand, the more intense illumination of the solar panels will result in more generated photovoltaic power. The Venus Express mission also uses some spare instruments developed for the Rosetta spacecraft. The mission was proposed by a consortium led by D. Titov (Germany), E. Lellouch (France) and F. Taylor (United Kingdom).
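The factor of four quoted above is just the inverse-square law applied to the two planets' distances from the Sun; a quick sanity check is sketched below (the mean orbital radii are round textbook values, and the solar constant is the approximate value at Earth).

# Solar flux falls off as 1/r^2 with distance r from the Sun.
SOLAR_CONSTANT_AT_EARTH = 1361.0   # W/m^2 at 1 AU (approximate)

r_venus = 0.723   # mean orbital radius of Venus, AU
r_mars = 1.524    # mean orbital radius of Mars, AU

flux_venus = SOLAR_CONSTANT_AT_EARTH / r_venus ** 2   # roughly 2600 W/m^2
flux_mars = SOLAR_CONSTANT_AT_EARTH / r_mars ** 2     # roughly 590 W/m^2

print(f"Venus/Mars flux ratio: {flux_venus / flux_mars:.1f}")   # about 4.4, i.e. ~4x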
The launch window for Venus Express was open from 26 October to 23 November 2005, with the launch initially set for 26 October 4:43 UTC. However, problems with the insulation of the Fregat upper stage led to a two-week launch delay to inspect and clear out the small insulation debris that had migrated onto the spacecraft. It was eventually launched by a Soyuz-FG/Fregat rocket from the Baikonur Cosmodrome in Kazakhstan on 9 November 2005 at 03:33:34 UTC into a parking Earth orbit, and 1 h min after launch was put into its transfer orbit to Venus. A first trajectory correction maneuver was successfully performed on 11 November 2005. It arrived at Venus on 11 April 2006, after 153 days of journey, and fired its main engine between 07:10 and 08:00 Universal Time (UTC) to reduce its velocity so that it could be captured by Venusian gravity into a nine-day orbit. The burn was monitored from ESA's Control Centre, ESOC, in Darmstadt, Germany.
Seven further orbit control maneuvers, two with the main engine and five with the thrusters, were required for Venus Express to reach its final operational 24-hour orbit around Venus.
Venus Express entered its target orbit at apocentre on 7 May 2006 at 13:31 UTC, when the spacecraft was at 151 million kilometres from Earth. Now the spacecraft is running on an ellipse substantially closer to the planet than during the initial orbit. The orbit now ranges between 66,000 and 250 kilometres over Venus and it is polar. The pericentre is located almost above the North pole (80° North latitude), and it takes 24 hours for the spacecraft to travel around the planet.
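As a consistency check, the 24-hour period quoted here follows from the semimajor axis listed at the top of this section via Kepler's third law; the sketch below uses a standard value for Venus's gravitational parameter.

import math

MU_VENUS = 3.24859e5        # gravitational parameter of Venus, km^3/s^2 (standard value)
a = 39_468.195              # semimajor axis of the operational orbit, km

# Kepler's third law for an elliptical orbit: T = 2*pi*sqrt(a^3 / mu)
period_seconds = 2 * math.pi * math.sqrt(a ** 3 / MU_VENUS)
print(f"Orbital period: {period_seconds / 3600:.1f} hours")   # ~24.0 hours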
Venus Express is studying the Venusian atmosphere and clouds in detail, the plasma environment and the surface characteristics of Venus from orbit. It will also make global maps of the Venusian surface temperatures. Its nominal mission was originally planned to last for 500 Earth days (approximately two Venusian sidereal days), but the mission has been extended three times: first on 28 February 2007 until early May 2009; then on 4 February 2009 until 31 December 2009; and then on 7 October 2009 until 31 December 2012. On-board resources are sized for an additional 500 Earth days.
ASPERA-4: An acronym for "Analyzer of Space Plasmas and Energetic Atoms," ASPERA-4 will investigate the interaction between the solar wind and the Venusian atmosphere, determine the impact of plasma processes on the atmosphere, determine global distribution of plasma and neutral gas, study energetic neutral atoms, ions and electrons, and analyze other aspects of the near Venus environment. ASPERA-4 is a re-use of the ASPERA-3 design used on Mars Express, but adapted for the harsher near-Venus environment.
VMC: The Venus Monitoring Camera is a wide-angle, multi-channel CCD camera designed for global imaging of the planet. It operates in the visible, ultraviolet, and near-infrared spectral ranges, and maps the surface brightness distribution, searching for volcanic activity, monitoring airglow, studying the distribution of the unknown ultraviolet-absorbing phenomenon at the cloud tops, and making other science observations. It is derived in part from the Mars Express High Resolution Stereo Camera (HRSC) and the Rosetta Optical, Spectroscopic and Infrared Remote Imaging System (OSIRIS). The camera includes an FPGA to pre-process image data, reducing the amount transmitted to Earth. The consortium of institutions responsible for the VMC includes the Max Planck Institute for Solar System Research, the Institute of Planetary Research at the German Aerospace Center and the Institute of Computer and Communication Network Engineering at Technische Universität Braunschweig.
MAG: The magnetometer is designed to measure the strength and direction of Venus's magnetic field as affected by the solar wind and Venus itself. It will be able to map the magnetosheath, magnetotail, ionosphere, and magnetic barrier in high resolution in three dimensions, aid ASPERA-4 in the study of the interaction of the solar wind with the atmosphere of Venus, identify the boundaries between plasma regions, and carry out planetary observations as well (such as the search for and characterization of Venus lightning). MAG is derived from the Rosetta lander's ROMAP instrument.
PFS: The "Planetary Fourier Spectrometer" (PFS) operates in the infrared between the 0.9 µm and 45 µm wavelength range and is designed to perform vertical optical sounding of the Venus atmosphere. It will perform global, long-term monitoring of the three-dimensional temperature field in the lower atmosphere (cloud level up to 100 kilometers). Furthermore it will search for minor atmospheric constituents that may be present, but have not yet been detected, analyze atmospheric aerosols, and investigate surface to atmosphere exchange processes. The design is based on a spectrometer on Mars Express, but modified for optimal performance for the Venus Express mission.
SPICAV: The "SPectroscopy for Investigation of Characteristics of the Atmosphere of Venus" (SPICAV) is an imaging spectrometer that will be used for analyzing radiation in the infrared and ultraviolet wavelengths. It is derived from the SPICAM instrument flown on Mars Express. However, SPICAV has an additional channel known as SOIR (Solar Occultation at Infrared) that will be used to observe the Sun through Venus's atmosphere in the infrared.
VIRTIS: The "Visible and Infrared Thermal Imaging Spectrometer" (VIRTIS) is an imaging spectrometer that observes in the near-ultraviolet, visible, and infrared parts of the electromagnetic spectrum. It will analyze all layers of the atmosphere, surface temperature and surface/atmosphere interaction phenomena.
Radio science
VeRa: Venus Radio Science is a radio sounding experiment that will transmit radio waves from the spacecraft and pass them through the atmosphere or reflect them off the surface. These radio waves will be received by a ground station on Earth for analysis of the ionosphere, atmosphere and surface of Venus. It is derived from the Radio Science Investigation instrument flown on Rosetta.
Climate of Venus
Starting out in the early planetary system with similar sizes and chemical compositions, the histories of Venus and Earth have diverged in spectacular fashion. It is hoped that the Venus Express mission can contribute not only to an in-depth understanding of how the Venusian atmosphere is structured, but also to an understanding of the changes that led to the current greenhouse atmospheric conditions. Such an understanding may contribute to the study of climate change on Earth.
Search for life on Earth
Venus Express is also used to observe signs of life on Earth from Venus orbit. In the pictures, Earth is less than one pixel in size, which mimics observations of Earth-sized planets in other solar systems. These observations are then used to develop methods for habitability studies of extra-solar planets.
Important events and discoveries
- 3 August 2005: Venus Express completed its final phase of testing at Astrium Intespace facility in Toulouse, France. It flew on an Antonov An-124 cargo aircraft via Moscow, before arriving at Baikonur on 7 August.
- 7 August 2005: Venus Express arrived at the airport of the Baikonur Cosmodrome.
- 16 August 2005: First flight verification test completed.
- 22 August 2005: Integrated System Test-3.
- 30 August 2005: Last Major System Test Successfully Started.
- 5 September 2005: Electrical Testing Successful.
- 21 September 2005: FRR (Fuelling Readiness Review) Ongoing.
- 12 October 2005: Mating to the Fregat upper stage completed.
- 21 October 2005: Contamination detected inside the fairing — launch on hold.
- 5 November 2005: Arrival at launch pad.
- 9 November 2005: Launch from Baikonur Cosmodrome at 03:33:34 UTC.
- 11 November 2005: First trajectory correction maneuver successfully performed.
- 17 February 2006: The main engine is fired successfully in a dress rehearsal for the arrival maneuver.
- 24 February 2006: Second trajectory correction maneuver successfully performed.
- 29 March 2006: Third trajectory correction maneuver successfully performed - on target for 11 April orbit insertion.
- 7 April 2006: Command stack for orbit insertion maneuver is loaded on the spacecraft.
- 11 April 2006: The Venus Orbit Insertion (VOI) is completed successfully, according to the following timeline:
Event timeline, spacecraft time (UTC) / ground receive time (UTC):
- Liquid Settling Phase start: 07:07:56 / 07:14:41
- VOI main engine start: 07:10:29 / 07:17:14
- Pericentre passage: 07:36:35 / n/a
- Eclipse start: 07:37:46 / n/a
- Occultation start: 07:38:30 / 07:45:15
- Occultation end: 07:48:29 / 07:55:14
- Eclipse end: 07:55:11 / n/a
- VOI burn end: 08:00:42 / 08:07:28
- Period of this orbit is nine days.
- 13 April 2006: First images of Venus from Venus Express released.
- 20 April 2006: Apocentre Lowering Manoeuvre #1 performed. Orbital period is now 40 hours.
- 23 April 2006: Apocentre Lowering Manoeuvre #2 performed. Orbital period is now approx 25 hours 43 minutes.
- 26 April 2006: Apocentre Lowering Manoeuvre #3 performed, a slight correction to the previous ALM.
- 7 May 2006: Venus Express entered its target orbit at apocentre at 13:31 UTC
- 14 December 2006: First temperature map of the southern hemisphere.
- 27 February 2007: ESA agrees to fund mission extension until May 2009.
- 19 September 2007: End of the nominal mission (500 Earth days) - Start of mission extension.
- 27 November 2007: The scientific journal Nature publishes a series of papers giving the initial findings. It finds evidence for past oceans. It confirms the presence of lightning on Venus and that it is more common on Venus than it is on Earth. It also reports the discovery that a huge double atmospheric vortex exists at the south pole of the planet.
- 20 May 2008: The detection by the VIRTIS instrument on Venus Express of hydroxyl (OH) in the atmosphere of Venus is reported in the May 2008 issue of Astronomy and Astrophysics.
- 4 February 2009: ESA agrees to fund mission extension until 31 December 2009.
- 7 October 2009: ESA agrees to fund the mission through 31 December 2012.
- 23 November 2010: ESA agrees to fund the mission through 31 December 2014.
- 25 August 2011: It is reported that a layer of ozone exists in the upper atmosphere of Venus.
See also
- Unmanned space mission
- Geosynchronous satellite
- List of planetary probes
- List of unmanned spacecraft by program
- Space exploration
- Space observatory
- Space probe
- Timeline of artificial satellites and space probes
- Timeline of planetary exploration
- "Venus Express preliminary investigations bring encouraging news". ESA. 25 October 2005. Retrieved 2006-05-09.
- "Mission extensions approved for science missions". ESA. 16 October 2009.
- "The Venus Express mission camera". Max Planck Institute for Solar System Research.
- "Venus Monitoring Camera". Technical University at Brunswick.
- "The light and dark of Venus". ESA. 2008-02-21.
- Atmospheric Dynamics of Venus and Earth
- Venus Express searching for life – on Earth ESA
- "Successful Venus Express main engine test". ESA. 17 February 2006. Retrieved 2006-05-09.
- Various authors, Eric (November 2007). "European mission reports from Venus". Nature (450): 633–660. doi:10.1038/news.2007.297.
- "Venus offers Earth climate clues". BBC News. 28 November 2007. Retrieved 2007-11-29.
- "Venus Express Provides First Detection Of Hydroxyl In Atmosphere Of Venus". SpaceDaily.
- "Venus springs ozone layer surprise". BBC News. 7 October 2011.
- F.W. Taylor (2006). "The Planet Venus and the Venus Express Mission". Planetary and Space Science 54 (13-14): 1247–1496. Bibcode:2006P&SS...54.1247T. doi:10.1016/j.pss.2006.06.013.
- "Venus Express launch campaign starts". ESA Portal. Retrieved 3 August 2005.
- "Venus Express Launch Campaign Journal". ESA SciTech Website. Retrieved 16 August 2005.
- "Venus Express 3D Model". ESA SciTech Website. Retrieved 5 September 2005.
- "Venus Express Instruments". ESA Portal. Retrieved 14 September 2005.
Further reading
Thorsten Dambeck: The Blazing Hell Behind the Veil , MaxPlanckResearch, 4/2009, p. 26 - 33
- ESA description of the Venus Express mission
- ESA Science & Technology - Venus Express page
- ESA Spacecraft Operations - Venus Express page
- Venus Express Program Page by NASA's Solar System Exploration
- Venus Express: The first European mission to Venus
- Extrasolar-planets.com — Venus Express
- Orbit Insertion - Scheduled events to shape orbit concluding 6 May 2006
- Map of temperatures of South Hemisphere of Venus planet
- 04/03/07: Venus Express: Tracking Violent Winds and Turbulences (site includes full coverage of the Venus Express Mission)
- Amateurs Assist Venus Express Mission
- Japan Aerospace Exploration Agency
- Planet-C mission to Venus | <urn:uuid:e378cba4-5ca3-40bc-9d0c-c2838e654e12> | 2.71875 | 3,247 | Knowledge Article | Science & Tech. | 48.554198 |
Comprehensive Description
Philippine lizards of the family Gekkonidae comprise 49 species (Taylor, 1915, 1922; Brown and Alcala, 1978) in 10 genera: Gehyra (1), Gekko (13), Hemidactylus (5), Hemiphyllodactylus (2), Lepidodactylus (6), Luperosaurus (8), Ptychozoon (1), Pseudogekko (4), and Cyrtodactylus (9) (Brown et al., 2007, 2010a, 2011; Welton et al., 2009, 2010a, 2010b; Zug, 2011). A remarkably high percentage of these species, roughly 85%, are endemic to the Philippine archipelago (Brown et al., 2011). Several of these gekkonids were described only recently, as part of ongoing surveys around the archipelago. Recent phylogenetic studies focused on Philippine gekkonids (Siler et al., 2010; Welton et al., 2010a,b) have revealed high levels of genetic diversity among populations of widespread species, an indication that the country's gecko diversity may still be greatly underestimated.
The genus Dibamus represents a unique radiation of lizards in that all species in the genus are entirely limbless. Of the 22 species of Dibamus currently recognized, only two species are known from the Philippines (Dibamus leucurus and Dibamus novaeguineae). Both species are rarely observed, fossorial lizards, recognized to occur in the southern portions of the Philippines (mostly in the Mindanao faunal region). Unfortunately, little is known about the ecology and natural history of these unique species in the Philippines. | <urn:uuid:0f5a65cd-eddb-447b-a265-cf4db0aba784> | 3.265625 | 371 | Knowledge Article | Science & Tech. | 25.457168 |
New insights on how solar minimums affect Earth
Since 1611, humans have recorded the comings and goings of black spots on the sun. The number of these sunspots waxes and wanes over approximately an 11-year cycle -- more sunspots generally mean more activity and eruptions on the sun and vice versa. The number of sunspots can change from cycle to cycle, and 2008 saw the longest and weakest solar minimum since scientists have been monitoring the sun with space-based instruments. Observations have shown, however, that magnetic effects on Earth due to the sun, effects that cause the aurora to appear, did not go down in sync with the cycle of low magnetism on the sun. Now, a paper in Annales Geophysicae that appeared on May 16, 2011 reports that these effects on Earth did in fact reach a minimum -- indeed they attained their lowest levels of the century -- but some eight months later. The scientists believe that factors in the speed of the solar wind, and the strength and direction of the magnetic fields embedded within it, helped produce this anomalous low.
"Historically, the solar minimum is defined by sunspot number," says space weather scientist Bruce Tsurutani at NASA's Jet Propulsion Lab in Pasadena, Calif., who is first author on the paper. "Based on that, 2008 was identified as the period of solar minimum. But the geomagnetic effects on Earth reached their minimum quite some time later in 2009. So we decided to look at what caused the geomagnetic minimum."
Geomagnetic effects basically amount to any magnetic changes on Earth due to the sun, and they're measured by magnetometer readings on the surface of the Earth. Such effects are usually harmless, the only obvious sign of their presence being the appearance of auroras near the poles. However, in extreme cases, they can cause power grid failures on Earth or induce dangerous currents in long pipelines, so it is valuable to know how the geomagnetic effects vary with the sun.
Three things help determine how much energy from the sun is transferred to Earth's magnetosphere from the solar wind: the speed of the solar wind, the strength of the magnetic field outside Earth's bounds (known as the interplanetary magnetic field) and which direction it is pointing, since a large southward component is necessary to connect successfully to Earth's magnetosphere and transfer energy. The team -- which also included Walter Gonzalez and Ezequiel Echer of the Brazilian National Institute for Space Research in São José dos Campos, Brazil -- examined each component in turn.
First, the researchers noted that in 2008 and 2009, the interplanetary magnetic field was the lowest it had been in the history of the space age. This was an obvious contribution to the geomagnetic minimum. But since the geomagnetic effects didn't drop in 2008, it could not be the only factor.
To examine the speed of the solar wind, they turned to NASA's Advanced Composition Explorer (ACE), which is in interplanetary space outside the Earth's magnetosphere, approximately 1 million miles toward the sun. The ACE data showed that the speed of the solar wind stayed high during the sunspot minimum. Only later did it begin a steady decline, correlating to the timing of the decline in geomagnetic effects.
The next step was to understand what caused this decrease. The team found a culprit in something called coronal holes. Coronal holes are darker, colder areas within the sun's outer atmosphere. Fast solar wind shoots out the center of coronal holes at speeds up to 500 miles per second, but wind flowing out of the sides slows down as it expands into space.
"Usually, at solar minimum, the coronal holes are at the sun's poles," says Giuliana de Toma, a solar scientist at the National Center for Atmospheric Research whose research on this topic helped provide insight for this paper. "Therefore, Earth receives wind from only the edges of these holes and it's not very fast. But in 2007 and 2008, the coronal holes were not confined to the poles as normal."
Those coronal holes lingered at low latitudes to the end of 2008. Consequently, the centers of the holes stayed firmly pointed towards Earth, sending fast solar wind its way. Only when the holes finally receded toward the poles did the solar wind speed at Earth begin to slow down. And, of course, the geomagnetic effects and sightings of the aurora declined along with it.
Coronal holes seem to be responsible for minimizing the southward direction of the interplanetary magnetic field as well. The solar wind's magnetic fields oscillate on the journey from the sun to Earth. These fluctuations are known as Alfvén waves. The wind coming out of the centers of the coronal holes has large fluctuations, meaning that the southward magnetic component -- like that in all directions -- is fairly large. The wind that comes from the edges, however, has smaller fluctuations, and comparably smaller southward components. So, once again, coronal holes at lower latitudes would have a better chance of connecting with Earth's magnetosphere and causing geomagnetic effects, while mid-latitude holes would be less effective.
Working together, these three factors -- low interplanetary magnetic field strength combined with slower solar wind speed and smaller magnetic fluctuations due to coronal hole placement -- create the perfect environment for a geomagnetic minimum.
Knowing what situations cause and suppress intense geomagnetic activity on Earth is a step toward better predicting when such events might happen. To do so well, Tsurutani points out, requires focusing on the tight connection between such effects and the complex physics of the sun. "It's important to understand all of these features better," he says. "To understand what causes low interplanetary magnetic fields and what causes coronal holes in general. This is all part of the solar cycle. And all part of what causes effects on Earth."
Source: NASA/Goddard Space Flight Center
| <urn:uuid:e0c633a3-719e-4847-91bc-205cfa31b663> | 3.78125 | 1,463 | Truncated | Science & Tech. | 42.623166 |
Venus Fly Trap
Name: Jon Catoe
Hi my name is Jon Catoe and I am a seventh grader. I am currently involved
in a science research project. My topic is the Venus's Fly Trap, for which I have
completed a report. My question is since there is a time of growth where the trap
is not able to gather insects, how does the Venus Fly Trap receive its nitrogen supplement?
I don't have any written information on this, but from what I recall the
Venus fly trap is able to grow as a regular plant and acquire nutrition
from the soil. Consuming an unfortunate fly is supplementary.
You might consult your library for a book on exotic houseplants.
They also sell a variety of plant care books at your local bookstore.
Ask the clerk or librarian for assistance if you have a hard time
locating the information you are seeking.
Thanks for using NEWTON!
Update: June 2012 | <urn:uuid:b45c60d7-5416-4980-a76f-7799a39ad50e> | 3.078125 | 209 | Q&A Forum | Science & Tech. | 56.652165 |
Coral has nothing to fear from CO2 (the Australian):
“A WIDESPREAD belief that the world’s coral reefs face a calamitous future due to climate change is proving less resilient than the natural wonders themselves.
Rising sea temperatures, storm damage and ocean acidification have grabbed the headlines as looming threats to reef survival.
But as each concern is more thoroughly investigated, scientists are finding nature better equipped to cope than they had imagined.
The latest research, published in Nature: Climate Change today, blows away the theory that reefs were doomed due to rising ocean acidification caused by the higher take-up of carbon dioxide in the seas.
Researchers have found a common coralline algae that grows at the leading edge of coral reefs is not nearly as susceptible to changing pH levels as coral because it contains high levels of dolomite.
In fact, the dolomite-laden algae has a rate of dissolution six to 10 times lower than coral’s.
The good news is that dolomite-rich coralline algae is common in shallow coral reefs across the world.
“Our research suggests it is likely they will continue to provide protection for coral reef frameworks as carbon dioxide rises,” the paper says.” | <urn:uuid:394d0187-471e-412a-a148-b7db5aad8be5> | 2.9375 | 263 | Personal Blog | Science & Tech. | 31.915215 |
An Example of a Manifold
Let's be a little more explicit about our example from last time. The two-dimensional sphere S^2 consists of all the points in R^3 of unit length. If we pick an orthonormal basis for R^3 and write the coordinates with respect to this basis as x, y, and z, then we're considering all triples (x, y, z) with x^2 + y^2 + z^2 = 1. We want to show that this set is a manifold.
We know that we can't hope to map the whole sphere into a plane, so we have to take some points out. Specifically, let's remove those points with z ≤ 0, just leaving one open hemisphere. We will map this hemisphere homeomorphically to an open region in R^2.
But this is easy: just forget the z-component! Sending the point (x, y, z) down to the point (x, y) is clearly a continuous map from the open hemisphere to the open disk with x^2 + y^2 < 1. Further, for any point (x, y) in the open disk, there is a unique z > 0 with x^2 + y^2 + z^2 = 1. Indeed, we can write down the inverse map (x, y) → (x, y, sqrt(1 - x^2 - y^2)).
This inverse is also continuous, and so our map is indeed a homeomorphism.
Similarly we can handle all the points in the lower hemisphere z < 0. Again we send (x, y, z) to (x, y), but this time, for any (x, y) in the open unit disk (satisfying x^2 + y^2 < 1), we can write the inverse as (x, y) → (x, y, -sqrt(1 - x^2 - y^2)),
which is also continuous, so this map is again a homeomorphism.
Are we done? No, since we haven't taken care of the points with z = 0. But in these cases we can treat the other coordinates similarly: if x > 0 we have the inverse pair (x, y, z) → (y, z) and (y, z) → (sqrt(1 - y^2 - z^2), y, z), while if x < 0 we have (y, z) → (-sqrt(1 - y^2 - z^2), y, z). Similarly, if y > 0 we have the pair (x, y, z) → (x, z) and (x, z) → (x, sqrt(1 - x^2 - z^2), z), while if y < 0 we have (x, z) → (x, -sqrt(1 - x^2 - z^2), z).
Now are we done? Yes, since every point on the sphere must have at least one coordinate different from zero, every point must fall into one of these six cases. Thus every point has some neighborhood which is homeomorphic to an open region in R^2.
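For readers who like to experiment, here is a small numerical sketch (Python with NumPy; an illustrative addition, not part of the original post) of the upper-hemisphere chart and its inverse, checking that the round trip returns the original point.

```python
import numpy as np

def chart_up(p):
    """Project a point of the open upper hemisphere (z > 0) onto the open unit disk."""
    x, y, z = p
    assert z > 0 and abs(x*x + y*y + z*z - 1.0) < 1e-12
    return np.array([x, y])

def chart_up_inv(q):
    """Lift a point of the open unit disk back to the upper hemisphere."""
    x, y = q
    assert x*x + y*y < 1.0
    return np.array([x, y, np.sqrt(1.0 - x*x - y*y)])

p = np.array([0.3, 0.4, np.sqrt(1.0 - 0.3**2 - 0.4**2)])  # a point on the sphere with z > 0
q = chart_up(p)                                           # its image in the disk
print(np.allclose(chart_up_inv(q), p))                    # True: the maps are mutually inverse
```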
This same approach can be generalized to any number of dimensions. The n-dimensional sphere S^n consists of those points in R^(n+1) with unit length. It can be covered by 2(n+1) open hemispheres, each with a projection just like the ones above. | <urn:uuid:54961d57-a418-48cd-89dc-33daf29ec3b1> | 3.734375 | 416 | Tutorial | Science & Tech. | 55.98999 |
One of the well-documented effects of climate change in the future is increased probability of widespread and long-term drought. Related to those effects will be the accessibility of fresh water to meet the needs of ecosystems and human societies.
The Colorado River system is in the midst of a 10-year drought. Abundant geologic evidence shows that the region, without climate forcing by humans, is susceptible to decadal- to century-scale droughts. Post-2007 IPCC Report studies have dug further into the question of what future climates would be like under different CO2 emission scenarios. A related study that examined what different stream flows and projected changes in water demand would mean for the Colorado River System has been accepted by the peer-reviewed journal Water Resources Research. Keep in mind that the Colorado River System delivers water to 30 million people and a large number of ecosystems. Key components of the system for human use include reservoirs. If stream flow is reduced for consecutive years, it will have far-ranging impacts that are not yet fully defined or explored. What is known is this: those reservoirs can't capture what doesn't flow through the System.
The researchers found that through 2026, the risk of fully depleting reservoir storage in any given year remains below 10% under any scenario of climate fluctuation or management alternative. That is certainly good news. During this same period, the reservoir storage could even recover from its current low level of 59% of capacity. But if climate change results in a 10% reduction in the Colorado River's average stream flow, as some recent studies predict, the chances of fully depleting reservoir storage will exceed 25% by 2057, a much more worrisome outcome. Even more disturbing, if climate change results in a 20% flow reduction, the chances of fully depleting reservoir storage will exceed 50% by 2057. Exceeding a 50% probability is indicative of a critical danger to human societies and warns of unacceptable ecosystem collapse across a large region.
This is the future we could saddle future generations with. Or we can take action now, when the costs are still well within reach, and stop forcing the climate system as hard as we’re doing. Rep. John Salazar (D, CO-03) voted for the more dangerous future by joining with the two Colorado Republican Representatives and voting against H.R. 2454, the American Clean Energy and Security Act of 2009. The remainder of the Colorado delegation, all Democrats, voted for the less risky future. The Senate should take up their version of the legislation this fall. Will Sens. Udall and Bennet vote for a better future? When (if?) a compromise bill comes up for a final vote, will legislators do the morally right thing? They’d better. 30 million+ people are at risk of becoming mighty thirsty.
Cross-posted at SquareState. | <urn:uuid:f34b9ea4-b667-4196-92f8-18ceab389362> | 3.578125 | 589 | Personal Blog | Science & Tech. | 45.641763 |
What Music Does Bacteria Enjoy the Most?
Grade Level: 9th to 12th; Type: Biology
This experiment will explore whether music of different varieties affects the growth of bacteria.
- Does music alter the growth of bacteria?
- Do different kinds of music make the bacteria grow differently?
Although bacteria lack the ability to hear, they are very sensitive to changes in vibration. Physically speaking, music is essentially a pattern of changing vibrations. This experiment might help us figure out better ways to process sewage and carry out other essential microbe-assisted tasks.
- 2 or more prepared Petri dishes with agar (available from biological supply companies)
- Sterilized swabs
- Rubber or plastic gloves
- 2 or more portable CD or MP3 players
- Several pairs of cheap headphones, NOT earbuds (same number as music players) You will want to throw them away after the experiment.
- Several songs or albums of various music, the more diverse the better (such as classical, hard rock, and dance)
- Notepad and paper
- Wearing gloves, prepare the Petri dishes. Following the manufacturer’s instructions, take them out of the refrigerator for about an hour before conducting the experiment.
- Using the sterilized swabs, collect bacterial samples while wearing gloves. Good places to nab some bacteria include faucets or any other area that is touched by a lot of people. Ensure that you swab from the same area to get roughly the same amount and type of bacteria. Swipe the swab against the agar in the Petri dish and then close and seal the dish. Label each sample.
- Place the samples in a warm, out of the way place. Leave one sample alone, this is the control.
- For the other samples, place the headphones snugly around the dish.
- Connect the headphones to the music players. Play a different song or album on repeat on each player.
- Let the samples grow for a week. Make sure to keep the music players charged and playing at all times. Take pictures of the developing bacteria everyday.
- Take off the headphones and compare each sample. Take note of the number of colonies in each sample and measure the size of each colony.
- Carefully dispose of the Petri dishes.
- Analyze this data. Did the music have an effect on the size or number of bacterial colonies? Did a certain genre of music have a greater effect than others? (A short sketch of how such a comparison might be tallied follows this list.)
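For illustration only, a tally like the following could summarize the comparison; the colony counts below are hypothetical, not real results:

```python
# Hypothetical colony counts per Petri dish after one week (made-up numbers)
results = {
    "control (no music)": [12, 14],
    "classical":          [11, 13],
    "hard rock":          [15, 18],
}

for group, counts in results.items():
    mean = sum(counts) / len(counts)
    print(f"{group}: mean colony count = {mean:.1f}")
```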
Terms/Concepts: microbiology, bacteria, microbes, vibrations, sewage treatment
Warning is hereby given that not all Project Ideas are appropriate for all individuals or in all circumstances. Implementation of any Science Project Idea should be undertaken only in appropriate settings and with appropriate parental or other supervision. Reading and following the safety precautions of all materials used in a project is the sole responsibility of each individual. For further information, consult your state’s handbook of Science Safety. | <urn:uuid:b1f73cf9-f35b-4be3-8d34-184b14bc448a> | 3.671875 | 604 | Tutorial | Science & Tech. | 42.102922 |
Science & Research
Volume III - 4.11 Statistics Applied to pH in Canned Foods
Section 4 - Basic Statistics and Presentation
|EFFECTIVE DATE: 10/01/2003||REVISED: 01-31-13|
pH is a logarithmic measure of the acidity of an aqueous solution. Since pH represents the negative logarithm of the hydrogen-ion activity, it is not mathematically correct to calculate simple averages or other summary statistics directly on pH values. Instead, the values should be converted to hydrogen ion concentrations, averaged, and re-converted to pH values.
The following guidance is provided
- 1. Convert each pH value to hydrogen-ion activity (H+), using the equation:
Activity = 10-pH
In Excel, the formula is: =10^(-pH number)
- 2. Calculate the mean of the activity values by adding the values and dividing the sum by the total number of values. Calculate the standard deviation also from the activity values.
- 3. Convert the calculated mean activity back to pH units, using the equation: pH = (-)(log10)(mean H+ activity). Also convert the standard deviation to pH units. In Excel, the formula is: = -LOG10(number)
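As an illustration, the three steps can be scripted in a few lines (a sketch in Python; the sample pH values are hypothetical and not taken from this guidance):

```python
import math

ph_values = [4.2, 4.6, 4.4, 5.0]                  # hypothetical measurements

# Step 1: convert each pH value to hydrogen-ion activity, 10^(-pH)
activities = [10 ** (-ph) for ph in ph_values]

# Step 2: calculate the mean of the activity values
mean_activity = sum(activities) / len(activities)

# Step 3: convert the mean activity back to pH units
mean_ph = -math.log10(mean_activity)

simple_average = sum(ph_values) / len(ph_values)
print(f"logarithmic mean pH = {mean_ph:.2f}, simple average = {simple_average:.2f}")
```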
When the pH values correspond closely, there is not a significant difference between the mathematical mean and the logarithmic mean. As the pH values spread further apart from each other, the difference between the two means become more significant. | <urn:uuid:df0bbd23-30db-490c-9596-525cc1ec2d2a> | 3.453125 | 311 | Tutorial | Science & Tech. | 44.816693 |
Identity Swap: Finding the variants that human history has favoured
Sequence differences in less than 0.2% of the 3-billion-base human genome play a vital role in a bewildering variety of human disease. Today, researchers from the Wellcome Trust Sanger Institute and the Cambridge University’s Cambridge Institute for Medical Research, together with international colleagues report in PLoS Genetics their detailed maps of differences implicated in disease as well as genes that are unchanged in recent human history.
The Major Histocompatibility Complex (MHC) consists of hundreds of genes on human chromosome 6 that are important in most autoimmune conditions, when our biological defences turn on our own systems. The MHC has the major role in type 1 diabetes and rheumatoid arthritis. The MHC is also pivotal in response to infection, including malaria and AIDS.
Genes in the MHC can differ dramatically between people, and the differences among us affect medical events as diverse as tissue transplant rejection, arthritis, asthma and disease resistance. A detailed study of this region in different people will shed light on which genes are most important.
“We analysed the entire MHC region in detail from three individuals that carried different susceptibility to disease,” explained Dr Stephan Beck, leader of the team at the Wellcome Trust Sanger Institute. Key differences were then further analysed in a much larger population of 140 DNA samples.
“Within the sea of over 20,000 sequence variations across the 4 million MHC bases, we found one island of stability,” continued Dr Beck. “A region of 160,000 bases that is up to 200-fold less variant between chromosomes sharing part of the same HLA type, suggesting these individuals most likely shared a common ancestor as recently as 50,000 years ago.”
Swapping of ancestral sequence blocks is a potential mechanism (identity-by-descent) whereby certain gene combinations, which presumably have conferred an immunological advantage (e.g. resistance to infectious disease), can spread across haplotypes and populations.
Professor John Trowsdale, at the Department of Pathology, University of Cambridge, said, “The region, called DR-DQ, where we find this island of stability is one of the most variable in our genome, yet in some people it has been ‘fixed’. We suggest that ancestral DR-DQ blocks have been shuffled into different MHC backgrounds and subsequently expanded in frequency across European populations.
“These ‘fixed’ haplotypes might then have expanded because they protected against infection and disease. We hope to show, in further studies, whether this stable region was a key to disease resistance in our recent past.”
The study further described over 300 amino acid changing variants in gene sequences. These variants are strong candidates for functional studies to understand the role of variation in MHC-associated disease.
Autoimmune disease affects about 3 million people in the UK. The three haplotypes studied here display different susceptibilities to diseases such as type 1 diabetes, myasthenia gravis and multiple sclerosis.
For some common autoimmune diseases the MHC provides by far the largest genetic contribution by a single chromosome region. For example, the MHC accounts for at least 30% of the familial aggregation in type 1 diabetes and rheumatoid arthritis.
“Data generated by projects such as the MHC Haplotype Project will feed into the recently announced Wellcome Trust Case-Control Consortium,” explained Professor John Todd, Professor of Medical Genetics at the Cambridge Institute for Medical Research, “and the WTCCC search for the genetic signposts for eight common diseases will be accelerated by the new markers reported here. At an ever increasing rate, we are developing the necessary tools and sample collections to make a real difference to the study, diagnosis and, we hope, treatment of diseases such as TB, coronary heart disease, diabetes and rheumatoid arthritis.”
The MHC Haplotype Project is creating a public resource to assist the discovery of genetic factors influencing these medical traits and to shed light on the evolution of the MHC. Access to complete sequences across several MHC haplotypes that exhibit differences in disease susceptibility will help researchers to home in on specific variants (susceptibility alleles) and to rule out regions contributing to a given disease.
Haplotypes and the MHC
Haplotypes are combinations of gene and sequence variants that tend to occur together in an individual genome. This may be purely fortuitous, or it may reflect selection of given combinations (they have been successful in the past), or it may reflect a population bottleneck, where only a few, perhaps similar, genomes have contributed to the further population growth.
The MHC is among the most gene-dense regions of the human genome and the most variable, as might be expected from a region involved in fighting infection (as well as other functions). Over evolutionary time, the MHC has been driven to become the most variable region of our genome.
The MHC Haplotype Project is studying in fine detail the sequence of eight of the most common human haplotypes, selected for conferring protection against or susceptibility to common disease. The detailed analysis of the third of these eight is reported here and compared with the two previously described.
The COX haplotype has been associated with susceptibility to a wide range of diseases, including type 1 diabetes, systemic lupus erythematosus and myasthenia gravis.
The PGF haplotype provides protection against type 1 diabetes and predisposes to other diseases such as multiple sclerosis and systemic lupus erythematosus.
The QBL haplotype is positively associated with Graves’ disease and type 1 diabetes.
| <urn:uuid:8e728872-a33f-4e82-a9ef-3390679f3df1> | 3.359375 | 1,758 | Content Listing | Science & Tech. | 34.781527 |
Intercepts, asymptotes, sketch for quadratic over quadratic:
The x-intercept: Where the function is zero, or the numerator is zero and the denominator is non-zero:
x^2 - 9 = 0, or x = 3, x = -3.
The y-intercept: Where x = 0: y = (0 - 9)/(0 - 4) = 9/4, so the y-intercept is (0, 9/4).
Vertical asymptote: where the denominator is zero: x^2 - 4 = 0, or x = 2, x = -2.
Horizontal asymptote: The limit of the function at infinity. We can use the Limit at Infinity for Rational Functions: the numerator and denominator are both degree 2, so the limit is the ratio of the leading coefficients, 1/1 = 1, and the horizontal asymptote is y = 1.
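A quick symbolic check of these results (a sketch using SymPy; the function f(x) = (x^2 - 9)/(x^2 - 4) is inferred from the work above rather than stated explicitly in it):

```python
import sympy as sp

x = sp.symbols('x')
f = (x**2 - 9) / (x**2 - 4)

print(sp.solve(sp.Eq(f, 0), x))         # x-intercepts: [-3, 3]
print(f.subs(x, 0))                     # y-intercept: 9/4
print(sp.solve(sp.Eq(x**2 - 4, 0), x))  # vertical asymptotes at x = -2, 2
print(sp.limit(f, x, sp.oo))            # horizontal asymptote: y = 1
```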
Draw your graph, and then check it. | <urn:uuid:03b38df3-a25c-4630-837c-c4daafe6f13f> | 2.734375 | 147 | Tutorial | Science & Tech. | 53.052222 |
Olsen, Paul E. and Johansson, Annika K., (Lamont-Doherty Earth Observatory of Columbia University, Palisades, New York)
To a first approximation, the transitions between major global climate states (i.e. "ice house" vs. "hot house") do not seem to correlate with the largest scale aspects of tectonic history. Ice house conditions came and went during the existence of Pangea during the late Paleozoic while hot house conditions were maintained during the last stages of Pangea, its breakup and its dispersion during the Mesozoic. Hot house conditions gave way to ice house conditions during the Cenozoic during which the continents remained more or less dispersed. In contrast, there is a good correlation between major climate states and the staggered evolution of producing and consuming organisms. Terrestrial plants tend to increase chemical weathering (via root respiration and decay) while herbivores and detritivores tend to suppress chemical weathering (via lowering of root productivity and subsurface decay). Other workers have emphasized the role of the evolution of major plant groups in the development of ice house conditions, and it may be that it is the evolution of consuming animals that brings the Earth back to a hot house state. We propose that the long lag between the evolution of key innovations in plant group and corresponding innovations in consumers is responsible for intervening ice house conditions. If animals were in adaptive equilibrium with producers there would be no net change in weathering rates other than that driven by changes in the solar constant. This scenario fits the observed climatic pattern, the sedimentary record of carbon reservoirs, and the known evolutionary history of plant and consumer groups, but it remains to be tested by examination of appropriate proxies of weathering in the geological record and experimental studies of the details of the differences in weathering rates among different plant groups. | <urn:uuid:09fff63d-75ae-48d5-a984-94faf4c1f4ed> | 2.875 | 380 | Academic Writing | Science & Tech. | 22.734303 |
Let's solve the world's math problems!
Five students earned a prize of 720.00. Peter received twice as much as each one of the other four, who received equal amounts. How much did Peter receive?
Step 1: let p = Peter's amount and f = the amount received by each of the other four students
Step 2: Peter earned twice as much as each of the other students, so p = 2f
Step 3: the students all earned 720 together, so p + 4f = 720
Step 4: but p = 2f, so 2f + 4f = 720, or 6f = 720
Step 5: so f = 120; each of the other four students received 120 and Peter received 240 (twice 120)
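As a quick check (a sketch in Python using SymPy, not part of the original solution):

```python
import sympy as sp

p, f = sp.symbols('p f')
solution = sp.solve([sp.Eq(p, 2 * f), sp.Eq(p + 4 * f, 720)], [p, f])
print(solution)  # {p: 240, f: 120}: Peter gets 240, each of the other four gets 120
```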
| <urn:uuid:07ec8852-ce22-4b1f-a86a-c42cbf998db5> | 3.34375 | 242 | Tutorial | Science & Tech. | 101.531124 |
New Images of C/2006 P1
Check out the new images
(Jan 18-25) and
(Jan 26-Feb 05) of the comet by Rob McNaught.
Posters are available from several of these images.
Images of the comet by Gordon Garradd appear here.
What's all the Fuss About?
Every few years a comet becomes bright enough to be easily seen with the naked eye
but only every decade or so are these comparable with comets like Halley. The
recent observations of C/2006 P1 already show it to be considerably brighter
than Halley and it is likely to get much brighter still! Pictures like this one from
in Norway on January 8 give a feel for what we might expect in Australia
after the comet swings around the Sun on January 12, moving into the
southern evening sky by January 14.
Photo: Hakon Dahle, Institute of Theoretical Astrophysics, University of Oslo.
Discovery of C/2006 P1
C/2006 P1 was a routine discovery on 2006 Aug 7, with the Uppsala Schmidt
telescope at Siding Spring Observatory, near Coonabarabran, N.S.W., Australia. It is one of 29 comets discovered by this telescope
since early 2004 in a project to systematically search the
southern skies for asteroids, or comets, that can pass close to the Earth.
The project is run by Steve Larson of the University of Arizona and
operates three telescopes;
two in Arizona
and, in collaboration with the Australian National University, the
Uppsala Schmidt at Siding Spring. This three telescope operation
discovered almost 400 Near-Earth Asteroids in 2006, over 60% of the
worldwide total. Discovery statistics are listed
with our group of three telescopes combined under "Catalina".
This earliest image of C/2006 P1 is from a 20sec exposure with the Uppsala Schmidt taken in moonlight on 2006 Aug. 07.
It covers 13.2 x 10.0 arcmins, a tiny part of the original 120 x 120 arcmin field of view. North is to the top and the lower
edge of the image is the bottom of the original field. Due to the moonlight, the flat fielding correction is imperfect
(our thinned CCD chip has a number of blemishes along this edge).
These four images, taken 10 minutes apart, form the sequence used in the discovery. The comet is moving in the center (see above for the location
of the comet on the first image). The background, being uneven from the imperfect flat field, appears to move around. This
"dither" pattern is a deliberate offset of the telescope so none of these cosmetic artifacts can appear to move like an
asteroid or comet and the automated detection software thus ignores them for the most part.
Within a week, it was clear that
this comet was unlike most others we have discovered. They have orbits that keep them
far from the Sun, such objects being of interest only to professional or amateur
astronomers. This comet was likely to pass well
within the Earth's orbit and even well within the orbit of Mercury, making it
much brighter than most, but also potentially hiding it in the Sun's glare. It
would be at its closest to the Sun in mid-January 2007. You can visualise the comet's
orbit using this very nice
Alternatively, cross your eyes and look at these wonderful stereo diagrams below by Paul Payne,
displaying the comet's path through the inner Solar System. Note that the planets are enlarged!
You can get larger versions here for the
images. These images may be used if credited to Paul Payne.
If you can successfully view the images in stereo try this beautiful
Quicktime stereo animation (4.7MB)
by Paul displaying the orientation of the comet's orbit.
From August into early November the comet did brighten rapidly, but not
enough to prevent it becoming lost in the evening twilight by mid-November. Although still
brightening, it seemed that the comet might be lost to human eyes until
it reappeared in southern skies in the evening twilight of late January.
Although not visible to ground based telescopes in early December, the comet
was recorded on a number of occasions with the SWAN instrument on the SOHO satellite.
This recorded the UV halo from fluorescing gas around the comet. During the month
it was clearly brightening (see images on
Gary Kronk's Cometography
page), although just how bright this was in terms of what the human eye would see was not certain. An animation
of the latest SWAN images can be found
but through mid-January, C/2006 P1 will be lost to the SWAN instrument, being located in the excluded region around the Sun.
The comet went unobserved by human eyes for 40 days but was successfully reobserved in the
twilight from the northern hemisphere at the end of December. It was
becoming clear that it could be a bright object when closest to the Sun
in mid-January. Since then the comet has continued to grow in brightness
impressively! By January 6, several amateur astronomers were reporting
that the comet was visible to the naked eye in bright twilight just
a few degrees above the horizon.
Having heard of the successful daylight sighting of the comet by
the very experienced US amateur Dennis di Cicco on Jan 7,
Gordon Garradd and Rob McNaught made the attempt on January 9 from Siding Spring
(using professional computer telescope pointing) and we were able to see
the comet. Precautions had been taken to
prevent any possibility of the sun entering the telescope. With such
a bright sky, Gordon used sunglasses to cut down the glare (they would
provide no protection should you accidentally look at the sun
through the telescope) and clearly saw it. Rob found it more difficult
without sunglasses, but the sun hat was appropriate wear. (Remember
"Slip, Slop, Slap"). Whilst telescopic viewing of the comet in daytime is *possible* it
is strongly recommended that nobody attempt this without considerable
prior experience. It is far too easy to accidentally look at the sun and
inflict permanent eye damage or blindness.
Gordon Garradd observing the comet at midday on January 9 using the 125mm finder telescope
on the Uppsala Schmidt (the discovery telescope).
Rob McNaught observing the comet shortly after Gordon.
It seems reasonable that the baseline prediction for the peak brightness
on Jan 14 will be of negative magnitude (brighter than the brightest stars),
but it gets better! Due to the dust
in the comet, there will be a brightness enhancement around that date
caused by the comet being located between us and the Sun. This brightening,
called forward scatter, has been estimated to increase the brightness
of the comet by around a couple of magnitudes, so an impressive brightness
might result between Jan 12 to 16. This brightening effect is
just like that of plant seeds or bugs which brighten as they drift
in front of the sun. The comet's brightness may possibly rival Venus (the
Evening star) visible in the evening twilight at the moment.
Before Jan 13, the comet is not visible in the Australian sky after sunset.
Australians will however have the chance to see the comet before this by examining
images on the web taken by space telescopes that monitor the
sky near the Sun. These record both activity on the Sun itself and in
the environment surrounding it.
From Jan 12 to 16, it would be possible to gauge its brightness
by examining images taken with the
LASCO C3 telescope on the SOHO satellite. This telescope, which monitors
the Sun's atmosphere, often shows small comets passing close to the sun;
usually very small and faint objects. The following link to a NASA webpage
shows the latest SOHO images (to see the comet, choose the LASCO C3 images)
and also has a link towards the bottom of the page to Real Time Movies showing the last few days' activity.
The comet will only appear in the LASCO C3 images, which has the widest field of
view (8 degree radius) of all the SOHO telescopes. Michael Mattiazzo, a
well-known Australian comet observer, describes what to expect on the C3 images
and notes that the times given on the SOHO web pages are in UT:
On Jan 12 at approximately 09:00 UT (8pm NSW time), comet
C/2006 P1 McNaught appears at the 11 o'clock position in the
images. On Jan 15 at 15:55 UT (02:55am NSW time), the comet
is 40' East of Mercury. On Jan 16 at approximately 16:00 UT
(03:00am NSW time), the comet disappears at the 7 o'clock position.
The following plot provided by Syuichi Nakano displays the path of
the comet relative to the Sun as seen from the Earth. This unusual way of plotting the path
gives the comet an apparent kink in its motion. It is properly visualised
as the comet moving through a long loop directed towards the Earth in late-December/early-January,
but well over 100 million km distant, before it swings around the Sun and heads away from the Earth and Sun again.
Other space telescopes that monitor the Sun include a pair of telescopes called
STEREO, launched late last year. They view the Sun from different angles to
give a 3D view of the Sun's activity. The comet should move into the
field of these telescopes on Jan 10, but the images are only being updated
slowly and it may be a few days before comet images appear after this date.
Note that the images are in FITS format requiring a suitable astronomical image viewer.
Visibility from Australia
I'll concentrate my comments on the latitude of Sydney, which will be
reasonably accurate for most of the Australian population.
The first possibility to see the comet will be at sunset on Jan 13
when the comet would be a *very* difficult object some 6 degrees
north of the azimuth of the just set sun. The comet will set only
7 minutes after the top edge of the sun has set. You would need a
very good horizon and beautifully clear skies to see it, but given
the possible brightness it is not an impossibility. The tail would
lie almost flat along the horizon to the comet's right.
The first real chance will be at sunset on Jan 14 with the comet about
5 degrees from the just set sun, up at 45 degrees to the right (and
gas tail continuing away from the sun in that direction). The head
of the comet will set about 23 mins after the sun, still in the bright
twilight, but as the sky darkens it is probable that the tail will
become visible at greater distances from the comet. It is
close to sunset on the 14th that the comet will reach its theoretical peak brightness.
At sunset on Jan 14, the comet will be located only 1.2 deg due right
of Mercury which will then be mag. -1, and some 14 deg from Venus
which at mag. -4 lies up to the right of the comet. [The magnitude scale
is used by astronomers to measure brightness. The Sun is mag. -26, the full moon
mag. -12 and the faintest stars you see on a dark night are about mag. +6.]
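Each step of one magnitude corresponds to a brightness factor of about 2.512 (the fifth root of 100), so magnitude differences translate into brightness ratios as in this small sketch (an illustrative addition, not part of the original page):

```python
def brightness_ratio(m_bright, m_faint):
    """How many times brighter an object of magnitude m_bright is than one of m_faint."""
    return 10 ** (0.4 * (m_faint - m_bright))

print(brightness_ratio(-26, -12))  # Sun vs full moon: roughly 400,000 times brighter
print(brightness_ratio(-12, 6))    # full moon vs faintest naked-eye star: ~16 million times
```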
The best geometry occurs on Jan 15, with the comet starting to move
away from the sun (now 7 degrees) and almost directly above the
position of the sun at sunset. The head will set about 39 mins after
the sun, although the azimuth at which it sets will be 5.0 degrees to
the left of the sunset point. It is quite reasonable to expect the tail
to remain visible up to an hour after sunset, so it may be seen in a reasonably dark sky.
On Jan 15 the comet is already 3 degrees from Mercury, up to the left.
By Jan 16, the effect of forward scattering will have dropped back to
about zero and the comet will already be heading away from the Sun and Earth;
back to the obscurity of the Oort cloud. Although now clearly fading,
it will be moving higher into the southern sky away from the sun. At
sunset on the 16th, the comet will be about 10 degrees from the sun and
just left of directly above the Sun at sunset. It will set 54 mins
after the sun, 9 degrees to the left of the sunset point.
From Jan 17 onwards, the comet, although fainter, should be well visible
in the darker skies. It then moves into the SW sky at roughly
a 45 deg angle up to the left of the sunset point.
The angular distance of the comet from the Sun at the time of sunset from
Sydney then increases on a daily basis:
Jan 17 12deg
after which date the head of the comet will set when the sun has already
passed more than 18 degrees below the horizon (astronomical darkness).
This diagram by Steve Quirk
shows the WSW sky just after sunset for NSW. Venus
will be an obvious bright object up to the right, but Mercury will
be difficult to see except in binoculars (do not try to look for it
or the comet before the Sun has fully set). The
position of the comet on seven nights from Jan 13 to 19 is given.
Although technically visible on the 13th, the comet sets just after
the sun, so it is on the 14th and 15th before the comet is likely to be
easily seen. The tail is plotted as a general indication of
what might be seen. The outer parts of the tail will only be
visible after the sun, and the comet's head, have set much lower
below the horizon.
The diagram (and a b+w version)
can be freely used in
any publication if credited (c) Steve Quirk (2007). Note however the diagram is only really
applicable to the southern states of Australia.
The Comet's Tail
A note of the appearance of the tail is necessary. Any
prediction of the length and brightness of the tail is likely to be
more difficult than the brightness of the comet; comet brightness
prediction being difficult enough in itself. It is likely that the
blue gas tail will be narrow, pointing away from the sun, with a broad
diffuse and strongly curved yellow dust tail to its right. The reason
for this geometry is that the gas moves very quickly away from the head
so tends to point directly away from the sun. The dust however is
heavier and once ejected, follows a wider and slower orbital path
around the sun, moving relatively more slowly as its distance from the sun increases.
[A very crude analogy would be of spinning around holding a garden hose.
If the hose was on high, the stream of water would be fast and fairly
straight (we are talking of the appearance as seen from above). As you
slow the speed of the stream, still spinning at the same rate, the
curvature of the stream is much more marked].
Other Bright Comets
Until the modern era of automated surveys and space telescopes,
it was not uncommon for a comet to suddenly appear in the bright twilight
as an already impressive object. Without the survey telescope at Siding Spring, or
space telescopes, the current naked-eye sightings could have really been the
first anyone would have known about the comet. This is borne out
by a report from the Institute of Theoretical Astrophysics at the University of
Oslo. Responding to a query as to how the comet would compare with
some famous recent comets like Bennett, West, Halley, Hyakutake and Hale-Bopp,
the author wrote:
I already consider the comet to be in that league.
Yesterday (Jan. 8) I found it the most striking object
in the evening twilight sky, and our department had
been getting a lot of phone calls throughout the day
(particularly from northern Norway, where the sun is
still below the horizon at noon) from the general
public. The typical story was from people who had
"discovered" the comet while going to/from work,
waiting for the bus etc.
It is most unlikely that this comet will approach the spectacular
brightness of comet Ikeya-Seki in 1965, but it should turn out to be the
brightest comet for over 40 years.
Where to Look
For those using astronomical telescopes, below is an ephemeris for
09:30 UT (8:30pm NSW time) for Sydney (or any location in eastern
Australia at around 35S latitude for 12 mins after sunset). Delta is
the distance of the comet from the Earth and r of the comet from the
Sun (in AU, 1AU = ~150 million km). Elong. is angular distance from
the Sun in degrees.
UT R.A. (J2000) Dec. Delta r Elong.
YYYY MM DD HHMM HH MM.mm DD MM.m AU AU deg
2007 01 13 0930 19 58.79 -18 06.7 0.839 0.172 5.9
2007 01 14 0930 20 06.15 -21 36.6 0.822 0.183 5.5
2007 01 15 0930 20 12.70 -25 11.0 0.817 0.201 7.2
2007 01 16 0930 20 18.46 -28 35.3 0.820 0.224 9.8
2007 01 17 0930 20 23.56 -31 42.1 0.830 0.251 12.6
2007 01 18 0930 20 28.13 -34 29.0 0.845 0.279 15.3
2007 01 19 0930 20 32.31 -36 56.8 0.863 0.308 17.7
2007 01 20 0930 20 36.18 -39 07.2 0.883 0.337 19.9
2007 01 21 0930 20 39.81 -41 02.3 0.904 0.367 21.9
2007 01 22 0930 20 43.25 -42 44.2 0.926 0.396 23.7
2007 01 23 0930 20 46.54 -44 14.9 0.949 0.425 25.3
2007 01 24 0930 20 49.70 -45 35.9 0.971 0.454 26.8
2007 01 25 0930 20 52.76 -46 48.6 0.994 0.482 28.2
2007 01 26 0930 20 55.74 -47 54.1 1.017 0.510 29.5
2007 01 27 0930 20 58.65 -48 53.6 1.039 0.538 30.7
2007 01 28 0930 21 01.50 -49 47.7 1.061 0.565 31.8
2007 01 29 0930 21 04.29 -50 37.2 1.083 0.592 32.8
2007 01 30 0930 21 07.04 -51 22.7 1.104 0.618 33.8
2007 01 31 0930 21 09.74 -52 04.7 1.125 0.644 34.8
2007 02 01 0930 21 12.42 -52 43.6 1.146 0.670 35.7
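For readers who want to use the table above programmatically, here is a minimal sketch (an illustration, not software from the article) that parses one whitespace-separated row of the ephemeris and converts Delta to kilometres using the 1 AU ~ 150 million km figure quoted in the text.

```python
AU_KM = 150e6   # rough conversion quoted above: 1 AU ~ 150 million km

row = "2007 01 15 0930 20 12.70 -25 11.0 0.817 0.201 7.2"
fields = row.split()

date = "-".join(fields[0:3])      # YYYY-MM-DD
delta_au = float(fields[8])       # Earth-comet distance (Delta) in AU
r_au = float(fields[9])           # Sun-comet distance (r) in AU
elong = float(fields[10])         # elongation from the Sun in degrees

print(f"{date}: Delta = {delta_au} AU (~{delta_au * AU_KM:,.0f} km), "
      f"r = {r_au} AU, elongation = {elong} deg")
```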
Webpages about C/2006 P1
A telescope has been marketed in Germany under the name "McNaught Comet Catcher". Rob McNaught was not contacted by the company, does not endorse it
and has no knowledge of its quality. | <urn:uuid:9667fdb0-9fc2-4c10-8375-02b357475077> | 2.796875 | 4,152 | Knowledge Article | Science & Tech. | 78.403707 |
The first test to see whether you can access the database server is to try to create a database. A running PostgreSQL server can manage many databases. Typically, a separate database is used for each project or for each user.
Possibly, your site administrator has already created a database for your use. He should have told you what the name of your database is. In that case you can omit this step and skip ahead to the next section.
To create a new database, in this example named mydb, you use the following command:
$ createdb mydb
If this produces no response then this step was successful and you can skip over the remainder of this section.
If you see a message similar to:
createdb: command not found
then PostgreSQL was not installed properly. Either it was not installed at all or your shell's search path was not set to include it. Try calling the command with an absolute path instead:
$ /usr/local/pgsql/bin/createdb mydb
The path at your site might be different. Contact your site administrator or check the installation instructions to correct the situation.
Another response could be this:
createdb: could not connect to database postgres: could not connect to server: No such file or directory
        Is the server running locally and accepting
        connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
This means that the server was not started, or it was not started where createdb expected it. Again, check the installation instructions or consult the administrator.
Another response could be this:
createdb: could not connect to database postgres: FATAL: role "joe" does not exist
where your own login name is mentioned. This will happen if the administrator has not created a PostgreSQL user account for you. (PostgreSQL user accounts are distinct from operating system user accounts.) If you are the administrator, see Chapter 20 for help creating accounts. You will need to become the operating system user under which PostgreSQL was installed (usually postgres) to create the first user account. It could also be that you were assigned a PostgreSQL user name that is different from your operating system user name; in that case you need to use the -U switch or set the PGUSER environment variable to specify your PostgreSQL user name.
If you have a user account but it does not have the privileges required to create a database, you will see the following:
createdb: database creation failed: ERROR: permission denied to create database
Not every user has authorization to create new databases. If PostgreSQL refuses to create databases for you then the site administrator needs to grant you permission to create databases. Consult your site administrator if this occurs. If you installed PostgreSQL yourself then you should log in for the purposes of this tutorial under the user account that you started the server as.
You can also create databases with other names. PostgreSQL allows you to create any number of databases at a given site. Database names must have an alphabetic first character and are limited to 63 bytes in length. A convenient choice is to create a database with the same name as your current user name. Many tools assume that database name as the default, so it can save you some typing. To create that database, simply type:
$ createdb
If you do not want to use your database anymore you can remove it. For example, if you are the owner (creator) of the database mydb, you can destroy it using the following command:
$ dropdb mydb
(For this command, the database name does not default to the user account name. You always need to specify it.) This action physically removes all files associated with the database and cannot be undone, so this should only be done with a great deal of forethought.
As an explanation for why this works: PostgreSQL user names are separate from operating system user accounts. When you connect to a database, you can choose what PostgreSQL user name to connect as; if you don't, it will default to the same name as your current operating system account. As it happens, there will always be a PostgreSQL user account that has the same name as the operating system user that started the server, and it also happens that that user always has permission to create databases. Instead of logging in as that user you can also specify the -U option everywhere to select a PostgreSQL user name to connect as.
Please note that if you happen to be the system administrator (hopefully on your personal machine) on a Linux system, many Linux distributions create a user ID called "postgres" or something similar. You need to su into this account from the root account in order to be able to access the system-wide installation of the PostgreSQL server for the first time.
I had to do a Google search to figure it out. So I thought I might document it here as well.
I wasn't able to use the steps above to create the database without reading beyond this point. The syntax that I found worked for creating the db is this: CREATE DATABASE "mydb";. Note that I did not use the dollar sign. The double quotes are important to maintain the case; without them PostgreSQL folds the name to lowercase. As usual, the semicolon is necessary to indicate the end of the SQL statement.
To the person using the "create database .." syntax:
You are misinformed and perhaps misinforming too. You are referring to SQL syntax, where indeed the line "create database 'mydb'" will be interpreted as instructions to create a database. In order to use the SQL syntax however, some sort of SQL parsing client needs to be used. Like PHP, or part of a C program parsing SQL queries.
This page however explains creating databases from the operating system shell, where a specific binary called 'createdb' is used to access database creation promptly, without resorting to SQL language.
This is the important difference. Even though 'createdb' did not work for you, rest assured you CANNOT create databases typing 'CREATE DATABASE "mydb"' at Windows 'cmd' or UNIX sh prompt. You are mixing terms here. You yourself have mentioned 'SQL', while it is clear the article does not cover the use of SQL, but instead how to create Postgres databases from generic shell command line (as opposed to some sort of SQL-enabled shell). | <urn:uuid:c99a11d8-03f8-4806-935e-b4404cb03856> | 2.90625 | 1,325 | Tutorial | Software Dev. | 47.430187 |
New Form of Matter Discovered, Part Laser, Part Superconductor
2007 10 09
By Lucian Dorneanu | news.softpedia.com
A group of researchers have recently announced the creation of a completely new state of matter that combines the characteristics of lasers with those of the world’s best electrical conductors.
This image shows a rotating superfluid made up of fermionic atoms. Credits: Andre Schirotzek, MIT
They successfully demonstrated the existence of the phase, besides the previously known ones: solids, liquids, gases, plasmas, superfluids, supersolids, Bose-Einstein condensates, fermionic condensates, liquid crystals, strange matter and quark-gluon plasmas.
David Snoke, an associate professor in the physics and astronomy department in the University of Pittsburgh’s School of Arts and Sciences explained that the new state is a solid filled with a collection of energy particles known as polaritons that have been trapped and slowed. The new material could provide new ways of moving energy from one point to another as well as a low-energy means of producing a light beam like that from a laser, thanks to its joint properties.
The Pitt researchers have been working on a project to create materials that mix the characteristics of superconductors and lasers. Using optical nanostructures, they have successfully captured the polaritons in the form of a superfluid, resulting in a form of matter called a polariton superfluid, in which the wave behavior produces a pure light beam similar to that from a laser but much more energy efficient.
Unlike existing superconductors and superfluids, which need a constant temperature between -280 F and -450 F (-173 C to -268 C), the new polariton superfluid demonstrates increased stability at higher temperatures, and could even, with future improvements, exist at room temperature.
To create this new state of matter, the scientists used a technique similar to that used for superfluids made of atoms in the gaseous state known as the Bose-Einstein condensate.
High-temperature superconductivity is not fully understood, but the new material opens up opportunities to study the microscopic mechanisms behind this phenomenon.
Future practical applications could be revolutionary, allowing the creation of new ways of trapping and manipulating the energy particles, and of achieving a controlled transfer of optical signals through solid matter.
Article from: http://news.softpedia.com/news/New-Form-of-Matter-
Astronomers shed light on mystery of 'dark matter'
The Primacy of Consciousness
Still Chasing the Ghosts of ‘Dark Matter’ and ‘Dark Energy’
Astronomers find the gate into parallel worlds
Breaking Through Conventional Scientific Paradigm
Spiritual Science - DNA is influenced by Words and Frequencies
Superconductor research points towards feasible electric airplanes
Early Cosmos around the Quasar
On April 23, 2001, a team of astronomers (including Xiaohui Fan, Robert Becker, Michael Strauss, and Richard L. White) working with the Sloan Digital Sky Survey (SDSS) announced that they had observed a distant "quasar" from the earliest stellar era of the universe (see: news summary; SDSS press release; Becker et al, 2001; and Fan et al, 2001). Light from the object travelled from 13 to 14.5 billion light-years (ly) -- assuming an estimated age for the universe of roughly 14 to 15 or so billion years -- before reaching the Solar System in March 2000, making J1030 the most distant object then detected in visible and x-ray wavelengths (Pentericci et al, 2002; and Malthur et al, 2002). Subsequently, however, an even more distant quasar with a tentative redshift of z=6.40 was announced on January 9, 2003, near the SDSS detection limit of a redshift of z ~ 6.5 for bright quasars, and other teams of astronomers detected even more distant, fast-star-forming irregular proto-galaxies, including: gravitationally-lensed HCM 6A behind galaxy cluster Abell 370 with a redshift of z~6.56, which appears to be converting about 40 Solar-masses into stars annually; (PhysicsWeb; IFA press release; Hu et al, 2002, in pdf; and erratum); and the possible "superwind-galaxy" LAE J1044-0130 (Subaru press release; and Ajiki et al, 2002, in pdf). On June 29, 2011, a team of astronomers using the European Southern Observatory's Very Large Telescope and other telescopes around the world announced their detection of ULAS J1120+0641, which is the oldest known quasar measured thus far with a redshift of z ~ 7.08 and which indicates that its light has taken around 12.9 billion years to reach Earth from just 770 million years after the Big Bang (ESO science release).
Designated by survey catalogue and position as SDSS J103027.10+052455.0 (but often referred to as simply J1030+0524 and hereafter in this text as J1030), this exceptionally luminous quasar was detected despite an extremely high spectroscopic redshift of z = 6.28 +/- 0.03. In addition to two similarly remote quasars found only a week before at redshifts of z = 5.99 and 5.82 on April 16, 2001, J1030 was observed when the universe was only around seven percent of its present age, at roughly 860 million years old (Pentericci et al, 2002). By August 3, 2001, the same team of astronomers announced that they were able to use the quasar to mark the end of the period when radiation from the first stars and quasars tore apart and re-ionized the neutral hydrogen atoms that filled the universe for some 100 million years after the Big Bang (SDSS press release).
Spectra of three remote quasars show that the Lyman-alpha line of hydrogen has been shifted farthest to the right for SDSS J1030+0524 (more).
According to what has become conventional cosmological theory, the universe after the Big Bang was composed of a superhot plasma of electrons and quarks that soon formed protons and neutrons. However, the universe took 300,000 years to cool sufficiently for neutral hydrogen and helium gas to form, with traces of lithium and beryllium atoms (at around a redshift of z ~ 1,000). When the first generation of massive stars formed and lit up, the so-called "Cosmic Dark Age" ended. The first stars, however, also began emitting intense ultraviolet radiation that "re-ionized" neutral hydrogen atoms formed after the Big Bang by tearing electrons from their proton nuclei. With lives as brief as three million years, many of these massive stars soon exploded as supernovae and created black holes, of which many soon coalesced into supermassive hole and disk complexes as luminous quasars that emitted their own ionizing radiation. Within half a billion years or so, not much neutral hydrogen was left around the stars and the quasars at the center of coalescing proto-galaxies. Unlike the two slightly closer quasars that were also found in April 2001, however, the absorption signature of neutral hydrogen gas was detected in the spectrum of J1030, dating it to the period when the first stars and quasars formed (Fan et al, 2001; Becker et al, 2001; and Jordi Miralda-Escude, 1997).
In 1965, Jim Gunn (SDSS Project Scientist) and Bruce Peterson predicted that neutral hydrogen atoms would be detected by their light-absorbing signature, creating a trough in the spectrum as hydrogen atoms absorb all the light at a particular, characteristic wavelength. If at least one part in a 100,000 of the hydrogen in intergalactic space were made up of whole atoms, all the light at this wavelength would be blocked. Because light from objects that are distant in space and time is shifted toward the red end of the spectrum, the Gunn-Peterson trough would also be shifted. By looking at where in the spectrum the trough occurred, astronomers could tell how faraway and old those atoms were. Thus, more than 35 years after it was predicted, J1030 was finally found in the early period of the universe where neutral hydrogen gas still existed in quantities sufficient to be detected.
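As a rough illustration of how redshift moves the relevant spectral features, the sketch below (my own example, not from the original article) assumes the standard relation that an emitted wavelength is stretched by a factor of (1 + z), and uses the Lyman-alpha rest wavelength of about 121.6 nm.

```python
LYMAN_ALPHA_NM = 121.6   # rest wavelength of the hydrogen Lyman-alpha line

def observed_wavelength(rest_nm, z):
    """Wavelength at which a line emitted at rest_nm is observed at redshift z."""
    return rest_nm * (1.0 + z)

for z in (5.82, 5.99, 6.28):
    print(f"z = {z}: Lyman-alpha observed near "
          f"{observed_wavelength(LYMAN_ALPHA_NM, z):.0f} nm")
# For J1030 (z ~ 6.28) the line, and the Gunn-Peterson trough just blueward
# of it, lands near 885 nm, at the far red end of the optical window.
```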
The first stars lit up a few-hundred-million-year "Dark Age" with spectacular intensity, leading to the rapid creation of heavy elements and black holes that coalesced to form bright quasars (more).
Astronomical instruments such as the Hubble Space Telescope and the Chandra X-Ray Observatory have uncovered evidence that the very first stars may have burst into the universe more intensely and spectacularly than previously theorized. Studies of Hubble's "deep field" views suggest that the universe made a significant portion of its stars in a torrential firestorm of star birth, which abruptly lit up the pitch-dark heavens just a few hundred million years after the Big Bang, the tremendous explosion that created the universe. Though stars continue to be born today in galaxies, the current rate of star birth is a trickle compared to the hypothesized baby boom of stars in the early universe.
Larger x-ray & optical collage image. A lack of bright, remote x-ray sources in the Hubble Deep Field images suggests that there was an enormous amount of star formation in early proto-galaxies, or that supermassive black holes were somehow hidden in the early universe (more).
Analysis of early galaxies in the Hubble deep fields taken near the north and south celestial poles (in 1995 and 1998, respectively) suggest that the farthest objects in the deep fields are only the "tip of the iceberg" of a uniquely effervescent period of star birth. Roughly 90 percent of the light from the early universe appears to be missing in the Hubble deep fields because the previous census of the deep fields missed most of the ultraviolet light in the universe. A new analysis of galaxy colors, however, indicates that the farthest objects in the deep fields must be extremely intense, unexpectedly bright knots of blue-white, hot newborn stars embedded in primordial proto-galaxies that are too faint to be seen even by Hubble's far vision -- as if only the lights on a distant Christmas tree were seen and so one must infer the presence of the whole tree (more discussion at: STScI; and Lanzetta et al, 2002). In 2003, astronomers announced that they had discovered that iron from supernovae of the first stars (possibly from Type Ia supernovae involving white dwarfs) indicate that "massive chemically enriched galaxies formed" within one billion years after the Big Bang, and so the first stars may have preceded the birth of supermassive black holes (more from Astronomy Picture of the Day, ESA, and Freudling et al, 2003).
ESO, ESA, NASA
Type Ia supernovae may have chemically enriched the first massive galaxies within one billion years of the Big Bang (more from APOD, ESA, and Freudling et al, 2003).
SDSS observational data show that the number of quasars rose dramatically from a billion years after the Big Bang, peaked at around 2.5 billion years later, and fell off sharply towards more recent times (SDSS press release). As quasars themselves do not provide enough ionizing photons to keep the universe ionized at z > 5, the major contributor to ionization must have been star-forming galaxies (as happens at z~3: Steidel et al, 2001). Luminous quasars such as J1030 ionized the hydrogen within their vicinity, creating an HII region out to several million parsecs -- more than 20 million ly (Pentericci et al, 2002; and Malthur et al, 2002).
Quasar SDSS J1030+0524
The quasar was found in the northeastern corner (10:30:27.1+5:24:55.1, J2000; and 10:30:27.10+5:24:55.0, ICRS 2000.0) of Constellation Sextans, the Sextant. It is located southeast of Rho Leonis and Regulus (Alpha Leonis), northwest of Zavijava (Beta Virginis) and Zaniah (Eta Virginis), and northeast of the Spindle Galaxy (M102 or NGC 5866) and Alphard (Alpha Hydrae). Unfortunately, it has never been visible with the naked eye from the Solar System. A useful catalogue designation for this object is: SDSS J103027.10+052455.0.
Larger image of the accretion disk around the supermassive black hole in NGC 4261, around 45 million light-years away. A quasar is seen by the light emitted by gas falling into a supermassive black hole via its accretion disk, which astronomers believe is, in turn, surrounded by an even more massive halo of dark matter (more from CfA, Science, and Barkana and Loeb, 2003, in pdf).
J1030 appears to have a central black hole of several billion Solar-masses that must have formed within the first billion years of the Big Bang (Brandt et al, 2002). Since the quasar is very old (as much as 13 to 14.5 billion years), astronomers are puzzled that such a supermassive black hole could have formed so soon after the Big Bang (possibly less than a billion years or so). The current explanation is that such quasars and their host galaxies assembled within even larger haloes of dark matter, which enabled them to form very quickly. In January 2003, astronomers (Rennan Barkana and Abraham Loeb) announced their discovery of a shock-wave of ionized gas around J1030 and another early quasar with a supermassive black hole (SDSS J1122-0229 with z=4.75) that could be the fingerprints of dark-matter haloes. As a quasar's black hole sucks in gas from surrounding space, the gas collides with the edge of its dark-matter halo and forms a shock wave, which heats the gas suddenly and strips off electrons to form electrically charged ions. Although the infalling gas absorbs some of the light radiated from the quasar, the gas becomes transparent when it gets ionized. The sudden appearance of transparent gas at the shock boundary breaks up the spectrum of light from the quasar so that it is split into two intensity peaks of different height, instead of rising and falling smoothly as the wavelength changes, as it would if the gas were not ionized. Barkana and Loeb's analysis also suggests that the galaxy surrounding J1030 has around the same mass as the Milky Way, given the amount of gas falling into its central black hole (CfA press release, Science, and Barkana and Loeb, 2003, in pdf).
SDSS J1030+0524 is believed to be a supermassive black hole with an accretion disk of gas and dust, emitting bi-polar jets of high-energy radiation such as gamma and x-ray photons (more illustrations).
Metal line ratios indicating greater than Solar abundance were found in the spectrum of J1030 (and of similarly remote quasars). This finding suggests that heavy-element enrichment began quickly in the early universe, soon after the birth of the first stars. Although the Big Bang should have created only hydrogen, helium and trace amounts of lithium and beryllium, J1030's light revealed traces of heavier elements including carbon, nitrogen, oxygen and silicon, which must have been made by stellar processes and supernovae.
Spectra of J1030 and another remote quasar displayed unexpectedly strong lines of heavier elements, i.e., carbon, oxygen, nitrogen, and silicon (more on the "Gunn-Peterson trough" and the ionization of neutral hydrogen).
Calculations suggest that J1030 may have been at least 13 to 20 million years old when observed and so the first stars must have formed a few hundred million years prior to the observation of the quasar, possibly at a redshift of z ~ 8.7 when the universe was around 560 million years old (Pentericci et al, 2002; and Haiman and Cen, 2002). The quasar's element ratios are consistent with chemical evolution models suggesting the fast formation of high-mass stars within around half a billion years previously, similar to the nitrogen-rich environment of today's "Giant Elliptical" galaxies (Pentericci et al, 2002).
Quasars are peculiar objects that radiate as much energy per second as a thousand or more galaxies, from a region that has a diameter about one millionth that of the host galaxy. They are intense sources of gamma rays and X-rays as well as visible light. The power of a quasar depends on the mass of its central supermassive black hole and the rate at which it swallows matter. Almost all galaxies, including the Milky Way, are thought to contain supermassive black holes in their centers. Quasars represent extreme cases where large quantities of gas are pouring into the black hole so rapidly that their energy output (from their accretion disk and wind and/or bi-polar jets) are a thousand times greater than the galaxy itself. Because of their relative brightness, high-energy gamma and x-ray quasars have become important probes for astronomers studying distant reaches of the universe and its ancient past. By the turn of the century, over 50 high-energy quasars had been discovered.
As a swirling disk of gas gradually falls into the central black hole, it heats up and some of the gas is blown off the disk by intense radiation in a wind at speeds up to a tenth of light speed (more illustrations).
The three remote quasars discovered in the SDSS images during 2001 looked similar to ones that were less than half as old. Hence, some astronomers believe that the conditions around those central black holes did not appreciably change much in that time, contrary to some theoretical expectations. In addition, the masses of the black holes producing the X-rays are huge, given their relative youth. By various estimates, the three quasars had already accumulated between one and 10 billion Solar-masses. By comparison, the Milky Way's central black hole is believed to contain less than three million Solar-masses (more discussion).
One team of astronomers analyzed the three distant SDSS quasars with 14 other, somewhat closer quasars that lie between 12 and 12.5 billion ly away. They determined that the younger, more distant SDSS quasars were radiating a lower share of their energy in x-rays. This is consistent with some theoretical predictions that a hot gas atmosphere is associated with the accretion disks swirling around central supermassive black holes, provided that the distant quasars have more massive black holes than nearby ones (Bechtold et al, 2002).
Up-to-date technical summaries on this quasar are available at: NASA's ADS Abstract Service for the Astrophysics Data System; the SIMBAD Astronomical Database mirrored from CDS, which may require an account to access; and the NSF-funded, arXiv.org Physics e-Print archive's search interface.
Constellation Sextans is one of seven (including Canes Venatici, Lacerta, Leo Minor, Lynx, Scutum, and Vulpecula) that were introduced by Johannes Hevelius (1611-1687), who is mostly known for his charts of the Earth's Moon. The seven were included in his catalogue of 1564 stars, Prodromus Astronomiae, which was published by his wife three years after his death. In naming this simple constellation, Hevelius commemorated the sextant on which he relied to view the stars instead of a telescope. For more information about the stars and objects in this constellation, go to Christine Kronberg's Sextans. For an illustration, see David Haworth's Sextans.
© 2002-2011 Sol Company. All Rights Reserved. | <urn:uuid:7018ec73-4b70-4e70-8b8e-51b4dbd6daa3> | 3.359375 | 3,700 | Knowledge Article | Science & Tech. | 46.432898 |
Everyone knows that excessive acceleration accompanied by corresponding braking is an inefficient way to use stored energy, whether it be for ICEs or EVs. However, what about the situation where you accelerate from one speed to another at a constant rate, and hold the final speed (like pulling onto a highway)? I contend that it doesn't use any more energy whether you creep up to speed or make it fun.
Here's my reasoning: the work done to accelerate a rigid body just depends on the initial and final velocities, not the acceleration rate. If you accelerate twice as fast it takes half the time to reach your final speed, and you use the same amount of energy.
Now cars are not rigid bodies, and energy is lost due to friction, heat and the inefficiencies of the conversion of stored chemical energy to mechanical power. However, if these losses are linearly related to the acceleration rate, then again the energy used to accelerate from one speed to another is the same no matter how fast you accelerate.
Now here's my final curve: the aerodynamic drag might be negligible for the few seconds of going from 30 to 65 mph, but it becomes more significant the longer it takes to do that. So if you accelerated at a very low rate, and it took you an hour (or many hours) to go from 30 to 65 mph, then the aerodynamic drag will become a more significant factor in the energy used, and you will use MORE energy than if you accelerated faster.
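As a back-of-the-envelope check of this argument, here is a rough sketch using assumed, illustrative numbers (a 2,100 kg car, a drag area Cd*A of 0.56 m^2, air density 1.2 kg/m^3; these are not official Tesla figures): the kinetic-energy change is fixed by the start and end speeds, while the drag losses grow with how long the acceleration takes.

```python
m = 2100.0                         # vehicle mass in kg (assumed)
rho, cda = 1.2, 0.56               # air density (kg/m^3) and drag area (m^2), assumed
v1, v2 = 30 * 0.447, 65 * 0.447    # 30 and 65 mph converted to m/s

# Kinetic-energy change: depends only on initial and final speed,
# not on how quickly you get from one to the other.
delta_ke = 0.5 * m * (v2**2 - v1**2)
print(f"Delta KE ~ {delta_ke / 1000:.0f} kJ, independent of acceleration rate")

# Aerodynamic drag losses during the acceleration, approximated at the
# average speed: the longer you take, the more energy goes to drag.
drag_power = 0.5 * rho * cda * ((v1 + v2) / 2) ** 3
for seconds in (5, 20, 60):
    print(f"{seconds:>3} s to reach 65 mph: drag losses ~ "
          f"{drag_power * seconds / 1000:.0f} kJ")
```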
So I contend that putting your foot into it to pull onto a highway (as long as you don't exceed your desired final speed) doesn't use any more kW-Hrs than poking along. So (safely) enjoy yourself! Would be great to hear from a Tesla engineer on this one!
NASA recently announced that 97 percent of Greenland’s vast ice sheet had undergone at least some surface melting this summer, compared with a normal melt area of about 50 percent. The 2012 figure, said the headline on the space agency’s press release, was “unprecedented.”
That’s a powerful word in any context, but it’s especially so when you’re talking about the politically charged topic of climate change. If the melting was unprecedented, it would reinforce the idea that scientists are right about the dangers of human-generated greenhouse gases, and at the same time make it harder for skeptics to take potshots at the science.
The skeptics were naturally delighted, therefore, when it turned out that such widespread melting is anything but unprecedented. It happened most recently in 1889, and it happens, on average, every 150 years or so. This summer’s surface melt has very likely been influenced by global warming, but it might have happened anyway. The same goes for the heat waves that have pummeled large parts of the nation this summer and the drought that’s now destroying crops in the Midwest.
The drama and the hype – alarmist headlines, crowing skeptics, backtracking scientists (or at least, publicists) and a confused public – made me crazy. With one poorly chosen word, climate-change skeptics were handed an opportunity to sow more doubt and confusion about global warming.
I work at Climate Central, a nonprofit, nonpartisan research and news organization, and a good part of my work is dedicated to putting an end to just that sequence of events. Our central mission is communicating to a general audience what the science is and what it is not.
For the record: The science clearly shows that climate is changing largely as a result of greenhouse gas emissions. The science is equally clear that without rapid and drastic cutbacks in greenhouse gas emissions, the changes are likely to threaten life, property and Earth’s biosphere in all sorts of ways – including melting glaciers and worsening weather crises – by the end of century, and in many cases, much sooner. In fact, some of these changes are already happening.
But there is a lot science doesn’t know as well, especially what the link might be (if any) between climate change and specific events. It doesn’t know for certain what the future holds for, say, hurricanes. (The tentative conclusion: These storms could become fewer but more powerful.) It can’t say precisely how high sea level will rise, or how fast.
And there are outliers, scientists with (and sometimes without) legit credentials who doubt the mainstream conclusions. Yes, scientific “facts” are mutable: As data accumulate, knowledge changes. But just because science adjusts itself to new information (or because outliers invariably exist), it doesn’t make the mainstream wrong.
I’m not a scientist; I’m a science journalist. I translate the arcane terminology of experiments, data and theories into everyday English. In the current climate of global warming doubt – in which any uncertainty is used to discredit mountains of confirming data, where even a typo might promote misunderstanding – the job can be a minefield.
I just co-wrote a climate-change primer for Climate Central: 60 simple, bite-size dispassionate chapters about what scientists know on the topic, no more, no less. Accuracy is always the appropriate goal for a journalist, but for “Global Weirdness,” we were hyper-vigilant.
Each chapter was reviewed, down to the comma, by at least one of Climate Central’s doctorate-level staff scientists, and revised based on their critiques. Each chapter was then re-reviewed by at least one scientist outside our organization, drawn from a list of the world’s most eminent experts, and revised again.
It was something like the peer review process that scrutinizes scientific findings before they are published. No writer (and no scientist) particularly likes to have his or her work picked apart in this manner, but it’s the best way we know to get things right.
This doesn’t mean our work won’t come under criticism. The very fact that we take mainstream climate science seriously will paint us as partisan hacks in the eyes of those who insist the whole thing is a scam, and that includes some scientists with doctorates of their own. And we’ll no doubt also be criticized by those who think we aren’t scaring people enough. Without fear, they believe, people might not take action.
Who knows, the fear pushers might be right. But as convinced as I am that limiting greenhouse gases is important, I’m grateful for every time a critical scientist or editor has stopped me from making an “unprecedented” error.
In the end our best hope is sticking with the science as it is, not as any one person or cause wishes it might be.
After all, the truth is scary enough.
Michael D. Lemonick is a senior writer for Climate Central and a co-author of "Global Weirdness: Severe Storms, Deadly Heat Waves, Relentless Drought, Rising Seas, and the Weather of the Future." He wrote this for the Los Angeles Times.
What is Mitosis
Written by rekhacontentwriter
Did you know that the human body is made up of many trillions of cells? To put that in perspective: if you could stack that many sheets of paper into a pile, it would reach the moon and back, and you could repeat the trip many times. We rarely stop to think about it, but it is even harder to believe that each of us started out as just one cell, which then became those trillions of cells.
So how does this happen? How does a single cell become trillions of cells? This is where mitosis comes in. Mitosis is the process of cell division in which one eukaryotic cell (the mother cell) divides to produce two genetically identical daughter cells. More precisely, mitosis refers to the division of the cell's nucleus: the duplicated chromosomes are separated into two identical sets. Cytokinesis then follows, dividing the cytoplasm of the mother cell between the two developing daughter cells.
Before discussing mitosis further, you need to know the term interphase. Although mitosis is the biological process of cell division, a cell must first pass through interphase. Interphase is often described as a "holding" period, the interval between two consecutive cell divisions, during which the organelles and the genetic material are replicated.
Stages of mitosis
There are several stages of mitosis. Those are:
- Prophase – the chromatin condenses into distinct chromosomes, the spindle forms at opposite "poles" of the cell, and the nuclear envelope breaks down.
- Metaphase – the chromosomes line up along the metaphase plate, which is equidistant from the two spindle poles.
- Anaphase – the paired sister chromatids separate and move to opposite ends of the cell.
- Telophase – the last stage of mitosis, in which the chromosomes are enclosed in the distinct new nuclei of the daughter cells. This stage is also accompanied by cytokinesis, in which the cytoplasm of the mother cell is divided.
After this stage, two separate cells are produced that are genetically identical.
But don’t get confused with the meiosis. Mitosis and the meiosis are not same. Briefly described of the difference is:
- In mitosis, each daughter cell ends up with two complete sets of chromosomes.
- In meiosis, each daughter cell ends up with only one complete set of chromosomes.
This is all about mitosis. | <urn:uuid:802f61f9-d49f-4563-8b83-4e6c3e9dcde6> | 3.03125 | 605 | Knowledge Article | Science & Tech. | 43.825847 |
Personally, I’m not against the use of FF, but in order to have a consistent meaning, a word must always have the same abbreviation, don’t you agree? That’s my thoughts on it, anyway–kind of like the same reason why we have standardized HTML.
Let’s not get off topic, though. I don’t think Ultimater would appreciate that.
For me personally, this is the most useful prototype function in this thread.
Often I will want to know the position of... say the third occurrence of a string within another. This is similar to indexOf() but instead of just getting the first occurrence you can specify a second argument which will be the nth occurrence.
Specifying a negative value for n will count from the back rather than the front.
Specifying no value for n will presume the value 1 for n.
Specifying a string or decimal or zero for n will return -1.
If the string is not found at the nth occurrence it will also return -1.
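As an illustration of the behaviour described above, here is a small sketch in Python (the original is a JavaScript String prototype method, which is not reproduced here); the function and its name are my own, chosen only to mirror the rules listed: n defaults to 1, negative n counts from the end, and zero, non-integer, or out-of-range n returns -1.

```python
def index_of_nth(haystack, needle, n=1):
    """Position of the nth occurrence of needle in haystack, or -1."""
    if not isinstance(n, int) or isinstance(n, bool) or n == 0:
        return -1                      # zero, decimal, or non-numeric n
    positions = []
    start = haystack.find(needle)
    while start != -1:
        positions.append(start)
        start = haystack.find(needle, start + 1)
    if abs(n) > len(positions):
        return -1                      # not found at the nth occurrence
    return positions[n - 1] if n > 0 else positions[n]

print(index_of_nth("abcabcabc", "abc", 3))    # 6
print(index_of_nth("abcabcabc", "abc", -1))   # 6 (counting from the back)
print(index_of_nth("abcabcabc", "xyz"))       # -1
```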
Yep, it's fixed. I also made a fix to String.dex(); I didn't realise till I saw yours that indexOf() took 2 arguments. My version drains fewer resources from the computer as it only calls indexOf() as many times as it has to.
For those who don't know what factorial is: it is the number of sequences that can exist with a set of items, derived by multiplying the number of items by the next lowest number until 1 is reached. For example, three items have six sequences (3x2x1=6): 123, 132, 231, 213, 312 and 321. Also note that zero factorial (in maths noted by 0!) is equal to 1.
I guess the easiest way to beat that script for efficiency would be to simply store the factorials for all numbers up to 170 in a hard-coded array from the very beginning and return the appropriate NaN or Infinity otherwise. You'd never spend any time computing values, or checking for their existence in the array, since you already have them all.
Of course, that is a pretty lame approach...
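A sketch of that lookup-table idea might look like the following (in Python rather than JavaScript, with behaviour chosen to mirror JavaScript numbers: 170! is the largest factorial a double can hold, larger inputs give Infinity, invalid inputs give NaN).

```python
import math

FACTORIALS = [1.0]                          # 0! = 1
for i in range(1, 171):
    FACTORIALS.append(FACTORIALS[-1] * i)   # precompute 1! .. 170! once

def factorial(n):
    if not isinstance(n, int) or isinstance(n, bool) or n < 0:
        return math.nan                     # negatives and non-integers
    if n > 170:
        return math.inf                     # would overflow a double
    return FACTORIALS[n]                    # pure lookup, no computation

print(factorial(3))      # 6.0
print(factorial(0))      # 1.0  (zero factorial is 1)
print(factorial(171))    # inf
```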
Kids, kids... you tried your best, and you failed miserably; the lesson is: never try. | <urn:uuid:ef43eec7-8ddc-47be-a5f2-8802ace8882d> | 2.703125 | 475 | Comment Section | Software Dev. | 73.105358 |
Sometimes, distractions can be useful in themselves. That’s the message this week from the Planck space telescope, which has a mighty big mission: to take baby pictures of the universe. While it hasn’t yet accomplished that task, the preliminary disturbances that Planck scientists are now dealing with are yielding cosmic insights of their own.
Orbiting the Sun roughly 1.5 million kilometres from Earth, the Planck space-based telescope is scanning the sky for ultra-cold objects. Its instruments are chilled to just a tenth of a degree above absolute zero and are designed to pick up the faint microwave afterglow from the Big Bang, which scientists hope can tell them about the earliest moments of the Universe. [Nature News]
Planck was launched in spring of 2009 by the European Space Agency, and it’s still gathering data to complete its chart of this cosmic microwave background (CMB); researchers hope the map will shed light on the young universe’s brief “inflationary” period when it expanded extremely rapidly. At the moment, however, Planck is busy detecting other sources of microwaves so that it can subtract this “foreground” radiation from its map of the background.
So what are some of these sources?
OK, Mars wins this contest for bragging rights. The photo above shows the Melas Chasma on Mars, which reaches a depth of 5.6 miles; it's part of the staggering Valles Marineris rift valley, which stretches almost 2,500 miles across the surface of the red planet. For comparison's sake, our earthly Grand Canyon is 1.1 miles deep and 277 miles long.
This remarkable image was taken by the High Resolution Stereo Camera on the European Space Agency’s Mars Express orbiter. In addition to giving us something neat to gawk at, the image also reveals evidence of Mars’s watery past.
Part of the canyon wall collapsed in multiple landslides in the distant past, with debris fanning out into the valley below. Scientists analyzing the texture of the rocks deposited by the landslides say they were transported by liquid water, water ice, or mud. [ScienceNOW]
80beats: NASA’s New Mars Mission: To Study the Mystery of the Missing Atmosphere
80beats: It’s Alive! NASA Test-Drives Its New Hulking Mars Rover, Curiosity
80beats: Vast Ocean May Have Covered One-Third of Primordial Mars
80beats: Mars Rover Sets Endurance Record: Photos From Opportunity’s 6 Years On-Planet
Image: ESA/DLR/FU Berlin (G. Neukum)
All aboard for fake Mars!
Earlier today, a six-man crew battened down the hatches on an 1,800-square-foot module for 520 days of isolation as they pretend to go to Mars and back again. The Mars-500 project, run by the Russian Institute for Biomedical Problems (IBMP) and funded in part by the European Space Agency, hopes to test the psychological mettle required for such a journey.
“See you in 520 days!” shouted Russia’s Sukhrob Kamolov as he was sealed inside the simulator at around 1000 GMT. [Radio Free Europe/Radio Liberty]
The trip will have three stages, including the trip to and from Mars and a simulated landing and planet exploration.
Psychologists said the simulation can be even more demanding that a real flight because the crew won’t experience any of the euphoria or dangers of actual space travel. They have also warned that months of space travel would push the team to the limits of endurance as they grow increasingly tired of each other. [AP]
For lovers of stellar beauty, the Herschel space telescope may have already earned its keep. Just one year after its launch, researchers from the European Space Agency have released this stunning image of a massive star being born in a vast bubble of cold dust.
Herschel’s far-infrared detectors are finely attuned to stellar nurseries. When a star begins to form, the dust and gas surrounding it heats up to a few tens of degrees above absolute zero, and it begins to emit far-infrared wavelengths. In the galactic bubble shown, known as RCW 120, the newborn star is the white blob at the bottom of the bubble.
The “baby” star is perhaps a few tens of thousands of years old. It is some eight to 10 times the mass of our Sun but is surrounded by about 200 times as much material. If more of that gas and dust continues to fall in on the star, the object has the potential to become one of the Milky Way Galaxy’s true giants [BBC].
Giant stars pose a particular challenge to our understanding of star formation, researchers say. Present theories suggest that stars that are larger than about 10 solar masses shouldn’t exist, because their fierce radiation should blast away the clouds that feed them materials to grow on. Yet astronomers have spotted stars that have 120 times the mass of our Sun.
Click through the gallery for a couple more amazing shots from Herschel.
The European Space Agency has released the latest pictures of the Martian moon Phobos, taken by the European Mars Express (MEX) probe during its recent flybys. On one flyby, MEX skimmed just 42 miles above the surface of Phobos, which is the closest any manmade object has ever gotten to the little Martian moon.
The image above is from a flyby that brought MEX within 63 miles of the surface; its High Resolution Stereo Camera took photographs that have a resolution of 14 feet per pixel. The images are being scrutinized by the Russian space agency as it tries to settle on a landing site for its ambitious Phobos-Grunt mission next year–the two potential landing sites are marked by red dots in the picture above. The Phobos-Grunt mission aims to collect a soil sample from Phobos, and then to return the sample to Earth for analysis.
Engines powered by chemical fuel? How passé. For the spacecraft with truly modern flair, an ion thruster is the only way to go. Such a system might not provide powerful and dramatic bursts of speed, but space agencies around the world are recognizing the benefits of its slow-and-steady approach, which is just what’s needed for cruising between planets.
Ion propulsion works by electrically charging, or ionizing, a gas and accelerating the resulting ions to propel a spacecraft. The concept was conceived more than 50 years ago, and the first spacecraft to use the technology was Deep Space 1 in 1998. Since then … there have only been a few other noncommercial spacecrafts that have used ion propulsion [Technology Review]. However, the technology has a clear advantage over chemical propulsion when it comes to long distance missions, because a very small amount of gas can carry a spacecraft a long way. Astronautics expert Alexander Bruccoleri explains that with chemical propulsion, “You are limited in what you can bring to space because you have to carry a rocket that is mostly fuel” [Technology Review].
Now, a European Space Agency (ESA) probe will use four ion thrusters to scoot all the way to Mercury, the planet nearest to the sun. That mission won’t launch until 2014, but ESA officials say the $37 million propulsion system will be the most efficient yet, and will also be the most ambitious test of the technology to date. The Mercury probe will be launched by a conventional rocket, and will continue to use chemical propulsion until it’s out of Earth orbit. When it begins its six-year cruise to Mercury, though, its ion thrusters will kick in. The system will draw electricity from solar panels; as the xenon ions pass through the electrified grids they accelerate to up to 50km a second (31 miles per second) and shoot from the rear in a parallel beam. On Earth, at sea level, the thrust would be just enough to lift a pound coin. In space, however, the same thrust will create a much much bigger lift [Telegraph].
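For a sense of scale, here is a rough, illustrative calculation (my own numbers, not from the article): the thrust of an ion engine is approximately the propellant mass flow rate times the exhaust velocity, so at the quoted ~50 km/s exhaust speed an assumed flow of about 2 milligrams of xenon per second gives a force comparable to the weight of a pound coin.

```python
exhaust_velocity = 50_000.0   # m/s, as quoted in the text
mass_flow = 2e-6              # kg/s of xenon (assumed, for illustration)

thrust = mass_flow * exhaust_velocity          # F ~ mdot * v_exhaust
print(f"Thrust ~ {thrust * 1000:.0f} mN")      # ~100 mN

coin_mass = 0.0095                             # kg, roughly a pound coin
print(f"Pound-coin weight ~ {coin_mass * 9.81 * 1000:.0f} mN")   # ~93 mN
```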
Yesterday, Russian engineers cracked the wax seal on a metal hatch, and six men emerged from the simulated space capsule where they had spent the last 105 days in experiment designed to simulate the isolation of a manned trip to Mars. The experiment is part of a larger project dubbed “Mars 500.” The three months the men spent in isolation are a precursor to another simulation to take place in 2010, when another crew will submit themselves to 520 days in isolation, the projected time it would take for a return trip to Mars [ABC News].
The four Russians, one German, and one Frenchman were chosen from among 6,000 applicants, and were paid about $21,000 each for participating. Inside the mock capsule, they conducted experiments to test their physical and psychological reactions to the isolation, and performed many of the tasks that would keep Mars-bound astronauts busy. They had no television or Internet and their only link to the outside world was communications with the experiment’s controllers — who also monitored them via TV cameras — and an internal e-mail system. Communications with the outside world had 20-minute delays to imitate a real space flight [AP].
A European spacecraft that has been peering through the thick, roiling clouds of Venus for the past three years has found further evidence that the inhospitable planet once had oceans, volcanoes, and a system of plate tectonics similar to those at work on Earth. The Venus Express has mapped the planet’s southern hemisphere using infrared imaging, and found heat variations in the surface rocks, which allows researchers to speculate on the chemical composition of those rocks. Different surfaces radiate different amounts of heat at infrared wavelengths due to a material characteristic known as emissivity, which varies in different materials [SPACE.com].
In certain highland areas, researchers detected cooler patches of rock whose thermal signatures resemble those of granites on Earth. On our own planet, granites are made during the process of rock recycling that goes on at the edges of the great geologic plates that cover the Earth. At the boundaries of these plates, ancient rock is pulled deep into the planet, reworked with water and then re-surfaced at volcanoes. Critically, then, if there is granite on Venus, there must also have been an ocean and a process of plate movement in the past [BBC News].
The European Space Agency’s Planck observatory has reached its operating temperature of a mere tenth of a degree above the lowest temperature theoretically possible given the laws of physics, known as absolute zero. That means it’s ready for its mission: Observing the oldest light in the universe, known as the cosmic microwave background, or CMB, to create the clearest picture yet of what the young universe looked like.
Although scientists have achieved temperatures closer than this to absolute zero in the laboratory, the spacecraft is likely the coldest object in space. Such low temperatures are necessary for Planck’s detectors to study the Cosmic Microwave Background by measuring its temperature across the sky. Over the next few weeks, mission operators will fine-tune the spacecraft’s instruments. Planck will begin to survey the sky in mid-August [SPACE.com], and the first batch of data is expected to be released next year. Planck was launched May 14 and will observe the CMB from a spot more than 930,000 miles from Earth.
The solar probe Ulysses has circled the sun for more than 18 years–almost as long as the Greek hero Odysseus, also called Ulysses, was absent from home due to the Trojan War and his prolonged journey home–but the space probe doesn’t have a homecoming in its future. Ulysses will receive its final transmission tomorrow, as researchers say the scientific findings sent home by the failing spacecraft no longer justify the mission’s costs. After shut-off, Ulysses will continue to orbit the Sun, becoming in effect a man-made ‘comet’. “Whenever any of us look up in the years to come, Ulysses will be there, silently orbiting our star, which it studied so successfully during its long and active life” [SPACE.com], says mission manager Richard Marsden.
The craft has already exceeded expectations. In February 2008, mission engineers announced with great solemnity and with heaps of praise for the orbiter that the craft would fall silent within a few months. Its power supply had grown too weak to keep the craft’s fuel lines from freezing. Not so fast: Engineers figured out that they could keep the lines warm by firing the craft’s thrusters in short bursts every couple of hours [The Christian Science Monitor]. Using that clever fix, Ulysses soldiered on for another year. | <urn:uuid:ddb51c9f-ad60-498d-a38a-8d35fbb2d1d4> | 3.46875 | 2,681 | Content Listing | Science & Tech. | 47.30782 |
A letter published March 21 in the online edition of the journal Nature indicates that there are evolutionary reasons why some animals that mimic other more aggressive or dangerous species do so rather imperfectly.
The study conducted by a team of Canadian researchers indicated that, at least in hoverfly populations, predators impose less selection if the mimic is smaller. Thus, small mimic species need to have less fidelity with the species they are mimicking than do larger ones. The team concludes that the most likely reason for this is that the mimics are less profitable prey species and are not apt to be pursued as strongly as are larger species. So, less fidelity will do.
The team was also able to show little or no correlation of mimicry to several other theories that had been posited over the years as explanations for the imperfections. Among these, they were able to show that human ratings of mimetic fidelity are positively correlated with both morphometric measures and avian rankings of the mimicry, indicating that variation in mimetic fidelity is not just an illusion based on human perception.
See here for the complete paper. | <urn:uuid:2e86ea3f-355c-420e-8552-8316df8ddc7d> | 3.1875 | 220 | Personal Blog | Science & Tech. | 27.132765 |
by Katie Bowell, Curator of Cultural Interpretation
Remember when we showed you how cornstarch combined with water creates a non-Newtonian fluid that acts a lot like Flubber? Well, it turns out that’s not the only way to get the “Flubber” effect. Researchers at the California Institute of Technology have figured out how to make water bounce, hop, break and merge. You just need a superhydrophobic carbon nanotube surface and a microscope to see it. Now why didn’t we think of that?
And for those of you (including me) who needed a short introduction to the world of nanotechnology, I’ve always found that singing puppets put everything into perspective. | <urn:uuid:b4a869b2-0858-497c-b0f8-54eb2f54b298> | 3.09375 | 157 | Personal Blog | Science & Tech. | 45.471379 |
FAST is a new telescope that has panels that can shift position to create smaller curved dishes anywhere on its surface.
A Moment of Science gives a Science Hero Award to Galileo!
What makes Galileo our science hero? For one thing, he moved human knowledge forward against outstanding prejudice and even persecution. Learn more on this Moment of Science.
Are self replicating robots in our near future? Find out on this Moment of Science.
What's the lowest note in the universe? The answer may surprise you. Find out on this Moment of Science.
Is it true that if the laws of nature were changed even a little bit, there would be no people in the universe? Find out on this Moment of Science.
Are there really alien civilizations out there that we don't know about? Find out on this Moment of Science: "The Fermi Paradox".
Scientists from Johns Hopkins gathered detailed data measuring light from over two hundred thousand galaxies, some of which are up to several billion light years away from earth. | <urn:uuid:3c22ac96-29c5-43df-a147-de6195bb282b> | 3.109375 | 206 | Content Listing | Science & Tech. | 56.249333 |
Sigma Space MPL (Micropulse LIDAR) measures the amount and vertical distribution of volcanic ash, clouds, and other aerosols above Bariloche, Argentina airport. View Map
See below for HOW TO INTERPRET THE DATA.
HOW TO INTERPRET THE DATA
Based on the same principles as radar, the MPL transmits laser pulses into the atmosphere and receives the backscattered light.
The laser light transmitted by the MPL is polarized light. The MPL receiver measures the light scattered back from the atmosphere in two separate channels. One channel looks at the scattered light that has the same polarization as the transmitted light. This is the co-pol or co-polarization signal. The other channel looks at the scattered light whose polarization has changed, or rotated by 90 degrees, from the transmitted light. This is cross-pol or cross-polarization signal.
We are displaying two separate graphs of real-time data. The top one shows the co-polarized backscatter and the bottom one shows the ratio of the cross- and co-polarized backscatter, known as the depolarization ratio. A large depolarization ratio indicates the presence of volcanic ash (asymmetrical particles) and gives the vertical distribution of volcanic ash above the airport. These lidar measurements are proportional to the amount of volcanic ash at a given height.
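As a rough illustration of how the second graph is derived, here is a minimal sketch that computes a depolarization-ratio profile from two backscatter profiles. It is not the instrument's actual processing chain (which also involves background subtraction and range correction), and the profile values and the 0.3 threshold below are made-up example numbers.

```python
import numpy as np

def depolarization_ratio(cross_pol, co_pol, eps=1e-12):
    """Ratio of cross- to co-polarized backscatter in each range bin (height)."""
    return np.asarray(cross_pol, dtype=float) / (np.asarray(co_pol, dtype=float) + eps)

# Example profiles for five range bins (arbitrary units).
co    = np.array([10.0, 9.0, 8.0, 7.5, 7.0])
cross = np.array([ 0.3, 0.4, 3.0, 2.8, 0.2])

ratio = depolarization_ratio(cross, co)
print(ratio)        # elevated values in the middle bins
print(ratio > 0.3)  # bins flagged as likely non-spherical particles, e.g. ash
```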
General best practices for performance
Just as you plan for every feature in your app, you need to plan for performance. Planning for performance consists of determining what your performance-critical scenarios will be, defining what good performance means, and measuring early enough in the development process to ensure that you can be confident in your ability to hit your goals.
You don't need to completely understand the platform to reason about where you might need to improve performance. By knowing what parts of your code execute most frequently, you can determine the best places to optimize your app.
The users' experience is a basic way to define good performance. For example, an app's startup time can influence a user's perception of its performance. A user might consider a launch time of less than one second to be excellent, less than 5 seconds to be good, and greater than 5 seconds to be poor.
Sometimes you also have to consider other metrics that don't have a direct impact on the user experience. An example of this is memory consumption. When an app uses large amounts of memory, it takes memory away from the rest of the system, causing the system as a whole to appear sluggish to the user. It is difficult to set a goal for the overall sluggishness of the system, so setting a goal for memory consumption is reasonable.
When defining your performance goals, take into consideration the perceived size of your app. Users’ expectations for the performance of your app may be influenced by their qualitative perception of how big your app is, and you should take into account whether your users will consider your app to be small, medium, or large. As an example, you might want a small app that doesn't use a lot of media to consume less than 100MB of memory.
As part of your plan, define all of the points during development where you will measure performance. Measuring performance serves different purposes depending on whether you measure during the prototyping, development, or deployment phase of your project. In all cases, measure on a representative device so that you get accurate info. For more info about how to measure your app’s performance in Visual Studio, see Analyzing the performance of Windows Store apps.
Measuring your app’s performance during the early stages of prototyping can add tremendous value to your project. We recommend that you measure performance as soon as you have code that does meaningful work. Early measurements give you a good idea of where the important costs are in your app, and inform design decisions. This results in high performing apps that scale well. It can be very costly to change design decisions later on in the project. Measuring performance too late in the product cycle can result in last minute hacks and poor performance.
Measuring your app’s performance during development time helps you:
- Determine if you are on track to meet your goals.
- If you are not on track, find out early if you need to make structural changes, such as data representation, in order to get back on track.
You don't need to optimize every part of your app, and performance improvements to the majority of your code usually don't result in a material difference to the user. Measure your app's performance to identify the high traffic areas in your code, and focus on getting good performance in only those areas. Often, there is a trade-off between creating software that follows good design practices and writing code that performs at the highest optimization. It is generally better to prioritize developer productivity and good software design in areas where performance is not a concern.
Windows 8 can run on many devices under a variety of circumstances and it is impossible for you to simulate all the conditions in which your app will run. Collecting telemetry about your app's performance on user machines can help you understand what your end-users are experiencing. This can be accomplished by adding instrumentation to various parts of your application and occasionally uploading the data to a web service. From this info, you can determine what the average user sees and what the worst and best case performance of your app is. This will help you decide which aspects of performance to focus on for the next version of your app.
Let's look at a few key performance best practices that are specific to Windows Store apps.
When you create an app, be aware that the type of computer that your users have might have significantly less power than your development environment. Windows 8 was designed with low power devices, such as tablets, in mind. Also, Windows RT uses these same design principles to take advantage of the low power consumption characteristics of ARM-based platforms. Windows Store apps need to do their part to ensure that they perform well on these devices. Operations that seem to run fast on your development machine can have a big impact to the user-experience on a low power device. As a heuristic, expect that a low power device is about 4 times slower than a desktop computer and set your goals accordingly.
Testing your app early and often on the type of computer that you expect the user to have allows you to form a realistic understanding of how your users will experience your app.
Most of the performance work you do naturally reduces the amount of power your app consumes. The CPU is a major consumer of battery power on devices, even at low utilization. Windows 8 tries to keep the CPU in a low power state when it is idle, but activates it every time you schedule work. You can further reduce your app's consumption of battery power by ensuring that your app doesn't have timers that unnecessarily schedule work on the CPU when it is idle. For example, an app might poll for data from web services and sensors (such as the GPS). Consider battery consumption when deciding how often you poll for data.
This is also an important consideration for animations that require constant updates to the screen and keep the CPU and graphics pipeline active. Animations can be effective in delivering a great user experience, but make a conscious choice about when you use them. This is particularly important for data-driven apps where the user may be looking at your app but not interacting with it. For example, a user may spend a while looking at content in a news reader or photo viewer without interacting with the app. It can also be wasteful to use animations in snap mode because the user is not giving the app their full attention.
Many apps connect to web services to get new info for the user. Reducing the frequency at which you poll for new info can improve your battery consumption.
Reducing memory helps avoid sluggishness and is even more important for Windows Store apps because of the Process Lifetime Management (PLM) system. The PLM system determines which apps to terminate, in part, by the app's memory footprint. By keeping your app's memory footprint low, you reduce the chance of it getting terminated when it is not being used. For example, you can reduce your app’s memory footprint by releasing unnecessary resources such as images on suspend.
The PLM system may also choose to swap out your app to disk, and swap it back into memory the next time it is switched to. If you lower the memory footprint of your app it will resume faster.
The related topics contain more in-depth performance best-practices for developing Windows Store apps. These best practices cover topics that are likely to be the source of performance issues in your app. But these best practices will only make a difference if they are on performance critical paths of your app. We recommend that you follow the principles on this page and determine if applying these best practices will help you achieve your performance goals.
Build date: 11/29/2012 | <urn:uuid:1d61b10a-3472-4e2b-a556-265f9c3217eb> | 2.796875 | 1,549 | Documentation | Software Dev. | 43.718854 |
The effect of removing large numbers of gulls Larus spp. on an island population of oystercatchers Haematopus ostralegus: implications for management.
Harris, M.P.; Wanless, S. 1997. The effect of removing large numbers of gulls Larus spp. on an island population of oystercatchers Haematopus ostralegus: implications for management. Biological Conservation, 82 (2), 167-171. doi:10.1016/S0006-3207(97)00019-0. Full text not available from this repository.
Predation on other breeding species has been used to justify culling adult gulls at several colonies but few studies have been carried out to assess the effects of gull control on these species. On the Isle of May, southeast Scotland, numbers of herring gulls Larus argentatus and lesser black-backed gulls L. fuscus increased rapidly during the 1960s and large scale gull control was implemented in 1972 which continued, albeit at a reduced level, until 1988. Prior to the start of the cull, there was a small breeding population of oystercatchers Haematopus ostralegus. In contrast to the British population which increased markedly during the 1950s and 1960s, numbers on the Isle of May remained more or less stable during this time. However, immediately following the start of gull control, the number of oystercatcher breeding territories rose and the increase continued throughout the period of control, with the rate of increase being above the British average over the same period. Prior to the cull, oystercatcher breeding success was extremely low with most losses of eggs and chicks attributable to gull predation. However, even after gull numbers had been reduced, breeding success remained low and gulls were the main cause of failure. The increase in numbers of oystercatchers could not have been sustained without substantial immigration. Thus, although the reduction in gull numbers had made the Isle of May more attractive to oystercatchers, breeding conditions were not improved markedly.
Programmes: CEH Programmes pre-2009 publications > Other
CEH Sections: _ Pre-2000 sections
Additional Keywords: Oystercatcher, gulls, culling, management, population dynamics, Isle of May
NORA Subject Terms: Zoology
Date made live: 09 Dec 2010 17:00
A point P is selected anywhere inside an equilateral triangle. What can you say about the sum of the perpendicular distances from P to the sides of the triangle? Can you prove your conjecture?
We are given a regular icosahedron having three red vertices. Show that it has a vertex that has at least two red neighbours.
Can you make sense of the three methods to work out the area of the kite in the square?
Biren Patel, Heathland School, Hounslow, England; Andrei, School no. 205, Bucharest, Romania; and Ang Zhi and Chai from River Valley High School, Singapore sent in very good solutions. Here is Chai's solution.
Prove that if n is a triangular number then 8n+1 is a square number. Prove, conversely, that if 8n+1 is a square number then n is a triangular number.

If n is a triangular number then n = k/2 (k + 1) for some whole number k. Substituting n = k/2 (k + 1) into 8n + 1 gives

8n + 1 = 4k(k + 1) + 1 = 4k^2 + 4k + 1 = (2k + 1)^2

Therefore, if n is a triangular number then 8n + 1 is a square number.

To prove the converse, let X^2 = 8n + 1 be a square number.

As 8 is an even number, 8n will always be an even number. If 8n is an even number, then 8n + 1 will always be an odd number. X cannot be even because the square of an even number is even. Hence X = 2k + 1 represents an odd number, where k can be any whole number.

(2k + 1)^2 = 8n + 1
4k^2 + 4k + 1 = 8n + 1
4k^2 + 4k + 1 - 1 = 8n
4/8 (k^2 + k) = n
(k^2 + k)/2 = n

Therefore, n = k/2 (k + 1), so n is a triangular number.
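A quick numerical check of the same correspondence, assuming only the statements proved above (here tested for n up to 199):

```python
import math

def is_triangular(n):
    """True if n = k(k+1)/2 for some whole number k."""
    k = (math.isqrt(8 * n + 1) - 1) // 2
    return k * (k + 1) // 2 == n

def is_square(m):
    r = math.isqrt(m)
    return r * r == m

# n is triangular exactly when 8n + 1 is a perfect square.
for n in range(200):
    assert is_triangular(n) == is_square(8 * n + 1)
print("checked n = 0..199")
```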
This calculator computes the velocity of a circular orbit, where:
vcir is circular velocity.
G is the gravitational constant.
M is the combined mass of the central body and the orbiting body. In the case of spacecraft, the mass of the spacecraft is insignificant and may be ignored.
r is the distance between the orbiting body and the central body.
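The same formula, vcir = sqrt(G·M / r), can be evaluated directly. Here is a minimal sketch in Python; the 400 km Earth-orbit example and the constants used are illustrative values, not part of the original calculator.

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def circular_velocity(M, r):
    """Circular orbital speed in m/s for combined mass M (kg) at separation r (m)."""
    return math.sqrt(G * M / r)

# Example: a spacecraft 400 km above Earth's surface; the spacecraft's own mass
# is negligible, so M is just Earth's mass (~5.972e24 kg, mean radius ~6.371e6 m).
print(circular_velocity(5.972e24, 6.371e6 + 400e3))  # roughly 7.7 km/s
```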
Click here for more formulas | <urn:uuid:2dba5f07-ffcb-4860-8110-cafe4abf9e56> | 3.328125 | 80 | Tutorial | Science & Tech. | 45.264615 |
Honeybees. Apis mellifera. Innocuous, diligent workers with only one mission in mind: to survive and provide for the hive. You would hardly ever come across a honeybee taking a break or straying from routine. Why, then, are honeybees in California and South Dakota suddenly abandoning their hives at night? Some seem to wander and stumble aimlessly, as if disoriented. Clumps of dead bees inexplicably turn up under light fixtures in the morning.
What’s gotten into the honeybees lately? The ominous, horrific, literal answer is this: parasites.
A species of fly called Apocephalus borealis is responsible for this suspicious bee-havior (I couldn’t resist using at least one bee pun). Here’s how it works. The fly lays eggs inside the body of a honeybee, which then serves as an incubator. The bee becomes a walking, buzzing time bomb until the eggs hatch. As the larvae hatch and grow, they suck up nutrients from their surrounding environment (i.e. their bee host). The bees become disoriented as their bodies change from within, often leading them astray from their natural habitats. Eventually, the fully grown fly larvae crawl out of the honeybee’s body and develop into adult flies, beginning the cycle all over again.
You’d almost expect this scene to come straight out of a science fiction film, but this is a tangible and real process that scientists have been observing.
Hoping to find whether this form of parasitism is distributed widely across North America, the San Francisco State University Department of Biology, San Francisco State University Center for Computing for Life Sciences, and the Natural History Museum of LA County have collaboratively launched ZomBee Watch.
Citizen scientists can contribute to this project by helping collect sick-looking or dead bee specimens to observe whether parasitic pupae emerge. It might sound a bit morbid (harvesting bee carcasses and all), but I say it’s mostly awesome. (Don’t worry, the parasites don’t affect humans…as far as we know.)
More importantly, this project is brand spankin’ new, so every piece of data collected would be immensely helpful to the scientists behind this burgeoning research. I’m keeping my fingers crossed that the human race won’t have to deal with its own zombie apocalypse anytime soon. For now, while we have the upper hand, we can help scientists with this valuable study. | <urn:uuid:842c1eee-ec42-47e1-a671-04b2ce1483e1> | 3.015625 | 514 | Personal Blog | Science & Tech. | 47.253314 |
Science subject and location tags
Articles, documents and multimedia from ABC Science
Wednesday, 8 May 2013
The environment has an unexpectedly strong effect on the chemical composition of the cuticle, or body surface, of stingless bees, a new study shows.
Tuesday, 30 April 2013
Cicada wings are cleaned by dew drops jumping off the wings' water repelling surface taking any dirt with them, scientists have discovered.
Wednesday, 10 April 2013
Great Moments in Science Worker bees are a lot lazier than they sound says Dr Karl, but they've got a lot of guts when it comes to taking one for the hive. Literally.
Wednesday, 10 April 2013
Insecticide-resistant mosquitoes that block dengue could be the key to controlling the disease, reports a new study.
Tuesday, 2 April 2013
Ask an Expert Why do dragonfly wings look glassy? What are they made of?
Friday, 1 March 2013
Wild insects are crucial for crop pollination but they are declining, and their role cannot be compensated for by managed honeybees, say experts.
Wednesday, 27 February 2013
Cockroaches balance themselves without using their brains.
Friday, 22 February 2013
Electric connection Flowers may be silent, but scientists have just discovered that electric fields allow them to communicate with bumblebees and possibly other species, including humans.
Wednesday, 20 February 2013
Energy levels rather than size determines which male golden orb-web spiders are more likely to father offspring, new research shows.
Thursday, 17 January 2013
The complex social structure of the red fire ant is made possible by a DNA fusion known as a supergene, say biologists.
Tuesday, 18 December 2012
The iconic New Zealand cricket-like weta has ears similar to those of a whale, researchers have found.
Tuesday, 27 November 2012
Temperature increases expected due to climate change are likely to change the foraging behaviour of the humble ant.
Wednesday, 14 November 2012
City grasshoppers are changing their tune in an effort to be heard by potential mates over the noise from their urban environment.
Tuesday, 30 October 2012
South Korean scientists have copied the structure of a firefly's underbelly to create what they say is an improved and cheaper LED lens.
Friday, 26 October 2012
Dung beetles use the balls of manure they collect as mobile air conditioning units. | <urn:uuid:bb6e31ec-5d04-438f-8b34-6584db50c9d6> | 2.8125 | 494 | Content Listing | Science & Tech. | 43.396463 |
Copyright © 2013 Elsevier Ltd All rights reserved.
Current Biology, Volume 23, Issue 3, R102-R103, 4 February 2013
Correspondence
Adaptive aerial righting during the escape dropping of wingless pea aphids
# Contributed equally to this work
- Pea aphids (Acyrthosiphon pisum) are small sap-sucking insects that live on plants in colonies containing mostly wingless individuals. They often escape predators, parasitoids and grazing mammalian herbivores by dropping off the plant [1,2], avoiding immediate danger but exposing themselves to ground predators, starvation and desiccation . We show here that dropping pea aphids land on their legs, regardless of their initial orientation on the plant (like a defenestrated cat), by rotating their body during the fall. This righting ability is intriguing, as wingless aphids have no specialized structures for maneuvering in mid-air. Instead, they assume a stereotypic posture which is aerodynamically stable only when the aphids fall right-side up. Consequently, the body passively rotates to the stable upright orientation, improving the chance of clinging to leaves encountered on the way down and lowering the danger of reaching the ground.
We evoked dropping behavior in aphids situated on a fava bean (Vicia faba) stem by introducing a predator (ladybug, Coccinella septempunctata). The stem was positioned at different heights above a viscous substrate (petroleum jelly) that captured the landing posture. We found that up to 95% of the aphids landed upright after dropping 20 cm (Figure 1A). The aphid’s body appendages play an important role in aerial righting: when dropped upside-down from delicate tweezers, live aphids (n = 20), dead aphids (random appendage posture, n = 23) and aphids with amputated appendages (n = 25) landed on their ventral side in 95%, 52% and 28% of the trials, respectively (Fisher Exact, p < 0.001). High-speed video visualization of the fall revealed that aphids do not jump off the plant, but rather release their hold, allowing gravity to accelerate them downwards. The aphids start rotating after falling a few body lengths (Supplemental Movie S1. Aerial righting of the pea aphid 1 and Movie S2. Aerial righting of the pea aphid 2) reaching a final right-side up orientation within the first 13.7 cm of the fall (∼170 ms) in 90% of the trials (n = 45). Early during the fall aphids assumed a stereotypic posture and maintained it throughout. The aphids moved their antennae forward and up and the hind tibiae backward above the body. In that posture, the aphids reached the ground with the long axis of the body tilted upward so that their ventral-caudal end touched the ground first (Figure 1A,B).
The stereotypic posture was used to construct a mathematical-physical ‘model aphid’ using mean mass, volume and mass-moment of inertia, measured from five aphids (Supplemental information ). Using the model, we simulated body rotations due to air resistance acting on the appendages during the free fall. The simulations show that the stereotypic posture provides static longitudinal stability; i.e., at any starting orientation, the air resistance on the appendages works to return the body to a balanced (zero net aerodynamic torque) orientation, such that the ventral side faces downwards and the longitudinal axis of the body is tilted at 32.6° upwards (Figure 1B). This aerodynamic mechanism is based on the anisotropic drag of a slender (length/diameter >10) cylinder, where the drag of a cylinder aligned normal to the flow is greater than the drag of the same cylinder in axial flow . By orienting the different segments of the appendages at specific angles at a distance from the center of mass, the falling aphids create a pitching torque imbalance that works to rotate the body to the stable orientation. The stable orientation obtained in the model is only 0.6 standard deviations higher than the mean orientation angle (23.9 ± 14.4°) observed in falling aphids.
Controlled descent and gliding are not uncommon in wingless arboreal arthropods [5,6,7] and aerial righting has been demonstrated in larval stick insects . Controlled descent and righting reflexes may have been primordial precursors for the development of insect flight [6,7] as they improve the fitness of arboreal species trying to avoid reaching the ground . We therefore hypothesized that aphids falling upright would be more successful in stopping the fall on a lower part of their host plant by clinging to leaves they hit during the fall. When aphids (n = 56) were released over a tilted broad bean leaflet (Supplemental Movie S3. Landing on a leaf), 54% of the 35 aphids that landed in a ventral posture stayed on the leaflet. All 21 aphids that landed in a non-ventral posture bounced or rolled off the leaflet (Supplemental Movie S4. Caudal landing). Apparently, upright aphids use their tibial pulvilli (adhesive pads ) to cling to leaves at landing, since abscising the tips of the aphids’ legs reduced their ability to stay on the leaf (Figure 1C; Supplemental Movie S5. Gripping the surface).
Small body size provides a scaling advantage for falling aphids. First, small creatures reach lower terminal falling velocities , meaning that aphids take a longer time to fall the same distance than larger creatures. This also eliminates the risk of physical damage at impact. Second, in the flow regime typical of tiny aphids, the aerodynamic force coefficients of cylindrical appendages increase steeply with the decrease in size and speed (Reynolds number < 20; Supplemental information ). Finally, the aerodynamic torques that rotate the body depend on area and length (scale with body length to the power of 3), while the mass-moment of inertia that these torques need to overcome scales with body length to the power of 5. Consequently, smaller size entails an increase in the aerodynamic-torque: rotational-inertia ratio, thus resulting in faster (more agile) righting. Combining these general predictions explains how slender cylindrical appendages, normally used for walking and sensing, suffice to right aphids in less than 0.2 seconds while falling at low speeds.
The righting mechanism described here requires no dynamic control or constant feedback from the nervous system during the righting itself. It works by simply assuming a specific posture. Dead aphids landed ventrally less frequently than live aphids, suggesting that this posture is not a simple consequence of air resistance aligning the appendages with the direction of the fall. Rather, the aphids actively assume the descent posture (a tarsal or other reflex) and then allow gravity to do the rest.
We thank O. Bahat for discussions, and J. Covaliu, T. Gish and M. Ford for English copyediting. G.R. and D.W. thank LFN, T. Yehoshua and the Fine Fellowship.
- Movie S1. Aerial righting of the pea aphid 1 (AVI 3929 kb)
- An aphid dropping from a horizontal stem in response to a foraging ladybug (moving on the other side of the stem).
- Movie S2. Aerial righting of the pea aphid 2 (AVI 4456 kb)
- An aphid dropping from a vertical stem in response to a foraging ladybug (moving upward on the left side of the stem).
- Movie S3. Landing on a leaf (AVI 9759 kb)
- After righting itself in mid-air, the pea aphid lands on its legs and grips the surface of the leaf.
- Movie S4. Caudal landing (AVI 4461 kb)
- The film shows non ventral landing on a leaf resulting in the aphid bouncing off the leaf. The aphid was dropped from a height of 12.5 cm.
- Movie S5. Gripping the surface (AVI 3722 kb)
- When the aphid lands on its legs (demonstrated here on a piece of paper), it is able to grip the surface and resist further bouncing and rolling.
10. Haldane, J.B.S. (1927). On being the right size. pp. 20–28 in Possible Worlds and Other Essays (London: Chatto & Windus).
[pygtk] confused about the expand and fill in table layout.
krmane at gmail.com
Wed Feb 20 05:03:26 WST 2008
Can someone please help me understand the use of fill and expand in table layout?

My understanding of the table layout is that left_attach specifies which column to use (0 being the first column), right_attach specifies up to which column the widget should extend (actual position starting from 1), and the same goes for top and bottom in the context of rows. That means top specifies which row, starting from 0, the widget starts in, and bottom states the row at which the widget ends.

If this understanding is right, then what is the utility of expand?

My problem is how to make a table with labels and text entries paired up, two pairs per row, meaning 4 columns. The issue is that they should be neatly aligned (justified) and it should not appear as one widget taking all the space. I would prefer that the table shrink to fit the widgets, but that should not cramp the widgets at all, and if the user resizes (maximises) the window, the widgets must also adjust accordingly.

I will be using a VBox where the top 60% is taken by the table layout, then there has to be some empty rows to create a gap, and at the bottom of the screen I would put a button box to hold all the buttons. So: the table with all the widgets, followed by some blank space in the VBox, and then the button box. I can put the button box at the bottom of the VBox by using pack_end, but I want neither the table nor the button box to swell up to occupy that blank space.

I am confused about whether the expand and fill options can or cannot be put to use here.
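A minimal sketch of the kind of layout described above, assuming PyGTK 2's gtk.Table and gtk.VBox APIs. The field names and the two-pairs-per-row arrangement are only example choices; the key points are the xoptions/yoptions flags passed to attach() and packing both the table and the button box with expand=False so neither swells into the blank space.

```python
import gtk  # PyGTK 2.x

win = gtk.Window()
win.connect("destroy", gtk.main_quit)

vbox = gtk.VBox(spacing=12)
table = gtk.Table(rows=2, columns=4, homogeneous=False)

fields = [("Name:", "Phone:"), ("Street:", "City:")]
for row, (left_label, right_label) in enumerate(fields):
    # Labels only FILL their cell; entries EXPAND|FILL so they grow on resize.
    table.attach(gtk.Label(left_label), 0, 1, row, row + 1,
                 xoptions=gtk.FILL, yoptions=gtk.FILL)
    table.attach(gtk.Entry(), 1, 2, row, row + 1,
                 xoptions=gtk.EXPAND | gtk.FILL, yoptions=gtk.FILL)
    table.attach(gtk.Label(right_label), 2, 3, row, row + 1,
                 xoptions=gtk.FILL, yoptions=gtk.FILL)
    table.attach(gtk.Entry(), 3, 4, row, row + 1,
                 xoptions=gtk.EXPAND | gtk.FILL, yoptions=gtk.FILL)

# expand=False keeps the table and the button box at their natural height,
# so the blank space in the middle is not swallowed by either of them.
vbox.pack_start(table, expand=False)
button_box = gtk.HButtonBox()
button_box.add(gtk.Button("OK"))
button_box.add(gtk.Button("Cancel"))
vbox.pack_end(button_box, expand=False)

win.add(vbox)
win.show_all()
gtk.main()
```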
More information about the pygtk | <urn:uuid:88999337-5d58-46c6-bd2f-6aaf2b5f0423> | 2.796875 | 407 | Comment Section | Software Dev. | 57.75824 |
Written By Dan Suthers based on explanations by Scott Ferguson, June Firing, Kyle Hogrefe, and Danny Merritt.
Physical Oceanography studies the movement of ocean water (ranging from small scale oscillations to basin-wide flows) and the physical and chemical characteristics of that water (e.g., temperature and salinity). Most research on currents and water characteristics has focused on large scale patterns, such as gyres that flow around ocean basins . Water of different densities does not mix easily. Density depends on temperature and salinity. Therefore, if you measure the temperature and salinity of water at different depths you can track the same body of water as it moves around an ocean basin (e.g., the North Pacific) and even around the globe.
In this research, measurements are often taken on a large scale. For example, during a transect a ship might make measurements of water temperature and salinity at different depths, but do so only at every degree of longitude or latitude: about every 60 miles. One square kilometer is a standard resolution for satellites that calculate sea surface temperature. This research helps us understand major influences on reef life, such as transport of warm or cold water, nutrients, and possibly organisms from one region to another. However, the data gathered for oceanography at this scale is not detailed enough to understand how water characteristics influence life at a local level.
Reef-specific oceanography studies the currents and water characteristics on a local scale. The topography of a reef--where it's shallow and deep; how it's situated relative to ocean currents and prevailing winds--can influence water temperature and the distribution of nutrients beyond what can be predicted by looking only at large scale currents in the region of the reef. Marine biologists such as those on this expedition study the distribution of marine life on a local scale, comparing for example life at different depths and within and outside an atoll, as well comparing reefs at different locations in the island chain. In order to understand the distribution of life they are seeing, they need reef-specific oceanographic data.
Data for reef-specific oceanography is being gathered by taking direct measurements while we are up here, by leaving sensors in place that send their data back by satellite or are retrieved later, and by satellite imagery. The first two forms of data, measured directly on site, help scientists to "ground truth" satellite data: observations in the field are compared to what the satellite sees so that we know how to interpret the images.
Reef Oceanography data is being gathered on this expedition by two teams: Night Operations (Scott Ferguson, assisted by Drew Rapp), who operate off the Hi`ialakai; and the Mooring Team (Elizabeth Keenan, Danny Merritt, Stephani Holzwarth, and team leader Kyle Hogrefe), who operate off of the jet boat HI-2. The instruments they use and what they measure are described below. As we consider this diverse collection of instruments we should keep in mind that they are all contributing to the same objective: to have an empirically based model of how ocean water characteristics are distributed across space and time. The most important water characteristic studied from the point of view of reef health is temperature. See our article on coral bleaching for a discussion of how temperature can affect the symbiotic relationship between coral and algae. Reef life is also strongly affected by the availability of nutrients. Both temperature and nutrients, in turn, are affected by the movement of water, so some of the measurements made are intended to give a profile of the bodies of water surrounding reefs.
Yearly Water Column Measurements
Some measurements are only taken about once a year, when researchers can come up to the locations where the measurements are made.
Ship-based CTDs. The Night Operations team conducts Conductivity, Temperature and Depth or "CTD" measurements at several locations around the reef. (They also make observations with a Towed Optical Assessment Device, but this is for unrelated purposes: see the September 18th journal for a brief description.) The CTD deployed by Night Operations is a large device that can reach 500 meters in depth, taking up to five water samples at different depths and making other measurements on a continuous basis on the way down and up. Temperature and pressure are measured directly. Salinity is measured indirectly by measuring the water's electrical conductivity. Chlorophyll is measured indirectly by a fluorometer that emits purple light ("black light") and measures fluorescence in response to that light. These measurements are made continuously, providing a profile of temperature, salinity, and chlorophyll as a function of depth. The photograph shows the data from a test run of the CTD, including temperature (blue), salinity (red) and density (green).
Analysis of the water samples will tell scientists how much chlorophyll is actually in the water. This information can then be compared to the measurements of the fluorometer to see how well it predicts levels of chlorophyll. The presence of chlorophyll is a good sign of productivity: the food chain is built on phytoplankton (microscopic plants) that produce food energy from light energy. They are eaten by microscopic zooplankton (floating animals), which in turn are eaten by larger animals, and so on.
Shipboard CTD measurements are typically taken at three locations around a given island or atoll: the windward and leeward sides, and at a standard oceanographic "station" assigned to each island or atoll that is being surveyed over a long period of time. There is one such station per each major island or atoll in the NWHI: Nihoa, Necker, French Frigate Shoals, Gardner, Maro Reef, Laysan, Lisanski, Pearl and Hermes, Midway, and Kure.
Shallow water CTDs. In order to understand the local reef ecosystem, we need measurements at more locations than just three, and we need to sample in shallower water than the ship can operate in. For these reasons, the oceanographers aboard make other measurements from the HI-2 jetboat. These measurements are taken every mile or two around the island/atoll between the 80 and 120 foot isobath and inside the atolls in a few places as well, providing greater resolution.
A handheld CTD device is pictured. Like its big brother, it includes a temperature sensor, a depth sensor, and a conductivity sensor (for salinity). The handheld device also has a transmissometer that measures the level of particulate matter in the water (a proxy for turbidity). It does so by shining a light at a sensor and seeing how much of that light gets through the water. These measurements are also made continuously as the device descends and ascends. Unlike the larger version, the small boat CTD does not take water samples and does not have a fluorometer. Separate devices are used for these purposes.
Water samples are taken by a handheld device consisting of a tube with spring loaded caps at both ends. The caps are set in the open position so water can flow through the tube as it descends. A weight is then slid down the supporting rope to hit a trigger and close the caps. The sampler is then taken to the surface, where the water is used to first rinse out a sample bottle (to avoid contamination from other water), and then the sample bottle is filled. The bottle is opaque to prevent further modification of the chlorophyll content by light. (Water from the large CTD is also stored in these same bottles.) Water samples are typically taken at three depths, 5, 30 and 60 feet, measured out by marks on the rope holding the sampling device.
A radiometer on the small boat plays the role of the fluorometer, and also gives important information about available solar radiation. Two radiometers are coupled together to take readings above and below water. One instrument on the boat reads the amount of light arriving at the ocean surface. Another instrument is put in the water to read the light reflected back from the water. This is compared to the former surface measurement for reference. These measurements are made at light frequencies relevant to photosynthesis. By reading reflected light at certain wavelengths, scientists can tell how much chlorophyll is in the water.
Time Series Measurements
The measurements just described provide a lot of detail about the water at a given location and different depths, but only at one or a few points in time: when the scientists can come up here. To fully understand an ecosystem, we need to monitor it over many years. The instruments described in this section record a series of measurements of water characteristics over time. They include CREWS buoys, SST buoys, and SST "pipe bombs."
CREWS Buoys. These are large buoys (pictured) that are anchored at a specific location and can send data back daily to scientists via satellite. CREWS stands for Coral Reef Early Warning System, reflecting a major function of these buoys: to warn scientists as soon as possible of an unusual change taking place in the environment of a coral reef ecosystem. These buoys have sensors both in and above the water that measure water and air temperature, water salinity (via conductivity), wind speed, and barometric pressure. A few of them also have radiometers, but these can only be located where staff can get to them every few weeks to clean the sensors. There is one at French Frigate Shoals, serviced by USGS staff on Tern Island.
SST Buoys and Pipes. CREWS buoys are large and expensive, so other instruments are also used that measure fewer parameters but can be deployed in more locations. Sea Surface Temperature (SST) buoys are round floating buoys that are anchored in a specific location. They measure water temperature and send this data back at regular intervals via satellite. The "pipe bombs of science" are strapped to the reef at different depths and locations around an atoll. These are set to measure temperature every half an hour, and record it on a data chip. Scientists must come up to the NWHI and retrieve these devices in order to obtain the data.
Tracking Water Movement
Currents, tides and waves also affect reef life. The mooring team installs two further kinds of instruments to track these parameters. Now we can appreciate why they are always so busy!
Wave and Tide Recorders (WTDs), record water pressure with high sensitivity, providing information about waves and tides. (When a wave passes over, or when the tide comes in, there is more water over the instrument and hence more pressure on it.) This instrument measures the tide 48 times a day, and records wave height 8 times a day in the process. Each measurement involves recording the pressure for 18 minutes, and then estimating wave and tide values based on changes in the pressure. They are deployed at 50 to 100 feet.
The Ocean Data Platform (ODP) contains temperature and salinity sensors, as well as an Acoustic Doppler Profiler (ADP). The ADP can measure current speed and direction in multiple directions and at different depths. These are placed at 60-100 feet, "hopefully deeper than the wave energy you are trying to study," as Kyle put it after a day of trying to extract an ODP that had somehow flipped and gotten wedged in the rocks!
Other instruments used by reef oceanographers are not described here because we don't have them on this expedition. Drifting buoys that follow water at 15 meters depth, measuring GPS position and water temperature over time, were not delivered to Honolulu in time for the expedition due to the hurricanes in Florida. Two other sensors will be installed on the Hi`ialakai later this year: an Acoustic Doppler Current Profiler that measures current velocity using the Doppler effect, and a sonar that measures biomass in the water column.
Putting It All Together into a Model of the NWHI
Visualize, if you will, a three dimensional model of the Northwestern Hawaiian Island chain. Imagine that we have represented this model in a computer such that we can attach the data gathered by the various devices just discussed to the corresponding location in the model. Now imagine that we can move a slider or use a play/fast forward/rewind control like on a video recorder to move this model through time, so we can store data taken at different times as well as different places. In other words, we have organized all the data collected in a four dimensional "space," where the first three dimensions are for the location of the various sensors discussed above (latitude, longitude, and depth) and the fourth dimension is for time.
In some places in this model, we will have detailed information such as temperature, salinity, chlorophyll and light levels, etc., measured continuously over a number of years. In other places we will have only temperature, and in some places we have measurements only once a year. Finally, in most places we have no measurements at all! But we have taken our measurements at enough locations that we can "interpolate" or predict mathematically the values the measures would be at the points in between those we actually took.
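To make the idea of interpolating between sparse measurements concrete, here is a minimal sketch. It assumes NumPy and SciPy are available, the station positions and temperatures are invented example values, and a real model would work in four dimensions with more careful methods than simple linear interpolation.

```python
import numpy as np
from scipy.interpolate import griddata

# Invented example: sea-surface temperatures (deg C) measured at four stations.
stations = np.array([[-166.20, 23.80],
                     [-166.10, 23.90],
                     [-165.90, 23.70],
                     [-166.00, 23.60]])   # (longitude, latitude)
temps = np.array([26.1, 26.4, 25.8, 26.0])

# Estimate temperature on a regular grid covering the surveyed area
# (points outside the stations' convex hull come back as NaN).
lon = np.linspace(-166.30, -165.80, 50)
lat = np.linspace(23.50, 24.00, 50)
grid_lon, grid_lat = np.meshgrid(lon, lat)
grid_temp = griddata(stations, temps, (grid_lon, grid_lat), method="linear")

print(grid_temp.shape)  # (50, 50) field of interpolated temperatures
```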
In some places and times we have different measures of the same thing. For example, we have the actual chlorophyll content of water and the estimate of that content taken by a radiometer or fluorometer; or we have the actual temperature and the estimated temperature taken by satellite. We can see how close these measurements are to each other, to better understand how we can use less expensive measures (such as a satellite) to estimate parameters that otherwise would require more expensive measures (sending people to the actual locations).
All of this data organized in space and time provides a physical model of the ocean environment of reef ecosystems. Marine biologists can use this model to understand what is affecting the prevalence and health of the various organisms of the reef. We will see how marine biologists find out what organisms are present in another article.
Garrison (2002) | <urn:uuid:c85369e9-074a-45b4-bb3b-ae8c16044913> | 4.25 | 2,894 | Knowledge Article | Science & Tech. | 35.620546 |
The pair of sites in Prince William Sound are located in Knowles Bay, north of Orca Bay and also just north of Shelter Bay on Hinchinbrook Island. These two sites have a clear view of central Prince William Sound. This HF Radar system has already provided new insight on the circulation of Prince William Sound, as well as information on the tidal currents in the region. These systems will be operational for the summer of 2008 but are currently decommissioned. Funding for these systems is being provided by grants awarded by the National Aeronautical and Space Administration (NASA), National Oceanic and Atmospheric Administration (NOAA), the Prince William Sound Regional Citizens Advisory Corporation (PWSRCAC), and the Alaska Ocean Observing System (AOOS).
During the summer of 2005, the Salmon Project temporarily operated two additional sites in the Northern Gulf of Alaska: one at Rugged Island on the southern end of Resurrection Bay and the other on the western end of Middleton Island. The goal of this HF Radar system was to address the effects of the Alaskan Coastal Current and the Alaskan Stream on the circulation of the shelf of the Gulf of Alaska. | <urn:uuid:30e41172-3131-492a-9e6e-c4dede5337dd> | 2.78125 | 234 | Knowledge Article | Science & Tech. | 32.7075 |
The nautilus is native to deep ocean waters. It has a multi-chambered shell. Each chamber is sealed and contains gas which provides the nautilus with buoyancy to float. Like the octopus, squid and cuttlefish, the nautilus uses jet propulsion to move forward. It sucks in water, then expels it in a fast, strong stream to propel itself forward. The nautilus has as many as 90 small tentacles that it uses to catch food, such as shrimp, fish or small crustaceans. It then uses its powerful beak to crush the food. The nautilus is considered a living fossil because its form has remained unchanged for over 400 million years. Play the following videos to learn more about the nautilus. | <urn:uuid:15fec964-4a25-44da-b4ad-96269a7d4d49> | 4.0625 | 161 | Truncated | Science & Tech. | 62.015455 |
The Cosmic Bat — NGC 1788
About this Image
The delicate nebula NGC 1788, located in a dark and often neglected corner of the Orion constellation, is revealed in this finely nuanced image. Although this ghostly cloud is rather isolated from Orion’s bright stars, their powerful winds and light have a strong impact on the nebula, forging its shape and making it a home to a multitude of infant suns. This image has been obtained using the Wide Field Imager on the MPG/ESO 2.2-metre telescope at ESO’s La Silla Observatory in Chile. It combines images taken through blue, green and red filters, as well as a special filter designed to let through the light of glowing hydrogen. The field is about 30 arcminutes across; North is up, and East to the left. This image was released March 3, 2010. | <urn:uuid:0a09891a-00d7-4d71-b3c0-fcbb12757de9> | 3.03125 | 183 | Truncated | Science & Tech. | 55.946154 |
How does energy move?
I know of three ways in which energy can be transported without
external help. These are :
1. Conduction - This happens when energy is transported without any transport of matter. It requires a medium. The energy is transported by changes in the random vibrations of the medium. For example, an iron rod heated at one end becomes hot at the other end too. Here the vibrations of the iron atoms are responsible for carrying the energy along the rod (see the short sketch after this answer).
2. Convection - Here energy is transported by actual motion
of the medium. An example of this is boiling water : you
can see the upwelling where the hot current is hitting the
3. Radiation - Energy can be transported by waves, as by
sound waves or water waves. Here energy is transported
by systematic vibrations of the medium, as compared to the random vibrations
that are responsible for conduction. It turns out that energy
can be radiated without any medium through electromagnetic waves.
One manifestation of such waves is light, and as proof of this
statement i will ask you to go out at night and take a good
look at stars - light is coming from stars to us through
space and as we all know space is emptier than the best vacuum we
can produce here on earth.
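Here is a minimal, illustrative sketch of conduction along a rod, using a simple explicit finite-difference form of the 1-D heat equation; the rod length, diffusivity and step sizes below are arbitrary example values, not tied to any particular metal.

```python
import numpy as np

# 1-D heat equation dT/dt = alpha * d2T/dx2, solved with an explicit scheme.
alpha = 1e-4               # thermal diffusivity (m^2/s), example value
length, nx = 0.5, 51       # a 0.5 m rod divided into 51 grid points
dx = length / (nx - 1)
dt = 0.4 * dx * dx / alpha # time step chosen for numerical stability

T = np.zeros(nx)           # rod initially at 0 degrees everywhere
T[0] = 100.0               # one end held at 100 degrees

for _ in range(20000):
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    T[0] = 100.0           # heated end kept hot
    T[-1] = T[-2]          # far end treated as insulated (zero gradient)

print(T[::10])             # temperature profile: heat has spread along the rod
```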
I read your question and the answer there and thought that I would
add a few other possibilities. Energy can also move through a shaft.
Consider the hydroelectric energy conversion process. We have built large
dams to hold back water in a lake. This is done to control flooding but
also to generate electricity. The water is passed through pipes from the
dam to a lower elevation. We can point the flowing water over some specially
designed "paddles" attached to a rotor (this is called a turbine). The
turbine transmits the power (or energy per unit time) to a generator which
converts the energy to electricity. The electricity is then transmitted
through large conductors (wires) to homes and businesses to be converted
again into another form of energy. In describing this example of how energy is transmitted through a shaft, I have also discussed another form of energy transmission. Can you guess what it is?
Great question! Have fun!
Click here to return to the Physics Archives
Update: June 2012 | <urn:uuid:9d513923-48f1-4751-8586-8854de878888> | 3.390625 | 483 | Q&A Forum | Science & Tech. | 51.508345 |
Genetic connectivity between and within coral reefs is an important component of resilience. Larval exchange between reefs promotes genetic diversity, which is critical in terms of resilience against any disturbance, particularly mass bleaching events. The spread of selectively advantageous genetic traits, such as bleaching resistance, is a potential consequence of larval coral exchange and migration1. Within species, susceptibility to bleaching and mortality can differ, even under the same environmental conditions. These differences between individuals suggest that genetic variation within coral populations can create resilience to increased thermal stress.
Several biological characteristics of corals may contribute to their resilience:
- 1. Fluorescent tissue proteins
Fluorescent proteins are common in many corals, providing a system for regulating light. These proteins protect the coral from broad-spectrum solar radiation by filtering out damaging UVA rays (blue light portion of spectrum), as well as by reflecting visible and infrared light, thereby reducing light stress on the corals. Concentrations of the pigments vary among species (pocilloporids and acroporids have relatively low densities of pigments, while poritids, faviids and other slow-growing massive corals have high densities). The protective capacity of these pigments provides a kind of internal defense mechanism that may have important implications for long-term survival of corals exposed to thermal stress. Corals containing fluorescent capacity have been found to bleach significantly less than non-fluorescent colonies of the same species.
Furthermore, a recent study2 identified an additional role of fluorescent pigments as supplemental antioxidants which may work to prevent oxidative stress in coral tissue and further supports the hypothesis that fluorescent pigments serve multiple functions. The diversity, temporal, and spatial variation in coral fluorescent pigments distribution, abundance, in combination with differential antioxidant potentials, suggest that fluorescent pigment roles may differ between coral species or with changing environmental conditions.
- 2. Mycosporine-like amino acids (MAAs)
MAAs absorb UV and dissipate UV energy as heat without forming toxic intermediates. While there is still a great deal of uncertainty in how MAAs are acquired, it is known that corals have a major influence on the complement and distribution of MAAs, thereby moderating the amount of UV that reaches the cells of the zooxanthellae and influencing the amount of damage sustained by the zooxanthellae3.
- 3. Heat-shock proteins
Many different heat-shock proteins are found in coral tissues and their activity influences the bleaching response. Heat-shock proteins help maintain protein structure and cell function, following stress3.
- 4. Colony integration
The extent of colony integration influences the degree to which the whole colony responds to thermal stress. Characteristics of colony integration include polyp dimorphism, intra-tentacular budding and complex colony morphology4. Species with a high colony integration (e.g., milleporids, pocilloporids and acroporids) are predicted to have a greater whole-colony response to increased temperatures than species with a low colony integration (e.g., poritids, faviids, and other massive corals). This pattern of mortality has been observed between Acropora and Porites. Acropora, with high colony integration,displayed high rates of whole-colony mortality and little partial mortality, while Porites, with low colony integration, had patches of bleached areas with little whole-colony mortality.
- 5. Change in diet in response to bleaching stress
Much of the energy needed for coral metabolism is derived from zooxanthellae; however, many corals are also effective carnivores. Corals that increase carnivory survive experimental bleaching better than corals that cannot3. Changes in the transfer of photosynthetic products from the zooxanthellae to the coral in response to stress are currently being researched.
- 6. Tissue thickness
The thickness of coral tissues may contribute to the level of susceptibility to bleaching. Thin tissue is found in coral species that are more susceptible to bleaching. Thicker tissue may help shade zooxanthellae from intense light, thereby increasing the resilience of the coral. Corals from genera such as Porites that have thicker tissues and appear more robust to thermal stress than corals from genera such as Acropora which have thinner tissues5.
1 Van Oppen and Gates 2006
2 Palmer et al. 2009
3 Baird et al. 2009
4 Baird and Marshall 2002
5 Hoegh-Guldberg et al. 1999 | <urn:uuid:9312a927-1c0b-490f-9bec-36d16fa64bcf> | 3.84375 | 933 | Knowledge Article | Science & Tech. | 21.723817 |
The Coastline Paradox
About this video
How can one coastline be two different lengths?
The coastline of Australia is thought to be roughly 12,500km long. But The World Factbook claims the figure is more than double this, at 25,700km. This ambiguity over the length of a coast is known as the coastline paradox.
The length of a coastline depends on the level of detail at which you measure it. Using a long measuring stick that sweeps across jagged stretches of coastline will give you a short coastline length. Conversely, a shorter measuring stick that enables you to measure inside every inlet along the coast will calculate a longer length of coastline. So what length of measuring stick should we use?
In this video Derek Muller relates the coastal paradox to the Koch Snowflake – a shape made from layer upon layer of equilateral triangles. This type of shape is called a fractal, which means it looks similar on many different scales.
Coastlines have a fractal structure too; when we zoom in, they look roughly the same as they do from afar. This similarity helps explain why the length of a coastline cannot be well-defined.
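A quick way to see why a fractal boundary has no single well-defined length is to compute the perimeter of the Koch snowflake as the level of detail increases. This minimal sketch only assumes the standard construction in which every segment is replaced by four segments, each one third as long.

```python
def koch_perimeter(side=1.0, steps=0):
    """Perimeter of a Koch snowflake built on an equilateral triangle of the given side."""
    # Each refinement step multiplies the total length by 4/3, so the
    # perimeter grows without bound as the 'measuring stick' shrinks.
    return 3 * side * (4.0 / 3.0) ** steps

for n in (0, 1, 5, 10, 50):
    print(n, round(koch_perimeter(steps=n), 2))
```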
The fractal geometries found in nature were synthesized in the world's first fractal movie by animator Loren Carpenter in 1979-80.
- Dr Derek Muller
Licence: Standard YouTube Licence
Collections containing this video:
An element of truth. The best of the science video blog from atoms to astrophysics. | <urn:uuid:fe377050-29c0-4a0e-82f5-a52bb62b1b0f> | 3.796875 | 312 | Truncated | Science & Tech. | 53.402867 |
A progressive future is at sight when one taps into solar power. Here are some keypoints on how solar power is a clean power source:
No Emissions – as compared to other forms of energy sources, solar power has no sort of emission at all. This is energy at its purest form.
Installation Success – fossil-fuel based power sources generally need large patches of land to produce power. A solar panel can be specifically built to power a certain area, removing the need for any sort of demolition.
Fits Right In – stressing the Philippine climate again, solar is indeed the best renewable energy fit in the country. The amount of sunlight that the country receives is more than enough to offset the excessive energy output – and pollution – that the Philippines produces.
The Rich Colours of a Cosmic Seagull
This new image from ESO’s La Silla Observatory shows part of a stellar nursery nicknamed the Seagull Nebula. This cloud of gas, formally called Sharpless 2-292, seems to form the head of the seagull and glows brightly due to the energetic radiation from a very hot young star lurking at its heart. The detailed view was produced by the Wide Field Imager on the MPG/ESO 2.2-metre telescope.
Nebulae are among the most visually impressive objects in the night sky. They are interstellar clouds of dust, molecules, hydrogen, helium and other ionised gases where new stars are being born. Although they come in different shapes and colours many share a common characteristic: when observed for the first time, their odd and evocative shapes trigger astronomers’ imaginations and lead to curious names. This dramatic region of star formation, which has acquired the nickname of the Seagull Nebula, is no exception.
This new image from the Wide Field Imager on the MPG/ESO 2.2-metre telescope at ESO’s La Silla Observatory in Chile shows the head part of the Seagull Nebula. It is just one part of the larger nebula known more formally as IC 2177, which spreads its wings with a span of over 100 light-years and resembles a seagull in flight. This cloud of gas and dust is located about 3700 light-years away from Earth. The entire bird shows up best in wide-field images.
The Seagull Nebula lies just on the border between the constellations of Monoceros (The Unicorn) and Canis Major (The Great Dog) and is close to Sirius, the brightest star in the night sky. The nebula lies more than four hundred times further away than the famous star.
The complex of gas and dust that forms the head of the seagull glows brightly in the sky due to the strong ultraviolet radiation coming mostly from one brilliant young star — HD 53367 — that can be spotted in the centre of the image and could be taken to be the seagull’s eye.
The radiation from the young stars causes the surrounding hydrogen gas to glow with a rich red colour and become an HII region. Light from the hot blue-white stars is also scattered off the tiny dust particles in the nebula to create a contrasting blue haze in some parts of the picture.
Although a small bright clump in the Seagull Nebula complex was observed for the first time by the German-British astronomer Sir William Herschel back in 1785, the part shown here had to await photographic discovery about a century later.
By chance this nebula lies close in the sky to the Thor’s Helmet Nebula (NGC 2359), which was the winner of ESO’s recent Choose what the VLT Observes contest (ann12060). This nebula, with its distinctive shape and unusual name, was picked as the first ever object selected by members of the public to be observed by ESO’s Very Large Telescope. These observations are going to be part of the celebrations on the day of ESO’s 50th anniversary, 5 October 2012. The observations will be streamed live from the VLT on Paranal. Stay tuned!
This object has received many other names through the years — it is known as Sh 2-292, RCW 2 and Gum 1. The name Sh 2-292 means that the object is the #292 in the second Sharpless catalogue of HII regions, published in 1959. The RCW number refers to the catalogue compiled by Rodgers, Campbell and Whiteoak and published in 1960. This object was also the first in an earlier list of southern nebulae compiled by Colin Gum, and published in 1955.
HD 53367 is a young star with twenty times the mass of our Sun. It is classified as a Be star, which is a type of B star with prominent hydrogen emission lines in its spectrum. This star has a five solar mass companion in a highly elliptical orbit.
HII regions are so named as they consist of ionised hydrogen (H) in which the electrons are no longer bound to protons. HI is the term used for un-ionised, or neutral, hydrogen. The red glow from HII regions occurs because the protons and electrons recombine and in the process emit energy at certain well-defined wavelengths or colours. One such prominent transition (called hydrogen alpha, or H-alpha) leads to the strong red colour.
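As a quick aside (an illustration added here, not part of the ESO release), the wavelength of that hydrogen-alpha transition, in which the electron drops from the third to the second energy level, follows from the Rydberg formula:

```python
# Hydrogen-alpha wavelength from the Rydberg formula:
#   1/lambda = R_H * (1/n1**2 - 1/n2**2)   for the n = 3 -> 2 transition.
R_H = 1.0967758e7          # Rydberg constant for hydrogen, in 1/m

n1, n2 = 2, 3              # H-alpha: the electron falls from n = 3 to n = 2
inv_lambda = R_H * (1.0 / n1**2 - 1.0 / n2**2)
wavelength_nm = 1e9 / inv_lambda

print(f"H-alpha wavelength: {wavelength_nm:.1f} nm")   # about 656 nm, deep red light
```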
The year 2012 marks the 50th anniversary of the founding of the European Southern Observatory (ESO). ESO is the foremost intergovernmental astronomy organisation in Europe and the world’s most productive ground-based astronomical observatory by far. It is supported by 15 countries: Austria, Belgium, Brazil, the Czech Republic, Denmark, France, Finland, Germany, Italy, the Netherlands, Portugal, Spain, Sweden, Switzerland and the United Kingdom. ESO carries out an ambitious programme focused on the design, construction and operation of powerful ground-based observing facilities enabling astronomers to make important scientific discoveries. ESO also plays a leading role in promoting and organising cooperation in astronomical research. ESO operates three unique world-class observing sites in Chile: La Silla, Paranal and Chajnantor. At Paranal, ESO operates the Very Large Telescope, the world’s most advanced visible-light astronomical observatory and two survey telescopes. VISTA works in the infrared and is the world’s largest survey telescope and the VLT Survey Telescope is the largest telescope designed to exclusively survey the skies in visible light. ESO is the European partner of a revolutionary astronomical telescope ALMA, the largest astronomical project in existence. ESO is currently planning the 39-metre European Extremely Large optical/near-infrared Telescope, the E-ELT, which will become “the world’s biggest eye on the sky”.
- Photos of the MPG/ESO 2.2-metre telescope
- Other photos taken with the MPG/ESO 2.2-metre telescope
- Photos of La Silla
ESO, La Silla, Paranal, E-ELT & Survey Telescopes Public Information Officer
Garching bei München, Germany
Tel: +49 89 3200 6655
Cell: +49 151 1537 3591 | <urn:uuid:6b457bbe-c4d3-4425-b4c5-338a78e2e084> | 3.3125 | 1,338 | Comment Section | Science & Tech. | 48.382873 |
Sam Bowring, professor of Geology at MIT, says human adaptations to the environment are so specific that we would face significant challenges if our surroundings were suddenly very different.
Visit the EAPS (Earth, Atmospheric and Planetary Sciences) website to learn more about the scientists who are studying the natural surroundings of our home planet and the clues they give us about our future.
Here's one scientist with a passion for ICE: Alison Criscitiello is studying glaciology in the EAPS program. Her work takes her to Antarctica, where she studies sea ice and measures ice cores. Check out photos of her work in this video:
NOVA: Extinction Happens
| <urn:uuid:fd6ad825-ec91-4c1b-8ff1-cdaa863e607f> | 2.859375 | 165 | Truncated | Science & Tech. | 32.629909 |
Besides the standard "places to see", the scientifically minded tourist might well want to have a look at the Senckenberg Museum, one of the largest museums of natural history in Europe. It is run by the Senckenbergische Naturforschende Gesellschaft and named after Johann Christian Senckenberg, a Frankfurt physician whose 300th birthday is celebrated this year. On display in the museum are, for example, the unique 50-million-year-old fossils from the nearby Messel pit, but kids will probably be most fascinated by the skeletons of dinosaurs and mastodons - and by the true-to-life replicas of a Tyrannosaurus and a Diplodocus in the front yard of the museum.
If you're standing in front of the museum, you will notice to the left a building with a small astronomical observatory on the roof - this is the home of the Physikalischer Verein, the "Physical Association". Both the Senckenbergische Naturforschende Gesellschaft and the Physikalischer Verein have quite an interesting history: they were founded in 1817 and 1824, respectively, following a suggestion by Johann Wolfgang von Goethe during a visit to his hometown Frankfurt, as institutions of research and public outreach in the natural sciences. The founders were not the local ruler or government, but private persons - Frankfurt citizens who to this day operate and generously finance these science institutions out of personal memberships and private donations.
Since the time of its foundation, the Physikalischer Verein had been engaged in a lot of scientific activities: It organised astronomical observations to establish time for the City of Frankfurt, lectures on science for students and the Frankfurt citizens, and it sponsored research in physics, chemistry, and technology. To this end, the Verein paid lecturers and scientists and provided office and laboratory space for them. Early research at the Physikalischer Verein included, for example, the development of an electric telegraph by Samuel Soemmering, and the demonstration of the telephone by Philipp Reis. When the Frankfurt University was founded in 1914, all these activities were integrated into the physics institute of the new university, and hosted in a large building next to the Senckenberg Museum, which the Physikalischer Verein had built in 1907.
In the meantime, the physics institute has moved on to a new campus on the outskirts of Frankfurt, and the building is mostly empty, used only for public lectures on astronomy and observations at the telescope on Friday nights. There are plans to establish a Science Centre and a Planetarium on the premises, but currently, the building is dreaming of its exciting days in the past - for example, when in the early 1920s, Otto Stern and Walther Gerlach had been conducting here the famous experiment demonstrating space quantisation of magnetic moments for the first time.
In the early Bohr-Sommerfeld theory for the quantisation of the motion of electrons in atoms, orbiting electrons could have only discrete values of angular momentum, and thus, only discrete magnetic moments. Moreover, in an external magnetic field, these magnetic moments were supposed to have only specific, discrete orientations with respect to the direction of the field. For example in silver atoms, which have one "hydrogen-like" valence electron, there should be only two possible orientations of the magnetic moment in an external field.
Otto Stern, who had been a postdoc with Einstein in Prague and Zurich and had become an assistant to Max Born in Frankfurt in 1919, had the idea that one might check space quantisation of magnetic moments using atomic beams - a technique quite new at that time: atoms are evaporated from an oven into a vacuum, and with systems of apertures and screens, one can obtain well-defined, sharp beams. Stern and Born had successfully used this method to study the thermal velocity distribution and the mean free path of atoms, and Stern thought it should be possible to test if space quantisation is real:
If a beam of atoms with a magnetic moment passes through a magnetic field with a strong gradient, the gradient of the field causes a deflection of the atoms according to the orientation of their magnetic moments. Now, if the magnetic moments can have any orientation in the magnetic field (as in the classical theories of the atom of Lorentz and Zeeman), the deflection would have any value, and the beam would be smeared out. If, however, the magnetic moments could only have some discrete orientations in the magnetic field (two, for silver atoms), deflection would also be discrete, and the beam should split (in two beams, for silver atoms). Stern was confident that this splitting could be measured (Otto Stern: Ein Weg zur experimentellen Prüfung der Richtungsquantelung im Magnetfeld. Zeitschrift für Physik 7 (1921) 249; English translation in Zeitschrift für Physik D: Atoms, Molecules and Clusters, 10 (1988) 114), but he also knew that the experiment was tricky.
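To get a feel for why the measurement was so delicate, here is a rough order-of-magnitude estimate in Python. The oven speed, magnet length and field gradient below are illustrative assumptions rather than the historical values, but they give a splitting of a couple of tenths of a millimetre, which is the right order.

```python
# Rough estimate of the Stern-Gerlach splitting for a beam of silver atoms.
# Apparatus numbers are illustrative assumptions, not the 1922 values.
mu_B     = 9.274e-24   # Bohr magneton, J/T (moment of the silver valence electron)
m_Ag     = 1.79e-25    # mass of a silver atom, kg
v        = 600.0       # beam speed out of the oven, m/s (assumed)
dBdz     = 1.0e3       # field gradient, T/m (about 10 T/cm, assumed)
L_magnet = 0.035       # length of the magnet pole pieces, m (assumed)

# A constant transverse force F = mu_B * dB/dz acts for the transit time t = L/v,
# so each beam is deflected by 0.5 * (F/m) * t**2 while crossing the magnet.
t_transit  = L_magnet / v
deflection = 0.5 * (mu_B * dBdz / m_Ag) * t_transit**2

print(f"deflection of each beam:  {deflection * 1e3:.3f} mm")
print(f"separation of the two:    {2 * deflection * 1e3:.3f} mm")  # roughly 0.2 mm
```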
Fortunately, he had a colleague who was an expert in atomic and molecular beams: Walther Gerlach, assistant to the professor of experimental physics in Frankfurt since 1920. Stern had no trouble convincing him to collaborate on this problem.
They had to cope with many technical and organisational problems: the experiment was quite delicate, requiring adjustments of the beam and magnets to within 0.01 millimetre, maintaining a vacuum for the beam, and detecting the tiny amounts of silver deposited by the beam. Moreover, funding was difficult because of the consequences of the war and the onset of inflation. They obtained money for the experiment from grants arranged by Einstein at the Kaiser-Wilhelm-Institut in Berlin, from Henry Goldman of Goldman and Sachs, and from entrance fees Max Born had charged for a series of public lectures on relativity.
But finally, in February 1922 - Stern had already left Frankfurt and moved on to another position in Rostock - Walther Gerlach succeeded in measuring the splitting, in quantitative agreement with the calculations of Stern.
It's ironic, in a sense, that the theory Stern had used to calculate the splitting of the beams of silver atoms was wrong: as we know today, the splitting Gerlach eventually managed to observe is not caused by an electron orbital angular momentum taking on projections along the axis of the magnetic field of ±h/2π, but by the electron spin, which is only half as large and has projections ±h/4π. However, thanks to the electron g factor of 2, the value of the splitting coincides, again, with the calculation of Stern.
These subtleties notwithstanding, the Stern-Gerlach experiment is now one of the prototypical experiments showing quantum physics at work - and in case you have time to kill in Frankfurt, you can have a look at the place where all this happened some 85 years ago.
To check out connections using public transport, the website of the local transport authority, rmv.de, is very helpful - useful connections are Airport-Hauptbahnhof, Airport-Hauptwache (city centre), or Airport-Bockenheimer Warte (Senckenberg Museum and Physikalischer Verein).
For a first orientation, maps.google.com is as usual helpful. The dinosaurs in the yard aren't yet there in the aerial photo.
A full account of the very interesting circumstances around the discovery of the Stern-Gerlach splitting can be found in Stern and Gerlach: How a Bad Cigar Helped Reorient Atomic Physics, by Bretislav Friedrich and Dudley Herschbach, Physics Today, December 2003, pages 53-59 (PDF), and Space quantization: Otto Stern's lucky star, also by Friedrich and Herschbach, Daedalus, Winter 1998.
TAGS: physics, Stern-Gerlach experiment, Frankfurt | <urn:uuid:09ec683a-7c9d-47b9-8b48-53ae7d501714> | 2.734375 | 1,640 | Personal Blog | Science & Tech. | 26.645208 |
Rise of the CyanoHABs
Cyanobacteria (also known as blue-green algae) are proliferating in the U.S. and worldwide, becoming a serious threat to freshwater resources and public health. Results from NCCOS harmful algal bloom programs are uncovering the secrets of why cyanobacteria are so successful so they can be used to develop new strategies to control them.
Cyanobacteria, which have evolved over 3.5 billion years, thrive in our modern world of warming temperatures and plentiful nutrients. Growing in freshwater, estuarine and marine ecosystems, they can discolor water and cause foul odors and tastes in drinking water and fish. Many produce a wide variety of toxins that can affect the liver, nervous system, digestive system and skin, and cause illness and death in humans, wildlife, and domestic animals exposed through contact, ingestion, or inhalation. Harmful cyanobacterial blooms are often termed “CyanoHABs.”
CyanoHABs thrive in warm, stratified waters and often use both inorganic and organic forms of phosphorus (P) and nitrogen (N). Some cyanobacteria can also use (“fix”) atmospheric nitrogen (N2), giving them an advantage where that nutrient is limiting growth, since most organisms cannot use N2.
Law- and policymakers usually focus on reducing P to control CyanoHABs. However, lakes and rivers today are increasingly inhabited by non-nitrogen fixing CyanoHABs and contain high concentrations of both P and N.
The studies suggest some tools available for short-term control are (1) increasing water mixing and flushing rates, which reduce the light or time available for CyanoHAB growth, and (2) decreasing nutrient inputs from sediments by dredging or capping with clay. However, in the long term, strict N and P management controls may be needed for the most successful reduction of CyanoHABs. Other options are also being investigated, such as flocculating CyanoHAB blooms with clay.
Learn more about CyanoHABs, why they are thriving, how they do it, and how best to control them from these NCCOS-supported research studies:
- The Rise of Harmful Cyanobacteria Blooms: The Potential Roles of Eutrophication and Climate Change
- Molecular Response of the Bloom-Forming Cyanobacterium, Microcystis aeruginosa, to Phosphorus Limitation
- Climate Change: Links to Global Expansion of Harmful Cyanobacteria
- Controlling Harmful Algal Blooms in a World Experiencing Anthropogenic and Climate-Induced Change
Learn more about our harmful algal bloom programs. | <urn:uuid:bf735d99-57bc-4c02-8543-e7ac016ad292> | 3.796875 | 573 | Knowledge Article | Science & Tech. | 26.61516 |
- Google Earth
Google Earth shows 3D overviews of major cities, mountains, and other terrain, as well as driving directions and maps. Also includes Sky, which allows users to view stars and galaxies, and Ocean, which allows users to explore the ocean floor and surface.
- NASA (273)
Learn about the U.S.'s National Aeronautics and Space Administration (NASA). Find photos, video, and information on NASA's robotic and manned space exploration programs.
- National Geographic Society (19)
Learn about the wonders of nature and the world's cultures with National Geographic Society's magazine, TV channel, photography, and educational campaigns sites.
- Internet Archive, The (5)
Find sites related to the Internet Archive, a digital library of Internet sites and other cultural artifacts saved in digital form.
- World Health Organization (WHO) (20)
Lists the United Nations World Health Organization's official site along with sites for its individual divisions and specific programs. Learn about the WHO's international public health efforts.
- National Weather Service (68)
Find National Weather Service (NWS) news, warnings, and forecasts. Also learn about the NWS's National Hurricane Center, Doppler radar, satellite network, and their other weather forecasting services.
- National Oceanic and Atmospheric Administration (NOAA) (170)
Learn about the National Oceanic and Atmospheric Administration's investigation into U.S. and world climates, weather, coasts, oceans, and deep seas. Also find sites for NOAA's National Weather Services, the National Marine Fisheries Service, National Severe Storms Laboratory, and other scientific services.
- Food and Agriculture Organization of the United Nations (FAO) (10)
- American Association for the Advancement of Science (AAAS) (16)
Find the American Association for the Advancement of Science (AAAS) sites for research grants, profession services, and science educational resources along with their weekly journal of research, Science Magazine.
- Science Magazine
Global weekly journal of research which serves the scientific community as a forum for the presentation and discussion of important issues related to the advancement of science.
- Nature
International weekly journal of science. Nature offers science news, in-depth features, podcasts, and job information.
- U.S. Geological Survey (USGS) (63)
USGS sites focus on biology, geography, geology, geospatial information, and water resources of the U.S. and the world. The U.S. Geological Survey also offers sites monitoring recent earthquake activity and other natural hazards.
- New Scientist
Daily science news, hot topics on some of the current issues facing our world today, plus selected content from the magazine.
- Scientific American
Enhanced versions of print articles, explorations of recent developments in the news, interviews, ask the experts, and much more.
Guiding consumers towards greener, more environmentally responsible and sustainable products.
- National Center for Biotechnology Information (NCBI)
Resource for molecular biology information. Creates public databases, conducts research in computational biology, develops software tools for analyzing genome data, and disseminates biomedical information.
- Exploratorium (7)
Collection of the Exploratorium's online science exhibits, games, and teaching tools. The San Francisco museum introduces children to the life, earth, and space sciences as well as human perception with hands-on exhibits.
- American Society for Testing and Materials (ASTM)
Developer and provider of voluntary consensus standards, related technical information, and services.
Learn about the latest discoveries and current research in all fields of science. See news articles, video, and images covering breakthroughs in biology, the earth sciences, astronomy, physics, and computer science.
- National Institute of Standards and Technology (NIST) (25) | <urn:uuid:8ab8632c-a425-4bfb-8143-95445d0fecb6> | 3.265625 | 784 | Content Listing | Science & Tech. | 24.059029 |
Sometimes, after a meteor shower, people report hearing the meteors. Some exceptionally bright meteors have been reported as being accompanied by a low hissing sound – like bacon sizzling.
For years, professional astronomers dismissed the notion of sounds from meteors as fiction. Typically, a meteor burns up about 100 kilometers – or 60 miles – above the Earth’s surface. Because sound travels so much more slowly than light does, the rumblings of a particularly large meteor shouldn’t be heard for several minutes after the meteor’s sighting. A meteor 100 kilometers high would boom about five minutes after it appears. Such an object is called a “sonic” meteor. The noise it makes is related to the sonic boom caused by a faster-than-sound aircraft.
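That five-minute figure is easy to check with a rough calculation (using the sea-level speed of sound; in the real atmosphere the speed varies with altitude, so this is only an estimate):

```python
# Rough check of the "about five minutes" delay for a sonic meteor.
altitude_m     = 100_000    # typical meteor altitude, about 100 km
speed_of_sound = 343.0      # m/s near sea level (varies with altitude in reality)
speed_of_light = 3.0e8      # m/s; the flash arrives essentially instantly

delay_s = altitude_m / speed_of_sound - altitude_m / speed_of_light
print(f"sound arrives about {delay_s / 60:.1f} minutes after the flash")  # ~4.9 minutes
```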
But what about meteors that seem to make a sound at the same time you are seeing them? These meteors would be seen and heard simultaneously. Is this possible? Astronomers now say it is possible. They speak of “electrophonic meteors.” The explanation is that meteors give off very low frequency radio waves, which travel at the speed of light. Even though you can’t directly hear radio waves, these waves can cause physical objects on the Earth’s surface to vibrate. The radio waves cause a sound – which our ears might interpret as the sizzle of a meteor shooting by. | <urn:uuid:32b4c182-dc15-45c8-bf7d-fd6e0d72d6bd> | 3.578125 | 307 | Knowledge Article | Science & Tech. | 55.644118 |
As extended objects rather than point sources, galaxies show a wide variety of forms, some due to intrinsic structures, others due to the way the galaxy is oriented to the line of sight. The random orientations, and the wide spread of distances, are the principal factors that can complicate interpretations of galaxy morphology. If we could view every galaxy along its principal axis of rotation, and from the same distance, then fairer comparisons would be possible. Nevertheless, morphologies seen in face-on galaxies can also often be recognized in more inclined galaxies (Figure 1). It is only for the highest inclinations that morphology switches from face-on radial structure to vertical structure. In general we either know the planar structure in a galaxy, or we know its vertical structure, but we usually cannot know both well from analysis of images alone.
Figure 1. Four galaxies of likely similar face-on morphology viewed at different inclinations (number below each image). The galaxies are (left to right): NGC 1433, NGC 3351, NGC 4274, and NGC 5792. Images are from the dVA (filters B and g).
Galaxy morphology began to get interesting when the "Leviathan of Parsonstown", the 72-inch meridian-based telescope built in the 1840s by William Parsons, Third Earl of Rosse, on the grounds of Birr Castle in Ireland, revealed spiral patterns in many of the brighter Herschel and Messier "nebulae." The nature of these nebulae as galaxies wasn't fully known at the time, but the general suspicion was that they were star systems ("island universes") like the Milky Way, only too distant to be easily resolved into their individual stars. In fact, one of Parsons' motivations for building the "Leviathan" was to try and resolve the nebulae to prove this idea. The telescope did not convincingly do this, but the discovery of spiral structure itself was very important because such structure added to the mystique of the nebulae. The spiral form was not a random pattern and had to be significant in what it meant. The telescope was not capable of photography, and observers were only able to render what they saw with it in the form of sketches. The most famous sketch, that of M51 and its companion NGC 5195, has been widely reproduced in introductory astronomy textbooks.
While visual observations could reveal some important aspects of galaxy morphology, early galaxy classification was based on photographic plates taken in the blue region of the spectrum. Silver bromide dry emulsion plates were the staple of astronomy beginning in the 1870s and were relatively more sensitive to blue light than to red light. Later, photographs taken with Kodak 103a-O and IIa-O plates became the standard for galaxy classification. In this part of the spectrum, massive star clusters, dominated by spectral class O and B stars, are prominent and often seen to line the spiral arms of galaxies. These clusters, together with extinction due to interstellar dust, can give blue light images a great deal of detailed structure for classification. It is these types of photographs which led to the galaxy classification systems in use today.
In such photographs, we see many galaxies as a mix of structures. Inclined galaxies reveal the ubiquitous disk shape, the most highly flattened subcomponent of any galaxy. Studies of Doppler wavelength shifts in the spectra of disk objects (like HII regions and integrated star light) reveal that disks rotate differentially. If a galaxy is spiral, the disk is usually where the arms are found, and also where the bulk of interstellar matter is found. The radial luminosity profile of a disk is usually exponential, with departures from an exponential being due to the presence of other structures.
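For readers who want something concrete, a minimal sketch of such an exponential profile (with arbitrary, illustrative numbers rather than a fit to any particular galaxy) looks like this:

```python
import numpy as np

# Exponential radial surface-brightness profile of a galactic disk:
#   I(r) = I0 * exp(-r / h),  where h is the disk scale length.
# I0 and h are arbitrary illustrative values.
I0 = 100.0     # central surface brightness (arbitrary linear units)
h  = 3.0       # scale length, kpc

r  = np.linspace(0.0, 15.0, 6)      # sample radii, kpc
I  = I0 * np.exp(-r / h)
mu = -2.5 * np.log10(I)             # on the magnitude scale an exponential disk is a straight line

for ri, Ii, mi in zip(r, I, mu):
    print(f"r = {ri:5.1f} kpc   I = {Ii:7.2f}   mu = {mi:6.2f} mag (relative)")
```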
In the central area of a disk-shaped galaxy, there is also often a bright and sometimes less flattened mass concentration in the form of a bulge. The nature of bulges and how they form has been a topic of much recent research, and is discussed further in section 9. Disk galaxies range from virtually bulge-less to bulge-dominated. In the center there may also be a conspicuous nucleus, a bright central concentration that was usually lost to overexposure in photographs. Nuclei may be dominated by ordinary star light, or may be active, meaning their spectra show evidence of violent gas motions.
Bars are the most important internal perturbations seen in disk-shaped galaxies. A bar is an elongated mass often made of old stars crossing the center. If spiral structure is present, the arms usually begin near the ends of the bar. Although most easily recognized in the face-on view, bars have generated great interest recently in the unique ways they can also be detected in the edge-on view. Not all bars are made exclusively of old stars. In some bulge-less galaxies, the bar has considerable gas and recent star formation.
Related to bars are elongated disk features known as ovals. Ovals usually differ from bars in lacking higher order Fourier components (i.e., they have azimuthal intensity distributions that vary mainly as cos 2θ), but nevertheless can be major perturbations in a galactic disk. The entire disk of a galaxy may be oval, or a part of it may be oval. Oval disks are most easily detected if there is considerable light or structure at larger radii.
Rings are prominent features in some galaxies. Often defined by recent star formation, rings may be completely closed features or may be partial or open, the latter called "pseudorings." Rings can be narrow and sharp or broad and diffuse. It is particularly interesting that several kinds of rings are seen, and that some galaxies can have as many as four recognizeable ring features. Nuclear rings are the smallest rings and are typically seen in the centers of barred galaxies. Inner rings are intermediate-scale features that often envelop the bar in a barred galaxy. Outer rings are large, low surface brightness features that typically lie at about twice the radius of a bar. Other kinds of rings, called accretion rings, polar rings, and collisional rings, are also known but are much rarer than the inner, outer, and nuclear rings of barred galaxies. The latter kinds of rings are also not exclusive to barred galaxies, but may be found also in nonbarred galaxies.
Lenses are features, made usually of old stars, that have a shallow brightness gradient interior to a sharp edge. They are commonly seen in Hubble's disk-shaped S0 class (section 5.2). If a bar is present, the bar may fill a lens in one dimension. Lenses may be round or slightly elliptical in shape. If elliptical in shape they would also be considered ovals.
Nuclear bars are the small bars occasionally seen in the centers of barred galaxies, often lying within a nuclear ring. When present in a barred galaxy, the main bar is called the "primary bar" and the nuclear bar is called the "secondary bar." It is possible for a nuclear bar to exist in the absence of a primary bar.
Dust lanes are often seen in optical images of spiral galaxies, and may appear extremely regular and organized. They are most readily evident in edge-on or highly inclined disk galaxies, but are still detectable in the face-on view, often on the leading edges of bars or the concave sides of strong inner spiral arms.
Spiral arms may also show considerable morphological variation. Spirals may be regular 1, 2, 3, or 4-armed patterns, and may also be higher order multi-armed patterns. Spirals may be tightly wrapped (low pitch angle) or very open (high pitch angle.) A grand-design spiral is a well-defined global pattern, often detectable as smooth variations in the stellar density of old disk stars. A flocculent spiral is made of small pieces of spiral structure that appear sheared by differential rotation. Their appearance can be strongly affected by dust, such that at longer wavelengths a flocculent spiral may appear more grand-design. Pseudorings can be thought of as variable pitch angle spirals which close on themselves, as opposed to continuously opening, constant pitch angle, logarithmic spirals.
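The constant pitch angle, logarithmic spiral referred to above has a simple parametric form, r(θ) = r0 exp(θ tan i), where i is the pitch angle. The short sketch below (with an assumed 15-degree pitch angle and an arbitrary starting radius) shows how such an arm opens with azimuth:

```python
import numpy as np

# A logarithmic spiral arm: r(theta) = r0 * exp(theta * tan(i)),
# where the pitch angle i is constant.  A pseudoring corresponds instead to
# an arm whose effective pitch angle drops toward zero so it closes on itself.
r0    = 2.0                    # starting radius, kpc (illustrative)
pitch = np.radians(15.0)       # pitch angle (assumed, typical of an intermediate-type spiral)

theta = np.radians(np.arange(0, 361, 60))      # one full turn, sampled every 60 degrees
r = r0 * np.exp(theta * np.tan(pitch))

for t_deg, radius in zip(np.degrees(theta), r):
    print(f"theta = {t_deg:5.0f} deg   r = {radius:5.2f} kpc")
```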
There are also numerous structures outside the scope of traditional galaxy classification, often connected with strong interactions between galaxies. Plus, the above described features are not necessarily applicable or relevant to what we see in very distant galaxies. Accounting for all of the observed features of nearby galaxies, and attempting to connect what we see nearby to what is seen at high redshift, is a major goal of morphological studies. | <urn:uuid:0dacf0a4-c3fd-432e-9e31-772c21f9288a> | 3.953125 | 1,761 | Academic Writing | Science & Tech. | 40.228052 |
Last month I wrote an article entitled ‘Antarctica and Arctic Polar Opposites‘ that looked at a study which showed that “the total extent of sea ice surrounding Antarctica in the Southern Ocean grew by roughly 6,600 square miles every year, with recent research adding that that growth rate has accelerated recently, up from an average rate of almost 4,300 square miles per year from 1978 to 2006.”
This month, a new study published in the journal Nature Geoscience by scientists from NERC’s British Antarctic Survey (BAS) and NASA’s Jet Propulsion Laboratory (JPL) in Pasadena, California, shows that the contrary changes in Antarctic sea ice drift that have occurred over the past two decades are due to changing winds.
The scientists explain that the Antarctic sea ice cover has increased under the effects of climate change, rather than decreasing as has been seen on the opposite side of the planet in the Arctic. Working from maps created by JPL using over 5 million individual daily ice motion measurements captured over a period of 19 years by four US Defense Meteorological satellites, the scientists were able to see for the first time long-term changes in sea ice drift around the southern continent.
“Until now these changes in ice drift were only speculated upon, using computer models of Antarctic winds,” said lead author Dr Paul Holland of BAS. “This study of direct satellite observations shows the complexity of climate change. The total Antarctic sea-ice cover is increasing slowly, but individual regions are actually experiencing much larger gains and losses that are almost offsetting each other overall.”
“We now know that these regional changes are caused by changes in the winds, which in turn affect the ice cover through changes in both ice drift and air temperature. The changes in ice drift also suggest large changes in the ocean surrounding Antarctica, which is very sensitive to the cold and salty water produced by sea-ice growth.”
“Sea ice is constantly on the move; around Antarctica the ice is blown away from the continent by strong northward winds. Since 1992 this ice drift has changed. In some areas the export of ice away from Antarctica has doubled, while in others it has decreased significantly.”
Antarctica has not seen a simple overall growth of sea ice. In fact, the evident growth is actually the result of much larger regional increases and decreases, resulting from changes in the winds. These wind changes have caused the ice cover to expand outward from Antarctica, while the Arctic — which is completely landlocked — is unaffected by such changes.
“The Antarctic sea ice cover interacts with the global climate system very differently than that of the Arctic, and these results highlight the sensitivity of the Antarctic ice coverage to changes in the strength of the winds around the continent,” said Dr Ron Kwok of JPL.
One very important distinction to make, however, is that these changes in wind are affecting the sea ice cover that expands outward from the continent during its annual winter freeze. These changes have virtually no impact upon the Antarctic Ice Sheet, which is losing volume each year.
I'm a Christian, a nerd, a geek, a liberal left-winger, and believe that we're pretty quickly directing planet-Earth into hell in a handbasket! I work as Associate Editor for the Important Media Network and write for CleanTechnica and Planetsave. I also write for Fantasy Book Review (.co.uk), Amazing Stories, the Stabley Times and Medium. I love words with a passion, both creating them and reading them. | <urn:uuid:022d89ab-0727-4fe9-ba49-e7bbf626b89d> | 3.65625 | 735 | Personal Blog | Science & Tech. | 38.400071 |
I have a Masters in computer science. I can answer questions on core J2SE, swing and graphics. Please no questions about JSP or J2ME.
First, you can't name a class synchronized. It's a keyword in Java. Typically you would use 2 classes for 2 threads if you wanted them to behave differently. Since t1 and t2 are kept out of scope from
Those constructors are not useful for non-subclasses. If you don't subclass Thread you should use the Thread(Runnable target) or Thread(Runnable target, String name) constructor. If you subclass Thread
1. Creating a thread allows you to run multiple pieces of code at the same time. This can use multiple processors, or can allow one thread to continue while another one is waiting on a response or resource
I don't see what JLabel, JButton, and JTextField have to do with drawing a line. Unless perhaps you want a bit more interactivity. In any case you will want to create a custom component to draw the
1. It is possible to put any object in a synchronized block. When a thread enters a synchronized block, it is said to have a lock on that object. That lock is released once the block is left.
A Drop of Salt Water on Mars
This project begins three years after beads of liquid brine were first photographed on one of the Mars Phoenix lander's legs.
"On Earth, everywhere there's liquid water, there is microbial life," said Nilton Renno, a professor in the Department of Atmospheric, Oceanic and Space Sciences who is the principal investigator. Researchers from NASA, the University of Texas at Dallas, the University of Georgia and the Centro de Astrobiologia in Madrid are also involved.
Scientists in the United States will create Mars conditions in lab chambers and study how and when brines form. These shoe-box-sized modules will have wispy carbon dioxide and water vapor atmospheres with 99 percent lower air pressure than the average pressure on Earth at sea level. Temperatures will range from -100 to -80 Fahrenheit and will be adjusted to mimic daily and seasonal cycles. Instruments will alert the researchers to the formation of brine pockets, which could potentially be habitable by certain forms of microbial life.
Their colleagues overseas will seed similar chambers with salt-loving "extremophile" microorganisms from deep in Antarctic lakes and the Gulf of Mexico. They will observe whether these organisms survive, grow and reproduce in brines just below the surface of the soil. All known forms of life need liquid water to live. But microbes don't need much. A droplet or a thin film could suffice, researchers say.
With his colleagues on the Mars Phoenix mission in 2008, Renno theorized that globules that moved and coalesced on the spacecraft's leg were liquid saltwater. Independent physical and thermodynamic evidence as well as follow-up experiments have confirmed that the drops were liquid and not frost or ice. The Phoenix photos are believed to be the first pictures of liquid water outside the Earth.
The median temperature at the Phoenix landing site was -70 degrees Fahrenheit during the mission—too cold for liquid fresh water. But "perchlorate" salts found in the site's soils could lower water's freezing point dramatically, so that it could exist as liquid brine. The salts are also capable of absorbing water from the atmosphere in a process called deliquescence.
Also contributing to this new project at U-M are Bruce Block, a senior engineer in the Space Physics Research Lab and Gregory Dick, an assistant professor in the Department of Geological Sciences. | <urn:uuid:06b34265-27bf-4aba-9f31-c9bc38a705f4> | 3.84375 | 484 | Knowledge Article | Science & Tech. | 39.339354 |
One of my inspirations for starting this website came from the profound experience I had when looking at the Hubble Deep Field images for the first time. I felt I was looking at the most important image humanity had ever taken.
It was important because, for the first time, I got a real feeling for just how immense the universe actually is. It's absolutely mind-blowing if you stop to think about it: look at a patch of sky that appears to have nothing in it, stare at it long enough, and you see an image full of galaxies.
To fully convey how I feel about the Deep Field images, I composed the following video:
An astute viewer (such as yourself) may have asked, "How can the universe be 78 billion light years across when the age of the universe is only about 13 billion years?"
Astronomers applied redshifts to all galaxies in the HUDF and made this: The Hubble Deep Field in 3D
Good question: how can something be larger than the distance travelled at the speed of light? Since light from the beginning of the universe has only had 13 billion years to travel (not 78 billion), shouldn't the universe be only 13 billion light years across? That's a pretty intuitive thought.
But it doesn't take into account that the entire universe itself is also expanding. When a photon of light leaves its point of origin, it does so at the speed of light, so in a universe that doesn't expand, a photon travelling for 13 billion years traverses 13 billion light years.
In a universe that DOES expand, all of the distance covered by the photon gets stretched by a scale factor that reflects how much the universe has expanded since the photon passed through.
Since the universe has expanded in the time since the photon left, 13 billion years ago, we have to apply a scale factor to account for the expansion. Keeping in mind that the universe is expanding continually (it's not stopping and starting), you have to do some calculus to solve the problem. When you do that, you come up with the size of the universe being 78 billion light years in radius, 156 billion in diameter.
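To make that less abstract, here is a toy calculation in Python. It assumes a simple matter-dominated expansion law (not the real expansion history, which also includes dark energy), and just shows how the present-day distance to the light's point of origin comes out much larger than "age times the speed of light":

```python
import numpy as np

# Toy model: scale factor a(t) = (t/t0)**(2/3), a matter-dominated universe.
# This is only an illustration; the real calculation uses the measured
# expansion history (matter + dark energy) and gives a different number.
t0  = 13.8e9      # assumed present age of the universe, years
t_e = 1.0e6       # assumed emission time of the photon, years after the start

t = np.linspace(t_e, t0, 1_000_000)
a = (t / t0) ** (2.0 / 3.0)

# Present-day distance: integral of c*dt / a(t), with c = 1 light-year/year
# (trapezoidal rule over the sampled times).
integrand = 1.0 / a
distance_today_ly = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t))

print(f"light-travel distance: {(t0 - t_e) / 1e9:5.1f} billion light years")
print(f"distance today:        {distance_today_ly / 1e9:5.1f} billion light years")
print(f"ratio:                 {distance_today_ly / (t0 - t_e):.1f}x")  # about 3x in this toy model
```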
Until recently, it was thought that the rate of expansion of the universe was slowing down. Recent measurements of the cosmic microwave background radiation have shown that the universe is actually accelerating, not slowing down.
So, what I was referring to in the video is called the Comoving Distance, or 'proper distance'. You can get a more detailed definition from the above link, but the comoving distance is a more accurate measure of the size of the universe because it takes into account the fact that the observer (us, the Earth) is moving. It also takes into account that the universe has been expanding since its beginning.
Here is a another great article about the size of the universe to get more info.
So where exactly did the Hubble look to take the deep field images? Here is a photo with the region of sky I referred to in the video. As you can see, the area in the L-shaped outline is devoid of all stars. This was done on purpose. Astronomers didn't want any stars from the Milky Way galaxy to get in the way, so they selected this region.
It's important to realize that dedicating so much Hubble telescope time to this little project was a risky move. Time on this telescope is expensive, with very long waiting lists of astronomers who want to use it. It was risky because no one knew what they were going to see if they did this. I think taking the risk paid off, in a HUGE way.
This section of sky is located in the constellation of Ursa Major. This constellation lies well away from the disk of the galaxy, so there are fewer stars to dodge by looking here. It was important that the image not be contaminated with foreground stars from our own galaxy. To me, that made the image all the more amazing, because every single point of light in that picture was sure to be a galaxy.
The irregular shape of the area outlined above corresponds to the fact that the complete Hubble deep field image was pieced together from three individual images taken with the telescope pointed in adjacent areas of sky. The detector on the Hubble Space Telescope employs a really old CCD that is 800x800 pixels square. To cover more area, they took many sets of images and moved the telescope around as they did so. Then they stitched them together to make the final image.
The diagram at left is a schematic of where the Hubble looked for the 1995 Deep Field image. The constellation (really an asterism) outlined is the Big Dipper, part of Ursa Major, the Great Bear.
I mention in the video that the Hubble stared into this region of sky for a little over 10 days. This was not done all at once. Many individual images were taken over the course of weeks, and then all of them were added together.
Adding images together like this is common in astronomical imaging. If you take, for example, 10 images with exposure times of 10 seconds each and then add them all together, it produces one image equal to an exposure time of 100 seconds.
The advantage of doing it this way has to do with the way images are produced by a CCD detector. CCDs add more noise, or 'grain', to an image if you just let them sit there collecting light. Less grainy images are obtained if you just add a bunch of shorter exposures together. With each image added, the light from the galaxies increases by the amount that the image was exposed, but the graininess increases by a lesser amount (the square root of the number of images, for those technically motivated).
The result is an image that is sharper and has more detail.
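Here is a small simulation of that square-root behaviour (a toy illustration with made-up numbers, not anything from the actual Hubble pipeline):

```python
import numpy as np

# Toy demonstration of why co-adding exposures helps: signal adds linearly
# with the number of frames N, random noise adds only as sqrt(N).
rng = np.random.default_rng(42)

signal_per_frame = 5.0    # counts/pixel from a faint galaxy in one exposure (made up)
noise_per_frame  = 10.0   # random noise per exposure, counts/pixel (made up)

for n_frames in (1, 10, 100):
    frames = signal_per_frame + rng.normal(0.0, noise_per_frame, size=(n_frames, 100_000))
    stack = frames.sum(axis=0)              # add the exposures pixel by pixel
    snr = stack.mean() / stack.std()        # measured signal-to-noise of the stack
    expected = signal_per_frame * np.sqrt(n_frames) / noise_per_frame
    print(f"{n_frames:3d} frames: SNR ~ {snr:4.1f} (expected ~ {expected:4.1f})")
```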
To take the Ultra Deep Field, the Hubble Space Telescope looked toward the constellation Fornax. I'm afraid I don't have any pictures of this area yet; I'm still trying to find some, but the region shown in the animation in the video is accurate. It's just a little harder to see exactly where the Hubble was pointing when it took those pictures.
By the time they took the second image that became the Ultra Deep Field, the Hubble had been outfitted with an infrared camera called NICMOS, a 256x256 pixel camera that allowed them to see even more galaxies.
They also imaged using both cameras for a little longer, over 11 days this time, to produce the Ultra Deep Field. That image represents the farthest we have ever seen into the universe.
Keep Looking Up!
I've also cut out the Numa Numa Guy in that version.
There's a lot of information I like to get out to people that don't warrant an entire article.
I've been posting things like astronomy news and answers to questions I get from people who watch my astronomy videos or read this website. Think of it as a supplement to your love of astronomy!
Please check out the new DeepAstronomy Blog here! | <urn:uuid:27f041c6-4e85-464c-af35-61c558e094e6> | 2.859375 | 1,436 | Personal Blog | Science & Tech. | 52.113431 |
Issue Date: Jun 15, 2008
The concentration of CO2 in the atmosphere is at its highest level in the past 650,000 years, says the US National Oceanic and Atmospheric Administration. Between 1979 and 2007, the level of CO2, the primary driver of climate change, increased at an average rate of 1.65 parts per million per year. The growth rate of methane, however, has declined since 1992 due to changes in emissions. Over the past 29 years, nitrous oxide emissions have continued to increase steadily, while chlorofluorocarbons (CFCs) have plateaued since 1992.
Recent Supreme Court order in Vedanta case holds hope for tribal community life
Butterflies on the roof of the world is a vivid and engaging narrative of the author's rendezvous with the butterflies and moths in particular, and nature in general | <urn:uuid:f8870a20-39c3-4f8c-8262-468e234570bf> | 2.9375 | 171 | Content Listing | Science & Tech. | 40.31 |
The Earth's climate has changed many times in the past. Subtropical forests have spread from the south into more temperate (milder, cooler) areas. Millions of years later, ice sheets spread from the north, covering much of the northern United States, Europe and Asia with great glaciers. Today, nearly all scientists believe human beings are changing the climate. How can that be?
Over the past few centuries, people have been burning ever greater amounts of fuels such as wood, coal, oil, natural gas and gasoline. The gases formed by the burning, such as carbon dioxide, are building up in the atmosphere. They act like greenhouse glass. The result, experts believe, is that the Earth is heating up and undergoing global warming. How can you show the greenhouse effect?
What do you need?
- Two identical glass jars
- 4 cups cold water
- 10 ice cubes
- One clear plastic bag
- A thermometer
What to do?
- Take two identical glass jars each containing 2 cups of cold water.
- Add 5 ice cubes to each jar.
- Wrap one in a plastic bag (this is the greenhouse glass).
- Leave both jars in the sun for one hour.
- Measure the temperature of the water in each jar.
What you'll discover!
In bright sunshine, the air inside a greenhouse becomes warm. The greenhouse glass lets in the sun's light energy and some of its heat energy. This heat builds up inside the greenhouse. You just showed a small greenhouse effect. What could happen if this greenhouse effect changed the Earth's climate?
Another version of a greenhouse is what happens inside an automobile parked in the sun. The sun's light and heat gets into the vehicle and is trapped inside, like the plastic bag around the jar. The temperature inside a car can get over 120 degrees Fahrenheit (49 degrees Celsius).
For more about Global Climate Change, visit the State of California's Climate Change Portal at: http://www.climatechange.ca.gov.
Reprinted with the permission of the California Energy Commission. © 1994-2008 California Energy Commission.
Warning is hereby given that not all Project Ideas are appropriate for all individuals or in all circumstances. Implementation of any Science Project Idea should be undertaken only in appropriate settings and with appropriate parental or other supervision. Reading and following the safety precautions of all materials used in a project is the sole responsibility of each individual. For further information, consult your state’s handbook of Science Safety. | <urn:uuid:d0db8f5f-6058-4882-b380-1c8c5899ce9a> | 4.03125 | 505 | Tutorial | Science & Tech. | 54.953725 |
Sanger method of DNA sequencing
Fred Sanger developed the first technique for sequencing DNA. DNA is replicated in the presence of chemically altered versions of the A, C, G, and T bases. These bases stop the replication process when they are incorporated into the growing strand of DNA, resulting in varying lengths of short DNA. These short DNA strands are ordered by size, and by reading the end letters from the shortest to the longest piece, the whole sequence of the original DNA is revealed.
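As a toy illustration of that read-out step (a simplification for clarity, not how sequencing software actually works), one can sort chain-terminated fragments by length and read off their final letters:

```python
# Toy illustration of the Sanger read-out: each chain-terminated fragment
# ends in the altered (dideoxy) base that stopped replication.  Sorting the
# fragments by length and reading their last letters recovers the sequence.
template = "ACGTTGCA"     # made-up target sequence

# Replication halts once at every position, leaving one fragment per length.
fragments = [template[:i] for i in range(1, len(template) + 1)]

recovered = "".join(frag[-1] for frag in sorted(fragments, key=len))
print(recovered)          # ACGTTGCA
assert recovered == template
```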
Play Large: MOV / WMV (4 MB)
Play Small: MOV / WMV (2 MB)
To download the videos, in Internet Explorer right-click the link and select "Save Target As..." In Firefox right-click and select "Save Link As..." In Safari right-click and select "Download Linked File As..." | <urn:uuid:506fa508-e8fe-4b70-8b31-a7406f7325de> | 3.875 | 169 | Tutorial | Science & Tech. | 67.706119 |
Saving the Condors: Tim Hauck '03 and Matt Podolsky '06
Tim Hauck ’03 and Matt Podolsky ’06 are helping to bring North America’s largest flying land bird back from near extinction. by Aaron King ’09
Each morning, Tim Hauck ’03 and Matt Podolsky ’06 are greeted by the rising sun over the Vermillion Cliffs in northern Arizona. The solitude of this remote place is broken by their morning stirrings, the endless expanse of blue sky interrupted only by jutting edifices of earthen swirls of red, yellow, and brown. From their lodgings, they grab their gear and a bite to eat before making the short trip to the cliff rim, some 1,500 feet above the valley floor.
Here, somewhere between paradise and wilderness, Hauck and Podolsky dutifully wait. With their binoculars, they peruse the skies for the largest flying land bird in North America — the California condor.
Hauck and Podolsky are field biologists with the Peregrine Fund, a group focused on the conservation of birds of prey. They are part of the team that is reintroducing and managing condors in the Grand Canyon and surrounding areas.
“We’re releasing the birds with tags and radio and GPS transmitters attached so that we can keep track of their movement,” Podolsky says. “We’re just trying to keep birds alive and healthy — that’s our primary goal in tracking them.”
While that goal may sound straightforward, it was only 25 years ago that the condor population numbered 22. Today, the Arizona site alone is home to 75 wild condors. It is, to date, the most successful of the project’s reintroduction sites. Hauck has been with the project since mid-2005 and has seen marked progress.
“When I started, there were probably around 50 California condors in Arizona,” he says. “It’s a unique situation out here with amazing nesting areas, very remote. This is a special population that has the best chance of thriving.”
The one remaining hurdle is the threat of lead poisoning. Lead shot has been widely used by hunters for years. One of the condor’s primary sources of food is gut piles — the remains left by hunters after they clean their kills in the field. If the animal was shot with lead ammunition, the feeding condors could be in danger.
“We often see hundreds of microscopic fragments of lead in the gut piles and carcasses left in the field,” Podolsky says. “This is what condors are feeding on, and it’s extremely toxic.”
Despite the team’s best efforts to remove contaminated piles from the field, three or four condors are lost each year to lead poisoning. As a result, Hauck and Podolsky spend a lot of time trapping the birds and testing their blood-lead levels.
Nonetheless, progress is real and steady. They have been largely successful in educating hunters about the perils of lead ammunition, and the Peregrine Fund has partnered with the Arizona Game and Fish Department to provide a free, voluntary, nonlead ammunition substitution program. The program has had 90 percent compliance in the area around the condor release site.
Witnessing the dramatic recovery of the condor, once so near extinction, has profoundly affected Hauck and Podolsky.
“The way this project is run can serve as a good example for wildlife conservation efforts in general,” says Podolsky. “It’s really rewarding stuff for all of us here.”
Adds Hauck: “I still wake up sometimes and think, ‘Wow, I can’t believe I’m really doing this.’” | <urn:uuid:d7bb4ccb-2877-4b51-a991-cceb0da45e6b> | 2.765625 | 818 | Truncated | Science & Tech. | 56.485908 |
"It's just one of the things that distinguishes humanity, that we can actually answer questions that are deep and fundamental, make predictions and do science, and that it actually works," said Lisa Randall, professor of physics at Harvard and author of "Knocking on Heaven's Door."
Consider also that all the technology you know can be traced to pure research, initially perceived as esoteric. Electric lights -- and, indeed all of electricity -- came from fundamental research in the 19th century.
Computers and transistors arose from the understanding of quantum mechanics in the 1920s and 1930s, Incandela said.
Certainly, Einstein didn't know that his relativity theories would become pertinent to your smartphone's GPS. The atomic clocks on satellites must be corrected because, in accordance to Einstein's predictions, moving objects in space are on a different "time" relative to an observer on Earth.
"Technology usually lags pure science by a large amount of time, and I would say, probably now there's a good chance we're further ahead of technology than ever before," Incandela said.
Even the World Wide Web arose out of a proposal from Sir Timothy Berners-Lee, who was a physicist at CERN in the 1980s. Essentially, the reason we have the Web that we all know and love is that Berners-Lee wanted to enable better communication among physicists there.
It's likely, Primack said, that useful things will also come from the searches for dark matter and dark energy, and for other particles that the LHC is hunting. No one knows what the uses will be yet -- but then again, no one predicted that the World Wide Web would arise at a particle physics lab, either. CERN is, in fact, the same laboratory that houses the LHC.
Nothing is certain, of course, but it is at least possible that doing this pure science could help bring into reality the sorts of technologies that right now seem like science fiction.
"If we're really going to explore the universe, in terms of actually moving through the universe and having the ability to do space exploration that's what you see in the movies, so to speak, the 'Star Trek' type things, in principle, we're going to need to understand and have the ability to harness the potential of nature at a level that we don't have now," Incandela said. | <urn:uuid:aec04a3d-a96e-4b28-8809-8bae3d761f88> | 3.25 | 483 | Audio Transcript | Science & Tech. | 40.543182 |
In this chapter, we use Common Lisp to design and implement a CORBA server using the LispWorks ORB.
Our server presents an object-oriented interface to a bank
object and its accounts. Because we want the bank's account records to persist beyond the lifetime of the server, we would store the account records in a database. This database could be manipulated by the server using an SQL interface, such as that currently available with the LispWorks product.
Since the primary motivation for this tutorial is to illustrate the use of CORBA, we simply simulate the database using a hash table. It would be fairly easy to replace this implementation with code that manipulates a real database.
The hash table simply uses a structure instance for each row:
(defstruct database-row
  name      ; the account's name
  balance   ; the current balance
  limit)    ; overdraft limit, or nil if no overdraft is allowed
In the case of an account that does not allow an overdraft, the limit slot
will be nil. | <urn:uuid:9c07a1c0-c323-4e7d-9c68-25f7eff68e18> | 2.765625 | 179 | Documentation | Software Dev. | 37.896622 |
GLOBAL warming may thwart attempts to eradicate rats from the sub-Antarctic island of South Georgia. Glaciers are melting so fast that the rodents will move to new areas before they can be poisoned, with devastating consequences for native birds.
Sailing ships brought rats to South Georgia more than 200 years ago. Since then they have invaded two-thirds of the shoreline, which is covered with tussock grass. When rats move in, they feed on the chicks and eggs of pipits and burrowing seabirds, wiping them out.
This is producing profound changes in the landscape. "The burrowing birds improve soil quality by aeration and nutrient provision, so you can tell the rat-infested areas from the unhealthy look of the grass," says Sally Poncet of the South Georgia Baseline Environment Survey.
Poncet and her colleagues are now trying to eradicate the rats to preserve the island's natural habitat. In small-scale trials, they are ...
| <urn:uuid:28853cb4-53fc-40c5-a5ad-721d118f40d1> | 3.609375 | 228 | Truncated | Science & Tech. | 54.427538 |
Climate Action: Track Our Progress
Wanting to reduce our impact and take action against climate change is all well and good, but how do we begin to actually make a difference? Set some real goals!
In 2009, NOLS set Carbon Reduction Goals for the school as a whole. We want to reduce our carbon at aggressive levels sooner rather than later to prevent more emissions from ever entering the atmosphere to begin with. So, in keeping with the spirit of our 2013 Strategic Plan goals, we set stretch goals and interim targets—big enough to energize us, but not so high that they’re out of reach.
We settled on two interim goals:
- a 10% reduction from our 2006* carbon levels by 2010
- a 30% reduction by 2020
*Our 2006 carbon levels were determined in a sustainability audit performed in 2007 by Pure Strategies, Inc.
We are happy to note we exceeded our 2010 goal in 2009—a year early! While we expect numbers to fluctuate in our 2010 footprint as we grow more consistent in our data collection, we feel this is an accurate representation of our sustainability efforts and energy use in fiscal year 2009.
We based our goals on a number of recommendations and standards set by other institutions of higher education. Most of these were based on the 2% Solution, which recommends reducing absolute carbon emissions 2% annually until the year 2050, for an overall carbon reduction of greater than 80%. This recommendation came about as a result of general consensus in the scientific community that this reduction will keep the parts per million (ppm) of atmospheric carbon below catastrophic levels.
These goals are absolute, meaning they reflect the total actual carbon emitted by school operations within our carbon footprint boundary. While we decided to set our carbon reduction goals in absolute terms, we also decided that reporting them in both absolute and normalized interpretations was important.
Normalized carbon reporting will show us how much carbon we use per student day. In other words, it will illustrate our carbon “efficiency.” This will be especially helpful if the school grows in leaps and bounds for a period, making it a challenge to reduce our carbon emissions during the same period of time. We will be able to look at the normalized information and see that while perhaps we didn’t hit our absolute goal, we did reduce the amount of carbon we use for each student day we have in the field. | <urn:uuid:6c443bd5-24e9-41e2-b64c-73a63b20993a> | 2.71875 | 485 | Tutorial | Science & Tech. | 43.231517 |
II. HABITAT AND DISTRIBUTION
The ribbed mussel can be found along the Atlantic coast from the Gulf of Maine to Florida and the Gulf of Mexico (Franz 2001). It has also been reported from San Francisco Bay on the West coast, where it was introduced.
Geukensia demissa occurs in the Indian River Lagoon.
III. LIFE HISTORY AND POPULATION BIOLOGY
Age, Size, Lifespan:
Adult Geukensia demissa can live for more than 15 years and grow to nearly 10 cm in length. The age of the ribbed mussel can be determined by counting the annual growth ribs on the shell (Brousseau 1982). Juvenile mussels can mature when they reach 12 mm.
Geukensia demissa can be found among intertidal oyster reef clusters
in numbers over 1,500 per m2 (Coen et al. 1999).
Unlike oysters, ribbed mussels have the ability to reattach if dislodged,
providing this species with more opportunities to respond to disturbance.
Densities of 2000 up to 10,000 per m2 have been reported for this species
in areas along the northern Atlantic coast.
Ribbed mussels have separate sexes and the sex can be determined by the
color of the mantle. Females tend to be a medium brown whereas males are a
yellowish-cream color. There is usually one annual spawn that occurs
between June and September depending upon the region (Borrero 1987).
IV. PHYSICAL TOLERANCES
The ribbed mussel is very hardy, tolerating short-term exposures to high temperatures, but succumbing at temperatures above 45°C (Jost and Helmuth 2007).
Geukensia demissa exhibits broad salinity tolerance, living in seawater at salinities ranging from less than 6 ppt to as high as 70 ppt.
V. COMMUNITY ECOLOGY
Geukensia demissa are filter feeders that "pump" water over their
gills where particles are either retained or passed into the digestive
system. The ribbed mussel possesses large latero-frontal cirri that
facilitate the retention of particles above 4 µm, with a filtration rate
measured in the laboratory of 6.80 liters of seawater per hour (Riisgard
1988). Ribbed mussels are one of the few bivalves able to forage on
small-sized bacterioplankton (Newell and Krambeck 1995, Kreeger et al. 1990).
American Museum of Natural History, Bivalves- Research, Training, and
Electronic Dissemination of Data. Available online.
Borrero FJ. 1987. Tidal height and gametogenesis: reproductive
variation among populations of Geukensia demissa. Biological
Brousseau DJ. 1984. Age and growth rate determinations for the
atlantic ribbed mussel, Geukensia demissa Dillwyn (Bivalvia:
Mytilidae). Estuaries and Coasts 7:233-241
Coen LD, Knott DM, Wenner, Hadley NH, and AH Ringwood. 1999. Intertidal
oyster reef studies in South Carolina: design, sampling and
experimental focus for evaluating habitat value and function. Pages 131
156, In: MW Luckenbach, Mann R, and JA Wesson (eds.), Oyster Reef
Habitat Restoration: A Synopsis and Synthesis of Approaches. Virginia
Institute of Marine Science Press. Gloucester Point, Virginia.
Franz DR. 2001. Recruitment, survivorship, and age structure of a New
York ribbed mussel population (Geukensia demissa) in relation to
shore level - a nine year study. Estuaries 24:319-327.
ITIS Integrated Taxonomic Information System. Available online.
Jost J and B Helmuth. 2007. Morphological and Ecological Determinants
of Body Temperature of Geukensia demissa, the Atlantic Ribbed Mussel,
and Their Effects On Mussel Mortality. Biological Bulletin 213:141-151.
Kreeger DA, Newell RIE, and CJ Langdon. 1990. Effect of tidal exposure
on utilization of dietary lignocellulose by the ribbed mussel
Geukensia demissa (Dillwyn) (Mollusca:Bivalvia). Journal
Experimental Marine Biology and Ecology. 144:85-100.
Newell SY and C Krambeck. 1995. Responses of bacterioplankton to tidal
inundations of a saltmarsh in a flume and adjacent mussel enclosures.
Journal of Experimental Marine Biology and Ecology 190:79-95.
Riisgard HU. 1988. Efficiency of particle retention and filtration
rate in 6 species of Northeast American bivalves. Marine Ecology
Progress Series 45:217-223.
Melany P. Puglisi, Smithsonian Marine Station
Submit additional information, photos or comments
Page last updated: October 1, 2008 | <urn:uuid:13365ee1-67bd-4b0e-9467-fcfad873a573> | 3.296875 | 1,090 | Knowledge Article | Science & Tech. | 43.387652 |
In science, it is a well known fact that in the process of applying existing theories to the real world, it is as much a question of knowing "how much" of something is happening as of knowing what exactly is happening. It's this crucial aspect that has physicists worrying about adding never-ending degrees of precision to any numbers they report (typical numbers in particle physics have up to 9 significant digits [PDF alert]). Chemists know well how important getting the right amount of catalyst is in order to make test-tube magic. Biology, unfortunately, has been the orphan child of this scientific obsession. The complexity of a single cell introduces so many variables that approaching the precision of physics is all but deemed impossible. Indeed, some have suggested that certain behaviors of living systems may really be 'incomputable'. But perhaps this is why it is an exciting time for biology. Scientific thought in the last 20 years has morphed slowly from the search for universals to understanding the nature and consequences of variability. It is this variability that recent work (link at the bottom) from Grecco et al. at the Max Planck Institute for Molecular Physiology, published in Nature Methods, seeks to unravel by giving biologists the means to do so.
Specifically, Grecco et al. study the flow of information in the form of phosphate groups, a staple information tag that cells attach to their proteins to switch them on or off or to give them special properties. Phosphate groups are attached to proteins by enzymes called kinases, and removed by enzymes called phosphatases. The fine balance between the kinase and phosphatase activities determines which proteins are phosphorylated and remain so for varying lengths of time. A lot is known about which proteins get phosphorylated under what circumstances from tedious experiments over the last years that involve extracting the protoplasm into a jumbled soup and then going about detecting how much phosphate is attached to which proteins. By replacing this step with a clever optical measurement, the authors opened the door to studying these interactions in living cells. Because the method is optical, and essentially works on a microscope (albeit not a very conventional one), the authors can not only tell which proteins get phosphorylated in response to a particular stimulus, but also where exactly in the cell these proteins are located.
To extract this information, the authors create a library of proteins containing phosphorylatable Tyrosine residues*, each tagged with one of the now-famed fluorescent proteins. They adapted a technique called 'reverse transfection' to create an array of tiny spots of cells (a cellular array), each expressing a different fluorescently-tagged protein. Then they added a stimulus – Epidermal Growth Factor, or EGF in short – an extracellular signal that sets off cellular signaling eventually leading to growth of a tissue. Then they added an antibody that would specifically recognize phosphorylated Tyrosines. The antibody itself was tagged with a particular fluorescent molecule. What they ended up with were cells glowing with the fluorescent protein they had been made to express, and an antibody that attached to this protein if its Tyrosine were phosphorylated.
Now came the optical part. How does one find out if the antibody had bound to a protein? One of the most sensitive ways of detecting binding is to use a phenomenon called Foerster/Fluorescence Resonance Energy Transfer (FRET). Fluorescent molecules absorb light of a particular color, and electrons in them become 'excited', storing the energy for a brief instant. Eventually though, the electron gives off this energy as light again, but because some of the energy has been lost (no process is 100% efficient), the light it gives off is of a different color. Fluorescent molecules thus have an 'absorption spectrum' and an 'emission spectrum'. What happens, though, if there is another molecule sitting right next to this excited molecule that can absorb at exactly the same color that our excited molecule is about to emit? The laws of quantum physics say that there is a chance that resonance will occur between these molecules, and the excited molecule, instead of emitting light, will give away its energy to this other fluorescent molecule, which is perfectly suited to taking up this energy. This transfer can occur, however, over only a very short distance, the so-called Foerster radius – such that if it does occur, in almost all cases it means that the two molecules are less than 20 angstroms from each other. In a solution of molecules, this is only likely to happen to any significant degree if the molecules are really bound to each other. In practice, it means the following: imagine a fluorophore that absorbs blue light and gives off green light. Another absorbs green light and gives off red light. If these molecules are so close as to be bound to each other, then shining blue light will give off some green light, but also some red light. Some of the energy of the first fluorophore (the donor) has 'leaked' into the second one (the acceptor).
Left: A schematic representation of FRET-FLIM used for detecting Tyrosine Phosphorylation. Right: The high-throughput microscopy setup for measuring FLIM in biological samples.
FRET has been measured usually as the brightness of the signal. If energy is leaking between fluorophores, than the acceptor is brighter than it should be, and correspondingly the donor is dimmer than it should be. This method is not very precise in finding out how many of the donor-acceptor molecules are bound, though, since there are always unbound donors that confuse the signal. The authors use a different method to quantify FRET – they measure the average amount of time the donor fluorophores stay in their “excited” state – the fluorescence lifetime. If there is an acceptor bound and energy can leak, the donor molecules should remain in their excited state for a much shorter time – in other words their fluorescence lifetime will reduce. When performed on a microscope, the method is called Fluorescence Lifetime Imaging Microscopy (FLIM). The most useful part is that the average fluorescence lifetime reduces precisely in proportion to the amount of donors bound to acceptors, and FLIM can therefore tell us how much of a protein is phosporylated and thus bound to an antibody with an acceptor fluorophore.
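For reference, the standard relations behind this argument (these are textbook FRET/FLIM formulas, not expressions quoted from the paper):

E = 1 - \frac{\tau_{DA}}{\tau_{D}}, \qquad \bar{\tau} = \alpha\,\tau_{DA} + (1-\alpha)\,\tau_{D} \;\Rightarrow\; \alpha = \frac{\tau_{D} - \bar{\tau}}{\tau_{D} - \tau_{DA}}

Here \tau_{D} is the donor lifetime without an acceptor, \tau_{DA} the lifetime of donors undergoing FRET, E the FRET efficiency, and \alpha the fraction of donors bound to an acceptor-labeled antibody; this is why the measured average lifetime reports the bound (phosphorylated) fraction directly.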
The authors thus combined a high-throughput screening approach, with a sensitive optical method to detect protein phosphorylation, creating Cell Array – FLIM (CA-FLIM). They now had a way to see which of many proteins were phosphorylated, to what extent, and where in the cell, at the level of individual cells, when they added a stimulant. This is a level of detail in information that has been achieved in very few instances in biology. As with all cases of a data avalanche, however, the authors quickly found that biological variability was showing its effects – not all cells responded in the same way, and conventional methods of handling the data were incapable of resolving real phenomenon from the all-pervasive effects of this variability. So they had to develop a new way of analyzing this information, grouping cells into clusters based on how much of which protein was being phosphorylated. The mathematics of this new kind of ‘global analysis’ itself forms a significant proportion of the paper.
Finally, the authors compared their results with those obtained from the study of individual proteins done with tedious conventional methods. They found their data agreed with what other scientists had painstakingly discovered over years. Of course, CA-FLIM had revealed information of those proteins and many more in a matter of hours! What CA-FLIM added to the mix in many cases is information about the spatial aspects of this phosphorylation of proteins.
One point of critique here is that in order to detect the phosphorylation of any protein, that protein needs to be expressed in its fluorescently tagged form. While this ectopic expression is relatively easy to do, it adds an extra population of that particular protein on top of what the cell itself is creating from its genome, known as the endogenous protein level. Since the endogenous protein is not fluorescent, what happens to it remains in the dark. In all studies involving ectopic expression, the implicit assumption is that the fluorescent population behaves similarly to the endogenous, non-fluorescent one, and that all modifications occur to the same extent on both molecules. While this is a more or less reasonable assumption for many proteins, there is no way to be absolutely sure. Indeed, the assumption is in no way ironclad and several exceptions are known. In certain model organisms, such as yeast, it is possible to get around this problem by replacing the organism's genes with fluorescently tagged versions of those same genes, but in mammalian cells, this is not yet technically possible.
CA-FLIM now adds to the growing list of methods that reduce the time required to detect cellular protein interactions by an order of magnitude. As technical challenges in implementation and automation are overcome, and CA-FLIM is expanded to other stimuli beyond the EGF used in this study, the multi-pronged nature of this approach should provide insights, or at least shine a light on interesting cellular phenomena – increasing our knowledge of the basic unit of life further.
*Proteins can be phosphorylated on several amino acid residues , most commonly on Serine, Threonine and Tyrosine. Tyrosine phosphorylation seems more significant in the first steps of intracellular signaling and the authors focused on this particular kind.
NOTE : Nachiket Vartak , the author of this post , did not contribute to the study and development of CA-FLIM, but is affiliated to the same institution where the work was done. The author declares no conflict of interest.
Grecco, H., Roda-Navarro, P., Girod, A., Hou, J., Frahm, T., Truxius, D., Pepperkok, R., Squire, A., & Bastiaens, P. (2010). In situ analysis of tyrosine phosphorylation networks by FLIM on cell arrays Nature Methods, 7 (6), 467-472 DOI: 10.1038/nmeth.1458 | <urn:uuid:56c60184-c286-459f-98f0-3c10e8c603e3> | 3.109375 | 2,125 | Academic Writing | Science & Tech. | 35.160935 |
This photograph of a cumulonimbus cloud was taken near Fort Lupton, Colorado. You can see some towers growing in this cloud.
Click on image for full size
Photo courtesy of Gregory Thompson
Supercell Thunderstorms and Squall Lines
A supercell thunderstorm is a huge rotating thunderstorm. It can last for several hours as a single storm. These storms are the most likely to produce long-lasting tornadoes and baseball-sized hail. Tornadoes produced from supercell thunderstorms are typically the largest and most damaging tornadoes due to the long duration of the storms. Several tornadoes can be produced from one supercell thunderstorm.
There are two types of supercell thunderstorms. One type brings high amounts of precipitation, creating downbursts, flash floods, and large hail. The other type brings low amounts of precipitation, developing tornadoes and large hail.
A squall line consists of several thunderstorms banded together in a line. Usually a squall line forms between a cold front and a warm front. A squall line can form from an individual storm that has split. This split storm helps to form the line of storms.
There are two types of squall lines. One type is a line of cumulonimbus clouds that grow and decay; the other is a line of steady supercells. Squall lines can be just as severe as a supercell thunderstorm. A squall line can produce heavy precipitation and strong winds. Most of the precipitation in the United States is from a squall line. Squall lines can extend over 600 miles (1000 km) when associated with thunderstorms.
A supercell thunderstorm is a huge rotating thunderstorm. It can last for several hours as a single storm. These storms are the most likely to produce long-lasting tornadoes and baseball-sized hail. Tornadoes...more | <urn:uuid:9244f65f-cf1b-4e60-bdec-5095969cc961> | 3.453125 | 714 | Content Listing | Science & Tech. | 56.788406 |
The reality is more prosaic.
First of all, mini black holes at the LHC are an option only if one of the theories of "large extra dimensions" was in fact true. But of course, these theories are only speculations so far. Second, should mini black holes be created in high-energy particle collisions, they would evaporate very fast, due to Hawking radiation. Though Hawking radiation has not been experimentally verified so far, its existence is expected in almost all theoretical scenarios investigated (no matter where you go, you will always find somebody who disagrees on something).
But what would happen in the (quite unrealistic) case that tiny black holes were created at the LHC, and that they did not decay by the emission of Hawking radiation?
It's important to keep in mind that black holes do not have some special "vacuum cleaner" property - they just attract other stuff by the force of gravity.
Now, the tiny black holes that could be created at the LHC if theories of large extra dimensions were indeed correct would have masses in the range of a few TeV. 1 TeV corresponds to about 1000 times the mass of a proton, which is 0.94 GeV, or 1.7×10^-27 kg. The corresponding Schwarzschild radius is about 1/1000 fm, or 10^-18 m.
Because gravity is such a weak force, it's safe to assume that nothing happens to matter that encounters the black hole at a larger radial distance than one Schwarzschild radius. Assuming for simplicity that all stuff hitting with a smaller distance gets sucked in, the black hole has a cross section of about 10^-36 m², or 10 nanobarn (that's more than typical neutrino cross sections).
What happens if such a "naive" black hole passes through the Earth?
For simplicity, we can assume that the Earth is made up of iron, with a density of 8 g/cm³, or 8000 kg/m³. Since mass is essentially the mass of nucleons (the protons and neutrons in the nuclei of the atoms), and taking into account the proton mass, this density corresponds to a density of 5×10^30 nucleons per m³. But this means that on average, the black hole would travel 200 km before encountering a nucleon (200 km travel distance × 10^-36 m² cross section × density of nucleons of 5×10^30 nucleons per m³ equals one nucleon).
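Written out as a mean-free-path estimate (using only the numbers quoted above):

\lambda = \frac{1}{n\,\sigma} \approx \frac{1}{(5\times10^{30}\,\mathrm{m^{-3}})\,(10^{-36}\,\mathrm{m^{2}})} = 2\times10^{5}\,\mathrm{m} = 200\,\mathrm{km}.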
Nobody knows exactly what will happen when a tiny black hole hits a nucleon. On the scale of the black hole, the nucleon is about 1000 times larger in diameter, and a very dilute cloud of a few quarks and gluons. It may be that the black hole hits one of these partons, as they are called, thus disrupting the nucleon and carrying away a fraction of its mass. There is no theory to describe this, and there are all kinds of problems involved, as to what happens to confinement, colour neutrality, and so on. But whatever happened, in the end, the black hole may have gained, in the most extreme case, the mass of a nucleon.
Now, this is just one permille of the mass of the black hole, so it won't change its momentum, and it will just travel on along a straight path. After 100 encounters or so with nucleons, on average, it will have left the Earth, even if it goes right through its centre. In addition, one should keep in mind that in scenarios in which a black hole can reach a thermodynamically stable endstate (scenarios that are strongly disfavored for theoretical reasons), a black hole that gained some mass would no longer be in this stable endstate, and would evaporate again until it reached its initial mass again.
If the black hole starts at rest on the Earth's surface, it will fall through the centre of the Earth and engage in the oscillatory motion that is set as a problem in undergraduate textbooks, dealing with the free fall through a tunnel across the Earth.
But does the black hole start at rest, in the first place?
In fact, if the initial velocity of the black hole is larger than the escape velocity of the Earth's gravitational field, it will just escape into space. The escape velocity for the black hole is 11.2 km/s, the same as for any ordinary cannon ball or satellite. This corresponds to a value of β = v/c ≈ 0.00004. That's tiny for typical high-energy collision kinematics, and most black holes produced will easily exceed this number by orders of magnitude. Earth's gravity cannot trap these black holes created in the collision.
Even though the centre of mass frame of the colliding protons at the LHC is at rest with respect to the detectors and the surface of the Earth, this is generally not the case for the pair of colliding partons that creates the black hole. As a consequence, the black holes will have quite large momenta along the beam axis. Technically speaking, this momentum is expressed by a variable called rapidity y, where y = arctanh β. The black hole can become trapped in the Earth's gravitational field only if its rapidity does not exceed 0.00004 (for such a small argument, arctanh β is pretty much the same as β).
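Putting in the numbers:

\beta = \frac{v_{\mathrm{esc}}}{c} = \frac{11.2\ \mathrm{km/s}}{3.0\times10^{5}\ \mathrm{km/s}} \approx 3.7\times10^{-5} \approx 0.00004, \qquad y = \operatorname{arctanh}\beta \approx \beta.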
Now, here is a plot of the rapidity distribution of black holes from collisions at the LHC, assuming large extra dimensions.
Figure 5 from Black Hole Remnants at the LHC by Benjamin Koch, Marcus Bleicher and Sabine Hossenfelder, JHEP 0510 (2005) 053 (hep-ph/0507138)
Although this is not the initial rapidity distribution of the nascent mini black holes, the essential feature of the plot is clear: the typical rapidity of the black holes is of the order 1. For comparison, the rapidity of the 7 TeV colliding protons is 9.6. This means that just about one in 100000 tiny black holes produced will have a velocity along the beam axis smaller than escape velocity. But even these black holes will have some initial velocity in the direction perpendicular to the beam axis. This velocity is usually expressed by the "transverse momentum", and typically, it is also much bigger than the escape velocity. There are no tiny black holes created "at rest" in the detector!
In short: if tiny black holes were produced because large extra dimensions did exist in the necessary number and with the necessary radius, and if they did not evaporate within 10^-26 seconds as expected (Hawking evaporation is considered a very robust prediction, so this scenario is not supported by well-founded theories), most of them would have such a high velocity that they would escape the gravitational field of the Earth for good. Even if they travelled straight through the centre of the Earth, the few nucleons they can hit wouldn't change their momentum in an appreciable way.
Tags: physics, LHC, black hole | <urn:uuid:29d06dfd-9799-46e7-bba1-7e2be3088f14> | 3.53125 | 1,453 | Comment Section | Science & Tech. | 53.760415 |
126 pages | 8 1/2 x 11
The ocean is a fundamental component of the earth's biosphere. It covers roughly 70 percent of Earth's surface and plays a pivotal role in the cycling of life's building blocks, such as nitrogen, carbon, oxygen, and sulfur. The ocean also contributes to regulating the climate system. Most of the primary producers in the ocean are microscopic plants and some bacteria, and these photosynthetic organisms (phytoplankton) form the base of the ocean's food web. Monitoring the health of the ocean and its productivity is critical to understanding and managing the ocean's essential functions and living resources. Because the ocean is so vast and difficult for humans to explore, satellite remote sensing of ocean color is currently the only way to observe and monitor the biological state of the surface ocean globally on time scales of days to decades.
Ocean color measurements reveal a wealth of ecologically important characteristics including: chlorophyll concentration, the rate of phytoplankton photosynthesis, sediment transport, dispersion of pollutants, and responses of oceanic biota to long-term climate changes. Continuity of satellite ocean color data and associated climate research products are presently at significant risk for the U.S. ocean color community. Assessing Requirements for Sustained Ocean Color Research and Operations aims to identify the ocean color data needs for a broad range of end users, develop a consensus for the minimum requirements, and outline options to meet these needs on a sustained basis. The report assesses lessons learned in global ocean color remote sensing from the SeaWiFS/MODIS era to guide planning for acquisition of future global ocean color radiance data to support U.S. research and operational needs.
National Research Council. Assessing Requirements for Sustained Ocean Color Research and Operations . Washington, DC: The National Academies Press, 2011.
| <urn:uuid:a29d481b-0841-4872-b74a-e8a609b13f64> | 3.703125 | 383 | Knowledge Article | Science & Tech. | 28.123535 |
Loxahatchee National Wildlife Refuge; Florida;invasive Plants control; non-native; Invasive species
SPRAYING MELALUKA PLANT. Foreign plants and animals that establish themselves in domestic ecosystems sometimes become highly invasive and aggressive - displacing or out competing beneficial native species. Here, a FWS biologist at Loxahatchee NWR...
Starling sitting on dead branch in Siskiyou County, California. A non native species, european starlings were introduced to the United States in Central Park, New York in the early 1890's. All of the European Starlings found today in North...
Atlantic salmon spend the first two years of life in the fresh water habitats of their native stream (occasionally three, depending upon food availability). At two years of age, the fish undergo the process of smoltification, resulting in changes...
A Texas Cattle Rancher Who “Became a Believer”
“We’re just ‘wildlifing’ it all over the place, and we’re
happy to do it,” Bob Long said. Long is enhancing
habitat on his 550-acre property to benefit the Houston
Montana: Endangered Species Grants
Help Keep “Big Sky Country” Big
“That’s one of the most productive bull trout streams
in the country,” said Fish and Wildlife Service
biologist Bob Lee of the headwaters of the Bull River
"The Pribilof Report 1949" King Island Eskomos plying their ivory carving craft beneath their skin umiaks on the beach near Nome where they have a summer camp. Only the simplest of hand tools are used for carving the walrus ivory tusks. | <urn:uuid:8a13c5b7-179b-405e-b9bc-355b6692978f> | 2.890625 | 369 | Content Listing | Science & Tech. | 44.711656 |
<programming> The debugging technique where the programmer inserts print statements into a program so that, when run, the program leaves a "trail of breadcrumbs" allowing the programmer to see which parts were executed. The information output may just be a short string to indicate that a particular point in the code has been reached, or it might be a complete stack trace. The output typically just goes to the window or terminal in which the program is running, or may be written to a log file.
printf is the standard C print function; other languages would use different names.
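A minimal illustration in C (the function and the messages are made up for this example):

#include <stdio.h>

/* Breadcrumb printf calls show which branch ran and what value was computed. */
int scale(int x)
{
    printf("scale: entered with x=%d\n", x);      /* breadcrumb 1 */
    if (x < 0) {
        printf("scale: negative branch taken\n"); /* breadcrumb 2 */
        x = -x;
    }
    printf("scale: returning %d\n", x * 10);      /* breadcrumb 3 */
    return x * 10;
}

int main(void)
{
    scale(-3);
    return 0;
}

Running the program and reading the trail in the terminal shows how far execution got before any crash or wrong result.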
Try this search on Wikipedia, OneLook, Google
Nearby terms: DEBUG « debugging « debugging an empty file « debugging by printf » DEC » dec » DEC Alpha | <urn:uuid:4776d761-db2a-4bad-bdcf-b2d9aac9a057> | 3.765625 | 147 | Structured Data | Software Dev. | 39.615 |
A new study from Duke University revises our idea of how the visual system works.
Old idea: bottom up
According to the older idea, images are constructed in our minds in a hierarchical fashion starting at the bottom. Data arriving at the retina is sorted into basic features, such as horizontal and vertical lines. These elements gradually resolve as they pass up through the layers of neural organization. Eventually they form into complete images that we recognize as particular objects. The brain’s higher level inferences, according to this model, develop only after this bottom-up process is completed.
New idea: top-down
Experiments using modern brain imaging data show that the process is really quite the reverse. The new idea, called “predictive coding,” proposes that the brain develops predictions about what we’re about to see. It tests those predictions in a top-down, rather than merely a bottom-up mode. The information that we take in is edited to fit the conception. This all happens within milliseconds and is largely unconscious.
Whenever low-level input contradicts the predictions and forces us to change our reading of an object, there is a storm of neural activity. This neural storm is particularly strong with optical illusions, such as the one that began this post. You may have experienced this if you ever, in a split second, mistook a stick on the ground for a snake. The top-down expectations get quickly overwhelmed by bottom-up corrections.
This illustration from Dinotopia: The World Beneath shows this kind mental process in action. An ambiguous cave formation, left, can either be seen as a skull or a woman with babies. Depending on how the mind wants to perceive the form, the details are marshaled to match the perception.
This phenomenon of conjuring faces and other meaningful patterns in apparently random visual data is a phenomenon that’s also called “pareidolia,” covered in an earlier post.
Here are three suggestions for how this new theory may affect us as artists (and I'm sure you'll think of other implications):
1. We mostly see what we expect to see. Viewers come to your pictures unconsciously preloaded in various ways.
2. Much information is lost because of this automatic editing process. It never reaches our conscious minds because it's edited out. Therefore we just don’t see a huge amount (and maybe that’s a good thing at times).
3. Having a wrong search image can actually make us blind to what we’re looking for. For example, if you thought the book you were looking for had a blue spine, you might not even see the correct book with the red spine.
Duke University news release
Another report on predictive coding
First optical illusion from Planet Perplex
Related GJ Post on Pareidolia and Apophenia:
Thanks, Rob Wood and Brad | <urn:uuid:8a2cbc30-a6d8-4b4f-9c4d-ea685f91f176> | 3.359375 | 597 | Personal Blog | Science & Tech. | 48.345588 |
In algebra 1 we've covered most line topics I think are important:
* calculating slope from a graph and from 2 points without a graph
* looking at a graph and finding the line equation in all 3 ways (standard, pt-slope,slope-int)
* being given either 2 points or a point and a slope and calculating any of the 3 equations.
* looking at any of the 3 line equations and graphing
* looking at any of the 3 line equations and finding 5 points on the line (and the intercepts)
* seeing that if you have a graph and its line equation in any form, that you can plug in any point on the line and it will "work" and plug in any point not on the line, and it won't work in the equation.
(I've probably missed some, but this is what I remember)
We still have to talk about parallel and perpendicular lines, but soon we'll move on to systems of equations. Before we do that, I thought that this would be the perfect time to assign a line project to my students. In all their later topics and math courses, lines will just be an aside where it's already assumed they have learned about them.
We had a discussion about positive and negative linear correlation and no correlation and correlation that is not linear. We went through the process of creating best fit lines and using that line to make a prediction for another point that is not represented by your data. Then I discussed that in real life, people collect data, and if they all fell on a straight line, the mathematicians/scientists would faint from the rareness of this occurrence, and that usually, if the data seems to be linear, it'll look like our scatter plots that have a linear correlation, and THAT'S how this is used in real life. People make a model and then use that model to make predictions.
Then I assigned this project that I will collect in a succession of 3 class periods. I already have had one student send me her data that was interesting. She found information about teacher pay from the 40's to the 2000's, and you can see that it looks linear until about 1980, and then there's a BIG leap of the points and then it looks linear again. After a back and forth e-mail discussion, she found that some site stated that there was a 400% increase in pay in 1980 .... hmmm, haven't verified that, but I remember some of my older teacher friends mentioning their super low pay in the 70's. | <urn:uuid:ba8d9956-a570-4a4d-8d9d-718beea70b66> | 3.5625 | 522 | Personal Blog | Science & Tech. | 59.439086 |
Project 2a: The Unix Shell
There are three objectives to this assignment:
In this assignment, you will implement a command line interpreter or, as it is more commonly known, a shell. The shell should operate in this basic way: when you type in a command (in response to its prompt), the shell creates a child process that executes the command you entered and then prompts for more user input when it has finished.
The shells you implement will be similar to, but simpler than, the one you
run every day in Unix. You can find out which shell you are running by typing
Your basic shell is basically an interactive loop: it repeatedly
prints a prompt, reads a command from the user, runs that command, and then prompts again once the command has finished.
prompt> ./mysh mysh>
You should structure your shell such that it creates a new process for each new command (there are a few exceptions to this, which we will discuss below). There are two advantages of creating a new process. First, it protects the main shell process from any errors that occur in the new command. Second, it allows for concurrency; that is, multiple commands can be started and allowed to execute simultaneously.
Your basic shell should be able to parse a command, and run the
program corresponding to the command. For example, if the user types ls,
the shell should run the ls program as a child process. Note that the shell itself does not "implement" ls or most other commands; it simply finds the corresponding executable and runs it.
The maximum length of a line of input to the shell is 512 bytes (excluding the carriage return).
Whenever your shell accepts a command, it should check whether the command
is a built-in command or not. If it is, it should not be executed like
other programs. Instead, your shell will invoke your implementation of the
built-in command. For example, to implement the exit built-in command, your shell should not create a new process; it should simply terminate itself.
So far, you have added your own
exit, cd, and pwd formats. The formats for exit, cd, and pwd are:
[optionalSpace]exit[optionalSpace]
[optionalSpace]pwd[optionalSpace]
[optionalSpace]cd[optionalSpace]
[optionalSpace]cd[oneOrMoreSpace]dir[optionalSpace]
When you run "cd" (without arguments), your shell should
change the working directory to the path stored in the $HOME environment variable.
You do not have to support tilde (~). Although in a typical Unix shell you could go to a user's directory by typing "cd ~username", in this project you do not have to deal with tilde. You should treat it like a common character, i.e. you should just pass the whole word (e.g. "~username") to chdir(), and chdir will return error.
Basically, when a user types pwd, you simply call getcwd(). When a user changes the current working directory (e.g. "cd somepath"), you simply call chdir(). Hence, if you run your shell, and then run pwd, it should look like this:
% cd
% pwd
/afs/cs.wisc.edu/u/m/j/username
% echo $PWD
/u/m/j/username
% ./mysh
mysh> pwd
/afs/cs.wisc.edu/u/m/j/username
Many times, a shell user prefers to send the output of his/her program to a
file rather than to the screen. The shell provides this nice feature with the > redirection symbol.
For example, if a user types ls > out1, the output of the ls program should be written to the file out1 rather than printed to the screen.
Here are some redirections that should not work:
ls > out1 out2
ls > out1 out2 out3
ls > out1 > out2
Your shell has one fun feature: when you type in the name of a .c file where a command should be, your shell recognizes this and tries to compile it and run it for you. For example, typing:
hello.c
would compile hello.c into an executable named hello, and then run it. If there are other arguments on the command line, they should be passed to the running program as well.
The one and only error message. A section about the Error Message has been added. In summary, you should print this one and only error message whenever you encounter an error of any type:
char error_message = "An error has occurred\n"; write(STDERR_FILENO, error_message, strlen(error_message));
The error message should be printed to stderr (standard error). Also, do not add whitespaces or tabs or extra error messages.
There is a difference between errors that your shell catches and those that
the program catches. Your shell should catch all the syntax errors specified
in this project page. If the syntax of the command looks perfect, you simply
run the specified program. If there are any program-related errors
(e.g., invalid arguments to a program), the program itself will report them; your shell does not need to.
Zero or more spaces can exist between a command and the shell special characters (such as the redirection symbol >):
mysh> ls
mysh> ls > a
mysh> ls>a
So far, you have run the shell in interactive mode. Most of the time, testing your shell in interactive mode is time-consuming. To make testing much faster, your shell should support batch mode .
In interactive mode, you display a prompt and the user of the shell will type in one or more commands at the prompt. In batch mode, your shell is started by specifying a batch file on its command line; the batch file contains the same list of commands as you would have typed in the interactive mode.
In batch mode, you should not display a prompt. You should print each line you read from the batch file back to the user before executing it; this will help you when you debug your shells (and us when we test your programs). To print the command line, do not use printf because printf will buffer the string in the C library and will not work as expected when you perform automated testing. To print the command line, use write(STDOUT_FILENO, ...) this way:
write(STDOUT_FILENO, cmdline, strlen(cmdline));
In both interactive and batch mode, your shell should terminate when it encounters the exit command or reaches the end of its input (end-of-file).
To run the batch mode, your C program must be invoked exactly as follows:
mysh [batchFile]
The command line arguments to your shell are to be interpreted as follows.
batchFile: an optional argument (often indicated by square brackets as above). If present, your shell will read each line of the batchFile for commands to be executed. If not present or readable, you should print the one and only error message (see Error Message section below).
Implementing the batch mode should be very straightforward if your shell code is nicely structured. The batch file basically contains the same exact lines that you would have typed interactively in your shell. For example, if in the interactive mode, you test your program with these inputs:
emperor1% ./mysh
mysh> ls
some output printed here
mysh> ls > /tmp/ls-out
some output printed here
mysh> notACommand
some error printed here
then you could cut your testing time by putting the same input lines to a batch file (for example myBatchFile):
ls
ls > /tmp/ls-out
notACommand
and run your shell in batch mode:
prompt> ./mysh myBatchFile
In this example, the output of the batch mode should look like this:
ls
some output printed here
ls > /tmp/ls-out
some output printed here
notACommand
some error printed here
Important Note: To automate grading, we will heavily use the batch mode . If you do everything correctly except the batch mode, you could be in trouble. Hence, make sure you can read and run the commands in the batch file. Soon, we will provide some batch files for you to test your program.
Defensive Programming and Error Messages
Defensive programming is required. Your program should check all parameters, error-codes, etc. before it trusts them. In general, there should be no circumstances in which your C program will core dump, hang indefinitely, or prematurely terminate. Therefore, your program must respond to all input in a reasonable manner; by "reasonable", we mean print the error message (as specified in the next paragraph) and either continue processing or exit, depending upon the situation.
Since your code will be graded with automated testing, you should print this one and only error message whenever you encounter an error of any type:
char error_message = "An error has occurred\n"; write(STDERR_FILENO, error_message, strlen(error_message));
For this project, the error message should be printed to stderr. Also, do not attempt to add whitespaces or tabs or extra error messages.
You should consider the following situations as errors; in each
case, your shell should print the error message to stderr.
For the following situation, you should print the error message to stderr.
Your shell should also be able to handle the following scenarios, which are not errors. A reasonable way to check if something is not an error is to run the command line in the real Unix shell.
All of these requirements will be tested extensively.
Writing your shell in a simple manner is a matter of finding the relevant library routines and calling them properly. To simplify things for you in this assignment, we will suggest a few library routines you may want to use to make your coding easier. (Do not expect this detailed of advice for future assignments!) You are free to use these routines if you want or to disregard our suggestions. To find information on these library routines, look at the manual pages (using the Unix command man ).
Parsing: For reading lines of input, you may want to look at fgets(). To open a file and get a handle with type FILE *, look into fopen(). Be sure to check the return code of these routines for errors! (If you see an error, the routine perror() is useful for displaying the problem, but do not print the error message from perror() to the screen; you should only print the one and only error message that we have specified above.) You may find the strtok() routine useful for parsing the command line (i.e., for extracting the arguments within a command separated by whitespace or a tab or ...).
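A hedged sketch of that parsing step (the function name parse_line and the MAX_ARGS limit are illustrative, not required by the assignment):

#include <string.h>

#define MAX_ARGS 64

/* Split one command line (as read by fgets) into whitespace-separated
 * tokens; args[] is terminated with a NULL pointer, as execvp expects. */
static int parse_line(char *line, char *args[])
{
    int n = 0;
    char *tok = strtok(line, " \t\n");
    while (tok != NULL && n < MAX_ARGS - 1) {
        args[n++] = tok;
        tok = strtok(NULL, " \t\n");
    }
    args[n] = NULL;
    return n;
}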
Executing Commands: Look into fork , execvp , and wait/waitpid . See the UNIX man pages for these functions, and also read the Advance Programming in the UNIX Environment, Chapter 8 (specifically, 8.1, 8.2, 8.3, 8.6, 8.10). Before starting this project, you should definitely play around with these functions.
You will note that there are a variety of commands in the
int main(int argc, char *argv[]);
Note that this argument is an array of strings, or an array of pointers to characters. For example, if you invoke a program with:
foo 205 535
then argv = "foo", argv = "205" and argv = "535".
Important: the list of arguments must be terminated with a NULL pointer; that is, the entry following the last argument (argv[3] in the example above) must be NULL. We strongly recommend that you carefully check that you are constructing this array correctly!
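Putting these pieces together, here is a rough sketch of running one external command (simplified; the helper name run_command is illustrative and most error handling is omitted):

#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* args[0] is the program name and args[] is NULL-terminated. */
static void run_command(char *args[])
{
    pid_t pid = fork();
    if (pid < 0) {                               /* fork failed */
        char error_message[] = "An error has occurred\n";
        write(STDERR_FILENO, error_message, strlen(error_message));
    } else if (pid == 0) {                       /* child: run the program */
        execvp(args[0], args);                   /* returns only on failure */
        char error_message[] = "An error has occurred\n";
        write(STDERR_FILENO, error_message, strlen(error_message));
        _exit(1);
    } else {                                     /* parent: wait for the child */
        waitpid(pid, NULL, 0);
    }
}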
For managing the current working directory, you should use getenv(), chdir(), and getcwd().
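A sketch of those two built-ins (illustrative only; on failure you would print the one and only error message described above):

#include <limits.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* "cd" with no argument goes to $HOME; "cd dir" goes to dir. */
static int builtin_cd(const char *dir)
{
    if (dir == NULL)
        dir = getenv("HOME");
    return (dir == NULL || chdir(dir) != 0) ? -1 : 0;
}

/* "pwd" prints the current working directory. */
static int builtin_pwd(void)
{
    char cwd[PATH_MAX];
    if (getcwd(cwd, sizeof cwd) == NULL)
        return -1;
    write(STDOUT_FILENO, cwd, strlen(cwd));
    write(STDOUT_FILENO, "\n", 1);
    return 0;
}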
Redirection is relatively easy to implement: just use close() on stdout and then open() on a file. More on this during discussion.
With file descriptors, you can perform reads and writes to a file. Maybe in your life so far, you have only used fopen(), fread(), and fwrite() for reading and writing to a file. Unfortunately, these functions work on FILE*, which is a C library abstraction; the file descriptors are hidden.
To work on a file descriptor, you should use the open(), read(), and write() system calls. These functions perform their work by using file descriptors. To understand more about file I/O and file descriptors, you should read the Advanced UNIX Programming book, Section 3 (specifically, 3.2 to 3.5, 3.7, 3.8, and 3.12). Before reading forward, at this point, you should get yourself familiar with file descriptors.
The idea of redirection is to make the stdout descriptor point to
your output file descriptor. First of all, let's understand the
STDOUT_FILENO file descriptor: when a command writes to it, the output normally appears in the terminal; if you make descriptor 1 refer to a file instead, everything the command writes to stdout goes into that file.
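Following the close()-then-open() idea above, a hedged sketch of setting up the redirection in the child process before execvp (the helper name redirect_stdout is illustrative; dup2() is a common alternative):

#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Send standard output to `file`; open() returns the lowest free
 * descriptor, which is 1 because we just closed it. */
static int redirect_stdout(const char *file)
{
    close(STDOUT_FILENO);
    int fd = open(file, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd != STDOUT_FILENO) {       /* open failed or landed elsewhere */
        char error_message[] = "An error has occurred\n";
        write(STDERR_FILENO, error_message, strlen(error_message));
        return -1;
    }
    return 0;
}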
Remember to get the basic functionality of your shell working before worrying about all of the error conditions and end cases. For example, first get a single command running (probably first a command with no arguments, such as "ls"). Then try adding more arguments.
Next, try working on multiple commands. Make sure that you are correctly handling all of the cases where there is miscellaneous white space around commands or missing commands. Finally, you add built-in commands and redirection suppors.
We strongly recommend that you check the return codes of all system calls from the very beginning of your work. This will often catch errors in how you are invoking these new system calls. And, it's just good programming sense.
Beat up your own code! You are the best (and in this case, the only) tester of this code. Throw lots of junk at it and make sure the shell behaves well. Good code comes through testing -- you must run all sorts of different tests to make sure things work as desired. Don't be gentle -- other users certainly won't be. Break it now so we don't have to break it later.
Keep versions of your code. More advanced programmers will use a source control system such as CVS . Minimally, when you get a piece of functionality working, make a copy of your .c file (perhaps a subdirectory with a version number, such as v1, v2, etc.). By keeping older, working versions around, you can comfortably work on adding new functionality, safe in the knowledge you can always go back to an older, working version if need be.
To ensure that we compile your C correctly for the demo, you will
need to create a simple makefile; this way our scripts can just run make to build your shell.
The name of your final executable should be mysh.
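A minimal makefile along these lines would do (the compiler flags are a suggestion, not a requirement; note that recipe lines must start with a tab):

CC     = gcc
CFLAGS = -Wall -Wextra -g

mysh: mysh.c
	$(CC) $(CFLAGS) -o mysh mysh.c

clean:
	rm -f mysh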
emperor1% ./mysh
emperor1% ./mysh inputTestFile
Copy all of your .c source files into the appropriate subdirectory. Do not submit any .o files. Make sure that your code runs correctly on the linux machines in the 13XX labs.
We will run your program on a suite of test cases, some of which will exercise your program's ability to correctly execute commands and some of which will test your program's ability to catch error conditions. Be sure that you thoroughly exercise your program's capabilities on a wide range of test suites, so that you will not be unpleasantly surprised when we run our tests.
Usually slender, elongate, small species. Sexes similar. Stylet usually small, delicate.
photomicrograph by Howard Ferris and Sam Woo, UC Davis.
Esophagus with slender procorpus, median bulb either well-developed, weakly developed or even not obvious; long, slender isthmus, esophageal glands symmetrically arranged, pyriform, rarely with slight overlap of intestine.
Female: One gonad, prodelphic; rarely, two gonads; columned uterus with four rows. Tail long, conoid to filiform.
Male: Caudal alae adanal, small, occasionally lacking.
photomicrograph by Howard Ferris and Sam Woo, UC Davis
Most of the characteristics of the species of Tylenchidae conform with the definition of primitivity as expressed by Luc et al. (1987): small stylet, delicate labial framework; amphid openings from small to elongate, sinuousoid or straight, extending posteriad longitudinally; median bulb spindle shaped, small, delicate or rounded, muscular; esophageal glands symmetrically arranged, pyriform; tail elongate, long.
The family Tylenchidae is one of the most diverse of the Tylenchina; apparently actively evolving in some characters, such as the amphids moving from large elongate post labial structures, varying to arc-shaped or oval pits limited to the labial plate, or even small elliptical apertures near the oral opening (Miculenchus, Ecphyadophora), with many links and relationships with other groups (i.e. Tylodorus with characteristics of the Dolichodoridae; Filenchus with characteristics of the Anguinidae, etc.).
The family is most closely related to the Anguinidae, which were formerly considered part of the Tylenchidae (Siddiqi, 1971). The families differ in that there are small elliptical amphids in Anguinidae, which also have elongated, axial spermatheca, large sperms with prominent cytoplasm, and females with a long post-uterine branch of the gonad (more than two body diameters at vulva level).
Commonly occurring in most soils. Feed on algae, mosses, lichens and plant roots. As an example: Soil nematodes were studied in three spruce forests in the Czech Republic from 1988 to 1991. A total of 74 species occurred, most belonged to the orders Tylenchida, Rhabditida and Dorylaimida. The most abundant nematodes were the mycophytophagous species of the family Tylenchidae followed by bacteriophages, especially by those in the order Rhabditida.
Probably fairly small. Most reports are about occurrence and abundance rather than documenting any effects on growth.
Ectoparasites of plant roots, root hairs, algae, etc.
Yeates et al. (1993a): placed the Tylenchidae in the following feeding groups:
Aglenchus: plant feeder (epidermal cell and root feeder),
Cephalenchus: plant feeder (ectoparasite),
Coslenchus: (epidermal cell and root feeder),
Filenchus: (epidermal cell and root feeder),
Malenchus: (epidermal cell and root feeder),
Tylenchus: plant feeder (algal, lichen (algal or fungal component), or moss feeders that feed by piercing), or hyphal feeder (?).
Yeates et al. (1993b): classified Tylenchus spp. as "plant associated", indicating that they were found in the rhizospheres of plants.
Okada et al (2002 and 2003) have demonstrated that Filenchus misellus is a fungal-feeding nematode and able to reproduce on a range of fungi.
The following is an initial compilation of feeding habits of Tylenchidae by Erik Schaper while working on an MSc-project under the supervision of Tom Bongers (Laboratory of Nematology, Wageningen University)..
Project Title: "Tylenchidae, plant parasites or fungal feeders?".
Little is known of the life history of most species, but in so far as known, they have no specialized cycles, resting, or resistant stages.
Relatively slight, small stylets penetrating only thin cell walls.
Andrássy, I (1976), Aglenchus costatus, C.I.H. Descriptions of plant-parasitic nematodes, set 6, No. 80, 2 pp.
Baujard, P. (1995), Laboratory methods used for the study of the ecology and pathogenicity of Tylenchida, Longidoridae and Trichodoridae from rainy and semi-arid tropics of West Africa, Fundamental and Applied Nematology, 18, 63-66
Cobb, N.A. (1925), Biological relationships of the mathematical series 1, 2, 4, etc., Chapter 15 in: Contributions to a Science of Nematology,
Ferris, H., Venette, R.C., Lau, S.S. (1996), Dynamics of nematode communities in tomatoes grown in convential and organic farming systems, and their impact on soil fertility, Applied Soil Ecology, 3, 161-175
Geraert & Raski (1987) Rev. Nematol. 10(2):143-161.
Gowen, S.R. (1970), Observations on the fecundity and longevity of Tylenchus emarginatus on sitka spruce seedlings at different temperatures, Nematologica, 13, 267-272
Hanel, Ladislav. 1996. Comparison of soil nematode communities in three spruce forests Boubin Mount, Czech Republic. Biologia (Bratislava) 51.
Hooper, D.J. (1974), Cephalenchus emarginatus, C.I.H. Descriptions of plant-parasitic nematodes, Set 3, No. 35, 2 pp.
Khera, S., Zuckermann, B.M. (1962), Studies on the culturing of certain ectoparasitic nematodes on plant callus tissue, Nematologica, 8, 272-274
Khera, S., Zuckermann, B.M. (1963), In vitro studies of host-parasite relationships of some plant-parasitic nematodes, Nematologica, 9, 1-6
Micoletzky, H. (1925), Die freilebenden Süsswasser- und Moornematoden Dänemarks nebst Anhang über Amöbesporidien und andere Parasiten bei freilebenen Nematoden, D. Kgl. Danske Vidensk. Selsk. Skrifter, Naurvidensk. og Mathem., ser 8, 10, 57-310
Okada, H., Tsukiboshi, T., Kadota , I., 2002. Mycetophagy in Filenchus misellus (Andrássy, 1958) Raski & Geraert, 1987 (Nematoda: Tylenchidae), with notes on its morphology. Nematology 4, 795-801.
Okada, H., Kadota, I., 2003. Host status of 10 fungal isolates for two nematode species, Filenchus misellus and Aphelenchus avenae. Soil Biology and Biochemistry 35, 1601-1607.
Siddiqi, M.R. (1986), Tyenchida, parasites of plants and insects, CAB, Slough, 645 pp.
Sutherland, J.R. (1967), Parasitism of Tylenchus emarginatus on conifer seedling roots and some observations on the biology of the nematode, Nematologica, 13, 191-196
Thorne, G. (1961), Tylenchinae, chapter 5 in: Principles of Nematology, McGraw-Hill Book Company Inc., New York - Toronto - London, 553 pp.
Wood, F.H. (1971), Studies on the biology of soil-dwelling nematodes from tussock grassland, Ph.D. thesis, University of Canterbury, New Zealand, 286 pp.
Wood, F.H. (1973a), Life cycle and host-parasite relationships of Aglenchus costatus (de Man, 1921) Meyl, 1961 (Nematoda, Tylenchidae), New Zealand Journal of Agricultural Research, 16, 373-380
Wood, F.H. (1973b), Nematode feeding relationships, feeding relationships of soil-dwelling nematodes, Soil Biology and Biochemistry, 5, 593-601
Yeates, G.W., Bongers, T., Goede, R.G.M. de, Freckman, D.W., Georgieva, S.S. (1993a), Feeding habits in soil nematode families and genera - an outline for soil ecologists, Journal of Nematology, 25 (3): 315-331
Yeates, G.W., Wardle, D.A., Watson, R.N. (1993b), Relationships between nematodes, soil microbial biomass and weed-management strategies in maize and asparagus cropping systems, Soil Biology and Biochemistry, 25, 869-876 | <urn:uuid:168d9ea2-1762-48cd-92e2-deb6e97452f2> | 3.046875 | 2,122 | Knowledge Article | Science & Tech. | 43.741983 |
Solo commuters in fossil-fuel-burning cars, smog, pollution, crime -- what other urban scourges can you think of? Half of the world's population currently lives in urban areas; yet these urban areas make up only 2 percent of the world's land and consume three-quarters of the world's resources [source: MIT]. That's a lot of people in a very small space consuming a great deal.
Between now and the year 2050, urban growth will only continue to rise: 89 million homes and 190 billion square feet (about 17.5 billion square meters) of retail and other nonresidential space will be built in the United States alone [source: Natural Resources Defense Council]. And in conjunction with that density, pollution is soaring. London, for instance, released about 45 million tons (about 41 million metric tons) of carbon dioxide, a greenhouse gas, into the atmosphere last year [source: IEEE]. By greening cities and neighborhoods around the world, we have the opportunity to make a positive impact on global warming.
How do green cities help in the effort against climate change? Eco-cities all share similar characteristics: They aim to reduce or eliminate fossil-fuel use, adopt sustainable building practices, promote "green space" and clean air quality, implement energy-efficient and widely available public transportation, create walkable city designs and develop well-organized mixed-use neighborhoods that combine living, working and shopping. These qualities add up to sustainable urbanism.
Here we'll look at five green cities of the future -- some that have broken ground for construction, some that are still the ambitious aspirations of city planners -- all competing to be the first carbon-neutral city in the world. | <urn:uuid:14a74a92-585c-48a1-a795-c7c57f187669> | 3.453125 | 348 | Listicle | Science & Tech. | 37.952493 |
Plants epiphytic, terrestrial, or on rock. Stems long-creeping, often threadlike and intertwining, or short-erect, protostelic, bearing brown hairs of 1--2 types. Roots sparse or absent. Leaves small, 0.5--20 × 0.2--5 cm, often forming dense mats. Petiole short, threadlike to wiry, often winged part or entire length. Blade ovate or oblong to lanceolate, simple to decompound, usually 1 cell thick between veins (except Trichomanes membranaceum Linnaeus), entire or dentate; scales or simple and/or stellate hairs often borne on veins or leaf margins. Veins free and divergent, occasionally present as unattached 'false' veins. Sori marginal on vein ends, enclosed by 2-valved or conic involucres. Sporangia borne on moundlike receptacle or on elongate 'bristle,' sessile or short-stalked; annulus oblique. Spores green, globose, trilete. Gametophytes filamentous or ribbonlike or a combination of both, much branched, 0.2--1 cm, often bearing gemmae, persistent, clone-forming by vegetative reproduction. Species outside the flora display a wide range of morphologies and habits, and many are somewhat larger than North American species. Some authors divide the Hymenophyllaceae into 30 or more genera. The subdivisions of these genera are treated here as subgenera and sections, following C. V. Morton (1968).
Although plants of the Hymenophyllaceae clearly have the capacity to withstand periodic desiccation and freezing, they have a delicate nature that requires they grow in deeply sheltered habitats of nearly continuous high moisture and humidity. This undoubtedly accounts for the relative rarity of all species in the flora. They are possibly now-restricted remnants of more widespread pre-Pleistocene distributions. All owe their continuing existence largely or entirely to vegetative propagation by either the sporophyte or gametophyte generation. The capacity for vegetative reproduction and dispersal by gametophytes of the Hymenophyllaceae allows gametophyte colonies to persist indefinitely without completing a life cycle. In the flora, several species are maintained exclusively as gametophytes with sporophytes rarely or never produced. | <urn:uuid:fd64e098-4a41-43e5-9829-5b9a8e004ce0> | 3.484375 | 499 | Knowledge Article | Science & Tech. | 22.629784 |
Water weighs 62.4 pounds per cubic foot. When a vehicle stalls in water, the water's momentum is transferred to the car. For each foot the water rises, 500 pounds of lateral force is applied to the car. For each foot the water rises up the side of the car, the car displaces 1,500 pounds of water. In effect, the car weighs 1,500 pounds less for each foot the water rises. Most vehicles will float in just 2 feet of water.
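These figures are easy to sanity-check with Archimedes' principle. The short sketch below reproduces the roughly 1,500 pounds of buoyancy per foot quoted above; the car's submerged footprint is an assumed value, since the clipping does not give one.

```python
# Sanity check of the flooded-car figures above (Archimedes' principle).
# The footprint is a hypothetical value; the clipping does not state one.
WATER_WEIGHT_LB_PER_FT3 = 62.4

footprint_ft2 = 24.0  # assumed submerged footprint, e.g. ~8 ft x 3 ft

for depth_ft in (1.0, 2.0):
    # Buoyant force = weight of the displaced water.
    buoyancy_lb = WATER_WEIGHT_LB_PER_FT3 * footprint_ft2 * depth_ft
    print(f"{depth_ft:.0f} ft of water -> ~{buoyancy_lb:,.0f} lb of buoyancy")

# ~1,497 lb per foot of depth, matching the ~1,500 lb figure in the text
# (1500 / 62.4 is roughly 24 square feet).
```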
Source: Tulsa World, p.A11, 10/6/2009 | <urn:uuid:a1f3b6d3-f4fd-416d-90bd-e7c0f27f8dd9> | 2.921875 | 114 | Knowledge Article | Science & Tech. | 94.929891 |
Where's the other star?
At the center of this supernova remnant should be the companion star to the star that blew up. Identifying this star is important for understanding just how Type Ia supernovae detonate, which in turn could lead to a better understanding of why the brightness of such explosions is so predictable, which in turn is key to calibrating the entire nature of our universe.
The trouble is that even a careful inspection of the center of SNR 0509-67.5 has not found any star at all. This indicates that the companion is intrinsically very faint -- much fainter than many types of bright giant stars that had been previous candidates. In fact, the implication is that the companion star might have to be a faint white dwarf -- similar to, but less massive than, the star that detonated.
SNR 0509-67.5 is shown above in both visible light, shining in red as imaged by the Hubble Space Telescope, and X-ray light, shown in false-color green as imaged by the Chandra X-ray Observatory.
Image credit: X-ray: NASA/CXC/SAO/J. Hughes et al.; Optical: NASA/ESA/Hubble Heritage Team | <urn:uuid:4350bd80-0e27-4cdc-a687-268f4f29b8cf> | 3.296875 | 282 | Knowledge Article | Science & Tech. | 59.221198 |
Radioactivity: The Pros and Cons

Since its discovery by Henri Becquerel in 1896, much has been learned about radioactive elements and their properties. This knowledge has led to many beneficial applications of the numerous radioisotopes.

Did you know that some of the foods we eat have been treated by exposure to radiation?
Have you ever wondered how we know the age of dinosaur bones?
Have you ever known anyone who was treated for cancer with radiation therapy?
Have you ever wondered how a nuclear submarine is powered?
Have you ever had an x-ray to look for a broken bone or a cavity?

All of these beneficial applications are due to scientific research, discovery and development in nuclear chemistry.

Although nuclear chemistry has provided numerous beneficial applications to our society, there is also a dark side to nuclear chemistry that we must be aware of. The legacy that nuclear disasters such as Three Mile Island and Chernobyl have left us has brought some societies to question the continued use of nuclear energy. Why did these disasters happen? How do we protect ourselves from these types of disasters? How do we dispose of nuclear waste? Why don't we just use coal or petroleum to furnish our energy needs?

The use of the atomic bomb to end World War II has been studied and re-evaluated for the 50 years since its use in Nagasaki and Hiroshima. Should the US have used the A-bombs? Were there any other choices?

No doubt you have heard other stories of people being exposed to nuclear radiation and developing cancer -- sometimes in the name of scientific research and medical development. Were these studies ethical? Were they worth it?

These questions are all important and of no small significance. We must use the past to teach us valuable lessons about the continued use of nuclear chemistry. Explore on in this module to gain a better understanding of nuclear chemistry. Then come back to this page and think about the questions raised.
| <urn:uuid:690e1218-8da7-47d3-8575-c4113af8db30> | 3.578125 | 428 | Knowledge Article | Science & Tech. | 44.411125 |
Name: Briana R.
Our physics class is researching GMRs and magnetic fields, and we were wondering if you could tell us how to make one. How do they work? Any information you could give us would be amazing.
The GMR effect depends on sandwiching a thin non-ferromagnetic layer between two magnetic materials. Because your class has been researching them, you have discovered that "thin" means only a few atoms thick. It is not easy to make a thin layer a few atoms thick because it requires laboratory vacuum equipment.

There is one company that makes GMR chips that sense magnetic fields (http://www.nve.com/), but just having a manufactured chip would not be interesting because the company has done all the engineering for you.

The problem is that much of the "easy to make" physics was done fifty or 100 years ago, and some more recent discoveries, like GMR, transistors, the Hall effect, and nanotubes, require fancy lab equipment to make or examine. However, it is still possible to investigate physics phenomena using household items like laser pointers, meters, diffraction gratings, lenses, and so on.
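Although building a GMR sandwich is out of reach without deposition equipment, the figure of merit behind the effect is easy to compute. A minimal sketch; the resistance values are invented illustrations, not measurements from a real device:

```python
# Illustrative only -- the resistance values are invented, not measurements.
def gmr_ratio(r_parallel: float, r_antiparallel: float) -> float:
    """Fractional resistance change between antiparallel and parallel
    magnetization of the two ferromagnetic layers."""
    return (r_antiparallel - r_parallel) / r_parallel

r_p = 100.0   # ohms with the layer magnetizations aligned (assumed)
r_ap = 115.0  # ohms with the layer magnetizations opposed (assumed)
print(f"GMR ratio: {gmr_ratio(r_p, r_ap):.1%}")  # -> GMR ratio: 15.0%
```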
Update: June 2012 | <urn:uuid:23a159e9-05fa-4c18-a4b9-6250432ab8c8> | 3.703125 | 263 | Q&A Forum | Science & Tech. | 46.742403 |
A. iberus is on the IUCN Red List of endangered species, assessed as DD (Data Deficient; IUCN 2004), and is listed as EN B1+2bcd (Endangered) in the Red List of freshwater fish from Spain (Doadrio 2002).
A. iberus is included in Annexes II and III of the Bern Convention (1979) and in Annex II of the European Council Directive (1992/43/EEC).
At national level, it is included in the Spanish National Catalogue of Threatened Species. In Spain, when a species is considered an endangered species, regional autonomous governments are obliged to develop a Recovery Plan. In this way, since 1994 conservation plans have been developed in their first phases by different autonomous environmental agencies.
Recently, a LIFE-nature project (LIFE04 NAT/ES/000035) has begun exclusive efforts to increase the survival of two defined genetic units of A.iberus in the Murcian region.
A. iberus is threatened due to its limited and isolated distribution. During the last three decades there has been a progressive loss and alteration of its habitat, especially as a result of intensive agriculture and tourism development.
Current threats to its habitats and its populations include:
Inland populations are restricted to small creeks and are threatened by the depletion of water levels in local aquifers; its survival depends on strict control over the use of groundwater resources. The loss of traditional salt exploitation mines, as an important coastal habitat for the species, is another threat. | <urn:uuid:45719958-1870-4ca3-a88a-5261450636a4> | 3.3125 | 308 | Knowledge Article | Science & Tech. | 39.782933 |
Browsing through the blogosphere recently, I came across an interesting little story about the scientific method, scientific progress, and un-scientific spin (h/t Hank Roberts). The subject concerns the polar ozone hole in Antarctica and a possible role for cosmic rays in its variability on solar cycle timescales. The proponents of this link are a small research group at the University of Sherbrooke in Canada, who find themselves up against the mainstream stratospheric chemistry community and whose ideas are twisted out of all recognition by the more foolish of the usual suspects.
The story hit the ‘tubes earlier this year when researcher Q.B. Lu predicted that this year’s Antarctic ozone hole would be the biggest ever due to the action of increased galactic cosmic rays (GCR) (because we are at solar minimum and GCR are inversely correlated to solar activity). This year’s peak ozone hole has now come and gone, and the prediction can therefore be evaluated. Unfortunately for Dr. Lu, this year’s hole was merely about average for the decade – a result that wasn’t too supportive of his theory.
This story made me a little curious about this though. Firstly, I didn’t initially understand why cosmic rays should be playing a role in ozone depletion – most of the cosmic ray effects that are usually discussed revolve around cloud-aerosol connections, but there are not many clouds in the stratosphere where the ozone holes form, and the ones there are (Polar Stratospheric Clouds – PSCs) are much more sensitive to temperature and water vapour than they are likely to be to background aerosols. On further investigation, it turns out that this idea has been out there for a few years (and was reported on then) and has subsequently been discussed in the ozone literature.
So let’s start with the background theory. Standard (Nobel-prize winning) stratospheric chemistry has tied ozone depletion to the increasing chlorine (Cl) load in the stratosphere which catalytically destroys ozone and comes from the photolytic dissociation of human-sourced chloro-fluoro-carbons (CFCs) high in the stratosphere. In the polar night, the presence of PSCs allows for a specific class of heterogeneous Cl reactions to occur on the surface of the cloud particles which turn out to be very efficient at destroying ozone. Hence the presence of an ozone hole in the very cold Antarctic polar vortex. Since PSCs are very sensitive to temperature, cold winter vortex conditions often presage a large ozone depletion the following spring (note that polar ozone depletion only occurs in sunlight and so is a springtime phenomenon in both hemispheres). This is pretty much undisputed at this point (well, at least by serious scientists). We here at RealClimate even used this relationship to predict (successfully) a particularly large Arctic ozone depletion event in 2005.
Dr. Lu’s theory though posits an additional mechanism to release the Cl from the CFCs – and that is through GCR effects. Specifically, Lu suggests that the action of the GCR on CFCs attached to PSCs causes more Cl to be released, thus potentially delivering more Cl exactly where it could enhance polar depletion most effectively. The evidence for this comes from correlations of ozone loss with GCR (over a couple of solar cycles) and some suggestive lab experiments. Note that this does not call into question the anthropogenic source of the Cl which is still from CFCs.
However, Lu and colleagues’ theory has been strongly challenged in the literature. For instance, here, here and here. The comments focus on two main aspects: the weakness of the correlations (see figure), and the lack of any obvious evidence for CFC destruction in the polar vortex itself. In fact, correlations of CFCs with air mass tracers from the upper stratosphere are very stable, indicating that the photolytic conversion of the CFCs is by far the dominant source of Cl. These rebuttals seem quite compelling, and there doesn’t seem to be much continued support for Dr. Lu’s GCR idea. However, Lu is still pushing it (hence the press release this year just weeks before the prediction would be put to the test). One might think Dr. Lu’s ideas wrong, but one can’t fault his bravery in putting them to the test.
As we stated above, the un-exceptional ozone loss this year pretty much undermines the correlations that were at the heart of Lu’s idea. Thus I predict that this is unlikely to be discussed very much more in the literature except as an example of how interesting ideas are generated, discussed, tested and (in this case) found wanting. This indeed is how scientific progress is made.
But, as has often been noted, the contrarian-sphere is a world on its own. It was inevitable that the headline link between GCR and ozone holes would entice the old-school ozone depletion skeptics and ‘everything-is-solar” proponents out of their burrows. Tim Ball led the charge. Now Dr. Ball is a long time skeptic on the human influence on ozone depletion as well as climate change, and so he couldn’t resist the occasion to opine on all theories anthropogenic:
Nurtured by environmental hysteria and the determination to show all changes in the natural world are due to human activity, the claim CFCs were destroying ozone jumped directly from an unproven hypothesis to a scientific fact.
He also includes standard statements implying that scientists implicated CFCs in ozone depletion to deprive the developing world of refrigeration (oh my!), how there hasn’t been a change in ozone depletion in any case (despite showing a series of figures obviously demonstrating this – see here as well) and so on…. He did however note that Dr. Lu’s theories don’t actually change any of the mainstream prescriptions for dealing with ozone depletion (though he does get confused about the CO2 impact on stratospheric temperatures – it makes them colder, not warmer). But the real prize goes to Dennis “unstoppable” Avery who suggests that Dr. Lu’s theories will confirm a link of GCR to climate change:
If the South Pole gets an ozone-hole maximum in the coming weeks, it will strengthen the case for cosmic rays, and endorse a Modern Warming driven by solar variations rather than human-emitted CO2.
This is the same logic as assuming that because salt makes food taste better, throwing it behind your shoulder must bring luck. That is, they are just not connected. And I’m pretty sure he won’t accept the logical corollary. Needless to say this is a very feeble grasping at straws. But to quote a recent Monbiot article:
There is no pool so shallow that a thousand bloggers won’t drown in it.
Nor an ozone hole it seems either. | <urn:uuid:1dfaab98-d623-4e79-b2fd-f25716bcbc17> | 2.96875 | 1,447 | Personal Blog | Science & Tech. | 43.147989 |
Introduction to CGI
When a server receives a client's request to run a CGI, a program -- usually written in a C-like or Perl language -- is executed. A CGI program receives parameters, performs one or more computations, gets a result, sends it back to the client, and exits. This means that at each client's request the same CGI program has to be loaded, executed and terminated. This procedure is actually the most popular in the client-server environment, but it often overloads servers.
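As a concrete illustration of this load-execute-exit cycle, here is a minimal CGI program, written in Python rather than C or Perl for brevity; the parameter name `name` is just an example, not part of any standard.

```python
#!/usr/bin/env python3
# Minimal CGI program: loaded, executed once per request, then it exits.
import os
from urllib.parse import parse_qs

# 1. Receive parameters (here, from the query string of the request URL).
params = parse_qs(os.environ.get("QUERY_STRING", ""))
name = params.get("name", ["world"])[0]

# 2. Compute a result and send it back to the client via stdout.
print("Content-Type: text/html")
print()  # a blank line ends the HTTP headers
print(f"<html><body><h1>Hello, {name}!</h1></body></html>")

# 3. The script ends here; the next request repeats the whole cycle.
```

Each request pays the full load-and-start cost again, which is exactly the server-overload problem noted above.
| <urn:uuid:0f9c3fe6-bcef-4947-ae3b-a06c6d5acf8e> | 3.109375 | 104 | Truncated | Software Dev. | 36.280962 |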
Discover the cosmos! Each day a different image or photograph of our fascinating universe is featured, along with a brief explanation written by a professional astronomer.
2006 July 31
Explanation: Have methane lakes been discovered on Saturn's Titan? That exciting possibility was uncovered from analyses of radar images returned last week by the robotic Cassini spacecraft now orbiting Saturn. The above image is a radar reflection from terrain near Titan's North Pole and spans a region about 200 kilometers across. Evidence that the dark areas might be pools of liquid hydrocarbons includes an extreme smoothness implied by the lack of a return radar signal, and apparently connected tributaries. If true, Titan would be only the second body in our Solar System, after Earth, found to possess liquids on the surface. Future observations from Cassini during Titan flybys might test the methane lake hypothesis, as comparative wind effects on the regions are studied.
| <urn:uuid:d11525d0-b635-4402-81f7-3148ba014cfd> | 3.46875 | 230 | Knowledge Article | Science & Tech. | 32.28 |
The water cycle!!!!
First in the water cycle it goes from evaporation to water vapor to some form of precipitation to transpiration to groundwater. Evaporation is warmth from the sun and causes water from lakes, streams and soils to turn into water vapor in the air. Water vapor is water in a gas form that is held in the air until it changes back to water. Precipitation is made up of any type of water that falls to the earth, like snow, hail, mist or rain. Transpiration happens when plants give off water vapor. Condensation is the opposite of evaporation. It takes place when water vapor in the air condenses from a gas. Rain happens when the temperature is warm, like during the spring or summer, and clouds get so full of water that rain starts to fall. Groundwater is simply water under the ground where the soil is completely filled or saturated with water! One fact about the water cycle is: water is the only thing that can be a gas, liquid, and a solid. Also rain doesn't just occur in the spring and the summer, it can occur really anytime. Evaporation I think can happen when it's warm or cold even. That is what I have about the water cycle.
Article posted October 21, 2011 at 03:21 PM
i agree with the whole thing :)
Comment Posted on December 7, 2011 at 02:53 PM by
wow that was a great way to explain the water cycle cant wait to see your other blog cool blog
Comment Posted on November 10, 2011 at 05:07 PM by
About the Blogger
I have always been interested in animation and movies since I was a young lad. Now I make Lego animations and when I grow up I want to be an awesome filmmaker. | <urn:uuid:a02bad61-7559-4806-b6a5-968b32ee2444> | 2.84375 | 389 | Personal Blog | Science & Tech. | 66.473935 |
Mars's southern polar ice cap, seen here in true color, has shrunk in recent years due to planetary warming—similar to what's happening on Earth
Mars Melt Hints at Solar, Not Human, Cause for Warming, Scientist Says
Mars, too, appears to be enjoying more mild and balmy temperatures. In 2005 data from NASA's Mars Global Surveyor and Odyssey missions revealed that the carbon dioxide "ice caps" near Mars's south pole had been diminishing for three summers in a row. Habibullo Abdussamatov, head of St. Petersburg's Pulkovo Astronomical Observatory in Russia, says the Mars data is evidence that the current global warming on Earth is being caused by changes in the sun. "The long-term increase in solar irradiance is heating both Earth and Mars," he said. Abdussamatov believes that changes in the sun's heat output can account for almost all the climate changes we see on both planets. Mars and Earth, for instance, have experienced periodic ice ages throughout their histories.
Global Warming Fast Facts
There is little doubt that the planet is warming. Over the last century the average temperature has climbed about 1 degree Fahrenheit (0.6 of a degree Celsius) around the world. The spring ice thaw in the Northern Hemisphere occurs 9 days earlier than it did 150 years ago, and the fall freeze now typically starts 10 days later. | <urn:uuid:09bc5c4e-e367-49d9-baaf-795108019c22> | 3.78125 | 289 | Personal Blog | Science & Tech. | 52.943377 |
In powerful electromagnets, the magnetic field exerts a force on each turn of the windings, due to the Lorentz force acting on the moving charges within the wire. The Lorentz force is perpendicular to both the axis of the wire and the magnetic field. It can be visualized as a pressure between the magnetic field lines, pushing them apart. It has two effects on an electromagnet's windings: the field lines within the axis of the coil exert a radial force on each turn of the windings, tending to push them outward in all directions and causing a tensile stress in the wire; and the leakage field lines between each turn of the coil exert a repulsive force between adjacent turns, tending to push them apart.
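That "pressure between the field lines" can be made quantitative: it equals B²/(2μ₀). A short sketch, with field strengths chosen purely for illustration:

```python
import math

MU_0 = 4 * math.pi * 1e-7  # vacuum permeability, T·m/A

def magnetic_pressure_pa(b_tesla: float) -> float:
    """Outward 'pressure between the field lines': P = B^2 / (2 * mu_0)."""
    return b_tesla ** 2 / (2 * MU_0)

for b in (1.0, 10.0, 20.0):  # example field strengths in tesla
    print(f"B = {b:4.1f} T -> P = {magnetic_pressure_pa(b) / 1e6:6.1f} MPa")

# At 20 T the windings feel ~160 MPa of magnetic pressure, which is why
# high-field magnets need mechanical reinforcement.
```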
Ampère's Law: Consider a wire carrying a current. The current produces a magnetic field. Now consider a closed path. Ampère's law states that the line integral of B·dl around the closed path is equal to mu_0*I, where I is the current enclosed within the closed path.

Ampère's law is useful in calculating the magnetic field produced by highly symmetric current distributions. In the lecture, examples are given that show how to apply Ampère's law to calculate the magnetic field produced by a current-carrying long, straight, cylindrical wire, a toroid, and a solenoid.
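For reference, the standard results of those three Ampère's-law calculations can be evaluated numerically. The sketch below uses the textbook formulas; the example currents and distances are arbitrary:

```python
import math

MU_0 = 4 * math.pi * 1e-7  # vacuum permeability, T·m/A

def b_long_wire(i_amp: float, r_m: float) -> float:
    """Outside a long straight wire: B = mu_0 * I / (2 * pi * r)."""
    return MU_0 * i_amp / (2 * math.pi * r_m)

def b_solenoid(i_amp: float, turns_per_m: float) -> float:
    """Inside a long solenoid: B = mu_0 * n * I."""
    return MU_0 * turns_per_m * i_amp

def b_toroid(i_amp: float, n_turns: int, r_m: float) -> float:
    """Inside a toroid at radius r: B = mu_0 * N * I / (2 * pi * r)."""
    return MU_0 * n_turns * i_amp / (2 * math.pi * r_m)

print(b_long_wire(10.0, 0.05))    # 10 A wire, 5 cm away:   4.0e-5 T
print(b_solenoid(2.0, 1000.0))    # 2 A, 1000 turns/m:      2.5e-3 T
print(b_toroid(2.0, 500, 0.10))   # 2 A, 500 turns, 10 cm:  2.0e-3 T
```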
Magnetic Field Produced By Current, Part 2
Lecture Slides are screen-captured images of important points in the lecture. Students can download and print out these lecture slide images to do practice problems as well as take notes while watching the lecture. | <urn:uuid:e1244394-882c-497f-8c97-03e5b55c1270> | 4.25 | 340 | Tutorial | Science & Tech. | 48.629537 |
Raman scattering or the Raman effect is the inelastic scattering of a photon. It was discovered by C. V. Raman and K. S. Krishnan in liquids, and by G. Landsberg and L. I. Mandelstam in crystals. The effect had been predicted theoretically by A. Smekal in 1923.
When photons are scattered from an atom or molecule, most photons are elastically scattered (Rayleigh scattering), such that the scattered photons have the same energy (frequency and wavelength) as the incident photons. However, a small fraction of the scattered photons (approximately 1 in 10 million) are scattered by an excitation, with the scattered photons having a frequency different from, and usually lower than, that of the incident photons. In a gas, Raman scattering can occur with a change in energy of a molecule due to a transition (see energy level). Chemists are concerned primarily with such transitional Raman effects.
The inelastic scattering of light was predicted by Adolf Smekal in 1923 (and in German-language literature it may be referred to as the Smekal-Raman effect). In 1922, Indian physicist C. V. Raman published his work on the "Molecular Diffraction of Light," the first of a series of investigations with his collaborators that ultimately led to his discovery (on 28 February 1928) of the radiation effect that bears his name. The Raman effect was first reported by C. V. Raman and K. S. Krishnan, and independently by Grigory Landsberg and Leonid Mandelstam, on 21 February 1928 (that is why in the former Soviet Union the priority of Raman was always disputed; thus in Russian scientific literature this effect is usually referred to as "combination scattering" or "combinatory scattering"). Raman received the Nobel Prize in 1930 for his work on the scattering of light.
Degrees of Freedom
For any given chemical compound, there are a total of 3N degrees of freedom, where N is the number of atoms in the compound. This number arises from the ability of each atom in a molecule to move in three different directions (x, y, and z). When dealing with molecules, it is more common to consider the movement of the molecule as a whole. Consequently, the 3N degrees of freedom are partitioned into molecular translational, rotational, and vibrational motion. Three of the degrees of freedom correspond to translational motion of the molecule as a whole (along each of the three spatial dimensions). Similarly, three degrees of freedom correspond to rotations of the molecule about the x-, y-, and z-axes. However, linear molecules only have two rotations because rotations along the bond axis do not change the positions of the atoms in the molecule. The remaining degrees of freedom correspond to molecular vibrational modes. These modes include stretching and bending motions of the chemical bonds of the molecule. For a linear molecule, the number of vibrational modes is 3N − 5, whereas for a non-linear molecule the number of vibrational modes is 3N − 6.
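These two counting rules are simple enough to encode directly; a minimal sketch:

```python
def vibrational_modes(n_atoms: int, linear: bool) -> int:
    """3N degrees of freedom minus 3 translations and 3 (or 2) rotations."""
    return 3 * n_atoms - (5 if linear else 6)

print(vibrational_modes(3, linear=True))   # CO2 (linear): 4 modes
print(vibrational_modes(3, linear=False))  # H2O (bent):   3 modes
```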
Molecular Vibrations and Infrared Radiation
The frequencies of molecular vibrations range from less than 10¹² to approximately 10¹⁴ Hz. These frequencies correspond to radiation in the infrared (IR) region of the electromagnetic spectrum. At any given instant, each molecule in a sample has a certain amount of vibrational energy. However, the amount of vibrational energy that a molecule has continually changes due to collisions and other interactions with other molecules in the sample.
At room temperature, most of the molecules will be in the lowest energy state, which is known as the ground state. A few molecules will be in higher energy states, which are known as excited states. The fraction of molecules occupying a given vibrational mode at a given temperature can be calculated using the Boltzmann distribution. Performing such a calculation shows that, for relatively low temperatures (such as those used for most routine spectroscopy), most of the molecules occupy the ground vibrational state. Such a molecule can be excited to a higher vibrational mode through the direct absorption of a photon of the appropriate energy. This is the mechanism by which IR spectroscopy operates: infrared radiation is passed through the sample, and the intensity of the transmitted light is compared with that of the incident light. A reduction in intensity at a given wavelength of light indicates the absorption of energy by a vibrational transition. The energy, E, of a photon is E = hν, where h is Planck's constant and ν is the frequency of the radiation.
It is also possible to observe molecular vibrations by an inelastic scattering process. In inelastic scattering, an absorbed photon is re-emitted with lower energy. In Raman scattering, the difference in energy between the absorbed and re-emitted photons corresponds to the energy required to excite a molecule to a higher vibrational mode.
Typically, in Raman spectroscopy high intensity laser radiation with wavelengths in either the visible or near-infrared regions of the spectrum is passed through a sample. Photons from the laser beam are absorbed by the molecules, exciting them to a virtual energy state. If the molecules relax back to the vibrational state that they started in, the reemitted photon has the same energy as the original photon. This leads to scattering of the laser light, but with no change in energy between the incoming photons and the reemitted/scattered photons. This type of scattering is known as Rayleigh scattering.
However, it is possible for the molecules to relax back to a vibrational state that is higher in energy than the state they started in. In this case, the original photon and the reemitted photon differ in energy by the amount required to vibrationally excite the molecule. In perturbation theory, the Raman effect corresponds to the absorption and subsequent emission of a photon via an intermediate quantum state of a material. The intermediate state can be either a "real", i.e., stationary state or a virtual state.
Stokes and anti-Stokes scattering
The Raman interaction leads to two possible outcomes:
- the material absorbs energy and the emitted photon has a lower energy than the absorbed photon. This outcome is labeled Stokes Raman scattering.
- the material loses energy and the emitted photon has a higher energy than the absorbed photon. This outcome is labeled anti-Stokes Raman scattering.
The energy difference between the absorbed and emitted photon corresponds to the energy difference between two resonant states of the material and is independent of the absolute energy of the photon.
The spectrum of the scattered photons is termed the Raman spectrum. It shows the intensity of the scattered light as a function of its frequency difference Δν to the incident photons. The locations of corresponding Stokes and anti-Stokes peaks form a symmetric pattern around Δν=0. The frequency shifts are symmetric because they correspond to the energy difference between the same upper and lower resonant states. The intensities of the pairs of features will typically differ, though. It depends on the population of the initial state of the material, which in turn depends on the temperature. In thermodynamic equilibrium, the upper state will be less populated than the lower state. Therefore, the rate of transitions from the lower to the upper state (Stokes transitions) will be higher than in the opposite direction (anti-Stokes transitions). Correspondingly, Stokes scattering peaks are stronger than anti-Stokes scattering peaks. Their ratio depends on the temperature (which can practically be exploited for the measurement of temperature).
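To leading order, that temperature dependence is just the Boltzmann population factor of the excited vibrational state. A short sketch of the calculation (the 1000 cm⁻¹ shift is an arbitrary example, and the frequency-dependent prefactor of a full treatment is omitted):

```python
import math

H = 6.62607015e-34   # Planck constant, J·s
C = 2.99792458e10    # speed of light in cm/s, so shifts in cm^-1 work directly
K_B = 1.380649e-23   # Boltzmann constant, J/K

def anti_stokes_to_stokes(shift_cm1: float, temp_k: float) -> float:
    """Leading-order anti-Stokes/Stokes intensity ratio from the Boltzmann
    population of the excited vibrational state."""
    return math.exp(-H * C * shift_cm1 / (K_B * temp_k))

# A 1000 cm^-1 Raman mode at room temperature:
print(f"{anti_stokes_to_stokes(1000.0, 300.0):.4f}")  # ~0.0082
```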
Distinction from fluorescence
The Raman effect differs from the process of fluorescence. For the latter, the incident light is completely absorbed and the system is transferred to an excited state from which it can go to various lower states only after a certain resonance lifetime. The result of both processes is in essence the same: A photon with the frequency different from that of the incident photon is produced and the molecule is brought to a higher or lower energy level. But the major difference is that the Raman effect can take place for any frequency of the incident light. In contrast to the fluorescence effect, the Raman effect is therefore not a resonant effect. In practice, this means that a fluorescence peak is anchored at a specific frequency, whereas a Raman peak maintains a constant separation from the excitation frequency.
The distortion of a molecule in an electric field, and therefore the vibrational Raman cross section, is determined by its polarizability. A Raman transition from one state to another, and therefore a Raman shift, can be activated optically only in the presence of a non-zero polarizability derivative with respect to the normal coordinate (that is, the vibration or rotation): ∂α/∂Q ≠ 0. Raman-active vibrations/rotations can be identified by using almost any textbook that treats quantum mechanics or group theory for chemistry. Then, Raman-active modes can be found for molecules or crystals that show symmetry by using the appropriate character table for that symmetry group.
Stimulated Raman scattering and Raman amplification
The Raman-scattering process as described above takes place spontaneously, i.e., in random time intervals, one of the many incoming photons is scattered by the material. This process is thus called spontaneous Raman scattering.
On the other hand, stimulated Raman scattering can take place when some Stokes photons have previously been generated by spontaneous Raman scattering (and somehow forced to remain in the material), or when deliberately injecting Stokes photons ("signal light") together with the original light ("pump light"). In that case, the total Raman-scattering rate is increased beyond that of spontaneous Raman scattering: pump photons are converted more rapidly into additional Stokes photons. The more Stokes photons are already present, the faster more of them are added. Effectively, this amplifies the Stokes light in the presence of the pump light, which is exploited in Raman amplifiers and Raman lasers.
Stimulated Raman scattering is a nonlinear-optical effect. It can be described, e.g., using a third-order nonlinear susceptibility χ⁽³⁾.
Raman spectroscopy employs the Raman effect for materials analysis. The spectrum of the Raman-scattered light depends on the molecular constituents present and their state, allowing the spectrum to be used for material identification and analysis. Raman spectroscopy is used to analyze a wide range of materials, including gases, liquids, and solids. Highly complex materials such as biological organisms and human tissue can also be analyzed by Raman spectroscopy.
For solid materials, Raman scattering is used as a tool to detect high-frequency phonon and magnon excitations.
Raman lidar is used in atmospheric physics to measure the atmospheric extinction coefficient and the water vapour vertical distribution.
Stimulated Raman transitions are also widely used for manipulating a trapped ion's energy levels, and thus basis qubit states.
For high-intensity CW (continuous wave) lasers, SRS can be used to produce broad bandwidth spectra. This process can also be seen as a special case of four-wave mixing, wherein the frequencies of the two incident photons are equal and the emitted spectra are found in two bands separated from the incident light by the photon energies. The initial Raman spectrum is built up with spontaneous emission and is amplified later on. At high pumping levels in long fibers, higher-order Raman spectra can be generated by using the Raman spectrum as a new starting point, thereby building a chain of new spectra with decreasing amplitude. The disadvantage of intrinsic noise due to the initial spontaneous process can be overcome by seeding a spectrum at the beginning, or even using a feedback loop as in a resonator to stabilize the process. Since this technology easily fits into the fast evolving fiber laser field and there is demand for transversal coherent high-intensity light sources (i.e., broadband telecommunication, imaging applications), Raman amplification and spectrum generation might be widely used in the near-future.
- Raman, C. V. (1928). "A new radiation". Indian J. Phys. 2: 387–398. Retrieved 14 April 2013.
- Landsberg, G.; Mandelstam, L. (1928). "Eine neue Erscheinung bei der Lichtzerstreuung in Krystallen". Naturwissenschaften 16 (28): 557. Bibcode:1928NW.....16..557.. doi:10.1007/BF01506807.
- Smekal, A. (1923). "Zur Quantentheorie der Dispersion". Naturwissenschaften 11 (43): 873–875. Bibcode:1923NW.....11..873S. doi:10.1007/BF01576902.
- Harris and Bertolucci (1989). Symmetry and Spectroscopy. Dover Publications. ISBN 0-486-66144-X.
- Nature (1931-12-19). "A review of the 1931 book Der Smekal-Raman-Effekt". Nature.com. Retrieved 2011-09-17.
- Singh, R. (2002). "C. V. Raman and the Discovery of the Raman Effect". Physics in Perspective (PIP) 4 (4): 399–420. Bibcode:2002PhP.....4..399S. doi:10.1007/s000160200002.
- "C.V. Raman: The Raman Effect". American Chemical Society. Retrieved June 6, 2012.
- "Painless laser device could spot early signs of disease". BBC News. 27 September 2010. | <urn:uuid:1f486089-e9c1-4230-9478-d4a8599ae967> | 3.4375 | 2,824 | Knowledge Article | Science & Tech. | 40.827524 |
Brief Summary
The Great Frigatebird (Fregata minor) is a large dispersive seabird in the frigatebird family. Major nesting populations are found in the Pacific (including Galapagos Islands) and Indian Oceans, as well as a population in the South Atlantic.
The Great Frigatebird is a lightly built large seabird up to 105 cm long with predominantly black plumage. The species exhibits sexual dimorphism; the female is larger than the adult male and has a white throat and breast, and the male's scapular feathers have a purple-green sheen. In breeding season, the male is able to distend its striking red gular sac. The species feeds on fish taken in flight from the ocean's surface (mostly flying fish), and indulges in kleptoparasitism less frequently than other frigatebirds. They feed in pelagic waters within 80 km (50 mi) of their breeding colony or roosting areas. | <urn:uuid:a31342a6-106a-4dd2-8a36-86209791f909> | 3.34375 | 206 | Knowledge Article | Science & Tech. | 42.0175 |
Bob Tisdale, author of the awesome book “Who Turned on the Heat?” presented an interesting problem that turns out to be a good application of robust statistical tests called empirical fluctuation processes.
Bob notes that sea surface temperature (SST) in a large region of the globe in the Eastern Pacific does not appear to have warmed at all in the last 30 years, in contrast to model simulations (CMIP SST) for that region that show strong warming. The region in question is shown below.
The question is, what is the statistical significance of the difference between model simulations and the observations? The graph comparing the models with observations from Bob’s book shows two CMIP model projections strongly increasing at 0.15C per decade for the region (brown and green) and the observations increasing at 0.006C per decade (magenta).
However, there is a lot of variability in the observations, so the natural question is whether the difference is statistically significant. A simple-minded approach would be to compare the temperature change between 1980 and 2012 relative to the standard deviation, but this would be a very low-power test, one only used by someone who wanted to obfuscate the obvious failure of climate models in this region.
Empirical fluctuation processes are a natural way to examine such questions in a powerful and generalized way, as we can ask of a strongly autocorrelated series — Has there been a change in level? — without requiring the increase to be a linear trend.
To illustrate the difference, if we assume a linear regression model, as is the usual practice, Y = mt + c, the statistical test for a trend is whether the trend coefficient m is greater than zero:

H0: m = 0 vs. Ha: m > 0

If we test for a change in level, the EFP statistical test is whether m is constant over all times t:

H0: m_i = m_0 for all times t_i.
For answering questions similar to tests of trends in linear regression, the EFP path determines if and when a simple constant model Y = m + c deviates from the data. In R this is represented as the model Y ~ 1. If we were to use a full model Y ~ t, then this would test whether the trend of Y is constant, not whether the level of Y is constant. This is clearer if you have run linear models in R.
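To make the idea concrete, here is a simplified sketch of such a fluctuation process for the constant-level model Y ~ 1: cumulative sums of standardized recursive residuals, written in Python/NumPy for portability. A real analysis would use something like efp() from R's strucchange package; the simulated series and the size of the level shift below are invented for illustration.

```python
import numpy as np

def cusum_level(y):
    """Empirical fluctuation path for the constant-level model Y ~ 1:
    cumulative sums of standardized recursive residuals (each point minus
    the mean of all earlier points). Large excursions signal a level change."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    w = np.array([y[t] - y[:t].mean() for t in range(1, n)])
    w /= w.std(ddof=1)
    return np.cumsum(w) / np.sqrt(n)

rng = np.random.default_rng(0)
flat = rng.normal(0.0, 1.0, 200)                        # constant level
step = flat + np.r_[np.zeros(100), 0.8 * np.ones(100)]  # level shift halfway

print(np.abs(cusum_level(flat)).max())   # modest excursion
print(np.abs(cusum_level(step)).max())   # much larger -> level change
```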
Moving on to the analysis, below are the three data series given to me by Bob, and available with the R code here.
The figure below shows, for each of the series in question, the EFP path as the black line and 95% significance levels for the EFP path in red.
It can be seen clearly that while the EFP path for the SST observations series shows a little unusual behavior, with a significant change in level in 1998 and again in 2005, the level is currently is not significantly above the level in 1985.
The EFP path for the CMIP3 model (CMIP5 is similar), however, exceeds the 95% significant level in 1990 and continues to increase, clearly indicating a structural increase in level in the model that has continued to intensify.
Furthermore, we can ask whether there is a change in level between the CMIP models and the SST observations. The figure below shows the EFP path for the differences CMIP3-SST and CMIP5-SST. After some deviation from zero at about 1990, around 2000 the difference becomes very significant at the 5% level, and continues to increase. Thus the EFP test shows a very significant and widening disagreement between the temperature simulation of the CMIP over the observational SST series in the Eastern Pacific region after the year 2000.
While the average of multiple model simulations show a significant change in level over the period, in the parlance of climate science, there is not yet a detectable change in level in the observations.
One could say I am comparing apples and oranges, as the models are average behavior while the SST observations are a single realization. But, the fact remains only the simulations of models show warming, because there is no support for warming of the region from the observations. This is consistent with the previous post on Santer’s paper showing failure of models to match the observations over most latitudinal bands. | <urn:uuid:b44c48fc-14d5-4f4c-a74d-0f9e0c969aaa> | 2.75 | 892 | Academic Writing | Science & Tech. | 45.891421 |
Section 4: Strings and Extra Dimensions
Figure 9: A depiction of one of the six-dimensional spaces that seem promising for string compactification.
Source: © Wikimedia Commons, CC Attribution-Share Alike 2.5 Generic license. Author: Lunch, 23 September 2007.
We have already mentioned that string theories that correspond to quantum gravity together with the three other known fundamental forces seem to require 10 spacetime dimensions. While this may come as a bit of a shock—after all, we certainly seem to live in four spacetime dimensions—it does not immediately contradict the ability of string theory to describe our universe. The reason is that what we call a "physical theory" is a set of equations that is dictated by the fundamental fields and their interactions. Most physical theories have a unique basic set of fields and interactions, but the equations may have many different solutions. For instance, Einstein's theory of general relativity has many nonphysical solutions in addition to the cosmological solutions that look like our own universe. We know that there are solutions of string theory in which the 10 dimensions take the form of four macroscopic spacetime dimensions and six dimensions curled up in such a way as to be almost invisible. The hope is that one of these is relevant to physics in our world.
To begin to understand the physical consequences of tiny, curled-up extra dimensions, let us consider the simplest relevant example. The simplest possibility is to consider strings propagating in nine-dimensional flat spacetime, with the 10th dimension curled up on a circle of size R. This is clearly not a realistic theory of quantum gravity, but it offers us a tantalizing glimpse into one of the great theoretical questions about gravity: How will a consistent theory of quantum gravity alter our notions of spacetime geometry at short distances? In string theory, the concept of curling up, or compactification, on a circle, is already startlingly different from what it would be in point particle theory.
Figure 10: String theorists generally believe that extra dimensions are compactified, or curled up.
To compare string theory with normal particle theories, we will compute the simplest physical observable in each kind of theory, when it is compactified on a circle from ten to nine dimensions. This simplest observable is just the masses of elementary particles in the lower-dimensional space. It will turn out that a single type of particle (or string) in 10 dimensions gives rise to a whole infinite tower of particles in nine dimensions. But the infinite towers in the string and particle cases have an important difference that highlights the way that strings "see" a different geometry than point particles.
Particles in a curled-up dimension
Let us start by explaining how an infinite tower of nine-dimensional (9D) particles arises in the 10-dimensional (10D) particle theory. To a 9D observer, the velocity and momentum of a given particle in the hidden tenth dimension, which is too small to observe, are invisible. But the motion is real, and a particle moving in the tenth dimension has a nonzero energy. Since the particle is not moving around in the visible dimensions, one cannot attribute its energy to energy of motion, so the 9D observer attributes this energy to the particle's mass. Therefore, for a given particle species in the fundamental 10D theory, each type of motion it is allowed to perform along the extra circle gives rise to a new elementary particle from the 9D perspective.
Figure 11: If a particle is constrained to move on a circle, its wave must resemble the left rather than the right drawing.
Source: © CK-12 Foundation.
To understand precisely what elementary particles the 9D observer sees, we need to understand how the 10D particle is allowed to move on the circle. It turns out that this is quite simple. In quantum mechanics, as we will see in Units 5 and 6, the mathematical description of a particle is a "probability wave" that gives the likelihood of the particle being found at any position in space. The particle's energy is related to the frequency of the wave: a higher frequency wave corresponds to a particle with higher energy. When the particle motion is confined to a circle, as it is for our particle moving in the compactified tenth dimension, the particle's probability wave needs to oscillate some definite number of times (0, 1, 2 ...) as one goes around the circle and comes back to the same point. Each possible number of oscillations on the circle corresponds to a distinct value of energy that the 10D particle can have, and each distinct value of energy will look like a new particle with a different mass to the 9D observer. The masses of these particles are related to the size of the circle, and the number of wave oscillations around the circle: mn = n/R, where n = 0, 1, 2, ... counts the oscillations of the wave around the circle of size R.
So, as promised, the hidden velocity in the tenth dimension gives rise to a whole tower of particles in nine dimensions.
Strings in a curled-up dimension
Now, let us consider a string theory compactified on the same circle as above. For all intents and purposes, if the string itself is not oscillating, it is just like the 10D particle we discussed above. The 9D experimentalist will see the single string give rise to an infinite tower of 9D particles with distinct masses. But that's not the end of the story. We can also wind the string around the circular tenth dimension. To visualize this, imagine winding a rubber band around the thin part of a doorknob, which is also a circle. If the string has a tension Tstring = 1/α′ (the conventional notation for the string tension), then winding the string once, twice, three times ... around a circle of size R costs an energy mn = nR/α′, for n = 1, 2, 3, ...
This is because the tension is defined as the mass per unit length of the string; and if we wind the string n times around the circle, it has a length which is n times the circumference of the circle. Just as a 9D experimentalist cannot see momentum in the 10th dimension, she also cannot see this string's winding number. Instead, she sees each of the winding states above as new elementary particles in the 9D world, with discrete masses that depend on the size of the compactified dimension and the string tension.
Geometry at short distances
One of the problems of quantum gravity raised in Section 2 is that we expect geometry at short distances to be different somehow. After working out what particles our 9D observer would expect to see, we are finally in a position to understand how geometry at short distances is different in a string theory.
The string tension, 1/α′, is related to the length of the string, ℓs, via α′ = ℓs². Strings are expected to be tiny, with ℓs ~ 10⁻³² centimeter, so the string tension is very high. If the circle is of moderate to macroscopic size, the winding mode particles are incredibly massive since their mass is proportional to the size of the circle multiplied by the string tension. In this case, the 9D elementary particle masses in the string theory look much like those in the point particle theory on a circle of the same size, because such incredibly massive particles are difficult to see in experiments.
Figure 12: The consequences of strings winding around a larger extra dimension are the same as strings moving around a smaller extra dimension.
However, let us now imagine shrinking R until it approaches the scale of string theory or quantum gravity, and becomes less than ℓs. Then, the pictures one sees in point particle theory, and in string theory, are completely different. When R is smaller than ℓs, the winding modes m1, m2 ... are becoming lighter and lighter. And at very small radii, they are low-energy excitations that one would see in experiments as light 9D particles.
In the string theory with a small, compactified dimension, then, there are two ways that a string can give rise to a tower of 9D particles: motion around the circle, as in the particle theory, and winding around the circle, which is unique to the string theory. We learn something very interesting about geometry in string theory when we compare the masses of particles in these two towers.
For example, in the "motion" tower, m1 = 1/R; and in the "winding" tower, m1 = R/α′. If we had a circle of size α′/R instead of size R, we'd get exactly the same particles, with the roles of the momentum-carrying strings and the wound strings interchanged. Up to this interchange, strings on a very large space are identical (in terms of these light particles, at least) to strings on a very small space. This large/small equivalence extends beyond the simple considerations we have described here. Indeed, the full string theory on a circle of radius R is completely equivalent to the full string theory on a circle of radius α′/R. This is a very simple illustration of what is sometimes called "quantum geometry" in string theory; string theories see geometric spaces of small size in a very different way than particle theories do. This is clearly an exciting realization, because many of the mysteries of quantum gravity involve spacetime at short distances and high energies.
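The interchange is easy to verify numerically. A minimal sketch in Python, working in units where α′ = 1 (the radius R = 2 is an arbitrary example):

```python
# Mass towers for a string on a circle, in units where alpha' = 1.
def momentum_masses(R, n_max=3):
    return [n / R for n in range(1, n_max + 1)]   # m_n = n / R

def winding_masses(R, n_max=3):
    return [n * R for n in range(1, n_max + 1)]   # m_n = n * R / alpha'

R = 2.0
R_dual = 1.0 / R  # alpha' / R with alpha' = 1

print(momentum_masses(R), winding_masses(R))            # [0.5, 1.0, 1.5] [2.0, 4.0, 6.0]
print(momentum_masses(R_dual), winding_masses(R_dual))  # [2.0, 4.0, 6.0] [0.5, 1.0, 1.5]
```

The two towers simply trade places under R → α′/R, which is the equivalence described above.
| <urn:uuid:ce0b91e4-e337-4c4e-a132-16e11e5da7ae> | 4.03125 | 1,876 | Academic Writing | Science & Tech. | 43.052426 |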