MathGraph32 is a great free tool for exploring 2D and 3D math concepts.
Posted in Computers & Internet, Learning, Mathematics | 4 Comments »
The 17-year cicada is due to emerge in north-eastern parts of the USA in Spring 2013.
Posted in Environment, Mathematics | No Comments »
Scientists used Jacob’s Staff during the Renaissance to find heights and distances, using trigonometry. It was a forerunner to the sextant.
Posted in Mathematics, Travel & Culture | No Comments »
This is a classic late-1950s cartoon where Donald learns where math comes from.
Posted in Learning, Math movies, Mathematics | 4 Comments »
Can you count faster than a chimp? Our number sense begins with our ability to count.
See this PBS documentary about the development of fractals and how they are used in computer graphics.
Posted in Computers & Internet, Math movies, Mathematics | No Comments »
HOTmaths is an interesting online math resource. I take another look after a few years.
Posted in Computers & Internet, Learning, Mathematics | No Comments »
This short talk argues that people should know about probability and statistics before calculus. I tend to agree.
Posted in Learning, Math movies, Mathematics | 2 Comments »
Here’s an interactive graph where you can explore straight lines, parabolas, cubics and Bezier curves.
Posted in Computers & Internet, Mathematics | No Comments »
Is Santa’s frantic dash around the world each Christmas Eve even possible?
Posted in Mathematics, Travel & Culture | 6 Comments »
Sign up for the free IntMath Newsletter. Get math study tips, information, news and updates twice a month. Join thousands of satisfied students, teachers and parents!
See the Interactive Mathematics spam guarantee.
What's Killing the Key Deer?
From land use to traffic, the deck is being stacked against this endangered species
Roger Di Silvestro
When the rains came that day to palm-shrouded Big Pine Key, the bulldozers were already lined up to go to work, cutting a road across this second-largest of the islands that make up the Florida Keys. At the same time, a phone was ringing in the Atlanta office of attorney David White, who was working on cases involving the National Wildlife Federation and the Florida Wildlife Federation.
"I got the call at 8 a.m.," he says. "By 2 p.m. I was in Miami pulling a judge away from a federal cocaine case."
By then, White had been joined by another Federation attorney, Henry Morgenstern. What he and Morgenstern wanted from the judge was an injunction to stop the road, which would bring heavy traffic into the National Key Deer Refuge, home to a diminutive, endangered subspecies of white-tailed deer unique to the Keys.
Traffic on roads penetrating deer habitat already was killing dozens of the animals yearly. The new road would compound the threat by putting 3,000 more vehicle trips per day on the road. This would tip the deer further toward extinction, the attorneys contended, and therefore was illegal under the Endangered Species Act.
The go-ahead to start construction of the road had come from the commissioners of Monroe County, which encompasses the Keys. The road was to be called Lytton's Way, named for a former county commissioner. It was designed to relieve traffic jams on Highway 1, the only road that runs the full length of the Keys. During winter months, a Saturday flea market on Big Pine backs up traffic for miles on Highway 1, an irritation to local residents tired of being stalled. Residents want an access road to get them around the jam. The access they had in mind in the late 1980s was Lytton's Way.
Local conservationists suggested elevating Highway 1 instead, allowing residents to travel under it. But businesses, fearing an elevated highway would siphon away transient customers, "went bananas," says Fred Manillo, a 17-year resident of Big Pine Key and a leader of the Key Deer Protection Alliance, Inc., a private group that seeks protection of the animals and their habitat.
The only thing that kept the bulldozers from cutting through the road that day was the rain, which gave the Federation lawyers time to present their case to the judge. The judge said he wanted to hear from the county commissioner's side of the debate, but the county attorney refused to attend the hearing. So the judge issued an injunction against the road, putting at least a temporary stop to it.
60 Years of Key Deer Controversy
The Key deer has been at the center of such controversies for at least 60 years, dating to a time when uncontrolled hunting was wiping out the species.
By 1950, the subspecies had sunk to an estimated 50 animals. Granted its own national wildlife refuge by Congress in 1957 and covered by the Endangered Species Act since 1967, the Key deer later showed signs of recovery, reaching a peak of perhaps 400 animals in the 1970s.
In more recent years, however, the population has dwindled to no more than 300. The story behind the decline illustrates how weak implementation of the Endangered Species Act can undermine the protection of vanishing creatures.
More Than Just the Deer at Risk
More rides on the fate of the Key deer than just the survival of a single creature. The refuge is home to 16 listed species, including:
- Lower Keys marsh rabbit
- silver rice rat
- American crocodile
"The Key deer is the flagship for a whole fleet of species in the Keys," says Mark Robertson, head of The Nature Conservancy's Key West office. "There are many endemic plant and animal species, and they're all going to sink or swim together."
The quality of life for people who live in deer habitat also hangs in the balance, because what is good for Key deer, such as clean water, can also benefit local folks.
Moreover, recent court cases involving the Key deer have led to decisions of national significance, both for listed species and for taxpayers interested in saving billions of federal dollars.
How the Key Deer Came to the Florida Keys
The Key deer is a subspecies of white-tailed deer that lives only on a few islands in the Florida Keys, from Little Pine Key to Sugarloaf Key. The biggest and most important of those islands is 16-square-mile Big Pine Key, home to the bulk of the deer population and the base for the federal deer refuge.
Scientists speculate that white-tailed deer arrived in the Keys during the most-recent ice age, when seas were lower and the Keys were not islands but a continuous ridge of land. When the glaciers receded about 10,000 years ago, the seas rose, and the whitetails found themselves isolated from the mainland.
As a rule, species of large mammal that become isolated on islands gradually become smaller through evolution, allowing more efficient use of the limited amounts of resources available on islands. Thus, Key deer, at maturity, stand about 30 inches tall at the shoulder and weigh a maximum of only 80 pounds for males and 63 pounds for females, roughly half the weight of the average northern continental whitetail.
The deer feed on at least 180 species of Keys vegetation. They can drink brackish water, but cannot survive without some source of fresh water. Big Pine offers some of the most reliable water sources. Many of the Keys that lie off of Big Pine lack permanent drinking-water supplies, particularly during droughts.
Threats to the Key Deer
Since the 1970s, the Key deer has been dwindling. With the exception of Big Pine and No Name Keys, says Mike McMinn, assistant manager of the refuge, the population is collapsing. Cudjoe and Sugarloaf, he says, no longer even have deer. Limited to a single, declining population in a constricted range beleaguered by development, the deer are vulnerable to catastrophic destruction. Says McMinn, "If one force-four or force-five hurricane hits Big Pine Key, we'll be lucky if we have any deer left."
While the deer have been declining, development has continued apace, centering on the most crucial part of Key deer habitat: Big Pine Key. Fifty years ago, only seven people lived on Big Pine. Twenty years ago, the island housed 1,500. Today, the number stands at about 4,300.
This influx of people and the development they stimulate have yielded a variety of factors dangerous to the deer:
Roads and motor vehicles: Road traffic kills an average of 45 deer annually, the subspecies' single largest cause of death in an average annual mortality of 63 animals. McMinn says that the deer found dead do not represent all of those killed, however, since some crawl off to die undiscovered. Paving and other road improvements increase the number of deer killed. McMinn cites an unpaved road that claimed no deer between 1985 and 1992, but on which four deer were killed within the first three years after it was paved.
Mosquito ditches: These narrow canals, about 2 feet wide and 2 or 3 feet deep, crisscross Big Pine in a chaotic network created to house gambusia, a fish introduced to control mosquitos by eating them. "The ditches are a problem," says McMinn, "especially for young deer, but also for adults, which sometimes drown in them."
Fragmented habitat: "A big problem faced by the national wildlife refuge is fragmentation of deer habitat," says Manillo. The deer use a large portion of Big Pine, but development has subdivided the habitat. According to Mark Rosch, executive director of the Monroe County Land Authority, Big Pine Key is parceled out in plots of 5 acres or less. "If you have 3,000 or 5,000 parcels of land, you have 3,000 or 5,000 different expectations about what's going to be done with that land," Rosch says. The U.S. Fish and Wildlife Service (FWS) has to buy each of those parcels singly in order to create larger stretches of habitat. The price tag is $65 million, but since 1994 Congress has provided no funds for acquisition. Even before 1994, Rosch says, FWS received only about $1 million annually for buying the land.
Housing developments: When refuge biologist Tom Wilmers conducted a deer survey recently on Big Pine Key, he found more deer in the subdivisions than outside them. The animals enter yards to browse on ornamental plants and to take handouts from people. "Feeding deer is illegal," says McMinn. "When people feed the deer, the animals congregate in unnatural groups, which sets them up for disease epidemics."
Dogs: Free-roaming dogs are another threat to the tiny deer, since dogs will readily adopt the predatory habits of their wolf ancestors, chasing hoofed animals in packs. McMinn suspects that dogs are behind the deer's disappearance from some of the outer islands.
The press for more houses, roads and other development on Big Pine Key persists. Developers often ignore the needs of the deer and, in the process, ignore the restrictions of the Endangered Species Act and other laws. The county commissioners have let the developers get away with this because the developers wield powerful political clout, says Florida Wildlife Federation president Manley Fuller. One result of the developers' power was the Lytton's Way conflict.
In a similar vein, two years ago Monroe County officials gave a Big Pine Key resident permission to build a 6-foot-high, 400-foot-long fence on his lot, even though the county earlier had banned fencing in the area because fences impede deer movement. The resident said he needed the fence to keep children out of his hot tub and deer out of his shrubs. The permit was challenged by the Department of Community Affairs, a state agency that oversees development, and the case ended up in the Florida Supreme Court.
The court reached a five-to-two decision against the fence. Writing for the majority, Justice Gerald Kogan declared, "Landowners do not have an untrammeled right to use their property regardless of the legitimate environmental interest of the state." He added, "The clear policy underlying Florida environmental regulation is that our society is to be the steward of the natural world, not its unreasoning overlord."
Rosch argues that failure in Key deer management should not be blamed exclusively on the county commissioners.
"A tendency has developed to see the commission as having primary responsibility for the deer, but it doesn't," he says. FWS, he contends, has perpetuated serious management problems by failing to designate critical habitat for the deer, as required by the Endangered Species Act. Until FWS does this, he suggests, the county commissioners lack a critical guideline for development. FWS, Rosch believes, has never designated critical habitat because the agency wants to avoid the intense controversy surrounding such a decision.
Barry Stieglitz, manager of the National Key Deer Refuge, readily admits that FWS has avoided critical habitat designation for political reasons, but he considers those reasons sound.
"Designation would frighten residents already nervous about land-use issues and harden resistance to deer protection," he says. "You could go ahead and apply the additional label, but it's not going to affect the importance of the habitat."
The clouded situation surrounding the Key deer glimmers with a faint silver lining. For one thing, Florida has designated the Keys as an Area of Critical State Concern, making all county-commission decisions subject to state scrutiny.
"Any new land-use regulations have to be approved by the Florida cabinet [a quasi-legislative body elected by popular vote] and the Florida Department of Community Affairs," says The Nature Conservancy's Mark Robertson. And the state has been demonstratively more protective of the Keys than have the commissioners.
Another promising development for Key deer habitat protection: The Monroe County Commission that came in with the 1991 election is an improvement over previous commissions, says Rosch. For example, the new commissioners do not claim title to Lytton's Way.
The future of the deer also brightened recently when the Florida Department of Community Affairs ordered Monroe County commissioners to revise a proposed comprehensive county plan. The order came after local conservationists won a hearing on the plan, arguing that its development bias would harm the community. The state hearing officer, Larry Sartin, ruled in 1995 that state and local governments must limit growth in Monroe County or face ecological collapse.
The reasons for Sartin's ruling went well beyond Key deer, which he said "cannot tolerate further development without facing extinction." He also feared that additional development would hamper hurricane-evacuation plans; destroy "the unique environmental characteristics and importance" of North Key Largo, Ohio Key and Coupon Bight; and threaten water quality in the area.
Floodplain Management and Key Deer Protection
Additional support for Key deer protection came out of a 1994 court case brought by National Wildlife Federation and Florida Wildlife Federation against the Federal Emergency Management Agency (FEMA), which refused to consult with FWS to determine whether FEMA's flood-insurance subsidies encouraged development that might harm the deer.
Attorneys for NWF and FWF, David White and Henry Morgenstern, contended that the consultation was required under the Endangered Species Act. FEMA officials argued that the agency was not subject to the law.
U.S. District Judge K. Michael Moore agreed with White and Morgenstern. National Wildlife Federation attorney John Kostyack is now monitoring FEMA's compliance with the court order.
This ruling could affect FEMA activities across the United States.
Presently, FEMA underwrites insurance at bargain prices on buildings constructed in high-risk areas, such as floodplains and barrier islands, and spends an average of $1.5 billion yearly on disaster assistance and flood-insurance claims. The court decision suggests that development under FEMA will be curtailed in some areas, saving tax dollars that would otherwise wash away in response to inevitable floods.
"This was a victory for both endangered species and the American taxpayer," White says. "The American people do not want to subsidize new development in flood zones and sensitive coastal areas which jeopardizes the existence of endangered species." Flood-prone areas provide habitat for 40 percent of U.S. endangered species and 60 percent of threatened species.
The Key to Key Deer Survival? Land
In the end, the crucial factor for Key deer is land. Without habitat, the animals and other jeopardized species will dwindle away.
"The ultimate solution is to acquire as much of the habitat as we can," says Florida Wildlife Federation's Manley Fuller. Big Pine's patchwork quilt of small lots makes land purchase a challenge, but one that shows some promise of being met. Under a state program called Conservation and Recreation Lands, Florida is acquiring undeveloped lands on Big Pine and No Name Keys. The Nature Conservancy has completed more than 200 transactions on Big Pine Key alone, acquiring a total of 550 acres.
A critical need now is to revive federal land acquisition. David Michaud, an NWF endangered-species specialist working on Key deer issues, sees land-acquisition funds as a top priority in this program.
"We need to get more acquisition money for the Monroe County Land Authority, restart federal acquisition and involve more private groups in buying land," he says.
The factors that weigh for and against the Key deer are echoed throughout the nation where other listed species struggle to survive.
"If people are willing to coexist with endangered species and the natural world," says Carolyn Waldron, NWF's acting vice president of Conservation Programs, "these creatures will survive and our quality of life will be the better for it."
Senior editor Roger DiSilvestro reports that Key deer conservation lost a valuable ally when Fred Manillo died suddenly just after he was interviewed for this article.
National Wildlife Federation and the Key Deer
The National Wildlife Federation made saving the Key deer the theme of its annual National Wildlife Week in 1952--when the subspecies numbered only 50 animals. Around that time, NWF and other groups provided funds for hiring a game warden to stop deer poaching. NWF also supported bills that led to creation of the National Key Deer Refuge.
Today, NWF has formulated a policy designed to protect the Key deer and meet the needs of the human community.
"Our goal is to protect the deer and other wildlife while ensuring a quality environment for human residents," says John Kostyack, who leads NWF's Key deer efforts.
Patuxent Wildlife Research Center
|Did You Know...||Reference on Site|
|that among all vectors, ticks have the distinction of transmitting the widest diversity of microbes that are harmful to humans?||Ticks can harbor and transmit a wide diversity of pathogens simultaneously. Viruses, bacteria, and protozoan parasites are all transmitted by ticks. Most health problems in humans result from pathogens transmitted from ticks during blood meals.|
|that emerging diseases like West Nile Virus that infect both wildlife and humans and that are actively transmitted between them require wildlife biologists to assist public health authorities?||In the collaboration between wildlife scientists and epidemiologists, research on wildlife species addresses the ecological, physiological, and behavioral aspects of the disease in animals, providing insights into how wildlife species maintain and spread the disease to people.|
|that autumnal die-offs, involving hundreds of migratory birds, occurred in Chesapeake Bay in 2001, 2004 and 2005?||The most prominent events were at the Poplar Island Complex in proximity to brackish impoundments with algal blooms and elevated cyanobacteria counts (Anabaena spp.). Although avian botulism was documented as the cause of death of some individuals, recent evidence suggests that cyanobacteria toxin microcystin (MC) may play a role in the initiation of such botulism outbreaks. More information...|
|that Patuxent Wildlife Research Center has a major role in the training of whooping crane chicks to fly behind an ultralight?||Whoopers are trained to follow ultralight aircraft to learn a new migration route as the first step in establishing a new migratory flock of whoopers. The Class of 2006 will be the sixth group of young whooping cranes to take part in a project sponsored by the Whooping Crane Eastern Partnership (WCEP), a coalition of public and private organizations that is reintroducing endangered whooping cranes in eastern North America, part of their historic range. More information... Also see Whooping Crane Eastern Partnership and Operation Migration's web pages for further information.|
|that the Breeding Bird Survey (BBS) Results and Analysis website is a source of information about distributions and population changes of North American birds?||It is also a tool for learning about birds, with connections to the ID tips showing pictures of common North American birds and quizzes on bird distribution and identification. The primary objective of the BBS has been the estimation of population change for songbirds. However, the data have many potential uses, and investigators have used the data to address a variety of research and management objectives. More information...|
|that a laysan albatross ranks as the #1 oldest wild bird in North America: 50 years, 8 months?||The USGS Patuxent Wildlife Research Center Bird Banding Laboratory (BBL) maintains longevity records from bird banding information at this site: http://www.pwrc.usgs.gov/BBL/homepage/longvrec.htm Here's the top ten.|
|that you can learn the breeding calls of frogs and toads in the eastern United States and that you can listen to the calls of species in your state by using the frog call lookup option?||This is available on the Public Quiz of our Frog Call Quiz website at http://www.pwrc.usgs.gov/Frogquiz/index.cfm?fuseaction=publicQuiz.StartPublicQuiz and the frog call lookup option is found at: http://www.pwrc.usgs.gov/Frogquiz/index.cfm?fuseaction=main.lookup|
|that biologists use lice to identify cowbird hosts?||The host specificity of avian lice (Phthiraptera) may be utilized by biologists to investigate the brood parasitism patterns of Brown-headed Cowbirds (Molothrus ater). As nestlings, brood parasites have a unique opportunity to encounter lice that are typically host specific. More information...|
|that the Contaminant Exposure and Effects-Terrestrial Vertebrates database (CEE-TV) contains contaminant exposure and effects information for terrestrial vertebrates (birds, mammals, amphibians and reptiles) that reside in estuarine and coastal habitats along the Atlantic, Gulf and Pacific Coasts including Alaska and Hawaii and in the Great Lakes Region?||Data is compiled through computerized searches of published literature, reviews of existing databases, and solicitation of unpublished reports from conservation agencies, private groups and universities. Currently, the CEE-TV database contains over 17,000 records containing ecotoxicological exposure and effects information on approximately 252,000 individuals representing over 450 species. More information...|
|that the North American Breeding Bird Survey (BBS) is a cooperative effort between the U.S. Geological Survey's Patuxent Wildlife Research Center and the Canadian Wildlife Service's National Wildlife Research Centre to monitor the status and trends of North American bird populations?||Following a rigorous protocol, BBS data are collected by thousands of dedicated participants along thousands of randomly established roadside routes throughout the continent. Professional BBS coordinators and data managers work closely with researchers and statisticians to compile and deliver these population data and population trend analyses on more than 400 bird species, for use by conservation managers, scientists, and the general public. More information...|
|that the whooping crane is an endangered species and why?||Several factors have harmed whooping cranes. The primary one is the loss of habitat. Wetlands have been drained for agriculture. Oil and gas development and the construction of intercoastal waterways for barge traffic are additional threats. More information...|
|that bird banding is a universal and indispensable technique for studying the movement, survival and behavior of birds?||Bird banding is one of the most useful tools in the modern study of wild birds. Wild birds are captured and marked with a uniquely numbered band or ring placed on the leg. More information...|
|that earthworms help create the soil that supports life?||Earthworms eat soil because it contains organic matter. Organic matter comes from living organisms. A banana peel, a tree root, a deceased armadillo - all of them decay and become part of the soil organic matter. More information...|
|that a male Baltimore Oriole can be told from other black and orange orioles by its completely black head?||See other identification tips, life history information, and hear song of the Baltimore Oriole at http://www.mbr-pwrc.usgs.gov/id/framlst/i5070id.html.|
|that because tadpoles typically occupy specific aquatic habitats for longer periods than their adults, they sometimes are more difficult to find and nearly always more difficult to identify?||A key for the tadpoles of the United States and Canada features a different format and approach to identifying frog larvae. More details of ontogenetic variation are included than in many keys, and more attention is paid to using characteristics of living tadpoles. A tutorial examines morphological traits, and color photographs are included to simplify the identification process. More information...|
|that a large beaver-like rodent called a nutria has caused extensive marsh loss at Blackwater National Wildlife Refuge in Maryland?||Nutria are large (8-18 lb) beaver-like rodents, 5 to 10 times as large as our native muskrat, that were accidentally introduced to Maryland in the 1940's. USGS scientists at Patuxent Wildlife Research Center, in partnership with the state of Maryland and the US Fish and Wildlife Service, are working together to study the role of nutria in the extensive loss of marsh at Blackwater National Wildlife Refuge and surrounding state and private wetlands. Marsh loss was noticeable from photographs taken since the 1950's; loss of marsh has coincided with the increase of the nutria population; and nutria activity is directly contributing to marsh loss in Maryland.|
|that living organisms have long been used to monitor environmental contamination?||The value of biota in monitoring programs relates to their characteristic of integrating contaminant exposure and effects over time and space. More Information...|
|that SET stands for Surface Elevation Table and is a portable mechanical leveling device for measuring the relative elevation of wetland sediments?||The SET website is specifically designed to be a forum for researchers in wetland science who use or might use the device and to offer more information about the proper use of the SET and interpretation of its data. But we encourage anyone who wants to learn more about research techniques and their development to visit the site as well. More information...|
Optical Materials by Sol-Gel Method
Tunable Laser samples prepared by the sol-gel method
Owing to the original efforts of the National Aeronautics and Space Administration to supply electric current from silicon photovoltaic (PV) cells to space vehicles, such devices are now available at a price of several US dollars per watt of power. At present, large-scale solar cell arrays are operating in inaccessible locations distant from conventional electricity plants. Previous estimates of a price decrease to $1-$2/W, obtained by comparison with the aluminium or electronic-computer industries, may be slightly optimistic, as the difficulty of preparing inexpensive silicon with a high photoelectric yield cannot be removed by increased production. One way of lowering the price of PV electricity is to concentrate the solar radiation, particularly that part which is most efficient in PV energy conversion, onto high-efficiency solar cells. Although such cells are expensive, their number and cost can be considerably diminished by concentrating solar light onto their small areas. The light emitted as fluorescence from the edges of the concentrator can be matched to about 50% efficiency of the solar cells.
The operation of a Luminescent Solar Concentrators (LSC) is based on absorption of solar radiation in a collector containing a fluorescent species in which the emission bands have little or no overlap with the absorption bands. The fluorescence emission is trapped by total internal reflection and concentrated at the edges of the collector which is usually a thin glass plate.
LSC advantages over conventional solar concentrators
LSCs have the following advantages over conventional solar concentrators: they collect both direct and diffuse light; the large area of the collector plate in contact with air gives good heat dissipation of non-utilized energy, so that essentially "cold light" reaches the PV cells; tracking the sun is unnecessary; and the luminescent species can be chosen to match the concentrated light to the maximum sensitivity of the PV cells. The main advantage is that the large area to be covered by solar cells is reduced to the area of the edges.
The theory of the LSC, based on internal reflection of fluorescent light that is subsequently concentrated at the edges, has been discussed in detail for inorganic materials and for organic dyes incorporated in bulk polymers. A transparent plate doped with a fluorescent species absorbs in the visible (solar) part of the spectrum. The resulting high-yield luminescence is then emitted at longer wavelengths. About 75%-80% of the luminescence is trapped by total internal reflection in a plate having a refractive index of about 1.5. Repeated reflections carry the fluorescent light to the edges of the plate, where it emerges in concentrated form. The concentration factor is proportional to the ratio of the surface area of the plate to that of its edges, multiplied by the optical efficiency of the plate. Photovoltaic cells can be coupled to the edges to receive the concentrated light. Such an arrangement should substantially decrease the number of photovoltaic cells needed to produce a given amount of electricity and thus reduce the cost of the photovoltaic system.
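The concentration factor described above (plate surface area over edge area, scaled by the optical efficiency) can be sketched numerically. The plate dimensions and efficiency below are illustrative assumptions, not values from a specific device:

```python
# Effective concentration of a luminescent solar concentrator (LSC):
# concentration ~ optical_efficiency * (top surface area / edge area).
def lsc_concentration(length_m, width_m, thickness_m, optical_efficiency):
    surface_area = length_m * width_m                      # light-collecting face
    edge_area = 2 * (length_m + width_m) * thickness_m     # perimeter edges
    geometric_gain = surface_area / edge_area
    return optical_efficiency * geometric_gain

# Illustrative case: a 1 m x 1 m plate, 5 mm thick, 15% optical efficiency.
gain = lsc_concentration(1.0, 1.0, 0.005, 0.15)
print(f"effective concentration: {gain:.1f}x")  # prints "effective concentration: 7.5x"
```

The thinner the plate relative to its face, the larger the geometric gain, which is why LSCs are made as thin sheets.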
While a large number of papers have been published on luminescent plates in which the dye is incorporated throughout the bulk of the plate, the configuration in which the plate is covered by a thin colorant-doped film deposited in close contact with it is relatively new. The advantage of doped thin sol-gel films in optical contact with the transparent plate is that the luminescence emitted from the thin film is trapped in the plate, in which parasitic losses due to self-absorption and scattering from impurities can be greatly reduced compared to bulk-doped plates.
As an example of such a system we may examine rhodamine 6G incorporated
in a sol-gel film. Based on the experimental data of absorption and emission of
rhodamine 6G in sol-gel glass, its quantum efficiency of 0.95, molar extinction
coefficient of 82000, and the overlap of absorption of Rh6G with the solar
spectrum using a 50 micron thick film deposited on a plate having refractive
index of 1.51, Monte-Carlo computations were performed to model optical
efficiency of a plate with an area of 1 m². The optical
efficiencies of such plates were found to be approximately 15%.
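The flavor of such a Monte-Carlo computation can be sketched as follows. The per-photon probabilities here are illustrative stand-ins for the full ray-traced model, not the published parameters; only the trapping fraction follows directly from the refractive index.

```python
import random

def lsc_optical_efficiency(n_photons=100_000, p_absorb=0.6,
                           quantum_eff=0.95, n_refr=1.51, p_edge=0.9):
    # Fraction of isotropic emission trapped by total internal reflection:
    # for refractive index n, roughly sqrt(1 - 1/n^2) (~0.75 for n ~ 1.5).
    p_trap = (1.0 - 1.0 / n_refr**2) ** 0.5
    reached_edge = 0
    for _ in range(n_photons):
        if random.random() > p_absorb:      # not absorbed by the dye film
            continue
        if random.random() > quantum_eff:   # absorbed but not re-emitted
            continue
        if random.random() > p_trap:        # re-emitted into the escape cone
            continue
        if random.random() < p_edge:        # survives transport to an edge
            reached_edge += 1
    return reached_edge / n_photons
```

With these illustrative inputs the estimate lands near the product of the four probabilities, which is the sanity check for such a simulation.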
In recent work we have indeed been able to introduce the dyes into a composite polymer/sol-gel glass system and into a glass prepared by the sol-gel procedure. A combination of two dyes increases the overlap of the absorption with the solar spectrum, with a corresponding increase in optical efficiency.
The main requirements for LSCs are efficiency, photostability and ease of fabrication. These have been achieved here by deposition of organically modified sol-gel films doped with photostable perylimide dyes on glass substrates. The absorption spectra of these dyes extend from 420 to 620 nm, covering the visible part of the solar spectrum, and the emission lies between 550 and 750 nm, close to the optimum response of silicon and gallium arsenide solar cells. The efficiency of this type of collector was calculated by the Monte-Carlo method from the absorption coefficients, the quantum efficiency of the fluorescence and the overlap between the emission and absorption spectra, and was found to be close to 20%. Optimum concentrations are shown to be strongly dependent on the extent of overlap between the absorption and the emission spectra, which also appears to be the limiting factor for the efficiency of the concentrator.
If you could take the entire planet and sort it into piles of its various elements, you'd have the following: 32% iron, 30% oxygen, 15% silicon, 14% magnesium, 3% sulfur, 2% nickel, and then much smaller piles of calcium, aluminum, and other trace elements.
Obviously, we don’t breathe an iron atmosphere or swim in oceans of silicon. The elements of Earth are layered within the planet.
We live on the outermost layer of Earth, called the crust. This varies in depth between 5 and 75 km. It’s mostly made of silicates, with a tremendous amount of oxygen mixed in. In fact, 47% of the Earth’s crust is oxygen. The thickest parts of the crust are under the continents, and the thinnest parts are underneath the oceans.
Beneath this crust is the mantle, which goes down to a depth of 2890 km. It’s the largest layer on Earth, and mostly consists of silicate rocks rich in iron and magnesium. Volcanoes are places where this mantle wells up through the crust.
Below the mantle is the core, which is broken up into two parts: a solid inner core with a radius of 1,220 km, and then a liquid outer core that goes out to a radius of 3,400 km. Scientists think that the core consists mostly of iron (80%), which pulled together into the middle of the planet during the formation of the Earth, 4.5 billion years ago.
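Treated as a lookup table, the layer boundaries above can be sketched in a few lines. The 6,371 km mean radius is a standard figure, not stated in the text, and the function name is illustrative.

```python
EARTH_RADIUS_KM = 6371  # mean radius; an assumed standard value

def earth_layer(depth_km):
    # Boundaries from the figures above: crust up to ~75 km thick,
    # mantle down to 2890 km depth, inner core below radius 1220 km.
    radius_km = EARTH_RADIUS_KM - depth_km
    if depth_km <= 75:
        return "crust"
    if depth_km <= 2890:
        return "mantle"
    if radius_km > 1220:
        return "outer core"
    return "inner core"
```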
We did an episode of Astronomy Cast just on the Earth. Give it a listen, Episode 51: Earth.
The dl module defines an interface to the dlopen() function, which is the most common interface on Unix platforms for handling dynamically linked libraries. It allows the program to call arbitrary functions in such a library.
Note: This module will not work unless
sizeof(int) == sizeof(long) == sizeof(char *)
The dl module defines the following function:
Return value is a dlobject.
The dl module defines the following constants:
The dl module defines the following exception:
>>> import dl, time
>>> a = dl.open('/lib/libc.so.6')
>>> a.call('time'), time.time()
(929723914, 929723914.498)
This example was tried on a Debian GNU/Linux system, and is a good example of the fact that using this module is usually a bad alternative.
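For comparison, the modern way to make the same call is the ctypes module, which superseded dl. This is a sketch assuming a POSIX-like system where the C library can be located by name; the '/lib/libc.so.6' path in the example above is Linux-specific.

```python
import ctypes
import ctypes.util
import time

# Locate and load the C library by name (assumption: find_library("c")
# succeeds, which it does on typical Linux and macOS systems).
libc = ctypes.CDLL(ctypes.util.find_library("c"))
libc.time.restype = ctypes.c_long
libc.time.argtypes = (ctypes.c_void_p,)

unix_seconds = libc.time(None)  # the same libc call as a.call('time') above
```

Unlike dl, ctypes handles argument and return types explicitly, so it works regardless of the sizeof(int) == sizeof(long) == sizeof(char *) restriction.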
When energy-saving becomes a game
A smartphone application bringing gaming dimensions to energy awareness has helped householders in Finland, Sweden and Italy reduce their electricity consumption by up to 19%.
Chemical pollutants threaten health in the Arctic
Studies uncover risks and threats to Arctic inhabitants' health that may be due to contaminants carried by warmer air and sea-water currents resulting from climate change.
Ari Asmi: Air pollution, another factor in global warming
Tiny particles impact our air quality and cause health problems, but European researchers have been discovering how these particles can also influence climate change.
Heat trading warms up
A new heat-trading simulation tool could help create an open market for heat trading, avoiding the dumping of useful heat and saving energy while reducing carbon dioxide emissions.
Nano Foil Brightens Screen
A new process called "nano-imprinting" increases the luminosity of screens efficiently without using more energy. Engineers of the European research project NaPanil have modified glass surfaces at the micrometric and nanometric scale in order to control the path of light.
A nanotech solution controlling the path of light
We want our electrical devices to have bright screens with low energy needs, so they can be used for a long time before recharge is required. Scientists are increasing the intensity of light by making nanometer scale patterns on surfaces. The nanoimprinting method will change devices’ optical properties, without making them demand more energy.
Regenerating the Ear and the Eye
Repairing a defective ear or even an eye is no longer science fiction. Nano-technology can help to make medical history.
Boosting Memory Chips
Moore’s law predicts that the number of transistors on a silicon chip will double approximately every two years. Thanks to nanotechnology, a similar acceleration is observed in the data storage capability of memory chips. By Corinna Luecke
Sponge Metal Ships
Sponge metal is tested to cut the weight of ships by 30 percent. Researchers from Fraunhofer Institute in Chemnitz, Germany, have developed an aluminum powder that foams when heated up. The new material is lighter than water and has a high stiffness
A new material to cut the weight of ships by 30%
A new material is tested to cut the weight of ships by 30 percent. For an average sized freight vessel with a capacity of 7000 m³ this corresponds to a weight reduction of more than 1000 tons. Researchers from Fraunhofer Institute for Machine Tools and Forming Technology in Chemnitz, Germany, ...
Breaking the vicious cycle of antibiotic resistant bacteria
More people die of hospital germs than of HIV every year. The reason is that antibiotics are becoming useless against an ever bigger number of multi-resistant bacteria that are spreading throughout the world. Today, this is not just an issue in hospitals, but throughout society at large. (May '09)
Power from the Islands
Three 65-metre-high wind turbines are to forever change the face of the Högsåra archipelago off the coast of Finland. (May '08)
Researchers have developed porous materials that can soak up 80 times their volume of carbon dioxide, offering the tantalizing possibility that the greenhouse gas could be cheaply scrubbed from power-plant smokestacks. After the carbon dioxide has been absorbed by the new materials, it could be released through pressure changes, compressed, and, finally, pumped underground for long-term storage.
The "pumping it underground" part is obviously still an issue, because "What are you going to do with millions of tonnes of carbon dioxide that is not nearly as compact as nuclear waste?". And what happens if the pipe-line carrying your CO2 breaks, spills into a low-lying area, and suffocates anything living there?
Nevertheless, an interesting first step.
By Daniel Chamovitz
Last month, a group of scientists from around the world announced that they had successfully determined the DNA sequence of the tomato genome. One of the most surprising results of this work was that the lowly tomato, like rice and the common Arabidopsis before it, has more genes than we do – about 25% more.
How could this seemingly passive plant that we know and love, that key ingredient in ketchup and in our summer salads, be genetically more complex than we are?
This question has several answers, but I want to focus on one aspect of a tomato’s life, and indeed the lives of all plants that we don’t often consider: how plants deal with rootedness and adversity, and how this could influence their genetic complexity.
Rootedness is arguably the major constraint on plant evolution, and its implications are that plants can’t escape adversity. Think about how we deal with adversity. We often solve problems by moving, either by running away from a dangerous situation, or by migrating to a better environment. If we’re hungry, we can walk to the deli; if the weather gets cold, we can migrate to Miami; if we’re lonely, we can meet up with a friend for a cup of coffee. But plants are literally rooted to the ground, unable to migrate in search of food, unable to seek shelter in a storm, unable to search for a mate.
Because of this sessile state, plants have to be keenly aware of their environment so they can modulate their own physiology and survive. Indeed, many of us rarely stop to think about the sensory world in which plants live, and we may be surprised to learn that ferns, tulips, and yes, the tomato, contain incredibly sophisticated sensory mechanisms not so different from our own.
Like us, plants see light and are aware not only of its direction, but of the light’s intensity. Most surprising for many of my friends is that plants are also aware of the light’s color – like us, plants differentiate between red and blue. Plants even see types of light that we’re blind to, such as UV light and far-red light, the long waves that we barely see as the sun sets. Plants smell volatile pheromones given off by their own leaves and fruits, but also by the leaves and fruits of their neighbors. Trees feel the wind shaking their branches, and use this information to decide whether to grow tall and majestic or stubby and thick. Roots constantly monitor the soil, tasting and absorbing nutrients and also chemicals given off by other roots. Plants use these chemicals to communicate physiological states such as stress from lack of water.
The most amazing aspect of a plant’s life, to me, is that it integrates this varied sensory information, yielding an organism exquisitely suited to its environment – and this integration occurs in lieu of a nervous system. Leaves, flowers, and roots exchange information regarding light, pests, weather and water, and together this leads to different genes being turned on and off — all in the absence of neurons. So apparently a nervous system is only one evolutionary adaptation for information processing – it is necessary for human beings and other animals to process information, but that isn’t the case for plants.
Now, going back to the tomato, one adaptation that helps a plant survive in a changing environment is that it often has more than one copy of a given gene. A plant can have one copy of a gene for a normal environment, and a second copy which comes into play when it’s under environmental stress. Take a plant’s ability to sense light, for example – plants have up to 12 genes that encode different types of photoreceptors, which is more than twice that in humans. Among these genes, one will be used for high intensity blue light and one for low intensity blue light, and so on.
Our seemingly simple green neighbors utilize their genetic complexity to sense and survive adversity. They compensate for their inability to migrate away from a bad environment by having more genes which give them greater genetic options for responding to changing and extreme circumstances.
So the next time you walk around the vegetable section of your local grocer or farmers’ market, take a look at the bright, juicy tomatoes, and stop for a second to consider that they have more genes than you do. These extra genes may not make the tomato smarter, but they have helped it survive long enough to get to your shopping cart.
Daniel Chamovitz, Ph.D., author of “What a Plant Knows,” is Director of the Manna Center for Plant Biosciences at Tel Aviv University.
A Skew-t is a graphical representation of the atmosphere in a single column of air. It allows forecasters to gauge nearly everything about the environment: moisture, instability, shear, and environmental temperatures, to name just a few. A seasoned forecaster can quickly glance at a sounding and understand the basic stability of the atmosphere. A Skew-t is not the only graph used to show a sounding; Stuve diagrams are also used frequently. But in severe-weather convective situations, forecasters tend to rely on the Skew-t because it is easier to visually assess stability.
Other than a real-time sounding, which is launched at 00z and 12z from weather stations across the globe, weather models also produce forecast soundings. Forecasters can use this model information to try and predict severe potential in the summer or precipitation type in the colder months.
Here are a few places to find real-time soundings:
University of Wyoming -Stuve Diagram
The first time you saw a Skew-t, it probably just looked like a bunch of chaotic lines. But each one has a meaning and is shown graphically below.
Here is a break down of what each line is:
1) ISOBARS: Horizontal lines of equal pressure, plotted 50 mb apart. Because the pressure axis is logarithmic, the spacing between isobars increases as pressure decreases, just as a fixed pressure increment spans greater depth higher in the actual atmosphere.
2) ISOTHERMS: Lines of equal temperature in Celsius that start in the bottom left corner and run to the top right.
3) DRY ADIABAT: These represent an unsaturated parcel's ascent through the atmosphere, cooling at roughly 10 degrees Celsius per kilometer.
4) MOIST ADIABAT: Once a parcel of air has become saturated, it follows this line. The moist adiabats bend to the left as the parcel cools, since colder air cannot hold as much moisture.
5) DEWPOINT: The left of the two roughly vertical lines on the Skew-t is the dewpoint of the air at that level.
6) TEMPERATURE: The right line is the environmental temperature, the dew point and temperature are derived from the actual sounding.
7) On the right side of nearly every sounding are wind barbs which show the wind speed and direction at the different levels of the atmosphere.
A parcel of air will follow this line until it becomes saturated, and then follows the moist adiabat. This allows a forecaster (or now computers) to calculate CAPE and CINH along with a host of other thermodynamic indices. To learn more about how to properly draw a parcel line, click here.
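Following a dry adiabat is just Poisson's equation for unsaturated ascent; a minimal sketch, with standard dry-air constants and an illustrative function name:

```python
def dry_adiabatic_lift(temp_c, p_start_hpa, p_end_hpa):
    # Poisson's equation: T = T0 * (p/p0)^(Rd/cp), the curve a parcel
    # traces along a dry adiabat as it rises (pressure decreases).
    RD_OVER_CP = 287.05 / 1004.0   # gas constant / specific heat for dry air
    t_kelvin = temp_c + 273.15
    return t_kelvin * (p_end_hpa / p_start_hpa) ** RD_OVER_CP - 273.15
```

Lifting a 25 °C parcel from 1000 hPa to 900 hPa cools it to roughly 16 °C, close to the 10 °C/km rule of thumb.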
The Skew-t can be very important in predicting severe weather, from deciphering storm modes to gauging the amount of instability and the possibility of storm initiation. A number of indices generated from forecast and real-time soundings will be discussed at length in future sections.
There are two big downsides to using soundings, which do not outweigh the positives but are important to keep in mind. The first is that a sounding is a snapshot of the atmosphere in one specific location. While the atmosphere typically does not change all that fast, fronts and incoming or departing air masses can radically alter what a sounding looked like just moments before. Second, weather balloons (RAOBs) are only released at 00z and 12z, which means there are 12 hours between soundings, and a LOT can change during that time. Monitor upstream soundings to predict changes that may be coming to your location.
During the colder months, soundings can be used to predict what type of precipitation will fall. When the environment temperature stays below freezing through the whole column, then snow will be the precip type. If the temperature creeps above freezing for a rather shallow depth (50-100mb), but falls back below freezing at the surface you will likely experience sleet. When the temperature is above freezing for an extended period of time aloft but below freezing at the surface, freezing rain is likely.
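Those rules can be sketched as a small classifier. This is a deliberate simplification: real forecasting weighs layer depth and melting/refreezing energy, the 100 mb cutoff is taken from the 50-100 mb figure above, and the input format is an assumption.

```python
def precip_type(levels):
    # levels: (pressure_hpa, temp_c) pairs ordered from aloft down to the
    # surface, mimicking reading a sounding from the top of the column down.
    surface_temp = levels[-1][1]
    warm = [p for p, t in levels[:-1] if t > 0.0]   # above-freezing levels aloft
    if not warm:
        return "snow" if surface_temp <= 0.0 else "rain"
    warm_depth = max(warm) - min(warm)              # crude warm-layer depth, hPa
    if surface_temp > 0.0:
        return "rain"
    return "sleet" if warm_depth <= 100.0 else "freezing rain"
```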
Here are a few places to find forecast soundings:
Electric Rays: a Shocking Use of
The electric rays (order Torpediniformes) have fascinated naturalists since antiquity, being best known for their highly specialized electrogenic organs. These organs are generally kidney-shaped, composed of stacks of 500 to over 1,000 striated muscle plaques modified from the gill musculature. These plaques are all innervated on the same side, so that the electricity generated via muscular contraction is summed to produce an external shock.
Voltage potentials recorded from different electric rays vary tremendously, having been measured at as little as 8 to 37 volts (narcinids) up to 220 volts (in the torpedinid Torpedo nobiliana). The result is a jolt of electricity ranging from moderately tingly to stunningly powerful. In some forms, the shock is directed upward — where it may serve to deter would-be predators — and in others downward — where it may be used to incapacitate prey.
Recent field research carried out by Chris Lowe, Dick Bray, and Don Nelson off southern California has revealed that the Pacific Torpedo Ray (Torpedo californica) generates two distinct types of electrical pulse. These rays produce regular 'warning pulses' when pursued and sharp, powerful blasts to stun prey. In addition, the Pacific Torpedo employs several strategies to capture prey — including using bottom topography to sneak up on prey, cupping its pectoral fins and executing a neat barrel roll to manipulate incapacitated prey into the mouth. This may explain how these normally sluggish rays manage to capture surprisingly fast-swimming prey: there is a record of a four-foot (1.2-metre) Pacific Torpedo with a two-foot Coho Salmon (Oncorhynchus kisutch) in its stomach. This shocking ability may also explain why — although the non-electrogenic Bat Ray (Myliobatis californica) often turns up among White Shark (Carcharodon carcharias) stomach contents — the Pacific Torpedo has yet to meet such a toothy end.
Electric rays were used by the ancient Greeks as a kind of anesthetic, the electricity supposedly numbing the pain of operations and childbirth — in fact, the Greek word for these rays is narke, from which we get our word 'narcotic'.
The diagonal fraction bar in mathematics; also called virgule. This symbol was used to replace the horizontal fraction bar, sometimes referred to as a vinculum, due to the typographic difficulty imposed by the horizontal bar, which requires three terraces.
An early written example of this usage can be found in a 1718 ledger by Thomas Twining. In his article "The Calculations of Functions" from 1845 and published in the Encyclopaedia Metropolitana, De Morgan recommended the use of the solidus.
Also, an ancient Roman gold coin used until the fall of the Byzantine Empire and introduced by Constantine.
The story of optics goes far back into the past. The oldest instance of glass being used as a tool of magnification is over 4,000 years old. Glasses were first used as magnifiers and then as optical aids for people with poor vision. For the purpose of spectacles, several refinements had to be made. But magnifiers have been around for a long time, helping people by making finer details clearer. A combination of such lenses gave us the microscope and the telescope. Thus magnifying glasses are important in almost all walks of life.
For more than 40 years, scientists have tried to figure out what’s causing large parts of Canada, particularly the Hudson Bay region, to be “missing” gravity. In other words, gravity in the Hudson Bay area and surrounding regions is lower than it is in other parts of the world, a phenomenon first identified in the 1960s when the Earth’s global gravity fields were being charted.
Two theories have been proposed to account for this anomaly. But before we go over them, it’s important to first consider what creates gravity. At a basic level, gravity is proportional to mass. So when the mass of an area is somehow made smaller, gravity is made smaller. Gravity can vary on different parts of the Earth. Although we usually think of it as a ball, the Earth actually bulges at the Equator and gets flatter at the poles due to its rotation. The Earth’s mass is not spread out proportionally, and it can shift position over time. So scientists proposed two theories to explain how the mass of the Hudson Bay area had decreased and contributed to the area’s lower gravity.
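At that basic level the relationship is just Newton's law of gravitation; a small sketch, where the mass and radius values are standard textbook figures rather than numbers from the article:

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # Earth's mass, kg
R_EARTH = 6.371e6    # Earth's mean radius, m

def surface_gravity(mass_kg, radius_m=R_EARTH):
    # Newton: g = G * M / r^2, so less mass under a region means less pull
    return G * mass_kg / radius_m**2

g_normal = surface_gravity(M_EARTH)
g_deficit = surface_gravity(M_EARTH * 0.999)  # a region "missing" 0.1% of mass
```

The deficit case illustrates the Hudson Bay anomaly in miniature: shave off a sliver of mass and the measured gravity drops in proportion.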
Major Boundary Currents and Inter-basin Flows
How will the key science questions be addressed?
East Australian Current system -
- Will the EAC strengthen with climate change as predicted?
- Will the North Queensland Current weaken with climate change as predicted?
- Will the bifurcation point and/or dynamics alter in a warmer ocean?
The same data streams required to detect interannual variability in the EAC and bifurcation dynamics of the South Equatorial Current in the Coral Sea will inform models forecasting future changes in basin flows over multi-decadal time scales. Without additional observations in the Gulf of Papua, the hydrodynamic model being developed for Queensland will be the richest source of information contributing to global understanding of heat transport from the South Equatorial Current to the New Guinea Coastal Undercurrent via the Hiri Current. In the future, it is reasonable to expect even more effort from the international community to gate the north-eastern boundary of the Coral Sea with permanent observing infrastructure (e.g. the Southwest Pacific Climate Experiment, SPICE).
- What is the full-depth transport of the EAC leaving Queensland?
- What are the short and long-term changes in the heat content of the EAC leaving Queensland?
The above questions will mainly be answered by the Bluewater and Climate Node, but the three moorings across the South East Queensland shelf and slope at 28°S will make an important contribution to measuring the inner part of the EAC. The management of complementary infrastructure in deep water extending a ‘Brisbane Line’ across the full width of the poleward flow will be done by the Australian Bluewater Observing System. These measurements on velocity will be supported by the Brisbane-Fiji high density XBT line, synoptic maps of sea surface temperature from remote sensing, and a well resolved model nested inside Bluelink.
- Do changes in the EAC drive coastal currents in South East Queensland with impacts on sediment transport or beach erosion?
The Stradbroke National Reference Station and two slope moorings will provide key data sets for understanding coastal currents in South East Queensland, particularly when teamed with observations from the wave rider buoy network. A key target will be monitoring erosion and beach replenishment, which is a focus of active coastal engineering groups at the University of Queensland and Griffith University. Again these observations are likely to feed hydrodynamic models coupled with models for waves, bottom stress, and sediment transport.
In 1952 David Huffman invented an optimal method of data compression, discovering how to assign the optimal unambiguous encoding to the symbols in a given message. This article explains how data compression works and Huffman's method in detail, and also explains how the module is implemented. The module implements the Huffman data compression algorithm in Perl.
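The core of Huffman's method fits in a few lines, sketched here in Python for brevity even though the module itself is Perl: repeatedly merge the two least frequent subtrees, prefixing '0' to every code in one and '1' to every code in the other.

```python
import heapq
from collections import Counter

def huffman_codes(message):
    """Return {symbol: bit string} giving an optimal prefix code for message."""
    freq = Counter(message)
    if len(freq) == 1:                       # one distinct symbol: one-bit code
        return {symbol: "0" for symbol in freq}
    # Heap entries: (total frequency, tie-break id, {symbol: code so far}).
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)    # two least frequent subtrees
        f2, _, right = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in left.items()}
        merged.update({s: "1" + c for s, c in right.items()})
        heapq.heappush(heap, (f1 + f2, next_id, merged))
        next_id += 1
    return heap[0][2]
```

For "aaabbc" this yields a one-bit code for 'a' and two-bit codes for 'b' and 'c', the optimal assignment for those frequencies.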
There's a bug somewhere that I never bothered to track down. In some files, the last few bytes are garbled either in the compression or the decompression phase. I'm not sure which and I haven't found a short test file that fails. If you do find a short test case that exercises the bug, or you find the bug, please let me know.
17 December 2001: Eric Prestemon has found the bug. The problem was that the output was emitted in 8-bit chunks, and if there were leftover bits at the end, they were never written to the output file. Thanks to Jerrad Pierce for supplying a patch.
Blaze's answer comes closest, but is not totally clear:
conditional variables should only be used to signal a change in a condition.
Thread 1 checks a condition. If the condition is not met, it waits on the condition variable until the condition is met. Because the condition is checked first, it shouldn't care whether the condition variable was signaled.
Thread 2 changes the condition and signals the change via the condition variable. It doesn't care whether threads are waiting or not.
The bottom line is: the communication is done via some condition. A condition variable only wakes up waiting threads so they can check the condition.
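Putting the two roles together with POSIX threads might look like this. A minimal sketch: `ready` stands in for whatever condition the threads communicate about, and the names are illustrative.

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>

/* Shared state: the *condition* is "ready == true", guarded by a mutex. */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
static bool ready = false;
static int observed = 0;

/* Thread 1: check the condition first; wait only while it does not hold. */
static void *waiter(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);
    while (!ready)                  /* re-check the condition after every wakeup */
        pthread_cond_wait(&cond, &lock);
    observed = 1;                   /* the condition is guaranteed to hold here */
    pthread_mutex_unlock(&lock);
    return NULL;
}

/* Thread 2: change the condition, then signal the change. */
static void *setter(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);
    ready = true;                   /* change the condition first */
    pthread_cond_signal(&cond);     /* then wake a waiter so it can re-check */
    pthread_mutex_unlock(&lock);
    return NULL;
}
```

Because `waiter` tests `ready` in a loop under the mutex, spurious wakeups are harmless and no signal can be lost, regardless of which thread runs first.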
Examples for conditions:
- Queue is not empty, so a member can be taken from the queue
- A boolean flag is set, so the thread waits until the other thread signals that it's okay to continue
- Some bits in a bitset are set, so the waiting thread can handle the corresponding events
see also pthread example
John Scannella is a doctoral student at Montana State University and the Museum of the Rockies, located in Bozeman, Montana. His research centers on understanding the evolutionary history of Triceratops, a genus of dinosaurs known for a characteristic bony head frill and three horns.
In July 2010 Scannella and John “Jack” Horner, curator of paleontology at the Museum of the Rockies and regents professor at Montana State University, published a paper in the Journal of Vertebrate Paleontology revealing that Triceratops and “Torosaurus,” rather than representing two separate genera of dinosaurs, actually are different versions of the same dinosaur. With this groundbreaking report in mind, Britannica science editors Kara Rogers and John Rafferty asked Scannella about the discovery and what it means for scientists’ understanding of dinosaur diversity.
* * *
Britannica: As your recent paper in the Journal of Vertebrate Paleontology points out, Triceratops and Torosaurus have been considered distinct dinosaurs, classified in separate genera, for more than a century. What led you to believe that the two may instead be the same species?
Scannella: A few years ago, Jack Horner and Mark Goodwin published the first description of how the skull of Triceratops changed as it grew up from a baby to an adult. The changes these animals underwent were pretty extreme—the horns above the eyes change from curving backwards in small juvenile trikes to curving forwards in more mature individuals, and the epoccipitals (spikes bordering the frill at the back of the skull) start out very triangular and flatten with age.
Torosaurus latus is a horned dinosaur that is found in the same geological formations and same geographic area as Triceratops. It looks very similar to Triceratops, except it has a much larger bony frill at the back of its skull. The frill of Torosaurus has two holes in it, whereas in Triceratops the frill is solid. When I started studying Triceratops, I read a paper written by John Ostrom and Peter Wellnhofer in 1990 in which they briefly suggested that Torosaurus could conceivably be what a male Triceratops looked like. Given what was being learned about how dinosaur skulls changed shape as they grew up, I thought that the hypothesis of Torosaurus being a separate species was an idea worth exploring further. It turned out that Jack (who is my advisor at Montana State University and my co-author on this study) had been thinking along similar lines for some time and had even suggested in public lectures that Torosaurus and Triceratops might be the same animal.
Britannica: In what way are the two dinosaurs related, and what evidence did you uncover that supports this relationship?
Scannella: All the evidence we have suggests that Torosaurus was not a separate dinosaur, but, instead, was simply what a full-grown Triceratops looked like. We’ve compared numerous skulls of Triceratops and Torosaurus and we’ve found intermediate morphologies between what is typically found in Triceratops and what is typically found in Torosaurus. The frill of Triceratops developed thin areas in the same regions where Torosaurus specimens have holes. As Triceratops grew up, the frill became longer, wider, and thinner and eventually formed the characteristic holes found in Torosaurus. We’ve also examined the bones of many Triceratops and Torosaurus under a microscope, and the tissues of Torosaurus specimens are more mature than those in even the largest Triceratops, which is pretty strong evidence that Torosaurus was a mature Triceratops.
Britannica: How much larger do you think Torosaurus was relative to Triceratops and is there a possibility that, rather than different growth stages, the two may represent different sexes of the same species?
Scannella: Some “Torosaurus” skulls are huge. They’re the size of my car. However, they are not all gigantic. But even “Torosaurus” skulls that are smaller than large Triceratops skulls have the indications of ontogenetic (developmental) maturity, which is very interesting, as it tells us something about variation in dinosaurs. Could this be sexual variation? It’s possible. It is not impossible that “Torosaurus” is what male Triceratops looked like, but our histological evidence so far suggests that “Torosaurus” specimens were more mature than even the biggest Triceratops skulls.
Britannica: Many Triceratops specimens are known, but Torosaurus specimens are comparatively rare. How many specimens of the two did you compare, and where were they discovered?
Scannella: I have personally examined well over a hundred Triceratops, easily. That’s one of the great things about Triceratops: there are a lot of them to study. The Museum of the Rockies alone has collected over 70 specimens from the Hell Creek Formation in Montana over the last 11 summers. “Torosaurus latus” is much rarer—it is known from fewer than a dozen specimens, and I’ve examined virtually every one that is available for study in the United States.
Britannica: How could your research and consideration of ontogeny (the development of an organism) impact current understanding of dinosaur diversity?
Scannella: If “Torosaurus” is Triceratops, then that decreases our perceived dinosaur diversity: where there once were two species there is now one. We are now realizing that it is likely that many of the dinosaur species that have been named in the past might just be growth stages of other dinosaurs. I think that, as more dinosaurs are re-examined with a consideration of ontogenetic change, dinosaur diversity as we’ve perceived it will continue to decline. Recognizing ontogenetic change as a major source of variation between specimens will give us a clearer view of dinosaur paleoecology.
Photo credits: Courtesy of John Scannella; Courtesy, Library Services Department, American Museum of Natural History, New York City, photograph, E.M. Fulda.
Sometimes you'll want to work with more than simply hypothetical future
positions in order to determine whether a collision occurred. This happens
when your objects are zipping along, several units of length at a time, and
might pass clean through something without ever ending a frame inside it. Consider
the following picture:
Suppose your object is as small as the circle in the picture and moves 10
units each frame. Eventually you might realize that the object is
now on the other side of the wall. What you want is to predict, while the
object is in position 1, whether the object will go through the wall if it's
moved 10 units, and if so, where exactly the object will reach the wall.
Instead of obtaining a solution to this particular problem (with the circle
and the wall), I will present some general methods for solving these kinds
of problems. This is where mathematical thinking begins to really help.
Think of the prediction problem as a problem in time. Two objects are moving
relative to one another. Their positions are functions of time. You want
to know exactly when something happens -- for example, when they collide. If
you find the time at which it happens, you can find their positions and anything
else that depends on time.
What you want is to define some sort of relationship between the two objects
that changes as a function of time. Here's what this kind of prediction problem
would involve. We have:
- Positions P1 and P2 of the two objects as functions of time. (This can be in 1, 2, 3, etc. dimensions.)
- The relationship R between positions P1 and P2. Since both P1 and P2 are functions of time, R(P1(t), P2(t)) can just depend on time.
- Some value n that you want the relationship to be.
If you're a bit confused by the generality of these functions, it's okay.
Basically it's like this:
The position functions can be anything you want. If an object is moving in
a straight line, then you have a linear equation. If you have acceleration,
such as gravity, then you might have a quadratic or cubic equation, etc.
Now you want to express the relationship you want in terms of the positions
of the objects, and the value you want it to be. If you want two circles
to collide, then R(P1, P2) would be the distance between
P1 and P2, squared, and n would be the sum of the radii,
squared. Now that you have expressed the collision conditions in terms
of the object positions, and the object positions in terms of t, then you
can express the entire collision scheme (the conditions) in terms of t. What
you get is an equation that corresponds to: R(t) = n
Once you have that, you can usually solve for t, because R(t) is usually
a polynomial. Thus you get the time at which the collision occurs. It doesn't
even have to be collision, it can be some other event. Up until now you do
everything on paper. What you finally put into your program is one, optimized
line: the equation for t. It looks something like:
t = . . .
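As a concrete illustration of that one-line idea (the function name here is my own, not from the original): if R(t) works out to a quadratic At^2 + Bt + C, the times at which R(t) = n are just the real roots of At^2 + Bt + (C - n) = 0.

```python
import math

def solve_r_equals_n(A, B, C, n):
    """Real roots t of A*t^2 + B*t + C = n, sorted; [] if there are none."""
    a, b, c = A, B, C - n
    if a == 0:                       # R(t) is linear in t
        return [] if b == 0 else [-c / b]
    disc = b * b - 4 * a * c
    if disc < 0:
        return []                    # R(t) never reaches n: no event
    s = math.sqrt(disc)
    return sorted([(-b - s) / (2 * a), (-b + s) / (2 * a)])
```

For instance, R(t) = 2t^2 - 4t + 3 reaches n = 3 at t = 0 and t = 2; any root that falls outside the interval of interest (say, the one-second frame 0 to 1) is then discarded.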
Still confused? Let's take a look at some examples.
We have two circles, with centers (x1,y1),
(x2,y2) and radii r1, r2. The radii remain constant,
but the first circle moves (2, 7) in one second, and the other moves (-1,
2) in one second. The question is, will they collide in that second?
From this we have:
P1(t) = (x1 + 2t, y1 + 7t)
P2(t) = (x2 - t, y2 + 2t)
R(t) = [(x1 + 2t) - (x2 - t)]^2 + [(y1 + 7t) - (y2 + 2t)]^2
     = (x1 - x2 + 3t)^2 + (y1 - y2 + 5t)^2
     = (x1 - x2)^2 + (y1 - y2)^2 + 3t(2[x1 - x2] + 3t) + 5t(2[y1 - y2] + 5t)
     = [3^2 + 5^2]t^2 + 2[3(x1 - x2) + 5(y1 - y2)]t + [(x1 - x2)^2 + (y1 - y2)^2]
This is a quadratic equation in t. We don't even have to solve it for t,
because the original question was whether the two circles will collide.
So we find the minimum of the function R(t). To do that, we take the derivative
of R(t). We get:
R'(t) = 2[3^2 + 5^2]t + 2[3(x1 - x2) + 5(y1 - y2)]
And we solve for t1 when R'(t1) = 0:
t1 = -[3(x1 - x2) + 5(y1 - y2)] / [3^2 + 5^2]
It is at this point in time that the distance is least between the two centers
of the circles. You can visualize this: if the circles start out moving towards
each other, for example, over time the distance will decrease and then increase
steadily (Figure 7).
We're almost done. First, we see if the distance between the circle centers
(squared) at the time t1 obtained above is less than the sum of the radii (squared).
In other words, we check: R(t1) < (r1 + r2)^2
If the distance between the centers is smaller than the sum of the radii at
that t1, the circles collide at that time. Finally, we have to check
whether 0 < t1 < 1. If the question had asked, "will the circles ever collide
if they continue moving this way forever?" we would be done. But the question
asked whether the circles would "collide in that second." Checking whether
t1 is between 0 and 1 is easy. So, if we've gotten this far, the circles
do collide within that second.
Actually, if we analyze the method we used to get the answer, we can get
a general solution to the problem of two moving circles. If circle 1 moves
(a1, b1) units per second, and circle 2 moves
(a2, b2) units per second, then the minimum-distance
time will be:
(1) t1 = -[(a1 - a2)(x1 - x2) + (b1 - b2)(y1 - y2)] / [(a1 - a2)^2 + (b1 - b2)^2]
In our example, a1 - a2 = 3 and b1 -
b2 = 5.
In fact, we see that we don't need to know the movements of the circles in
absolute coordinates. We only need to know the movement of the circles
relative to each other. If we take the center of circle 1 to be the
origin, with (x, y) the position and (a, b) the per-second movement of
circle 2 relative to circle 1, then the equation for t1 becomes:
(2) t1 = -[ax + by] / [a^2 + b^2]
Compare this to equation (1).
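The whole Example 1 recipe can be sketched in code (a sketch with my own function name, not from the original), using the relative-coordinate form of equation (2): compute the time of closest approach, clamp it into the one-second window, and compare the squared separation against the squared sum of radii.

```python
def circles_collide(p1, v1, r1, p2, v2, r2):
    """Predict whether two uniformly moving circles touch during 0 <= t <= 1.

    p1, p2: centers at t = 0; v1, v2: displacement per second.
    Works in relative coordinates, as in equation (2) of the text.
    """
    # Relative position and velocity (circle 1 taken as the origin).
    x, y = p2[0] - p1[0], p2[1] - p1[1]
    a, b = v2[0] - v1[0], v2[1] - v1[1]
    if a == 0 and b == 0:                      # no relative motion at all
        return x * x + y * y <= (r1 + r2) ** 2
    t = -(a * x + b * y) / (a * a + b * b)     # time of closest approach
    t = max(0.0, min(1.0, t))                  # clamp into the second we care about
    dx, dy = x + a * t, y + b * t              # relative separation at that time
    return dx * dx + dy * dy <= (r1 + r2) ** 2
```

Clamping t into [0, 1] replaces the separate "is t between 0 and 1" check: the distance is monotone on either side of the minimum, so the nearest in-window time is the right one to test.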
We have a circle and a line segment going through it. The circle is centered
at (a, b) with radius r. The line segment goes from (x1,
y1) to (x2, y2). Does the line segment intersect the circle?
Again, we can consider this a time problem. Let's say the line segment
is really a point traveling through time. We have to get the time where the
point is closest to the center of the circle, and see whether at that time
the distance from the point to the center (squared) is less than the radius
of the circle (squared).
P1(t) = (a, b)
P2(t) = ( x1 + [x2 - x1]t,
y1 + [y2 - y1]t ) -- where t goes
from 0 to 1
In fact, let's use the work we've already done in Example 1. Let's consider
the point to be a circle with radius 0. In that case we can make use of
equation (1):
t1 = -[(0 - [x2 - x1])(a - x1) + (0 - [y2 - y1])(b - y1)] / [(0 - [x2 - x1])^2 + (0 - [y2 - y1])^2]
   = [(x2 - x1)(a - x1) + (y2 - y1)(b - y1)] / [(x2 - x1)^2 + (y2 - y1)^2]
That's it. Solve for t1, and see if it's between 0 and 1. If it
is, plug it into R(t) [which is the distance between P1(t) and
P2(t), squared] and see if it's less than the radius squared.
By the way, if t1 isn't between 0 and 1, that means the infinite line would
intersect the circle, but not necessarily the line segment, which runs from
(x1, y1) to (x2, y2).
Also, if positions were measured relative to the center of the circle, everything
would be shorter to write. Say the circle is centered at (0, 0), has radius
r, and the line segment runs from (x1, y1) to (x2, y2). Then:
t1 = -[(x2 - x1)x1 + (y2 - y1)y1] / [(x2 - x1)^2 + (y2 - y1)^2]
Note: in this particular case there are easier ways of solving the problem.
To find the minimum distance from a line AB to a point C, one could take
any point D on the line and project CD onto the direction perpendicular to
AB. The length of the resulting vector is the minimum distance.
This example is just to illustrate the general method of predicting collisions.
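Here is how Example 2 might look in code (a sketch; the naming is mine). As before, clamping t into [0, 1] plays the role of the "is t between 0 and 1" check, and it also handles the case where a segment endpoint is the closest point to the circle.

```python
def segment_hits_circle(x1, y1, x2, y2, a, b, r):
    """Does the segment (x1,y1)-(x2,y2) intersect the circle at (a,b) of radius r?"""
    dx, dy = x2 - x1, y2 - y1
    denom = dx * dx + dy * dy
    if denom == 0:                              # degenerate segment: just a point
        return (x1 - a) ** 2 + (y1 - b) ** 2 <= r * r
    # Time of closest approach of the moving point, per the formula in the text.
    t = (dx * (a - x1) + dy * (b - y1)) / denom
    t = max(0.0, min(1.0, t))                   # clamp: an endpoint may be closest
    cx, cy = x1 + dx * t, y1 + dy * t           # closest point on the segment
    return (cx - a) ** 2 + (cy - b) ** 2 <= r * r
```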
One more example. This time with bounding boxes. We have a dot that travels
from (x1, y1) to (x2, y2) in
one second. We have a box with its center at (a, b), width w and height h.
Does the dot go through the box? If so, when is it inside the box?
This time we'll set things up a bit differently.
So, what you do really depends on what you're looking for. If you want
to find out when the objects begin to collide, then you solve R(t)
= n for t. (You might get more than one t depending on whether R(t) is linear,
quadratic, etc.) If you just want to find out whether the objects
will collide in the future, you can find out when R(t) is at its minimum
by solving for t in R'(t) = 0 and only check that 'minimum' case. The derivative
of R(t) is simpler than R(t); for example, if R(t) is quadratic in t then
R'(t) is linear.
P1(t) = (a, b)
P2(t) = ( x1 + [x2 - x1]t,
y1 + [y2 - y1]t ) -- where t goes
from 0 to 1
We'll have two relationship functions: the horizontal distance from
the center of the box and the vertical distance.
R1(t) = abs( x1 + [x2 -
x1]t - a ) -- the horizontal distance
R2(t) = abs( y1 + [y2 -
y1]t - b ) -- the vertical distance
This time we take no derivatives. Given the question, we need to find when
the dot enters the box and when it leaves. So we solve for t in the following equations:
R1(t) = w/2
R2(t) = h/2
For the first equation we get two values for t. This is because of the absolute
value operation in R1(t).
t1 = (w/2 - x1 + a) / (x2 - x1)
t2 = -(w/2 + x1 - a) / (x2 - x1)
This is expected. As the dot travels along the line, it will "enter" and
"leave" the horizontal interval of the box. And of course, the path
of the dot cannot be vertical, since then the point would make no horizontal
progress. This is demonstrated by the fact that, on a vertical line, (x2
- x1)=0 and dividing by zero is not allowed.
Similarly, for vertical distance,
t3 = (h/2 - y1 + b) / (y2 - y1)
t4 = -(h/2 + y1 - b) / (y2 - y1)
So we have found the times when the dot "enters" and "leaves" the vertical
interval of the box. (The path of the dot cannot be horizontal, for the same
reasons as stated above.)
We should, as before, check that each t is between 0 and 1. All that remains
after that is to see whether the time intervals (t1,
t2) and (t3, t4) overlap. This can be done
using everyone's favorite intersection lemma:
If t1 < t4 and t3 < t2 (taking each pair in increasing order, so that
t1 < t2 and t3 < t4), then the intervals overlap and the dot enters the box
for a period of time on its way from (x1, y1) to (x2, y2).
Finally, we can get the start and end of this time period. To do that, we
sort t1, t2, t3 and t4 in order
from earliest to latest (or vice versa), and the middle two are the beginning
and end of the time period the dot is inside the box. | <urn:uuid:120dd438-2662-4c3f-9df4-e4e8c1fe0e0b> | 3.796875 | 3,091 | Tutorial | Science & Tech. | 80.943703 |
Figure 4: Optical design used to monitor the biological viability of cells using the nanoporous polymer film. On the right are representative spectra corresponding to each process. (a) To measure the transmitted light from the sample, illumination (light source) and observation (detector) are aligned on the same optical path, and the corresponding spectrum shows a broad shape that is not suitable for biological observation. (b) To measure the diffracted light, the observation (detector) is placed in the diffracted light path, off the sample normal, leading to a sharp spectrum signal. (c) Seeding cells on the sample surface introduces a scattering effect: the cells scatter some light out of the diffracted light path, and the intensity of the spectrum decreases. (d) Cell death causes the cells to detach from the sample surface, and the intensity of the spectrum increases.
February is Black History Month, and to mark the occasion, we recently sat down with John Johnson, scientist at NASA's Exoplanet Science Institute at the California Institute of Technology in Pasadena. Johnson discussed his research and recent discoveries, and the path that led him to the work he's doing today.
Q: Can you tell us about where you grew up?
A: I grew up in St. Louis, Missouri, and I went to college at the University of Missouri at Rolla. It is now known as the Missouri University of Science and Technology. It is a small engineering and science school like Caltech. There I studied physics and earned a bachelor's degree in physics, and I applied to graduate school only in California because it was my goal to only live in California, having lived in Missouri all my life. I was accepted to the astronomy school at U.C. Berkeley. There I studied astrophysics and earned my master's degree in astronomy and eventually earned my Ph.D. in astrophysics, studying with Professor Geoff Marcy searching for planets around other stars.
Q: What is your job at NASA?
A: My job is to conduct research into the discovery and characteristics of planets around other stars. I use telescopes to observe the universe and nearby stars, looking for planetary systems.
Q: Tell us about your recent discovery?
A: Our recent discovery is very exciting, of a compact system of very small planets around a small red dwarf star. The discovery was originally made by the NASA Kepler team. They made their data public, and we were able to refine and revise the properties of the star in the planetary system. We found it was much smaller overall than we thought. It is a compact system of Earth-sized planets around a very faint red dwarf star.
Q: You found three planets?
A: We found three planets orbiting one red dwarf star. All three planets are smaller than the Earth. The smallest of the three is the size of Mars, making these the smallest planets found around another star. This is actually a solar system in that the central object is a hydrogen-burning star like our sun. But that star is not a G-type star like our sun; it is a smaller, cooler star, often referred to as a red dwarf.
Q: Have we found something like Earth?
A: In one respect we have found something like Earth. The planets are so small that the only conceivable concept of these worlds is they are rocky like our Earth. So they very much resemble Mars, Earth, and Venus much more than a giant gas planet. However, because they orbit close to their star, they are too hot to be in the habitable zone, which is the region where liquid water and life could exist.
Q: How does it feel to discover a solar system that is sort of like ours?
A: It is absolutely thrilling. When I started studying planets in graduate school, we were studying Jupiter-size planets. To me that was just amazing, seeing Jupiter-sized planets in unusual orbits around other stars. I couldn't believe it. It was an exciting time to be a researcher. It is a testament to how quickly the field is advancing. It is happening because we have dedicated instrumentation that is targeting nearby stars, looking for tiny planets. When you put resources into this one purpose, you achieve your goals. It is exciting to see that unfold in front of my eyes.
Q: What made you more confident to pursue stuff no one else has done?
A: I lacked a lot of confidence when I started studying astronomy. It was the most challenging thing I have ever done in my life. I had an easy time in high school. In college, I found my way and figured out how to get my A's. When I got to graduate school, it was the first time that I encountered questions no one on Earth knew the answer to. There was no answer in the back of the book. There was not a right way to answer that question. I was looking at a difficult problem and trying to devise the method for reaching the answer. Because I was faced with the challenge and I could not do it overnight, it actually decreased my confidence initially. But fortunately I had a strong support network. I eventually accepted the fact it is okay to be stuck. To be a good scientist, you spend a great amount of time being stuck. Because that means you are doing something interesting. That means you are at the cutting edge. When I finally reached the answer, the pride restored my confidence, meaning I can actually do this. So I have been riding off that moment for the past 10 years as an astronomer. What helps me wake up in the morning and keeps me going through my very busy days are discoveries like these, where there was no answer in the back of the book.
Q: What is your advice to young people?
A: A lot of young people believe the only way to be successful in this world is to go through school and get those grades, to go out and get a high-paying job and do your 9 to 5. There are people who are not satisfied with that. If you are not satisfied with that, then science is where to go. This is a field that is flexible enough where you can define your job. You can go out and find out how to investigate the universe. A lot of people don't consider astronomy, chemistry or astrobiology a career. This is a great job to understand the universe. I can't think of a better way to earn a paycheck. If there were a young person who came to me because he or she were really curious about the universe but feeling pressured to go out and earn a big paycheck, he or she should consider other paths. Academia and science are really fun places to work.
Relations of G. bulloides and G. glutinata
Only a few samples exist in the CLIMAP Holocene data in which both species do not co-occur (Fig. 40). Their relative abundances show some inverse correlation. Globigerina bulloides is more abundant in central upwelling zones and areas of high productivity, while G. glutinata is more frequent at their margins and in central ocean areas. This is well expressed in the biogeographic maps of Bé and Hutson (1977) in the area of upwelling in the Arabian Sea offshore from Somalia. The central area is occupied by abundant G. bulloides, while a belt with abundant G. glutinata exists in the marginal upwelling zone (see also Brock et al., 1992). Globigerina bulloides feeds on algal prey (Lee et al., 1966), while G. glutinata has more specific preferences for diatoms (Hemleben et al., 1989). Such different feeding strategies may explain why both species are related to productive environments but tend to occupy different zones, probably related to the phytoplankton bloom succession (dinoflagellates - diatoms).
Relations of G. calida and G. siphonifera
It is difficult to argue about possible taxonomic uncertainties in the counts of G. calida and G. siphonifera and subsequent problems in the interpretation of their relations with the physical environment. CLIMAP micropaleontologists made serious efforts at quality control of their micropaleontologic data and at taxonomic standardisation among the different members of the group. Other species that are difficult to distinguish in their morphology (e.g. G. falconensis and G. bulloides) have distinctly different adaptations, which suggests that the similarities in the ecologic pattern between G. calida and G. siphonifera are real. This may suggest including both species in one taxonomic category, and it calls for further taxonomic research.
Relations of G. rubescens and G. tenella
Globoturborotalita tenella is distinguished from the generally pink-colored G. rubescens by a secondary aperture on the last chamber. Pre-adult stages of G. rubescens and G. tenella are difficult to distinguish in their morphologies and taxonomic discrimination is made more difficult by the existence of a white form of G. rubescens in bottom sediments of temperate regions (Hemleben et al., 1989). Morphologic similarities and the nearly equal relations with the physical environment seen in G. rubescens and G. tenella may suggest ecophenotypes rather than different species. In other species variants are consistently more differentiated in their preferences compared to G. rubescens and G. tenella. Both species require taxonomic and ecologic research.
Relations of G. sacculifer and S. dehiscens
Bé (1965) considered S. dehiscens as a deep-water form of G. sacculifer in a terminal (reproductive) stage. In the laboratory, however, Globigerinoides sacculifer was observed during gamete release and did not develop the "S. dehiscens" form (Hemleben et al., 1987). Other authors emphasize morphological differences in juvenile stages of the two species (Hemleben et al., 1989). The pattern in the plots of relative abundances vs. physical parameters, however, is very similar for G. sacculifer and S. dehiscens. Both species differ drastically in their relative abundances, and comparisons of their relations with the physical environment are difficult. The correlation coefficients of their relative abundances computed with various regression methods are all well below 0.1. This, however, may be caused by the low relative abundance of S. dehiscens (< 5 %), which causes statistical uncertainty due to counting error in the data. Potentially, S. dehiscens may occupy a deep-water habitat with a biogeographic distribution similar to that of G. sacculifer. The possible existence of vertical clines in phenotypes, in contrast to the commonly observed geographic clines in other species, motivates more research on relations of the two species.
G. crassaformis and G. truncatulinoides
The origin of G truncatulinoides as a species, about 2.8 - 2.9 My ago, was analysed in a morphometric study by Lazarus et al. (1995). They suggested a sympatric mode of evolution, in which the differentiation and "geographic isolation" of ancestor (G. crassaformis) and descendant species (G. truncatulinoides) occurs through the occupation of different niches (e.g. depth habitats, seasonally different cycles, etc.) in the same biogeographic region (see discussion by Lazarus et al., 1995). The substantially different specialisations of both species seen in the relations with the physical environment (Figs. 19 and 24) support this view.
Table 1 lists those species which dominate at least one of the 461 samples used in this study. Only six species, however, can be considered as dominant species on a biogeographic scale: N. pachyderma, G. inflata, G. bulloides, G. ruber, G. glutinata, and G. menardii. Broad relations with sea surface temperatures in distinct biogeographic provinces exist for N. pachyderma in the polar and subpolar provinces, G. inflata in the transitional province, G. ruber in the subtropical and tropical province, and G. menardii in the warm tropical province. The latter species is not commonly a dominant species and may reflect selective dissolution (Kipp, 1976). Globigerina bulloides and G. glutinata dominate in productive high latitude environments and areas of upwelling. The biogeographic relations suggest different preferences of the two species for the central and marginal oceanographic and biologic conditions in such areas.
importance of the vertical water structure
Some species show their most pronounced relations with the vertical temperature or density gradients, e.g. G. truncatulinoides, G. hirsuta, and T. quinqueloba, among others. On a biogeographic scale, the boundary between water masses with vertical temperature gradients of more or less than 6 °C in summer seems to be the major limit between high- and low-latitude faunas in planktic foraminifera. This is well seen in the ecologic ranges of e.g. T. quinqueloba (Fig. 39) and N. pachyderma (Fig. 33), which have their southern limits near this boundary, and G. ruber (Fig. 15b), G. menardii (Fig. 22), and P. obliquiloculata (Fig. 36), which have their northern limits at this boundary. The limit corresponds to about 40° latitude in the North Atlantic and about 30° latitude in the Indian Ocean (Fig. 4). Other physical parameters do not show this clear separation between the ecologic ranges of low-latitude and high-latitude faunas.
- Explore the general resources to get an overview of global warming.
- Use the resources listed for your episode to research your interview and video.
- There are also some resources about producing television interviews that you may find useful.
Global warming – latest evidence
Garnaut report by GetUp!
Measure your carbon footprint
Student zone of BTN
Resources for each episode
Episode 1: Weather patterns.
Increased intensity of storms and cyclones.
Effects of Global Warming
Episode 2. Effects on the oceans.
Sea levels are gradually rising
Polar ice caps melting
Small Islands’ SOS
acidification of the oceans
food chains of the oceans
Episode 3. The causes of global warming
Episode 4. Carbon emissions trading scheme.
Carbon emissions trading scheme
Australia’s Greenhouse gas inventory
Breaching emission limits
Australian Government’s rebate schemes
Find your local representative to the Federal Parliament
Episode 5. Alternatives
Advantages and disadvantages of a range of alternative energy solutions
Australia’s solar cities
Producing television interviews
Tips from BTN
- Students & Postdocs
- Education & Outreach
- About JILA
Diary of a Binge Eater
Fellow Mitch Begelman and his colleagues came up with the idea of quasistars to explain the origin of the supermassive black holes found at the center of most galaxies. According to Begelman, quasistars formed when massive amounts of gas were funneled into the center of protogalaxies. This prodigious amount of gas collapsed directly into black holes without forming stars. The resulting black holes grew rapidly by sucking in matter from the great envelopes of gas still surrounding them. This process released enough energy to puff up quasistars, which then radiated light (like stars). The quasistars evaporated after about a million years. However, in that amount of time, the “seed” black holes inside of them could only have acquired masses of about a hundred thousand Suns.
However, the black holes at the center of most galaxies today have masses of millions to billions of Suns. Something must have happened after the quasistars disappeared to cause the seed black holes to grow 10 to 100,000 times bigger.
Begelman recently explored a couple of intriguing ideas about what could have caused the black holes to rapidly increase in size. First, he wondered if conditions in nascent galaxies could recreate quasistars, which efficiently grew black holes while they lasted. However, his analysis showed that it would be impossible to come up with the huge reservoir of gas needed to recreate a quasistar.
Begelman next explored brief episodes of “binge eating” by the seed black holes left behind by the quasistars. He discovered that if the seed black holes were not too massive, it would be possible to force-feed them from a relatively small envelope of gas that was regularly replenished when more gas fell into the galactic center. During a period of force-feeding, the black hole could grow very rapidly as the gravity of the entire galaxy forced matter into it.
Before long, however, energy (in the form of jets) would spew out of the black hole and blow away the envelope. With smaller black holes, most of this gas would be trapped by the gravity of the galaxy. The trapped gas would eventually fall back in toward the black hole and cause the envelope to reform. This process would initiate another period of binge eating.
As the black hole got larger, however, it would fling more and more gas away fast enough to escape the galaxy. Even so, as long as some of the far-flung gas fell back in, episodes of force-feeding/binge eating continued. However, once the black hole grew large enough to fling the entire envelope out of the galaxy, it stopped growing.
The day came when the supermassive black hole literally threw away its next meal. Around the same time, any gas remaining inside the galaxy was forming millions, perhaps even billions of stars. As more stars appeared, it became even less likely that sufficient gas would ever reach the center of the galaxy to feed the now truly monstrous black hole.
What’s intriguing about the force-feeding model is that it may help explain an observed correlation between the size of a central supermassive black hole and the size of the galaxy surrounding it. The mass of a central black hole is about 0.1% of the mass of the galactic core. It is also proportional to the fourth power of the speed of the core stars in the central bulge. The feedback mechanism implicit in the force-feeding model may explain why the appearance of billions of stars in the galactic core correlates with the cessation of black hole growth. — Julie Phillips
Reference: Mitchell C. Begelman, The Astrophysical Journal 749:L3 (2012).
Let S be a set with n elements. Show:
If a_n equals the number of subsets of S, then a_(n+1) = 2·a_n.
Use this to prove by induction that a_n = 2^n.
If S contains n + 1 members, choose one of them and call it "a". Removing it from S leaves a set T that has n members and therefore a(n) subsets.
Now note that every subset of S either contains "a" or it doesn't. If it doesn't, it is one of the a(n) subsets of T. If it does, then it is one of the subsets of T with "a" added - there are again a(n) such subsets. Together there are a(n) + a(n) = 2a(n) subsets of S.
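The doubling argument is easy to sanity-check by brute force (a throwaway script, not part of the proof): enumerate all subsets of {0, ..., n} and split them by whether they contain the chosen element a = n.

```python
from itertools import combinations

def all_subsets(elems):
    """Every subset of elems, as frozensets."""
    elems = list(elems)
    return [frozenset(c) for r in range(len(elems) + 1)
            for c in combinations(elems, r)]

n = 4
a = n                                         # the distinguished element "a"
subs = all_subsets(range(n + 1))              # subsets of an (n+1)-element set S
with_a = [s for s in subs if a in s]          # subsets containing a
without_a = [s for s in subs if a not in s]   # these are exactly the subsets of T
```

Here len(with_a) == len(without_a) == 2**n and len(subs) == 2**(n+1), matching a(n+1) = 2·a(n) and a(n) = 2^n.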
Except for a few trifling practical details, we could easily get around without the consumption of hydrocarbons and the production of greenhouse gases, simply making use of the force of gravity that is constant and inexhaustible. For example, suppose we must journey to a location 30 miles away and at the same elevation. We keep our vehicle at the top of a ramp 100 ft. high. It only has to be hoisted there once. When we wish to travel, we shove off and when we reach the foot of the ramp we are going at about 55 mph. The speed attained is v = √(2gh), where g is the acceleration of gravity (32 ft/s² or 980 cm/s²). Our 30-mile trip will require about 33 minutes, quite comparable to the capability of an ordinary motorcar, but now with effortless quietness and unsurpassable economy. At the destination, we slow the vehicle by letting it climb a ramp, and it stops when it reaches an elevation of 100 ft. To return, we simply turn the vehicle around and push off. The only problem is friction, but many proposals we hear today for energy and other purposes demand the solution of no less a problem for their realizations.
The theoretical basis of this proposal is the principle of Conservation of Energy. When we wish to travel, we convert some potential energy into kinetic energy, which is associated with a speed v. We move by the agency of the kinetic energy without using up any of it, since no force acts on the vehicle. Finally, we convert the kinetic energy we have enjoyed back into potential energy. Gravity is a reliable source of potential energy, E = mgh, where h is the distance the mass m has been raised. We could just as well use a spring, or compressed air in a cylinder, but gravity is a good choice.
A quicker journey can be obtained by continuing the ramp to a depth h halfway to the destination, and then climbing a similar ramp to the terminus. Speed increases steadily as the vehicle descends, again as √(2gh), and decreases similarly on the ascending half. However, the length of the journey increases as we go deeper. The journey time is T = 4√(D²/4 + h²)/√(2gh). As h→0, T increases to infinity, and as h→∞, T also increases without limit. There must be some minimum time for a certain h, which we can find by setting dT/dh = 0. Solving this equation, we find h = D/2 and T = 2√(2D/g). For our 30-mile journey, we find T ≈ 199 s, or about 3.3 minutes. The average speed is about 540 mph, which is certainly rapid transit.
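The figures in this passage can be checked numerically (a quick script with g = 32 ft/s² and D = 30 miles; the variable names are mine). It reproduces the 55 mph / 33 minute single-ramp trip, scans for the depth that minimizes the two-ramp time T(h), and evaluates the cycloid time √(2πD/g) for the trip described next.

```python
import math

G = 32.0                    # ft/s^2
D = 30 * 5280.0             # 30 miles in feet

# Single 100-ft ramp: coasting speed and trip time (first paragraph).
h0 = 100.0
v = math.sqrt(2 * G * h0)               # 80 ft/s, about 55 mph
trip_minutes = (D / v) / 60             # about 33 minutes

# Two-ramp profile: T(h) = 4*sqrt(D^2/4 + h^2) / sqrt(2*g*h).
def T(h):
    return 4 * math.sqrt(D * D / 4 + h * h) / math.sqrt(2 * G * h)

# Crude grid scan for the minimizing depth (calculus gives h = D/2 exactly).
best_h = min((T(h), h) for h in range(100, 200000, 100))[1]

# Full cycloid arch between two points at the same level: T = sqrt(2*pi*D/g),
# about 176 s, i.e. the 2.9 minutes the "company" claims.
T_cycloid = math.sqrt(2 * math.pi * D / G)
```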
The Brachistochrone Transit Company claims to be able to beat this time, making the trip in 2.9 minutes, at an average speed of 614 mph. It does this by means of a cunningly shaped ramp, developed for it by Indian engineers. The shape of the ramp is a company secret, but let's see if we can find out what it is.
We take the x-axis as horizontal, the y-axis as vertically downward in the direction of gravity, and the origin as the starting point. The shape of the ramp is given by the curve y = y(x), where y(0) = 0 and y(X) = Y, so that the ramp passes through the point (X,Y). The speed is v = ds/dt = √(2gy). The arc length of the curve is s, and ds = √(1 + y'²) dx, where y' = dy/dx. The time required to cover ds is ds/v, or √(1 + y'²)dx/√(2gy). Therefore, the time required to move from 0 to x is T = (1/√(2g))∫(0,x)[√(1 + y'²)/√y]dx. We must find the y(x) that makes this time a minimum.
This celebrated problem was proposed by Johann Bernoulli (1667-1748) in 1696, at the very beginning of the rise of analysis and calculus. It was solved by Newton, Leibniz, and Bernoulli. The surprising result was that the curve was a cycloid, the curve traced out by a point on the circumference of a rolling wheel. J. L. de Lagrange (1736-1813) later (1788) showed a general method for attacking such problems, and we follow his analysis. Leonhard Euler (1707-1783), Bernoulli's student, also made valuable contributions.
Let us consider first the problem of finding extreme values (maxima and minima) of the integral I = ∫(0,1)F(x,y,y')dx. Here, y(x) is an unknown function. In minimizing a function f(x), we find a single value of x by setting df/dx = 0. Here we have a much more difficult problem, since we must find an infinity of values y(x). Nevertheless, we can reduce the problem to a minimization with respect to a single value by an ingenious artifice. Suppose y(x) is the function we are seeking. Let g(x) be another continuous function such that g(0) = g(1) = 0. Then, y(x) + εg(x) is a neighboring function that approaches y(x) when ε→0. If this is placed in the integral, then I is a function of ε, and for an extremum dI/dε = 0.
Using this varied function causes I to vary by δI = ∫(0,1)[(∂F/∂y)εg(x) + (∂F/∂y')εg'(x)]dx. We can now get rid of g'(x) by integrating the second term by parts. This gives [(∂F/∂y')g(x)](0,1) - ∫(0,1)[(d/dx)(∂F/∂y')]g(x)dx, and the integrated term vanishes because g(0) = g(1) = 0. Now, δI = ∫(0,1)[(∂F/∂y) - (d/dx)(∂F/∂y')]εg(x)dx. Since εg(x) can be chosen at will, the expression in square brackets must be zero. Therefore, the integral is extremal if ∂F/∂y - (d/dx)(∂F/∂y') = 0. This is a second-order differential equation for the unknown function y(x), called the Euler-Lagrange equation. It is, in general, difficult to solve.
A very useful special case is the one in which F(x,y,y') is not an explicit function of x, as in the brachistochrone problem, where F(y,y') = √(1 + y'²)/√y. In this case, we can show that an integral of the equation is the expression E = F(y,y') - y'(∂F/∂y') = c. In fact, forming dE/dx gives dE/dx = y'(∂F/∂y) + y''(∂F/∂y') - y''(∂F/∂y') - y'(d/dx)(∂F/∂y') = y'[∂F/∂y - (d/dx)(∂F/∂y')], which is zero by the Euler-Lagrange equation, so E = c, a constant.
In Hamilton's Principle of dynamics, F(y,y') is the Lagrangian, L = T - V. The integral E we have just found then becomes E = T - V - 2T, since y'(∂L/∂y') = y'(∂T/∂y') = 2T, because T is a homogeneous function of y' of degree 2. Then E = -(T + V) = c, or T + V = constant. This constant is the total energy H.
For the brachistochrone problem, this gives √[(1 + y'²)/y] - y'²/√[y(1 + y'²)] = c, which can be solved for y' to get y' = √[(1/c²y) - 1]. From this, we obtain x - b = ∫dy/√[(1/c²y) - 1], where b is a second constant of integration. If we let 1/c² = k and y = ku, this integral becomes x - b = k∫√[u/(1-u)]du. The integral is easily performed if we substitute u = sin²(θ/2), where θ is a new parameter. Then we get x - b = (k/2)(θ - sin θ) and, of course, y = (k/2)(1 - cos θ) by using the half-angle formula. These are just the parametric equations for the cycloid, where θ is the angle of rotation of the wheel. Taking b = 0 makes the curve pass through the origin, the starting point, where there is a cusp. It is intuitively satisfying that the journey begins with a free fall.
The parameter k must be chosen so that the cycloid passes through point (X,Y). To find k, we first take the ratio y/x = (1 - cos θ)/(θ - sin θ). This ratio varies from 0 at θ = 2π to infinity at θ = 0. Therefore, for any (X,Y) there will be some value of θ at which the cycloid passes through (X,Y). When θ < π, the curve will fall monotonically. For greater θ, the curve will pass through a minimum and rise to the destination. If Y = 0, then θ = 2π, and we will have a complete loop of the cycloid. The diameter of the rolling wheel is k.
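These parametric equations make the timing easy to check numerically. In the sketch below (Python, my own verification, not from the article), the destination is at the same elevation, so θ runs through a full loop and k = D/π. Integrating dt = ds/v along the curve reproduces the quoted trip time of about 2.9 minutes:

```python
import math

g = 32.0                  # ft/s^2
D = 30 * 5280.0           # ft; destination at the same elevation (Y = 0)
k = D / math.pi           # wheel diameter: theta = 0..2*pi spans a distance pi*k

# Integrate dt = ds / v along the cycloid numerically (midpoint rule).
n = 100000
dtheta = 2 * math.pi / n
T = 0.0
for i in range(n):
    theta = (i + 0.5) * dtheta
    y = (k / 2) * (1 - math.cos(theta))        # depth below the start
    ds = k * math.sin(theta / 2) * dtheta      # arc-length element
    T += ds / math.sqrt(2 * g * y)             # v = sqrt(2 g y)

print(round(T), round(T / 60, 1))    # about 176 s, i.e. 2.9 minutes
print(round(D / T * 3600 / 5280))    # average speed, about 612 mph
```

The numeric result agrees with the closed form T = 2π√(k/2g) quoted later for the cusp-to-cusp time, and gives an average speed of roughly 612 mph for the 30-mile trip.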
In another article (Curves), I show that the brachistochrone is also the tautochrone. That is, the time taken to reach the bottom from any point on the curve is the same. The time is calculated in that article by straightforward integration, with the result that the time from cusp to cusp is (2π)√(k/2g). The period of a cycloidal pendulum is the same as the period of a small-amplitude pendulum of length 2k.
In mechanics, the variable x is the time t. The function F(t,y,y') is the Lagrangian, L = T - V, where T is the kinetic energy my'²/2 and V is the potential energy V(y). The motion must be such that the action, ∫Ldt, is an extremum (usually a minimum). The Euler-Lagrange equations are then (d/dt)(∂L/∂v) - ∂L/∂y = 0, where the velocity v = y'. With our Lagrangian, this becomes (d/dt)(mv) + dV/dy = 0, or dp/dt = -dV/dy, which is just Newton's second law, with the force derived from the potential energy V. The momentum is p = ∂L/∂v. These relations can be generalized to provide a firm foundation for mechanics.
The BTC is actively soliciting government support, so necessary for the progress of free enterprise, and states that if it is successful, construction can begin as soon as the problems of friction and tunnelling to about 9 miles depth are overcome, which it is confident can be done. Lawyers and accountants are currently working on the problems. Tickets are available for the first journey for $100 standard, $200 first class. First class passengers will receive a complimentary T-shirt. The trip is not recommended for those with dicky tickers. Small, unmarked bills, please.
The term brachistochrone comes directly from Greek: brachistos, shortest, and chronos, time.
R. Courant, Differential and Integral Calculus, Vol. II (London: Blackie and Son, 1936). Chapter VII.
C. Lanczos, The Variational Principles of Mechanics (Toronto: The University of Toronto Press, 1949). A more complete treatment of the calculus of variations, especially as applied to Hamiltonian mechanics. The brachistochrone is mentioned, but not solved.
Composed by J. B. Calvert
Created 14 December 2004
Last revised 18 December 2004 | <urn:uuid:27ac06d0-441b-461f-a4fd-a20cea927972> | 3.765625 | 2,703 | Academic Writing | Science & Tech. | 77.225001 |
Comparative seasonal biogeography of mineralising nannoplankton in the Scotia Sea: Emiliania huxleyi, Fragilariopsis spp. and Tetraparma pelagica
Hinz, D.J.; Poulton, Alex; Nielsdottir, M.C.; Steigenberger, S.; Korb, R.; Achterberg, E.P.; Bibby, T.S. 2012. Comparative seasonal biogeography of mineralising nannoplankton in the Scotia Sea: Emiliania huxleyi, Fragilariopsis spp. and Tetraparma pelagica. Deep Sea Research II, 59-60, 57-66. doi:10.1016/j.dsr2.2011.09.002. Full text not available from this repository.
The Southern Ocean is an important biogeochemical region on a global scale, in which mineralising phytoplankton play a role in cycling energy, carbon and nutrients. Mineralising phytoplankton with cells 2–20 μm in diameter (nannoplankton) are poorly enumerated by traditional preservation and microscopy techniques, yet may fulfil an important role in the Southern Ocean. Here we define the spatial and temporal biogeography for these mineralising nannoplankton assessed by scanning electron microscopy in conjunction with an array of biological, physical, and chemical variables during two cruises to the Scotia Sea region of the Southern Ocean. The cruises encompassed two seasons, austral summer (January–February 2008) and austral autumn (March–April 2009). The biogeography of the three most numerous mineralising nannoplankton groups, the coccolithophore Emiliania huxleyi, the smaller (<10 μm) species of the diatom genus Fragilariopsis, and chrysophytes of the genus Tetraparma (mostly Tetraparma pelagica) were found to be related to the boundaries of the major circumpolar fronts. E. huxleyi abundances were relatively high in the northern water masses (maximum of 650 cells ml−1), while T. pelagica abundances were high in the southern water masses (maximum of 1910 cells ml−1). Small Fragilariopsis spp. abundances were also highest in the southern water masses (maximum of 1820 cells ml−1), but this group was present throughout the Scotia Sea. Multivariate statistical analysis found that the most influential environmental variables controlling mineralising nannoplankton biogeography were sea surface temperature and silicate concentration. 
Estimates of biomass indicated that the Scotia Sea mineralising nannoplankton community formed a substantial part of the total phytoplankton community, particularly south of the Southern Antarctic Circumpolar Current Front (SACCF) during the austral autumn, where mineralising nannoplankton biomass reached 36% of the total phytoplankton biomass. The results that are obtained suggest that traditional microscopic surveys of large Southern Ocean phytoplankton may underestimate total biomass by excluding key mineralising nannoplankton groups. Greater appreciation of the ecological significance of mineralising nannoplankton in the Southern Ocean will improve our understanding of the relationships between environmental parameters, primary production, and the biological carbon pump in this ecosystem.
|Item Type:||Publication - Article|
|Digital Object Identifier (DOI):||10.1016/j.dsr2.2011.09.002|
|Programmes:||BAS Programmes > Polar Science for Planet Earth (2009 - ) > Ecosystems|
|Additional Keywords:||Diatoms, Coccolithophores|
|Date made live:||16 Feb 2012 12:36|
How do radio waves create a current in an antenna, in terms of photons? If it is Compton scattering, why is the frequency of the photons not changed?
An elementary explanation, at high school level:
The radio-wave photons in the beam are coherent, as Vladimir said. Coherent means that the electric and magnetic fields of each individual photon have a fixed phase relationship with all the others.
When the wave reaches an antenna, some of the photons are absorbed, pushing the electrons to a slightly higher energy level (energy h*nu) in the conduction band. Thus it is not scattering but absorption that generates the current with the frequency of the incoming beam.
It is coherence that, as the photons are absorbed, pushes the electrons in step, so that a current with the frequency of the impinging beam is built up.
No, it is not a Compton scattering - the electrons in antenna are not really free.
A radio wave is a flux of coherent photons, they act together, not one by one.
Any EMW makes charges move, but bound charges may do work, so the incident wave may be partially absorbed.
Besides, low-frequency Compton scattering is essentially the same as classical EMW scattering, so the "scattered" part of the resulting wave has nearly the same frequency.
A beam of radio waves consisting of coherent photons can be produced only by a maser. Otherwise the photons do not have correlated phases; for example, the famous 21 cm hydrogen line of radio astronomy does not consist of coherent photons. For radio waves produced by ordinary transmitters the situation is even "worse": here it is hard even to give a strict justification for finding photons whose frequency is related to the frequency of the transmitter. I do not have exact calculations, but to understand the source of the problem it is enough to recall that a radio transmitter may emit waves of different shapes, simply described by classical physics, whereas a photon is emitted in a quantum transition between energy levels, which is a very different process. After all, a photon is by definition described by a time-dependent function like $exp(-i\omega t)$, and we cannot emit "triangular" photons instead of sinusoidal ones by giving the transmitter current some tricky time dependence.
September 6, 2011
A normal linked list can be accessed only at its head. A double-ended queue, or deque (pronounced “deck”), can be accessed at either end. Like a normal list, a deque can be null. New elements can be added at either end, the element at either end of a non-null deque can be fetched, and the element at either end of a non-null deque can be deleted. Deques are a combination of stacks and queues.
Your task is to write a function library that implements deques; you should be sure that all operations are performed in constant time. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.
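Here is one possible solution sketch in Python (the exercise leaves the interface up to you, so names like push_front and pop_back are my own choices). A doubly linked list makes every operation O(1):

```python
class _Node:
    __slots__ = ("value", "prev", "next")

    def __init__(self, value):
        self.value = value
        self.prev = None
        self.next = None


class Deque:
    """Double-ended queue over a doubly linked list; every operation is O(1)."""

    def __init__(self):
        self.head = None   # front of the deque
        self.tail = None   # back of the deque

    def is_empty(self):
        return self.head is None

    def front(self):
        return self.head.value   # fetch the front element without removing it

    def back(self):
        return self.tail.value   # fetch the back element without removing it

    def push_front(self, value):
        node = _Node(value)
        node.next = self.head
        if self.head is None:
            self.tail = node     # deque was empty: node is both ends
        else:
            self.head.prev = node
        self.head = node

    def push_back(self, value):
        node = _Node(value)
        node.prev = self.tail
        if self.tail is None:
            self.head = node
        else:
            self.tail.next = node
        self.tail = node

    def pop_front(self):
        node = self.head
        self.head = node.next
        if self.head is None:
            self.tail = None     # deque became empty
        else:
            self.head.prev = None
        return node.value

    def pop_back(self):
        node = self.tail
        self.tail = node.prev
        if self.tail is None:
            self.head = None
        else:
            self.tail.next = None
        return node.value
```

For instance, pushing 1 and 2 at the back and 0 at the front yields pops of 0 and 1 from the front and 2 from the back. Fetching or popping from an empty deque fails here with an AttributeError; a production version would raise a clearer exception.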
Hornets, spider wasps, ants, etc.
This tree diagram shows the relationships between several groups of organisms.
The root of the current tree connects the organisms featured in this tree to their containing group and the rest of the Tree of Life. The basal branching point in the tree represents the ancestor of the other groups in the tree. This ancestor diversified over time into several descendent subgroups, which are represented as internal nodes and terminal taxa to the right.
You can click on the root to travel down the Tree of Life all the way to the root of all Life, and you can click on the names of descendent subgroups to travel up the Tree of Life all the way to individual species.
Ashmead, W. H. 1903. Classification of the fossorial, predaceous and parasitic wasps, or the superfamily Vespoidea. Paper no. 15, Canadian Entomologist 35: 199-205.
Brothers, D. J. 1999. Phylogeny and evolution of wasps, ants and bees (Hymenoptera, Chrysidoidea, Vespoidea, and Apoidea). Zoologica Scripta 28: 233-249.
Brothers, D. J. and Carpenter, J. M. 1993. Phylogeny of Aculeata: Chrysidoidea and Vespoidea (Hymenoptera). Journal of Hymenoptera Research 2: 227-304.
Carpenter, J. M. 1981. The phylogenetic relationships and natural classification of the Vespoidea (Hymenoptera). Systematic Entomology 7: 11-38.
Grimaldi, D., Agosti, D., and Carpenter, J. M. 1997. New and rediscovered primitive ants (Hymenoptera: Formicidae) in Cretaceous amber from New Jersey, and their phylogenetic relationships. American Museum Novitates 3208: 1-43.
Page copyright © 1995
All Rights Reserved.
Citing this page:
Tree of Life Web Project. 1995. Vespoidea. Hornets, spider wasps, ants, etc. Version 01 January 1995 (temporary). http://tolweb.org/Vespoidea/11191/1995.01.01 in The Tree of Life Web Project, http://tolweb.org/
Concept 2 Quiz
Hint for Question 7
The Cambrian Explosion represents abundant life forms and a huge expansion in marine biodiversity. The Cambrian period is the first of the Paleozoic era.
The Earth formed over 4.5 billion years ago, and it has been changing ever since.
Sometimes these changes happen very fast. An earthquake can split the ground in a few seconds. Lava from a volcanic eruption can spread over the side of a volcano in minutes. A heavy rainstorm can flood a neighborhood in a day. These changes are easy to see.
But most changes happen so slowly we don't notice them at all. The continents slowly creep across the surface of the Earth at an average speed of eight centimeters a year. Over hundreds of millions of years, mountains form, and then slowly erode away.
How do Earth scientists know about these changes? They do a lot of detective work, and they look for clues all over the Earth! | <urn:uuid:d176ae95-613c-4d81-a927-0892aa090443> | 3.765625 | 153 | Knowledge Article | Science & Tech. | 68.288397 |
Except for the rings of Saturn, the Ring Nebula (M57) is probably the most famous celestial band. Its classic appearance is understood to be due to perspective - our view from planet Earth looks down the center of a roughly barrel-shaped cloud of glowing gas. But expansive looping structures are seen to extend far beyond the Ring Nebula's familiar central regions in this intriguing composite of ground based and Hubble Space Telescope images with narrowband image data from Subaru. Of course, in this well-studied example of a planetary nebula, the glowing material does not come from planets. Instead, the gaseous shroud represents outer layers expelled from the dying, sun-like star at the nebula's center. Intense ultraviolet light from the hot central star ionizes atoms in the gas. Ionized oxygen atoms produce the characteristic greenish glow and ionized hydrogen the prominent red emission. The central ring of the Ring Nebula is about one light-year across and 2,000 light-years away. Like tonight's shooting stars, it shines in the northern constellation Lyra.
July 13, 1989 – The American burying beetle was listed as endangered under the federal Endangered Species Act. Only two populations — one in eastern Oklahoma and one on a New England island — were known to exist, and only two live specimens had been located over the span of a decade.
September 27, 1991 – The U.S. Fish and Wildlife Service published a federal recovery plan for the American burying beetle.
July 26, 2011 – The U.S. House of Representatives passed a bill speeding a decision on the Keystone XL pipeline, regardless of whether proper analysis had been done on its environmental impacts — including potentially devastating habitat for the American burying beetle and numerous other endangered species.
Photo courtesy USFWS
Boost Format library
The format library provides a class for formatting arguments according
to a format-string, as printf does, but with two major differences:
- format sends the arguments to an internal stream, and so is entirely
type-safe and naturally supports all user-defined types.
- The ellipsis (...) can not be used correctly in the strongly typed
context of format, so the function call with arbitrary arguments is
replaced by successive calls to an argument-feeding operator.
You can find more details in:
- Documentation (HTML).
- Sample programs:
- The program sample_formats.cpp demonstrates
simple uses of format.
- Another sample illustrates the few formatting features that were
added to printf's syntax, such as simple positional directives and
centered alignment.
- A third sample demonstrates uses of advanced features, like
reusing, and modifying, format objects.
- And sample_userType.cpp shows the
behaviour of the format library on user-defined types.
02 December, 2006
Copyright © 2003 Samuel Krempp
Distributed under the Boost Software License, Version 1.0. (See
accompanying file LICENSE_1_0.txt or
copy at http://www.boost.org/LICENSE_1_0.txt) | <urn:uuid:f0c49306-8337-4f29-bb29-b65a76a155d3> | 2.78125 | 271 | Documentation | Software Dev. | 44.580705 |
A Square has 4 edges, all the same length.
All its vertices (corners) are right-angled.
It has 2 diagonals, both the same length.
It has 4 lines of symmetry, and rotational symmetry of order 4
With this shape it is only necessary to know one of its dimensions, and all the others can be derived from that.
Where (for brevity) it says 'edge', 'perimeter' and so on, it should, more correctly, be something like 'length of edge' or 'edge-length' etc. | <urn:uuid:c4225da7-1f2e-48fc-a1d7-1e309714b121> | 3.09375 | 120 | Knowledge Article | Science & Tech. | 62.257293 |
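As a small illustration (a Python sketch of my own, not part of the page), every other measurement of a square follows from the edge length alone, and the relations invert just as easily:

```python
import math

def square_from_edge(edge):
    """All of a square's standard measurements, derived from its edge length."""
    return {
        "perimeter": 4 * edge,          # four equal edges
        "area": edge ** 2,
        "diagonal": edge * math.sqrt(2),  # from Pythagoras on two edges
    }

def edge_from_diagonal(diagonal):
    """Invert the relation: recover the edge length from a diagonal."""
    return diagonal / math.sqrt(2)

s = square_from_edge(5)
print(s["perimeter"], s["area"], round(s["diagonal"], 3))   # 20 25 7.071
```

So a square with edge 5 has perimeter 20, area 25, and diagonals of length 5√2 ≈ 7.071.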
Hello friends... I hope I'm not breaking the forum rules with this question. I'm not in school right now — on vacation — so I decided to read a C++ book, "C++ How to Program" by Deitel... and you are the only ones who can give me a hand, since I don't have teachers to ask at the moment. So, to the point, my question:
Chapter 3.9, page 128:
"The developer of the class implementation, responsible for creating a reusable GradeBook class, creates the header file LibroCalificaciones.h and a source-code file LibroCalificaciones.cpp that includes (using #include) the header file, and then compiles the source-code file to create GradeBook's object code. To hide the implementation details of GradeBook's member functions, the class-implementation developer provides the client-code programmer with the header file LibroCalificaciones.h (which specifies the class's interface and data members) and the object code for class GradeBook (which contains the machine-language instructions representing GradeBook's member functions). The client-code programmer does not receive LibroCalificaciones.cpp, so he does not know how GradeBook's member functions are implemented."
Question... well, I tried to compile LibroCalificaciones.h and LibroCalificaciones.cpp, but I can't, because neither has a main() function... how do I do it?
LibroCalificaciones.h is a header file, meaning it only declares the class and its functions; LibroCalificaciones.cpp implements those functions. Neither needs a main() by itself. Compile LibroCalificaciones.cpp alone into object code without linking (for example, g++ -c LibroCalificaciones.cpp), then write a separate source file containing a main() function that includes the header with #include "LibroCalificaciones.h", and link the two together. Hope this helps.
Common names: Cicada, Dogday Cicada, Dogday Locust, Harvestfly, Harvestman Cicada, Locust
Scientific name: Order Homoptera, family Cicadidae, many genera and species
Size: Adult--1" to 3"
Identification: The big insects that make all the noise in mid-summer. They have wide, blunt heads with big bulging eyes and clear, brittle wings. Empty nymphal skins can be seen attached to trees, shrubs, and buildings in the summer. Skin looks like a hollow June bug skin.
Biology and life cycle: Males sing in a loud, sustained, shrill song in the summer. Nymphs have stout brown bodies with large front legs used as scoops. They feed on roots and molt until ready for the last molt. They dig out of the soil, climb a tree, and attach to tree bark or sometimes windows and door screens. Adults emerge during the final molt through a slit in the back, feed for 5 or 6 weeks, mate, and then lay eggs in slits in tree branches. In two months the eggs hatch and the nymphs drop to the ground and burrow into the soil.
Habitat: Any treed area, conifers and mixed woods. Also in shrubs.
Feeding habits: Nymphs feed on tree roots. Plant damage comes from the egg-laying slits in stems, which cause tip growth to die.
Economic importance: Cause little major plant damage. Most serious damage comes from the egg-laying slits in the bark of small branches.
Natural control: Cicada killers.
Organic control: We know of no effective techniques yet. There are never enough of them in any one place. Beneficial nematodes will help.
Insight: Wrongly called locusts. Females have no sound apparatus. Only males make the sound. They probably defend themselves with their high-pitched sound. The male, which is sometimes called the harvestfly, is responsible for the sad, sustained sound that fills the air on hot summer days. This sound is a mating call and also a means of protection, so loud it hurts the ears of some predators. | <urn:uuid:1657ba7e-dbc3-4aec-8581-7dfcfb93277c> | 3.90625 | 465 | Knowledge Article | Science & Tech. | 64.232308 |
How to Manipulate Files in R
Occasionally, you may want to write a script in R that will traverse a given folder and perform actions on all the data in the files or a subset of files in that folder.
To get a list of files in a specific folder, use list.files() or dir(). These two functions do exactly the same thing, but for backward-compatibility reasons, the same function has two names:
> list.files(file.path("F:", "git", "roxygen2"))
[1] "roxygen2"            "roxygen2.Rcheck"
[3] "roxygen2_2.0.tar.gz" "roxygen2_2.1.tar.gz"
|list.files||Lists files in a directory.|
|list.dirs||Lists subdirectories of a directory.|
|file.exists||Tests whether a specific file exists in a location.|
|file.create||Creates a file.|
|file.remove||Deletes files (and directories in Unix operating systems).|
|tempfile||Returns a name for a temporary file. If you create a file — for example, with file.create() or write.table() using this returned name — R will create a file in a temporary folder.|
|tempdir||Returns the file path of a temporary folder on your file system.|
Next, you get to exercise all your knowledge about working with files. In the next example, you first create a temporary file, then save a copy of the iris data frame to this file. To test that the file is on disk, you then read the newly created file to a new variable and inspect this variable. Finally, you delete the temporary file from disk.
Start by using the tempfile() function to return a name to a character string with the name of a file in a temporary folder on your system:
> my.file <- tempfile()
> my.file
[1] "C:\\Users\\Andrie\\AppData\\Local\\Temp\\RtmpGYeLTj\\file14d4366b6095"
Notice that the result is purely a character string, not a file. This file doesn’t yet exist anywhere. Next, you save a copy of the data frame iris to my.file using the write.csv() function. Then use list.files() to see if R created the file:
> write.csv(iris, file=my.file)
> list.files(tempdir())
[1] "file14d4366b6095"
As you can see, R created the file. Now you can use read.csv() to import the data to a new variable called file.iris:
> file.iris <- read.csv(my.file)
Use str() to investigate the structure of file.iris. As expected file.iris is a data.frame of 150 observations and six variables. Six variables, you say? Yes, six, although the original iris only has five columns.
What happened here was that the default value of the argument row.names of read.csv() is row.names=TRUE. (You can confirm this by taking a close look at the Help for ?read.csv().) So, R saved the original row names of iris to a new column called X:
> str(file.iris) 'data.frame': 150 obs. of 6 variables: $ X : int 1 2 3 4 5 6 7 8 9 10 ... $ Sepal.Length: num 5.1 4.9 4.7 4.6 5 5.4 4.6 5 4.4 4.9 ... $ Sepal.Width : num 3.5 3 3.2 3.1 3.6 3.9 3.4 3.4 2.9 3.1 ... $ Petal.Length: num 1.4 1.4 1.3 1.5 1.4 1.7 1.4 1.5 1.4 1.5 ... $ Petal.Width : num 0.2 0.2 0.2 0.2 0.2 0.4 0.3 0.2 0.2 0.1 ... $ Species : Factor w/ 3 levels "setosa","versicolor",..: 1 1 1 1 1 1 1 1 1 1 ...
To leave your file system in its original order, you can use file.remove() to delete the temporary file:
> file.remove(my.file)
> list.files(tempdir())
character(0)
As you can see, the result of list.files() is an empty character string, because the file no longer exists in that folder. | <urn:uuid:2ba28d48-a5f7-4dd3-bf47-a1719b87508e> | 3.71875 | 994 | Tutorial | Software Dev. | 88.856689 |
See The Hidden Jewels of Appalachia Video Here
If you want to hit paydirt, the Appalachian region is the world's salamander El Dorado — home to over 70 salamander species. Australia and Sub-Saharan Africa have no salamanders, Asia has 27 species, and the whole of Europe has 36 species. Central and South America have a bunch of salamander species, but they are mostly from just a few genera of lungless salamanders.
I lived in England for a while and saw what a big deal people make out of the few newt species there. People love ‘em. As a result, I was expecting to find a hardcore citizen-naturalist contingent of salamander fans in the USA. What I found instead, was a hardcore biologist fanbase of salamanders who were acutely aware of these hidden jewels. However, the more I spoke to non-biologists living in the Eastern USA, I learned that many people take these critters for granted, or have never noticed them.
Salamanders can be found in rivers, ponds, streams and vernal pools, under rotting logs and in caves. They inhabit many different habitats and can perform important ecological functions as predators of insects and food for other animals. Some estimates from Hubbard Brook have actually shown that if you were to take all the mammals, birds, and reptiles in a forest and put each group on a scale against all the salamanders, the scale would likely tip in the salamanders' favor every time!
If you are already a fan, find Appalachian salamanders on Facebook; if you are a citizen scientist who loves amphibians, start taking photos of them and share your observations as part of the Global Amphibian Blitz – an online citizen-science initiative to find and map every species of amphibian in the world. But mostly, get out into nature and discover these incredible creatures for yourself.
What Is a Blizzard?
A blizzard is a storm with dense, blowing snow characterized by low visibility and lasting at least three hours.
CREDIT: Ken Graham Photography.
The term "blizzard" is often tossed around when big winter storms blow in. But the National Weather Service has an official definition of a blizzard:
A blizzard is a storm with "considerable falling or blowing snow" and winds in excess of 35 mph and visibilities of less than 1/4 mile for at least 3 hours.
While blizzard conditions may occur for shorter periods of time, the weather service is particular about its warning system:
When all the blizzard conditions are expected, the National Weather Service will issue a "blizzard warning." When just two of the above conditions are expected, a "winter storm warning" or "heavy snow warning" may be issued.
Blizzard conditions often develop on the northwest side of an intense storm system, meteorologists explain. The difference between the lower pressure in the storm and the higher pressure to the west creates a tight pressure gradient, or difference in pressure between two locations, which in turn results in very strong winds.
The strong winds blow falling snow and pick snow up from the ground, cutting visibility and creating big snow drifts.
Where did the term "blizzard" come from?
It had been used to describe a cannon shot or a volley of musket fire. It first showed up describing a snowstorm in an Iowa newspaper in the 1870s, according to the weather service.
Blizzards are most common in the upper Midwest and Great Plains, but they can occur anywhere strong snowstorms strike.
The greatest snowfall event ever in the lower-48 United States was 63 inches, at Georgetown, Colo., on Dec. 4, 1913. The record for New York City is 25.5 inches, on Dec. 26, 1947, and for Boston it is a 23.6-inch event on Feb. 17, 2003.
It is not uncommon in the Midwest to have wind chills below minus 60 degrees Fahrenheit during blizzard conditions. Exposure to such low wind chill values can result in frostbite or hypothermia, so officials caution against going outside at all during a blizzard.
The coldest temperature ever recorded in the lower-48 United States was minus 70 degrees F (minus 57 degrees C), at Rogers Pass, Mont., on Jan. 20, 1954. The coldest temperature ever recorded on Earth was minus 128.5 degrees F (minus 89.2 degrees C), on July 21, 1983, at Vostok, Antarctica.
(Above) Cross-section of the Tswaing crater modified after Brandt, D., (1994), Brandt & Reimold (1999), Partridge & Reimold (1990).
The Tswaing crater has a simple bowl shape with a 1.13 km diameter. The outer rim, composed of shattered rock called breccia, is elevated nearly 60 m above the surrounding plains. Most of the original ejecta blanket has been eroded away; however, numerous large granitic blocks can be found up to hundreds of meters from the crater rim.
Geology of Tswaing
Igneous Rocks: The most common rock type at the Tswaing crater and in the surrounding region is the Nebo Granite. The Nebo Granite is part of the Bushveld complex - a large layered igneous intrusion. The Bushveld itself is a geological novelty, consisting of several vertical kilometers of layered magmatic rock which extends horizontally for several hundred kilometers. The 2.06 billion year-old Bushveld complex is also one of the world's largest sources of platinum group metals.
Various small intrusions and dykes also cut into the granitic rock that forms the crater rim. These small intrusions are much older than the crater (1.3 billion years old).
Projectile: Unlike other small impact craters (e.g. Barringer Meteor Crater), the Tswaing projectile is believed to be chondritic (stony meteorite) in composition. Other small impact craters are all associated with iron projectiles.
Breccia and impact-generated rocks: A sandy breccia layer, sampled in a drill core of the crater, contains abundant shock-metamorphosed quartz and feldspar grains, in addition to melt and glass fragments (after Reimold et al., Pretoria Saltpan Crater: Impact Origin Confirmed, Geology, 20, 1992). This type of impact-generated rock, known as suevite, consists of impact breccia clasts with inclusions of impact melt.
Sedimentary Rocks: An important group of regional sedimentary rock units known as the Karoo Supergroup is found near the crater. These rocks were thought to cover the granite in the local vicinity during the time of the impact. The sediments which formed these rocks were deposited 220 million years ago during the Triassic. Karoo sedimentary rocks consist of shale, sandstone and a gritty sediment.
Lacustrine Sediments: The center of the Tswaing crater floor is covered with a hyper-saline lake. The high salinity of this lake made it a source for the commercial extraction of soda brine from 1912 to 1956. At various times in the past, this lake has deposited carbonate mud, limestone, and evaporites on the crater floor to a current depth of nearly 90m.
This web site is based on information originally created for the NASA/UA Space Imagery Center's Impact Cratering Series.
Concept and content by David A. Kring and Jake Bailey.
Design, graphics, and images by Jake Bailey and David A. Kring.
Any use of the information and images requires permission of the Space Imagery Center and/or David A. Kring (now at LPI).
SOLAR panels that generate electricity are bulky and expensive. But now an American company says it can provide unobtrusive photovoltaic panels at half the cost of traditional versions.
The new technology will be on show at this month's Olympic Games in Atlanta, Georgia, where the US Department of Energy is building a solar-powered house. On the roof are 110 solar shingles supplied by Energy Conversion Devices (ECD) of Detroit, Michigan. "From the ground, it's hard to tell the difference between the asphalt shingles and the solar ones," says company chairman Robert Stempel.
Traditional photovoltaic cells are built by growing silicon crystals, slicing them into wafers 100 micrometres thick, and encasing them in glass. This process is now about as cheap as it is going to get, and assuming a cell lifetime of around twenty years, the electricity from crystalline cells is still roughly two or three times as ...
gravity: The term used either for the force exerted by gravity or for the acceleration that force causes. At the surface of the earth, the acceleration has a value of approximately 9.8 meters per second per second, or 32 feet per second per second, depending on the measuring system you choose to use. The force per unit mass is likewise 9.8 newtons per kilogram or 32 pounds per slug. Either way, the force causes the acceleration, and the numerical value of the force per unit mass on an object and the acceleration it causes are identical.
howitzer and tunnel: In the context of this demonstration, the howitzer refers to a device that will fire a softball into the air. The tunnel refers to a piece of cardboard draped over a frame that forms a tunnel through which a demonstrator will coast along with the howitzer while on a three-wheel cart.
projectile: Any object that is given some velocity and then left to fly through the gravitational field with only the gravitational force acting on it. Examples include a baseball hit by a bat, a basketball shot toward a basket, and a bullet fired from a gun.
range: The horizontal distance a projectile travels; how far it moves away from whatever gave it its horizontal motion. We will be firing a projectile at an angle with the horizontal, so it will arc upward and outward. How far outward it goes is the range of the projectile.
simultaneous: Two objects will be released at the same time. We call it simultaneous fall. This means the two objects will begin falling at the same time.
trajectory: This refers to the path the projectile follows when fired from a gun, or a bat, or someone throwing the object, or whatever its source of motion is. In our demonstrations, projectiles will follow curved paths that will be interesting to observe.
HOW IT HAPPENS
Gravity is the force that pulls both balls toward the floor. It pulls harder on the heavier ball, but the heavier ball also has a greater resistance to being speeded up toward the floor. The two effects cancel each other out, and the two objects will hit the floor at the same time, given that they are dropped from the same height and at the same time. There will be another demonstration following this which will be set up to eliminate the differences in when the balls are dropped, and the difference from the horizontal that the ball is projected.
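The cancellation described above can be made concrete with a short sketch (the masses are illustrative and not part of the original demonstration notes):

```python
g = 9.8  # m/s^2, acceleration due to gravity near Earth's surface

def free_fall_acceleration(mass_kg):
    force_n = mass_kg * g      # gravity pulls harder on a heavier ball...
    return force_n / mass_kg   # ...but inertia scales up by the same factor

# A light ball and a bowling ball accelerate identically, so they land together.
print(abs(free_fall_acceleration(0.1) - free_fall_acceleration(7.0)) < 1e-9)  # → True
```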
HOW IT HAPPENS
When the two are released, the fact that one ball is being propelled forward has nothing to do with the fact that it is being accelerated downward. The two directions are perpendicular to each other, and hence have no effect on each other. The two balls will hit the floor at exactly the same time.
HOW IT HAPPENS
The weight on the “arm” causes the center of mass of the arm to be shifted toward the lower end of the “arm”. Thus, when the supporting rod is removed, the center of mass of the arm accelerates toward the earth at the normal rate. This causes the part of the arm where the bucket is placed to fall faster than the normal acceleration of gravity, and it also falls in an arc, allowing it to get out and under the bowling ball so it catches the bowling ball.
There is a very famous picture of a smokestack being demolished. This smokestack is toppled to the right, and the smokestack is broken in two before it hits the ground. It looks like the following:
This is a result of the center of mass falling quicker than the upper end of the smokestack. This entire demonstration does not violate the law of gravity, it just seems like it does!
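One way to see how part of a falling object can beat free fall: in the idealized case of a uniform rod pivoted at its base (a simplification of the weighted arm used in the demonstration), the free tip starts out accelerating at 1.5 g. A sketch under that assumption:

```python
import math

def tip_acceleration(L, theta_deg, g=9.8):
    """Uniform rod of length L pivoted at its base, released at angle theta
    above horizontal: angular acceleration alpha = 3 g cos(theta) / (2 L),
    so the free tip accelerates at L * alpha = 1.5 g cos(theta)."""
    theta = math.radians(theta_deg)
    alpha = 3 * g * math.cos(theta) / (2 * L)
    return L * alpha

# Released from horizontal, the tip accelerates faster than free fall.
print(tip_acceleration(2.0, 0.0) > 9.8)  # → True
```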
In the previous demonstration, the idea was brought up that in order for the projectile to have a range other than zero, it would need to be fired at an angle with the vertical. In this demonstration, the demonstrator will ride in a three-wheel cart and will fire a projectile (softball) vertically into the air as he is moving forward. He will go through a tunnel, firing the projectile as he enters the tunnel, proceed through the tunnel and catch the projectile on the other side of the tunnel. The projectile will have passed over the top of the tunnel while the demonstrator and the cannon will have passed through the tunnel.
HOW IT HAPPENS
The motion of a projectile can be thought of as motion in two separate, perpendicular directions. One direction is horizontal, the other is vertical. When a projectile is fired at an angle between these two directions, it is given motion in both directions, causing the curved trajectory. The demonstrator will give the projectile motion in the vertical direction with the cannon pointed directly upward. He will give it horizontal motion by moving forward at the same time he fires it. The motion upward does not change the motion horizontally, so the projectile and the cart move along together horizontally. Thus the demonstrator will be able to catch the ball on the other side of the tunnel.
This is the classic demonstration designed to answer an age-old question. If a hunter is out in the jungle and wants to fire a tranquilizing dart to put a monkey to sleep, how does he aim at the monkey in order to hit it with the dart? You see, the monkey is very smart, and as the puff of smoke from the dart gun shows up, the monkey knows to instantly drop from the tree to avoid the dart. The demonstration team sets up just such a situation, with a demonstrator being the “monkey” hanging from a tower with an electromagnet, and a billiard ball will be fired at him while he hangs there. At the instant the billiard ball is fired, the demonstrator will be dropped from the tower by cutting the electricity to the electromagnet. He will fall to a large sponge pad on the floor, catching the billiard ball in mid-air with a baseball glove.
HOW IT HAPPENS
When a projectile is fired from the barrel of a gun, its path is formed by the action of gravity together with its given velocity. If gravity were to stop acting, the desired direction of the projectile would be directly at the target. The projectile would follow a straight line directly from the muzzle of the gun to the target. When gravity is acting, it will tend to do the same thing to both objects, both the falling target and the projectile. Quite simply what this means is that the projectile must be aimed directly at the target in this situation also. The fact is that a target which drops the instant the projectile is fired from a long distance is easier to hit than one that is stationary! If the projectile is fired with a greater velocity, the target just doesn’t fall as far until it is hit by the projectile. In this case the projectile will follow a straighter path to the target. If the projectile is fired slower, it will follow a more curved path and hit the target farther down towards the ground. But as long as the gun is aimed directly at the target, the projectile will hit the target as it falls, provided the projectile has enough velocity to get to the target before it hits the ground.
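The argument above can be checked numerically. This sketch (with made-up distances and dart speed) aims the dart directly at the target, drops the target at the moment of firing, and compares the two heights when the dart arrives:

```python
import math

g = 9.8  # m/s^2

def heights_when_dart_arrives(d, h, v):
    """Dart fired from the origin aimed straight at a target at (d, h);
    the target starts falling at the same instant the dart is fired."""
    theta = math.atan2(h, d)            # barrel points directly at the target
    t = d / (v * math.cos(theta))       # time for the dart to cover distance d
    dart_y   = v * math.sin(theta) * t - 0.5 * g * t**2
    target_y = h - 0.5 * g * t**2       # target falls the same 1/2 g t^2
    return dart_y, target_y

dart_y, target_y = heights_when_dart_arrives(d=30.0, h=10.0, v=25.0)
print(abs(dart_y - target_y) < 1e-9)  # → True: same height, i.e. a hit
```

Both objects lose exactly 1/2 g t² of height, which is why aiming straight at the falling target works.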
Heat and Temperature
This page takes a historical approach to the discussion of temperature and thermometers.
Treasures from TPF
Need ideas? Need help? Explore The Physics Front's treasure box of catalogued resources on heat and temperature.SUPER Physics
These workshop-style tutorials and accompanying activities guide students through various wave topics - including heat and temperature.
Temperature and Thermometers
We all have a feel for what temperature is. We even have a shared language that we use to qualitatively describe temperature. The water in the shower or bathtub feels hot or cold or warm. The weather outside is chilly or steamy. We certainly have a good feel for how one temperature is qualitatively different than another temperature. We may not always agree on whether the room temperature is too hot or too cold or just right. But we will likely all agree that we possess built-in thermometers for making qualitative judgments about relative temperatures.
What is Temperature?
Despite our built-in feel for temperature, it remains one of those concepts in science that is difficult to define. It seems that a tutorial page exploring the topic of temperature and thermometers should begin with a simple definition of temperature. But it is at this point that I'm stumped. So I turn to that familiar resource, Dictionary.com ... where I find definitions that vary from the simple-yet-not-too-enlightening to the too-complex-to-be-enlightening. At the risk of doing a belly flop in the pool of enlightenment, I will list some of those definitions here:
- The degree of hotness or coldness of a body or environment.
- A measure of the warmth or coldness of an object or substance with reference to some standard value.
- A measure of the average kinetic energy of the particles in a sample of matter, expressed in terms of units or degrees designated on a standard scale.
- A measure of the ability of a substance, or more generally of any physical system, to transfer heat energy to another physical system.
- Any of various standardized numerical measures of this ability, such as the Kelvin, Fahrenheit, and Celsius scale.
For certain, we are comfortable with the first two definitions - the degree or measure of how hot or cold an object is. But our understanding of temperature is not furthered by such definitions. The third and the fourth definitions that reference the kinetic energy of particles and the ability of a substance to transfer heat are scientifically accurate. However, these definitions are far too sophisticated to serve as good starting points for a discussion of temperature. So we will resign ourselves to a definition similar to the fifth one that is listed - temperature can be defined as the reading on a thermometer. Admittedly, this definition lacks the power that is needed for eliciting the much-desired Aha! Now I Understand! moment. Nonetheless it serves as a great starting point for this lesson on heat and temperature. Temperature is what the thermometer reads. Whatever it is that temperature is a measure of, it is reflected by the reading on a thermometer. So exactly how does a thermometer work? How does it reliably meter whatever it is that temperature is a measure of?
How a Thermometer Works
Today, there are a variety of types of thermometers. The type that most of us are familiar with from science class is the type that consists of a liquid encased in a narrow glass column. Older thermometers of this type used liquid mercury. In response to our understanding of the health concerns associated with mercury exposure, these types of thermometers usually use some type of liquid alcohol. These liquid thermometers are based on the principle of thermal expansion. When a substance gets hotter, it expands to a greater volume. Nearly all substances exhibit this behavior of thermal expansion. It is the basis of the design and operation of thermometers.
As the temperature of the liquid in a thermometer increases, its volume increases. The liquid is enclosed in a tall, narrow glass (or plastic) column with a constant cross-sectional area. The increase in volume is thus due to a change in height of the liquid within the column. The increase in volume, and thus in the height of the liquid column, is proportional to the increase in temperature. Suppose that a 10-degree increase in temperature causes a 1-cm increase in the column's height. Then a 20-degree increase in temperature will cause a 2-cm increase in the column's height. And a 30-degree increase in temperature will cause a 3-cm increase in the column's height. The relationship between the temperature and the column's height is linear over the small temperature range for which the thermometer is used. This linear relationship makes the calibration of a thermometer a relatively easy task.
The calibration of any measuring tool involves the placement of divisions or marks upon the tool to measure a quantity accurately in comparison to known standards. Any measuring tool - even a meter stick - must be calibrated. The tool needs divisions or markings; for instance, a meter stick typically has markings every 1-cm apart or every 1-mm apart. These markings must be accurately placed and the accuracy of their placement can only be judged when comparing it to another object known to have an accurate length.
A thermometer is calibrated by using two objects of known temperatures. The typical process involves using the freezing point and the boiling point of water. Water is known to freeze at 0°C and to boil at 100°C at an atmospheric pressure of 1 atm. By placing a thermometer in mixture of ice water and allowing the thermometer liquid to reach a stable height, the 0-degree mark can be placed upon the thermometer. Similarly, by placing the thermometer in boiling water (at 1 atm of pressure) and allowing the liquid level to reach a stable height, the 100-degree mark can be placed upon the thermometer. With these two markings placed upon the thermometer, 100 equally spaced divisions can be placed between them to represent the 1-degree marks. Since there is a linear relationship between the temperature and the height of the liquid, the divisions between 0 degree and 100 degree can be equally spaced. With a calibrated thermometer, accurate measurements can be made of the temperature of any object within the temperature range for which it has been calibrated.
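The two-point calibration described above amounts to a linear interpolation between the two reference heights. A sketch, using hypothetical column heights:

```python
def make_thermometer(h_freeze_cm, h_boil_cm):
    """Two-point calibration: the liquid heights at 0 C (ice water) and
    100 C (boiling water at 1 atm) fix a linear height-to-temperature map."""
    def read_temp(height_cm):
        return 100.0 * (height_cm - h_freeze_cm) / (h_boil_cm - h_freeze_cm)
    return read_temp

# Hypothetical thermometer: column at 2.0 cm in ice water, 12.0 cm in boiling water.
read_temp = make_thermometer(2.0, 12.0)
print(read_temp(7.0))  # → 50.0 (halfway up the column reads 50 degrees)
```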
The thermometer calibration process described above results in what is known as a centigrade thermometer. A centigrade thermometer has 100 divisions or intervals between the normal freezing point and the normal boiling point of water. Today, the centigrade scale is known as the Celsius scale, named after the Swedish astronomer Anders Celsius who is credited with its development. The Celsius scale is the most widely accepted temperature scale used throughout the world. It is the standard unit of temperature measurement in nearly all countries, the most notable exception being the United States. Using this scale, a temperature of 28 degrees Celsius is abbreviated as 28°C.
Traditionally slow to adopt the metric system and other accepted units of measurements, the United States more commonly uses the Fahrenheit temperature scale. A thermometer can be calibrated using the Fahrenheit scale in a similar manner as was described above. The difference is that the normal freezing point of water is designated as 32 degrees and the normal boiling point of water is designated as 212 degrees in the Fahrenheit scale. As such, there are 180 divisions or intervals between these two temperatures when using the Fahrenheit scale. The Fahrenheit scale is named in honor of German physicist Daniel Fahrenheit. A temperature of 76 degree Fahrenheit is abbreviated as 76°F. In most countries throughout the world, the Fahrenheit scale has been replaced by the use of the Celsius scale.
Temperatures expressed by the Fahrenheit scale can be converted to the Celsius scale equivalent using the equation below:
°C = (°F - 32°)/1.8
Similarly, temperatures expressed by the Celsius scale can be converted to the Fahrenheit scale equivalent using the equation below:
°F= 1.8•°C + 32°
The Kelvin Temperature Scale
While the Celsius and Fahrenheit scales are the most widely used temperature scales, there are several other scales that have been used throughout history. For example, there is the Rankine scale, the Newton scale and the Romer scale, all of which are rarely used. Finally, there is the Kelvin temperature scale, which is the standard metric system of temperature measurement and perhaps the temperature scale most widely used among scientists. The Kelvin temperature scale is similar to the Celsius temperature scale in the sense that there are 100 equal degree increments between the normal freezing point and the normal boiling point of water. However, the zero-degree mark on the Kelvin temperature scale is 273.15 units cooler than it is on the Celsius scale. So a temperature of 0 Kelvin is equivalent to a temperature of -273.15 °C. Observe that the degree symbol is not used with this system. So a temperature of 300 units above 0 Kelvin is referred to as 300 Kelvin and not 300 degree Kelvin; such a temperature is abbreviated as 300 K. Conversions between Celsius temperatures and Kelvin temperatures (and vice versa) can be performed using one of the two equations below.
°C = K - 273.15
K = °C + 273.15
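The conversion equations above translate directly into code; the familiar reference points serve as a check:

```python
def c_to_f(c): return 1.8 * c + 32.0      # °F = 1.8·°C + 32°
def f_to_c(f): return (f - 32.0) / 1.8    # °C = (°F - 32°)/1.8
def c_to_k(c): return c + 273.15          # K = °C + 273.15
def k_to_c(k): return k - 273.15          # °C = K - 273.15

print(c_to_f(100.0))    # → 212.0  (water boils)
print(f_to_c(32.0))     # → 0.0    (water freezes)
print(c_to_k(-273.15))  # → 0.0    (absolute zero)
```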
The zero point on the Kelvin scale is known as absolute zero. It is the lowest temperature that can be achieved. The concept of an absolute temperature minimum was promoted by Scottish physicist William Thomson (a.k.a. Lord Kelvin) in 1848. Thomson theorized based on thermodynamic principles that the lowest temperature which could be achieved was -273°C. Prior to Thomson, experimentalists such as Robert Boyle (late 17th century) were well aware of the observation that the volume (and even the pressure) of a sample of gas was dependent upon its temperature. Measurements of the variations of pressure and volume with changes in the temperature could be made and plotted. Plots of volume vs. temperature (at constant pressure) and pressure vs. temperature (at constant volume) reflected the same conclusion - the volume and the pressure of a gas reduces to zero at a temperature of -273°C. Since these are the lowest values of volume and pressure that are possible, it is reasonable to conclude that -273°C was the lowest temperature that was possible.
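The extrapolation described above can be reproduced with a least-squares line fit. The volumes below are illustrative values generated from Charles's law (not experimental data), so the fitted line recovers the -273.15 °C intercept:

```python
# Illustrative: volumes (L) of a gas sample at constant pressure, generated
# from Charles's law so the fit recovers the historical extrapolation.
temps_c = [0.0, 25.0, 50.0, 75.0, 100.0]
vols_l  = [22.41 * (t + 273.15) / 273.15 for t in temps_c]

# Least-squares line V = a + b*T, then solve V = 0 for T.
n = len(temps_c)
mean_t = sum(temps_c) / n
mean_v = sum(vols_l) / n
b = sum((t - mean_t) * (v - mean_v) for t, v in zip(temps_c, vols_l)) \
    / sum((t - mean_t) ** 2 for t in temps_c)
a = mean_v - b * mean_t

print(round(-a / b, 2))  # → -273.15, the temperature at which V extrapolates to zero
```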
Thomson referred to this minimum lowest temperature as absolute zero and argued that a temperature scale be adopted that had absolute zero as the lowest value on the scale. Today, that temperature scale bears his name. Scientists and engineers have been able to cool matter down to temperatures close to -273.15°C, but never below it. In the process of cooling matter to temperatures close to absolute zero, a variety of unusual properties have been observed. These properties include superconductivity, superfluidity and a state of matter known as a Bose-Einstein condensate.
Temperature is what the thermometer reads. But what exactly is temperature a reflection of? The concept of an absolute zero temperature is quite interesting and the observation of remarkable physical properties for samples of matter approaching absolute zero makes one ponder the topic more deeply. Is there something happening at the particle level which is related to the observations made at the macroscopic level? Is there something deeper to temperature than simply the reading on a thermometer? As the temperature of a sample of matter increases or decreases, what is happening at the level of atoms and molecules? This is the topic of the next page in Lesson 1.
Check Your Understanding
1. In the discussion on the calibration of a thermometer, it was mentioned that there was a linear relationship between temperature and the height of the liquid in the column. What if the relationship was not linear? Could a thermometer still be calibrated if temperature and the column height of the liquid were not related by a linear relationship?
2. Which is the smaller temperature increment - a degree Celsius or a degree Fahrenheit? Explain.
3. Perform the appropriate temperature conversions in order to fill in the blanks in the table below.
Celsius (°C)    Fahrenheit (°F)    Kelvin (K)
a.     0              ___              ___
b.    ___             212              ___
c.    ___             ___               0
d.     78             ___              ___
e.    ___              12              ___
I'm not an EM guy, but you might find it easier to understand the words in the context of something like heat flow.
"Flux" is the total amount of heat flowing through a surface (measured as heat energy / second, i.e. power in watts)
"Flux density" is the flux per unit area, or sometimes the amount of heat generated per unit volume (for example heating something in a microwave oven, or heat generated by nuclear reactions).
FWIW, temperature is a scalar field (which is simpler to visualize than the vector fields in EM) and the direction of the flux is therefore the gradient of the temperature field. If you visualize a temperature distribution on a plane by drawing a contour map, the flux direction is at right angles to the temperature contour lines, and the magnitude of the flux density is higher where the temperature contours are closer together.
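In one dimension the same ideas reduce to Fourier's law, q = -k dT/dx: flux density in W/m², and flux in watts once multiplied by area. A sketch (the glass conductivity and window dimensions are illustrative):

```python
def heat_flux_1d(k, t_hot, t_cold, thickness_m):
    """Fourier's law in 1-D: flux density magnitude q = k * dT/dx in W/m^2,
    flowing from the hot side toward the cold side."""
    return k * (t_hot - t_cold) / thickness_m

# A steeper gradient (contours closer together) means a larger flux density.
k_glass = 0.8  # W/(m*K), a typical value for window glass
q = heat_flux_1d(k_glass, t_hot=20.0, t_cold=0.0, thickness_m=0.004)
print(q)        # → 4000.0 W/m^2 through a 4 mm pane
print(q * 1.5)  # total flux (watts) through a 1.5 m^2 window
```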
Jun 20, 2012, 12:27 PM | #1
Confused about image size in plane, concave, or convex mirrors
So I know these equations
1/f = 1/p + 1/i
m = -i/p
f: focal length
p: object distance from mirror
i: image distance from mirror
Let's say that I have an object in front of a concave or convex mirror with the same |f|. p is much larger than the radius of curvature. Based on the equations above, |m|<1. Also, |m| should be nearly same for both mirrors. Does this mean that the images produced by the two mirrors would appear to be about the same size?
Also, if I replace the concave or convex mirror with a plane mirror (while keeping p the same), |m| would be 1 for the image. Does this mean that the image produced by the plane mirror would appear to be larger than the image produced by the concave/convex mirrors?
(I tried to check this using a spoon, but the spoons I have are not very reflective...)
Jun 20, 2012, 02:56 PM | #2
Yes to all.
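For anyone who wants to check it numerically rather than with a spoon, here is a sketch using the two equations from the first post (sign convention assumed: f > 0 for concave, f < 0 for convex; a plane mirror is the limit f → infinity, giving |m| = 1):

```python
def magnification(f, p):
    """Mirror equation 1/f = 1/p + 1/i, solved for i, with m = -i/p."""
    i = f * p / (p - f)   # image distance from 1/i = 1/f - 1/p
    return -i / p

p = 1000.0                       # object distance much larger than |f| = 10
m_concave = magnification(10.0, p)
m_convex  = magnification(-10.0, p)

print(abs(m_concave) < 1 and abs(m_convex) < 1)    # → True: both reduced
print(abs(abs(m_concave) - abs(m_convex)) < 1e-3)  # → True: nearly equal sizes
```

So at large p both curved mirrors give small, nearly equal |m|, while the plane mirror's image (|m| = 1) would appear larger, as the answer above confirms.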
by Darrel Emerson, AA7FV and G3SYS. QST, published by the ARRL, Vol. 79, No. 2, February 1995, p. 21.
(Copyright on the original article is held by the ARRL.)
In July 1991 and in May 1994, partial solar eclipses were visible from the QTH of AA7FV in Tucson, Arizona. For both events, a simple Yagi antenna was left pointing in advance at the mid-point of the eclipse, and the total noise output power from a ham radio receiver, tuned either to 146 MHz or to 436 MHz, was recorded.
During the July 1991 event, the sun was quite active. The trace below shows the signal recorded at 146 MHz over a period of 6 hours. The vertical dashed lines show the times of start, maximum obscuration, and ending of the optical eclipse. The base noise level gradually rises as the sun drifts into the antenna beam, with strong solar bursts superposed.
The solar noise drops fairly abruptly about 1 hour after the onset of the eclipse, but returns about 13 minutes before the end of the eclipse. This shows that the region producing the radio emission must be fairly small, and definitely not covering any substantial part of the optical disk. By about 21:00 UT, an hour or so after the end of the eclipse, the sun is drifting out of the Yagi antenna beam.
By calculating where the leading edge of the moon was over the sun at the point where the radio radiation disappears, and combining this with knowledge of where the trailing edge of the moon was as the radio emission reappears, it is possible to pinpoint the source of radio emission very precisely. Also, by measuring just how rapidly the radio emission disappeared and reappeared, it is possible to estimate the size of the radiating region on the solar surface. This comes out to about one hundredth of a degree.
The image below shows the optical sun photographed with a telescope and CCD camera just before the start of the eclipse, by Tom Folkers in Tucson, Arizona. Several sunspot regions can be seen. The red spot shows the location and approximate extent of the source of intense radio emission. The radiation seems to be associated with, although not completely coincident with, a fairly prominent sunspot group.
In May 1994 the sun was very quiet, unlike the July 1991 event. Radio emission at 146 MHz was barely discernible from the general galactic background radiation. The situation was a little clearer at 436 MHz however. The solid trace below shows the emission detected at 436 MHz over an 8-hour period on the day of the eclipse, but normalized by an identical observation made the following day. This normalization removes the effect of galactic background emission drifting through the antenna beam. The vertical, dashed lines mark the times of onset, maximum obscuration, and ending of the optical eclipse.
The lower, dashed line in the figure shows the change in solar emission expected if the radio emission comes from a smooth disk, equal in extent to the optical sun. The two curves match fairly well, showing that, at this part of the solar sunspot cycle, most of the emission at 436 MHz comes from a fairly smooth solar disk, in complete contrast to the 1991 measurements at 146 MHz.
Please refer to the original article in QST, February 1995 for details, including how the measurements were made, and for a derivation of solar brightness temperatures and an estimate of the flux of the solar noise storm.
University of Southern California researchers demonstrate a more efficient use of graphene photovoltaics
August 2nd, 2010
Is it possible to imagine people powering their cellular phone or music/video device while jogging on a sunny day?
A University of Southern California team has produced flexible transparent carbon atom films that may have great potential for a brand new variety of solar cells.
In a paper recently published by the journal ACS Nano, researchers stated that organic photovoltaic (OPV) cells have been proposed as a route to low-cost energy because of their ease of manufacture, light weight, and compatibility with flexible substrates.
The new work shows that graphene, an extremely conductive and highly transparent form of carbon made up of atoms-thick sheets of carbon atoms, has high potential to fill this role.
While graphene’s existence has been known for many years, it has only been studied extensively since 2004 because of the impracticality of manufacturing it in high quality and in quantity.
The University of Southern California team has produced graphene/polymer sheets as large as about 150 square centimeters, which in turn can be used to create dense arrays of flexible organic photovoltaic (OPV) cells.
These organic photovoltaic (OPV) devices convert solar radiation to electricity, although not as efficiently as silicon cells.
The energy provided by sunlight on a sunny day is about 1,000 watts per square meter. For every 1,000 watts of sunlight that hits a one-square-meter section of a standard silicon solar cell, about 140 watts of electricity will be generated, a conversion efficiency of roughly 14 percent. Organic solar cells are less efficient; the graphene-based cell would convert that same 1,000 watts of sunlight into only about 13 watts, roughly 1.3 percent.
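The arithmetic is simply input power times conversion efficiency. A sketch, assuming the commonly quoted efficiencies of about 14 percent for silicon cells and about 1.3 percent for graphene OPVs:

```python
def electrical_output_w(irradiance_w_m2, area_m2, efficiency):
    """Power delivered by a photovoltaic panel: input power x conversion efficiency."""
    return irradiance_w_m2 * area_m2 * efficiency

# One square meter in full sun (~1,000 W/m^2):
print(round(electrical_output_w(1000, 1.0, 0.14)))   # → 140 W for ~14% silicon
print(round(electrical_output_w(1000, 1.0, 0.013)))  # → 13 W for ~1.3% graphene OPV
```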
But what graphene organic photovoltaic (OPV) cells lack in efficiency can potentially be compensated for by their lower price and greater physical flexibility.
Researchers think it may eventually be possible to cover extensive areas with inexpensive solar-cell layers, turning surfaces such as newspapers, magazines, or clothing into power generators.
Meanwhile, Prof. Ruoff and his colleagues in the mechanical engineering department at the University of Texas at Austin are studying the basic science behind graphene-based ultracapacitors for use in electronics and other fields.
Prof. Ruoff says batteries are relatively slow: they can store energy but take time to charge up, and then they release that energy slowly over time.
Ultracapacitors can be charged very quickly, within seconds, and discharge in a short time, but, today, they can’t store very much electrical energy.
The development of stable, less costly ultracapacitors is seen as a key step toward using wind- or solar-generated power, particularly if researchers can find ways to let capacitors store energy for longer, which is not yet possible.
Even with their current storage capacity, the graphene devices could provide quick bursts of energy when needed in certain green-technology applications.
They could be used, for example, to capture the energy generated in braking an automobile or train, store it for a short time, and then use it for the electrical needs of the vehicle (i.e., starting the engine or accelerating).
About the writer - Sophia H. Walker writes for the solar battery charger blog, her personal hobby site related to tips to help individuals save electricity using solar powered energy for small accessories.
If you watch chimpanzees from different parts of Africa, you’ll see them doing very different things. Some use sticks to extract honey from beehives, while others prefer leaves. Some use sticks as hunting spears and others use them to fish for ants. Some drum on branches to get attention and others rip leaves between their teeth.
These behaviours have been described as cultural traditions; they’re the chimp equivalent of the musical styles, fashion trends and social rules of humans. They stem from the readiness of great apes to ape one another and pick up behaviours from their peers. But a new study complicates our understanding of chimp cultures. Kevin Langergraber at the Max Planck Institute for Evolutionary Anthropology has found that much of this variation in behaviour could have a genetic influence.
Their astounding selflessness is driven by an unusual way of handing down their genes, which means that females actually have more genes in common with their sisters than they do with their own daughters. And that makes them more likely to put the good of their colony sisters over their own reproductive legacy.
The more related the workers are to each other, the more willing they will be to co-operate. So you might expect colonies of social insects with fairly low genetic diversity to fare best. But that’s not the case, and Heather Mattila from Cornell University has found that exactly the opposite is true for bees.
Bee queens will often mate with several males (a strategy called polyandry). It’s an unexpected tactic, for it means that the queen’s daughters will be more genetically diverse and slightly less related to each other than they would be if they all shared the same father. And that could mean that selfless co-operation becomes less likely.
Despite this potential pitfall, social insect queens do frequently sleep with many males, and all species of honey bee do this. There must be some benefit, and Mattila has found it. Together with Thomas Seeley, she showed that a genetically diverse colony is actually a more productive and a stronger one.
The word geothermal comes from the Greek words geo (earth) and therme (heat). So, geothermal energy is heat from within the earth. We can use the steam and hot water produced inside the earth to heat buildings or generate electricity. Geothermal energy is a renewable energy source because the water is replenished by rainfall and the heat is continuously produced inside the earth.
1 College Circle has two geothermal wells, which provide two-thirds of the heating and cooling for the house. Though they are modified systems, they provide enough energy to heat and cool the second and third floors.
Other geothermal facts of interest:
- California has 33 geothermal power plants and is the largest producer of geothermal energy in the world.
- The EPA has determined that geothermal heat pumps are the most energy efficient, environmentally clean and cost effective systems for temperature control.
In this activity, you will work with a reaction between strands of DNA.
Remember that DNA is a sequence of the letters A, C, T, and G. A binds with T and C binds with G. Solutions are specified by the DNA sequence, such as ACAC or TG, and by the concentration in units of micromolar (µM = 10^-6 M).
Consider mixing ACAC with TG. Predict what reactants and products should be present in the final solution. (Please give your final answer in total nanomoles (10^-9 moles) for each species in the solution.)
Important: Please describe your complete procedure and the key quantities you measure. Points are based on whether or not you explain your procedure in sufficient detail for us to know what you did. You are not graded on the method you used; all approaches that produce accurate results are fine.
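As a sanity check on a prediction, the bookkeeping can be sketched in a few lines. Everything in this snippet is an assumption for illustration: the starting amounts (100 nmol of each strand), complete binding, and the stoichiometry that each ACAC strand can pair with two TG strands (TG being complementary to AC). The virtual lab, not this sketch, is the authority on what actually happens.

```python
# Hypothetical limiting-reagent bookkeeping for mixing ACAC with TG.
# Assumed stoichiometry: each ACAC binds two TG strands (TG pairs with AC).
def mix(acac_nmol, tg_nmol, tg_per_acac=2):
    """Return (free ACAC, free TG, duplex product) in nanomoles."""
    product = min(acac_nmol, tg_nmol / tg_per_acac)
    return (acac_nmol - product,
            tg_nmol - tg_per_acac * product,
            product)

# Assumed starting amounts: 100 nmol of each strand. TG runs out first,
# leaving half the ACAC unreacted.
free_acac, free_tg, product = mix(100.0, 100.0)
print(free_acac, free_tg, product)  # 50.0 0.0 50.0
```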
Predict what should be present in the solution resulting from the above mixture.
Now check your answer in the virtual lab (Note: you may have to dilute your reactants before mixing them)
Experimental part: DONE
Do the results agree with your prediction? Please justify any differing results.
What reactant and how much of it is left after the reaction?
Well done! You did a good job!
Rationale and basic overview
The reason for having a compiler is that it is useful to have
certain common information between many different web pages
to implement a consistent style among the pages. To a certain
extent, this can be implemented by the web server via
Server Side Includes, but that feature is not uniformly
available. In addition, the webc compiler can do
a few other tricks.
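webc itself is written in Perl and has its own directive syntax; purely as an illustration of the idea, here is a minimal Python sketch of a page compiler that expands made-up `#include(name)` directives from a table of shared fragments — roughly the service that Server Side Includes would otherwise provide.

```python
import re

# Illustration only: the directive syntax and fragment names here are
# invented, not webc's. The point is just the compile step: shared
# fragments are substituted into each page for a consistent style.
FRAGMENTS = {
    "header": "<h1>My Site</h1>",
    "footer": "<p>Contact: [email protected]</p>",
}

def compile_page(source, fragments=FRAGMENTS):
    """Replace #include(name) directives with shared fragment text."""
    return re.sub(r"#include\((\w+)\)",
                  lambda m: fragments[m.group(1)],
                  source)

page = "#include(header)\n<p>Body text.</p>\n#include(footer)"
print(compile_page(page))
```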
webc is implemented in the Perl programming language.
Usually the implementation language of a program is not important.
However, it is here, because it is possible for the user to add
code to the compiler at run time. This feature allows for a certain
amount of flexibility and oddity.
Source for this file can be found in overview.wc.
The immature insect, or larva, escapes from the egg and acts essentially as an eating machine, gathering nutrients. The larva has no wings or reproductive organs and has a form which does not resemble the adult of the species, but appears to be more like its ancient pre-insect ancestor. The larva's body is generally wormlike and may have no legs or may have extra legs to support the long body. As the larva eats, it grows extraordinarily quickly, molting a specific number of times before it pupates. The larvae of beetles are commonly called grubs, the larvae of flies maggots, and the larvae of butterflies and moths caterpillars.
When the larva has grown large enough and ingested enough food, it enters a stage of apparent dormancy as a pupa. The larva usually protects itself, either in a secure hiding spot, within a shelter of its own construction, or inside of a cocoon spun of silk from a gland near its mouth, as it prepares to pupate. It appears to be resting, but in fact, nothing could be further from the truth. It is during this phase that the metamorphosis occurs. The insect must completely rearrange its internal and external structure and create some entirely new structures such as reproductive organs and wings.
The mature insect represents the true form of the species: it is the larva which departs from the characteristic body plan. The adult emerges head-first from the skin of the pupa, fills its wings with blood and, immediately or within a few hours, it flies away. In all insect species, only the adult can reproduce. Indeed, in many species the adult's sole purpose is reproduction; many adult moths lack a feeding apparatus entirely and live only a few days, just long enough to mate, whereas houseflies, for example, are well known for their adult eating habits.
Other insects undergo what is termed incomplete metamorphosis. The immature individuals of the species, called nymphs if they live on land and naiads if they live in water, closely resemble the adult form, but with significant differences. Such features as coloration and the shape of the body may differ between the nymph and the adult, but the most significant difference is that the nymph has no wings and cannot reproduce. All of these insects go through a procedure called molting numerous times as they grow. When the individual becomes too large to fit comfortably inside of its skeleton, a hormone is produced which triggers molting. The immature insect then develops a soft protective layer underneath its hard skeleton, splits this shell and pulls itself out of it, and lets its new, soft skeleton harden. In all incompletely metamorphosing winged insect species, the wings only appear with the final molt, as they would otherwise be shed with the skeleton and they cannot be replaced.
In a hierarchically structured region, the average density increases as you go down the levels of the hierarchy to smaller and smaller scales. If there are dense star-forming cores at the bottom of the hierarchy, where the densities are largest and the sizes are smallest, then the fractional mass in the form of these cores increases as their level is approached. This is because more and more interclump gas is removed from the scale of interest as the densest substructures are approached. The fractional mass of cores is proportional to the instantaneous efficiency of star formation if the cores form stars. Therefore the local efficiency of star formation in a hierarchical cloud increases as the average density increases. The efficiency on the scale of a galaxy where the average density is low is ~ 1%; on the scale of an OB association it is ~ 5%, and in a cloud core where a bound cluster forms, it is ~ 40%. Bound cluster formation requires a high efficiency so there is a significant gravitating mass of stars remaining after the gas leaves. It follows that in hierarchical clouds, the probability of forming a bound cluster is automatically highest where the density is highest. Star clusters are the inner bound regions of a hierarchy of stellar and gaseous structures (Elmegreen 2008).
Outside the inner region, stars that form are not as likely to be bound to each other after the gas leaves. Then there are loose stellar groups, unbound OB subgroups, OB associations, and so on up to star complexes. Flocculent spiral arms and giant spiral-arm clouds are the largest scale on which gravitational instabilities drive the hierarchy of cloud and star-formation structures.
The hierarchy of young stellar structure continues inside bound clusters as well. Smith et al. (2005) found several levels of stellar subclustering inside the rho Ophiuchi cloud, and Dahm & Simon (2005) found 4 subclusters with slightly different ages ( ± 1 Myr) in NGC 2264. Feigelson et al. (2009) observed X-rays from young stars in NGC 6334. The X-ray maps are nearly complete to stars more massive than 1 solar mass, and their distribution is hierarchical, with clusters of clusters inside this region. Gutermuth et al. (2005) studied azimuthal profiles of clusters and found that they have intensity fluctuations that are much larger than what would be expected from the randomness of stellar positions; the stars are sub-clustered in a statistically significant way. Sánchez & Alfaro (2009) measured the fractal dimension and hierarchical-Q parameter for 16 Milky Way clusters, using the ratio of cluster age to size as a measure of youth. They found that stars in younger and larger clusters are more clumped than stars in older and smaller clusters. Greater clumping means they have lower Q and lower fractal dimension. Schmeja et al. (2008) measured Q for several young clusters. For IC 348, NGC 1333, and Ophiuchus, Q is lower (more clumpy) for class 0/1 objects (young) than for class 2/3 objects (old). Among four of the subclumps in Ophiuchus, Q is lower and the region is more gassy where class 0/1 dominates; Q is also lower for class 0/1 alone than it is for class 2/3 in Ophiuchus.
Prestellar cores are spatially correlated too. Johnstone et al. (2000) derived a power-law 2-point correlation function from 10^3.8 AU to 10^4.6 AU for 850 µm sources in Ophiuchus, which means they are spatially correlated in a hierarchical fashion. Johnstone et al. (2001) found a similar power law from 10^3.6 AU to 10^5.1 AU for 850 µm sources in Orion. Enoch et al. (2006) showed that 1.1 mm pre-stellar clumps in Perseus have a power-law 2-point correlation function from 10^4.2 AU to 10^5.4 AU. Young et al. (2006) found similar correlated structure for pre-stellar cores from 10^3.6 AU to 10^5 AU in Ophiuchus. These structures could go to larger scales, but the surveys end there.
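The two-point correlation statistic referenced in these surveys boils down to counting pairs as a function of separation: a hierarchically clustered distribution shows a power-law excess of close pairs over a random one. Below is a bare-bones sketch of the pair counting only, not the actual estimators used in the cited papers (which also normalize against random catalogs).

```python
import math

# Histogram pairwise separations of 2-D positions into given bin edges.
# A clustered point set piles up counts in the small-separation bins.
def pair_counts(points, bins):
    """Return counts of point pairs whose separation falls in each bin."""
    counts = [0] * (len(bins) - 1)
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            r = math.dist(points[i], points[j])
            for k in range(len(counts)):
                if bins[k] <= r < bins[k + 1]:
                    counts[k] += 1
                    break
    return counts

# Tiny check: three collinear points at x = 0, 1, 2 give two pairs at
# separation 1 and one pair at separation 2.
pts = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
print(pair_counts(pts, [0.0, 1.5, 3.0]))  # [2, 1]
```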
In summary, clusters form in the cores of the hierarchy of interstellar structures and they are themselves the cores of the stellar hierarchy that follows this gas. Presumably, this hierarchy comes from self-gravity and turbulence. Gas structure continues to sub-stellar scales. The densest regions, which are where individual stars form, are always clustered into the next-densest regions. Stars form in the densest regions, some independently and some with competition for gas, and then they move around, possibly interact a little, and ultimately mix together inside the next-lower density region. That mixture is the cluster. More and more sub-clusters mix over time until the cloud disrupts. Simulations of such hierarchical merging have been done by many groups, such as Bonnell & Bate (2006) and Maschberger et al. (2010). Because of hierarchical structure, the efficiency is automatically high on small scales where the gas is dense.
Annu. Rev. Astron. Astrophys. 1998. 36
Copyright © 1998 by . All rights reserved
2.5. Far-Infrared Continuum
A significant fraction of the bolometric luminosity of a galaxy is absorbed by interstellar dust and re-emitted in the thermal IR, at wavelengths of roughly 10-300 µm. The absorption cross section of the dust is strongly peaked in the ultraviolet, so in principle the FIR emission can be a sensitive tracer of the young stellar population and SFR. The IRAS survey provides FIR fluxes for over 30,000 galaxies (Moshir et al 1992), offering a rich reward to those who can calibrate an accurate SFR scale from the 10- to 100-µm FIR emission.
The efficacy of the FIR luminosity as an SFR tracer depends on the contribution of young stars to heating of the dust and on the optical depth of the dust in the star forming regions. The simplest physical situation is one in which young stars dominate the radiation field throughout the UV-visible and the dust opacity is high everywhere, in which case the FIR luminosity measures the bolometric luminosity of the starburst. In such a limiting case the FIR luminosity is the ultimate SFR tracer, providing what is essentially a calorimetric measure of the SFR. Such conditions roughly hold in the dense circumnuclear starbursts that power many IR-luminous galaxies.
The physical situation is more complex in the disks of normal galaxies, however (e.g. Lonsdale & Helou 1987, Cox & Mezger 1989, Rowan-Robinson & Crawford 1989). The FIR spectra of galaxies contain both a "warm" component associated with dust around young star-forming regions (~ 60 µm) and a cooler "infrared cirrus" component (≳ 100 µm), which is associated with more extended dust heated by the interstellar radiation field. In blue galaxies, both spectral components may be dominated by young stars, but in red galaxies, where the composite stellar continuum drops off steeply in the blue, dust heating from the visible spectra of older stars may be very important.
The relation of the global FIR emission of galaxies to the SFR has been a controversial subject. In late-type star-forming galaxies, where dust heating from young stars is expected to dominate the 40- to 120-µm emission, the FIR luminosity correlates with other SFR tracers such as the UV continuum and Hα luminosities (e.g. Lonsdale & Helou 1987, Sauvage & Thuan 1992, Buat & Xu 1996). However, early-type (S0-Sab) galaxies often exhibit high FIR luminosities but much cooler, cirrus-dominated emission. This emission has usually been attributed to dust heating from the general stellar radiation field, including the visible radiation from older stars (Lonsdale & Helou 1987, Buat & Deharveng 1988, Rowan-Robinson & Crawford 1989, Sauvage & Thuan 1992, 1994, Walterbos & Greenawalt 1996). This interpretation is supported by anomalously low UV and Hα emission (relative to the FIR luminosity) in these galaxies. However, Devereux & Young (1990) and Devereux & Hameed (1997) have argued that young stars dominate the 40- to 120-µm emission in all of these galaxies, so that the FIR emission directly traces the SFR. They have provided convincing evidence that young stars are an important source of FIR luminosity in at least some early-type galaxies, including barred galaxies with strong nuclear starbursts and some unusually blue objects (Section 4). On the other hand, many early-type galaxies show no independent evidence of high SFRs, suggesting that the older stars or active galactic nuclei (AGNs) are responsible for much of the FIR emission. The Space Infrared Telescope Facility, scheduled for launch early in the next decade, should provide high-resolution FIR images of nearby galaxies and clarify the relationship between the SFR and IR emission in these galaxies.
The ambiguities discussed above affect the calibration of SFRs in terms of FIR luminosity, and there probably is no single calibration that applies to all galaxy types. However, the FIR emission should provide an excellent measure of the SFR in dusty circumnuclear starbursts. The SFR vs L_FIR conversion is derived using synthesis models as described above. In the optically thick limit, it is only necessary to model the bolometric luminosity of the stellar population. The greatest uncertainty in this case is adoption of an appropriate age for the stellar population; this may be dictated by the time scale of the starburst itself or by the time scale for the dispersal of the dust (so the τ >> 1 approximation no longer holds). Calibrations have been published by several authors under different assumptions about the star formation time scale (e.g. Hunter et al 1986, Lehnert & Heckman 1996, Meurer et al 1997, Kennicutt 1998). Applying the models of Leitherer & Heckman (1995) for continuous bursts of age 10-100 Myr and adopting the IMF in this paper yields the following relation (Kennicutt 1998):

SFR (M☉ yr^-1) = 4.5 × 10^-44 L_FIR (erg s^-1)    (4)
where L_FIR refers to the IR luminosity integrated over the full mid- and far-IR spectrum (8-1000 µm), though for starbursts most of this emission will fall in the 10- to 120-µm region (readers should beware that the definition of L_FIR varies in the literature). Most of the other published calibrations lie within ± 30% of Equation 4. Strictly speaking, the relation given above applies only to starbursts with ages less than 10^8 years, where the approximations applied are valid. In more quiescent, normal star-forming galaxies, the relation will be more complicated; the contribution of dust heating from old stars will tend to lower the effective coefficient in Equation 4, whereas the lower optical depth of the dust will tend to increase the coefficient. In such cases, it is probably better to rely on an empirical calibration of SFR / L_FIR that is based on other methods. For example, Buat & Xu (1996) derived a coefficient of 8 (+8/-3) × 10^-44, valid for galaxies of type Sb and later only, based on IRAS and UV flux measurements of 152 disk galaxies. The FIR luminosities share the same IMF sensitivity as the other direct star formation tracers, and it is important to be consistent when comparing results from different sources.
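Applying the starburst calibration is a one-line conversion. The sketch below assumes the Kennicutt (1998) coefficient of 4.5 × 10^-44 and an L_FIR already integrated over 8-1000 µm in erg/s; as the text notes, the coefficient differs for normal disks.

```python
# SFR [Msun/yr] = 4.5e-44 * L_FIR [erg/s] (Kennicutt 1998), valid for
# starbursts younger than ~1e8 yr; normal disks need another coefficient.
def sfr_from_lfir(l_fir_erg_s, coeff=4.5e-44):
    """Star formation rate in solar masses per year from L_FIR."""
    return coeff * l_fir_erg_s

# Example: a luminous IR galaxy with L_FIR = 1e45 erg/s (~2.6e11 Lsun).
print(sfr_from_lfir(1e45))  # 45.0
```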
March 9, 2009--
This spider-hunting wasp is one of 19 new species recently found in Australia--most of them in Western Australia state, considered a "hotbed of biodiversity," scientists announced.
Spider-hunting wasps hunt down and paralyze spiders, which are later eaten alive by the wasps' developing larvae.
In all, scientists found 11 spiders and spider relatives, 3 crustaceans, 2 insects, a mollusc, a worm, and a sponge.
(Related pictures: "Cyanide Millipede, Huge Spider Among New Species.")
"The discovery of new species of life on Earth is an ongoing and exciting process," study author Mark Harvey, head of terrestrial zoology at the Western Australian Museum, said in a statement.
"The future of all life on this fragile planet depends on how quickly we can recognize, document, and describe new species," added Harvey, whose discoveries appeared recently in the journal Records of the Western Australian Museum.
Photograph courtesy Mark Harvey, Western Australian Museum
Reading the recent article Electrons Act Like Waves (from the Physical Review Focus series, a highly recommended, lay[wo]man-friendly feed for your newsreader), i've discovered one of those peculiar stories that make the history of physics even more enjoyable than it would be by its purely scientific side alone.
The article tells the story of Davisson and Germer's discovery of the so-called wave-like nature of electrons. As explained in every textbook, they set up an experiment consisting in scattering electrons through a (nickel) crystal, and observed the familiar fringes that one obtains when a wave crosses a grating with alternating slits in a wall.
It was 1927, and i always pictured Davisson and Germer as intrepid experimenters boldly trying to confirm de Broglie's 1924 ideas about the wave nature of matter. (Justly enough, an idealisation of this experiment has become the de facto standard presentation of the quantum mechanical world!) The funny thing is that this romantic picture has nearly nothing to do with what really happened. As it comes, D&G were looking for evidence of the atomic structure of metals and knew nothing about de Broglie. After the experiment had been going on for a somewhat sterile period, one of their widgets broke and overheated the nickel plates, which crystallised and made (when used again for scattering electrons) the interference patterns apparent. The experimenters were all but bewildered, and only after Davisson discussed his results with other colleagues during a holiday in England did they realize the importance of their discovery.
That's serendipity at its best. And, of course, it was not the first nor the last time that serendipity gave physicists a helping hand. The Oxford Dictionary gives a precise definition of this beautiful word:
serendipity noun the occurrence and development of events by chance in a happy or beneficial way.
or, even better, this one from Julius H. Comroe (as quoted by Simon Singh)
Serendipity is looking for a needle in a haystack and finding the Farmer's Daughter.
and its all too apt etymology:
ORIGIN 1754: coined by Horace Walpole, suggested by The Three Princes of Serendip, the title of a fairy tale in which the heroes “were always making discoveries, by accidents and sagacity, of things they were not in quest of.”
which in my opinion captures extremely well the kind of discoveries we're discussing. They were by chance, that's true, but not just chance: one needs to be in the quest of something, to begin with.
Another famous (and probably better known) example of serendipity at work is Penzias and Wilson's discovery of the cosmic microwave background. As explained by Ivan Kaminow,
He [Ivan] joked that Penzias was an unusually lucky guy. "Arno Penzias and Bob Wilson were trying to find the source of excess noise in their antenna, where pigeons were roosting," he said. "They spent hours searching for and removing the pigeon dung. Still the noise remained, and was later identified with the Big Bang." He laughed, "Thus, they looked for dung but found gold, which is just the opposite of the experience of most of us."
The experiment was being conducted at Bell Labs and its aim was to tune an ultra-sensitive microwave receiving system to study radio emissions from the Milky Way. It was only after Penzias talked with Robert H. Dicke (see also this nice memorial (PDF) for more on Dicke) that the mysterious radiation was recognized as the relic of the Big Bang hypothesized by George Gamow some time before. I read the whole story for the first time in Weinberg's marvelous book (required reading), and I've always found it a bit unfair that the Nobel prize went only to Penzias and Wilson.
My third serendipitous example comes also from the skies. In the summer of 1974, Russell Hulse was a 23-year-old graduate student compiling data from the Arecibo Observatory radio telescope in Puerto Rico. The job was a little bit tedious: he was trying to detect periodic radio sources that could be interpreted as pulsars. One of a pulsar's earmarks is its extraordinary regularity (a few nanoseconds deviation per year for a period of about a second). Around 100 pulsars were known back then, all with a stable period with an extremely slow tendency to increase. At the end of the day, the data obtained by the telescope was processed by a computer program written by Russell, which selected candidate signals based on the stability of their period. Those were correlated with later or former observations of the same sky zone, to rule out earth-based, spurious sources. One night, Russell wearily noticed a very weak candidate, so weak that, had it been a mere 4 percent fainter, it would have passed unnoticed. On top of that, its period was too short (about 0.06 seconds) and, even worse, it was variable. Russell was on the verge of discarding it more than once during the following weeks, but eventually he persevered and, helped by his supervisor, Joe Taylor (a.k.a. K1JT), correctly interpreted the observation as a binary pulsar. The rest is history, and Nobel prize history at that. Russell tells the amazing story in his delicious Nobel lecture (PDF), which starts with these telling words:
I would like to take you along on a scientific adventure, a story of intense preparation, long hours, serendipity, and a certain level of compulsive behavior that tries to make sense out of everything that one observes.
I specially like this instance of serendipity, for it shows that, many a time, lucky strikes befall on those who work hard enough to get hit.
Update: I've just found an excellent article by Alan Lightman, Wheels of Fortune, which gives some very nice examples of serendipitous discoveries, as well as a nice discussion. After reading Michael post on serendipity in HEP, i was wondering about non-experimental lucky strikes, and Lightman gives an excellent example: Steve Weinberg's electroweak theory:
Serendipitous discovery strikes not only in the photographic plates, test tubes, and petri dishes of the laboratory. It also can strike in the pencil-and-paper world of theoretical scientists. In the fall of 1967, theoretical physicist Steven Weinberg was working out a new theory of the so-called “weak force,” one of the four fundamental forces of nature, when he discovered, to his surprise, that his new theory was actually two theories in one. Weinberg was approaching the weak force with the seminal idea that pairs of particles it acted upon, electrons and neutrinos for example, might be identical as far as the force is concerned, just as yellow and white tennis balls are identical as far as the game of tennis goes. When he cast this idea into the mathematical language of quantum physics, Weinberg found that his theory necessarily included the electromagnetic force as well as the weak force. The mathematics required the union of the two forces. As he later remarked, “I found in doing this, although it had not been my idea at all to start with, that it turned out to be a theory not only of the weak forces, based on an analogy with electromagnetism; it turned out to be a unified theory of the weak and electromagnetic forces.”
As an aside, i find the constant chatter about matter being some sort of schizophrenic mix between particles and waves misleading, if not outright wrong. As stressed (to no avail, it seems) by Feynman (see and hear him on this and much more in his Vega Lectures, for instance), electrons (and photons, for that matter) are particles. You never detect half an electron, or a pi-fold photon. There are always ticks in a detector (a photo-multiplier, a photographic plate, or trails in a Wilson chamber, for instance). The wave function is not real (neither in the physical nor in the mathematical sense of real), and it 'oscillates' in an imaginary space which is not even 3-dimensional when more than one particle is described. The interference patterns observed (which arise from the addition of complex amplitudes which are squared afterwards) are not associated with single electrons, the only thing wavelike (with a twist) about them being the statistics of their hits on the wall. Even if you believe in Bohm's pilot waves, the particles are still particles! Of course, there's ample room for analogy, but i still find the typical discussions misleading.
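The point about amplitudes can be made concrete in a couple of lines: take two unit complex amplitudes, add them, and square the modulus. The wavelike fringes live in that statistic, not in any single detector tick. (Toy numbers only; no real slit geometry is modelled here.)

```python
import cmath

# |A1 + A2|^2 for two unit amplitudes with a relative phase. Sweeping the
# phase difference across the screen reproduces the fringe pattern.
def intensity(phase_difference):
    """Detection probability (up to normalisation) at a given phase."""
    a1 = 1.0 + 0.0j
    a2 = cmath.exp(1j * phase_difference)
    return abs(a1 + a2) ** 2

print(intensity(0.0))       # 4.0  (constructive: bright fringe)
print(intensity(cmath.pi))  # ~0.0 (destructive: dark fringe)
```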
The discovery of pulsars had also its share of serendipity. They were found, also unexpectedly, by Jocelyn Bell and Anthony Hewish while they were studying scintillating radio signals from compact sources. Jocelyn has written a lively report of their discovery, including the funny story of how they were on the verge of attributing the signals to extraterrestrials, and jokingly used monikers starting with the prefix LGM (for little green men) to name the mysterious radio sources. There's also a good review of the tale over at the Hitchhiker's Guide to the Galaxy funny website.
The pulsar discovery also won a Nobel in 1974. But, curiously enough, the undergraduate hero of the story (Jocelyn Bell) was not awarded this time. One wonders.
As was seen in the previous chapter, the GNU configure and build system
uses a number of different files. The developer must write a few files.
The others are generated by various tools.
The system is rather flexible, and can be used in many different ways.
In describing the files that it uses, I will describe the common case,
and mention some other cases that may arise.
This section describes the files written or generated by the developer
of a package.
Here is a picture of the files which are written by the developer, the
generated files which would be included with a complete source
distribution, and the tools which create those files.
The file names are in rectangles with square corners and the tool names
are in rectangles with rounded corners
(e.g., `autoheader' is the name of a tool, not the name of a file).
The following files would be written by the developer.
`configure.in'
This is the configuration script. This script contains invocations of
autoconf macros. It may also contain ordinary shell script code. This
file will contain feature tests for portability issues. The last thing
in the file will normally be an `AC_OUTPUT' macro listing which
files to create when the builder runs the configure script. This file
is always required when using the GNU configure system. See section Write configure.in.
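As an illustration, a minimal `configure.in' for a hypothetical one-program package might look like the following (the package and file names are invented, and the old-style macro spellings of this document's era are used):

```m4
dnl Process this file with autoconf to produce a configure script.
AC_INIT(hello.c)
AM_INIT_AUTOMAKE(hello, 1.0)
AM_CONFIG_HEADER(config.h:config.in)
AC_PROG_CC
AC_CHECK_HEADERS(unistd.h)
AC_OUTPUT(Makefile)
```

The `AC_OUTPUT' call at the end lists the files which the builder's configure run will create.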
`Makefile.am'
This is the automake input file. It describes how the code should be
built. It consists of definitions of automake variables. It may also
contain ordinary Makefile targets. This file is only needed when using
automake (newer tools normally use automake, but there are still older
tools which have not been converted, for which the developer writes
`Makefile.in' directly). See section Write Makefile.am.
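A minimal `Makefile.am' for a hypothetical package which builds a single program could be as short as this (the names are invented for illustration):

```make
## Process this file with automake to produce Makefile.in.
bin_PROGRAMS = hello
hello_SOURCES = hello.c hello.h
```

From these two variable definitions automake generates the full set of build, install, clean, and distribution targets.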
`acconfig.h'
When the configure script creates a portability header file, by using
`AM_CONFIG_HEADER' (or, if not using automake,
`AC_CONFIG_HEADER'), this file is used to describe macros which are
not recognized by the `autoheader' command. This is normally a
fairly uninteresting file, consisting of a collection of `#undef'
lines with comments. Normally any call to `AC_DEFINE' in
`configure.in' will require a line in this file. See section Write acconfig.h.
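For example, if `configure.in' contained a call such as `AC_DEFINE(USE_FOO)' for a macro which `autoheader' does not know about (`USE_FOO' is an invented name), `acconfig.h' would need a matching entry:

```c
/* Define if the (hypothetical) foo facility should be used.  */
#undef USE_FOO
```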
`acinclude.m4'
This file is not always required. It defines local autoconf macros.
These macros may then be used in `configure.in'. If you don't need
any local autoconf macros, then you don't need this file at all. In
fact, in general, you never need local autoconf macros, since you can
put everything in `configure.in', but sometimes a local macro is
convenient.
Newer tools may omit `acinclude.m4', and instead use a
subdirectory, typically named `m4', and define
`ACLOCAL_AMFLAGS = -I m4' in `Makefile.am' to force
`aclocal' to look there for macro definitions. The macro
definitions are then placed in separate files in that directory.
The `acinclude.m4' file is only used when using automake; in older
tools, the developer writes `aclocal.m4' directly, if it is needed.
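As a sketch of the newer layout, a local macro definition might be placed in a file such as `m4/foo.m4' (the macro and file names here are invented):

```m4
dnl m4/foo.m4: define a local macro for use in configure.in.
AC_DEFUN([MY_CHECK_FOO],
[AC_CHECK_LIB(foo, foo_init)])
```

With `ACLOCAL_AMFLAGS = -I m4' in `Makefile.am', running `aclocal' will copy the definition into `aclocal.m4' whenever `configure.in' invokes `MY_CHECK_FOO'.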
The following files would be generated by the developer.
When using automake, these files are normally not generated manually
after the first time. Instead, the generated `Makefile' contains
rules to automatically rebuild the files as required. When
`AM_MAINTAINER_MODE' is used in `configure.in' (the normal
case in Cygnus code), the automatic rebuilding rules will only be
defined if you configure using the `--enable-maintainer-mode' option.
When using automatic rebuilding, it is important to ensure that all the
various tools have been built and installed on your `PATH'. Using
automatic rebuilding is highly recommended, so much so that I'm not
going to explain what you have to do if you don't use it.
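When generating these files for the first time (or without the automatic rules), the usual order of invocation is as follows; this is a sketch assuming the classic tool behaviour described in this chapter:

```sh
$ aclocal                  # writes aclocal.m4 from configure.in and acinclude.m4
$ autoconf                 # writes configure from configure.in and aclocal.m4
$ autoheader               # writes config.in from configure.in and acconfig.h
$ automake --add-missing   # writes Makefile.in and copies in missing support files
```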
`configure'
This is the configure script which will be run when building the
package. This is generated by `autoconf' from `configure.in'
and `aclocal.m4'. This is a shell script.
`Makefile.in'
This is the file which the configure script will turn into the
`Makefile' at build time. This file is generated by
`automake' from `Makefile.am'. If you aren't using automake,
you must write this file yourself. This file is pretty much a normal
`Makefile', with some configure substitutions for certain entries.
`aclocal.m4'
This file is created by the `aclocal' program, based on the
contents of `configure.in' and `acinclude.m4' (or, as noted in
the description of `acinclude.m4' above, on the contents of an
`m4' subdirectory). This file contains definitions of autoconf
macros which `autoconf' will use when generating the file
`configure'. These autoconf macros may be defined by you in
`acinclude.m4' or they may be defined by other packages such as
automake, libtool or gettext. If you aren't using automake, you will
normally write this file yourself; in that case, if `configure.in'
uses only standard autoconf macros, this file will not be needed at all.
`config.in'
This file is created by `autoheader' based on `acconfig.h' and
`configure.in'. At build time, the configure script will define
some of the macros in it to create `config.h', which may then be
included by your program. This permits your C code to use preprocessor
conditionals to change its behaviour based on the characteristics of the
host system. This file may also be called `config.h.in'.
`stamp-h.in'
This rather uninteresting file, which I omitted from the picture, is
generated by `automake'. It always contains the string
`timestamp'. It is used as a timestamp file indicating whether
`config.in' is up to date. Using a timestamp file means that
`config.in' can be marked as up to date without actually changing
its modification time. This is useful since `config.in' depends
upon `configure.in', but it is easy to change `configure.in'
in a way which does not affect `config.in'.
This section describes the files which are created at configure and
build time. These are the files which somebody who builds the package
will see.
Of course, the developer will also build the package. The distinction
between developer files and build files is not that the developer does
not see the build files, but that somebody who only builds the package
does not have to worry about the developer files.
Here is a picture of the files which will be created at build time.
`config.status' is both a created file and a shell script which is
run to create other files, and the picture attempts to show that.
This is a description of the files which are created at build time.
`config.status'
The first step in building a package is to run the `configure'
script. The `configure' script will create the file
`config.status', which is itself a shell script. When you first
run `configure', it will automatically run `config.status'.
A `Makefile' derived from an automake generated `Makefile.in'
will contain rules to automatically run `config.status' again when
necessary to recreate certain files if their inputs change.
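The regeneration rule which automake places in the `Makefile' looks roughly like the following (simplified here for illustration; the real rule has additional dependencies):

```make
# Rerun config.status to rebuild Makefile when its inputs change.
Makefile: Makefile.in config.status
	CONFIG_FILES=$@ CONFIG_HEADERS= ./config.status
```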
`Makefile'
This is the file which make will read to build the program. The
`config.status' script will transform `Makefile.in' into
`Makefile'.
`config.h'
This file defines C preprocessor macros which C code can use to adjust
its behaviour on different systems. The `config.status' script
will transform `config.in' into `config.h'.
`config.cache'
This file did not fit neatly into the picture, and I omitted it. It is
used by the `configure' script to cache results between runs. This
can be an important speedup. If you modify `configure.in' in such
a way that the results of old tests should change (perhaps you have
added a new library to `LDFLAGS'), then you will have to remove
`config.cache' to force the tests to be rerun.
The autoconf manual explains how to set up a site specific cache file.
This can speed up running `configure' scripts on your system.
`stamp-h'
This file, which I omitted from the picture, is similar to
`stamp-h.in'. It is used as a timestamp file indicating whether
`config.h' is up to date. This is useful since `config.h'
depends upon `config.status', but it is easy for
`config.status' to change in a way which does not affect
`config.h'.
The GNU configure and build system requires several support files to be
included with your distribution. You do not normally need to concern
yourself with these. If you are using the Cygnus tree, most are already
present. Otherwise, they will be installed with your source by
`automake' (with the `--add-missing' option) and `libtoolize'.
You don't have to put the support files in the top level directory. You
can put them in a subdirectory, and use the `AC_CONFIG_AUX_DIR'
macro in `configure.in' to tell `automake' and the
`configure' script where they are.
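For example, to keep the support files in a subdirectory named `support' (the directory name is the developer's choice), `configure.in' would contain a line like:

```m4
AC_CONFIG_AUX_DIR(support)
```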
In this section, I describe the support files, so that you can know what
they are and why they are there.
`ABOUT-NLS'
Added by automake if you are using gettext. This is a documentation
file about the gettext project.
`ansi2knr.c'
Used by an automake generated `Makefile' if you put `ansi2knr'
in `AUTOMAKE_OPTIONS' in `Makefile.am'. This permits
compiling ANSI C code with a K&R C compiler.
`ansi2knr.1'
The man page which goes with `ansi2knr.c'.
`config.guess'
A shell script which determines the configuration name for the system on
which it is run.
`config.sub'
A shell script which canonicalizes a configuration name entered by a
user.
`elisp-comp'
Used to compile Emacs LISP files.
`install-sh'
A shell script which installs a program. This is used if the configure
script can not find an install binary.
`ltconfig'
Used by libtool. This is a shell script which configures libtool for
the particular system on which it is used.
`ltmain.sh'
Used by libtool. This is the actual libtool script which is used, after
it is configured by `ltconfig' to build a library.
`mdate-sh'
A shell script used by an automake generated `Makefile' to pretty
print the modification time of a file. This is used to maintain version
numbers for texinfo files.
`missing'
A shell script used if some tool is missing entirely. This is used by
an automake generated `Makefile' to avoid certain sorts of dependency
problems.
`mkinstalldirs'
A shell script which creates a directory, including all parent
directories. This is used by an automake generated `Makefile'
during installation.
`texinfo.tex'
Required if you have any texinfo files. This is used when converting
Texinfo files into DVI using `texi2dvi' and TeX.
`ylwrap'
A shell script used by an automake generated `Makefile' to run
programs like `bison', `yacc', `flex', and `lex'.
These programs default to producing output files with a fixed name, and
the `ylwrap' script runs them in a subdirectory to avoid file name
conflicts when using a parallel make program.
Go to the first, previous, next, last section, table of contents.
Category: Jellies, Anemones and kin view all from this category
Description Aequorea victoria, also sometimes called the crystal jelly, is a bioluminescent hydrozoan jellyfish, or hydromedusa, found off the west coast of North America. This species is thought to be synonymous with Aequorea aequorea of Osamu Shimomura, the discoverer of green fluorescent protein (GFP). Shimomura, together with Martin Chalfie and Roger Y. Tsien, was awarded the 2008 Nobel Prize in Chemistry for the discovery and development of this protein as an important biological research tool.

Originally the victoria species name was supposed to designate the variant found in the Pacific, and the aequorea designation was used for specimens found in the Atlantic and Mediterranean. The species identification used in GFP purification was later disputed by M.N. Arai and A. Brinckmann-Voss (1980), who decided to separate them on the basis of 40 specimens collected from around Vancouver Island. Osamu Shimomura notes that this species in general shows great variation: from 1961 to 1988 he collected around 1 million individuals in the waters surrounding the Friday Harbor Laboratories of the University of Washington, and in many cases there were pronounced variations in the form of the jellyfish.

In September 2009, Aequorea victoria was spotted in the Moray Firth, an unusual occurrence, as crystal jellies had never before been seen or reported in British waters. The specimen is now on display in Macduff Marine Aquarium in Aberdeenshire, Scotland.
Habitat Open ocean.
Range California, Texas, Western Canada, Alaska, Florida, New England, Northwest, Mid-Atlantic. | <urn:uuid:7ed6c298-4b69-4437-be93-a0e355f1f195> | 2.875 | 357 | Knowledge Article | Science & Tech. | 28.200847 |
A spacecraft returning from the Moon and reentering the Earth's atmosphere far exceeds the speed of the ballistic missiles that "leisurely" fall planetward from the fringes of the atmosphere. When spacecraft heading home from the Moon impact the atmosphere, it is like hitting a fiery wall. Temperatures above that of the surface of the Sun prevail around the exposed forward surfaces. These spacecraft are, in essence, artificial meteors; and it is common knowledge that natural meteors are mostly consumed in their white-hot descent through the atmosphere. To design the spacecraft heat shield, terrestrial wind tunnels are used to simulate flow conditions characteristic of reentry speeds in the neighborhood of 37 000 feet per second for lunar reentry and 50 000 feet per second and above for planetary reentry. If wind tunnel simulation were to prove impossible, the spacecraft designer could never be certain that a fatal error in design concept or some undiscerned flaw in the heat shield might lead to the destruction of the vehicle. To preclude such a grave consequence, rocket-launched unmanned flight vehicles are used, where practicable, to validate the integrity of the vehicle design.
The term hypersonic has been used to define the speed regime above about Mach 5, at which heating of the air becomes an overriding factor in vehicle design. In hypersonic wind tunnel operation (below Mach 10 with gas heated to prevent liquefaction), it is assumed that the air streaming by the body behaves as a perfect gas, as defined by the laws of thermodynamics. However, as space vehicles progress into the regime of orbital entry speeds, the strong shock wave generated near the nose of the body produces a very large temperature increase (and a pressure increase as well) that will change the chemical composition of the streaming air. These are often referred to as "real gas" effects. The oxygen and nitrogen molecules in the air tend to dissociate and may become electrically charged and form an ionized sheath around the entry vehicle. This sheath can block the transmission of electromagnetic radiation. The dramatic communications blackout experienced by the first Mercury capsule during reentry illustrates these phenomena. This is called the regime of hypervelocity flight. To reproduce this group of extreme conditions in terrestrial laboratories, aerodynamicists have designed exotic facilities that are usually called wind tunnels, but that stretch the definition considerably.
A fact of life faced by the designer of a very high speed wind tunnel is the extreme temperature of the air entering the nozzle that accelerates the air to the desired speeds. Just before the nozzle, in the stilling chamber, the wind tunnel air is essentially at rest. After accelerating through the nozzle and impacting the nose of the spacecraft model in the test section, the air is once again at rest. Since no energy has been added between these two stations, the temperatures of the air at both stations will be the same. Now a spacecraft entering the Earth's atmosphere at, for example, Mach 10, will experience a stagnation air temperature at the nose of approximately 8000 °F. The implication for wind tunnels is that somehow the air in the stilling chamber must be heated to 8000 °F (or even higher for higher velocities) to reproduce stagnation reentry temperatures on the model. Such temperatures approach those of the Sun's surface and far exceed those normally available in industrial and scientific laboratories.
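The 8000 °F figure can be checked with the perfect-gas relation for adiabatic stagnation; this is only a rough estimate, since the real-gas dissociation described above absorbs part of the energy. Taking a flight Mach number M = 10, a ratio of specific heats of 1.4, and an assumed high-altitude ambient temperature near 400 °R:

```latex
\frac{T_0}{T} = 1 + \frac{\gamma - 1}{2} M^2 \approx 1 + 0.2\,(10)^2 = 21,
\qquad
T_0 \approx 21 \times 400\ ^\circ\mathrm{R} \approx 8400\ ^\circ\mathrm{R} \approx 7900\ ^\circ\mathrm{F}.
```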
Winds of Mars: Aeolian Activity and Landforms
The following is a short list of general references on Mars used in the
preparation of this slide set. Numerous others exist, although care must
be taken to keep them in historical context due to the extremely rapid
pace with which our understanding of the Red Planet has advanced. No
journal articles are cited; if you wish to pursue a more in-depth study
of Mars, these texts can guide you to those articles.

Blunck J. (1977) Mars and Its Satellites, A Detailed Commentary on the Nomenclature. Exposition Press, Hicksville, New York. 200 pp.

Carr M. H. (1981) The Surface of Mars. Yale University, New Haven. 232 pp.

Carr M. H., ed. (1984) The Geology of Terrestrial Planets. NASA SP-469. 317 pp.

Ezell E. C. and Ezell L. N. (1984) On Mars: Exploration of the Red Planet. NASA SP-4212. 535 pp.

Greeley R. and Iversen J. D. (1985) Wind as a Geologic Process. Cambridge University, Cambridge. 333 pp.

Masursky H., Aksnes K., Hunt G., Marov M., Millman P., Morrison D., Owen T., Shevchenko V., Smith B., and Tejfel V. (1986) International Astronomical Union Working Group for Planetary Nomenclature, Annual Gazetteer of Planetary Nomenclature. U.S. Geological Survey Open-File Report 84-692. 442 pp.

Mutch T. A., Arvidson R. E., Head J. W. III, Jones K. L., and Saunders R. S. (1976) The Geology of Mars. Princeton University, Princeton. 400 pp.

Viking Orbiter Imaging Team (1980) Viking Orbiter Views of Mars. NASA SP-411. 182 pp.

NASA Special Publications (SPs) are available from the Superintendent of Documents, U.S. Government Printing Office, Washington DC.
Jul8-07, 03:11 AM (#1)
1. The problem statement, all variables and given/known data
A block (mass = 2.4 kg) is hanging from a massless cord that is wrapped around a pulley (moment of inertia = 1.5 x 10-3 kg·m2), as the figure shows. Initially the pulley is prevented from rotating and the block is stationary. Then, the pulley is allowed to rotate as the block falls. The cord does not slip relative to the pulley as the block falls. Assume that the radius of the cord around the pulley remains constant at a value of 0.032 m during the block's descent.
Find (a) the angular acceleration of the pulley and (b) the tension in the cord.
2. Relevant equations
Newtons Second law and Newtons second law of rotation
F=ma and Torque=Ialpha
3. The attempt at a solution
I tried using this equation, but I get the wrong answer no matter what I do.
Jul8-07, 05:05 AM (#2)
i think it must be
because the tension is opposite to the gravitational pull......
and where is the pic???
Jul8-07, 07:28 AM (#3)
ma=mg-T => a=g-T/m (1)
also, a=alpha*radius of pulley and alpha= Torque/inertia = T*radius of pulley/inertia of pulley
=> a=radius^2*T/inertia (2)
from 1 and 2, hopefully we can find the answer. I'm not sure why they mention the radius of the cord?
Tell me if it works out, I didn't have time to actually solve it myself.
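Carrying equations (1) and (2) through to numbers (a worked check, assuming g = 9.80 m/s² and the given radius r = 0.032 m):

```latex
\alpha = \frac{m g r}{I + m r^{2}}
       = \frac{(2.4)(9.80)(0.032)}{1.5\times10^{-3} + (2.4)(0.032)^{2}}
       \approx 1.9\times10^{2}\ \mathrm{rad/s^{2}},
\qquad
T = \frac{I\alpha}{r} \approx 8.9\ \mathrm{N}.
```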
Jul8-07, 01:25 PM (#4)
Yes thank you, that helped a lot!!
Dark Globule in IC 1396
This 22-second animation shows how our view of a dark globule in IC 1396 changes as we move from visible light through near-infrared to mid-infrared wavelengths. The dark globule is virtually opaque at visible-light wavelengths and becomes transparent in the near-infrared. A glowing stellar nursery, with never before seen protostars and young stars, is vividly revealed through mid-infrared images obtained by the Spitzer Space Telescope.
SNR 0104-72.3: A supernova remnant located in the Small Magellanic Cloud, about 190,000 light years from Earth.
Caption: A new composite image from NASA's Chandra X-ray Observatory (purple) and Spitzer Space Telescope (red and green) shows a supernova remnant with a different look. This object, known as SNR 0104-72.3 is in the Small Magellanic Cloud, a small neighboring galaxy to the Milky Way. Astronomers think that this object is the remains of a so-called Type Ia supernova caused by the thermonuclear explosion of a white dwarf, but it has a shape unlike any other in its class.
Scale: Full field image is 32.4 arcmin across.
Chandra X-ray Observatory
A pupa (Latin pupa for doll, pl: pupae or pupas) is the life stage of some insects undergoing transformation. The pupal stage is found only in holometabolous insects, those that undergo a complete metamorphosis, going through four life stages; embryo, larva, pupa and imago. (For a list of such insects see Holometabolism).
The pupae of different groups of insects have different names such as chrysalis in the Lepidoptera order and tumbler in the mosquito family. Pupae may further be enclosed in other structures such as cocoons, nests or shells.
Position in life cycle
In the life of an insect the pupal stage follows the larval stage and precedes adulthood (imago). It is during the time of pupation that the adult structures of the insect are formed while the larval structures are broken down. Pupae are inactive, and usually sessile (not able to move about). They have a hard protective coating and often use camouflage to evade potential predators.
Pupation may last weeks, months or even years. For example, it is two weeks in monarch butterflies. The pupa may enter dormancy or diapause until the appropriate season for the adult insect. In temperate climates pupae usually stay dormant during winter; in the tropics, pupae usually do so during the dry season. Anise Swallowtails sometimes emerge after years as a chrysalis.
Insects emerge (eclose) from pupae by splitting the pupal case, and the whole process of pupation is controlled by the insect's hormones. Most butterflies emerge in the morning. In mosquitoes the emergence is in the evening or night. In fleas the process is triggered by vibrations that indicate the possible presence of a suitable host. Prior to emergence, the adult inside the pupal exoskeleton is termed "pharate". Once the pharate adult has eclosed from the pupa, the empty pupal exoskeleton is called an "exuvium" (or exuvia); in most hymenopterans (ants, bees and wasps) the exuvium is so thin and membranous that it becomes "crumpled" as it is shed.
Pupal mating
In a few taxa of the Lepidoptera, especially Heliconius, pupal mating is an extreme form of reproductive strategy in which adult males mate with a female pupa about to emerge, or with the newly moulted female; this is accompanied by other actions such as capping of the reproductive system of the female with the sphragis, denying access to other males, or by exuding an anti-aphrodisiac pheromone.
Pupae are usually immobile and are largely defenseless. To overcome this, a common feature is concealed placement. Some species of Lycaenid butterflies are protected in their pupal stage by ants. Another means of defense by pupae of other species is the capability of making sounds or vibrations to scare potential predators. A few species use chemical defenses including toxic secretions. The pupae of social hymenopterans are protected by adult members of the hive.
A chrysalis (Latin chrysallis, from Greek χρυσαλλίς = chrysallís, pl: chrysalides, also known as an aurelia) or nympha is the pupal stage of butterflies. The term is derived from the metallic gold-coloration found in the pupae of many butterflies, referred to by the Greek term χρυσός (chrysós) for gold.
When the caterpillar is fully grown, it makes a button of silk which it uses to fasten its body to a leaf or a twig. Then the caterpillar's skin comes off for the final time. Under this old skin is a hard skin called a chrysalis.
Because chrysalides are often showy and are formed in the open, they are the most familiar examples of pupae. Most chrysalides are attached to a surface by a Velcro-like arrangement of a silken pad spun by the caterpillar, usually cemented to the underside of a perch, and the cremastral hook or hooks protruding from the rear of the chrysalis or cremaster at the tip of the pupal abdomen by which the caterpillar fixes itself to the pad of silk. (Gr. 'kremastos'=suspend)
Like other types of pupae, the chrysalis stage in most butterflies is one in which there is little movement. However, some butterfly pupae are capable of moving the abdominal segments to produce sounds or to scare away potential predators. Within the chrysalis, growth and differentiation occur. The adult butterfly emerges (ecloses) from this and expands its wings by pumping haemolymph into the wing veins. Although this sudden and rapid change from pupa to imago is often called metamorphosis, metamorphosis is really the whole series of changes that an insect undergoes from egg to adult.
On emerging the butterfly uses a liquid, sometimes called cocoonase, which softens the shell of the chrysalis. Additionally, it uses two sharp claws located on the thick joints at the base of the forewings to help make its way out. Having emerged from the chrysalis, the butterfly will usually sit on the empty shell in order to expand and harden its wings. However, if the chrysalis was near the ground (such as if it fell off from its silk pad), the butterfly would find another vertical surface to rest upon and harden its wings (such as a wall or fence).
It is important to differentiate between pupa, chrysalis and cocoon. The pupa is the stage between the larva and adult stages. The chrysalis is a butterfly pupa. A cocoon is a silk case that moths, and sometimes other insects, spin around the pupa.
Cocoons may be tough or soft, opaque or translucent, solid or meshlike, of various colors, or composed of multiple layers, depending on the type of insect larva producing it. Many moth caterpillars shed the larval hairs (setae) and incorporate them into the cocoon; if these are urticating hairs then the cocoon is also irritating to the touch. Some larvae attach small twigs, fecal pellets or pieces of vegetation to the outside of their cocoon in an attempt to disguise it from predators. Others spin their cocoon in a concealed location – on the underside of a leaf, in a crevice, down near the base of a tree trunk, suspended from a twig or concealed in the leaf litter.
The silk in the cocoon of the silk moth can be unravelled to get silk fibre which makes this moth the most economically important of all Lepidopterans. The moth is the only completely domesticated Lepidopteran and does not exist in the wild.
Insects that pupate in a cocoon must escape from it, and they do this either by the pupa cutting its way out, or by secreting fluids, sometimes called cocoonase, that soften the cocoon. Some cocoons are constructed with built-in lines of weakness along which they will tear easily from inside, or with exit holes that only allow a one-way passage out; such features facilitate the escape of the adult insect after it emerges from the pupal skin.
Some pupae remain inside the exoskeleton of the final larval instar and this last larval "shell" is called a puparium (plural, puparia). Flies of the group Muscomorpha have puparia, as do members of the order Strepsiptera, and the Hemipteran family Aleyrodidae.
An Emperor Gum Moth caterpillar spinning its cocoon.
Luna moth cocoon and pupa.
Luna moth emerging from silk cocoon.
Chrysalis of Gulf Fritillary
Monarch Butterfly chrysalis
Less snowmelt in Antarctica
Climatic change in Antarctica is complicated. The northernmost part of the continent, the Antarctic Peninsula, is warming at extreme rates, while elsewhere the pattern is mixed and in some parts there appears to be little or no warming. Up to a point, we glaciologists don’t mind whether Antarctica is warming or not. It is so cold that even an implausible temperature increase wouldn’t come close enough to the melting point to affect the mass balance.
Indeed, there is a plausible argument that warming would make the mass balance more positive. The Antarctic interior is extremely dry because the capacity of the intensely cold atmosphere to deliver water vapour, and therefore snow, is minimal. Warmer air can carry more water vapour, so snowfall should increase in a warmer Antarctica.
The evolving mass balance of Antarctica is most interesting around the edges, though. Warmer ocean water is increasing melting at the bases of ice shelves and pulling grounded ice across the grounding lines at increasingly scary rates. A modest increase in interior snowfall would not make this picture less scary.
Ice-stream dynamics is not the only interesting thing about the periphery of Antarctica. Here, in the least cold latitudes, we observe what little melting does happen. Spread over the continent, it amounts to a few mm of water-equivalent loss per year, against gains by snowfall of about 150 mm/yr. Losses by discharge across the grounding line are much greater. But melting, if negligible in the big picture, is still interesting.
In a recent paper, Tedesco and Monaghan update a standard measure of melt intensity in Antarctica, the so-called melting index. They watch the ice sheet’s emission at microwave wavelengths (8 to 16 mm) and exploit one of the most useful radiative attributes of water. At these wavelengths, the emissivity of frozen water is low, and as conventionally presented in imagery it looks bright, but when it melts its emissivity rises dramatically and it looks black. An intermittently wet snow surface flickers between bright and dark, and we can keep track of melting by noting, in twice-daily overpasses by the imaging satellites, whether the image pixels are bright (cold) or black (warm).
The melting index, summed over a glacierized region for a span of time, is measured in square-kilometre-days, an odd-sounding unit but one that captures what we want to know. For each pixel it is just the number of days on which the pixel was black times the area of the pixel. For the whole region it is the sum of these pixel counts.
The Antarctic melting index has averaged about 35 million km2 days per year (October to September, to be sure of keeping the austral summer months together) between 1980 and 2008. Here comes the intriguing feature: in 2009 it was only 17.8 million km2 days, which is not only a record low but also continues a trend towards lesser annual indices that began in 2005. The melt extent (the area experiencing at least one day of melting) was the second lowest recorded, reaching only half the average of 1.3 million km2.
Tedesco and Monaghan account for this oddity in terms of slow organized variability in how the atmosphere behaves. Two patterns of multi-annual variation in the circulation of the southern atmosphere, the Southern Oscillation and the Southern Annular Mode, together correlate rather well with the melting index. But the authors acknowledge that the correlation breaks down in some Antarctic regions, and that the common variance does not point to a clear-cut physical explanation. (Translation: we don’t understand what is happening.)
Antarctica is a happy hunting ground for climate denialists, but they need to be ignored because they are on a wild goose chase. In the first place, anomalous patterns of temperature change haven’t stopped melting rates from accelerating, and ice shelves from disintegrating, in the warmest part of the continent. Second, global warming is global. Regional non-warming, and even regional cooling, don’t invalidate the main conclusion. The fact that we don’t understand why Antarctica is anomalous doesn’t invalidate it either. Finally, when it comes to Antarctic change it’s the ocean that we need to worry about. From the glaciological standpoint, warmer water is the problem, not warmer air.
SURFRAD includes ancillary data (e.g., cloud cover, moisture) that affect the transfer of solar and thermal infrared radiation to and from the surface. An aerosol optical depth product has been recently added.
Aerosol optical depth is a measure of the extinction of the solar beam by dust and haze. In other words, particles in the atmosphere (dust, smoke, pollution) can block sunlight by absorbing or by scattering light. AOD tells us how much direct sunlight is prevented from reaching the ground by these aerosol particles. It is a dimensionless number that is related to the amount of aerosol in the vertical column of atmosphere over the observation location.
A value of 0.01 corresponds to an extremely clean atmosphere, and a value of 0.4 would correspond to a very hazy condition. An average aerosol optical depth for the U.S. is 0.1 to 0.15.
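To connect those numbers to sunlight at the ground: for the direct beam, aerosol extinction follows the Beer-Lambert law. A rough Python sketch (this ignores molecular scattering and gaseous absorption, which also attenuate the beam):

```python
import math

def direct_beam_transmission(aod, solar_zenith_deg=0.0):
    """Fraction of the direct solar beam surviving aerosol extinction
    (Beer-Lambert law; airmass approximated as 1/cos(zenith))."""
    airmass = 1.0 / math.cos(math.radians(solar_zenith_deg))
    return math.exp(-aod * airmass)

clean = direct_beam_transmission(0.01)  # ~0.99, extremely clean sky
hazy = direct_beam_transmission(0.4)    # ~0.67, very hazy sky
```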
t is a ``symbol'' and 3 is an ``integer.'' Roughly speaking, the objects of ACL2 can be partitioned into the following types:
Numbers
  3, -22/7, #c(3 5/2)
Characters
  #\A, #\a, #\Space
Strings
  "This is a string."
Symbols
  'abc, 'smith::abc
Conses (or Ordered Pairs)
  '((a . 1) (b . 2))
When proving theorems it is important to know the types of object returned by a term. ACL2 uses a complicated heuristic algorithm, type-set, to determine what types of objects a term may produce. The user can more or less program the type-set algorithm by proving type-prescription rules.
ACL2 is an ``untyped'' logic in the sense that the syntax is not typed: it is legal to apply a function symbol of n arguments to any n terms, regardless of the types of the argument terms. Thus, it is permitted to write such odd expressions as (+ t 3), which sums the symbol t and the integer 3. Common Lisp does not prohibit such expressions. We like untyped languages because they are simple to describe, though proving theorems about them can be awkward because, unless one is careful in the way one defines or states things, unusual cases (like (+ t 3)) can arise.
To make theorem proving easier in ACL2, the axioms actually define a value for such terms. The value of (+ t 3) is 3; under the ACL2 axioms, non-numeric arguments to + are treated as though they were 0.
You might immediately wonder about our claim that ACL2 is Common Lisp, since (+ t 3) is ``an error'' (and will sometimes even ``signal an error'') in Common Lisp. It is to handle this problem that ACL2 has guards. We will discuss guards later in the Walking Tour.
However, many new users simply ignore the issue of guards entirely.
You should now return to the Walking Tour.
An anonymous reader sends this quote from an article at Txchnologist:
“The spectacle of a booster rocket lifting off a launch pad atop a mass of brilliant flames and billowing smoke is an iconic image of the Space Age. Such powerful chemical rockets are needed to break the bonds of Earth’s gravity and send spacecraft into orbit. But once a vehicle has progressed beyond low-Earth orbit, chemical rockets are not necessarily the best way to get around outer space. That’s because chemical propulsion systems require such large quantities of fuel to generate high speeds that there is little room for payload. As a result, rocket scientists are increasingly turning to electric rockets, which accelerate propellants out the back end using solar-powered electromagnetic fields rather than chemical reactions. The electric rockets use so much less propellant that the entire spacecraft can be much more compact, which enables them to scale down the original launch boosters.”
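The fuel penalty the article describes falls out of the Tsiolkovsky rocket equation. A hedged Python sketch; the delta-v and specific-impulse figures below are typical textbook values, not numbers from the article:

```python
import math

G0 = 9.81  # standard gravity, m/s^2

def propellant_fraction(delta_v_ms, isp_s):
    """Tsiolkovsky rocket equation: fraction of initial mass that must
    be propellant to achieve delta_v with exhaust speed g0 * Isp."""
    ve = G0 * isp_s
    return 1.0 - math.exp(-delta_v_ms / ve)

# A 5 km/s maneuver: chemical engine (Isp ~450 s) vs ion engine (Isp ~3000 s)
chem = propellant_fraction(5000.0, 450.0)   # roughly two-thirds is fuel
ion = propellant_fraction(5000.0, 3000.0)   # well under a fifth
```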
Read more of this story at Slashdot.
Coral Reefs, the Human View
Part A: Coral Reef Adventure
Animal, vegetable, or mineral? What exactly is coral? While corals might look more like rocks or plants, they are actually made up of tiny invertebrate animals, called polyps (a coral polyp is a small individual coral animal with a tube-shaped body and a mouth surrounded by tentacles), that are related to sea anemones and jellyfish. In total, there are more than 4,000 different coral species of various shapes, sizes, and colors.
- Watch the IMAX film Coral Reef Adventure. As you watch, take notes as preparation for answering the following Stop and Think questions.
Stop and Think 1: According to the film, coral reefs have "perhaps the greatest concentration of symbiosis within one single habitat on the planet." Describe at least two examples of symbiosis on a coral reef as depicted in the film Coral Reef Adventure. Explain which organism(s) benefit or are harmed by these relationships.
2: What human relationships with coral reefs were depicted in the film? Would you classify any of these relationships as symbiotic? Explain.
Megalodon is a Greek word meaning "Big Tooth." The shark is named for its very large teeth, which measured up to about 170 mm. Megalodon was the biggest and most dangerous predatory shark ever to live on Earth, and is believed to have gone extinct between about 28 and 1.5 million years ago.

Here are some interesting facts about these sharks, which once ruled the oceans.

These were the biggest sharks of their kind: they could be as long as 20 meters, about twice as big as today's great white shark.

Some scientists regard the megalodon as a bulkier, oversized version of today's great white shark. The megalodon is believed to have weighed about 47 metric tons.

Its teeth were triangular and strong; they could be as long as 170 mm and weigh about half a kilogram.

It is amazing that the teeth of such an enormous creature survived through the ages.

Did you know that if a megalodon's jaws were fully open, a whole human family could fit inside them?

Most of what we know about these sharks was revealed by analyzing their fossil teeth.

A 20-meter megalodon could have had a bite force of about 40,900 lbf, roughly 10 times greater than that of today's great white shark. From this bite force, the largest ever estimated, you can easily gauge the immense power of these predators.

Megalodon sharks were a highly mobile species but mostly lived in the subtropical regions of the oceans; their fossil remains have been found almost all over the world.
The other day one of my colleagues sent me an article about a tummy rocket that runs on stomach acid and jets about doing sensor and/or chemical things. Then I got a pointer to this article [Link] on a wirelessly powered 3mm x 4mm x thin chip that goes into the human circulatory system (no teleporter, sorry!) and putts about (maybe) and does mechanical things like sensing or zapping.
What makes this notable to me is not the technology per se, but that the developer, a boffin at Stanford U, got to this by questioning an old assumption.
“scientists were approaching the problem incorrectly. In their models, they assumed that human muscle, fat, and bone were generally good conductors of electricity, and therefore governed by a specific subset—the “quasi-static approximation,” to be exact—of the mathematical principles known as Maxwell’s equations.
Poon (the researcher) took a different tack, choosing instead to model tissue as a dielectric—a type of insulator. As it turns out, human tissue is a poor conductor of electricity. But radio waves can still move through tissue. In a dielectric, the signal is conveyed as waves of shifting polarization of atoms within cells.”
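A rough sense of why the dielectric model matters: inside a dielectric, a radio wave's wavelength shrinks by the square root of the relative permittivity, which changes how small an antenna can be. A Python sketch (the permittivity value is a rough textbook figure for muscle tissue near 1 GHz, not a number from the article, and losses are ignored):

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def wavelength_in_dielectric(freq_hz, rel_permittivity):
    """Wavelength of a radio wave inside a lossless dielectric: the
    wave slows by sqrt(eps_r), shrinking the wavelength by the same factor."""
    return C / (freq_hz * math.sqrt(rel_permittivity))

# Illustrative only: eps_r ~ 50 is a rough value for muscle near 1 GHz.
lam = wavelength_in_dielectric(1e9, 50.0)  # ~4 cm, vs ~30 cm in air
```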
Simply put, someone assumed a while ago that human bodies are good conductors, and that sidetracked the technology. The critical question is: why didn't they question the assumption and make actual measurements?
Electricity was early associated with frog muscles, and those experiments seem to contradict the assumption prima facie. But someone decided humans acted like copper wires. If anything, this is a good example of how good science is based on continually questioning assumptions and testing them.
Learn more physics!
What is distilled water?
- Samantha (age 9)
Distilled water is water that has been boiled and then recondensed (that is, the water vapor is turned back into liquid water on a cold surface).

People distill water in order to purify it. Dissolved contaminants like salts are left behind in the boiling pot as the water vapor rises away. It might not work if the contaminants also boil and recondense, as with some dissolved alcohol. Plus, you have to be careful not to re-contaminate the water after distilling it.
(published on 10/22/2007)
They are compounds formed by a metal and hydrogen, in which the hydrogen has an oxidation number of -1. The hydrides of groups 1 and 2 are more ionic than covalent. The hydrides of groups 13 and 14 are more covalent than ionic. But they are named in the same way, except boron hydride, which is named as H + nonmetal.
In Formulae you have an exercise to write the names of these substances and to check your results. You also have the answer to the exercise. In Names you have an exercise to write the formulae for these substances and to check your results. You have to introduce the formulae without subscripts, for example water = H2O. You also have the answer to the exercise.
Measuring one of the world's largest glaciers
Over two Antarctic summer seasons the British Antarctic Survey (BAS) mounted an ambitious and challenging deep-field science campaign to one of the most remote places on Earth. For three months of each of the 2007 and 2008 austral summers, three scientists and two field assistants camped in orange pyramid tents, surviving on dried food and working in temperatures between –33 and +2°C. The nearest base was the BAS Rothera Research Station on the Antarctic Peninsula, some 800 miles away.
The quest for glaciologists Robert Bingham, Julian Scott and Andy Smith was to determine what’s causing one of the world’s biggest glaciers to speed up, and how Pine Island Glacier, on the West Antarctic Ice Sheet, will contribute to sea level rise. This is their story.
(Science: chemistry) A lactone obtained by reduction of phthalyl chloride, as a white crystalline substance; hence, by extension, any one of the series of which phthalide proper is the type.
Alternative forms: phthalid.
Origin: Phthalyl _ anhydride.
This page was last modified 21:16, 3 October 2005. This page has been accessed 1,866 times.
© Biology-Online.org. All Rights Reserved.
As part of a physics lab practical exam, you are dropped off in the woods with an EM field meter and a compass. Your task is to find your way to a radio beacon (broadcasting with P=10 kW) somewhere to the north of you. You set your EM field meter to pick up the beacon frequency f=2 kHz.
(a) You are dropped off a distance R1 from the beacon, where the EM meter reports the signal's intensity as I1. After a short walk, the meter reports the signal's intensity as I2 = 2.25 I1. In terms of your current position R2, what was your original distance from the beacon R1?
(b) The short walk you took in part (a) was really a bit of a hike. Specifically, you hiked a = 3 km due north and then b = 5 km due west. Also, your first signal intensity measurement was I1 = 5.53 uW/m^2. Assuming that the beacon broadcasts like a point source, what are R1 and R2 as measured in kilometers?
(c) At what compass bearing is the beacon relative to your position at R2? (Measure your angles from due east.)
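Parts (a) and (b) reduce to the inverse-square law for an isotropic point source, I = P / (4 pi R^2). A quick Python check of the numbers (the geometry and bearing of part (c) are left out):

```python
import math

def distance_from_intensity(power_w, intensity_w_m2):
    """Invert I = P / (4*pi*R^2) for an isotropic point source."""
    return math.sqrt(power_w / (4.0 * math.pi * intensity_w_m2))

# Part (a): I2 = 2.25*I1 means (R1/R2)^2 = 2.25, so R1 = 1.5*R2.
# Part (b): with P = 10 kW and I1 = 5.53 uW/m^2,
R1 = distance_from_intensity(10e3, 5.53e-6)  # ~12 km
R2 = R1 / 1.5                                # ~8 km
```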
Convective instability (also known as potential instability or thermal instability) occurs when dry mid-level air rises (usually forced upward by mountains or hills) over very warm, moist air in the lower troposphere. The difference in saturation causes the layers to cool at different adiabatic lapse rates, and can result in the air layer becoming unstable and possibly overturning.
Convective instability is also termed static instability, because the instability does not depend on the existing motion of the air; this contrasts with dynamic instability.
High convective instability can lead to severe thunderstorms and tornadoes. This is because moist air is trapped in the lower layer; eventually a rising bubble of humid air breaks through the dry layer, triggering the development of a cumulonimbus cloud.
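The stability classification behind this can be sketched by comparing a layer's environmental lapse rate with the dry and moist adiabatic rates. A minimal Python sketch (the moist adiabatic rate of ~6 K/km is only a typical mid-tropospheric value; in reality it varies with temperature and pressure):

```python
def static_stability(env_lapse_k_per_km, dry_adiabat=9.8, moist_adiabat=6.0):
    """Classify a layer by comparing its environmental lapse rate with
    the dry and (typical) moist adiabatic lapse rates, in K per km."""
    if env_lapse_k_per_km > dry_adiabat:
        return "absolutely unstable"   # even dry parcels keep rising
    if env_lapse_k_per_km < moist_adiabat:
        return "absolutely stable"     # even saturated parcels sink back
    return "conditionally unstable"    # saturated parcels keep rising

state = static_stability(7.5)  # "conditionally unstable"
```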
This article is licensed under the GNU Free Documentation License. It uses material from the Wikipedia article "Convective_instability". A list of authors is available in Wikipedia.
Milton Banana wrote:
Just trying to provide some perspective. Man's carbon is not the only carbon going in this system.
But it is the majority of what is being added to the system.
There is plenty of carbon before we arrived on the scene. Yes, I was referring to the total carbon budget.
Which is a blatant attempt at misleading, since the carbon cycle IS the system being discussed.
Man's contribution is not insignificant, but it is trivial really.
No, it is not, if one has at least a basic understanding of science. You see, the carbon cycle represents a system which is in near equilibrium, and the carbon transfers from one part of the system to the other. Not much carbon is added by nature and not much is removed, which is why it takes so long for the atmospheric concentration to be lowered by nature.
Anyone reading, just google "total carbon budget" and educate yourself. Or let me provide a NASA study.
If you educated yourself you need a better teacher.
Here is a look at a NASA carbon study, clearly showing nature provides much more carbon to our system. No question about it. Basically it breaks down as follows.
No, it is showing the system through which carbon flows in our biosphere. The items being mislabeled as emissions tell the tale of misrepresentation, confirmed in the use of "exchange" for what moves within the carbon cycle versus "emit" for what is added to it.
Surface Ocean contains about 1,000 Gigatons of CO2
Intermediate and deep ocean contains about 38,000 Gt of CO2
In contrast to the atmosphere containing about 750 GT of CO2
Vegetation, Soils, and Detritus contains about 2,200 GT of CO2
Each year, the surface ocean and atmosphere exchange an estimated 90 GT C; vegetation and the atmosphere, 60 GT C; marine biota and the surface ocean, 50 GT C; and the surface ocean and the intermediate and deep oceans, 100 GT C.
Note that the system EXCHANGES carbon.
Mankind emits about 8.5 GT of CO2 per year.
While mankind ADDS to the system.
The atmosphere's CO2 is cycled out in less than 8 years.
No, the individual molecules will be exchanged in about that timeframe, but the concentration in the atmosphere will be unaffected. This is where you are confused about the carbon cycle.
CO2 is NOT pollution.
Yes, it is.
770,000 mmt of CO2 is from NATURAL SOURCE
Moving within the system.
23,100 mmt of CO2 is from MAN MADE SOURCE
Being added to the system.
770,000 + 23,100 = 793,100 TOTAL
Faulty logic presented as math.
Absorption is 781,400 mmt
The other part of the cycle, which has a built-in buffer to handle the small amount of carbon nature normally adds to the system. This would include weathering, volcanic action, tectonic action and the like.
So as I have clearly demonstrated, nature produces 96 percent of the carbon in our system.
No, the carbon is not produced, only exchanged. There is a huge difference between the two. That difference is why the amount humans have added to the system has increased the concentration of CO2 in the atmosphere and changed the pH of the oceans.
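The exchange-versus-addition arithmetic can be laid out with the thread's own figures (mmt CO2 per year); this is only a toy budget illustration, not a carbon-cycle model:

```python
# Figures quoted earlier in the thread, in mmt CO2 per year.
natural_emitted = 770_000  # moving within the carbon cycle (exchange)
human_emitted = 23_100     # added to the cycle
absorbed = 781_400

# With human emissions the budget runs a surplus that accumulates in
# the atmosphere; without them, the natural fluxes are a net sink.
net_with_humans = natural_emitted + human_emitted - absorbed  # positive
net_without_humans = natural_emitted - absorbed               # negative
```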
It doesn't matter really; CO2 just cannot do what the climate change advocates say it does. No way, it is just not possible.
Given the expressed lack of basic science knowledge here, I would not take your word on it, sorry.
Here is another carbon study in case your on the fence.
Carbon budgets are to be avoided at all costs if you want to conceal the true nature of climate.
Only to those who either cannot or do not wish to understand the basic science being discussed. It is not a problem for those who understand science and do not have an agenda they are desperate to support.
Eureka Tower Radiation Instruments
NOAA Physical Science Division's Arctic Observations and Processes group and Environment Canada erected a 10.5 m flux tower in Eureka, Nunavut, Canada in 2007. At the top, we installed upwelling/downwelling shortwave and longwave radiation instruments. Downwelling instruments face up, measuring solar radiation from the Sun, while upwelling instruments look down, measuring radiation from the ground. Kipp and Zonen CM22 radiometers measure shortwave radiation, while Eppley PIR radiometers measure infrared radiation.
The six-panel plot on the right shows the tower shortwave (SW) and longwave (LW) radiation data (61-minute running average) in 2009. Units are W/m^2. The bottom right plot shows albedo, calculated by dividing SW upwelling by SW downwelling. Melting snow during the Arctic spring/summer lowers the albedo from near 1 in the spring to close to 0 in June, when the ice is mostly melted. The large noise seen from October to April is evidence of the long Arctic winter. During this period longwave radiation dominates, and the instruments detect only very small upwelling and downwelling SW signals. When you divide a very small number by another very small number, the result fluctuates wildly!
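The albedo calculation, with the divide-small-by-small problem handled by a threshold, might look like this in Python (the 10 W/m^2 cutoff is an arbitrary illustrative choice, not the value used in the actual processing):

```python
def albedo(sw_up, sw_down, min_flux=10.0):
    """Surface albedo = reflected / incoming shortwave. Below min_flux
    W/m^2 (polar night, very low sun) the ratio is dominated by sensor
    noise, so mask it out instead of returning a wild value."""
    if sw_down < min_flux:
        return None
    return sw_up / sw_down

spring = albedo(280.0, 300.0)  # fresh snow: high albedo
winter = albedo(0.3, 0.4)      # polar night: masked out
```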
Did you know...
- Radiation instruments (radiometers, pyrgeometers, and pyranometers) have a device inside which converts thermal energy to electrical energy. These instruments receive both shortwave and longwave radiation.
The law connecting the volume of a gas with the temperature was discovered by Gay-Lussac, and independently by Dalton; but it is generally attributed to the former chemist. It is: provided pressure be kept constant, the volume of a gas, measured at 0 degrees C., increases by 1/273 for each rise of 1 degree C.
Or 1 volume of gas at 0 degrees will become 1.00367 volume at 1 degree; 1.0367 volume at 10 degrees; 1.367 volume at 100 degrees, and so on. Generally stated, if t stands for a temperature, 1 volume of gas will become 1 + 0.00367t volumes when heated from 0 degrees to that temperature.
A third law may be deduced from these two; it is that if the volume of a gas be kept constant, the pressure of the gas will increase by 1/273 of its initial value at 0 degrees for each rise of 1 degree. This is evident from the following consideration: suppose that 1 volume of a gas is heated from 0 degrees to 1 degree; the volume will increase to 1.00367 volume. To reduce the volume again to its initial value, 1, the pressure must be raised by 0.00367 of its original amount. If the initial pressure corresponded to that of 76 centimeters of mercury, it would have to be increased to 76 + (76 x 0.00367) centimeters, or to
76.279 centimeters in order that the gas should resume its original volume of 1. The same consideration will hold if the gas is cooled instead of being heated; but of course in that case the pressure will be reduced instead of raised. It follows from this that if the temperature could be reduced to 273 degrees below 0 C., the gas would exert no pressure. This temperature, -273 degrees, is termed "absolute zero." As a matter of fact, so low a temperature has never been reached; and, moreover, it is certain that all gases would change to liquids before that temperature was attained. But it serves as the starting point for what is termed the "absolute scale of temperature." Gay-Lussac's law may therefore be stated thus: The volume of a gas at constant pressure increases as the absolute temperature; and its corollary, thus: The pressure of a gas at constant volume increases as the absolute temperature. For 0 C. corresponds with 273 degrees on the absolute scale; and 273 volumes of gas will become 274 if the temperature is raised from 273 degrees absolute to 274 degrees absolute. Similarly, the pressure of a gas will increase in the proportion 273 : 274 if the absolute temperature is increased from 273 to 274.
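Both statements of the law are easy to check numerically. A small Python sketch using the passage's own figures:

```python
def volume_at(t_celsius, v0=1.0):
    """Charles/Gay-Lussac: at constant pressure, volume grows by 1/273
    of its 0-degree value per degree C rise (coefficient ~0.00367)."""
    return v0 * (1.0 + t_celsius / 273.0)

def pressure_at(t_celsius, p0=76.0):
    """At constant volume, pressure scales with absolute temperature
    (p0 is the pressure at 0 C, here in cm of mercury)."""
    return p0 * (273.0 + t_celsius) / 273.0

v100 = volume_at(100.0)  # ~1.366, matching the passage's 1.367
p1 = pressure_at(1.0)    # ~76.279 cm of mercury, as in the passage
```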
Evolutionary game of rock-paper-scissors may lead to new species
Washington, Feb 19 (ANI): A new study conducted by researchers at the University of California, Santa Cruz (UCSC), has determined that the evolutionary game of rock-paper-scissors in some animals might lead to the emergence of new species.
The study documents the disappearance of certain morphs of the side-blotched lizard in some populations.
The side-blotched lizard, Uta stansburiana, has three morphs differing in color and mating behavior.
Barry Sinervo, a professor of ecology and evolutionary biology at UCSC, has studied a population of side-blotched lizards near Los Banos, California, for over 20 years.
Ammon Corl, now a postdoctoral researcher at Uppsala University in Sweden, led the new study as a graduate student at UCSC and is first author of the paper.
Previous work by Sinervo and his colleagues showed that competition among male side-blotched lizards takes the form of a rock-paper-scissors game in which each mating strategy beats and is beaten by one other strategy.
Males with orange throats can take territory from blue-throated males because they have more testosterone and body mass.
As a result, orange males control large territories containing many females.
Blue-throated males cooperate with each other to defend territories and closely guard females, so they are able to beat the sneaking strategy of yellow-throated males.
Yellow-throated males are not territorial, but mimic female behavior and coloration to sneak onto the large territories of orange males to mate with females.
Corl found the three color morphs in many places, but not everywhere.
Some populations were missing some of the color morphs.
In the field, the researchers captured lizards to collect tissue samples for DNA analysis and then released them back into the wild.
In the lab, they used the tissue samples to get DNA sequences from all of the lizard populations in the study.
"Based on these sequences, we reconstructed the 'family tree' of the lizard populations and figured out which populations were more closely related to one another. This let us figure out how the mating strategies evolved," Corl said.
The results showed that all three color morphs existed millions of years ago and have persisted since then in many populations.
Over time, however, some branches of the lizard family tree lost some of the color types.
Sinervo has documented the cycling of the rock-paper-scissors game at his main study site for 22 years, with the dominant morph in the population changing every four to five years.
"It's like an evolutionary clock ticking between rock, paper, scissors then back to rock," he said. (ANI)
Diameter of Universe
Name: Thomas P.
What is the diameter of the Universe?
This is a tricky question to answer, because the term "diameter" implies that there is "something" on the "other side," but there is no "other side." When an astronomer observes a more distant object, (s)he is looking back in time rather more than in space (actually, space-time). The farther away an object appears, the older it actually is. Recent observations suggest that the expansion of the Universe, in which everything is receding from every other thing, is actually accelerating. The expansion does not appear to be slowing down! Sound confusing? It is. It is difficult but inescapable: how the Universe is behaving is not well understood.
Update: June 2012
Organic chemists make molecules, very complicated molecules, by chopping up a big molecule into small molecules and reverse engineering. And as a chemist, one of the things I wanted to ask my research group a couple of years ago is, could we make a really cool universal chemistry set? In essence, could we "app" chemistry?
Now what would this mean, and how would we do it? Well to start to do this, we took a 3D printer and we started to print our beakers and our test tubes on one side and then print the molecule at the same time on the other side and combine them together in what we call reactionware. And so by printing the vessel and doing the chemistry at the same time, we may start to access this universal toolkit of chemistry.
Now what could this mean? Well if we can embed biological and chemical networks like a search engine, so if you have a cell that's ill that you need to cure or bacteria that you want to kill, if you have this embedded in your device at the same time, and you do the chemistry, you may be able to make drugs in a new way.
So how are we doing this in the lab? Well it requires software, it requires hardware and it requires chemical inks. And so the really cool bit is, the idea is that we want to have a universal set of inks that we put out with the printer, and you download the blueprint, the organic chemistry for that molecule and you make it in the device. And so you can make your molecule in the printer using this software.
But to take baby steps to get there, first of all we want to look at drug design and production, or drug discovery and manufacturing. Because if we can manufacture it after we've discovered it, we could deploy it anywhere. You don't need to go to the chemist anymore. We can print drugs at point of need. We can download new diagnostics. Say a new super bug has emerged. You put it in your search engine, and you create the drug to treat the threat. So this allows you on-the-fly molecular assembly.
Chemist Lee Cronin is working on a 3D printer that, instead of objects, is able to print molecules. An exciting potential long-term application: printing your own medicine using chemical inks.
A professor of chemistry, nanoscience and chemical complexity, Lee Cronin and his research group investigate how chemistry can revolutionize modern technology and even create life.
The blastomussa is a hard coral of the genera Blastomussa and Micromussa that populates the waters of the Indo-Pacific region; it is also known as a Blasto Coral, Pineapple Coral or Branched Cup Coral. Most can be found on deeper reef slopes, where the position provides an excellent shield from rough wave conditions and other environmental disturbances.
Sometimes confused with the Candy Cane coral, it can be distinguished by a more compact and rounded appearance. It can also be mistaken for Mushroom Anemones because the fleshy polyps will usually cover the entire skeleton when they are wide open. Blastomussa is a rare coral, and only contains two distinct species- the Blastomussa Merleti and Blastomussa Wellsi.
Blastomussa are colonial creatures and, when fully formed, resemble brain corals when their polyps are completely open. The centers of blastomussa are vivid neon green, which seems to radiate and glow when exposed to light. Species of blastomussa are usually dark red, with shades of brown and green mixed in. The two types of blastomussa have very different polyp sizes and are therefore easy to tell apart. The B. Wellsi have polyps that are large and fleshy, ranging from 1-5 in size, while those of B. Merleti are less than one. The B. Merleti is composed of pipe-like polyps, which elongate the skeleton that connects to the base and ultimately to other polyps in the colony. Reef hobbyists are able to easily propagate this type of blastomussa because the lengthy polyps break off easily into single or clustered polyps, allowing new colonies to form in other locations.
Blastomussa's Relationship With Algae
What the blastomussa doesn't share with other types of hard corals, it makes up for in its photosynthetic nature. While it does secure food from other sources, the majority of its nutrition comes from the minute algae it hosts within its body. When blastomussa do hunt for prey, they expand their feeding tentacles to trap organisms carried past on the ocean currents. When these tentacles are extended to full capacity, the blastomussa mimics the appearance of a sea anemone.
Coexistence With Blastomussa
Many organisms live in coexistence with the blastomussa. They find it attractive because its skeletal structure offers an obscure hiding place where they are protected from predators. Neighboring sea creatures that prefer to settle on the blastomussa include sponges, mollusks, various sessile invertebrates and even other corals.
Blastomussa In Captivity
The blastomussa corals are hard to obtain, mainly because of their natural location within the ocean. By dwelling at significant depths and hiding on the side of lower reef slopes, blastomussa are difficult to harvest, and will therefore sell for high prices to hobbyists and reef enthusiasts. Individual polyps can sell at prices of $70 or more! | <urn:uuid:2ede4671-224c-455c-ad39-58d9c78815c5> | 3.25 | 664 | Knowledge Article | Science & Tech. | 29.011182 |
During a tough Massanutten Mountain South Training Run a couple of months ago I entertain good friend Caren Jew with my impromptu lectures on differential equations, on "gold farming" in online multiplayer games, on the rôle of the Nurse in Shakespeare's Romeo and Juliet, on wiki spam, and on the unreasonable effectiveness of mathematics in the natural sciences. Five-plus hours pass as if so many minutes — for me anyway! Yes, I act like Mr. Know-It-All sometimes (always?). But Caren is one tough trail grrrl and survives the ordeal.
My ability to come up with a theory for any phenomenon is momentarily tested, however, when Caren observes the Moon rising in front of us at 3:30pm during the drive home. I am literally stunned. We saw the gibbous Moon setting during our outbound trip at 5:30am, only ten hours earlier. How can it be back so soon? It has to rise 50+ minutes later every day to circle the Earth in one lunar month. Did it take a short cut?
To Caren's vast amusement it takes me several minutes to come up with a theory to explain her observation. We're in the midst of winter, so the Sun is far south. The Moon's orbit is in the plane of the ecliptic and since it's nearly full at the moment, it's almost opposite the Sun, and thus is quite far north. Just as nights (when the Sun is below the horizon) are short in summertime, so also "full-moon-nights" (when the Moon is below the horizon) are short in wintertime. Therefore it's normal to see the Moon rising less than 12 hours after it has set, considering our northerly latitude and the time of year.
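The effect is easy to quantify with a little spherical astronomy: an object at declination δ seen from latitude φ spends 2·arccos(−tan φ · tan δ)/15° hours above the horizon each day. A minimal sketch (the latitude of ~39°N and a lunar declination of +28° are my assumptions for a mid-Atlantic observer with a near-full midwinter Moon; refraction and lunar parallax are ignored):

```python
import math

def hours_above_horizon(lat_deg: float, dec_deg: float) -> float:
    """Hours per day a celestial object at declination dec_deg spends
    above the horizon at latitude lat_deg (simple spherical model)."""
    x = -math.tan(math.radians(lat_deg)) * math.tan(math.radians(dec_deg))
    x = max(-1.0, min(1.0, x))          # clamp for circumpolar cases
    h0 = math.degrees(math.acos(x))     # semi-diurnal arc, in degrees
    return 2.0 * h0 / 15.0              # sky turns 15 degrees per hour

up = hours_above_horizon(39.0, 28.0)
print(f"Moon up for {up:.1f} h, down for {24 - up:.1f} h")
# prints: Moon up for 15.4 h, down for 8.6 h
```

With those numbers the Moon is below the horizon only about 8.6 hours, so setting at 5:30am and rising again by mid-afternoon is exactly what the geometry predicts.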
How obvious — after a bit of head-scratching! (^_^) | <urn:uuid:4f5fdc10-fc51-4184-a0a9-a16c89984334> | 3 | 387 | Personal Blog | Science & Tech. | 62.875077 |
Sir Isaac Newton discovered in 1666 that the index of refraction of glass is different for different colors of light. This meant, he believed, that any glass lens would have chromatic aberration; i.e., any image would show rainbow fringes no matter how sharply it was focused. Since chromatic aberration seemed unavoidable in lens systems, Newton invented the reflecting telescope, based upon mirrors. However, about the time he was giving up on the possibility of achromatic lenses, glass makers were creating new glasses by using lead compounds as ingredients. The only glass available up to that time was made from sand, lime and soda, what is now called crown glass, a formula that went back to the Phoenicians. The introduction of lead not only changed the refractive index of glass but also reduced the differences in the refractive index for different colors of light. The leaded glasses were called flint glasses.
The degree of dispersion for a glass is characterized by a ratio ν (the Abbe number) given by

ν = (nD − 1) / (nF − nC)

where nD, nF and nC are the refractive indices at wavelengths of 587.6 nm, 486.1 nm, and 656.3 nm, respectively. These are the Fraunhofer wavelengths. (The letters D, F and C are labels for lines of the spectra of elements that are used as standards.) The D-line is the characteristic yellow-gold color of the sodium spectrum; C and F are lines in the hydrogen spectrum. The values of ν are generally in the range of 30 to 60. Chromatic aberration can be virtually eliminated by using multiple elements with differing refractive indices in a lens system.
Using the new leaded glasses, Chester Moore Hall designed an achromatic lens system in 1733. By 1758 J. Dollond had obtained a patent in the United Kingdom for an achromatic lens system and begun manufacturing them.
The locus of high-level optical research then shifted to the continent. The researches of Joseph von Fraunhofer concerning the spectra of the elements made the measurement of the optical qualities of glasses precise. Fraunhofer and his collaborators built the first optical glass factory near Munich. Later such factories were also built in France, Switzerland and England.
It was not until the late 19th century that optical theory and practice advanced significantly beyond Fraunhofer's work. It was E. Abbe who moved the technology to a new level. Abbe founded the Zeiss plant in Jena, where he set a chemist, Otto Schott, to work examining the influence of different elements on the optical characteristics of glasses. The use of boron and barium compounds led to several new categories of glasses; i.e., the borosilicate crown glasses, the barium crown glasses and the barium flint glasses. Schott went on to found a company which produced special glasses of these types. Companies with similar lines were formed in the other major industrial countries; e.g., Bausch and Lomb in the U.S., Parra-Mantois in France and G.B. Chance in the U.K. It is notable that the Eastman Kodak company in the U.S. developed a glass which did not contain silicon.
|Indices of Refraction and the ν-Values for Various Types of Optical Glasses|
|Glass type||nC||nD||nF||ν|
|Light Barium Crown||1.570||1.573||1.580||57|
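The tabulated ν-value can be checked directly against the definition above; the three indices for Light Barium Crown come from the table row (their assignment to the C, D and F lines is my inference, but it reproduces the listed ν):

```python
def abbe_number(n_D: float, n_F: float, n_C: float) -> float:
    """Reciprocal dispersion of a glass: (n_D - 1) / (n_F - n_C)."""
    return (n_D - 1.0) / (n_F - n_C)

# Light Barium Crown: n_C = 1.570, n_D = 1.573, n_F = 1.580
nu = abbe_number(1.573, 1.580, 1.570)
print(round(nu))  # prints 57
```

A higher ν means the indices at the F and C lines are closer together, i.e. the glass is less dispersive, which is why crown glasses sit at the high end of the 30-60 range.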
HOME PAGE OF Thayer Watkins | <urn:uuid:e2306097-2480-47f9-87e6-03a0d95d3939> | 3.65625 | 784 | Knowledge Article | Science & Tech. | 57.380688 |
Not Paying Attention to Planemos? What Makes You Think You Believe in Global Warming?
This month, a set of 'twin' planemos was discovered some 400 light years distant (like we really care, right?). Designated Oph 162225-240515 (or Oph 1622), the pair was discovered using the European Southern Observatory's New Technology Telescope at La Silla, Chile. A few dozen such objects have been identified in recent years, but this is the first planemo pair discovered outside a solar system.
The pair's existence challenges current theories about the formation of planets and stars, astronomers reported in the journal Science. The two are separated by 22 billion miles (about six times the distance between the Sun and Pluto). Both are young, about a million years old, scientists say.
Oph1622 is also a brown dwarf (a sub-stellar object). Its mass equals about 14 Jupiters, or some one-seventy-fifth that of the Sun.
Instinct tells me scientists should be more open-minded about the popular theory of man-induced climate change. Why? On the scale of long times and large objects, science is probably still in its infancy. Science is accountable for continually testing and refining its latest theories, you knew that, right? Part of the Scientific Method. UPDATE: Many scientists and theorists today argue that concepts of causality are not obligatory to science, but are well-defined only under particular, admittedly widespread conditions. see Causal explanation.
But, more importantly, there must exist a Time-order relationship. The hypothesized causes must precede the observed effects in time. Not so, CO2 increases and global warming.
A planemo is a celestial object of planetary mass:
The term describes celestial bodies larger than asteroids but smaller than nuclear-reactive stars. Planemos that orbit stars are commonly referred to as planets. Planemo is a contraction of "planetary mass object". | <urn:uuid:87a93b56-40aa-411b-a17b-fdc90bc9d7cf> | 3.46875 | 404 | Personal Blog | Science & Tech. | 45.202803
|Eukaryotic||a and c2||Phycocyanin or Phycoerythrin||4||0-2||starch and lipids||proteinaceous periplast||2 equal|
Cryptophytes, or cryptomonads, are single-celled algae that have two flagella, used for swimming.
The cryptophytes are single-celled flagellates and have pigments found in no other group of algae (phycoerythrin and phycocyanin). Pigments are molecules that absorb light; they include chlorophyll. The name “crypto” means secret, or hidden, and these algae can be secretive in their life habits. Cryptophytes are an interesting group of organisms because they are able to obtain energy from the sun through photosynthesis as well as by eating particulate food; these algae are therefore both photosynthetic and heterotrophic. Many different species are found across the world, in moist areas on soils, ice-covered lakes, tropical oceans, in blooms on beaches, and as intestinal parasites in animals. There is still much to be discovered about this group of organisms and their life habits.
Cryptophytes contain chlorophyll a and c2 for photosynthesis, which they use for converting the sun’s energy into food. Cryptophytes save the extra energy from photosynthesis in the form of starch, a type of sugar. Starch acts as a food reserve for the cell during times when it is not able to photosynthesize, like in the darkness of winter or deep in lakes, where light does not reach. Cells also contain the accessory pigments phycocyanin (blue) or phycoerythrin (red). Accessory pigments are the molecules responsible for the color of cells, and cryptophytes may appear red, yellowish green, or brown in color.
Most cryptophytes are flattened and elliptical in shape and have 2 flagella. The cryptophytes have a unique cell covering called a periplast. The periplast contains ejectosomes (also called trichocysts), tightly coiled strands of protein which also contain poisons. These ejectosomes are a defense mechanism. A cell can eject the ejectosomes if it feels threatened by a predator, such as a zooplankter. The ejectosome distracts the predator and gives the cryptophyte time to swim away.
Cryptophytes are able to eat prey (heterotrophic) or use photosynthesis (autotrophic) to obtain energy for the cell. Cells are distinct because of a feature on the periplast called a furrow. Inside the furrow, or gullet, are more ejectosomes. The cells are able to engulf (eat) bacteria or other protoctists and the poison from the ejectosomes subdues or kills the prey. Cells also photosynthesize using chlorophyll and accessory pigments. Cryptophytes have additional pigments of alpha-carotene, cryptoxanthin and alloxanthin. These are unusual pigments within the algae, and because of them, scientists think that cryptophytes first evolved as heterotrophic organisms and later acquired (engulfed, but did not digest) photosynthetic symbionts. That is, cryptophytes are thought to have acquired symbionts more than once in their evolutionary history.
Cryptomonads in Rocky Mountain Lakes
In the aquatic environments of the Rocky Mountains, cryptophytes are a very important part of lake ecosystems. They are abundant in the phytoplankton, and they can live through the winter, under ice-cover and with little solar radiation for photosynthesis. They are also an important food for zooplankton. Zooplankton, in turn, are food for fish and other organisms that are part of the aquatic food web.
17 taxa shown below, 17 of which appear in at least one sample.
|Cryptomonas sp. 1||1||2||RMNP,||4016|
|Cryptomonas sp. 2||1||2||RMNP,||4017|
|Plagioselmis sp. #1||1||148||SLW,||4014|
|Plagioselmis sp. #2||1||112||SLW,||4013|
RMNP = Rocky Mountain National Park, CO
SLW = Silver Lakes Watershed, CO
Images are not scaled. An individual that looks bigger than its neighbors might actually be smaller. All images were made to fit within an area of 360px high and 200px wide.
Plagioselmis sp. #1
Length 10-12 µm
Width 5-7 µm
Plagioselmis sp. #2
Length 10-12 µm
Width 5-7 µm
Representative images missing for: Chilomonas sp. | Chroomonas acuta | Chroomonas sp. | Cryptomonas alpina | Cryptomonas erosa | Cryptomonas marsonii | Cryptomonas ovata | Cryptomonas rostrata | Cryptomonas sp. | Cryptomonas sp. 1 | Cryptomonas sp. 2 | Plagioselmis nannoplanctica | Plagioselmis sp. | Rhodomonas minuta | Rhodomonas sp. | | <urn:uuid:32844625-615b-4f75-9a95-69cb112db65f> | 3.921875 | 1,157 | Knowledge Article | Science & Tech. | 38.189876 |
MARMAP Bongo Nets 1990-2009
Entry ID: MARMAP_BongoNets
Abstract: Abundance and biomass of fish species collected during the day from 1973 to 1980 off the coast of the southeastern United States (Cape Fear, NC to Cape Canaveral, FL).
Purpose: For thirty years, the Marine Resources Research Institute (MRRI) at the South Carolina Department of Natural Resources (SCDNR), through the Marine Resources Monitoring, Assessment and Prediction (MARMAP) program, has conducted fisheries-independent research on groundfish, reef fish, ichthyoplankton, and coastal pelagic fishes within the region between Cape Lookout, North Carolina, and Ft Pierce, Florida. The overall mission of the program has been to determine distribution, relative abundance, and critical habitat of economically and ecologically important fishes of the South Atlantic Bight (SAB), and to relate these features to environmental factors and exploitation activities. Research toward fulfilling these goals has included trawl surveys (from 6-350 m depth); ichthyoplankton surveys; location and mapping of reef habitat; sampling of reefs throughout the SAB; life history and population studies of priority species; tagging studies of commercially important species and special studies directed at specific management problems in the region. Survey work has also provided a monitoring program that has allowed the standardized sampling of fish populations over time and development of an historical base for future comparisons of long-term trends.
Annual MARMAP cruises to assess relative abundance of reef fishes in the sponge-coral and shelf edge (live bottom) habitats of the South Atlantic Bight (SAB) have been conducted since 1978. MARMAP currently samples natural live bottom habitat from Cape Lookout, NC to the Ft. Pierce area, FL. The current main MARMAP objectives are to:
(1) Sample reef fishes in the snapper-grouper complex using a variety of gears in live bottom, rocky outcrop, high relief, and mud bottom habitats,
(2) Collect detailed data for time series description of species for annual composition and relative abundance,
(3) Obtain population characteristics on fish species of interest through life history information analysis, including age and growth, sex ratio, size and age of sexual maturation and transition, spawning season, fecundity, and diet. Priorities are dictated by the SEDAR schedule and other management considerations,
(4) Collect hydrographic data (e.g. depth, temperature, salinity, etc.) for comparison to fish abundance and composition indices,
(5) Collect DNA samples from selected fish species for stock identification,
(6) Expand sampling area in North Carolina and south Florida as well as reconnoiter new live bottom areas with underwater video (UWTV) to add to the MARMAP site database.
ISO Topic Category
Role: SERF AUTHOR
Phone: (301) 614-6898
Email: Tyler.B.Stevens at nasa.gov
NASA Goddard Space Flight Center Global Change Master Directory
Province or State: MD
Postal Code: 20771
Role: TECHNICAL CONTACT
Email: bachelet at fsl.orst.edu
Department of Biological and Ecological Engineering Oregon State University
Province or State: OR
Postal Code: 97331
Creation and Review Dates | <urn:uuid:53a73f3f-3da4-4208-876a-cdf7f4c5379d> | 2.890625 | 1,356 | Content Listing | Science & Tech. | 63.098162 |
(Submitted March 16, 1998)
Who discovered the quasar and when?
The discovery of quasars was really spread over time. Quasar is a
shortening of "quasi-stellar radio source", and they've also been
called quasi-stellar objects or QSOs. In the late 50s, several
radio sources were matched with very dim optical objects that looked
like stars, but had strange spectra with a lot of ultraviolet excess.
One of them, 3C273, had its position very accurately measured by C.
Hazard and co-workers, using lunar occultations. In 1962, M. Schmidt
obtained a spectrum of this "star", which showed a redshift of 0.158.
This is when QSO was coined, because this was a very distant object
that was masquerading as a star, a quasi-stellar object.
This description is paraphrased from a book, "High Energy Astrophysics",
by M.S. Longair.
Also, you can see
Thanks for your question!
Eric Christian and Maggie Masetti
for Ask an Astrophysicist | <urn:uuid:9e2f52eb-3a46-473a-b756-72b4ccf987fd> | 4.15625 | 240 | Q&A Forum | Science & Tech. | 63.661013 |
File:Climate Change Attribution.png
From Global Warming Art
This figure, based on Meehl et al. (2004), shows how well a global climate model (the DOE PCM) is able to reconstruct the historical temperature record, and the degree to which the associated temperature changes can be decomposed into various forcing factors. The top part of the figure compares a five-year average of global temperature measurements (Jones and Moberg 2001) to the Meehl et al. results incorporating the effects of five predetermined forcing factors: greenhouse gases, man-made sulfate emissions, solar variability, ozone changes (both stratospheric and tropospheric), and volcanic emissions (including natural sulfates). The time history and radiative forcing effectiveness of each of these factors were specified in advance and were not adjusted to specifically match the temperature record.
Also shown are grey bands indicating the 68% and 95% range for natural variability in the five year average of temperature as determined from multiple simulations with different initial conditions. In other words, the bands indicate the estimated size of fluctuations that are expected to result from changes in weather rather than changes in climate. Ideally the model should be able to reconstruct temperature variations to within about the tolerance specified by these bands. Though the model captures the gross features of twentieth century climate change, it remains likely that some of the differences between model and observation reflect the limitations of the model and/or our understanding of the histories of the observed forcing factors.
In the lower portion of the figure are the results of additional simulations in which the model was operated with only one forcing factor at a time. A key conclusion of the Meehl et al. (2004) work is that the model response to all factors combined is approximately equal to the sum of the responses to each of the factors taken individually. They conclude therefore that it is reasonable to discuss how the evolving man-made and natural influences individually impact climate. Meehl et al. attribute most of the 0.52 °C global warming between 1900 and 1994 to a 0.69 °C temperature forcing from greenhouse gases, partially offset by a 0.27 °C cooling due to man-made sulfate emissions, with other factors contributing the balance. This contrasts with the warming from 1900 to 1940, for which the model attributes a net increase of only 0.06 °C to the combined effects of greenhouse gases and sulfate emissions. The zeros on both plots are set equal to 1900 temperatures.
(Figure vertical-axis label: temperature change relative to 1900.)
Note that "Net" reflects the model runs with all factors included and is not identical to simply summing the individual factors.
- Meehl, G.A., W.M. Washington, C.A. Ammann, J.M. Arblaster, T.M.L. Wigleym and C. Tebaldi (2004). "Combinations of Natural and Anthropogenic Forcings in Twentieth-Century Climate". Journal of Climate 17: 3721-3727.
- Jones, P.D. and Moberg, A. (2003). "Hemispheric and large-scale surface air temperature variations: An extensive revision and an update to 2001". Journal of Climate 16: 206-223.
This figure was created by Robert A. Rohde from published data.
|current||09:51, 20 February 2006||500×573 (37 KB)||Robert A. Rohde| | <urn:uuid:d8d60356-d3a9-4ef6-8553-f27ac6d1bacb> | 3.734375 | 741 | Knowledge Article | Science & Tech. | 47.190133 |
Balloons on Mars
Balloon History on Earth
The first widely recorded, public demonstration of a balloon took place in June of 1783. On this date, a 105-foot circumference balloon, designed by the brothers Joseph and Jacques Montgolfier, was launched in Annonay, France. It rose to an altitude of 6,000 feet. This balloon or ballon was named for the oblong paper bag used in their early experiments. The brothers made a fire and used the smoke and heated air to fill the ballon. Because the air inside the ballon was warmer than the cooler and heavier air surrounding the ballon, it floated upward.
A few months later, a hydrogen-filled balloon designed by Professor Jacques Alexander Charles was successfully launched in Paris, France. Since hydrogen is a gas that is lighter than air molecules, it displaces the air molecules (or pushes them out of the way) as it rises upwards. By the end of that year, both kinds of balloons were being used to carry passengers.
The invention of the balloon started a new period of explorations. The early "aeronauts" competed with one another to travel higher and farther. Today, balloons are used mostly for sports and recreational purposes or for high-altitude scientific and meteorological research.
There are two main parts of a balloon: the balloon itself which is called the envelope, and the basket or gondola. The gondola is attached to the envelope by strong cables. The envelope is made of a lightweight, gas-tight fabric.
How They Stay Airborne
A balloon gets its lift from Archimedes' Principle. A balloon traps lighter-than-air gases (hydrogen, helium, or hot air) in its envelope. These gases then displace the cooler and/or heavier air surrounding the envelope on the outside. This creates an upthrust, or buoyancy force, that lifts the envelope and gondola (that is, as long as the combined weight of gondola, passengers and payload is not greater than the lift force!).
The air pressure in the atmosphere slowly decreases as one rises in altitude. If the pressure on the top of a balloon could be carefully measured, one would find that the air pressure is slightly less there than at the bottom of the balloon. The greater pressure acting on the bottom results in a small lift force pushing up on the balloon. The amount of lift depends on the difference in pressure between the top and the bottom of the balloon: as that pressure difference becomes greater, the lift force becomes greater. The lift also depends on the balloon's surface area: the greater the surface area, the greater the lift force.
The Greek mathematician and engineer, Archimedes, described the amount of buoyancy force on an object as equal to the weight of the fluid that it displaces.
As a simple math problem it looks like this:
Weight of displaced fluid = object's volume x fluid density

The weight of the fluid displaced is the displaced volume times the fluid density. Since the buoyancy force on an object in the atmosphere is so small, an object must be very large and very lightweight to get enough buoyant lift to be useful.
So what does Archimedes taking a bath have to do with flying balloons on Mars? In order for a hot air balloon to carry two people of average weight in Earth's atmosphere, the envelope would need to be almost 55 feet in diameter. That's a fairly large balloon. This balloon would then displace about 6,200 pounds of air. The hot air inside the balloon would weigh about 5,200 pounds. This means that the hot air balloon could really only lift about 1,000 pounds. If the balloon envelope, basket, fuel tanks, and burner weigh about 600 pounds, that would leave only about 400 pounds of lift to pick up two people. In Earth's atmosphere it would be able to generate enough lift to carry its payload safely.
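The round numbers in this example can be checked with Archimedes' Principle. A minimal sketch (sea-level air density and an envelope temperature near 100 °C are my assumptions; the article gives only rounded pound figures, so the outputs land in the same ballpark rather than matching exactly):

```python
import math

LB_PER_KG = 2.20462
RHO_AIR = 1.225            # kg/m^3, sea-level air at ~15 C (assumed)

def sphere_volume_m3(diameter_ft: float) -> float:
    """Volume of a spherical envelope of the given diameter in feet."""
    r = diameter_ft * 0.3048 / 2.0          # feet -> meters
    return 4.0 / 3.0 * math.pi * r ** 3

# The article's 55-ft envelope
v = sphere_volume_m3(55.0)
displaced_lb = v * RHO_AIR * LB_PER_KG      # weight of air pushed aside

# Ideal-gas approximation: density scales inversely with absolute
# temperature, so air heated from ~288 K to ~373 K is lighter by that ratio.
hot_air_lb = displaced_lb * 288.0 / 373.0
lift_lb = displaced_lb - hot_air_lb
print(f"displaced {displaced_lb:.0f} lb, hot air {hot_air_lb:.0f} lb, "
      f"net lift {lift_lb:.0f} lb")
```

The exact lift depends strongly on the envelope temperature and shape, which is why the article's figures (about 6,200 lb displaced, 5,200 lb of hot air, roughly 1,000 lb of lift) differ somewhat from this idealized sphere.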
But what about Mars' atmosphere?
Surprise: a hot air balloon on Mars makes little sense, because Mars has almost no air. The Martian atmosphere is CO2, but there is so little of it that heating it would provide almost no lift. On Mars, a helium balloon would make sense.
Did you ever play with a helium balloon at a birthday party when you were little? Can you imagine a helium balloon on Earth that could carry two people (about 400 lbs.)? If the envelope of the balloon were made of thin mylar, it would weigh 1/3 as much as the fabric that is used for hot air balloons. There would be no propane gas tank or gas burner, but there would still be a gondola, and there would be a helium tank. The helium balloon would be 30 feet in diameter.
On Mars, if you used the same materials for a two-person balloon, the diameter would have to be 160 feet. That's one enormous balloon! The envelope would weigh 1,300 lbs., and if you packed it in a container with no airspace, the container would be 2 feet in diameter and 5.5 feet long. That's a big, heavy object to send all the way to Mars. There would be very few benefits for the cost of such a balloon, not to mention that it would be difficult to control its flight.
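The jump from 30 feet to 160 feet follows from how thin the Martian atmosphere is. A sketch of the scaling (the atmospheric and helium densities are my assumed round numbers, and the envelope's own mass is ignored, so these diameters are lower bounds compared with the article's figures):

```python
import math

def balloon_diameter_ft(payload_kg: float, rho_atm: float, rho_gas: float) -> float:
    """Diameter of a spherical balloon whose buoyant lift just supports
    payload_kg, ignoring envelope mass. Net lift per m^3 is
    (rho_atm - rho_gas); gravity cancels because both the buoyant force
    and the payload's weight scale with g."""
    volume = payload_kg / (rho_atm - rho_gas)        # m^3 needed
    radius = (3.0 * volume / (4.0 * math.pi)) ** (1.0 / 3.0)
    return 2.0 * radius / 0.3048                     # meters -> feet

payload = 181.0  # ~400 lb, the article's two-person payload
print(f"Earth: {balloon_diameter_ft(payload, 1.225, 0.17):.0f} ft")
print(f"Mars:  {balloon_diameter_ft(payload, 0.020, 0.0014):.0f} ft")
```

Even this lower bound shows the Mars balloon must be several times the Earth diameter; adding the mass of the envelope itself, which grows with surface area, pushes the sizes toward the 30-ft and 160-ft figures the article quotes.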
So, how are balloons controlled anyway?
The aeronaut can only control the upward and downward movement of the balloon. To ascend, the aeronaut adds hot air or more lighter-than-air gas into the envelope. To descend, the aeronaut releases the hot air or lighter-than-air gas out of the envelope. A balloon has no means of propulsion. A balloon's side-to-side movement cannot be controlled so it drifts with the wind. To change direction, the aeronaut must ascend or descend to catch a wind current moving in the desired direction of flight.
Hot Air Balloon
These balloons are used mainly for sport and recreation. They use air heated by a burner to give the lifting force to carry the gondola and its passengers and cargo. To rise higher, more hot air is released into the envelope. This is done by pulling on a cord that releases the flow of liquid propane from its storage cylinder through a tube toward the burner. The liquid is heated by a flame which warms it and turns it into a gas. The gas reaches the burner. Flames are released from the burner which warms the air in the envelope. To descend, hot air is released from the top of the envelope. This causes the air inside the balloon to become cooler. The balloon then descends.
A hydrogen balloon works just like a hot air balloon except that it does not use a burner to generate hot air. It uses a gas called hydrogen. The gas is stored in a tank and released into the envelope when the balloon needs to ascend. Hydrogen is lighter than the mixture of air molecules found in Earth's atmosphere. So when the envelope is filled with hydrogen, it naturally rises above the heavier air molecules by pushing them out of the way on the way up.
Compare Balloons on Mars and Earth
Atmospheric Flight Table of Contents
Planetary Flight Home Page | <urn:uuid:9efc2814-202d-46aa-b2d9-2d16396e1c3d> | 3.9375 | 1,498 | Knowledge Article | Science & Tech. | 60.729109 |
Scientists have used a novel technique to probe the nature of dark energy some 10 billion years into the past. They hope it will bring them closer to an explanation for the strange force that appears to be driving the Universe apart at an accelerating rate. The method relies on bright but distant objects known as quasars to map the spread of hydrogen gas clouds in space. The 3D distribution of these clouds can be used as a tracer for the influence of dark energy through time. A scholarly paper describing the approach has been submitted to the journal Astronomy & Astrophysics and posted on the arXiv.org preprint site. It is authored by the BOSS (Baryon Oscillation Spectroscopic Survey) team, which uses the 2.5m Sloan Foundation Telescope in New Mexico, US, to make its observations of the sky.
The international group's new data is said to be a very neat fit with theory, confirming ideas that dark energy did not have a dominant role in the nascent Universe. Back then, gravity actually held sway, decelerating cosmic expansion. Only later did dark energy come to the fore. "We know very little about dark energy but one of our ideas is that it is a property of space itself - when you have more space, you have more energy," explained Dr Matthew Pieri, a BOSS team-member. "So, dark energy is something that increases with time. As the Universe expands, it gives us more space and therefore more energy, and at some point dark energy takes over from gravity to end the deceleration and drive an acceleration," the Portsmouth University, UK, researcher told BBC News.
The BOSS team used 48,000 distant quasars to "back-light" and map the distribution of clouds of hydrogen gas in the early Universe
The discovery that everything in the cosmos is now moving apart at a faster and faster rate was one of the major breakthroughs of the 20th Century. But scientists have found themselves grasping for new physics to try to explain this extraordinary phenomenon. A number of techniques are being deployed to try to get some insight. One concerns so-called baryon acoustic oscillations. These refer to the pressure-driven waves that passed through the post-Big-Bang Universe and which subsequently became frozen into the distribution of matter once it had cooled to a sufficient level. Today, those oscillations show themselves as a "preferred scale" in the spread of galaxies - a slight excess in the numbers of such objects with separations of 500 million light-years.
It is an observation that can be used as a kind of standard ruler to measure the geometry of the cosmos. The BOSS team has already done this using a large volume of galaxies that stretch some six billion light-years from Earth. But at greater distances - and hence deeper in cosmic time - these standard galaxies are simply too faint for the Sloan telescope to see. Instead, the BOSS team has used quasars (quasi-stellar radio sources) to help it map the cosmos. Quasars are far flung galaxies where a massive central black hole is driving the emission of huge amounts of electromagnetic radiation. These are visible to Sloan. | <urn:uuid:0d0b1bb6-aa5b-473f-a32a-f0af3bee18e4> | 3.546875 | 647 | Comment Section | Science & Tech. | 44.445511 |
How to make a rocket using common cooking ingredients
Use baking soda and vinegar to create a chemical reaction which launches a rocket (a film canister) sky high.
How do differences in surfaces affect the adhesion of tape?
The purpose of this experiment is to examine how differences in surfaces affect the adhesion of several brands of tape. The experiment will show which brand of tape works best, and on which type of surface each sticks best. Based on what is known, the tape is expected to stick better to drier, less slippery surfaces.
How to make a baking soda volcano
You can create quite a fizzle by mixing baking soda (sodium bicarbonate), a base, with vinegar (acetic acid). Well... it's theoretically 'edible', but I wouldn't recommend tasting it (yuck!).
Show how many colors of food dyes or inks are found in a Smartie or M&M
This experiment shows you how to split the different colors of ink or food dyes that can be found in a Smartie or M&M using the process of chromatography.
The amazing jumping Rice Krispies
This really fun experiment shows you how to make Rice Krispies stand up and jump from a plate, with the help of static electricity.
How to make a potato battery
This activity uses a common potato and two different metals to make enough electricity to run a small digital clock. Try this activity, then attempt to expand on it to create a neat science fair project.
A really simple but cool experiment using raisins and soda water!
How much space does trash occupy?
The amount of trash placed in landfills and sanitary dumps is a major concern. Let's look at how much space trash really occupies.
How to make a vegetable oil tanker
This activity shows you how to make an oil "tanker" which can be used to clean up a vegetable oil "spill".
From fire scars in the stumps of old-growth oak trees, a team of researchers led by Illinois botanist William E. McClain has given us an amazing glimpse into the history of fire in the U.S. Upper Midwest. The team’s work, published in a recent issue of the journal Castanea, is the most in-depth study of the region’s fire history published to date—detailing the frequency of human-made fires over a 226-year period and revealing how a brief interval of fire suppression permanently changed the landscape.
McClain, who is based at the Illinois State Museum, began tracing the history of fire in Illinois oak trees in 1996. That year, he made the observation of a lifetime when he was invited to have a look at a stand of trees slated for auction in northwestern Hamilton County, at a site a few miles south of the village of Dahlgren.
“The woods was wonderful,” he said. “It had numerous trees that had old-growth features, such as the spiraling of bark on the trunk. We walked through an adjacent [lot where] trees had [already] been harvested. The fire scars in the stumps caught my attention immediately. I had never seen so many scars.”

Living Records of Fire History
While researchers from the Illinois Department of Natural Resources, Illinois Nature Preserves Commission, and Illinois Natural History Survey created maps and performed extensive ecological assessments of the study area, McClain prepared cross-sections from 36 old-growth post oaks (Quercus stellata) scattered across the Hamilton County study site. For each cross-section, he counted growth rings and fire scars.
Fire scars, similar to other marks and features found in the growth layers in the heartwood of trees, are records of natural history. Each year, a new layer of wood, or growth ring, is formed from cambium, the thin layer of living tissue between the wood and the bark. In trees that survive fire, the areas of cambium that die as a result of exposure to intense heat are overgrown by a new cambium layer. “This process continues each year until the wounded area is healed,” McClain said.
After healing, there exists a visible fissure in the heartwood, which is a distinguishing feature of fire scars. Fire scars frequently are dark in color and contain charcoal.
Tracing the Fire History of Illinois
While the ecological impacts of present-day wildfires are relatively well understood, much less is known about the historical role of human-made fire in shaping the ecosystems that exist today. This is especially true for places such as the Midwestern United States, where fire is now relatively infrequent but appears to have been common throughout long periods of the region’s history.

Written accounts indicate that many fires were observed in Illinois in the 17th and 18th centuries. According to McClain, “During [these periods], many fires were started intentionally for hunting or other purposes. Lightning-caused fires were not as common.”
After comparing the chronology of post oak fire scars with the written accounts of fire, and after examining fire records dating to the 1670s from white oaks in Baber Woods in Edgar County, McClain and colleagues were able to confirm that the peoples who inhabited Illinois prior to the arrival of European settlers set fires that burned into the woodlands of Hamilton County about every two or three years. The team also found that a temporary period of fire suppression produced a dramatic and permanent ecological change locally.
Indeed, the fire scars in post oaks and fire records from nearby states indicate that fires were set routinely in the Upper Midwest, perhaps for hundreds of years, until 1850. Up to that point, open post-oak woodlands and prairies dominated the Midwest landscape. (Fires did continue longer in parts of Illinois and other states.)
But the disappearance of fire in the Hamilton County woods after 1850 caused a fundamental change in the region’s ecology. Post oaks are slow growing, relatively insensitive to fire, and intolerant to shade. Until 1850, they were able to thrive because frequent fire killed off faster growing, shade-tolerant species and kept the woodland open. Although the practice of intentionally setting fires resumed around 1885, by then the woodland had a greater density of trees. The lack of fire for 35 years had allowed shade-tolerant species to proliferate, giving rise to a forest with a dense understory.
According to McClain, this pattern of fire suppression and ecological change is now apparent elsewhere in the United States. “The lack of fire is considered to be the reason why mesic (requiring moderate amounts of moisture), shade-tolerant, fire-sensitive tree species are replacing the oaks in forests throughout the eastern part of the United States.”
“Oaks are not reproducing,” he added. The same is true for the post oaks in Hamilton County and for the white oaks in Baber Woods—their seedlings are not surviving, which means that the oaks may soon disappear from the region entirely.
About Science Up Front
A regular Britannica Blog feature written by the encyclopedia’s own Kara Rogers, Science Up Front goes behind the headlines to bring researchers’ stories of discovery center stage. Begun in 2009 to highlight the ingenious work of pioneering scientists and to bring greater accuracy to science reporting, the series goes straight to the source, exploring the latest advances in science, from medicine to nanotechnology to conservation, through first-hand interviews with researchers. The series covers all things science, so check back regularly to see who’s up on Science Up Front.
Archived:Calculating text width in Qt
This code snippet shows how to use the QFont and QFontMetrics classes to draw a string at the center of the screen and to get the string's height and width in pixels.
Note: In order to use this code, you need to have Qt for Symbian installed on your platform.
- Install the latest Qt for Symbian; see Qt for Symbian - Installation packages
- Check this link for installation guide: How to install the package.
- Go through this article: Getting started with Qt for Symbian
- This source code centers the text and sets the text colour.
// Create font (any available family and point size will do)
QFont font( "Helvetica", 12 );
// Set current font
painter.setFont( font );
// Set font color
painter.setPen( Qt::red );
// Get QFontMetrics reference
QFontMetrics fm = painter.fontMetrics();
QString text = "helloworld";
// Calculate text center position into the screen using QFontMetrics class
QPoint center = QPoint( ( width() - fm.width( text ) ) / 2,
( height() - fm.height() ) / 2 );
// Draw the string at the calculated position
painter.drawText( center, text );
// QFontMetrics::width() gives the calculated text width with the current QFont of the QPainter
// QFontMetrics::height() gives the text height
The text is centered on the screen.
The code example can be found at File:HelloworldNew.zip | <urn:uuid:764f495e-5459-4624-a80e-50e606011e24> | 3.1875 | 284 | Documentation | Software Dev. | 61.520943 |
Disulfide bonds are an essential stabilizing feature of many proteins. More than 50% of human ER proteins are estimated to contain disulfide bonds (dsb) and the majority of secreted proteins also contain dsbs.
Using bacteria to produce these proteins efficiently presents a significant challenge because the correct pairing of cysteines in a protein with multiple disulfide bonds is inherently fraught with error. Misoxidation of the incorrect pairs of cysteines results in misfolding and low yields.
Nature has resolved this issue by shuffling the incorrect disulfide bonds in misfolded protein into their native correct pairing by the activity of disulfide bond isomerases. Protein disulfide isomerase (PDI) carries out dsb oxidation and isomerization in the ER of all eukaryotes.
In many Gram-negative bacteria (including E. coli) the cooperative action of DsbA and DsbC oxidizes proteins within the periplasmic space. This is also true within commonly used E. coli protein-expression strains, with the exception of engineered strains such as the Origami™ strains from EMD Biosciences and the SHuffle® strains from New England Biolabs. These commercial strains have been similarly engineered to possess an oxidative cytoplasmic environment that favors disulfide bond formation. The SHuffle strains have been further modified to support robust production of dsb-containing proteins.
Dsb Formation in the SHuffle Strains
SHuffle strains contain a unique trio of modifications that produce robust disulfide bond formation in the cytoplasm.
First, the cytoplasmic environment is altered by elimination of glutathione reductase (gor gene product) and thioredoxin reductase (trxB gene product). Since the combination of the gor and trxB deletions is lethal, a suppressor mutation in the ahpC gene is necessary for the SHuffle (and Origami) strains to maintain viability.
Second, SHuffle strains are uniquely engineered to overexpress DsbC within the cytoplasm. The DsbC enzyme acts as a disulfide bond isomerase and “shuffles” mis-oxidized cysteine pairs, allowing the recombinant target protein to achieve its properly folded conformation. Due to the action of DsbC in the SHuffle cell, less target protein proceeds down the paths of protease degradation or inclusion body formation.
Finally, the SHuffle strains were engineered from robust parent strains capable of tightly controlled protein expression. For example, SHuffle T7 Express (B strain) and SHuffle T7 (K-12 strain) both express the T7 RNA polymerase from the lac operon, whereas most other T7 expression strains utilize the DE3 prophage.
T7 expression in DE3 strains is known to be somewhat uncontrolled and this can have very detrimental effects on cell growth and protein yield when the target protein is even mildly toxic, which is a common occurrence with heterologous dsb-containing proteins. If strictly controlled T7 expression is required, then lysY strains are also available.
The lysY gene product is a variant of T7 lysozyme containing a mutation that eliminates “lysozyme” function on the E. coli cell wall, but the ability of LysY to inhibit T7 RNA polymerase function is unaffected. During the pre-induction phase, a constant low level of LysY inhibitor protein is produced to inactivate any basal expression of T7 RNA polymerase. | <urn:uuid:a9cfca30-9e7f-46de-a6c9-b6cc4e03daec> | 2.703125 | 736 | Knowledge Article | Science & Tech. | 25.005991 |
All Sather 1.1 implementations must support the language kernel defined in the last chapter. This chapter defines language extensions which may not be meaningful on every platform or which can be very difficult to implement. For example, platforms without a Fortran compiler need not implement the Fortran language interface.
Although these extensions are optional, they should be considered part of the Sather specification. For example, Sather 1.1 implementations which interface to Fortran must provide the language extension described here. The ICSI compiler supports all extensions described in this chapter on one or more platforms.
The threaded and synchronization extensions enable parallel processing. The synchronization and distributed extensions are only of use with the threaded extension. Collectively, these three extensions are known as pSather, and the language without these extensions is serial Sather. | <urn:uuid:9d60e45f-6801-4e39-a7ed-e1df5c0b20bb> | 2.8125 | 164 | Documentation | Software Dev. | 26.533141 |