In Japan, engineers have laid a power line that can connect reactor 2 of the Daiichi facility to the off-site power grid, the International Atomic Energy Agency reported. Workers plan to reconnect power to reactor 2 after they finish spraying water into the reactor 3 complex to provide additional cooling to the used fuel pool. Reconnecting to the power grid is expected to enhance efforts to prevent further damage at the plant.
Japan's Nuclear and Industrial Safety Agency reported on Thursday that the backup diesel generator for reactor 6 is working and supplying electricity to reactors 5 and 6. TEPCO is preparing to add water to the storage pools that house used nuclear fuel rods at those two reactors.
UPDATE AS OF 1:30 P.M. EDT, THURSDAY, MARCH 17
Radiation readings at the Fukushima Daiichi site boundary were measured today at a lower level, between 2 and 3 millirem per hour.
Brave New Climate analysis from Barry Brook
Is the spent fuel in the pools in Units 3 and 4 now uncovered? The big concern here is that, unlike the releases from damaged fuel in the reactor cores of Units 1, 2, and 3, which were largely filtered by scrubbing in the containment suppression pools (wetwell torus), releases of volatile fission products (e.g., cesium and iodine) from these spent fuel pools have direct pathways to the environment if they remain dry for an extended period.
Efforts to deliver water to these pools have proven very difficult, and fuel damage may be occurring. If they are exposed, then using the evaporation of salt water as a heat sink over periods of more than a few days is not viable, because the quantities of salt deposited as the water evaporates become large in volume and plug the flow paths through the fuel, degrading heat removal. Everything that is cooled becomes a heat sink to condense anything volatilized. Unfortunately, a fresh water supply seems difficult to come by.
In sum, this accident is now significantly more severe than Three Mile Island in 1979. It resulted from a unique combination of failures of plant systems caused by the tsunami, and the broad destruction of infrastructure for water and electricity supply which would normally be reestablished within a day or two following a reactor accident. My initial estimates of the extent of the problem, on March 12, did not anticipate the cascading problems that arose from the extended loss of externally sourced AC power to the site, and my prediction that ‘there is no credible risk of a serious accident‘ has been proven quite wrong as a result. It remains to be seen whether my forecast on the possibility of containment breaches and the very low level of danger to the public as a result of this tragic chain of circumstances will be proven correct. For the sake of the people there, I sure hope it does stand the test of time.
So it is now generally agreed that this is an International Nuclear and Radiological Event Scale (INES) level 6 event: worse than Three Mile Island (5), but not yet at the level of Chernobyl (7).
Isotropic Vector Matrix
The isotropic vector matrix has already been introduced; we just didn't know its name.
If you can visualize the space-filling array of spheres in "cubic packing" described in the previous chapter, that's half the picture. Now, imagine interconnecting the centers of all spheres and then eliminating the spheres. Two collinear radii meeting at the tangency point between adjacent spheres form one unit vector, the length of which is equal to the sphere's diameter (Fig. 9-1). The resulting array of vectors is the "isotropic vector matrix," a space-filling network of continuously alternating octahedra and tetrahedra. Reviewing the characteristics of cubic packing, we shall not be surprised to find that all the newly formed vertices (the spheres' centers) are identically situated. Two types of cells, one type of vertex.
It's not hard to see how Fuller's search for a geometry of vectors led him to the isotropic vector matrix. "Since vectors... produce conceptual structural models of energy events, and since my hypothetical generalization of Avogadro's law requires that 'all the conditions of energy be everywhere the same,'" ponders Fuller, "what does this condition look like as structured in vectorial geometry?" His answer is ready: "Obviously all the vectors must be the same length and all of them must interact [sic] at the same angles" (986.131b).
The isotropic vector matrix, or IVM, takes the VE a step further, consisting of identical lengths and angles, not for vectors surrounding just one point, but surrounding every point in an indefinite expanse. In Fuller's words, the IVM is "a multidimensional matrix in which the vertexes are everywhere the same and equidistant from one another" (222.25).
It is not correct to conclude that the IVM consists of many vector equilibria packed together, for the VE by itself cannot fill space. To understand why not, we look at isolated sections of the IVM. As difficult as it is to visualize the overall matrix, a single row of alternating tetrahedra and octahedra, or even a planar expanse, can be easily envisioned (Fig. 9-2a, b). Separate planar layers are then stacked together in such a way that every octahedron is adjacent to a tetrahedron and vice versa. Figure 9-3 shows three layers of the resulting matrix.
Every node in the IVM, as the origin of twelve unit vectors radiating outwardly, is the center of a local vector equilibrium. The ends of these unit vectors define the twelve vertices of the VE. However, this does not mean that adjacent cuboctahedra pack together to produce a space-filling expanse. A symmetrical array can be created by bringing the square faces of adjacent vector equilibria together, but they are necessarily separated by octahedral cavities, framed by the triangular faces of eight converging VEs. The unavoidable octahedra between adjacent VEs provide yet another manifestation of the specificity of the shape of space. This array can be readily understood by observing in Figure 9-4 that a packing of vector equilibria is equivalent to a framework of cubes in which the corners have been chopped off, thus automatically carving out an octahedral cavity at every junction of eight boxes.
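The "twelve unit vectors around every node" claim can be checked numerically. The sketch below is my own illustration, not from the text: it generates the sphere-center lattice of cubic (face-centered) packing and counts the equidistant nearest neighbors of the central node.

```python
import itertools

def ivm_nodes(n):
    """Sphere-center nodes of cubic (face-centered) packing: integer
    lattice points (i, j, k) with i + j + k even.  Nearest neighbors
    then sit at squared distance 2, i.e. one sphere diameter apart."""
    return [p for p in itertools.product(range(-n, n + 1), repeat=3)
            if sum(p) % 2 == 0]

def neighbors(p, pts):
    """Points of pts at the nearest-neighbor squared distance 2 from p."""
    return [q for q in pts
            if sum((a - b) ** 2 for a, b in zip(p, q)) == 2]

nodes = ivm_nodes(2)
# An interior node has exactly 12 equidistant neighbors -- the 12 unit
# vectors of a local vector equilibrium (cuboctahedron).
print(len(neighbors((0, 0, 0), nodes)))  # 12
```

The twelve neighbors are the permutations of (±1, ±1, 0), which are precisely the vertices of a cuboctahedron centered on the node.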
The above observations provide information about the shapes and angles of the IVM, the most symmetrical arrangement of points in space, and therefore about the shape of space itself. These characteristics reveal the basis for the term "isotropic vector matrix": in Fuller's words,
"Isotropic" meaning "everywhere the same"; "isotropic vector" meaning "everywhere the same energy conditions." ... This state of omnisameness of vectors ... prescribes an everywhere state of equilibrium. (420.01-3)
He calls the IVM "multidimensional" because it "accommodates" (or occupies) all spatial dimensions, and, consistent with his unorthodox interpretation of dimension, space is "multi-" rather than "three-dimensional." Vectors are directed in every possible direction, while deliberately maintaining equivalent lengths and angles. This equivalence is necessarily determined by the symmetry of space:
This matrix constitutes an array of equilateral triangles that corresponds with the comprehensive coordination of nature's most economical, most comfortable, structural interrelationships employing 60-degree association and disassociation. (420.01)
As seen in the earlier development of vector equilibrium, spatial "omnisymmetry" incorporates four planes of symmetry: four unique directions of equilateral triangles. Recalling the way cookies fit most economically on a baking sheet, we can feel quite comfortable with the triangular symmetry of the plane. The implication is that the shape of space can be described through four such continuous planes.
The World Meteorological Organization (WMO) released a statement on the global climate in 2006 on December 14, 2006.
Unfortunately, with respect to reporting on surface temperature trends, the Statement perpetuates the use of surface temperature trends as the metric to assess global warming (or cooling) [i.e. why not, at least, also include ocean heat content anomalies for 2006?]. Moreover, the Statement does not question the accuracy and spatial representativeness of the land surface temperature data.
The WMO Statement on the Status of the Global Climate in 2006 includes the information that,
“The global mean surface temperature in 2006 is currently estimated to be + 0.42°C above the 1961-1990 annual average (14°C/57.2°F), according to the records maintained by Members of the World Meteorological Organization (WMO). The year 2006 is currently estimated to be the sixth warmest year on record. Final figures will not be released until March 2007.
Averaged separately for both hemispheres, 2006 surface temperatures for the northern hemisphere (0.58°C above 30-year mean of 14.6°C/58.28°F) are likely to be the fourth warmest and for the southern hemisphere (0.26°C above 30-year mean of 13.4°C/56.12°F), the seventh warmest in the instrumental record from 1861 to the present.
Since the start of the 20th century, the global average surface temperature has risen approximately 0.7°C. But this rise has not been continuous. Since 1976, the global average temperature has risen sharply, at 0.18°C per decade. In the northern and southern hemispheres, the period 1997-2006 averaged 0.53°C and 0.27°C above the 1961-1990 mean, respectively.
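As an aside on how a figure like "0.18°C per decade" is typically derived, the sketch below fits an ordinary least-squares slope to a series of annual anomalies. The anomaly values are a synthetic, idealized ramp for illustration, not actual WMO, Hadley Centre, or NOAA data.

```python
def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys regressed on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

years = list(range(1976, 2007))
# Synthetic placeholder: an idealized 0.18 deg C/decade warming ramp.
anomalies = [0.018 * (y - 1976) for y in years]

trend_per_decade = ols_slope(years, anomalies) * 10
print(round(trend_per_decade, 2))  # 0.18
```

A single fitted slope like this is exactly the kind of global linear summary that, as discussed below, can mask large regional warm and cold anomalies.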
Regional temperature anomalies
The beginning of 2006 was unusually mild in large parts of North America and the western European Arctic islands, though there were harsh winter conditions in Asia, the Russian Federation and parts of eastern Europe. Canada experienced its mildest winter and spring on record, the USA its warmest January-September on record and the monthly temperatures in the Arctic island of Spitsbergen (Svalbard Lufthavn) for January and April included new highs with anomalies of +12.6°C and +12.2°C, respectively.
Persistent extreme heat affected much of eastern Australia from late December 2005 until early March with many records being set (e.g. second hottest day on record in Sydney with 44.2°C/111.6°F on 1 January). Spring 2006 (September-November) was Australia’s warmest since seasonal records were first compiled in 1950. Heat waves were also registered in Brazil from January until March (e.g. 44.6°C/112.3°F in Bom Jesus on 31 January – one of the highest temperatures ever recorded in Brazil).
Several parts of Europe and the USA experienced heat waves with record temperatures in July and August. Air temperatures in many parts of the USA reached 40°C/104°F or more. The July European-average land-surface air temperature was the warmest on record at 2.7°C above the climatological normal.
Autumn 2006 (September-November) was exceptional in large parts of Europe at more than 3°C warmer than the climatological normal from the north side of the Alps to southern Norway. In many countries it was the warmest autumn since official measurements began: records in central England go back to 1659 (1706 in The Netherlands and 1768 in Denmark).”
Climate Science has three comments on this presentation of the temperature anomalies.
1. The summary of regional extremes included only one brief extreme cold period in 2006. If there were indeed just this one, that would be remarkable, and would bolster those who have concluded that the global climate system is on a rapid upswing of warming. However, if there were other extreme cold periods, the neglect of these cold events is a clear example of cherry picking to promote a particular perspective on climate change.
The figure below presents the NCEP/NCAR Reanalysis of the surface temperature anomalies for January to November 2006 (thanks to Phil Klotzbach for this). As is clear in this figure, it was significantly warmer than average in the polar latitudes, but there were regions of cooler-than-average temperatures (such as over and near northern Australia and large parts of Siberia). Such regional spatial structure illustrates why regional trends and anomalies, rather than a global average linear trend, should be the emphasis in multi-decadal climate assessments.
To provide examples of the regionally large anomalies on shorter time scales, the four figures below illustrate both significant cold and warm anomalies for the January-February and October-November 2006 time periods (for both the surface air and 700 hPa temperatures). The large winter cold anomaly is quite clear in the January-February figure, for example.
Other figures which document regionally large warm and cool anomalies are available from the excellent NOAA Climate Diagnostic website.
2. As we have documented most recently in
Pielke Sr., R.A., C. Davey, D. Niyogi, K. Hubbard, X. Lin, M. Cai, Y.-K. Lim, H. Li, J. Nielsen-Gammon, K. Gallo, R. Hale, J. Angel, R. Mahmood, S. Foster, J. Steinweg-Woods, R. Boyles , S. Fall, R.T. McNider, and P. Blanken, 2006: Unresolved issues with the assessment of multi-decadal global land surface temperature trends. J. Geophys. Research, submitted,
there are significant biases in the land surface component of the temperature trend record. This includes a warm bias in the nighttime minimum temperatures (if the intent is to monitor climate system heat content changes). The WMO is either ignoring or is unaware that any warming (or cooling) in the nighttime boundary layer results in near surface temperature anomalies that overstate the actual warming or cooling in the boundary layer; see
Pielke Sr., R.A., and T. Matsui, 2005: Should light wind and windy nights have the same temperature trends at individual levels even if the boundary layer averaged heat content change is the same? Geophys. Res. Letts., 32, No. 21, L21813, 10.1029/2005GL024407.
3. The WMO Statement reports on where the surface temperature data used to prepare the Statement comes from; i.e.
This preliminary information for 2006 is based on observations up to the end of November from networks of land-based weather stations, ships and buoys. The data are collected and disseminated on a continuing basis by the National Meteorological and Hydrological Services of WMO Members. However, the declining state of some observational platforms in some parts of the world is of concern.
It should be noted that, following established practice, WMO’s global temperature analyses are based on two different datasets. One is the combined dataset maintained by the Hadley Centre of the UK Met Office, and the Climatic Research Unit, University of East Anglia, UK. The other is maintained by the US Department of Commerce’s National Oceanic and Atmospheric Administration (NOAA). Results from these two datasets are comparable: both indicate that 2006 is likely to be the sixth warmest year globally.”
The two data sets that are referred to, however, are NOT different and are NOT independent assessments of temperature anomalies. As we report in the Pielke et al. paper,
“The raw surface temperature data from which all of the different global surface temperature trend analyses are derived are essentially the same. The best estimate that has been reported is that 90-95% of the raw data in each of the analyses is the same [Phil Jones, personal communication]. That the analyses produce similar trends should, therefore, come as no surprise.”
Thus the WMO Statement that there are two data sets is misleading, and provides a reader of the Statement with an inaccurate assumption on the robustness of the assessment of the surface temperature trends.
The preparation of the WMO Statement, therefore, is not a balanced presentation of climate system heat content changes (e.g. “global warming”), or climate, in general, in 2006. What should be of concern to everyone is that peer-reviewed issues that have been reported in the scientific literature concerning the robustness of the land surface temperature data to assess multi-decadal trends and anomalies to tenths of a degree are being ignored. Moreover, there is a clear emphasis on warm events rather than also including the colder than average episodes that occurred during the year. It is encouraging, however, that the WMO Statement had a regional focus in part of their Statement, as has been urged by Climate Science.
For their Final Statement for 2006, all of us should encourage the WMO to prepare a summary of the climate which includes each of the major regional temperature anomaly events, even if they conflict with the multi-decadal global climate model predictions.
Up to this point, I've shown you various comparison functions without really saying much about the differences between them. In this chapter, I'll (finally) tell you about how and why the comparison functions differ and offer some guidelines for their proper use.
Lisp has a core set of comparison functions that work on virtually any kind of object. These are EQ, EQL, EQUAL, and EQUALP.
The tests with the shorter names support stricter definitions of equality. The tests with the longer names implement less restrictive, perhaps more intuitive, definitions of equality. We'll learn about each of the four definitions in the following sections.
EQ is true for identical symbols. In fact, it's true for any identical object. In other words, an object is EQ to itself. Even a composite object, such as a list, is EQ to itself. (But two lists are not EQ just because they look the same when printed; they must truly be the same list to be EQ.) Under the covers, EQ just compares the memory addresses of objects.
The reason that symbols are EQ when they have the same name (and are in the same package) is that the Lisp reader interns symbols as it reads them. The first time the reader sees a symbol, it creates it. On subsequent appearances, the reader simply uses the existing symbol.
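Interning is easy to check at the REPL; a short illustrative snippet (not from the original chapter):

```lisp
;; Reading the same name twice yields the very same symbol object,
;; so EQ -- a bare identity test -- succeeds.
(eq (read-from-string "foo")
    (read-from-string "foo"))   ; => T
```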
EQ is not guaranteed to be true for identical characters or numbers. This is because most Lisp systems don't assign a unique memory address to a particular number or character; numbers and characters are generally created as needed and stored temporarily in the hardware registers of the processor.
EQL retains EQ's notion of equality, and extends it to identical numbers and characters. Numbers must agree in value and type; thus 0.0 is not EQL to 0. Characters must be truly identical; EQL is case sensitive.
EQ and EQL are not generally true for lists that print the same. Lists that are not EQ but have the same structure will be indistinguishable when printed; they will also be EQUAL.
Strings are also considered EQUAL if they print the same. As with EQL, the comparison of characters within strings is case sensitive.
EQUALP is the most permissive of the core comparison functions. Everything that is EQUAL is also EQUALP. In addition, EQUALP ignores case distinctions between characters, and applies the (typeless) mathematical concept of equality to numbers; thus 0.0 is EQUALP to 0. EQUALP is also true if corresponding elements are EQUALP in composite data types such as arrays, structures, and hash tables.
The generality of the above longer-named tests comes with a price. They must test the types of their arguments to decide what kind of equality is applicable; this takes time.
EQ is blind to the type of an object; either the objects are the same object, or they're not. This kind of test typically compiles into one or two machine instructions and is very fast.
You can avoid unnecessary runtime overhead by using the most restrictive (shortest-named) test that meets your needs.
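The four definitions can be contrasted in a short REPL session. This is a sketch; the results follow the ANSI standard, and (as noted above) EQ on numbers and characters is implementation-dependent, so it is omitted for those types.

```lisp
;; EQ: object identity.
(eq 'foo 'foo)                   ; => T   (interned symbols are identical)
(eq (list 1 2) (list 1 2))       ; => NIL (two distinct list objects)

;; EQL: identity, plus same-type, same-value numbers and characters.
(eql 0.0 0)                      ; => NIL (types differ)
(eql #\a #\A)                    ; => NIL (case sensitive)

;; EQUAL: descends lists and strings.
(equal (list 1 2) (list 1 2))    ; => T
(equal "abc" "ABC")              ; => NIL (case sensitive)

;; EQUALP: ignores character case and numeric type.
(equalp 0.0 0)                   ; => T
(equalp "abc" "ABC")             ; => T
```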
If you know the type of your data in advance, you can use comparisons that are specialized to test that particular type of data. Tests are available for characters, strings, lists, and numbers. And, of course, there are also comparisons for other relationships besides equality.
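For example, the type-specific tests skip EQUAL's runtime type dispatch entirely (again an illustrative sketch):

```lisp
(char= #\a #\a)              ; => T   (character identity)
(string= "abc" "abc")        ; => T   (case-sensitive string test)
(string-equal "abc" "ABC")   ; => T   (case-insensitive variant)
(= 0.0 0)                    ; => T   (numeric value, ignoring type)
```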
Educators' Collection of Resources on Microbial Life Exposed to High Levels of Radiation
This microbe, called Halobacterium, may hold the key to protecting astronauts from one of the greatest threats they would face during a mission to Mars: space radiation. The harsh radiation of interplanetary space can penetrate astronauts' bodies, damaging the DNA in their cells, which can cause cancer and other illnesses. DNA damage is also behind cancers that people suffer here on Earth. Photo and excerpt taken from http://science.nasa.gov/headlines/y2004/10sep_radmicrobe.htm
Other Radiation Collections
Advanced Collection: Compiled for professionals and advanced learners, this collection includes resources such as journal articles, academic reviews, and surveys.
General Collection: Resources such as news articles, web sites, and reference pages provide a comprehensive array of information about radiation tolerant microbes.
Iteration is at the heart of agile development practices. In an agile project you do something, measure your progress, and then use the feedback from the measurement to figure out what to do next. This cycle allows you to follow the Agile Manifesto value Responding to change over following a plan by providing for points in time where you can measure your progress at the project level. Whether your approach to agile is project-focused like Scrum or development-focused, like extreme programming, iteration is what drives an agile project.
Agile methods use engineering practices such as Unit Testing and Continuous Integration to provide for feedback cycles close to the code. The Iteration allows for a feedback at the macro level, giving the stakeholders the ability to view progress in short regular feedback cycles.
While iteration is key to agile development, working within iterations can be challenging for customers and developers alike.
In an agile project, the team is the group of people delivering the application code. This includes developers, testers, designers, etc. The product owner, sometimes referred to as the customer, is the person specifying the functionality that the team will deliver and its priorities. The product owner specifies what will be built, and the team decides how to build it.
The product owner maintains the product backlog, a prioritized list of everything that may be in the product someday. The portion of the product backlog that the product owner assigns to the iteration is the iteration backlog.
Incremental development means building parts of a system, for example working on the interface to the database. Iteration means starting with a rough solution that works, then improving on it as you go. Jeff Patton explains the difference between incremental and iterative development quite concisely in his article The Neglected Practice of Iteration. In practice, teams combine iteration and incremental development, but iterating on an end-to-end solution is what lets you validate requirements.
In an agile project, an iteration is a fixed period of time during which a team implements a set of features, resulting in a shippable increment of the product or project. This period of time is referred to as a Sprint in Scrum; XP refers to a weekly cycle. Regardless of the name, the key features of an agile iteration are:
- It is time boxed. There is a fixed start and end.
- The amount of work that is planned to be completed during the iteration should not change during the iteration.
- It starts with a planning activity.
- It ends with a review activity.
- At the end there is a shippable product or project increment that can be refined in future iterations.
The last point highlights the difference between iterative and incremental development: each iteration helps the project or application take shape so that the product owner can validate the state of the project with the current goals of the project.
The basics of iterations are that team will:
- Commit to a list of items.
- Work on the list.
- Review what was done.
To iterate successfully an agile team must follow certain activities.
During an iteration, the key activities are:
- Estimating and Planning
The iteration starts with the product owner assigning items off of the product backlog to the iteration. The team then meets and plans how they can complete the backlog items by the end of the iteration. If the team is not confident that they can complete the planned work within the iteration, the team should raise their concerns with the product owner and revise the backlog for that iteration.
It is important to have a realistic iteration backlog so that the team and the product owner have common expectations. While it is not unreasonable to have work in queue in case the team finishes early, planning to do more work than can be done leads to the team and the product owner treating the plan as a wish list rather than a commitment. Over-planning also makes it more difficult to improve your estimation process.
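One common way to keep the iteration backlog realistic is a simple capacity check at planning time. The sketch below is illustrative only: the item names, point estimates, and capacity figure are made up, and a real team would base its capacity on measured velocity from past iterations.

```python
def plan_iteration(backlog, capacity):
    """Take (name, points) items in priority order while they fit the
    team's capacity; everything else stays on the product backlog."""
    planned, total = [], 0
    for name, points in backlog:
        if total + points <= capacity:
            planned.append(name)
            total += points
    return planned, total

# Hypothetical prioritized backlog items with story-point estimates.
backlog = [("login page", 5), ("search API", 8),
           ("audit log", 3), ("reports", 8)]

planned, total = plan_iteration(backlog, capacity=13)
print(planned, total)  # ['login page', 'search API'] 13
```

If the committed items exceed capacity, the team raises it with the product owner and trims the iteration backlog, rather than silently carrying an unrealistic plan.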
At the end of the iteration
Posted in: Press: Environmental
Relevant tags: drought, rivers, water
Water, rivers and climate change are inextricably linked, and are ringing warning bells across the world. More than ever before, the global water situation is uniting people in hardship, with billions being spent to protect water supplies, livelihoods and, ultimately, lives.
In Australia, one of the driest continents, a growing population and drying climate is challenging environmental scientists, water managers and politicians to find short and long-term solutions to the growing crisis. And answers are not cheap or easy, often being social problems that require political action.
The statistics alone are frightening. Of the water available for Australians to use, one quarter of the rivers and lakes are already used for drinking, industry and agriculture, and one third of underground water is being pumped to the surface and used for the same purposes.
If you ask Australia’s national science agency, the CSIRO, about climate change, the outlook is bleak. By 2030, rainfall on the major capitals (except Hobart) could drop by 15 per cent. According to the 2001 report, Climate Change Projections for Australia, Perth could lose up to 20 per cent of its rainfall. At the same time, rising temperatures will increase evaporation, further reducing water supplies in dams, rivers and reservoirs.
In another recent scientific report by the same agency, which examines water price implications for each of Australia’s main cities and regions in 25 years’ time, the real price of water could skyrocket.
The 2006 report, Without Water: The economics of supplying water to 5 million more Australians, says if governments do not act to expand water trading and access ‘new’ sources of water such as building desalination plants, establishing large sewage recycling schemes and making use of storm water, the price of water would increase by between five and ten times in large cities to manage demand.
Internationally, the situation is not much better, and in many areas is far worse. The United Nations describes the global water situation as a “crisis… essentially caused by the way in which we mismanage water.” The U.N. is so concerned about water, it has named 2005 to 2015 as the Decade of Water.
More than 2.7 billion people will face severe water shortages by the year 2025 if the world continues consuming water at the same rate, the United Nations has warned in its annual World Water Assessment Program report.
The looming crisis is being blamed on mismanagement of existing water resources, population growth and changing weather patterns. The areas most at risk from the growing water scarcity are in semi-arid regions of sub-Saharan Africa and Asia.
“Even where supplies are sufficient or plentiful, they are increasingly at risk from pollution and rising demand,” says U.N. Secretary General Kofi Annan.
Extremes in water supply deliver unacceptable shocks to the developing world, explains World Bank Senior Water Advisor, David Grey. “Monsoons, droughts, depleted groundwater resources, and typhoons devastate poor countries because they’re in too deep a hole economically to reduce their risk,” he says.
Grey, soon to visit Australia as a keynote speaker for the International Riversymposium, sees a strong link between the sophistication of a country’s water management and its economic health. He says investors are avoiding countries with unpredictable food production, health problems related to poor water quality, and unreliable electricity supplies.
“Investment doesn’t flow to places where catastrophic water events cause huge social and economic problems and large-scale losses of life,” says Grey.
Like many international water experts, Grey believes Australia must take a lead role with international assistance, training and capacity building for river management, particularly in the Asian Pacific region. He’s impressed by organisations such as the International Riverfoundation which has set up ‘twinning’ programs to help developing countries better manage their river catchments.
Partnerships and community action are critical to managing water and protecting rivers. Many will be highlighted at the coming International Riversymposium in Brisbane in September.
The theme, ‘Managing rivers with climate change and expanding populations’ will investigate the challenge of meeting human needs for water under changing climatic conditions. It’s an opportunity for hundreds of people to share ideas, case studies and examples on how to tackle threats to rivers and catchments.
“Local communities can do amazing things,” says Riversymposium chair Professor Paul Greenfield of the University of Queensland. “There are many positive stories showing how science, public policy and community action are addressing river and global warming issues.”
“For example, the Bulimba Creek Catchment Association, typical of many local conservation groups throughout Australia, has an outstanding record of revegetating bushland and improving water quality in a network of Brisbane creeks,” says Professor Greenfield.
“The association coordinates Waterwatch, supports 23 local Bushcare groups, provides training programs to volunteers, and involves students and community groups in practical conservation projects.”
“Since 1999, the group has involved the community in rehabilitating 46 sites within the catchment, and four sites outside it with support from Landcare, the Natural Heritage Trust and local leaders.”
Each year, the symposium highlights new international and Australian industry practices, government regulations, technology and community education programs to sustain river water supply and quality. The four-day event also includes the prestigious Thiess International and National Riverprize.
The prize, regarded as the ‘Nobel prize for saving rivers’, recognises outstanding achievements in river conservation and management. There are overseas nominations from Israel, U.S.A., Kyrgyzstan, China, and Canada vying for the $225,000 Thiess International Riverprize. There are also nominations from Australia competing for the $75,000 Thiess National Riverprize.
While Australia may not yet be experiencing some of the more dramatic and life threatening situations as many river systems overseas, the clock is ticking, particularly in relation to the current drought and low levels in large dams that supply water to major population centres.
The 9th International Riversymposium will be held at Brisbane’s Convention & Exhibition Centre from 4–7 September as part of the city’s annual Riverfestival. Other activities include Riverfire, Riverfeast and post-symposium study tours.
Regular updates on international river issues, such as water scarcity, estuary flows, wastewater treatment, community consultation, legal frameworks, damming rivers and water policy, will be published in free e-newsletters. For more information please visit the following websites:
This article was written by
International Riversymposium Media | <urn:uuid:00096146-4804-4ef8-a29d-41d206c41261> | 3.140625 | 1,406 | Truncated | Science & Tech. | 24.23292 |
Wildfire and Pine Beetles
Narration: Jefferson Beck
[ music ] [ music ] Across the Rocky Mountain West, red hues dot the forest. But these aren't the colors of autumn. These trees are dying, under attack by an unseen adversary - the mountain pine beetle. Mountain pine beetles are native to western forests, and they've evolved with the lodgepole pine trees they infest. But in the last few years, warming temperatures have caused their numbers to surge. They're killing an unprecedented number of trees. Some say the swath of dead forest left behind sets the stage for another Rocky Mountain native - wildfire.
[ Phil Townsend: ] "For a long time we thought that beetle damaged forests were more likely to burn than green forests. And that's because they look much drier and you have a feeling that this is just a tinderbox ready to go."
But are these trees really more likely to burn? Forest ecologist Phil Townsend and his team are using NASA satellite imagery to find out. The Landsat satellites don't have high enough resolution to discern individual trees, but Landsat's special near-infrared sensor can detect areas of damaged forest. In this false color view, green means healthy forest. Green and red together means damaged trees mixed with healthy ones - possible beetle damage. Recently burned forest shows up as bright red. Landsat images let us study forest health across a large area. But each pixel captures almost a thousand square meters of forest - covering lots of trees. So how can you be sure what's really going on inside a pixel? You've got to hit the ground and see. The team lays out transect tape to measure out points thirty meters apart - the area within a single Landsat pixel. Within this pixel zone, they get a close-up look at the health of each tree.
[ Phil Townsend: ] "When we're in the forest, conducting our research, we look for signs of beetle damage to the trees. The first and most obvious sign would be whether the tree has red needles or not. Well, that's a sign that the tree is dead, but it's not necessarily always caused by beetles. So we then look at the bark of the tree and if we see pitch tubes, which are where beetles have attacked the tree, or exit holes, which are where the young beetles have emerged from the tree, then we know that there has been beetle damage."
Pitch tubes are holes bored by beetles. Living trees defend themselves from beetles by streaming sticky resin from the wounds. But if enough beetles drill enough holes, the trees die. The research confirms that they're reading the Landsat data correctly. The target zones are, for the most part, killed by beetles. Next, they can compare those zones to areas burned by fire, and what they've discovered is surprising. Instead of creating a tinderbox ready to burn, the beetle-killed swaths appear to have little effect on fire. In fact, in some instances, they may even reduce the risk of severe fires.
[ Phil Townsend: ] "Once those needles come off the tree, that fuel source isn't so much there. So actually the beetle damaged forest may be less susceptible to burning than a green forest, where you still have material, and during a drought this material may be very dry and be able to carry the fire from the surface up to the canopy."
The Landsat data, double-checked with on-the-ground observations, show us that things aren't always as they appear at first glance.
[ Phil Townsend: ] "I think it's important for people not to assume that there are relationships between certain types of features out on the landscape. It's often much more complicated than we think. 'Oh, that forest has been damaged by beetles, it's more likely to burn,' and that's why it's important to ask questions and not just take everything as gospel truth and to go out and actually do the research and see if what we think in our mind is actually what's happening on the ground."
While one mystery seems to be solved, another remains. Why are both mountain pine beetle numbers and fire risk on the rise? The answer may well be our changing climate. Cold winter nights kill beetle larvae. In the last decade, temperatures haven't dipped as low. More beetles are surviving to damage more forest. And fires take hold and spread faster in a warmer, drier climate.
[ Phil Townsend: ] "The beetles and the fire might not directly be related to each other, but they might be each related to the change in the climate, and that's important to find out."
[ music ] [ sound effect ]
Organisms that live in the water column or in a suitable atmosphere and drift or float in that environment, being incapable of swimming against the current or wind. A world that has life and bodies of water of any kind will have plankton. On garden worlds, larger or more actively moving organisms in the environment will depend on plankton for food; plankton are the basis for the larger aquatic (or in rarer cases aerial) biome. Analogues to biological plankton are found in some nanecologies and mechosystems. Types of plankton include:
Phytoplankton: Primary producers, typically photosynthetic or occasionally chemosynthetic; typically algae or organisms bearing algae as symbionts.
Zooplankton: Animals or the equivalent that feed on phytoplankton. The distinction between zooplankton and phytoplankton is not exact, since it is common for organisms to mix the two strategies.
Skyplankton/Airplankton/Aeroplankton: Plankton-like organisms found in the dense atmosphere and cloud-tops of some eogaian, eocytherian, and To'ul'hian worlds, or the atmosphere of a gas giant (where they may be called joviplankton), in some gaian worlds where flotation mechanisms have evolved, or in artificial microgravity environments such as a freesphere or the middle regions of some rotating habs. Such organisms may be given a distinctive term such as phytoaeroplankton or zooaeroplankton.
Mechoplankton: Nanotech-based organisms that serve a plankton-like role in an artificial mechosystem, or escaped or evolved wild organisms of this kind in botworlds (see nanecology, bionanecology, hylonanecology).
- Zooplankton - Text by M. Alan Kazlev
Animals that float passively in the water as part of the plankton. Zooplankton feed on other plankton (phytoplankton, bacterioplankton or other zooplankton) and are in turn food for larger aquatic organisms. An important part of the aquatic ecology of any terragen and terragen-type ecosystem. | <urn:uuid:3127216b-e06b-429e-aea7-ff2b0af57446> | 3.28125 | 473 | Knowledge Article | Science & Tech. | 20.11069 |
The atomic nucleus shown in the top half of this picture is carbon-14. The 14C nucleus has 6 protons plus 8 neutrons, giving it an atomic mass of 14.
Original artwork by Windows to the Universe staff (Randy Russell).
Carbon-14 is an isotope of the element carbon. All carbon atoms have 6 protons in their nucleus. Most carbon atoms also have 6 neutrons, giving them an atomic mass of 12 ( = 6 protons + 6 neutrons). Carbon-14 atoms have two extra neutrons, giving them a total of 8 neutrons. Carbon-14 has an atomic mass of 14 ( = 6 protons + 8 neutrons). The extra neutrons make the nucleus of carbon-14 unstable. Carbon-14 is radioactive!
Radioactive carbon-14 (also written as 14C) has a half-life of 5,730 years. 14C is used to determine the ages of artifacts that were once living (such as pieces of wood, teeth or bones, coral skeletons, etc.) via a technique called "carbon-14 dating" or "radiocarbon dating".
Some of the carbon dioxide gas in Earth's atmosphere contains 14C atoms. The supply of CO2 molecules which contain carbon-14 is continuously replenished in our atmosphere. Cosmic rays from space sporadically strike nitrogen atoms, converting some common nitrogen-14 atoms into radioactive carbon-14 atoms.
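The arithmetic behind radiocarbon dating can be sketched in a few lines of Python. This is an illustrative calculation only; real radiocarbon dating also calibrates for historical variations in atmospheric 14C, which this sketch ignores:

```python
import math

HALF_LIFE_C14 = 5730.0  # half-life of carbon-14, in years

def radiocarbon_age(fraction_remaining):
    """Estimate a sample's age from the fraction of its original
    carbon-14 still present: N/N0 = (1/2)**(t / half_life)."""
    return -HALF_LIFE_C14 * math.log2(fraction_remaining)

# Half of the original 14C left means the sample is one half-life old:
print(radiocarbon_age(0.5))   # 5730.0
# A quarter left means two half-lives:
print(radiocarbon_age(0.25))  # 11460.0
```

Solving the decay law for t gives t = -5730 × log2(N/N0), which is all the function does.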
You might also be interested in:
Carbon-14 dating (also called "radiocarbon dating") is used to determine the age of materials that contain carbon that was originally in living things. It is often used in archeology and some...more
An element (also called a "chemical element") is a substance made up entirely of atoms having the same atomic number; that is, all of the atoms have the same number of protons. Hydrogen, helium, oxygen,...more
Some materials are radioactive. Their atoms give off radiation. When an atom gives off radiation, it turns into a different kind of atom. That is called radioactive decay. Some atoms decay very quickly,...more
Some materials are radioactive. They emit radiation. When an atom of a radioactive substance gives off radiation, it becomes a new type of atom. This process is called radioactive decay. There are two...more
Atoms and the tiny particles from which they are made strongly influence the world around us. The fields of atomic physics and particle physics help us understand the life cycles of stars, the forms of...more
One main type of radiation, particle radiation, is the result of subatomic particles hurtling at tremendous speeds. Protons, cosmic rays, and alpha and beta particles are some of the most common types...more
One way scientists measure the size of something is by its mass. Scientists can even measure very, very tiny things like atoms. One measure of the size of an atom is its "atomic mass". Almost all of the...more | <urn:uuid:2c8c9e85-9bc3-4282-8963-f3bed12c06a5> | 3.921875 | 664 | Knowledge Article | Science & Tech. | 55.119476 |
I understand what the enzyme ATP synthase does, but I'm not exactly sure how it does it. I've heard that it uses rotary catalysis, but how exactly does this work? How is the energy from the H+ ion harnessed, turned into mechanical energy, and then finally into the chemical energy stored in ATP? Why does the spinning of parts of the enzyme need to occur for this to happen?
Image from wikipedia page on ATP synthase
In brief, the addition and release of protons to the structure cause a conformational change that leads to another conformational change. This series of conformational changes occurs in such a way that it induces a rotational motion.
The rotation of the central axle that extends through both hemispheres of this large complex is driven by the proton gradient. Specific residues in the c12 ring are protonated in order to drive this rotation. A high concentration of protons (in this picture the high concentration is found above the complex) allows each of the subunits in the membrane to be constantly loaded with H+ ammunition, ready to fire when its turn comes.
The function of this axle is to slightly but significantly change the shape of one subunit at a time in the red part.
Changing the conformation of subunits in the red part is what allows for the three steps in ATP synthesis to occur: binding of ADP and Pi, catalysis of the formation of ATP, and release of ATP.
All of these changes occur at the molecular level. The proton forms a bond with a single amino acid of the entire protein complex, but this is sufficient to change the most stable 3-dimensional structure of the whole. This can either trigger a cascade of changes that results in a net rotation, or cause a re-stabilization of the structure due to electrostatic forces.
According to Vinit K. Rastogi & Mark E. Girvin in their 1999 Nature paper on the subject, it's more like the former:
The whole paper's a good read. But this is as current as I am on the topic, and it's likely that a more detailed mechanism has been determined for the action of ATP synthase since 1999. | <urn:uuid:2b4b148b-c511-4ee2-bda2-f504541b4c3f> | 2.890625 | 462 | Q&A Forum | Science & Tech. | 49.013825 |
You can’t go for a month without seeing a claim that some new discovery has rewritten evolutionary history. If headlines are to be believed, phylogeny – the business of drawing family trees between different species – is an etch-a-sketch science. No sooner are family trees drawn than they’re rearranged. It’s easy to rail against these seemingly sensationalist claims, but James Tarver from the University of Bristol has found that the reality is more complex.
Tarver focused on two popular groups of animals – dinosaurs and catarrhines, a group of primates that includes humans, apes and all monkeys from Asia and Africa. Together with Phil Donoghue and Mike Benton, Tarver looked at how the evolutionary trees for these two groups have changed over the last 200 years. They found that the catarrhine tree is far more stable than that of the dinosaurs. For the latter group, claims about new fossils that rewrite evolutionary history (while still arguably hyperbolic) have the ring of truth about them. | <urn:uuid:25513afd-86b7-4cfd-a477-0492e8530ea4> | 3 | 215 | Personal Blog | Science & Tech. | 37.978569 |
When the drugs we use to heal us create unintended monsters.
The deep sea earthquake and resulting tsunami that killed hundreds of thousands of people in 2004 was a natural disaster of epic proportions. There's no way it could have been avoided; the tectonic plates that caused the quake had simply reached their breaking point. However, people living in India and Sri Lanka could have been alerted before the tsunami hit their communities. Sadly, there were no adequate warning systems in place. Learn more on this Moment of Science.
Hundreds of millions of years ago the continents were joined together in a super-continent that scientists call Pangaea. It means “all lands” in Greek. It stretched all the way from the North to the South Pole, but that’s not the way the earth looks now, obviously. What happened? Learn more on this Moment of Science.
It may sound like science fiction, but a certain kind of plant has been engineered to grow its own pesticide. Find out which plant on this Moment of Science. | <urn:uuid:88ddea0f-982e-44c7-860f-ed5032c5cdb5> | 2.90625 | 214 | Content Listing | Science & Tech. | 62.828689 |
Science Fair Project Encyclopedia
|Formula weight||61.8 amu|
|Melting point||Decomposes at 442 K (169 °C)|
|Density||1.4 ×103 kg/m3|
|Solubility||5.7 g in 100g water|
|S0gas, 1 bar||295.23 J/mol·K|
|Ingestion||Toxic. Vomiting and diarrhea in small doses, larger doses may be fatal.|
|Inhalation||May cause irritation.|
|Skin||May cause irritation.|
|Eyes||May cause irritation.|
|More info||Hazardous Chemical Database|
Boric acid, also called boracic acid, is a chemical compound, a mild acid often used as an antiseptic, insecticide, flame retardant, and a component of other chemical compounds. It exists in the form of colorless crystals or a white powder and dissolves in water. It has the chemical formula H3BO3 and is known by the chemical name hydrogen orthoborate.
For details of its chemistry, see http://www.encyclopedia.com/html/b1/boricaci.asp.
It can be used as an antiseptic only for minor burns or cuts and is sometimes used in dressings or salves or is applied in a very dilute solution as an eye wash. It is poisonous if taken internally or inhaled, although it is generally not considered to be much more toxic than table salt (based on its oral LD50 rating of 2660 mg/kg in rats).
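As a rough sanity check on that comparison with table salt, one can scale the rat LD50 values to a human body mass. This is a crude sketch, not toxicology advice: the table-salt figure of about 3,000 mg/kg is an assumed, commonly cited oral rat value, and LD50 numbers do not translate directly to humans.

```python
def approx_lethal_dose_grams(ld50_mg_per_kg, body_mass_kg):
    """Crudely scale an oral rat LD50 to a given body mass, in grams."""
    return ld50_mg_per_kg * body_mass_kg / 1000.0

# Boric acid (LD50 2660 mg/kg, from the article) vs. table salt
# (~3000 mg/kg, an assumed figure), scaled to a 70 kg adult:
print(approx_lethal_dose_grams(2660, 70))  # 186.2
print(approx_lethal_dose_grams(3000, 70))  # 210.0
```

The two estimated doses come out within about 10% of each other, which is what the "not much more toxic than table salt" claim amounts to.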
It is often used as a relatively nontoxic insecticide, for killing cockroaches, termites, fire ants, fleas, and many other insects. It can be used directly in powdered form for fleas and cockroaches, or mixed with sugar for ants. It is also a component of many commercial insecticides.
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License. Click here to see the transparent copy and copyright details
Science Fair Project Encyclopedia
- This article is about flavor, the sensory impression. There is another article on Flavor (particle physics) for the particle property.
Flavor (or flavour) is the sensory impression of a food or other substance. It is determined by the three chemical senses of taste, olfaction (smell), and the so-called trigeminal senses, which detect chemical irritants in the mouth and throat.
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License. Click here to see the transparent copy and copyright details
Common Lisp the Language, 2nd Edition
Throughout this chapter the notation S_j is used to denote the jth element of the series S. As in a list or vector, the first element of a series has the subscript zero.
The # macro character syntax #Zlist denotes a series that contains the elements of list. This syntax is also used when series are printed.
(choose-if #'symbolp #Z(a 2 b)) => #Z(a b)
Series are self-evaluating objects and the series data type is disjoint from all other types.
The type specifier (series element-type) denotes the set of series whose elements are all members of the type element-type.
series arg &rest args
The function series returns an unbounded series that endlessly repeats the values of the arguments. The second example below shows the preferred method for constructing a bounded series.
(series 'b 'c) => #Z(b c b c b c ...) (scan (list 'a 'b 'c)) => #Z(a b c) | <urn:uuid:968c771b-828e-4b30-b3c7-fe5053bdae9c> | 3.765625 | 227 | Documentation | Software Dev. | 69.401364 |
Copyright © 2007 Dorling Kindersley
No one has ever seen a living dinosaur, so scientists rely on fossil remains to provide clues to how these ancient reptiles looked and behaved when they were alive. Scrappy evidence in the past meant that early dinosaur experts held certain beliefs about dinosaurs that we now know to be incorrect. New discoveries are being made all the time, and each one expands what we know about dinosaurs – sometimes confirming, and sometimes overturning, the accepted thinking about a particular species.
The first Hypsilophodon fossils were discovered in 1849, on the Isle of Wight, England. At the time, it was believed this small, agile, plant-eating dinosaur had lived in trees, where it used its long tail for balancing on branches, and its sharp claws for clinging on. This theory has now been proved completely wrong. Today, scientists believe Hypsilophodon was actually a ground-living dinosaur, which held its stiff tail off the ground, using it as a stabilizer as it moved. It probably used its clawed hands to pull at the plants it ate.
One of the most significant leaps in how we think dinosaurs looked has happened since the mid-1990s. Before then it was thought that, because dinosaurs were reptiles, they all had scaly skin. Many scientists no longer believe this to be the case for all dinosaurs, based on fossils found in China. The evidence suggests that some small predators, such as Velociraptor had bodies clothed in feathers and down. These coverings are usually associated with birds, so the finds provide evidence supporting the theory that dinosaurs and birds are related.
Ideas about Corythosaurus’s lifestyle have altered over recent years. These changing views are based on theories about the function of its head crest. As it was hollow, the crest was once thought to be an underwater breathing tube used like a snorkel. This led scientists to believe Corythosaurus lived in water. It is now thought that Corythosaurus was a land animal, and that its crest was either for display, or a sound chamber through which it made noises.
To order this book direct from the publisher, visit DK's website. | <urn:uuid:bcf39de0-5e95-439a-8a0a-daf6255a6e70> | 4.28125 | 450 | Knowledge Article | Science & Tech. | 45.792775 |
50-Year Study Shows Coral 'Clocks' Unreliable
by Brian Thomas, M.S. *
Some biologists like to say that massive coral reefs represent more than 100,000 years of growth, supposedly nullifying the Bible's account of a world that is only thousands of years old. However, many known factors can affect coral reef growth rates. Now, a 50-year study of Caribbean coral reefs confirms the unpredictability of using such growth as a "clock."
Researchers in the past have assumed that by measuring the rate of growth of a coral reef, as well as the total size of the reef, they can estimate how long it took corals to build it. One big problem with this "natural clock" system is that the growth rate of corals is inconsistent and relies on a host of changing variables.
Coral reef growth rates change with available nutrition, physical weathering, water temperature, light penetration (and therefore sea floor depth or sea level changes), and other factors. Soft corals have soft bodies that do not deposit limestone "homes," but hard corals can leave behind rocky records if subsequent generations continue to add material. So, since hard corals grow very fast in some conditions and very slow in others, there is no reliable rate of growth to apply when estimating the age of hard coral reefs.
One 1972 "largely hypothetical" estimate of coral reef growth rate, based on adding up guesses for those factors affecting reef growth, was 1,000 grams (over two pounds) per square meter (more than 10 square feet) each year. But the authors admitted that "more rapid rates of sea level rise several thousand years ago probably were accompanied by greater net (and gross) production."1
Gene Shinn, now a researcher with the United States Geological Survey, began measuring coral reef growth rates in the Florida Keys back in 1960. He inserted stainless steel rods into live hard corals and took pictures throughout the Caribbean over the years. By comparing photographs taken from then until 2010, Shinn tracked coral reef measurements for 50 years.
He found that starting in the late 1970s, disease diminished the corals and that "unfortunately, coral reef growth and structure continues to deteriorate today."2 Thus, disease is yet another important factor that can alter coral reef growth rates.
Applying Shinn's measured growth rate of zero during the period from around 1980 to 2010, coral reefs would take infinite time to grow—which is to say they should not exist. On the other hand, corals can grow extraordinarily fast in the absence of disease and with slightly warmer water and a gradually subsiding ocean floor to keep the coral near to light.
Drs. John Whitcomb and Henry Morris noted this in 1961, citing a study that found 20 centimeters of coral reef growth in five years. They wrote, "This rate of growth could certainly account for most of the coral reef depths around the world even during the few thousand years since the Deluge."3
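The "natural clock" arithmetic discussed above is simple enough to sketch in Python. The 100 m reef thickness below is an illustrative figure, not one from the article; the point of the sketch is that the answer depends entirely on the assumed growth rate:

```python
def years_to_grow(reef_thickness_m, growth_rate_m_per_yr):
    """Naive reef 'clock': thickness divided by an assumed constant rate."""
    if growth_rate_m_per_yr <= 0:
        return float('inf')  # a non-growing reef never finishes
    return reef_thickness_m / growth_rate_m_per_yr

fast_rate = 0.20 / 5  # 20 cm in five years, the rate cited above (m/yr)
print(years_to_grow(100, fast_rate))  # about 2500 years for 100 m of reef
print(years_to_grow(100, 0.0))        # inf: zero growth, as during disease
```

Swapping in a different rate changes the "age" by orders of magnitude, which is the unreliability the article describes.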
Like any process used as a natural clock, one must assume a constant rate for that process through history. But when it comes to using coral reefs as such a clock, their growth rates have proved to be less than reliable and therefore do not challenge the Genesis record of a young world.
- Chave, K. E., S. V. Smith and K. J. Roy. 1972. Carbonate production by coral reefs. Marine Geology. 12 (2): 123-140.
- Corals: A 50-Year Photographic Record of Changes. U. S. Geological Survey online video. Posted on usgs.gov, accessed January 17, 2011.
- Morris, H. M. and J. C. Whitcomb. 1961. The Genesis Flood. Phillipsburgh, NJ: Presbyterian and Reformed Publishing, 408.
Image credit: NOAA
* Mr. Thomas is Science Writer at the Institute for Creation Research.
Article posted on January 28, 2011. | <urn:uuid:071318c8-b42f-47bc-aa06-4c0676b0fee9> | 3.4375 | 807 | Truncated | Science & Tech. | 62.032881 |
Question: Thousands of animals are killed on our roads every year, so road kills must exert a selective pressure on animal populations. Is there any evidence that animals are developing road sense?
Answer: Despite the increase in traffic, over the past few years I have seen far fewer dead hedgehogs on the roads. I've also observed that hedgehogs in our garden are less inclined to roll up when disturbed; they are much more likely to get up on their toes and run away. So it's my belief that increased traffic and road kills have selected for more extrovert, longer-legged and faster-running hedgehogs.
Answer: In areas with many roads, the hedgehog has lost the tendency to roll into a ball at the first sign of danger and runs away instead.
Answer: Thirty years ago, whenever I met a hedgehog it would curl up ...
SOMETIMES you just have to say it like it is: new wording for tornado warnings may have saved lives.
More than 100 tornadoes struck the US Midwest last weekend - the most severe outbreak so far this year. But the death toll of six people was very low compared with similar outbreaks in recent years.
That seems in part to be down to research about how people respond to warnings. After the US's deadly tornadoes in 2011, the National Weather Service (NWS) found that residents waited for visible signs of the threat before responding. New warnings try to conjure those images in words.
Tornado warnings in the past said things like: "You should activate your tornado action plan and take protective action now." But this time residents in Kansas were told: "Mass devastation is highly likely, making the area unrecognisable to survivors."
Won't these words also dull with time? Mike Hudson of NWS acknowledges it's a possibility - one that will be evaluated in further storms this year.
Have your say
Fri Apr 20 16:35:57 BST 2012 by Eric Kvaalen
"Mass devastation is highly likely, making the area unrecognisable to survivors."
Highly likely, to me, means more than a 50% chance. In how many places where they said that did it happen?
Tue Apr 24 10:23:46 BST 2012 by polistra
The traditional tornado alley (Kan, Okla, Tex) has been well served for 30 years by expert forecasters and a highly organized system of media warnings. New words from NWS won't matter there.
Might matter in areas that don't get twisters often, but the people in those areas are oblivious and unprepared. New words from NWS won't matter there either.
SYSTEMATIC PREPARATION is the key.
Search our database of handpicked sites
Looking for a great physics site? We've tracked down the very best and checked them for accuracy. Just fill out the fields below and we'll do the rest.
You searched for
We found 6 results on physics.org and 17 results in our database of sites (15 are Websites, 0 are Videos, and 2 are Experiments)
Search results on physics.org
Search results from our links database
An article talking about the role microscopic particles in the air play in helping clouds form.
An artist controls the temperature and humidity in a room to create the right conditions to make clouds inside.
Ever wondered how much a cloud weighs and how it stays in the air? Read this article to find out more.
Description of the background to a cloud chamber particle detector.
A good explanation of early particle detectors: cloud and bubble chambers. Aimed at A level students.
An introduction to cloud computing and how it is predicted to change IT from PC mag.
Find out why clouds appear white.
The How Stuff Works guide to cloud computing and how it is expected to change the way we store data.
News and views on cloud computing from the Guardian website.
Article on the science of thunder and lightning. But how do clouds come by all this energy, and couldn't we put it to good use?
Showing 1 - 10 of 17 | <urn:uuid:3a25e435-4fec-472f-97d5-dfbfb797e411> | 2.9375 | 289 | Content Listing | Science & Tech. | 68.516667 |
Variable scope in an SQL procedure determines where a variable declared within that procedure can be referenced.
Understand with Example
This tutorial illustrates variable scope in an SQL procedure with a worked example.
For the sake of your understanding:
Local variables - These variables are declared inside an inner block and cannot be referenced by outer blocks.
Global variables - These variables are declared in an outer block and can be referenced by that block itself and by its inner blocks.
To understand variable scope in SQL, we create a procedure abc that includes a local variable. The local variable 'x', whose data type is CHAR and whose default value is 'inner', is declared in an inner block and cannot be accessed by the outer block. The outer block declares its own variable 'x', also of type CHAR, with the default value 'outer'; this variable can be referenced by the outer block itself and by its inner blocks.
DELIMITER $$
CREATE PROCEDURE abc()
BEGIN
  DECLARE x CHAR(5) DEFAULT 'outer';
  BEGIN
    DECLARE x CHAR(5) DEFAULT 'inner';
    SELECT x;
  END;
  SELECT x;
END$$
DELIMITER ;
To invoke the procedure 'abc', we use the syntax given below:

CALL abc();
+-------+ | x | +-------+ | inner | +-------+ 1 row in set (0.00 sec)
+-------+ | x | +-------+ | outer | +-------+ 1 row in set (0.03 sec)
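The same shadowing behaviour can be sketched in Python for comparison. This is an analogy only: Python scopes variables by function rather than by BEGIN...END block, but the inner/outer result mirrors the two SELECTs above.

```python
def abc():
    """Mimic the stored procedure: an inner scope declares its own x,
    shadowing the x declared in the outer scope."""
    x = 'outer'
    results = []

    def inner_block():
        x = 'inner'          # local to inner_block; shadows the outer x
        results.append(x)

    inner_block()
    results.append(x)        # the outer x is untouched by the inner one
    return results

print(abc())  # ['inner', 'outer']
```

As in the SQL version, the inner 'x' is visible only inside its own scope, and the outer 'x' keeps its value once that scope ends.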
Chemistry in its element: compounds
MP3 Download (3452k)
Introducing Chemistry in its element series two: distilling the compounds that count. Each week a leading scientist or author tells the story behind a different compound.
Chemistry in its element - Sulfur mustard
Distilling the compounds that count, you're listening to Chemistry in its element brought to you by Chemistry World magazine
This week, prepare for chemical warfare - with Brian Clegg:
My grandfather served in the first world war, lying about his age in his eagerness to sign up and fight for his country. In later years, he told many tales of warfare in Belgium. But nothing held the same horror for him as the gas attacks. Though he never experienced them first hand, they remained the ultimate bogeymen of that terrible conflict.
The sulfur mustards were the final, most terrible agents of chemical warfare deployed in the first world war, named for their distinctive odour, reminiscent of wild mustard or garlic. This odour is down to impurities, though - the pure compounds are colourless and odourless. The simplest and best known member of the family consists of a sulfur atom bracketed by a pair of hydrocarbons, each with two carbon atoms and each having the final hydrogen replaced by chlorine.
The sulfur mustards were created with one aim in mind. To disable human beings. Although actually volatile liquids, the sulfur mustards were an integral part of the development of gas warfare, and were widely known as mustard gas when deployed in 1917. The compounds had been discovered in the nineteenth century, but it was only the demands for more effective chemical weapons that saw them go into production during the first world war.
Chemical weapons had been proposed long before - Leonardo da Vinci mentions the possibility of using deadly gas as a weapon - but they had not been deployed, and were supposedly banned by the Hague Convention of 1899, attempting to place gentlemanly restrictions on the ways human beings killed each other. But soon after 1914 it became clear that the first world war would abandon gentlemanly behaviour.
It started with the element chlorine, first deployed at Ypres on the 22nd of April 1915. Released from a chain of six thousand cylinders, the gas rolled forward from the German front line into the opposing French trenches, carried by the prevailing wind. The Allies retaliated within months. But this was only the start of the chemical arms race.
Chlorine was initially followed up with phosgene, a killer gas that had none of the unpleasant immediate reactions that warn the victim of the onset of chlorine. With phosgene there was just the scent of new-mown hay and a cumulative effect that left the victims with lungs unable to process oxygen. But phosgene was delicate in approach when compared with mustard gas.
Unlike their predecessors, the sulfur mustard compounds, first used in battle by the Germans in 1917, would not disperse in hours. Because they are liquids, delivered as an aerosol of fine droplets, they can render a region uninhabitable for weeks. Even if the victims are wearing gas masks they can still suffer terribly from a mustard gas attack, as it causes intense blistering to the skin, even through clothing. Most mustard gas victims were not killed by it, but were incapacitated by that blistering.
Another reason mustard gas proved such a dangerous weapon is that its effects are not felt for hours after exposure, so that it is easy to suffer a crippling or deadly dose without being aware of it. Once their effectiveness was proved, German production of sulfur mustards went into overtime. At the peak, a million mustard gas shells were fired in 10 days. Being a liquid, the sulfur mustards can also be deployed by crop spraying aircraft. Mustard gas remains one of the few chemical agents to have been deployed in modern times, when the Iraqis used it on Kurdish separatists in the 1980s.
As weapons, gases and aerosols like the sulfur mustards are of mixed benefit. They are difficult to control - a sudden gust of wind can blow them back into the faces of the forces that are deploying them. But there is no doubting their psychological effect. It has often been argued by supporters of chemical weapons that they are no different from any other technology designed to incapacitate, and can often be used to force an enemy out of the way without causing significant casualties. One German soldier commented after the first chemical offensive that they had been able to walk with their weapons tucked under their arms, just as if they were strolling along in a game hunt. But many officers in the first world war were disgusted by their use.
A German officer, reflecting on his army's use of gas, said: 'Poisoning the enemy just as one poisons rats struck me, as it must any straightforward soldier, as repulsive.' Gunfire is, at least, directed. Chemical weapons seem to have a mind of their own, drifting and rolling, potentially killing thousands. It is not surprising that they caused such outrage. Clara Haber, the wife of Fritz Haber, the lead German scientist in the battle to develop chemical agents, shot herself in May 1915. Her suicide seems likely to have been triggered by her disgust at her husband's work.
Not everyone was worried though. Shortly after the first world war, with memories of gas attacks still vivid in the minds of troops, Winston Churchill commented 'I do not understand this squeamishness about the use of gas.' For Churchill, at the time, chemical weapons were real assets that, as he put it, 'spread a lively terror'.
Unlike other chemical weapons, the sulfur mustards were not disinfectants or insecticides that were turned against their creator. Mustard gas has no function other than to maim and kill. These remain chemical compounds without a single redeeming feature.
A compound created purely to cause harm, and disliked as a result. That was Brian Clegg with the dangerous, deadly chemistry of sulfur mustard. Now next week, a compound we all like to take in, in ice-cream, perfume, cakes and more.
In the past, a lot of vanillin came from the waste from paper mills. Recently, a Japanese scientist, Mayu Yamamoto, found a novel way of making vanillin. She extracted lignin from cow dung and converted that to vanillin. This discovery won her the 2007 Ig Nobel prize for chemistry, the send-up of the real Nobel prize.
Natural vanilla extract costs up to 200 times that of a man-made substitute, so there is a lot of fake vanilla extract on the market.
And you can find out the chemistry causing vanillin to be in such high demand - so high that chemists have resorted to extracting it from cow dung, by joining Simon Cotton in next week's Chemistry in its element. Until then, thank you for listening. I'm Meera Senthilingam
Chemistry in its element comes to you from Chemistry World, the magazine of the Royal Society of Chemistry and is produced by thenakedscientists dot com. There are more compounds that count on our website at chemistryworld dot org slash compounds.
Introduction to Laser Principles
word "laser" is an acronym for Light Amplification by Stimulated Emission
of Radiation. The easiest laser model to understand is the two level system.
In a two level system, the particles have only two available energy levels,
separated by some energy difference which is typically referred to in terms
of the photon energy, hv0. These two levels are generally referred
to as the upper and lower laser states. When a particle in the upper state
interacts with a photon matching the energy separation of the levels, the
particle may decay, emitting another photon with the same phase and
frequency as the incident photon. Thus we have gotten two photons for
the price of one. This process is known as stimulated emission.
A fundamental concept in lasers is the idea of a "population inversion".
A normal thermal population in any material will have most of the particles
in the ground state. However, we would prefer to have most of the particles
in the excited state so we can get free photons through stimulated emission.
Thus in a laser we strive to create a "population inversion" where most
or all of the particles are in the excited state. This is achieved by adding
energy to the laser medium (usually from an electrical discharge or an
optical source such as another laser or a flashlamp); this process is called "pumping".
Another fundamental concept in lasers is the idea of gain, which is basically
a short way of referring to the "free" photons described earlier. Suppose
we have just pumped our laser medium so that all of the particles are in
their excited state. One of those particles now spontaneously decays back
down to its ground state, emitting a photon (hv0). This photon
is of the right frequency to stimulate emission from another excited state
particle, which emits another photon which can stimulate another excited
state particle, and so on. (see the figure below).
In addition to stimulated emission processes there are also stimulated
absorption processes in which a ground state particle absorbs a photon
matching the energy gap and jumps to the excited state. (represented by
the gray arrow in the above figure). Thus we lose one photon to each stimulated
absorption process. Since the probabilities for stimulated absorption and
emission processes are equal (relative to population of the ground and
excited states -- Einstein's famous result), it is clearly detrimental
to the laser to have any particles in the ground state. For this reason,
two level lasers are not practical -- it is not in general possible to
pump more than half of the molecules into the excited state.
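The closing claim can be made quantitative with the steady-state rate equation for a two-level system. The sketch below is my own illustration (the rates are arbitrary): with equal stimulated absorption and emission rates (Einstein's result), the excited-state fraction approaches but never exceeds one half, no matter how hard the medium is pumped.

```python
# Two-level system: dN2/dt = W*(N1 - N2) - A*N2, with N1 + N2 = N.
# W is the stimulated (pump) rate, A the spontaneous decay rate.
# Setting dN2/dt = 0 gives N2/N = W / (A + 2W), which is always < 1/2.

def steady_state_excited_fraction(pump_rate, spont_rate):
    """Steady-state fraction N2/N of particles in the excited state."""
    return pump_rate / (spont_rate + 2.0 * pump_rate)

for W in (0.1, 1.0, 10.0, 1000.0):
    frac = steady_state_excited_fraction(W, spont_rate=1.0)
    print(f"W/A = {W:7.1f}  ->  N2/N = {frac:.4f}")
```

Even at a pump rate a thousand times the decay rate, N2/N is only just below 0.5, so a two-level medium never reaches inversion.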
These pictures show the Sun in ultraviolet "light". The left picture was at "solar min". The right picture was at "solar max". This UV wavelength (28.4 nanometers) helps scientists see certain parts of the Sun's atmosphere. The UV "light" is given off by iron at a temperature of two million kelvins.
Images courtesy of SOHO (NASA & ESA).
In the next few months, I will be writing about climate and climate change. My purpose is to discuss the various ways we influence climate both locally and globally. The plan is to start locally, with the effect of cities on climate. I will follow with a discussion of our effects on regional climate, and then close with a look at the global climate. This sequence will probably be interrupted by other topics if they come up.
1. Local climate
Local climate is probably the easiest for us to understand because we can see it in our daily lives. It has been known for decades that cities affect climate. Before air conditioning, the cities would get so hot in the summertime that people who had the money would leave the cities during the weekends for cooler weather in the countryside.
The figures show two examples. In Figure 1, an infrared image of Salt Lake City, notice how the urban area (yellow and red) is warmer than the suburbs (mixed greens and blues), which are warmer than the surrounding mountains. I'm not sure why some of the mountains (upper right) look warm – perhaps there are rocks there. Figure 2 is a satellite image from MODIS (MODerate Resolution Imaging Spectroradiometer) showing the surface temperatures in Beijing in the afternoon. The MODIS pixels are much larger, so the picture looks blurrier, but the message is the same: Beijing is also warmer than the surrounding area.
Researchers using a decade of satellite data on fires and a suite of climate models have produced the first thorough global estimate of changes in the frequency of fires in the world’s forests under greenhouse-driven global warming. There’s ample uncertainty but the study, published today in the peer-reviewed online journal Ecosphere, points to a variety of outcomes, with fires likely becoming more frequent in zones you might expect — like temperate North America and particularly the western United States — but rarer in the tropics.
Here’s a link to the study, “Climate change and disruptions to global fire activity,” and click here for maps showing the projected shift in fire patterns and where the 16 models agree and disagree. Read on for the paper abstract and some input from the authors, who stress this is a first take at an important question:
[1:05 p.m. | Updated A news release from the University of California, Berkeley, has more on the study, including additional context from the authors.] Here’s the abstract:
Future disruptions to fire activity will threaten ecosystems and human well-being throughout the world, yet there are few fire projections at global scales and almost none from a broad range of global climate models (GCMs). Here we integrate global fire datasets and environmental covariates to build spatial statistical models of fire probability at a 0.5° resolution and examine environmental controls on fire activity. Fire models are driven by climate norms from 16 GCMs (A2 emissions scenario) to assess the magnitude and direction of change over two time periods, 2010–2039 and 2070–2099. From the ensemble results, we identify areas of consensus for increases or decreases in fire activity, as well as areas where GCMs disagree. Although certain biomes are sensitive to constraints on biomass productivity and others to atmospheric conditions promoting combustion, substantial and rapid shifts are projected for future fire activity across vast portions of the globe. In the near term, the most consistent increases in fire activity occur in biomes with already somewhat warm climates; decreases are less pronounced and concentrated primarily in a few tropical and subtropical biomes. However, models do not agree on the direction of near-term changes across more than 50 percent of terrestrial lands, highlighting major uncertainties in the next few decades.
By the end of the century, the magnitude and the agreement in direction of change are projected to increase substantially. Most far-term model agreement on increasing fire probabilities (62 percent) occurs at mid- to high-latitudes, while agreement on decreasing probabilities (20 percent) is mainly in the tropics. Although our global models demonstrate that long-term environmental norms are very successful at capturing chronic fire probability patterns, future work is necessary to assess how much more explanatory power would be added through interannual variation in climate variables. This study provides a first examination of global disruptions to fire activity using an empirically based statistical framework and a multi-model ensemble of GCM projections, an important step toward assessing fire-related vulnerabilities to humans and the ecosystems upon which they depend.
I asked them:
1) It’d be great to interlace this with a global breakdown of the social and ecological impacts (for better and worse), as well. Is that in the gameplan? In other words, a lot more people in poverty and dependent on forests live in the tropics, so presumably exposure to fire-related impacts there is a bigger deal?
2) I sifted the paper for the terms insect and pest and didn’t see them. Is fire risk calculated strictly on climate shifts or considering other secondary drivers (which would in the Amazon also include road building of course). Presumably this is a strictly climatological analysis?
3) I couldn’t find (on quick read) if the paper takes a stab at the overall carbon accounting of estimated shifts in fire patterns.
This paper is really the first look at how climate might interact with wildfire frequency at the global scale. We used new satellite records of fire incidence to create fire models which we then drove with a broad range of future climate model scenarios to get a sense of where the climate projections agreed on the sign of the change in fire frequency and where they did not. This work is the precursor to everything else you mention below: the social impacts, relationships to local factors like pests, carbon accounting, etc. So while I wouldn’t feel comfortable commenting on these in any terms other than very general ones, that is certainly where the work needs to go next.
Basically, global fire research is at quite an early stage… There really are very few attempts to do this, due to the complexity of the problem and the number of global climate projections to incorporate. We are starting with fire frequencies (as driven by climatic control on plant productivity and flammability patterns), with a goal of linking to intensities, sizes, and other fire regime parameters in future… and with that would be more explicit links to human dynamics, pests/pathogens, carbon, and all of the other things we care about. Fire affects almost all of them however, at some scale!
A proof is a mathematical argument used to verify the truth of a statement. This usually takes the form of an orderly series of statements based upon axioms. When a statement has been proven true, it is considered to be a theorem.
Proofs generally use an implication as the statement to prove. The goal of a proof is to show that for all values of a given number, object, etc., that if a given condition is met, the conclusion will be true. For example, the implication, "for all natural numbers n, if n is a prime greater than 2, then n is odd" gives the domain of the implication (n is a natural number), a condition or hypothesis (n is a prime greater than 2) and the conclusion (n is odd).
Methods of proof
There are many ways to go about proving any one statement. However, some statements are more conducive to a particular method.
A direct proof of an implication proceeds in an orderly fashion from the hypothesis, using logical arguments to get directly from the hypothesis to the conclusion.
Proof by contradiction
Proof by contradiction assumes a true hypothesis and false conclusion and shows how this presents a contradiction.
An indirect proof follows the same method as the direct proof, but it uses the contrapositive of the implication (if the conclusion is false, then the hypothesis is false).
- Main article: Mathematical induction
Mathematical induction seeks to show by implication that if a value is true for a given natural number, it is true for all natural numbers greater than that number. Induction is generally only applied to the natural numbers. The induction principle proceeds as follows:
- Let P be a predicate with the natural numbers as its domain. Suppose that P has these two properties:
- P(1) is true (or P(0), if zero is included as a natural number)
- For all natural numbers n, if P(n) is true, then P(n + 1) is true.
- Then P(n) is true for all natural numbers n.
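As a concrete illustration of the principle (an example added here for clarity, not part of the original article), let P(n) be the statement that 1 + 2 + … + n = n(n + 1)/2:

```latex
\textbf{Base case:}\quad P(1) \text{ holds, since } 1 = \tfrac{1 \cdot 2}{2}. \\[4pt]
\textbf{Inductive step:}\quad \text{assume } P(n). \text{ Then} \\
1 + 2 + \cdots + n + (n+1)
  = \frac{n(n+1)}{2} + (n+1)
  = \frac{(n+1)(n+2)}{2}, \\
\text{which is exactly } P(n+1).
\text{ By the induction principle, } P(n) \text{ holds for all natural numbers } n.
```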
Complete induction is similar to mathematical induction, except that in the second property the hypothesis is assumed not only for P(n), but for P(k) for all values k less than or equal to n.
List of proofs
Please see our list of proofs.
List comprehensions are a nice feature in Python. They are, however, just syntactic sugar for for loops. E.g. the following list comprehension:
def f(l):
    return [i ** 2 for i in l if i % 3 == 0]
is sugar for the following for loop:
def f(l):
    result = []
    for i in l:
        if i % 3 == 0:
            result.append(i ** 2)
    return result
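A quick sanity check (the function names and sample input here are mine) that the comprehension and its desugared loop really compute the same thing:

```python
def comp(l):
    # the list comprehension form
    return [i ** 2 for i in l if i % 3 == 0]

def loop(l):
    # the hand-desugared for-loop form
    result = []
    for i in l:
        if i % 3 == 0:
            result.append(i ** 2)
    return result

print(comp(range(10)))   # [0, 9, 36, 81]
assert comp(range(10)) == loop(range(10))
```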
The interesting bit about this is that list comprehensions are actually implemented in almost exactly this way. If one disassembles the two functions above one gets sort of similar bytecode for both (apart from some details, like the fact that the append in the list comprehension is done with a special LIST_APPEND bytecode).
Now, when doing this sort of expansion there are some classical problems: what name should the intermediate list get that is being built? (I said classical because this is indeed one of the problems of many macro systems). What CPython does is give the list the name _[1] (and _[2], _[3], ... with nested list comprehensions). You can observe this behaviour with the following code:
$ python
Python 2.5.2 (r252:60911, Apr 21 2008, 11:12:42)
[GCC 4.2.3 (Ubuntu 4.2.3-2ubuntu7)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> [dir() for i in [0]]
[['_[1]', '__builtins__', '__doc__', '__name__', 'i']]
>>> [[dir() for i in [0]] for j in [0]]
[[['_[1]', '_[2]', '__builtins__', '__doc__', '__name__', 'i', 'j']]]
That is a sort of nice decision, since you can not reach that name by any "normal" means. Of course you can confuse yourself in funny ways if you want:
>>> [locals()['_[1]'].extend([i, i + 1]) for i in range(10)]
[0, 1, None, 1, 2, None, 2, 3, None, 3, 4, None, 4, 5, None, 5, 6, None, 6, 7, None, 7, 8, None, 8, 9, None, 9, 10, None]
Now to the real reason why I am writing this blog post. PyPy's Python interpreter implements list comprehensions in more or less exactly the same way, with one tiny difference: the name of the variable:
$ pypy-c-53594-generation-allworking
Python 2.4.1 (pypy 1.0.0 build 53594) on linux2
Type "help", "copyright", "credits" or "license" for more information.
``the globe is our pony, the cosmos our real horse''
>>>> [dir() for i in [0]]
[['$list0', '__builtins__', '__doc__', '__name__', 'i']]
Now, that shouldn't really matter for anybody, should it? Turns out it does. The following way too clever code is apparently used a lot:
__all__ = [__name for __name in locals().keys() if not __name.startswith('_') or __name == '_']
In PyPy this will give you a "$list0" in __all__, which will prevent the import of that module :-(. I guess I need to change the name to match CPython's.
Lesson learned: no detail is obscure enough not to have some code depending on it. Problems at this level of obscurity are mostly what we are fixing in PyPy at the moment.
Copyright © University of Cambridge. All rights reserved.
The builders have dug a hole in the ground to be filled with concrete for the foundations of our garage. The shape is designed to avoid subsidence that might be caused by the roots of some very large trees nearby. The hole is perfectly symmetrical with vertical walls, 1.2 metres high, surrounding a rectangular area measuring 6 metres by 5 metres. A mound of earth in the centre, the shape of the frustum of a pyramid, has a rectangular base 4.8 metres by 3.8 metres and a horizontal rectangular top 3.8 metres by 2.8 metres. The height of this mound is 0.9 metres. How many cubic metres of ready-mix concrete should the builders order to fill this hole to make the concrete raft for the foundations? Can you explain the method simply to help the builders to do the calculations for similar shaped rafts of other dimensions?
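One way to set up the builders' calculation (a sketch of mine, not the official solution) is to subtract the mound's volume from the full rectangular hole. A frustum of a pyramid obeys the prismatoid formula V = h/6 × (A_bottom + 4·A_middle + A_top), where the middle cross-section has side lengths averaged between top and bottom:

```python
def frustum_volume(h, a1, b1, a2, b2):
    """Prismatoid formula for a rectangular frustum:
    bottom a1 x b1, top a2 x b2, height h."""
    a_bottom = a1 * b1
    a_top = a2 * b2
    a_mid = ((a1 + a2) / 2) * ((b1 + b2) / 2)   # mid-height cross-section
    return h / 6 * (a_bottom + 4 * a_mid + a_top)

hole = 6 * 5 * 1.2                              # rectangular hole, 1.2 m walls
mound = frustum_volume(0.9, 4.8, 3.8, 3.8, 2.8) # earth mound left in the centre
print(round(hole - mound, 3))                   # 23.154 cubic metres of concrete
```

So the builders should order roughly 23–24 cubic metres; the same function handles similar rafts of other dimensions.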
Crick and Watson's first attempt to solve the structure of DNA in the fall of 1951 was brief and unsuccessful. Thinking like
Pauling, they quickly came up with a model of three DNA strands wound around each other in a helix, phosphates at the core.
It seemed to fit the density data, the x-ray data was compatible with anything from two to four strands per molecule, and
it solved a theoretical problem. If DNA was the genetic material then it had to say something specific to the body; it had
to have a language that could be translated somehow into the making of proteins. It was already known that the sugars and
phosphates were simple repeating units, unvarying along the DNA strands. The bases were the variables. The bases varied, but
the x-ray pattern indicated a repeating crystalline structure; ergo, the core - the part of the structure giving rise to the
repeating patterns - must contain the repeating subunits, the sugars or phosphates, with the bases sticking out where they
would not get in the way.
The only major problem was explaining how one could pack phosphates into the middle when at normal pH they would be generally
expected to carry a negative charge. All those negative charges at the core would repel each other, blowing the structure
apart. The triple helix they had devised was so pretty, though, and fit so much of the data that Crick and Watson figured
there had to be a place for positive ions at the core to cancel out the negative charges. They grabbed a copy of Pauling's
The Nature of the Chemical Bond, searched for inorganic ions that would fit their needs, and found that magnesium or calcium might fit. There was no good
evidence for the presence of these positive ions, but there was no good evidence against it, either. They were trying to think
like Pauling, after all, and Pauling would certainly have assumed - as he had with the alpha helix - that the structure came
first and the minor details fell into place later.
Though the sizes are not to scale, the Sun and planets of the inner solar system are shown in this illustration, where each red dot represents an asteroid. New results from NEOWISE, the infrared asteroid-hunting portion of the WISE mission, are shown on the left compared to old population projections of mid-size or larger near-Earth asteroids from surveys at visible wavelengths. The good news is, NEOWISE observations estimate there are 40 percent fewer near-Earth asteroids larger than 100 meters (330 feet) than indicated by visible light searches.

Based on infrared imaging, the NEOWISE results are more accurate: heated by the Sun, asteroids of the same size radiate the same amount of infrared light, but can reflect very different amounts of visible sunlight depending on how shiny their surface is, or their surface albedo. This effect can bias surveys based on optical observations. The NEOWISE results reduce the estimated number of mid-size near-Earth asteroids from about 35,000 to 19,500, but the majority still remain undiscovered.
What's a database? We can use pretty much anything as a database, as long as it allows us to store our data and retrieve it later. There are many different kinds of databases. Some allow us to store data and retrieve it years later; others are capable of preserving data only while there is an electricity supply. Some databases are designed for fast searches, others for fast insertions. Some databases are very easy to use, while some are very complicated (you may even have to learn a whole language to know how to operate them). There are also large price differences.
When we choose a database for our application, we first need to define the requirements in detail (this is known as a specification). If the application is for short-term use, we probably aren't going to use an expensive, advanced database. A quick-and-dirty hack may do. If, on the other hand, we design a system for long-term use, it makes sense to take the time to find the ideal database implementation.
Databases can be of two kinds: volatile and non-volatile. These two concepts pretty much relate to the two kinds of computer memory: RAM-style memory, which usually loses all its contents when the electricity supply is cut off; and magnetic (or optical) memory, such as hard disks and compact discs, which can retain the information even without power.
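Although this guide's own examples are in Perl, the volatile versus non-volatile distinction is easy to sketch in any language. Here is an illustrative Python sketch (my addition): an in-memory dict loses its contents when the process exits, while the on-disk dbm module keeps them across reopenings.

```python
import dbm
import os
import tempfile

# Volatile: a plain dict lives only in RAM and disappears with the process.
volatile_db = {"key": "value"}

# Non-volatile: dbm persists key/value pairs to a file on disk.
path = os.path.join(tempfile.mkdtemp(), "demo_db")
with dbm.open(path, "c") as db:    # "c": create the file if it doesn't exist
    db["key"] = b"value"
with dbm.open(path, "r") as db:    # reopen read-only: the data survived
    print(db["key"].decode())      # value
```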
For conservationists, it is often a case of two steps forward, one step back. Compare the upturn in fortune for the forests of Borneo with the depressing tale of state government intervention in Mato Grosso, central Brazil.
Since 1996 Borneo has been haemorrhaging 2 million hectares of forest a year to loggers, forest fires and plantations; only half its original forest remains. On 12 January the island's three governments - those of Brunei, Indonesia and Malaysia - agreed at a summit hosted by the Philippines to conserve 22 million hectares of rainforest in the "heart of Borneo", the last large block of forest in the island's interior and the only place in the world apart from Sumatra where orang-utans, elephants and rhinos still coexist.
This is good news for the island's treasure trove of plants and animals: in the past 25 years, 422 new plant species alone have been discovered, ...
The story of how a fossil promised to revolutionize evolutionary theory, yet was not what it seemed.
Bear Dogs of the Amphicyonidae
A short review of the infamous amphicyonids, better known as the ‘bear dogs’, including a look at their place in ecosystems, hunting, evolution and extinction as well as some other details.
Coelacanths and their living examples the Latimeria
How coelacanths were once a part of evolutionary theory, and how the discovery of living examples revealed the truth of the matter as well as how these ancient fish really lived.
Entelodonts
A look at how entelodonts have been portrayed over the years with a review of the theories that may answer some of the questions about these strange beasts.
False sabre toothed cats - the nimravids and barbourofelids
An article about the members of the Nimravidae and the Barbourofelidae, better known as the ‘false sabre-toothed cats’.
Ornithomimosaurs
Detailed facts and information about the Ornithomimosaurs (bird/ostrich mimic dinosaurs) including diet, appearance, behaviour, adaptations and classification.
Pack Hunting Dinosaurs
A look at some of the fossils and dig sites that may suggest pack hunting behaviour in dinosaurs, including the larger theropods.
Prehistoric sharks through The Ages
An overview of the prehistoric sharks of the ancient world, from Elgestolepis of the Silurian to C. megalodon of the Oligocene to Pleistocene. Information includes body forms, time periods for each species, and predator/prey interaction for the different kinds of sharks.
Pterosaurs - An Overview
The biology of pterosaurs and an explanation of flight characteristics and lifestyles for different kinds of pterosaur.
Terror Birds of the Phorusrhacidae
A look at the hyper-carnivorous birds that were once the apex predators of the Americas.
A brief review of a very specialist group of marine reptiles that lived during the Triassic, including their evolution, adaptations, behaviour and extinction.
When direct sunlight strikes falling rain, a rainbow is seen at a point directly opposite the Sun. A double rainbow occurs when some of the light entering the raindrop is refracted into its component colors, reflected off the back interior wall of the drop, and refracted again as it exits the drop.
Image Courtesy of Carlye Calvin/University Corporation for Atmospheric Research
Rainbows appear in the sky when there is bright sunlight and rain. Sunlight is known as visible or white light and is actually a mixture of colors. Rainbows result from the refraction and reflection of sunlight by these water droplets. It takes many droplets (each refracting and reflecting light back to our eyes at slightly different angles) to produce the brilliant colors of a rainbow.
You can only see a rainbow if the sun is behind you and the rain in front.
You can even make your own rainbow with a garden hose or water sprinkler on a sunny day.
A double rainbow occurs when some of the light entering the raindrop is refracted into its component colors, reflected off the back interior wall of the drop, and refracted again as it exits the drop. In the second rainbow, the colors are reversed where blue is on the outside and red is on the inside. The dark area in between the two rainbows is called Alexander's band.
Shop Windows to the Universe Science Store!
The Fall 2009 issue of The Earth Scientist
, which includes articles on student research into building design for earthquakes and a classroom lab on the composition of the Earth’s ancient atmosphere, is available in our online store
You might also be interested in:
Rain is precipitation that falls to the Earth in drops of 5mm or more in diameter according to the US National Weather Service. Virga is rain that evaporates before reaching the ground. Raindrops form...more
Mine are the night and morning, The pits of air, the gulf of space, The sportive sun, the gibbous moon, The innumerable days. I hid in the solar glory, I am dumb in the pealing song, I rest on the pitch...more
Have you ever seen clouds in the sky that looked different than "normal" clouds? Or have you wondered why rainbows form? Sometimes there are phenomena in the sky that are affected by light and make clouds...more
Rainbows appear in the sky when there is bright sunlight and rain. Sunlight is known as visible or white light and is actually a mixture of colors. Rainbows result from the refraction and reflection of...more
The Earth travels around the sun one full time per year. During this year, the seasons change depending on the amount of sunlight reaching the surface and the Earth's tilt as it revolves around the sun....more
Scientists sometimes travel in specially outfitted airplanes in order to gather data about atmospheric conditions. These research aircraft have special inlet ports that bring air from the outside into...more
An anemometer is a weather instrument used to measure the wind (it can also be called a wind gauge). Anemometers can measure wind speed, wind direction, and other information like the largest gust of wind...more | <urn:uuid:4328e80c-6674-454b-830f-5fb87e67a93a> | 3.8125 | 649 | Content Listing | Science & Tech. | 58.005332 |
Sand Dunes - How climate change is altering the landscape in the desert Southwest:Human induced climate change is caused by the release of green house gases, including carbon dioxide (CO2), methane (CH3), nitrous oxide (N2O), as well as rapid deforestation. Although climate change is a global phenomenon, the regional impacts vary around the globe. In the Southwestern United States, climate change is leading to drier, hotter conditions which have consequences for both ecosystems and human communities (US Environmental Protection Agency, 2011, www.epa.gov/climatechange/fq/effects.html).
The following is an example of how climate change is impacting tribal lands. More specifically, this case addresses sand dune migration which is a landscape-scale consequence of climate change. This brief, true story explains some of the challenges to infrastructure and livelihood already being felt by one family on the Navajo Nation.
The Teesto Dunes:
Teesto family home with nearby sand dunes. Photo courtesy of Institute for Tribal Environmental Professionals, Northern Arizona University.
Environment near Teesto. Photo courtesy of JoŽlle Clark
The dunes near the family's home are growing closer and larger. They sometimes need to dig their vehicle out of the sand by hand after a storm. This can prevent them from going to town for groceries or to miss a hospital visit. The sand dunes on the road leading to the house must be cleared with a tractor for the school bus to come pick up their children. Since the family does not have running water, they use an outhouse or outdoor toilet. They often have to dig it out from under the growing sand dunes.
If they do not keep sand away from their home and outhouse, this Navajo family will have to rebuild in a different location. For this Navajo family, moving would create a financial hardship and cultural loss. Many Navajo traditions are tied to the home and surrounding area.
How can we help this family?The Environmental Education Outreach Program (EEOP) at Northern Arizona University's Institute for Tribal Environmental Professionals has partnered with the U.S. Geological Survey on a pilot study to monitor dune migration and to stabilize dunes in the Teesto, Arizona area. To learn how you can help, click HERE.
Sand sausage grid on Teesto sand dune. Photo courtesy of Institute for Tribal Environmental Professionals, Northern Arizona University.
|Luckily for the Teesto family, scientists are interested in the growth of the sand dunes. These changes to the landscape
provide information about short and long-term effects of climate change in the region. For example, Dr. Margaret Hiza
Redsteer of the U.S. Geological Survey (USGS), as well as local students and volunteers, are working to help create
solutions for sand dune movement. They study the rate of growth of the dunes, take aerial photos of the dunes, and
monitor the wind and precipitation in the area.
Working with the scientists, the community has tried one idea to slow the growth of the sand dunes. Sand sausages are fabric tubes made from a corn-based material. The tubes are filled with sand from the dune and laid out in a grid-like pattern. Within each square of the grid, small seed cakes containing native seeds are planted. The idea for this system is to capture the wind blown sand, slow the growth of the dune, and help native plants grow after rainfalls. This technique of using sand sausages in grids was first used on sand dunes in Mongolia.
For more information please contact:
Mansel Nelson, Program Coordinator, Sr.
Last updated: May 10, 2012 | <urn:uuid:8b99e8aa-b1ea-4908-a708-7941bcf2186e> | 3.734375 | 765 | Knowledge Article | Science & Tech. | 44.335428 |
It was unseasonably, strangely warm yesterday, hitting a reported high of 68 (69 at the airport) in the middle of December. That’s 12 degrees warmer than the average Atlanta high for Dec. 9 of 56 degrees.
Now, if I were to adopt the debating technique of certain global-warming skeptics, I might seize upon yesterday’s freakish heat as proof that the scientists are right and that global warming is real. Just as deniers use an odd snowstorm in Houston to scoff at claims that the planet is getting hotter, I could do the same:
“Look how warm it was yesterday! Almost 70 in mid-December! How can you claim that global warming isn’t real?”
But of course, that would be wrong. It would be foolish. One day’s temperature is a matter of weather, not climate. One odd day, month, season, year or even a series of years tell us little about long-term climate trends. In addition, data from Atlanta or any other single monitoring station don’t tell us anything about trends globally.
As I was perusing the December record-temperature data at weather.com, though, I noticed something unusual.
The oldest record I could find from Atlanta goes back to 1880, giving us a temperature database of at least 129 years. Logically, daily record-high temperatures ought to be distributed fairly equally over that time period. Yet they’re not.
Of the 31 daily record highs in December, 24 have been set in the last 25 years, far more than logic suggests. Let me put that another way: Seven record daily highs for Atlanta were set in the first 104 years of record-keeping; 24 record highs have been set in the last 25 years. (Looking at the other end of the gauge, only 3 of 31 daily December lows have been set in the past quarter century, which is about what you’d expect in a normal distribution.)
Startled, I looked at January records. Nine of the 31 record highs that month have been set in the last 25 years. That’s a lot fewer than in December, but statistically, it’s also two or three times more than should be the case if all else were equal.
Now, even that does not constitute proof of global warming. At most, you can say that it is consistent with climate change — climatologists do warn that warming would be most noticeable in winter months. But even that claim might be stretching things too far.
As climate researchers would tell you, a lot of that apparent warming might be explained by the growing urbanization of Atlanta. The city has become a heat island of concrete and asphalt, which makes it complicated to compare today’s temperature records with those of 100 years ago. So researchers use various “tricks” to correct for that change — they have agreed-upon methods to account for the effect of heat islands, and they adjust the data accordingly.
In other words, it is not a simple matter. And given the complexity of the science, there would certainly be an opportunity for researchers to give the data a slight little finagle here, a tiny bit of exaggeration there, and — voila!! — global warming!
That’s roughly the version of reality that many global-warming skeptics are now asking their supporters to accept. Having largely lost the argument about a scientific consensus on the issue — the consensus is too solid to dismiss any longer — they now argue that the consensus itself is a fraud.
“At worst its junk science and it’s part of a massive international scientific fraud,” as U.S. Rep. James Sensenbrenner, R-Wis., claims. Sensenbrenner is former chairman of both the House Science Committee and the House Judiciary Committee, and is now the ranking Republican member on the House Select Committee for Energy Independence and Global Warming.
Of course, the narrative pushed by Sensenbrenner and others would require you to believe that smart people all over the world — most of whom who have wanted to be scientists all their lives and have worked hard to achieve that goal — have somehow gotten together to collude in this giant fraud. Depending on the day and the skeptic/theorist, the researchers were motivated to join that conspiracy by a lust for government research grants — although private industry would pay more for the opposite conclusion — or by some secret desire to promote socialism or even one-world government.
Since the late ’70s, as the story goes, scientists in the United States, Britain, Japan, Canada, France, Korea and many other nations have been perpetrating this silent fraud, and as young scientists emerged from universities they too have been secretly initiated into the priesthood. It’s a story right out of the books of Dan Brown, requiring the skills of a Robert Langdon to unravel.
Either that, or global warming is real. | <urn:uuid:b4b876f2-9d46-41b1-9424-b405f147fe7d> | 2.921875 | 1,024 | Personal Blog | Science & Tech. | 55.567921 |
Sorting algorithm is an algorithm that puts elements of a list in a certain order . The most-used orders are numerical order and lexicographical order.
Sorting algorithm classification:
Stability-Maintaining relative order of records with equal keys.
Methods applied like insertion,exchange,,selection,merging etc.
Sorting is a process of linear ordering of list of objects
Sorting techniques are categoried into:
1. Internal sorting
2. External sorting
It takes the place in the main memory of a computer.
Eg: Bubble sort,Insertion sort,Shell sort,Quick sort,Heap Sort,etc
It takes the place in secondary memory of a computer,Since the number of objects to be stored is too large to fit in main memory
Eg:Merge sort,Multiway Merge,Polyphase merge.
Insertion sort is a simple sorting algorithm, a comparison sort in which the sorted array (or list) is built one entry at a time. It is much less efficient on large lists than more advanced algorithms such as quicksort , heapsort, or merge sort.
Insertion sorts works by taking elements from the list one by one and inserting them in their current position into a new sorted list.
Insertion sort consists of N-1 passes.
Insertion Sort Provides Several Advantages:
- Simple implementation
- Efficient for (quite) small data sets
- Adaptive, i.e. efficient for data sets that are already substantially sorted: the time complexity is O( n + d), where d is the number of inversions
- More efficient in practice than most other simple quadratic (i.e. O ( n 2)) algorithms such as selection sort or bubble sort: the average running time is n 2/4, and the running time is linear in the best case
- Stable, i.e. does not change the relative order of elements with equal keys
- In-place , i.e. only requires a constant amount O(1) of additional memory space
- Online , i.e. can sort a list as it receives it
Limitations of insertion sort:
It is relatively efficient for small lists and mostly- sorted list.
It is expensive because of shifting all following elements by one.
Analysis of insertion sort:
Worest case performance O(n2)
Best case performance O(n)
Average case performance O(n2)
Step 1: Read the input list size.
iv) list [swap+1]
Step 2: Read the list elements.
step 3: pass=1
step 4: Repeat the following steps until pass reach size-1 (for N-1 passes)
iii) Repeat the follwoing steps if this condition is true list[swap]>key and swap >= 0.(for comparision).
a) list [sawp+1]=list[swap ]
C Program To Implement Insertion Sort
CPP Program To Insertion Sort | <urn:uuid:327ea02e-6e3e-49b8-a33b-66b54b88ca9e> | 4.21875 | 630 | Tutorial | Software Dev. | 52.916667 |
Search Course Communities:
Interactive Math Programs
Course Topic(s): Ordinary Differential Equations | Modeling, ODE | Numeric Methods | Graphic Methods
A collection of Java applets for a number of topics. (All have a good parser, but many are not very visual and none have a description of the algorithm.): Euler data list output -- no visuals; Direction field has nice clean simple graphics; first-order (actually uses Improved Euler) numeric values and a subsequent plotter; Runge-Kutta produces values. Similar applets for second order and systems. Notably there are a number of modeling models (2,3,4 body), planets, double pendulum and spherical pendulum. These models have graphical output, but no description of the underlying equations.
Resource URL: http://www.dartmouth.edu/~rewn/
To rate this resource on a 1-5 scheme, click on the appropriate icosahedron:
Creator(s): R.E. Williamson
Contributor(s): R. E. Williamson
This resource was cataloged by Allan StruthersPublisher:
Dartmouth University, Math
This review was published on July 09, 2011
Be the first to start a discussion about this resource. | <urn:uuid:723e0b9c-aaaa-4fa1-9646-ebf2c0049efc> | 2.984375 | 265 | Content Listing | Science & Tech. | 39.945304 |
Measuring Vertical Profiles
Molluscan Fisheries staff using standard surveying techniques to measure the elevation, width, and shape of the beach. Measurements begin 3 meters into the vegetation of the upper beach (as seen in this photo) and continue at regular intervals until the person bearing the stadia rod is below sea level. Monthly measurements are used to assess the impact of erosion on nourished and unnourished beaches in Pinellas County.
Image Credit: FWC | <urn:uuid:4e2c5bcf-0798-4146-94f0-908f0f3374ea> | 3.1875 | 97 | Knowledge Article | Science & Tech. | 23.366804 |
What would be the smallest number of moves needed to move a Knight
from a chess set from one corner to the opposite corner of a 99 by
99 square board?
With one cut a piece of card 16 cm by 9 cm can be made into two pieces which can be rearranged to form a square 12 cm by 12 cm. Explain how this can be done.
A cylindrical helix is just a spiral on a cylinder, like an ordinary spring or the thread on a bolt. If I turn a left-handed helix over (top to bottom) does it become a right handed helix? | <urn:uuid:ae9f70a9-4148-41d4-b88a-990c294c28a9> | 3.015625 | 125 | Q&A Forum | Science & Tech. | 75.975491 |
sciencehabit writes "Like all invisible things that are only partly understood, black holes evoke a sense of mystery. Astronomers know that the tremendous gravitational pull of a black hole sucks matter in, and that the material falling in causes powerful jets of particles to shoot out of the hole at nearly the speed of light. But how exactly this phenomenon occurs remains a matter of conjecture, because astronomers have never quite managed to observe the details – until now. Astrophysicists have taken the closest look to date at the region where matter swirls around a black hole. By measuring the size of the base of a jet shooting out of the supermassive black hole at the center of the M87 galaxy (abstract), the researchers conclude that the black hole must be spinning and that the material orbiting must also be swirling in the same direction. Some of the material from this orbiting 'accretion disk' is also falling into the black hole, like water swirling down a drain."
Check out SlashCloud for the latest in cloud computing.
Reader intellitech points to an article at National Geographic, from which he excerpts: "If astronomers' early predictions hold true, the holidays next year may hold a glowing gift for stargazers—a superbright comet, just discovered streaking near Saturn. Even with powerful telescopes, comet 2012 S1 (ISON) is now just a faint glow in the constellation Cancer. But the ball of ice and rocks might become visible to the naked eye for a few months in late 2013 and early 2014—perhaps outshining the moon, astronomers say. The comet is already remarkably bright, given how far it is from the sun, astronomer Raminder Singh Samra said. What's more, 2012 S1 seems to be following the path of the Great Comet of 1680, considered one of the most spectacular ever seen from Earth."
An anonymous reader writes with news of a recent paper about the bias among science faculty against female students. The study, recently published in the Proceedings of the National Academy of Sciences, asked professors to evaluate applications for a lab manager position. The faculty were given information about fictional applicants with randomly-assigned genders. They tended to rate male applicants as more hire-able than female applicants, and male names also generated higher starting salary and more mentoring offers. This bias was found in both male and female faculty. "The average salary suggested by male scientists for the male student was $30,520; for the female student, it was $27,111. Female scientists recommended, on average, a salary of $29,333 for the male student and $25,000 for the female student."
DevotedSkeptic sends this news from NASA: "The 18,000-pound test article that mimics the size and weight of NASA's Orion spacecraft crew module recently completed a final series of water impact tests in the Hydro Impact Basin at the agency's Langley Research Center in Hampton, Va. The campaign of swing and vertical drops simulated various water landing scenarios to account for different velocities, parachute deployments, entry angles, wave heights and wind conditions the spacecraft may encounter when landing in the Pacific Ocean. The next round of water impact testing is scheduled to begin in late 2013 using a full-sized model that was built to validate the flight vehicle's production processes and tools."
SchrodingerZ writes "In the wake of Neil Armstrong's death, the United States Navy has announced this week that a new research vessel will be named in his honor. This ship will be the first Armstrong-class Auxiliary General Oceanographic Research (AGOR) ship in the world. This ship got its name from secretary Ray Mabus, who wanted to honor the first man to set foot on the moon. 'Naming this class of ships and this vessel after Neil Armstrong honors the memory of an extraordinary individual, but more importantly, it reminds us all to embrace the challenges of exploration and to never stop discovering,' say Mabus. Armstrong, before his career at NASA, flew in combat missions during the Korean war. 'The Armstrong-class AGOR ship will be a modern oceanographic research platform equipped with acoustic equipment capable of mapping the deepest parts of the oceans, and modular on-board laboratories that will provide the flexibility to meet a wide variety of oceanographic research challenges.' It will be 238 feet long, beam length of 50 feet, and will be able to travel at 12 knots. The ship is currently under construction in Anacortes, Washington."
derekmead writes "Data from the enormous Green Bank Telescope at the National Radio Astronomy Observatory has been used to test some of Einstein's theories, discover new molecules in space, and find evidence of the building blocks of life and of the origins of galaxies. With 6,600 hours of observation time a year, the GBT produces massive amounts of data on the makeup of space, and any researchers with reason to use the data are welcome to do so. The eleven-year-old GBT stands as one of the crowning achievements of American big science. But with the National Science Foundation strapped for cash like most other science-minded government agencies, the NRAO's funding is threatened. In August of this year, the Astronomy Portfolio Review, a committee appointed by the NSF, recommended that the GBT be defunded over the next five years. Researchers, along with locals and West Virginia congressmen, are fighting the decision, which puts the nearly $100 million telescope at risk. Unless they succeed, America's giant dish will go silent."
ananyo writes "Two species of African spiny mouse have been caught at something no other mammal is known to do — completely regenerating damaged tissue. The work could help improve wound healing in humans. The species — Acomys kempi and Acomys percivali — have skin that is brittle and easily torn, which helps them to escape predators by jettisoning patches of their skin when caught or bitten. Researchers report that whereas normal laboratory mice (Mus musculus) grow scar tissue when their skin is removed, African spiny mice can regrow complete suites of hair follicles, skin, sweat glands, fur and even cartilage (abstract). Tissue regeneration has not been seen in mammals before, though it is common in crustaceans, insects, reptiles and amphibians."
coondoggie writes "The U.S. Department of Homeland Security this week issued a call for unmanned systems makers to participate in a program that will ultimately determine their safety and performance for use in first responder, law enforcement and border security situations. In a twist that will certainly raise some eyebrows, the results of the program — called the Robotic Aircraft for Public Safety (RAPS) — will remain unavailable to the public, which, considering how involved the actual public may be with these drones is unfortunate."
An anonymous reader writes in with a story about some new electronics that are designed to melt in your body not in your hand. "Scientists have created ultra-thin electronic devices that can 'melt away' in the body once their job is done. A new study published in the journal Science, details how scientists have created a tiny, fully functional electronic device capable of vanishing within their environment, like in the body or in water, once they are no longer needed or useful. There are already implants that dispense drugs or provide electrical stimulation but they do not dissolve. The latest creation is an early step in a technology that may benefit not only medicine, like enabling the development of medical implants that don't need to be surgically removed or the risk of long-term side effects, but also electronic waste disposal."
sighted writes "NASA reports that its Curiosity rover mission has found evidence that a stream once ran vigorously — and for a sustained amount of time — across the area on Mars where the rover is driving. There is, of course, earlier evidence for the presence of water on Mars, but NASA says this evidence, images of rocks containing ancient streambed gravels, is the first of its kind."
Hugh Pickens writes "Doug Gross writes that thanks to technology, there's been a recent sea change in how people today kill time. 'Those dog-eared magazines in your doctor's office are going unread. Your fellow customers in line at the deli counter are being ignored. And simply gazing around at one's surroundings? Forget about it.' With their games, music, videos, social media and texting, smartphones 'superstimulate,' a desire humans have to play when things get dull, says anthropologist Christopher Lynn and he believes that modern society may be making that desire even stronger. 'When you're habituated to constant stimulation, when you lack it, you sort of don't know what to do with yourself,' says Lynn. 'When we aren't used to having down time, it results in anxiety. 'Oh my god, I should be doing something.' And we reach for the smartphone. It's our omnipresent relief from that.' Researchers say this all makes sense. Fiddling with our phones, they say, addresses a basic human need to cure boredom by any means necessary. But they also fear that by filling almost every second of down time by peering at our phones we are missing out on the creative and potentially rewarding ways we've dealt with boredom in days past. 'Informational overload from all quarters means that there can often be very little time for personal thought, reflection, or even just 'zoning out,'" researchers write. 'With a mobile (phone) that is constantly switched on and a plethora of entertainments available to distract the naked eye, it is understandable that some people find it difficult to actually get bored in that particular fidgety, introspective kind of way.'"
coondoggie writes "The US Air Force this week said it will base the first Space Fence radar post on Kwajalein Island in the Republic of the Marshall Islands with the site planned to be operational by 2017. The Space Fence is part of the Department of Defense's effort to better track and detect space objects which can consist of thousands of pieces of space debris as well as commercial and military satellite parts."
puddingebola writes in with news of a new app that might be of interest to those studying Einstein's brain, or just looking for something neat for Halloween. "Albert Einstein's brain, that revolutionized physics, can now be downloaded as an iPad app for USD 9.99. The exclusive application, which has been just launched, promises to make detailed images of Einstein's brain more accessible to scientists than ever before. The funding to scan and digitize nearly 350 fragile and priceless slides made from slices of Einstein's brain after his death in 1955 were given to a medical museum under development in Chicago, website 'Independent.ie' reported. The application will allow researchers and novices to peer into the eccentric Nobel winner's brain as if they were looking through a microscope. 'I can't wait to find out what they'll discover,' Steve Landers, a consultant for the National Museum of Health and Medicine Chicago, who designed the app, was quoted as saying by 'Press Association.'"
DeviceGuru writes "Suitable Technologies today unveiled a telepresence robot based on technology from Willow Garage, a robotics research lab. Beam (as in 'Beam me up, Scotty' — no, really!) implements a video chat function on a computer you can remotely drive around via Internet-based control. Beam, which stands 62 inches tall and weighs 95 pounds, adheres to four operational imperatives, which are intended to mimic human interaction and behavior: reciprocity of vision (if I see you, you must see me); ensuring private communication (no recordings of what goes on); transparency of technology (keeping the interaction natural); and respect social norms (don't push or shove Beam!). But the big question is: Does Beam also adhere to Isaac Asimov's Three Laws of Robotics? Let's hope so!"
Third Position writes "The most unambiguous data to date on the elusive 113th atomic element has been obtained by researchers at the RIKEN Nishina Center for Accelerator-based Science (RNC). A chain of six consecutive alpha decays, produced in experiments at the RIKEN Radioisotope Beam Factory (RIBF), conclusively identifies the element through connections to well-known daughter nuclides. The search for superheavy elements is a difficult and painstaking process. Such elements do not occur in nature and must be produced through experiments involving nuclear reactors or particle accelerators, via processes of nuclear fusion or neutron absorption. Since the first such element was discovered in 1940, the United States, Russia and Germany have competed to synthesize more of them. Elements 93 to 103 were discovered by the Americans, elements 104 to 106 by the Russians and the Americans, elements 107 to 112 by the Germans, and the two most recently named elements, 114 and 116, by cooperative work of the Russians and Americans. With their latest findings, associate chief scientist Kosuke Morita and his team at the RNC are set follow in these footsteps and make Japan the first country in Asia to name an atomic element."
Capt.Albatross writes "A couple of months ago, the New York Times published political scientist Andrew Hacker's opinion that teaching algebra is harmful. Today, it has followed up with an article that is clearly intended to indicate the usefulness of basic mathematics by suggesting useful exercises in a variety of 'real-world' topics. While the starter questions in each topic involve formula evaluation rather than symbolic manipulation, the follow-up questions invite readers to delve more deeply. The value of mathematics education has been a (recurring issue on Slashdot)."
New mareacaspica writes with this snippet from Nature: "Researchers have constructed 3D models of two different insects, in their nymph stage by scanning their fossils with a novel technique called X-ray microtomography. They obtained sections, two centimeters long, and from the sections constructed the models. Such fossils of juvenile insects are very rare during that ancient period, and the research could provide a better understanding not only of insects, but also other animals, as the technique develops." Original Paper.
astroengine writes "A new analysis of recent observations finds evidence for a protoplanetary disk around a red dwarf star plunging in the direction of the supermassive black hole at the center of our galaxy. Ruth Murray-Clay and Avi Loeb of the Harvard-Smithsonian Center for Astrophysics did the theoretical work. Stefan Gillessen of the Max-Planck-Institute for Extraterrestrial Physics made the observations using the European Southern Observatory's Very Large Telescope. The red dwarf star will make its closest approach in the summer of 2013, hurtling only 270 billion miles from black hole. (Or roughly 54 solar system diameters, as measured from the furthest edge of the Kuiper belt.) It won't get sucked into the black hole, but it will be flung back along its elliptical orbit out to a distance of a little more than 1/10 light-years."
jamstar7 writes "From the article: 'NASA is reportedly mulling the construction of a floating Moon base that would serve as a launching site for manned missions to Mars and other destinations more distant than any humans have traveled to so far. The Orlando Sentinel reported over the weekend that the proposed outpost, called a "gateway spacecraft," would support "a small astronaut crew and function as a staging area for future missions to the moon and Mars."' This is actually a good idea, using the Moon as a staging base for exploring the cosmos. Once we build manufacturing capability there, why not build spacecraft there? We can build bigger, more spacious craft so as to not lock up future astronauts in a closet for months or years at a time." Moon base isn't quite accurate: it would be a space station at the Earth-Moon L2 Lagrange point about 60000 km from the surface of the dark side of the moon.
bdking writes "A typeface family commonly found on the devices installed in many modern cars is more likely to cause drivers to spend more time looking away from the road than an alternative typeface tested in two studies, according to new research from MIT's AgeLab." It seems that the closed letterforms of Grotesque typefaces require slightly more time to read than the open letterforms of Humanist typefaces, just enough that it could be problematic at highway speeds.
In astronomy, axial tilt is the angle between a planet's rotational axis at its north pole and a line perpendicular to the planet's orbital plane. It is also called axial inclination or obliquity. Earth's axial tilt is the cause of its seasons, such as summer and winter.
Axial tilt of major celestial bodies in our solar system (degrees)
- Sun 7.25 (to the Ecliptic)
- Mercury ~0.01
- Venus 177.4
- Earth 23.439281
- Moon 1.5424
- Mars 25.19
- Ceres ~4
- Jupiter 3.13
- Saturn 26.73
- Uranus 97.77
- Neptune 28.32
- Pluto 119.61
Axial tilt of Venus, Uranus and Pluto
The axial tilts of Venus, Uranus and Pluto are greater than 90 degrees for the following reasons.
- Venus: Venus is rotating in a retrograde direction, opposite to the direction of planets like Earth. The north pole of Venus is pointed 'down' (southward); hence the angle between the rotational axis of Venus passing through its north pole and the line perpendicular to its orbital plane is 177.4 degrees.
- Uranus: Planet Uranus is rotating on its side. The direction of Uranus's rotational axis through its north pole is almost in the direction of its orbit around the Sun, hence its axial tilt is 97.77 degrees. If the direction of its rotational axis had been aligned horizontally with its orbital plane in the direction of its orbit around the Sun, then the axial tilt of the planet would have been exactly 90 degrees.
- Pluto: Like Venus, Pluto's rotational axis and north pole are pointed slightly downward (southward). Hence the angle between Pluto's rotational axis passing through its north pole and the line perpendicular to its orbital plane is 119.61 degrees.
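The convention above (tilt measured from the north pole to the line perpendicular to the orbital plane) means that any tilt greater than 90 degrees implies retrograde rotation. A short script makes the rule explicit, using the values from the list above:

```python
TILT_DEG = {  # axial tilts from the list above, in degrees
    "Mercury": 0.01, "Venus": 177.4, "Earth": 23.439281, "Mars": 25.19,
    "Jupiter": 3.13, "Saturn": 26.73, "Uranus": 97.77,
    "Neptune": 28.32, "Pluto": 119.61,
}

def rotation_sense(tilt_deg):
    """Tilt < 90 deg: prograde spin; tilt > 90 deg: retrograde spin
    (the body's north pole points 'below' its orbital plane)."""
    return "prograde" if tilt_deg < 90.0 else "retrograde"

retrograde = sorted(n for n, t in TILT_DEG.items()
                    if rotation_sense(t) == "retrograde")
print(retrograde)  # Venus, Pluto, and the sideways-spinning Uranus exceed 90 deg
```

By this convention, Venus, Pluto and Uranus come out retrograde, matching the three cases discussed above.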
Research by Peter Capak, C. L. Carilli, N. Lee, T. Aldcroft, H. Aussel, E. Schinnerer, G. W. Wilson, M. S. Yun, A. Blain, M. Giavalisco, O. Ilbert, J. Kartaltepe, K.-S. Lee, H. McCracken, B. Mobasher, M. Salvato, S. Sasaki, K. S. Scott, K. Sheth, Y. Shioya, D. Thompson, M. Elvis, D. B. Sanders, N. Z. Scoville, Y. Taniguchi
Scientists report the spectroscopic confirmation of a submillimeter galaxy (SMG; the Baby Boom Galaxy) with an estimated L_IR = (0.5–2.0) × 10^13 L_⊙. The spectra, mid-IR, and X-ray properties indicate the bolometric luminosity is dominated by star formation at a rate of >1000 M_⊙ per year. Multiple, spatially separated components are visible in the Ly-alpha line with an observed velocity difference of up to 380 km/s, and the object's morphology indicates a merger. The best-fit spectral energy distribution and spectral line indicators suggest the object is 2–8 million years old and contains >10^10 M_⊙ of stellar mass. This object is a likely progenitor for the massive early-type systems seen at z ~ 2. The green and red splotch in this image is the most active star-making galaxy in the very distant universe. Nicknamed "Baby Boom," the galaxy is churning out an average of up to 4,000 stars per year, more than 100 times the number produced in our own Milky Way galaxy. It was spotted 12.3 billion light-years away by a suite of telescopes, including NASA's Spitzer Space Telescope.
The study of galaxies detected at millimeter and submillimeter wavelengths is one of the most rapidly developing fields in observational astronomy. It is now known that a large fraction of star formation activity is enshrouded in dust, with the star formation rate (SFR) being directly proportional to the far-infrared (FIR) luminosity of galaxies, modulo possible contributions from an active galactic nucleus (AGN). Surveys performed at millimeter wavelengths directly probe the FIR luminosity, and hence the amount of star formation. Furthermore, the shape of the galaxy spectral energy distributions (SEDs) at rest-frame millimeter wavelengths results in a negative K-correction over a broad redshift range. Therefore a flux-limited survey is equivalent to an SFR-limited survey at these redshifts (Blain et al. 2002).
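The SFR–FIR proportionality mentioned above is commonly quantified with the Kennicutt (1998) calibration, SFR [M_⊙/yr] ≈ 4.5 × 10^-44 L_FIR [erg/s]. Applying it to the quoted L_IR ~ 10^13 L_⊙ reproduces the ">1000 M_⊙ per year" figure (the calibration constant and solar luminosity below are standard values, not from this paper):

```python
L_SUN_ERG_S = 3.828e33   # solar luminosity in erg/s
KENNICUTT = 4.5e-44      # Kennicutt (1998): Msun/yr per erg/s of FIR luminosity

def sfr_from_lfir(l_fir_solar):
    """Star formation rate (Msun/yr) from FIR luminosity in solar units."""
    return KENNICUTT * l_fir_solar * L_SUN_ERG_S

sfr = sfr_from_lfir(1.0e13)  # the Baby Boom Galaxy's estimated L_IR
print(f"SFR ~ {sfr:.0f} Msun/yr")
```

The result, roughly 1700 M_⊙/yr, sits comfortably above the >1000 M_⊙/yr quoted in the abstract.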
Significance to Solar System Exploration
Our Milky Way galaxy produces only about 10 new stars annually. But a galaxy far, far away is experiencing a major baby boom. It is producing more than 4,000 new stars a year, and should become a massive elliptical galaxy. The discovery challenges the accepted model for galaxy formation, which has most galaxies slowly bulking up by absorbing pieces of other galaxies, rather than growing internally. The discovery also fundamentally changes the way astronomers view the origins of stars in our solar system and beyond.
Last Updated: 2 February 2011
Coccoliths in the Celtic Sea
As the basis of the marine foodchain, phytoplankton are important indicators of change in the oceans. These marine flora also extract carbon dioxide from the atmosphere for use in photosynthesis, and play an important role in global climate. Phytoplankton blooms that occur near the surface are readily visible from space, enabling a global estimation of the presence of chlorophyll and other pigments. There are more than 5,000 different species of phytoplankton, however, and it is not always possible to identify the type of phytoplankton present using space-based remote sensing.
Coccolithophores, however, are a group of phytoplankton that are identifiable from space. These microscopic plants armor themselves with external plates of calcium carbonate. The plates, or coccoliths, give the ocean a milky white or turquoise appearance during intense blooms. The long-term flux of coccoliths to the ocean floor is the main process responsible for the formation of chalk and limestone.
This image is a natural-color view of the Celtic Sea and English Channel regions, and was acquired by the Multi-angle Imaging SpectroRadiometer's nadir (vertical-viewing) camera on June 4, 2001 during Terra orbit 7778. It represents an area of 380 kilometers x 445 kilometers, and includes portions of southwestern England and northwestern France. The coccolithophore bloom in the lower left-hand corner usually occurs in the Celtic Sea for several weeks in summer. The coccoliths backscatter light from the water column to create a bright optical effect. Other algal and/or phytoplankton blooms can also be discerned along the coasts near Portsmouth, England and Granville, France.
At full resolution, evidence of human activity is also apparent in this image. White specks associated with ship wakes are present in the open water, and aircraft contrails are visible within the high cirrus clouds over the English Channel.
MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.
Image credit: NASA/GSFC/LaRC/JPL, MISR Team.
secondary effects of radiation
...similarly produced, can experience a variety of reactions even before neutralization occurs. Such an ion may fragment all by itself, or it may react with a neutral molecule in what is called an ion–molecule reaction. In either case new chemical species are created. These transformed ions and radicals, as well as the electrons, parent ions, and excited states, are capable of reacting...
study of mass spectrometry
Owing to the poor vacuums available prior to the contributions of Gaede and Langmuir (see above), this subject was forced on the attention of early experimenters. They observed masses of 3 and 19, which could not have been produced by simple ionization and which arise from the following reactions, respectively:
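The mass-3 and mass-19 ions observed in early mass spectrometers are classically attributed to H3+ (formed via H2+ + H2 → H3+ + H) and H3O+ (formed via H2O+ + H2O → H3O+ + OH). A quick nominal-mass bookkeeping check, using integer atomic masses purely for illustration, confirms the arithmetic:

```python
NOMINAL_MASS = {"H": 1, "O": 16}  # integer (nominal) atomic masses

def ion_mass(formula):
    """Nominal mass of a simple ion given as a dict of element counts."""
    return sum(NOMINAL_MASS[el] * n for el, n in formula.items())

h3_plus = ion_mass({"H": 3})           # product of H2+ + H2 -> H3+ + H
h3o_plus = ion_mass({"H": 3, "O": 1})  # product of H2O+ + H2O -> H3O+ + OH
print(h3_plus, h3o_plus)  # 3 19
```

Neither mass can arise from simple ionization of H2 (mass 2) or H2O (mass 18), which is why these peaks pointed early experimenters toward ion–molecule chemistry.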
High Science Inside the Belly of the Alps
Think of the Swiss Alps and images of majestic glaciers, skilled skiers and pastoral valleys come to mind. But deep within the crystalline rock of the Aar Massif in central Switzerland Toni Baer is a modern day explorer chasing facts for the future. Almost half a kilometre underground in a mountain at an altitude of 1730 metres, he monitors computers, drills bore holes and contacts scientists scattered across the globe.
Mr. Baer is the local manager of the Grimsel Test Site, an underground rock laboratory that is looking at ways to safely dispose of radioactive waste produced by Switzerland's five nuclear power plants. They generate 40% of the country's electricity.
Part of the investigations at the Grimsel rock lab is finding ways to package these highly radioactive wastes deep underground, so that it is isolated and contained for generations to come.
Grimsel is run by NAGRA, a body founded by utilities and the Swiss Government for radioactive waste disposal. Switzerland is one of about ten countries running rock laboratories. They're seeking the best answers they can get about future possibilities. Could radioactive gas escape from the mountain caves or is there any route through which water and food could become contaminated?
Getting to Grimsel is not easy in winter. Mr. Baer must take a cable car up into the Juhlistock Alps. He then enters a tunnelled-out cave and drives for about five minutes into the mountain itself before coming to a door. Behind the door the cave transforms. This middle-earth setting changes to become much like any other normal office (although possibly better equipped) with white walls and fluorescent lighting, toilets, kitchen and coffee machine. Beyond this floodlit reception, in cave laboratories, the experiments take place.
Simulating the Future
Real spent fuel is not used in the experiments at Grimsel. Its properties are simulated. For example, heaters are used to imitate the heat generated by the radioactive wastes, which could warm the rock to around 100 °C – a temperature hot enough to boil water. The "mock" waste is put in steel containers designed to last for 10,000 years, then placed in a deep tunnel, making the mountainous rock itself a barrier against the release of radioactive materials. The tunnel is then backfilled with concrete or bentonite, which is a special clay used for sealing repositories.
Part of the investigative work at Grimsel is to analyse how water, as the transport vehicle for waste, moves and interacts with the bentonite, rock and concrete interfaces.
In November 2003, Mr. Baer showed IAEA sponsored scholars from Eastern Europe and Asia how to drill boreholes into the rock and investigate how air, water and gases move through the mountain of rock that surrounds the Grimsel Test tunnel. This is important because the main way radioactive particles could seep out of a repository into the human environment is if they are transported by water. Data from the Grimsel experiments is relayed via computer back to scientists in the Swiss office or to collaborating universities and countries. At present, Japan, Spain, Germany, Taiwan, China, the United States, Czech Republic, France and Sweden are also involved in experiments at Grimsel.
Learning to Find the Right Sites
This year marks Mr. Baer's twentieth year working in the earth caverns. When he first started at Grimsel, it was thought that hard, crystalline granite rock (similar to that of the Juhlistock in which the Grimsel test site is excavated) would be the ideal barrier to enclose the waste. In unfractured rock, fluids typically travel only a few inches over hundreds, and even thousands, of years. But investigations showed that because granites are hard and brittle, they can "crack" or fracture as a result of geologic movements. Geological repositories therefore need to be built in rock zones that are free of major fractures through which water can travel.
Further in situ research in Switzerland showed that the low-permeability clay rock in Zurcher Weinland in the country's north might be suitable for a disposal site, says NAGRA Spokesperson, Mr. Heinz Sager. "The clay rock forms a tight sealing barrier that does not fracture and the area is tectonically stable," he said.
Around the world salt domes and volcanic tuff are among other formations being considered for repositories, says Mr. Malcolm Gray, an IAEA nuclear engineer overseeing the Centers of Excellence programme. "In general, it has been shown that by adapting the engineering approach to the different available geological formations, suitable underground waste disposal facilities can be designed," he said.
However, selecting a site does not just depend on geology and an appropriately engineered design. "Selecting a site is as much dependant on social factors as on technological choices," Mr. Gray said.
"Not in My Backyard"
While the experts generally agree that geological disposal will be safe, and countries such as the United States and Finland have decided to move forward with this option, in many countries, the public remains skeptical. A 2001 survey of Europeans about radioactive waste showed that just less than half of the 16,000 people interviewed did not think a safe method existed to dispose of this most hazardous radioactive waste.
As it has for other types of engineering projects, public opposition can and has slowed repository development in many countries. It is the reason why actual spent fuel is not used in the Grimsel experiments. Gaining public trust, acceptance and credibility when it comes to radioactive waste disposal still has a way to go.
"Those working in the nuclear industry must listen to the public's concerns, take them into account in their planning and present their case in a way that satisfies societal needs. Demonstrating that repositories can work – through rock laboratories such as Grimsel and others that are part of the IAEA network – can help to build public understanding that there is a safe solution to radioactive waste disposal," Mr. Gray said.
Scientists, engineers, policy and decision makers are beginning to recognize that transparency, communication and public involvement are critical for public acceptance and the political will needed to tackle radioactive waste disposal.
However, creating the conditions for public acceptance is not an easy task and the route to success is not clear. At an international conference on Issues and Trends in Radioactive Waste Management held by the Agency in December 2002, experts called for greater and better communication with stakeholders to build public acceptance. A December 2003 conference in Sweden at which IAEA Director General Mohamed ElBaradei spoke took a closer look at political and technical issues surrounding geological waste disposal.
In Switzerland, construction of a repository for spent fuel is not foreseen until well into this century. Right now, public attitudes support nuclear power, and Parliament has approved legislation permitting construction of new nuclear plants. Views on repositories there, however, are mixed. Swiss Ambassador Heinrich Reimann informed the IAEA General Conference in September 2003 that a referendum held in 2002 in a Swiss canton opposed exploratory tunnelling work as a first step toward a repository planned at Wellenberg for low and intermediate level radioactive wastes. The Government now is analysing documents on geological disposal of high–level waste and the construction, long–term safety and use of Opalinus Clay as a host rock for a repository.
Going Back to Nature
In an effort to sway a skeptical but curious public, Grimsel and other underground research labs have opened their doors to visitors. They offer underground tours of the rock labs to demonstrate how a repository can work.
Part of the problem says the head of NAGRA's Corporate Communications, Mr. Heinz Sager, is that nuclear waste is seen as somebody else's responsibility. "It's not like throwing out a drink container, where you see and touch the rubbish you throw away. Nuclear waste is invisible to people. They don't associate watching the TV or lighting a room with creating nuclear waste," he said.
Toni Baer does not need convincing. He has monitored underground experiments at Grimsel for two decades. He points to nature, as well as scientific testing, as a sign that geological repositories can work.
"Look at the crystals," Mr. Baer says. Near the Grimsel Test Site beautifully formed crystals stand perfectly preserved in a cave, kept intact for 15 million years and subjected to temperatures of 400 degrees. Nature's powers to preserve are also seen by the 30,000 year–old prehistoric cave paintings in Southern France. Or in Gabon, West Africa, when natural processes concentrated enough radioactive particles to cause an underground, non–explosive nuclear chain reaction. In two billion years the radioactive waste that was created has moved less than three metres.
"Look at Nature," Mr. Baer says. "Then you can understand it's possible."
Even as natural analogues show the possibilities, they raise technical and scientific considerations important to sound decisions about future waste repositories. The work being done in Switzerland and other countries is playing a key role toward the goal of safely isolating the world's radioactive waste. –– Kirstie Hansen, IAEA Division of Public Information
This chapter is focused on optical (i.e. visible and infrared) interferometry, with the comparison of this technique with the radio interferometry developed in the other chapters of this book in mind. The objective is to give some keys to understanding how optical interferometers work. I first present a short history of optical interferometry, followed by a census of interferometers in operation or under construction (Sect. 4.1). Section 4.2 discusses the common points and main differences between optical and radio interferometry at millimeter wavelengths. Then I describe how an optical interferometer works (Sect. 4.3) at the system level and at the signal detection level (Sect. 4.4). Finally, I present in Sect. 4.5 the main limitations that optical interferometry faces, like atmospheric turbulence and other sources of noise in the measured signal.
Stellar interferometry was first proposed by Fizeau in 1868. At that time, the phenomenon of light interference was already well studied, and physicists knew that the contrast of the fringes depends on the geometry of the source. [Fizeau 1868] suggests deducing the star diameter from the extinction of the fringe contrast with widely separated slits. [Stéphan 1873] installs a mask with two apertures on the 0.80-m telescope of the Marseille observatory to test Fizeau's method. He detects fringes, but the contrast of the fringes does not decrease with the aperture separation. He concludes that stars must be smaller than 0.158 arcsec.
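Stéphan's upper limit is close to what the classical resolution criterion predicts for his instrument. Assuming a wavelength of about 500 nm and the telescope's full 0.80 m diameter as the maximum slit separation (both assumptions, not stated in the text), the smallest resolvable angular diameter is θ ≈ 1.22 λ/B:

```python
import math

RAD_TO_ARCSEC = 180.0 / math.pi * 3600.0  # radians to arcseconds

def diffraction_limit_arcsec(wavelength_m, baseline_m):
    """Smallest resolvable angular diameter, theta ~ 1.22 * lambda / B."""
    return 1.22 * wavelength_m / baseline_m * RAD_TO_ARCSEC

theta = diffraction_limit_arcsec(500e-9, 0.80)
print(f"theta ~ {theta:.3f} arcsec")
```

The result, about 0.157 arcsec, matches the ~0.158 arcsec limit Stéphan quoted, and shows why single-telescope baselines were hopeless for stellar diameters: real stars are an order of magnitude smaller still.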
Following his own work on the measurement of light speed, Michelson seems to have independently discovered optical interferometry in the 1890's. In order to span a large range of baselines, he and Pease install a 20-foot metal beam above the 100-in telescope of Mount Wilson. Two mirrors inclined by 45 degrees send the light to the middle of the telescope where two other mirrors inject the light in the telescope. The interferometric fringes are formed at the focus of the telescope (see Fig. 4.1). By translating the outside mirrors, the baseline changes and therefore also the contrast of the fringes. In the 1920's, [Michelson & Pease 1921] measure the first diameters of stars, which require baselines longer than 3 m. To extend these first results, Pease builds a stand-alone interferometer on a 50-foot metal beam, but fails to get calibrated results because of the unexpected importance of mechanical flexures.
For almost 50 years, direct-detection interferometry stalled. [Hanbury Brown & Twiss 1956] invented intensity interferometry, which is limited to a small handful of bright sources. Interferometry had a new birth in the mid-1970's with the advent of new technology: detectors, actuators, servo-control, etc. [Labeyrie 1975] was the first to directly combine the light from two separated telescopes in the optical range. Since that time several interferometers with relatively small apertures have been built and operated. A list of current and future interferometers is given in Table 4.1 and commented in next section.
In 1988, the heterodyne technique used in the radio domain was first implemented in an operating optical interferometer by the Infrared Spatial Interferometer (ISI, [Danchi et al. 1988]).
| Acronym | Name | Telescopes | Diameter (m) | Max. baseline (m) |
|---|---|---|---|---|
| COAST | Cambridge Optical Aperture Synthesis Telescope | 5 | 0.4 | 20 |
| GI2T | Grand Interféromètre à 2 Télescopes | 2 | 1.52 | 65 |
| IOTA | Infrared & Optical Telescope Array | | 0.4 | 38 |
| ISI | Infrared Spatial Interferometer | | 1.65 | |
| NPOI | Navy Prototype Optical Interferometer | | | |
| PTI | Palomar Testbed Interferometer | 3 | 0.4 | 110 |
| SUSI | Sydney University Stellar Interferometer | 2 | 0.14 | 640 |
| CHARA | Center for High Angular Resolution Astrophysics | 6 | 1 | 350 |
| KI-main | Keck Interferometer main array | 2 | 10 | 60 |
| KI-outriggers | Keck Interferometer auxiliary array | 4 | 1.8 | 140 |
| VLTI/VIMA | Very Large Telescope Interferometer main array | 4 | 8 | 130 |
| VLTI/VISA | Very Large Telescope Interferometer sub-array | 3 | 1.8 | 200 |
| LBT | Large Binocular Telescope | 2 | 8.4 | 23 |

(upgrade in progress)
Current interferometers (see list in Table 4.1) are composed of relatively small telescopes, with diameters ranging from 15 centimeters to 1.5 meters. The number of telescopes used to combine the light is usually two, but two facilities routinely work with 3 or more apertures (COAST and NPOI). The maximum baseline length ranges from a few meters up to several hundred meters (e.g. SUSI). Almost every interferometer has its own beam combination scheme (see Sect. 4.4.1). They work either in the visible or the infrared domain.
Each interferometer has been designed with one main astrophysical objective: synthetic aperture imaging (COAST, UK), high resolution spectroscopy in the visible (GI2T, France), high accuracy measurement in the near-infrared (IOTA, USA), high resolution spectroscopy in the thermal infrared (ISI, USA), wide angle astrometry (NPOI, USA), narrow angle astrometry and phase reference (PTI, USA), stellar astrophysics in the visible (SUSI, Australia). CHARA (USA), which obtained its first fringes at the end of 1999, aims at binary observations and synthetic aperture imaging.
The new generation consists of interferometers with very large telescopes: the VLTI with 8-m telescopes, the Keck Interferometer with 10-m telescopes, and the LBT with two 8.4-m telescopes. Their main objective is the gain in flux sensitivity, which will allow for the first time the study of extra-galactic sources. Both the VLTI and the Keck Interferometer are supplemented by auxiliary 1.8-m telescopes, still larger than the largest apertures in the previous generation of interferometers.
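The sensitivity gain of the new generation can be quantified from collecting area alone: an 8-m aperture gathers (8/0.4)² = 400 times the light of a 0.4-m one, about 6.5 magnitudes. This is only a sketch that ignores throughput, detector, and atmospheric differences:

```python
import math

def mag_gain(d_new_m, d_old_m):
    """Magnitude gain from collecting area alone: 2.5 * log10((D_new/D_old)^2)."""
    return 2.5 * math.log10((d_new_m / d_old_m) ** 2)

gain = mag_gain(8.0, 0.4)  # an 8-m VLTI telescope vs. a 0.4-m class aperture
print(f"~{gain:.1f} magnitudes deeper")
```

Six and a half magnitudes is the difference between bright stars and faint active galactic nuclei, which is why the large-telescope interferometers opened extra-galactic targets for the first time.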
Highlights of our Work
Mechanical forces are everywhere in human life. Strong forces power machines and cars, our body's forces let us labor and move, soft forces are sensed through touch, even softer ones through hearing. Forces are also ubiquitous in the living cell, driving its molecular machines and motors as well as signaling ongoing action in its surroundings. Man-made, force-bearing machines rely on extremely strong materials not found in the cell. How can the cell bear substantial forces? Also, how do cells sense extremely weak forces, as in hearing, surpassing most microphones? Single molecule measurements, reviewed in a recent issue of Science, begin to answer these questions, offering information on biomolecules' mechanical responses and action. However, the information offered by these measurements is not enough to relate biomolecular function to biomolecular architecture. Biomolecules in cells can move in amazing ways, but we did not know why. As one contribution in the Science issue demonstrates, computational modeling comes to the rescue. It can simulate the measurements and, in doing so, can reveal the physical mechanisms underlying cellular mechanics at the atomic level. Insofar as observed data are available, the simulations show impressive agreement with actual measurements. While initially only following experiments or, at best, guiding experiments, modeling has now advanced further and, through simulated measurements, discovered on its own entirely novel mechanical properties that were later verified by experimental measurements. Experimentalists reacted to the new competition and began to do simulations themselves. More here.
One of the goals of this column is to make it more clear what astronomers actually do all day (or night) long. As I have been discussing, one of the things that I do frequently is writing proposals to convince other astronomers to let me use their telescopes. The once-a-year proposals for use of the Hubble Space Telescope were due on Friday at 5pm (I pressed the "send" button at 4:49pm). Here is the proposal that I submitted. While it is written for other astronomers, and so often flies into astronomical shorthand, I think it is at least moderately readable by anyone generally interested in what is going on in the outer part of the solar system. Plus, there is no chance I could possibly attempt to write anything else this week. In that spirit, here is:
A compositional-dynamical survey of the Kuiper belt: a new window on the formation of the outer solar system.
The eight planets overwhelmingly dominate the solar system by mass, but their small numbers, coupled with their stochastic pasts, make it impossible to construct a unique formation history from the dynamical or compositional characteristics of them alone. In contrast, the huge numbers of small bodies scattered throughout and even beyond the planets, while insignificant by mass, provide an almost unlimited number of probes of the statistical conditions, history, and interactions in the solar system. Studies of these small bodies have been exploited for many years in the inner part of the solar system, where combined dynamical and compositional observations of asteroids have been used to trace chemical gradients, study early radioactivity, and detect and analyze collisional histories in the region of the terrestrial planets (Bottke et al. 2005 and references therein). While a similar study of the Kuiper belt would offer similar promise for understanding the formation of the region of the giant planets, the typical objects in the Kuiper belt are 10,000 times fainter than those in the asteroid belt, so this promise has been hampered by the difficulty of obtaining concrete observations of the surface compositions of these objects.
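The "10,000 times fainter" figure follows directly from reflected-light scaling: for bodies of comparable size and albedo, received flux scales roughly as 1/(r²Δ²) ≈ 1/r⁴. With illustrative distances of 3 AU for a main-belt asteroid and 30 AU for a KBO (my round numbers, not the proposal's), the ratio is exactly 10⁴:

```python
def reflected_flux_ratio(r_near_au, r_far_au):
    """Approximate brightness ratio of two equal-size, equal-albedo bodies
    in reflected sunlight, taking heliocentric ~ geocentric distance so
    that flux ~ 1 / r**4."""
    return (r_far_au / r_near_au) ** 4

ratio = reflected_flux_ratio(3.0, 30.0)  # main-belt asteroid vs. KBO
print(f"the KBO is ~{ratio:,.0f} times fainter")
```

The steep fourth-power dependence is the whole observational problem: sunlight must travel out to the body and back, so each factor of ten in distance costs four orders of magnitude in brightness.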
Instead, attempts to understand the formation and evolution of the Kuiper belt have largely been dynamical simulations where a hypothesized starting condition is evolved under the gravitational influence of the early giant planets and an attempt is made to reproduce the current observed populations (Levison and Morbidelli 2003, Tsiganis et al. 2005, Charnoz and Morbidelli 2007, Lykawka and Mukai 2008). With little compositional information known for the real Kuiper belt, the test particles in the simulation are free to have any formation location and history as long as they end at the correct point. Allowing compositional information to guide and constrain these studies would add an entirely new dimension to our understanding of the formation and evolution of the outer solar system.
New visible-infrared capabilities of WFC3 allow such compositional information of a large number of Kuiper belt objects to be obtained for the first time. Here we propose to exploit these capabilities to perform the first ever large-scale dynamical-compositional study of Kuiper belt objects (KBOs) and their progeny to study the chemical, dynamical, and collisional history of the region of the giant planets.
Kuiper belt compositions: the current view.
Combining compositional and dynamical information on small bodies has proved a powerful technique in the inner solar system for understanding the formation of the terrestrial planetary region, but it has only been used to a very limited extent in the outer solar system. Color diversity. The earliest attempts to jointly consider outer solar system compositions and dynamics were made using only the colors of KBOs. While colors are a poor proxy for composition, they have proved a fascinating early tracer of the dynamical homogeneity -- and lack thereof -- of the Kuiper belt. The earliest photometric observations (Jewitt and Luu 1998) suggested that KBOs came in a wide variety of colors and that there was no relationship between the color and any orbital or physical parameter of the object. To date, this great heterogeneity remains unexplained, though it clearly points to a wide diversity of formation or evolutionary histories throughout the Kuiper belt.
The cold classical KBOs.
Subsequent observations of colors of larger numbers of KBOs eventually showed that one dynamical subset of the Kuiper belt, the ``cold classical KBOs'' on dynamically cold low inclination and eccentricity orbits, consists exclusively of objects that are red (Tegler and Romanishin 2000, Trujillo and Brown 2002). While the color red is impossible to interpret compositionally without more spectral information, the existence of this red grouping has been used to argue that the cold classicals are a unique population whose dynamical coherence has been maintained through the dramatic evolution of the outer solar system (Morbidelli and Brown 2004). The need to retain this group of objects is one of the key constraints on -- and sometimes the death of -- models of the evolution of the outer solar system and is the earliest example of the power of combining (even limited) compositional information with small body dynamics.
While color groupings have proved interesting for helping to understand the evolution and rearrangement of the outer solar system, the actual cause for the different colors remains unknown. Infrared spectroscopy would allow a direct probe of the surface ices common in the outer solar system, but for many years few infrared spectra were available, as few KBOs were bright enough for even the lowest resolution spectroscopy with the largest telescopes. This difficulty was partially alleviated by our wide field search for the largest KBOs (Brown 2008), which finally provided a moderate number of bright observable objects, and by long term programs at VLT and Keck that slowly obtained spectra of the very brightest of these (e.g. Barucci et al. 2006, Barkume et al. 2008). The most systematic survey to date is our Keck survey (Barkume et al. 2008), which obtained 1.5 to 2.5 micron spectra of 45 objects in the outer solar system. Three results from this small sample provide examples and details of what could be expected from a much larger survey.
Fragments from a giant primordial collision.
One small set of KBOs stood out in the Keck survey for their unique spectra (Fig 1a). This collection of objects has surfaces that look like laboratory spectra of pure uncontaminated water ice. Moreover, all of these pure water ice objects have nearly identical orbits (Fig. 2a), and the largest of them, the nearly Pluto-sized 2003 EL61, had previously been speculated to have suffered a giant impact at some point in its past which gave it its rapid spin and system of at least two moons (Brown et al. 2006). The compositional and dynamical association of the water ice objects with 2003 EL61 itself made it clear that the small set of pure water ice objects were fragments of the giant impact that had shaped 2003 EL61 (Brown et al. 2007). This impact is the largest anywhere in the solar system for which we have multiple extant fragments identified, providing a unique laboratory into the types of massive collisions which shaped the solar system.
It is expected that the 2003 EL61 impact occurred during the time of solar system clearing when the Kuiper belt was significantly more dense than its current state. A model by Levison et al. (2008) suggests that the impact actually occurred between two objects which were themselves in the process of being scattered out of the solar system. As would be expected from a collision of objects that were on unstable orbits, some of the 2003 EL61 family is itself in an unstable region of space. In Ragozzine and Brown (2007), we exploited these instabilities to develop a dynamical chronometer to use the current spread in orbital elements of 2003 EL61 fragments to determine the time of the 2003 EL61 impact. To date, with the small number of family members known, we can only place a lower limit of 1 Gyr on the age. But with more objects discovered we will be able to more precisely date this impact, and thus date the time of solar system clearing. While almost all models to date assume that major clearing occurred 4.5 Gyr ago, the new and to date quite successful Nice model (Tsiganis et al. 2005 and papers following) posits that solar system clearing was delayed by ~1 Gyr and did not largely occur until the time of the Late Heavy Bombardment. The study of the dynamics of this compositionally unique set of objects could answer one of the most important questions about the timing of major events in the outer solar system.
The methane giants.
Schaller and Brown (2007) suggested that a small number of the largest and coldest objects should have enough surface gravity to maintain their volatiles against loss to space over the age of the solar system. In their model, the final loss to space is controlled by the slow leakage of Jeans escape from a vapor-pressure controlled atmosphere. The loss is an intimate function of the object size and of the precise orbit. The results of the model predictions to date have been nearly perfect: almost everything that the model suggests should have volatiles on the surface (predominantly methane; Fig 1a) does, and nothing that the model suggests shouldn't have volatiles has been found to have volatiles. This success opens the possibility of being able to find outliers with unusual dynamical or compositional histories by finding objects whose predictions don't fit within the framework of the model. Indeed, the one object which doesn't fit the model prediction is 2003 EL61, the giant parent of the collisional family. We presume that the impact took away most of the volatiles on the outermost fragments, but, more importantly, even if we had known nothing else about 2003 EL61, its failure to have a predictable surface composition would have quickly drawn attention to it.
Overall spectral diversity.
Once the 2003 EL61 family is removed from the spectral sample, no apparent compositional-dynamical correlation or pattern is seen in the remaining 40 objects (Fig 2a). While the compositions of asteroids are strongly stratified as a function of heliocentric distance, the KBOs have no such stratification. Just as objects with different optical colors are jumbled throughout the Kuiper belt, so are objects with different infrared spectra. Unlike the asteroid belt, however, where compositional differences are glaring and distinct, in the Kuiper belt, the spectrum of almost every KBO fits along a smooth continuum with the only differences being the amount of absorption due to water ice and the optical color (Fig 1b). While initially unexpected, the lack of other significant surface ices is now understood as a natural consequence of thermal escape of the more volatile ices (Schaller and Brown 2007).
Oddly, however, little correlation appears between the optical colors and the amount of water ice absorption (Fig 1b), conflicting with the commonly held conceptual view that KBO surfaces are a simple mix between red colors due to irradiated organics and blue colors due to fresh water ice exposed by collision (Jewitt and Luu 1998) or that KBOs can be compositionally classified by optical colors alone (Barucci et al. 2006).
While the cause of this continuous diversity is unknown, the broad possibilities are limited: the surfaces can reflect either primordial differences in the objects, subsequent evolution of the objects, or both. Primordial differences would likely reflect formation location, while evolution could reflect both thermal and collisional history.
Whatever the cause of the surface composition variability, understanding the reason would allow significant new insights into the evolution of the outer solar system. If the variations are primarily primordial, we could use KBO composition to reconstruct the initial locations of the objects that are now jumbled in the Kuiper belt, while if the variations are evolutionary we will be able to use compositions to reconstruct collisional or thermal histories of different regions of the Kuiper belt. In either case, with the current small number of objects known it is impossible to determine the cause of the variability, but the promise for this potential tool is strong.
The proposal continues on for another few pages, describing precisely how we want to use the Hubble Space Telescope to answer some of these questions that we had set out here. | <urn:uuid:ca38845b-a1bc-4211-b0b6-9929b6046a35> | 2.890625 | 2,549 | Personal Blog | Science & Tech. | 30.514247 |
The Hubble Space Telescope has inadvertently caught 14 runaway stars speeding through dense interstellar gas. The discovery may shed light on whether the turbulence they create could prevent surrounding gas from collapsing into new stars.
Astronomers led by Raghvendra Sahai of NASA's Jet Propulsion Laboratory had been searching for ageing, bloated stars with Hubble's Advanced Camera for Surveys in 2005 and 2006 - before the instrument failed permanently in 2007.
But when the researchers studied the images, they noticed 14 young stars that were shooting through interstellar gas, creating 'bow shocks' in front of them that resemble the water waves created at the bow of a speeding boat. The bow shocks form where particles streaming from the stars in stellar 'winds' plough into surrounding gas.
"When I first saw the images, I said, 'Wow, this is like a bullet speeding through the interstellar medium,'" Sahai said in a statement.
Similar bow shocks had been observed in the 1980s by the Infrared Astronomical Satellite. But those bow shocks were much larger than the ones observed by Hubble, suggesting they were produced by more massive stars with more powerful stellar winds.
"The stars in our study are likely the lower-mass and/or lower-speed counterparts to the massive stars with bow shocks detected by IRAS," says Sahai. He adds that low-mass stars outnumber their higher-mass counterparts, suggesting the newly found stars represent most of the universe's stellar runaways.
The stars' winds suggest they are just millions of years old. And their bow shocks suggest they are travelling through the interstellar gas at more than 180,000 kilometres per hour - about five times as fast as most young stars.
What accelerated them to such speeds? One possibility is that the stars began their lives in pairs, but got boosted to high speeds when their partner exploded in a supernova.
Alternatively, the stars may have been involved in a gravitational run-in with two or three other stars and got kicked out in the process. If they are just a million years or so old and are moving at about 180,000 km/h, they must have travelled about 160 light years from their birthplace.
Stellar birth control?
The team plans to search for more such runaway stars and will also continue to scrutinise the existing Hubble observations to see if the stellar speedsters have much of an effect on the gas clouds they are travelling through, since turbulence can prevent gas clouds from condensing into new stars.
"One of the questions that these very showy encounters raise is what effect they have on the clouds," said team member Mark Morris of the University of California, Los Angeles, in a statement. "Is it an insignificant flash in the pan, or do the strong winds from these stars stir up the clouds and thereby slow down their evolution toward forming another generation of stars?"
The research was presented on Wednesday at a meeting of the American Astronomical Society in Long Beach, California.
Have your say
Thu Jan 08 16:58:55 GMT 2009 by Sas
If Two Of Them Make A Black Hoole..ARe We Close Enough To Be Sucked In???
Runaway Stars Carve Eerie Cosmic Sculptures
Sat Jan 24 15:32:44 GMT 2009 by Klar Stempien
"One of the questions that these very showy encounters raise is what effect they have on the clouds,"....
This is 180 degrees out of phase. It's the cloud (plasma) that controls the star. Again, realize that gravity is overpowered by 10^36 times!
I Like To Look Outside At Night For The Bright Star And It Was Looking Like A Falling Star
Fri Jul 17 08:15:00 BST 2009 by zuleyma a. lazaro
i like to look outside at night for the bright star and it was looking like a falling star.i thought i saw a star ... not sure what kind it is, but it is nice. It moves really fast and looks like a falling fire work...it is really bright star and it happen around 11:45pm- 12:something and yea please get back at me b/c i would really like to now what kind it is...
Do we have a little Neanderthal in us? That's not a reference to your behaviour at the end-of-year office party, but to the genes of our extinct cousins. With the imminent publication of the genome sequence of Homo neanderthalensis, that question may finally be answered.
So far no one has uncovered evidence of any cross-species romps - at least none that left a trace in our DNA. The 3-billion-nucleotide Neanderthal genome is our best chance yet of finding out.
Whether they did or didn't will make the headlines next year, but the importance of the Neanderthal genome reaches much further. For a start, any sign of interbreeding will force us to rethink our place among our ancestors. The researchers working on the genome have already discovered some details of the hominin's nature: a few individuals were pale-skinned redheads; others couldn't taste bitter vegetables; they may have spoken a ...
What happens in your brain when you get lost or forget something? Johns Hopkins University neuroscientist Amy Shelton believes she can find the answer. With funding from the National Science Foundation, she's testing human spatial recognition. Study subjects learn and recall their way around a virtual maze while an MRI scans their brains. By analyzing MRI images of blood flow in the human brain, Shelton can get a picture of how the brain learns and recalls the spatial world outside the body. By understanding those processes, she believes she can develop techniques that will help improve human memory.
This is an episode from Science Nation, NSF's online magazine that's all about science for the people.
When some stars enter their final stage of life they throw out material into space in a supernova. Here's a representation of how they work.
A supernova is an explosion of a massive supergiant star. It may shine with the brightness of 10 billion suns! The total energy output may be 10^44 joules, as much as the total output of the sun ...
A quick journey through the different stages in a star's life, from nebula to supernova.
The supernova which produced the Crab Nebula was observed by the Chinese in 1054 AD. It is positioned in the constellation Taurus and is about 1 kiloparsec away from Earth. The nebula has been a ...
Brief introduction from NASA on the nature of neutron stars, the most dense objects known, created in the cores of massive stars during supernova explosions. Includes links to related topics. | <urn:uuid:3402292b-1f8c-4de4-98f8-613a741b8816> | 3.078125 | 279 | Content Listing | Science & Tech. | 64.54 |
The iText API provides support for drawing lines in PDF files. This example shows how to draw single and multiple lines under text in a PDF, including colored lines.
In this example we create a Chunk object and set its underline using the
setUnderline(float thickness, float yPosition) method. To start a
new line we use Chunk.NEWLINE. We also create colored lines.
The following methods are used to create multiple and colored lines:
Chunk.NEWLINE: this constant is used to start a new line.
Chunk.setUnderline(Color color, float thickness, float thicknessMul,
float yPosition, float yPositionMul, int cap):
This method is used to create a colored line. The thickness parameter sets the line's thickness, and yPosition sets its vertical position relative to the text: increasing the value moves the line toward the top of the text, and decreasing it moves the line toward the bottom. You can understand this better by changing the value yourself. The thicknessMul parameter scales the line thickness with the font size, and yPositionMul likewise scales the line's vertical position with the font size. The integer cap value sets how the line is drawn at its beginning and ending points.
Projecting square cap: the stroke continues beyond the endpoint of the line. Its value is the integer 2 (PdfContentByte.LINE_CAP_PROJECTING_SQUARE).
Butt cap: the stroke is squared off at the endpoint of the line, with no projection beyond the end. Its value is the integer 0 (PdfContentByte.LINE_CAP_BUTT).
Round cap: a semicircular arc with a diameter equal to the line width is drawn around the endpoint and filled in. Its value is the integer 1 (PdfContentByte.LINE_CAP_ROUND).
The code of the program is given below:
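The original code listing did not survive extraction. The sketch below shows the calls described above, assuming iText 2.x (the com.lowagie.text packages) is on the classpath; the output file name and sample strings are invented for the example:

```java
import java.awt.Color;
import java.io.FileOutputStream;

import com.lowagie.text.Chunk;
import com.lowagie.text.Document;
import com.lowagie.text.Paragraph;
import com.lowagie.text.pdf.PdfContentByte;
import com.lowagie.text.pdf.PdfWriter;

public class UnderlineExample {
    public static void main(String[] args) throws Exception {
        Document document = new Document();
        PdfWriter.getInstance(document, new FileOutputStream("underline.pdf"));
        document.open();

        // Plain underline: 0.2 pt thick, 2 pt below the text baseline.
        Chunk single = new Chunk("Single underline");
        single.setUnderline(0.2f, -2f);

        // Colored line drawn 3 pt above the baseline (through the text),
        // 1 pt thick, with rounded end caps.
        Chunk colored = new Chunk("Colored line");
        colored.setUnderline(Color.RED, 1f, 0f, 3f, 0f,
                PdfContentByte.LINE_CAP_ROUND);

        Paragraph p = new Paragraph();
        p.add(single);
        p.add(Chunk.NEWLINE); // Chunk.NEWLINE starts a new line
        p.add(colored);
        document.add(p);
        document.close();
    }
}
```

Running the program writes underline.pdf with the two chunks on separate lines.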
Feb. 13, 2008: ESA's test centre is buzzing with activity and anticipation as it welcomes its latest guest. The gigantic telescope of ESA’s space-based infrared observatory, Herschel, is being prepared to be assembled with its spacecraft in the next few weeks.
Herschel’s telescope, which will carry the largest mirror ever flown in space, has just been delivered to ESA’s European Space Research and Technology Centre, ESTEC. Here, engineers and scientists are busy with the final steps that will prepare the infrared observatory for launch in late 2008.
The 3.5-m diameter technological marvel is made from 12 silicon-carbide petals brazed together to form a single structure and coated with a layer of reflective aluminium, forming a remarkably lightweight mirror. The fully-assembled telescope, which includes the primary mirror, the secondary mirror and its support structure, is a feathery 320 kg; remarkably low for such a sturdy structure capable of withstanding high launch loads and functioning precisely in the harsh environment of space.
It is this powerful telescope that will allow scientists to look deep into space, at long infrared wavelengths. Herschel’s spectral coverage, which ranges from far-infrared to sub-millimetre wavelengths, will be made available for space-based observations for the first time.
This will make it possible to observe and study relatively cool objects everywhere in the universe, from our own back yard to distant galaxies, teaching us much more about the birth and evolution of stars and galaxies.
The next step is testing the telescope's interface with the spacecraft. Additionally, the mirrors will be tested for optical and mechanical stability.
First, the solar array and sunshield will be integrated with the cryostat, which was mated with the service module in September last year. Once this is done, the telescope will be ready to be mated, completing the spacecraft. The spacecraft will then undergo extensive environmental and functional tests before being shipped to Kourou for the launch campaign.
In Focus: Carbon Capture Technology
Coal is bountiful, cheap, and accounts for about 50% of our current electricity generation. Worldwide, coal is the dominant source of power and is projected to increase even further as petroleum prices skyrocket.
However, coal is dirty. Along with ash, it is carbon rich and produces an extremely large amount of CO2 per kWh of electricity generated. This is because coal's hydrogen-to-carbon ratio is very low (about 1 H to 1 C), meaning more carbon per bond broken, and consequently more CO2.
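To make the "more carbon, more CO2" point concrete, the stoichiometry of complete combustion (C + O2 → CO2) fixes the CO2 mass at 44/12 times the carbon mass burned. The sketch below is illustrative only; the 70% carbon fraction is an assumed round number for bituminous coal, not a measured value:

```java
public class CoalCo2 {

    // Complete combustion: C + O2 -> CO2.
    // Molar masses: C = 12 g/mol, CO2 = 44 g/mol, so each kg of carbon
    // burned yields 44/12 ~ 3.67 kg of CO2.
    static double co2FromCarbonKg(double carbonKg) {
        return carbonKg * 44.0 / 12.0;
    }

    public static void main(String[] args) {
        double coalKg = 1.0;
        double carbonFraction = 0.7; // assumed carbon content of bituminous coal
        double co2Kg = co2FromCarbonKg(coalKg * carbonFraction);
        // About 2.6 kg of CO2 per kg of coal under these assumptions.
        System.out.printf("%.1f kg coal -> about %.2f kg CO2%n", coalKg, co2Kg);
    }
}
```

Because roughly all of coal's heating value comes from carbon bonds, this per-kilogram CO2 burden is what drives coal's high CO2-per-kWh figure relative to natural gas.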
In an age where concern over greenhouse gases seems to run counter to free market ideology, what can be done to make coal energy more appealing?
The answer lies with carbon capture technology.
Carbon capture, or carbon scrubbing, is the technology that goes into cleaning, filtering, and storing the excess CO2 generated by coal-fired power plants.
Already, there are examples of carbon capture that we did not even build. Every plant and algae species is effectively a terrestrial carbon capture machine, taking CO2 from the air and turning it into stored, organic carbon.
However, with the increase in airborne CO2, clearly more than trees and algae alone are necessary to rectify our CO2 imbalance.
One method of carbon capture is to inject effluent CO2 into underground caverns. According to a USGS study, close to 40% of the coal-fired power plants within the U.S. lie directly above potential geological CO2 storage caverns. This could provide areas where we can inject CO2 underground rather than allow it to enter the atmosphere, making 'dirty coal' into 'economical coal' and a far more attractive option than it is today.
Other carbon capture technology seeks to pump CO2 into existing geological cavities left by depleted oil wells. Yet other technology seeks to crystallize the CO2 and use the resulting mineral as building material.
As it stands now, CO2 sequestration technology is not yet commercially viable, largely due to the amount of energy required to pump the gas underground. Research is ongoing into how to perform sequestration more efficiently.
This research falls into precombustion and postcombustion categories.
Precombustion methods alter the fuel source before it is ignited. Often, this is a gasification process (see gasification article).
Postcombustion methods deal with the CO2 after the fuel source has been ignited. These include passing the CO2 through membrane filters, allowing the CO2 to be biologically metabolized, using the CO2 in fertilizer production, using CO2 in catalytic processes, or adding CO2 to landfills to accelerate the carbon cycle.
Existing CO2 sequestration projects include:
Archer Daniels Midland (IL)
Leucadia Energy, LLC (NY)
The cutting edge of carbon sequestration technology includes using the CO2 for anything from biofuel to concrete production. These include large industrial projects such as Phycal, LLC.
A new article highlights the really cool work that has come from Ken Smith and Henry Ruhl. Ken’s lab has been monitoring a deep-sea abyssal site called Station M, located off the southern California coast, for the last 20+ years. Their work has led to many major findings. One of the most important is that deep-sea patterns and processes are intrinsically linked to surface production over short ecological timescales. Thus El Nino/La Nina cycles and other such meteorological/oceanic events leave a deep-sea signature. You can find a nice list of the lab's publications here.
You can read more about this research and the other things Henry is involved with in the following posts
Climate Induced Collapse of Deep-Sea Systems
25 Things You Should Know About the Deep Sea: The Deep Sea is Not Stable
25 Things You Should Know About the Deep Sea: Patterns and Processes in the Deep Sea Are Linked to Surface Production
Science and Industry Colloboration in Deep-Sea Research | <urn:uuid:e5ed75e6-5eb5-4397-b37f-ee45bfed4753> | 3.203125 | 208 | Personal Blog | Science & Tech. | 44.087253 |
Sulochanan, Bindu and Korabu, Laxman Shankar (2009) Enhalus acoroides (L.f.) Royle fruits observed in Gulf of Mannar. Marine Fisheries Information Service, Technical and Extension Series, 200. pp. 19-21.
Seagrasses are angiospermous plants adapted to grow in the marine environment. Seagrass meadows are nursery grounds for many commercially important shrimps, crabs and fishes. Their root mats add stability to the sediments of the coastal zone, and the leaves help filter suspended particles from the water. There are 13 genera and about 52 species of seagrass distributed throughout the world.
Uncontrolled Keywords: Seagrass; Gulf of Mannar; Enhalus acoroides
How to add comments to code
Adding comments to code is an important part of writing Java programs. Though it is recommended to add as many comments as possible to your code, there are some best practices which, if followed, make code commenting useful not only for the developer himself but also for fellow developers.
In this tutorial, I will list some tips and guidelines for writing code comments in a Java program:
1) Add a comment at the end of each block so that it is clear where each code block ends.
2) Format the comments so that they are properly aligned and break to next line if exceed the viewable area. This helps in getting rid of extra scrolling just to view the comments.
3) Use proper language and avoid frustrating comments like:
a) //Finally got it
c) //you can't escape from me
4) Comment out the code which could be reused in future but could not be implemented this time.
5) Focus on the "why" part of the code and explain the need for the code you are going to write.
6) Usually the bug id is also added when you are working on an enhancement or bug fix, so that other developers can understand the need for the code.
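The guidelines above can be illustrated in a few lines. The class name and bug id below are invented for the example; the point is the end-of-block comments (tip 1), the "why" comment (tip 5), and the bug id reference (tip 6):

```java
public class InterestCalc {

    /**
     * Returns the balance after {@code years} of annual compound interest.
     */
    static double balanceAfter(double principal, double rate, int years) {
        double balance = principal;
        for (int i = 0; i < years; i++) {
            // Why: interest is compounded once per year. BUG-1042 (hypothetical):
            // the earlier version compounded monthly and overstated the result.
            balance *= 1.0 + rate;
        } // end for (years)
        return balance;
    } // end balanceAfter

    public static void main(String[] args) {
        System.out.printf("%.2f%n", balanceAfter(100.0, 0.05, 2));
    }
} // end class InterestCalc
```

With the values in main, this prints 110.25 (100 × 1.05²). Note that the comments explain intent and history, not what the `*=` line already says.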
6.4. What Are the Spokes?
One topic of especial interest in the Cartwheel is the spokes which so perplexed Zwicky in 1941. Spokes in ring galaxies are rare. Indeed, the Cartwheel spokes are unique among the known collisional ring galaxies. AM0644-741 and II Hz 4 have spiral-like arms interior to the ring, but there are few of them and they are less flocculent in appearance. It is possible that these latter features are the direct result of a low order spiral mode driven by a slightly asymmetric collision. Similarly, the Cartwheel spokes might be the result of a higher order mode in the collisional perturbation, but their morphology makes it seem less likely. We favor the alternative hypothesis that they are the result of internal gravitational instabilities following the impact, and if so, they may provide sensitive constraints on that process. Several of the simulation studies referenced above provide input on this question, including especially Hernquist and Weil (1993), Struck-Marcell and Higdon (1993), and Gerber (1993).
Let us consider the wave-triggered instability hypothesis in a little more detail. Before the collision, assume that the surface density in the disk is below, but near the threshold value for axisymmetric gravitational instability (e.g. Binney and Tremaine 1987, Section 5.3). The disk already knows the approximate instability scale, because the wavelength of the fastest growing mode is determined only by parameters intrinsic to the target galaxy. Finite amplitude stochastic fluctuations can generate flocculent spirals on roughly this scale. After the collision, wave compression increases the surface density, pushing it over the critical value for instability. Unstable growth occurs in the rings, with the flocculent spirals or giant clouds serving as seeds. The simulations of Struck-Marcell and Higdon show that there need not be a one-to-one correspondence between seeds and spokes. Some seeds may merge in the ring, but the number of seeds and spokes (and star-forming knots in the ring) should be roughly the same since they are formed by the same instability. In the rarefaction behind the compression wave high density regions are stretched into spokes. Clearly these high density regions must not become so tightly bound in the wave compression that the shear and expansion are unable to stretch them apart. At the same time we assume that the wave compression is sufficient to lead to gravitational collapse and star formation on smaller scales within these proto-spokes. Some flocculent seeds may consist of such loose agglomerations that they do not grow significantly in the compression and are subsequently disrupted in the rarefaction.
This hypothesis has many interesting consequences. Firstly, no spokes are predicted if the perturbation is small and the pre-collision disk is well below the gravitational instability threshold, e.g., if the gas surface density is very low. However, the mass of the companion galaxy in most ring systems is usually fairly substantial compared with the target galaxy, if the optical or IR luminosity of the companion is any guide. Thus, even if the precollisional gas was quite stable, we might expect a substantial perturbation in most targets, which would be sufficient to push the gas density over the instability threshold. In this case a second factor enters, namely, the instability growth time relative to the compression time. The former is a local quantity, essentially the local free-fall time. The latter is one half of the local epicyclic period, which depends on the global gravitational potential, and so the compression time depends on global structure. Therefore, spoke formation depends on both local and global parameters.
One possible explanation for the apparent rarity of spokes is that most spokes may generally be hard to recognize. A reasonable estimate for the scale of spokes is given by $\lambda_{\rm max}$, the maximum unstable wavelength. According to the linear perturbation theory of the axisymmetric gravitational instability (see Binney and Tremaine (1987), Section 6.2),

$$\lambda_{\rm max} = \frac{4\pi^2 G \Sigma}{\kappa^2},$$

where $\Sigma$ is the gas surface density and $\kappa$ is the local epicyclic frequency. The number of spokes can be estimated as the ring circumference divided by this scale,

$$N_{\rm spoke} = \frac{2\pi r}{\lambda_{\rm max}} = \frac{\kappa^2 r}{2\pi G \Sigma}.$$
If this number is large, we might expect many small spoke segments, which would be poorly resolved on most ground-based CCD images presently available. Moreover, since each such "spokelet" only contains a small fraction of the gas in the disk, the enhancement of star formation within the spokelet may be modest. The brightness contrast between spokelet and inter-spoke regions should be small. Thus, relatively bright spokes, stretching between two rings, or between an outer ring and the nuclear regions, may only form when $N_{\rm spoke} \lesssim 2$. (Note: this criterion is closely related to the dependence of the swing amplification on the parameter $X = \lambda / \lambda_{\rm crit}$, see Toomre 1981). Within the next few years it should be possible to obtain observations of sufficiently high resolution to test these ideas.
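As an order-of-magnitude illustration of these two estimates (the fiducial numbers below are round-value assumptions, not measurements of the Cartwheel):

```latex
% Fiducial values: Sigma = 10 Msun pc^-2, kappa = 30 km/s/kpc, r = 10 kpc,
% with G = 4.30e-3 pc Msun^-1 (km/s)^2.
\lambda_{\rm max} = \frac{4\pi^2 G \Sigma}{\kappa^2}
  \approx 1.9\,{\rm kpc}
  \left(\frac{\Sigma}{10\,M_\odot\,{\rm pc}^{-2}}\right)
  \left(\frac{\kappa}{30\,{\rm km\,s^{-1}\,kpc^{-1}}}\right)^{-2}

N_{\rm spoke} = \frac{2\pi r}{\lambda_{\rm max}}
  \approx 33
  \left(\frac{r}{10\,{\rm kpc}}\right)
  \left(\frac{\Sigma}{10\,M_\odot\,{\rm pc}^{-2}}\right)^{-1}
  \left(\frac{\kappa}{30\,{\rm km\,s^{-1}\,kpc^{-1}}}\right)^{2}
```

For these assumed values $N_{\rm spoke}$ is large, i.e. the wave would fragment into many faint spokelets rather than a few bright spokes; only a higher gas surface density or a lower epicyclic frequency pushes $N_{\rm spoke}$ down toward the bright-spoke regime.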
In the meantime, we can return to the published simulations, and ask what information they provide about spokes. At first sight, the fully self-gravitational N-body/SPH studies seem to give contradictory results. In the simulations of Hernquist and Weil (1993) strong spokes form, while spokes do not form in the simulations of Gerber (1993), and Gerber, Lamb and Balsara (1994). Spokes, if present, are not strong in the N-body/gas particle simulations of Horellou and Combes (1994). Spokes also formed in the models of Struck-Marcell and Higdon (1993), in which self-gravity was computed only on small scales.
Another mysterious fact is that in all these simulations the precollision target disks were evidently set up near the instability threshold. This statement can be quantified through the use of Toomre's Q parameter for axisymmetric stability (see e.g., Kennicutt 1989). Gerber's simulations were initialized such that Q = 1.5 for the stars throughout the disk (where Q is defined as 0.3σ_rκ/(GΣ), and σ_r is the radial velocity dispersion). Hernquist and Weil (1993) quote a value of Q = 1.3 (apparently also for the stars) in their model. In Struck-Marcell and Higdon (1993), the value of Q is similarly about 2.0-3.0 throughout the disk. Thus, it would appear that these simulations, starting from similar initial conditions, produce different results.
In fact, there are significant differences in the halo and disk density distributions between the various models. Gerber's target rotation curve increases roughly linearly out to about 1.5 disk scale lengths, and then is quite flat (though it does turn over). Hernquist and Weil describe the rotation curve of their target disk as nearly flat out to a cutoff radius, rc. The rotation curve used by Struck-Marcell and Higdon was of the form v ∝ (r/ε)^(1/5), which is slowly rising at radii greater than the scale radius ε. These differences and those in the initial conditions account for different radial distributions of λ_max and N_spoke.
In Hernquist and Weil (1993), the value of N_spoke ranges from a few in the inner regions to greater than 10 in the outer disk. According to our calculation, in Gerber's disk N_spoke ≈ 11 at r = 1.0 (in computational units, or 0.8 disk scale lengths), and rises steadily at larger radii. Moreover, in Gerber's simulations the typical value of λ_max is only of the order of a couple of mesh lengths used in the PM calculation. These results are in accord with the discussion above, i.e., spokes are seen in the models of Hernquist and Weil because of the low value of N_spoke. Only small spokelets are expected in Gerber's simulations, and these may be smoothed since their size is comparable to the finite difference algorithm scale. In the models of Struck-Marcell and Higdon (1993), self-gravity is only computed over a small range of wavelengths, but the target disk was initialized such that λ_max falls within this range. Models with N_spoke of order of a few produced spokes much like those of Hernquist and Weil, though weaker. Models with larger values of N_spoke produced numerous spokelets (see Figure 8 of Struck-Marcell and Higdon 1993). Note also that heating and cooling effects have not been included in these "spoke simulations" (see Section 6.5).
In the figures of Hernquist and Weil, the spokes appear in the gas before the collision, and are then amplified by the ring passage. Their early appearance may be the result of a slightly lower effective Q than the other models, and greater intrinsic amplification because the most unstable mode is of relatively low order.
The story of the formation of the spokes is not yet a closed book. We have recently produced realistic looking spokes behind the second ring in a simulation which only produced rather weak small-scale spokelets (large Nspoke) behind the first ring (Struck, in preparation). This Cartwheel model is based on the premise that the far companion (Galaxy G3 in Higdon's nomenclature; see Section 3.3.1) was the intruder, and that the outer ring is the second, not the first ring. The computational details are as in Struck-Marcell and Higdon (1993), except that the intruder model also contains a gas disk, and heating and cooling processes were included in the model (Section 6.5). The companion mass was also lower (about 15% of the Cartwheel). Such models open up the possibility that the disk need not be on the edge of instability for spoke formation initially, but that two cycles of ring compressional amplification could do the job. This would help to explain the rarity of spokes, since only well evolved ring galaxies would develop them.
We remind the reader that, thus far, no gas has been found unambiguously associated with the spokes as might be expected from the models above and they still remain enigmatic. The discovery of dust lanes crossing the inner ring in the vicinity of the spokes (based on the HST observations of Figure 1) does, however, suggest that the detection of molecules in the spokes may simply be a matter of time and the need for greater sensitivity than is currently possible (see Section 3).
Can you explain what is happening and account for the values being
My average speed for the outward journey was 50 mph, and my return average speed
was 70 mph. Why wasn't my average speed for the round trip 60 mph?
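The resolution of the average-speed puzzle is that for equal distances the average is the harmonic mean of the two speeds, not the arithmetic mean — a quick check:

```python
def round_trip_average(v_out, v_back):
    """Average speed over two equal-distance legs: the harmonic mean."""
    return 2 * v_out * v_back / (v_out + v_back)

# 50 mph out, 70 mph back: more time is spent at the slower speed,
# so the average is pulled below 60 mph.
print(round_trip_average(50, 70))  # 58.333... mph
```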
An environment that simulates a protractor carrying a right-angled
triangle of unit hypotenuse.
A conveyor belt, with tins placed at regular intervals, is moving
at a steady rate towards a labelling machine. A gerbil starts from
the beginning of the belt and jumps from tin to tin.
Swimmers in opposite directions cross at 20m and at 30m from each
end of a swimming pool. How long is the pool ?
How far will a ball that falls for a million seconds travel?
Can you work out the natural time scale for the universe?
At what positions and speeds can the bomb be dropped to destroy the
How fast would you have to throw a ball upwards so that it would
Can you work out which of the equations models a bouncing bomb?
Will you be able to hit the target?
Follow in the steps of Newton and find the path that the earth
follows around the sun.
A simplified account of special relativity and the twins paradox.
Two sudokus in one. Challenge yourself to make the necessary
Applying Doppler Effect to Moving Galaxies
Overview: Students make the observation that farther galaxies move away faster, and check that a model of an expanding universe makes predictions that match with those observations.
Physical resources: Expanding universe model
Electronic resources: Virtual spectroscopy
Observations of moving galaxies:
- Motivating question: How can we use the idea of redshift to figure out the velocity of objects? Students brainstorm ideas with group.
- How do we know what was emitted? Introduce spectral lines as the photon we know must have been emitted with a certain energy, in our case, we'll look at line emission from hydrogen atoms.
- Virtual spectroscope activity: (MiniSpectroscopy)
- Examine Hydrogen spectrum at rest, predict how "example galaxy" is moving, relative to Earth. (Peak is at a longer wavelength, so it is moving away from us.)
- Give students only the spectra of galaxies A through D
- What direction are they moving? (away from Earth, because peak of emission is at a longer wavelength than it is when hydrogen is at rest)
- Challenge: put them in order by the speed (slowest to fastest) they are moving away from Earth.
- Now, give students the images of galaxies A through D
- What's different about these galaxies? (angular diameter)
- Given that most galaxies are about the same linear diameter, put them in order by their distance from Earth, closest to furthest.
- Students should describe the pattern in these observations, and put their description on the whiteboard. (The order is the same, further galaxies move away faster.)
- Instructor introduces Hubble's law as a restatement of this observation: Galaxies that are farther away move away from us faster.
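The redshift-to-velocity step that students brainstorm above can be sketched with the non-relativistic Doppler formula (the observed wavelength below is a hypothetical example; 656.3 nm is the rest wavelength of hydrogen's H-alpha line):

```python
C = 3.0e5  # speed of light, km/s

def recession_velocity(lambda_obs, lambda_rest):
    """Non-relativistic Doppler: v = c * (obs - rest) / rest.
    Positive result = longer observed wavelength = moving away."""
    return C * (lambda_obs - lambda_rest) / lambda_rest

# Hypothetical galaxy: H-alpha emitted at 656.3 nm, observed at 660.0 nm.
print(recession_velocity(660.0, 656.3))  # ~1691 km/s, receding
```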
Model of expanding universe, to explain Hubble's law observations above:
- Introduce two-dimensional "expanding universe" model which we've taken an image of at two different times
- Label "smaller" universe as time t = 0, and "larger" universe as time t = 10 seconds.
- Have groups of students "live" in galaxy A, B or C and have them make predictions of the following for each of the two other labeled galaxies, as well as another galaxy of their choice:
- Distance from your galaxy to other galaxy at time t = 0 seconds (cm)
- Distance from your galaxy to other galaxy at time t = 10 seconds (cm)
- Change in distance (cm)
- Change in time (sec, all should be 10 seconds)
- Speed = change in distance / change in time (cm / sec)
- Direction of motion (description, or arrow)
- Have students populate classroom prediction table
- Summarize important patterns seen in predictions: Galaxies at a greater distance move faster.
- Have students line up their "home galaxy" while holding both "universes" up to the light, and describe what has happened to all the other galaxies (they have moved away from the home galaxy, on a line connecting the home galaxy to the other galaxy.) Then have them switch their "Home galaxy" to the other two labeled galaxies, in turn. (All galaxies will see this pattern of all others moving away).
- Refined prediction: Galaxies at a greater distance move faster, and move away from each other along a line connecting the two. From any galaxy, all others look like they are moving away.
- These predictions match up with the observations we've made about actual galaxies in our universe, so we can't rule out the "expanding universe" model.
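The classroom prediction table can also be sketched in code; the stretch factor and starting distances below are hypothetical stand-ins for measurements on the two printed "universes":

```python
def predicted_speed(d0, scale_factor, dt):
    """Speed of another galaxy as seen from 'home': every distance grows
    by the same factor, so speed = d0 * (scale_factor - 1) / dt,
    i.e. proportional to the starting distance d0."""
    return d0 * (scale_factor - 1) / dt

# Toy universe stretched by 1.5x between t = 0 and t = 10 s (made-up numbers):
for d0 in (2.0, 4.0, 8.0):  # cm from the home galaxy at t = 0
    print(d0, "cm ->", predicted_speed(d0, 1.5, 10.0), "cm/s")
```

Doubling the starting distance doubles the predicted speed — exactly the "farther galaxies move away faster" pattern in the observations.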
- Some students have difficulty identifying what information they should extract from the spectra when comparing the sample of galaxies. They may think the intensity of the peak is what they should order by, instead of the location of the peak on the wavelength (energy) scale.
- Many students have difficulty separating the observations from the models in this activity. If so, clarify with the assessment question below.
- Can we determine redshifts for galaxies that do not have emission lines? (no, we must know the energy at which the photons were originally emitted).
- What if we observed every galaxy moving toward us, with further galaxies moving toward us faster? How would that change our model to explain the observations? (contracting universe).
- Which is a statement of the Doppler effect, and which is a statement of Hubble's Law?
- When we observe galaxies moving away from us, we receive lower energy photons compared to what that galaxy actually emits. (Doppler effect)
- Galaxies moving away from us faster are also further away (Hubble's Law).
- Galaxies moving towards us give us photons that are higher energy than when they were emitted (Doppler effect).
- Closer galaxies are moving away from us slower (Hubble's Law).
- Discuss what each deals with: Hubble's law relates speed to distance, and Doppler effect relates change in energy of photons to speed of motion.
- Image of review page of notes: (Hubble's law 2)
< return to Investigation 6
What to know
-Columns, groups or families:
how many valence electrons (electrons in the last energy level) are present
-Rows or periods: how many energy levels there are
e.g. If you are in group 3 you will have: 3 valence electrons
e.g. If you are in period 4 you will have: 4 energy levels
There are limits to the number of electrons that an energy level can hold. These are:
EL #1(floor) = 2e-
EL #2 (floor) = 8e-
EL #3 (floor) = 18e-
EL #4 (floor) = 32e-
Number (#): number of Protons (p+) and Electrons (e-)
Mass (mass number): Atomic mass − Atomic number = number of neutrons
How to do
a Bohr Model:
-Pick an element, in
this case Magnesium (Mg)
-Then draw a nucleus:
-Round off your atomic mass to the nearest whole number and write it down next to the atomic
-Then write down which group and period it is in using your Periodic Table of the Elements:
-Knowing the Period you are in, in this case Per. 3, draw the necessary number of energy levels.
Using the group number it is in write down how many valence electrons you will have.
-Using the Atomic number, 12 for Mg, write
down inside the nucleus how many p+s you will have. Then fill in your first two energy levels with the remaining e-s, but
respect their maximum amount. Remember that Atomic number tells you how many protons AND electrons you have.
-Finally, subtract your Atomic number, 12 in this case, from your Atomic mass, 24, to tell you how
many neutrons you will have in your nucleus. Then write it in your nucleus.
24-12 = 12 (neutrons)
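The whole procedure can be sketched in a few lines of code. Note this simple shell-filling (and the group-number heuristic above) only works for light, main-group atoms like the magnesium example — not for transition metals:

```python
SHELL_CAPACITY = [2, 8, 18, 32]  # max electrons per energy level (floors 1-4)

def bohr_model(atomic_number, atomic_mass):
    """Return (protons, neutrons, electrons per shell) for a simple Bohr diagram."""
    protons = atomic_number
    neutrons = round(atomic_mass) - atomic_number  # rounded mass minus number
    shells, remaining = [], atomic_number          # electrons = protons
    for cap in SHELL_CAPACITY:
        if remaining <= 0:
            break
        filled = min(cap, remaining)               # respect each floor's maximum
        shells.append(filled)
        remaining -= filled
    return protons, neutrons, shells

print(bohr_model(12, 24.305))  # Magnesium -> (12, 12, [2, 8, 2])
```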
eldavojohn writes "The Hubble Constant is used for many things in astrophysics: from determining how fast things are moving away from us, to the total volume of the universe, to predicting how our universe will end. The current best value for the Hubble Constant is 74.2 ± 3.6 (km/s)/Mpc according to recent conventional methods and the recently restored Hubble Telescope. Most astronomers agree that that's within 10% of its actual value. Researchers now claim that they might be able to get to 3% using water molecules in galactic disks to act as masers that amplify radio waves, to analyze galaxies seven times as far away as the current measurements. The further away the 'standard candle' is, the more assured they can be that local effects are not skewing the measurements. From one of the researchers: 'We measured a direct, geometric distance to the galaxy, independent of the complications and assumptions inherent in other techniques. The measurement highlights a valuable method that can be used to determine the local expansion rate of the universe, which is essential in our quest to find the nature of dark energy.' Once the Square Kilometer Array is completed, they hope to get even closer to the actual value."
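One quick use of the constant, as a sketch (not a method from the story): its reciprocal sets the rough timescale of the expansion, the "Hubble time", ignoring cosmological deceleration/acceleration:

```python
KM_PER_MPC = 3.086e19  # kilometers in one megaparsec

def hubble_time_gyr(h0_km_s_mpc):
    """Rough expansion timescale 1/H0, in gigayears (no cosmology corrections)."""
    seconds = KM_PER_MPC / h0_km_s_mpc   # (km/Mpc) / (km/s/Mpc) = seconds
    return seconds / (3.156e7 * 1e9)     # seconds -> Gyr

print(hubble_time_gyr(74.2))  # ~13.2 Gyr for the value quoted above
```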
phosphorescence science definition
- The emission of light by a substance as a result of having absorbed energy from a form of electromagnetic radiation, such as visible light or x-rays. Unlike fluorescence, phosphorescence continues for a short while after the source of radiation is removed. Glow-in-the-dark products are phosphorescent. Compare fluorescence.
- The light produced in this way.
Learn more about phosphorescence
The wind speeds in the highest cloud layer reach 355 km/hr (220 mi/hr), which is roughly equal to the Earth's jet stream.
The middle cloud layer has the fastest winds. These winds can reach 724 km/hr (450 mi/hr.) That is faster than the fastest tornado on Earth!
In the lowest cloud levels, the winds blow at around 160 km/hr (100 mi/hr). Then, at the surface, there is a gentle breeze of only 3.6 km/hr (2.2 mi/hr).
Copyright © 1997 Kathy A. Miles and Charles F. Peters II
Burrowing in marine muds by crack propagation: kinematics and forces
License: CC BY-NC-SA (Attribution-NonCommercial-ShareAlike). DOI: 10.1242/jeb.010371
The polychaete Nereis virens burrows through muddy sediments by exerting dorsoventral forces against the walls of its tongue-depressor-shaped burrow to extend an oblate hemispheroidal crack. Stress is concentrated at the crack tip, which extends when the stress intensity factor (KI) exceeds the critical stress intensity factor (KIc). Relevant forces were measured in gelatin, an analog for elastic muds, by photoelastic stress analysis, and were 0.015±0.001 N (mean ± s.d.; N=5). Measured elastic moduli (E) for gelatin and sediment were used in finite element models to convert the forces in gelatin to those required in muds to maintain the same body shapes observed in gelatin. The force increases directly with increasing sediment stiffness, and is 0.16 N for measured sediment stiffness of E=2.7×104 Pa. This measurement of forces exerted by burrowers is the first that explicitly considers the mechanical behavior of the sediment. Calculated stress intensity factors fall within the range of critical values for gelatin and exceed those for sediment, showing that crack propagation is a mechanically feasible mechanism of burrowing. The pharynx extends anteriorly as it everts, extending the crack tip only as far as the anterior of the worm, consistent with wedge-driven fracture and drawing obvious parallels between soft-bodied burrowers and more rigid, wedge-shaped burrowers (i.e. clams). Our results raise questions about the reputed high energetic cost of burrowing and emphasize the need for better understanding of sediment mechanics to quantify external energy expenditure during burrowing.
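The abstract's stated linear scaling of burrowing force with sediment stiffness, anchored at F = 0.16 N for E = 2.7×10⁴ Pa, can be sketched as follows. Treating F ∝ E as exact across all stiffnesses is an illustrative assumption, not a claim of the paper:

```python
F_REF, E_REF = 0.16, 2.7e4  # N and Pa: anchor point quoted in the abstract

def burrowing_force(e_sediment):
    """Force required in mud of stiffness E, assuming the linear F ∝ E scaling."""
    return F_REF * e_sediment / E_REF

print(burrowing_force(2.7e4))  # 0.16 N at the measured stiffness
print(burrowing_force(1.0e4))  # ~0.059 N for a softer (hypothetical) mud
```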
Sanjay Raja Arwade, Kelly M. Dorgan, and Peter A. Jumars. "Burrowing in marine muds by crack propagation: kinematics and forces" Journal of Experimental Biology 210.23 (2007): 4198-4212.
Available at: http://works.bepress.com/sanjay_arwade/1
NGC 3603: A giant star-forming region.
Caption: Chandra has resolved the multitude of individual X-ray sources in one of our galaxy's most active star-forming regions. This giant cloud of dust, gas, and stars, known as NGC 3603, is approximately 20,000 light years from Earth. The hundred or more young stars in this image were born in a burst of star formation less than two million years ago. Our Sun, by comparison, is approximately 4.5 billion years old and is considered to be middle-aged.
Scale: Image is 8.2 arcmin on a side.
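As a back-of-the-envelope check on the caption's scale, the small-angle approximation gives the physical extent of the imaged field, assuming the quoted ~20,000 light-year distance:

```python
import math

def physical_size_ly(angular_size_arcmin, distance_ly):
    """Small-angle approximation: size = distance * angle (in radians)."""
    theta = math.radians(angular_size_arcmin / 60.0)  # arcmin -> degrees -> rad
    return distance_ly * theta

# The 8.2 arcmin field at NGC 3603's ~20,000 light-year distance:
print(physical_size_ly(8.2, 20_000))  # ~48 light-years across
```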
Chandra X-ray Observatory ACIS Image
Cal Poly Pomona is one of the most biologically diverse campuses in the CSU system. In addition to its gardens, orchards, and agricultural fields, it has portions of four different ecological communities (coastal sage scrub, walnut woodland, oak woodland, and riparian woodland). These natural areas are home to at least 100 different species of birds, 39 species of mammals, 20 species of reptiles and amphibians, 261 species of plants, and more species of insects and other arthropods than anyone has ever taken the time to identify and count.
For detailed information, and to learn why biodiversity is so important, please see the biodiversity website.
- American College and University Presidents' Climate Commitment
- Assn. for the Advancement of Sustainability in Higher Education
- US Green Building Council
- John T. Lyle Center for Regenerative Studies
Ionizing & Non-Ionizing Radiation
Most naturally occurring radioactive materials and many fission products undergo radioactive decay through a series of transformations (loss of particles or electromagnetic energy from an unstable nucleus) rather than in a single step. Until the last step, these radionuclides emit energy or a particle with each transformation and become another radionuclide. Man-made elements, which are all heavier than uranium and unstable, undergo decay in this way. This decay chain, or decay series, ends in a stable nuclide.
On this page:
- Uranium Decay Chain
- The Importance of Radionuclide Decay Chains
- How do scientists know how much radioactivity there will be?
- Radon ingrowth during uranium decay
For example, uranium-238 decays through a series of steps to become a stable form of lead. Each step in the illustration below indicates a different nuclide. Only a few of the steps are labeled, and the numbers below each label indicate the length of the particular radionuclide's half-life. Uranium-238 has the longest half-life, 4.5 billion years, and radon-222 the shortest, 3.8 days. Near the end of the chain, lead-210 transforms through bismuth-210 to polonium-210, which decays to the stable nuclide, lead-206.
Uranium-238 Decay Chain
The Importance of Radionuclide Decay Chains
Radionuclide decay chains are important in planning for the management and disposal of radioactive materials and waste and for site cleanup. As radioactive decay progresses, the concentration of the original radionuclides decreases, while the concentration of their decay products increases and then decreases as they undergo transformation.
The increasing concentration of decay products and activity is called ingrowth. The illustration below shows ingrowth when the decay product is stable and the original radionuclide is replaced. In this situation, the activity decreases with decay of the original radionuclide.
If the decay products are not stable, their decay contributes to the total activity and makes planning for radiation protection more complex.
In the case of a radioactive waste repository, the mix of radionuclides in the waste will change over time. The amount of radiation being released can actually rise over time as successive radioactive decay products undergo decay. The radiation protection standards set for a repository must take into account varying levels of radioactivity as successive iterations of radionuclide ingrowth take place, even though the process continues over thousands of years.
How do scientists know how much radioactivity there will be?
The pattern of ingrowth varies according to the relative length of the half-lives of the original radionuclide and its decay products. Under certain conditions, decay products undergo transformation at the same rate they are produced. When this occurs, radioactive equilibrium is said to exist. Whether equilibrium occurs depends on the relative lengths of the half-lives of radionuclides and their decay products.
Using equations that account for half-lives, the rate of ingrowth, whether equilibrium occurs, the original amount of radionuclide, and the steps in its decay chain, scientists can estimate the amount of activity that will be present at various points.
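A minimal sketch of such an ingrowth calculation, for a hypothetical two-step chain (parent → daughter → stable), using the standard Bateman solution; the half-lives below are made up for illustration:

```python
import math

def two_step_decay(n1_0, half_life1, half_life2, t):
    """Bateman solution for parent -> daughter -> stable.
    Returns (parent atoms, daughter atoms) at time t."""
    l1 = math.log(2) / half_life1  # parent decay constant
    l2 = math.log(2) / half_life2  # daughter decay constant
    n1 = n1_0 * math.exp(-l1 * t)
    n2 = n1_0 * l1 / (l2 - l1) * (math.exp(-l1 * t) - math.exp(-l2 * t))
    return n1, n2

# Hypothetical chain: parent half-life 10 d, daughter 2 d. The daughter
# "grows in", peaks, then decays in step with the parent.
for t in (0, 2, 5, 10, 20):
    n1, n2 = two_step_decay(1000.0, 10.0, 2.0, t)
    print(f"t={t:>2} d  parent={n1:7.1f}  daughter={n2:6.1f}")
```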
- Radioactive Equilibrium
This page explains the factors that determine equilibrium among radionuclides during radioactive decay.
Radon ingrowth during uranium decay
The importance of understanding decay chains is illustrated by the ingrowth of radon-222 during decay of uranium-238. Uranium was distributed widely in the earth's crust as it formed. Given the age of the earth, uranium's slowly progressing decay chain now commonly produces radon-222. It is radioactive and has several characteristics that magnify its health effects:
- Radon is a gas. It can seep through soil and cracks in rock into the air. It can seep through foundations into homes (particularly basements), and accumulate into fairly high concentrations.
- Radon decay emits alpha particles, the radiation that presents the greatest hazard to lung tissue.
- Radon's very short half-life (3.8 days) means that it emits alpha particles at a high rate.
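The impact of that short half-life can be sketched with the basic decay formula (a worked example, not part of the original page):

```python
def fraction_remaining(half_life_days, t_days):
    """Fraction of a radionuclide left after time t: (1/2) ** (t / half-life)."""
    return 0.5 ** (t_days / half_life_days)

# Radon-222, half-life 3.8 days:
print(fraction_remaining(3.8, 3.8))   # 0.5 after one half-life
print(fraction_remaining(3.8, 38.0))  # <0.1% left after ten half-lives
```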
During exposure assessments, we pay close attention to the potential for radon generation. In designing cleanup standards for uranium mill tailings sites, we targeted radium-226, which decays to radon-222, rather than radon-222 alone. The radium-226 continues to generate radon-222 throughout its much longer half-life.
Radon and uranium miners
A higher than expected level of lung disease in uranium miners helped call attention to the effects of radon-222. The miners worked long hours in enclosed spaces, surrounded by uranium ore and radon that seeped out of the rock. Health workers expected to see health problems in the miners that would reflect direct exposure to radiation. Instead, the predominant health problems were lung cancer and other lung diseases.
First the health workers suspected the dust itself. They knew that high concentrations of small particles, such as coal dust, asbestos, or cotton fibers, could damage workers' lungs. However, close examination of the uranium-238 decay chain identified radon-222 as the most likely culprit.
This led to regulations in two areas:
- improved ventilation in uranium mines and
- limits on the amount of radon ventilated from the mines to the ambient air.
- National Emissions Standards for Hazardous Air Pollutants: Underground Uranium Mines
This page describes the regulations that limit emissions of airborne radionuclides from these mines.
- Mine Safety and Health Agency (Department of Labor): Radon Daughter Measurements
This page describes measurements of the radioactive decay products of radon (which is found in uranium mines.)
Forests are not only valuable sources of wood and fuel, but they are also home to many types of plants and animals.
Trees also stop wind and rain removing soil, they modify local climate and slow down climate change by storing carbon.
Unfortunately, the forests are rapidly shrinking. Every year, huge areas are destroyed by human activity.
As they fly overhead, satellites can obtain a detailed view of an entire forest in a matter of days. They provide the only quick, easy way to map the ever-changing forests and provide regular updates on their condition.
A visible-infrared scanner is used to study plant growth and measure surface temperatures. It also detects fires used to clear fields.
Satellites can even pick out different types of tree and show how healthy they are.
Maps based on satellite images help managers and local authorities to protect and preserve the shrinking forests.
Last update: 13 December 2004
1. The World of Chemistry
The relationships of chemistry to the other sciences and to everyday life are presented.
2. Color
The search for new colors in the mid-1800s boosted the development of modern chemistry.
3. Measurement: The Foundation of Chemistry
The distinction between accuracy and precision and its importance in commerce and science are presented.
4. Modeling the Unseen
Models are used to explain phenomena that are beyond the realm of ordinary perception.
5. A Matter of State
Matter is examined in its three principal states — gases, liquids, and solids — relating the visible
world to the submicroscopic.
6. The Atom
Viewers journey inside the atom to appreciate its architectural beauty and grasp how atomic
structure determines chemical behavior.
7. The Periodic Table
The development and arrangement of the periodic table of elements is examined.
8. Chemical Bonds
The differences between ionic and covalent bonds are explained by the use of scientific models
and examples from nature.
9. Molecular Architecture
The program examines isomers and how the electronic structure of a molecule's elements and bonds
affects its shape and physical properties.
10. Signals From Within
Chemists' knowledge of the interaction of radiation and matter is the basis for analytical methods
of sensitivity and specificity.
11. The Mole
Using Avogadro's law, the mass of a substance can be related to the number of particles
contained in that mass.
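The mass-to-particle relation mentioned here can be sketched in a few lines (water is used as an assumed example):

```python
AVOGADRO = 6.022e23  # particles per mole

def particles_from_mass(mass_g, molar_mass_g_per_mol):
    """n = m / M moles, then N = n * N_A particles."""
    return mass_g / molar_mass_g_per_mol * AVOGADRO

# 18 g of water (molar mass ~18 g/mol) is one mole of molecules:
print(particles_from_mass(18.0, 18.0))  # ~6.022e23 molecules
```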
12. Water
The special chemical properties of water are explored, along with the need for its protection and conservation.
13. The Driving Forces
Endothermic and exothermic reactions are investigated and the role of entropy is revealed.
14. Molecules in Action
Observing molecules during chemical reactions helps explain the role of catalysts. Dynamic
equilibrium is also demonstrated.
15. The Busy Electron
The principles of electrochemical cell design are explained through batteries, sensors, and a
16. The Proton in Chemistry
Demonstrations explain pH and how it is measured, and the important role of acids and bases.
17. The Precious Envelope
The earth's atmosphere is examined through theories of chemical evolution; ozone depletion and
the greenhouse effect are explained.
18. The Chemistry of the Earth
Silicon, a cornerstone of the high-tech industry, is one of the elements of the Earth highlighted in this program.
19. Metals
Malleability, ductility, and conductivity are examined, along with methods for extracting metals
from ores and blending alloys.
20. On the Surface
Surface science examines how surfaces react with each other at the molecular level.
21. Carbon
The versatility of carbon's molecular structures and the enormous range of properties of its
compounds are presented.
22. The Age of Polymers
How chemists control the molecular structure to create polymers with special properties is demonstrated.
23. Proteins: Structure and Function
The program examines proteins — polymers built from only 20 basic amino acids.
24. The Genetic Code
The structure and role of the nucleic acids, DNA and RNA, are investigated.
25. Chemistry and the Environment
Dump site waste management demonstrates chemistry's benefits and problems.
26. Futures
Interviews with leaders from academia and industry explore the frontiers of chemical research.
PONDER this one in the bath. Chances are you've just scrubbed your back with a choice example of one of evolution's greatest inventions. Or at least, a good plastic copy.
Sponges are a key example of multicellular life, an innovation that transformed living things from solitary cells into fantastically complex bodies. It was such a great move, it evolved at least 16 different times. Animals, land plants, fungi and algae all joined in.
Cells have been joining forces for billions of years. Even bacteria can do it, forming complex colonies with a three-dimensional structure and some division of labour. But hundreds of millions of years ago, eukaryotes - more complex cells that package up their DNA in a nucleus - took things to a new level. They formed permanent colonies in which certain cells dedicated themselves to different tasks, such as nutrition or excretion, and whose behaviour was well coordinated. ...
Through the environment cluster management program, NSTDA developed technology to produce a porous material, resistant to low temperatures, made from rice husk ash and other waste materials. This material can be used in diverse industries such as construction, aquaculture and soil-less planting. The products from this research include:
A porous material for waste water treatment - The product is as porous as 60 percent of its mass with holes from micron-to millimeter-sized. The holes provide living space for beneficial bacteria that naturally break down the excretions of aquatic animals.
Lightweight rocks - The materials are made from foam with densities of only 0.6-1.0 and 1-1.3 grams per cubic centimeter. They can be used in place of natural rocks for decoration or for construction that requires a lightweight structure. Due to their non-toxic quality, the rocks are popular in aquariums.
“Hortimedia” growing material – A very porous, water-retentive material like coconut husk, perlite or vermiculite, yet more durable. It is available in several formulae with different water-retention qualities for growing various kinds of plants. The low electrical conductance of less than 0.4 mS/cm makes it suitable for hydroponics or soil-less growing systems.
The porous material for waste water treatment won the gold medal in the Protection of the Environment & Energy category from the 38th International Exhibition of Inventions held in Geneva, Switzerland. | <urn:uuid:22841d1c-7957-4451-bd0f-ed87ddf8b16e> | 3.234375 | 310 | Knowledge Article | Science & Tech. | 38.070231 |
There is no certain bet in nuclear physics but work by Nobel laureate Carlo Rubbia at CERN (European Organization for Nuclear Research) on the use of thorium as a cheap, clean and safe alternative to uranium in reactors may be the magic bullet we have all been hoping for, though we have barely begun to crack the potential of solar power.
Dr Rubbia says a tonne of the silvery metal – named after the Norse god of thunder, who also gave us Thor’s day or Thursday - produces as much energy as 200 tonnes of uranium, or 3,500,000 tonnes of coal. A mere fistful would light London for a week.
Thorium eats its own hazardous waste. It can even scavenge the plutonium left by uranium reactors, acting as an eco-cleaner. "It’s the Big One," said Kirk Sorensen, a former NASA rocket engineer and now chief nuclear technologist at Teledyne Brown Engineering.
"Once you start looking more closely, it blows your mind away. You can run civilisation on thorium for hundreds of thousands of years, and it’s essentially free. You don’t have to deal with uranium cartels," he said. | <urn:uuid:87066ba2-5627-4871-bb6b-28ba82a685ef> | 3.125 | 249 | Comment Section | Science & Tech. | 54.00464 |
Hafnium is a chemical element in the periodic table that has the symbol Hf and atomic number 72.
A lustrous, silvery gray tetravalent transition metal, hafnium resembles zirconium chemically and is found in zirconium minerals.
Hafnium is used in tungsten alloys in filaments and electrodes and also acts as a neutron absorber in control rods in nuclear power plants.
For more information about the topic Hafnium, read the full article at Wikipedia.org. | <urn:uuid:187d59a2-c00d-482b-bd0f-ccf10fba5aa6> | 3.4375 | 141 | Knowledge Article | Science & Tech. | 30.410444
NASA is pressing ahead, its fingers crossed, for a third and final shuttle flight this year.
Many types of asteroid could have created the kind of amino acids used by life on Earth to build proteins and regulate chemical reactions, according to new NASA research.
Astronomers have measured the most massive known black hole in our cosmic neighborhood, weighing in at the equivalent of 6.6 billion suns.
The Crab Nebula - long thought to be the most stable source of high energy radiation in the sky - has astonished astronomers by showing signs of dimming.
A California researcher is urging NASA to (seriously) study the effects of space on human sexual behavior and procreation.
Astronomers at NASA are peering ever further away into the universe, and have now discovered their most distant galaxy yet.
Well, that's our vacation sorted: Space Adventures says it has three seats for sale for 2013 flights to the International Space Station.
Hubble scientists have released a picture of a strange green blob floating near a neighboring spiral galaxy.
The European Space Agency has released the first results from its Planck mission, focusing on the coldest objects in the universe.
Scientists have for the first time discovered that antimatter is regularly being produced on Earth - by thunderstorms.
Astronomers have been astonished to find a supermassive black hole in the center of a tiny low-mass galaxy, suggesting that such black holes can form before their host galaxies.
Astronomers have found the smallest planet yet outside our solar system, a rocky planet about one and a half times the size of Earth.
NASA's having another shot at regaining communication with the Mars rover Spirit, bogged down in a Martian crater for the last eighteen months.
A ten-year-old Canadian girl has become the youngest-ever person to discover a supernova.
A new global project will, for the first time, be able to track astrophysical events across the sky as they happen.
NASA has found four more cracks in the Discovery shuttle's fuel tank, but says it believes it can repair them in time to launch on February 3 as planned.
Five hundred 'space artifacts' are set for the auctioneer's block later this month - including a Playboy calendar photo that made it to the moon in 1969.
An Indian rocket was deliberately blown up on Christmas Day less than a minute after launch.
According to the New Testament, the guiding light that led the faithful to the birth of baby Jesus was the Star of Bethlehem.
The Apophis asteroid (99942) is expected to pass uncomfortably close to Earth in 2036. | <urn:uuid:c8f7ad79-6f4e-4238-bfd5-078bed351603> | 3.265625 | 537 | Content Listing | Science & Tech. | 45.279098 |
Thousands of asteroids lie in a belt between Mars and Jupiter. These asteroids lie in a location in the solar system where there seems to be a jump in the spacing between the planets. Scientists think that this debris may be the remains of a planet that was prevented from forming.
Asteroids are small boulders with either rounded or irregular shapes, which can be as large as a few football stadiums in size.
| <urn:uuid:0fcfff8b-4713-4156-80f3-7f2958342079> | 3.328125 | 92 | Truncated | Science & Tech. | 60.424193
‘Fast-blooming’ seaweed slows Caribbean reef recovery – scientists
SYDNEY, Australia, July 15, CMC – Fast-blooming seaweed is the main reason why the Caribbean’s coral reefs take longer to recover from stress than Australia’s Great Barrier Reef and those in the Indo-Pacific region, Australian marine scientists have said.
“Indo-Pacific reefs have less seaweeds than the Caribbean Sea,” said George Roff of the ARC Centre of Excellence for Coral Reef Studies in Australia in the journal Trends in Ecology and Evolution (TREE).
A study by the ARC, a world-leading research centre on coral reefs, includes survey data from the Indo-Pacific and Caribbean reefs from 1965 to 2010.
| <urn:uuid:2edcd4e4-39f2-4474-99df-9da69d5fe7c9> | 2.828125 | 166 | Truncated | Science & Tech. | 32.125226
|Key to genera of coneheaded katydids (Copiphorinae).|
All species of Neoconocephalus except N. triops and N. payhayokee have one generation per year. All except N. triops overwinter as eggs. N. triops overwinters as adults and has one generation in the northern extremes of its range and two and perhaps three generations each year farther south. N. payhayokee has spring and fall generations of adults.
Food
Adults feed nearly exclusively on the seeds of grasses; juveniles apparently feed on grass flowers and developing seeds. Other foods - such as sedge fruits, grass leaves, and living insects - have been noted occasionally.
Singing Behavior
Calling is chiefly at night. Only N. retusus commonly sings in the afternoon as well. Some species sing within tangles of dense vegetation near the ground; many climb to near the top of the ground cover. N. triops frequently sings from the tops of tall trees. When disturbed, singing males fly, run, or drop.
When males of common coneheads are captured and held in cages, they frequently prove to be parasitized internally by maggots of tachinid flies known to locate their hosts by their calling songs (Burk 1982). The fly larvae emerge from the conehead within a few days after capture, and the conehead dies. The larvae make hard, brown puparia that yield yellowish adult flies in about two weeks. In keeping with how the flies find their hosts, mainly males are parasitized.
Common coneheads and their Old World counterparts (formerly Homorocoryphus, now Ruspolia) have been used as experimental animals by biologists investigating mechanisms of hearing, mechanisms of sound production, species specificity of phonotaxis, muscle physiology, body temperature, and effects of temperature on wingstroke rate. | <urn:uuid:f06da312-9b02-4df5-ae02-b0c570d4b884> | 3.203125 | 388 | Knowledge Article | Science & Tech. | 45.693519 |
BREAK IN THE FOOD CHAIN?
The experts we surveyed pointed to disappearing native species as a major threat to the health of the Great Lakes. Some native fish populations are declining. The Great Lakes Radio Consortium's Lester Graham reports on the decline of one of those organisms that is of most concern.
COASTER BROOK TROUT
The coaster brook trout is on the brink of disappearing. It used to be found in great numbers, but human impacts have hurt the fish. The Great Lakes Radio Consortium's Chris McCarus reports on efforts to rescue the fish, and how these efforts could help the Great Lakes too.
THE AMERICAN EEL
In our next report in the series, Ten Threats to the Great Lakes, we hear about native species that are in trouble. The Great Lakes Radio Consortium's David Sommerstein reports that few species illustrate the dangers of the multiple threats to the Great Lakes as the American Eel.
SAVING AN ANCIENT FISH
We've been bringing you reports from the Great Lakes Radio Consortium's series Ten Threats to the Great Lakes. One of the threats experts identified is disappearing native species. One disappearing species that has scientists worried is an ancient fish, the lake sturgeon. The Great Lakes Radio Consortium's Celeste Headlee has that story.
Rare Warbler Makes Comeback
Saving Rattlesnakes from Development
Is Endangered Species Act Endangered?
Canadian 'Species At Risk' Law Criticized
Biologists Track Lynx's Return
Brighter Future for Native Trout
Endangered Mussel Rides to Renewal
What plants and animals are on the federal endangered species list?
Learn more about disappearing native species
More on the endangered plants and animals in this region | <urn:uuid:97adf725-d660-4bbc-a73e-fde2fb8888e8> | 3.21875 | 360 | Content Listing | Science & Tech. | 47.733739 |
The cobalt-60 isotope undergoes beta decay with a half-life of 5.24 years.
This particular radioisotope is historically important for several reasons. It is involved in the radioactive fallout from nuclear weapons. For many years, the gamma radiation from this decay was the main source for radiation therapy for cancer. This decay was used in the famous experiment by C. S. Wu in which she demonstrated the nonconservation of parity.
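As a quick illustration of the 5.24-year half-life quoted above, the remaining fraction of a cobalt-60 sample follows the usual exponential-decay law:

```python
# Fraction of a cobalt-60 sample remaining after t years,
# using the 5.24-year half-life quoted above.
HALF_LIFE_YEARS = 5.24

def remaining(t_years: float) -> float:
    return 0.5 ** (t_years / HALF_LIFE_YEARS)

print(remaining(5.24))   # one half-life  -> 0.5
print(remaining(10.48))  # two half-lives -> ~0.25
```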
| <urn:uuid:cbe972c8-d54e-4632-a3e9-b25299ae7316> | 2.828125 | 103 | Knowledge Article | Science & Tech. | 48.029342
A Solar Technology Seeks Its Day in the Sun
By John Collins Rudolph in the New York Times, November 17 2009
For years, concentrating photovoltaic technology, which uses mirrors or lenses to focus the sun’s rays on small, high-efficiency solar panels, has tantalized researchers as a potential source of low-cost, utility-scale solar power.
Under the right conditions, the technology could deliver cheaper power than conventional photovoltaics, while using considerably less land than thin-film solar panels.
“It is projected to have lower costs than conventional photovoltaics in sunny areas,” said Daniel Friedman, a solar energy researcher at the United States National Renewable Energy Laboratory in Golden, Colo. “I don’t think that’s too controversial.” | <urn:uuid:58c43b5c-1373-4fbf-a5c6-729400a10cf1> | 2.984375 | 170 | Truncated | Science & Tech. | 22.368162 |
The results of a four-summer (1964-1967) hydrologic study of the watershed of Glenn Creek, about 8 miles north of Fairbanks, Alaska, in the Yukon-Tanana uplands physiographic province, are presented. This work was initiated to provide initial base line hydrologic data for a small subarctic watershed, the first of its kind in North America. Standard hydrologic and meteorologic instrumentation was used, and streamflow characteristics were analyzed by standard hydrograph techniques. The stream is second-order, and drains an area of 0.70 square mile. Basin elevations are from 842 ft to 1618 ft. In regard to topography, geology, soils, permafrost, vegetation, and climate, the watershed seems to be representative of low-order, low-elevation drainage basins in the province. Analysis of rainfall-runoff data indicates that about half the 12.3-in. normal annual precipitation is runoff. The remainder is the actual evapotranspiration, which equals only about 30% of estimated potential evapotranspiration. For individual storms, runoff/rainfall proportions were from 0.03 to 0.42, and were positively correlated with antecedent discharge of the stream, which is a measure of watershed wetness. The stream responds rapidly to rainstorms except when the basin is very dry, and has markedly slow recessions compared with temperate-region streams of similar size. Rate of recessions is apparently controlled by concurrent evapotranspiration rates. Analysis of hydrographs and knowledge of the physical characteristics of the basin indicate that storm runoff occurs initially as surface runoff from bare soil areas adjacent to the stream, while recessions are dominated by a combination of tunnel flow beneath moss-covered parts of the basins and by typical ground-water flow through the moss and soils. Peak discharges for individual storms could be well estimated by an equation including antecedent discharge, total precipitation and storm duration, and the average recession constant.
These results represent the first detailed hydrologic data from the discontinuous permafrost zone of the North American taiga and should be of significance to the International Hydrological Decade and International Biological Program. | <urn:uuid:1659c03d-ca42-41a9-aa13-49404fc58f40> | 2.6875 | 457 | Academic Writing | Science & Tech. | 24.098356 |
Let b and c be relatively prime integers, and suppose a is an integer that is
divisible by both b and c. Prove that bc|a.
If a is divisible by both b and c, we have: a = bh and a = ck for some integers h and k.
Originally Posted by Aryth: If a is divisible by both b and c, we have: a = bh and a = ck for some integers h and k.
Thus bc | a. — ummm.... how does that work?
b and c divide a, so a = bh and a = ck for some integers h and k. b and c are relatively prime means there are integers x and y such that bx + cy = 1.
Now multiply through by a: a = abx + acy = (ck)bx + (bh)cy = bc(kx + hy). Hence bc | a.
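The divisibility claim can also be sanity-checked by brute force over small integers (a check, not a proof):

```python
from math import gcd

# Brute-force check: if b and c are relatively prime and both divide a,
# then b*c divides a.
for b in range(1, 25):
    for c in range(1, 25):
        if gcd(b, c) != 1:
            continue
        for a in range(0, 300):
            if a % b == 0 and a % c == 0:
                assert a % (b * c) == 0
print("no counterexamples found")
```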
| <urn:uuid:2c94a7a9-1713-4ed9-a83e-c5833366799e> | 3.1875 | 128 | Q&A Forum | Science & Tech. | 82.632479
Studies on the growth of Rhizocarpon geographicum in NW Scotland, and some implications for lichenometry
Bradwell, Tom. 2010 Studies on the growth of Rhizocarpon geographicum in NW Scotland, and some implications for lichenometry. Geografiska Annaler Series A Physical Geography, 92 (1). 41-52. doi:10.1111/j.1468-0459.2010.00376.x
Scotland, a maritime subpolar environment (55–60°N), has seen relatively few applications of lichenometry – even though it offers much potential. Perhaps surprisingly, direct measurements of Rhizocarpon geographicum growth rates in Scotland are so far lacking. This study reports on the growth of this crustose areolate species from two sites in Assynt, NW Scotland, between 2002 and 2009. Repeat photography of 23 non-competing thalli growing under identical environmental conditions on a single vertical surface over 5 years at Inchnadamph showed growth rates to be a function of size – with larger thalli (10–30 mm) growing significantly faster than the smallest thalli (<10 mm). Mean diametral growth rates in thalli >10 mm are 0.67 mm yr−1 (s.d. = 0.16). Studies on a second vertical surface near Lochinver, over 7 years, yielded complex growth data on a more mature population of R. geographicum thalli (<50 mm in diameter). Here, mean diametral growth rates in the larger thalli (>10 mm) are slower (0.29 mm yr−1; s.d. = 0.12) than those at Inchnadamph. However, at this site, competition with other species rules out any meaningful comparison of growth rates between the two sites. Other growth processes were monitored over the five to seven-year study period, including hypothallus growth, areolae development, thallus coalescence, and inter-species competition – all have important implications for the use of Rhizocarpon species in lichenometry.
|Programmes:||BGS Programmes 2010 > Geology and Landscape (Scotland)|
|Additional Keywords:||Lichenometry, Northwest Scotland|
|NORA Subject Terms:||Botany|
|Date made live:||20 Jul 2010 15:11|
| <urn:uuid:61d9b4ee-357d-42bd-b49a-34f73a4ad7f3> | 2.734375 | 506 | Academic Writing | Science & Tech. | 51.679375
Succeed in Understanding Astronomy
Astronomy is a branch of Physical Science that is concerned with the study of objects and effects in space or regions beyond the Earth's atmosphere.
My name is Ron Kurtus, and I've always been interested in space and the stars. I realized that knowledge of Astronomy is important for an understanding of the Universe and even how the Earth was formed. There is a great need for people who understand scientific principles and know how to think logically. Your knowledge and skills in these areas can help you excel in school, advance your career or improve your business.
The purpose of these lessons is to help you gain understanding the basics of Astronomy, such that you will become a champion in the subject. If you have any questions, send me an email.
Note: We now have many lessons in audio, so you can read along with the spoken word.
Special thanks to Diego Mastrangelo, of Buenos Aires, Argentina, for helping to edit many of the Astronomy lessons.
Observations in Astronomy - audio
Our Solar System - audio
Kepler's Laws of Orbital Motion - audio
Characteristics of our Sun - audio
Characteristics of the Earth - audio
Motion of the Earth - audio
Characteristics of our Moon - audio
Motion of the Moon - audio
Phases of the Moon - audio
Characteristics of our Universe - audio
Constellations - audio
Galaxies - audio
Black Holes - audio
Big Bang Theory - audio
| <urn:uuid:6ff54371-3075-4e9a-aab2-fb297ba1a22c> | 3.5 | 347 | Content Listing | Science & Tech. | 36.277123
Climate Change, Dark Ages, and Armchair Disaster Prediction
Allowing that it was an “outrageous metaphor,” Giegengack took a roll of toilet paper out of his bag. He told the amused audience at Rainey Auditorium that, when measured proportionally, each inch of the 1,000-sheet roll is roughly equivalent to one million years of the Earth’s 4.5-billion-year history. Using those proportions, 1/100 of an inch of the entire roll represents all of recorded human history. And despite there being evidence of “both the lowest temperatures and the lowest concentrations of carbon dioxide in the atmosphere” happening within the last million years, he said, today’s scientists are mainly looking at the last 200 years when studying climate change. That’s a lot of unused toilet paper.
“Climatologists … are taking records of those 200 years,” Giegengack said, “subjecting it to very detailed analysis, and projecting it into computer models of what the climate will be like in the future based on this.”
He held up the tiny sliver of one sheet of toilet paper to represent this.
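The proportions in the analogy check out with a few lines of arithmetic; the 10,000-year span assumed for recorded history is a rough figure of ours, not from the talk:

```python
# Check the toilet-roll analogy's arithmetic.  Earth's age and the per-inch
# scale come from the passage; the recorded-history span is an assumption.
EARTH_AGE_YEARS = 4.5e9
YEARS_PER_INCH = 1e6            # "each inch ... roughly one million years"
RECORDED_HISTORY_YEARS = 10_000  # rough assumption

roll_inches = EARTH_AGE_YEARS / YEARS_PER_INCH
history_inches = RECORDED_HISTORY_YEARS / YEARS_PER_INCH

print(roll_inches)     # 4500.0 inches -- about a 1,000-sheet roll
print(history_inches)  # 0.01 inch -- the quoted 1/100 of an inch
```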
While McKibben Loudly Weeps
(to the tune of the Beatles' "While My Guitar Gently Weeps"...)
I look at the world and I see it's improving
While McKibben loudly weeps
From poverty, millions are steadily moving
Still McKibben loudly weeps
You don't hear much about the "Great Pacific Climate Shift of 1976/77" or the weather of 1978. Life starts in 1979 for climate science. | <urn:uuid:916dd363-c8fe-4118-9def-d0bd53eb4048> | 3.03125 | 374 | Personal Blog | Science & Tech. | 56.734523 |
Science subject and location tags
Articles, documents and multimedia from ABC Science
Tuesday, 14 May 2013
New teeth news American alligators could help to revolutionise tooth replacement in humans, suggests a new study.
Friday, 3 May 2013
Micro drone US scientists have created a tiny winged robot that can fly.
Tuesday, 30 April 2013
Proto-dinos Ten million years after the world's largest mass extinction, a lineage of animals thought to have led to dinosaurs took hold in what is now Tanzania and Zambia, according to new research.
Thursday, 25 April 2013
Dino-mechanics Many dinosaurs, like T rex, had scrawny arms, but paleontologists have discovered that as dinosaurs gradually evolved bigger arms, they began to stand and move more like birds.
Wednesday, 24 April 2013
ISON comet A small but incredibly bright comet heading toward the sun could do more than dazzle Earth's skies when it arrives later this year.
Monday, 22 April 2013
All systems go An Antares rocket, one of two launchers developed with NASA backing to fly cargo capsules to the International Space Station, has blasted off on its debut mission, successfully depositing a dummy spacecraft into orbit.
Friday, 12 April 2013
Ancient human An early human ancestor that roamed South Africa two million years ago had a primitive pigeon-toed gait, human-like front teeth, ate mostly vegies and spent a lot of time swinging in the trees.
Monday, 8 April 2013
I heart coffee Ever get that warm feeling in your chest when you get a whiff of brewing coffee or fresh cinnamon rolls baking in the oven? It could, in fact, be your heart 'smelling'.
Wednesday, 27 March 2013
Croc wipe out A mass extinction that occurred over 200 million years ago, killed off a slew of huge predators, including hefty beasts that looked like crocodiles and enormous armadillos, according to new research.
Friday, 22 March 2013
Walking machines A lizard-bot that scurries across sand takes us a step closer in the quest for a powerful, real-life running machine.
Friday, 8 March 2013
Mars map Water blasted out from an underground aquifer on Mars relatively recently, carving out deep flood channels in the surface that were buried by lava flows, reveal new 3D maps.
Friday, 1 March 2013
Radiation ring For more than four weeks last year, a previously unknown third radiation belt circled Earth before it was annihilated - along with the entire outer belt - by a shock wave, a pair of NASA probes show.
Tuesday, 26 February 2013
Chicken little A recently discovered dinosaur, Yulong mini, was appropriately named, as the remains of its chicken-sized offspring are now among the smallest dinosaurs ever found, according to a new study.
Monday, 25 February 2013
Robotic future Humans and robots work better together if they can swap roles and learn from each other, say US scientists.
Friday, 22 February 2013
Electric connection Flowers may be silent, but scientists have just discovered that electric fields allow them to communicate with bumblebees and possibly other species, including humans. | <urn:uuid:ef6d527c-6b8e-482f-9ecb-13db47635f84> | 2.734375 | 650 | Content Listing | Science & Tech. | 38.511241 |
Science Fair Project Encyclopedia
Ewens's sampling formula
In population genetics, Ewens's sampling formula, introduced by Warren Ewens, states that under certain conditions (specified below), if a random sample of n gametes is taken from a population and classified according to the gene at a particular locus, then the probability that there are a1 alleles represented once in the sample, and a2 alleles represented twice, and so on, is
Pr(a1, ..., an) = n! / (θ(θ+1)⋯(θ+n−1)) × ∏_{j=1}^{n} θ^{a_j} / (j^{a_j} a_j!)
for some positive number θ, whenever a1, ..., an is a sequence of nonnegative integers such that a1 + 2a2 + 3a3 + ⋯ + n·an = n.
The phrase "under certain conditions", used above, must of course be made precise. The assumptions are (1) the sample size n is small by comparison to the size of the whole population, and (2) the population is in statistical equilibrium under mutation and genetic drift and the role of selection at the locus in question is negligible, and (3) every mutant allele is novel.
When θ = 0, the probability is 1 that all n genes are the same. When θ = 1, the distribution is precisely that of the integer partition induced by a uniformly distributed random permutation. As θ → ∞, the probability that no two of the n genes are the same approaches 1.
This family of probability distributions enjoys the property that if after the sample of n is taken, m of the n gametes are chosen without replacement, then the resulting probability distribution on the set of all partitions of the smaller integer m is just what the formula above would give if m were put in place of n.
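A direct transcription of the formula (with a[j-1] holding the number of alleles seen exactly j times) makes the normalization easy to check for small n; the function name is ours:

```python
from math import factorial, prod

# Ewens's sampling formula: a[j-1] is the number of alleles represented
# exactly j times in a sample of n gametes.
def ewens_prob(a, theta):
    n = sum(j * aj for j, aj in enumerate(a, start=1))
    rising = prod(theta + k for k in range(n))   # theta(theta+1)...(theta+n-1)
    weight = prod(theta**aj / (j**aj * factorial(aj))
                  for j, aj in enumerate(a, start=1))
    return factorial(n) / rising * weight

# The three possible configurations of n = 3 genes sum to probability 1:
configs = [(3, 0, 0), (1, 1, 0), (0, 0, 1)]
print(sum(ewens_prob(c, theta=2.0) for c in configs))  # 1.0 up to rounding
```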
- Warren Ewens, "The sampling theory of selectively neutral alleles", Theoretical Population Biology, volume 3, pages 87—112, 1972.
- J.F.C. Kingman, "Random partitions in population genetics", Proceedings of the Royal Society of London, Series A, Mathematical and Physical Sciences, volume 361, number 1704, 1978.
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License. | <urn:uuid:6eaa6847-3a23-4410-a07f-644762e45438> | 3.296875 | 428 | Knowledge Article | Science & Tech. | 37.277479
Jupiter's moon Io is about the same mass and size as the Earth's Moon. Based on this we would expect Io to have about the same inventory of radioactive elements and the same cooling rate as the Moon. We would expect Io to have the same level geological activity as the Moon, namely none. However, Io is the most geologically active surface in the Solar system. This means that the mechanism responsible for heating the interior of Io is very different from that of the Moon.
The mechanism responsible for heating the interior of Io is called Tidal Heating. This little tutorial is my attempt to explain a rather simplified version of the tidal heating of Io.
The force of gravity between two objects of mass (M) and (m) separated by a distance (d) is F = GMm/d². The force depends very strongly on the distance between the objects.
This means that when Io orbits Jupiter, the side of Io nearest to Jupiter feels a slightly larger gravitational pull than the side of Io furthest from Jupiter. Since Jupiter is very massive (318 times the mass of the Earth), this difference is rather large.
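To see how much larger Jupiter's differential pull on Io is than Earth's on the Moon, one can compare the near-side/far-side difference in gravitational acceleration; the rounded masses, distances, and radii below are textbook values, not from this tutorial:

```python
# Compare the differential (tidal) pull across Io from Jupiter with that
# across the Moon from Earth.  All values are rounded textbook figures.
G = 6.674e-11  # m^3 kg^-1 s^-2

def tidal_accel(M, d, r):
    # Difference in gravitational acceleration between the near and far
    # sides of a body of radius r at distance d from a mass M.
    # For r << d this is approximately 2*G*M*r / d**3.
    return G * M * (1.0 / (d - r)**2 - 1.0 / (d + r)**2)

io   = tidal_accel(1.90e27, 4.22e8, 1.82e6)   # Jupiter acting on Io
moon = tidal_accel(5.97e24, 3.84e8, 1.74e6)   # Earth acting on the Moon

print(io / moon)  # Jupiter's tide on Io is a few hundred times stronger
```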
This difference in gravitational forces actually distorts the shape of Io. The image shows this effect greatly exaggerated; the actual distortion is about 100 meters. The Earth has the same effect on the Moon, but to a much lesser extent. This difference in gravitational forces is called the Tidal Force.
Since Io is in synchronous orbit around Jupiter it keeps the same face toward Jupiter at all times (just like the Earth's Moon). This means that the distorted shape of Io keeps the same orientation with respect to Jupiter (this is a slight simplification). If Io was Jupiter's only moon this would be the end of the story. Io would be in a nice nearly circular orbit about Jupiter with its slightly distorted shape. This is what is happening with the Earth's Moon. No tidal heating would occur.
However, Io's orbit is in a 2:1 resonance with the orbit of Europa, another moon of Jupiter. This means that Io makes two orbits for every one orbit that Europa makes.
This means that the orbit of Io is changed. Io's orbit is forced to be slightly eccentric (red line, shown very exaggerated). This is the same mechanism that changes the orbits of asteroids to create the Kirkwood Gaps, and changes the orbits of ring particles of Saturn, Uranus, and Neptune to create gaps.
Since Io is being forced by Europa into an eccentric orbit, its distance from Jupiter constantly changes. When Io is close to Jupiter the tidal forces are greater so the distortion of Io is greater. When Io is further from Jupiter the tidal forces are less so the distortion of Io is less.
Io goes around Jupiter in 1.8 days. This means that in 1.8 days the shape of Io goes from a more distorted figure to a less distorted figure.
The constant change in shape of Io causes a large amount of friction in the layer of rocks that make up the world. This friction generates a great deal of internal heat. It is this internal heat source that drives the tremendous volcanic activity we see on the surface of Io. This heating mechanism is called Tidal Heating.
In order to have Tidal Heating you need: a massive nearby body that raises a strong tide, and an orbital distance that keeps changing (here, an eccentricity forced by the resonance with Europa), so that the tidal distortion constantly flexes the body.
This little tutorial is a very qualitative discussion of Tidal Heating. If you want to dive into all of the fun mathematics the Wikipedia entry is a pretty good place to start. | <urn:uuid:c9ddf1c3-b739-45b8-897c-f5160001ba60> | 4.40625 | 703 | Tutorial | Science & Tech. | 55.212245 |
How common are Earth-sized planets? Extrapolations from new data taken by NASA's orbiting Kepler spacecraft indicate that at least one in ten stars is orbited by an Earth-sized planet, making our Milky Way Galaxy the home to over ten billion such planets. Unfortunately, this estimate applies only to planets effectively inside the orbit of Mercury, making these hot-Earths poor vacation opportunities for humans. This histogram depicts the estimated fraction of stars that have close orbiting planets of various sizes. The number of Sun-like stars with Earth-like planets in Earth-like orbits is surely much less, but even so, Kepler has also just announced the discovery of four more of those. | <urn:uuid:33bcbba3-4193-4b4d-8bf9-f51d4d520769> | 3.390625 | 149 | Knowledge Article | Science & Tech. | 26.588214
- For some other uses of the word "wing" please see Wing (disambiguation).
A wing is a surface used to produce an aerodynamic force normal to the direction of motion by travelling in air or another gaseous medium. The first use of the word was for the foremost limbs of birds, but has been extended to include other animal limbs and man-made devices.
The most common use of wings is to fly by deflecting air downwards to produce lift, but upside-down wings are also commonly used as a way to produce downforce and hold objects to the ground (for example racing cars).
Wing shapes: a swept wing KC-10 Extender from Travis Air Force Base, California, refuels a delta wing F/A-22 Raptor.
Terms used to describe aeroplane wings
- Leading edge: the front edge of the wing
- Trailing edge : the back edge of the wing
- Span: distance from wing tip to wing tip
- Chord: distance from wing leading edge to wing trailing edge, usually measured parallel to the long axis of the fuselage
- Aspect ratio: ratio of span to standard mean chord
Aeroplane wings may feature some of the following:
- A rounded leading edge cross-section
- A sharp trailing edge cross-section
- Leading-edge devices such as slats, slots, or extensions
- Trailing-edge devices such as flaps
- Ailerons (usually near the wingtips) to provide roll control
- Spoilers on the upper surface to disrupt lift
- Vortex generators to help prevent flow separation
- Wing fences to keep flow attached to the wing
- Dihedral, or a positive wing angle to the horizontal. This gives inherent stability in roll. As the aircraft rolls, the lower wing generates more lift than the upper, rolling the aircraft back into the level position. Anhedral, or a negative wing angle to the horizontal has a destabilising effect.
- Swept wings are good for high-speed aircraft. The wing is at an angle to the airflow, so that the effective flow speed across the wing chord is lower.
- Elliptical wings (technically wings with an elliptical lift distribution) are theoretically optimum for efficiency at subsonic speeds.
- Delta wings have reasonable performance at subsonic and supersonic speeds.
- Waveriders are efficient supersonic wings that take advantage of shock waves.
- Rogallo wings are two hollow half-cones of fabric, one of the simplest wings to construct.
- Swing-wings (or variable geometry wings) are able to move in flight to give the benefits of dihedral and delta wing. Although they were originally proposed by German aerodynamicists during the 1940s, they are currently only found on some military fighter aircraft such as the Grumman F-14, Panavia Tornado, and General Dynamics F-111.
- Ring wings are optimally loaded closed lifting surfaces with higher aerodynamic efficiency than planar wings of the same aspect ratio. Other non-planar wing systems display an aerodynamic efficiency intermediate between ring wings and planar wings.
Science of Wings
At the simplest level, a wing produces lift by deflecting air downward, which propels the flying body upward with an equal and opposite force (see Newton's Third Law). Bernoulli's principle has traditionally been used to explain the functioning of a wing in terms of differing pressure above and below the wing, but this model can often be misleading or depend on false assumptions. See Coanda effect for an alternative explanation of how a wing produces lift.
The amount of lift produced by a wing increases with the angle of attack (the angle between the onset flow and the chord line), but this relationship ends once the stall angle is reached. At that angle the airflow starts to separate from the upper surface; any further increase in angle of attack produces no more lift (in fact, lift drops dramatically) and causes a large increase in drag.
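The relationship described above can be sketched numerically. The 2π-per-radian lift slope comes from thin-aerofoil theory, while the 15° stall angle and the post-stall fall-off below are illustrative assumptions only, not properties of any real wing:

```python
import math

def lift_coefficient(alpha_deg: float, stall_deg: float = 15.0) -> float:
    """Lift coefficient rising linearly with angle of attack up to stall,
    then collapsing as the flow separates from the upper surface."""
    cl_max = 2 * math.pi * math.radians(stall_deg)
    if alpha_deg <= stall_deg:
        return 2 * math.pi * math.radians(alpha_deg)
    # Past the stall angle: lift drops off sharply (crude linear model)
    return max(0.0, cl_max * (1.0 - 0.1 * (alpha_deg - stall_deg)))
```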
Wing design can be complex and is one of the principal applications of the science of aerodynamics.
- A helicopter uses a rotating wing with variable pitch or angle to provide a directional force.
- The space shuttle uses its wings only for lift during its descent.
Structures with the same purpose as wings, but designed to operate in liquid media, are generally called fins, with hydrodynamics as the governing science.
How to find zoom level for a Google Static Map
The Google Maps API uses an integer "zoom level" to define the resolution of the current view. Zoom levels range from 0 (the lowest, at which the entire world can be seen on one map) to 21 (down to individual buildings) in the default road map view. When we want to show several locations on the device screen at the same time, we need to find the zoom level that shows them all optimally, and pass that value to the Google Static Maps API. In other words, we need to relate the device screen size to a zoom level that covers all the points of interest.
Google Maps sets zoom level 0 to encompass the entire earth. Each succeeding zoom level doubles the precision in both the horizontal and vertical dimensions. In this article, we find the zoom level for a Google map using Symbian C++ code.
How to calculate zoom level
At zoom level 0 the entire world is shown on the device. That means that if the device screen is 260 pixels wide, the circumference of the world should fit into the screen width. If we know the distance between two points, we can find the horizontal and vertical components of that distance that must fit on the screen. We then use the larger of the two components to fit the device screen.
// iDistance: distance between the two points (km); cosAngle and sinAngle
// resolve it into horizontal and vertical components.
TReal64 cosValue = iDistance * cosAngle;
TReal64 sinValue = iDistance * sinAngle;
// Use the larger component, since it is the one that must fit the screen.
TReal64 valueUsedForZoom = (cosValue > sinValue) ? cosValue : sinValue;
// 6372.795 = mean radius of the earth in km, so 2*pi*r is its circumference.
TReal64 logPartUp = (2 * KPi * 6372.795) / valueUsedForZoom;
TReal64 logPartUpVal = 0.0;
TInt ret = Math::Log(logPartUpVal, logPartUp);
TReal64 logPartDownResult = 0.0;
ret = Math::Log(logPartDownResult, 2.0);
// log2(x) = log10(x) / log10(2); truncate to an integer zoom level.
TInt zoom = (TInt)(logPartUpVal / logPartDownResult);
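For readers without a Symbian toolchain, the same change-of-base computation can be sketched in a few lines of Python (the function name is ours; only the distance-to-circumference ratio matters):

```python
import math

EARTH_RADIUS_KM = 6372.795  # mean earth radius, as in the C++ code above

def zoom_for_span(span_km: float) -> int:
    """Largest integer zoom level whose map view still covers span_km,
    computed as floor(log2(earth_circumference / span_km))."""
    circumference = 2 * math.pi * EARTH_RADIUS_KM
    return int(math.log(circumference / span_km) / math.log(2.0))
```

A span equal to the full circumference gives zoom 0, and halving the span raises the zoom by one, matching the doubling rule described above.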
This code can be tested by making an HTTP request to the Google server. For example, we can make a request of the following form,
where centerLat and centerLon are the average latitude and longitude of the two places, foundZoom is the zoom level found by the code above, and width and height are the width and height of the device screen. This can easily be verified with the example code in How to use Google Maps data in mobile applications.
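A sketch of assembling such a request from the values named above, in Python. The endpoint and parameter names (center, zoom, size) follow the Static Maps API as commonly used at the time; check them against current Google documentation before relying on them:

```python
def static_map_url(center_lat: float, center_lon: float,
                   zoom: int, width: int, height: int) -> str:
    """Build a Google Static Maps request for the computed zoom level."""
    return ("http://maps.google.com/staticmap"
            f"?center={center_lat},{center_lon}"
            f"&zoom={zoom}&size={width}x{height}")

# Midpoint of Tampere and Dhaka (coordinates listed below), a zoom level
# from the formula above, and a 260x320 screen:
# static_map_url(42.5876, 57.1070, 3, 260, 320)
```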
Here are some images from above http (with addition of markers) request.
Latitude of Tampere = 61.4703;
Longitude of Tampere = 23.7744;
Latitude of Dhaka = 23.7049;
Longitude of Dhaka = 90.4395;
Latitude of Espoo = 60.2043;
Longitude of Espoo = 24.6506;
Cladophora Balls on the Brain
A curious phenomenon is associated with a certain type of Cladophora that forms free-living spheres, filled with water, mud or gas released from photosynthesis. These ball forms, called Aegagropila, may be found in both freshwater lakes and nearshore marine environments, and vary from a diameter of a few millimeters to the size of a human head!
The spherical appearance of these algae develops by rolling over the bottom surface, driven by wave motion. Harder floor substrate leads to balls of more regular shape. The balls, once formed, can be kept in lab conditions for a number of years without losing their shape, suggesting some sort of inherent spherical morphology to the Aegagropila.
Eventually, the balls end up floating at the water's surface, or sitting at the bottom of shallow lakes, depending on whether gas, liquid or solid matter fills the algae. There are reports of other species of Cladophora demonstrating aegagropilous growth, among them phycologist Schiller's observation of C. columbiana!
Possible Uses for Cladophora Balls
Cladophora Mania Hits Japan
In Japan, Aegagropila enjoy somewhat of a "cult" following. A certain lake in Hokkaido is known to form especially perfect Cladophora balls, which the local Ainu people involve in their summer festival. A folktale accompanies the dense green spheres, in which the hearts of a young couple who drown in the lake turn into Cladophora balls. Aegagropila's popularity in Japan has even spread to more urban areas. Tokyo has a bar named "Marimo," the Japanese word for the balls, which sells plastic souvenirs in the shape of the popular alga. In recent years, aegagropilous Cladophora has even become a protected species in Japan, and a Cladophora ball postage stamp has been issued.
Hydrothermal vents can be found on the seafloor in areas with volcanic activity. Cracks in the ocean floor allow cold water to seep below the earth's crust, where it is heated to as much as 400 °C (752 °F). The hot water absorbs chemicals from the molten rock below. When the hot fluid meets the oxygen-rich cold seawater, chemical reactions take place, depositing metals and minerals to form the strange chimney-like structures. A vent named 'Godzilla' reached 9 meters.
The Ganges and Indus river dolphins are the last survivors of a once-diverse group. Should they be priorities, or should we bet on the more resilient groups?
Deciding Which Species to Save
Felix Marx just finished his Ph.D. in paleontology at the University of Otago in New Zealand.
Updated January 11, 2013, 10:36 AM
As far as we have managed to read it, the fossil record tells us that whales and dolphins were probably more diverse in the past than they are now. These ups and downs of cetacean diversity over time are the result of different families originating, diversifying and, eventually, declining again, sometimes to extinction. For example, the living Ganges and Indus river dolphins are the last survivors of a group that was rather diverse between 15 and 30 million years ago. Shortly thereafter, the family went into decline, at the same time as modern oceanic dolphins, the most diverse cetacean family alive today, started to take off.
Of course, these are extremely long-term trends, but they can tell us something about the levels of threat living cetaceans are facing. Being the last of their kind with no close living relatives, the Ganges and Indus river dolphins are the last guardians of a wealth of genetic, morphological and ecological diversity. The same can be said for many other living cetaceans, such as the pygmy right whale, or sperm whales. Should any of these animals ever be driven to extinction, the loss of evolutionary history would be considerable and, arguably, much more significant than if a species with many living relatives were to disappear. Who knows what evolutionary and ecological potential may lie dormant in this ancient survivor? The recent extinction of the Yangtze River dolphin, itself the last survivor of an ancient lineage, provides a haunting example.
Taking such an evolutionary point of view invites the question of what “threat” actually means. Should we assign a greater level of threat to the endangered Ganges River dolphin because of its evolutionary history than to, say, the vaquita – which has many living relatives? Naturally, there are many other reasons for conservation besides genetic or morphological uniqueness, and many of these, such as the ecological or “emotional” significance of a particular species, may outweigh any evolutionary argument.
In fact, the argument for preserving these relatively distinct species can easily be turned on its head. A focus on saving relics like the Ganges River dolphin might be misguided, if it shifts attention from other, ostensibly more evolutionary flexible groups. Take rorquals, for example, or killer whales, which at the moment seem to be in the process of diversifying into different species. Arguably, such evolutionarily active lineages represent the future, and may prove a better bet in terms of adapting to large-scale environmental change than the Ganges River dolphin.
Whichever line of argument people may follow, evolutionary history and potential should not be ignored when making conservation decisions.
This can be done in such a way that indicating a local or networked source becomes simple. An import path such as /widgets/helloworld indicates a local file path. A path such as http://helloworld.foocorp.com points to a network file store. There could be other syntax, of course, but this is one example of an easy way to do this.
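A sketch of how a runtime might make that local-versus-network decision; the function is hypothetical, not part of any existing interpreter:

```python
from urllib.parse import urlparse

def classify_import_path(path: str) -> str:
    """Return 'network' for URL-style import paths, 'local' otherwise."""
    scheme = urlparse(path).scheme
    return "network" if scheme in ("http", "https") else "local"

# classify_import_path("/widgets/helloworld")        -> "local"
# classify_import_path("http://helloworld.foocorp.com") -> "network"
```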
Dynamically Loaded Binaries
Like many languages, Python allows you to talk to external binaries, typically written in C or C++, to talk to hardware APIs, or to optimize performance for especially CPU intensive tasks. The issue with these, of course, is that it is difficult and many times impossible to make this type of code portable because the API the C program talks to (a telecom interface, for example) does not exist on many platforms.
For this type of module, how about a slightly different syntax to tell the runtime environment to load an external binary? This also requires changes to the runtime environment, which I'll discuss later. In our hypothetical language, we'd write:
import http://1_0_9.fastfft.widgets.code.foocorp.com type=bin runonce=n as=fastfft
x = fastfft()
fft = x.transform(some_pcm_audio)
This short example adds two parameters to the import statement: type=bin and runonce=y/n. The type=bin parameter tells the runtime engine to fetch and launch a compiled binary that will process data from the application, not unlike a DLL. The runonce parameter tells the runtime engine whether it can launch multiple instances of the binary or only one.
The runtime interpreter hides the messiness of this process from the Python application and does something like this:
- Try to fetch a copy of the binary for the target platform; throw an exception if a problem occurs (for example, file not found, CRC error, etc.).
- Launch an instance of the binary and tell it to talk to the runtime environment via interprocess communication (for example, a localhost TCP socket).
- Pass data to/from the Python app via the interprocess communication interface.
This is a simple and fairly clean way to make it easy for Python apps to use external binaries. This itself is not news. A voice-scripting language I used years ago did something similar to this, minus the dynamic load and binding trick. The goal here is to make it easy to talk to binaries, but do so in a way that does not require the user to run a build command prior to running the application.
Using the example above, I want to use a C program that does a fast Fourier transform on a segment of audio data. This is a computationally intensive task, so it'll be a lot faster in compiled C than in an interpreted language such as Python. In this framework, the runtime engine launches an instance of fastfft.exe in the background and tells fastfft.exe to talk to it over a localhost socket.
From my perspective, I am just talking to something that looks like any other object. When I invoke a method, the runtime engine sends a message to the external application via IPC, does its thing, and sends a response back, which gets returned to my application. Simple.
Again, the details of how this works behind the scenes are not so important. I use TCP via localhost as an example, mainly because any networked appliance will, by definition, be able to talk via a localhost socket. In a real-world version of this, C and C++ developers will have a wrapper library that provides a simple interface in and out of their programs. The key requirement here is to eliminate the need to preload external modules, without the need for a build operation prior to running the program. It's worth losing a little bit of performance to gain this flexibility.
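The three steps above can be sketched as a small proxy class; the newline-delimited wire format and the class name are assumptions for illustration, not the article's actual protocol:

```python
import socket

class BinaryProxy:
    """Stands in for an external binary: each method call is forwarded over
    a localhost TCP socket and the reply is returned to the caller."""

    def __init__(self, port: int):
        self._addr = ("127.0.0.1", port)

    def call(self, method: str, payload: str) -> str:
        # One connection per call keeps the sketch simple; a real runtime
        # would hold the connection open between calls.
        with socket.create_connection(self._addr) as conn:
            conn.sendall(f"{method} {payload}\n".encode())
            return conn.makefile().readline().strip()
```

From the application's perspective, calling `proxy.call("transform", data)` is indistinguishable from invoking a method on a local object, which is exactly the illusion the runtime engine is meant to provide.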
Built-In Version Control
You may have noticed that this system has a built-in form of version control. Code repositories must explicitly create a separate path for every version of a module. Likewise, developers would be required, or at least strongly encouraged to explicitly refer to a unique version at runtime.
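Under the hostname convention used in the earlier example (1_0_9.fastfft...), the pinned version can be read straight off the import URL. A small sketch, assuming the underscore-separated leading label is how versions are always encoded in this scheme:

```python
def version_from_url(url: str) -> tuple:
    """Extract (major, minor, patch) from a version-pinned import URL,
    e.g. 'http://1_0_9.fastfft.widgets.code.foocorp.com' -> (1, 0, 9)."""
    host = url.split("://", 1)[-1].split("/", 1)[0]
    version_label = host.split(".", 1)[0]
    return tuple(int(part) for part in version_label.split("_"))
```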
Bootstrapping to a Web OS
This is a simple extension of a proven language. At the very least, it will make building and sharing applications easier, though it has potential beyond that.
Imagine a minimal machine that has the basic guts you need in a computer. Storage, I/O, and a network. It has a lightweight, built-in interpreter and is designed to run only Python apps. In effect, it's a Python computer and operating system.
When you first take this computer home, it boots up with a picture of a snake and a command line. You type the name of a program you want to run. Any program. Maybe you want to run an MP3 player, so you type "PyMP3".
Your computer is brand new, and you don't have that program yet, so behind the scenes it fetches the program, and everything it depends on, from the code repository.
It then runs this program, and because it probably contains import statements of its own, it automatically works through the underlying packages it needs to run. This all happens in the background, and after a short while, the program runs as if it had been on your machine all along.
While many companies could do this, Google is in a unique position to make a system like this a reality. With its data center infrastructure and world-class software engineers, it can easily fund a project to make the necessary modifications to the Python interpreter (as well as other languages if desired) and to operate a trusted code repository--the two key ingredients required to build this system.
With the right sponsorship, a system like this could lead to major changes in the way developers build and share software. It could do so at a surprisingly modest cost, because this system requires only straightforward modifications to existing and widely used programming tools. Just as hyperlinking was a straightforward enhancement to document markup languages that enabled the development of the World Wide Web, hyperlinked source code will bring similar benefits to software development.
Brian McConnell is an inventor, author, and serial telecom entrepreneur. He has founded three telecom startups since moving to California. The most recent, Open Communication Systems, designs cutting-edge telecom applications based on open standards telephony technology.