Last week in Singapore, Rick Rashid, Microsoft Senior Vice-President (Research), highlighted how computer science theories (and not just computers) are expanding scientists' arsenal in the fight against HIV. Below is an extract from the interview "Microsoft takes computer science into fight against HIV".
Computer science is giving scientists new ways to look at the virus that causes AIDS (acquired immune deficiency syndrome), perspectives that may help efforts to develop an effective vaccine and other medicines, according to the head of Microsoft’s research arm.
"It's really focused on new ways of thinking about how to describe and analyze systemic activities within a cell," said Rick Rashid. "Computer science theory, especially computer science languages, can actually be used to describe cell processes, and then the mathematics that we use to analyze programs can also be applied to analyze cell activities, because there's an underlying mathematical relationship."
"It's opening up people's minds to how computers can help them, not just to do their work better, but how the underlying theory and underlying computer science changes the way they look at their problems," he said.
Since 2005, Microsoft has sought to apply machine-learning techniques, including technology used in spam and antivirus filters, to AIDS research. The goal is to find genetic patterns in HIV that can be used to “train” the human immune system to fight the virus. In particular, Microsoft has looked for ways to track how HIV mutates to evade the human immune system.
"The idea is that because the genome is basically digital, it can be described as a string and analyzed as a string. It opens up an opportunity to think about a lot of problems in that space as data mining or machine-learning problems," Rashid said.
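Rashid's point that a genome "can be described as a string and analyzed as a string" is easy to illustrate. The sketch below is purely illustrative (it is not Microsoft's actual pipeline): it counts overlapping k-mers, a standard way to turn a genomic string into numeric features for data mining or machine-learning models.

```python
from collections import Counter

def kmer_counts(sequence: str, k: int = 3) -> Counter:
    """Count overlapping k-mers: a simple feature extraction that turns
    a genomic string into input for data-mining or ML models."""
    return Counter(sequence[i:i + k] for i in range(len(sequence) - k + 1))

# Toy nucleotide string; real analyses run over full viral genomes.
counts = kmer_counts("GATTACAGATTACA", 3)
print(counts["GAT"])  # 2
```

Feature vectors like these are what spam-filter-style classifiers consume, which is the analogy Rashid draws.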
In Part III, we saw how the current state of the art in Haskell type-level programming leaves some things to be desired: it requires duplicating both data declarations and code, and even worse, it’s untyped. What to do?
Currently, GHC’s core language has three “levels”:
- Expressions: these include things like term variables, lambdas, applications, and case expressions.
- Types: e.g. type variables, base types, forall, function types.
- Kinds: e.g. * and arrow kinds.
Types classify expressions (the compiler accepts only well-typed expressions), and kinds classify types (the compiler accepts only well-kinded types). For example, 3 :: Int, Int :: *, Maybe :: * -> *, and so on.
The basic idea is to allow the automatic lifting of types to kinds, and their data constructors to type constructors. For example, assuming again that we have
data Nat = Z | S Nat
we can view Z :: Nat as either the data constructor Z with type Nat, or the type Z with kind Nat. Likewise, S :: Nat -> Nat can be viewed either as the type of the data constructor S, or as the kind of the type constructor S.
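For readers coming to this later: this idea is essentially what shipped in GHC as the DataKinds extension. Here is a minimal sketch of the lifted Nat in that setting (assuming a DataKinds-capable GHC):

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

data Nat = Z | S Nat   -- Z and S are auto-lifted to the type level,
                       -- and Nat itself to the kind level

-- A length-indexed vector: the lifted Nat tracks length in the type
data Vec (n :: Nat) a where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a

-- A total head function: accepts only vectors of provably nonzero length
safeHead :: Vec ('S n) a -> a
safeHead (VCons x _) = x
```

The tick in 'Z and 'S marks the promoted (lifted) data constructor, as opposed to an ordinary type of the same name.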
One obvious question: if Z is a type, and types classify expressions, then what expressions have type Z? The answer: there aren't any. But this makes sense. We want to be able to use Z as a type-level "value", and don't really care about whether it classifies any expressions. And indeed, without this auto-lifting, if we wanted to have a type-level Z we would have declared an empty data type data Z.
Notice we have much richer kinds now, since we are basically importing an entire copy of the type level into the kind level. But that also means we will need something to classify kinds as well, so we need another level… and what’s to stop us from lifting kinds up to the next level, and so on? We would end up with an infinite hierarchy of levels. In fact, this is exactly what Omega does.
But in our case we can do something much simpler: we simply collapse the type and kind levels into a single level so that types and kinds are now the same thing, which I will call typekinds (for lack of a better term). We just take the ordinary syntax of types that we already had, and the only things we need to add are lifted data constructors and *. (There are still some questions about whether we represent arrow kinds using the arrow type constructor or forall, but I'll leave that aside for the moment.) To tie the knot, we add the axiom that the typekind * is classified by itself. It is well known that this allows the encoding of set-theoretic paradoxes that render the type system inconsistent when viewed as a logic; but Haskell's type system is already an inconsistent logic anyway, because of general recursion, so who cares?
So, what are the difficult issues remaining?
- Coercions: GHC’s core language includes a syntax of coercions for explicitly casting between equivalent types. Making the type system richer requires more sophisticated types of coercions and makes it harder to prove that everything still works out. But I think we have this mostly ironed out.
- Surface syntax: Suppose in addition to Z :: Nat we also declare a type called Z. This is legal, since expressions and types inhabit different namespaces. But now suppose GHC sees Z in a type. How does it know which Z we want? Is it the type Z, or is it the data constructor Z lifted to a type? There has to be a way for the programmer to specify what they mean. I think we have a solution to this that is simple to understand and not too heavyweight; I can write about this in more detail if anyone wants.
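For what it's worth, the disambiguation mechanism that eventually shipped in GHC's DataKinds extension is a quoting tick: a leading single quote selects the promoted data constructor. A sketch:

```haskell
{-# LANGUAGE DataKinds, EmptyDataDecls #-}

data Nat = Z | S Nat   -- the data constructor Z lives in the term namespace
data Z                 -- an unrelated empty type, also named Z

type PlainZ    = Z     -- plain Z in a type refers to the empty type above
type PromotedZ = 'Z    -- 'Z (with the tick) is the lifted data constructor
```

When no type of the same name is in scope, the tick is optional; when there is a clash, as here, the tick resolves it in favor of the promoted constructor.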
- Type inference: This probably makes type inference a lot harder. But I wouldn’t know for sure since that’s the one thing we haven’t really thought too hard about yet.
One final question that may be bothering some: why not just go all the way and collapse all the levels, and have a true dependently-typed language? It’s a valid question, and there are of course languages, notably Agda, Coq, and Epigram, which take this approach. However, one benefit of maintaining a separation between the expression and typekind levels is that it enables a simple erasure semantics: everything at the expression level is needed at runtime, and everything at the typekind level can be erased at compile time since it has no computational significance. Erasure analysis for languages with collapsed expression and type levels is still very much an active area of research.
There's more to say, but at this point it's probably easiest to just open things up to questions/comments/feature requests and I'll write more about whatever comes up! I should probably give more examples as well, which I'll try to do soon.
Protein tertiary structure
In biochemistry and molecular biology, the tertiary structure of a protein or any other macromolecule is its three-dimensional structure, as defined by its atomic coordinates. Tertiary structure is formed by the packing of protein secondary structure elements into compact globular units called protein domains. A whole protein can comprise one or several such domains, and its tertiary structure can refer to each individual domain as well as to the complete configuration of the whole protein, provided it contains a single, contiguous polypeptide chain backbone. Proteins that are formed by the assembly of separate, folded polypeptide chains give rise to quaternary structure.
A thorough understanding of the tertiary structure of proteins and its determinants is a long-standing issue in biochemistry. The first predicted structure of a globular protein was the cyclol model of Dorothy Wrinch, but this was quickly discounted as inconsistent with experimental data. Modern methods are sometimes able to predict the tertiary structure de novo to within 5 Å for small proteins (<120 residues) under favorable conditions, e.g., confident secondary structure predictions.
Determinants of tertiary structure
In globular proteins, tertiary interactions are frequently stabilized by the sequestration of hydrophobic amino acid residues in the protein core, from which water is excluded, and by the consequent enrichment of charged or hydrophilic residues on the protein's water-exposed surface. In secreted proteins that do not spend time in the cytoplasm, disulfide bonds between cysteine residues help to maintain the protein's tertiary structure. A variety of common and stable tertiary structures appear in a large number of proteins that are unrelated in both function and evolution - for example, many proteins are shaped like a TIM barrel, named for the enzyme triose phosphate isomerase. Another common structure is the highly stable dimeric coiled-coil structure composed of 2-7 alpha helices. Proteins are classified by the folds they represent in databases such as SCOP and CATH.
Stability of native states
The most typical conformation of a protein in its cellular environment is generally referred to as the native state or native conformation. It is commonly assumed that this state is also the most thermodynamically stable conformation attainable for any given primary structure. This is a reasonable first approximation, but the claim assumes that the reaction is not under kinetic control - that is, that the time required for the protein to attain its native conformation after being translated is small.
In the cell, a variety of protein chaperones assist a newly synthesized polypeptide in attaining its native conformation. Some of these proteins are highly specific in their function, such as protein disulfide isomerase. Others are very general and can be of assistance to most globular proteins - the prokaryotic GroEL/GroES system and the homologous eukaryotic Heat shock proteins Hsp60/Hsp10 system fall into this category.
Some proteins explicitly take advantage of the fact that they can become kinetically trapped in a relatively high-energy conformation due to folding kinetics. Influenza hemagglutinin, for example, is synthesized as a single polypeptide chain; the "mature" activated protein is proteolytically cleaved to form two polypeptide chains that are trapped in a high-energy conformation. Upon encountering a drop in pH, the protein undergoes an energetically favorable conformational rearrangement that enables it to penetrate a host cell membrane.
Relationship to primary structure
Tertiary structure is considered to be largely determined by the protein's primary structure - the sequence of amino acids of which the protein is composed. Efforts to predict the tertiary structure from the primary structure are known generally as protein structure prediction. However, the environment in which a protein is synthesized and allowed to fold is a significant determinant of its final shape and is usually not directly taken into account by current prediction methods. Most such methods rely on comparisons between the sequence to be predicted and sequences of known structure in the Protein Data Bank. Thus, they account for the environment indirectly, assuming the target and template sequences share similar cellular contexts.
Proteins, due to the precise conformations they fold into, are nature's original nanomachines. Developing an inexpensive and practical way of designing and targeting proteins would revolutionize medicine and would have incredibly far-reaching implications. The significance of such a discovery cannot be overstated.
The majority of protein structures known to date have been solved by the experimental technique of X-ray crystallography, which typically provides high-resolution data but no time-dependent information on the protein's conformational flexibility. The second most common method of solving protein structures uses NMR, which generally provides somewhat lower-resolution data and is limited to relatively small proteins, but can provide time-dependent information about the motion of a protein in solution. Dual polarisation interferometry is a time-resolved analytical method for determining the overall conformation and conformational changes of surface-captured proteins, providing information complementary to these high-resolution methods. More is known about the tertiary structural features of soluble globular proteins than about membrane proteins, because the latter class is extremely difficult to study using these methods.
Stanford University's Folding@home project is a distributed computing research effort which uses its approximately 5 petaFLOPS (~10 x86 petaFLOPS) of computing power to attempt to model the tertiary and quaternary structures of proteins, as well as other aspects of how and why proteins fold into inordinately complex and varied shapes. No existing algorithm can yet consistently predict a protein's tertiary or quaternary structure given only its primary structure. Learning how to accurately predict the tertiary and quaternary structure of any protein given only its amino acid sequence and the pertinent cellular conditions would be a monumental achievement. The calculations performed by the algorithms are constantly evolving, increasing in complexity and nuance, and involve an enormous number of variables. These techniques are superficially comparable to weather models that show hurricane storm tracks: each of several algorithms independently models a complex system, each somewhat different from its sister algorithms, and the average of all the algorithms' output is taken to be the most likely "storm track". The shape of proteins can be elucidated through a somewhat similar process.
Researchers are also interested in proteins that can fold into more than one stable configuration. Protein aggregation diseases such as Alzheimer's Disease and Huntington's Disease as well as prion diseases such as Mad Cow disease can be better understood by constructing (and reconstructing) disease models. The most common way of doing this is by developing a way of inducing the desired disease state in test animals, for example by administering MPTP to give the animals Parkinson's disease, or knocking out a gene essential for the prevention of certain tumors from the animals' genomes. The Folding@home project and other projects similar to it now allow for the modelling of such disease states. Perhaps more importantly, full human proteins encoded by full human genes can be used without any of the ethical problems that arise in studying living human beings. They are quickly becoming indispensable tools among researchers from a broad variety of disciplines.
See also
- Protein structure
- Folding (chemistry)
- Quaternary structure
- Structural biology
- Protein contact map
- Proteopedia, a collaborative encyclopedia of proteins and other molecules.
- IUPAC, Compendium of Chemical Terminology, 2nd ed. (the "Gold Book") (1997). Online corrected version: (2006–) "tertiary structure".
- Branden, Carl & Tooze, John (1991, 1999). Introduction to Protein Structure. New York: Garland Publishing, Inc.
- Kyte, Jack (1995). Structure in Protein Chemistry. New York: Garland Publishing, Inc. ISBN 0-81531701-8.
- Whisstock J, Bottomley S (2006). "Molecular gymnastics: serpin structure, folding and misfolding". Curr Opin Struct Biol 16 (6): 761–8. doi:10.1016/j.sbi.2006.10.005. PMID 17079131.
- Gettins P (2002). "Serpin structure, mechanism, and function". Chem Rev 102 (12): 4751–804. doi:10.1021/cr010170. PMID 12475206.
- Whisstock JC, Skinner R, Carrell RW, Lesk AM (2000). "Conformational changes in serpins: I. The native and cleaved conformations of alpha(1)-antitrypsin". J Mol Biol. 296 (2): 685–99. doi:10.1006/jmbi.1999.3520. PMID 10669617.
- "Folding@home". Retrieved 2010-12-18.
- "Folding@home - FAQ". Retrieved 2010-12-18.
- "Folding@home - Science".
- "NHC Model Overview". Retrieved 2010-12-18.
- Schober A (October 2004). "Classic toxin-induced animal models of Parkinson's disease: 6-OHDA and MPTP". Cell Tissue Res. 318 (1): 215–24. doi:10.1007/s00441-004-0938-y. PMID 15503155.
- "Tp53 Knockout Rat". Cancer. Retrieved 2010-12-18.
- "Feature - What is Folding and Why Does it Matter?". Retrieved December 18, 2010.
- Protein Data Bank
- Display, analyse and superimpose protein 3D structures
- Visualize, analyze and compare multiple protein/DNA structures and their sequences simultaneously.
- WWW-based course teaching elementary protein bioinformatics
- Critical Assessment of Structure Prediction (CASP)
- Structural Classification of Proteins (SCOP)
- CATH Protein Structure Classification
- DALI/FSSP software and database of superposed protein structures
- TOPOFIT-DB Invariant Structural Cores between proteins
- PDBWiki - a website for community annotation of PDB structures.
Lowell's astronomers carry out research in areas spanning much of modern astrophysics, from studies of tiny icy objects in our own solar system to the structure of distant galaxies. Meet our scientists and learn more about our diverse programs here.
Solar and Stellar Activity Cycles
Wes Lockwood, Jeffrey Hall, and Brian Skiff
Twenty years ago, stimulated by the new knowledge that the Sun's brightness variations over the 11-year solar cycle were less than 0.1 percent, Lockwood and colleagues began a systematic photometric study of the small brightness fluctuations of sunlike stars of various ages. Using the 21-inch telescope and a dedicated photometer, Brian Skiff observed several dozen sunlike stars for 16 consecutive seasons. Here's what they found: (1) a majority of sunlike stars have detectable year-to-year variations ranging from as small as 0.3 percent to several percent; (2) the amount of variability decreases with increasing stellar age; and (3) in comparison with the stars in our survey, the Sun appears to be relatively quiescent. This may turn out to be a very important result in the arena of Sun-climate studies. Many of these same stars have also been observed spectroscopically using Lowell's Solar-Stellar Spectrograph, an instrument fed by optical fiber from a solar feed and from the 1.1-m J. S. Hall telescope at Anderson Mesa. This program, operated in collaboration with Jeffrey Hall, is intended to characterize the magnetic activity of these stars and of the Sun on the timescale of the 11-year solar cycle.
Select a program from the list below to read more about it.
A report on the annual meeting of the Society for Developmental Biology, Madison, Wisconsin, USA, 21-24 July 2002.
The introduction of new concepts and of new techniques is equally important in advancing scientific research; ideally, they push each other forward. The talks at the recent Society for Developmental Biology (SDB) meeting gave ample evidence that technical progress continues to transform research in different areas of developmental biology. The last few years have seen remarkable progress in large-scale DNA sequencing, gene-expression analysis, and bioinformatics. The full genome sequence of several model organisms is now available, and new ones are being added continually. Another technical advance is the availability of microarrays and of serial analysis of gene expression (SAGE), and of methods to analyze the data they generate. Microarrays (and SAGE) measure levels of gene expression simultaneously for a large number of genes and, combined with the availability of whole-genome sequence data, have given us remarkable new tools for examining development. Several talks at the SDB meeting highlighted these advances and focused on new concepts of gene regulation and genetic networks that are beginning to emerge.
In the session on 'dealing with complexity', Stuart Kim (Stanford University, USA) described the kinds of new information that can be derived from a whole-genome approach to studying Caenorhabditis elegans. Microarray experiments can be used to analyze the expression and regulation of genes in different tissues and under different conditions. A number of different approaches have been used to isolate mRNA from specific tissues in cases where isolating the tissues by manual dissection is impractical. For example, in Drosophila, researchers have taken advantage of maternal-effect mutations to generate egg collections in which 100% of the embryos are mutant or have used green fluorescent protein (GFP) markers to allow automated sorting of specific embryos or cells. Recently, a technique has been developed called mRNA tagging, which allows mRNA to be labeled in living cells and subsequently to be isolated. A tissue-specific promoter drives the expression of a FLAG-tagged form of poly (A)-binding protein; all the mRNA in a desired population of cells is thus selectively tagged and can be isolated from the total mRNA of the animal. In this way, the Kim lab has been able to gather microarray data on specific expression in C. elegans tissues such as muscle cells, which are otherwise difficult to isolate. They have shown that the mRNAs of 1,364 genes are significantly enriched in nematode muscle. When mapped back to their chromosomal locations, the genes co-expressed in muscle were often found in clusters of 2-5 genes that cannot be explained as local gene-duplication events. This result suggests that the local chromosomal environment may play an important role in the regulation of muscle genes, perhaps as a consequence of chromatin opening.
The total amount of microarray data now available for model systems such as C. elegans has become staggering. Another innovation discussed in Kim's talk was the use of topographic mapping of array data to make it more accessible by presenting it in a useful visual format (Figure 1). This tool makes it far easier for the investigator to discover interesting links within the data and to examine the relationships between genes at many levels of resolution.
Figure 1. A 'topomap' of Caenorhabditis elegans gene expression. The co-regulation of genes is plotted in three dimensions; the z axis represents gene density and the position in the x-y plane is a relative measure of gene relatedness based on the microarray expression data. Selected classes of gene are enriched in specific 'mountains'. Image courtesy of Stuart Kim.
In the same session, Richard Young (Massachusetts Institute of Technology, Cambridge, USA) illustrated the power of a genome-wide analysis of the location of DNA-bound proteins on yeast chromosomes to aid understanding of genetic regulatory networks. In the initial experiments to assay genome-wide location, done in the Young lab two years ago, two DNA-binding proteins, Gal4 and Ste12, were co-immunoprecipitated with their respective DNA sequences from cells grown under various conditions (such as different carbon sources and levels of pheromones). Using microarrays, it was then possible to determine which genes corresponded to the bound sequences. As a result, they were able to find new downstream targets of these two DNA-binding proteins that had not been detected by previous mutant screens. In subsequent experiments, this approach was expanded to the analysis of nine proteins that have roles in cell-cycle regulation. The fascinating picture emerging from this study is that transcriptional activators functioning during one stage of the cell cycle regulate transcriptional activators of the following stage. The data from these experiments are now being used to assemble models that depict large regulatory networks, and the approach will clearly be applicable to all systems. With knowledge of such networks, we can understand how different networks are inter-related and how different cellular functions are coordinated.
Also in this session, David Keys (University of California, Berkeley, USA) described the experimental design and early data from a screen for tissue-specific enhancers in the sea squirt Ciona intestinalis. Adult C. intestinalis could be easily mistaken for a plant or sponge, but as an embryo and larva C. intestinalis has a prominent notochord, a morphological structure also found in vertebrates, revealing its close relationship to this group of animals. The project is designed to allow a comprehensive analysis of the enhancer sequences of C. intestinalis. To locate the enhancers, genomic DNA of C. intestinalis is cut into pieces of about 1,500 base pairs (bp), cloned into a vector that allows potential enhancers to drive expression of the reporter gene lacZ, and introduced into embryos by electroporation. After just a few hours, the embryos can be assayed for lacZ expression and any tissue-specific patterns of expression can be recorded. The prediction is that the enhancers that drive similar patterns of expression will share sequence motifs that can be used to establish a 'cis-regulatory code'. Because the genome of C. intestinalis has been sequenced, these putative enhancer sequences can also be associated with specific genes. The analysis is further improved by sequence comparisons with a closely related species, Ciona savignyi, whose genome has also been sequenced. These studies should also help us understand the evolution of enhancers, a central component of understanding evolution as a whole.
In the session on germ cells, Ruth Lehmann (New York University and Howard Hughes Medical Institute, USA) described a comprehensive approach to understanding the migration of germ cells in Drosophila. This particular avenue of research began several years ago with a mutagenesis screen that identified Hmgcr as a gene that mediates the attraction of germ cells towards the gonad. Somewhat surprisingly, Hmgcr encodes the HMG-CoA reductase enzyme, which is at the beginning of the well-studied metabolic pathway for cholesterol synthesis. Given that the entire Drosophila genome sequence is now available, Lehmann and colleagues were able to identify additional genes in the Drosophila genome that encode the other enzymes in the cholesterol biosynthesis pathway and to analyze their expression by in situ hybridization, to find out which parts of that pathway might also be necessary for germ cell migration. They found that the pathway in Drosophila lacks several of the terminal biosynthetic steps, explaining the long-known fact that insects cannot synthesize cholesterol. It will, of course, be of great interest to know exactly which product in the pathway is responsible for germ cell attraction.
In the same session, June Nasrallah (Cornell University, Ithaca, USA) presented an approach using a model plant, Arabidopsis thaliana, for the analysis of self-incompatibility, a process that prevents self-fertilization and ensures cross-fertilization in plants. Although self-incompatibility is absent from A. thaliana, it is present in a vast number of plant species, among them Arabidopsis lyrata, a species closely related to A. thaliana. Self-incompatibility in A. lyrata is mediated by stigma receptor kinases (SRKs) in the female stigma and their ligands, S-locus cysteine-rich proteins (SCRs), in male pollen; the coding sequences of these two genes are among the most polymorphic known in eukaryotes. In A. thaliana, the SRK and SCR genes are truncated and nonfunctional, consistent with the absence of self-incompatibility. When a pair of the SRK and SCR genes from A. lyrata is transformed into A. thaliana, however, self-incompatibility is conferred on A. thaliana. These experiments demonstrate an elegant approach for cross-species studies and open up the analysis of self-incompatibility to the many genetic and genomic tools available for A. thaliana. They also present a fascinating case through which to explore how and why genes upstream in a genetic cascade can be lost during evolution while their downstream targets are maintained.
Modern developmental biology does not consist of just genome sequences and microarrays, nor are these even always the most exciting approaches. There is, however, no doubt that the availability of genomic tools is having a major impact on the field of developmental biology, and we can look forward to many exciting new breakthroughs as a result.
In offering this book to teachers of elementary chemistry the authors lay no claim to any great originality. It has been their aim to prepare a text-book constructed along lines which have become recognized as best suited to an elementary treatment of the subject. At the same time they have made a consistent effort to make the text clear in outline, simple in style and language, conservatively modern in point of view, and thoroughly teachable.
these transformations all the other physical properties of a substance save weight are likely to change, the inquiry arises, Does the weight also change? Much careful experimenting has shown that it does not. The weight of the products formed in any change in matter always equals the weight of the substances undergoing change.
~Law of conservation of matter.~ The important truth just stated is frequently referred to as the law of conservation of matter, and this law may be briefly stated thus: Matter can neither be created nor destroyed, though it can be changed from one form into another.
~Classification of matter.~ At first sight there appears to be no limit to the varieties of matter of which the world is made. For convenience in study we may classify all these varieties under three heads, namely, mechanical mixtures, chemical compounds, and elements.
[Illustration: Fig. 1]
~Mechanical mixtures.~ If equal bulks of common salt and iron filings
"Math Chat" began as a live phone-in TV show in the USA, spawned a newspaper column and a website, and now it has produced a book. The whole project was the brainchild of Williams' College mathematician Frank Morgan and has both stimulated interest in maths across a broad range of the community and led to the formation of highly successful undergraduate research groups in mathematics.
This is one of the world's outstanding pedagogic texts. It has the rare distinction of being a mathematics book that has sold a million copies. The COMAP project is a coalition of leading mathematicians and educators, directed by Solomon Garfunkel, who over a period of twelve years and five ever-expanding editions have created a beautiful introduction to the practical applications of some of the most important areas of discrete mathematics.
One of the most striking and powerful means of presenting numbers is completely ignored in the mathematics that is taught in schools, and it rarely makes an appearance in university courses. Yet the continued fraction is one of the most revealing representations of many numbers, sometimes containing extraordinary patterns and symmetries. John D. Barrow explains.
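Barrow's subject is easy to explore computationally. The sketch below (illustrative only) unwinds the continued fraction expansion of a rational number by repeatedly splitting off the integer part and taking the reciprocal of the remainder:

```python
from fractions import Fraction

def continued_fraction(x: Fraction, max_terms: int = 10) -> list:
    """Return the continued fraction expansion [a0; a1, a2, ...] of x."""
    terms = []
    for _ in range(max_terms):
        a = x.numerator // x.denominator   # split off the integer part
        terms.append(a)
        frac = x - a
        if frac == 0:                      # expansion terminates for rationals
            break
        x = 1 / frac                       # recurse on the reciprocal
    return terms

# The classic approximation pi ~ 355/113 has a strikingly short expansion:
print(continued_fraction(Fraction(355, 113)))   # [3, 7, 16]
```

For irrational numbers the expansion never terminates, which is where the "extraordinary patterns" appear: the golden ratio, for instance, expands to all ones.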
5 Written Questions
5 Matching Questions
- What is the periodic table?
- Alkali Metals
- What is the periodic law?
- a is an atom or group of bonded atoms that has a positive or negative charge
- b group 17 elements that react vigorously with most metals to form salts
- c the physical and chemical properties of the elements are periodic functions of their atomic numbers
- d elements that often have a silvery appearance and are soft to cut. not found free in nature and usually stored in kerosene. highly reactive to most nonmetals and water.
- e is an arrangement of the elements in order of their atomic numbers so that elements with similar properties fall in the same column or group
5 Multiple Choice Questions
- the energies for removal of additional electrons from an atom
- any process that results in formation of an ion
- the energy change that occurs when an electron is acquired by a neutral atom
- In the modern periodic table, how are elements arranged
- the p block elements together with the s block elements
5 True/False Questions
What is the trend for atomic radius → decreases across a period and increases down a group
Atomic Radius → In the modern periodic table, how are elements arranged
Ionization Energy (IE1) → any process that results in formation of an ion
What did Mendeleev do? → arranged in order of increasing atomic mass
Transition elements → d block elements with typical metallic properties. good conductors of electricity and less reactive. | <urn:uuid:335ccda7-ea1b-42f8-909b-5354716d35f0> | 3.78125 | 324 | Q&A Forum | Science & Tech. | 30.153018 |
In the 1830s, at the peak of the canal-building era, hydropower was used to transport barge traffic up and down steep hills using inclined plane railroads. For direct mechanical power transmission industries that used hydropower had to be near the waterfall. For example, during the last half of the 19th century, many grist mills were built at Saint Anthony Falls, to use the 50 foot (15 metre) drop in the Mississippi River. The mills were important for the growth of Minneapolis. Today the largest use of hydropower is for electric power generation. That allows low cost energy to be used at long distances from the watercourse.
Types of water power [change]
There are many forms of water power:
- Water wheels, used for hundreds of years to power mills and machinery
- Hydroelectric energy, a term usually reserved for hydroelectric dams.
- Tidal energy, which captures energy from the vertical rise and fall of the tides, usually behind a barrage
- Tidal stream power, which captures energy from horizontal tidal currents
- Wave power, which uses the energy in waves
Hydroelectric power [change]
Main article: Hydroelectricity
Hydroelectric power converts the energy of falling water into electricity. Hydroelectric power now supplies about 715,000 MWe, or 19% of world electricity (16% in 2003). Large dams are still being designed. Apart from a few countries with plenty of it, hydro power is normally applied to peak load demand because it is readily stopped and started. Nevertheless, hydroelectric power is probably not a major option for the future of energy production in the developed nations because most major sites within these nations are either already being exploited or are unavailable for other reasons, such as environmental considerations.
Hydroelectric power can be far less expensive than electricity generated from fossil fuel or nuclear energy. Areas with abundant hydroelectric power attract industry. Environmental concerns about the effects of reservoirs may prohibit development of economic hydropower sources.
The chief advantage of hydroelectric dams is their ability to handle seasonal (as well as daily) high peak loads. When the electricity demands drop, the dam simply stores more water. Some electricity generators use water dams to store excess energy (often during the night), by using the electricity to pump water up into a basin. Electricity can be generated when demand increases. In practice the utilization of stored water in river dams is sometimes complicated by demands for irrigation which may occur out of phase with peak electrical demands.
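The power available from a dam can be estimated from its head (the height the water falls) and flow rate. Below is a minimal Python sketch of the standard formula P = ηρgQh; the 500 m³/s flow figure is a made-up illustration, not a measured value for any real site.

```python
def hydro_power_watts(flow_m3_s, head_m, efficiency=0.9):
    """P = eta * rho * g * Q * h: power extracted from falling water."""
    rho = 1000.0   # density of water, kg/m^3
    g = 9.81       # gravitational acceleration, m/s^2
    return efficiency * rho * g * flow_m3_s * head_m

# A 15 m drop (like the one at Saint Anthony Falls) with a
# hypothetical 500 m^3/s flow:
print(hydro_power_watts(500, 15) / 1e6, "MW")   # ~66 MW
```

Doubling either the head or the flow doubles the power, which is why both high dams and large rivers attract hydroelectric development.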
Tidal power [change]
Harnessing the tides in a bay or estuary has been achieved in France (since 1966), Canada and Russia, and could be achieved in other areas with a large tidal range. The trapped water turns turbines as it is released through the tidal barrage in either direction. One drawback is that the system generates electricity most efficiently in bursts every six hours (once every tide). This limits how tidal energy can be used.
Tidal stream power [change]
A relatively new technology, tidal stream generators draw energy from currents in much the same way that wind generators do. The higher density of water means that a single generator can provide significant power. This technology is at an early stage of development and will need more research before it can produce significant amounts of energy.
But several devices have been tested in the UK, France and the USA. As early as 2003, a turbine that produces 300 kW was tested in the UK.
The Canadian company Blue Energy has plans for installing very large arrays of tidal current devices mounted in what they call a 'tidal fence' in various locations around the world, based on a vertical axis turbine design.
Wave power [change]
Power from ocean surface wave motion might produce much more energy than tides. Tests, particularly in Scotland in the UK, have shown that it is possible to produce energy from waves, but there are still a lot of technical problems.
A prototype shore based wave power generator is being constructed at Port Kembla in Australia and is expected to generate up to 500 MWh annually. Wave energy is captured by an air driven generator and converted to electricity. For countries with large coastlines and rough sea conditions, the energy of waves offers the possibility of generating electricity in utility volumes. Excess power during rough seas could be used to produce hydrogen.
Other pages [change]
- Micro-hydro power, Adam Harvey, 2004, Intermediate Technology Development Group, retrieved 1 January 2005 from http://www.itdg.org/docs/technical_information_service/micro_hydro_power.pdf.
- Microhydropower Systems, U.S. Department of Energy, Energy Efficiency and Renewable Energy, 2005
Other websites [change]
- International Centre for Hydropower (ICH) hydropower portal with links to numerous organisations related to hydropower worldwide
- Practical Action (ITDG) a UK charity developing micro-hydro power and giving extensive technical documentation.
- National Hydropower Association
- British Hydropower Association
- Congressional Research Service (CRS) Reports regarding Hydropower
- Hydro Quebec
- The Federal Energy Regulatory Commission (FERC) Federal Agency that regulates more than 1500 hydropower dams in the United States.
- Hydropower Reform Coalition A U.S.-based coalition of more than 130 national, state, and local conservation and recreation groups that seek to protect and restore rivers affected by hydropower dams.
- Small Scale Hydro Power | <urn:uuid:ac60fcbc-967b-4830-8061-61d5589687d6> | 3.84375 | 1,112 | Knowledge Article | Science & Tech. | 27.135236 |
A hot wind constantly buffets Earth and the other bodies of the solar system: the solar wind. It's a flow of electrically charged particles from the Sun. It blows steadily at a million miles an hour or more, with gusts of more than twice that.
When the solar wind hits Earth, it triggers some impressive effects. It creates beautiful aurorae -- the shimmering curtains of color known as the northern and southern lights. It warms the outer atmosphere. And it can disrupt radio broadcasts and electric power grids.
The source of the solar wind has remained something of a mystery -- scientists have had a hard time figuring out how the particles are blown into space.
But astronomers who've watched the Sun with a Japanese spacecraft say they've found the answer.
The Sun's surface temperature isn't the same all over -- there are hot spots and cool spots. Observations by the Hinode spacecraft show that the hotspots produce their own magnetic fields -- bright filaments that loop above the Sun.
Charged particles follow these lines of magnetic force into space. Where the magnetic fields of two bright spots intersect, it's like streams from two water hoses flowing together -- there's a spray of particles heading in different directions. Some of the particles flow out into space -- forming the solar wind.
More about the Sun tomorrow.
Script by Damond Benningfield, Copyright 2008
For more skywatching tips, astronomy news, and much more, read StarDate magazine. | <urn:uuid:18b09402-09a9-4c47-bdfd-62aa3094f2fa> | 4.09375 | 301 | Truncated | Science & Tech. | 52.825057 |
Episode 705: Cosmology
Cosmology is the study of the origins, history and future of the universe. The currently favoured model is the big bang.
- Discussion: The hot big bang (10 minutes)
- Discussion: How old is the universe? (10 minutes)
- Student questions: The age of the universe (20 minutes)
- Discussion: Cosmic microwave background (10 minutes)
- Discussion: The future of the universe (10 minutes)
- Student questions: Critical density (20 minutes)
- Discussion: Missing mass and dark energy (10 minutes)
- Student activity: Olbers’ paradox (20 minutes)
Discussion: The hot big bang
Imagine running a film of the universe ‘backwards’ – all matter and energy were originally in a very, very dense state. All exploded outwards = the big bang (Fred Hoyle coined this name, intending it to be derisive).
What happened before the big bang? Space-time was created at the big bang; so many cosmologists argue that the question has no physical meaning.
At the present time (2005) the overwhelming weight of evidence favours the Hot Big Bang theory. There are 3 independent pieces of evidence:
- The observed expansion of the universe
- The Cosmic Microwave Background (CMB) radiation
- The cosmic relative abundance of the light elements (created by the big bang, rather than subsequently in stars or supernovae)
Discussion: How old is the universe?
Episode 704-3: Hubble’s law and the age of the universe (Word, 200 KB)
As discussed above, the Hubble constant gives us a means of estimating the age of the universe. It is ~ 14 billion years old.
Episode 705-1: The ‘age’ of the universe (Word, 102 KB)
Thus the size of the visible universe is set by how far a light beam has travelled since the big bang = c × age of universe = 14 × 10^9 light years, or 1.3 × 10^26 m.
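The estimate above can be reproduced in a few lines. The sketch below (Python, assuming H0 ≈ 70 km/s/Mpc, a commonly used round value) computes the Hubble time 1/H0, the naive age you get by assuming a constant expansion rate:

```python
def hubble_time_years(H0_km_s_Mpc):
    """Naive age estimate t = 1/H0 (assumes constant expansion rate)."""
    Mpc_m = 3.086e22                      # metres per megaparsec
    year_s = 3.156e7                      # seconds per year
    H0_si = H0_km_s_Mpc * 1000 / Mpc_m    # convert km/s/Mpc to s^-1
    return 1 / H0_si / year_s

print(hubble_time_years(70) / 1e9)   # ~14 billion years
```

Students can vary H0 within its observational uncertainty to see how sensitive the age estimate is to the measured value.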
Student questions: The age of the universe
Students can look at a number of estimates of the age of the universe.
Episode 705-2: Calculating the age of the universe (Word, 59 KB)
Episode 705-3 The parsec (Word, 50 KB)
Discussion: Cosmic microwave background (CMB)
Space is filled with the cosmic microwave background radiation. This allows an estimate of how much the universe has stretched since it was emitted.
Episode 705-4: The cosmic microwave background radiation (Word, 198 KB)
Discussion: The future of the universe
This is a topic that is still the focus of research and debate.
If there is enough mass in the universe then its gravitational attraction will eventually overcome the expansion. The universe will stop expanding and then collapse to an eventual big crunch (cf. throwing an object upwards with less than the escape velocity). This is called a closed universe.
If the quantity of mass is just right then the expansion slows to zero ‘at infinity’. This is called a flat universe.
If there is not enough mass for its gravitation effect to overcome the expansion, the universe will continue to expand forever. This is called an open universe.
A closed universe might rebound forever – a big bang eventually resulting in a big crunch which rebounds into a big bang and so on. The whole universe may be a gigantic oscillator!
The critical density is the demarcation between an open and closed universe.
Student questions: Critical density
Observation has yet to pin down the actual density with sufficient precision to decide if our universe has a density larger or smaller than the predicted critical density ρ0.
Episode 705-5: Critical density (Word, 36 KB)
Discussion: Missing mass and dark energy
According to the standard cosmological model, the universe consists of three categories of mass/energy. Dark matter (25%), dark energy (70%) and a smattering of normal matter (5%).
Dark matter was invented to account for the observed rotational shape of galaxies. Dark energy has been invented more recently to account for the latest red shift data: as of 2005, at the highest red shifts the expansion of the universe seems to be accelerating!
Student activity: Olbers’ paradox
To exemplify the depth of ideas that can come from thinking about a simple observation – that the sky is dark at night – students can read about Olbers’ paradox. They could report their thoughts to the class.
Episode 705-6: The sky is dark at night (Word, 46 KB)
Download this episode
Episode 705: Cosmology (Word, 455 KB) | <urn:uuid:1993b933-798d-4809-a70f-59aa508036bf> | 3.625 | 978 | Content Listing | Science & Tech. | 42.87779 |
A dynamic-link library (DLL) is a module that contains functions and data that can be used by another module (application or DLL). In Linux/UNIX, the same concept is implemented in shared object (.so) files. From now on, I use the term shared libraries to refer to DLL and SO files.
Advantages of using shared libraries are:
- Create modular applications
- Reduce memory overhead when several applications use the same functionality at the same time, because although each application gets its own copy of the data, they can share the code.
This article will address the following topics:
- Fundamentals of shared libraries
- Differences between DLL and SO
- How can we create and use shared libraries in both Windows and Linux?
- How can we write platform-independent code? Keep the same source files and compile for different platforms.
Creating, Linking and Compiling the DLL or SO:
- Write code for the DLL or SO. Identify the functions or variables that are to be available for the calling process.
- Compile the source code into an object file.
- Link that object file into either a DLL or SO.
Accessing the DLL or SO from a Calling Process:
- Load the DLL or SO.
- Get a pointer to the exported function or variable.
- Utilize the exported function or variable.
- Close the library.
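The four steps above can be sketched with Python's ctypes module, which wraps dlopen()/LoadLibrary() portably. Here, as an illustration, we resolve abs() from the C library of the running process on POSIX; on Windows one would pass a DLL name to ctypes.CDLL instead:

```python
import ctypes

def call_abs(x):
    libc = ctypes.CDLL(None)       # 1. load: dlopen()s the running process (POSIX)
    fn = libc.abs                  # 2. get a pointer to the exported function
    fn.argtypes = [ctypes.c_int]   #    declare the C signature for safety
    fn.restype = ctypes.c_int
    return fn(x)                   # 3. use it (4: ctypes releases the handle
                                   #    when the object is garbage-collected)

print(call_abs(-42))   # 42
```

The rest of this article shows how the same four steps look when written directly against the dlopen() and LoadLibrary() APIs in C/C++.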
There are many differences in the way shared libraries are created, exported and used.
A DLL can define two kinds of functions: exported and internal. The exported functions are intended to be called by other modules, as well as from within the DLL where they are defined. Internal functions are typically intended to be called only from within the DLL where they are defined. Although a DLL can export data, its data is generally used only by its functions. However, there is nothing to prevent another module from reading or writing that address.
But in the case of Linux/Unix, no special export statement needs to be added to the code to indicate exportable symbols, since all symbols are available to an interrogating process (the process which loads the SO/DLL).
Export symbols in source file:
  Linux/UNIX:  No export symbol required.
  Windows:     __declspec( dllexport )

Loading the shared library:
  Linux/UNIX:  void* dlopen( const char *pathname, int mode );
  Windows:     HINSTANCE LoadLibrary( LPCTSTR lpLibFileName );

Runtime access of functions:
  Linux/UNIX:  void* dlsym( void* handle, const char *name );
  Windows:     FARPROC GetProcAddress( HMODULE hModule, LPCSTR lpProcName );

Closing the shared library:
  Linux/UNIX:  int dlclose( void *handle );
  Windows:     BOOL FreeLibrary( HMODULE hLibModule );
Read further for more information on the above functions.
Creating, Compiling and Linking the DLL or SO
All UNIX object files are candidates for inclusion into a shared object library. No special export statements need to be added to the code to indicate exportable symbols, since all symbols are available to an interrogating process (the process which loads the SO/DLL).
In Windows NT, however, only the specified symbols will be exported (i.e., available to an interrogating process). Exportable objects are indicated by including the keyword '__declspec(dllexport)'. The following examples demonstrate how to export variables and functions.
__declspec( dllexport ) void MyExportFunction();
__declspec (dllexport) int MyExportVariable;
Both DLL and SO files are linked from compiled object files.
In Windows, most of the IDEs automatically help you compile and link the DLL.
CC = g++
CFLAGS = -c -fPIC

add.so : add.o
	$(CC) add.o -shared -o add.so

add.o : add.cpp
	$(CC) $(CFLAGS) add.cpp
Under UNIX, the linking of object code into a shared library can be accomplished using the '-shared' option of the linker executable 'ld'. For example, the following command line can be used to create an SO file add from add.cpp.
Accessing the DLL or SO
To use the shared objects in UNIX, the include directive '#include <dlfcn.h>' must be used. Under Windows, the include directive '#include <windows.h>' must be used.
In Unix, loading the SO file can be accomplished from the function dlopen(). The function protoype is:
void* dlopen( const char *pathname, int mode )
The argument pathname is either the absolute or relative (from the current directory) path and filename of the .SO file to load. The argument mode is either the symbol RTLD_LAZY or RTLD_NOW. RTLD_LAZY will locate symbols in the file given by pathname as they are referenced, while RTLD_NOW will locate all symbols before returning. The function dlopen() will return a handle to the opened library, or NULL if there is an error.
#define RTLD_LAZY 1
#define RTLD_NOW 2
Under Windows, the function to load a library is given by:
HINSTANCE LoadLibrary( LPCTSTR lpLibFileName );
In this case, lpLibFileName carries the filename of an executable module. This function returns a handle to the DLL (of type HINSTANCE), or NULL if there is an error.
Under UNIX, the shared object will be searched for in the following places:
- In the directory specified by the pathname argument to dlopen(), if pathname is not a simple file name (i.e. it contains a '/' character). In this case, the exact file is the only place searched; steps two through four below are ignored.
- In any path specified via the -rpath argument to ld(1) when the executable was statically linked.
- In any directory specified by the environment variable LD_LIBRARY_PATH. If LD_LIBRARY_PATH is not set, 64-bit programs will also examine the variable LD_LIBRARY64_PATH, and new 32-bit ABI programs will examine the variable LD_LIBRARYN32_PATH to determine if an ABI-specific path has been specified. All three of these variables will be ignored if the process is running setuid or setgid.
- The default search paths will be used. These are /usr/lib:/lib for 32-bit programs, /usr/lib64:/lib64 for 64-bit programs, and /usr/lib32:/lib32 for new 32-bit ABI programs.
Under Windows, the shared object will be searched for in the following places:
- The directory from which the application loaded
- The current directory
- Windows 95 and Windows 98: The Windows system directory. Use the GetSystemDirectory function to get the path of this directory.
- Windows NT: The 32-bit Windows system directory. Use the GetSystemDirectory function to get the path of this directory. The name of this directory is SYSTEM32.
- Windows NT: The 16-bit Windows system directory. There is no function that obtains the path of this directory, but it is searched. The name of this directory is SYSTEM.
- The Windows directory. Use the GetWindowsDirectory function to get the path of this directory.
- The directories that are listed in the PATH environment variable.
Under Unix, symbols can be referenced from a SO once the library is loaded using dlopen(). The function dlsym() will return a pointer to a symbol in the library.

void* dlsym( void* handle, const char *name );

The handle argument is the handle to the library returned by dlopen(). The name argument is a string containing the name of the symbol. The function returns a pointer to the symbol if it is found, and NULL if it is not found or if there is an error.
Under Windows, the functions can be accessed with a call to GetProcAddress():

FARPROC GetProcAddress( HMODULE hModule, LPCSTR lpProcName );

The argument hModule is the handle to the module returned from LoadLibrary(). The argument lpProcName is the string containing the name of the function. This procedure returns the function pointer to the procedure if successful, else it returns NULL.
Closing the library is accomplished in Unix using the function dlclose(), and in Windows using the function FreeLibrary(). Note that both functions return either a 0 or a non-zero value, but Windows returns 0 if there is an error, while Unix returns 0 if successful.
In Unix, the library is closed with a call to dlclose():

int dlclose( void *handle );

The argument handle is the handle to the opened SO file (the handle returned by dlopen). This function returns 0 if successful, and a non-zero value if not.
In Windows NT, the library is closed using the function FreeLibrary():

BOOL FreeLibrary( HMODULE hLibModule );

The argument hLibModule is the handle to the loaded DLL library module. This function returns a non-zero value if the library closes successfully, and 0 if there is an error.
Coding for Multiple Platforms
Most of the big applications that we write will make many calls to API functions specific to the operating system, which makes the application platform dependent. The ideal situation, shown in the figure below, is source code that compiles on all platforms without any modification. This can be achieved by routing the operating-system-specific calls through a common function, which in turn makes the operating-system-specific calls based on the operating system.
One solution to create platform independent code is to create a header file, which handles all platform dependant calls. Based on the compiler (or operating system), the same code will generate applications for different platforms.
The main functions which differ between Windows and Linux are:
- Functions related to shared libraries, e.g. dlopen()/LoadLibrary()
- Functions related to the creation and usage of threads/forks
The sample given below demonstrates a simple example of such a header file, which handles platform specific calls. In this example, the compiler is being checked to differentiate different platforms.
- We can achieve the same result using preprocessor directives around each and every OS-specific call. But that will make the code ugly and non-readable.
- Once such a code is written, the usage will be simple and easy. This is applicable especially when the code size is huge.
- Extension to new platform/modification of calls will be very simple.
#if defined(_MSC_VER)
    #include <windows.h>
#elif defined(__GNUC__)
    #include <dlfcn.h>
#else
    #error define your compiler
#endif

#include <string>

void* LoadSharedLibrary(char *pcDllname, int iMode = 2)
{
    std::string sDllName = pcDllname;
#if defined(_MSC_VER)
    sDllName += ".dll";
    return (void*)LoadLibrary(sDllName.c_str());
#elif defined(__GNUC__)
    sDllName += ".so";
    return dlopen(sDllName.c_str(), iMode);  // iMode: RTLD_LAZY = 1, RTLD_NOW = 2
#endif
}

void *GetFunction(void *Lib, char *Fnname)
{
#if defined(_MSC_VER)
    return (void*)GetProcAddress((HINSTANCE)Lib, Fnname);
#elif defined(__GNUC__)
    return dlsym(Lib, Fnname);
#endif
}

bool FreeSharedLibrary(void *hDLL)
{
#if defined(_MSC_VER)
    return FreeLibrary((HINSTANCE)hDLL) != 0;   // Windows: non-zero on success
#elif defined(__GNUC__)
    return dlclose(hDLL) == 0;                  // Unix: 0 on success
#endif
}
// Assumes the LoadSharedLibrary / GetFunction / FreeSharedLibrary
// helpers defined above are visible here.
#include <iostream>

using namespace std;

typedef int (*AddFnPtr)(int, int);

int main(int argc, char* argv[])
{
    void *hDLL = LoadSharedLibrary("add");   // loads add.dll or add.so
    if (hDLL == 0)
        return 1;

    AddFnPtr AddFn = (AddFnPtr)GetFunction(hDLL, "fnAdd");
    if (AddFn == 0)
        return 1;

    int iTmp = AddFn(8, 5);
    cout << "8 + 5 = " << iTmp;

    FreeSharedLibrary(hDLL);
    return 0;
}
Another problem we face when coding for multiple platforms is the so-called "big endian/little endian" problem, which arises from the different byte orderings used for information storage. Some machines store an object in memory ordered from least significant byte to most, while other machines store it from most to least. The former convention, where the least significant byte comes first, is referred to as little endian; most machines follow this convention. The latter convention, where the most significant byte comes first, is referred to as big endian; it is followed by most machines from IBM, Motorola, and Sun Microsystems. A simple example: say you want to store an integer (say 4 bytes long), 0x12345678, starting at memory address 0x5000 and running to 0x5003. The arrangement of the data in memory is shown below.
Little endian:
  Address: 0x5000  0x5001  0x5002  0x5003
  Byte:     0x78    0x56    0x34    0x12

Big endian:
  Address: 0x5000  0x5001  0x5002  0x5003
  Byte:     0x12    0x34    0x56    0x78
This issue will become critical if we store data in external binary files. If two different platforms use the same binary data file, the data retrieved from the file will be completely different. Keep this point in mind when you are targeting many platforms.
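The two byte orderings can be verified directly with Python's struct module, which packs the same integer with an explicitly little-endian ('<') or big-endian ('>') layout:

```python
import struct
import sys

value = 0x12345678
little = struct.pack('<I', value)   # least significant byte at the lowest address
big    = struct.pack('>I', value)   # most significant byte at the lowest address

print(little.hex())    # 78563412
print(big.hex())       # 12345678
print(sys.byteorder)   # native order of this machine
```

Writing binary files with an explicit, fixed byte order (as struct does here) is one common way to avoid the portability problem described above.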
With increasing usage of Linux, it will be good always to target multiple platforms when we write code. If we can keep the same source code and just compile for a different platform after coding, we can reduce the time spent on porting from one platform to another. | <urn:uuid:d5d6b7dc-511b-43a5-91f0-2be6d8a28a0e> | 3.359375 | 2,893 | Documentation | Software Dev. | 47.379088 |
Chapters 18 - Global Climate Change
I. Central Case: Rising Temperatures and Seas May Take the Maldives Under
A. A nation of low-lying islands, or atolls, in the Indian Ocean, the Maldives is known for its spectacular tropical setting, colorful coral reefs, and sun-drenched beaches.
B. Nearly 80% of the Maldives= land area of 300 km2 lies less than 1 m above sea level, and the highest point of ground is only 2.4 m.
C. The world's oceans rose 10-20 cm this past century, and are expected to continue to rise as temperatures warm, causing melting ice caps to discharge water into the ocean.
D. The island= s government has evacuated residents from several of the lowest-lying islands in recent years.
E. The tsunami in December of 2004 destroyed large sectors of the islands, including both homes and infrastructure such as hospitals and modes of transportation.
F. Other effects included soil erosion, saltwater contamination of aquifers, and other environmental damage.
G. The tsunami was caused by an earthquake, but the rising sea level allowed it to inflict great damage on the low-lying islands.
H. Maldives islanders are not alone in the worries: the people of other island nations and mainland coastal areas of the world fear the future.
II. Earth's Hospitable Climate
1. Changes in the long-term pattern of atmosphere conditions worldwide, involving temperature, precipitation, and storm frequency and intensity, are global climate change.
A. The sun and the atmosphere keep Earth warm.
1. The sun, the atmosphere, and the oceans exert more influence on Earth's climate than all other factors combined.
B. "Greenhouse gases" warm the lower atmosphere.
1. As Earth's surface absorbs solar radiation, its temperature increases and it emits radiation in the infrared portion of the spectrum.
2. Some atmospheric gases absorb infrared radiation effectively and are known as greenhouse gases.
3. When these gases absorb heat, they warm the atmosphere (specifically, the troposphere) as well as Earth's surface. This warming is known as the greenhouse effect.
4. The greenhouse effect is a natural phenomenon that has been increased through human activities.
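The size of this natural greenhouse warming can be estimated from a simple radiation balance: without an infrared-absorbing atmosphere, Earth's equilibrium (effective) temperature is T = (S(1-a)/4σ)^(1/4). The Python sketch below uses standard round values for the solar constant and albedo, which are approximations rather than precise measurements:

```python
def effective_temperature(solar_const=1361.0, albedo=0.3):
    """Earth's no-atmosphere equilibrium temperature in kelvin:
    T = (S * (1 - a) / (4 * sigma)) ** 0.25"""
    sigma = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
    return (solar_const * (1 - albedo) / (4 * sigma)) ** 0.25

T_e = effective_temperature()
print(T_e)            # ~255 K; the observed mean surface temperature is ~288 K
print(288 - T_e)      # ~33 K of warming attributable to the greenhouse effect
```

The roughly 33 K gap between the computed effective temperature and the observed surface temperature is the natural greenhouse effect that human activities are now enhancing.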
C. Carbon dioxide is the primary greenhouse gas.
1. Although carbon dioxide is not the most potent greenhouse gas on a per-molecule basis, its abundance in the atmosphere means that it contributes more to the greenhouse effect than other gases.
2. Carbon dioxide concentrations have increased and are currently at the highest level in at least 400,000 years.
3. In the last two centuries humans have been burning increasing amounts of fossil fuels in their homes, factories, and automobiles. At the same time we have cleared and burned forests, reducing the biosphere's ability to absorb carbon dioxide from the atmosphere.
D. Other greenhouse gases add to warming.
1. Other greenhouse gases are increasing in the atmosphere.
a. We release methane into the atmosphere by tapping into fossil fuel deposits, raising large herds of cattle, disposing of organic matter in landfills, and growing certain types of crops, including rice.
b. Nitrous oxide is a by-product of feedlots, chemical manufacturing plants, auto emissions, and modern agricultural practices.
c. Ozone concentrations in the troposphere have increased by 36% since 1750.
d. Water vapor is the most abundant greenhouse gas, and its concentration increases as tropospheric temperatures rise.
E. Aerosols and other elements may exert a cooling effect on the lower atmosphere.
1. Microscopic droplets and particles can have either a warming or a cooling effect. Most tropospheric aerosols, such as the sulfate aerosols produced by fossil fuel combustion, may slow global warming in the short term.
F. The atmosphere is not the only factor that influences climate.
1. Milankovitch cycles are changes in Earth's rotation and orbit around the sun, and they result in slight changes in the relative amount of solar radiation reaching Earth's surface at different latitudes.
2. Oceanic circulation also shapes climate.
a. El Niño conditions occur when equatorial winds weaken and allow warm water from the western Pacific to move eastward, eventually preventing cold water from welling up in the eastern Pacific.
b. In La Niña events, cold surface waters extend far westward in the equatorial Pacific.
c. Many scientists today are exploring whether globally warming air and sea temperatures may be increasing the frequency and strength of El Niño events.
d. North Atlantic Deep Water (NADW) is part of a circulation pattern that moves warm surface water northward toward Europe where cooler water then sinks and returns in the other direction.
III. Methods of Studying Climate Change
A. Proxy indicators tell us about the past.
1. Ice caps and glaciers have preserved tiny bubbles of ancient atmosphere.
2. Sediment beds beneath bodies of water can be analyzed to learn about the ancient vegetation in an area and, by extension, what the climate was like at the time.
3. These sources of indirect evidence, which substitute for direct measurements, are called proxy indicators.
4. Other proxy indicators include coral reefs and tree rings.
B. Direct atmospheric sampling tells us about the present.
1. Charles Keeling of the Scripps Institution of Oceanography documented trends in atmospheric carbon dioxide concentrations starting in 1958.
2. Keeling's data show that atmospheric carbon dioxide concentrations have increased from around 315 ppm to 378 ppm since 1958.
C. Coupled general circulation models help us understand climate change.
1. Coupled general circulation models (CGCMs) are computer programs that combine what is known about weather patterns, atmospheric circulation, atmosphere-ocean interactions, and feedback mechanisms to simulate climate processes.
2. Over a dozen research labs around the world operate CGCMs.
3. Tests suggest that today's computerized models provide a good approximation of the relative effects of natural and anthropogenic influences on global climate.
IV. Climate Change Estimates and Predictions
1. The most thoroughly reviewed and widely accepted collection of scientific information concerning global climate change is a series of reports issued by the Intergovernmental Panel on Climate Change (IPCC).
A. The IPCC report summarizes evidence of recent changes in global climate.
1. Besides the data on increases in atmospheric concentrations of greenhouse gases, the 2001 IPCC report presented a number of findings on how climate change has already influenced the weather, Earth's physical characteristics and processes, the habits of organisms, and our economies.
B. Sea-level rise and other changes interact in complex ways.
1. Warming temperatures are causing glaciers to shrink and disappear in many areas around the world, and polar ice shelves are melting. This, combined with the thermal expansion of warming seawater, is causing a rise in sea level.
2. Higher sea levels lead to beach erosion, coastal flooding, intrusion of saltwater into aquifers, and other impacts.
3. The record number of hurricanes and tropical storms in 2005, including Hurricane Katrina, left many people wondering if global warming was to blame.
C. The IPCC and other groups project future impacts of climate change.
1. In 2000, the U.S. Global Change Research Program issued a report highlighting the past and future effects of global climate change on the United States. The report developed a series of predictions.
2. Climate change will affect agriculture and forestry.
a. Some croplands could lose their ability to produce food, while other areas might see increased agricultural productivity.
b. U.S. forests may become more productive, but the frequency and intensity of forest fires could increase.
3. Freshwater and marine systems.
a. Erosion and flooding could alter the structure and function of aquatic systems.
b. Less precipitation in some areas would shrink surface water sources, affecting habitats, organisms, and human health.
4. Human health.
a. There could be increased exposure to numerous health problems including respiratory ailments from air pollution and expansion of tropical diseases, as well as problems from storms and flooding.
b. Alternatively, there may be fewer diseases and injuries related to cold weather.
V. Debate over Climate Change
1. Within the scientific community the debate involves the details and mechanisms of climate change and the extent and nature of its likely effects.
2. In the wider culture, many people, primarily nonscientists, contest the findings and interpretations. Some of these skeptics have vested interests in continuing the use of fossil fuels, and some of them have much power to make policy.
3. A third area of debate involves how human societies should respond to climate change.
A. Scientists agree that climate change is occurring but disagree on many details.
1. In the summer of 2005 the national academies of science from 11 nations issued a joint statement urging action.
B. Some challenge the scientific consensus.
1. The media have tried to portray both sides of the issue.
2. Many charge that the media have given the impression that the debate is even, when in fact there are far fewer skeptics than believers, and the relatively few greenhouse skeptics among scientists are often funded by fossil fuel-related industries such as those related to petroleum and automobiles.
C. How should we respond to climate change?
1. There is much debate about the economic and political costs of reducing greenhouse gas emissions, and whether the reduction should be voluntary, government regulated, or as a result of economic sanctions.
VI. Strategies for Reducing Emissions
A. Electricity generation is the largest source of U.S. greenhouse gases.
1. Conservation and efficiency can arise from new technologies, or from individual ethical choices.
2. Renewable sources of electricity can also reduce fossil fuel use.
B. Transportation is the second largest source of U.S. greenhouse gases.
1. One-third of the average American city is devoted to use by cars - including roads, parking, garages, and gas stations.
2. The typical automobile is highly inefficient. Close to 85% of the fuel you use does something other than move your car down the road.
3. Automobile technology is making possible alternatives such as electric vehicles, alternative fuels, hybrid vehicles, and hydrogen fuel cells.
4. Driving less and using public transportation are lifestyle choices that reduce reliance on cars.
C. Some international treaties address climate change.
1. In 1992, the United Nations convened the United Nations Conference on Environment and Development (the Earth Summit) in Rio de Janeiro. Five documents were signed, including the U.N. Framework Convention on Climate Change (FCCC), which outlined a plan for reducing greenhouse gas emissions through a voluntary, nation-by-nation approach.
a. In the U.S., greenhouse emissions increased by over 13% in the 10 years following the Rio conference.
b. Germany and the United Kingdom both cut their greenhouse emissions by 13% to 18% during the same period.
c. The decision was made to create a binding international treaty that would require all signatory nations to reduce greenhouse gas emissions. This is the Kyoto Protocol.
D. The United States has resisted the Kyoto Protocol.
1. The Kyoto Protocol was to take effect when nations responsible for 55% of global greenhouse emissions ratified it. That occurred in 2005 when Russia became the 127th nation to sign.
2. The United States, the world's largest emitter of greenhouse gases, refuses to ratify the protocol, claiming it is unfair to industrialized nations.
3. Proponents of the Kyoto Protocol point out that the industrialized world created the problem and therefore should make the sacrifices necessary to solve it.
E. Some feel climate change demands the precautionary principle.
A. Many factors, including human activities, can shape atmospheric composition and global climate.
B. Scientists and policymakers are beginning to understand anthropogenic climate change and its environmental impacts more fully.
Tricks With Torque
To demonstrate torque in a simple system.
Spool with two strings attached to a handle
To start, make sure that the string is wound around the inner spool with about two feet left free for you to pull on. For the first part of the demonstration, pull on the rope while it is parallel or at a small angle to the ground; the spool should roll towards you (right on the diagram). Next, pull the rope directly up, perpendicular to the ground, and the spool should roll away from you (left on the diagram). The last part of the demonstration requires that you line up the rope so that its line of action points directly at the point of contact between the spool and the ground. When you pull on the rope, the spool should slide (not roll) towards you.
To make sure that all three examples will work for you, try them before class. The sliding spool is the hardest to perform and therefore requires extra practice. Also be careful if you are performing this demonstration on a table, because the spool can roll off the table and may be damaged. Each example can be counterintuitive, so you may wish to poll the class on which way the spool will roll. The class will likely require a detailed explanation using the torque about the point of contact.
Consider the torque about the point of contact between the spool and the ground. In the first example, r×F, where r is drawn from the point of contact (point A) to where the string leaves the inner spool (point B), is into the page. In the second example, r×F is out of the page. In the third example, r×F equals 0, so the torque is 0 and the spool slides. Now consider the net force in each case. In the first example, any vertical component of the tension is balanced by gravity and the normal force of the table. Since the spool rolls to the right, the force of friction is also to the right, so the net force is to the right. In the second example, gravity, the normal force, and the vertical string tension again add to zero. Since the spool rolls to the left, the frictional force is to the left. Since you give no horizontal string tension, it is the force of friction that causes the spool to move to the left. In the third example, the friction is to the left, but it is overcome by the horizontal component of the string tension, which is to the right.
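The sign of the torque can be checked numerically. The C++ sketch below (with made-up radii R = 2 for the spool and r = 1 for the inner axle) computes the z-component of r×F about the contact point for a string coming off the bottom of the inner spool, pulled toward the right at angle θ above the horizontal; the torque works out to F(r − R·cosθ): negative (into the page, rolling toward you) for a horizontal pull, positive (out of the page, rolling away) for a vertical pull, and zero at cosθ = r/R, the sliding case.

```cpp
#include <cmath>

// Signed z-torque (out of the page positive) about the ground contact point,
// for a string of tension F leaving the bottom of the inner axle and pulled
// toward the right at angle theta above the horizontal.
// R = outer (rolling) radius, ri = inner axle radius. Illustrative geometry only.
double spool_torque_z(double R, double ri, double F, double theta) {
    // Tangent point where the line of action touches the circle of radius ri
    // centred at the axle centre (0, R), measured from the contact point:
    double px = ri * std::sin(theta);
    double py = R - ri * std::cos(theta);
    double Fx = F * std::cos(theta);
    double Fy = F * std::sin(theta);
    return px * Fy - py * Fx;   // simplifies to F * (ri - R * cos(theta))
}
```

With R = 2, ri = 1, F = 1: a horizontal pull (θ = 0) gives −1 (into the page), a vertical pull (θ = 90°) gives +1 (out of the page), and θ = 60°, where cosθ = ri/R, gives zero torque.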
On the long roster of essential biological processes, few, if any, are more important than photosynthesis, which converts sunlight to chemical energy. All plants use photosynthesis to obtain the energy they need to live, and all animals, in turn, harvest the energy they need — directly or indirectly — from plants.
The way plants, algae, and some bacteria harness energy from the sun to produce food is a marvel of biology, and not all the details of the process are well understood. But between 1982 and 1985, Johann Deisenhofer, working in collaboration with Robert Huber and Hartmut Michel at Germany's Max Planck Institute for Biochemistry, helped resolve the pathway by which sunlight is converted to chemical energy. The accomplishment earned Deisenhofer and his colleagues the 1988 Nobel Prize in Chemistry. Earlier in the same year, Deisenhofer became an HHMI investigator at the University of Texas Southwestern Medical Center.
Photosynthesis is initiated in plants as sunlight is captured by chlorophyll, the pigment that gives plants their green color. Chlorophyll is sequestered in organelles called chloroplasts, also home to the plant's protein reaction center. There, sunlight is transformed into chemical energy as chlorophyll molecules eject electrons, pumping energy from one protein to another.
In 1982, using Rhodopseudomonas viridis, a purple bacterium capable of photosynthesis, Michel grew crystals of the microbe's photosynthetic reaction center. The reaction center is an aggregate of proteins located in the membrane of the bacterial cell. Proteins from the reaction center initiate the process of converting sunlight to chemical energy.
Over the next three years, Deisenhofer and Michel, using x-ray crystallography, constructed a detailed, atom-by-atom picture of the protein complex. At the time, it was the largest such complex to be characterized. The arrangement of the more than 10,000 atoms of the photosynthetic reaction center's protein complex provided the blueprint scientists needed to further understanding of how the sun's energy is transformed into the chemical energy necessary for much of life on Earth. The feat led to a general understanding of the mechanisms of photosynthesis and a more detailed understanding of how the process works in bacteria. It also enabled a more thorough comparison of the similarities and differences between the photosynthetic processes of bacteria and plants.
Deisenhofer's work has implications far beyond the theoretical understanding of photosynthesis. Its potential has already been realized in agriculture to help develop crop plants resistant to the herbicides that shut down the photosynthetic capabilities of weeds. Importantly, the work fueled new insight into the biology of cell membranes, the all-important interface between cells and the outside world. That knowledge, and the methods used to obtain it, promises a better understanding of how diseases affect cells and how those diseases might be treated.
Finally, by providing a window to the highly efficient processes plants and other organisms use to gather energy from the sun, the work may help improve the design of solar panels and other technologies that humans use to do the same.
The process of accumulation and sinking of warm surface waters along a coastline. A change of air flow of the atmosphere can result in the sinking or downwelling of warm surface water. The resulting reduced nutrient supply near the surface affects the ocean productivity and meteorological conditions of the coastal regions in the downwelling area.
a process by which surface waters increase in density and sink. Strong downwelling occurs mainly off Greenland and Antarctica.
a downward movement of a liquid or plastic substance.
the vertical movement of a fluid downward due to density differences, or where two fluid masses converge, displacing fluid downward. In the ocean, it often refers to where Ekman transport causes surface waters to converge or impinge on the coast, displacing surface water downward and thickening the surface layer.
A downward movement (sinking) of surface water caused by ONSHORE EKMAN TRANSPORT, converging CURRENTS, or when a water mass becomes more dense than the surrounding water.
A downward motion of surface or subsurface water that removes excess mass brought into an area by convergent horizontal flow near the surface. See also upwelling.
Downwelling is the process of accumulation and sinking of higher density material beneath lower density material, such as cold or saline water beneath warmer or fresher water or cold air beneath warm air. It is the sinking limb of a convection cell. Upwelling is the opposite process and together these two forces are responsible in the oceans for the thermohaline circulation. | <urn:uuid:61e18a50-0af4-45a5-921f-c6275ddef450> | 3.8125 | 321 | Structured Data | Science & Tech. | 29.127871 |
The water soldier grows typically in slow-moving meso-eutrophic waters which have a favourable level of free-iron in the sediment.
In autumn it sinks to hibernate as a rosette with green leaves on the sediment surface. The plants become buoyant in spring following increased photosynthetic gas production and the formation of new leaves.
The apical and sub-apical parts of the long roots are covered with root hairs and penetrate the sediment.
Female plants predominate in most of Europe with isolated male plants found in Denmark, Sweden and Finland.
Although fruits may form, no viable seed has been found in the British Isles since the Pleistocene. It reproduces vigorously via asexual reproduction. | <urn:uuid:ac55c09f-59ba-4d64-82f5-76112b3ade27> | 3.359375 | 149 | Knowledge Article | Science & Tech. | 38.915526 |
The U.S. National Oceanic and Atmospheric Administration (NOAA) released its updated Arctic Report Card earlier this week. Like many other recent reports about the condition of the north (the canary in the climate-change coal mine), the news is sobering: new records have been set for snow extent, sea ice extent, and ice sheet surface melting. [...]
It’s been a “Goliath,” record-setting melting year in Greenland, home of the world’s second largest ice sheet. On August 8th, a full four weeks before the end of “melting season,” cumulative melting on the island had exceeded the previous record set in 2010, which included the full season.
The record melt was figured by the “cumulative melting index,” created by researcher Marco Tedesco of The City College of New York’s Cryosphere Processes Laboratory to measure the “strength” of the melting season. The index is basically the number of days when melting occurs multiplied by the physical area that
. . . → Read More: DeSmogBlog: "Goliath" Melting Year Shatters Records in Greenland
* In case you missed the news of the huge Greenland melt this week, you can read Climate Progress’s post on it, ABC News On Greenland Ice Melt: Scientists Say They’ve Never Seen Anything Like This Before. Here’s some pictures that give an idea of the immensity of what’s happening up there, because of our [...]
How hot is it this year?
Maybe the breaking of thousands of temperature records across the USA so far this year didn't get your attention.
Perhaps you have yet to be presented with the scary facts in Bill McKibben's latest article about Climate Change's New Math.
Well if you missed those cheery bits of reporting, have a gander at this shocking graph from meltfactor.org that shows a mind-blowing change in Greenland's ice sheet "albedo." It has literally fallen off the chart in comparison to previous years.
NASA has shown repeatedly that the Arctic icecap is melting, and melting faster than climate models predict. This new visualization is stark and should be of obvious concern, simply because of the impact on sea levels. Now there is a potentially new threat. The process of shrinkage may cause a chemical reaction that could poison the Arctic ecosystem with mercury.
The disappearance of old, thick ice in the Arctic means an increase in bromine released into the atmosphere. The new, thinner ice has more salt and this is where the bromine comes from. As it melts it interacts
. . . → Read More: DeSmogBlog: Shrinking Arctic Ice May Cause Mercury Poisoning | <urn:uuid:8289b808-858b-4b87-94ce-0929e113a702> | 3.328125 | 568 | Content Listing | Science & Tech. | 56.096052 |
Geothermal energy continues to be a primary focal point for renewable energy sources, and recent developments by the likes of Lockheed Martin and others are showing potential for oceans to be another viable option for geothermal technologies.
The concept is not new, but the application is. The general field is referred to as Ocean Thermal Energy Conversion (OTEC) and designs are being tested now in tropical areas.
One of the big concerns around tapping into the ocean for energy is that our processes can take away needed heat from the fragile oceanic environment. If not careful, we could upset the delicate balance in the ocean and – in a worst-case scenario – wreak havoc. Using new techniques, the technology is now much safer and requires far less energy to power a turbine.

How Does OTEC Work?
Instead of using water-based steam to drive a turbine, the Lockheed Martin (LMC) version uses ammonia, which evaporates around 20 degrees Celsius. The waters in the tropics are about 75 degrees Fahrenheit on the surface and 3,000 feet below are about 40 degrees Fahrenheit. That 35 degree difference is all that is needed to evaporate ammonia and then re-condense it into liquid form.
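A back-of-envelope check (my own, not from the article) shows why such a small temperature difference works yet caps efficiency: the Carnot limit between 75-degree surface water and 40-degree deep water is only about 6.5%, which is why OTEC plants must circulate very large volumes of water.

```cpp
#include <cmath>

// Convert degrees Fahrenheit to Kelvin.
double f_to_kelvin(double f) { return (f - 32.0) * 5.0 / 9.0 + 273.15; }

// Ideal (Carnot) efficiency between a warm and a cold reservoir,
// given both temperatures in degrees Fahrenheit.
double carnot_efficiency(double hot_f, double cold_f) {
    return 1.0 - f_to_kelvin(cold_f) / f_to_kelvin(hot_f);
}
// carnot_efficiency(75.0, 40.0) is roughly 0.065, i.e. about 6.5%
```

Real OTEC cycles run well below even this ideal bound, so the engineering challenge is throughput, not just conversion.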
This entire process is in a closed loop. What that means is there are no by-products or exhaust. OTEC is a completely clean source of energy.

How Much Energy Are We Talking About?
If all goes as planned, one OTEC facility can generate more than enough energy without affecting the ocean’s ambient temperatures. Estimates are 3 to 5 terawatts with no effect on the ocean and zero emissions, and that’s just using tropical regions as a launching point for OTEC.
With global power consumption on the order of 15 terawatts, OTEC can make a significant impact. And these estimates are based on a few OTEC facilities in tropical regions only. When you expand the technology, the energy production opportunities start to get really exciting.
While applying OTEC principles in a different way, even Dayton, Ohio is getting into water-based geothermal energy. The region sits on an aquifer that holds the temperature below the surface at a steady 55 degrees Fahrenheit. During the summer, water is pumped in to cool the buildings. The winter months use the water for heat. The result is a 40% reduction in heating and cooling costs.

Base Power
Many other types of renewable energy sources are intermittent. When the wind stops blowing, clouds cover the sun, or water levels don’t surge; solar, wind, and hydro systems stop generating power. OTEC may be a huge boon to the base power – energy production 24 hours a day, 7 days a week. While other energy sources can produce more energy during their peak output times, OTEC can keep a steady supply of energy coming to prevent shortages.

Bottom Line
The only cost involved with OTEC is installation and maintenance. James Klett of the Oak Ridge National Laboratory says:
“I think this is a case where if we build it they will come. If we can build a power source that doesn’t require fuel and only requires maintenance, then we won’t have to worry about the price of fuel going up and down. The price of energy generated by an OTEC plant will be tied to the cost of maintenance—and if we come up with cheaper ways of maintaining the plant, the price of the OTEC energy could actually go down, and hopefully be competitive with conventional power plants.” (ORNL.gov)
As the technology hopefully proves to be viable, automated maintenance systems have been discussed that would require human intervention only on a seasonal basis.

Other Applications
And the technology has applications far beyond extracting energy from our oceans. The advances by LMC can be put to use in traditional turbine systems for dramatically improved energy production.
The upcoming months will be exciting to watch as a test center is being built in Hawaii for a full-tilt environmental study of the technology. | <urn:uuid:0541dfac-d277-4260-894a-a0b6d2c4c5ec> | 3.640625 | 807 | Knowledge Article | Science & Tech. | 42.426646 |
The following calculations were made separately for each month at 9 stations/sites, located in Lapland (Käsivarsi and Sodankylä), on the coast of the Gulf of Bothnia (Hailuoto, Vaasa, and Rauma), on the coast of the Gulf of Finland (Hanko and Kotka), and in inland Finland (Keski-Suomi and Kainuu).
First we search for the minimum values of the parameters RS and RD. The minimum values have to be selected from among 19⁴ = 130 321 possible combinations of years.
RS measures the representativeness with respect to the wind speed distribution: k is the shape parameter and A the scale parameter of the Weibull distribution, i refers to the parameter value in a combination of four years, and the overbar refers to the parameter value in the 19 year data set. The factor 6 is applied to make the deviations of A and k equally important for RS.
RD measures the representativeness with respect to the distribution of wind directions. D is the wind direction sector: D1 = 0-30°, D2 = 30-60°, and so on. V is the wind speed, and P is the percentage of cases in each wind direction sector.
The final parameter to be minimized is | <urn:uuid:aacce157-3fc2-4603-9d3d-322f2f57b3be> | 2.921875 | 263 | Tutorial | Science & Tech. | 49.40625 |
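The equations themselves did not survive extraction, but the search is straightforward to sketch. Assuming, purely for illustration, an RS-style score of the form |ΔA|/Ā + 6·|Δk|/k̄ (the real RS compares Weibull A and k in a similar deviation form, with the factor 6 weighting as described above), a brute-force scan of all 19⁴ four-year selections looks like this in C++; the year data would come from the station records.

```cpp
#include <algorithm>
#include <array>
#include <cmath>
#include <limits>

// Hypothetical RS-style score: relative deviation of a four-year mean (A, k)
// from the 19-year mean, with the factor 6 weighting the k term.
// The actual RS formula in the study may differ in detail.
double rs_score(double A4, double k4, double Abar, double kbar) {
    return std::fabs(A4 - Abar) / Abar + 6.0 * std::fabs(k4 - kbar) / kbar;
}

// Scan all 19^4 = 130 321 ordered four-year selections and return the
// minimum score found. A and k hold the per-year Weibull parameters.
double min_rs(const std::array<double, 19>& A, const std::array<double, 19>& k) {
    double Abar = 0.0, kbar = 0.0;
    for (int y = 0; y < 19; ++y) { Abar += A[y]; kbar += k[y]; }
    Abar /= 19.0; kbar /= 19.0;

    double best = std::numeric_limits<double>::infinity();
    for (int a = 0; a < 19; ++a)
      for (int b = 0; b < 19; ++b)
        for (int c = 0; c < 19; ++c)
          for (int d = 0; d < 19; ++d) {
              double A4 = (A[a] + A[b] + A[c] + A[d]) / 4.0;
              double k4 = (k[a] + k[b] + k[c] + k[d]) / 4.0;
              best = std::min(best, rs_score(A4, k4, Abar, kbar));
          }
    return best;
}
```

At 130 321 combinations per month and station the exhaustive scan is cheap, so no cleverer optimization is needed.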
Definition: A polyphase merge which seeks to minimize the number of merge passes by allocating output runs of each pass to the various output files. Since polyphase merging must have a different number of runs in each file to be efficient, one seeks the optimal way of selecting how many runs go into each output file. A series of kth order Fibonacci numbers is one way to select the number of runs.
See also merge, polyphase merge, optimal merge.
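As an illustration of the Fibonacci connection (my own addition, not part of the original entry): for a three-tape polyphase merge with two output files, ideal run counts follow the ordinary (2nd-order) Fibonacci numbers, and with k output files they follow kth-order Fibonacci numbers, where each term is the sum of the previous k terms. A small C++ sketch:

```cpp
#include <vector>

// First n terms of the kth-order Fibonacci sequence:
// f(0) = ... = f(k-2) = 0, f(k-1) = 1, and each later term is the sum of
// the previous k terms. For k = 2 this is 0, 1, 1, 2, 3, 5, 8, 13, ...
// Ideal polyphase run distributions are drawn from such sequences.
std::vector<long long> kth_order_fibonacci(int k, int n) {
    std::vector<long long> f(n, 0);
    if (k - 1 < n) f[k - 1] = 1;
    for (int i = k; i < n; ++i)
        for (int j = i - k; j < i; ++j)
            f[i] += f[j];
    return f;
}
```

For example, `kth_order_fibonacci(2, 8)` yields 0, 1, 1, 2, 3, 5, 8, 13, and `kth_order_fibonacci(3, 6)` yields 0, 0, 1, 1, 2, 4.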
If you have suggestions, corrections, or comments, please get in touch with Paul E. Black.
Entry modified 17 December 2004.
Cite this as:
Art S. Kagel, "optimal polyphase merge", in Dictionary of Algorithms and Data Structures [online], Paul E. Black, ed., U.S. National Institute of Standards and Technology. 17 December 2004. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/optimpolymrg.html | <urn:uuid:c2e3597d-33d2-4114-9ba7-fc9669c67fb0> | 2.859375 | 221 | Knowledge Article | Software Dev. | 61.078088 |
1: Someone was saying that in a value-returning user-defined function one has to have the same data type for the return value as the declared types of the arguments. I think this is absolutely wrong, because you could have different data types for the arguments, and further you could have a totally different data type for the return value. For example,

Code:
bool func(int dummya, float dummyb, long dummyc)

is a completely valid function in my view. Please confirm this for me. Thanks.
2: The other point that someone was making was, I think, about functions which don't take arguments. Perhaps he was saying that in such functions you can use the cout and cin operators, but that this is not possible for functions which take arguments. Perhaps you could make some sense out of it! Please let me know what point was being made if you understand it. Thank you.
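On the first point the asker is right: C++ places no constraint tying a function's return type to its parameter types. On the second, a function that takes no arguments can certainly use cin/cout, but so can one that takes arguments; there is no such restriction either. A short sketch (the function bodies are made up for illustration):

```cpp
#include <iostream>

// The return type (bool) is independent of the parameter types
// (int, float, long): this is perfectly legal C++.
bool func(int dummya, float dummyb, long dummyc) {
    // e.g. report whether the sum of the arguments is positive
    return dummya + dummyb + dummyc > 0;
}

// A no-argument function may still do I/O; argument-taking functions
// may do I/O too. There is no rule against either.
int read_and_echo() {
    int x = 0;
    std::cin >> x;               // input inside a no-argument function
    std::cout << x << '\n';      // output too
    return x;
}
```

So `func(1, 2.0f, 3L)` returns `true` while mixing four distinct types, which settles the first question.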
Population surveys of Boca Ciega Bay and surrounding waters are conducted on a regular basis during the summer months. The research is formally authorized by the National Marine Fisheries Service. Typically, observations are made from a 19' boat, and digital pictures of the dorsal fin of each dolphin present in a group are taken and later used for photo-identification purposes.
Optimally 3-4 members of the ECDP are present on the boat to conduct a survey, each fulfilling a different duty, such as driver, photographer, and data collector. During ad libitum surveys only a limited amount of time is spent with each dolphin group, with the emphasis on obtaining the necessary photographs and collecting environmental and behavioral data. This type of survey provides valuable information about population size, group composition, dolphin distribution, and habitat use. In addition, focal animal follows are conducted, in which one dolphin or a group of dolphins is observed for several consecutive hours. This type of survey provides valuable information about activity budgets and habitat use.
The ECDP has logged a total of 452 survey days (as of summer 2005). A comparable amount of time has also been spent in the lab processing data. There have been 1951 groups of bottlenose dolphins observed, and there are currently 673 dolphins in the Eckerd College catalog.
The Eckerd College catalog is compared with catalogs from Mote Marine Laboratory in Sarasota, Florida, the Clearwater Marine Aquarium, and the catalog of Tampa Bay dolphins compiled for a Master's thesis at the University of South Florida. These collaborations allow for the construction of a clearer picture of dolphin home ranges and distribution in coastal waters of west-central Florida. | <urn:uuid:6412c150-e9f0-4b23-8245-5f9ded083623> | 2.90625 | 339 | Knowledge Article | Science & Tech. | 21.616895 |
Where possible VOGLE allows for front and back buffers to enable things like animation and smooth updating of the screen. The routine backbuffer is used to initialise double buffering.
Make VOGLE draw in the backbuffer. Returns -1 if the device is not up to it.
Fortran: integer function backbuffer
C:       backbuffer()
Pascal:  function BackBuffer:integer
Make VOGLE draw in the front buffer. This will always work.
Fortran: subroutine frontbuffer
C:       frontbuffer()
Pascal:  procedure FrontBuffer
Swap the front and back buffers.
Fortran: subroutine swapbuffers
C:       swapbuffers()
Pascal:  procedure SwapBuffers
Ocean chemistry changing at 'unprecedented rate'
(Reuters) - Carbon dioxide emissions that contribute to global warming are also turning the oceans more acidic at the fastest pace in hundreds of thousands of years, the National Research Council reported Thursday.
"The chemistry of the ocean is changing at an unprecedented rate and magnitude due to anthropogenic carbon dioxide emissions," the council said. "The rate of change exceeds any known to have occurred for at least the past hundreds of thousands of years."
Ocean acidification eats away at coral reefs, interferes with some fish species' ability to find their homes and can hurt commercial shellfish like mussels and oysters and keep them from forming their protective shells.
Corrosion happens when carbon dioxide is stored in the oceans and reacts with sea water to form carbonic acid. Unless carbon dioxide emissions are curbed, oceans will grow more acidic, the report said.
Oceans absorb about one-third of all human-generated carbon dioxide emissions, including those from burning fossil fuels, cement production and deforestation, the report said.
The increase in acidity is 0.1 points on the 14-point pH scale, which means this indicator has changed more since the start of the Industrial Revolution than at any time in the last 800,000 years, according to the report.
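Because pH is logarithmic, that seemingly small 0.1-point drop conceals a large chemical change: hydrogen-ion concentration scales as 10^(−pH), so a 0.1 drop means roughly a 26% rise in [H+], the figure often rounded up to "30% more acidic." A quick check (this calculation is mine, not the report's):

```cpp
#include <cmath>

// Factor by which hydrogen-ion concentration rises after a drop in pH.
// [H+] = 10^(-pH), so a drop of d pH units multiplies [H+] by 10^d.
double acidity_increase_factor(double ph_drop) {
    return std::pow(10.0, ph_drop);
}
// acidity_increase_factor(0.1) is about 1.259, i.e. roughly a 26% increase
```

A full 1.0-point drop would mean a tenfold increase, which is why small pH shifts matter so much to shell-forming organisms.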
The council's report recommended setting up an observing network to monitor the oceans over the long term.
"A global network of robust and sustained chemical and biological observations will be necessary to establish a baseline and to detect and predict changes attributable to acidification," the report said.
ACID OCEANS AND 'AVATAR'
Scientists have been studying this growing phenomenon for years, but ocean acidification is generally a low priority at international and U.S. discussions of climate change.
A new compromise U.S. Senate bill targeting carbon dioxide emissions is expected to be unveiled on April 26.
Ocean acidification was center stage at a congressional hearing Thursday, the 40th anniversary of Earth Day in the United States.
"This increase in (ocean) acidity threatens to decimate entire species, including those that are at the foundation of the marine food chain," Democratic Senator Frank Lautenberg of New Jersey told a Commerce Committee panel. "If that occurs, the consequences are devastating."
Lautenberg said that in New Jersey, Atlantic coast businesses generate $50 billion a year and account for one of every six jobs in the state.
Sigourney Weaver, a star of the environmental-themed film "Avatar" and narrator of the documentary "Acid Test" about ocean acidification, testified about its dangers. She said people seem more aware of the problem now than they did six months ago.
"I think that the science is so indisputable and easy to understand and ... we've already run out of time to discuss this," Weaver said by telephone after her testimony. "Now we have to take action."
(Editing by Sandra Maler)
New Illuminati – http://nexusilluminati.blogspot.com
This material is published under Creative Commons Copyright (unless an individual item is declared otherwise by copyright holder) – reproduction for non-profit use is permitted & encouraged, if you give attribution to the work & author - and please include a (preferably active) link to the original along with this notice. Feel free to make non-commercial hard (printed) or software copies or mirror sites - you never know how long something will stay glued to the web – but remember attribution! If you like what you see, please send a tiny donation or leave a comment – and thanks for reading this far…
The thought that NE Ohio is an area previously free of earthquake activity is utter nonsense.
From an article back in January 2008
Still, Ohio seems increasingly prone to the shakes; there have been 25 earthquakes of magnitude 2.0 or higher over the past two years, equal to the number in the five years before that. There have been about as many such quakes in Ohio so far this decade as in the previous 30 years.
“We think Ohio, especially northeast Ohio and Lake Erie, is going through a period of increased seismic activity,” said Mike Hansen, coordinator of the Ohio Seismic Network.
One explanation could be that there is better monitoring equipment than in the past. But more frequent activity is a likelihood.
Ohio State University geophysics professor Ralph Von Frese said location is the key.
“We’re the earthquake capital of the Big Ten, at least,” he said. “This is earthquake country, and Ohio is situated so that we’ve experienced more than every state around us.”
So just what is it that seems to make Ohio so … shaky?
“That’s the question everyone is interested in – and the one we can’t answer for certain,” Hansen said.
There are, however, several theories about frequent earthquakes, including the one with the 3.1 magnitude that hit Tuesday.
Geologists say Ohio might be especially active because of a combination of a few ancient collisions.
The first was probably 800 million years ago, when land masses the size of continents came together, causing mountain ranges and fault lines.
Another factor is that movement still is going on.
“We know that North America is being gradually pushed westward,” Hansen said.
Sylvia Hayek, a seismologist with Natural Resources Canada, said the Ohio area also may be reacting to something called glacial or crustal rebound. The idea is that the Earth’s crust is still recovering or shifting from the retreat of glaciers. The weight of the glaciers depressed the crust of the Earth.
When they retreated about 12,000 years ago, they caused some instability in the Earth’s crust in Ohio, Hansen and Hayek said.
And they haven’t all been small earthquakes in the past.
This is from the Ohio Department of Natural Resources:
January 31, 1986 Northeastern Ohio Earthquake
(from Summer 1986 Ohio Geology)
On January 31, 1986, many northeastern Ohio residents were startled into the realization that this area is seismically active; historically, the region has the second highest frequency of earthquake activity of any area of the state. Only Shelby County and vicinity in western Ohio have experienced more earthquakes in historic times. The 1986 northeastern Ohio earthquake has the distinction of being the most intensively studied Ohio earthquake, the first earthquake in the state for which injuries were recorded, and the nearest earthquake to a nuclear power plant in the United States. The 1986 event ranks as probably the third largest earthquake in Ohio.
The historic seismic activity in northeastern Ohio, which long predates the Calhio injection well and indeed any drilling or deep mining activity in this part of the state, suggests that seismic activity in this area is not a result of human activities.
At least 19 earthquakes are known to have occurred in the northeastern Ohio counties of Ashtabula (1), Cuyahoga (7), Lake (4), Lorain (1), Portage (3), and Summit (3) prior to the 1986 Lake County event. Most of these earthquakes, the earliest of which occurred in 1836, predate the availability of seismographs and are therefore located and rated as to intensity on the basis of newspaper and other historic accounts. These data have recently been researched and evaluated in detail by personnel from Weston Geophysical.
These accounts suggest that most of the previous seismic activity in northeastern Ohio was of relatively low intensity and associated with only minor and isolated damages such as a few broken windows and items falling off shelves. One earthquake, which occurred on March 9, 1943, had a Richter magnitude of 4.7 and an epicentral intensity of V. No significant damages were reported from this quake, which had a felt area of 220,000 square kilometers (85,000 square miles). This event was originally assigned a location beneath Lake Erie, offshore from Ashtabula County; however, recent reevaluation of the seismic records of this event by seismologists from the U.S. Geological Survey placed the epicenter on the Lake-Geauga County line, near the epicenter of the 1986 event.
If you are interested in looking at information on Ohio historical seismicity, earthquakes, etc., see: http://earthquake.usgs.gov/earthquakes/states/?region=Ohio
Note that there are two time periods since modern seismic monitoring has been in place during which there were 13 recorded earthquakes above magnitude 3.0, both of which predated the drilling of shale gas wells in Ohio by one to two decades.
I note that the article in question doesn’t mention the epicenter depths for these earthquakes. The fault system that runs through NE Ohio into the Great Lakes is thought to be responsible for earthquakes in the region, and generally the epicenter is several km deeper than any well, especially the shallow shale wells.
Sounds to me like the farmer quoted has been talking to a legal team bent on extracting payment from oil and gas operators to make claims (no matter how frivolous) go away. | <urn:uuid:923b674a-fa68-4ce5-89ce-f74e0ac40063> | 3.265625 | 1,128 | Comment Section | Science & Tech. | 44.164619 |
setsockopt - set the socket options
#include <sys/socket.h>

int setsockopt(int socket, int level, int option_name,
    const void *option_value, socklen_t option_len);
The setsockopt() function shall set the option specified by the option_name argument, at the protocol level specified by the level argument, to the value pointed to by the option_value argument for the socket associated with the file descriptor specified by the socket argument.
The level argument specifies the protocol level at which the option resides. To set options at the socket level, specify the level argument as SOL_SOCKET. To set options at other levels, supply the appropriate level identifier for the protocol controlling the option. For example, to indicate that an option is interpreted by TCP (Transmission Control Protocol), set level to IPPROTO_TCP as defined in the <netinet/in.h> header.
The option_name argument specifies a single option to set. The option_name argument and any specified options are passed uninterpreted to the appropriate protocol module for interpretations. The <sys/socket.h> header defines the socket-level options. The options are as follows:
- SO_DEBUG: Turns on recording of debugging information. This option enables or disables debugging in the underlying protocol modules. This option takes an int value. This is a Boolean option.
- SO_BROADCAST: Permits sending of broadcast messages, if this is supported by the protocol. This option takes an int value. This is a Boolean option.
- SO_REUSEADDR: Specifies that the rules used in validating addresses supplied to bind() should allow reuse of local addresses, if this is supported by the protocol. This option takes an int value. This is a Boolean option.
- SO_KEEPALIVE: Keeps connections active by enabling the periodic transmission of messages, if this is supported by the protocol. This option takes an int value.
If the connected socket fails to respond to these messages, the connection is broken and threads writing to that socket are notified with a SIGPIPE signal. This is a Boolean option.
- SO_LINGER: Lingers on a close() if data is present. This option controls the action taken when unsent messages queue on a socket and close() is performed. If SO_LINGER is set, the system shall block the calling thread during close() until it can transmit the data or until the time expires. If SO_LINGER is not specified, and close() is issued, the system handles the call in a way that allows the calling thread to continue as quickly as possible. This option takes a linger structure, as defined in the <sys/socket.h> header, to specify the state of the option and linger interval.
- SO_OOBINLINE: Leaves received out-of-band data (data marked urgent) inline. This option takes an int value. This is a Boolean option.
- SO_SNDBUF: Sets send buffer size. This option takes an int value.
- SO_RCVBUF: Sets receive buffer size. This option takes an int value.
- SO_DONTROUTE: Requests that outgoing messages bypass the standard routing facilities. The destination shall be on a directly-connected network, and messages are directed to the appropriate network interface according to the destination address. The effect, if any, of this option depends on what protocol is in use. This option takes an int value. This is a Boolean option.
- SO_RCVLOWAT: Sets the minimum number of bytes to process for socket input operations. The default value for SO_RCVLOWAT is 1. If SO_RCVLOWAT is set to a larger value, blocking receive calls normally wait until they have received the smaller of the low water mark value or the requested amount. (They may return less than the low water mark if an error occurs, a signal is caught, or the type of data next in the receive queue is different from that returned; for example, out-of-band data.) This option takes an int value. Note that not all implementations allow this option to be set.
- SO_RCVTIMEO: Sets the timeout value that specifies the maximum amount of time an input function waits until it completes. It accepts a timeval structure with the number of seconds and microseconds specifying the limit on how long to wait for an input operation to complete. If a receive operation has blocked for this much time without receiving additional data, it shall return with a partial count or errno set to [EAGAIN] or [EWOULDBLOCK] if no data is received. The default for this option is zero, which indicates that a receive operation shall not time out. This option takes a timeval structure. Note that not all implementations allow this option to be set.
- SO_SNDLOWAT: Sets the minimum number of bytes to process for socket output operations. Non-blocking output operations shall process no data if flow control does not allow the smaller of the send low water mark value or the entire request to be processed. This option takes an int value. Note that not all implementations allow this option to be set.
- SO_SNDTIMEO: Sets the timeout value specifying the amount of time that an output function blocks because flow control prevents data from being sent. If a send operation has blocked for this time, it shall return with a partial count or with errno set to [EAGAIN] or [EWOULDBLOCK] if no data is sent. The default for this option is zero, which indicates that a send operation shall not time out. This option stores a timeval structure. Note that not all implementations allow this option to be set.
For Boolean options, 0 indicates that the option is disabled and 1 indicates that the option is enabled.
Options at other protocol levels vary in format and name.
Upon successful completion, setsockopt() shall return 0. Otherwise, -1 shall be returned and errno set to indicate the error.
The setsockopt() function shall fail if:
- [EBADF] The socket argument is not a valid file descriptor.
- [EDOM] The send and receive timeout values are too big to fit into the timeout fields in the socket structure.
- [EINVAL] The specified option is invalid at the specified socket level or the socket has been shut down.
- [EISCONN] The socket is already connected, and a specified option cannot be set while the socket is connected.
- [ENOPROTOOPT] The option is not supported by the protocol.
- [ENOTSOCK] The socket argument does not refer to a socket.
The setsockopt() function may fail if:
- [ENOMEM] There was insufficient memory available for the operation to complete.
- [ENOBUFS] Insufficient resources are available in the system to complete the call.
The setsockopt() function provides an application program with the means to control socket behavior. An application program can use setsockopt() to allocate buffer space, control timeouts, or permit socket data broadcasts. The <sys/socket.h> header defines the socket-level options available to setsockopt().
Options may exist at multiple protocol levels. The SO_ options are always present at the uppermost socket level.
Sockets, bind(), endprotoent(), getsockopt(), socket(), the Base Definitions volume of IEEE Std 1003.1-2001, <netinet/in.h>, <sys/socket.h>
First released in Issue 6. Derived from the XNS, Issue 5.2 specification.
IEEE Std 1003.1-2001/Cor 2-2004, item XSH/TC2/D6/125 is applied, updating the SO_LINGER option in the DESCRIPTION to refer to the calling thread rather than the process. | <urn:uuid:f4429657-a9cf-4340-91ea-1bc1e4f6aa31> | 3.03125 | 1,532 | Documentation | Software Dev. | 50.099173 |
Extreme Mammal: Platypus
I have a sweatshirt I love hanging in my closet. It has small pictures of an alligator, duck, beaver, and snake. It says – with our powers combined, we are PLATYPUS! I couldn’t think of a better example of an extreme mammal than a Platypus. It is so unusual that when Europeans first encountered the species in 1798, it was thought to be a hoax. Captain John Hunter sent a pelt and sketch to scientists in Great Britain. They were dumbfounded. One scientist, George Shaw, even took scissors to the skin to check for stitches, as he thought it was an ingenious fake. And even in nature it looks out of place: with the beak of a duck, the body and fur of a beaver, and the behavior of an alligator, it seems to be nothing more than a mish-mosh. Which holds a grain of truth, for the Platypus sports attributes that thwart its classification as a mammal. Below are some of the more unique features of this species…
The Platypus is semi-aquatic. It forages for food in the water, swimming smoothly with its webbed feet. However, it can fold back the webbing on its feet when walking on land and clutch the earth with its nails. Its legs are on the sides of its body rather than underneath its body like other mammals’. It can run on land using knuckle walking; its gait resembles an alligator more than a mammal like a beaver. Moreover, the Platypus lays eggs that resemble those of reptiles. Like reptilian eggs, only part of the egg of a Platypus divides as it develops. The Platypus is one of the only mammals in the world to lay eggs rather than giving birth to live young.
It is also one of the only mammals to be venomous! Both females and males of the species have a spur on their hind feet. The female loses this spur after about three months to a year, but the male Platypus’s spur develops venom that can deliver excruciating pain. This venom is different from venoms in reptiles, as it is not life-threatening and doesn’t seem to be used for predation. Instead, the production of venom corresponds to mating seasons and seems to be used in male dominance displays.
For predation, the Platypus relies on its sensitive bill. The Platypus closes its eyes, ears, and nose while in the water, so it does not hunt by sight, sound, or smell. It has electroreceptors that detect tiny electrical currents given off by the muscular contractions of its prey. Being sensitive to these currents, it is able to differentiate between animate and inanimate objects and feed accordingly. The Platypus is a carnivore and forages for worms, insects, larvae, shrimp and other shellfish. Feeding becomes even more interesting as the Platypus has no teeth. It will store caught prey, mud and bits of gravel in pouches in its cheeks until it surfaces from the water. It will then use the debris and gravel to “chew” its food.
In the dictionary, mammals are classified as vertebrate animals that are warm-blooded, give birth to live young and feed their young milk. The Platypus is my favorite extreme mammal because it is the quintessential exception to this rule.
Tags: extreme mammal, platypus
From: LARRY KLAES (email@example.com)
Date: Thu Aug 01 2002 - 12:45:10 PDT
----- Original Message -----
Sent: Tuesday, July 30, 2002 12:34 PM
Subject: Look At That Asteroid (2002 NY40)
Look at that Asteroid
NASA Space Science
A big space rock will soon come so close to Earth that sky watchers can see
it through binoculars.
July 30, 2002: Relax, there's no danger of a collision, but it will be close
enough to see through binoculars: a big space rock, not far from Earth.
Astronomers discovered the nearby asteroid, named 2002 NY40--not to be
confused with better-known 2002 NT7--on July 14th. It measures about 800
meters across, and follows an orbit that ranges from the asteroid belt to
the inner solar system. On August 18th, the asteroid will glide past our
planet only 1.3 times farther away than the Moon.
"Flybys like this happen every 50 years or so," says Don Yeomans, the
manager of NASA's Near-Earth Object Program office at JPL. The last time
(that we know of) was August 31, 1925, when another 800-meter asteroid
passed by just outside the Moon's orbit. In those days there were no
dedicated asteroid hunters--the object, 2001 CU11, wasn't discovered until
77 years later. At the time of the flyby, no one even knew it was happening.
2002 NY40 is different. We know the asteroid is coming, and astronomers have
time to prepare.
One team of observers led by Mike Nolan at the giant Arecibo radar in Puerto
Rico will "ping" 2002 NY40 with radio waves as it approaches Earth. Such
data result in impressive 3D maps of asteroids, which have often surprised
astronomers with their weird shapes. Some prove to be binary systems (one
space rock orbiting another) and one even looks like a dog bone.
"Radar data will also improve our knowledge of the asteroid's orbit," adds
Jon Giorgini, a member of the radar team from JPL. "At present, we know
there's little risk of a collision with 2002 NY40 for decades. When the
Arecibo radar measurements are done, the orbit uncertainties should shrink
by more than a factor of 200. We'll be able to extrapolate the asteroid's
motion hundreds of years into the past and into the future, too."
2002 NY40 is faint now. It shines by reflected sunlight like a 17th
magnitude star. As it nears Earth, however, the space rock will brighten,
soaring to 9th magnitude on August 18th. That's about 16 times dimmer than
the dimmest star you can see without a telescope. But as asteroids go, it's
"Asteroids are hard to see," explains Yeomans, "because they're mostly black
like charcoal. The most common ones--carbon-rich C-type asteroids--reflect
only 3% to 5% of the light that hits them. Metallic asteroids, which are
somewhat rare, reflect more: 10% to 15%."
"We don't know yet what this asteroid is made of," he continued, "but we'll
have a much better idea by the end of August." Astronomers using
ground-based telescopes will have little trouble recording the asteroid's
spectrum and thus its composition.
On the date of closest approach, the asteroid will sail past Vega, the
brightest star in the evening summer sky. Sky watchers with powerful
binoculars or small telescopes can see it--a speck of light moving 8 degrees
per hour. (Note: The flyby will be visible mostly from Earth's northern
hemisphere; this is not a good opportunity for southern sky watchers. North
Americans can see it best after sunset on Aug. 17th; Europeans should look
during the hours before dawn on Aug. 18th.)
Something extraordinary will happen hours after 2002 NY40 passes Earth: the
space rock will quickly fade.
Asteroids, like moons and planets, have phases. The sunlit side of 2002 NY40
is facing Earth now. It's full, like a full Moon. On August 18th, the
asteroid will cross Earth's orbit on its way toward the Sun. Then the phase
of the asteroid will change--from full to gibbous to half.... finally the
night side will turn to face Earth. The asteroid will grow dark, like a new Moon.
It's not every day you can peer through binoculars and see a near-Earth
asteroid--and then see it disappear. But 2002 NY40 has a lot to offer.
"Mother Nature is making it very easy for us to study this one," says
Yeomans. That's good because "we need to know more about near-Earth
asteroids in case we ever need to destroy or deflect one." What are they
made of? How are asteroids put together? These are key questions that 2002
NY40 will help answer.
"Don't forget," adds Yeomans, "most asteroids pose no threat to Earth. But
they do contain valuable metals, minerals and even water that we might tap
in the future." When such asteroids come close (but not too close!) we have
relatively easy access to them--both to study and, one day perhaps, to mine.
Or, to paraphrase Nietzsche, asteroids (like 2002 NY40) that do not hit us,
make us stronger.
For more information about 2002 NY40, including an up-to-date ephemeris for
sky watchers, please visit JPL's Near-Earth Object Program web site:
3D Orbit Simulation:
This archive was generated by hypermail 2.1.2 : Thu Aug 01 2002 - 12:58:21 PDT | <urn:uuid:26f4bb5b-0a6e-48b8-bf6c-1fd898bb8b8a> | 3.171875 | 1,265 | Comment Section | Science & Tech. | 67.432705 |
Poisson's ratio (ν), named after Siméon Poisson, is the negative of the ratio of transverse to axial strain. When a sample object is stretched (or squeezed), the extension (or contraction) in the direction of the applied load is accompanied by a contraction (or extension) in a direction perpendicular to the applied load. The (negated) ratio between these two strains is Poisson's ratio.
When a material is compressed in one direction, it usually tends to expand in the other two directions perpendicular to the direction of compression. This phenomenon is called the Poisson effect. Poisson's ratio (ν) is a measure of the Poisson effect. The Poisson ratio is the ratio of the fraction (or percent) of transverse expansion divided by the fraction (or percent) of axial compression, for small values of these changes.
Conversely, if the material is stretched rather than compressed, it usually tends to contract in the directions transverse to the direction of stretching. This is a common observation when a rubber band is stretched and becomes noticeably thinner. Again, the Poisson ratio will be the ratio of relative contraction to relative stretching, and will have the same value as above. In certain rare cases, a material will actually shrink in the transverse direction when compressed (or expand when stretched), which will yield a negative value of the Poisson ratio.
The Poisson's ratio of a stable, isotropic, linear elastic material cannot be less than −1.0 nor greater than 0.5, due to the requirement that Young's modulus, the shear modulus and the bulk modulus have positive values.[1] Most materials have Poisson's ratio values ranging between 0.0 and 0.5. A perfectly incompressible material deformed elastically at small strains would have a Poisson's ratio of exactly 0.5. Most steels and rigid polymers when used within their design limits (before yield) exhibit values of about 0.3, increasing to 0.5 for post-yield deformation, which occurs largely at constant volume (Park, Seismic Performance of Steel-Encased Concrete Piles). Rubber has a Poisson ratio of nearly 0.5. Cork's Poisson ratio is close to 0, showing very little lateral expansion when compressed. Some materials, mostly polymer foams, have a negative Poisson's ratio; if these auxetic materials are stretched in one direction, they become thicker in the perpendicular directions. Some anisotropic materials have one or more Poisson ratios above 0.5 in some directions.
Assuming that the material is stretched or compressed along the axial direction (the x axis in the diagram below):

    ν = −(dε_trans)/(dε_axial) = −(dε_y)/(dε_x)

where
- ν is the resulting Poisson's ratio,
- ε_trans is transverse strain (negative for axial tension (stretching), positive for axial compression),
- ε_axial is axial strain (positive for axial tension, negative for axial compression).
On the molecular level, Poisson’s effect is caused by slight movements between molecules and the stretching of molecular bonds within the material lattice to accommodate the stress. When the bonds elongate in the direction of load, they shorten in the other directions. This behavior multiplied many times throughout the material lattice is what drives the phenomenon.
For a cube stretched in the x-direction (see figure 1) with a length increase of ΔL in the x direction, and a length decrease of ΔL′ in the y and z directions, the infinitesimal diagonal strains are given by:

    dε_x = dx/x,  dε_y = dy/y

Integrating the definition of Poisson's ratio:

    −ν ∫ dx/x = ∫ dy/y  (from L to L+ΔL on the left, L to L−ΔL′ on the right)

Solving and exponentiating, the relationship between ΔL and ΔL′ is found to be:

    1 − ΔL′/L = (1 + ΔL/L)^(−ν)

For very small values of ΔL and ΔL′, the first-order approximation yields:

    ΔL′ ≈ ν ΔL

The relative change of volume ΔV/V of a cube due to the stretch of the material can now be calculated. Using V = L³ and V + ΔV = (L + ΔL)(L − ΔL′)²:

    ΔV/V = (1 + ΔL/L)(1 − ΔL′/L)² − 1

Using the above derived relationship between ΔL and ΔL′:

    ΔV/V = (1 + ΔL/L)^(1−2ν) − 1

and for very small values of ΔL and ΔL′, the first-order approximation yields:

    ΔV/V ≈ (1 − 2ν) ΔL/L
For isotropic materials we can use Lamé's relation[2]

    ν = 1/2 − E/(6K)

where K is the bulk modulus and E is Young's modulus.
If a rod with diameter (or width, or thickness) d and length L is subject to tension so that its length will change by ΔL, then its diameter d will change by:

    Δd = −d · ν · ΔL/L

The above formula is true only in the case of small deformations; if deformations are large then the following (more precise) formula can be used:

    Δd = −d · (1 − (1 + ΔL/L)^(−ν))

where
- d is original diameter,
- Δd is rod diameter change,
- ν is Poisson's ratio,
- L is original length, before stretch,
- ΔL is the change of length.

The value Δd is negative, because the diameter decreases as the length increases.
For a linear isotropic material subjected only to compressive (i.e. normal) forces, the deformation of a material in the direction of one axis will produce a deformation of the material along the other axes in three dimensions. Thus it is possible to generalize Hooke's law (for compressive forces) into three dimensions:

    ε_x = (1/E) [σ_x − ν (σ_y + σ_z)]
    ε_y = (1/E) [σ_y − ν (σ_x + σ_z)]
    ε_z = (1/E) [σ_z − ν (σ_x + σ_y)]

where
- ε_x, ε_y and ε_z are strain in the direction of the x, y and z axes,
- σ_x, σ_y and σ_z are stress in the direction of the x, y and z axes,
- E is Young's modulus (the same in all directions for isotropic materials),
- ν is Poisson's ratio (the same in all directions for isotropic materials).
These equations will hold in the general case which includes shear forces as well as compressive forces, and the full generalization of Hooke's law is given by:

    ε_ij = (1/E) [(1 + ν) σ_ij − ν δ_ij σ_kk]

where δ_ij is the Kronecker delta (summation over the repeated index k is implied). For the anisotropic materials discussed below,

- E_i is the Young's modulus along axis i,
- G_ij is the shear modulus in direction j on the plane whose normal is in direction i,
- ν_ij is the Poisson's ratio that corresponds to a contraction in direction j when an extension is applied in direction i.
The Poisson's ratio of an orthotropic material is different in each direction (x, y and z). However, the symmetry of the stress and strain tensors implies that not all the six Poisson's ratios in the equation are independent. There are only nine independent material properties: three elastic moduli, three shear moduli, and three Poisson's ratios. The remaining three Poisson's ratios can be obtained from the relations

    ν_yx/E_y = ν_xy/E_x,  ν_zx/E_z = ν_xz/E_x,  ν_yz/E_y = ν_zy/E_z

From the above relations we can see that if E_x > E_y then ν_xy > ν_yx. The larger Poisson's ratio (in this case ν_xy) is called the major Poisson's ratio, while the smaller one (in this case ν_yx) is called the minor Poisson's ratio. We can find similar relations between the other Poisson's ratios.
where we have used the y–z plane of symmetry to reduce the number of constants, i.e., E_y = E_z, ν_xy = ν_xz and G_xy = G_xz.
The symmetry of the stress and strain tensors implies that

    ν_xy/E_x = ν_yx/E_y

This leaves us with six independent constants E_x, E_y, G_xy, G_yz, ν_xy and ν_yz. However, transverse isotropy gives rise to a further constraint between G_yz and E_y, ν_yz, which is

    G_yz = E_y / (2 (1 + ν_yz))

Therefore, there are five independent elastic material properties, two of which are Poisson's ratios. For the assumed plane of symmetry, the larger of ν_xy and ν_yx is the major Poisson's ratio. The other major and minor Poisson's ratios are equal.
| Material | Plane of symmetry | ν_xy | ν_yx | ν_zx | ν_xz | ν_zy | ν_yz |
| Nomex honeycomb core | x–y, x = ribbon direction | 0.49 | 0.69 | 0.01 | 2.75 | 3.88 | 0.01 |
| glass fiber-epoxy resin | x–y | 0.29 | 0.29 | 0.32 | 0.06 | 0.06 | 0.32 |
Some materials known as auxetic materials display a negative Poisson's ratio. When subjected to positive strain in a longitudinal axis, the transverse strain in the material will actually be positive (i.e. it would increase the cross-sectional area). For these materials, it is usually due to uniquely oriented, hinged molecular bonds. In order for these bonds to stretch in the longitudinal direction, the hinges must 'open' in the transverse direction, effectively exhibiting a positive strain.[7] This can also be done in a structured way and lead to new aspects in material design, as for mechanical metamaterials.
One area in which Poisson's effect has a considerable influence is in pressurized pipe flow. When the air or liquid inside a pipe is highly pressurized it exerts a uniform force on the inside of the pipe, resulting in a radial stress within the pipe material. Due to Poisson's effect, this radial stress will cause the pipe to slightly increase in diameter and decrease in length. The decrease in length, in particular, can have a noticeable effect upon the pipe joints, as the effect will accumulate for each section of pipe joined in series. A restrained joint may be pulled apart or otherwise prone to failure.
Another area of application for Poisson's effect is in the realm of structural geology. Rocks, like most materials, are subject to Poisson's effect while under stress. In a geological timescale, excessive erosion or sedimentation of Earth's crust can either create or remove large vertical stresses upon the underlying rock. This rock will expand or contract in the vertical direction as a direct result of the applied stress, and it will also deform in the horizontal direction as a result of Poisson's effect. This change in strain in the horizontal direction can affect or form joints and dormant stresses in the rock.[8]
The use of cork as a stopper for wine bottles is due to cork having a Poisson ratio of practically zero, so that, as the cork is inserted into the bottle, the upper part which is not yet inserted does not expand as the lower part is compressed. The force needed to insert a cork into a bottle arises only from the compression of the cork and the friction between the cork and the bottle. If the stopper were made of rubber, for example, (with a Poisson ratio of about 1/2), there would be a relatively large additional force required to overcome the expansion of the upper part of the rubber stopper.
- 3-D elasticity
- Hooke's Law
- Impulse excitation technique
- Orthotropic material
- Shear modulus
- Young's modulus
- Coefficient of thermal expansion
- H. Gercek, "Poisson's ratio values for rocks", International Journal of Rock Mechanics and Mining Sciences, Elsevier, January 2007, 44 (1), pp. 1–13.
- http://arxiv.org/ftp/arxiv/papers/1204/1204.3859.pdf - Limits to Poisson’s ratio in isotropic materials – general result for arbitrary deformation.
- Boresi, A. P, Schmidt, R. J. and Sidebottom, O. M., 1993, Advanced Mechanics of Materials, Wiley.
- Lekhnitskii, SG., (1963), Theory of elasticity of an anisotropic elastic body, Holden-Day Inc.
- Tan, S. C., 1994, Stress Concentrations in Laminated Composites, Technomic Publishing Company, Lancaster, PA.
- Poisson's ratio calculation of glasses
- Negative Poisson's ratio
- Meaning of Poisson's ratio
- Negative Poisson's ratio materials
- More on negative Poisson's ratio materials (auxetic)
Homogeneous isotropic linear elastic materials have their elastic properties uniquely determined by any two moduli among these; thus, given any two, any other of the elastic moduli can be calculated according to these formulas.
Typical situations in which a density matrix is needed include: a quantum system in thermal equilibrium (at finite temperatures), nonequilibrium time-evolution that starts out of a mixed equilibrium state, and entanglement between two subsystems, where each individual system must be described by a density matrix even though the complete system may be in a pure state.
The density matrix (commonly designated by ρ) is an operator acting on the Hilbert space of the system in question. For the special case of a pure state, it is given by the projection operator of this state. For a mixed state, where the system is in the quantum-mechanical state |ψj〉 with probability pj, the density matrix is the sum of the projectors, weighted with the appropriate probabilities (see bra-ket notation):
ρ = ∑_j p_j |ψ_j〉〈ψ_j|
The density matrix is used to calculate the expectation value of any operator A of the system, averaged over the different states |ψj〉. This is done by taking the trace of the product of ρ and A:
tr[ρ A] = ∑_j p_j 〈ψ_j|A|ψ_j〉
The probabilities pj are nonnegative and normalized (i.e. their sum gives one). For the density matrix, this means that ρ is a positive semidefinite hermitian operator (its eigenvalues are nonnegative) and the trace of ρ (the sum of its eigenvalues) is equal to one. | <urn:uuid:c9a61375-d47a-4b74-a84e-6614927377ec> | 3.359375 | 330 | Knowledge Article | Science & Tech. | 32.024125 |
News and trends in the geosciences for February 2000:
Bacteria in Lake Vostok
Beneath Antarctica's Vostok Station lies one of the last oases for life on Earth still unexplored: subglacial Lake Vostok.
Freeze-fry from the snowball
The theory that Earth experienced Neoproterozoic glaciations that covered the planet from the poles to the tropics finds renewed support.
Evidence for a methane burp
Could an undersea landslide have resulted in the release of enough methane to significantly contribute to the Late Paleocene Thermal Maximum warming event?
HELP for water resources
A new research initiative led by the United Nations will establish a worldwide, remote-sensing monitoring system to help prepare areas that could experience droughts.
Snores of sleeping giants
A steep-sided stratovolcano standing a mile above its surroundings that has never collapsed is a rare find -- should man heed warning signs of collapse more seriously?
Mountain winds take off
Coastal weather can be strongly affected by high, broad mountains with steep windward slopes, according to scientists at the National Center for Atmospheric Research.
Waves in the Digital Future
NSF and NASA will fund the Digital Library for Earth System Education.
Plus: notes about ongoing research in the geosciences.
Video: Self-aware monkeys
Experiments on monkeys suggest that the animals can recognise and react to their own image in a mirror. They altered their posture to look at their own genitals and other body parts they couldn't see directly.
The rhesus macaques studied in the experiments may be telling us that the current "gold standard" test for self-awareness is not sensitive enough to identify all animals that are self-aware – although the method is steeped in controversy.
All these animals pass the so-called "mark test", in which they are put to sleep, daubed with a spot of dye on the face to alter their appearance, then woken to see if they notice and react to the mark when they see it in a mirror.
Macaques have previously failed the mark test, and the animals tested in the current study were no exception. But they revealed by accident that they do indeed recognise themselves in mirrors.
They couldn't miss it
Their talents came to light in a completely unrelated experiment, which was intended to study how drugs that are given to children to combat attention-deficit hyperactivity disorder alter brain signalling. To find out, two macaques had been given skullcap-mounted implants to monitor electrical signals in their brains.
To the surprise of the researchers, the monkeys began interacting with a mirror they had been given purely for entertainment. The researchers' hunch is that, unlike the dye spot used in the mark test, the implant was big and invasive enough for the monkeys to realise that it was themselves they could see in the mirror, not other monkeys.
"We think the cap is equivalent to a 'supermark' that they definitely notice," says Luis Populin of the University of Wisconsin-Madison, who led the research.
Populin says that his colleague Abigail Rajala noticed that when the two animals were returned to their cage for the first time after having had the cap fitted, they manoeuvred both the mirror and their posture to view and groom the area around the implant.
"This grooming behaviour was inconsistent with the literature," says Populin, so he and his colleagues decided to investigate further.
First, he gave them the mark test, which they both failed. Then he extended the experiment to include five implanted monkeys in total, giving them all access to small mirrors and observing what happened.
All five clearly manipulated and manoeuvred their mirror to view and groom the implanted area and other "invisible" areas such as the genitals. When they could use the mirror they touched these areas 10 times as often as they did without it.
Next, the monkeys were given much larger mirrors, which enabled them to view themselves full-length. This time, they were even more curious, looking at themselves twice as often as in the smaller mirror, and performing gymnastic feats to view reflections of otherwise inaccessible parts of their bodies, especially their genitals. This behaviour vanished completely when the mirror was obscured with a black cover.
Seen and felt
Populin says that the behaviour met two gold standards associated with passing the mark test: first, the macaques quickly realised the reflections were not other monkeys; secondly, they used the mirror to view otherwise inaccessible parts of their bodies.
"It's similar to behaviours exhibited by chimpanzees when they confront mirrors," says Populin.
Populin suggests that by using "supermarks" instead of the ordinary dye mark, it may be possible to identify self-awareness even in animals who fail the mark test.
"It is a mark that's not only seen but also felt," says Frans de Waal of the Yerkes National Primate Research Center at Emory University in Atlanta, Georgia, whose work has helped reveal self-awareness in dolphins and elephants as well as apes. "As a result, the monkeys have two sources of feedback at the same time: the image in the mirror and the sensation of something new on their heads."
Diana Reiss of City University of New York, who has demonstrated that dolphins have self-awareness, agrees that the extra information is probably crucial. "The mark test may not be salient enough for some species," she says. So some, like the macaques, may need more obvious prompts.
Whatever the explanation, Populin says that the results clearly show that there is less of a divide than previously assumed between the "intelligent" species, like apes, and their less illustrious relatives.
The results are disputed by the inventor of the mark test, Gordon Gallup of the State University of New York in Albany. Like de Waal, he thinks that the tactile cues, along with being able to feel, touch and locate the implants, enable the monkeys to identify the objects, but that recognising the object doesn't mean they recognise themselves.
"In effect, the acrylic head implant is an inanimate object that has been placed on the rhesus monkey's head, and there's plenty of evidence that rhesus monkeys that can't recognise themselves in mirrors can nonetheless easily learn to use mirrored cues to locate otherwise hidden objects," he says.
Journal reference: PLoS One, DOI: 10.1371/journal.pone.0012865
Have your say
You Can't "prove" Awareness
Thu Sep 30 13:43:40 BST 2010 by Liza
"there's plenty of evidence that rhesus monkeys that can't recognise themselves in mirrors can nonetheless easily learn to use mirrored cues to locate otherwise hidden objects"
So, when investigating their genitals in front of a mirror, they're not self-aware? How is that different from investigating an inkmark in a mirror?
Besides, this whole only-animals-that-recognize-themselves-in-mirrors-are-self-aware idea is shifty. What about animals that mainly rely on smell and hearing? A dog may well see his reflection and think "hey, that looks a lot like me, but it has no smell so it's irrelevant". On the other hand, seeing an inkmark in a mirror and trying to remove it is no proof of self-awareness. It only proves the animals know how mirrors work. Trying to remove the mark is equivalent to any animal trying to remove a mark on a normally visible place on their body.
Am I The Only One Who's Thinking. . .
Thu Sep 30 13:47:52 BST 2010 by Liza
...that the cages for these monkeys are barbarically small?
Am I The Only One Who's Thinking. . .
Tue Dec 14 14:57:26 GMT 2010 by Karen
I was thinking the exact same thing. We, who are not only self-aware but also aware of others and their suffering, keep them in cages under horrible conditions. That's what's shocking!
Radioactivity and Shielding
I am working on a radioactivity experiment for my physics class. I measured
the absorption of radiation by matter using aluminum foil, placing various
numbers of sheets of it between the gamma source and the Geiger-Muller tube,
and found the counts per minute for different numbers of foils. I graphed
thickness vs. ln I (ln of cpm). I wanted to know: what is the actual
coefficient of linear absorption for aluminum, so I can compare it to my
experimental value? What would have been different if another material had
been used instead of aluminum? I have ln I = -ux + ln Io... would the
coefficient of linear absorption have been different with a different
material?
When radiation is absorbed, one "piece" of the radiation is picked up by an
atom or molecule of the absorbing material. It may be turned into heat, or
it may be re-emitted in another direction.
For gamma radiation, the "pieces" are photons. A large number are released
by the gamma source.
Some get stopped, while some pass through without hitting any aluminum
Different materials have different abilities to stop a gamma photon. Some
materials, such as lead and iron, are very good at absorbing a gamma photon.
Some materials, such as paper, are not very good at all. A sheet of lead
will absorb many more photons than will a sheet of paper with the same
thickness. Just as important as individual molecules is how tightly packed
the molecules are. Molecules that are very close together are more likely
to have a photon come into contact. The photon still may not be absorbed,
but it will have more molecules to get past.
For a VERY thin sheet of material, the coefficient of linear absorption is
(the portion of the intensity absorbed) divided by (the thickness). For
thicker sheets, you have to use the logarithmic relation.
I expect sheets of paper or plastic will give significantly smaller
coefficients than aluminum.
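A quick way to extract the experimental coefficient from the graph is a least-squares fit of ln I against thickness, since ln I = -ux + ln Io is a straight line with slope -u. The sketch below uses made-up count data (generated to follow I = Io e^(-0.2x), roughly the right order of magnitude for aluminum at common lab gamma energies); substitute the actual measurements:

```python
import math

# Hypothetical data: foil-stack thickness (cm) and measured counts per minute
x = [0.0, 0.5, 1.0, 1.5, 2.0]
cpm = [1000, 905, 819, 741, 670]

ln_i = [math.log(c) for c in cpm]

# Least-squares slope of ln I vs x:  slope = Σ(x-x̄)(y-ȳ) / Σ(x-x̄)²
n = len(x)
x_bar = sum(x) / n
y_bar = sum(ln_i) / n
slope = (sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, ln_i))
         / sum((xi - x_bar) ** 2 for xi in x))

mu = -slope          # ln I = -u x + ln Io, so u is minus the fitted slope
print(f"linear absorption coefficient ≈ {mu:.3f} per cm")
```

With real data the fitted value can then be compared directly against a tabulated coefficient for the material and gamma energy used.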
Dr. Ken Mellendorf
Illinois Central College
Update: June 2012 | <urn:uuid:c9d9494f-fcbc-4ff4-a86b-3d69bffa4380> | 3.15625 | 470 | Q&A Forum | Science & Tech. | 48.120616 |
A Primer on Python Metaclass Programming
Solving Problems with Magic
So far, we have seen the basics of metaclasses. Putting them to work is more subtle. The challenge of using metaclasses is that in typical OOP design, classes do not really do much. Class inheritance structures encapsulate and package data and methods, but one typically works with instances in the concrete.
There are two general categories of programming tasks where I think metaclasses are genuinely valuable.
The first, and probably more common, category is where you do not know at design time exactly what a class needs to do. Obviously, you will have some idea about it, but some particular detail might depend on information that will not be available until later. "Later" itself can be of two sorts: a), when a library module is used by an application, and b), at runtime when some situation exists. This category is close to what is often called "Aspect Oriented Programming" (AOP). Let me show an elegant example:
Example 7. Metaclass configuration at runtime
% cat dump.py
#!/usr/bin/python
import sys
if len(sys.argv) > 2:
    module, metaklass = sys.argv[1:3]
    m = __import__(module, globals(), locals(), [metaklass])
    __metaclass__ = getattr(m, metaklass)

class Data:
    def __init__(self):
        self.num = 38
        self.lst = ['a','b','c']
        self.str = 'spam'
    dumps = lambda self: `self`
    __str__ = lambda self: self.dumps()

data = Data()
print data

% dump.py
<__main__.Data instance at 1686a0>
As you would expect, this application prints out a rather generic description of the data object (a conventional instance). We get a rather different result by passing runtime arguments to the script:
Example 8. Adding an external serialization metaclass
% dump.py gnosis.magic MetaXMLPickler
<?xml version="1.0"?>
<!DOCTYPE PyObject SYSTEM "PyObjects.dtd">
<PyObject module="__main__" class="Data" id="720748">
  <attr name="lst" type="list" id="980012" >
    <item type="string" value="a" />
    <item type="string" value="b" />
    <item type="string" value="c" />
  </attr>
  <attr name="num" type="numeric" value="38" />
  <attr name="str" type="string" value="spam" />
</PyObject>
The particular example uses the serialization style of gnosis.xml.pickle, but the most current package also contains other metaclass serializers, such as MetaPrettyPrint. Moreover, a user of the dump.py "application" can impose the use of any "MetaPickler" she wishes, from any Python package that defines one. Writing an appropriate metaclass for this purpose will look something like this:
Example 9. Adding an attribute with a metaclass
class MetaPickler(type):
    "Metaclass for gnosis.xml.pickle serialization"
    def __init__(cls, name, bases, dict):
        from gnosis.xml.pickle import dumps
        super(MetaPickler, cls).__init__(name, bases, dict)
        setattr(cls, 'dumps', dumps)
The remarkable achievement of this arrangement is that the application programmer need have no knowledge about what serialization will be used--nor even whether serialization or some other cross-sectional capability will be added at the command line.
Perhaps the most common use of metaclasses is similar to that of MetaPicklers: adding, deleting, renaming, or substituting methods for those defined in the produced class. In our example, a "native" Data.dump() method is replaced by a different one from outside of the application, at the time the class Data is created (and therefore, in every subsequent instance).
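The examples above use Python 2 syntax (the module-level __metaclass__ hook, backtick repr, the print statement). For readers on Python 3, a rough equivalent of the MetaPickler idea looks like the sketch below; json.dumps stands in for gnosis.xml.pickle.dumps, which is assumed unavailable here:

```python
import json

class MetaPickler(type):
    """Inject a 'dumps' serializer into every class this metaclass produces."""
    def __init__(cls, name, bases, namespace):
        super().__init__(name, bases, namespace)
        # Stand-in serializer; the article used gnosis.xml.pickle.dumps
        cls.dumps = lambda self: json.dumps(vars(self))

class Data(metaclass=MetaPickler):
    def __init__(self):
        self.num = 38
        self.lst = ['a', 'b', 'c']
        self.str = 'spam'

print(Data().dumps())   # {"num": 38, "lst": ["a", "b", "c"], "str": "spam"}
```

The mechanism is unchanged: the class author never mentions serialization, yet every instance gains a dumps() method at class-creation time.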
A useful book on metaclasses is Putting Metaclasses to Work, by Ira R. Forman, Scott Danforth, Addison-Wesley 1999 (ISBN 0201433052).
For metaclasses in Python specifically, Guido van Rossum's essay, "Unifying types and classes in Python 2.2," is useful.
My Gnosis Utilities package contains functions to make working with metaclasses easier and more powerful.
David Mertz , being a sort of Foucauldian Berkeley, believes, esse est denunte.
|Jun24-12, 05:45 AM||#1|
Momentum or Earth?
Earlier I was thinking about circular motion, centripetal forces, and the Earth around the Sun. If p = mv and the velocity of an object in circular motion is constantly changing, does this mean Earth's momentum is constantly changing?
Or is it that Earth is travelling on a straight path through curved space-time and does not experience a centripetal force?
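In the Newtonian picture, momentum is a vector, so it does change continuously in a circular orbit: the magnitude stays essentially constant while the direction rotates. A minimal numeric sketch, using rough illustrative values (m ≈ 5.97×10²⁴ kg, orbital speed v ≈ 29.8 km/s, a circular orbit assumed):

```python
import math

m = 5.97e24        # Earth's mass, kg (approximate)
v = 2.98e4         # Earth's orbital speed, m/s (approximate)

def momentum(theta):
    """Momentum vector at orbital angle theta (tangential velocity, circular orbit)."""
    return (-m * v * math.sin(theta), m * v * math.cos(theta))

p1 = momentum(0.0)             # at one point of the orbit
p2 = momentum(math.pi / 2)     # a quarter-orbit (about three months) later

mag = lambda p: math.hypot(p[0], p[1])
print(f"|p1| = {mag(p1):.3e}, |p2| = {mag(p2):.3e}")   # same magnitude
print(p1 != p2)   # True: the vector, and hence the momentum, has changed
```

The rate of that change, dp/dt, is exactly the centripetal force the Sun's gravity supplies.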
No doubt many of you have seen stories during the last few years about the repeated episodes of cold and snow in Western Europe-- and particularly Britain. Last December, some called it the snowiest winter in 25 years and the coldest in over 100 years.
On average, Britain’s winter climate is mild and has been so for many decades, probably since the end of the “Little Ice Age” in the 1800s. So what’s going on? Is the Gulf Stream changing? Or is it something else?
Despite its overall mild climate, Britain’s winters are also extremely variable, occasionally resembling those usually seen in Eastern Europe, but with less intense cold.
What has caught the eye of British climatologists is that last winter was the third in a row with unusual cold and snow for at least part of the winter. In an article entitled "Do freezing temperatures across Europe indicate another ice age is imminent?", the author says that "scientists will be studying facts and figures to determine if … the cold weather conditions are a temporary blip or an indication of something more serious such as the return to an ice-age period."
All of this may have prompted one of our blog commenters last year to make reference to a Discover Magazine article of May 22, 2004: “A New Ice Age: The Day After Tomorrow?” (The article, in turn was apparently prompted by a movie with a similar title.)
In the story, Brad Lemley discusses the theory proposed by climate scientist William Curry and others at the Woods Hole Institution in Massachusetts, that in the face of global warming, changes in North Atlantic ocean currents will result in a mini ice age for both western Europe and the eastern United States for perhaps as long as the last one—400-500 years!
The commenter asked for the Capital Weather Gang’s thoughts on the above theory. Speaking for myself, I was intrigued not so much by the Woods Hole concept (because I had heard of it before—and disregarded it), but by the opposing theory of Richard Seager, a climate scientist at Columbia University’s Lamont-Doherty Earth Observatory.
In Seager’s study, entitled “Is the Gulf Stream responsible for Europe’s mild winters?”, Seager and his team maintain that the Gulf Stream has little to do with Britain’s relatively mild winters, that even a total Gulf Stream shut-down would have little impact on the British climate and contribute to only a modest cooling. Seager believes the idea was started by M.F. Maury in his 1855 book The Physical Geography of the Sea and its Meteorology, and has been perpetuated ever since.
All of which brings me back to my earliest science classes, when we were told that the only reason London, England, for example, which is even further north than Winnipeg, Canada, is so much milder is because of the Gulf Stream. The current, it was (and continues to be) said, carries vast quantities of heat across the North Atlantic, much of which is released into the atmosphere of western Europe by the prevailing westerlies.
The underlying concept of the above theory seems plausible enough. But I always thought that a huge mistake, as Seager points out, was that without the Gulf Stream, the prevailing winds would still be picking up and carrying some warmth—albeit minimal--and moisture to the British Isles, which would still have a primarily maritime, rather than a continental, climate. After all, these winds would not have passed over snow-covered prairies.
On the contrary, we know that the eastern United States is downwind, more or less, from the huge North American land mass. Under the right conditions, arctic high pressure systems can race southeastward over snow covered terrain, with relatively little modification from their source region.
In order for anything close to the above process to occur in Britain, either (1) a polar continental air mass must expand westward and southward from Russia and through Scandinavia; or (2) what’s called (in Britain) an Arctic maritime air mass must dive due south. Both of these scenarios occur relatively infrequently because they are contrary to the more typical west-to-east circulation pattern.
But during the “Little Ice Age,” which lasted roughly 500 years until the mid-nineteenth century, conditions were much different, as Brian Fagan points out in his book, The Little Ice Age, (Basic Books, Copyright 2000). Water temperatures were significantly colder, sub-polar ice floes extended much further south, and winter harbor ice in some years was quite extensive off the coasts of eastern England, France, and Holland, sometimes closing down shipping lanes in the North Sea. As a result, frigid air masses could approach Britain with less modification.
However, even if the Gulf Stream showed signs of shutting down completely, prompting some experts to call for another mini-ice age, the “thousand-pound gorilla,” as Seager puts it, would be the NAO (North Atlantic Oscillation).
A negative NAO strongly correlates with cold western European and eastern U.S. winters. But in Seager's opinion, with or without the Gulf Stream, the NAO is apt to frequently go positive in the future, providing an extended period of balmy winters. The culprit, he believes: increasing atmospheric greenhouse gases. (Seager modestly declines to be thought of as an expert on the future gyrations of the NAO.)
Brian Fagan seems to bolster Seager’s research, with convincing evidence of a persistent negative NAO during that frigid period’s 500 year reign.
In summary, I do think Seager’s basic hypothesis deserves more debate by meteorologists, climatologists, and lay people alike. What do you, our valued readers, think about Seager’s theory? We’ve set up a poll. Give us your opinion.
If you vote no, give us your thoughts on the primary driver of Britain’s climate... | <urn:uuid:c27ab57b-1794-4820-ae03-50712486696b> | 3.453125 | 1,265 | Personal Blog | Science & Tech. | 45.154909 |
Here’s how Earthshine works. Sunlight reflects off the blue and white Earth into space at the moon. The moon absorbs some and sends the rest back to us. We see this twice-reflected light as a dim illumination of the portion of the moon not lit by direct sunlight. Illustration, crescent and sun photos: Bob King; Earth image: NASA
Can’t get enough of that good old moonshine. Wait a second, I meant Earthshine. Our region’s been fortunate to have several clear nights in a row for watching this spooky light fill out the fattening crescent. I hope you’ve had a chance to see it too.
Last night during astronomy class, we talked about how late into the moon’s phase Earthshine might be visible. Years ago, I did an experiment using my 10-inch reflecting telescope and was able to see it when the moon was nine or ten days past new. More casually, I’ve seen it in binoculars when the moon was at half (first quarter phase).
This diagram shows the moon’s phase and the corresponding phase of the Earth as seen by someone on the moon. Notice that as the moon fills out, the Earth gets thinner. Illustration: Bob King; Earth and moon from Stellarium.
As the moon orbits the Earth, it marches upward from the sun while increasing in phase. As seen from the moon, the Earth also goes through phases that are complementary to the moon’s. When the moon’s a very thin crescent, an astronaut on its surface would look back toward our planet and see a nearly full Earth. A full Earth reflects a lot of sunlight back at the moon, so Earthshine is brightest when the crescent is thinnest.
As the moon’s phase increases toward first quarter and beyond, the Earth’s phase wanes, going from full to half to crescent. With less Earth to reflect sunlight, the Earthshine gets fainter and fainter. It also doesn’t help that the area for the Earth to illuminate shrinks as the sunlit portion of the moon grows ever larger night by night.
Our moon’s not the only one lit by its home planet. This is Saturn’s moon Iapetus photographed by the Cassini spacecraft. The right side is overexposed because it’s illuminated by the sun. The left side glows by light reflecting off Saturn’s clouds. Since sunlight is much fainter at Saturn’s distance than at Earth’s, Cassini had to make a long time exposure to capture this image. Credit: NASA
So now I’d like you to try a little experiment yourself. How many days after new moon can you still see the Earthshine — with your naked eye and also with binoculars? New moon was Friday April 24 which would make the moon five days old tonight (Weds.). Last night, the whole class saw the Earthshine easily without optical aid. As it becomes increasingly fainter in the coming nights, you’ll need to use averted vision to spot it. Look around and about the faint part of the moon instead of straight at it. Even better, hide the bright part of the moon behind a wall or power pole.
Drop me an e-mail and let me know how you fare. I’ll be out there drinking in my share of Earthshine too.
Lyle Anderson of Duluth sent this photo of the International Space Station he took from his home on Sunday at 4:42 a.m. (I added the annotations). The station’s been making regular pre-dawn passes over the U.S. this week. Check yesterday’s blog for times. | <urn:uuid:488aedb2-991d-44d5-8d0b-4d823a8e6d72> | 4.09375 | 791 | Personal Blog | Science & Tech. | 72.886655 |
Getting Acquainted with Entropy
You can experience directly the mass, volume, or temperature of a substance, but you cannot experience its entropy. Consequently you may have the feeling that entropy is somehow less real than other properties of matter. We hope to show in this section that it is quite easy to predict whether the entropy under one set of circumstances will be larger than under another set of circumstances, and also to explain why. With a little practice in making such predictions in simple cases you will acquire an intuitive feel for entropy and it will lose its air of mystery.
The entropy of a substance depends on two things: first, the state of a substance—its temperature, pressure, and amount; and second, how the substance is structured at the molecular level. We will discuss how state properties affect entropy first.
Temperature As we saw in the last section, there should be only one way of arranging the energy in a perfect crystal at 0 K. If W = 1, then S = k ln W = 0; so that the entropy should be zero at the absolute zero of temperature. This rule, known as the third law of thermodynamics, is obeyed by all solids unless some randomness of arrangement is accidentally “frozen” into the crystal. As energy is fed into the crystal with increasing temperature, we find that an increasing number of alternative ways of dividing the energy between the atoms become possible. W increases, and so does S. Without exception the entropy of any pure substance always increases with temperature.
Volume and Pressure We argued earlier that when a gas doubles its volume, the number of ways in which the gas molecules can distribute themselves in space is enormously increased and the entropy increases by 5.76 J K–1. More generally the entropy of a gas always increases with increasing volume and decreases with increasing pressure. In the case of solids and liquids the volume changes very little with the pressure and so the entropy also changes very little.
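The figure of 5.76 J K⁻¹ quoted above (for one mole of gas doubling its volume) is just ΔS = R ln 2, since doubling the volume doubles the number of spatial arrangements available to each molecule. It is easy to verify:

```python
import math

R = 8.314                    # gas constant, J K⁻¹ mol⁻¹
delta_S = R * math.log(2)    # entropy change when 1 mol of gas doubles its volume
print(f"dS = {delta_S:.2f} J/K per mol")   # dS = 5.76 J/K per mol
```

Tripling the volume would give R ln 3, and halving it would give a negative R ln 2, matching the rule that a gas's entropy falls as it is compressed.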
Amount of Substance One of the main reasons why the entropy is such a convenient quantity to use is that its magnitude is proportional to the amount of substance. Thus the entropy of 2 mol of a given substance is twice as large as the entropy of 1 mol. Properties which behave in this way are said to be extensive properties. The mass, the volume, and the enthalpy are also extensive properties, but the temperature, pressure, and thermodynamic probability are not.
“There is absolutely nothing to restrict the geographical ranges of animals in the deep sea. Dr. Wallich, the pioneer of deep-sea research, eighteen years ago recognized the deep homothermal sea “As the great highway for animal migration, extending pole to pole” Below 500 fathoms it is everywhere dark and cold, and there are no ridges that rise on the ocean bottom to within 500 fathoms of the surface, so as to bar the migration of animals in the course of generations from one ocean to another, or all over the bottom of any one of the oceans.” –Mosely 1880, p. 546
In the last 130 years, our view of the deep oceans has radically changed. No longer is the largest environment on Earth considered a dark homogeneous wasteland, unchanging through time, buffered against climatic fluctuation, and devoid of biodiversity. The modern view of the deep sea recognizes a varied landscape comprised of reducing habitats such as hydrothermal vents and methane seeps; mid-oceanic ridge systems and geologic hotspots that produce topographic complexity and new volcanic substrates; large food falls in the form of wood and whales that provide oases of food in a low-productivity system; intricate canyon and slope systems presenting radical environmental shifts; and oxygen minimum zones intercepting continental margins. Even the expansive mud bottom that serves as the backdrop for these multiple habitats is considered a spatially and temporally dynamic system, characterized by episodic benthic storms, internal tides, downslope debris and sediment flows, and patchy carbon input across multiple scales; it is rich with biogenic disturbance and structure, and intrinsically linked to ocean surface processes. The hypothesis of a biological desert is but a ghost of science past, put to rest by findings of spectacularly high biodiversity. But what of the notion of deep-sea species unbound by dispersal barriers, able to distribute across the farthest extents of ocean basins? What is the biogeography of the deep sea?
In a multipart series I will examine the biogeography (defined very well by Jim Lemiere as the patterns of the geographic distribution of biodiversity – where organisms are, where they ain’t, and who lives with or without whom) of the deep. I do this in conjunction with an invited review on the same project, authored by myself and two accomplished deep-sea biologists that I have longed admired and possess a deep respect for. My plan is to use DSN as sounding board for this project.
Now, the biogeography of deep-sea organisms is historically a black box, characterized more by conjecture than data. This is unsurprising given the remoteness of this environment and the sampling challenges it presents. Previously, many assumed that, due to the lack of perceived environmental variability and geographic barriers, the ranges of deep-sea species on the abyssal plains and continental margins were exceedingly large. New research utilizing higher-resolution sampling, molecular methods, and an ever-expanding amount of information in public databases is once again raising the question of how small marine invertebrates disperse across the vast distances of the ocean floor. Recent findings on unique seamount and chemosynthetic habitats suggest high levels of endemism and challenge the broad applicability of a single paradigm for all deep-sea environments.
Stay tuned for the first segment on the origins of deep-sea fauna. | <urn:uuid:ba3506d4-0626-48b6-be9e-33e0594da13e> | 2.9375 | 686 | Personal Blog | Science & Tech. | 23.86 |
|Molar mass||112.08676 g/mol|
|N/A - decomposes|
|Solubility in water||Soluble|
|Main hazards||carcinogen and teratogen with chronic exposure|
|Flash point||non-flammable|
Except where noted otherwise, data are given for materials in their standard state (at 25 °C, 100 kPa).
Uracil (U) is one of the four nucleobases in the nucleic acid RNA that are represented by the letters A, G, C and U. The others are adenine, cytosine, and guanine. In RNA, uracil (U) binds to adenine (A) via two hydrogen bonds. In DNA, the uracil nucleobase is replaced by thymine.
Uracil is a common and naturally occurring pyrimidine derivative. Originally discovered in 1900, it was isolated by hydrolysis of yeast nuclein that was found in bovine thymus and spleen, herring sperm, and wheat germ. It is a planar, unsaturated compound that has the ability to absorb light.
Studies reported in 2008, based on 12C/13C isotopic ratios of organic compounds found in the Murchison meteorite, suggested that uracil, xanthine and related molecules were formed extraterrestrially.
In RNA, uracil base-pairs with adenine and replaces thymine during DNA transcription. Methylation of uracil produces thymine. In DNA, the evolutionary substitution of thymine for uracil may have increased DNA stability and improved the efficiency of DNA replication. Uracil pairs with adenine through hydrogen bonding. When base pairing with adenine, uracil acts as both a hydrogen bond acceptor and a hydrogen bond donor. In RNA, uracil binds with a ribose sugar to form the ribonucleoside uridine. When a phosphate attaches to uridine, uridine 5'-monophosphate is produced.
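The T-to-U substitution and U–A pairing described above can be sketched in a few lines of Python. This is an illustrative toy, not a bioinformatics tool, and the function names are mine:

```python
# RNA base-pairing partners: U pairs with A, G pairs with C.
RNA_PAIR = {"A": "U", "U": "A", "G": "C", "C": "G"}

def transcribe(coding_strand: str) -> str:
    """mRNA carries the DNA coding-strand sequence with T replaced by U."""
    return coding_strand.replace("T", "U")

def rna_complement(rna: str) -> str:
    """Return the strand that base-pairs with the given RNA sequence."""
    return "".join(RNA_PAIR[base] for base in rna)

mrna = transcribe("ATGCTT")   # "AUGCUU"
```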
Uracil undergoes amide-imidic acid tautomeric shifts because any nuclear instability the molecule may have from the lack of formal aromaticity is compensated by the cyclic-amidic stability. The amide tautomer is referred to as the lactam structure, while the imidic acid tautomer is referred to as the lactim structure. These tautomeric forms are predominant at pH 7. The lactam structure is the most common form of uracil.
Uracil also recycles itself to form nucleotides by undergoing a series of phosphoribosyltransferase reactions. Degradation of uracil produces the substrates aspartate, carbon dioxide, and ammonia.
- C4H4N2O2 → H3NCH2CH2COO- + NH4+ + CO2
Uracil is a weak acid. The first site of ionization of uracil is not known. The negative charge is placed on the oxygen anion and produces a pKa of less than or equal to 12. The basic pKa = -3.4, while the acidic pKa = 9.389. In the gas phase, uracil has 4 sites that are more acidic than water.
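Using the acidic pKa of 9.389 quoted above, the Henderson–Hasselbalch relation gives the fraction of uracil deprotonated at a given pH. This is a simple monoprotic sketch, and the function name is mine:

```python
def fraction_deprotonated(pH: float, pKa: float) -> float:
    """Henderson-Hasselbalch: [A-] / ([HA] + [A-]) = 1 / (1 + 10**(pKa - pH))."""
    return 1.0 / (1.0 + 10.0 ** (pKa - pH))

# With the acidic pKa of 9.389 quoted above, only about 0.4% of uracil
# is ionized at physiological pH 7:
f = fraction_deprotonated(7.0, 9.389)   # about 0.004
```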
In a scholarly article published in October 2009, NASA scientists reported having reproduced uracil from pyrimidine by exposing it to ultraviolet light under space-like conditions. This suggests that one possible natural source of uracil in the RNA world could have been panspermia.
There are many laboratory syntheses of uracil available. The first reaction is the simplest of the syntheses, by adding water to cytosine to produce uracil and ammonia. The most common way to synthesize uracil is by the condensation of maleic acid with urea in fuming sulfuric acid as seen below also. Uracil can also be synthesized by a double decomposition of thiouracil in aqueous chloroacetic acid.
- C4H5N3O + H2O → C4H4N2O2 + NH3
- C4H4O4 + CH4N2O → C4H4N2O2 + 2 H2O + CO
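Both synthesis equations above are atom-balanced, and the product's molar mass matches the 112.08676 g/mol given in the property list earlier. A short script can verify this; the formula parser below is a minimal sketch that handles only flat formulas like C4H4N2O2 (no parentheses), and the atomic weights are standard values chosen to match the infobox precision:

```python
import re

# Atomic weights chosen to match the precision of the quoted molar mass.
WEIGHTS = {"C": 12.0107, "H": 1.00794, "N": 14.0067, "O": 15.9994}

def parse_formula(formula: str) -> dict:
    """Count atoms in a flat formula like 'C4H4N2O2' (no parentheses)."""
    counts = {}
    for elem, num in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        if elem:
            counts[elem] = counts.get(elem, 0) + (int(num) if num else 1)
    return counts

def is_balanced(reactants, products) -> bool:
    """Check that both sides of a reaction contain the same atoms."""
    def totals(side):
        t = {}
        for coeff, formula in side:
            for elem, n in parse_formula(formula).items():
                t[elem] = t.get(elem, 0) + coeff * n
        return t
    return totals(reactants) == totals(products)

def molar_mass(formula: str) -> float:
    return sum(WEIGHTS[e] * n for e, n in parse_formula(formula).items())

# Cytosine + water -> uracil + ammonia
hydrolysis = is_balanced([(1, "C4H5N3O"), (1, "H2O")],
                         [(1, "C4H4N2O2"), (1, "NH3")])
# Maleic acid + urea -> uracil + 2 water + carbon monoxide
condensation = is_balanced([(1, "C4H4O4"), (1, "CH4N2O")],
                           [(1, "C4H4N2O2"), (2, "H2O"), (1, "CO")])
```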
Uracil readily undergoes regular reactions including oxidation, nitration, and alkylation. While in the presence of phenol (PhOH) and sodium hypochlorite (NaOCl), uracil can be visualized in the blue region of UV light. Uracil also has the capability to react with elemental halogens because of the presence of more than one strongly electron donating group.
Uracil readily undergoes addition to ribose sugars and phosphates to partake in synthesis and further reactions in the body. Uracil becomes uridine, uridine monophosphate (UMP), uridine diphosphate (UDP), uridine triphosphate (UTP), and uridine diphosphate glucose (UDP-glucose). Each one of these molecules is synthesized in the body and has specific functions.
When uracil reacts with anhydrous hydrazine, a first-order kinetic reaction occurs and the ring of uracil opens up. If the pH of the reaction increases to >10.5, the uracil anion forms, making the reaction go much slower. The same slowing of the reaction occurs if the pH decreases because of the protonation of the hydrazine. The reactivity of uracil is unchanged even if the temperature changes.
Uracil can be used for drug delivery and as a pharmaceutical. When elemental fluorine is reacted with uracil, 5-fluorouracil is produced. 5-Fluorouracil is an anticancer drug (antimetabolite) used to masquerade as uracil during the nucleic acid replication process. Because 5-fluorouracil is similar in shape to, but does not perform the same chemistry as, uracil, the drug inhibits RNA replication enzymes, thereby eliminating RNA synthesis and stopping the growth of cancerous cells. Uracil can also be used in the synthesis of caffeine.
Uracil's use in the body is to help carry out the synthesis of many enzymes necessary for cell function through bonding with riboses and phosphates. Uracil serves as an allosteric regulator and coenzyme for reactions in the human body and in plants. UMP controls the activity of carbamoyl phosphate synthetase and aspartate transcarbamoylase in plants, while UDP and UTP regulate CPSase II activity in animals. UDP-glucose regulates the conversion of glucose to galactose in the liver and other tissues in the process of carbohydrate metabolism. Uracil is also involved in the biosynthesis of polysaccharides and the transportation of sugars containing aldehydes.
Uracil can also increase the risk of cancer in cases in which the body is extremely deficient in folate. The folate deficiency leads to an increased ratio of deoxyuridine monophosphate (dUMP) to deoxythymidine monophosphate (dTMP), misincorporation of uracil into DNA, and eventually low production of DNA.
Uracil can be used to determine microbial contamination of tomatoes. The presence of uracil is an indication of lactic acid bacteria contamination in the fruit. Uracil derivatives containing a diazine ring are used in pesticides. Uracil derivatives are more often used as antiphotosynthetic herbicides, destroying weeds in cotton, sugar beet, turnips, soya, peas, sunflower crops, vineyards, berry plantations, and orchards.
In yeast, uracil concentrations are inversely proportional to levels of uracil permease.
- Myers, Richard L.; Myers, Rusty L. The 100 Most Important Chemical Compounds, pp. 92-93.
- Garrett, Reginald H.; Grisham, Charles M. Principles of Biochemistry with a Human Focus. United States: Brooks/Cole Thomson Learning, 1997.
- Brown, D.J. Heterocyclic Compounds: The Pyrimidines. Vol 52. New York: Interscience, 1994.
- Horton, Robert H.; et al. Principles of Biochemistry. 3rd ed. Upper Saddle River, NJ: Prentice Hall, 2002.
- Martins, Zita; Botta, Oliver; Fogel, Marilyn L.; Sephton, Mark A.; Glavin, Daniel P.; Watson, Jonathan S.; Dworkin, Jason P.; Schwartz, Alan W. et al. (15 June 2008). "Extraterrestrial nucleobases in the Murchison meteorite". Earth and Planetary Science Letters 270 (1–2): 130–136. arXiv:0806.2286. Bibcode:2008E&PSL.270..130M. doi:10.1016/j.epsl.2008.03.026.
- AFP Staff (20 August 2009). "We may all be space aliens: study". AFP. Retrieved 2011-08-14.
- Clark et al. "The Surface Composition of Titan". Smithsonian/NASA Astrophysics Data System. Retrieved 15 November 2012.
- Zorbach, W.W. Synthetic Procedures in Nucleic Acid Chemistry: Physical and Physicochemical Aids in Determination of Structure. Vol 2. New York: Wiley-Interscience, 1973.
- Lee, J.K.; Kurinovich, Ma. J Am Soc Mass Spectrom. 13(8), 2005, 985-95.
- Chittenden, G.J.F.; Schwartz, Alan W. Nature. 263(5575), 350-1.
- Kochetkov, N.K. and Budovskii, E.I. Organic Chemistry of Nucleic Acids Part B. New York: Plenum Press, 1972.
- "A Novel Method of Caffeine Synthesis from Uracil". Synthetic Communications 33 (19): 3291-3297. doi:10.1081/SCC-120023986.
- Brown, E.G. Ring Nitrogen and Key Biomolecules: The Biochemistry of N-Heterocycles. Boston: Kluwer Academic Publishers, 1998.
- Mashiyama, S.T.; et al. Anal Biochem. 330(1), 2004, 58-69.
- Hidalgo, A.; et al. J Agric Food Chem. 53(2), 2005, 349-55.
- Pozharskii, A.F.; et al. Heterocycles in Life and Society: An Introduction to Heterocyclic Chemistry and Biochemistry and the Role of Heterocycles in Science, Technology, Medicine, and Agriculture. New York: John Wiley and Sons, 1997. | <urn:uuid:c3d90c7e-d96c-464a-909f-0daebabef533> | 3.546875 | 2,341 | Knowledge Article | Science & Tech. | 41.865531 |
Brief Summary
The Phycitinae comprise approximately 3500 described species in more than 550 genera (Beccaloni et al. 2003; Nuss et al. 2009). The subfamily occurs in all biogeographical regions except Antarctica, but it is poorly known outside the Holarctic biomes, and the number of species yet to be described is likely to exceed the number already named. The adults vary in size from very small to relatively large micro moths. The forewings are generally relatively narrow and elongate, and rather drab in colour: greyish-brownish in a variable pattern with white and black. But some are more brightly coloured, with patterns of red, pink and yellow. A discal spot is often present, and many groups have a distinct white costal margin. The hind wings are broad and generally whitish, without distinct markings. Most adults are crepuscular or nocturnal and are readily attracted to UV lights. They rest with the wings curled cigar-like around the body and the antennae pressed backwards flat along the body. The larvae are generally concealed feeders in a wide variety of plants or stored dry products. Many are serious pests, but others, such as the Cactus-moth (Cactoblastis cactorum), are important biological control agents of invasive plants. | <urn:uuid:9fb9c806-67c0-4ab2-812c-697ebdf6b621> | 3.703125 | 276 | Knowledge Article | Science & Tech. | 37.311182 |
The USGS Water Science School
The U.S. Geological Survey (USGS) often works on projects where the ecology and biology of local streams are studied. In these studies, it is often necessary to investigate what fish live in the stream. This picture shows fish being collected as part of a water-quality study in Georgia. Part of this study was to check the fish in different parts of the basin to see if and how they were affected by local pollution and chemicals, such as pesticides. The hydrologists had to come up with a way to collect the fish, and we see them in action. They are actually shocking the water with a strong electrical charge to stun the fish into submission so they can be collected. The hydrologist in the center has a power pack on her back and is holding the electrical wand. At the correct moment she submerges the wand in the creek, presses a button, and the guy downstream will collect the fish for further study.
Credit: Alan Cressler, USGS | <urn:uuid:57130314-75f7-441a-9e1b-81bdcc51d0d7> | 4.0625 | 206 | Truncated | Science & Tech. | 59.356759 |
While officials in Washington pick apart the sequence of events that led to the release of nearly five million barrels of oil in the Deepwater Horizon disaster, scientists are still trying to figure out where all of that oil went.
Some of it was captured in the spill response, some of it is still visible in marshes and on beaches, some of it remains in plumes underwater and some of it evaporated, though there are still heated disputes over how much went where.
Meanwhile, a group of scientists in Alabama has been studying another pathway for all of that oil: through the Gulf of Mexico’s hungry oil-eating microbes. Scientists expected bacteria to eat the oil, but speculation remained about what would happen after that. As it turns out, the oil that the bacteria consumed traveled up the food chain as the oil-eating bacteria were eaten in turn, the scientists suggest.
By tracking oil’s particular carbon signature, which differs from the nutrients in the usual bacterial diet, scientists from the Dauphin Island Sea Lab in Alabama were able to observe the growing presence of oil in the planktonic food web as oil reached the waters of the northern gulf. | <urn:uuid:5392112d-26b5-4027-a3ee-792e417a3fce> | 3.3125 | 240 | Truncated | Science & Tech. | 35.472059 |
Tiny Bugs Enlisted to Fight Invading Water Hyacinths
By Leon Marshall in Johannesburg
for National Geographic News
March 3, 2003
The purple-blue flowers protruding from mats of green leaves spreading over a tranquil pond make a beautiful sight. But in South Africa, as with many other countries invaded by the South American water hyacinth, the fight to stem its relentless spread has long been a losing one. Until five tiny bugs were enlisted to solve the problem.
South African scientists have imported the insects from the Amazon Basin, the original habitat of the hyacinth where they are its natural enemies. They are increasingly proving the best way of keeping it in check in Africa as well. The bugs feed on different parts of the plant and in different ways, and can be used in combination or singly as their effectiveness differs with climates and seasons.
The bugs include two beetles (Neochetina eichhorniae and Neochetina bruchi) which tunnel into the leaves and crown of the plant, a mite (Orthogalumna terebrantis) which hollows out feeding galleries between the leaf veins, a moth (Niphograpta albiguttalis) whose larvae feed on the leaf surface and burrow into the crown, and a mirid (Eccritotarsus catarinensis) which causes browning of the leaf through chlorophyll extraction.
The water hyacinth (Eichhornia crassipes) is a prolific grower. In favourable conditions it can double in population every 12 days, quickly clogging waterways and smothering indigenous species by cutting off their sunlight and oxygen. The main reason for its spread around the world is its beauty and its consequent popularity as an ornamental plant.
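A 12-day doubling time compounds quickly. As a rough illustration (the starting area and time span below are hypothetical, not from the article):

```python
def coverage_after(days: float, start_area: float, doubling_days: float = 12.0) -> float:
    """Exponential growth: the mat doubles every `doubling_days` days."""
    return start_area * 2.0 ** (days / doubling_days)

# Hypothetical illustration: one square metre of mat under favourable
# conditions grows to 2**7 = 128 square metres in 84 days (7 doublings).
after_12_weeks = coverage_after(84, 1.0)   # 128.0
```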
The havoc it is causing in parts of Africa is underscored in a new booklet, Invasive Alien Species in Africa's Wetlands, in which it is described as the world's worst water weed. The booklet has been released by IUCN (the World Conservation Union), together with the Ramsar Convention on Wetlands of International Importance and the Global Invasive Species Programme (GISP).
"The hyacinth has inflicted immense damage on wetlands and [Africa's] biodiversity, as well as on the economy," said Geoffrey Howard, regional program coordinator for IUCN in eastern Africa.
The plant forms floating mats which obstruct shipping and river flows, and which get into water supply systems, drainage canals, and hydro-power generators. Dense colonies at lake edges smother indigenous species, and they make it hard for rural communities to collect water, fish from shore, or get their fishing boats through.
The water loss the weed causes through transpiration is well above that of open water, causing reservoir and lake levels to drop faster than from evaporation.
"The harm it and other invasive species does to African wetlands alone runs to billions of dollars annually. This, while their impact is only just beginning to be appreciated," said Howard.
The hyacinth is free-floating, with long feathery roots and hollow stems that lend it buoyancy. It grows from seed and propagation. In shallow water it can flower profusely when it gets rooted in soil, leaving many seeds which can last up to 15 years and which germinate as conditions become favourable.
It grows particularly well in warm and humid conditions and in water with high nutrient levels, such as from human and animal waste and agricultural fertilisers. It also represents one of the drawbacks of dam-building, for it flourishes in still water, particularly in the shallows, which is where indigenous species mostly breed. In open rivers, collections tend to get flushed away by floods.
The hyacinth has become a problem in Africa particularly on Lake Kariba, Zimbabwe, the great lakes of East Africa, including Victoria, and on slow-moving rivers like the Nile and the Congo, said Helmut Zimmermann, divisional manager of weeds research at South Africa's Plant Protection Research Institute. In South Africa it is a problem in rivers and lakes on the more humid eastern side of the country.
South Africa is among the countries which have taken tough measures to stop people from spreading the hyacinth. Heavy penalties can be imposed for importing, selling, or transporting it. Boat owners can get into serious trouble for leaving it stuck to propellers or hulls when boats are in transit.
Neighbouring Botswana is even stricter. There, careless people could go straight to jail for similar offenses. The country is still free of the weed but is profoundly concerned about the danger of it infesting particularly the spectacular Okavango swamps in its northern reaches, the setting for many of National Geographic's wildlife documentaries.
Another reason for the water hyacinth getting so easily out of control is that animals do not really feed on it, said Zimmermann. Cattle will eat it, but its nutritional value is low and it causes diarrhoea.
The hyacinth can be controlled by mechanical means, using manpower and machines, but this is mostly unsuccessful as it grows faster than it can be cleared. Various herbicides are effective but have considerable risks for other wetlands plants.
Bring in the Bugs
The most effective and sustainable method has proved to be biological control, with the help in particular of the five insects brought in from the Amazon River Basin, said Linda Mabula, the Plant Protection Research Institute's hyacinth researcher.
But this kind of control takes time, and its biggest drawback is people's impatience, Mabula said. "They want to see quick results and when they don't, they revert to chemical methods which kill the insects and other life without necessarily eradicating the weed.
"Sometimes I think it is not the plant that is the problem as much as it is the people. You ask them not to spray, but they do so, or they spray at the wrong time when the biological agents are most vulnerable."
Great care was taken with the introduction of the Amazon insects to South Africa, Mabula said. They were exhaustively tested and kept in quarantine until it was absolutely certain they would not turn to indigenous plants and become a new problem. The testing is repeated when the insects are provided to other countries in Africa which might have different plants they could attack.
Biological control does not eradicate invasive weeds but merely reduces them to manageable proportions, Mabula said. As with most insects, they start slowing their own reproduction when they are at risk of destroying the plants completely.
The insects have been distributed to most parts of South Africa, and Mabula is confident that, however slowly, the battle is being won. "People are becoming more comfortable with relying on the bugs to do the job. We would prefer them not to use chemicals, but if they insist, we teach them how and at which time of year so that they do not kill off the insects.
"We take more time now to explain to them how it works, and we find that they even become enthusiastic, to the point of some conducting their own experiments with the insects we give them to see which work best for them," she said.
© 1996-2008 National Geographic Society. All rights reserved. | <urn:uuid:1974c066-5256-41f4-900f-15c93f8dcd75> | 3.65625 | 1,492 | Truncated | Science & Tech. | 41.165931 |
RetroBSD is a port of 2.11BSD Unix intended for embedded systems with fixed memory mapping. The current target is the Microchip PIC32 microcontroller with 128 kbytes of RAM and 512 kbytes of Flash. The PIC32 processor has a MIPS M4K architecture, executable data memory, and flexible RAM partitioning between user and kernel modes.
- Small resource requirements. RetroBSD needs only 128 kbytes of RAM to be up and running user applications.
- Memory protection. Kernel memory is fully protected from user application using hardware mechanisms.
- Open functionality. Usually, a user application is fixed in Flash memory, but in the case of RetroBSD any number of applications can be placed on an SD card and run as required.
- Real multitasking. The standard POSIX API is implemented (fork, exec, wait4, etc.).
- Development system on-board. It is possible to have a C compiler in the system, and to recompile the user application (or the whole operating system) when needed. | <urn:uuid:d11de41c-fc0f-492b-89ff-c621bd3d393d> | 2.96875 | 206 | Knowledge Article | Software Dev. | 40.052297 |
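The fork/exec/wait pattern named in the RetroBSD feature list above can be sketched with Python's POSIX wrappers on a Unix host. On RetroBSD itself a C program would call fork(), execv() and wait4() directly; this is only an illustration of the same process-management pattern:

```python
import os
import sys

def spawn_and_wait(argv):
    """Run a program via the POSIX fork/exec/wait pattern; return its exit code."""
    pid = os.fork()                 # duplicate the current process
    if pid == 0:                    # child: replace our image with a new program
        try:
            os.execvp(argv[0], argv)
        finally:
            os._exit(127)           # only reached if exec itself failed
    _, status = os.waitpid(pid, 0)  # parent: block until the child exits
    return os.WEXITSTATUS(status)

# Demonstration: run the Python interpreter as a child process.
code = spawn_and_wait([sys.executable, "-c", "raise SystemExit(3)"])   # code == 3
```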
Michael Becker, a doctoral student at McGill University, was a scientific diver on an expedition to Lake Untersee in Antarctica.
If you’re going where there’s no air to breathe, you better be organized.
Any kind of underwater diving involves process for that very reason. There’s the early-morning wake-up, the weather check, the gear check (masks, fins, regulators … check). And then there’s the dive site approach – whether you’re walking in from the shore or taking a boat to some far forgotten reef.
Diving Lake Untersee is just like that – except in Antarctica we get to our dive site by snowmobile. And Untersee ratchets up the workload because it’s remote, technical and cold.
To even get into the lake is a feat of accomplishment and a trick of clever engineering. Just the thought of trying to chip a dive hole through 10-foot thick lake ice could give you tendinitis long before you get your feet wet.
During the early days of Antarctic diving in the late-1970s, the expedition leader Dale Andersen and a few clever people working in the Dry Valley region of Antarctica came up with an ingenious way of getting through the ice. They modified an industrial-strength steam cleaner to circulate boiling hot liquid through a closed-circuit piece of copper tubing. All a would-be diver had to do was place the tubing inside a small hole drilled in the ice and wait for magic to happen as the hole slowly formed over two days.
But even when the hole is melted there’s still a lot to do before getting into the water.
We start with a hearty breakfast of dehydrated granola — a meal that I hope to never see again. After breakfast, the diver gets ready by sorting the gear and putting on the dry suit while the first tender drives out by snowmobile to chip out any refrozen ice from the dive hole.
The second snowmobile carries the other dive tender and the diver (already in their dry suit) to the hole. The diver sits on the ice platform and is dressed with weights, tank, gloves, and is tied in to the all-important safety line.
This line is the life tether. It is fed out and taken in as needed. That way the surface assistants have a sense of how far away the diver is, and the diver knows where to return. In the early days, sequences of line pulls would be used to communicate simple commands like an early Morse code for dive messages. Nowadays, the dive line connects a surface communication box to the diver’s facemask. Diver and tender are easily able to grumble back and forth to each other with all the benefits of modern technology.
Once the diver’s mask is on they slide in, do the dive, come back to the hole and are yanked out. If the diver still has a pulse there is applause all around and we go back to celebrate with a dinner of dehydrated food.
Safety is paramount here and there is no margin of error. The nearest recompression chamber for a dive injury is 2,000 miles away in Cape Town. There are no helicopters for rescue and any serious injury or accident could mean death.
We follow all this protocol and process in pursuit of one thing – studying microbial communities locked away from human history.
The Science Down There
Dale and I have done a number of dives to collect data and samples on the conical stromatolites found at the bottom of Lake Untersee. Dale has surfaced several times with sediment cores of the lake’s bottom. These cores tell us about the history of the lake and its resident organisms. By looking at cross-sections of the cores, we can see that the microbial communities grow over the years in layers known as laminations. These laminae show us a chronosequence of events, alternating between mineral deposition and organic layer growth. These mineral deposits must come from somewhere as the lake surface is covered in ice. It’s thought that the occasional influx of silt from nearby glaciers provides the sediment that the cyanobacteria then recolonize.
But there’s more than just grabbing a sample and returning to the surface – the lake environment needs to be described in precise detail.
These cyanobacteria are photosynthetic and dependent on light to create their energy. One of my dives was spent swimming transects back and forth directly underneath the 10-foot ice ceiling holding a light meter. This gives us an idea of the amount of energy that is available for photosynthesis beneath the lake. The ice cover isn’t completely uniform; there are dark areas intermingled with sections of bright windows. Also, since light drops off with depth, not all life within the lake is receiving the same amount of energy.
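The drop-off of light with depth mentioned above is commonly modeled as exponential (Beer–Lambert) attenuation. A sketch with hypothetical numbers; the under-ice irradiance and attenuation coefficient below are illustrative assumptions, not measurements from the expedition:

```python
import math

def irradiance(depth_m: float, surface_irradiance: float, k_per_m: float) -> float:
    """Beer-Lambert attenuation: I(z) = I0 * exp(-k * z)."""
    return surface_irradiance * math.exp(-k_per_m * depth_m)

# Hypothetical numbers: suppose 50 W/m^2 penetrates the ice cover and the
# water attenuates light at k = 0.05 per metre.
at_20m = irradiance(20.0, 50.0, 0.05)   # about 18.4 W/m^2
```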
It’s not just these cyanobacterial mats that thrive in Lake Untersee.
There is a diverse world of bacteria and viruses that inhabit their own unique sections of the water column all the way from the lake surface to over 500 feet below. These areas are far beyond our range capacity as scientific divers, and so we must rely on a different technique to sample these distant creatures.
Our two Russian scientists, Vladimir Akimov and Valery Galchenko from the Winogradsky Institute of Microbiology, are microbiologists that specialize in microbial life in extreme environments. Their work has taken them from remote regions of Yakutia, Russia, studying heat-loving extremophiles, to the even more remote Lake Untersee to study the isolated bacteria inhabiting this lake.
Different communities of bacteria thrive according to the changing abiotic conditions, as you get deeper in the lake’s water column. These environments are mapped out by lowering sensors to measure conductivity, temperature, and depth, or CTD, from the lake’s surface down to around 330 feet – our maximum sample depth.
Vladimir and Valery then lower their sampler to different points within the water column and capture about a gallon of water. These samples are brought back to camp, and the two spend hour after waking hour filtering the water to concentrate samples of both bacteria and viruses. There’s no human health concern with these viruses – they are specific to the bacteria in the lake, and must exist in some sort of equilibrium with the lake life.
One of the areas that Vladimir and Valery are particularly interested is a section of the lake at 256 feet. At this depth, the lake chemistry changes quite a bit – it becomes anoxic, meaning without oxygen. The organisms that thrive in this section have no need for oxygen in their metabolic processes. They use sulfur instead.
From a practical perspective, that means the samples reek. Rich in hydrogen sulfide, they smell like sour, rotten eggs. But by studying this transition from the clear, oxygen-rich water above to the dark, oxygen-poor water below we can get a sense of the two different worlds experienced by bacteria within the same lake.
What we bring up from the depths of Lake Untersee is only the beginning of a long scientific process. All these samples must be carried back to the civilized world, processed and analyzed over the next several months. Only then we will be able to more fully understand the ecosystem of Lake Untersee, and only then will we fully understand the significance of what we’re seeing.
And that’s what makes all this time, effort, and risk worth it. Diving Untersee has been an incredible experience, but without the questions driving us forward, it would be a lot to gamble for a good view. | <urn:uuid:6953bcfc-0963-49a2-be30-a841f3a2efc2> | 2.796875 | 1,583 | Personal Blog | Science & Tech. | 45.302084 |
In cell biology, potassium channels are the most common type of ion channel. They form potassium-selective pores that span cell membranes. Potassium channels are found in most cells, and control the electrical excitability of the cell membrane. They shape action potentials and set the resting membrane potential. They regulate cellular processes such as the secretion of hormones and their malfunction can lead to diseases.
Potassium channels open or close in response to the transmembrane voltage, the presence of calcium ions or other signalling molecules. When open, they allow potassium ions to cross the membrane at a rate which is nearly as fast as their diffusion through bulk water. There are over 80 mammalian genes that encode potassium channel subunits. Potassium channels have a tetrameric arrangement. Four subunits are arranged around a central pore. All potassium channel subunits have a distinctive pore-loop structure that lines the top of the pore and is responsible for potassium selectivity.
Potassium channels found in bacteria are amongst the most studied of ion channels, in terms of their molecular structure. Using X-ray crystallography, profound insights have been gained into how potassium ions pass through these channels and why (smaller) sodium ions do not. The 2003 Nobel Prize in Chemistry was awarded to Rod MacKinnon for his pioneering work on this subject. | <urn:uuid:9f581fc7-9ed9-4ea2-9c2f-479f418af58d> | 3.859375 | 267 | Knowledge Article | Science & Tech. | 36.576789 |
On September 1, 1859, British astronomer Richard Carrington was sketching sunspots through a telescope when he saw a bright, oval-shaped light expanding outward from the Sun. Eighteen hours later, brilliant auroras colored the night sky as far south as the Caribbean, and Earth’s fledgling telegraph systems went berserk: wires melted; sparks from the wires shocked operators and set the telegraph paper on fire; and telegraph networks across Europe and North America failed.
The culprit was a solar storm: essentially a large burp of plasma erupting from the sun and streaming outward through the solar system. While storms as strong as the “Carrington Event” are very rare, they are more likely to occur during the peaks of 11-year solar cycles—with the next peak scheduled to hit in 2013.
While in 1859 the damage was relatively minor, today humans are totally dependent on electricity to power everything from communication systems to the pumps that bring drinkable water into city pipes. If a massive storm like the Carrington Event strikes Earth again, it could cripple the power grid for years, destroy orbiting satellites, and cause an estimated $2 trillion in damages, according to a recent report from the U.S. National Academy of Sciences.
The question is not whether such a storm will recur, but when, and then how to protect against it. Arts & Sciences Professor of Astronomy W. Jeffrey Hughes, who is director of BU’s Center for Integrated Space Weather Modeling, has spent the past decade leading the development of an early warning system to detect incoming solar storms. Backed by a $40 million National Science Foundation grant, Hughes and his team created a prediction model that will enable us to prepare for the worst.
A warning system using this prediction model became fully operational this past fall at the National Oceanic and Atmospheric Administration (NOAA)’s Space Weather Prediction Center in Boulder, Colorado. The warning system will give two or three days’ notice of impending solar storms—basically the time it takes for a storm to travel from the Sun to Earth. It will estimate the speed of the storm—faster storms cause more damage—but cannot tell us one crucial piece of information: the direction of the storm’s magnetic field.
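The two-to-three-day warning window follows from simple distance-over-speed arithmetic. The speeds below are illustrative assumptions: a typical storm at a few hundred km/s takes roughly three days to cover the Sun–Earth distance, while an eruption fast enough to arrive in about 18 hours, as in the 1859 account, must travel on the order of 2000 km/s:

```python
AU_KM = 1.496e8   # mean Sun-Earth distance in kilometres

def travel_days(speed_km_s: float) -> float:
    """Days for a storm moving at constant speed to cover 1 AU."""
    return AU_KM / speed_km_s / 86400.0

typical = travel_days(500.0)      # ~3.5 days: the 2-3 day warning window
carrington = travel_days(2300.0)  # ~0.75 days (~18 hours), as in 1859
```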
The direction of a solar storm’s magnetic field is vitally important. If a storm hits Earth’s magnetosphere with the storm’s magnetic field pointing north (paralleling Earth’s magnetic field), the storm will bounce off of the magnetosphere, doing little harm. This is equivalent to what happens when two magnets repel each other. However, if the storm happens to hit Earth with its field pointing south, we have a big problem. “If the magnetic field in the solar wind points in the opposite direction from Earth’s magnetic field, then they can effectively annihilate each other, allowing the solar wind to couple [transfer] energy into the magnetosphere,” says Hughes. “This is the single biggest factor in determining whether we have a super storm.”
In this scenario, the interaction between the solar wind and Earth’s magnetosphere would generate electrical turmoil in the upper atmosphere, causing magnetic variations in Earth’s internal magnetic field. These magnetic variations would, in turn, generate electrical currents on Earth’s surface.
Strong surface currents would enter the wires of our power grid, overloading and destroying the massive transformers that transmit electricity across long distances. Replacing these transformers would likely take years, since only a few are manufactured each year. In the meantime, only those living close to power plants would likely still have power, while vast populations would experience blackouts. Many military and commercial communications satellites would be destroyed. It would be one of the worst natural disasters in modern history.
The new NOAA warning system would allow operators to power-down satellites, protecting them from a major solar storm. It also would give first responders time to prepare for disaster. To really protect the power grid we would need to build what essentially would be massive surge protectors at key points. According to solar storm expert John Kappenman in a report for the Oak Ridge National Lab, enormous surge protectors could be installed near transformers at a cost of around $1 billion. However, the U.S. government has not allocated the funding for such a project.
While the likelihood of a super storm like the Carrington Event occurring during each solar cycle is very low, it could happen. “Each solar maximum takes us by surprise in some way, usually because we’ve developed some new technology in the 11 years since that last one without realizing how it’s vulnerable,” says Hughes. “I’m sure we’re in for another surprise in 2013. It will certainly be important for the electric power grid to be fully prepared.” | <urn:uuid:e0c07622-1256-4a27-9c4b-673e95d639ca> | 4.09375 | 1,009 | Knowledge Article | Science & Tech. | 41.291832 |
13 September 2012
Speaker: Maria Zuber (MIT)
Title: Gravity Recovery and Interior Laboratory (GRAIL) Mission: Mission Status and Initial Science Results
Abstract: The Gravity Recovery and Interior Laboratory (GRAIL) Mission is a component of the NASA Discovery Program. GRAIL is a twin-spacecraft lunar gravity mission that has two primary objectives: to determine the structure of the lunar interior, from crust to core, and to advance understanding of the thermal evolution of the Moon. GRAIL launched successfully from the Cape Canaveral Air Force Station on September 10, 2011, executed a low-energy trajectory to the Moon, and inserted the twin spacecraft into lunar orbit on December 31, 2011 and January 1, 2012. A series of maneuvers brought both spacecraft into low-altitude (55-km), near-circular, polar lunar orbits, from which they perform high-precision satellite-to-satellite ranging using a Ka-band payload along with an S-band link for time synchronization. Precise measurements of distance changes between the spacecraft are used to map the lunar gravity field. GRAIL completed its primary mapping mission on May 29, 2012, collecting and transmitting to Earth more than 99.99% of the possible data. Spacecraft and instrument performance was nominal and has led to the production of a high-resolution and high-accuracy global gravity field, improved over all previous models by two orders of magnitude on the nearside and nearly three orders of magnitude over the farside. The field is being used to understand the thickness, density and porosity of the lunar crust, the mechanics of formation and compensation states of lunar impact basins, and the structure of the mantle and core. An extended mission was initiated on August 30, 2012 and will consist of global gravity field mapping from an average altitude of 22 km.
A mapping $A$ of a topological vector space $X$ into a topological vector space $Y$ such that $A(M)$ is a bounded subset in $Y$ for any bounded subset $M$ of $X$. Every operator $A : X \to Y$, continuous on $X$, is a bounded operator. If $A$ is a linear operator, then for $A$ to be bounded it is sufficient that there exists a neighbourhood $U$ of zero such that $A(U)$ is bounded in $Y$. Suppose that $X$ and $Y$ are normed linear spaces and that the linear operator $A : X \to Y$ is bounded. Then

$$\|A\| = \sup_{\|x\| \le 1} \|Ax\| < \infty.$$

This number is called the norm of the operator $A$ and is denoted by $\|A\|$. Then

$$\|Ax\| \le \|A\| \, \|x\|,$$

and $\|A\|$ is the smallest constant $C$ such that

$$\|Ax\| \le C \|x\|$$

for any $x \in X$. Conversely, if this inequality is satisfied, then $A$ is bounded. For linear operators mapping a normed space $X$ into a normed space $Y$, the concepts of boundedness and continuity are equivalent. This is not the case for arbitrary topological vector spaces $X$ and $Y$, but if $X$ is bornological and $Y$ is a locally convex space, then the boundedness of a linear operator $A : X \to Y$ implies its continuity. If $H$ is a Hilbert space and $A$ is a bounded symmetric operator, then the quadratic form $(Ax, x)$ is bounded on the unit ball. The numbers

$$\beta = \sup_{\|x\| = 1} (Ax, x), \qquad \alpha = \inf_{\|x\| = 1} (Ax, x)$$

are called the upper and lower bounds of the operator $A$. The points $\alpha$ and $\beta$ belong to the spectrum of $A$, and the whole spectrum lies in the interval $[\alpha, \beta]$. Examples of bounded operators are: the projection operator (projector) onto a complemented subspace of a Banach space, and an isometric operator acting on a Hilbert space.

If the spaces $X$ and $Y$ have the structure of a partially ordered set, for example are vector lattices (cf. Vector lattice), then a concept of order-boundedness of an operator can be introduced, besides the topological boundedness considered above. An operator $A$ is called order-bounded if $A(M)$ is an order-bounded set in $Y$ for any order-bounded set $M$ in $X$. Example: an isotone operator, i.e. an operator such that $x \le y$ implies $Ax \le Ay$.
Bounded operator. V.I. Sobolev (originator), Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Bounded_operator&oldid=17165 | <urn:uuid:20b97adf-a480-4132-a14f-f1988c39968f> | 3.0625 | 586 | Knowledge Article | Science & Tech. | 45.951098 |
Study: Loss Of Genetic Diversity Threatens Species Diversity
Davis, California - Human activities are eliminating biological diversity at an unprecedented rate. This is critical when you consider that variation in plants and animals gives us a rich and robust assemblage of foods, medicines, industrial materials and recreation activities. A new study shows this is bad news for all species.
Lead researcher Dr. Richard Lankau said, "This is one of the first studies to show that genetic diversity and species diversity depend on each other."
"Diversity within a species is necessary to maintain diversity among species, and at the same time, diversity among species is necessary to maintain diversity within a species. And if any one type is removed from the system, the cycle can break down, and the community becomes dominated by a single species."
A new study offers clues to how these losses relate to one another -- information that is essential as scientists and land managers strive to protect the remaining natural variation.
Sharon Strauss, a professor of evolution and ecology at UC Davis, and former doctoral student Richard Lankau (now a post-doctoral researcher at the University of Missouri-St. Louis and the University of Illinois), studied competition among genetically varied plants of one species (black mustard, Brassica nigra), and among black mustard and plants of other species.
The research was funded by the National Science Foundation. The paper, titled "Mutual feedbacks maintain both genetic and species diversity in a plant community," was published in the Sept. 14 issue of the journal Science. | <urn:uuid:df9f2419-1cbc-4c73-9ae1-0b85b4d5cd23> | 2.828125 | 311 | Truncated | Science & Tech. | 26.796375 |
Posted at La República Catalana
Temperature at Darwin Airport in Australia. The “adjustments” in red reverse the real blue temperature fall of -0.7 °C to a rise of 1.2 °C. Below: Bolivia, a very high mountainous country, has no data after 1990 (boxed in blue). But how do you get a hot Bolivia when you haven’t measured the temperature for 20 years? Nasa simply filled in the missing numbers – originally cool, as Bolivia contains proportionately more land above 3,000 m than any other country in the world –with hot ones from the Amazon jungle. The single station north of 65° lat (boxed in green) explains all temperature in northern Canada.
American climate centers have also been manipulating global temperature data to fraudulently advance the alarmist warming agenda. NOAA deleted cherry-picked cooler stations, and NASA's GISS altered data, replacing the dropped NOAA readings with warmer stations. Although satellites have been available since 1978, global analyses still use land-based data, yet NOAA cut its global network from roughly 6,000 thermometers to 1,500, a 75% drop in primary data. Stations in cooler, rural areas of higher latitude and elevation were scrapped in favor of more urban, lower-latitude and lower-elevation ones. Consequently, post-1990 readings have a warm bias both from this selective geographic location and from the urban heat island effect (search "The Warm Bias of Forecasters").
Canada’s reporting stations dropped from 496 to 44, with stations at lower elevations outnumbering those at higher elevations three to one, and a single thermometer for everything north of 65° latitude, in Eureka, known as the Garden Spot of the Arctic for its unusually moderate summers. In California, only four stations remain: one in San Francisco and three in LA near the beach, impossible to compare with the record of thermometers in the snowy mountains. Historic true averages are compared to false averages, always yielding a warming trend. US stations dropped from 1,850 to 136, a 90% fall. This bias has created a 0.6 °C warming. NOAA “adjusted” the data and “homogenized” stations, but given the plummeting number it is no surprise this always results in warmer readings. The homogeneity bias alone accounts for almost one half of the warming; add the selection bias and the actual warming completely disappears. The manipulation does not stop there. NASA’s GISS then merged readings and put the data through a few “adjustments” of its own, changing the long-term trend to create artificial warming. The planet was then flattened onto an 8,000-box grid of 1,200-km cells. Hawaii’s surviving stations, located in major airports, were used to infill the surrounding empty grid boxes, so in effect airport tarmacs stand in for temperature over 1,200 km of sea. Up to 1.5 °C of warming has been added by NOAA’s and NASA’s adjustments, doubling the supposed 20th-century warming.
(“Climategate: CRU Was But the Tip of the Iceberg, by Marc Sheppard, American Thinker, 22 January 2010) | <urn:uuid:a12d867b-87af-44a5-93d0-70f5ef544dc7> | 3.171875 | 654 | Knowledge Article | Science & Tech. | 52.356037 |
TNRTB Archive - Retained for reference information
American chemists have found more evidence for the supernatural design of the quantum tunneling phenomenon. In quantum tunneling, a particle crosses or tunnels through a potential energy barrier where classical physics would conclude that the particle did not have enough energy to do so. They discovered that the value of the level of uncertainty in the Heisenberg uncertainty principle of quantum mechanics is exquisitely fine-tuned so as to permit an extremely broad range of quantum tunneling rates (a factor of 50 or more) for many life-essential biological reactions. Without such exquisite fine-tuning, life would be impossible.
- Oliver S. Wenger et al., “Electron Tunneling Through Organic Molecules in Frozen Glasses,” Science 307 (2005): 99-102.
- Related Resource: Hugh Ross, “Anthropic Principle: A Precise Plan for Humanity”
- Product Spotlight: The Creator and the Cosmos, 3rd ed., by Hugh Ross
The first day of summer is June 21, just about two weeks away. The Summer Triangle now appears in the east at dusk. Made of three bright stars from three different constellations, the Summer Triangle can be picked out when the brightest stars are appearing after sunset, so it can be an easy target for young astronomers just starting to learn their stars in the summer months.
The Summer Triangle is not a constellation itself, but an asterism. An asterism is a pattern of stars that may be part of an official constellation or made up of stars from several constellations. A constellation (by today’s astronomical definition) is an internationally defined area of the night sky. The stars which make up the Summer Triangle are:
- Vega, of Lyra the Harp. Vega is the topmost and brightest star of the triangle.
- Deneb, of Cygnus the Swan. Deneb is lower left of Vega, by a visual distance of two or three fist-widths at arm’s length.
- Altair, of Aquila the Eagle. Altair is farther to the right, and the last of the three stars to rise.
Under a clear, dark sky you can see the Milky Way stretching through and beyond this large starry triangle. Aim some binoculars at this region to marvel at the sheer quantity of stars found here.
The Summer Triangle will be visible now all summer. As we approach autumn, it will reach a position high in the south to overhead at dusk and early evening, so there is plenty of time for your little astronomers to become acquainted with this asterism. | <urn:uuid:23bbae20-f3bd-4f87-830e-afe5b87e86d6> | 3.75 | 334 | Personal Blog | Science & Tech. | 62.424535 |
Python’s with statement, available since Python 2.5, does not seem to be widely used despite being very useful.
Here we describe how to use the with statement to create a simple timer with the syntax:
>>> with TicToc():
...     some_slow_operations
...
Elapsed time is 2.000073 seconds.
This mimics Matlab's built-in tic and toc commands:

> tic; some_slow_operations; toc
Elapsed time is 10.020349 seconds.
Before we describe the TicToc class, let's review the with statement. The documentation for reading files suggests that it is a good way to ensure files are properly closed, even if the code reading them raises an exception.
>>> with open('/tmp/workfile', 'r') as f:
...     read_data = f.read()
>>> f.closed
True
The with statement uses two methods: __enter__, which is run before the block executes, and __exit__, which is run after the block executes, even if an exception is raised. With this in mind, the previous code is mostly equivalent to:
>>> f = open('/tmp/workfile', 'r')
>>> f.__enter__()
<open file '/tmp/workfile', mode 'r' at 0x10041d810>
>>> read_data = f.read()
>>> f.__exit__(None, None, None)
>>> f.closed
True
Here the method file.__exit__ has automatically closed the file.
The timer will be implemented as a class defining two methods:

- __enter__, which records the start time.
- __exit__, which prints the difference between the current time and the start time.
import time

class TicToc(object):
    """
    A simple code timer.

    Example
    -------
    >>> with TicToc():
    ...     time.sleep(2)
    Elapsed time is 2.000073 seconds.
    """
    def __init__(self, do_print=True):
        self.do_print = do_print

    def __enter__(self):
        self.start_time = time.time()
        return self

    def __exit__(self, type, value, traceback):
        self.elapsed = time.time() - self.start_time
        if self.do_print:
            print "Elapsed time is %f seconds." % self.elapsed
>>> from simpletimer import TicToc
>>> import time
>>> with TicToc():
...     time.sleep(2)
...
Elapsed time is 2.000115 seconds.
>>> with TicToc(False) as t:
...     time.sleep(1)
...
>>> t.elapsed
1.0001189708709717
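For completeness, the same behaviour can be obtained with a generator and the contextlib.contextmanager decorator, also available since Python 2.5. This is an alternative sketch, not part of the original simpletimer module:

```python
import time
from contextlib import contextmanager

@contextmanager
def tictoc():
    # Code before the yield plays the role of __enter__;
    # code after it (in the finally clause) plays the role of __exit__,
    # so the elapsed time is printed even if the block raises.
    start = time.time()
    try:
        yield
    finally:
        print("Elapsed time is %f seconds." % (time.time() - start))

with tictoc():
    time.sleep(0.1)
```

The try/finally is what gives the generator the same exception safety as the class-based __exit__.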
With the help of an Item Renderer, list-based controls can customize the appearance of the data provider's data.
In addition, the custom appearance can be controlled by conditional logic processed by an Item Renderer. We'll look at this a bit later.
A single instance of your Item Renderer class (ListIR) is created for each visible item of the list-based control.
As the user scrolls through the items of a list-based control, Item Renderer instances are recycled rather than creating new instances. This is a concept known as virtualization.
Example: When Data Item #1 is removed from the list-based control's display due to a scrolling action, the Item Renderer instance previously used to render Data Item #1 is recycled and used to display the data for Data Item #6.
Item Renderers must directly or indirectly (through class inheritance) implement the IDataRenderer interface which enforces the implementation of 2 very important methods: get data() & set data(). With an implicit getter and setter in place for the data property, the Item Renderer instance can receive a data object passed from the host component (the list-based control) which represents the data for the item it is responsible to render (i.e. Data Item #6).
Example: If instance #1 of the Item Renderer is rendering the data for Data Item #6, the data for Data Item #6 is passed to instance #1 of the Item Renderer via the Item Renderer's set data setter method. When an instance of an Item Renderer is recycled due to scrolling, the set data method is called and the applicable data object is passed. This is the point at which conditional logic can be applied to control the appearance of an Item Renderer.
Note: It is important to remember that Item Renderers are recycled and as such, their property values may reflect a condition met by the previous "user" of the Item Renderer instance (i.e. the Data Item). Therefore, it is important to reset properties to the default value when conditional logic fails. In the previous example, the reset is accomplished by setting the label's font color back to black (0x000000) when data.age >= 21.

So far, we have utilized a property of the data object (data.age) and a hard-coded value (21) to dictate the item's appearance. However, certain scenarios may require access to property values outside of an Item Renderer's scope to determine the appearance.
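The recycle-and-reset pattern described above can be sketched in an overridden data setter. This is a hypothetical ActionScript 3 fragment (the renderer class and its label styling are assumed, not taken from the article); only the age/21 condition mirrors the example:

```actionscript
// Hypothetical Item Renderer logic; class context and names are illustrative.
override public function set data(value:Object):void
{
    super.data = value;   // let the base class store the item's data
    if (value != null && value.age < 21)
    {
        // Condition met: highlight this item.
        setStyle("color", 0xFF0000);
    }
    else
    {
        // Condition failed, or this instance was recycled from another
        // item: reset the style to its default so stale state never shows.
        setStyle("color", 0x000000);
    }
}
```

Because the setter runs every time a recycled instance receives a new item, putting both the condition and the reset here keeps the renderer's appearance consistent with its current data.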
Example: The value of an HSlider in conjunction with the age property of the data object (data.age) should dictate the font color of each item's label. This scenario requires the following:
1. Create a new public property for the list-based control exposed via a getter and setter
By extending an existing class and adding a new public property, our list-based control will have a place to store the value of an external property (i.e. the value of the HSlider's value property).
2. Bind the new list-based control's public property (MyList.minAge) to the value of the desired external property utilizing Flex's data-binding utility
Data-binding will provide real-time synchronization of the new public property (minAge) with the external property's value (i.e. the HSlider's value property)
3. Provide the Item Renderer with access to a ListData object
To access the value of our new public property from within an Item Renderer instance, our Item Renderer must implement the IDropInListItemRenderer interface. This ensures a reference to the Item Renderer's owner (the list-based control) is available to the Item Renderer, and consequently, so is our new public property.
4. Make a call to the invalidateList method when the source of our new, bounded public property is changed (i.e. the HSlider value property changes)
By calling MyList.invalidateList() each time the HSlider value changes, we ensure that the conditional logic of the Item Renderer is re-evaluated. invalidateList forces each Item Renderer instance of a list-based control to call its data setter method.
See it in action (view source is enabled): | <urn:uuid:056f689c-69d3-43e4-8395-2e2dbbf2a568> | 2.9375 | 895 | Documentation | Software Dev. | 43.321583 |
January 25, 2012
Our photo gallery of the most stunning images from the recent northern lights show.
Precious few people around the world have ever had the chance to witness the remarkable phenomenon known as the aurora borealis, or northern lights. The collision of magnetically charged solar particles with the earth’s magnetosphere produces dancing waves of fluorescent green and deep blue that appear to wave across the sky, but under normal conditions, the lights can be seen only in far northern latitudes. Even then, the aurora borealis is unpredictable in occurrence and can be difficult to spot.
Recent storms on the surface of the sun, though, have produced levels of solar particles headed towards the earth not seen for a decade—and dazzling northern lights. Skygazers report that, over the past week, remarkably intense displays have appeared in skies in Scandinavia and Northern England. Scientists predict that recent surges are just a small taste of what’s to come over the next year or so, as the cycle of solar activity is expected to peak in 2013 and 2014.
Update! (19 July 06) Added Multiply. Fixed a problem with using __builtin_clz().
Update! (17 July 06) The code has been considerably refactored. Decided to go with single function per expression. The expressions have been reduced as a first optimization pass.
Project
The goal of this project is to serve as an example of developing some relatively complex operations completely without branches: a software implementation of half-precision floating point numbers that does not use floating point hardware. This example should echo the IEEE 754 standard for floating point numbers as closely as reasonable, including support for +/- INF, QNaN, SNaN, and denormalized numbers. However, exceptions will not be implemented.
Half-precision floats are used in cases where neither the range nor the precision of 32 bit floating point numbers is needed, but where some dynamic precision is required. Two common uses are for image transformation, where the range of each component (e.g. red, green, blue, alpha) is typically limited to or near [0.0,1.0], or vertex data (e.g. position, texture coordinates, color values, etc.).
The main advantage of half-precision floats is their size. Beyond the considerable potential for memory savings, processing a large number of half-precision values is more cache-friendly than using 32 bit values.
The current released version (including tests) can be downloaded here: half.c half.h | <urn:uuid:02123ba3-e7ba-42a0-83e2-40a55b381422> | 2.78125 | 306 | Documentation | Software Dev. | 47.521267 |
Q&A: Normal Stars, White Dwarf Stars, and Star Clusters
Is Antares showing signs of pre-supernova activity like Eta Carinae or Betelgeuse?
We believe it will be many years before Antares shows any signs of pre-supernova activity. More information on this star may be found on the Chandra website, which explains the process by which the star evolves and explodes. Another site you may appreciate is at the University of Michigan, again on the evolution of stars.
If you consider that even the most massive stars, with the shortest lifetimes before exploding, still live for about 2 million years, then you can see that it would be an amazingly lucky coincidence if we happened to look with a telescope at a star at the exact moment it began to show signs of pre-explosion activity. Although, as you'll read on the Chandra site mentioned above, if we had had a telescope in the year 1054, we would have observed an event just like this. We may get lucky again in our lifetime!
RE: Triassic dinosaur evolution
Eric Boehm wrote:
> Generically, I would think a quadrupedal predator would be able to grow
> larger than a Biped, and given the limbs of a T-rex, its obvious freeing the
> forelimbs was not a major factor in the effectiveness of a biped predator.
Whoah! Hang on there. The forelimbs of T. rex were not a major factor in the
effectiveness of *that particular* bipedal predator. But there's plenty of
non-avian theropods (of all sizes) in which the forelimbs were quite large, and
presumably put to good use in predation (or bad use, if you happen to be the
prey). Not that I'm saying that T. rex didn't use its forelimbs in predation;
it's just that it didn't use them to catch prey. Some big allosauroids and
"megalosaur"-grade theropods had very powerful forelimbs; even though some of
the latter (like _Torvosaurus_) had forelimbs that were quite short, they were
also very robust.
The use of the forelimbs in prey capture could be seen as extremely important
to dinosaurs, as a major factor in the shift to bipedality by the first
dinosaurs (or their immediate ancestors). This prey-catching function was
retained by most theropod lineages - although not all, because some did
drastically reduce the size of the forelimbs (tyrannosaurids, _Giganotosaurus_,
carnotaurines, alvarezsaurs, compsognathids), and/or used them for something
else (alvarezsaurids, ornithomimosaurs, therizinosauroids, birds). The
tyrannosaurid strategy for prey capture allowed the forelimbs to shrink, and
take on an auxilliary role to the jaws. But that's just the way that
tyrannosaurids did things. Overall, the forelimbs of theropods played an
important role in predation.
Plastids are major organelles found in the cells of plants and algae. They are the site of manufacture and storage of important chemical compounds used by the cell. Plastids often contain pigments used in photosynthesis and the types of pigments present can change or determine the cell's color. They possess a double-stranded DNA molecule, which is circular, like that of prokaryotes.
Plastids in plants
Plastids carry out photosynthesis, the storage of products like starch and the synthesis of many classes of molecules such as fatty acids and terpenes which are used for energy production and as raw material for the synthesis of other molecules. For example, the components of the plant cuticle and epicuticular waxes are synthesized in the epidermal cells from palmitic acid synthesized in the chloroplasts of mesophyll cells. All plastids are derived from proplastids (formerly "eoplasts", eo-: dawn, early), which are present in the meristematic regions of the plant. Proplastids and young chloroplasts commonly divide by binary fission, but more mature chloroplasts also have this capacity.
In plants, plastids may differentiate into several forms, depending upon which function they play in the cell. Undifferentiated plastids (proplastids) may develop into any of the following variants:
- Chloroplasts (green plastids): for photosynthesis; see also etioplasts, the predecessors of chloroplasts
- Chromoplasts (coloured plastids): for pigment synthesis and storage
- Gerontoplasts: control the dismantling of the photosynthetic apparatus during senescence
- Leucoplasts (colourless plastids): for monoterpene synthesis; leucoplasts sometimes differentiate into more specialized plastids:
  - Amyloplasts: for starch storage
  - Elaioplasts: for storing fats
  - Proteinoplasts: for storing and modifying proteins
Depending on their morphology and function, plastids have the ability to differentiate, or redifferentiate, between these and other forms.
Each plastid creates multiple copies of a circular 75–250 kilobase plastome. The number of genome copies per plastid is variable, ranging from more than 1000 in rapidly dividing cells, which, in general, contain few plastids, to 100 or fewer in mature cells, where plastid divisions have given rise to a large number of plastids. The plastome contains about 100 genes encoding ribosomal and transfer ribonucleic acids (rRNAs and tRNAs) as well as proteins involved in photosynthesis and plastid gene transcription and translation. However, these proteins only represent a small fraction of the total protein set-up necessary to build and maintain the structure and function of a particular type of plastid. Plant Nuclear genes encode the vast majority of plastid proteins, and the expression of plastid genes and nuclear genes is tightly co-regulated to coordinate proper development of plastids in relation to cell differentiation.
Plastid DNA exists as large protein-DNA complexes associated with the inner envelope membrane and called 'plastid nucleoids'. Each nucleoid particle may contain more than 10 copies of the plastid DNA. The proplastid contains a single nucleoid located in the centre of the plastid. The developing plastid has many nucleoids, localized at the periphery of the plastid, bound to the inner envelope membrane. During the development of proplastids to chloroplasts, and when plastids convert from one type to another, nucleoids change in morphology, size and location within the organelle. The remodelling of nucleoids is believed to occur by modifications to the composition and abundance of nucleoid proteins.
Many plastids, particularly those responsible for photosynthesis, possess numerous internal membrane layers.
In plant cells, long thin protuberances called stromules sometimes form and extend from the main plastid body into the cytosol and interconnect several plastids. Proteins, and presumably smaller molecules, can move within stromules. Most cultured cells that are relatively large compared to other plant cells have very long and abundant stromules that extend to the cell periphery.
Plastids in algae
In algae, the term leucoplast is used for all unpigmented plastids and their function differs from the leucoplasts of plants. Etioplasts, amyloplasts and chromoplasts are plant-specific and do not occur in algae. Plastids in algae and hornworts may also differ from plant plastids in that they contain pyrenoids.
Glaucocystophytic algae contain muroplasts, which are similar to chloroplasts except that they have a cell wall similar to that of prokaryotes. Rhodophytic algae contain rhodoplasts, which are red chloroplasts that allow the algae to photosynthesise to a depth of up to 268 m.
Inheritance of plastids
Most plants inherit the plastids from only one parent. In general, angiosperms inherit plastids from the female gamete, whereas many gymnosperms inherit plastids from the male pollen. Algae also inherit plastids from only one parent. The plastid DNA of the other parent is, thus, completely lost.
In normal intraspecific crossings (resulting in normal hybrids of one species), the inheritance of plastid DNA appears to be quite strictly 100% uniparental. In interspecific hybridisations, however, the inheritance of plastids appears to be more erratic. Although plastids inherit mainly maternally in interspecific hybridisations, there are many reports of hybrids of flowering plants that contain plastids of the father. Approximately 20% of angiosperms, including alfalfa (Medicago sativa), normally show biparental inheritance of plastids.
Origin of plastids
Plastids are thought to have originated from endosymbiotic cyanobacteria. The symbiosis evolved around 1500 million years ago and enabled eukaryotes to carry out oxygenic photosynthesis. Three evolutionary lineages have since emerged in which the plastids are named differently: chloroplasts in green algae and plants, rhodoplasts in red algae and cyanelles in the glaucophytes. The plastids differ by their pigmentation, but also in ultrastructure. The chloroplasts, e.g., have lost all phycobilisomes, the light harvesting complexes found in cyanobacteria, red algae and glaucophytes, but instead contain stroma and grana thylakoids, structures found only in plants and in closely related green algae. The glaucocystophycean plastid — in contrast to the chloroplasts and the rhodoplasts — is still surrounded by the remains of the cyanobacterial cell wall. All these primary plastids are surrounded by two membranes.
Complex plastids arise by secondary endosymbiosis, when a eukaryote engulfs a red or green alga and retains the algal plastid, which is typically surrounded by more than two membranes. In some cases these plastids may be reduced in their metabolic and/or photosynthetic capacity. Algae with complex plastids derived by secondary endosymbiosis of a red alga include the heterokonts, haptophytes, cryptomonads, and most dinoflagellates (= rhodoplasts). Those that endosymbiosed a green alga include the euglenids and chlorarachniophytes (= chloroplasts). The Apicomplexa, a phylum of obligate parasitic protozoa including the causative agents of malaria (Plasmodium spp.), toxoplasmosis (Toxoplasma gondii), and many other human or animal diseases, also harbor a complex plastid (although this organelle has been lost in some apicomplexans, such as Cryptosporidium parvum, which causes cryptosporidiosis). The 'apicoplast' is no longer capable of photosynthesis, but is an essential organelle, and a promising target for antiparasitic drug development.
Some dinoflagellates and sea slugs, in particular of the genus Elysia, take up algae as food and keep the plastid of the digested alga to profit from the photosynthesis; after a while, the plastids are also digested. These captured plastids are known as kleptoplastids.
References
- Wycliffe P, Sitbon F, Wernersson J, Ezcurra I, Ellerström M, Rask L (October 2005). "Continuous expression in tobacco leaves of a Brassica napus PEND homologue blocks differentiation of plastids and development of palisade cells". Plant J. 44 (1): 1–15. doi:10.1111/j.1365-313X.2005.02482.x. PMID 16167891.
- Birky CW (2001). "The inheritance of genes in mitochondria and chloroplasts: laws, mechanisms, and models". Annu. Rev. Genet. 35: 125–48. doi:10.1146/annurev.genet.35.102401.090231. PMID 11700280.
- Kolattukudy PE (1996). "Biosynthetic pathways of cutin and waxes, and their sensitivity to environmental stresses". In: Plant Cuticles. Ed. by G. Kerstiens. Oxford: BIOS Scientific Publishers Ltd., pp. 83–108.
- Wise, Robert R. (2006). "1. The Diversity of Plastid Form and Function". Advances in Photosynthesis and Respiration 23. Springer. pp. 3–26. doi:10.1007/978-1-4020-4061-0_1.
- Zhang, Q.; Sodmergen (2010). "Why does biparental plastid inheritance revive in angiosperms?". Journal of Plant Research 123 (2): 201–206. doi:10.1007/s10265-009-0291-z. PMID 20052516.
- Hedges SB, Blair JE, Venturi ML, Shoe JL (January 2004). "A molecular timescale of eukaryote evolution and the rise of complex multicellular life". BMC Evol. Biol. 4: 2. doi:10.1186/1471-2148-4-2. PMC 341452. PMID 15005799.
Further reading
- Chan CX, Bhattacharya D (2010). "The origins of plastids". Nature Education 3 (9): 84.
- Bhattacharya, D., ed. (1997). Origins of Algae and their Plastids. New York: Springer-Verlag/Wein. ISBN 3-211-83036-7.
External links
- A Novel View of Chloroplast Structure: contains fluorescence images of chloroplasts and stromules as well as an easy-to-read chapter.
- Tree of Life: Eukaryotes
- Transplastomic plants for biocontainment (biological confinement of transgenes) — Co-extra research project on coexistence and traceability of GM and non-GM supply chains | <urn:uuid:92b4f548-bde8-4c0a-a06e-2c7a7a7f801a> | 3.890625 | 2,412 | Knowledge Article | Science & Tech. | 37.04873 |
These sessile Cnidaria host clownfishes or anemonefishes, of the genus Amphiprion and the genus Premnas. Only ten of the >1000 species of Actiniaria are known to associate with clownfish, and all ten of them are shallow-water dwellers because they have another symbiotic relationship in common: they all host single-celled algae in their tissues. These algae produce sugar through photosynthesis (which is why this partnership must live in shallow water with abundant sunlight) and share it with their sea anemone host in exchange for shelter. This kind of relationship is quite common in Cnidaria; reef-forming corals also host algae this way. (Fautin and Allen, 1992)
The relationship between sea anemones and clownfish is more unusual. Sea anemones, like corals and other Cnidaria, have stinging cells in their tentacles, but clownfish living with them are not stung. How they manage this is not well understood, but it appears to be related to a mucus coating their skin. Researchers disagree over whether this is produced by the fish, or produced by the anemone and gradually picked up by the fish. A clownfish does take some time to adapt to an anemone host at first, touching and then retreating from the tentacles until gradually it settles among them unharmed.(Fautin and Allen, 1992)
The benefit to the clownfish is clear. Predators will not pursue them among their host's stinging tentacles, and in nature clownfish are rarely seen far from a host anemone. In some cases, the anemone also benefits. When clownfish were removed from their anemone host Entacmaea quadricolor in Papua New Guinea, within a single day, the anemones had been decimated, apparently by butterflyfish which were still hunting for leftover scraps when researchers returned. Other species of sea anemone host clownfish sometimes but thrive without them also, and may not benefit from the relationship.(Fautin and Allen, 1992)
Small crustaceans, including crabs and shrimp sometimes also associate closely with sea anemones. These small animals have proven to be nimble and elusive so researchers don't yet know very much about these relationships.(Fautin and Allen, 1992)
- Dr. Daphne G. Fautin and Dr. Gerald R. Allen. 1992. Field Guide to Anemone Fishes and Their Host Sea Anemones. Western Australian Museum, Perth, Australia.
- Steven Vogel. 2003. Comparative Biomechanics: Life's Physical World. Princeton: Princeton University Press. 580 p. | <urn:uuid:9ea27dda-a845-416c-8621-24a547d4c6fa> | 3.453125 | 563 | Knowledge Article | Science & Tech. | 44.56746 |
The Physics Factbook™
Edited by Glenn Elert -- Written by his students
An educational, Fair Use website
| Bibliographic Entry | Result (with surrounding text) | Standardized Result |
| --- | --- | --- |
| Barnes, Sue & Helena Curtis. Biology, Fifth Edition. New York: Worth, 1989: 180-182. | "In the course of the reaction, about 7 kilocalories of energy are released per mole." | 29 kJ |
| "Adenosine Triphosphate." Encarta. Redmond, WA: Microsoft, 1997-2000. | "With the release of the end phosphate group, 7 kilocalories of energy become available for work and the ATP molecule becomes ADP." | 29 kJ |
| Farabee, M.J. ATP and Biological Energy. On-Line Biology Book. Estrella Mountain Community College, 2000. | "Energy is stored in the covalent bonds between phosphates, with the greatest amount of energy (approximately 7 kilocalories) in the bond between the second and third phosphate groups." | 29 kJ |
| Hinkle, Peter & Richard McCarthy. "How cells make ATP." Scientific American. March 1978: 238, 104-117. | "The amount of energy needed to form ATP depends on the chemical environment, but is never more than about 15 kilocalories per mole" | 63 kJ |
| Campbell, Neil. Biology, Third Edition. Benjamin Cummings, 1993: 97-101. | "The reaction is exergonic, and under laboratory conditions, releases 7.3 kcal of energy per mole of ATP hydrolyzed" | 31 kJ (per mole, lab); (per mole, cell) |
| Bray, Dennis. Cell Movements. New York: Garland, 1992: 6. | "What is this power requirement in terms of ATP molecules, the principle currency of energy in the cell? Hydrolysis of one gram mole of ATP releases about 470 kJ of useful energy; hydrolysis of a single ATP molecule, about 10^−19 J." | 470 kJ (per gram mole) |
All of the biosynthesis activities of the cell, many of its transport processes and a variety of other activities require energy. Energy is defined as the capacity to do work. Adenosine triphosphate (ATP), a molecule found in all living organisms, is the immediate source of usable energy for body cells and their function. ATP is built up by the metabolism of food in the cell's mitochondria. ATP is characterized as a coenzyme because the energy-exchanging function of ATP and the catalytic function of enzymes are intimately connected.

ATP is made up of the nitrogenous base adenine, the five-carbon sugar ribose and three phosphate groups. Three phosphate units (triphosphate), each made up of one phosphorus atom and four oxygen atoms, are attached to the ribose. The two bonds between the three phosphate groups are relatively weak and yield their energy readily when split by enzymes. Inside a cell the ATP molecule is split at one of the high-energy bonds, releasing the energy to power cellular activities. Adenosine diphosphate (ADP) and phosphate (P) are produced in the process. With the release of the end phosphate group, 7 kilocalories (under laboratory conditions) of energy become available for work.
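The kilocalorie figures quoted above can be checked against the kilojoule values with a short Python sketch. The conversion factor (4.184 kJ per thermochemical kilocalorie) and Avogadro's number are standard constants, not from the source:

```python
KCAL_TO_KJ = 4.184   # kilojoules per thermochemical kilocalorie
AVOGADRO = 6.022e23  # molecules per mole

# The commonly quoted hydrolysis energies, converted to kJ/mol
# (compare the standardized 29, 31 and 63 kJ figures quoted above):
conversions = {kcal: round(kcal * KCAL_TO_KJ) for kcal in (7.0, 7.3, 15.0)}
for kcal, kj in conversions.items():
    print(f"{kcal} kcal/mol = {kj} kJ/mol")

# Bray's per-molecule figure: 470 kJ per gram mole works out to
# roughly 1e-19 J per individual ATP molecule.
per_molecule_J = 470e3 / AVOGADRO
print(f"{per_molecule_J:.1e} J per molecule")
```

This reproduces the table's standardized results and shows why Bray's "about 10^−19 J" per molecule is consistent with his 470 kJ per gram mole.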
ATP + H2O → ADP + Phosphate
ATP needs to be regenerated continuously by the recombining of ADP and P. From the foods and beverages people eat and drink, and through the process of digestion and absorption, cells break down several types of compounds to release enough energy to cause ADP and P to recombine and replenish ATP stores. These compounds are phosphocreatine (PCr), carbohydrates, fats and proteins.

ATP is the most frequent molecule that supplies energy in coupled reactions. In coupled reactions, endergonic reactions or transport processes are linked to exergonic reactions that provide a surplus of energy. This makes the entire process exergonic and able to proceed spontaneously. The covalent bonds linking the two phosphates to the rest of the molecule are easily broken, releasing energy in the amount of 7 kilocalories per mole (under laboratory conditions).

Most of the energy-consuming reactions in cells are powered by the conversion of ATP to ADP; they include the transmission of nerve signals, the movement of muscles, the synthesis of protein and cell division.
Amber Iqbal -- 2000
| <urn:uuid:44cad09a-56ef-4ed7-942a-be72717b0aac> | 3.078125 | 966 | Content Listing | Science & Tech. | 35.658297 |
For systems like Coq that are based on type theory, this question is trickier to answer than you might expect.
First of all, what does it take to "know" the consistency strength of some system? Classically, the most thoroughly studied logical systems are based on first-order logic, using either the language of elementary arithmetic or the language of set theory. So if you are able to say, "System X is equiconsistent with ZF" (or with PA, or PRA, or ZFC + infinitely many inaccessibles, etc.), then most people will feel that they "know" the consistency strength of X, because you have calibrated it against a familiar hierarchy of systems.
Coq, however, is based on something called the Calculus of Inductive Constructions (CIC). Without going into a detailed explanation of what this is, let me just mention that the core of CIC doesn't have any axioms, but typically people add axioms as needed. For example, if you want classical logic, then you can add the law of the excluded middle as an axiom. To get more power you can add more axioms (though you have to be careful because certain combinations of axioms are known to be inconsistent). But trying to line up the various systems you can get this way against more familiar set-theoretic or arithmetic systems is a tricky business. Typically, we cannot expect an exact calibration, but we can interpret various fragments of set theory in type theory and vice versa, showing that the consistency of CIC plus certain axioms is sandwiched between two different systems on the set-theoretic side. If you want to delve into the details, I'd recommend the paper Sets in Coq, Coq in Sets by Bruno Barras as a starting point. | <urn:uuid:2638bcb8-f9f2-4a25-8978-fd58b04e26de> | 2.734375 | 374 | Q&A Forum | Science & Tech. | 43.837745 |
Makes a request to a Uniform Resource Identifier (URI). This is an abstract class.
Assembly: System.Net (in System.Net.dll)
The WebRequest type exposes the following members.
| Member | Description |
| --- | --- |
| ContentLength | When overridden in a descendant class, gets or sets the content length of the request data being sent. |
| ContentType | When overridden in a descendant class, gets or sets the content type of the request data being sent. |
| CreatorInstance | When overridden in a descendant class, gets the factory object derived from the IWebRequestCreate class used to create the WebRequest instantiated for making the request to the specified URI. |
| Credentials | When overridden in a descendant class, gets or sets the network credentials used for authenticating the request with the Internet resource. |
| Headers | When overridden in a descendant class, gets or sets the collection of header name/value pairs associated with the request. |
| Method | When overridden in a descendant class, gets or sets the protocol method to use in this request. |
| RequestUri | When overridden in a descendant class, gets the URI of the Internet resource associated with the request. |
| UseDefaultCredentials | When overridden in a descendant class, gets or sets a Boolean value that controls whether default credentials are sent with requests. |
| Abort | Aborts the request. |
| BeginGetRequestStream | When overridden in a descendant class, provides an asynchronous method to request a stream. |
| BeginGetResponse | When overridden in a descendant class, begins an asynchronous request for an Internet resource. |
| Create(String) | Initializes a new WebRequest instance for the specified URI scheme. |
| Create(Uri) | Initializes a new WebRequest instance for the specified URI scheme. |
| CreateHttp(String) | Initializes a new HttpWebRequest instance for the specified URI string. |
| CreateHttp(Uri) | Initializes a new HttpWebRequest instance for the specified URI. |
| EndGetRequestStream | When overridden in a descendant class, returns a Stream for writing data to the Internet resource. |
| EndGetResponse | When overridden in a descendant class, returns a WebResponse. |
| Equals(Object) | Determines whether the specified Object is equal to the current Object. (Inherited from Object.) |
| Finalize | Allows an object to try to free resources and perform other cleanup operations before the Object is reclaimed by garbage collection. (Inherited from Object.) |
| GetHashCode | Serves as a hash function for a particular type. (Inherited from Object.) |
| GetType | Gets the Type of the current instance. (Inherited from Object.) |
| MemberwiseClone | Creates a shallow copy of the current Object. (Inherited from Object.) |
| RegisterPrefix | Registers a WebRequest descendant for the specified URI. |
| ToString | Returns a string that represents the current object. (Inherited from Object.) |
WebRequest is the abstract base class for the .NET Framework's request/response model for accessing data from the Internet. An application that uses the request/response model can request data from the Internet in a protocol-agnostic manner, in which the application works with instances of the WebRequest class while protocol-specific descendant classes carry out the details of the request.

Requests are sent from an application to a particular URI, such as a Web page on a server. The URI determines the proper descendant class to create from a list of WebRequest descendants registered for the application. WebRequest descendants are typically registered to handle a specific protocol, such as HTTP or HTTPS, but can be registered to handle a request to a specific server or path on a server.
When Status is UnknownError, additional details about the protocol specific response error may be available using the Response property. If the Response property is not null, this indicates that the remote server responded with an error code. In this case, the Response property can be queried for more specific information about the response.
Because the WebRequest class is an abstract class, the actual behavior of WebRequest instances at run time is determined by the descendant class returned by the Create method. For more information about default values and exceptions, see the documentation for the descendant classes, such as HttpWebRequest.

Use the Create method to initialize new WebRequest instances. Do not use the WebRequest constructor.

When you inherit from WebRequest, you must override the following members: Method, RequestUri, Headers, ContentType, Credentials, Abort, BeginGetRequestStream, EndGetRequestStream, BeginGetResponse, and EndGetResponse. In addition, you must provide an implementation of the IWebRequestCreate interface, which defines the Create method used when you call Create. You must register the class that implements the IWebRequestCreate interface, using the RegisterPrefix method.
For a list of the operating systems and browsers that are supported by Silverlight, see Supported Operating Systems and Browsers. | <urn:uuid:0ce6315b-0e70-4e28-9efe-88efddf770aa> | 2.71875 | 1,054 | Documentation | Software Dev. | 30.114732 |
Arctic sea ice extent for January 2013 was well below average, largely due to extensive open water in the Barents Sea and near Svalbard. The Arctic Oscillation also remained in a primarily negative phase. Antarctic sea ice remained extensive due to an unusual northward excursion of ice in the Weddell Sea. December of 2012 saw Northern Hemisphere snow cover at a record high extent, while January 2013 is the sixth-highest snow cover extent on record since 1967.
Overview of conditions
The average sea ice extent for January 2013 was 13.78 million square kilometers (5.32 million square miles). This is 1.06 million square kilometers (409,000 square miles) below the 1979 to 2000 average for the month, and is the sixth-lowest January extent in the satellite record. The last ten years (2004 to 2013) have seen the ten lowest January extents in the satellite record.
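The paired metric and imperial figures above are straightforward unit conversions; a quick Python sketch confirms them (the km² → mi² factor of 0.386102 is a standard constant, not from the source):

```python
KM2_TO_MI2 = 0.386102  # square kilometers to square miles

extent_km2 = 13.78e6   # January 2013 mean extent
anomaly_km2 = 1.06e6   # deficit relative to the 1979-2000 January mean

extent_mi2 = extent_km2 * KM2_TO_MI2    # ~5.32 million square miles
anomaly_mi2 = anomaly_km2 * KM2_TO_MI2  # ~409,000 square miles

print(f"{extent_mi2 / 1e6:.2f} million sq mi")
print(f"{anomaly_mi2 / 1e3:.0f} thousand sq mi")
```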
As has been the case throughout this winter, ice extent in the Atlantic sector of the Arctic Ocean remained far below average. While the Kara Sea was completely iced over, nearly all of the Barents Sea remained ice free, and open water was present north of the Svalbard Archipelago. The lack of winter ice in the Barents Sea and the vicinity of Svalbard has been a common feature of recent years. Recent work by Vladimir Alexeev and colleagues at the University of Alaska Fairbanks provides further evidence that this is related to a stronger inflow of warm waters from the Atlantic as compared to past decades. On the Pacific side, the ice edge in the Bering Sea continued to extend slightly further to the south than usual.
Also, a new paper by Jinlun Zhang and colleagues at the University of Washington analyzed the effect of the strong August 2012 cyclone on last year's record sea ice minimum. While they found a large effect in the immediate wake of the storm, the effect declined quickly, and overall the storm had only a small effect on the final September minimum extent.
Conditions in context
Through the month of January, the Arctic gained 1.36 million square kilometers (525,000 square miles) of ice, which is slightly higher than average for the month. Air temperatures at the 925 hPa level were 2 to 5 degrees Celsius (4 to 9 degrees Fahrenheit) higher than average across much of the Arctic Ocean. Temperatures were especially far above average near Svalbard, where ice-free conditions persisted. Below-average temperatures characterized parts of northern Eurasia and northwestern Canada. The dominant feature of the Arctic sea level pressure field for January 2013 was unusually high pressure over the central Arctic Ocean, consistent with a predominantly negative phase of the Arctic Oscillation.
January 2013 compared to previous years
Average Arctic sea ice extent for January 2013 was the sixth lowest for the month in the satellite record. Through 2013, the linear rate of decline for January ice extent is -3.2 percent per decade relative to the 1979 to 2000 average.
Looking at Northern Hemisphere snow
As noted in a previous post, Northern Hemisphere snow cover extent for June 2012 set a record low, continuing a downward trend in springtime snow extent. Satellite data from the Rutgers University Global Snow Lab show that after Northern Hemisphere snow cover extent for December 2012 reached a record high for the month of 46.27 million square kilometers (17.86 million square miles), extent during January increased to a monthly average of 48.64 million square kilometers (18.78 million square miles). This was the sixth-highest January extent in the record, dating back to 1967. Snow cover was higher than average throughout much of the western United States as well as northern Europe and eastern China. Snow cover was lower than normal over the central U.S., and much of southern Asia, including the Tibetan Plateau.
A visit to Antarctica
Turning to Antarctica, we note that January 2013 saw an unusual northward (towards the equator) excursion of sea ice in the Weddell Sea. The ice edge was found approximately 200 to 300 kilometers (124 to 186 miles) beyond its typical location. Overall, sea ice extent in the Antarctic was nearly two standard deviations above the mean for most of the month.
The cause of this very unusual sea ice pattern appears to be persistent high pressure in the region west of the Weddell Sea, across the Antarctic Peninsula to the Bellingshausen Sea. This pressure pattern means that winds tend to blow to the north on the east side of the Peninsula, both moving the ice northward and bringing in cold air from southern latitudes to reduce surface melting of the ice as it moves north.
Intense Greenland surface melting inspires new Web site
In recent years, the surface of the Greenland Ice Sheet has experienced strong melting, but the 2012 melt season far exceeded all previous years of satellite monitoring, and led to significant amounts of ice loss for the year. NSIDC’s new Web site, Greenland Ice Sheet Today presents images of the widespread surface melt on Greenland during 2012 and scientific commentary on the year’s record-breaking melt extent.
Throughout the coming year, the site will offer daily satellite images of surface melting and periodic analysis by the NSIDC science team. NSIDC scientists at the University of Colorado Boulder developed Greenland Ice Sheet Today with data from Thomas Mote of the University of Georgia, and additional collaboration from Marco Tedesco of the City University of New York.
The Greenland Ice Sheet contains a massive amount of fresh water, which if added to the ocean could raise sea levels enough to flood many coastal areas where people live around the world. The ice sheet normally gains snow during winter and melts some during the summer, but in recent decades its mass has been dwindling.
Alexeev, V. A., Ivanov, V. V., Kwok, R., and Smedsrud, L. H. 2013. North Atlantic warming and declining volume of arctic sea ice. The Cryosphere Discussions 7, 245-265, doi: 10.5194/tcd-7-245-2013.
Zhang, J., R. Lindsay, A. Schweiger, and M. Steele. 2013. The impact of an intense summer cyclone on 2012 Arctic sea ice retreat. Geophysical Research Letters, In press, doi: 10.1002/grl.50190. | <urn:uuid:b7ffa2cf-0560-4da1-a72e-2e3600942384> | 3 | 1,294 | Knowledge Article | Science & Tech. | 52.216966 |
This artist's impression of Saturn's moon Titan shows the change in observed atmospheric effects before, during and after equinox in 2009. The Titan globes also provide an impression of the detached haze layer that extends all around the moon (blue). This image was inspired by data from NASA's Cassini mission.
During the first years of Cassini's exploration of the Saturnian system, Titan sported a "hood" of dense gaseous haze (white) in a vortex above its north pole, along with a high-altitude "hot spot" (red). During this time the north pole was pointed away from the sun.
At equinox, both hemispheres received equal heating from the sun. Afterwards, the north pole tilted towards the sun, signaling the arrival of spring, while the southern hemisphere tilted away from the sun and moved into autumn.
After equinox and until 2011, there was still a significant build up of trace gases over the north pole, even though the vortex and hot spot had almost disappeared. Similar features began developing at the south pole, which are still present today.
These observations are interpreted as a large-scale reversal in the single pole-to-pole atmospheric circulation cell of Titan immediately after equinox, with an upwelling of gases in the summer hemisphere and a corresponding downwelling in the winter hemisphere.
The Cassini-Huygens mission is a cooperative project of NASA, the European Space Agency and the Italian Space Agency. NASA's Jet Propulsion Laboratory manages the mission for NASA's Science Mission Directorate, Washington, D.C. The visual and infrared mapping spectrometer team is based at the University of Arizona, Tucson. The composite infrared spectrometer team is based at NASA's Goddard Space Flight Center in Greenbelt, Md., where the instrument was built. JPL is a division of Caltech.
For more information on Cassini, visit http://www.nasa.gov/cassini and http://saturn.jpl.nasa.gov. | <urn:uuid:ed1e2a87-d2a6-4b55-81f1-8bb3c0e169cf> | 3.828125 | 418 | Knowledge Article | Science & Tech. | 43.87677 |
In accelerators we shoot very high momentum particles at each other to probe their structure at very small length scales. Does that have anything to do with the HUP, which addresses the spreads in momentum and position?
Related: when we accelerate a proton to exactly, say, 1 GeV, then we know its momentum exactly. But for high-momentum particles the de Broglie wavelength also shrinks, so the particle's position becomes more precise. But that would violate the HUP.
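For scale, the de Broglie wavelength λ = h/p mentioned in the question can be computed directly for a 1 GeV/c proton. A Python sketch, using the standard value h·c ≈ 1239.84 MeV·fm (a textbook constant, not from the question):

```python
HC_MEV_FM = 1239.84  # Planck's constant times c, in MeV*fm

def de_broglie_wavelength_fm(p_mev_per_c: float) -> float:
    """de Broglie wavelength lambda = h/p, returned in femtometers."""
    return HC_MEV_FM / p_mev_per_c

# A 1 GeV/c proton has a wavelength of about 1.24 fm, comparable to
# the proton's own charge radius, which is why beams of this momentum
# begin to resolve sub-nucleon structure.
print(de_broglie_wavelength_fm(1000.0))
```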
What goes on with high momentum particles and their momentum spread? | <urn:uuid:9095fbe4-c41d-4019-b545-9a61cbf40167> | 2.71875 | 107 | Q&A Forum | Science & Tech. | 50.630853 |
NMR could help clear landmines
Apr 17, 2002
Landmines could be cleared more quickly with a new technique based on nuclear magnetic resonance. Most landmine detectors search for buried metal, but these devices can be inefficient because they also detect rusty nails and shrapnel. The technique developed by Markus Nolte of the University of Darmstadt in Germany and colleagues could solve this problem by detecting the nitrogen in the explosive TNT, which is particularly hard to spot (M Nolte et al 2002 J. Phys. D: Appl. Phys. 35 939).
Manufacturers of landmines deliberately use little metal in their devices to make them hard to find with conventional detectors. Physicists originally tackled this problem with a technique based on ‘nuclear quadrupole resonance’, which detects the nitrogen that most explosives contain. When the explosive is placed in an electric field, the spins of the nitrogen nuclei line up, and emit a characteristic signal when the field is switched off.
But the strength of this signal depends on the crystal structure of the explosive, and TNT – or trinitrotoluene – which is widely used in mines, only produces a weak signal. This means that the method can spot anti-tank mines, which contain around five kilograms of TNT, but not anti-personnel mines, which contain around a hundred times less.
To solve the problem, Nolte and colleagues developed a device based on a nuclear effect known as cross-relaxation. The researchers use a strong magnetic field to align – or ‘polarize’ – the spins of the hydrogen nuclei in a sample of explosive. The magnetic field is then reduced until the hydrogen nuclei fall to an energy level that matches that of the nitrogen nuclei in the sample. At this point, the hydrogen nuclei ‘transfer’ some of their polarization to the nitrogen nuclei.
The magnetic field is then increased to its original level and the researchers measure the polarization of the hydrogen nuclei. This reveals the number of nitrogen nuclei present in the sample, and the method is sensitive enough to detect as little as half a gram of TNT.
Nolte and co-workers stress that their technique has only been demonstrated in the lab, and say that it will be difficult to build a practical device. But they are very optimistic about the potential of their method. “The technique is providing us with stunningly accurate results, which we hope will one day save many lives,” says team member Alexei Privalov.
About the author
Katie Pennicott is Editor of PhysicsWeb | <urn:uuid:4e8c7f73-f5a7-45eb-ae25-04797aed2465> | 3.078125 | 537 | Truncated | Science & Tech. | 41.435077 |
Still have questions? Dr. Bryan Wallace, Conservation International’s resident sea turtle scientist, and his colleagues are here to help! Click here to send Bryan your unanswered queries. Each month, he’ll select one question to explore in greater depth with his friends, and post their findings here for all to see.
The question of how old sea turtles get is important and seemingly simple, but one we actually don't know the answer to.
What we do know is that sea turtles certainly live a very long time, probably similar in lifespan to humans. Depending on the species, sea turtles reach sexual maturity in 10-15 years (for ridleys) to 30-40 years (for greens) after emerging from their nests as hatchlings. Because it takes them so long to reach adulthood, we know that sea turtles must spend decades as adults to reproduce long enough to replace themselves (and their mates) in the population.
One of the main methods that scientists use to infer age in sea turtles is called skeletochronology. This method is similar to counting tree rings, except that instead of layers of wood deposited annually by a growing tree, a turtle's bone—particularly the dense humerus in a sea turtle's front flipper—deposits rings as well, supposedly on an annual basis. Skeletochronology has been used to determine age at maturity as well as ages of individual turtles, and has been shown to work well in some species (e.g., ridleys), but not as well in others (e.g., leatherbacks).
So, while Crush from 'Finding Nemo' was right that sea turtles live a long time, we're still not sure about the actual number.
Bryan Wallace, PhD, is the Science Advisor for the Sea Turtle Flagship Program at CI and is Adjunct Assistant Professor in the Nicholas School of the Environment at Duke University. His work deals with the applications of sea turtle ecology research to pertinent conservation issues. Specifically, Bryan’s research focus has been on how the interactions between sea turtles and physical and biological conditions of their environments affect their physiology, ecology, life history, and population demography. He has worked extensively in Latin America and the U.S. on sea turtle research and conservation projects. He speaks Spanish fluently, has coordinated teams of volunteers on field projects focusing on sea turtles and other species. He has served as a program co-chair, regional travel chair, and student judge at multiple International Sea Turtle Symposia. He received his PhD from Drexel University (2005), and was a post-Doctoral Researcher at the Duke University Marine Lab (2005-2007). | <urn:uuid:d89dc4d3-e168-4377-97dd-0a169d47866c> | 2.84375 | 544 | Knowledge Article | Science & Tech. | 49.256981 |
Epimetheus (pronounced ep-eh-MEE-thee-us; adjective: Epimethean) and the neighboring moon Janus have been referred to as the Siamese twins of Saturn because they orbit Saturn in nearly the same orbit. This co-orbital condition (also called 1:1 resonance) confused astronomers, who at first could not believe that two moons could share nearly identical orbits without colliding.
These two moons lie amongst Saturn's rings and have orbital radial distances from Saturn of roughly 151,500 km (94,100 miles). One moon orbits 50 km (31 miles) higher (farther away from the planet) and consequently moves slightly slower than the other. The slight velocity difference means the inner satellite catches up to the outer one in approximately four Earth years. Then the gravitational interaction between the two pulls the inner moon forward, boosting it to a higher orbit. At the same time, the catching-up inner moon drags the leading outer moon backward so that it drops into a lower orbit. The result is that the two exchange places, and the nearest they approach is within 15,000 km (9,300 miles). During the 2010 trade-off the Epimethean orbital radius dropped by approximately 80 km (50 miles), while the Janus radius increased by only approximately 20 km (12.4 miles). The Janus orbit changes only a quarter as much as the Epimethean orbit because Janus is four times more massive than Epimetheus.
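The four-to-one split in the orbital shifts follows from the exchange of orbital angular momentum: the radius changes are roughly inversely proportional to the moons' masses. A minimal Python sketch, using the ~4:1 mass ratio and the ~80 km Epimethean shift stated above:

```python
mass_ratio = 4.0       # Janus is roughly four times as massive as Epimetheus
dr_epimetheus_km = 80  # approximate change in Epimetheus' orbital radius

# The heavier moon's radius changes only a quarter as much:
dr_janus_km = dr_epimetheus_km / mass_ratio
print(dr_janus_km)  # 20.0
```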
Epimetheus and Janus may have formed by the break-up of one moon. If so, it would have happened early in the life of the Saturn system since both moons have ancient cratered surfaces, many with soft edges because of dust. They also have some grooves (similar to grooves on the Martian moon Phobos) suggesting some glancing blows from other bodies. Together, the moons trail enough particles to generate a faint ring. However, except for very powerful telescopes, the region of their common orbit (and the faint ring) appears as a gap between Saturn's more prominent F and G rings.
Epimetheus and Janus are the fifth and sixth moons in distance from Saturn. Both are tidally locked with their parent; one side always faces toward Saturn. Being so close, they orbit in less than 17 hours. They are both thought to be composed largely of water ice, but their density of less than 0.7 g/cm³ is much less than that of water. Thus, they are probably "rubble piles" -- each a collection of numerous pieces held loosely together by gravity. Each moon has dark, smoother areas, along with brighter areas of terrain. One interpretation of this is that the darker material evidently moves down slopes, leaving shinier material such as water ice on the walls of fractures. Their temperature is approximately -319 degrees Fahrenheit (-195 degrees Celsius). Their reflectivity (or albedo) of 0.7 to 0.8 in the visual range again suggests a composition largely of water ice.
The Epimethean mean diameter of 113 km (70 miles) comes from its potato-shaped dimensions of 135 x 108 x 105 km (84 x 67 x 65 miles, respectively). These numbers reflect pronounced flattening at the Epimethean south pole associated with the remains of a large crater. Epimetheus has several craters larger than 30 km, including Hilaeira and Pollux.
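The "rubble pile" density quoted earlier can be checked against these dimensions. The sketch below approximates Epimetheus as a triaxial ellipsoid; the mass of about 5.3 × 10^17 kg is an assumed literature value, not given in this text:

```python
import math

# Rough density check for Epimetheus, modeled as a triaxial ellipsoid
# with semi-axes taken from the quoted dimensions 135 x 108 x 105 km.
a, b, c = 67.5e3, 54.0e3, 52.5e3   # semi-axes in meters
mass = 5.3e17                      # kg -- assumed literature value

volume = (4.0 / 3.0) * math.pi * a * b * c   # ellipsoid volume, m^3
density = mass / volume                      # kg/m^3

print(f"mean density ~ {density / 1000:.2f} g/cm^3")  # ~0.66, below the quoted 0.7
```

The result is well under the density of water, consistent with a loosely packed body.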
Audouin Dollfus observed a moon on 15 December 1966, for which he proposed the name "Janus." On 18 December of the same year, Richard Walker made a similar observation, now credited as the discovery of Epimetheus. At the time, astronomers believed that there was only one moon, unofficially known as "Janus," in the given orbit. Twelve years later, in October 1978, Stephen M. Larson and John W. Fountain realized that the 1966 observations were best explained by two distinct objects (Janus and Epimetheus) sharing very similar orbits. Voyager 1 confirmed this in 1980, and so Larson and Fountain officially share the discovery of Epimetheus with Walker.
The Cassini spacecraft has made several close approaches and provided detailed images of the moon since it achieved orbit around Saturn in 2004.
How Epimetheus Got its Name:
John Herschel suggested that the moons of Saturn be associated with the mythical brothers and sisters of Kronus. (Kronus is the equivalent of the Roman god Saturn in Greek mythology.) The International Astronomical Union now controls the official naming of astronomical bodies.
The name Epimetheus comes from the Greek god (or Titan) Epimetheus (or hindsight), who was the brother of Prometheus (foresight). Together, they represented humanity. Epimetheus and Prometheus' father is Iapetus (who is one of Kronus' brothers).
The craters on Epimetheus include Hilaeira (who was a priestess of Artemis and Athena) and Pollux (who was a warrior in "The Iliad" and who carried off Hilaeira).
Astronomers also refer to Epimetheus as Saturn XI and as S/1980 S3. | <urn:uuid:b083e558-95ae-4099-8bb5-98a12d4deba5> | 3.796875 | 1,044 | Knowledge Article | Science & Tech. | 47.983443 |
Sculptor, Grus, and Phoenix - Downloadable article
Many of the finest nearby galaxies lie tucked away in the southern constellation Sculptor.
March 3, 2009
This downloadable article is from an Astronomy magazine 45-article series called "Celestial Portraits." The collection highlights all 88 constellations in the sky and explains how to observe each constellation's deep-sky targets. The articles feature star charts, stunning pictures, and constellation mythology. We've put together 11 digital packages. Each one contains four Celestial Portraits articles for you to purchase and download.
"Sculptor, Grus, and Phoenix" is one of four articles included in Celestial Portraits Package 5.
During autumn evenings, while the heart of the Milky Way sets in the southwest, the northern reaches command our attention. This often leaves the area away from the galaxy's plane ignored. While the southerly autumn sky is sparse, it is by no means devoid of interesting targets. These include galaxy groups, nearby galaxies, and a couple of nearby deep-sky objects you wouldn't expect to find so far from the Milky Way's glowing band.
As viewed from the lower tier of states, Grus the Crane looks like an inverted "y" of mostly 3rd- and 4th-magnitude stars riding along the southern horizon. Its two brightest stars, Alpha (α) and Beta (β) Gruis, are 2nd-magnitude gems, the latter looking distinctly orange. Northwest of Beta Gruis lie Delta1 (δ1) and Delta2 (δ2) Gruis, a pleasing naked-eye pair of 4th-magnitude stars separated by a quarter degree. Phoenix is a nondescript grouping of 4th-magnitude stars spread over a wide swath of sky capped by 2.4-magnitude Alpha Phoenicis, a yellow spectral type K star 77 light-years away. The area around Sculptor is the most star-poor in the sky, making it weakly defined. Although Sculptor encompasses 475 square degrees, its brightest star glows at magnitude 4.3. To read the complete article, purchase and download Celestial Portraits Package 5.
Deep-sky objects in Sculptor, Grus, and Phoenix
NGC 55, NGC 134, NGC 150, NGC 253, NGC 288, NGC 300, Sculptor Dwarf, NGC 613, IC 5148, NGC 7213, IC 5201, NGC 7418, IC 1459, NGC 7462, NGC 7582, SX Phe | <urn:uuid:e9942f0f-ac90-4b75-b21f-e8f7b28ef1c0> | 2.765625 | 544 | Truncated | Science & Tech. | 61.990702 |
Louisiana delta, Southern USA — The Deepwater Horizon oil spill of April 20, 2010, was the largest the US had ever experienced, with over 250 million gallons of crude oil released into the Gulf of Mexico. Many areas were greatly impacted by the oil spill, including the Gulf coast of Louisiana. The impacts from this spill on resident and migratory wildlife are likely to last a decade or more, based on the impacts of the Exxon Valdez oil spill on local and regional wildlife in Alaska. One species potentially impacted by the Deepwater Horizon oil spill is the Common Loon. Satellite telemetry data and recovery data from bands attached to loons suggest that the entire Midwest breeding population (around 5,000 pairs), and some of Manitoba's and Ontario's loons, overwinter in the Gulf of Mexico for up to six months of the year.
Winter is an energetically stressful time for loons because of both physiological changes (the move from freshwater to marine water) and morphological changes (such as molting). Chronic exposure to polycyclic aromatic hydrocarbons (e.g., from petroleum) can cause many debilitating sublethal effects, such as immune system suppression, hormonal imbalance, and red blood cell damage, which can lead to death by other means such as starvation, disease, and predation.
By monitoring the Louisiana coast for adult loon survivorship and health, we will be able to better determine impacts from the Deepwater Horizon oil spill on Common Loon breeding populations. Objectives will include determining the diet of wintering loons, as well as their habitat use, and monitoring their movements and behavior. We’ll assess the health of individuals and determine cause of death of any loons found in the area.
Our results will contribute to U.S. Fish & Wildlife Service's National Resource Damage and Assessment process, which oversees studies that attempt to quantify the extent of any damage and the best methods for restoring resources.
Meet the Scientists
Dr. Jim Paruk
International Loon Center for Conservation and Research
Dr. Jim Paruk is Director of the International Loon Center for Conservation and Research, for the BioDiversity Research Institute.
Following his Master's in biology, completed at Northern Illinois University, he travelled alone for six months, driving past the Arctic Circle in Alaska, to the Baja peninsula, New England, and the Florida Everglades. He studied for his Ph.D. in biology at Idaho State University in Pocatello, Idaho, and then accepted a professorship at Feather River College in northern California. He later took another professorship at Northland College, Wisconsin. In both positions he taught numerous courses, including ecology and ornithology. He has directed several research projects, served as vice president of the North American Loon Fund for two years, and served for five years as a board member for LoonWatch, chairing its research committee.
This is not Dr. Paruk’s first partnership with Earthwatch - between 1993 and 1996 he was a lead scientist on a former Earthwatch project investigating the parental roles, aggression, and social behavior of Common Loons in the Great Lakes Region.
“My spirit is drawn to those aspects of a loon that symbolize wilderness, independence and freedom. My experience with Earthwatch in the past is that volunteers are knowledgeable, bright, caring people who want to contribute. They often make great suggestions, and they bring their energy and enthusiasm to the project that keeps me going.” | <urn:uuid:d812a739-20c6-46e3-9f39-dbc60ff26e87> | 3.640625 | 705 | Knowledge Article | Science & Tech. | 35.920273 |
Science Scoops: Order! Order!
by Stephen James O'Meara
If you thought finding several new species of insects was exciting, get this: Danish and German scientists have recently discovered a new order of insects! In biology, an order is a primary classification of related animals, which is then subdivided into families, genera, and species. Several new insect species are found each year. But the discovery of a new insect order is not so common. In fact, it's the first time such a discovery has been made since 1915!
The first member of this new insect order was discovered by an international team of entomologists (scientists who study insects) that went to the Brandberg Mountains in Namibia, Africa. The new insect, which looks something like a cross between a stick insect and a praying mantis, was immediately recognized by one team member, Oliver Zompro (Max-Planck-Institute in Plön, Germany). You see, the previous year, Zompro had discovered this same life form in a 45-million-year-old piece of amber that was in a collection at the British Natural History Museum in London. So the newly discovered insect order has been around for at least 45 million years!
The new insect has jaws with three small teeth and long antennae. The scientists say that based on the insect's stomach contents, it appears to be a carnivore (flesh-eater). Indeed, rows of spines on its front and middle legs indicate that the animal held on to its prey with its legs, as some insect-eating locusts do. It has been given the provisional name “Gladiator.” This new order, christened “Mantophasmatodea,” brings the number of insect orders known throughout the world to 31.
- order: A group of animals or plants that are similar in many ways. Rodents such as rats, mice, hamsters, and beavers belong to the same order.
- Taxonomists divide these kingdoms into subcategories called phyla or divisions. A phylum distinguishes amongst animals with different evolutionary traits. A division distinguishes amongst plants with different evolutionary traits. Taxonomists then divide the phyla and divisions into classes. Then taxonomists divide the classes into orders. Organisms in the same order share certain characteristics. An order is divided into families. Families are divided into genera, and finally, genera are divided into species.
Print a copy of the chart titled: Classification (PDF file). This chart is separated into two columns. In the first column, write the word “Human” at the top. Using an encyclopedia, look up the information for the biological classification of a human. In the second column, write the word “Eastern Lowland Gorilla” at the top. Using an encyclopedia, look up the information for the biological classification of a gorilla.
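For readers without an encyclopedia to hand, a filled-in version of the chart can be sketched as a small data structure. The ranks below follow standard references; treat this as a worked answer to check against, not as part of the original activity:

```python
# Standard biological classification for the two organisms in the activity.
RANKS = ["kingdom", "phylum", "class", "order", "family", "genus", "species"]

classification = {
    "Human": ["Animalia", "Chordata", "Mammalia", "Primates",
              "Hominidae", "Homo", "sapiens"],
    "Eastern Lowland Gorilla": ["Animalia", "Chordata", "Mammalia", "Primates",
                                "Hominidae", "Gorilla", "beringei"],
}

for organism, values in classification.items():
    print(organism)
    for rank, value in zip(RANKS, values):
        print(f"  {rank:>8}: {value}")
```

Comparing the two columns shows they agree down through the family level and diverge only at genus and species — the kind of conclusion the chart exercise is designed to elicit.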
- Study your chart after you have completed it. What conclusions might you draw from the information that you found? | <urn:uuid:5a9eed44-c12f-45d4-8201-1ac2716dca75> | 3.796875 | 621 | Knowledge Article | Science & Tech. | 45.872027 |
Resource depletion, climate change and rising oil prices have led to large investments in renewable energy and new fossil fuel extraction techniques. However despite the positive headlines on solar, wind, and shale, all of these sectors are beset with problems that add to our uncertain energy future. We appear to be decades away from finding a renewable source that will help us avoid the impending energy crisis.
Yet amid this bleak outlook, could a relatively unknown Italian inventor be about to spark an energy revolution?
Andrea Rossi appears to have produced the first working "cold" fusion, or low energy nuclear reaction (LENR), device with his Energy Catalyser (E-Cat) machine, a technology previously declared impossible by the scientific community.
The E-Cat machine could provide almost limitless, clean, cheap energy. It could prove to be one of the greatest inventions of all time. Rossi could become one of the most revolutionary scientists since Darwin.
Notoriously reluctant to speak to the media about his work, we have had the honour of interviewing Mr. Andrea Rossi about his E-Cat machine.
Some of the questions we asked Mr. Rossi take a look at:
• Why it took so long for him to go public with his discovery.
• How the E-Cat will produce energy costing $10/megawatt hour.
• When he will release more detailed information on the E-Cat.
• Why he believes international media coverage of the E-Cat has been so muted.
• His feelings towards critics and the scientific community.
• His manufacturing and distribution goals.
• How the E-cat will help reduce mankind’s dependency on fossil fuels.
• + Many more details on the e-cat, LENR and Rossi himself.
Oilprice.com: What exactly is the E-Cat and how does it work?
Andrea Rossi: The E-Cat machine is basically a heater. It uses a secret catalyser to fuse hydrogen and nickel together to form copper. Copper has a lower energy state than nickel, and the excess energy is released in the form of a gamma ray. The gamma ray hits a wall of lead, where it is absorbed and transformed into heat. The whole process is incredibly efficient and can heat any fluid that passes through the machine.
Full article at: The Limitless Potential of the E-Cat: An Interview with Andrea Rossi | <urn:uuid:947bfdd6-f3d0-46f2-89d4-4f5d394880fe> | 3.375 | 491 | Audio Transcript | Science & Tech. | 45.533302 |
Microscopic evolution with thermal fluctuations on a discrete space...
On the microscopic scale, one considers an atomistic or mesoscopic model. There is a lot of freedom to choose the model, but to have an application in mind let us think of a spin system on a lattice. The microscopic evolution is usually governed by two mechanisms:
- an equilibration mechanism tries to minimize the energy of the microscopic system, and
- the thermal noise prevents the system from freezing in a local minimizer of the energy.
Because of the noise, the microscopic system may approach a statistical equilibrium state.
...lead to a macroscopic evolution without noise on a continuous space
Now, let's have a look at the microscopic system at a coarse scale. In our example this means looking at averages of spins gathered in blocks. By considering coarser and coarser scales, one observes two phenomena:
- The discrete lattice space approximates a continuous space, and
- the noise is getting smaller by an averaging effect. The stochastic evolution becomes more and more deterministic.
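The averaging effect in the second point can be illustrated with a minimal simulation: block averages of independent ±1 spins fluctuate with standard deviation of order 1/√N, so coarser blocks look increasingly deterministic. This toy example deliberately ignores the interactions and dynamics of a real spin system:

```python
import random
import statistics

random.seed(0)

def block_average_std(block_size: int, samples: int = 2000) -> float:
    """Empirical std of the average of `block_size` independent +/-1 spins."""
    averages = [
        sum(random.choice((-1, 1)) for _ in range(block_size)) / block_size
        for _ in range(samples)
    ]
    return statistics.pstdev(averages)

for n in (1, 16, 256):
    print(f"block size {n:4d}: fluctuation ~ {block_average_std(n):.3f}")
# Fluctuations shrink roughly like 1/sqrt(n): ~1.0, ~0.25, ~0.06
```

In the hydrodynamic limit the same mechanism, applied to interacting spins, is what makes the coarse-grained evolution deterministic.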
We want to answer the following questions:
- Is there a deterministic evolution on the continuous space that is approximated by the coarse-grained microscopic evolution?
- How is the deterministic evolution linked to the microscopic energy?
- How fast is the approximation?
- What are the statistics of the fluctuations around the deterministic evolution?
A two-scale approach
The two-scale approach represents a general strategy to answer the questions from above. The main idea of the two-scale approach is to gain better understanding by considering the atomistic or mesoscopic system at two scales:
- the microscopic scale describes the fluctuation of the system around a macroscopic state,
- the macroscopic scale describes the coarsened macroscopic system.
An important ingredient is the fast equilibration on the microscopic scale; this means that the statistical equilibrium of the fluctuations is attained very fast. The fast equilibration allows one to neglect the microscopic scale from then on.
By averaging, one can assume that the evolution on the macroscopic scale is already deterministic. It then remains to identify the limit of this evolution as the discrete space becomes more and more continuous.
Equilibration in high dimensions - the logarithmic Sobolev inequality
As outlined in the last section, it is very important to show that on the microscopic scale there is fast equilibration. In the hydrodynamic limit, the system size goes to infinity. Therefore, it is important to measure the equilibration with a tool that is able to handle high dimensions. For that reason, we measure the equilibration with the help of the relative entropy, which is closely connected to the logarithmic Sobolev inequality. For sufficiently good equilibration, it suffices to show that the statistical equilibrium state of the microscopic system satisfies the logarithmic Sobolev inequality uniformly in the system size. This motivates our interest in deducing the logarithmic Sobolev inequality for several systems.
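For reference, a standard textbook formulation of the inequality discussed here (constants and conventions vary by author, and this is not tied to any particular paper below): a probability measure μ satisfies LSI(ρ) if for all smooth f,

```latex
\int f^2 \log\!\left( \frac{f^2}{\int f^2 \, d\mu} \right) d\mu
\;\le\; \frac{2}{\rho} \int |\nabla f|^2 \, d\mu .
```

The point relevant for hydrodynamic limits is that the constant ρ must admit a lower bound independent of the system size.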
References and further reading
For a general introduction to the two-scale approach we recommend reading the article:
- Natalie Grunewald, Felix Otto, Cédric Villani, and Maria G. Westdickenberg.
A two-scale approach to logarithmic Sobolev inequalities and the hydrodynamic limit.
Ann. Inst. H. Poincaré, 45(2):302-351, 2009.
download (PDF, 529 kbyte)
For a general introduction to hydrodynamic limits, we recommend reading:
- S. R. S. Varadhan.
Relative entropy and hydrodynamic limits.
Stochastic processes, 329-336, Springer, New York, 1993.
For a general introduction to logarithmic Sobolev inequalities, we recommend reading:
- M. Ledoux.
Logarithmic Sobolev inequalities for unbounded spin systems revisited.
Sém. Probab. XXXV, Lecture Notes in Math., Springer 1755:167-194, 2001.
- G. Royer.
Une initiation aux inégalités de Sobolev logarithmiques.
Cours Spécialisés, Soc. Math. de France, 1999.
Works of our research group connected to this research topic:
- Georg Menz.
LSI for Kawasaki dynamics with weak interaction.
Commun. Math. Phys. 307, 817-860, 2011.
MPI MIS preprint 31/2010
- Georg Menz.
Equilibrium dynamics of continuous unbounded spin systems.
Dissertation, University of Bonn, 2011.
- Georg Menz and Felix Otto.
Uniform logarithmic Sobolev inequalities for conservative spin systems with super-quadratic single-site potential.
MPI MIS Preprint 5/2011, accepted by Ann. Probab.
- Felix Otto and Maria G. Reznikoff.
A new criterion for the logarithmic Sobolev inequality and two applications.
J. Funct. Anal., 243(1):121-157, 2007.
download (PDF, 281 kbyte)
- Georg Menz
- Felix Otto
- Criteria for Logarithmic Sobolev Inequalities, Application to hydrodynamic limit (see PDF, 140 Kbyte) | <urn:uuid:30999b98-a32e-40b5-a77e-7d2fa68335f3> | 2.828125 | 1,142 | Academic Writing | Science & Tech. | 34.584751 |
Zoologger is our weekly column highlighting extraordinary animals – and occasionally other organisms – from around the world.
Species: Physeter macrocephalus
In the chill darkness 2 kilometres below the surface of the Southern Ocean, one of nature's greatest battles is being fought. One of the combatants is a kraken – the largest invertebrate known to exist, a colossal squid, over 12 metres in length. The other is its predator.
Named after the creamy sperm-like fluid found in its head, sperm whales are one of the few animals that can hunt and kill adult colossal squid. It may be bad news for the Antarctic's krakens, but it is good news for us: the whales' deep-sea depredations function as a carbon sink, slightly easing the effects of anthropogenic climate change.
Almost all marine life is found within 200 metres of the surface, the so-called photic zone. In this sunny region there is enough light for microscopic plants called phytoplankton to photosynthesise, absorbing carbon dioxide. In turn, the phytoplankton support a network of animals that feed on them and each other.
Earlier this year it was shown that Southern Ocean baleen whales help keep this process going by releasing huge amounts of iron in their faeces. The Southern Ocean is short of iron, limiting the amount of life it can sustain, but these injections of iron help out.
Now Trish Lavery of Flinders University in Adelaide, South Australia, and her colleagues have gone a step further. They have found that while the baleen whales merely help keep the iron cycle going, sperm whales actually inject iron into it by hunting their prey at great depths and then defecating when they return to the photic zone. In effect they ferry iron from the depths of the sea to the surface, where the phytoplankton can use it.
Based on existing studies of sperm whale behaviour and anatomy, Lavery and colleagues calculated that the 12,000 sperm whales living in the Southern Ocean collectively eat 2 million tonnes of prey each year – including 60 tonnes of iron.
Of this, about 36 tonnes end up in the photic zone, sustaining swathes of phytoplankton there that take in CO2 from the atmosphere and that otherwise couldn't make a living.
Of all the carbon taken in by the phytoplankton, between 20 and 40 per cent ultimately sinks to the bottom of the ocean as various forms of waste. Lavery's team calculated that 400,000 tonnes of carbon gets dumped in this way every year as a result of the sperm whales' activities – far more than the estimated 160,000 tonnes the whales release by breathing.
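The budget in the last few paragraphs can be tied together in a few lines of arithmetic, using only the figures quoted in this article (the iron-to-carbon conversion used in the underlying study is not reproduced here):

```python
# Net carbon effect of Southern Ocean sperm whales, from the article's figures.
iron_eaten_t = 60.0          # tonnes of iron ingested per year
iron_to_photic_t = 36.0      # tonnes defecated into the photic zone
carbon_exported_t = 400_000  # tonnes of carbon sunk via stimulated phytoplankton
carbon_respired_t = 160_000  # tonnes of carbon the whales release by breathing

print(f"fraction of ingested iron reaching the photic zone: "
      f"{iron_to_photic_t / iron_eaten_t:.0%}")            # 60%
net_sink_t = carbon_exported_t - carbon_respired_t
print(f"net carbon sink: {net_sink_t:,} tonnes per year")  # 240,000
```

So on these numbers the whales remove roughly a quarter of a million tonnes of carbon per year more than they emit.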
All of which raises the question, how do sperm whales bring down such monstrous prey? No one has ever seen a sperm whale attack a colossal squid, but we can still work some of it out.
The whales have one advantage: in the cold Antarctic, colossal squid are rather slow-moving, lying in wait and ambushing their prey rather than actively hunting them. A hungry sperm whale might not have to chase very hard.
Similarly, the rather less enormous jumbo squid of the eastern Pacific actually go deeper to cool off, and the local sperm whales may hunt them at depth because they are slower and thus more vulnerable down there. And even the sizeable giant squid is something of a weakling despite its fearsome reputation.
Still, a weapon or two might be handy. And this could be where the sperm comes in. The spermaceti fluid is used to focus loud clicks, which the whales use for echolocation. The clicks can be over 230 decibels, making them the loudest sound produced by any living thing. It's been suggested that the whales use these clicks to stun their prey – but when they were played to prey animals in the lab there was no effect.
The spermaceti also cushions the whales' heads, particularly in males, which have much more of the stuff than females do. This allows fighting males to ram each other (or, in Moby-Dick, ships). However, there is no evidence of them ramming prey.
Females have their own hunting method. They live in tightly knit groups called pods, communicating with loud clicks and sharing the care of their calves. A tracking study earlier this year suggested that females hunt in packs, herding jumbo squid into "bait balls" just as dolphins do with fish. Young males live in "bachelor" pods and may therefore do the same thing, but as they get older they become solitary.
But if we really want to find out how they win their titanic battles, we need to get a video camera down there.
Read previous Zoologger columns: Globetrotters of the animal kingdom, Judge Dredd worm traps prey with riot foam, Flashmobbing locusts have redesigned brains, Smart camo lets glow-in-the-dark shark hide, Attack of the self-sacrificing child clones, The most kick-ass fish in the sea, The most bizarre life story on Earth?, Keep freeloaders happy with rotting corpses, Robin Hood meets his underwater match, The mud creature that lives without oxygen.
Have your say
Wed Jun 16 07:42:37 BST 2010 by SewerRat
A fascinating article relating the eating habits of these beasts to sea chemistry. The ability of any mammal to dive to over a mile in depth is a mind boggling feat in itself. The sperm whales' pre-dive breathing mechanism, collapsible ribcage and hydrostatic pressure equalising system provide a tantalising glimpse of what fantastic animals may exist in extreme conditions elsewhere in the universe.
Don't Believe It
Wed Jun 16 15:14:18 BST 2010 by Jefferson
Sorry but considering the decibel scale is exponential I find it hard to believe that they are claiming over 230 db and surely the whale itself would go deaf.
Don't Believe It
Wed Jun 16 15:39:03 BST 2010 by Michael Marshall
Hi Jefferson, thanks for your comment.
The 230 dB claim is based on a study in which the sperm whales' clicks were recorded in the wild, and which is linked to in the article. The researchers actually reported volumes up to 236 dB; I was being conservative and rounded down. Click http://dx.doi.org/10.1121/1.1586258 to see the paper.
I also doubt that the whales would go deaf: there are mechanisms for solving the problem you mention.
For instance bats, which of course also echolocate, momentarily disconnect their receiver apparatus when they send out a pulse of sound, so that it doesn't get overwhelmed. They then switch it back on again immediately in order to pick up the echo.
I don't know for sure, but the whales may well do the same thing.
| <urn:uuid:2d47ca71-cae1-46b8-9268-ff94f741d748> | 3.78125 | 1,566 | Comment Section | Science & Tech. | 54.566089 |
METHANE clouds scud over an icy landscape that barely registers a temperature above -180 °C. If life exists on Titan, surely it has to be about as otherworldly as any our solar system could support?
Perhaps not: a simulation of conditions on Saturn's giant frigid moon shows that some of the key molecular precursors of life as we know it are likely to have formed there. The results raise the odds that Titans, if they do exist, might be less alien than we imagine.
There is good reason to hope that a search for life on Titan will prove fruitful. The moon is swathed in a thick, protective, nitrogen-rich atmosphere. Beneath that is evidence of surface liquid - albeit in the form of hydrocarbon lakes.
So far, though, the hunt for living Titans has drawn a blank. The ...
| <urn:uuid:588b9f03-504b-427b-9407-68d24be5f224> | 3.640625 | 200 | Truncated | Science & Tech. | 56.081147 |
Transcript for Oceanfloor Legacy, segment 13 of 14
The sea floor images combined with the seismic profiles, the thirty cores, and the four sonar mosaics make up a package of basic scientific research that begins to answer questions about the processes shaping the sea floor west of San Francisco. The policy makers who must deal with the dredged material disposal question can now focus in on specific areas of the sea floor. They will place current meters at candidate sites, eliminating those where the sediment will wash away. The managers of the Gulf of the Farallones National Marine Sanctuary can more confidently define the threat posed by the radioactive waste drums. The basic research will also contribute to a clearer understanding of this dynamic environment.
In programs like this that really need a multiple - you know, you need a whole bunch of people that do all different types of things - sedimentologists, geophysicists, electronic technicians - that's what we're set up to do. We do things rapidly. You know, we can go in a blitzkrieg mode and get a lot of talented people and draw from the community to do it.
It may not be that we'll have all the answers, but we'll certainly provide a lot more insight into what's going on on the sea floor in this particular region where the disposal site may be located.
The U.S.G.S. marine geologists will continue to map and explore shallow water regions of the U.S. exclusive economic zone. Within the framework of their basic research program they will focus on the needs of other population centers and address problems similar to those faced in the San Francisco Bay area.
Discover the cosmos! Each day a different image or photograph of our fascinating universe is featured, along with a brief explanation written by a professional astronomer.
June 20, 1996
Credit: Apollo 12 Crew, NASA
Explanation: In November of 1969, homeward bound aboard the "Yankee Clipper" command module, the Apollo 12 astronauts took this dramatic photograph of the Sun emerging from behind the Earth. From this distant perspective, part of the solar disk peers over the Earth's limb, its direct light producing the jewel like glint while sunlight scattered by the atmosphere creates the thin bright crescent. Today at 10:24 pm Eastern Daylight Time is the Summer Solstice. From an earthbound perspective, the solar disk will climb to its greatest northern declination marking the Northern Hemisphere's first day of Summer and creating the longest day -- with over 15 hours of daylight near latitude +40 degrees.
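The "over 15 hours of daylight near latitude +40 degrees" figure can be checked with the standard sunrise-equation approximation. This sketch uses a geometric horizon with no atmospheric refraction, so it lands just under 15 hours; refraction and the Sun's finite disk supply the rest:

```python
import math

def day_length_hours(latitude_deg: float, declination_deg: float) -> float:
    """Daylight duration via the sunrise equation: cos(H) = -tan(phi) * tan(delta)."""
    phi = math.radians(latitude_deg)
    delta = math.radians(declination_deg)
    cos_h = -math.tan(phi) * math.tan(delta)
    cos_h = max(-1.0, min(1.0, cos_h))    # clamp handles polar day/night
    hour_angle_deg = math.degrees(math.acos(cos_h))
    return 2.0 * hour_angle_deg / 15.0    # the Sun moves 15 degrees per hour

# June solstice: solar declination ~ +23.44 degrees
print(f"{day_length_hours(40.0, 23.44):.1f} hours")  # ~14.8
```

At the equator the same formula gives exactly 12 hours for any declination, which is why day length there barely varies through the year.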
Authors & editors:
NASA Technical Rep.: Sherri Calvo. Specific rights apply.
A service of: LHEA at NASA/GSFC | <urn:uuid:85975851-e752-493e-9257-07b114156146> | 4.28125 | 215 | Knowledge Article | Science & Tech. | 47.511396 |
Some believe that it is always warmer during daytime at the equator because the Sun is directly overhead at midday and is able to warm everything equally underneath it. In reality, there are only two instances in the entire year when the Sun can be directly above an area at the equator. One is the autumnal equinox while the other is the vernal equinox.
To understand how this is so, allow me to refer to the image above. If you notice, the Earth is always tilted towards one direction. This degree of tilt changes over a period of thousands of years. Thus, it would be safe to say that the tilt stays the same as the Earth revolves around the Sun throughout an entire year (or even for several decades).
There are two instances during a full revolution when the Sun’s behavior at one end of the Earth is the exact opposite of that at the other. The two instances are called solstices.
At one solstice (summer solstice), inhabitants at the northernmost regions will experience longer days, while those at the southernmost regions will experience longer nights. In fact, the Sun will be up 24 hours at the north pole and hidden 24 hours at the south.
During the other solstice (winter solstice), the exact opposite happens, with northernmost dwellers experiencing shorter days and southernmost dwellers shorter nights.
Solstices happen when the Earth’s axial tilt is farthest or nearest from the Sun. Halfway between the solstices, as seen on the image, is the time when the equinoxes occur. If we were to draw an imaginary line joining the center of the Sun and the center of the Earth, the only time that the line would make a 90 degree angle with the Earth’s rotational axis would be during equinoxes.
During the vernal equinox, as well as the autumnal equinox, the Sun lights up all parts of the Earth for equal durations. In other words, sunlight shines on all parts of the world for approximately 12 hours each day. That is why, during the vernal equinox, which happens in March during spring, the temperature is neither too warm nor too cold.
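These day-length claims are easy to check numerically with the standard sunrise-equation approximation. This is only a sketch: it ignores atmospheric refraction and the Sun's finite disk, which together add several extra minutes of daylight.

```python
import math

def day_length_hours(latitude_deg, declination_deg):
    """Approximate daylight duration from the sunrise equation."""
    lat = math.radians(latitude_deg)
    dec = math.radians(declination_deg)
    # Hour angle h of sunset satisfies cos(h) = -tan(lat) * tan(dec)
    cos_h = -math.tan(lat) * math.tan(dec)
    cos_h = max(-1.0, min(1.0, cos_h))  # clamp: polar day (-1) or polar night (+1)
    # Daylight spans 2h of hour angle; Earth turns 15 degrees per hour
    return 2 * math.degrees(math.acos(cos_h)) / 15.0

print(day_length_hours(40, 0))       # equinox (declination 0): 12.0 hours at any latitude
print(day_length_hours(40, 23.44))   # June solstice at +40 deg: about 14.8 hours
print(day_length_hours(90, 23.44))   # north pole at the solstice: 24-hour polar day
```

At an equinox the solar declination is zero, so the formula gives 12 hours everywhere, exactly as described above.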
The word ‘vernal’ comes from the Latin word ‘ver’, which means spring. In more recent texts, the term vernal equinox is replaced with the more neutral-sounding March equinox. The reason is that it is only in the northern hemisphere that it is spring during the so-called vernal equinox. Down south, it is fall or autumn.
This is a false color image of a mosaic of Mercury.
Courtesy of NASA.
Observations of Mercury from Earth
Before the Mariner 10 mission to Mercury, it was very difficult to see any markings on the surface of the planet from Earth. This image shows a view of Mercury obtained from a telescope on Earth. The first attempts to find the day length of the planet yielded an 88-Earth-day rotation period, equal to the orbital period, or year length. It was only in the 1960s, when a radar technique allowed the rotation rate to be determined, that we found that Mercury spins on its axis every 59 Earth days. But the length of a day on Mercury (sunrise to sunrise) is about three times this. To find out why, click the link below.
You might also be interested in:
How did life evolve on Earth? The answer to this question can help us understand our past and prepare for our future. Although evolution provides credible and reliable answers, polls show that many people turn away from science, seeking other explanations with which they are more comfortable....more
Mercury's orbit is so close to the Sun that it is difficult to see from the ground. This explains why some early astronomers never saw the planet. Viewed from Earth, Mercury is never far from the Sun...more
It takes Mercury about 59 Earth days to spin once on its axis (the rotation period), and about 88 Earth days to complete one orbit about the Sun. However, the length of the day on Mercury (sunrise to...more
Before the Mariner 10 mission of Mercury, it was very difficult to see any markings on the surface of the planet from Earth. This image shows a view of Mercury obtained from a telescope on Earth. The...more
Mercury, like the other planets, is believed to have formed in the earliest stage of the evolution of the solar system as dust came together to form even larger clumps and eventually small planets or...more
Mercury, the innermost planet of the solar system, is a little bigger than the Earth's Moon. The surface of the planet is covered with craters, like the Moon, but temperatures there can reach over 80...more
Mercury has a radius of 2439 km (1524 mi), and the metallic iron-nickel core is believed to make up about 75% of this distance. Measurements of the planet's magnetic field made by Mariner 10 as it flew...more
The Caloris Basin is the largest feature on the surface of Mercury. This crater was formed by the impact of a large meteorite in the early formation of the solar system. We only know what half of the...more
Frogs can hear using big round ears on the sides of their head, each called a tympanum. Tympanum means drum. The size of and distance between the ears depend on the wavelength and frequency of a male frog's call. On some frogs, the ear is very hard to see!
Ever wonder how frogs that can get so LOUD manage not to hurt their own ears? Some frogs make so much noise that they can be heard for miles! How do they keep from blowing out their own eardrums?
Well, actually, frogs have special ears that are connected to their lungs. When they hear noises, not only does the eardrum vibrate, but the lung does too! Scientists think that this special pressure system is what keeps frogs from hurting themselves with their noisy calls!
Many agree with this theory. That the author mentions this as a possible slam dunk against climate change gets into the real meat of the misunderstanding. Climate change is not defined as all gloom and doom in the science world, but the general public has gotten that impression. In the article's scenario the plants are still changing. In this case, the change is potentially beneficial; crops are changing in growth cycles and in the ability to survive. This is change. This could be climate change. It's still doing something; that's what verbs mean.
There is the potential for climate change to really disrupt our present way of life, but we can probably adapt. The question is can plants and animals survive these changes as well? Scientists have figured out from fossil records that more species have gone extinct than the number that exists today. Extinction happens, sometimes in nature, sometimes because of man. We nailed the passenger pigeon and the dodo and the Caribbean fur seal. We probably have done the same thing to species we didn't even know were there. But other species have died out naturally because they were out competed by other species for resources or could not adapt to changing environmental parameters.
Each species has its own particular requirements to survive in an ecosystem. For example, take a particular species of flower that must be pollinated by a particular moth or else it cannot reproduce. That flower also needs a certain amount of light, warmth, and moisture to survive. If the moth dies out, that flower is up the creek without a paddle. Some flowers can survive on a wide range of pollinators, light, heat, and rain, but others have to be way pickier about where they live and what they eat. Sort of like if I am on a deserted island and only have rations of peanut butter and crackers. My friend, who is also in this unfortunate situation, is allergic to peanuts so he can only eat the crackers. When the cracker supply runs out, I am going to survive a lot longer than he can because I can eat the different resource, and my friend can't even though he wants to (In this scenario we are not cannibals or particularly great fishermen).
This makes changes in parameters like precipitation and temperature very, very important to where and if things live. For instance, a lot of species have to have ice in order to complete their life cycle. Some need fire, some need a certain amount of rain, and some need a mild spring.
Why are the poles feeling the effects so much faster than the rest of the planet? Scientists do not entirely understand why the Western Antarctic in particular is one of the fastest-warming recorded places on the planet. That's what we're trying to figure out. A few degrees' difference in temperature may not seem like a big deal in Tennessee, but that same difference can make all the difference in the world at the poles. The Antarctic Peninsula's average temperature is shifting from below freezing to above freezing, which is the same thing that is happening in the Arctic. Since the substrate (ice) depends on cold temperatures in order to exist, this is obviously a problem for species that depend on the ice for their survival.
6.1.2 File Object Creation
These functions create new file objects.
- fdopen (fd[, mode[, bufsize]])
Return an open file object connected to the file descriptor fd. The mode and bufsize arguments have the same meaning as the corresponding arguments to the built-in open() function.
Availability: Macintosh, Unix, Windows.
- popen (command[, mode[, bufsize]])
Open a pipe to or from command. The return value is an open file object connected to the pipe, which can be read or written depending on whether mode is 'r' (default) or 'w'. The bufsize argument has the same meaning as the corresponding argument to the built-in open() function. The exit status of the command (encoded in the format specified for wait()) is available as the return value of the close() method of the file object, except that when the exit status is zero (termination without errors), None is returned. Note: This function behaves unreliably under Windows due to the native implementation of popen().
Availability: Unix, Windows.
- tmpfile ()
Return a new file object opened in update mode ("w+"). The file
has no directory entries associated with it and will be automatically
deleted once there are no file descriptors for the file.
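For illustration, here is roughly how these three facilities look in modern Python, where os.fdopen and os.popen still exist but tmpfile() has been replaced by tempfile.TemporaryFile. A sketch, not part of this reference:

```python
import os
import tempfile

# os.fdopen: wrap an existing low-level file descriptor in a file object.
fd, path = tempfile.mkstemp()
f = os.fdopen(fd, "w")          # mode has the same meaning as for open()
f.write("hello")
f.close()
os.remove(path)

# os.popen: a pipe reading from (mode 'r', the default) or writing to a command.
pipe = os.popen("echo hi")
output = pipe.read()
status = pipe.close()           # None means the command exited with status zero
print(output.strip(), status)   # hi None

# tmpfile-style scratch file: no directory entry outlives the file object.
tmp = tempfile.TemporaryFile(mode="w+")
tmp.write("scratch data")
tmp.seek(0)
contents = tmp.read()
tmp.close()
print(contents)                 # scratch data
```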
Though corn is "all-natural" in some ways, in others it is entirely manmade. This news brief from February 2007 explains the evolutionary tools that ancient humans used to engineer modern corn and the tools that scientists are using today to reconstruct corn's evolutionary history.
UC Museum of Paleontology
This article includes a set of discussion and extension questions for use in class. It also includes hints about related lessons that might be used in conjunction with this one. Get more tips for using Evo in the News articles in your classroom.
- Artificial selection provides a model for natural selection.
- People selectively breed domesticated plants and animals to produce offspring with preferred characteristics.
- Evolution results from selection acting upon genetic variation within a population.
- Scientists test their ideas using multiple lines of evidence.
- Scientists use multiple research methods (experiments, observational research, comparative research, and modeling) to collect data.
- As with other scientific disciplines, evolutionary biology has applications that factor into everyday life.
GCC moves to compiling itself as C++ in order to improve code quality
by admin ·
To begin with, only the bootstrap code has been modified. The purpose is to improve code quality (because C++ enforces stricter typing). Classes and templates will come later. The officially stated reasons for using C++:
- C++ is a standardized, popular language.
- C++ is almost a superset of the C90 dialect used within GCC.
- The C-compatible subset of C++ is just as efficient as plain C code.
- C++ supports cleaner code in many important situations.
- C++ makes it easier to create and maintain a clear interface.
- C++ never requires writing convoluted code.
- C++ is not a panacea, but an improvement.
AC Power Theory - Advanced maths
This page covers the mathematics behind calculating real power, apparent power, power factor, RMS voltage and RMS current from instantaneous voltage and current measurements of single-phase AC electricity. Discrete-time equations are detailed, since the calculations are carried out in the Arduino in the digital domain.
For a much nicer arduino code snippet version of this page see: AC Power theory - Arduino maths
Real power (also known as active power) is defined as the power used by a device to produce useful work.
Mathematically it is the definite integral of voltage, u(t), times current, i(t), as follows:
Equation 1. Real Power Definition:
P = (1/T) ∫ u(t) · i(t) dt over one mains cycle T = U · I · cos(φ)
U - Root-Mean-Square (RMS) voltage.
I - Root-Mean-Square (RMS) current.
cos(φ) - Power factor.
The discrete time equivalent is:
Equation 2. Real Power Definition in Discrete Time:
P = (1/N) · Σ u(n) · i(n), summed over n = 1 … N
u(n) - sampled instance of u(t)
i(n) - sampled instance of i(t)
N - number of samples.
Real power is calculated simply as the average of N voltage-current products. It can be shown that this method is valid for both sinusoidal and distorted waveforms.
RMS Voltage and Current Measurement
An RMS value is defined as the square root of the mean value of the squares of the instantaneous values of a periodically varying quantity, averaged over one complete cycle. The discrete time equation for calculating voltage RMS is as follows:
Equation 3. Voltage RMS Calculation in Discrete Time Domain:
Urms = √( (1/N) · Σ u(n)² ), summed over n = 1 … N
Current RMS is calculated using the same equation, only substituting voltage samples, u(n), for current samples, i(n).
Apparent Power and Power Factor
Apparent power is calculated, as follows:
Apparent power = RMS Voltage x RMS current
and the power factor:
Power Factor = Real Power / Apparent Power
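Putting Equations 2 and 3 together, here is a minimal sketch of the whole calculation, written in Python rather than Arduino C purely for illustration:

```python
import math

def power_stats(u, i):
    """Real power, RMS values, apparent power and power factor
    from N simultaneous voltage/current sample pairs."""
    n = len(u)
    real_power = sum(uk * ik for uk, ik in zip(u, i)) / n  # Equation 2
    u_rms = math.sqrt(sum(uk * uk for uk in u) / n)        # Equation 3
    i_rms = math.sqrt(sum(ik * ik for ik in i) / n)
    apparent = u_rms * i_rms                               # S = Urms * Irms
    return real_power, u_rms, i_rms, apparent, real_power / apparent

# One full cycle of a 230 V RMS sine with a 5 A RMS current lagging by
# 60 degrees; the power factor should come out as cos(60 deg) = 0.5.
N = 1000
u = [230 * math.sqrt(2) * math.sin(2 * math.pi * k / N) for k in range(N)]
i = [5 * math.sqrt(2) * math.sin(2 * math.pi * k / N - math.pi / 3) for k in range(N)]
p, u_rms, i_rms, s, pf = power_stats(u, i)
print(round(p), round(u_rms), round(pf, 3))   # 575 230 0.5
```

Because the samples cover exactly one cycle, the averages recover the textbook values: P = 230 × 5 × cos(60°) = 575 W and S = 1150 VA.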
This page is based on Atmel's AVR465 appnote, pages 12-15.
As others pointed out, analyzing recursion can get very hard very fast. Here is another example of this: http://rosettacode.org/wiki/Mutual_recursion http://en.wikipedia.org/wiki/Hofstadter_sequence#Hofstadter_Female_and_Male_sequences
it is hard to compute an answer and a running time for these. This is due to these mutually-recursive functions having a "difficult form".
Anyhow, let's look at this easy example:
(declare funa funb)
(defn funa [n]
  (if (= n 0)
    0
    (funb (dec n))))
(defn funb [n]
  (if (= n 0)
    0
    (funa (dec n))))
Let's start by trying to compute funa(m), m > 0:
funa(m) = funb(m - 1) = funa(m - 2) = ... funa(0) or funb(0) = 0 either way.
The run-time is:
R(funa(m)) = 1 + R(funb(m - 1)) = 2 + R(funa(m - 2)) = ... m + R(funa(0)) or m + R(funb(0)) = m + 1 steps either way
Now let's pick another, slightly more complicated example:
Inspired by http://planetmath.org/encyclopedia/MutualRecursion.html, which is a good read by itself, let's look at: """Fibonacci numbers can be interpreted via mutual recursion: F(0) = 1 and G(0) = 1 , with F(n + 1) = F(n) + G(n) and G(n + 1) = F(n)."""
So, what is the runtime of F? We will go the other way.
Well, R(F(0)) = 1 = F(0); R(G(0)) = 1 = G(0)
Now R(F(1)) = R(F(0)) + R(G(0)) = F(0) + G(0) = F(1)
It is not hard to see that R(F(m)) = F(m) - i.e. the number of function calls needed to compute a Fibonacci number at index i is equal to the value of the Fibonacci number at index i. This assumes that adding two numbers together is much faster than a function call. If this was not the case, then this would be true: R(F(1)) = R(F(0)) + 1 + R(G(0)), and the analysis of this would have been more complicated, possibly without an easy closed form solution.
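To make the call-counting concrete, here is a sketch (in Python, with the same F/G definition) that counts base-case invocations. Since every call chain bottoms out in a leaf returning 1 and results are only ever added, the leaf count necessarily equals the computed value:

```python
leaf_calls = 0

def F(n):
    global leaf_calls
    if n == 0:
        leaf_calls += 1        # base case: contributes exactly 1 to the result
        return 1
    return F(n - 1) + G(n - 1)  # F(n + 1) = F(n) + G(n)

def G(n):
    global leaf_calls
    if n == 0:
        leaf_calls += 1
        return 1
    return F(n - 1)             # G(n + 1) = F(n)

value = F(10)
print(value, leaf_calls)   # 144 144 -- the work grows with the answer itself
```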
The closed form for the Fibonacci sequence is not necessarily easy to reinvent, not to mention some more complicated examples.
A female hammerhead shark gave birth to a pup in the Henry Doorly Zoo in Nebraska in 2001 despite having no contact with a male shark. Thanks to new DNA profiling technology, scientists have been able to show conclusively that the shark pup contained no genetic material from a male. Before you whip out your Book of Revelations and start begging for forgiveness, you should know that this is an example of a naturally occurring phenomenon called parthenogenesis. In parthenogenesis, egg cells develop as an embryo without the addition of any genetic material from a male sperm cell. This reproductive process has been witnessed in bony fish before, but never in cartilaginous fish like sharks.
Asexual reproduction decreases genetic diversity, and thus can weaken a species as a whole. Witnessing the hammerhead's virgin birth is therefore a cause for concern among scientists, who worry about further weakening of already threatened shark species in the world.
9. Dr. Boris Komitov, Bulgarian Academy of Sciences, Institute of Astronomy, and Dr. Vladimir Kaftan, Central Research Institute of Geodesy, Moscow.
From their paper: Komitov, B., and V. Kaftan (2004), The sunspot activity in the last two millennia on the basis of indirect and instrumented indexes: time series models and their extrapolations for the 21st century, paper presented at the International Astronomical Union Symposium No. 223.
Comment from paper: “It follows from their extrapolations for the 21st century that a supercenturial solar minimum will be occurring during the next few decades….It will be similar in magnitude to the Dalton minimum, but probably longer as the last one.”
10. Dr. Theodor Landscheidt (1927-2004), Schroeter Institute for Research in Cycles of Solar Activity, Canada.
Among his comments from many years of research on solar climate forcing: “Contrary to the IPCC’s speculation about man-made warming as high as 5.8°C within the next hundred years, a long period of cool climate with its coldest phase around 2030 is to be expected.”
11. Dr. Tim Patterson, Dept. of Earth Sciences, Carleton Univ., Canada.
From an article in the Calgary Times, May 18, 2007: Indeed, one of the more interesting, if not alarming, statements Patterson made before the Friends of Science luncheon is that satellite data show that by the year 2020 the next solar cycle is going to be solar cycle 25 – the weakest one since the Little Ice Age (which started in the 13th century and ended around 1860), a time when people living in London, England, used to walk on a frozen Thames River and food was scarcer. Patterson: “This should be a great strategic concern in Canada because nobody is farming north of us.” In other words, Canada – the great breadbasket of the world – just might not be able to grow grains in much of the prairies.
12. Ken K. Schatten and W.K. Tobiska (in other works, D. Hoyt).
From their paper presented at the 34th Solar Physics Division meeting of the American Astronomical Society, June 2003: “The surprising result of these long range predictions is a rapid decline in solar activity, starting with cycle #24. If this trend continues, we may see the Sun heading towards a ‘Maunder’ type of solar activity minimum – an extensive period of reduced levels of solar activity.”
13. Dr. Oleg Sorokhtin, Merited Scientist of Russia, Fellow of the Russian Academy of Natural Sciences, and researcher at the Oceanology Institute.
From recent news articles, regarding the next climate change he has said: “Astrophysics know two solar cycles, of 11 and 200 years. Both are caused by changes in the radius and area of irradiating solar surface….Earth has passed the peak of its warmer period and a fairly cold spell will set in quite soon, by 2012. Real cold will come when solar activity reaches its minimum, by 2041, and will last for 50-60 years or even longer.”
14. Dr’s. Ian Wilson, Bob Carter, and I.A. Waite.
From their paper: Does a Spin-Orbit Coupling Between the Sun and the Jovian Planets Govern the Solar Cycle? Publications of the Astronomical Society of Australia 25(2), 85-93, June 2008.
Dr. Wilson adds the following clarification: “It supports the contention that the level of activity on the Sun will significantly diminish sometime in the next decade and remain low for about 20-30 years. On each occasion that the Sun has done this in the past the World’s mean temperature has dropped by ~ 1-2 C.”
15. Dr’s. Lin Zhen-Shan and Sun Xian, Nanjing Normal University, China.
From their paper in Meteorology and Atmospheric Physics, 95, 115-121: Multi-scale analysis of global temperature changes and trend of a drop in temperature in the next 20 years: “… we believe global climate changes will be in a trend of falling in the following 20 years.”
New evidence has shown that a “hot spot” could cause sea levels on the eastern seaboard of the US to rise faster than the projected global average. This increase is attributed to a change in the North Atlantic current, which scientists say is warming and, as a result, slowing down.
The affected area stretches over 600 miles, from North Carolina all the way to northern Massachusetts. In a study conducted by the USGS, global sea levels have risen between 0.6 and one millimeter per year since 1990, but levels along this portion of the eastern seaboard have gone up 3.7 millimeters per year in some areas - four times the global average. You may be thinking that this is such a small rise, how could it possibly affect things? Over a few years, yes, the difference may be fairly negligible, but over several decades the change adds up. This rise happens not just at a quicker rate, but at an accelerating pace, like a car on a highway “jamming on the accelerator,” says the study’s lead author, Asbury Sallenger Jr., an oceanographer at USGS. He has observed sea levels since the 1950s, and noticed a change beginning in 1990.
By the year 2100 global sea levels are anticipated to rise more than a meter, and the added increase caused by this “hot spot” could put almost an extra foot of water on top of that. “Extreme water levels that happen during winter or tropical storms, perhaps once or twice a year, may happen more frequently as sea level rise is added to storm surge,” says Karen Doran, co-author of the USGS study. This will undoubtedly cause many large population centers below this new waterline more than a little trouble in the coming decades. The number of people living in New York City, Boston, and Philadelphia, just to name a few of the cities that will likely be affected—and their likely exit from the area before, during or after the floods—poses a real problem. Where are all of these people going to go? New York City alone has over 8 million people. That’s more than a serious traffic jam; it’s an exodus, a migration of mass proportions.
Regardless if you are for or against the argument that man has caused global warming, the simple fact is that the world as we know it is getting hotter. We cannot ignore the reality that sea levels and climate as a whole are going through a major transition-nor the fact that this is a part of normal Earth function. Our planet constantly ebbs and flows between warm and cool periods- and as a result wet and dry periods. More water is locked up in ice during the cooler periods, resulting in lower sea levels, while during warmer periods more water is in liquid form, causing sea levels to rise.
Whether or not we are speeding up the process, all of this is part of the Earth’s natural cycles. As a species we have even experienced it before - though this was thousands of years ago, and few of our ancestors’ accounts of such phenomena and how they dealt with them remain for us to study. But many cultures share a flood story of some type, wherein the earth is inundated by massive floods that wipe much of it clean of life— or at the least dramatically change the landscape.
Are we in for another flood? Scientists think so, but not on the order of world-ending myths so common to many ancient cultures. No need to rush out to your local hardware store and start construction on an ark. That being said, many cities and countries might want to take some preventative measures.
Immediate or not, we need to start to think outside of the box as to how we will deal with climate change, and building over water is one alternative to trying to divert it. This may be one of many answers to increased sea levels that seemingly every scientist agrees are in our future, the time to argue over the existence of global warming has come and gone. The time to take action is now and the sooner we prepare, the less the effects will be felt by future generations.
By Will Inglis
Who doesn’t like a great workout to get your adrenaline going and have a burst of energy? But is that burst of energy slowly draining the environment? Eco-expert Kim Carlson is encouraging workout enthusiasts (and those less enthusiastic) to green their routine. Before hitting the gym, check out Kim’s top 10 tips to consider when greening your workout.
Protecting Biodiversity in Tengchong
Description: The project site is located at the southern edge of Gaoligonshan Nature Reserve in Yunnan province, China. The proposed project is a small-scale reforestation project under CDM. Mixed forests will be established around the buffer zone of Gaoligonshan Nature Reserve and the area adjacent to the Nature Reserve, using native tree species.
The project will generate approximately 150,000 tons of CO2 benefits (offsets) over its 30 year lifetime.
#include "inndcomm.h"

int ICCopen()
int ICCclose()
void ICCsettimeout(i)
    int i;
int ICCcommand(cmd, argv, replyp)
    char cmd;
    char *argv;
    char **replyp;
int ICCcancel(mesgid)
    char *mesgid;
int ICCreserve(why)
    char *why;
int ICCpause(why)
    char *why;
int ICCgo(why)
    char *why;
extern char *ICCfailure;
ICCopen creates a Unix-domain datagram socket and binds it to the server's control socket. It returns -1 on failure or zero on success. This routine must be called before any other routine.
ICCclose closes any descriptors that have been created by ICCopen. It returns -1 on failure or zero on success.
ICCsettimeout can be called before any of the following routines to determine how long the library should wait before giving up on getting the server's reply. This is done by setting and catching a SIGALRM signal(2). If the timeout is less then zero then no reply will be waited for. The SC_SHUTDOWN, SC_XABORT, and SC_XEXEC commands do not get a reply either. The default, which can be obtained by setting the timeout to zero, is to wait until the server replies.
ICCcommand sends the command cmd with parameters argv to the server. It returns -1 on error. If the server replies, and replyp is not NULL, it will be filled in with an allocated buffer that contains the full text of the server's reply. This buffer is a string in the form of ``<digits><space><text>'' where ``digits'' is the text value of the recommended exit code; zero indicates success. Replies longer then 4000 bytes will be truncated. The possible values of cmd are defined in the ``inndcomm.h'' header file. The parameters for each command are described in ctlinnd(8). This routine returns -1 on communication failure, or the exit status sent by the server which will never be negative.
ICCcancel sends a ``cancel'' message to the server. Mesgid is the Message-ID of the article that should be canceled. The return value is the same as for ICCcommand.
ICCpause, ICCreserve, and ICCgo send a ``pause,'' ``reserve,'' or ``go'' command to the server, respectively. If ICCreserve is used, then the why value used in the subsequent ICCpause invocation must match; the why value used in the ICCgo invocation must always match the one used in the ICCpause invocation. The return value for all three routines is the same as for ICCcommand.
If any routine described above fails, the ICCfailure variable will identify the system call that failed.
This image represents a simulation of a day's worth of sea surface salinity observations as planned to be made by the Aquarius instrument. Credit: NASA Aquarius Science Team/Aquarius Data Processing System
With more than a few stamps on its passport, NASA's Aquarius instrument on the Argentinian Satélite de Aplicaciones Científicas (SAC)-D spacecraft will soon embark on its space mission to "taste" Earth's salty ocean.
After a journey of development and assembly through NASA facilities, a technology center in Bariloche, Argentina, and testing chambers in Brazil, the Aquarius instrument, set to measure the ocean's surface salinity, recently made the trip from São José dos Campos, Brazil, to California's Vandenberg Air Force Base for final integration and testing before its scheduled launch on June 9.
Aquarius will map the concentration of dissolved salt at the ocean's surface, information that scientists will use to study the ocean's role in the global water cycle and how this is linked to ocean currents and climate. Sea surface temperature has been monitored by satellites for decades, but it is both temperature and salinity that determine the density of the surface waters of the ocean. Aquarius will provide fundamentally new ocean surface salinity data to give scientists a better understanding of the density-driven circulation; how it is tied to changes in rainfall and evaporation, or the melting and freezing of ice; and its effect on climate variability.
"The ocean is essentially Earth's thermostat. It stores most of the heat, and what we need to understand is how do changes in salinity affect the 3-D circulation of the ocean," said Gene Feldman, Aquarius Ground System and Mission Operations Manager at NASA's Goddard Space Flight Center, Greenbelt, Md.
The photo slideshow highlights the international travels of the Aquarius satellite.
The development of the Aquarius mission began more than 10 years ago as a joint effort between Goddard and NASA's Jet Propulsion Laboratory (JPL) in Pasadena, Calif. In 2008, Goddard engineers completed the Aquarius microwave radiometer instrument, which is the key component for measuring salinity from space.
"The radiometer is the most accurate and stable radiometer built for sensing of Earth from space. It's a one-of-a-kind instrument," said Shannon Rodriguez-Sanabria, a microwave communications specialist at Goddard.
JPL built Aquarius' scatterometer instrument, a microwave radar sensor that scans the ocean's surface to measure the effect wind speed has on the radiometer measurements. The radiometer and scatterometer instruments, along with an 8.25-by-10-foot elliptical antenna reflector and many other systems, have been integrated together at JPL to form the complete Aquarius instrument. A number of other instruments aboard the SAC-D spacecraft are contributions from Argentina, France, Canada and Italy.
In June 2009, Aquarius was flown via a U.S. Air Force cargo jet to San Carlos de Bariloche, Argentina, a destination known for its natural scenery of blue lakes and verdant mountains, to be integrated with Argentina's SAC-D spacecraft. A year later, the fully assembled spacecraft and all the instruments now referred to as the "Aquarius/SAC-D Observatory" were shipped to Brazil. There, engineers began a nine-month campaign of alignment, electro-magnetic, vibration, and thermal vacuum testing to ensure it will survive the rigors of launch and orbiting in space.
JPL will manage the Aquarius mission through Aquarius' commissioning phase, scheduled to last 45 days after launch. Goddard will then manage the Aquarius instrument operations during the mission. Argentina's Comisión Nacional de Actividades Espaciales (CONAE) will operate the spacecraft and download all of the data collected by Aquarius several times per day. Goddard is responsible for producing the Aquarius science data products. JPL will manage the data archive and distribution to scientists worldwide.
Aquarius will collect data continuously as it flies in a near-polar orbit, circling Earth 14 to 15 times each day. The instrument's field of view is 390 kilometers (242 miles) wide and will provide a global map every seven days. The data will be compiled to generate more accurate monthly averages during the mission, which is designed to last a minimum of three years.
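Those quoted figures are roughly self-consistent: at about 14.5 orbits per day, adjacent 390-kilometer swaths tile the equator in close to seven days. A back-of-the-envelope check (round numbers assumed here, not official mission specifications beyond those quoted above):

```python
# Rough coverage estimate for a polar-orbiting swath instrument.
EQUATOR_KM = 40_075            # Earth's equatorial circumference, km
SWATH_KM = 390                 # quoted Aquarius field of view, km
ORBITS_PER_DAY = 14.5          # midpoint of the quoted 14-15 orbits/day

passes_needed = EQUATOR_KM / SWATH_KM          # adjacent swaths to ring the equator, ~103
days_needed = passes_needed / ORBITS_PER_DAY   # ~7.1 days for one global map

print(round(passes_needed), round(days_needed, 1))
```

The estimate lands within a few percent of the stated seven-day repeat cycle, which is a useful sanity check when reading mission fact sheets.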
Forget the North Star; follow the cows instead. Researchers have discovered that when grazing or resting, cattle, and also deer, tend to point their bodies toward Earth's magnetic poles, which suggests that, like some smaller animals, they are able to sense magnetic fields.
As a result, you'll notice that these animals often point north or south, but rarely in any other direction. The research groups ruled out many other explanations for why the animals orient themselves this way.
There is no consistent wind pattern among the different locations, and if the animals were basking in the sun, researchers would have seen them standing outside of one another's shadows. Next time you want to mess with some farm animals, try waving some magnets in their faces (at your own risk).
AB and CD are two ideal springs with force constants K1 and K2 respectively. The lower ends of the springs are attached to the ground so that the springs remain vertical. A light rod of length 3a is attached to the upper ends B and C of the springs. A particle of mass m is fixed to the rod at a distance a from end B, and in equilibrium the rod is horizontal. Calculate the period of small vertical oscillations of the system.
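One standard route to the answer (a solution sketch, not part of the problem statement): because the rod is massless, the net torque about the particle must vanish, giving K1·x1·a = K2·x2·2a for the spring deflections x1 (at B) and x2 (at C). The particle's displacement is the linear interpolation x = (2x1 + x2)/3, and the total restoring force is F = K1·x1 + K2·x2. Eliminating x1 yields an effective stiffness k_eff = 9·K1·K2/(K1 + 4·K2), so T = 2π√(m(K1 + 4K2)/(9·K1·K2)). A quick exact-arithmetic check of that algebra with arbitrary sample values:

```python
from fractions import Fraction

# Sample values chosen only to verify the algebra exactly.
K1, K2 = Fraction(3), Fraction(5)
x2 = Fraction(1)                 # pick the deflection at C freely

x1 = 2 * K2 * x2 / K1            # torque balance about the particle: K1*x1 = 2*K2*x2
x = (2 * x1 + x2) / 3            # particle displacement along the rigid rod
F = K1 * x1 + K2 * x2            # total spring restoring force on the rod

k_eff = F / x
assert k_eff == Fraction(9 * K1 * K2, K1 + 4 * K2)
print(k_eff)                     # 135/23 for these sample values
```

The period then follows from T = 2π√(m/k_eff).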
Everything in the universe experiences the force of gravity. It is the weakest of all the forces of nature, but it is the dominant force in astronomy and cosmology. It controls the movement of planets in our solar system, the structure of stars, the shapes of galaxies and the ultimate fate of our universe.
The concept of a universal law of gravity was first presented by Isaac Newton in the 17th century and it can explain the properties of most astronomical systems. However, when bodies are moving at high speeds or in strong gravitational fields, Newton's theory is inadequate and we need Einstein's theories of relativity. Einstein's special and general theories of relativity play an essential role in understanding black holes, active galaxies and quasars.
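Newton's law can be stated compactly as F = G·m1·m2/r². As a quick illustrative calculation (the masses and distance below are standard textbook values, not taken from this text), the Earth-Moon attraction comes out to roughly 2e20 newtons:

```python
# Newton's law of universal gravitation: F = G * m1 * m2 / r**2
G = 6.674e-11          # gravitational constant, N m^2 / kg^2
m_earth = 5.972e24     # mass of Earth, kg
m_moon = 7.348e22      # mass of the Moon, kg
r = 3.844e8            # mean Earth-Moon distance, m

F = G * m_earth * m_moon / r**2
print(f"Earth-Moon gravitational force: {F:.3e} N")   # ~1.98e20 N
```

In relativistic regimes this inverse-square form is only an approximation, which is exactly where Einstein's theories take over.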
Dec. 10, 2012: A theoretical and numerical study of graphene sheets reveals a property that may lead to novel opto-electric devices and circuits.
One-atom-thick sheets of carbon -- known as graphene -- have a range of electronic properties that scientists are investigating for potential use in novel devices. Graphene's optical properties are also garnering attention, which may increase further as a result of research from the A*STAR Institute of Materials Research and Engineering (IMRE). Bing Wang of the IMRE and his co-workers have demonstrated that the interactions of single graphene sheets in certain arrays allow efficient control of light at the nanoscale [1].
Light squeezed between single graphene sheets can propagate more efficiently than along a single sheet. Wang notes this could have important applications in optical-nanofocusing and in superlens imaging of nanoscale objects. In conventional optical instruments, light can be controlled only by structures that are about the same scale as its wavelength, which for optical light is much greater than the thickness of graphene. By utilizing surface plasmons, which are collective movements of electrons at the surface of electrical conductors such as graphene, scientists can focus light to the size of only a few nanometers.
Wang and his co-workers calculated the theoretical propagation of surface plasmons in structures consisting of single-atomic sheets of graphene, separated by an insulating material. For small separations of around 20 nanometers, they found that the surface plasmons in the graphene sheets interacted such that they became 'coupled'. This theoretical coupling was very strong, unlike that found in other materials, and greatly influenced the propagation of light between the graphene sheets.
The researchers found, for instance, that optical losses were reduced, so light could propagate for longer distances. In addition, under a particular incoming angle for the light, the study predicted that the refraction of the incoming beam would go in the direction opposite to what is normally observed. Such an unusual negative refraction can lead to remarkable effects such as superlensing, which allows imaging with almost limitless resolution.
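The sign flip described above can be illustrated with Snell's law, n1·sin(θi) = n2·sin(θt): a medium with a negative refractive index sends the transmitted beam to the opposite side of the normal. This is a generic sketch of negative refraction with hypothetical index values, not the specific graphene-array calculation reported in the paper:

```python
import math

def refraction_angle(n1, n2, theta_i_deg):
    """Snell's law: n1*sin(theta_i) = n2*sin(theta_t).
    A negative n2 flips the refracted beam to the other side of the normal."""
    s = n1 * math.sin(math.radians(theta_i_deg)) / n2
    return math.degrees(math.asin(s))

print(refraction_angle(1.0, 1.5, 30.0))    # ordinary refraction: ~+19.5 degrees
print(refraction_angle(1.0, -1.5, 30.0))   # negative refraction: ~-19.5 degrees
```

It is this reversed bending that makes flat-slab "superlenses" possible, since a plane of negative-index material can refocus diverging rays.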
As graphene is a semiconductor and not a metal, it offers many more possibilities than most other plasmonic devices, comments the IMRE's Jing Hua Teng, who led the research. "These graphene sheet arrays may lead to dynamically controllable devices, thanks to the easier tuning of graphene's properties through external stimuli such as electrical voltages." Graphene also allows for an efficient coupling of the plasmons to other objects nearby, such as molecules that are adsorbed on its surface. Teng therefore says that the next step is to further explore the interesting physics in graphene array structures and look into their immediate applications.
The A*STAR-affiliated researchers contributing to this research are from the Institute of Materials Research and Engineering.
The above story is reprinted from materials provided by The Agency for Science, Technology and Research (A*STAR).
[1] Bing Wang, Xiang Zhang, Francisco García-Vidal, Xiaocong Yuan, Jinghua Teng. Strong Coupling of Surface Plasmon Polaritons in Monolayer Graphene Sheet Arrays. Physical Review Letters, 2012; 109 (7). DOI: 10.1103/PhysRevLett.109.073901